uSciences e-Learning 3.0 Conference: “This is Not (Just!) a Simulation”

Over the past few decades, online learning has evolved from the so-called 1.0 phase (in-person classes augmented by static web pages and PDFs) to the more revolutionary 2.0 stage (i.e., the dawn of online courses, classroom-blended “talking head” videos, and rudimentary analytics) to today, as the 3.0 era (high-tech custom learning experiences) begins to take shape.

This year, the latest evolutionary advancements were once again spotlighted, explored, and celebrated at the 17th annual e-Learning 3.0 Conference – and the Learning Lab’s very own IT Director, Joe Lee, was on hand as one of 30 chosen speakers at the May 14 event, which also featured luminaries from other regional colleges and panel discussions with LMS vendors, researchers, and stakeholders.

Hosted by the University of the Sciences and kicked off by a keynote address from renowned edtech consultant Phil Hill, the conference showcased the use of technology to enhance teaching and learning in higher ed, allowing participants to share best practices and creative approaches for learning enrichment.

Joe’s presentation, “This is Not a Simulation: Supporting Games/Sims in the Classroom Setting,” pulled back the curtain on what goes into making effective, engaging e-learning tools in the 3.0 era. Using case studies from the Lab’s own experiences delivering and supporting some of its most popular customized learning experiences, he posed key questions that are critical to the success, or failure, of a sim or teaching game:

How do faculty get comfortable enough to take the leap into the technological unknown? What problem are you trying to solve? How do students get help? Does this game/simulation achieve the professor’s goals? Can this be supported at scale?

Given that the Lab annually supports more than 10,000 student plays of over 33 different games for Wharton faculty in almost every discipline (and does so with a small team of fewer than 5 people), Joe offered a unique, insider perspective on what it takes to ensure that each run of a sim or classroom game goes as smoothly as possible. From preparation, evaluation and testing, to technical issues, setup, and in-class support, he shared the lessons we’ve learned and the best practices we follow (well-honed through years of trial and error).

In case you missed it, these were Joe’s key takeaways:

  • Never lose sight of the learning objectives.
  • There must be painful dedication to testing and retesting (and re-retesting!) of a sim or new teaching tool prior to classroom delivery – aka, the “trust but verify” approach.
  • Close. The. Loop.
  • Keep in mind that your e-Learning technology is but one piece of the class – so never lose sight of the big picture!
  • Lastly: There is always an area where you can do better!

The Lab was proud to be part of this exciting day of collective edtech wisdom – and, together with the dozens of other presenters at the forefront of the 3.0 era, is happy to belong to an ever-growing community engaged in improving teaching and learning by inventing and deploying new pedagogies and technologies.

VR in the Classroom: If We Get This Right, Nobody Explodes

STAY TUNED: This is the first of a few “insider” posts with IT Tech Director Joe Lee, as he looks at the educational use of Oculus Go through the lens of the Wharton Learning Lab…


Keep Talking and Nobody Explodes is a cooperative, team-based game where one player is trapped with a ticking time bomb that must be defused before it goes off, and their teammates are tasked with coaching them through the process with information found in a complicated Bomb Defusal Manual. The trick here is that only the defuser can see the bomb, and only the “experts” deciphering the defusal instructions can see the manual. The high-pressure play that ensues is a great lesson on teamwork, communication styles, and operating in a crisis. 

Wharton professor Ethan Mollick uses the game to illuminate these and other learning points within the context of entrepreneurship, and has long partnered with the Learning Lab to deliver the simulation with laptops that we lend out from our Exec Ed group. However, when Facebook released the Oculus Go, and Mollick subsequently discovered that Keep Talking was one of the available game apps, he asked the Lab to help him bring his bomb-defusing entrepreneurship experience up to the next level – and into the realm of virtual reality (VR).

The Discovery Process

On the surface, Oculus Go kits present an intriguing value proposition over laptops. First off, they’re a lot cheaper ($249, as of this post), they don’t require wires or cables, and they’re compact (thus, highly portable). These are all key factors that play into the efficacy of supporting a technology-based solution at classroom scale. After all, with the limited amount of time between scheduled classes, there are precious few minutes to burn setting up an interactive learning experience (ergo, quicker is better).

That being said, Facebook’s Go devices posed their own set of challenges – largely due to the fact that this was the first instance of the Learning Lab supporting a VR game. Laptops, of course, are a tried-and-true technology with a predictable set of parameters that can practically be managed on auto-pilot; they are a “solved” solution in that way, given that my team has been working with and troubleshooting classroom laptops for years. VR headsets, though? Logistically speaking, that’s a whole other ballgame…

Some of my initial concerns were:

  • Can “Go” products connect to University wifi?
  • Do they need to be tethered to a mobile device? If so, how many can you tether at once?*
  • How long is a headset’s battery life?
  • Do we need to purchase the bomb-defusal game on each Oculus Go?*
  • Will operating the VR accessories be intuitive for students?
  • If something goes wrong, how do I triage issues?

These were critical questions that needed to be answered before I could commit to supporting Oculus devices in the classroom setting, no matter how excited Prof. Mollick was by the prospect. Fortunately, I was able to allay most of those concerns once I got my hands on a few headsets of my own – for research purposes, naturally.

With a bunch of identical Go kits spread out in front of me, I immediately realized we’d need an effective way to track which controller went with which headset. Since they all look the same – and are completely wireless – it’d be virtually impossible (no pun intended) to tell at a glance what went with what, once they were out of their cases. To solve this issue, I bought a label-maker and came up with a naming convention to keep everything straight.  

I then downloaded the Oculus app for my iPhone, inadvertently queueing up my next problem: Just as I feared, the devices could not connect to the University’s wifi. (I later found out they can be used without a web connection, but the initial setup does require internet access.) Penn’s security necessitated a far more complex pairing process than the Oculus could manage. After a series of hits and misses (connecting to the guest wifi, using a special devices-only wifi SSID, etc.), I had to bring out the big guns and create my own wireless network. Luckily, I was able to use my phone as a mobile hotspot – but unless you’re in a similar pinch, I would not recommend this route, because the speeds are lacking (and the initial setup is a lonnnng process). If I had to do it again, I’d work with our IT support staff to find a workaround for connecting to the University wifi – or get a wireless router and set it up myself.

Once my devices were on the internet, I went through the setup for each one. It was all fairly straightforward and basically involved creating an Oculus account, then adding each Go to it through the app. While it certainly doesn’t take a wizard to complete these steps, a wizard is in fact your guide during the entire connectivity process (including forcing you to watch a mandatory safety video, which got very old very quickly). Dark magic aside, setup ends with the Oculus downloading updates – an additional 20 minutes per device on my slow hotspot connection. I learned after the fact that multiple devices can indeed be set up simultaneously, allowing them all to download updates in parallel. (Thanks, wizard!)

Finally, I was ready to Go.

Stay tuned for my next post, where I’ll share the Learning Lab’s inaugural experience supporting students with VR in a live “bomb defusal” class! tick… tick… BOOM

*A quick note: I was able to pair all 9 of the Go devices I had at my disposal to one Oculus account on my mobile device. Since applications are tied to the account rather than to the Oculus Go itself, I was able to purchase Keep Talking once in the Oculus app store and easily download it to each headset. After that, I was able to turn off the wifi connection and still play the game – a critical answer to the question of whether these VR devices are practical for classroom use on campus.

Guest Post: Student Researcher Cracks Open the Case for Classroom AR

As the Learning Lab continues to explore applications for augmented reality (AR) in higher ed – specifically, in the business-school setting – we are eager to give voice to fresh perspectives and innovative experimentation with the technology. This week we’re excited to hand our blog space over to Wharton student Jesse Cui, who recently served as an AR research assistant here in the Lab, so he can share his findings with you directly. (Take it away, Jesse!) 

Augmented reality (AR) is a potential game-changer in the classroom setting. It adds two new dimensions to education that can greatly boost the learning experience: control and visualization. As an undergraduate student-researcher in the Learning Lab, I worked on developing a proof-of-concept mobile app that demonstrates the benefits of AR when studying more complex, difficult material.

For this app, I demonstrated how simple use of AR can help students learn statistical concepts such as multivariate regression, which is inherently three-dimensional. The app allows users to visualize a 3D data space with control over the visuals (unlike current static 2D images) by holding their phones up to a PowerPoint image. Users can also toggle regression on and off to see how it would appear in 3D space versus 2D.

Many people would imagine that building an AR app is difficult but, in fact, with the wealth of available software packages and libraries – including marker-based image-recognition software and a variety of ready-made 3D objects to use in apps – developing AR apps has never been easier. For example, Vuforia is a company that provides software allowing developers to create model and image targets that can be easily identified and tracked in an AR app. Combining this technology with Unity, a game engine for developing graphical apps and games, plus a couple of C# scripts, I was able to create a fully functioning mobile app in no time. Another bonus of this combination of technologies is that you can easily build apps for both iOS and Android.

Tweet-Mining for AR Sentiment in the Twitterverse

Another part of my research in the Learning Lab was mining social-media feeds to perform sentiment analysis, in order to better understand the public perception of augmented reality. Using GWU’s Social Feed Manager, an open-source tool that allows researchers to mine tweets from Twitter at scale, I collected a database of over 50,000 recent tweets about AR and VR (virtual reality), as well as specific products built on these technologies. To dive deeper on the latter, I pulled tweets about Microsoft’s HoloLens and startup Magic Leap’s One AR glasses. Pulling tweets on these products allows researchers like me to better understand market perception of different offerings.

To perform sentiment analysis on the tweets, I trained a machine-learning classifier – specifically, a deep Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) units – in PyTorch. This type of model is often used for text analysis and natural language processing, since it retains a memory of previously seen information. I opted against using a pre-trained model, since I wanted one trained specifically on tweets. I pulled Sentiment140’s dataset, which contains 1.6 million tweets with a sentiment label for each, and performed data cleaning and preprocessing, including stemming and tokenization of the tweets. Lastly, I trained the RNN on the tweets. Overall, the trained model had a test accuracy of around 84%, which is decent for natural language processing.
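As an illustration, the cleaning-and-tokenization step can be sketched in a few lines of Python. This is a minimal sketch only – the regex rules here are my own assumptions, not the actual pipeline, and the stemming step is omitted:

```python
import re

def preprocess_tweet(text):
    """Minimal tweet cleanup: lowercase, strip URLs and @mentions, tokenize."""
    text = text.lower()
    text = re.sub(r"https?://\S+", "", text)  # drop URLs
    text = re.sub(r"@\w+", "", text)          # drop @mentions
    text = re.sub(r"[^a-z0-9\s#]", "", text)  # drop punctuation, keep hashtags
    return text.split()                       # whitespace tokenization

print(preprocess_tweet("Loving the #HoloLens demo!! https://t.co/xyz @Microsoft"))
# → ['loving', 'the', '#hololens', 'demo']
```

In the real pipeline, each token sequence would then be stemmed and mapped to vocabulary indices before being fed to the LSTM.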

Below are a couple of results I received from running the model on the AR/VR tweets.

  • Random tweets: .6403 (out of 1) positive sentiment (11,122 total tweets) – used as a control
  • “Augmented Reality” tweets: .738 positive sentiment (16,339 total tweets)
  • “Virtual Reality” tweets: .722 positive sentiment (25,990 total tweets)
  • “HoloLens” tweets: .701 positive sentiment (6,924 total tweets)
  • “Magic Leap One” tweets: .886 positive sentiment (266 total tweets)
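For context, one plausible reading of these figures is the share of each query’s tweets that the model classifies as positive. The sketch below is purely illustrative – the function name and the 0.5 threshold are my assumptions, not the actual analysis code:

```python
def positive_share(scores, threshold=0.5):
    """Fraction of tweets whose predicted positive-sentiment score beats the threshold."""
    return sum(1 for s in scores if s > threshold) / len(scores)

# Hypothetical per-tweet model outputs for one query term
scores = [0.9, 0.8, 0.3, 0.7]
print(positive_share(scores))  # → 0.75
```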

Take from this what you will, but it’s great to see that sentiment around AR/VR is consistently more positive than the control, and I have no doubt that, as we continue to perfect these technologies, we will also continue to realize the benefits of AR as an education technology.

Jesse Cui is a dual-degree Wharton undergrad in the Jerome Fisher Program in Management and Technology, studying computer and cognitive science at the Penn School of Engineering and Applied Science, as well as operations, information, and decisions at Wharton.

Penn’s Startup School Gets into the (Startup) Game

Joshua Davidson, CEO & Founder of Chop Dawg, talks with student entrepreneurs in the Penn YouthHack Startup School and Ventures programs.

For Wharton student Abhinav Kajaria (‘20), entrepreneurship is in his blood. His family’s home-improvement business was started by his grandfather, passed down to his father, and soon after he graduates, Kajaria – who co-founded Mumbai-based micro-finance NGO 1Ghar while still in high school – will take over the reins, “helping it grow to new heights,” he told Learning Lab project delivery manager Lan Ngo, in a recent interview.

Thus, it was only natural that Kajaria was drawn to the Penn chapter of YouthHack Philly, through which he currently serves as director of the Startup School – a semester-long bootcamp designed to introduce incoming students to the campus’ robust startup culture.

YouthHack was started in the Philippines by David Ongchoco (Penn ‘18) four years ago, with the goal of enabling students in that area of the world to get involved in startups, technology and entrepreneurship. By 2015, the initiative had expanded to Philadelphia, sparking the YouthHack Undergraduate Penn Society (YUPS), founded to help students learn more about technology startups and entrepreneurship through experiential and action-based learning programs. Since that time it has become a crucial component in the citywide effort to build Philadelphia’s startup ecosystem.

The Startup School and the YouthHack Ventures Accelerator program are unique to the Penn chapter of the worldwide YouthHack organization, Kajaria notes. The Startup School is aimed at students who want to learn about startups but have no entrepreneurial experience. “We help them develop a concept, a business model, and then form teams that stick on one idea through all the processes required to build a startup around it, and eventually put the idea out to benefit the overall community,” he explains. Serial entrepreneurs, professors, venture capitalists, and student entrepreneurs teach weekly lessons ranging from ideation to building an MVP to digital marketing and more. At the end of the program, students pitch their business idea to investors.

The Ventures Accelerator, on the other hand, adopts experienced student entrepreneurs seeking to launch their startups, providing intensive mentoring and workshops, while helping them identify resources and connect with in-place YUPS networks so they can build their concept and products over the course of a semester.

Kajaria joined the core team running the first class of Startup School when he was a sophomore, using his tech skills to build a database of people, speakers and founders coming through the Penn chapter of YouthHack. He then transitioned to running the group itself, building a strong program of student-engagement and popular events around it. It was in this role he first heard about Wharton Prof. Ethan Mollick’s Startup Game – a simulation designed in conjunction with the Learning Lab to immerse students in the direct experience of launching (and getting funding for) an early-stage business.

Intrigued by the game’s potential application within the Startup School, Kajaria enrolled in an MGMT 230 course where the game was played. “It provided a great foundation for understanding what running a startup is all about, and whether pursuing entrepreneurship studies is a good fit for you,” he told Ngo, who oversaw the simulation for his class. Deeming it a perfect fit for the students he was currently working with via YUPS, Kajaria then ran the game at the Startup School.

In addition to being a great enhancement to the group’s current programming, Kajaria says he took away even more from the simulation by playing it a second time, outside of his professor’s classroom. “I (observed how) everyone’s negotiation skills improved through the game,” he says, noting, “there’s a lot more to it than simply learning the basic functions of running a startup.”

As with all Learning Lab simulations, the linchpin of the experience is the culminating debrief, which reveals the team and player results accrued throughout the game, explains the logic behind certain outcomes, and puts everything in crystal-clear context. It was this aspect of his second go at the Startup Game that proved to be the biggest aha moment for Kajaria, who thought for sure that his well-organized and successfully financed (fictional) startup would earn him one of the game’s highest scores. “I wasn’t even among the top-three founders,” he told Ngo, expressing pride at how well his Startup School freshmen performed. “It really opens your eyes (to the entrepreneurship journey) – it’s not just about the fun you have playing it, it’s a great learning experience, too.”


To learn more about the Startup Game, email the Learning Lab team at

Style Points: Augmented Reality and the Tailored Learning Experience

In case you missed the memo, the next wave of the Digital Revolution – in the form of immersive computing – is rapidly approaching the shores of higher ed, and with it, one of the greatest opportunities to transform learning in a generation.

Surfing along the crest of this radical wave of new technologies is augmented reality (AR). Sometimes referred to as “blended reality,” it allows users to experience the real world, printed text, or even a classroom lesson with an overlay of additional 3D data content, amplifying access to instant information and bringing it to life; in turn, bringing thrilling new opportunities for experiential education.

Perhaps more importantly, AR has the potential to democratize learning and tailor visual or data displays to fit a wide range of individual cognitive strengths. Augmented-reality apps and wearables enable access to rich, immersive educational experiences, and have the potential to differentiate instruction by catering to the specific learning needs and styles of an increasingly diverse student population. Because, let’s face it – many educators on the ground have already realized that a one-size-fits-all approach to curricular material does not always lead to strong learning outcomes.

Learning in Style

A better understanding of what differentiated learning means, in and of itself, may be helpful for developing lesson plans and instructional materials that meet the needs of individual students. Delving into the concept of “learning styles,” for one, can drive home the point that different students perceive and interact with information in their learning environment differently and, therefore, have varying preferences and needs in terms of how they’re taught. (However, I should note that research on learning styles is an area of study that continues to evolve, so there is no definitive consensus on how to address this increasingly relevant issue in education as of this writing.)

To illustrate how AR can provide various entry points to learning, let’s discuss a few examples of learning preferences that researchers have identified, along with potential AR experiences that could speak to those learning styles.

Visual Learners

Many students learn best when they’re able to access visual rather than verbal information. Whereas classroom materials that integrate visuals might include presentation slides, textbooks, handouts and the like, AR takes visuals to the next level. Augmented Chemistry, a tangible user interface (TUI), is an example of the visual affordances of AR. Using this TUI, chemistry students can pick up virtual atoms, position them to compose molecules, and rotate the 3D molecule to view it from all angles. Compare this learning experience to the use of traditional textbooks consisting of 2D images that can’t be manipulated – the latter now seems pretty, well, flat in comparison, no?

Kinesthetic Learners

Kinesthetic learners respond well to physically engaging exercises, which place-based or location-based AR can offer in spades. Global positioning systems (GPS) within place- or location-based AR systems give users access to relevant information as they arrive at a location, requiring them to physically move within an environment to complete tasks. AR provides kinesthetic learning opportunities, too, by allowing users to use bodily motions to manipulate virtual objects.

Social, Field-Dependent, and Application-Directed Learners

Researchers have also identified a learning-styles dimension that emphasizes the social aspect of learning. To wit, some learners desire interaction with others as a means of co-constructing knowledge. In addition to a preference for interacting with others, field-dependent learners rely on an external frame of reference (which may be provided by other learners); and then there are application-directed learners, who mainly prefer concrete applications of subject matter. Through leveraging connected learning and providing a virtual platform for social activity, AR has the potential to meet the needs of such learners.   

For example, in Environmental Detectives – an augmented-reality simulation game – users role-play environmental scientists. Players move about in a real space while being provided with location-specific information. They interview non-players to gather info, and they’re able to beam data to one another. Such a game incorporates social aspects of learning while also accommodating users who learn by interacting with an external frame of reference, as well as those learners who benefit from concretely applying their knowledge in a scenario.

Wave of the Future

With so many possibilities and applications, AR could truly be a game-changer in education. It allows for dynamic instruction that can’t be accomplished through traditional classroom experiences (without, of course, replacing the classroom altogether). Think of it as a powerful supplemental learning tool with the awesome ability to reach every style of student.

So join the Learning Lab team as we continue this journey and further explore the exciting realm of unprecedented opportunities AR presents us with here in higher ed. Together, we’ll face this new wave of immersive technology with open arms, encouraging educators to push the boundaries of teaching and, ultimately, the very boundaries of learning itself.

This blog post, written by Learning Lab Project Delivery Manager Lan Ngo, is the first in a series of posts that will explore AR technology and its applications in education. If you would like to add to this conversation, please leave a comment!

Learning Lab = World-Class Games for a Global University

Learning Lab Technical Dir. Sarah Toms (center) stands with students she guided through the Executive Development Program (EDP) sim in Thailand last year.

What do the Hong Kong University of Science and Technology, the IE Business School in Madrid, HEC Paris, and Dubai’s S P Jain School of Global Management all have in common with Wharton? Well, besides being among the Financial Times’ top-ranked MBA programs in 2017, they all use simulation games developed right here in the Learning Lab.

And they’re not alone. Around the globe, from the Grandes Écoles (“Great Schools”) of France to top universities in Copenhagen, Australia, India and dozens of other countries, there are thousands of students applying their burgeoning business acumen to The Startup Game and OPEQ — two of our best-selling sims available through Harvard Business Publishing (HBP), which recently issued a report detailing the worldwide distribution of both games in 2016.

Their popularity in Wharton entrepreneurship and negotiation classes notwithstanding, the HBP report is a noteworthy success for our team in that it illustrates the synergy between the goals of the Learning Lab and those of the University at large.

The former reflects the expressed intentions of our namesake, Alfred West Jr., who gifted the School with $10 million in 2001 to establish a veritable laboratory for creating “innovative learning tools that challenge students to think strategically across business functions and organizations” and enable Wharton to “take a lead role in rethinking the learning paradigm.” Nearly two decades later, the Learning Lab’s historic mission is increasingly central to President Amy Gutmann’s own vision for the future of the University of Pennsylvania.

According to Gutmann, “Our commitment to global engagement is essential to what I call ‘educational diplomacy.’ Now more than ever, we are bringing Penn to the world and the world to Penn. And in doing so, we are building stronger cross-cultural connections, deeper relationships, and mutual understanding within the global community.”

Wharton Dean Geoffrey Garrett in Seoul during his “Global Conversations Tour,” where he shared his vision for the School.

Sharing an ethos that embraces collaboration and the exchange of knowledge is Wharton Dean Geoffrey Garrett. “Globalization and technological change are poised to transform business education. I have no doubt Wharton will be in the vanguard of this transformation here and in other countries,” he stated upon taking over the position in 2014.

The School’s Executive Education division has helped draw an international audience as well, partnering with the Learning Lab to build custom learning experiences for foreign audiences both on-campus and abroad.

In 2016 alone, more than 1,000 participants experienced one of our simulations in their Wharton Exec Ed program. One of Africa’s foremost financial institutions, for example, has sent over managerial staffers for a two-week business-leadership bootcamp built around the EDP Simulation four times in the last two years! (And it always ends the same: with a celebratory, fist-pumping “warrior chant.” See it in the video below.) And that’s just one of dozens of engagements worth cheering: we also developed a similar EDP program for a multinational manufacturer in Thailand, which Technical Director extraordinaire Sarah Toms flew out to personally facilitate in 2016.

From the start, Dean Garrett has made it known that, though seated in the U.S., he sees Wharton as an asset to the entire world — and, in turn, bringing the world into the classroom in order to prepare students to be truly global leaders.

Whether that classroom is in Singapore, Saudi Arabia, Switzerland or Estonia, Learning Lab sims like OPEQ, The Startup Game, and EDP are doing just that, augmenting traditional learning in undergraduate, MBA, and executive education programs with dynamic, virtual “real-life” business experiences. And while they may be created with faculty members here in Philadelphia, they are now driving home their educational underpinnings on campuses around the globe.

SIMPL: One Data Model to Rule Them All

Code starts at the model level. So before we wrote one line of SIMPL (the Learning Lab’s new simulation framework), we needed to figure out what, exactly, our data model would look like. Considering the ambitious goal of the project — a simulation framework that could support all of our current games as well as games yet unknown — we had to be very careful to create one that would be flexible enough to adjust to our growing needs, but not so complex as to make development overly challenging. Luckily, we have decades’ worth of simulation-development expertise on our team, and were able to draw from that wellspring of knowledge when we worked on SIMPL’s foundational data model.

A data model, I should say, is basically the definition of how data is stored in the system, and how the pieces of data relate to one another. When we began the process of creating SIMPL, we needed to define the logical pieces that create a simulation, and build relationships among those pieces that, well, made sense.  
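To make that concrete, here is a rough sketch of how such logical pieces might relate, written as Python dataclasses. The entity names and fields are illustrative assumptions for this post, not SIMPL’s actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    name: str      # e.g., a production level or pricing choice
    value: float

@dataclass
class Scenario:
    name: str
    decisions: List[Decision] = field(default_factory=list)

@dataclass
class Run:
    game: str
    scenarios: List[Scenario] = field(default_factory=list)

# A game run holds scenarios; each scenario holds the decisions made in it.
run = Run(game="OPEQ")
period1 = Scenario(name="Period 1")
period1.decisions.append(Decision(name="production", value=1000.0))
run.scenarios.append(period1)
print(run.scenarios[0].decisions[0].name)  # → production
```

Even a toy model like this forces the relationship questions into the open: does a decision belong to a scenario or to a player? Can a run have more than one scenario? Those are exactly the kinds of questions the nomenclature work had to settle.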

Speaking the Same Language

Our first challenge was agreeing upon a nomenclature for the pieces that comprise a simulation in general. This may seem like a fairly trivial process; after all, everyone pretty much knows what we mean when we say a “game run,” a “decision,” or a “scenario.” However, when it came to developing a data model, this language meant different things to different people – especially when we tried to communicate requirements to the outside vendor working with us on the platform. With that in mind, we ended up creating a glossary of terms, defined right in the context of the simulation platform. This glossary helped us bridge the gap between our team and the vendor, allowing us to talk about terms in ways we all agreed upon and understood.

Start with What You Know

Once we agreed on the definitions of the various parts that make up a sim, we began to map out what our data model would look like. To assist us in this process, we leaned on our collective years of simulation experience here in the Learning Lab — namely, the games we’ve already supported and developed. Then came the whiteboarding (sooo much whiteboarding), wherein we drew relationships between objects and assessed whether the connections we were making made sense.

We then broke down existing games and made sure the new data model would be able to accommodate the unique implementation of each of those sims. This served as a valuable “smoke test” for us — i.e., a way to ensure we were on the right track. To that end, we picked games with diverse implementations in order to be 100-percent certain the model we were creating was flexible enough to meet our needs.

The results of one of our white-boarding sessions. 


The current SIMPL data model.

Where to Go from Here?

After a long period of iteration, we finally settled on a data model that made sense to both us and our vendor. We made further changes along the way as development progressed, but the main structure we came up with remained the same from whiteboard etchings to the implementation of our first sim. Going forward, of course, every new simulation we develop will be an opportunity to test the limits of this model, which we can improve or simplify where and when the need arises.

Moreover, the lessons we learned building our data model for SIMPL could be applied to any data-driven application. In that regard, here are the main things we came away with:

  • Take time to think deeply about your data model, and do so in collaboration with project managers and developers who will ultimately be responsible for the application. The decisions you make here will dramatically impact the future of your application. It’s easy to make changes when you’re working on a whiteboard; it’s a lot harder to do so once you’ve written applications dependent on the model.
  • Don’t assume everyone knows what you mean when describing the model. And, perhaps equally important, empower your team members to speak up when something does not make sense. Data models can be complex animals, and the more everyone understands, the better end result you will have.
  • Test your assumptions. Before a single line of code is written, walk through hypothetical applications with your data model. Can you get the data you need in a sensible way? Do the relationships you’ve built reflect the logic required within the application? The more tests you run, the more confidence you can have that your model is solid.
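One concrete way to "walk through hypothetical applications," per the last bullet, is to phrase a question the application must answer and check that the model can answer it cleanly. A minimal example, using an illustrative nested-dict representation of a run (again, not the real SIMPL schema):

```python
# Toy walkthrough: can the model answer "what was decided most recently?"
# The structure below is illustrative, not the actual SIMPL schema.
run = {
    "game": "rules-of-engagement",
    "periods": [
        {"order": 1, "decisions": [{"name": "price", "value": 9.99}]},
        {"order": 2, "decisions": [{"name": "price", "value": 10.49}]},
    ],
}

def latest_decisions(run):
    """Return the decisions from the highest-order period of a run."""
    if not run["periods"]:
        return []
    latest = max(run["periods"], key=lambda p: p["order"])
    return latest["decisions"]
```

If answering a question like this requires contortions (say, scanning every object in the database), that is a sign the relationships in the model need rethinking before any application code depends on them.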

In the wise words of George Harrison, “If you don’t know where you are going, any road will take you there.”

That same logic applies to creating a data model that hits all the right notes. Given that this critical construct would be the cornerstone of our new simulation framework, if we hadn’t taken the time to exhaustively map out our needs (as well as how SIMPL would meet them), there’s a good chance we would have lost direction and the whole project could have veered off course. So take it from us: while it can be tempting to take shortcuts when embarking on a project of this scale, carefully working through a proper planning phase goes a long way toward ensuring that you ultimately reach your destination and meet your end goals.

SIMPL Magic: Automatic Browser Page Updates

In multiplayer web-based games, all users should be able to see up-to-date game data without having to manually refresh their browsers. For example, players need to be notified when the game has moved from a phase in which they can submit decisions to one in which they cannot. Monitoring the game state for such changes is often handled by the simulation’s front-end code.

One of the real pleasures of developing simulations using the Learning Lab-authored SIMPL framework is never needing to request fresh data in front-end code. That’s because SIMPL’s architecture ensures a game user’s browser page is always up to date. Curious how we managed to pull that off? Then keep reading!

First, it’s important to understand that each SIMPL game comprises three components:

  • SIMPL-Games-API (a service shared with other games that maintains the SIMPL database)
  • Model Service (defines and runs the game’s simulation model)
  • Front-end Server (provides the game’s user interface assets to the browser)


Architecture of a SIMPL game


Our SIMPL-Games-API service manages the SIMPL database. It provides a REST API used by the game’s model service.

The game’s model service defines the game’s simulation model and handles running the simulation and database updates. It is implemented in Python using classes provided by our SIMPL-Modelservice package.

The game’s front-end user interface code is implemented in JavaScript using functions provided by our SIMPL-React library (built with React and Redux).

These SIMPL game components work together in concert to ensure that game users consistently see the current state-of-the-game data stored in the SIMPL database.

Here’s how it works: Each time the model service updates the database using the SIMPL-Games-API’s REST API, a webhook is triggered that notifies SIMPL-Modelservice functions of the update. SIMPL-Modelservice code then pushes an update notification to each game user’s browser via WAMP. There, SIMPL-React code handles updating the browser’s Redux store state, automatically updating the React components.
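The model-service side of this pipeline can be sketched in a few lines of Python. The function and topic names below are hypothetical stand-ins (the real work is done inside the SIMPL-Modelservice package), and the WAMP session is replaced by a stub so the flow is visible end to end:

```python
# Minimal sketch of the update pipeline described above.
# Names are illustrative; SIMPL-Modelservice handles this for game authors.

class FakeWampSession:
    """Stands in for a real WAMP session; records published messages."""
    def __init__(self):
        self.published = []

    def publish(self, topic, payload):
        self.published.append((topic, payload))

def on_webhook(session, event):
    """Hypothetical handler invoked when SIMPL-Games-API reports a
    database change via webhook.

    Pushes an update notification onto a per-run topic so every
    subscribed browser can refresh its Redux store.
    """
    topic = "model.run.{}.updated".format(event["run_id"])
    session.publish(topic, {"resource": event["resource"], "data": event["data"]})
```

In the real framework the browser side subscribes to topics like this one, and SIMPL-React translates each notification into a Redux store update, which in turn re-renders the affected React components.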

And there you have it — the user is automagically guaranteed to see fresh data, without game authors having to write a line of code! It works like magic, but it’s actually quite SIMPL.


For more details, please see our SIMPL Framework docs.

SIMPL: Wharton Launches Its Own Simulation Framework

Imagine a world where simulations are simply … SIMPL!

Simulations are expensive to create and require highly specialized expertise, and if what you build doesn’t deliver against the intended learning objectives, making tweaks and updates can be more complicated than it needs to be. Other persistent challenges keep those of us who manage simulation teams up at night — chief among them, retention of technical talent. If you’re authoring simulations on a commercial platform, your team must learn a large amount of specialized know-how, and those skills often don’t translate to other careers in technology. Authoring platforms present other problems as well, including a lack of integration with our learning management system (LMS), the best source for user management, and no single sign-on authentication integration.

When we hit the mark, simulations are an incredibly powerful and effective form of educational technology that can far outperform traditional lectures and cases.

For example, students who completed our Looking Glass for Entrepreneurship simulation performed one standard deviation better on the final exam than students who didn’t go through this experience. And we have lots and lots of examples just like this one!

With a burning desire to overcome the challenges we face in the simulation space, in 2016 the Learning Lab authored our own simulation framework, SIMPL, built on the open-source Python/Django stack. We’re incredibly excited about this new direction for the team, and we are already seeing myriad returns on our efforts.

At Wharton, we have completed our first multiplayer simulation on SIMPL: Rules of Engagement, a marketing strategy simulation. Intermap, a mind-mapping tool used in idea generation, also utilizes aspects of SIMPL — namely, the LTI integration libraries for authentication — and is published within Canvas as a module (user management? What user management!). A number of other simulation projects are in the pipeline for the coming year, and all will be written on SIMPL.

Possibly the most exciting part about controlling our own destinies is that in mid-2017 we will release SIMPL to the world, free of charge and under an open source license. Our goal is to develop a rich community of practitioners and other experts around this framework, because we believe a rising tide lifts all boats. If you’re interested in getting a sneak peek at SIMPL, here are the docs. In the coming weeks, there will be a variety of blog posts from other SIMPL authors about more specific areas of the SIMPL framework. And if you’re interested in being included in the beta, please email


SIMPL Architecture



Here’s the team behind SIMPL, left to right – Donna St. Louis, Flavio Curella (Revsys), Joseph Lee, Jane Eisenstein, Sarah Toms.

Not pictured: Frank Wiles and Jeff Triplett, Revsys




Does this process make me look fat?

Hey. Where did everybody go?!

Relax. When it comes to workflow efficiency or best practices, this is not a trick question. But it’s also not a question you’re ever likely to hear. That’s because people rarely ask for — or voice — an honest opinion on bad, bloated, or outdated processes. They’re just things we grudgingly don in order to get our work completed.

Truth be told, it’s oftentimes easier to bear the burden of “that’s just how it’s always been done” than to actually address the flaws of ill-fitting processes directly.

However, as part of my new role with the Learning Lab, I’ve been given a unique opportunity to do just that. The need was born out of the Lab taking on an ever-increasing number of multi-day simulations (such as the Executive Development Program) that require gritting through the bad stuff in the heat of the moment while thinking of ways we could do it better in the future. And just as you’d expect, I’ve found that tackling baked-in redundancies or antiquated inefficiencies can be a delicate dance between helping and offending. I’ve also learned that altering a process requires a process all of its own…
