e-OPEQ: How We Learned to Stop Worrying and Love Remote Delivery

When the COVID-19 crisis forced all Wharton courses and classes online in March, many faculty members were caught off-guard and scrambled to adapt their in-person lectures and labs for distance learning. Naturally, the Learning Lab saw an opportunity to help. Our team immediately got to work identifying universally applicable business simulations that could be set up and delivered remotely – and the first game to serve as a virtual guinea pig in this regard was OPEQ.

IT Director Joe Lee points out that OPEQ, in particular, is a good fit for remote delivery – though he admits the setup process was considerably more difficult than it would be in normal times: “Faculty need to be coached to manage their expectations regarding the effectiveness of the exercise, especially for faculty who have run the same exercise in-class previously.”

Erica Boothby, a Wharton lecturer and postdoctoral fellow, was one of the School’s first instructors to embrace remote delivery of what is normally a popular in-class simulation for her spring course on negotiations. “Students were very engaged,” she said of the OPEQ experience, noting that the “team” element of the game seemed to inject a boost of energy into her socially isolated students.

It quickly became apparent how much her class enjoyed it during the debrief, Boothby added. “They really felt the power of the social dilemma firsthand – how can you maximize the profits of your team but also of your world, which includes your competitors? OPEQ really set the stage nicely for our conversation about defection and cooperation, and trust in negotiation.”

And speaking of setting the stage, Joe also emphasizes how important it is to spend one-on-one time or hold training sessions with faculty members so they’re comfortable with the technology tools available to them to facilitate the exercise.

Home Screen during Play phase of OPEQ

“Go over the ‘flow and show’ of the experience,” he adds, referencing the end-to-end explanation of what faculty and students can expect before, during, and after the remote exercise. And, lastly, “Do everything you can to test and vet the process before getting in the classroom,” he advises, noting that he and the LL’s head of operations, Heather Meiers, met with each faculty member for 30 minutes before their class to go through all the major steps in the upcoming experience, ensuring they were as comfortable as possible.

For Boothby, this made a huge difference in her level of comfort conducting OPEQ from home. “I couldn’t have asked for better support,” she said. “Heather walked me through the procedures thoroughly, well in advance; she sent an email to my students informing them how to access the game and providing any information they needed in order to log into the correct world, etc. And Joe was available on Slack to field any questions that came up during the simulation itself, so I felt very well supported.”

As a result, Boothby says she’d recommend remote use of OPEQ to other faculty members “without hesitation!” The main selling point, in her words, is that “the process is fully streamlined, which really enables the students – and instructor – to focus their full attention on the experience of the negotiation and the content of the lesson.”

Other tips and tricks the Learning Lab recommends for remote delivery:

  • Make sure you have a way for students to get help during the exercise should they experience issues. We gave students a help e-mail address they could contact if they had technical issues, and we dropped into each virtual meeting room personally to check on them during the exercise.
  • Make sure the exercise can run with variable attendance. OPEQ can run with as few as one person on each team; typically, teams are composed of 4-5 students to ensure that the experience can still run in the event of some absences.
  • Build in some more time for the exercise if you are used to running it in-class. This allows time to resolve technical issues and questions.
  • Determine how you’ll communicate with faculty or other facilitators during the exercise. OPEQ takes place in breakout rooms, and faculty typically dip into and out of each room throughout the simulation. Heather had the idea of using Slack as a way for us to communicate with them while the exercise was being played – that way, no matter which “room” people were in, we were only a few keystrokes away if they needed assistance.

Things to consider when deciding if an exercise is a good fit for remote delivery and/or if it can be modified to meet your usage needs:

    • What does this simulation require to work in class?
      • Think about teams, breakout rooms, faculty intervention, student conversations between teams, the number of students needed to play, etc.
    • Will this simulation work remotely?
      • e.g., Is it web-based? If not, do you have virtual machines or workspaces you can deploy to students and, if so, what type of setup does that entail on both the faculty and student sides?
      • What does the setup process entail? If it requires breakout rooms, how many do you need and how can they be created? Is the faculty member comfortable running a simulation remotely?
      • Can you modify aspects of the experience to make the simulation work remotely?
        • Things to consider: Can you replace in-person interactions with digital ones? (For instance, with OPEQ we removed a “face-to-face” round and replaced it with chat.)
        • Can the simulation work asynchronously? Is it possible to give students a window of time to complete the exercise?
    • What is your normal level of support during an in-class run, and does that need to increase or decrease for a remote run?
      • i.e., Can you be simply “on call” for tech questions, or do you have to sit in a virtual lecture room to monitor questions for the duration of the simulation? (For instance, with in-person OPEQ we are usually in the class and wandering the halls; for remote delivery, we were there for the first 10 minutes of the virtual class for the intro, then monitored the game remotely in an “on call” style while communicating with the professors over Slack and answering questions from students via email – a much more hands-off approach, which may be optimal for you.)

Given that the Learning Lab had never run OPEQ remotely before, the exercise was a huge success and revealed a whole new (virtual) pathway for delivering Wharton’s interactive simulations while we’re all off-campus. This should give everyone hope that we can still provide students and faculty with many of the uniquely effective digital exercises and teaching tools that bring their course material to life.

Here are some other popular sims the Learning Lab can deliver remotely:

Customer Centricity

The Startup Game


Pivot or Perish


and more, including third-party simulations!

For more information on the Learning Lab’s remote-delivery options, email the team at learninglab@wharton.penn.edu.

uSciences e-Learning 3.0 Conference: “This is Not (Just!) a Simulation”

Over the past few decades, online learning has evolved from the so-called 1.0 phase (in-person classes augmented by static web pages and PDFs) to the more revolutionary 2.0 stage (i.e., the dawn of online courses, classroom-blended “talking head” videos, and rudimentary analytics) to today, as the 3.0 era (high-tech custom learning experiences) begins to take shape.

This year, the latest evolutionary advancements were once again spotlighted, explored and celebrated at the (17th) annual e-Learning 3.0 Conference – and the Learning Lab’s very own IT Director, Joe Lee, was on hand as one of 30 chosen speakers at the May 14 event, which also featured luminaries from other regional colleges and panel discussions with LMS vendors, researchers, and stakeholders.

Hosted by the University of the Sciences and kicked off by a keynote address from renowned edtech consultant Phil Hill, the conference showcased the use of technology to enhance teaching and learning in higher ed, allowing participants to share best practices and creative approaches for learning enrichment.

Joe’s presentation, “This is Not a Simulation: Supporting Games/Sims in the Classroom Setting,” pulled back the curtain on what goes into making effective, engaging e-learning tools in the 3.0 era. Using case studies from the Lab’s own experiences delivering and supporting some of its most popular customized learning experiences, he posed key questions that are critical to the success, or failure, of a sim or teaching game:

How do faculty get comfortable taking the leap into the technological unknown? What problem are you trying to solve? How do students get help? Does this game/simulation achieve the professor’s goals? Can this be supported at scale?

Given that the Lab annually supports more than 10,000 student plays of over 33 different games for Wharton faculty in almost every discipline (and does so with a small team of fewer than 5 people), Joe offered a unique, insider perspective on what it takes to ensure that each run of a sim or classroom game goes as smoothly as possible. From preparation, evaluation and testing, to technical issues, setup, and in-class support, he shared the lessons we’ve learned and the best practices we follow (well-honed through years of trial and error).

In case you missed it, these were Joe’s key takeaways:

  • Never lose sight of the learning objectives.
  • There must be painful dedication to testing and retesting (and re-retesting!) of a sim or new teaching tool prior to classroom delivery – aka, the “trust but verify” approach.
  • Close. The. Loop.
  • Keep in mind that your e-Learning technology is but one piece of the class – so never lose sight of the big picture!
  • Lastly: There is always an area where you can do better!

The Lab was proud to be part of this exciting day of collective edtech wisdom – and, together with the dozens of other presenters at the forefront of the 3.0 era, is happy to be part of an ever-growing community engaged in improving teaching and learning by inventing and deploying new pedagogies and technologies.

Style Points: Augmented Reality and the Tailored Learning Experience

In case you missed the memo, the next wave of the Digital Revolution – in the form of immersive computing – is rapidly approaching the shores of higher ed, and with it, one of the greatest opportunities to transform learning in a generation.

Surfing along the crest of this radical wave of new technologies is augmented reality (AR). Sometimes referred to as “blended reality,” it allows users to experience the real world, printed text, or even a classroom lesson with an overlay of additional 3D data content, amplifying access to instant information and bringing it to life; in turn, bringing thrilling new opportunities for experiential education.

Perhaps more importantly, AR has the potential to democratize learning and tailor visual or data displays to fit a wide range of individual cognitive strengths. Augmented-reality apps and wearables enable access to rich, immersive educational experiences, and have the potential to differentiate instruction by catering to the specific learning needs and styles of an increasingly diverse student population. Because, let’s face it – many educators on the ground have already realized that a one-size-fits-all approach to curricular material does not always lead to strong learning outcomes.

Learning in Style

A better understanding of what differentiated learning means, in and of itself, may be helpful for developing lesson plans and instructional materials that meet the needs of individual students. Delving into the concept of “learning styles,” for one, can drive home the point that different students perceive and respond differently to information within their learning environment and, therefore, have varying preferences and needs in terms of how they’re taught. (However, I should note that research on learning styles continues to evolve, so there is no definitive consensus on how to address this increasingly relevant issue in education as of this writing.)

To illustrate how AR can provide various entry points to learning, let’s discuss a few examples of learning preferences that researchers have identified, along with potential AR experiences that could speak to those learning styles.

Visual Learners

Many students learn best when they’re able to access visual rather than verbal information. Whereas classroom materials that integrate visuals might include presentation slides, textbooks, handouts and the like, AR takes visuals to the next level. Augmented Chemistry, a tangible user interface (TUI), is an example of the visual affordances of AR. Using the TUI, chemistry students can pick up virtual atoms, position them to compose molecules, and rotate the 3D molecule to view it from all angles. Compare this learning experience to the use of traditional textbooks consisting of 2D images that can’t be manipulated – the latter now seems pretty, well, flat in comparison, no?

Kinesthetic Learners

Kinesthetic learners respond well to physically engaging exercises, which place-based or location-based AR can offer in spades. Global positioning systems (GPS) within place- or location-based AR systems give users access to relevant information as they arrive at a location, requiring them to physically move within an environment to complete tasks. AR provides kinesthetic learning opportunities, too, by allowing users to use bodily motions to manipulate virtual objects.

Social, Field-Dependent, and Application-Directed Learners

Researchers have also identified a learning-styles dimension that emphasizes the social aspect of learning. To wit, some learners desire interaction with others as a means of co-constructing knowledge. In addition to a preference for interacting with others, field-dependent learners rely on an external frame of reference (which may be provided by other learners); and then there are application-directed learners, who mainly prefer concrete applications of subject matter. Through leveraging connected learning and providing a virtual platform for social activity, AR has the potential to meet the needs of such learners.   

For example, in Environmental Detectives – an augmented-reality simulation game – users role-play environmental scientists. Players move about in a real space while being provided with location-specific information. They interview non-players to gather info, and they’re able to beam data to one another. Such a game incorporates social aspects of learning while also accommodating users who learn by interacting with an external frame of reference, as well as those learners who benefit from concretely applying their knowledge in a scenario.

Wave of the Future

With so many possibilities and applications, AR could truly be a game-changer in education. It allows for dynamic instruction that can’t be accomplished through traditional classroom experiences (without, of course, replacing the classroom altogether). Think of it as a powerful supplemental learning tool with the awesome ability to reach every style of student.

So join the Learning Lab team as we continue this journey and further explore the exciting realm of unprecedented opportunities AR presents us with here in higher ed. Together, we’ll face this new wave of immersive technology with open arms, encouraging educators to push the boundaries of teaching and, ultimately, the very boundaries of learning itself.

This blog post, written by Learning Lab Project Delivery Manager Lan Ngo, is the first in a series of posts that will explore AR technology and its applications in education. If you would like to add to this conversation, please leave a comment!

SIMPL: One Data Model to Rule Them All

Code starts at the model-level. So before we wrote one line of SIMPL (the Learning Lab’s new simulation framework), we needed to figure out what, exactly, our data model would look like. Considering the ambitious goal of the project — a simulation framework that could support all of our current games as well as games yet unknown — we had to be very careful to create one that would be flexible enough to adjust to our growing needs, but not so complex as to make development overly challenging. Luckily, we have decades’ worth of simulation development expertise on our team, and were able to draw from that wellspring of knowledge when we worked on SIMPL’s foundational data model.

A data model, I should say, is basically the definition of how data is stored in the system, and how the pieces of data relate to one another. When we began the process of creating SIMPL, we needed to define the logical pieces that create a simulation, and build relationships among those pieces that, well, made sense.  

Speaking the Same Language

Our first challenge was agreeing upon a nomenclature for the pieces that comprise a simulation in general. This may seem like a fairly trivial process; after all, everyone pretty much knows what we mean when we say a “game run,” a “decision,” or a “scenario.” However, when it came to developing a data model, these terms meant different things to different people — especially when we tried to communicate our requirements to the outside vendor working with us on the platform. With that in mind, we ended up creating a glossary of terms, defined right in the context of the simulation platform. This glossary helped us bridge the gap between our team and the vendor, allowing us to talk about terms in ways we all agreed upon and understood.

Start with What You Know

Once we agreed on the definitions of the various parts that make up a sim, we began to map out what our data model would look like. To assist us in this process, we leaned on our collective years of simulation experience here in the Learning Lab — namely, the games we’ve already supported and developed. Then came the whiteboarding (sooo much whiteboarding), wherein we drew relationships between objects and assessed whether the connections we were making made sense.

The results of one of our whiteboarding sessions.

We then broke down existing games and made sure the new data model would be able to accommodate the unique implementation of each of those sims. This served as a valuable “smoke test” for us — i.e., a way to ensure we were on the right track. To that end, we picked games with diverse implementations in order to be 100-percent certain the model we were creating was flexible enough to meet our needs.

The current SIMPL data model.

Where to Go from Here?

After a long period of iteration, we finally settled on a data model that made sense to both us and our vendor. We made further changes along the way as development progressed, but the main structure we came up with remained the same from whiteboard etchings to the implementation of our first sim. Going forward, of course, every new simulation we develop will be an opportunity to test the limits of this model, which we can improve or simplify where and when the need arises.

Moreover, the lessons we learned building our data model for SIMPL could be applied to any data-driven application. In that regard, here are the main things we came away with:

  • Take time to think deeply about your data model, and do so in collaboration with project managers and developers who will ultimately be responsible for the application. The decisions you make here will dramatically impact the future of your application. It’s easy to make changes when you’re working on a whiteboard; it’s a lot harder to do so once you’ve written applications dependent on the model.
  • Don’t assume everyone knows what you mean when describing the model. And, perhaps equally important, empower your team members to speak up when something does not make sense. Data models can be complex animals, and the more everyone understands, the better end result you will have.
  • Test your assumptions. Before a single line of code is written, walk through hypothetical applications with your data model. Can you get the data you need in a sensible way? Do the relationships you’ve built reflect the logic required within the application? The more tests you run, the more confident you can be that your model is solid.
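To make that walkthrough concrete, here’s a minimal sketch of the kind of exercise we mean. The object names (Run, World, Scenario, Decision) echo the vocabulary used in this post, but this is an illustrative mock-up, not SIMPL’s actual schema; in practice these would be Django models with database-backed relationships.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified objects for walking through a data model on paper.
# These names are illustrative only -- not SIMPL's real classes or fields.

@dataclass
class Decision:
    name: str
    value: float

@dataclass
class Scenario:
    name: str
    decisions: list = field(default_factory=list)  # decisions made in this scenario

@dataclass
class World:
    name: str
    scenarios: list = field(default_factory=list)  # e.g., one scenario per round

@dataclass
class Run:
    game: str
    worlds: list = field(default_factory=list)  # each run hosts several worlds

# Walk through a hypothetical application of the model:
run = Run(game="opeq")
world = World(name="World 1")
run.worlds.append(world)
scenario = Scenario(name="Round 1")
world.scenarios.append(scenario)
scenario.decisions.append(Decision(name="production", value=25.0))

# "Can you get the data you need?" -- e.g., every decision made in a run:
all_decisions = [d for w in run.worlds for s in w.scenarios for d in s.decisions]
```

If a query like `all_decisions` feels contorted at this stage, that’s a sign a relationship is modeled in the wrong place, and it’s far cheaper to discover that on a whiteboard than in production code.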

In the wise words of George Harrison, “If you don’t know where you are going, any road will take you there.”

That same logic can be applied to creating a data model that hits all the right notes. Given that this critical construct would be the cornerstone of our new simulation framework, if we hadn’t spent the time to exhaustively map out all of our needs (as well as how SIMPL would meet them), then there was a good chance we would have lost direction and the whole project could have veered off course. So take it from us —  while it can be tempting to take shortcuts when embarking on a project of this scale, carefully inching your way through a proper planning phase goes a long way toward ensuring that you’re ultimately able to reach your destination and meet your end-goals.

SIMPL: Wharton Launches Its Own Simulation Framework

Imagine a world where simulations are simply … SIMPL!

Simulations are expensive to create and require highly specialized expertise – and if you create something that doesn’t deliver against the intended learning objectives, the process of making tweaks and updates can be more complicated than it needs to be. Other persistent challenges keep us managers of simulation teams up at night, such as retaining technical talent: if you’re authoring simulations on a commercial platform, your team needs to learn a large amount of specialized know-how, and those skills often don’t translate to other careers in technology. Authoring platforms present other problems as well – including a lack of integration with our learning management system (LMS), the best source for user management, and no single sign-on (SSO) authentication integration.

When we hit the mark, simulations are an incredibly powerful and effective form of educational technology that can far outperform traditional lectures and cases.

For example, students who completed our Looking Glass for Entrepreneurship simulation performed one standard deviation better on the final exam than students who didn’t go through this experience. And we have lots and lots of examples just like this one!

With a burning desire to overcome the challenges we face in the simulation space, in 2016 the Learning Lab authored our own simulation framework, called SIMPL, built on the open-source Python/Django stack. We’re incredibly excited about this new direction for the team, and are already seeing a myriad of returns on our efforts.

At Wharton we have completed our first multi-player simulation on SIMPL – Rules of Engagement, a marketing strategy simulation. Intermap – a mind-mapping tool used in idea generation – utilizes certain aspects of SIMPL, namely the LTI integration libraries for authentication, and is published within Canvas as a module (user management? What user management!). There are a number of other simulation projects in the pipeline for the coming year, and all will be written on SIMPL.

Possibly the most exciting part about controlling our own destinies is that in mid-2017 we will release SIMPL to the world, free of charge, under an open-source license. Our goal is to develop a rich community of practitioners and other experts around this framework, because we believe a rising tide lifts all boats. If you’re interested in getting a sneak peek at SIMPL, here are the docs. In the coming weeks, there will be a variety of blog posts from other SIMPL authors about more specific areas of the SIMPL framework. And if you’re interested in being included in the beta, please email learninglab@wharton.upenn.edu.


SIMPL Architecture



Here’s the team behind SIMPL, left to right – Donna St. Louis, Flavio Curella (Revsys), Joseph Lee, Jane Eisenstein, Sarah Toms.

Not pictured: Frank Wiles and Jeff Triplett, Revsys




Monopoly’s Anti-Capitalist, Socialist Roots as a Teaching Game at Wharton

Monopoly board game

“A virtue of gaming that is sometimes overlooked by those seeking grander goals is its unparalleled advantages in training and educational programs. A game can easily be made fascinating enough to put over the dullest facts. To sit down and play through a game is to be convinced as by no argument, however persuasively presented.”

— A.M. Mood, RAND Corporation (1954)

Look no further than the Learning Lab for proof that games play an increasingly valuable role in the classroom and beyond, having long been recognized as a uniquely effective means of experiential education. But while, today, we harness technology and data to craft immersive, competitive simulation platforms, sometimes all you need to teach complex concepts is a board, some moveable pieces, and a pedagogical goal.

Take chess, for instance, which has been used for centuries to impart lessons of military strategy – its rules and competitive purpose create the conditions for tactical thinking and planning needed to checkmate one’s opponent.

Then there’s Monopoly, wherein the primary objective is to bankrupt everyone else through clever investment strategies. Hard to square that with lofty, Ivy League business objectives, right? Yet, what is arguably the world’s best-selling board-based simulation of capitalism (and frequent ruiner of family game night) was once used as a teaching aid in Wharton economics classes. But before you cynically smirk at the very idea, there’s something you should know about the game’s hidden history: A century ago, Monopoly was not a platform to illustrate the merits of a laissez-faire system; rather, it was a way to demonstrate an alternative to the corporate rent-seeking that drives inequality.

Read more Monopoly’s Anti-Capitalist, Socialist Roots as a Teaching Game at Wharton

Recipe for Quick Coding: How to Cook Up a Good Glossary, Fast

Two weeks before the Learning Lab’s new Customer Centricity simulation was set to go live for the first time in a Wharton MBA class, I was asked to add a CRM glossary to it – one that could grow as more data reports became available to a player throughout the course of the game.

Suffice it to say this was quite a task given the timeframe. Nevertheless, I approached the challenge with an open mind and a lot of quick thinking. Viewing it as a somewhat exploratory endeavor, I managed to meet the deadline and our sim made its scheduled debut with a fresh-baked working glossary. Now, having devised an efficient process for whipping one up on the fly, I’d like to share with you my recipe:
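Before the full recipe, here’s the gist of the “grows with the game” requirement in code form. This is a minimal sketch of one possible approach, not the Learning Lab’s actual implementation; the CRM terms, definitions, and report names below are hypothetical placeholders.

```python
# Hypothetical glossary keyed by term, where each entry records which data
# report unlocks it. Terms and report names are illustrative placeholders.
GLOSSARY = {
    "CLV": ("Customer Lifetime Value: projected revenue from a customer.", "report_1"),
    "Churn": ("The rate at which customers stop doing business with a firm.", "report_2"),
    "Cohort": ("A group of customers acquired in the same period.", "report_2"),
}

def visible_terms(unlocked_reports):
    """Return only the glossary entries whose unlocking report is available."""
    return {
        term: definition
        for term, (definition, report) in GLOSSARY.items()
        if report in unlocked_reports
    }

# Early in the game, only the first report is available...
early = visible_terms({"report_1"})
# ...and unlocking later reports grows the glossary automatically.
later = visible_terms({"report_1", "report_2"})
```

Keying each entry to the report that unlocks it means the glossary needs no separate bookkeeping as the game progresses: the set of visible terms is always derived from the player’s current state.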

Read more Recipe for Quick Coding: How to Cook Up a Good Glossary, Fast

Fighting the Good-Code Fight (Or, ‘We Need To Talk About Tech Debt’)


“What we do in life echoes in eternity…” 

I’ll hazard a guess that “Gen. Maximus Decimus Meridius” (hint: the gladiator in Gladiator) was not thinking about the importance of code quality and documentation when he addressed the above wisdom to his cavalry on the battlefield.

But really, what IT organization wouldn’t benefit from a fictional Roman general showing up before the start of a new project to gravely remind everyone about lasting consequences? After all, the decisions you make in code design today will affect your organization for months – or years – into the future.

Read more Fighting the Good-Code Fight (Or, ‘We Need To Talk About Tech Debt’)