SIMPL: One Data Model to Rule Them All

Code starts at the model level. So before we wrote one line of SIMPL (the Learning Lab’s new simulation framework), we needed to figure out what, exactly, our data model would look like. Considering the ambitious goal of the project — a simulation framework that could support all of our current games as well as games yet unknown — we had to be very careful to create one that would be flexible enough to adjust to our growing needs, but not so complex as to make development overly challenging. Luckily, we have decades’ worth of simulation development expertise on our team, and were able to draw from that wellspring of knowledge when we worked on SIMPL’s foundational data model.

A data model, I should say, is basically the definition of how data is stored in a system, and how the pieces of data relate to one another. When we began the process of creating SIMPL, we needed to define the logical pieces that make up a simulation, and build relationships among those pieces that, well, make sense.
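To make this concrete, here is a minimal, illustrative sketch (in Django-style Python, since SIMPL is built on Python/Django) of how a few simulation pieces and their relationships might be expressed. These are not the actual SIMPL models; the names and fields are simplified stand-ins for the sort of pieces discussed below.

```python
# Illustrative only -- NOT the actual SIMPL data model, just a sketch of how
# "pieces of a simulation" and the relationships among them can be expressed.
from django.db import models


class Game(models.Model):
    """A simulation, e.g. a marketing strategy game."""
    name = models.CharField(max_length=100)


class Run(models.Model):
    """One playing of a Game by a particular group of users."""
    game = models.ForeignKey(Game, related_name='runs', on_delete=models.CASCADE)
    active = models.BooleanField(default=True)


class Scenario(models.Model):
    """The state a player or team works through within a Run."""
    run = models.ForeignKey(Run, related_name='scenarios', on_delete=models.CASCADE)


class Decision(models.Model):
    """A choice submitted by a player within a Scenario."""
    scenario = models.ForeignKey(Scenario, related_name='decisions', on_delete=models.CASCADE)
    name = models.CharField(max_length=100)
    data = models.JSONField(default=dict)
```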

Speaking the Same Language

Our first challenge was agreeing upon a nomenclature for the pieces that comprise a simulation in general. This may seem like a fairly trivial process; after all, everyone pretty much knows what we mean when we say a “game run,” a “decision,” or a “scenario.” However, when it came to developing a data model, these terms meant different things to different people — especially when we tried to communicate our requirements to the outside vendor working with us on the platform. With that in mind, we ended up creating a glossary of terms, defined right in the context of the simulation platform. This glossary helped us bridge the gap between our team and the vendor, allowing us to talk about terms in ways we all agreed upon and understood.

Start with What You Know

Once we agreed on the definitions of the various parts that make up a sim, we began to map out what our data model would look like. To assist us in this process, we leaned on our collective years of simulation experience here in the Learning Lab — namely, the games we’ve already supported and developed. Then came the whiteboarding (sooo much whiteboarding), wherein we drew relationships between objects and assessed whether the connections we were making made sense.

The results of one of our whiteboarding sessions.

We then broke down existing games and made sure the new data model would be able to accommodate the unique implementation of each of those sims. This served as a valuable “smoke test” for us — i.e., a way to ensure we were on the right track. To that end, we picked games with diverse implementations in order to be 100 percent certain the model we were creating was flexible enough to meet our needs.

The current SIMPL data model.

Where to Go from Here?

After a long period of iteration, we finally settled on a data model that made sense to both us and our vendor. We made further changes along the way as development progressed, but the main structure we came up with remained the same from whiteboard etchings to the implementation of our first sim. Going forward, of course, every new simulation we develop will be an opportunity to test the limits of this model, which we can improve or simplify where and when the need arises.

Moreover, the lessons we learned building our data model for SIMPL could be applied to any data-driven application. In that regard, here are the main things we came away with:

  • Take time to think deeply about your data model, and do so in collaboration with project managers and developers who will ultimately be responsible for the application. The decisions you make here will dramatically impact the future of your application. It’s easy to make changes when you’re working on a whiteboard; it’s a lot harder to do so once you’ve written applications dependent on the model.
  • Don’t assume everyone knows what you mean when describing the model. And, perhaps equally important, empower your team members to speak up when something does not make sense. Data models can be complex animals, and the more everyone understands, the better your end result will be.
  • Test your assumptions. Before a single line of code is written, walk through hypothetical applications with your data model (see the sketch below for a simple example). Can you get the data you need in a sensible way? Do the relationships you’ve built reflect the logic the application requires? The more tests you run, the more confident you can be that your model is solid.
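As one example of that kind of walk-through, here is a quick check against the illustrative models sketched earlier: can the application ask a natural question (“which decisions have been submitted in this run?”) without contorting the schema? The helper below is hypothetical, but it is the sort of query you want to be trivially expressible.

```python
# A hypothetical walk-through against the illustrative models sketched earlier:
# can we answer "which decisions have been submitted in this run?" directly?
def decisions_for_run(run):
    """Return every Decision submitted in any Scenario of the given Run."""
    return Decision.objects.filter(scenario__run=run)

# If questions like this force convoluted queries or denormalized shortcuts,
# that is a sign the relationships need rethinking before code depends on them.
```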

In the wise words of George Harrison, “If you don’t know where you are going, any road will take you there.”

That same logic can be applied to creating a data model that hits all the right notes. Given that this critical construct would be the cornerstone of our new simulation framework, if we hadn’t taken the time to exhaustively map out all of our needs (as well as how SIMPL would meet them), there was a good chance we would have lost direction and the whole project could have veered off course. So take it from us — while it can be tempting to take shortcuts when embarking on a project of this scale, carefully inching your way through a proper planning phase goes a long way toward ensuring that you’re ultimately able to reach your destination and meet your end goals.

SIMPL Magic: Automatic Browser Page Updates

In multiplayer web-based games, all users should be able to see up-to-date game data without having to manually refresh their browsers. For example, players need to be notified when the game has moved from a phase in which they can submit decisions to one in which they cannot. Monitoring the game state for such changes is often handled by the simulation’s front-end code.

One of the real pleasures of developing simulations using the Learning Lab-authored SIMPL framework is never needing to request fresh data in front-end code. That’s because SIMPL’s architecture ensures a game user’s browser page is always up to date. Curious how we managed to pull that off? Then keep reading!

Architecture of a SIMPL game


First, it’s important to understand that each SIMPL game comprises three components:

  • SIMPL-Games-API – a service, shared with other games, that maintains the SIMPL database
  • Model Service – defines and runs the game’s simulation model
  • Front-end Server – provides the game’s user interface assets to the browser


Our SIMPL-Games-API service manages the SIMPL database. It provides a REST API used by the game’s model service.

The game’s model service defines the game’s simulation model and handles running the simulation and updating the database. It is implemented in Python, using classes provided by our SIMPL-Modelservice package.
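As a rough sketch of that division of labor, the snippet below shows a model service step that pulls decisions from the SIMPL-Games-API REST API, runs them through a trivial stand-in simulation model, and writes a result back. The base URL, endpoint paths, and field names are hypothetical, and the real SIMPL-Modelservice classes wrap this work for you; treat it as an illustration of the flow rather than of the actual API.

```python
# Hypothetical sketch of the model service's role: pull decisions over the
# SIMPL-Games-API REST API, run the simulation model, and write a result back.
# The base URL, endpoint paths, and field names are made up for illustration.
import requests

GAMES_API = 'http://localhost:8100/apis'  # hypothetical base URL


def run_period(period_id):
    # 1. Fetch the decisions players submitted for this period.
    resp = requests.get(f'{GAMES_API}/periods/{period_id}/decisions/')
    resp.raise_for_status()
    decisions = resp.json()

    # 2. Run the simulation model (a trivial stand-in calculation here).
    total_spend = sum(d['data'].get('spend', 0) for d in decisions)
    result = {'market_share': min(1.0, total_spend / 1000.0)}

    # 3. Save the result; this database update is what triggers the webhook
    #    described below.
    resp = requests.post(
        f'{GAMES_API}/periods/{period_id}/results/',
        json={'name': 'period_result', 'data': result},
    )
    resp.raise_for_status()
    return result
```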

The game’s front-end user interface code is implemented in JavaScript, using the game front-end functions provided by our SIMPL-React library (built using React and Redux).

These SIMPL game components work in concert to ensure that game users always see the current game-state data stored in the SIMPL database.

Here’s how it works: Each time the model service updates the database using the SIMPL-Games-API’s REST API, a webhook is triggered that notifies SIMPL-Modelservice functions of the update. SIMPL-Modelservice code then pushes an update notification to each game user’s browser via WAMP (the Web Application Messaging Protocol). There, SIMPL-React code updates the browser’s Redux store, which in turn automatically re-renders the affected React components.
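To give a feel for the push half of that chain, here is a stripped-down, hypothetical sketch of publishing an update notification over WAMP from Python using the autobahn library. The topic name and payload are made up, and in a real SIMPL game the SIMPL-Modelservice package handles this wiring for you.

```python
# Hypothetical sketch of the "push" step: after a database update, publish a
# notification over WAMP so subscribed browsers can refresh their Redux stores.
# Topic name and payload shape are invented; SIMPL-Modelservice does this for you.
from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner


class UpdateNotifier(ApplicationSession):
    async def onJoin(self, details):
        # In SIMPL, the webhook from SIMPL-Games-API would drive this publish.
        # Here we simply announce that a (hypothetical) result was updated.
        self.publish('world.simpl.sims.demo.model.result.updated',
                     {'period': 3, 'name': 'period_result'})


if __name__ == '__main__':
    # Connect to a local WAMP router (e.g. Crossbar) and run the session.
    runner = ApplicationRunner('ws://localhost:8080/ws', realm='realm1')
    runner.run(UpdateNotifier)
```

On the browser side, SIMPL-React subscribes to these notifications and folds them into the Redux store, which is why game authors never have to request fresh data themselves.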

And there you have it — the user is automagically guaranteed to see fresh data, without game authors having to write a line of data-refresh code! It works like magic, but it’s actually quite SIMPL.


For more details, please see our SIMPL Framework docs.

SIMPL: Wharton Launches Its Own Simulation Framework

Imagine a world where simulations are simply … SIMPL!

Simulations are expensive to create and require highly specialized expertise, and if you create something that doesn’t deliver against the intended learning objectives, the process of making tweaks and updates can be more complicated than it should be. Other persistent challenges keep those of us who manage simulation teams up at night. One is retaining technical talent: if you’re authoring simulations on a commercial platform, your team needs to learn a large amount of specialized know-how, and those skills often don’t translate to other careers in technology. Authoring platforms also present other problems, including a lack of integration with our learning management system (LMS), which is the best source for user management, and no single sign-on authentication integration.

When we hit the mark, simulations are an incredibly powerful and effective form of educational technology that can far outperform traditional lectures and cases.

For example, students who completed our Looking Glass for Entrepreneurship simulation performed one standard deviation better on the final exam than students who didn’t go through this experience. And we have lots and lots of examples just like this one!

With a burning desire to overcome the challenges we face in the simulation space, in 2016 the Learning Lab authored our own simulation framework called SIMPL, built on the open source Python/Django stack. We’re incredibly excited about this new direction for the team, and are already seeing a myriad of returns on our efforts.

At Wharton, we have completed our first multiplayer simulation on SIMPL: Rules of Engagement, a marketing strategy simulation. Intermap, a mind-mapping tool used in idea generation, also utilizes aspects of SIMPL, namely the LTI integration libraries for authentication, and is published within Canvas as a module (user management? What user management!). There are a number of other simulation projects in the pipeline for the coming year, and all will be written on SIMPL.

Possibly the most exciting part about controlling our own destinies is that in mid-2017 we will release SIMPL to the world, free of charge, and under an open source license. Our goal is to develop a rich community of practitioners and other experts around this framework, because we believe a rising tide lifts all boats. If you’re interested in getting a sneak peek at SIMPL, here are the docs. In the coming weeks, there will be a variety of blog posts from other SIMPL authors about more specific areas of the SIMPL framework. And if you’re interested in being included in the beta, please email learninglab@wharton.upenn.edu.


SIMPL Architecture



Here’s the team behind SIMPL, left to right – Donna St. Louis, Flavio Curella (Revsys), Joseph Lee, Jane Eisenstein, Sarah Toms.

Not pictured: Frank Wiles and Jeff Triplett, Revsys