uSciences e-Learning 3.0 Conference: “This is Not (Just!) a Simulation”

Over the past few decades, online learning has evolved from the so-called 1.0 phase (in-person classes augmented by static web pages and PDFs), through the more revolutionary 2.0 stage (the dawn of online courses, blended-classroom “talking head” videos, and rudimentary analytics), to today, as the 3.0 era of high-tech, customized learning experiences begins to take shape.

This year, the latest evolutionary advancements were once again spotlighted, explored, and celebrated at the 17th annual e-Learning 3.0 Conference – and the Learning Lab’s very own IT Director, Joe Lee, was on hand as one of 30 chosen speakers at the May 14 event, which also featured luminaries from other regional colleges and panel discussions with LMS vendors, researchers, and stakeholders.

Hosted by the University of the Sciences and kicked off by a keynote address from renowned edtech consultant Phil Hill, the conference showcased the use of technology to enhance teaching and learning in higher ed, allowing participants to share best practices and creative approaches for learning enrichment.

Joe’s presentation, “This is Not a Simulation: Supporting Games/Sims in the Classroom Setting,” pulled back the curtain on what goes into making effective, engaging e-learning tools in the 3.0 era. Using case studies from the Lab’s own experiences delivering and supporting some of its most popular customized learning experiences, he posed key questions that are critical to the success, or failure, of a sim or teaching game:

How do faculty get comfortable enough to take the leap into the technological unknown? What problem are you trying to solve? How do students get help? Does this game/simulation achieve the professor’s goals? Can this be supported at scale?

Given that the Lab annually supports more than 10,000 student plays of over 33 different games for Wharton faculty in almost every discipline (and does so with a small team of fewer than five people), Joe offered a unique insider perspective on what it takes to ensure that each run of a sim or classroom game goes as smoothly as possible. From preparation, evaluation, and testing to technical issues, setup, and in-class support, he shared the lessons we’ve learned and the best practices we follow (well honed through years of trial and error).

In case you missed it, these were Joe’s key takeaways:

  • Never lose sight of the learning objectives.
  • There must be painful dedication to testing and retesting (and re-retesting!) of a sim or new teaching tool prior to classroom delivery – aka, the “trust but verify” approach.
  • Close. The. Loop.
  • Keep in mind that your e-Learning technology is but one piece of the class – so never lose sight of the big picture!
  • Lastly: There is always an area where you can do better!

The Lab was proud to be part of this exciting day of collective edtech wisdom – and, together with the dozens of other presenters at the forefront of the 3.0 era, is happy to be part of an ever-growing community engaged in improving teaching and learning by inventing and deploying new pedagogies and technologies.

Style Points: Augmented Reality and the Tailored Learning Experience

In case you missed the memo, the next wave of the Digital Revolution – in the form of immersive computing – is rapidly approaching the shores of higher ed, and with it, one of the greatest opportunities to transform learning in a generation.

Surfing along the crest of this radical wave of new technologies is augmented reality (AR). Sometimes referred to as “blended reality,” AR allows users to experience the real world, printed text, or even a classroom lesson with an overlay of additional 3D data content, amplifying access to instant information and bringing it to life – and, in turn, opening thrilling new opportunities for experiential education.

Perhaps more importantly, AR has the potential to democratize learning and tailor visual or data displays to fit a wide range of individual cognitive strengths. Augmented-reality apps and wearables enable access to rich, immersive educational experiences, and have the potential to differentiate instruction by catering to the specific learning needs and styles of an increasingly diverse student population. Because, let’s face it – many educators on the ground have already realized that a one-size-fits-all approach to curricular material does not always lead to strong learning outcomes.

Learning in Style

A better understanding of what differentiated learning means, in and of itself, may be helpful for developing lesson plans and instructional materials that meet the needs of individual students. Delving into the concept of “learning styles,” for one, can drive home the point that different students perceive and interact with information in their learning environment differently and, therefore, have varying preferences and needs in terms of how they’re taught. (However, I should note that research on learning styles is an area of study that continues to evolve, so there is no definitive consensus on how to address this increasingly relevant issue in education as of this writing.)

To illustrate how AR can provide various entry points to learning, let’s discuss a few examples of learning preferences that researchers have identified, along with potential AR experiences that could speak to those learning styles.

Visual Learners

Many students learn best when they’re able to access visual rather than verbal information. Whereas classroom materials that integrate visuals might include presentation slides, textbooks, handouts and the like, AR takes visuals to the next level. Augmented Chemistry, a tangible user interface (TUI), is an example of the visual affordances of AR. Using the TUI, chemistry students can pick up virtual atoms, position them to compose molecules, and rotate the 3D molecule to view it from all angles. Compare this learning experience to the use of traditional textbooks consisting of 2D images that can’t be manipulated – the latter now seems pretty, well, flat in comparison, no?

Kinesthetic Learners

Kinesthetic learners respond well to physically engaging exercises, which place-based or location-based AR can offer in spades. Global Positioning System (GPS) features within place- or location-based AR systems give users access to relevant information as they arrive at a location, requiring them to physically move within an environment to complete tasks. AR provides kinesthetic learning opportunities, too, by allowing users to use bodily motions to manipulate virtual objects.

Social, Field-Dependent, and Application-Directed Learners

Researchers have also identified a learning-styles dimension that emphasizes the social aspect of learning. To wit, some learners desire interaction with others as a means of co-constructing knowledge. In addition to a preference for interacting with others, field-dependent learners rely on an external frame of reference (which may be provided by other learners); and then there are application-directed learners, who mainly prefer concrete applications of subject matter. Through leveraging connected learning and providing a virtual platform for social activity, AR has the potential to meet the needs of such learners.   

For example, in Environmental Detectives – an augmented-reality simulation game – users role-play environmental scientists. Players move about in a real space while being provided with location-specific information. They interview non-players to gather info, and they’re able to beam data to one another. Such a game incorporates social aspects of learning while also accommodating users who learn by interacting with an external frame of reference, as well as those learners who benefit from concretely applying their knowledge in a scenario.

Wave of the Future

With so many possibilities and applications, AR could truly be a game-changer in education. It allows for dynamic instruction that can’t be accomplished through traditional classroom experiences (without, of course, replacing the classroom altogether). Think of it as a powerful supplemental learning tool with the awesome ability to reach every style of student.

So join the Learning Lab team as we continue this journey and further explore the exciting realm of unprecedented opportunities AR presents us with here in higher ed. Together, we’ll face this new wave of immersive technology with open arms, encouraging educators to push the boundaries of teaching and, ultimately, the very boundaries of learning itself.

This blog post, written by Learning Lab Project Delivery Manager Lan Ngo, is the first in a series of posts that will explore AR technology and its applications in education. If you would like to add to this conversation, please leave a comment!

SIMPL: One Data Model to Rule Them All

Code starts at the model level. So before we wrote one line of SIMPL (the Learning Lab’s new simulation framework), we needed to figure out what, exactly, our data model would look like. Considering the ambitious goal of the project — a simulation framework that could support all of our current games as well as games yet unknown — we had to be very careful to create one that would be flexible enough to adjust to our growing needs, but not so complex as to make development overly challenging. Luckily, we have decades’ worth of simulation development expertise on our team, and were able to draw from that wellspring of knowledge when we worked on SIMPL’s foundational data model.

A data model, I should say, is basically the definition of how data is stored in the system, and how the pieces of data relate to one another. When we began the process of creating SIMPL, we needed to define the logical pieces that create a simulation, and build relationships among those pieces that, well, made sense.  
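To make the idea concrete, here is a minimal sketch of the kind of nested relationships a simulation data model defines. The entity names below (Game, Run, Scenario, Decision) are illustrative guesses at the sort of pieces a sim framework might model, not the actual SIMPL schema:

```python
from dataclasses import dataclass, field

# Hypothetical entities, for illustration only -- not SIMPL's real model.
@dataclass
class Decision:
    name: str
    value: object = None

@dataclass
class Scenario:
    name: str
    decisions: list = field(default_factory=list)

@dataclass
class Run:
    name: str
    scenarios: list = field(default_factory=list)

@dataclass
class Game:
    name: str
    runs: list = field(default_factory=list)

# One "run" of a game holds the scenarios players work through, and each
# scenario records the decisions made within it.
game = Game("marketing-sim")
run = Run("spring-2017")
game.runs.append(run)
scenario = Scenario("quarter-1")
run.scenarios.append(scenario)
scenario.decisions.append(Decision("price", 9.99))
```

The real work, as described below, was deciding exactly which entities exist and how they relate.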

Speaking the Same Language

Our first challenge was agreeing upon a nomenclature for the pieces that comprise a simulation in general. This may seem like a fairly trivial process; after all, everyone pretty much knows what we mean when we say a “game run,” a “decision,” or a “scenario.” However, when it came to developing a data model, these terms meant different things to different people — especially when we tried to communicate our requirements to the outside vendor working with us on the platform. With that in mind, we ended up creating a glossary of terms, defined right in the context of the simulation platform. This glossary helped us bridge the gap between our team and the vendor, allowing us to talk about terms in ways we all agreed upon and understood.

Start with What You Know

Once we agreed on the definitions of the various parts that make up a sim, we began to map out what our data model would look like. To assist us in this process, we leaned on our collective years of simulation experience here in the Learning Lab — namely, the games we’ve already supported and developed. Then came the whiteboarding (sooo much whiteboarding), wherein we drew relationships between objects and assessed whether the connections we were making made sense.

The results of one of our white-boarding sessions. 

We then broke down existing games and made sure the new data model would be able to accommodate the unique implementation of each of those sims. This served as a valuable “smoke test” for us — i.e., a way to ensure we were on the right track. To that end, we picked games with diverse implementations in order to be 100-percent certain the model we were creating was flexible enough to meet our needs.

 The current SIMPL data model.

Where to Go from Here?

After a long period of iteration, we finally settled on a data model that made sense to both us and our vendor. We made further changes along the way as development progressed, but the main structure we came up with remained the same from whiteboard etchings to the implementation of our first sim. Going forward, of course, every new simulation we develop will be an opportunity to test the limits of this model, which we can improve or simplify where and when the need arises.

Moreover, the lessons we learned building our data model for SIMPL could be applied to any data-driven application. In that regard, here are the main things we came away with:

  • Take time to think deeply about your data model, and do so in collaboration with project managers and developers who will ultimately be responsible for the application. The decisions you make here will dramatically impact the future of your application. It’s easy to make changes when you’re working on a whiteboard; it’s a lot harder to do so once you’ve written applications dependent on the model.
  • Don’t assume everyone knows what you mean when describing the model. And, perhaps equally important, empower your team members to speak up when something does not make sense. Data models can be complex animals, and the more everyone understands, the better end result you will have.
  • Test your assumptions. Before a single line of code is written, walk through hypothetical applications with your data model. Can you get the data you need in a sensible way? Do the relationships you’ve built reflect the logic required within the application? The more tests you run, the more confident you can be that your model is solid.
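One way to run such a walk-through is to mock the model as plain data and check that the queries your application will need are actually answerable. This is a sketch under assumed entity names (runs, worlds, players, decisions), not SIMPL’s real structure:

```python
# Mock the proposed model as plain nested data before committing to it.
# All entity and field names here are invented for illustration.
model = {
    "runs": [
        {
            "name": "spring-run",
            "worlds": [
                {
                    "name": "world-1",
                    "players": ["alice", "bob"],
                    "decisions": [
                        {"player": "alice", "period": 1, "value": 40},
                        {"player": "bob", "period": 1, "value": 55},
                    ],
                },
            ],
        },
    ],
}

def decisions_for_player(model, player):
    """Can the app retrieve every decision a given player has made?"""
    return [
        d
        for run in model["runs"]
        for world in run["worlds"]
        for d in world["decisions"]
        if d["player"] == player
    ]

alice_decisions = decisions_for_player(model, "alice")
```

If a query like this requires awkward traversal or data that simply isn’t reachable, that’s a sign the relationships need rework while they’re still on the whiteboard.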

In the wise words of George Harrison, “If you don’t know where you are going, any road will take you there.”

That same logic can be applied to creating a data model that hits all the right notes. Given that this critical construct would be the cornerstone of our new simulation framework, if we hadn’t spent the time to exhaustively map out all of our needs (as well as how SIMPL would meet them), then there was a good chance we would have lost direction and the whole project could have veered off course. So take it from us —  while it can be tempting to take shortcuts when embarking on a project of this scale, carefully inching your way through a proper planning phase goes a long way toward ensuring that you’re ultimately able to reach your destination and meet your end-goals.

SIMPL: Wharton Launches Its Own Simulation Framework

Imagine a world where simulations are simply … SIMPL!

Simulations are expensive to create and require highly specialized expertise; and if you build something that doesn’t deliver on the intended learning objectives, the process of making tweaks and updates can be more complicated than it needs to be. Other persistent challenges keep those of us who manage simulation teams up at night, such as retaining technical talent: if you’re authoring simulations on a commercial platform, your team will need to learn a large amount of specialized know-how, and those skills often don’t translate to other careers in technology. Authoring platforms present other problems as well, including a lack of integration with our learning management systems (LMS), the best source for user management, and no single sign-on (SSO) authentication integration.

When we hit the mark, simulations are an incredibly powerful and effective form of educational technology, one that can far outperform traditional lectures and cases.

For example, students who completed our Looking Glass for Entrepreneurship simulation performed one standard deviation better on the final exam than students who didn’t go through this experience. And we have lots and lots of examples just like this one!

With a burning desire to overcome the challenges we face in the simulation space, in 2016 the Learning Lab authored our own simulation framework, called SIMPL, built on the open-source Python/Django stack. We’re incredibly excited about this new direction for the team, and are already seeing myriad returns on our efforts.

At Wharton, we have completed our first multiplayer simulation on SIMPL: Rules of Engagement, a marketing strategy simulation. Intermap, a mind-mapping tool used in idea generation, utilizes certain aspects of SIMPL as well – namely, the LTI integration libraries for authentication – and the tool is published within Canvas as a module (user management? What user management!). There are a number of other simulation projects in the pipeline for the coming year, and all will be written on SIMPL.

Possibly the most exciting part about controlling our own destinies is that in mid-2017 we will release SIMPL to the world, free of charge and under an open-source license. Our goal is to develop a rich community of practitioners and other experts around this framework, because we believe a rising tide lifts all boats. If you’re interested in getting a sneak peek at SIMPL, here are the docs. In the coming weeks, there will be a variety of blog posts from other SIMPL authors about more specific areas of the SIMPL framework. And if you’re interested in being included in the beta, please email learninglab@wharton.upenn.edu.

SIMPL Architecture

Here’s the team behind SIMPL, left to right – Donna St. Louis, Flavio Curella (Revsys), Joseph Lee, Jane Eisenstein, Sarah Toms.

Not pictured: Frank Wiles and Jeff Triplett, Revsys


Recipe for Quick Coding: How to Cook Up a Good Glossary, Fast

Two weeks before the Learning Lab’s new Customer Centricity simulation was set to go live for the first time in a Wharton MBA class, I was asked to add a CRM glossary to it – one that could grow as more data reports became available to a player throughout the course of the game.

Suffice it to say, this was quite a task given the timeframe. Nevertheless, I approached the challenge with an open mind and a lot of quick thinking. Viewing it as a somewhat exploratory endeavor, I managed to meet the deadline and our sim made its scheduled debut with a fresh-baked working glossary. Now, having devised an efficient process for whipping one up on the fly, I’d like to share with you my recipe:
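The core idea of a glossary that grows with the game can be sketched in a few lines. This is a minimal illustration, not the actual sim code; the term names and report keys below are invented:

```python
# Hypothetical glossary data: term -> (definition, report that unlocks it).
GLOSSARY = {
    "CLV": ("Customer lifetime value.", "basics"),
    "churn": ("The rate at which customers stop buying.", "basics"),
    "RFM": ("Recency, frequency, monetary-value scoring.", "crm_report"),
    "cohort": ("A group of customers acquired in the same period.", "cohort_report"),
}

def visible_glossary(unlocked_reports):
    """Return only the terms whose source report the player has unlocked."""
    unlocked = set(unlocked_reports) | {"basics"}  # basics always visible
    return {
        term: definition
        for term, (definition, report) in GLOSSARY.items()
        if report in unlocked
    }

# Early in the game only the basics show; more terms appear as the
# player's data reports unlock.
early = visible_glossary([])
later = visible_glossary(["crm_report"])
```

Keeping the unlock logic in one lookup function means new reports (and their terms) can be added without touching the display code.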


Bringing Customer Centricity to Life (and Biz Models to Black)

If you’ve ever sat through one of Wharton marketing professor Peter Fader’s highly engaging lectures on the merits of a customer-centric business model (or read his book on the subject), then you know how quickly he’s able to convince an audience that adopting this game-changing go-to-market strategy can trump a product-focused approach from a profitability standpoint – and that, if done well, it can create the conditions for long-term viability in the perilous, race-to-the-bottom age of commoditization that many companies are grappling with these days.

He’ll also handily disabuse any naysaying marketing execs of the notion that using data is just about collecting numbers, driving home the point that simply having a CRM system in place is not where the work ends but where it begins – and leaving no doubt in their minds that if they think all customers are created equal, they have a lot to learn. Indeed, Fader is a maestro at shifting paradigms to embrace the power of data analytics for calculating customer lifetime value (CLV). But for this revolution in marketing strategy to truly take hold, theory must meet practice – and that’s where the Learning Lab comes into play.


Reimagining Education: A Journey Through the ‘Looking Glass’

For the past year, students taking Wharton Prof. Ethan Mollick’s MGMT 801 class have been offered a unique alternative to traditional coursework – those who were game, as it were, were invited down a rabbit hole and emerged in the fast-paced world of launching a startup.

This soup-to-nuts, wholly immersive experience in the “Looking Glass” was intricately designed by Wharton’s Learning Lab and built by Forio, a San Francisco-based system-dynamics company that specializes in custom simulations. Under Mollick’s guidance, our bicoastal team crafted a storyboard composed of sequential scenarios – from company onboarding to gaining angel investment – all tied to learning objectives that paralleled his course syllabus.
