Finding the Right Level: Adaptivity in Learning Games

By Kristen DiCerbo, first appeared on Pearson Research & Innovation Network on February 12, 2013
In my discussion of problems with our current use of the term gamification (i.e., our hyper-focus on rewards), I posited that one of the things that actually keeps game players engaged is the presentation of exactly the right level of challenge at exactly the right time. Keeping the game play at a level that is challenging but not overwhelming appears to be key to engaging players. Csikszentmihalyi and Csikszentmihalyi (1988) write, “Optimal experience requires a balance between the challenges perceived in a given situation and the skills a person brings to it.”
Since not everyone is at the same level coming into a game, this implies we need to make games adaptive. Charles et al. (2005) list many ways that adaptivity can improve game play, including: “to moderate the challenge levels for a player, help players avoid getting stuck, adapt gameplay more to a player’s preference/taste, or perhaps even detect players using or abusing an oversight in the game design to their advantage.” Lopes and Bidarra (2011) suggest that without adaptivity, games are predictable (which encourages “gaming the system”), encourage practice when it is not needed, and have little replay value.
So, how can games adapt to players’ levels? There are some basic things that can be done to get closer to players’ levels without player modeling, such as:
1. Static leveling – This would be the lowest level of adaptivity, in which everyone goes through the same levels and has the same requirements to move to the next level. For example, you must successfully complete four quests to move to level 2. In this case, the game is not trying to create an estimate of player skill. Presumably more-skilled players will complete those four quests more quickly than less-skilled players and will move on to the next level sooner. Very skilled players will have to move through many levels that are too basic in order to get to the more advanced levels, and there isn’t much replayability.
2. Branching based on behavioral measures – Players go down one of a limited number of pre-defined paths based on simple measures of performance or choice. For example, players might all play level A and then, depending on whether they get more or fewer than 60% of problems correct, go from A -> B -> D or from A -> C -> D. The key here is that the decision used to choose the path is based on a simple behavioral measure of what players did in the game. Again, this does not attempt to estimate any underlying knowledge, skill, or player attribute; it is just a report of player behavior. This means that simple, unweighted measures are likely being used, and we aren’t getting at why we would want these players to have different paths. What we’re really saying is probably that players who get more than 60% correct are no longer novices at our skill. However, if we want to make that judgment, we should try to model the skill with more advanced methods that take probability and error into account.
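To make the branching idea concrete, here is a minimal sketch of that kind of rule in Python. The level names and the 60% cut-off come from the example above; everything else (the function name, the 0–1 scale for the score) is invented for illustration.

```python
# A minimal sketch of branching on a simple behavioral measure.
# No estimate of underlying skill is made; the branch is driven entirely
# by an unweighted report of what the player did.

def next_level(percent_correct: float) -> str:
    """Choose the next predefined path based on raw performance on level A."""
    if percent_correct > 0.60:
        return "B"   # stronger performers take the harder branch
    return "C"       # everyone else takes the supporting branch

# Both branches reconverge at level D: A -> B -> D or A -> C -> D.
print(next_level(0.72))  # "B"
print(next_level(0.45))  # "C"
```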
To really get to the point where we are matching situations to skills, we need to create more sophisticated models of the players.
3. Player modeling – Estimates of characteristics. What we need to do to get really good matching between players and activities is to infer player characteristics from what happens in the game. For example, we might want to infer traits like creativity, puzzle-solving ability, ability to add fractions, and/or conversational English ability from players’ game play, and then adapt based on those inferences. We could create a profile of skills for each player in which we estimate whether they are at the novice, intermediate, advanced, or expert level.
In order to create such profiles, we need to start by defining the domains we’re interested in: what characteristics will we want to adapt to? We’ll then need to determine how to get evidence of players’ levels on those characteristics. Two ways to start are to (1) have players rate themselves and/or (2) design the game so that the initial levels are very good at gathering this evidence.
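As a rough illustration of what such a profile might look like as a data structure, here is a hypothetical Python sketch. The four proficiency levels come from the paragraph above; the skill names, field names, and starting values are all made up.

```python
from dataclasses import dataclass, field

# The four proficiency levels mentioned above.
LEVELS = ["novice", "intermediate", "advanced", "expert"]

@dataclass
class SkillEstimate:
    level: str = "novice"      # one of LEVELS: our current best guess
    evidence_count: int = 0    # how many observations support that guess

@dataclass
class PlayerProfile:
    player_id: str
    skills: dict = field(default_factory=dict)  # characteristic name -> SkillEstimate

# A player who rated herself "intermediate" on adding fractions and for whom
# the opening levels have already supplied some confirming evidence.
profile = PlayerProfile("player_42", {
    "fraction_addition": SkillEstimate("intermediate", evidence_count=6),
    "puzzle_solving": SkillEstimate(),  # no evidence yet; still at the default
})
print(profile.skills["fraction_addition"].level)  # "intermediate"
```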
How do we make estimates of these characteristics? This is where some statistical lifting is required. A 2010 article by Iseli et al., from a group designing Navy simulations, shows where things are headed statistically (if you want to look at the nuts and bolts). They use Bayesian Networks, a statistical method that, in this application, estimates the probabilities of different proficiency levels based on evidence from the simulation. So, based on a whole bunch of player actions that get statistically combined, the Bayesian Network might estimate that there is a 92% chance that a player is at the expert level of Navy damage control operations (the domain assessed by Iseli et al.). The researchers do a nice job showing how to make these estimates based on what is happening in the simulation, but they do all the calculation after the fact (outside the environment). The next move is to make this happen within the game so the game can adapt based on the results of the computation.
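To give a feel for the statistics, here is a toy Bayesian update over the four proficiency levels. This is not the Iseli et al. network, just the same underlying idea in miniature: the priors, the success probabilities, and the observed outcomes are invented, and a real Bayesian Network would combine many kinds of evidence rather than a single success/failure variable.

```python
LEVELS = ["novice", "intermediate", "advanced", "expert"]

# Prior belief about the player's level (uniform to start).
prior = {lvl: 0.25 for lvl in LEVELS}

# Assumed probability of succeeding on a hard task, given each level.
p_success_given_level = {
    "novice": 0.10, "intermediate": 0.35, "advanced": 0.70, "expert": 0.95,
}

def update(belief, succeeded):
    """One Bayes step: P(level | observation) is proportional to
    P(observation | level) * P(level), then renormalized."""
    likelihood = {
        lvl: (p if succeeded else 1 - p)
        for lvl, p in p_success_given_level.items()
    }
    unnormalized = {lvl: likelihood[lvl] * belief[lvl] for lvl in belief}
    total = sum(unnormalized.values())
    return {lvl: v / total for lvl, v in unnormalized.items()}

belief = prior
for outcome in [True, True, False, True]:  # a short run of observed actions
    belief = update(belief, outcome)
print(belief)  # probability mass shifts toward the higher levels
```

Run inside the game rather than after the fact, an update of this kind is what would let the game adapt to its current belief about the player as play unfolds.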
So, if it isn’t obvious… this all requires data! We have to be thinking very carefully about the data we collect from player actions, and how we use it both in game and after the fact.
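As a sketch of what thinking carefully about the data might mean in practice, here is one hypothetical shape for a logged player-action event. The field names are illustrative, and in a real game the events would go to a data store rather than standard output.

```python
import json
import time

def log_event(player_id, event_type, **details):
    """Record one player action with enough context to use it later."""
    event = {
        "player_id": player_id,
        "event_type": event_type,   # e.g. "problem_attempt", "quest_completed"
        "timestamp": time.time(),
        "details": details,         # task id, correctness, time on task, ...
    }
    print(json.dumps(event))        # stand-in for sending to a data store

log_event("player_42", "problem_attempt",
          task="fractions_07", correct=True, seconds=38)
```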
Once we know what a player’s characteristics are, we can choose the next activity to maximize different things. For example, if we notice flagging engagement, we might choose the activity we think will be most motivating. If we want to maximize learning, we might choose the activity most within the player’s zone of proximal development. If we want a better estimate of the player’s characteristics, we might choose the activity that will best help us differentiate between novice and intermediate players.
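Here is a hypothetical sketch of what choosing the next activity to maximize different things could look like in code; the activities, the scores, and the three goals are all invented for illustration.

```python
# Each activity carries rough estimates of how motivating it is, how much
# learning it offers at the player's current level, and how informative it
# is for telling novice and intermediate players apart.
activities = [
    {"name": "bonus_race",        "motivation": 0.9, "learning": 0.3, "information": 0.2},
    {"name": "mixed_fractions",   "motivation": 0.5, "learning": 0.8, "information": 0.6},
    {"name": "diagnostic_puzzle", "motivation": 0.4, "learning": 0.4, "information": 0.9},
]

def choose_next(goal):
    """goal is one of 'motivation', 'learning', or 'information'."""
    return max(activities, key=lambda a: a[goal])["name"]

print(choose_next("motivation"))   # when engagement is flagging
print(choose_next("learning"))     # to stay in the zone of proximal development
print(choose_next("information"))  # to sharpen our estimate of the player's level
```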
Finally, at the end of the day, all of these player estimates can also serve as assessment data, providing feedback for teachers and students about their levels of knowledge, skills, and abilities. So, making the game better also gives us better information about our students… win-win!