Finding the Right Level: Adaptivity in Learning Games
By Kristen DiCerbo, first appeared on Pearson Research & Innovation Network on February 12, 2013
In my discussion of problems with our current use of the term gamification (i.e., our hyper-focus on rewards), I posited that one of the things that actually keeps game players engaged is the presentation of exactly the right level of challenge at exactly the right time. Keeping the game play at a level that is challenging but not overwhelming appears to be key to engaging players. Csikszentmihalyi and Csikszentmihalyi (1988) write, “Optimal experience requires a balance between the challenges perceived in a given situation and the skills a person brings to it.”
Since not everyone is at the same level coming into a game, this implies we need to make games adaptive. Charles et al. (2005) list many ways that adaptivity can improve game play, including: “to moderate the challenge levels for a player, help players avoid getting stuck, adapt gameplay more to a player’s preference/taste, or perhaps even detect players using or abusing an oversight in the game design to their advantage.” Lopes and Bidarra (2011) suggest that without adaptivity, games are predictable (which invites “gaming the system”), encourage practice when it is not needed, and have little replay value.
So, how can games adapt to players’ levels? There are some basic things that can be done to get closer to players’ levels without player modeling, such as:
1. Static leveling – This would be the lowest level of adaptivity, in which everyone goes through the same levels and has the same requirements to move to the next level. For example, you must successfully complete four quests to move to level 2. In this case, the game is not trying to create an estimate of player skill. Presumably more skilled players will complete those four quests more quickly than less-skilled players and will move on to the next level. Very skilled players will have to move through many levels that are too basic in order to get to the more advanced levels, and there isn’t much replayability.
2. Branching based on behavioral measures – Players go down one of a limited number of pre-defined paths based on simple measures of performance or choice. Players might all play level A and then, depending on whether they get more or fewer than 60% of problems correct, go A -> B -> D or A -> C -> D, for example. The key here is that the decision used to choose the path is based on a simple behavioral measure of what they did in the game. Again, this does not attempt to estimate any underlying knowledge, skill, or player attribute; it is just a report of player behavior. This means the measures used are likely simple and unweighted, and we aren’t getting at why we would want these players to have different paths. What we’re really saying is that players who get more than 60% correct are no longer novices at our skill. However, if we want to make that judgment, we should model the skill with more advanced methods that take probability and error into account. (Both of these basic approaches are sketched in code just below.)
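To make the contrast concrete, here is a minimal sketch in Python of the two rules just described: a fixed quest-count gate and a single behavioral branch on percent correct. The function names, quest requirement, and 60% threshold are illustrative, not from any particular game.

```python
# Hypothetical sketch of the two simple approaches described above.

def static_level_up(quests_completed: int, required: int = 4) -> bool:
    """Static leveling: every player faces the same fixed requirement."""
    return quests_completed >= required

def next_level(percent_correct: float) -> str:
    """Branching on a behavioral measure: one threshold, two predefined paths."""
    # A -> B -> D for stronger performance, A -> C -> D otherwise.
    return "B" if percent_correct > 0.60 else "C"

print(static_level_up(quests_completed=4))  # True: move to level 2
print(next_level(percent_correct=0.72))     # 'B'
```

Notice that neither rule carries any notion of why the player performed as they did; each is just a report of behavior, which is exactly the limitation described above.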
To really get to the point where we are matching situations to skills, we need to create more sophisticated models of the players.
3. Player modeling – estimating characteristics. To get really good matching between players and activities, we need to infer player characteristics from what happens in the game. For example, we might infer traits like creativity, puzzle-solving ability, ability to add fractions, and/or conversational English ability from game play and then adapt based on those inferences. We could create a profile of skills for each player, estimating whether they are at the novice, intermediate, advanced, or expert level.
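As a rough illustration (the skill names and numbers below are made up, not drawn from any particular game), such a profile might hold, for each characteristic we care about, a probability distribution over those four levels rather than a single label:

```python
# Hypothetical player profile: for each characteristic, a probability
# distribution over proficiency levels rather than a single label.
profile = {
    "fraction_addition": {"novice": 0.10, "intermediate": 0.55,
                          "advanced": 0.30, "expert": 0.05},
    "puzzle_solving":    {"novice": 0.60, "intermediate": 0.25,
                          "advanced": 0.10, "expert": 0.05},
}

def most_likely_level(skill: str) -> str:
    """Report the level with the highest estimated probability for a skill."""
    return max(profile[skill], key=profile[skill].get)

print(most_likely_level("fraction_addition"))  # 'intermediate'
```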
In order to create such profiles, we need to start by defining the domains we’re interested in. What characteristics will we want to adapt to? We’ll then need to determine how to get evidence of players’ levels of those characteristics. Two ways to start are to (1) have players rate themselves and/or (2) design the game so the initial levels are very good at gathering this evidence.
How do we make estimates of these characteristics? This is where some statistical lifting is required. An article by Iseli et al. (2010), from a group designing Navy simulations, shows where things are headed statistically (if you want to look at the nuts and bolts). They use Bayesian networks, a statistical method that, in this application, estimates the probabilities of different proficiency levels based on evidence from the simulation. So, based on a whole bunch of player actions that get statistically combined, the Bayesian network might estimate that there is a 92% chance that a player is at an expert level of Navy damage control operations (the skill assessed by Iseli et al.). The researchers do a nice job showing how to make these estimates based on what is happening in the simulation, but they do all the calculation after the fact (outside the environment). The next move is to make this happen within the game so the game can adapt based on the results of the computation.
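To give a flavor of the idea without reproducing Iseli et al.’s actual network, here is a minimal sketch of the core move: keep a probability for each proficiency level and update it with Bayes’ rule as each piece of evidence arrives. The evidence here is a simple success/failure on an in-game task, and the likelihoods are made-up numbers.

```python
# Minimal sketch (not Iseli et al.'s model): update the probability of each
# proficiency level as evidence from player actions comes in.

# Prior belief over proficiency levels.
prior = {"novice": 0.25, "intermediate": 0.25, "advanced": 0.25, "expert": 0.25}

# Hypothetical likelihoods: chance a player at each level succeeds on a task.
p_success = {"novice": 0.20, "intermediate": 0.50, "advanced": 0.75, "expert": 0.95}

def update(belief, succeeded: bool):
    """One Bayes-rule update of the proficiency distribution."""
    posterior = {}
    for level, p in belief.items():
        likelihood = p_success[level] if succeeded else 1 - p_success[level]
        posterior[level] = p * likelihood
    total = sum(posterior.values())
    return {level: v / total for level, v in posterior.items()}

belief = prior
for outcome in [True, True, False, True, True]:  # observed player actions
    belief = update(belief, outcome)

print(belief)  # probability mass shifts toward the higher levels
```

A full Bayesian network combines many such pieces of evidence, weighted by how diagnostic each one is, but the updating logic is the same in spirit; doing it inside the game, as each action happens, is what makes in-game adaptation possible.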
So, if it isn’t obvious… this all requires data! We have to be thinking very carefully about the data we collect from player actions, and how we use it both in game and after the fact.
Once we know what a player’s characteristics are, we can choose the next activity to maximize different things. For example, if we notice flagging engagement, we might choose the activity we think will be most motivating. If we want to maximize learning, we might choose the activity most within the player’s zone of proximal development. If we want a better estimate of their characteristics, we might choose the activity that will best help us differentiate between novice and intermediate players.
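As a rough sketch of how those different goals lead to different selection rules (the activity names and scoring numbers below are hypothetical placeholders, not a real recommendation engine):

```python
# Hypothetical sketch: pick the next activity according to the current goal.

activities = [
    {"name": "fraction_race", "fun": 0.9, "difficulty": 0.3, "diagnostic_value": 0.2},
    {"name": "pizza_builder", "fun": 0.6, "difficulty": 0.5, "diagnostic_value": 0.7},
    {"name": "mixed_numbers", "fun": 0.4, "difficulty": 0.7, "diagnostic_value": 0.5},
]

def choose_next(goal: str, player_skill: float) -> str:
    if goal == "engagement":
        # Flagging engagement: pick what we expect to be most motivating.
        key = lambda a: a["fun"]
    elif goal == "learning":
        # Zone of proximal development: difficulty just above current skill.
        key = lambda a: -abs(a["difficulty"] - (player_skill + 0.1))
    else:  # "measurement"
        # Better estimates: pick the most diagnostic activity.
        key = lambda a: a["diagnostic_value"]
    return max(activities, key=key)["name"]

print(choose_next("learning", player_skill=0.45))  # 'pizza_builder'
```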
Finally, at the end of the day, all of these player estimates can also serve as assessment data, providing feedback for teachers and students about their levels of knowledge, skills, and abilities. So, making the game better also serves to give us better information about our students… win, win!
For good reading:
- Charles, D., Kerr, A., McNeill, M., McAlister, M., Black, M., Kucklich, J., … Stringer, K. (2005). Player-centred game design: Player modelling and adaptive digital games. In Proceedings of the Digital Games Research Conference (Vol. 285). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.97.735&rep=rep1&type=pdf
- Csikszentmihalyi, M., & Csikszentmihalyi, I. S. (1988). Optimal experience: Psychological studies of flow in consciousness. Cambridge; New York: Cambridge University Press.
- Iseli, M. R., Koenig, A. D., Lee, J., & Wainess, R. (2010). Automatic assessment of complex task performance in games and simulations. In The Interservice/Industry Training, Simulation & Education Conference (I/ITSEC) (Vol. 2010). Retrieved from https://www.cse.ucla.edu/products/reports/R775.pdf
- Lopes, R., & Bidarra, R. (2011). Adaptivity challenges in games and simulations: a survey. Computational Intelligence and AI in Games, IEEE Transactions on, 3(2), 85–99.Available:http://graphics.tudelft.nl/~rafa/myPapers/bidarra.TCIAIG.2011a.pdf
- Magerko, B., Heeter, C., Fitzgerald, J., & Medler, B. (2008). Intelligent adaptation of digital game-based learning. In Proceedings of the 2008 Conference on Future Play: Research, Play, Share (pp. 200–203). Available:http://lmc.gatech.edu/~bmedler3/papers/MagerkoHeeterMedlerFitzgerald-Intelligent AdaptationofDigitalGame-BasedLearning.pdf