The Department of Education announced a data partnership with SRI called the Evidence Framework for Innovation and Excellence in Education (it is apparent that these are evaluators and data scientists, not brand managers). The first project is a draft paper with the catchy name Enhancing Teaching and Learning Through Educational Data Mining and Learning Analytics: An Issue Brief.
“Big data, it seems, is everywhere—even in education,” said edtech director Karen Cator in her opening blog. “Researchers and developers of online learning systems, intelligent tutoring systems, virtual labs, simulations, games and learning management systems are exploring ways to better understand and use data from learners’ activities online to improve teaching and learning.”
The paper provides a useful intro to data mining and learning analytics. It suggests that data mining “develops methods and applies techniques from statistics, machine learning, and data mining to analyze data collected during teaching and learning” and will be useful in predicting student learning behavior, characterizing successful learning experiences, and identifying features of productive learning environments.
A cousin of data mining is analytics, a broad array of techniques used to analyze instructional data in an effort to answer questions such as:
• When are students ready to move onto the next topic?
• When are students falling behind in a course?
• When is a student at risk for not completing a course?
• What grade is a student likely to get without intervention?
• What is the best next course for a given student?
• Should a student be referred to a counselor for help?
It’s a good thing that the Department is joining the Big Data conversation, and it has provided a useful primer on the subject. However, a Big Data partnership with SRI, the high priests of randomized control trials, is a bit disconcerting. Take, for example, the $3.5M grant the Department gave SRI last week for a four-year study of ASSISTments. It will prove what we already know about the productivity of adaptive systems, but four years from now we’ll be two product generations down the road.
The promise of (and the rush to) blended learning suggests that we need nimble approaches to quickly glean ‘good enough’ information to support program design decisions and keep experimenting. When you visit a Rocketship learning lab, you get the sense that you’re watching a very dynamic live trial. The sector would benefit from 10,000 sites actively experimenting and sharing results. School leaders need to know by the end of the month, not 2016, whether ASSISTments or Mangahigh or ST Math or i-Ready appears to be more successful with their kids. Right now, while we’re all experimenting and designing blended learning environments, quick measures of efficacy are more important than rigorous measures of reliability.
The data partnership appears to have oversampled academia and missed the edu-venture and startup community. On this topic, that’s a big mistake. They won’t get the ‘evidence framework’ right without more entrepreneurial viewpoints.
The paper also did not address the U.S. comparability trap. As states began attaching stakes to simple, inexpensive multiple-choice tests, psychometricians (and then policymakers) demanded high reliability, including the same instrument administered the same way at the same time. For Big Data to have a big impact, we’ll need to invent new definitions of comparability. We should be able to compare big data sets for children progressing through third-grade content and make some inferences with ‘good-enough’ comparability about their progress, about the learning experiences that were most productive, and about the schools they attended. Without innovation in comparability, we’ll be stuck using a very narrow subset of data to inform decisions.
Finally, the recommendations miss the mark on the importance of states. The foundational insight supporting Digital Learning Now, supplied by two former governors, is that state policy is vitally important in the shift to personal digital learning. (The same can be said for the DQC.) Taking full advantage of big data will require robust data systems, better connectivity, access devices for all students, agreements on portable student ‘data backpacks’ (an expanded electronic student record), a shift to digital instructional materials, and staff development, all of which involve the states. Taking advantage of Big Data demands big leadership from governors, chiefs, state boards, and legislatures.
For another take, see Audrey Watters’ comments.
The Department is accepting comments on the paper. Send them to:
Office of Educational Technology
U.S. Department of Education
Mangahigh is a portfolio company of Learn Capital where Tom is a partner. Getting Smart provides support to Digital Learning Now.