What EdTech Can Learn From the Grocery Store

By Nathan Martin
When it comes to the grocery run, I’ve never been the most sophisticated shopper.
Overwhelmed by the aisles, I will forget essential ingredients, gravitate to the familiar (hello frozen pizza!) and somehow end up with a cart full of kale. I’m not uninformed. I just don’t make the best decisions.
For schools and teachers, choosing which education product to use can feel a bit like a trip to a grocery store–only one with incoherently organized aisles and sporadically restocked shelves, filled with products lacking any kind of common labeling system. Evaluating the ingredients (or quality) of a product can be impossible. And then there are the bureaucratic layers (unique to every school or district) that can complicate even the best plans.
So when it comes time to make a decision, schools and teachers can struggle. They take the task seriously and work with what information they have, but as Digital Promise found in Improving EdTech Purchasing (2014):

 “With a growing number of products and limited trusted information about them, many districts rely on informal sources instead of data and evidence to make decisions.”

Because districts make decisions in this manner, “companies perceive little incentive to produce rigorous evidence.”
This self-perpetuating cycle fails teachers, students and companies trying to make better products. It makes sense, then, that SRI research found the most popular edtech products were familiar ones that fit within the existing system, not the most impactful ones.
Making better decisions in edtech is complex. The Smart Series Guide to EdTech Procurement (released by Getting Smart and my former colleagues at the Foundation for Excellence in Education) provides an excellent navigational tool. One part of improving that decision-making process is better evidence.
There is growing consensus (see articles from Tom Kane, Michael Horn and Julia Freedland Fisher) that education needs a better way to translate research into action.

Knowing What Works in EdTech

Education needs a way to know and track “what works.” It also needs ways for people to make better decisions. Living in the U.K., I’ve seen my own decision-making improve through tools like BBC recipes for knowing “what works,” and the easy signals of traffic-light packaging for making that final decision. Interestingly, the importance of labels in nutritional decision-making led the U.S. to announce a revamp of its food labels for the first time in over 20 years.
While groups like EdSurge, Digital Promise and Graphite are taking important steps to bring a bit of coherence and research to the edtech grocery store, companies should play a greater role in helping to untangle this Gordian knot.

So what can (and should) companies do to help break this cycle?

Education (in the U.S. at least) will never have a body to compile research, track education products and issue official guidance on what works–barring a massive shift in the U.S. political landscape. Power and decision-making are explicitly reserved to the states, and federal research funding remains minimal. As Kane writes, “There is no ‘FDA’ for education, and there never will be.”
There may never be an FDA for education, but education companies could do more to help schools and teachers understand the research and “nutritional value” of educational products.
Here’s the modest first step:

Like the traffic-light labeling system or badging, education companies should adopt a standard, easily recognizable way of identifying what evidence or research backs their product.

Labels help identify ingredients. Badging and micro-credentialing capture the accumulated skills of an individual. Both could be used to help tackle this question of evidence.
Hold the skepticism for a few paragraphs.
Currently, some education companies collect evidence on their products. The strength of that evidence varies, ranging from a simple satisfaction survey or case study to a randomized controlled trial (RCT). The evidence normally shows up in claims made about products (e.g., schools in X made Y gains through use of Z!).
For a school or teacher, making a judgement about what this evidence means (and what it supports) can be challenging. There are no standard labels, and terms and their definitions (such as efficacy, effectiveness, case study) may change from company to company. Schools and teachers know that RCTs are the gold standard for “what works,” but assessing other types of evidence can be less clear. And since RCTs are expensive and time-consuming to conduct and (as AEI pointed out recently) not without flaws, the answer is not simply to conduct more RCTs.
As Michael Horn notes:

“It’s not that RCTs are unhelpful; they can be very helpful. It’s just that stopping with a randomized-control trial represents, at its best, an incomplete research process… the federal government should help by supporting research that progresses past initial RCTs and promotes alternative methods for unearthing what drives student outcomes in different circumstances.”

All evidence should be used in making education purchasing decisions, but not all evidence and research are equal, and they shouldn’t be given equal weight. How evidence is collected and reported determines how much to trust the larger claims about a product’s impact. So, while the experience of teachers and students (whether captured through surveys or reports) is important, knowing whether those products will have a repeatable and replicable impact relies on more rigorously collected and presented evidence. We, as education companies, should help make that process clearer and more transparent.

EdTech Label Ingredients

Any new set of labels, packaging or badging–whether driven by a set of leading companies or a non-profit — should be:
1. Clearly Defined and Easily Understandable: Defining types of evidence is challenging, but vital (as we’ve found out at Pearson). By agreeing on general definitions of types of evidence, the definitions could be linked to easily recognizable badges or labels (imagine what a Correlation Study badge might look like). Like Creative Commons, which provides clear visuals defining copyright and reuse, these badges could then be used by participating companies to designate what evidence has been collected on their product.
The more evidence supporting the product, the more badges. Presenting information in a standard and understandable way would make decisions easier for educators–much like the My School Info Challenge tried to do with school performance data.
2. Focused on Creating Value for Teachers and Students: Education is an industry that should increasingly judge its success based on whether it creates real and demonstrable value for students and teachers. It animates the company I work at; as our CEO said in December, “the profit we make is the by-product of making a useful and meaningful addition to society… our ultimate mission is to make people’s lives better. If we fail at that, we fail as a business.”
Sustainable growth will only come from growing impact on the learners served, creating shared value, not just growing the number of learners reached. Greater impact can come with smarter decision-making. There is consensus among edtech companies that purchasing decisions should be smarter–focused on answering the local needs of teachers and students. A coalition of education companies could be formed, dedicated to promoting actionable and understandable research, with the goal of better serving teachers and students. Better purchasing decisions are only valuable insofar as they lead to better outcomes. That focus is critical.
3. Verifiable and Linked to Evaluation Tools: Like Mozilla’s badges, these badges, icons or labels could link to the actual research or evidence supporting the badge. The tracking would be imperfect (there wouldn’t be a What Works Clearinghouse evaluating the submitted evidence), but it would allow a standard and verifiable way of displaying evidence. That work should be combined with tools to help schools and districts evaluate the evidence, like Digital Promise’s Evaluating the Studies of Ed-Tech Products. That clear distinction would encourage the market to move closer toward evidence-based decision-making–like we’ve seen with buying habits shifting toward locally sourced food.
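To make the three criteria above concrete, here is a minimal sketch in Python of what a machine-readable evidence badge might look like. Everything here is hypothetical: the tier names, the field names and the example URLs are illustrative assumptions, not an agreed industry standard. The point is only that each badge names a defined evidence type and links to the study behind it, so the claim can be verified.

```python
from dataclasses import dataclass

# Hypothetical evidence tiers, ordered from least to most rigorous.
# These names are illustrative, not an established labeling standard.
EVIDENCE_TIERS = [
    "satisfaction_survey",
    "case_study",
    "correlation_study",
    "quasi_experimental_study",
    "randomized_controlled_trial",
]

@dataclass
class EvidenceBadge:
    """One badge: a claimed evidence type plus a link to the study behind it."""
    tier: str        # must be one of EVIDENCE_TIERS (criterion 1: clearly defined)
    study_url: str   # link to the underlying research (criterion 3: verifiable)

    def __post_init__(self):
        # Reject evidence types outside the agreed vocabulary.
        if self.tier not in EVIDENCE_TIERS:
            raise ValueError(f"Unknown evidence tier: {self.tier}")

def strongest_tier(badges):
    """Return the most rigorous tier among a product's badges, or None."""
    if not badges:
        return None
    return max(badges, key=lambda b: EVIDENCE_TIERS.index(b.tier)).tier

# Example: a product displaying two badges (URLs are placeholders).
badges = [
    EvidenceBadge("case_study", "https://example.com/case-study.pdf"),
    EvidenceBadge("correlation_study", "https://example.com/correlation.pdf"),
]
print(strongest_tier(badges))  # correlation_study
```

A shared, ordered vocabulary like this is what would let a school compare badges across vendors at a glance, the way traffic-light food labels work across brands.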

What is Next?

Even in the best circumstances, this proposed first step wouldn’t fix the evidence component of edtech purchasing decisions. A self-regulated industry movement would face external challenges and natural skepticism: arguments over the definitions of different types of studies and evidence, and bad actors pushing poor-quality evidence.
But those challenges are not new to the market; they are part of the reason schools and teachers struggle to evaluate claims and products. Solving the problem of knowing what works (and creating quality products) is only one part of the evidence equation–there also has to be demand for products that can show they make a difference to learners. It will take a huge effort. Research needs to be done differently, catalogues built, and new tools developed. Companies need to change how they market and label their products.
Smarter procurement and purchasing (a better education “grocery store”) is complex, and better evidence is only one part of it. Labels or badging might not fix everything, but they could help move toward a future process that is more coherent, understandable and centered on evidence.

Nathan Martin is a senior researcher at Pearson. Follow him on Twitter: @nathanmart.






Wow...to respond to all your valid points in this article would take another article of equal length. So...I focus on just a few:
1. Labels. I've been saying I want some kind of label like the nutrition labels required by the FDA. Guess what! We have something just like that, provided by Balefirelabs (www.balefirelabs.com)–not subjective reviews, just metrics of what is/is not included.
2. Nutrition labels are not without problems. Definitions of long foreign words. How to interpret percentages. Conflicting advice on dietary requirements. False claims. Validation of accuracy of measurements....on and on.
These exist in our ed system as well. The What Works Clearinghouse, Best Evidence Encyclopedia and other forums were meant to help clear the fog, but they're not working at scale. You address many of the problems related to ed research, specifically RCTs. We need to fix the model. In my own practice, I'm seeing some hopeful movement, though not nearly as quickly, efficiently, or deeply as I'd like–for example, programs like Harvard's and the University of Texas at Arlington's Mind, Brain and Education programs. Still, translating research to practice has not proven easy.
Trans-disciplinary work is not easy. See how several researchers approached tackling issues in this article from the Journal of Numerical Cognition: http://jnc.psychopen.eu/issue/view/2 Be sure to read the commentaries as well. Also note a collaboration between the work of this new journal and the Mathematical Cognition and Learning Society. It's a start. An important start.
Some of the issues raised in the above dialogue can be seen in a model I included in this tweet: https://twitter.com/MrsG2nd/status/731531894886555648 - That model is a simplified version that includes interaction/dialogue/feedback loops between all elements. In other words, the "special sciences" must collaborate, the stakeholders in education must collaborate, and it really ends up looking like a beautiful, intricate web with many, many nodes and connections. That's how systems work. That's how networks work.
When talking specifically about edtech, and even more refined, adaptive/mastery software, I believe we can learn a lot from using Evidence Centered Design. See work on this in "Bayesian Networks in Educational Assessment" --Williamson, Yan, Steinberg, Mislevy, & Almond. http://tinyurl.com/h6qql9q
3. The power of social media. This has been one of the most eye-opening experiences for me. True, like any communication, social media can and does fall prey to groupthink. But it has become a grassroots effort that has moved the system. Look at this tweet and see how quickly teachers' needs were heard and changes were put into place:
I wish I could teleport 20 years into the future and look back to see what changes were initiated and propelled through social media (good and bad). I do not doubt for one moment that the current opt-out movement, and the fairly rapid disintegration of Race to the Top (and its predecessor NCLB), advanced because of the collective power of voices that previously went unheard.
4. We now have the opportunity with ESSA to adopt both a medical Mayo Clinic-ish approach to teaching (prevention, treatment, sustained health, recommended habits, testing) and the FDA model...but only if we look deeply and carefully and listen to all elements. I recently read "An Everyone Culture" by Kegan and Lahey. This is an excellent model of how communication and iteration can create stable, healthy systems, regardless of context.
We ARE seeing movement, we DO have good models, we MUST have a mind shift for paradigm shift. But we can. "Will we?" is the question.


