The National Science Foundation (NSF) and U.S. Department of Education (ED) are gathering input to shape a $60 million math R&D initiative. “This initiative will support researchers, practitioners, and institutions with the greatest potential for transformational impact, and provide opportunities for state, local and institutional decision-makers to infuse proven practices into mathematics education.”
Weak math achievement in America is a complex phenomenon. Four of the root causes are culture, preparation, standards, and practices:
- A cultural bias against math. It’s perfectly acceptable in America to say, “I just wasn’t good at math.”
- Weak preparation of math teachers. A small percentage of teachers have math degrees (in part because career options are more attractive outside education).
- State standards that are often low, broad, and incoherent. (The Common Core will go a long way toward fixing that, as William Schmidt reported recently.)
- The limitations of batch-processing kids by birthday through a print curriculum. Most schools rely on a string of individual practitioners doing their best rather than an instructional system (as outlined in the 10 Elements of Competency-Based Learning).
Each of these is a big, hairy problem that $60 million won’t come close to fixing. So, let’s come at this a different way. What are the emerging opportunities in math instruction?
- Competency tracking and achievement recognition systems that make it easier to manage competency-based environments. A system of Common Core micro-standards and some agreement about a portable student record would help.
- Adaptive engines smart enough to take advantage of comprehensive learning profiles and to incorporate proprietary as well as open content.
- More examples of high performing blended learning environments that personalize learning, extend the day/year, and leverage teaching talent with technology.
- Motivation sciences: evidence about which learning experiences produce persistence and performance for which kids. The potential to completely customize a learning pathway for every student requires a much deeper understanding of learning and the role persistence plays (as we’ve learned from casual games).
The first two opportunities will be most efficiently attacked by venture-backed enterprises. Grants for data standards will help. (Learn Capital, where I’m a partner, would be happy to manage a $20m math education fund; we’ve already got MangaHigh, MasteryConnect, LearnZillion. How about a little SBIC venture debt for companies focused on math achievement?)
The third opportunity, development of new blended schools and programs (see 10 Reasons Every District Should Open a Flex School), is most efficiently sponsored by grants to intermediaries like Educate Texas, KnowledgeWorks, New Visions, and EdVisions. This is probably better handled by foundations and new funds like the Silicon Schools Fund.
The last one—advances in motivation sciences—involves a mixture of fundamental neuroscience research and dynamic trials of new tools. NSF is better suited to the first than the second. My suggestion is that NSF/ED invest half the money in a series of grants that study basic questions, including:
- Is the ability to apply concepts improved by learning concepts in multiple ways/modalities?
- What learning modalities work best for which kids?
- What role does automaticity play in deeper learning?
- What leads to persistence and how important is that to achievement?
- What data elements best predict math success?
They should invest the other half with a partner like Digital Promise’s League of Innovative Schools, which is well positioned to manage dynamic trials with multiple districts. This could be structured as a combination of prizes and dynamic research trials.
It’s not much money, so they should focus: pick an opportunity and a couple of capable partners. They should resist the political pressure to spread the money like peanut butter.
This blog first appeared on Huffington Post.