Bror Saxberg, who cross-posted to our blog a couple of days ago, has spent some time ruminating on logic, cognition, and the science of learning, and he has some points worth re-broadcasting. Most importantly, the work being done by teachers — and students — in public education is not accessible to the learning companies, biotech companies, or research groups that could enhance what teachers know about how kids learn.
We know partly why that is: we're dealing with children, first of all. And public education is our most sacred of sacred cows. But is the problem also logistical? The only real data we currently have on how students learn is scores from standardized tests, which can track trends, whether progress or decline. Or we have grades. Okay. Grades are subjective, no matter how you slice them. I'm never sure what the real difference is between an A and an A-.
It's strange that we treat learning the way we often treat music styles or art. We like what we like when we see it, but we can't really describe it. It's like the cop-out Supreme Court definition of pornography: I know it when I see it, but I'm not prepared to tell you exactly what it is.
Saxberg says there are some icons that need to be brought into the conversation, and some talking points that would help focus the discussion and at least get us started. At a recent forum he attended, he noticed:
* No reference to the many sources of research about fundamental limitations on thinking and learning (finite working memory, for example, or the absolute requirement for new expertise to be built on fluent competencies burned in through practice) that have multiple lines of evidence behind them.
* No mentions of the great work of people like Richard Mayer, or John Sweller, or David Merrill, who’ve built up decades of understanding on fundamental limits and opportunities for media and instructional design to dramatically improve (or hinder) learning.
* No mention of the work of researchers like Jan Plass and others, directly investigating what specific design elements of simulations and games lead to better learning.
* No sighting of local empirical investigators like Kurt VanLehn, one of the leading researchers on automated tutoring systems, who's a faculty member at ASU.
He's got a point. But it's also understandable why this happens. We're talking about entrepreneurs here. They want to start companies that answer fundamental questions they developed as students, or in talking with students. Still, anecdotal evidence is no way to go when we're talking about something as personalized and granular as learning. How do we chart this? Whom do we bring in as partners to discover how people truly learn?