ESEA didn’t get reauthorized this year, but the new framework for US education was initiated this week when the Department of Education awarded $330M in grants to two state consortia. Read EdWeek’s Catherine Gewertz for a summary of the implications for kids and teachers. These two grants, leveraging the widely adopted Common Core standards, will guide how American kids are tested for the next ten years.
The Partnership for the Assessment of Readiness for College and Careers (PARCC) received $170M on behalf of 26 states. Leadership came from Mike Cohen of Achieve, Eric Smith of Florida, and Paul Pastorek of Louisiana. The PARCC summary suggests that most assessments will be online. The attempt to involve more than 200 colleges may result in a more transparent college readiness measure than today’s placement exams.
The Smarter Balanced Assessment Consortium, led by Linda Darling-Hammond and state chiefs in West Virginia, Maine, and Oregon, received $160M. The summary is a little vague, but from this group of 31 states we’ll see more adaptive assessment (e.g., NWEA) and more performance assessment, with wider variation across states than is likely within the PARCC consortium.
Overshadowed by the Race to the Top (RttT) state awards, this wonky announcement received little attention. It’s complicated and nothing happens for years—assessments don’t go live until 2014. But it is important work, a real inflection point in the history of US education. These lumbering consortia will make big steps forward from 1950s psychometrics to something appropriate to 2004 (introduced in 2014). Next gen assessment will mark the beginning of a differentiated world—differentiated instruction for kids and differentiated roles and pay for learning professionals.
These grand compromises will bring up the rear but may dampen assessment innovation because they are trying to solve the wrong problem. Rather than “what’s the best common assessment system?” they should be working on “what’s the best sequence of learning experiences for an individual student?” And, “how do we build a flexible framework that can incorporate lots of assessment data from a variety of sources?” The comparability of common formative assessments is great, but the lockstep application forces standardization. I’d much rather see a marketplace of powerful instructional systems that invisibly embed assessment. Folks will try to work within these testing systems, but I don’t think either will fully harness the power of content-embedded assessment (which is likely to be the most important capability to be developed over the next five years). Assessment that counts for anything will, for another decade, remain outside the instructional experience—and that is unfortunate.
Here are two examples. A learning sequence that includes a lot of games and sims will generate mountains of feedback data; even so, kids will need to stop learning and take an online quiz that may have nothing to do with what they’ve been learning. A merit badge system incorporates performance assessment and end-of-unit assessments in support of an individual progress model; end-of-year tests for age cohorts will have little relevance to an individual learning pathway.
An assessment framework, or harness, would incorporate new data sources as they are developed. (A year ago I suggested an assessment marketplace after a chat with John Katzman). The related project the consortia should take on is a comprehensive student profile—one that tracks learning levels, interests, and best learning modes. A student’s motivational profile will prove to be more important than learning levels to academic and professional success.
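To make the profile idea concrete, here is a minimal sketch of what such a student record might look like in code. This is purely illustrative: the class name, fields (learning levels, interests, learning modes, motivation), and the simple smoothing rule for folding in new assessment results are my assumptions, not anything specified by the consortia.

```python
from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    """Hypothetical comprehensive learner profile; fields are illustrative."""
    student_id: str
    learning_levels: dict = field(default_factory=dict)  # e.g., {"math": 6.2}
    interests: list = field(default_factory=list)        # e.g., ["robotics"]
    learning_modes: list = field(default_factory=list)   # e.g., ["visual"]
    motivation: dict = field(default_factory=dict)       # persistence, goals, etc.

    def update_level(self, subject: str, level: float) -> None:
        """Fold a new assessment result into the running estimate.

        Uses a simple blend (70% prior, 30% new evidence) so any data
        source--a game, a quiz, a performance task--can report in.
        """
        prior = self.learning_levels.get(subject)
        if prior is None:
            self.learning_levels[subject] = level
        else:
            self.learning_levels[subject] = 0.7 * prior + 0.3 * level

# Usage: new evidence nudges the estimate rather than replacing it.
profile = StudentProfile("s-001", interests=["robotics"])
profile.update_level("math", 6.0)
profile.update_level("math", 7.0)
```

The point of the sketch is the open interface: any assessment source that can emit a subject and a level estimate can feed the same profile, which is the "harness" idea in miniature.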
The assessment grants are a big step forward, particularly from an equity standpoint; they are most likely to benefit weak instructional systems. But at first glance, it looks like they failed to anticipate the most important developments of the decade to come. We should be targeting 2014 opportunities, not trying to solve 2004 problems.