My wife appreciates psychodrama (today we’ll see another triumphant woman in a Nancy Meyers movie). I, on the other hand, enjoy a juicy psychometric-drama. And we have a thriller in the making. A couple of agreements in the next few weeks will set the stage for the next decade of testing in America.
Here’s the plot outline:
- Moving goal posts: Common Core adoption will cause states to adjust student learning expectations
- High stakes: maturing state accountability systems use test scores to hold students and schools accountable
- Increased stakes: federal grant programs are encouraging links between test scores and teacher evaluation
- One camp of leaders interested in locking in high standards with traditional “national” tests
- Another camp of leaders interested in progressive systems incorporating performance assessment (kids demonstrate what they know)
- Exponential growth of online assessment, much of it embedded in games and learning software
- And about $4 billion at stake
Over the next two weeks, more than 30 states will wrap up their Race to the Top applications. They have been encouraged to focus on local formative and interim assessments (the kind that should help teachers improve instruction). Duncan would like to see the $350 million carved out for testing projects used for common summative assessments (i.e., end-of-year and end-of-course exams). But a group of state chiefs would like to use the big pot of cash to develop the next generation of assessment.
If this were a movie, it would include a couple of cutaways to other developed countries illustrating the strange American reliance on bubble-sheet assessment. It might also show young people instantly responding to performance feedback while participating in a virtual role-playing game.
I’m no psychometrician, but I’m confident that the $350 million for test development could be used to build several innovative next-gen assessment platforms that could incorporate content-embedded assessment, online adaptive assessment, performance assessment, and more traditional summative assessment.
There are two endings to this movie. The likely ending is that agreements are made to reduce the number of variables in play, lock in higher common standards, and encourage widespread adoption of common traditional “national” tests. This will slow the race to the bottom (states making their own tests easier in order to boost passing rates) and set the stage for an ESEA bargain supported by a ‘college ready’ business-friendly consortium that could split both political parties. In this ending, we’ll need to be cautious about the risk of using old psychometrics for three distinct purposes: student, teacher, and school accountability.
There are a couple versions of the alternate ending—all a bit messier—including prizes and grants for innovative mostly online testing systems that easily incorporate learning games, writing assignments, and science projects. One version includes a marketplace where assessment systems compete for customers (states, districts, and networks). A similar version involves several multi-state consortia collaborating on assessment systems—some innovative, some progressive, some traditional. This variegated version leads to a broader coalition of support for reauthorization but worries equity advocates and gap closers. How would the feds ensure that states keep the good school promise?
We’ll soon know how this holiday drama will play out.