Several knowledgeable policy analysts have read the open letter to the testing consortia and have responded, “I still don’t get it, what’s the problem?” Perhaps the letter was too oblique. In the three areas that the letter addresses, my big concerns are:
1) By taking the Swiss Army knife approach (tests that improve instruction, measure teacher effectiveness, and hold schools accountable), the consortia will default to end-of-year, mostly multiple-choice tests. Instead of trying to build/buy ‘super tests,’ they should be building the framework for a ‘super system’ that would use many forms of assessment data for appropriate purposes. An assessment ecosystem should support multiple platforms and tools that states and districts/networks can use to assemble aligned assessments–that’s very different from half the states giving the same end-of-year multiple-choice test.
Cost and comparability are the drivers here. By over-weighting these factors, states will stifle innovation and lock in the system we have. Instead, we want the consortia to design assessment systems for the personalized digital learning environments of 2014.
2) PARCC’s idea of through-course assessments is a good one, but they shouldn’t require everyone to use the same benchmark assessments at the same time–that will prove to be misaligned with instruction in most schools, particularly high-performing schools with a well-developed curriculum. Most digital learning programs/schools have built-in assessments, so what is proposed is redundant. We’re advocating for a plug-and-play approach where districts/networks can show their state they’ve got this covered.
Like #1, this comes down to comparability. The shift to digital learning and the resulting flood of keystroke data will yield thousands of entries in a student’s standards-based gradebook–that will be more than enough data correlated to the Core to make comparisons of academic growth and of kids at/above grade level. I think it will be possible to build lists of tests and testing components that can be certified for comparability rather than give everyone the same test.
3) New tests will hinder rather than help competency-based models. I’m afraid that consortia/states will get cheap and only do end-of-year tests rather than systems that measure progress along the way. Take FL Virtual, for example: they have rolling enrollments (the gold standard). When kids finish a course, they should take the test, not wait until May.
In short, I don’t want one big, cheap end-of-year test used for more than it should be. I don’t think this was the direction contemplated by the RttT assessment grants. I don’t want it to lock in the teacher-centric age-cohort model for another decade. I don’t want simple assessments; I want complex, performance-based assessments. I want a system that will incorporate all the performance feedback that students will be receiving a few years from now. I understand the cost pressures, but I think we can build assessment systems that work better and may be cheaper than what is being contemplated.