Good Assessment Key to Strong Accountability
Some #EdReform, #EdEquality, and #EdPolicy colleagues thought my blog post Assessment Will Advance When it Moves Into the Background suggested that I don’t support summative assessments or their use in school accountability systems. This passage may have prompted that reading:
We’ll need to invent our way out of the Cost-Comparability Conundrum, our reliance on cheap standardized tests given under the same conditions. The datasets from different aligned instructional systems will provide equally valid measures of student achievement compared to standards. We’ll be able to compare achievement and growth with sufficient comparability despite the fact that students took none of the same assessments.
Here’s my main concern: we try to use cheap year-end tests to do too many jobs, and I’m afraid some states will simply move them online (at least then they will be cheaper and faster).
Strong accountability must be supported by good assessment. A month ago, in a blog post about Authorizing High Performance, I wrote:
Good measures, clear targets, transparent results, and strong authorizing are all key to improving educational outcomes. Effective authorizing is the quality linchpin of the Digital Learning Now framework, which suggests that “States should hold schools and online providers accountable using student learning to evaluate the quality of content or instruction. Providers and programs that are poor performing should have their contracts terminated.”
Teachers and students will benefit from the shift to personal digital learning: they will receive more immediate feedback to guide teaching and learning. Students will make more frequent contributions to their electronic gradebooks and portfolios. System heads and policy makers will eventually be able to make some judgments based in part on this body of evidence (using comparability strategies that have yet to be invented).
For the foreseeable future, system heads and states will need to use summative assessments to ensure that schools and programs are producing sufficient gains in, and adequate levels of, student learning. For accountability purposes, as other sources of achievement data are added and new comparability strategies are developed, it will be possible to shift some summative assessment to NAEP-like sampling strategies. Incorporating adaptive strategies into summative assessments can also reduce the number of questions required.
The Hewlett Foundation, in conjunction with the state testing consortia PARCC and SBAC, is sponsoring a demonstration of automated scoring of essays and constructed-response items. The project seeks to identify gaps between current intelligent scoring capabilities and what the Common Core requires, and to accelerate innovation to close those gaps. Better, faster, cheaper scoring of essays and constructed responses will promote better summative assessments. Many students will soon benefit from receiving daily writing feedback from the same scoring engines.
The shift from age cohorts to competency-based learning with rolling year-round enrollments makes the notion of a year-end exam less relevant. As a director of iNACOL, I appreciate Susan Patrick’s leadership on competency-based learning; several of her recent reports on the topic are listed below. In a competency-based system, students progress when they have demonstrated that they have met learning expectations. That means gateway assessments must be available on demand or on a frequently scheduled basis. This creates two choices for policy makers: 1) create exams to manage matriculation and a separate dipstick assessment (e.g., NAEP) to check system quality, or 2) use a single set of exams, with some schedule flexibility, to manage both matriculation and accountability. Most states will choose the latter for cost reasons, but the compromise is less than ideal for either purpose. PARCC and SBAC will help work out these details. In the long run (7-10 years), the shift to personal digital learning will add so much data to the system that it will be much easier to meet different data needs with little compromise.
I am worried that the shift back to state accountability systems (via waivers and the next ESEA) could mean a reduced commitment to strong accountability. NCLB, at its core, was a baseline school accountability system that required progressive intervention in failing schools. NCLB had plenty of problems that never got fixed, but many of us hoped it would put an end to accepting chronic failure. It was only partially successful.
Every state should make the good-school promise: every family will have access to at least one good neighborhood school and a handful of good online options. Good assessment and strong accountability systems can help make that promise a reality.
For more see:
- Clearing the Path: Creating Innovation Space for Serving Over-Age, Under-Credited Students in Competency-Based Pathways (Chris Sturgis, Bob Rath, Ephraim Weisstein, and Susan Patrick, January 2011)
- Digital Learning Now Report: 10 Elements (December 2010)
- When Success is the Only Option: Designing Competency-Based Pathways for Next Generation Learning (Chris Sturgis and Susan Patrick, November 2010)