Good schools know how every student is doing in every subject every day. They don’t need a week of testing in the spring to tell them what they already know.
For 25 years, states have imposed standardized tests on schools as an external check on student progress. The tests provide an inexpensive, comparable source of information that helps identify schools that are struggling and groups of students being poorly served. But let’s face it, everyone hates these tests.
While most OECD countries have sweated validity (good measures of what’s important), the US has been preoccupied with reliability (inexpensive measures of what’s measurable). The development of Common Core State Standards was a national effort to raise expectations and implement better tests. The addition of more writing made the tests longer and just added to the backlash against testing.
One problem with state-mandated tests is that they don’t take advantage of everything teachers know about their students. With the shift to digital learning, many students have experienced a big increase in formative feedback from adaptive assessments, embedded quizzes, and online resources like Khan Academy. These new forms of feedback don’t integrate very well (because we still have an interoperability problem), but they set the stage for what David Conley calls cumulative validity.
An example of cumulative validity is 500 data points from six sources collected over eight months about a middle-grade student’s progress on ratios and proportions. With that much information, you have a pretty good idea of what they know and you don’t need to start from scratch with 50 new questions. Yet that’s exactly what standardized tests do. (Adaptive assessments can automatically adjust difficulty and shortcut the process, but they still don’t take advantage of what is known about a learner’s trajectory.)
States are beginning to take advantage of cumulative validity. In partnership with NWEA, Louisiana will shorten its end-of-year test, replacing it with several brief assessments given throughout the year that measure what students have learned using passages from books they have read.
The End of the Big Test
To curtail or end standardized testing, states could verify that good systems have more than adequate student performance data. That would involve a three-step process:
- Districts and networks would petition for an assessment exemption by submitting a cohort of learner profiles. (To promote security, these profiles could be anonymized.)
- A comparability analysis would determine if the system can reliably and accurately report student progress (both achievement levels and growth rates). If yes, the system would be granted a three-year testing exemption.
- After an initial exemption period, states could sample student profiles to periodically check the accuracy of local systems.
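As a rough illustration of the comparability analysis in step two, a state might check how often locally reported performance levels agree with an anchor measure for a sampled cohort. This is a hypothetical sketch: the function name, the one-level agreement rule, the 0.85 bar, and the data are all invented for illustration, not an actual state protocol.

```python
# Hypothetical sketch of a comparability check: compare locally reported
# performance levels against an anchor measure for a sample of students.
# Names, thresholds, and data are illustrative, not a real state protocol.

def comparability_rate(local_levels, anchor_levels):
    """Fraction of sampled students whose local and anchor performance
    levels agree within one level."""
    assert len(local_levels) == len(anchor_levels)
    agree = sum(1 for l, a in zip(local_levels, anchor_levels) if abs(l - a) <= 1)
    return agree / len(local_levels)

# Sampled cohort: performance levels 1-4 from the local system and the anchor.
local = [3, 2, 4, 1, 3, 2, 4, 3]
anchor = [3, 2, 3, 2, 3, 1, 4, 3]

rate = comparability_rate(local, anchor)
# Grant a three-year exemption only if agreement clears a chosen bar.
exempt = rate >= 0.85
```

A real analysis would look at both achievement levels and growth rates, as the proposal describes, but the basic shape is the same: sample profiles, compare against an anchor, and certify systems that clear the bar.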
This approach would work adequately for reading, writing, and math (to be fully enacted, it would require an update to federal policy, but that seems doable). Given the interest in broader aims, networks of schools that share outcome frameworks hold real promise.
The Potential of Diploma Networks
Diploma networks are affiliations of schools that share goals, systems, and supports. They help schools adopt a comprehensive outcome framework and assess student and school progress. Schools share a learning model, a platform and teacher supports.
International Baccalaureate is an example of a diploma option available to motivated students. With a learner profile and curriculum requirements, IB represents a comprehensive outcome framework but falls short of a schoolwide model with strong systems and supports.
With two managed schools and five affiliates, Building 21 is an example of a diploma network with shared goals, systems, and supports. Another example is the Place Network, a group of 10 rural microschools sponsored by Teton Science School.
By adopting broader aims including elements of character development and social and emotional learning, a growing number of charter management organizations including KIPP and DSST act like diploma networks.
Many smaller school districts (like those belonging to the League of Innovative Schools) also operate like a diploma network with a comprehensive outcome framework, shared tools and supports.
Diploma networks can encourage groups of schools to adopt broader outcome frameworks. States that have expanded graduate profiles (including Virginia, South Carolina, and Vermont) could approve these comprehensive outcome frameworks as well as provide assessment exemptions.
In the same way that New Hampshire is planning to phase districts into the PACE assessment pilot, states could create grants and incentives for schools to join certified diploma networks. Over three to five years, a state could curtail and then eliminate testing and rely on the authorization of diploma networks. Borrowing from the charter sector, all public schools could operate under a performance contract and be part of a diploma network. Schools in diploma networks would promise to report achievement and growth in consistent and accurate ways while maintaining adequate performance.
Instead of administering a state test, a state like Colorado could authorize two dozen diploma networks, some in geographic clusters, some thematic groups (like career and technical schools), and some managed networks like DSST and Strive.
How Autoscoring Will Help
Artificial intelligence is widely used to review and rate hiring profiles. Similarly, portfolios of student work can be automatically scored on many dimensions. In the last few months, these capabilities have matured to the point that, with large enough data sets, scoring engines no longer need the extensive training they once did. This sort of permissioned sampling of student work would allow states to periodically check the quality of local systems.
Dallas students are beginning to build blockchain profiles that include certifications, credits, and artifacts. In the near future, permissioned colleges will be able to review these profiles using their own selection criteria and offer admission to qualifying students. Once colleges can quickly and accurately review 100 writing samples, 50 science lab reports, and 50 computations, a college entrance exam will be of little value to students or colleges.
It’s time to end a century of standardized testing and focus instead on helping young people do work that matters. We no longer need to interrupt learning and test kids to find out what they know. A couple of brave state policy leaders could trigger what would be a quick change because everyone hates the tests.
For more, see:
- The Future of Testing
- David Conley on Next Generation Assessment
- How School Administrators Can Support and Promote Formative Assessment
This post was originally published on Forbes.