By Kathy Dyer
One of the enduring challenges teachers and education leaders confront daily is finding the best ways to meet the diverse learning needs of all students, regardless of their achievement levels. Anyone familiar with this work knows that what works for one student may not work as well, or at all, for another. The growing emphasis on personalized learning in the education community shows both that this challenge is widespread and that we recognize the value of educating students as individuals, teaching each one accordingly rather than as part of an undifferentiated group.
The guiding focus of personalized learning strategies, and the way they are used in the classroom, raises broader questions: how can school- and district-level systems support the instructional approaches that make personalized learning effective for all students, and how can they make it viable school- and district-wide? Because student learning and assessment go hand in hand, the same questions apply to designing thoughtful, coherent assessment systems.
Under No Child Left Behind, however, measuring student proficiency, both as an educational goal and as a way to evaluate school performance, became the prevailing focus over personalized instruction. While this was intended to ensure that all students could meet achievement benchmarks, treating proficiency as the key purpose of measurement has led to a disproportionate focus on students who are proficient or on the cusp of proficiency, perhaps 15 to 20 percent of students in a given school, to the detriment of peers who are not achieving at the same levels. Consider the following questions:
- Which data or metric gets the most focus?
- What metric gets used for goal setting?
- What gets talked about most often in data conversations?
- Which metric is used in teacher evaluation?
In many cases, the answers to these questions will boil down to student proficiency.
As a metric for informing instruction, proficiency has limited value, and assessment data has a limited shelf life. Measuring proficiency means capturing what a student knows at one point in time, such as the end of the year, and comparing that score with a predetermined benchmark. It offers only a one-dimensional view of student achievement rather than information collected on an ongoing basis. And assessment data, particularly data that measures proficiency at a single point in time, effectively “expires” soon after its collection date, making it less useful for informing instruction.
The new Every Student Succeeds Act is changing the way we view assessments, shifting the focus from proficiency alone to a view that incorporates growth, which can empower educators to truly talk about and work with all kids. This shift welcomes analysis and discussion of trend data for individual students, data that can include metrics beyond achievement, such as attendance, discipline, engagement, and program participation. These multiple measures of assessment can provide a more complete picture of each student. When focusing on growth, educators must look at each student individually, regardless of proficiency, to gauge progress or the lack of it. Determining a student's status relative to a benchmark is simply not enough.
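The contrast between the two lenses can be sketched in a few lines of code. This is a minimal illustration, not any real assessment system's logic: the scores, the benchmark value, and the `summarize` helper are all hypothetical, invented only to show that a proficiency metric compares one score to a cut point while a growth metric compares scores over time.

```python
# Hypothetical cut score for "proficient" (invented for illustration).
BENCHMARK = 220

def summarize(fall: int, spring: int, benchmark: int = BENCHMARK):
    """Return (is_proficient, growth) for one student's two scores."""
    is_proficient = spring >= benchmark  # single point in time vs. benchmark
    growth = spring - fall               # progress over time, at any achievement level
    return is_proficient, growth

# Invented fall/spring scores for three students.
students = {"A": (180, 205), "B": (218, 221), "C": (230, 231)}

for name, (fall, spring) in students.items():
    proficient, growth = summarize(fall, spring)
    print(f"Student {name}: proficient={proficient}, growth={growth}")
```

A proficiency-only view counts only students B and C as successes; the growth view shows that student A, though still below the benchmark, made by far the largest gains, which is exactly the information a point-in-time comparison hides.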
While multiple measures of assessment give educators valuable data representing the growth of the whole student, this approach requires a new way of thinking about assessment and, for many schools and districts, a thorough review of the current assessment system. For school and district leaders looking to measure all students more effectively, John Cronin offers the following guiding questions in Multiple Measures Done Right: The 7 Principles of Coherent Assessment Systems to help districts determine next steps:
- What three to five metrics drive decision making in our school or district? Is everything centered on the state assessment, or do we take advantage of other data sources?
- What behavior do those metrics incentivize, and do they encompass all students and require all students to improve?
- Are programs including all kids, and are we sustaining participation of all kids over time? If not, revise the metrics you use to evaluate programs so that they reward not only improvements in performance, but also programs that are increasing the number and diversity of students participating.
For more, see:
- Superintendents Reflect on Their Move Beyond Test Scores
- Innovating Large-Scale Assessment: Design Consideration for Education Leaders
- Performance Assessment in Different Learning Environments
Kathy Dyer is a senior professional development specialist for NWEA, where she designs and develops learning opportunities for partners and internal staff. Follow her on Twitter at @kdyer13.