What Detroit and Chicago Can Teach Us About Student Growth and School Rating Systems

By: Chris Minnich

For many years, schools in the U.S. have been rated based on a single measure—how well students do on the annual summative test. With the passage of the Every Student Succeeds Act (ESSA) in 2015, however, states gained the opportunity to build accountability systems that consider additional measures of student and school success. States have embraced the opportunity to look more holistically at school performance. In addition to including nonacademic indicators—such as chronic absenteeism and school climate—in their ESSA plans, they have also added other measures of academic success, most notably academic growth. All 50 states and the District of Columbia featured academic growth in their recently approved ESSA plans, though the nature of its inclusion varies widely—and this matters. Recent headlines help us understand why.

In Michigan, a new A–F grading system mandated for Detroit Public Schools made the news because it will predominantly weight academic growth, and some are hoping that the system will be adopted by the rest of the state. And in Illinois, the new state report card, in which growth factors carry significant weight, captured attention for giving most schools a passing grade. To make it even more interesting, Chicago Public Schools, which has its own school rating system that heavily weights growth, shared the headlines for yielding results that were in some cases at odds with the state’s ratings.

These developments illustrate two key emerging issues: how heavily academic growth is weighted in school accountability systems relative to levels of student proficiency, and how that growth is measured. These factors are critical if we are to create accountability systems that recognize the contributions that schools make to student progress—the most appropriate measure of school effectiveness. They are also essential if we want to reduce bias against educators and schools serving students in diverse, high-poverty communities—like many of those found in Detroit and Chicago.

In systems that weight proficiency heavily, schools serving students from disadvantaged backgrounds often come up short, even when they move their students further during a school year than do schools serving students from more advantaged families. This doesn’t mean proficiency shouldn’t be part of the equation; it is also important. Proficiency standards indicate whether students are meeting grade-level expectations, whether they should be considered for additional supports or enrichment programs, and whether they are prepared for college.

But rating systems that use growth as a primary measure, rather than those that weight proficiency heavily, are far fairer to schools serving students from disadvantaged backgrounds. They provide a more detailed picture by allowing us to differentiate between schools with low achievement scores but high rates of growth, and those with low achievement and little growth. They also give more useful information to educators and acknowledge the efforts of those in high-need communities who are working hard to change the odds for their students.

A recent national study by NWEA of 1,500 randomly selected schools supports the value of weighting growth over achievement in accountability systems. The study, Evaluating the Relationships Between Poverty and School Performance, found that many schools with low achievement were producing average or above-average growth. Strikingly, this was true for 60% of schools in which more than 90% of students were eligible for free or reduced-price lunch. Conversely, student growth in schools with few disadvantaged students varied widely, calling into question the assumption that students educated in more affluent schools are learning at higher rates.

The difference between the Chicago Public Schools and the State of Illinois rating systems, however, shows that it’s not as simple as including growth in accountability formulas, or even as weighting growth more heavily than proficiency. How growth is measured also matters. Growth measures that track changes in summative test performance alone don’t capture how much students learn over the course of a school year—the most crucial element in a student’s pathway to proficiency. Moreover, because this approach compares spring scores to spring scores, whatever learning is gained or lost over the summer is attributed to the school, making the results potentially biased against schools serving primarily underserved populations, whose students are more likely to experience summer learning loss.
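
To make that arithmetic concrete, here is a minimal sketch using entirely hypothetical scores; it is not NWEA’s or any state’s actual methodology, just an illustration of how a spring-to-spring comparison folds summer loss into a school’s growth number while a fall-to-spring comparison does not.

```python
# Minimal sketch with hypothetical test scores (all numbers invented).
# It contrasts a spring-to-spring growth measure with a within-year
# (fall-to-spring) measure for a single student.

spring_last_year = 210   # score at the end of the previous school year
fall_this_year = 204     # score at the start of this year, after a summer slide
spring_this_year = 219   # score at the end of this school year

# Year-over-year measure: the summer slide is silently charged to the school.
spring_to_spring_growth = spring_this_year - spring_last_year   # 9 points

# Within-year measure: only learning during the school year is counted.
within_year_growth = spring_this_year - fall_this_year          # 15 points

print(f"Spring-to-spring growth credited to the school: {spring_to_spring_growth}")
print(f"Growth produced during the school year itself:  {within_year_growth}")
# The 6-point gap between the two figures is the summer loss that the
# year-over-year approach attributes to the school.
```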

In other words, school rating systems that don’t consider within-year growth are at risk of misidentifying the schools in need of support and disrupting effective practices that are moving the needle for our most socially and economically disadvantaged populations. Consider, for example, Moos Elementary School in Chicago. Moos was identified as among the lowest-performing 5% of schools in the state under the Illinois system, which defines growth as year-over-year improvement in summative test performance. Under the Chicago Public Schools School Quality Rating Policy (SQRP), however, Moos Elementary earned a top rating.

Why the difference? Because the SQRP measures how much students learned during the year, regardless of achievement level on the summative test. It measures their academic growth from the beginning to the end of the school year in the context of how similar students grow nationally and looks at how well schools are supporting the growth of priority populations. Moos Elementary, which serves an economically disadvantaged student population of mostly Hispanic and African American students, earned 4.3 out of 5 points on the SQRP, largely because it earned high marks on the SQRP’s growth-related categories. The Chicago rating system is effective and fair because the only way to close achievement gaps is by increasing opportunity to learn and grow, and schools doing this well should be recognized and encouraged to continue their best practices.

As Detroit Public Schools Community District Superintendent Nikolai Vitti stated in The Detroit News, an accountability system that measures how much students learn during the year produces a more accurate picture of “where schools are rising and improving, and where schools continue to fall short of expectations,” while also instigating “practical change so more students are showing growth and moving to grade level performance.”

These “pulled-from-the-headlines” examples show that education leaders and policymakers are examining how they measure student and school success and reevaluating what’s in the best interest of students. They are grappling with nuance—something we encourage students to do so they develop more sophisticated and holistic ways of experiencing the world, and something teachers can do better with measures that help them tailor instruction to student needs. A paradigm shift is underway, and it’s both exciting and critical to our ability to create equity in opportunity and outcomes for all students.

This was originally posted at learningpolicyinstitute.org.

Chris Minnich is the Chief Executive Officer of NWEA, a research-based nonprofit that creates assessment solutions that precisely measure growth and proficiency—and provide insights to help tailor instruction. Before joining NWEA in January 2018, he served as the Executive Director of the Council of Chief State School Officers (CCSSO).


