Few would argue with the statement that every student must be taught by an effective teacher. Research and common sense alike tell us so. Placing an effective teacher in every classroom has become a unifying rallying cry for large-scale reform, accountability efforts, and school funding.
When I entered the teaching profession, my effectiveness was defined by an ability to plan, knowledge of subject-area content, classroom management, sensitivity to the needs of different learners, and increased student achievement through powerful instructional techniques. Thirty years later, these general areas—with the addition of data-driven decision-making—remain the major pillars of effective teaching. In the past decade, leaders in the field such as Charlotte Danielson and Jon Saphier have studied teacher effectiveness in great depth to parse out the components of a successful teacher. Approaches such as peer observation, instructional coaching, and mentoring are frequently used to support teachers’ professional growth.
We begin to splinter into various camps of thought when the conversation turns to measuring effectiveness for the purposes of accountability—be it tenure, increased pay, or a school’s status. Measuring effectiveness to support teacher professional development is, more often than not, welcomed; measuring effectiveness to reward or punish teachers creates the current climate of distrust, tension, and disagreement.
Teacher evaluations, in the current post-No Child Left Behind environment, increasingly rely on student performance measures, which can often mean a single test. Yet none of the assessments of student performance, including state tests, were designed as measures of teacher effectiveness. They were developed to measure students against state or other learning standards or against other students. Interim assessments like Northwest Evaluation Association’s Measures of Academic Progress (MAP) were designed to provide educators with robust information about an individual student’s achievement level and growth that, in turn, would support instructional planning. Indeed, neither summative nor interim assessments were ever intended to be the sole determining factor as to whether a classroom teacher is doing a good job or not.
That said, summative and interim tests are now being used as evidence of student gains in teacher evaluation systems across the country. Whether used within value-added models (which take other factors about students into account) or on their own, assessment results now inform crucial high-stakes decisions.
I would argue that the real opportunity is for school and district leaders to use assessment data, particularly from interim assessments, to help educators grow their expertise. Shouldn’t we spend at least as much time helping teachers learn to use assessment data to improve their craft as we do building evaluation models to judge them? As professionals, teachers deserve (and want) both accurate insight into their performance and resources to strengthen their practice over time—neither of which should be neglected in the effort to improve student learning through teacher evaluation.
Unlike thirty years ago, teachers today have access to a wealth of data about their students. That wealth of data gives us enormous opportunity to help teachers learn to work together in learning communities: to analyze data, identify trends, group students for instruction, engage students in setting their own learning targets, and reflect on their own skills. None of us in the world of education is opposed to the need for stronger accountability. We need to figure out the very best way to achieve it, and I applaud those who are seeking an equitable and fair way to do so. But meanwhile, let’s commit ourselves to supporting and embracing the professional development needs of our educators. As Dylan Wiliam said, why not adopt the ‘love the one you are with’ strategy? I don’t know about you, but I always thought the carrot was a lot more appealing than the stick.

Dr. Anne Udall is currently the Vice President of Professional Development at Northwest Evaluation Association (NWEA). She has spent her professional career supporting students and educators as a teacher, administrator, community facilitator, and author.
Student Data and Educator Evaluation: Focus on Learning and Professional Growth
Across the country, school districts have responded to state and federal calls for heightened accountability in part by reshaping educator evaluation systems. Increasingly, district leaders are introducing student assessment data into the formulas used to inform these processes. This is the second in a three-part guest series, following Dr. Raymond Yeagley’s thoughts last week. In it, the leaders at the not-for-profit Northwest Evaluation Association (NWEA) consider the impact of using data from tests designed for instructional purposes to guide educator evaluations and call for a renewed focus on student learning and professional growth for teachers.