A writer in the Daily Iowan is worried that automated essay scoring will kill creativity. He conflates two issues. Online scoring engines use the same rubrics as human graders to score essays. Any ‘standardization’ of writing is a function not of the scoring method but of the nature of the prompt: if a state requires every 8th grader to write a five-paragraph essay every year, that may lead to formulaic teaching. But that’s a teaching issue driven by a testing issue, not a scoring issue.
The sponsors of the Automated Student Assessment Prize (ASAP) want students to write more on state tests and in classrooms. They have the same ends in mind as the op-ed author.
The writer’s suggestion that scoring engines will replace teachers is ridiculous. As the three case studies in this blog illustrate, classroom use of scoring engines lets teachers assign more writing and focus on higher-order feedback. Here’s an effort to set the record straight.
There was also a concern about scoring the content of an essay. In the large-scale demonstration held in February, eight scoring engines performed at least as well as, and in some cases better than, expert graders on source-based essays. In fact, the automated engines scored essays requiring substantial content knowledge even more accurately than they scored the non-source data sets.
This writer and other critics are simply sick of standardized tests. Last week MindShift recapped this resistance. That should not be surprising, because most states use old psychometric technology to administer inexpensive tests with little real performance assessment. Given this state of data poverty, we’ve been asking these tests to do more than they were designed for: to hold schools accountable, to manage student matriculation, to evaluate teachers, and to improve instruction.
But remember the state of the sector in the early 90s before state tests were widely used. There was no data, chronic failure was accepted, and the achievement gap was largely unrecognized. Measurement is key to improvement.
The foundational benefit of the shift to digital learning is formative (more specifically, informative) assessment. Students will benefit from constant feedback during and after every learning experience. Essay graders will soon be incorporated into word processors and used as commonly as spell-check. Data visualizations (e.g., badges and bar charts) will make academic progress, or the lack thereof, evident to students, teachers, and parents.
The answer is more assessment, not less, but much of it will occur in the background, behind engaging learning activities.