Some writers and teachers of writing have been quick to criticize or make light of the Automated Student Assessment Prize (ASAP). The team running this project (which I co-direct) has been driven by a clear mission: we want students to write more on state tests and in classrooms. We want teachers to have more help providing constructive feedback.

The Hewlett Foundation funded the project because it wants the PARCC and SBAC tests, funded by Race to the Top and slated for use by most states, to be as good as possible, with lots of writing. We know from the last decade that the quality of state tests influences the quality of instruction.

As I noted last week, I want, “Less grading, more teaching. More feedback, less waiting. Fewer worksheets, more writing. Less multiple choice, deeper learning. These are the reasons I’m spending a good portion of 2012 working on online assessment. Better assessment tools mean better state tests and richer teaching and learning.” For schools and states, intelligent scoring will help create intelligent systems.

ASAP academic advisor Dr. Shermis said, “What the critics in the writing community don’t seem to get is that proponents of AES are not out to replace teachers of writing, but rather to supplement instruction and assessments with the machine technology.”

A few writing instructors have small classes and can provide weekly feedback, but Dr. Shermis notes that for “the remaining 99.997 percent, the technology has a place in providing assistance to overworked instructors.”

One writer quotes Les Perelman of MIT (who admits he has reviewed only one AES product) on ways he can game the system. We tested nine products using 24,000 essays in eight data sets from six states. Compared to 50 states administering multiple-choice tests, we’d be happy to host more tests of automated essay scoring to make sure kids can’t game the system. The writer asks, “Who’s really interested in this robot grader technology?” The answer is anyone who wants students to write more on state tests and in the nation’s classrooms.

One critic wondered how much time students had to write, arguing that more time would mean better essays (technically speaking, duh). The point of the study is to improve state tests, which come with time limits. We want the best possible tests given the constraints of state testing programs.

Other writers argue that the engines just use length as a proxy for quality. We anticipated that and added a word-count score to the leaderboard of the open competition hosted on Kaggle.com. Of the 130 teams competing for the $100,000 prize, 104 have outscored a simple word-count engine.
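A length-only baseline like the one on the leaderboard is easy to sketch. The version below is a minimal illustration, assuming the competition metric was quadratic weighted kappa (the agreement statistic used on Kaggle for ASAP); the cutoffs and function names are illustrative, not the actual baseline used in the study.

```python
from collections import Counter

def quadratic_weighted_kappa(a, b, min_rating, max_rating):
    """Quadratic weighted kappa between two lists of integer ratings."""
    n = max_rating - min_rating + 1
    # Observed agreement matrix
    O = [[0.0] * n for _ in range(n)]
    for x, y in zip(a, b):
        O[x - min_rating][y - min_rating] += 1
    # Rating histograms for the expected (chance) matrix
    ha = Counter(x - min_rating for x in a)
    hb = Counter(y - min_rating for y in b)
    total = len(a)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = ((i - j) ** 2) / ((n - 1) ** 2)  # quadratic weight
            expected = ha[i] * hb[j] / total
            num += w * O[i][j]
            den += w * expected
    return 1.0 - num / den

def word_count_score(essay, cutoffs):
    """Score an essay purely by length: one point per cutoff reached."""
    words = len(essay.split())
    return sum(1 for cutoff in cutoffs if words >= cutoff)
```

Beating this baseline means an engine is using something beyond length; that 104 of 130 teams did so is the point of the leaderboard comparison.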

Efforts to make the study, presented today at NCME in Vancouver, more robust included both source-dependent and non-source-dependent essay prompts and a wide variety of grading protocols. We delivered data that would intentionally test the limits of the scoring engines, and Dr. Shermis’s paper explains that variety in detail.

It is now our intention to host Phase Two of ASAP, which will focus on automated scoring of short answers (<150 words). We believe this will prove a more difficult challenge. We’re also preparing to test the automated scoring of math and logical reasoning. And we are developing a trial to test applications in a formative setting.

Peter Foltz from Pearson, one of the nine vendors to participate in the demonstration, provided a great example of why automated scoring is such a powerful instructional tool. “One middle school teacher tells us that he assigns 23 essays a year to 142 students and each student does about 8 revisions. That’s about 27,000 essays.” In classrooms like this one using WriteToLearn, smart scoring has the potential to support a dramatic increase in writing.
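The arithmetic behind that quote checks out:

```python
# 23 essays per year x 142 students x ~8 revisions per essay
total_drafts = 23 * 142 * 8
print(total_drafts)  # 26128 -- roughly the "about 27,000 essays" cited
```

No teacher can give timely feedback on 26,000-plus drafts by hand, which is the volume argument for machine assistance.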


Open Education Solutions, where Tom is CEO, is managing ASAP with support from the Hewlett Foundation. Dr. Mark Shermis, Dean of Education at the University of Akron, is the academic advisor, the author of the report, and a leading expert in automated essay scoring.

1 COMMENT

  1. An auto essay and short-answer scoring program would be awesome in conjunction with a program like WebAssign, which can grade numerical answers and single-answer questions for different math and science textbooks. I am a math teacher who wants her students to actually write and explain what they are doing in their math work, but the turnaround time for proper feedback is daunting. Implementing this kind of writing and demonstration of numerical concepts is the challenge set before already overstretched teachers.
