There was lots of press on the OpenEd-led Automated Student Assessment Prize (ASAP) this week. Unfortunately, most of the stories made dumb robot jokes (no robots here, just predictive algorithms), suggested the technology would replace teachers (no, just increase the amount of writing American kids do on tests and in classrooms), or gave air time to a professor who thinks Word is better than essay graders (no, it’s a word processor that is a couple of generations behind the tested engines).
Dr. Shermis did a great job describing the successful demonstration of essay graders (here’s the report) in a radio interview on KPCC, summarized here: Better Tests, More Writing, Deeper Learning.
What we’re really excited about is How Formative Assessment Supports Writing to Learn. Check out this blog for three classroom examples of how feedback tools boost the amount and quality of student writing.
Last week, Auto Essay Scoring Headlined NCME and ASAP Addressed Critics.
Here’s more background on the intent and launch of the prize on GettingSmart.com:
- Automated Essay Scoring Demonstrated Effective in Big Trial
- Less Grading, More Teaching, Deeper Learning
- Deeper Learning Not Lighter Journalism
- Getting Ready for Online Assessment
- Hewlett Sponsored Assessment Prize Draws Amazing Talent
- Hewlett Foundation Sponsors Prize to Improve Automated Scoring
- How Intelligent Scoring Will Help Create an Intelligent System
Here’s more coverage of ASAP that got the story more right than wrong:
- Association of American Educators – Study: “Robo-Readers” More Accurate in Scoring Essays
- Education Week – Study Supports Essay-Grading Technology
- Air Talk (89.3 KPCC) – Robots stole my job! Essay grading edition
- NPR (All Things Considered – blog) – Can A Computer Grade Essays As Well As A Human? Maybe Even Better, Study Says
- USA Today – Computer scoring of essays shows promise, analysis shows
- Discover Magazine (blog) – Computers Can Grade Essays As Well As People Can
- New Scientist (blog) – AI graders get top marks for scoring essay questions