The Automated Student Assessment Prize (ASAP) invited data scientists worldwide to take on the challenge of creating new approaches to high-stakes testing for state departments of education. Sponsored by the William and Flora Hewlett Foundation, the first two phases of ASAP were designed to demonstrate the capabilities of existing summative writing assessment products and to accelerate innovation in machine scoring. Phase one focused on the ability of technology to assess long-form constructed responses (essays), while phase two focused on short-form constructed responses (short answers). ASAP was designed to answer a basic question: Can a computer grade a student-written response on a state-administered test as well as or better than a human grader?
The results of these two studies present unique opportunities to:
- Establish standards for state departments of education to utilize assessment technologies.
- Advance the field of machine scoring in the application of student assessment.
- Introduce new players with different and disruptive approaches to the field.
Download “ASAP Case Study”