To obtain accurate assessments of student skills, constructed-response items, in which the student provides a written answer, are being widely adopted. This raises questions about the cost of grading such items. "Pricing Study: Machine Scoring of Student Essays" addresses these cost questions. The paper notes that, "In a higher quality assessment, human scoring of student essays can comprise over 60 percent of the total cost of the assessment. Automating this scoring process holds tremendous promise in making higher quality assessments affordable." At large volumes, machine scoring of essays may cost 20 to 50 percent as much as human scoring.
The paper goes on to recommend that, rather than designing items first and addressing scoring afterward, states and consortia should, "Work jointly with the vendor community to develop the type of items that can both assess students' Deeper Learning skills and be efficiently scored by current vendor machine scoring engines."
Download: "Pricing Study: Machine Scoring of Student Essays"