One of our goals here is to begin creating a data-abundance mindset in U.S. K-12–prepping for policies and practices informed by big data about anywhere, anytime learning. To that end, we like to highlight interesting projects and proposals–and we have a good one today.
Mark Shermis, Dean of Education at the University of Akron, is an academic advisor to an assessment innovation project that we’re working on and one of the most knowledgeable folks we’ve found when it comes to automated scoring. He contributed a chapter to an academic book called Innovative Assessment for the 21st Century: Supporting Educational Needs. He defines automated essay scoring (AES) as the evaluation of written work via computer-based analysis.
Dr. Shermis laments the fact that US secondary students receive an average of three assessments per semester in a writing class. We’d both like to see students writing every day and getting feedback instantly.
Here’s Mark’s proposal:
Rather than administer a high-stakes writing test in the spring of each year, administer about 15 AES-scored essays throughout the year (it could be more). The electronic portfolio could monitor student progress from the beginning of the year through to the end. Toward the end of the year, average the scores for the last three writing assignments and use that average as the accountability measure for the domain of writing. To keep the process secure, the topics for the last three prompts can be controlled by the state department of education and released on a strict schedule.
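The accountability piece of the proposal is simple arithmetic. Here’s a minimal sketch of what it could look like; the function name, the 1–6 rubric scale, and the sample scores are all illustrative assumptions, not part of Shermis’s proposal or any real AES product:

```python
def accountability_score(portfolio_scores):
    """Average the final three AES essay scores in a student's portfolio.

    `portfolio_scores` is a chronological list of rubric scores
    accumulated over the year (the proposal suggests ~15 entries).
    """
    if len(portfolio_scores) < 3:
        raise ValueError("need at least three scored essays")
    last_three = portfolio_scores[-3:]
    return sum(last_three) / 3

# A hypothetical year of essays on a 1-6 rubric; the accountability
# measure is the average of the final three entries (5, 6, 6).
scores = [3, 3, 4, 3, 4, 4, 5, 4, 5, 5, 5, 6, 5, 6, 6]
print(accountability_score(scores))
```

Basing the measure on three state-controlled prompts, rather than a single sitting, smooths out one bad day without losing comparability across schools.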
Whether you like the policy proposal or not, AES is beginning to encourage a lot more writing, with frequent entries into an electronic gradebook and detailed feedback (e.g., Pearson’s Write to Learn, CTB’s Writing Roadmap).*
In many cases, AES is as accurate as human grading. The Common Core demands of writing to text (e.g., narrative, expository, descriptive) will challenge the current generation of scoring engines. An upcoming demonstration will outline the contours of current capabilities.
Dr. Shermis points to a number of benefits of his proposal:
- Integration, not competition, with instruction–assessment that informs instruction
- Instant feedback for students
- A reliable picture of student abilities and a big trail of evidence
- Realistic expectations in a no-surprises environment
We could add security to the list. It will be harder to tamper with AES results than with paper-and-pencil exams. With a big trail of evidence from weekly entries, a big change in score (up or down) on a high-stakes exam would be easy to spot.
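Spotting such a change could be as simple as comparing an exam score against the recent portfolio trend. A minimal sketch, assuming rubric-point scores; the function name and the 1.5-point threshold are illustrative choices, not a validated standard:

```python
def flag_anomaly(weekly_scores, exam_score, threshold=1.5):
    """Flag an exam score that departs sharply from the portfolio trend.

    Compares the exam score to the mean of the most recent weekly AES
    entries (up to five); returns True if the gap exceeds `threshold`
    rubric points in either direction.
    """
    recent = weekly_scores[-5:]
    recent_mean = sum(recent) / len(recent)
    return abs(exam_score - recent_mean) > threshold

# A student scoring steady 4s who suddenly posts a 6 gets flagged;
# a 5 sits within the expected range.
print(flag_anomaly([4, 4, 4, 4, 4], 6))  # flagged
print(flag_anomaly([4, 4, 4, 4, 4], 5))  # not flagged
```

In practice a state would tune the window and threshold to its rubric, but the point stands: a weekly evidence trail makes sudden jumps much easier to detect than a single annual data point allows.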
The benefits of instructional tech that can double as a test are numerous. It’s great to have academics like Shermis pushing the boundaries of technology and policy and helping to create the big data future of learning.
* Disclosure: Pearson is an investor in Learn Capital where Tom is a partner