Next Gen Assessment

Science teacher Vilimaka Foliaki asked about my views on next gen assessment. We have to start with the question: assess for what purpose? Assessment has traditionally served at least four purposes: guiding learning, guiding advancement, and making judgments about educator effectiveness and school quality.
1. Guide learning.  Regular formative assessment, like weekly quizzes at KIPP, is currently considered best practice, but it still seems pretty crude to me.  Content-embedded assessment will be the most important development of the decade. Instead of weekly quizzes, students will soon receive dozens of forms of instant feedback daily from learning games, computer-scored writing, end-of-unit quizzes, simulations, and virtual environments.
This flood of achievement data will guide regular goal setting conversations between a student and a teacher/advisor.
The casual game space suggests that assessment can be used to promote persistence, achievement, and self-directed progress (an additional purpose for assessment beyond the traditional four).
2. Guide advancement.  In addition to a standards-based gradebook of scores, student advancement should be based in part on periodic public demonstrations of learning. The opportunity to dive deeply into a subject and present findings (e.g., science fair) is an important form of assessment as well as a valuable learning experience.
I think smaller chunks of demonstrated competence (i.e., merit badges) will prove more useful for managing matriculation in competency-based environments than school years and courses.
The PARCC consortium is planning through-course assessments to guide instruction and inform advancement.  However, the thick portfolio of performance data resulting from 10,000-keystroke student days will obviate these imposed assessments, which are likely to be irrelevant to student learning experiences.
3. Judge school quality.  I haven’t mentioned standardized measures to this point, but a NAEP-style dipstick sample assessment could overlay the system described above just to verify two key metrics: student achievement (i.e., kids on track) and student growth (i.e., kids making at least one year’s worth of progress per year).
4. Judge educator effectiveness.  Multiple measures should be used to make judgments about educator effectiveness.  Obviously, we must include evidence that answers the question “are kids learning,” but the shift to personal digital learning is introducing new instructional experiences and new differentiated/distributed staffing models that make the ‘effectiveness’ question more complicated.   So build temporary agreements and update them every year or two.
I recently heard Rick Hess of AEI caution against the Swiss-army-knife view of teachers.  He and Yong Zhao of Oregon suggested that we should be smarter about identifying and leveraging strengths.  Larry said High Tech High leaders can see an informative web based on the results of asking the question, “Who do you get your best instructional ideas from?”  The NYC ARIS system attempts to combine performance management with knowledge management by combining student performance data and user ratings to queue up recommended resources.
Louis Gomez of the Carnegie Foundation made an interesting comment about the different school staffing challenges in the two Americas. In Growth America, schools lose staff members every few years, but maintaining and building a staff isn’t a problem. In Declining America, staff turnover is lower but skill levels are flat; the problem isn’t turnover, it’s the staff that is staying.
 
Originally posted March 11, 2011

Tom Vander Ark

Tom Vander Ark is the CEO of Getting Smart. He has written or co-authored more than 50 books and papers including Getting Smart, Smart Cities, Smart Parents, Better Together, The Power of Place and Difference Making. He served as a public school superintendent and the first Executive Director of Education for the Bill & Melinda Gates Foundation.
