From Aspiration to Application: Working Examples of Assessment in the Service of Learning
 
For decades, educators have aspired to transform educational tests and assessments from mere means of evaluating students’ progress into catalysts for learning—a vision championed by Edmund W. Gordon. The third volume in the Handbook of Assessment in the Service of Learning series reflects a shift from aspiration to application. Collectively, the chapters move from the ‘why’ to the ‘how’, demonstrating that a measurement’s capacity to improve learning depends on the constructs it purports to measure and on the quality and utility of the feedback it provides to learners.
The examples in Volume III braid together core design principles articulated by Baker, Everson, Tucker, and Gordon: transparency, explicit focus, support for learner processes (such as motivation and metacognition), modeling of growth, and validity. These efforts are briefly described below.

Assessment as a Catalyst for Learning
Roberts, Younger, Corrado, Felline, and Lovato discuss how PBS KIDS assesses early learning through play, leveraging stealth assessment to turn gameplay choices from millions of children into diagnostic data at scale.
Stone-Danahy, Escoffery, Tabony, and Packer discuss Advanced Placement (AP®) innovation, detailing how AP Art and Design transforms a high-stakes credential into a year-long learning journey that motivates revision and reflection. Students document their experimentation and inquiry, and the rubrics validate the creative process, not just the final product.
Baker and Chung take readers behind the scenes at UCLA-CRESST for decades of research and development, presenting four major areas: writing assessment, rifle marksmanship, evaluation of artificial intelligence systems, and games-based learning and assessment.
Pellegrino and Everson examine challenges and solutions in designing stackable, instructionally embedded, and portable assessments grounded in the richness of the Next Generation Science Standards (NGSS).
DiCerbo illustrates how Khan Academy guides students with scaffolding and feedback. Its digital platform collects and analyzes performance data to offer skill-level insights, and the chapter explores how generative AI-powered applications such as Khanmigo might enhance what and how we assess.
Buckley and Snow share lessons on using Game-Based Assessment to measure a broader range of skills (such as creative problem solving and collaboration) within Roblox’s workforce selection context.
Odemwingie and Cockrell discuss how the Achievement Network (ANet) works alongside schools, designing assessments to guide teaching, inform learning, and strengthen instructional coherence.
Op den Bosch, Charlot, Deverel-Rico, and Lyons explore how RevX’s assessment system aims to nurture identity development and skill-building through its DEEDS framework (Discover, Examine, Engineer, Do, Share), structuring a cycle of real-world problem-solving, agency, and personal reflection.
Sutherland, Schreuder, and Townley-Flores zero in on how older students struggle with reading because of inadequate decoding skills. The Rapid Online Assessment of Reading (ROAR) is a lightweight, gamified tool that diagnoses existing reading skills and areas of instructional need.
Liu, Liu, Sherer, and LeMahieu articulate approaches to leveraging AI and technology to measure skill acquisition, particularly communication, collaboration, and critical thinking, with validity and reliability.
Hanno, Horner, Portilla, and Hsueh pinpoint the lack of easy-to-use, scientifically sound tools to inform early education practitioners and leaders. The Measures for Early Success Initiative supports the development of child assessments that accurately capture what young learners know and are able to do. Co-design that centers voices of pre-K educators and families helps create relevant tools grounded in their lived contexts.
Yowell and Delacruz propose open badges as assessment innovation, exploring how these flexible, portable digital credentials might recognize a learner’s full spectrum of capabilities and create more transparent pathways in the AI-enabled workforce.
Betts, Gunderia, Hughes, Owen, and Bang discuss how a Personalized Mastery Learning Ecosystem aims to provide timely, actionable feedback and data-informed recommendations to embed formative assessment and adaptive learning in core teaching and learning practices.

To Assess is to Teach and to Learn
Skills for the future—including sensemaking, collaborative problem-solving, and navigating relational complexities—rely on a learner’s ability to forge novel connections between varied inputs. Professor Gordon’s vision suggests that we shift our focus from mere status determination to the cultivation of intellective capacity. Assessments must become instruments of inquiry—not just evaluation—embedded directly in teaching and learning processes. Like muscles strengthened by exercise, the mind’s capacities are enriched when challenged to adjudicate contradictions and engage in strenuous, meaning-making activities of generating novel relationships between phenomena. Measurement must document this dynamic process of adaptation and adjustment, giving us a portrait of possibility rather than a static snapshot based on summative scores.
The volume provides a collection of practical examples for architects of future assessment systems. The chapters affirm that the goal is to move beyond mere measurement into meaningful improvement, creating coherent, learner-centered frameworks in which assessment becomes an integral act of teaching and learning. These tangible examples are offered not as fully formed solutions but as invitations to reflect, iterate, and build upon; they equip test developers, researchers, and educators to construct assessment systems that genuinely promote learning and achievement.
 
        Edmund W. Gordon
 
        Eva L. Baker
 
        Howard T. Everson
 