By: Devin Vodicka

Our current K-12 educational system continues to operate on a model of age-based cohorts where students matriculate through a set of disconnected experiences, organized by subject area for elementary schools and by courses for secondary schools, with a net result that few learners complete high school prepared for lifelong learning. This approach, initially inspired by the efficiencies of Prussian schools in the mid-1800s and cemented by the shifts to mass-produced education during industrialism, has outlived its utility and is in urgent need of systemic evolution.

The invention opportunity here is to create new policies, practices, and tools that encourage the development of purposeful goals tied to whole-child, competency-based learning progressions. Building on the successes that we see at the edges of innovation now, we can invent new models that elevate the desired outcomes from a knowledge-oriented perspective to one that includes knowledge, habits, and skills. We can move away from a myopic focus on high-stakes, episodic tests. We can eliminate harmful grading practices like averaging. We can reframe assessment as a means to an end as opposed to an end in and of itself.

These shifts were underway before the COVID-19 pandemic and our current context magnifies the need to make these changes now. Longstanding anchors of our test-focused system such as the SAT and ACT have been suspended or eliminated. Colleges such as the University of California system are adjusting admissions policies to de-emphasize standardized tests or even ignore them altogether. Advanced Placement tests were altered to the point that their value has come into question. International Baccalaureate exams were canceled. State testing, still the lynchpin of many K-12 accountability systems, was canceled.

Metrics that have been used by schools and districts as proxies of learning, such as attendance, were rendered impractical. Many systems changed their grading policies to pass/fail or credit/no-credit. In parallel with the ending of many familiar practices, most students were suddenly immersed in digital learning experiences, many of which offered embedded, real-time feedback, and they also had the opportunity to engage in more self-directed learning.

In the span of several months in the spring of 2020, our K-12 system was suddenly unconstrained, freed of legacies in a way that had been almost inconceivable before the pandemic. What will fill the void? How will we know if students are learning? How will we know if schools are effective? Talk about an invention opportunity.

Measurement and Assessment: Five Guiding Principles

Before we go deeper into the invention opportunity, the following guiding principles are useful to provide context and also to illuminate some of the complexities as we consider how to reframe the role of assessment and effectively measure what matters.

Principle 1: We need to begin with our learners. Historically, we’ve taken an outside-in approach in education, beginning with the needs of policymakers and working from there to implement assessment systems “to” students. We need to reverse that approach and begin with the learners. We must start by designing to inform learning and then move to accountability once we are confident that we’ve achieved the primary purpose of empowering our students. By orienting first to our learners, we ensure that the utility of the assessment is optimized for the person who stands to benefit the most. In addition, by beginning with the learner we can shift the student from being a passive recipient of an externally imposed assessment to an active co-constructor of assessment as part of a meaningful learning cycle.

Principle 2: There are two fundamentally different types of learning outcomes. Some learning follows a fairly linear pathway where there are clear right and wrong answers. This type, referred to as “ladder” learning or as a technical problem, can be mastered and typically can be assessed using software. Technical learning tends to be oriented around knowledge acquisition; as an example, successfully completing two-digit multiplication is preceded by competence with one-digit multiplication. In this mode of learning, the outcomes can be binary (mastered/not yet mastered or competent/not yet competent), and these binary determinations are used to inform advancement and to certify competency.

Other learning is much more adaptive and contextual, with multiple possible “solutions” to open-ended challenges. This type of learning, which is referred to as “knot” learning, typically cannot be mastered, and competency progressions are nonlinear. Habits such as curiosity or creativity, for example, are deeply contextualized and dynamic. Inputs to inform progress are also more complex, requiring self-reflection, peer observation, educator observation, and even external “expert” observation. As an example, a challenge tied to one of the United Nations Sustainable Development Goals (such as “no poverty”) is unlikely to result in a binary outcome, and the valuable outcomes are more directional than determinative. In this case, feedback is designed to inform ongoing growth in the learners’ knowledge, habits, and skills.

Principle 3: To empower lifelong learners we need to implement learning cycles that shift the leadership of the experiences from teacher-led to co-led and then to student-led. In this cycle, the student initially sees, then owns, and finally drives the learning. Evaluation and understanding are critical components of each of these cycles, and they should be thoughtfully constructed to build learners’ capacity to self-evaluate and increase self-awareness over time. In other words, while the teacher is essential in the process, a key shift is to see the student as an active participant, co-constructor, and creator in all aspects of the learning cycle, including planning, engaging, evaluating, and understanding.

Combining these frameworks, I believe that we should be redesigning a system where the majority of student learning for the youngest learners is focused on teacher-led, technical learning and that by the time that they graduate from school the majority of their time should be focused on student-led, adaptive learning. The following graphs are intended to be directional and illustrative and not prescriptive.

Principle 4: When it comes to measurement and assessment, we should value volume over precision. Notwithstanding the flaws and massive variation that we see in grading practices, grade-point averages are consistently found to be better predictors of future performance than tests such as the SAT. In a recent conversation, University of Oregon professor David Conley pointed out that a single grade is the aggregate of multiple inputs and that a grade-point average is the aggregate of many grades. While there is less statistical reliability in the inputs that inform a grade-point average, the multitude of inputs tends to have more value than a single, statistically reliable exam. Conley refers to this concept as “high cumulative validity.”
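The intuition behind cumulative validity can be illustrated with a quick simulation (a hypothetical sketch, not drawn from Conley’s work; the noise levels and counts are invented for illustration): the average of many low-reliability inputs typically lands closer to a learner’s true performance than a single high-reliability exam.

```python
import random

random.seed(42)

def simulate(trials=2000, n_inputs=30, noisy_sd=1.0, precise_sd=0.4):
    """Compare one precise measurement against the average of many noisy ones."""
    true_ability = 0.0
    err_single, err_aggregate = 0.0, 0.0
    for _ in range(trials):
        # A single, statistically reliable exam: low noise, but only one sample.
        single = random.gauss(true_ability, precise_sd)
        # Many low-reliability inputs (assignments, quizzes, projects).
        noisy = [random.gauss(true_ability, noisy_sd) for _ in range(n_inputs)]
        aggregate = sum(noisy) / n_inputs
        err_single += abs(single - true_ability)
        err_aggregate += abs(aggregate - true_ability)
    return err_single / trials, err_aggregate / trials

single_err, aggregate_err = simulate()
print(f"average error, single precise exam:        {single_err:.3f}")
print(f"average error, aggregate of noisy inputs:  {aggregate_err:.3f}")
```

With these illustrative settings, the aggregate of thirty noisy inputs ends up with smaller average error than the single precise exam, because averaging shrinks the noise roughly in proportion to the square root of the number of inputs.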

Without getting into a prolonged explanation of Heisenberg’s Uncertainty Principle, we should be aware of the following:

  1. Measurement has an impact. With a firm understanding that the act of measurement affects what we are measuring, we need to be mindful of what we intend to measure, when we will conduct the assessment, and what the implications of the measurement will be. While in many cases the act of observing and measuring can have a positive result (often referred to as the Hawthorne Effect), we must also be mindful of potential adverse impacts. For example, when school accountability systems focused on standardized tests in language arts and mathematics, was there an appreciation for how that would narrow the learner experience to exclude other subjects such as science, social studies, music, art, world languages, dance, and physical education? Was there an expectation that recess would be reduced or eliminated?
  2. Precise measurements often impede momentum. Burdensome assessments impede learning, and standardized tests are a prime example: when we spend weeks conducting these high-stakes assessments, the learning essentially stops. We must ask ourselves whether the information we gain through these tests outweighs the value of the learning opportunities that are lost.
  3. Understanding the interplay between position and velocity, we can increase velocity (i.e., learning) by reducing the intensity of measurements (i.e., assessments). A minor adjustment, collecting less precise information more frequently, can result in less interference and promote greater rates of ongoing learning.

Principle 5: Given the orientation to a high volume of imprecise measurements, we should learn from our experience with industrial-era grading and strictly avoid averages, particularly over extended durations of time. Decayed averages and power laws are sounder indicators of recent performance, and the widening use of more sophisticated data collection systems makes these metrics accessible at the click of a button.
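To see the contrast, consider a minimal sketch of one common decayed average, an exponentially weighted mean (the decay rate and score sequence here are invented for illustration): it weights recent evidence more heavily, so a learner who improves is not penalized indefinitely for early struggles the way a plain average penalizes them.

```python
def simple_average(scores):
    """Industrial-era grading: every score counts equally, forever."""
    return sum(scores) / len(scores)

def decayed_average(scores, decay=0.8):
    """Exponentially weight scores so recent evidence counts most.

    `scores` is ordered oldest to newest; each step back in time
    multiplies a score's weight by `decay` (0 < decay <= 1).
    """
    weighted, total_weight = 0.0, 0.0
    weight = 1.0
    for score in reversed(scores):  # walk from most recent to oldest
        weighted += weight * score
        total_weight += weight
        weight *= decay
    return weighted / total_weight

# A learner who struggled early but now performs consistently well
# on a 4-point competency scale.
scores = [1, 1, 2, 3, 4, 4, 4]
print(f"simple average:  {simple_average(scores):.2f}")
print(f"decayed average: {decayed_average(scores):.2f}")
```

The plain average drags this learner down toward their early scores, while the decayed average reflects their current level of performance; with `decay=1.0` the two measures coincide, so the decay rate is a dial between "everything counts equally" and "only the latest evidence counts."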

The Role of Technology

A note of caution is warranted here regarding the current potential for artificial intelligence (AI) or machine learning algorithms as they relate to assessment. Since they rely on self-referential feedback loops, the value in this approach is only evident when there is a massive amount of data. For an individual learner, the only areas where such an abundance of information might exist would be at a granular level tied to “ladder” learning experiences focused on knowledge. Even in those cases, we have found that AI-informed insights are only valuable as input to an educator, who should apply their own subjective perspective and observation to make any holistic determinations about performance. In terms of adaptive, knot learning, there is presently very little value in AI approaches due to limited data inputs coupled with the reality that adaptive learning is inherently contextualized. As a result, subjective observations tend to be far more valuable than AI inputs when it comes to informing complex problem-solving or the development of habits and skills. In sum, computers and sophisticated analyses can be helpful tools, but at this time it is imperative that we ensure that the technology is in service to humans. Reversing this ordering is unwise and potentially quite harmful.

Technology will be an accelerator. For example, on the Altitude Learning platform students can create their own learning cycles, and the authoring tools make it easier to co-construct and assess valuable learning experiences. In addition, there are already several options on the market that combine assessments to help inform mastery judgments and track milestones. Other tools combine adaptive and performance-based assessments. Eventually, we will see competency-tracking tools that incorporate learning that has been credentialed by multiple partners. These resources will evolve into the critical infrastructure for learner records and “mastery transcripts” that will replace our current models.

At all times, we should be placing the learner at the center of our decision-making as we determine the best approach. What will best serve the learner?  What will best promote ongoing learning? These are the questions that should guide our decisions around measurement and assessment.

Putting the Pieces Together: The Impact Framework

As I wrote in Learner-Centered Leadership, the post-industrial education system will require post-industrial measures of success. We can no longer rely on letter grades and seat time as proxies of learning – we are at a stage where evidence of mastery learning is clearly a better way to represent competence. A competency-based, learner-centered approach provides opportunities to extend beyond traditional academic outcomes and take into account alternative measures of progress, including habits and skills in social-emotional learning domains that will be essential for lifelong learning. We believe that post-industrial measures of success should also reflect the reality that individuals are situated in communities and have the responsibility to contribute not only to those communities but also to the broader social system.

Developing these post-industrial measures has not proven to be easy for a number of reasons:

  • An industrial set of measures is deeply ingrained in our systems and psyche
  • More expansive measures are largely unproven and immature
  • There is a lack of clarity regarding what these measures should be and how they should be brought together to provide an actionable and complete picture of progress

Even so, there is a widespread need for a method of measuring student progress that is more reflective of the whole-child outcomes we know are necessary for success. As a result, I engaged in a process to research existing models and to generate a synthesized impact framework that could serve as the basis for a holistic, learner-centered model.

The first step in developing the framework was to conduct a literature review and formulate an initial hypothesis (source materials are listed at the conclusion of this article). Based on the review of existing literature, the initial thinking was that a post-industrial education system should include measures situated at the levels of self, others, and community.

In the early stages of developing the framework, we sought feedback through consultations with internal and external experts and teams of teachers from schools across the country, and we also conducted student forums. Through these conversations, we were encouraged to avoid any system that would be used to rank, sort, or select students for opportunities. Students, in particular, were mindful of how these outcomes could be “gamed” in various ways if they were used for accountability purposes. Beyond these practical suggestions, the preponderance of the feedback was affirming, with statements such as “I couldn’t agree more with everything you articulate in your framework” and “this is desperately needed.”

After the consultation phase, I refined the framework to include measures of agency, collaboration, and real-world problem solving, which correspond (respectively) to the levels of self, others, and community.

Agency

Defining Agency

Agency is fundamentally about learners demonstrating an ability to meet their unique, self-generated goals. While there is a developmentally appropriate sequence of educator-led to co-led to learner-led approaches, the overall trend should be in the direction of learners who co-create all aspects of their learning in a community context, including goal setting, planning, engagement, assessment, and reflection.

Measuring Agency

The process of measuring agency should take three forms:

  • Learner self-perception regarding the proportion of time that they spend driving their learning (self-referenced).
  • Competency-based evidence of mastery learning in academic domains such as language and literacy, mathematics, social studies, sciences, the arts, and physical wellness (criterion-referenced).
  • Growth and competency demonstration comparisons with other learners.  While this has typically taken the form of intermittent standardized tests, this will ideally emerge from the aggregation of daily learning interactions that form the basis of a comprehensive, valid, and reliable data set (norm-referenced, see “How to test less” for more information).

Collaboration

Defining Collaboration

Collaboration is an umbrella term that is used here to describe the set of habits and skills that are critical for social interaction.  There are various models of collaboration, such as the Character Lab set, which includes self-control, grit, curiosity, growth mindset, gratitude, purpose, social intelligence, and zest, or Covey’s “The Leader In Me” habits, which include being proactive, beginning with the end in mind, putting first things first, thinking win-win, seeking first to understand and then be understood, synergizing, and sharpening the saw.

Measuring Collaboration

Regardless of the model, collaboration is an area where measures of success cannot be represented through a competency-based or mastery model.  Developing these habits and skills is an ongoing process that can vary in different contexts.  As a result, measures of collaboration should be grounded in self-reflection, peer assessment, and educator observations that are aggregated over time to illustrate patterns and trends that inform ongoing development.  Such measures should encompass both the formal and informal collaborative opportunities that occur through peer interactions.  Additionally, these measures can be grounded in frameworks of developmentally-appropriate indicators such as those in the Essential Skills and Dispositions work.

Real-world problem solving (hereafter referred to as problem-solving)

Defining problem solving

Problem-solving occurs when the application of agency and collaboration results in improvements for the benefit of a community. Problem-solving can be grounded in project-based learning, service learning, challenge-based learning, or any number of models that extend the learning to authentic, real-world contexts. As an example, many schools are orienting students to the United Nations Sustainable Development Goals to provide a framework for contextualized problem-solving.

Measuring problem solving

The measurement of problem-solving may be achieved through expert feedback. This feedback may come through exhibitions of applied learning that rely on the learner to share their journey with experts who can provide meaningful feedback to validate impact and suggest the next steps. Portfolios are helpful references for these exhibitions, particularly insofar as they offer the right medium for demonstrations over time and the corresponding appropriate evaluations.

Conclusion: What’s Possible?

Just as the inputs are different in this post-industrial system, so too are the outputs — the “learner records.” Unlike report cards or transcripts, which have historically been organized by subjects or courses, the post-industrial, learner-centered system is organized around the learner. Holistic learner profiles, digital portfolios, and other portable learner records are designed to inform ongoing learning. We are on the cusp of an era where students will have a “data backpack” that will transform into a “digital briefcase,” extending their learning beyond school on a cradle-to-career journey focused on lifelong learning.

It should be noted that much work remains to improve data interoperability in order for us to scale these practices. Even so, we are already seeing early movement from the initial “badging” efforts to add validation through blockchain to improve the credibility and trustworthiness of the certification. It is entirely possible that we will see significant progress on interoperability and validation in the very near future, and we should not underestimate how these developments will accelerate an open-walled, ecosystem approach to anytime, anywhere learning.

In addition to providing glimpses into the future of assessment through research and ideation, there are examples of these practices already underway in schools and districts across the United States. Tamim Academy, a network of Jewish Day Schools and a partner of Altitude Learning, has developed a set of learner outcomes that closely resemble the Impact Framework. Design 39 Campus (Poway Unified School District, San Diego County) and Odyssey STEM Academy (Paramount Unified School District, Los Angeles County) make extensive use of holistic learner profiles. El Segundo Unified School District (Los Angeles County) uses a scorecard aligned to their graduate profile to assess holistic competencies.

Intermediaries such as the San Diego County Office of Education are developing “universal transcript collaboratives” oriented around digital diplomas. Higher education innovators such as  the Minerva Project are developing high school programs that orient around transferable skills based on the learning sciences. Coursera and Google are rolling out certification programs to compete with traditional university models.  Employers like Walmart are creating “lifelong learning” programs.

The changes are already underway. These innovative models are solutions for the invention opportunity in front of us now.  Making these exceptional experiences the norm is on the horizon.

Margaret Wheatley encourages us to keep asking “What is possible?” In terms of the invention opportunity, it is clear that we can encourage the development of purposeful goals tied to whole-child, competency-based learning progressions. In fact, the movement is already underway. We can shift from a system over-anchored on assessment to one where learners and their learning are the focus. We can empower learners to see, own, and drive their learning, orienting to adaptive challenges that develop the knowledge, habits, and skills of all students. By promoting agency, collaboration, and problem-solving, our learners will learn how to learn, improving their metacognitive capabilities through lifelong learning. Our students will become changemakers, making their communities, our society, and our world a better place. Isn’t that the purpose of education? It is possible here. We need to embrace the invention opportunity together, and we need to do it now.


References

Literature Review – Core resources

Additional resources

Learning Frameworks

Related resources:

Impact Framework Consultations:

  • Dr. Alan Daly, University of California San Diego
  • Andy Calkins, Next Generation Learning Challenges (NGLC)
  • Karen Cator, Digital Promise
  • Dr. Bror Saxberg, Chan Zuckerberg Initiative
  • Dr. Brooke Stafford-Brizard, Chan Zuckerberg Initiative
  • Sumeet Bakshi, Admissions Advisor
  • Jack McDermott, Panorama Education
  • Dr. Chris Cerf, Former Superintendent, Newark
  • Dr. Jon Snyder, Stanford University
  • Dr. Keith Nuthall, Principal, Odyssey STEM Academy
  • Kelly Young, Education Reimagined

Many thanks to Tom Vander Ark and to David Conley for their input and suggestions. This article would not be possible without their wisdom and expertise. 
