What the Learning Sciences Tell Us About Competency Education

By Bror Saxberg

This post was originally published by CompetencyWorks.

At a recent meeting sponsored by iNACOL to think deeply about competency and assessment, we discussed how the last few decades of learning science should shape the way we plan and use competencies for learning.

The good news is that learning science lines up with the idea of personalizing instruction, and the pace of instruction, for individual learners: they’re likely to be more motivated, and more successful, if they can work and reach mastery at different rates, doing different things, to arrive at the same competencies.

However, not every conceivable way of personalizing learning matches how learning and expertise actually work. Let’s look at a few examples of how learning science can guide us:

Much expertise is non-conscious

Cognitive psychologists looking closely at the difference between experts and novices find that experts make many decisions as part of their work without conscious thought. Consider driving to the grocery store while thinking about work: “you” (the conscious you) are thinking about work, using your working memory to process ideas, words, and thoughts, while other routines (embedded in long-term memory) are making complex decisions about other drivers, traffic lights, navigation, and more. Non-conscious routines and pattern-identifiers are a part of all expertise.

The problem is that when teachers and faculty are observed teaching, they end up telling novices less than 30% of what those novices really need to know to operate at an expert level. It’s very hard to describe all the things you no longer consciously think about!

This means we have to be quite diligent about the competencies we pick out as important for learners to master: have we covered the things that experts really decide and do, whether consciously or non-consciously? Are we including things that experts do not, in fact, decide or do, simply because we always have, or because we think “they ought to”? (That last question can be quite a can of worms to open up!)

Once past the basics, competencies do not cross domains

It would be great if we could just teach “good writing” once and be done with it. Unfortunately, most research on experts shows their expertise is very context- and domain-dependent, even for things that seem describable generically, like “good writing.” It turns out “good writing” in history is not the same as “good writing” in science: very different skills and decisions are needed in different fields. That’s not to say there aren’t common underpinnings (e.g., basic reading, many mechanics of writing, some principles of sentence and paragraph construction), but we do students and teachers a disservice if we insist “it is all the same” past the basics. We should make those differences clear, give practice on them, and make sure our competencies and assessments reflect them.

“Deciding and doing” in a domain are the critical competencies for future success

The critical competencies we need students to master are the abilities to decide and do things within a domain: solve problems, write papers, investigate what’s going on, design experiments, compare sources, diagnose problems, and so on. Instruction, practice, and assessment should therefore devote plenty of time to these.

To become good at these (not easy) things, which definitely require conscious work, learners have to internalize a variety of supporting information (facts, concepts, processes, and principles), sometimes building non-conscious fluency with it. What’s critical is that these supporting elements are learned in the context of using them to “decide and do” in a domain, and that practice and assessment are done in ways that match how they will be used to “decide and do,” not in some abstract other way. So instead of, say, abstractly memorizing definitions of these supporting elements:

  • For facts, you do need to recall them, but in the context of “deciding and doing,” not in isolation. If you don’t practice retrieving facts in such a context, your mind does not develop the easy familiarity you need to bring them up “for real.”
  • For concepts, learners need to practice classifying things among the concepts and generating examples of them, again within a “deciding and doing” context, since that’s how concepts help experts solve problems.
  • For processes, learners need to practice predicting what happens when inputs or conditions change, and practice diagnosing what might be wrong given a process’s outputs or behaviors, not merely “describe” the steps.
  • For principles, learners need to practice applying them to solve problems in new situations, or to predict what will happen, in domain-specific contexts.

The right level of “chunking” of competencies for learning varies by learner

What is in a learner’s long-term memory plays a critical role in whether they experience a learning task as “easy,” “hard,” or (the ideal) “challenging, but doable.” That sense comes from how a learner’s non-consciously learned “chunks” react to the latest challenge: familiar structures and patterns in what they’re asked to do make the challenge feel easier, while too many new things at once overload conscious working memory, and the learner won’t make progress.

This means there could be real advantages to a more dynamic network of competencies, one that shrinks or expands depending on which competencies a learner has already internalized. A group of students all working on the same objective, “Add fractions with unlike denominators” or “Compare two historical sources for their utility in answering questions of the type…”, may need very different networks of competencies to succeed, based on what has already been mastered or not. Having data flow from assessments at different levels of “chunking” can help instructors and developers see whether they have the resources needed for the spread of students working through a competency-based system.
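To make the idea concrete, here is a minimal sketch in Python of one way such a network could be represented: a prerequisite graph plus a learner’s set of mastered competencies. The competency names, the PREREQS graph, and the ready_to_learn function are all hypothetical illustrations, not drawn from any particular curriculum or tool.

    # Hypothetical sketch only: a competency network as a prerequisite graph.
    # Every competency name below is invented for illustration.
    PREREQS = {
        "multiply whole numbers": set(),
        "equivalent fractions": {"multiply whole numbers"},
        "find common denominators": {"equivalent fractions"},
        "add fractions, like denominators": set(),
        "add fractions, unlike denominators": {
            "find common denominators",
            "add fractions, like denominators",
        },
    }

    def ready_to_learn(mastered: set[str]) -> set[str]:
        """Competencies not yet mastered whose prerequisites all are:
        the 'challenging, but doable' frontier for this learner."""
        return {c for c, deps in PREREQS.items()
                if c not in mastered and deps <= mastered}

    # Two learners aiming at the same objective see different frontiers.
    novice = {"multiply whole numbers"}
    experienced = {"multiply whole numbers", "equivalent fractions",
                   "find common denominators",
                   "add fractions, like denominators"}
    print(ready_to_learn(novice))       # smaller supporting chunks first
    print(ready_to_learn(experienced))  # ready for the target objective itself

The point of the sketch is just the mechanism: the same target objective sits at the end of very different frontiers for different learners, which is the varied “chunking” described above.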

* * *

There’s much more (e.g., see Daniel Willingham’s Why Don’t Students Like School?, or Clark and Mayer’s e-Learning and the Science of Instruction, for more details on what learning science tells us to do, and not do, to enhance student success), but even this taste shapes what we should be doing as we think about competencies and their assessment:

  • We need to keep connecting our assumed competencies with real expertise, and with real future success for our learners. Are the things we’re picking out really the things that drive success at the next level? What evidence do we have, given that experts’ own intuitions often miss critical non-conscious competencies?
  • Our assessment tasks need, most importantly, to probe learners’ ability to decide and do: real tasks and real problem-solving in the domain, because that’s what we need learners to do after instruction. If learners practice abstract assessment tasks unrelated to real tasks, we’re wasting the opportunity to assess progress that matters to students’ futures.
  • For competencies related to supporting information, our assessments should again match how learners are supposed to use that information in real tasks, not be abstracted and separated from real use in “deciding and doing” in the domain.
  • We need to think clearly about networks of competencies, because what is “easy” for one learner is “hard” for another. More experienced learners can use larger “chunks” of competency, relying on their existing mastery, while less experienced learners need more, smaller “chunks” to cover the same outcomes, both in instruction and in the assessments used to provide feedback and guide instruction.

All of this will be tricky to work through, but with automated tools and instructional support, there’s never been a better time to be a “learning engineer,” working to apply what’s known about learning to the real challenges of developing affordable, reliable, available, data-rich competency-based learning environments.

Bror Saxberg is Vice President of Learning Science at the Chan Zuckerberg Initiative.

