Ten Things for DQC2.0 to Figure Out

I’m a big fan of the DQC and the progress they’ve helped this country make on longitudinal data—great policy platform, great advocacy strategy, great project management.
But DQC is solving a 2002 problem. The 2012 problem will be a flood (in some places) of data from content-embedded assessment (games, simulations, virtual environments, adaptive quizzes) as well as formative and diagnostic assessments.
Schools are increasingly adopting new learning tools, but the pieces don't fit together very well. All this new personalized learning material is jammed into an old model of grades/courses/credits, with information systems built around a couple of data points. A kid learning online can create 10,000 rich, minable keystrokes of data overnight, but we don't know what to do with them. Should that data just sit in the application? How much should get sucked into a learning management system? What data would be useful for teachers and school leaders? What could researchers learn if they had data-mining access to keystroke data?
Our data systems and policies are simply not equipped for what’s coming.  Districts like Colorado’s Adams 50 that have adopted a personalized progress model have the double challenge of building something new and retrofitting it to look like something old.

  1. Build a scale against the Common Core
  2. Create methods for validating scores from various forms of assessment against standards (the way NWEA's MAP does)
  3. Define the units of content (e.g., the shipping container for education) for learning objects, lessons, units, and courses (or better yet, competency clusters like merit badges)
  4. Build a meta-tagging system that includes not only level but also learning mode, motivational scheme, and themes
  5. Suggest templates for new student progress/matriculation systems and means of validating student learning that combine formative and summative assessment (and convince college admissions officers that it’s a better system than counting credits and grade point averages)
  6. Make some data extraction decisions about what stays in applications, what goes to the district/network platform, and what gets sent to the state (along with research protocols for accessing the treasure trove)
  7. Discuss the security, privacy, and access issues around rich student profiles that advisors and smart systems will use to queue instruction
  8. Debate the instructional and policy implications around what data is used for what decisions (instruction, student progress, teacher effectiveness and compensation, school accountability, etc.)
  9. Steps 1-8 cover just the Core; that leaves the rest of the curriculum to think about, not to mention vital college/work-readiness indicators
  10. Repeat: once the DQC answers these questions, they'll need to start over, because the opportunity set will be even more interesting
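To make the meta-tagging idea in item 4 concrete, here is a minimal sketch of what a tagged learning-object record and a simple tag-based lookup might look like. The field names and values are invented for illustration; they are not drawn from any existing standard.

```python
# A hypothetical metadata record for a single learning object, combining the
# unit types from item 3 with the tags called for in item 4 (level, learning
# mode, motivational scheme, themes). All names here are illustrative.

learning_object = {
    "id": "lo-fractions-compare-01",            # hypothetical identifier
    "unit_type": "learning_object",             # or lesson, unit, course, competency cluster
    "level": "grade4",                          # placement on a Common Core-aligned scale
    "learning_mode": "game",                    # game, simulation, adaptive quiz, ...
    "motivational_scheme": "badge_progression", # how the content keeps students engaged
    "themes": ["fractions", "number_sense"],
}

def matches(record, **criteria):
    """Return True if the record matches every supplied tag criterion."""
    return all(record.get(key) == value for key, value in criteria.items())

print(matches(learning_object, level="grade4", learning_mode="game"))  # True
```

The point of a shared scheme like this is that a teacher, platform, or recommendation engine could filter a large content library by level and mode without knowing anything about the vendor that produced each item.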

We need robust data systems that make extraction and analysis simple while protecting privacy and security. Vendors should be able to access de-identified data to build 'mash-up' applications based on learning insights. Data systems need to be flexible enough to support a variety of learning models that will keep evolving over time.
Implementation will take years, but the DQC agenda is nearly the law of the land. Let's call that DQC1.0 and start working on DQC2.0.

Tom Vander Ark

Tom Vander Ark is the CEO of Getting Smart. He has written or co-authored more than 50 books and papers including Getting Smart, Smart Cities, Smart Parents, Better Together, The Power of Place and Difference Making. He served as a public school superintendent and the first Executive Director of Education for the Bill & Melinda Gates Foundation.
