Artificial Intelligence – The New Digital Divide?

Image by Rawpixel.

First published in 1962, Everett Rogers’ book Diffusion of Innovations explains the conditions required for new ideas and technologies to spread throughout society, but it is in the final chapter that he raises the issue of unintended consequences. Although an innovation may offer tremendous benefits, it can simultaneously produce unintended, negative effects, especially if it creates a condition of disequilibrium. When disequilibrium occurs, an innovation advances faster than society, research, and policy, reducing the ability to identify or assess any adverse effects.

Over the past few decades, artificial intelligence (AI) has created a state of disequilibrium not only in society but also in education. Currently, AI can be found driving search engines; powering adaptive learning platforms and intelligent tutoring systems; enabling text-to-speech, dictation, and translation; and monitoring school security. However, these technologies have flooded education faster than research and policy can keep up. As a result, despite all of its promises, there could be very real and significant consequences—particularly when it comes to digital equity.

Educators and policymakers have warned of the effects of the digital divide since the 1990s. Initially, this divide referred to a lack of access to computers and the internet. By 2016, the National Education Technology Plan warned of another issue, an emerging digital use divide, as some students learned to use technology for the active construction of knowledge and understanding while others remained passive consumers of digital content. With the continued rise of AI, another chasm may emerge as a result of varying experiences with, and exposure to, this innovation.

The AI Use Divide

The State of Creativity in Schools, a report conducted by Gallup, found that while students’ learning experiences did not differ across geography, grade level, or discipline, students who attended schools in underserved communities reported fewer opportunities for creative learning experiences involving transformative uses of technology. This finding mirrors prior studies of the digital use divide.

Particularly in under-resourced schools, which often serve higher proportions of students of color, studies show that students largely use technology for content acquisition, remediation, and test prep rather than for more creative endeavors. As AI continues to permeate the education space, a similar dichotomy could emerge. Some students may leverage AI in support of critical thinking and complex problem-solving, or even to create new forms of AI. AI4K12, a partnership between the Association for the Advancement of Artificial Intelligence and the Computer Science Teachers Association, even recommends standards and competencies to ensure that students and teachers understand AI and learn how to work with it. However, even if students from traditionally under-resourced schools have equal access to AI, that exposure may occur within a curriculum-poor environment, widening the digital use divide.

The Opportunity Gap

In education, the great promise of AI is wide-scale personalization, as platforms purport to simulate the experience of learning alongside a personal tutor. Since the 1920s, educators have looked to “teaching machines” to provide immediate, individual learning experiences at scale. Whereas the machines created by behaviorists like B.F. Skinner, Edward Thorndike, and Sidney Pressey simply dispensed feedback in response to multiple-choice questions, AI platforms could guide problem-solving, suggest resources, and even analyze writing or speech. However, if a computer decides what a student learns, when, and how, little room remains for student interest or agency, and questions emerge about whether the experience can truly be described as personalized.

Equally important, volumes of evidence demonstrate that students learn best when they are motivated by curiosity, have opportunities to develop their thinking through social interactions, participate in authentic experiences, and can test different ideas within a supportive environment. Now consider the potential ramifications of implementing AI platforms for personalization absent these meaningful face-to-face learning experiences. Some of the backlash to AI and personalized learning can be attributed to the perception (or reality) that students spend their days in technology-rich solitary confinement. For example, in Brooklyn, NY, and across Kansas, students staged walkouts and families protested after their schools instituted “personalized learning” models that amounted to little more than sitting in front of computers. The risk of this type of opportunity-poor implementation is greater in underserved communities, where schools face pressure to focus on test scores, receive fewer professional learning resources, and have a history of teacher-directed practices. As a result, an “opportunity gap” may continue to expand as students in more affluent communities benefit not only from creative opportunities to work with AI but also from deeper learning experiences that occur beyond it.

The New “Jim Code”

In Race After Technology, Ruha Benjamin explains that new technologies can replicate and reinforce existing forms of segregation and bias. Just as Jim Crow laws formalized racial segregation in physical spaces, bias and prejudice embedded within AI have the potential to create a new “Jim Code” in virtual spaces. Applied to education, this possibility raises significant concerns.

Further, as Safiya Umoja Noble illustrates, the false assumption that technology is neutral has ultimately led to the development of algorithms of oppression. Her book demonstrates how the presence of AI in everything from search engines to loan applications to recruiting systems and even learning platforms both obfuscates and deepens existing social inequities. Without careful consideration of potential bias in the algorithms and underlying datasets driving AI, the unintended consequence may be to widen, rather than reduce, opportunity and achievement gaps.
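To make the mechanism concrete, consider a minimal, hypothetical sketch (not drawn from Noble’s book or any specific product): a placement model trained on historical tracking decisions. The synthetic data below assumes two groups of students with identical ability but unequal past access to advanced coursework; a simple model fit to those historical labels quietly relearns the higher bar applied to the historically excluded group.

```python
# Hypothetical illustration of dataset bias: all data is synthetic, and the
# "placement model" is a simple decision stump, not any real system.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two student groups with identical underlying ability.
ability_a = rng.normal(70, 10, n)
ability_b = rng.normal(70, 10, n)

# Historical labels reflect unequal past opportunity: group B students were
# placed into advanced coursework only at a higher effective bar.
placed_a = ability_a + rng.normal(0, 5, n) > 75
placed_b = ability_b + rng.normal(0, 5, n) > 85

def learned_cutoff(scores, labels):
    """Fit a one-feature decision stump: the score threshold that best
    reproduces the historical placement labels."""
    candidates = np.linspace(50, 100, 201)
    accuracy = [((scores > t) == labels).mean() for t in candidates]
    return candidates[int(np.argmax(accuracy))]

# A model trained on each group's history simply relearns the old inequity
# and goes on recommending fewer group B students for advanced work.
print("Historical placement rates:", placed_a.mean(), placed_b.mean())
print("Learned cutoff, group A:", learned_cutoff(ability_a, placed_a))
print("Learned cutoff, group B:", learned_cutoff(ability_b, placed_b))
```

The toy numbers are not the point; the mechanism is. A system optimized to match biased historical decisions will reproduce them at scale unless the underlying data and assumptions are examined.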

What Works May Hurt

Rogers warned that innovation may produce unintended consequences, especially when it spreads faster than research and policy. Few examples illustrate this more clearly than the infusion of AI into education. From the introduction of voice assistants like Alexa, which may run afoul of federal student privacy laws, to the adoption of learning platforms that risk reproducing stereotypes and bias, the unintended consequences need to be carefully examined alongside the potential benefits of personalization, differentiation, and augmentation. As Dr. Yong Zhao argues, What Works May Hurt. In the rush to adopt AI, it will be critical not only to assess the promise but also to proactively measure the potential side effects. Even though AI could bring tremendous benefit to education, it could also exacerbate existing inequities and further widen the digital divide.


Beth Holland

Dr. Beth Holland leads Research & Measurement as a Partner at The Learning Accelerator and is the Digital Equity Project Director at the Consortium for School Networking.
