What Happens When AI Doesn’t Understand Students? An example for creative and equitable AI policy in education

Key Points

  • Developing effective policies and best practices to combat AI bias is essential for ensuring edtech effectiveness and equity.

  • Speech recognition technologies provide an excellent example of approaches to solving AI bias problems in education.

  • Solving AI bias problems in speech recognition requires innovative approaches to R&D and the development of policies that promote equity and effectiveness.

By: Russell Shilling, Ph.D.

Most of us have by now experienced the frustration of speaking to a device that uses speech recognition but fails to understand what we are saying. In consumer products, users can simply stop using a product that doesn’t meet their needs, but students do not have that option in the classroom. Inefficient algorithms and bias in AI datasets are primary concerns for education researchers and educators, who worry that these applications will not be effective across the wide diversity of students in our nation’s classrooms. These concerns are not limited to the United States; they are global. Systematically addressing speech recognition effectiveness is difficult given the lack of policy guidance from governments, districts, and even public and private funding sources. Eliminating bias also requires clear guidance on education technologies’ research and development requirements.

Recent news coverage has highlighted many technologies and applications prone to AI bias. Still, an example that deserves more attention in edtech is speech-based applications, which rely on automatic speech recognition (ASR) and natural language processing (NLP). Speech recognition has proliferated across consumer products, toys, games, productivity apps, and education. Accurate speech recognition opens the door to more naturalistic edtech products and to real-time assessment in the classroom, enabling early interventions for speech, language, and reading difficulties. However, these systems currently do not work well across the vast diversity of users they aspire to reach. For example, ASR systems do not work equally well across different dialects and age groups, or for individuals with speech difficulties.

Left unaddressed, this type of bias produces frustration and adverse outcomes in education. However, bias in ASR, like many AI bias issues, is largely solvable: recognize the sources of bias, implement research programs for scalable solutions, and require reliable efficacy studies before speech recognition-based products reach the classroom.

Consider one specific example of bias: ASR applications become increasingly inaccurate as the speaker’s age decreases. Children’s speech differs considerably from adults’ in its frequency spectra, prosody, and sentence structure. Add the wide variety of dialects and nationalities in our schools, and we face a complex challenge that requires collaboration among researchers, educators, product developers, and funders to bring innovative, effective, and scalable solutions to market. There are pockets of progress, such as Soapbox Labs, an excellent example of a company applying rigorous criteria to develop more representative data sets for assessing fluency and speech issues. We need more efforts along these lines, and policy supports to ensure that the needs of all students are served, not just those whose needs are easily met by currently available off-the-shelf systems.
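To make “increasingly inaccurate” concrete, bias audits of this kind typically compare word error rate (WER) across speaker groups. The sketch below is illustrative rather than a production evaluation: the transcript pairs and group labels are hypothetical assumptions, and it uses the open-source jiwer package to compute WER.

```python
# Minimal sketch of a per-group ASR bias audit using word error rate (WER).
# The (reference, hypothesis, group) triples are hypothetical; a real audit
# would use held-out recordings labeled by age band, dialect, etc.
from collections import defaultdict

import jiwer  # open-source WER library: pip install jiwer

samples = [
    # (reference transcript, ASR output, speaker group) -- illustrative only
    ("the cat sat on the mat", "the cat sat on the mat", "adult"),
    ("the cat sat on the mat", "the cap sat on a mat", "child"),
    ("she sells sea shells", "she sells sea shells", "adult"),
    ("she sells sea shells", "see sell seashells", "child"),
]

by_group = defaultdict(lambda: ([], []))
for ref, hyp, group in samples:
    by_group[group][0].append(ref)
    by_group[group][1].append(hyp)

for group, (refs, hyps) in by_group.items():
    print(f"{group}: WER = {jiwer.wer(refs, hyps):.2f}")
```

A persistent WER gap between groups, such as children versus adults, is exactly the kind of disparity that efficacy studies should surface before a product reaches the classroom.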

While things are improving, the field is still not at the level we need to implement educational tools, assessments, or speech therapy that work accurately and consistently for all kids. What is needed is additional research funding and policy directed at improved data sets (corpora) and at linguistics research aimed at better algorithms. Several policy recommendations can move the field forward.


First, creating and funding interdisciplinary teams is critical. From my time as a program officer at the Defense Advanced Research Projects Agency (DARPA), and from applying those philosophies and techniques to education, I have learned that funding teams that reflect diversity of thought and expertise, in addition to ethnic diversity, is crucial to innovation. In this case, we need to include linguists, computer scientists, data scientists, and psychologists on the team and consult ethicists throughout the process.

Second, we need to improve the quality and size of data sets that represent the diversity of our target populations in naturalistic environments, including age, ethnicity, gender, socioeconomic backgrounds, language issues, and dialects. And given global trends of mobility and migration, we should foster international cooperation to create more diverse and representative ASR data sets.
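What “representative” means here is measurable. As a hypothetical illustration, a corpus audit can count speakers per demographic category and flag groups that fall below a target share; the metadata fields and threshold below are assumptions, since every corpus defines its own schema.

```python
# Minimal sketch of a corpus-coverage audit. The metadata fields, values,
# and target share are hypothetical assumptions for illustration.
from collections import Counter

speakers = [
    {"age_band": "5-8", "dialect": "AAVE"},
    {"age_band": "9-12", "dialect": "General American"},
    {"age_band": "adult", "dialect": "General American"},
    {"age_band": "adult", "dialect": "General American"},
]

# Illustrative target: each age band should be at least 20% of the corpus.
MIN_SHARE = 0.20
counts = Counter(s["age_band"] for s in speakers)
total = sum(counts.values())

for band, n in sorted(counts.items()):
    share = n / total
    flag = "  <-- underrepresented" if share < MIN_SHARE else ""
    print(f"{band}: {share:.0%}{flag}")
```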

Third, data sets and algorithms should be open to scrutiny. We must ensure that the algorithms, data sets, and evaluations are fair and transparent: evaluations and data should be available for examination, and data sets and algorithms should be open whenever possible.

Finally, evaluations of the models and data should continue even after solutions are adopted, so that bias or drift in performance across target populations can be detected. This policy strategy is advisable for all edtech, not just AI-based solutions.
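In practice, continuous evaluation can be as simple as tracking a rolling per-group error rate against the accuracy measured at adoption time. The sketch below is a minimal illustration; the baseline, margin, window size, and WER stream are all hypothetical assumptions, and a real deployment would log WER from periodically sampled, human-verified transcripts.

```python
# Minimal sketch of post-deployment drift monitoring for one speaker group.
# Baseline, margin, window, and the WER stream are hypothetical assumptions.
from collections import deque

BASELINE_WER = 0.12   # WER measured for this group at adoption time
ALERT_MARGIN = 0.05   # alert if rolling average exceeds baseline + margin
WINDOW = 5            # number of recent evaluation batches to average

recent = deque(maxlen=WINDOW)

def record_batch(group: str, wer: float) -> None:
    """Record a new evaluation batch and alert on sustained degradation."""
    recent.append(wer)
    rolling = sum(recent) / len(recent)
    if len(recent) == WINDOW and rolling > BASELINE_WER + ALERT_MARGIN:
        print(f"ALERT: WER drift for {group}: rolling {rolling:.2f} "
              f"vs. baseline {BASELINE_WER:.2f}")

# Simulated monthly evaluations for one speaker group
for wer in [0.13, 0.14, 0.16, 0.18, 0.19, 0.21]:
    record_batch("child speakers", wer)
```

The design point is that the alert fires on a sustained rolling average rather than a single noisy batch, which is what distinguishes genuine drift from ordinary measurement variation.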

The suggested policy recommendations above are not all-inclusive but represent a start at making ASR more effective and equitable. These recommendations are not unique to the application of speech recognition technologies; they can be adapted to a wide range of AI edtech issues in the United States and abroad.

Russell Shilling, Ph.D., is Senior Advisor to the EdSAFE AI Alliance, an expert on edtech R&D innovation, and a former Navy Captain, DARPA Program Manager, and STEM lead for the Department of Education during the Obama Administration.

The EdSAFE AI Alliance exists to inform and influence global policy and develop standards for using artificial intelligence (AI) enhanced education technologies (edtech). The primary goal is to ensure public confidence and trust by making edtech safe, secure, and effective while maintaining an open, innovative environment. At the EdSAFE AI Alliance, we welcome input and active participation from educators, researchers, policymakers, and funding organizations to tackle these issues and the many additional challenges introduced by AI’s disruptive yet exciting entry into education.
