AI-powered education platforms are proliferating, but the market is rife with confusion and hype.
To instill public confidence in AI's potential for education, the industry needs to adopt common benchmarks and standards.
By: Jim Larimore
To meet that need, Riiid and Dxtera, a nonprofit membership organization that builds open technology solutions to lower barriers in education delivery, have formed a cross-sector alliance of companies and associations to launch an AI in Education benchmark initiative.
The initiative, launched in early August, is focused on establishing benchmarks and standards in four critical categories: Safety (security, privacy), Accountability (defining stakeholder responsibilities), Fairness (equity, ethics, and lack of bias), and Efficacy (quantified improved learning outcomes). In a word, SAFE educational AI.
Jim Larimore, Riiid's chief officer for equity in learning, is helping to form the alliance.
Dxtera stepped forward as a trusted nonprofit partner to manage the day-to-day work of the alliance. It will serve as the fiscal agent and the contracting agent for hiring staff and experts. Riiid is financing the founding of the alliance, which intends to become self-supporting through membership dues.
In the month since its launch, the alliance has grown from 20 members to 80, representing a dozen countries. Organizations involved in the initiative include Getting Smart, Carnegie Learning, ETS, GSV Ventures and Digital Promise.
The alliance has established a leadership council and a governance model and is rolling out working groups.
“We need people from all levels of the industry, from educational delivery agents, from users and from governments to be involved,” said Dale Allen of Dxtera.
The alliance has also aligned itself with UNESCO and its Broadband Commission for Sustainable Development, whose goal is to connect everyone in the world to the Internet.
Meanwhile, the alliance is completing membership agreements that outline the roles and responsibilities of members.
Allen expects the working groups to start on general topics and then to form subgroups that will drill down on technical aspects of AI in education.
The alliance expects eventually to hire paid experts to develop standards that could be tested and certified. Standards would then be considered by the leadership council and, ultimately, a steering committee for approval.
The alliance won’t be working in a vacuum, nor developing standards from scratch. There is already widespread activity on developing AI standards, some of which is relevant to AI for education.
The Institute of Electrical and Electronics Engineers, a professional association, has established an Adaptive Instructional Systems working group within its Standards Association to explore the need for standards governing AI tutoring systems and other related learning technologies.
“We’ve already reached out to them,” said Allen, adding that the alliance can build on the work already done.
IBM, meanwhile, has an open-source suite of products called Factsheets 360 that is designed to ensure AI models are transparent, explainable, robust, privacy-preserving and fair. Allen said the alliance may reach out to them, too.
Underwriters Laboratories, the private certification company, is a member of the alliance and has independently developed a rubric it uses to inspect algorithms. UL, as it is known today, has participated in the safety analysis of many new technologies since it was founded in 1894.
Nearly every American product that uses electricity has the UL logo on it, which means that it has undergone rigorous testing to meet various standards.
The alliance intends to do something similar for AI education tools and platforms, eventually implementing a voluntary review process for such products that would give consumers confidence in the way that nutritional labels do on packaged food products today.
The alliance hopes that school districts and other organizations governing the purchase and use of such tools would then prohibit AI education products that lack alliance certification.
“Today, the users – the parents, the students, the instructors – have no sense of whether a tool is safe, and they’re afraid of most AI-enabled tools,” said Allen.
The alliance also hopes to develop equitable, fair, unbiased and anonymized data sets for building AI tools, as well as to develop randomized controlled trials and other means to objectively measure the efficacy of AI digital learning programs.
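As a hypothetical illustration of what objectively measuring efficacy could look like, a randomized controlled trial compares learning gains between students assigned to an AI tutor and a control group, then summarizes the difference with a standardized effect size such as Cohen's d. The sketch below is not the alliance's methodology; all scores and group names are invented for illustration.

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference between two groups' score gains."""
    mean_t, mean_c = statistics.mean(treatment), statistics.mean(control)
    n_t, n_c = len(treatment), len(control)
    var_t, var_c = statistics.variance(treatment), statistics.variance(control)
    # Pooled standard deviation (assumes independent groups with similar variances).
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (mean_t - mean_c) / pooled_sd

# Invented pre-to-post score gains for an AI-tutored group vs. a control group.
ai_group_gains = [12, 15, 9, 14, 11, 13, 10, 16]
control_gains = [8, 10, 7, 9, 11, 6, 9, 8]
print(round(cohens_d(ai_group_gains, control_gains), 2))
```

A standardized effect size lets reviewers compare results across studies that use different tests and scoring scales, which is one reason RCT-based efficacy claims are easier to benchmark than vendor-reported improvements.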
Stringent testing may also help determine whether products comply with existing data privacy laws, such as the European Union's General Data Protection Regulation (GDPR) and California's data privacy laws.
The alliance could also play a role in improving existing technologies, such as proctoring tools that use webcams and facial recognition to monitor students and discourage cheating.
Students with ADHD, for example, have been wrongfully flagged by these anti-cheating programs that monitor for behaviors deemed suspicious. This is largely due to a lack of AI-model training data that accounts for symptoms of ADHD such as fidgeting and an inability to maintain focus.
Some universities have dropped the use of proctoring programs altogether after students with darker skin tones reported not being recognized by the software, a recurring limitation with AI facial recognition since its early development.
The AI ed-tech industry can tackle shortcomings like these by building larger, more representative data sets.
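One simple fairness check that a standards body could codify is comparing a proctoring tool's "suspicious behavior" flag rate across student subgroups; a large disparity is a signal of the training-data gaps described above. This is a sketch of one possible audit, not an alliance-defined test, and the subgroup labels and session counts below are invented.

```python
from collections import defaultdict

def flag_rates_by_group(records):
    """Fraction of proctoring sessions flagged as suspicious, per subgroup.

    records: iterable of (group_label, was_flagged) pairs.
    """
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += int(was_flagged)
    return {group: flagged[group] / total[group] for group in total}

# Invented audit data: a tool flags students with ADHD far more often.
sessions = ([("adhd", True)] * 30 + [("adhd", False)] * 70
            + [("non_adhd", True)] * 5 + [("non_adhd", False)] * 95)

rates = flag_rates_by_group(sessions)
print(rates)  # adhd sessions are flagged six times as often as non_adhd
```

In this invented data set the flag rate is 30% for one subgroup and 5% for the other; a certification process could require vendors to report such disparities and keep them below an agreed threshold.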
But the alliance isn't focused on the US market alone. It is engaged with organizations in Israel and Russia, with the EU EdTech Consortium, which represents all the EU countries, and with Education Alliance Finland, among others. The German Alliance for Education brings to the table representatives from about 100 groups, ranging from education ministries and companies to universities and schools.
Allen cautions that the process won't be quick. The alliance hopes to have working groups and a roadmap in place by the end of this year. Thereafter, it will give quarterly updates to the education community and the public about what is already measurable and what is coming, with some standards announced in 2022.
Jim Larimore is the Chief Officer for Equity in Learning at Riiid, where he leads strategy, programs and partnerships to leverage Riiid’s strengths in Artificial Intelligence (AI) to close gaps in educational opportunity, achievement and student success.