After stumbling into the kitchen and making coffee, I turned on the TV, where a CNBC host was barking orders at an Amazon Echo on set: “Alexa, what is the best-selling minivan?” The voice-activated digital assistant, the breakout gift of Christmas 2016, quickly replied that it was the Dodge Grand Caravan.
The Echo on TV and the one sitting next to my TV responded in unison. Then my Echo added, “I’ve added one to your shopping cart; I just need your PIN to complete the transaction.”
“WAIT, WAIT, WAIT,” I screamed, “Alexa, I don’t want a minivan; Alexa STOP!”
Welcome to the brave new world of voice as a primary user interface (UI).
A (Very Brief) History of Computer Interfaces
When I was in college, the interface was a punch card. In Fortran, an early programming language, we turned lines of code into punch cards, stacked them into big boxes, and dropped them off at the computer center, hoping we hadn’t missed a comma. A few hours later, a long printout on continuous perforated paper would spell out success or failure.
When my wife started her computer business, the interface was a keyboard and DOS software on a little orange screen on a heavy luggable Compaq computer.
My kids grew up with a mouse. For teens today, touch on a smart device is the primary UI. For my (nearly) two-year-old granddaughter, her primary interface will be her voice.
Voice: The New UI
Tech blogger Michael Wolf said, “The epicenter of new technology innovation has moved beyond mobile devices such as smartphones to the things and systems that surround us.”
This shift is bringing new interfaces to access those things, and the AI and machine learning layers are being built into and around those things.
Amazon is doing with voice and artificial intelligence what Apple did with apps. The Alexa Skills marketplace is the first scaled app store for voice and AI, with over 130 applications.
“Bots are an application – an application being most helpful if it is based upon a minimal level of artificial intelligence, and that particularly serves interaction purposes,” explained Thomas Wieberneit.
Bots, especially chatbots, are a UI technology that allow people to interact with websites using short text or voice messages. They are becoming ubiquitous on customer service websites, but it’s clear that some are more intelligent than others.
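At the less intelligent end of that spectrum, a chatbot can be little more than a keyword matcher over short text messages. Here is a minimal sketch in Python; every pattern and reply is invented for illustration and does not reflect any real product’s behavior:

```python
import re

# Hypothetical keyword rules: if any keyword appears in the user's
# message, the bot returns the canned reply paired with it.
RULES = [
    ({"hours", "open"}, "We're open 9am-5pm, Monday through Friday."),
    ({"order", "status"}, "Please share your order number and I'll look it up."),
    ({"refund", "return"}, "You can start a return from the Orders page."),
]

FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the first rule's reply whose keywords appear in the message."""
    words = set(re.findall(r"[a-z]+", message.lower()))  # strip punctuation
    for keywords, response in RULES:
        if keywords & words:  # any keyword match triggers the rule
            return response
    return FALLBACK
```

A bot like this answers `reply("What are your hours?")` correctly but falls back to an apology for anything off-script; the “more intelligent” bots layer language models or intent classifiers on top of the same short-message interaction loop.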
Right behind voice as a UI are gesture, eye tracking, and even brain waves, according to CB Insights.
Last year, 91 next-generation UI startups raised $526 million in funding, four times as many deals as in 2012.
While we’ve grown accustomed to banging away at keyboards, it’s clear that young people will interact with their technology in much more intimate ways.
Implications for Education
Access: “For students with motor skill limitations, physical disabilities, blindness/low vision or other difficulties accessing a standard keyboard and mouse, hands-free computing through the use of speech recognition technologies may be beneficial,” reported NCTI.
Reading: Speech recognition tools built into word processors allow students to see words on the screen as they dictate, building phonemic awareness.
Writing: Speech-to-text has been used to help struggling writers boost their writing production. Combined with writing feedback systems like WriteLab, speech-to-text and text-to-speech will help struggling writers improve the quality of their writing.
With more schools allowing students to use smart devices at school, it’s bound to get interesting with kids barking instructions into their phones.
For more, see:
- Bots & Big Cities: What Do They Mean for Our Kids?
- Robots & Implications For Life On Planet Earth
- Telepresence Robots: Connecting Online Students & Teachers
Stay in-the-know with all things EdTech and innovations in learning by signing up to receive the weekly Smart Update.