This month Stanford launched a 100-year study of AI (AI100) with a report: Artificial Intelligence and Life in 2030.
The 16-member study panel issuing the report sees increasingly useful applications of AI, with potentially profound positive impacts on our society and economy over the next decade.
The study identifies eight domains where AI is already having or is projected to have the greatest impact: transportation, healthcare, education, low-resource communities, public safety and security, employment and workplace, home/service robots, and entertainment.
New to the subject of AI and its uses in education? Check out this Pearson video (and our review of their report):
Let’s start with public safety and a few emerging AI applications.
Speech, Sight, Safety & Security
Google’s artificial intelligence company DeepMind announced a system that generates human-like speech. Called WaveNet, it marks an advance over existing speech synthesizers–and it can compose pretty good classical music. We can hope that virtual AI assistants like Siri or Cortana get better soon.
DeepMind’s AlphaGo recently beat the world champion of Go, a complex game played on a 19×19 board. Rather than being programmed with solutions, the machine learning program plows through millions of games and teaches itself winning strategies. Check out this video for more on DeepMind:
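To make the self-play idea concrete, here is a minimal sketch–not DeepMind’s actual approach, which combined deep neural networks with tree search–of a program that teaches itself tic-tac-toe purely by playing games against itself and learning which board positions tend to lead to wins:

```python
import random

# Toy self-play learner for tic-tac-toe (illustration only; AlphaGo used
# deep neural networks and Monte Carlo tree search on a far larger game).
# The board is a 9-character string; V estimates the chance that 'X' wins
# from each position the program has seen.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def train(episodes=5000, alpha=0.3, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    V = {}  # position -> learned estimate of X's winning chances
    for _ in range(episodes):
        board, history, player = [' '] * 9, [], 'X'
        while True:
            moves = [i for i, c in enumerate(board) if c == ' ']
            if rng.random() < epsilon:
                m = rng.choice(moves)  # explore a random move
            else:
                # exploit what was learned: X seeks high-value positions,
                # O seeks low-value ones
                def value_of(i):
                    board[i] = player
                    v = V.get(''.join(board), 0.5)
                    board[i] = ' '
                    return v
                m = (max if player == 'X' else min)(moves, key=value_of)
            board[m] = player
            history.append(''.join(board))
            w = winner(board)
            if w or ' ' not in board:
                # back up the final result through the positions visited
                target = 1.0 if w == 'X' else (0.0 if w == 'O' else 0.5)
                for state in reversed(history):
                    old = V.get(state, 0.5)
                    V[state] = old + alpha * (target - old)
                    target = V[state]
                break
            player = 'O' if player == 'X' else 'X'
    return V

V = train()
```

After a few thousand self-played games the value table covers most reachable positions, and the greedy player it defines blocks obvious threats–no human ever told it the rules of good play, only who won each game.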
AI can even predict appearance from DNA sequencing. Scientist Riccardo Sabatini says we have the power to read the genetic code, predicting things like height, eye color, and age–all from a vial of blood.
Using facial recognition and biometrics, AI can recognize human emotions, which could prove useful in suicide prevention and in aiding people on the autism spectrum.
The ability of smart machines to recognize faces and speech, and to carry on conversations, raises some ethical and economic issues.
Last month, we outlined seven other current applications of AI in safety and security:
- Detect and distinguish bad behaviors from good ones (Economic Times)
- Predict a defendant’s future criminality (Propublica)
- Quickly find security vulnerabilities (Defense One)
- Detect security anomalies using machine vision (IBM)
- Predictive models for crime to improve resource allocation (IBM)
- Autonomous aerial and undersea warfighting (Nextgov)
- Cruise missile guidance (Express)
Five big tech companies–Alphabet, Amazon, Facebook, IBM, and Microsoft–are working on a standard of ethics for AI applications, even in warfare. Let’s explore a couple of the questions they are likely to discuss.
Ethics & Economics of Recognition
Now that AI can recognize faces and cameras are everywhere, is there no longer any expectation of privacy in public places? Are we all a Person of Interest, with every move tracked by smart machines?
Will pervasive surveillance make us safer? Or, as Thomas Ricks suggests, will AI and smart swarms profoundly upset security assumptions in modern cities?
What biases will AI surveillance learn? What kind of sanctioned profiling will this lead to? Could a court order a computer to unlearn a profile?
What job clusters will grow as foreign call centers are closed and replaced by smart chatbots? What new centers of expertise will form around AI, marketing, and customer service?
How will speech recognition and conversational AI change language acquisition–or the need for language acquisition?
These developments make it a good time to #AskAboutAI.
For more see:
- Cause + Code: The New Impact Formula
- Artificial Intelligence is Reshaping Life On Earth: 101 Examples
- Batch of One: How AI and Robots Will Bring Manufacturing Home to the US
- 8 Ways Machine Learning Will Improve Education
- Machine Learning: The New Infrastructure for Everything
- Intelligence Unleashed: How Artificial Intelligence Will Improve Education
- AI and Push Learning for Student Guidance and Advisory
- What Learning Will Look Like in 2035?
Stay in-the-know with all things EdTech and innovations in learning by signing up to receive the weekly Smart Update.