Massive Global Benefit. Waves of Dislocation and Challenge. Time to #AskAboutAI.

Imagine technologies that will reduce drudgery, help to cure disease, make transportation cheaper and safer, and make energy more efficient. Artificial intelligence (AI) and related technologies are making all of that possible and more.

But a world of benefit will come at a steep price. There will be waves of job loss (different by sector and geography) and growing income inequality. New questions about biology, medicine and economics will vex policymakers.

To understand the opportunities and challenges, we are co-hosting community conversations about the implications of AI. We think it’s a good time for our #AskAboutAI series. To kick off the series, we invited Bay Area technologists, social scientists, philanthropists and educators to discuss three questions: What’s happening? What does it mean? How do we prepare?

1. What’s Happening? How are AI and enabling technologies like robotics changing sectors of the economy and society?

With such a diverse group, knowledge about current AI deployments varied. Mark Nitzberg, Executive Director of the Center for Human Compatible Artificial Intelligence at UC Berkeley, said that despite the hype about self-driving cars, technologists actually working on the technology don’t believe that fully autonomous cars–human-level or better in all driving situations–will be achieved anytime soon. (But UM students will ride driverless shuttles this fall.)

Timothy Melano, an IBM Research Staff member, said, “There are many exciting developments in public transportation and healthcare systems. Many researchers are focusing on building tools so that people without a computer science background can use AI tools.”

2. What does it mean? What are the social implications of AI–both short term and long term? The group identified an equal number of benefits and concerns:

Social Benefits of AI
  • Less repetition, more interesting jobs
  • Ability to do more with less
  • Safer travel
  • Better decision making
  • Accelerated discovery
  • More gender equity as empathy is valued

Social Concerns about AI
  • Waves of job loss
  • Need to upskill/reskill at scale
  • More income inequality
  • New issues that will stress civic systems
  • More surveillance, less privacy
  • Need for AI/media literacy

An educator noted that as every commercial and social platform relies more heavily on algorithmic feeds and responses, we all need to be more algorithmically aware of how our daily experience is shaped by tools that learn and reinforce what we like.

Ryan Panchadsaram, former U.S. Deputy Chief Technology Officer at the White House and now Partner at Kleiner Perkins Caufield & Byers, shared that “there is a positive emphasis on the growing ability to do more with less” and “the use of Artificial Intelligence and machine learning is allowing us to build, pilot and produce at a much faster rate.”

With the rapid development of AI tools comes the need for a national dialog about the values and ethics that will guide the regulation of artificial intelligence. Regulation is hard to get right, especially in this political climate. On one hand, “poorly informed regulation that stifles innovation would be a tragic mistake,” as noted in the 2016 AI100 report.

On the other hand, the government will need to scrutinize standards and technology developed by the private and public sector, and to craft regulations where necessary. As the AI100 report recommended, governments should encourage helpful innovation, generate and transfer expertise, and foster broad corporate and civic responsibility for addressing critical societal issues raised by these technologies.

3. How do we prepare? What should high school and college graduates know and be able to do? Participants outlined new learning priorities in five categories (with considerable overlap).

  • Social and emotional learning. Understand and manage emotions, set and achieve positive goals, feel and show empathy for others, establish and maintain positive relationships, and make responsible decisions (CASEL)
  • Deeper learning. Mastering rigorous academic content, learning how to think critically and solve problems, working collaboratively, communicating effectively, directing one’s own learning, and developing an academic mindset — a belief in one’s ability to grow (Hewlett). Friendly amendments on curiosity and creative confidence
  • Computational and design thinking. Building empathy, testing assumptions, and prototyping (check out this podcast with IDEO on design thinking as core pedagogy in schools)
  • Financial and entrepreneurial literacy. Basics of personal finance and starting, capitalizing and scaling a business.  
  • Expressing ideas (after exploring complex dimensions/issues). Visual and performing arts, philosophy, debate and writing.

It’s interesting that Hewlett focused its education work on deeper learning eight years ago; the rise of AI makes that focus more important than ever.

In a discussion of how to equip young adults for the automation economy, a few specific implementation challenges were identified:

  1. How do we map the likely impacts of AI by sector/geography? This dynamic mapping will help adapt career and technical education pathways around emerging job clusters.
  2. How can we describe learner expectations as competencies and experiences? A series of microcredentials would help develop an adaptable and stackable curriculum driven by individualized profiles.
  3. How do we support high impact design projects including wrangling data sets and applying open AI tools?
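As a minimal illustration of the third challenge, here is the kind of data wrangling a high-impact student design project might involve. This is a hypothetical sketch, not from the source: the dataset, column names and risk figures are invented for illustration, and pandas is assumed as the wrangling tool.

```python
# Hypothetical mini-project: summarize a small jobs dataset by sector.
# All data below is invented for illustration only.
import pandas as pd

jobs = pd.DataFrame({
    "sector": ["transport", "transport", "health", "health", "retail"],
    "role": ["driver", "dispatcher", "nurse", "medical coder", "cashier"],
    "automation_risk": [0.9, 0.4, 0.1, 0.5, 0.95],
})

# Average automation risk per sector, highest first -- the kind of
# sector/geography mapping question raised in challenge 1.
risk_by_sector = (
    jobs.groupby("sector")["automation_risk"]
        .mean()
        .sort_values(ascending=False)
)
print(risk_by_sector)
```

Even a toy exercise like this touches the core skills: loading data, grouping, and turning the result into a claim a student can defend.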

We’ll continue to explore what’s happening, what it means, and how to prepare in a series of regional convenings this year. If you’d like to co-host an #AskAboutAI event, contact Katie Vander Ark: [email protected]

Our #AskAboutAI campaign investigates the implications of AI on employment, education and ethics. For more, visit our series page.


Stay in-the-know with all things EdTech and innovations in learning by signing up to receive the weekly Smart Update. This post includes mentions of a Getting Smart partner. For a full list of partners, affiliate organizations and all other disclosures please see our Partner page.

Tom Vander Ark

Tom Vander Ark is the CEO of Getting Smart. He has written or co-authored more than 50 books and papers including Getting Smart, Smart Cities, Smart Parents, Better Together, The Power of Place and Difference Making. He served as a public school superintendent and the first Executive Director of Education for the Bill & Melinda Gates Foundation.
