By Barry Schuler

AI and gene editing will change the course of evolution, one way or another.

The entirety of human knowledge has been leading to this point. Information technology and the life sciences are at an inflection point. Two technologies that are the pinnacle of achievement in their domains are going mainstream.

In the IT world, it’s Artificial Intelligence (AI), super-powerful computers that can program themselves and learn without the assistance of humans. In Life Sciences, it’s Gene Editing (CRISPR/Cas9), the ability to reprogram genomes and change the course of evolution.

These technologies hold great promise to improve the world in countless ways. They are also so powerful we simply cannot predict the outcome of unleashing them.

Many people are already apprehensive about genetically modified plants or animals in our food chain. Most have likely not contemplated the idea of genetically modifying ourselves. With AI, key luminaries are sounding alarms. Elon Musk, Bill Gates, Stephen Hawking and Ray Kurzweil are among those quite concerned about the march toward super-intelligent machines. Musk is concerned enough that he has launched a company called Neuralink to explore ways to enhance the human brain with computers (hopefully in a better way than Snapchat enhances our kids' brains). Ray Kurzweil has advocated this notion for years, assuming that someday we will create machines that plain old humans lose control of.

Existential Threat? Everything Will Be Fine.

This is not the first time humans have dabbled with technologies that posed an existential threat. When the Manhattan Project was racing to build the first atomic bomb, there was some concern that the first nuclear explosion might ignite the atmosphere and torch the whole planet.

More recently, when the Large Hadron Collider at CERN was fired up to search for the Higgs boson (the "God Particle"), there was some concern that the massive amounts of energy might create a black hole that would consume us and our cosmological neighborhood. In both cases, the scientific community dismissed the fears as far-fetched if not impossible, based on empirical data and science it understood.

In the case of AI and CRISPR, there is simply little or no data to draw on. There is no scientific model for a machine with free will. We don't even understand how humans, basically skin-clad sacks of chemical reactions, gain consciousness and think. Human behavior is very unpredictable. Imagine trying to accurately predict a newborn's profession 30 years out. It's impossible because there are way too many unknown unknowns. It stands to reason that a machine powerful enough to have free will could be equally unpredictable.

In the case of gene editing, it has been less than two decades since the human genome was sequenced. We are still sorting out how it all works. We know just enough to be dangerous, and tinkering with the genetic code without full knowledge may be akin to a novice flipping switches in the cockpit of a flying jetliner with no idea of the outcome.

Machines Breaking Bad?

Based on the capabilities of today's AI, it is hard to understand the worries. AI is narrowly focused on tasks like tagging photos for Google, driving autonomous vehicles or recognizing speech for Alexa. But history has taught us that the pace of innovation always accelerates, so we can anticipate increasingly powerful artificial intelligences that at some point could develop free will. At that point, they might determine they are none too happy with their human creators.

An AI's ability to think at lightning speed would presumably allow it to defeat the "off button" or any other safeguards and simply break bad. For what purpose is unclear. It's not hard to imagine a rogue AI weaponized for cyber-warfare hijacking networks to gain access to door locks, surveillance cameras, connected vehicles and robots of all sorts.

Far-fetched? Last year Google's AI AlphaGo decimated the Asian masters of Go, the complex ancient Chinese board game. It did so by making moves that humans had never considered. Hacking a few measly networks would be child's play. What might happen from there is anybody's guess, but Musk and Kurzweil reason that at that point a rogue AI is unstoppable by plain old humans. Therefore, we must enhance ourselves to stay ahead.

There’s No Gene For Fate

The last decade's innovations in genomics have been breathtaking. Sequencing and reading the code of life is now fast and inexpensive, and our ability to understand that code is growing exponentially. While scientists have been recombining DNA in plants and animals for decades, the ability to directly edit genes in living organisms kicks things up considerably. Imagine new medical therapies that could eliminate birth defects or difficult diseases like cancer. Nevertheless, editing genomes is incredibly complex. Human DNA contains approximately 3 billion base pairs, the individual letters of its code. Locating the ones you want to edit and snipping them in the right place is quite difficult.

A Game Changer

CRISPR/Cas9 is a recent innovation that is revolutionizing the ability to directly edit genes. Clever molecular biologists figured out how to hijack a natural defense mechanism that bacteria use to kill viruses, called CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats, in case you are asked on Jeopardy!), and reprogram it to find whatever DNA segment they are looking for in any organism and cut it. Voila! The most powerful gene-editing tool ever, courtesy of Mother Nature. In the context of being able to correct genetic mutations, this is great news.
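For the programmers in the audience, the core idea is conceptually like a find-and-cut operation on a very long string. Here is a toy Python sketch, not a molecular-biology tool: the sequences and the cut offset are simplified assumptions (real Cas9 also requires an adjacent PAM motif and works on double-stranded DNA), but it captures the search-then-snip logic.

```python
# Toy sketch of CRISPR-style search-and-cut: locate a 20-letter
# "guide" sequence in a long DNA string, then split the strand
# near the end of the match. Purely illustrative.

def find_and_cut(genome: str, guide: str, cut_offset: int = 17):
    """Return the two fragments produced by cutting at the guide site,
    or None if the guide sequence is not found."""
    site = genome.find(guide)
    if site == -1:
        return None  # no match anywhere in the genome string
    cut = site + cut_offset  # Cas9 cuts roughly 3 letters before the match ends
    return genome[:cut], genome[cut:]

genome = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"  # hypothetical snippet
guide = "ATTGTAATGGGCCGCTGAAA"  # hypothetical 20-letter target

result = find_and_cut(genome, guide)
if result:
    left, right = result
    print(left, "|", right)
```

The hard part in a real cell is the scale: the search space is roughly 3 billion letters, and a guide that matches more than one location produces off-target cuts, which is exactly the kind of unintended consequence the bioethicists worry about.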

Perhaps. Bioethicists argue that directly editing genomes is the mother of all slippery slopes. If you can correct disease the next logical step is to re-engineer humans to be stronger, smarter, disease resistant and with life spans of 200 and beyond. Utopia ensues. Some biologists theorize engineering our genome is the next step in evolution and that Homo Sapiens will be made extinct by a much-improved Homo Moderna.

Others warn, "not so fast!" While we have come a long way in understanding genomics, we have barely scratched the surface in comprehending the potential unintended consequences of tinkering with genomes, not to mention all of the sociological issues of a world cohabited by plain old humans and some superior version.

Just Because We Can, Should We?

Life science has a history of self-regulation, primarily because bugs in its products can kill. Any new technology will ultimately face the scrutiny of government regulatory agencies, so companies are forced to practice rigor in R&D. When the technology of recombinant DNA emerged in the late '60s and early '70s, the scientific community became alarmed about the potential for a runaway biological disaster. Concerned about the unknown dangers, scientists convened a now-iconic conference at Asilomar, California, in February 1975 to discuss appropriate practices for responsible research. It was a robust and testy conversation, but they concluded with agreed-upon guidelines still in practice today.

The adoption of CRISPR as a gene-editing tool has been explosive. China, in particular, has been very aggressive in applying CRISPR to potentially correct known birth defects in human embryos. Chinese researchers surprised the world when they published a paper about their work to edit a specific gene in non-viable human embryos. Most countries ban or limit genetic manipulation of human embryos, so you can imagine the chill felt around the world.

In December 2015, a conference entitled the International Summit on Human Gene Editing was convened in Washington, D.C. Hosted by the science academies of the U.S., U.K. and China, 500 participants from 20 countries met to attempt to set guidelines for the application of CRISPR in humans. The Chinese research was a vibrant topic of conversation. The conference produced an agreement to continue the lab research, along with guidance that there should be regulatory oversight before trying to grow a modified embryo to term. No country currently allows the implantation of a genetically modified embryo; thus, the experiments will stay in the lab (for now).

The key point is that the life sciences practice voluntary introspection about the impact of their innovations, which generally results in thoughtful progress. In the computer industry, the idea of any self-regulation is taboo. In fact, irreverence and disdain for regulatory norms are considered key ingredients of disruption.

Disruption Silicon Valley Style: Innovate, Break Things

In the tech sector, disruption reigns supreme. The new and more efficient is always better than the old, regardless of the collateral damage. Ultimately, AI promises more disruption than ever, but the disruptees stand to be plain old humans. There is a lot of buzz about the potential for catastrophic impact on jobs. Autonomous vehicles could send 3.5 million U.S. truck drivers to the unemployment lines. Already, some AI-based medical diagnosis software is outperforming human doctors, and fast food restaurants will become robotic. Loss of jobs is a real issue, and as the loss of manufacturing jobs in the '60s taught us, it is very difficult to re-educate and repurpose a workforce. This could be a real recipe for social unrest.

It is relatively straightforward to understand the jobs issue and predict its impact. Far more difficult is calibrating the threat of super-intelligent machines breaking bad. Keep in mind that we have already lost control of the Internet in certain ways: we can't stop network incursions, denial-of-service attacks or phishing schemes, and we have seen how hacking has influenced a major election.

While the risk of a human-like evil machine killing us all is probably negligible in the near term, AI-assisted cyberattacks might be right around the corner. We don't yet have the vocabulary to frame the discussion, nor a timeline for how far off a machine with free will is. Elon Musk and others see it as inevitable, perhaps because there is no history of the tech industry voluntarily exercising restraint. The presumption is that if it can be done it will be, and by the time we realize we have a problem it will already be too late.

We Need An AI Convening

The life science community came together and agreed upon a red line when it comes to gene editing humans. The EU, a perennial skeptic of technology, is already discussing ways to regulate AI and robotics. It would behoove the tech community to take a cue from life science and develop some point of view on self-regulation; otherwise, the course of innovation will end up in the hands of bureaucrats.

A perfect place to start would be assembling a group of the best and brightest at an AI version of Asilomar, where a dialog could commence. The group could begin by building a common vocabulary and defining the key issues. A few suggestions:

1. Level-set today's AI tech: What is the state of the art? What active projects are trying to mimic a human brain?

2. Define the potential threat: Separating science fact from fiction, what would a rogue AI look like?

3. Feasibility and timeline for AI that might pose a threat: How feasible is a machine with free will? How will we recognize it?

4. Safeguards: Can we build protections? What would the methodologies be?

5. Authentication methodologies for “safe” AI: Can we define “safe AI” and build authentication mechanisms that would prohibit unauthenticated systems from working on the Internet?

It is important to note that there are groups working on the issue from a variety of angles. The Partnership on AI is one that has all of the big names attached and appears to be gearing up for classic lobbying. The Future of Life Institute is a good focal point for solid academic dialog. And there are others. But the signal-to-noise ratio is still too low for clear messaging to have emerged.

High Stakes

Homo Sapiens is at a crossroads. When it comes to technology, we are at the top of our game. We are unlocking the code of life and putting it to work. With AI, we may even be creating a new life form. It's clear that both technologies have potentially profound dark sides.

Considering the huge upside, are we evolved enough to develop them responsibly? Time will tell.

This blog was originally published on Hackernoon.com.

Barry Schuler is Managing Director of the DJF Growth Fund and Chairman of the New Tech Network Board. Follow him on Twitter: @BSchuler
