It’s not just the elephant in the room, it’s the elephant in the universe — and the elephant is still a newborn. I’m talking about the development of artificial intelligence and how we can be prepared for what happens when, as M.I.T. professor Max Tegmark put it, “machines outsmart us at all tasks.” The need to have this conversation, “the most important conversation of our time,” is the subject of his book, Life 3.0: Being Human in the Age of Artificial Intelligence, which I’ve just finished. It’s one of those books you not only can’t put down, but that makes you instantly call up your friends and harangue them into reading it.
Tegmark is a physicist, cosmologist, scientific director of the Foundational Questions Institute, and co-founder of the Future of Life Institute. And his new book should be required reading for anybody who cares about technology or the future, which is to say, everybody.
Though A.I. is a hot topic right now, far too much of the discussion is about robot vacuums, self-driving cars, or being able to dim the lights with a voice command. And while that’s all great (I love videos of cats riding on Roombas as much as the next person), the problem with our conversation about A.I. is the same as it is with our conversation about technology in general: we are missing the big picture. That is, we’re not asking the questions of what it truly means to be human, of what is sacred and irreducible about our humanity, and how to redraw and protect the borders of that humanity as technology is mounting a full-scale invasion.
As if this weren’t enough, the resources we need in order to rise to this challenge — wisdom, creativity, intuition, reflection and thoughtful decision-making — are the very things we’re losing access to with our addiction to our screens and devices.
Science continues to explain more and more of our external, and even internal, existence, but the accelerating progress of artificial intelligence should force us to clarify what it means to be human.
And that’s because the new advances are different from anything that’s come before. The rise of A.I., and the increasing and overwhelming hyper-connectivity of our daily lives, have the potential to erode our humanity in unprecedented ways. In fact, it’s already happening — our addiction to our phones and our screens, allowing them into every part of our lives, is changing how we interact with each other and with ourselves. As Thích Nhất Hạnh, the renowned Vietnamese Buddhist monk, put it, “it has never been easier to run away from ourselves.” A 2015 study from Microsoft found that the human attention span now drops off after about eight seconds (about one second less than that of a goldfish). And studies have also found that the presence of a phone in social interactions degrades the quality of the conversation and lowers the level of empathy people feel for each other. A 2015 Pew Research study found that 89 percent of mobile phone owners had used their phone in their last social encounter, and 82 percent said it damaged the interaction.
Our technology allows us to do amazing things, but it’s also accelerated the pace of our lives beyond our capacity to keep up. As Isaac Asimov wrote in 1988, “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”
And with the advances in A.I. that are right around the corner, we’re going to need all the wisdom we can get. It’s easy to caricature those sounding the alarms about A.I. as being, well, alarmist. But it becomes harder when you realize that many of them are among the most visionary voices in science and technology. Like Stephen Hawking, who told The BBC that “the development of full artificial intelligence could spell the end of the human race.” Or Bill Gates, who said he’s “in the camp that is concerned about super intelligence” and doesn’t “understand why some people are not concerned.”
And then there’s Elon Musk, who certainly can’t be called a Luddite. In 2014, he warned that “with artificial intelligence we are summoning the demon,” and said that “if I were to guess what our biggest existential threat is, it’s probably that.”
This is why in 2015 Musk donated $10 million to Tegmark’s Future of Life Institute to help assure that A.I. is developed in a safe way. “Here are all these leading A.I. researchers saying that A.I. safety is important,” he said at the time. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”
What’s fascinating about the debate about artificial intelligence is that it isn’t just about the threat A.I. potentially represents to humanity, but — a much more interesting and consequential debate — about what it actually means to be human.
If humans were simply intelligent machines, they could be seamlessly blended with the most intelligent of artificial intelligence with nothing essential lost. But if there is something unique and ineffable about being human, if there is such a thing as a soul, an inner essence, a consciousness beyond our minds, becoming more and more connected with that self — which is also what truly connects us with others — is what gives meaning to life. And it’s also what ultimately determines why technological progress decoupled from wisdom is so dangerous to our humanity. As Yuval Harari wrote in Homo Deus, “technological progress has a very different agenda. It doesn’t want to listen to our inner voices, it wants to control them. We’ll give Ritalin to the distracted lawyer, Prozac to the guilty soldier and Cipralex to the dissatisfied wife. And that’s just the beginning.”
So A.I. is — or should be — forcing us to think seriously about what it is to be human. And then to take steps to protect our humanity from the onslaught of technology in every aspect of our lives, as we become increasingly addicted to our smartphones and all our ubiquitous screens.
If the debate is won by those who believe that human beings are nothing more than the product of biochemical algorithms, does it really matter if we are reduced to, as Harari put it, “useless bums who pass their days devouring artificial experiences in lala land”? Or, for that matter, to measuring our self-worth by the number of likes on Instagram, or the length of our Snapstreaks on Snapchat?
Part of our wish list for our lives and our future should be disentangling wisdom from intelligence. In our era of Big Data and algorithms, they’re easy to conflate. But the truth is that we’re drowning in data and starved for wisdom. As Harari put it, “in the past censorship worked by blocking the flow of information. In the twenty-first century, censorship works by flooding people with irrelevant information… In ancient times having power meant having access to data. Today, having power means knowing what to ignore.”
As we’re flooded with more and more data and more and more distractions, and as artificial intelligence grows more intelligent, it’s essential that we appreciate and protect separate and innately human qualities like wisdom and wonder. In contrast with intelligence, Tegmark writes, “the future of consciousness is even more important, since that’s what enables meaning.” He goes on to contrast sapience, or “the ability to think intelligently,” with sentience, “the ability to subjectively experience qualia,” which he earlier defines as “the basic building blocks of consciousness such as the redness of a rose, the sound of a cymbal, the smell of a steak, the taste of a tangerine or the pain of a pinprick.” Up until now, he writes, “we humans have built our identity on being Homo sapiens, the smartest entities around.” But “as we prepare to be humbled by ever smarter machines,” he urges us to “rebrand ourselves as Homo sentiens.”
This view is echoed by Stuart Russell, a computer scientist at the University of California, Berkeley, and also the co-author of one of the seminal artificial intelligence textbooks. “As if somehow intelligence was the thing that mattered and not the quality of human experience,” he said. “I think if we replaced ourselves with machines that as far as we know would have no conscious existence, no matter how many amazing things they invented, I think that would be the biggest possible tragedy.”
Of course, there are some who believe we are nothing but machines, and that to even bring up the idea that there’s something unique or sacred about humans or human consciousness is somehow anti-science. But science and qualities like awe and wonder — which have often gone hand-in-hand with scientific discovery — aren’t antithetical. They have co-existed for millennia. Here is how the astrophysicist Neil deGrasse Tyson described it: “When I say spiritual I am referring to a feeling you would have that connects you to the universe in a way that it may defy simple vocabulary,” he said. “We think of spirituality as an intellectual playground but the moment you learn something that touches an emotion rather than just something intellectual, I would call that a spiritual encounter with the universe.”
And it’s that kind of encounter that has the potential to be lost if we don’t take the warnings about technology and humanity seriously. And, unlike with other technological advances, with A.I. there might not be a chance to see what the problems are and then address them as they come. “When we got fire and messed up with it, we invented the fire extinguisher,” Tegmark said. “When we got cars and messed up, we invented the seat belt, airbag, and traffic light. But with nuclear weapons and A.I., we don’t want to learn from our mistakes. We want to plan ahead.”
Unlike with A.I., the threat of nuclear weapons was for obvious reasons very tangible and was taken seriously from very early on — with commissions, public debate, treaties, etc. That threat still exists, but nobody considers it alarmist to raise ethical questions about it.
In January of 2017, Tegmark organized a conference at Asilomar, California, devoted to A.I. safety and what he calls “Technological Stewardship.” The participants, a who’s who of the tech and A.I. world (including Hawking and Russell), came up with 23 principles, including:
- The goal of A.I. research should be to create not undirected intelligence, but beneficial intelligence.
- Humans should choose how and whether to delegate decisions to A.I. systems, to accomplish human-chosen objectives.
- Advanced A.I. could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
- Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
And the rest of us should join the debate. But to do that, we’ll need to deal with our addiction to our devices, since the blurring of the lines we need to reestablish has already begun. “We’re already cyborgs,” Musk said. “Your phone and your computer are extensions of you, but the interface is through finger movements or speech, which are very slow.”
The way to ensure a safe, beneficial and healthy relationship with technology is to begin by taking control of that relationship right now, when the technology is much more manageable. Or, as Tegmark put it, “one of the best ways for you to improve the future of life is to improve tomorrow.” We can be role models, he says, but we have to choose which sort of role model we want to be: “Do you want to be someone who interrupts all their conversations by checking their smartphone or someone who feels empowered by using technology in a planned or deliberate way? Do you want to own your technology or do you want your technology to own you? What do you want it to mean to be human in the age of A.I.?”
He urges us to have this discussion with everyone around us: “Our future isn’t written in stone and just waiting to happen to us. It’s ours to create. Let’s create an inspiring one together!”