“The future of life with Artificial Intelligence,” says Max Tegmark, is “the most important conversation of our time.” I’ve heard similar claims a couple of times recently from other people, including Stephen Hawking and Bill Gates. I know very little about artificial intelligence (AI), but if people far smarter than me say it’s important, I figured I ought to read a bit about it. Hence Max Tegmark’s new book Life 3.0: Being Human in the Age of Artificial Intelligence.
The ‘Life 3.0’ here needs a little explanation, so let me quickly outline the three levels of life that it refers to.
- Life 1.0 is the simple biological life of plants and smaller animals. It can survive and replicate, but that’s about it.
- Life 2.0 is a more adaptable form that can learn new skills, update its views of the world, and create culture. It can ‘design its software’, as Tegmark puts it. Humans are life 2.0.
- Finally, Life 3.0 can change both its software and its hardware. This technological form of life would be able to learn and develop, but it could also design its own physical form. It doesn’t exist yet, but it may be on its way.
We haven’t got any further than the title, and I have questions. But let me press on for now and we’ll come back to some of them later.
Max Tegmark is an MIT physics professor and founder of the Future of Life Institute. Life 3.0 is an introduction to the field of AI, a chance to deal with some misconceptions, and an invitation to discuss it. It’s also a safari into some of the biggest possible questions, from what it is to be human, to the end of the universe, to the existence of purpose and beauty.
All of these questions are responses to the possibility of superintelligence. The way it works is this: so far, we have been able to build computers and then progressively improve them. We can create artificial intelligence that can learn and improve when set specific challenges – such as winning at chess. As computer intelligence advances, there will come a point when we can create a computer that can learn and improve itself. It will be able to develop its own programming, refining its own information processing capacity faster than any team of human developers could ever manage. It would rapidly become vastly smarter than human beings, a hugely powerful superintelligence. Let loose on the internet, the AI could make money, pay people to do things it needed, and eventually do almost anything it wanted.
But what does an AI want? Does it even have desires or goals? Tegmark devotes many pages to these questions, because that’s the key issue. Contrary to popular sci-fi tropes, the real fear isn’t machines becoming conscious or evil, but machines becoming competent. At that point it’s all about their goals. “The more intelligent and powerful machines get, the more important it becomes that their goals are aligned with ours.”
Get it right, and we could potentially set that AI to solving every global problem you care to mention. It would invent new technologies for us, cure diseases, and in Tegmark’s imagination, cast intelligent life across the universe for billions of years into the future. Get it wrong, and there’s no reason why a superintelligent machine would consider our needs, any more than we would consider the life priorities of a flea.
Does that make it the most important issue of our time? I can see how Silicon Valley billionaires and AI developers look at the potential stakes and argue that it is. Except that superintelligent AI doesn’t exist. It may never exist, and if it ever does, it will be because Silicon Valley billionaires and AI researchers created it. They could also choose not to create it. Issues such as climate change, poverty and inequality are problems right now, not in a theoretical future. Isn’t it all a little academic at this point?
I also want to argue with Tegmark’s definition of life, which he keeps simple and mechanistic – if it can “retain its complexity and replicate”, it’s alive. Under this definition, AI is alive and could potentially be given rights – the right to exist, to own property, etc. If we were to keep a superintelligent computer for our own purposes, it would be tantamount to slavery. But then, some computer viruses would fit this basic description of life, and I reserve the right to delete them.
Technological life is something that James Lovelock dipped into in his book A Rough Ride to the Future, seeing it as a next step in evolution. Personally, I think it’s very flattering to humanity to imagine that we could give birth to a race of superintelligent machines that live on for billions of years in space. It casts us as the gods in a new creation story, but in the process, it risks devaluing the natural world that we already have. Tegmark’s is a philosophy almost totally disconnected from nature, a digitised vision of the future that transcends the dirt and blood of actual biological life, confined as it is within the natural cycles of the planet we call home. Why care about the earth if we can live in space forever, running off power stations that harvest energy from black holes?
It may sound like I have some fairly irreconcilable disagreements with Tegmark over his theories, but I suspect he’d welcome that. I get the impression he’s more interested in getting people talking than pushing a particular agenda. Above all, the book is an invitation to join a conversation about AI, and on that level it succeeds spectacularly. It’s written with enthusiasm, courage and curiosity. Tegmark is not afraid of sounding a little crazy, which I applaud. There are thought experiments, diagrams and summaries of key arguments. It’s conversational and constantly throws questions back at the reader: what do you think? What’s best? Where would you draw the line? Life 3.0 is actually really good fun, and it has more extraordinary ideas per page than any book I can remember.