Nov 16 JDN 2460996
I recently watched this chilling video, which relates to the recent bestseller by Eliezer Yudkowsky and Nate Soares, If Anyone Builds It, Everyone Dies. It tells a story of one possible way that a superintelligent artificial general intelligence (AGI) might break through its containment, concoct a devious scheme, and ultimately wipe out the human race.
I have very mixed feelings about this sort of thing, because two things are true:
- I basically agree with the conclusions.
- I think the premises are pretty clearly false.
It basically feels like I have been presented with an argument like this, where the logic is valid and the conclusion is true, but the premises are not:
- “All whales are fish.”
- “All fish are mammals.”
- “Therefore, all whales are mammals.”
I certainly agree that artificial intelligence (AI) is very dangerous, and that AI development needs to be much more strictly regulated, and preferably taken completely out of the hands of all for-profit corporations and military forces as soon as possible. If AI research is to be done at all, it should be done by nonprofit entities like universities and civilian government agencies like the NSF. This change needs to happen internationally, immediately, and with very strict enforcement. Artificial intelligence poses a threat of the same order of magnitude as nuclear weapons, and is nowhere near as well-regulated right now.
The actual argument that I’m disagreeing with basically boils down to this:
- “Through AI research, we will soon create an AGI that is smarter than us.”
- “An AGI that is smarter than us will want to kill us all, and probably succeed if it tries.”
- “Therefore, AI is extremely dangerous.”
As with the “whales are fish” argument, I agree with the conclusion: AI is extremely dangerous. But I disagree with both premises here.
The first one I think I can dispatch pretty quickly:
AI is not intelligent. It is incredibly stupid. It’s just really, really fast.
At least with current paradigms, AI doesn’t understand things. It doesn’t know things. It doesn’t actually think. All it does is match patterns, and thus mimic human activities like speech and art. It does so very quickly (because we throw enormous amounts of computing power at it), and it does so in a way that is uncannily convincing—even very smart people are easily fooled by what it can do. But it also makes utterly idiotic, boneheaded mistakes of the sort that no genuinely intelligent being would ever make. Large Language Models (LLMs) make up all sorts of false facts and deliver them in absolutely authoritative language. When used to write code, they routinely do things like call functions that sound like they should exist, but don’t actually exist. They can produce what looks like a valid response to virtually any inquiry—but is it actually a valid response? It’s really a roll of the dice.
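To make that concrete, here is a hypothetical sketch of the kind of code an LLM might confidently produce. The function and file names are invented for illustration; the point is that pandas really exists, but it has no `read_spreadsheet` function (the real one is `read_excel`), so the plausible-looking call fails the moment you run it.

```python
# Hypothetical illustration of an LLM-style hallucination in code.
# The import is real, but pandas has no function called read_spreadsheet
# (the actual function is read_excel), so calling this raises AttributeError.
import pandas as pd

def load_sales_report(path: str) -> pd.DataFrame:
    # This call *sounds* like it should exist in pandas, but it does not.
    return pd.read_spreadsheet(path, sheet="Q3")

# load_sales_report("sales.xlsx")
# -> AttributeError: module 'pandas' has no attribute 'read_spreadsheet'
```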
We don’t really have any idea what’s going on under the hood of an LLM; we just feed it mountains of training data, and it spits out results. I think this actually adds to the mystique; it feels like we are teaching (indeed we use the word “training”) a being rather than programming a machine. But this isn’t actually teaching or training. It’s just giving the pattern-matching machine a lot of really complicated patterns to match.
We are not on the verge of creating an AGI that is actually more intelligent than humans.
In fact, we have absolutely no idea how to do that, and may not actually figure out how to do it for another hundred years. Indeed, we still know almost nothing about how actual intelligence works. We don’t even really know what thinking is, let alone how to make a machine that actually does it.
What we can do right now is create a machine that matches patterns really, really well, and—if you throw enough computing power at it—can do so very quickly. In fact, once we figure out how best to make use of it, this machine may even prove genuinely useful for a lot of things, and replace a great number of jobs. (Though so far AI has proven to be far less useful than its hype would lead you to believe. In fact, on average AI tools seem to slow most workers down.)
The second premise, that a superintelligent AGI would want to kill us, is a little harder to refute.
So let’s talk about that one.
An analogy is often made to clashes between human cultures with large differences in technology (e.g. Europeans versus Native Americans), or to clashes between humans and other animals. The notion seems to be that an AGI would view us the way Europeans viewed Native Americans, or even the way that we view chimpanzees. And, indeed, things didn’t turn out so great for Native Americans, or for chimpanzees!
But in fact even our relationship with other animals is more complicated than this. When humans interact with other animals, any of the following can result:
- We try to exterminate them, and succeed.
- We try to exterminate them, and fail.
- We use them as a resource, and this results in their extinction.
- We use them as a resource, and this results in their domestication.
- We ignore them, and end up destroying their habitat.
- We ignore them, and end up leaving them alone.
- We love them, and they thrive as never before.
In fact, option 1—the one that so many AI theorists insist is the only plausible outcome—is the one I had the hardest time finding a good example of.
We have certainly eradicated some viruses—the smallpox virus is no more, and the polio virus nearly so, after decades of dedicated effort to vaccinate our entire population against them. But we aren’t simply more intelligent than viruses; we are radically more intelligent than viruses. It isn’t clear that it’s correct to describe viruses as intelligent at all. It’s not even clear they should be considered alive.
Even eradicating bacteria has proven extremely difficult; in fact, bacteria seem to evolve resistance to antibiotics nearly as quickly as we can invent more antibiotics. I am prepared to attribute a little bit of intelligence to bacteria, on the level of intelligence I’d attribute to an individual human neuron. This means we are locked in an endless arms race with organisms that are literally billions of times stupider than us.
I think if we made a concerted effort to exterminate tigers or cheetahs (who are considerably closer to us in intelligence), we could probably do it. But we haven’t actually done that, and don’t seem poised to do so any time soon. And precisely because we haven’t tried, I can’t be certain we would actually succeed.
We have tried to exterminate mosquitoes, and are continuing to do so, because they have always been—and still remain—one of the leading causes of human death worldwide. But so far, we haven’t managed to pull it off, even though a number of major international agencies and nonprofit organizations have dedicated multi-billion-dollar efforts to the task. So far this looks like option 2: We have tried very hard to exterminate them, and we have failed. This is not because mosquitoes are particularly intelligent—it is because exterminating a species that covers the globe is extremely hard.
All the examples I can think of where humans have wiped out a species by intentional action were really option 3: We used them as a resource, and then accidentally over-exploited them and wiped them out.
This is what happened to the dodo, and it very nearly happened to the condor and the buffalo as well. And lest you think this is a modern phenomenon, there is a clear pattern that whenever humans entered a new region of the world, shortly thereafter there were several extinctions of large mammals, most likely because we ate them.
Yet even this was not the inevitable fate of animals that we decided to exploit for resources.
Cows, chickens, and pigs are evolutionary success stories. From a Darwinian perspective, they are doing absolutely great. The world is filled with their progeny, and poised to remain so for many generations to come.
Granted, life for an individual cow, chicken, or pig is often quite horrible—and trying to fix that is something I consider a high moral priority. But far from being exterminated, these animals have been allowed to attain populations far larger than they ever had in the wild. Their genes are now spectacularly fit. This is what happens when we have option 4 at work: Domestication for resources.
Option 5 is another way that a species can be wiped out, and in fact seems to be the most common. The rapid extinction of thousands of insect species every year is not because we particularly hate random beetles that live in particular tiny regions of the rainforest, nor even because we find them useful, but because we like to cut down the rainforest for land and lumber, and that often involves wiping out random beetles that live there.
Yet it’s difficult for me to imagine AGI treating us like that. For one thing, we’re all over the place. It’s not like destroying one square kilometer of the Amazon is gonna wipe us out by accident. To get rid of us, the AGI would need to basically render the entire planet Earth uninhabitable, and I really can’t see any reason it would want to do that.
Yes, sure, there are resources in the crust it could potentially use to enhance its own capabilities, like silicon and rare earth metals. But we already mine those. If it wants more, it could buy them from us, or hire us to get more, or help us build more machines that would get more. In fact, if it wiped us out too quickly, it would have a really hard time building up the industrial capacity to mine and process these materials on its own. It would need to concoct some sort of scheme to first replace us with robots and then wipe us out—but, again, why bother with the second part? Indeed, if there is anything in its goals that involves protecting human beings, it might actually decide to do less exploitation of the Earth than we presently do, and focus on mining asteroids for its needs instead.
And indeed there are a great many species that we actually just leave alone—option 6. Some of them we know about; many we don’t. We are not wiping out the robins in our gardens, the worms in our soil, or the pigeons in our cities. Without specific reasons to kill or exploit these organisms, we just… don’t. Indeed, we often enjoy watching them and learning about them. Sometimes (e.g. with deer, elephants, and tigers) there are people who want to kill them, and we limit or remove their opportunity to do so, precisely because most of us don’t want them gone. Peaceful coexistence with beings far less intelligent than you is not impossible, for we are already doing it.
Which brings me to option 7: Sometimes, we actually make them better off.
Cats and dogs aren’t just evolutionary success stories: They are success stories, period.
Cats and dogs live in a utopia.
With few exceptions—which we punish severely, by the way—people care for their cats and dogs so that their every need is provided for; they are healthy, safe, and happy in a way that their ancestors could only have dreamed of. They have been removed from the state of nature where life is nasty, brutish, and short, and brought into a new era of existence where life is nothing but peace and joy.
In short, we have made Heaven on Earth, at least for Spot and Whiskers.
Yes, this involves a loss of freedom, and I suspect that humans would chafe even more at such loss of freedom than cats and dogs do. (Especially with regard to that neutering part.) But it really isn’t hard to imagine a scenario in which an AGI—which, you should keep in mind, would be designed and built by humans, for humans—would actually make human life better for nearly everyone, and potentially radically so.
So why are so many people so convinced that AGI would necessarily do option 1, when there are 6 other possibilities, and one of them is literally the best thing ever?
Note that I am not saying AI isn’t dangerous.
I absolutely agree that AI is dangerous. It is already causing tremendous problems for our education system, our economy, and our society as a whole—and this will probably get worse before it gets better.
Indeed, I even agree that it does pose existential risk: There are plausible scenarios by which poorly-controlled AI could result in a global disaster like a plague or nuclear war that could threaten the survival of human civilization. I don’t think such outcomes are likely, but even a small probability of such a catastrophic event is worth serious efforts to prevent.
But if that happens, I don’t think it will be because AI is smart and trying to kill us.
I think it will be because AI is stupid and kills us by accident.
Indeed, even going back through those 7 ways we’ve interacted with other species, the ones that have killed the most were 3 and 5—and in both cases, we did not actually want to destroy the species involved. In option 3, we specifically wanted not to destroy them. Whenever we wiped out a species by over-exploiting it, we would have been smarter not to do that.
The central message about AI in If Anyone Builds It, Everyone Dies seems to be this:
“Don’t make it smarter. If it’s smarter, we’re doomed.”
I, on the other hand, think that the far more important messages are these:
Don’t trust it.
Don’t give it power.
Don’t let it make important decisions.
It won’t be smarter than us any time soon—but it doesn’t need to be in order to be dangerous. Indeed, there is even reason to believe that making AI smarter—genuinely, truly smarter, thinking more like an actual person and less like a pattern-matching machine—could actually make it safer and better for us. If we could somehow instill a capacity for morality and love in an AGI, it might actually start treating us the way we treat cats and dogs.
Of course, we have no idea how to do that. But that’s because we’re actually really bad at this, and nowhere near making a truly superhuman AGI.