Everyone includes your mother and Los Angeles

Apr 28 JDN 2460430

What are the chances that artificial intelligence will destroy human civilization?

A bunch of experts were surveyed on that question and similar questions, and half of respondents gave a probability of 5% or more; some gave probabilities as high as 99%.

This is incredibly bizarre.

Most AI experts are people who work in AI. They are actively participating in developing this technology. And yet more than half of them think that the technology they are working on right now has a more than 5% chance of destroying human civilization!?

It feels to me like they honestly don’t understand what they’re saying. They can’t really grasp at an intuitive level just what a 5% or 10% chance of global annihilation means—let alone a 99% chance.

If something has a 5% chance of killing everyone, we should consider it at least as bad as something that is guaranteed to kill 5% of people.

Probably worse, in fact, because you can recover from losing 5% of the population (we have, several times throughout history). But you cannot recover from losing everyone. So really, it’s like losing 5% of all future people who will ever live—which could be a very large number indeed.

But let’s be a little conservative here, and just count the people who currently exist, and use 5% of that number.

5% of 8 billion people is 400 million people.

So anyone who is working on AI and also says that AI has a 5% chance of causing human extinction is basically saying: “In expectation, I’m supporting 20 Holocausts.”
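The back-of-the-envelope arithmetic here is a one-line expected-value calculation (the 5% probability and 8 billion population are the figures from the text above):

```python
# Expected deaths = probability of extinction x number of people at risk.
# 5% and 8 billion are the post's numbers.
p_extinction = 0.05
population = 8_000_000_000

expected_deaths = p_extinction * population
print(f"{expected_deaths:,.0f} expected deaths")  # 400,000,000 expected deaths
```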

If you really think the odds are that high, why aren’t you demanding that any work on AI be tried as a crime against humanity? Why aren’t you out there throwing Molotov cocktails at data centers?

(To be fair, Eliezer Yudkowsky is actually calling for a global ban on AI that would be enforced by military action. That’s the kind of thing you should be doing if indeed you believe the odds are that high. But most AI doomsayers don’t call for such drastic measures, and many of them even continue working in AI as if nothing is wrong.)

I think this must be scope neglect, or something even worse.

If you thought a drug had a 99% chance of killing your mother, you would never let her take the drug, and you would probably sue the company for making it.

If you thought a technology had a 99% chance of destroying Los Angeles, you would never even consider working on that technology, and you would want that technology immediately and permanently banned.

So I would like to remind anyone who says they believe the danger is this great and yet continues working in the industry:

Everyone includes your mother and Los Angeles.

If AI destroys human civilization, that means AI destroys Los Angeles. However shocked and horrified you would be if a nuclear weapon were detonated in the middle of Hollywood, you should be at least that shocked and horrified by anyone working on advancing AI, if indeed you truly believe that there is at least a 5% chance of AI destroying human civilization.

But people just don’t seem to think this way. Their minds seem to take on a totally different attitude toward “everyone” than they would take toward any particular person or even any particular city. The notion of total human annihilation is just so remote, so abstract, they can’t even be afraid of it the way they are afraid of losing their loved ones.

This despite the fact that everyone includes all your loved ones.

If a drug had a 5% chance of killing your mother, you might let her take it—but only if that drug was the best way to treat some very serious disease. Chemotherapy can be about that risky—but you don’t go on chemo unless you have cancer.

If a technology had a 5% chance of destroying Los Angeles, I’m honestly having trouble thinking of scenarios in which we would be willing to take that risk. But the closest I can come to it is the Manhattan Project. If you’re currently fighting a global war against fascist imperialists, and they are also working on making an atomic bomb, then being the first to make an atomic bomb may in fact be the best option, even if you know that it carries a serious risk of utter catastrophe.

In any case, I think one thing is clear: You don’t take that kind of serious risk unless there is some very large benefit. You don’t take chemotherapy on a whim. You don’t invent atomic bombs just out of curiosity.

Where’s the huge benefit of AI that would justify taking such a huge risk?

Some forms of automation are clearly beneficial, but so far AI per se seems to have largely made our society worse. ChatGPT lies to us. Robocalls inundate us. Deepfakes endanger journalism. What’s the upside here? It makes a ton of money for tech companies, I guess?

Now, fortunately, I think 5% is too high an estimate.

(Scientific American agrees.)

My own estimate is that, over the next two centuries, there is about a 1% chance that AI destroys human civilization, and only a 0.1% chance that it results in human extinction.

This is still really high.

People seem to have trouble with that too.

“Oh, there’s a 99.9% chance we won’t all die; everything is fine, then?” No. There are plenty of other scenarios that would also be very bad, and a total extinction scenario is so terrible that even a 0.1% chance is not something we can simply ignore.

0.1% of people is still 8 million people.

I find myself in a very odd position: On the one hand, I think the probabilities that doomsayers are giving are far too high. On the other hand, I think the actions that are being taken—even by those same doomsayers—are far too small.

Most of them don’t seem to consider a 5% chance to be worthy of drastic action, while I consider a 0.1% chance to be well worthy of it. I would support a complete ban on all AI research immediately, just from that 0.1%.

The only research we should be doing that is in any way related to AI should involve how to make AI safer—absolutely no one should be trying to make it more powerful or apply it to make money. (Yet in reality, almost the opposite is the case.)

Because 8 million people is still a lot of people.

Is it fair to treat a 0.1% chance of killing everyone as equivalent to killing 0.1% of people?

Well, first of all, we have to consider the uncertainty. The difference between a 0.05% chance and a 0.15% chance is millions of people, but there’s probably no way we can actually measure it that precisely.

But it seems to me that something expected to kill between 4 million and 12 million people would still generally be considered very bad.

More importantly, there’s also a chance that AI will save people, or have similarly large benefits. We need to factor that in as well. Something that will kill 4-12 million people but also save 15-30 million people is probably still worth doing (but we should also be trying to find ways to minimize the harm and maximize the benefit).
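As a sketch of that comparison, using the harm and benefit ranges quoted above (pairing worst benefit with worst harm as the conservative case is my simplification, not the post's):

```python
# Ranges from the text: 4-12 million killed, 15-30 million saved.
harm_low, harm_high = 4_000_000, 12_000_000
benefit_low, benefit_high = 15_000_000, 30_000_000

# Even pairing the lowest benefit with the highest harm, the net is positive:
net_worst_case = benefit_low - harm_high   # 3,000,000 net lives saved
net_best_case = benefit_high - harm_low    # 26,000,000 net lives saved
print(net_worst_case, net_best_case)
```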

The biggest problem is that we are deeply uncertain about both the upsides and the downsides. There are a vast number of possible outcomes from inventing AI. Many of those outcomes are relatively mundane; some are moderately good, others are moderately bad. But the moral question seems to be dominated by the big outcomes: With some small but non-negligible probability, AI could lead to either a utopian future or an utter disaster.

The way we are leaping directly into applying AI without even being anywhere close to understanding AI seems to me especially likely to lean toward disaster. No other technology has ever become so immediately widespread while also being so poorly understood.

So far, I’ve yet to see any convincing arguments that the benefits of AI are anywhere near large enough to justify this kind of existential risk. In the near term, AI really only promises economic disruption that will largely be harmful. Maybe one day AI could lead us into a glorious utopia of automated luxury communism, but we really have no way of knowing that will happen—and it seems pretty clear that Google is not going to do that.

Artificial intelligence technology is moving too fast. Even if it doesn’t become powerful enough to threaten our survival for another 50 years (which I suspect it won’t), if we continue on our current path of “make money now, ask questions never”, it’s still not clear that we would actually understand it well enough to protect ourselves by then—and in the meantime it is already causing us significant harm for little apparent benefit.

Why are we even doing this? Why does halting AI research feel like stopping a freight train?

I dare say it’s because we have handed over so much power to corporations.

The paperclippers are already here.

When maximizing utility doesn’t

Jun 4 JDN 2460100

Expected utility theory behaves quite strangely when you consider questions involving mortality.

Nick Beckstead and Teruji Thomas recently published a paper on this: All well-defined utility functions are either reckless in that they make you take crazy risks, or timid in that they tell you not to take even very small risks. It’s starting to make me wonder if utility theory is even the right way to make decisions after all.

Consider a game of Russian roulette where the prize is $1 million. The revolver has 6 chambers, 3 with a bullet. So that’s a 1/2 chance of $1 million, and a 1/2 chance of dying. Should you play?

I think it’s probably a bad idea to play. But the prize does matter; if it were $100 million, or $1 billion, maybe you should play after all. And if it were $10,000, you clearly shouldn’t.

And lest you think that there is no chance of dying you should be willing to accept for any amount of money, consider this: Do you drive a car? Do you cross the street? Do you do anything that could ever have any risk of shortening your lifespan in exchange for some other gain? I don’t see how you could live a remotely normal life without doing so. It might be a very small risk, but it’s still there.

This raises the question: Suppose we have some utility function over wealth; ln(x) is a quite plausible one. What utility should we assign to dying?


The fact that the prize matters means that we can’t assign death a utility of negative infinity. It must be some finite value.

But suppose we choose some value, -V (so V is positive), for the utility of dying, and normalize the utility of declining to play to zero. Then the expected utility of playing is (1/2)ln(x) - (1/2)V, so we can find some amount of money that will make you willing to play: ln(x) = V, x = e^(V).

Now, suppose that you have the chance to play this game over and over again. Your marginal utility of wealth will change each time you win, so we may need to increase the prize to keep you playing; but we could do that. The prizes could keep scaling up as needed to make you willing to play. So then, you will keep playing, over and over—and then, sooner or later, you’ll die. So, at each step you maximized utility—but at the end, you didn’t get any utility.

Well, at that point your heirs will be rich, right? So maybe you’re actually okay with that. Maybe there is some amount of money ($1 billion?) that you’d be willing to die in order to ensure your heirs have.

But what if you don’t have any heirs? Or, what if we consider making such a decision as a civilization? What if death means not only the destruction of you, but also the destruction of everything you care about?

As a civilization, are there choices before us that would result in some chance of a glorious, wonderful future, but also some chance of total annihilation? I think it’s pretty clear that there are. Nuclear technology, biotechnology, artificial intelligence. For about the last century, humanity has been at a unique epoch: We are being forced to make this kind of existential decision, to face this kind of existential risk.

It’s not that we were immune to being wiped out before; an asteroid could have taken us out at any time (as happened to the dinosaurs), and a volcanic eruption nearly did. But this is the first time in humanity’s existence that we have had the power to destroy ourselves. This is the first time we have a decision to make about it.

One possible answer would be to say we should never be willing to take any kind of existential risk. Unlike the case of an individual, when we are speaking about an entire civilization, it no longer seems obvious that we shouldn’t set the utility of death at negative infinity. But if we really did this, it would require shutting down whole industries—definitely halting all research in AI and biotechnology, probably disarming all nuclear weapons and destroying all their blueprints, and quite possibly even shutting down the coal and oil industries. It would be an utterly radical change, and it would require bearing great costs.

On the other hand, if we should decide that it is sometimes worth the risk, we will need to know when it is worth the risk. We currently don’t know that.

Even worse, we will need some mechanism for ensuring that we don’t take the risk when it isn’t worth it. And we have nothing like such a mechanism. In fact, most of our process of research in AI and biotechnology is widely dispersed, with no central governing authority and regulations that are inconsistent between countries. I think it’s quite apparent that right now, there are research projects going on somewhere in the world that aren’t worth the existential risk they pose for humanity—but the people doing them are convinced that they are worth it because they so greatly advance their national interest—or simply because they could be so very profitable.

In other words, humanity finally has the power to make a decision about our survival, and we’re not doing it. We aren’t making a decision at all. We’re letting that responsibility fall upon more or less randomly-chosen individuals in government and corporate labs around the world. We may be careening toward an abyss, and we don’t even know who has the steering wheel.

A guide to surviving the apocalypse

Aug 21 JDN 2459820

Some have characterized the COVID pandemic as an apocalypse, though it clearly isn’t. But a real apocalypse is certainly possible, and its low probability is offset by its extreme importance. The destruction of human civilization would be quite literally the worst thing that ever happened, and if it led to outright human extinction or civilization was never rebuilt, it could prevent a future that would have trillions if not quadrillions of happy, prosperous people.

So let’s talk about things people like you and me could do to survive such a catastrophe, and hopefully work to rebuild civilization. I’ll try to inject a somewhat light-hearted tone into this otherwise extraordinarily dark topic; we’ll see how well it works. What specifically we would want—or be able—to do will depend on the specific scenario that causes the apocalypse, so I’ll address those specifics shortly. But first, let’s talk about general stuff that should be useful in most, if not all, apocalypse scenarios.

It turns out that these general pieces of advice are also pretty good advice for much smaller-scale disasters such as fires, tornados, or earthquakes—all of which are far more likely to occur. Your top priority is to provide for the following basic needs:

1. Water: You will need water to drink. You should have some kind of stockpile of clean water; bottled water is fine but overpriced, and you’d do just as well to bottle tap water (as long as you do it before the crisis occurs and the water system goes down). Better still would be to have water filtration and purification equipment so that you can simply gather whatever water is available and make it drinkable.

2. Food: You will need nutritious, non-perishable food. Canned vegetables and beans are ideal, but you can also get a lot of benefit from dry staples such as crackers. Processed foods and candy are not as nutritious, but they do tend to keep well, so they can do in a pinch. Avoid anything that spoils quickly or requires sophisticated cooking. In the event of a disaster, you will be able to make fire and possibly run a microwave on a solar panel or portable generator—but you can’t rely on the electrical or gas mains to stay operational, and even boiling will require precious water.

3. Shelter: Depending on the disaster, your home may or may not remain standing—and even if it is standing, it may not be fit for habitation. Consider backup options for shelter: Do you have a basement? Do you own any tents? Do you know people you could move in with, if their homes survive and yours doesn’t?

4. Defense: It actually makes sense to own a gun or two in the event of a crisis. (In general it’s actually a big risk, though, so keep that in mind: the person your gun is most likely to kill is you.) Just don’t go overboard and do what we all did in Oregon Trail, stocking plenty of bullets but not enough canned food. Ammo will be hard to replace, though; your best option may actually be a gauss rifle (yes, those are real, and yes, I want one), because all they need for ammo is ferromagnetic metal of the appropriate shape and size. Then, all you need is a solar panel to charge its battery and some machine tools to convert scrap metal into ammo.

5. Community: Humans are highly social creatures, and we survive much better in groups. Get to know your neighbors. Stay in touch with friends and family. Not only will this improve your life in general, it will also give you people to reach out to if you need help during the crisis and the government is indisposed (or toppled). Having a portable radio that runs on batteries, solar power, or hand-crank operation will also be highly valuable for staying in touch with people during a crisis. (Likewise flashlights!)

Now, on to the specific scenarios. I will consider the following potential causes of apocalypse: Alien Invasion, Artificial Intelligence Uprising, Climate Disaster, Conventional War, Gamma-Ray Burst, Meteor Impact, Plague, Nuclear War, and last (and, honestly, least), Zombies.

I will rate each apocalypse by its risk level, based on its probability of occurring within the next 100 years (roughly the time I think it will take us to meaningfully colonize space and thereby change the game):

Very High: 1% or more

High: 0.1% – 1%

Moderate: 0.01% – 0.1%

Low: 0.001% – 0.01%

Very Low: 0.0001% – 0.001%

Tiny: 0.00001% – 0.0001%

Minuscule: 0.00001% or less

I will also rate your relative safety in different possible locations you might find yourself during the crisis:

Very Safe: You will probably survive.

Safe: You will likely survive if you are careful.

Dicey: You may survive, you may not. Hard to say.

Dangerous: You will likely die unless you are very careful.

Very Dangerous: You will probably die.

Hopeless: You will definitely die.

I’ll rate the following locations for each, with some explanation: City, Suburb, Rural Area, Military Base, Underground Bunker, Ship at Sea. Certain patterns will emerge—but some results may surprise you. This may tell you where to go to have the best chance of survival in the event of a disaster (though I admit bunkers are often in short supply).

All right, here goes!

Alien Invasion

Risk: Low

There are probably sapient aliens somewhere in this vast universe, maybe even some with advanced technology. But they are very unlikely to be willing to expend the enormous resources to travel across the stars just to conquer us. Then again, hey, it could happen; maybe they’re imperialists, or they have watched our TV commercials and heard the siren song of oregano.

City: Dangerous

Population centers are likely to be primary targets for their invasion. They probably won’t want to exterminate us outright (why would they?), but they may want to take control of our cities, and are likely to kill a lot of people when they do.

Suburb: Dicey

Outside the city centers will be a bit safer, but hardly truly safe.

Rural Area: Dicey

Where humans are spread out, we’ll present less of a target. Then again, if you own an oregano farm….

Military Base: Very Dangerous

You might think that having all those planes and guns around would help, but these will surely be prime targets in an invasion. Since the aliens are likely to be far more technologically advanced, it’s unlikely our military forces could put up much resistance. Our bases would likely be wiped out almost immediately.

Underground Bunker: Safe

This is a good place to be. Orbital and aerial weapons won’t be very effective against underground targets, and even ground troops would have trouble finding and attacking an isolated bunker. Since they probably won’t want to exterminate us, hiding in your bunker until they establish a New World Order could work out for you.

Ship at Sea: Dicey

As long as it’s a civilian vessel, you should be okay. A naval vessel is just as dangerous as a base, if not more so; they would likely strike our entire fleets from orbit almost instantly. But the aliens are unlikely to have much reason to bother attacking a cruise ship or a yacht. Then again, if they do, you’re toast.

Artificial Intelligence Uprising

Risk: Very High

While it sounds very sci-fi, this is one of the most probable apocalypse scenarios, and we should be working to defend against it. There are dozens of ways that artificial intelligence could get out of control and cause tremendous damage, particularly if the AI got control of combat drones or naval vessels. This could mean a superintelligent AI beyond human comprehension, but it need not; it could in fact be a very stupid AI that was programmed to make profits for Hasbro and decided that melting people into plastic was the best way to do that.

City: Very Dangerous

Cities don’t just have lots of people; they also have lots of machines. If the AI can hack our networks, they may be able to hack into not just phones and laptops, but even cars, homes, and power plants. Depending on the AI’s goals (which are very hard to predict), cities could become disaster zones almost immediately, as thousands of cars shut down and crash and all the power plants get set to overload.

Suburb: Dangerous

Definitely safer than the city, but still, you’ve got plenty of technology around you for the AI to exploit.

Rural Area: Dicey

The further you are from other people and their technology, the safer you’ll be. Having bad wifi out in the boonies may actually save your life. Then again, even tractors have software updates now….

Military Base: Very Dangerous

The military is extremely high-tech and all network-linked. Unless they can successfully secure their systems against the AI very well, very fast, suddenly all the guided missiles and combat drones and sentry guns will be deployed in service of the robot revolution.

Underground Bunker: Safe

As long as your bunker is off the grid, you should be okay. The robots won’t have any weapons we don’t already have, and bunkers are built because they protect pretty well against most weapons.

Ship at Sea: Hopeless

You are surrounded by technology and you have nowhere to run. A military vessel is worse than a civilian ship, but either way, you’re pretty much doomed. The AI is going to take over the radio, the GPS system, maybe even the controls of the ship themselves. It could intentionally overload the engines, or drive you into rocks, or simply shut down everything and leave you to starve at sea. A sailing yacht with a hand-held compass and sextant should be relatively safe, if you manage to get your hands on one of those somehow.

Climate Disaster

Risk: Moderate

Let’s be clear here. Some kind of climate disaster is inevitable; indeed, it’s already in progress. But what I’m talking about is something really severe, something that puts all of human civilization in jeopardy. That, fortunately, is fairly unlikely—and even more so after the big bill that just passed!

City: Dicey

Buildings provide shelter from the elements, and cities will be the first places we defend. Dikes will be built around Manhattan like the ones around Amsterdam. You won’t need to worry about fires, snowstorms, or flooding very much. Still, a really severe crisis could cause all utility systems to break down, meaning you won’t have heating and cooling.

Suburb: Dicey

The suburbs will be about as safe as the cities, maybe a little worse because there isn’t as much shelter if you lose your home to a disaster event.

Rural Area: Dangerous

Remote areas are going to have it the worst. Especially if you’re near a coast that can flood or a forest that can burn, you’re exposed to the elements and there won’t be much infrastructure to protect you. Your best bet is to move in toward the city, where other people will try to help you against the coming storms.

Military Base: Very Safe

Military infrastructure will be prioritized in defense plans, and soldiers are already given lots of survival tools and training. If you can get yourself to a military base and they actually let you in, you really won’t have much to worry about.

Underground Bunker: Very Safe

Underground doesn’t have a lot of weather, it turns out. As long as your bunker is well sealed against flooding, earthquakes are really your only serious concern, and climate change isn’t going to affect those very much.

Ship at Sea: Safe

Increased frequency of hurricanes and other storms will make the sea more dangerous, but as long as you steer clear of storms as they come, you should be okay.

Conventional War

Risk: Moderate

Once again, I should clarify. Obviously there are going to be wars—there are wars going on this very minute. But a truly disastrous war, a World War 3 still fought with conventional weapons, is fairly unlikely. We can’t rule it out, but we don’t have to worry too much—or rather, it’s nukes we should worry about, as I’ll get to in a little bit. It’s unlikely that truly apocalyptic damage could be caused by conventional weapons alone.

City: Dicey

Cities will often be where battles are fought, as they are strategically important. Expect bombing raids and perhaps infantry or tank battalions. Still, it’s actually pretty feasible to survive in a city that is under attack by conventional weapons; while lots of people certainly die, in most wars, most people actually don’t.

Suburb: Safe

Suburbs rarely make interesting military targets, so you’ll mainly have to worry about troops passing through on their way to cities.

Rural Area: Safe

For similar reasons to the suburbs, you should be relatively safe out in the boonies. You may encounter some scattered skirmishes, but you’re unlikely to face sustained attack.

Military Base: Dicey

Whether military bases are safe really depends on whether your side is winning or not. If they are, then you’re probably okay; that’s where all the soldiers and military equipment are, there to defend you. If they aren’t, then you’re in trouble; military bases make nice, juicy targets for attack.

Ship at Sea: Safe

There’s a reason it is big news every time a civilian cruise liner gets sunk in a war (does the Lusitania ring a bell?); it really doesn’t happen that much. Transport ships are at risk of submarine raids, and of course naval vessels will face constant threats; but cruise liners aren’t strategically important, so military forces have very little reason to target them.

Gamma-Ray Burst

Risk: Tiny

While gamma-ray bursts certainly happen all the time, so far they have all been extremely remote from Earth. It is currently estimated that they only happen a few times in any given galaxy every few million years. And each one is concentrated in a narrow beam, so even when they happen they only affect a few nearby stars. This is very good news, because if it happened… well, that’s pretty much it. We’d be doomed.

If a gamma-ray burst happened within a few light-years of us, and happened to be pointed at us, it would scour the Earth, boil the water, burn the atmosphere. Our entire planet would become a dead, molten rock—if, that is, it wasn’t so close that it blew the planet up completely. And the same is going to be true of Mars, Mercury, and every other planet in our solar system.

Underground Bunker: Very Dangerous

Your one meager hope of survival would be to be in an underground bunker at the moment the burst hit. Since most bursts give very little warning, you are unlikely to achieve this unless you, like, live in a bunker—which sounds pretty terrible. Moreover, your bunker needs to be a 100% closed system, and deep underground; the surface will be molten and the air will be burned away. There’s honestly a pretty narrow band of the Earth’s crust that’s deep enough to protect you but not already hot enough to doom you.

Anywhere Else: Hopeless

If you aren’t deep underground at the moment the burst hits us, that’s it; you’re dead. If you are on the side of the Earth facing the burst, you will die mercifully quickly, burned to a crisp instantly. If you are not, your death will be a bit slower, as the raging firestorm that engulfs the Earth, boils the oceans, and burns away the atmosphere will take some time to hit you. But your demise is equally inevitable.

Well, that was cheery. Remember, it’s really unlikely to happen! Moving on!

Meteor Impact

Risk: Tiny

Yes, “it has happened before, and it will happen again; the only question is when.” However, meteors with sufficient size to cause a global catastrophe only seem to hit the Earth about once every couple hundred million years. Moreover, right now is the first time in human history when we might actually have a serious chance of detecting and deflecting an oncoming meteor—so even if one were on the way, we’d still have some hope of saving ourselves.

Underground Bunker: Dangerous

A meteor impact would be a lot like a gamma-ray burst, only much less so. (Almost anything is “much less so” than a gamma-ray burst, with the lone exception of a supernova, which is always “much more so”.) It would still boil a lot of ocean and start a massive firestorm, but it wouldn’t boil all the ocean, and the firestorm wouldn’t burn away all the oxygen in the atmosphere. Underground is clearly the safest place to be, preferably on the other side of the planet from the impact.

Anywhere Else: Very Dangerous

If you are above ground, it wouldn’t otherwise matter too much where you are, at least not in any way that’s easy to predict. Further from the impact is obviously better than closer, but the impact could be almost anywhere. After the initial destruction there would be a prolonged impact winter, which could cause famines and wars. Rural areas might be a bit safer than cities, but then again if you are in a remote area, you are less likely to get help if you need it.

Plague

Risk: Low

Obviously, the probability of a pandemic is 100%. You best start believing in pandemics; we’re in one. But pandemics aren’t apocalyptic plagues. To really jeopardize human civilization, there would have to be a superbug that spreads and mutates rapidly, has a high fatality rate, and remains highly resistant to treatment and vaccination. Fortunately, there aren’t a lot of bacteria or viruses like that; the last one we had was the Black Death, and humanity made it through that one. In fact, there is good reason to believe that with modern medical technology, even a pathogen like the Black Death wouldn’t be nearly as bad this time around.

City: Dangerous

Assuming the pathogen spreads from human to human, concentrations of humans are going to be the most dangerous places to be. Staying indoors and following whatever lockdown/mask/safety protocols that authorities recommend will surely help you; but if the plague gets bad enough, infrastructure could start falling apart and even those things will stop working.

Suburb: Safe

In a suburb, you are much more isolated from other people. You can stay in your home and be fairly safe from the plague, as long as you are careful.

Rural Area: Dangerous

The remoteness of a rural area means that you’d think you wouldn’t have to worry as much about human-to-human transmission. But as we’ve learned from COVID, rural areas are full of stubborn right-wing people who refuse to follow government safety protocols. There may not be many people around, but they probably will be taking stupid risks and spreading the disease all over the place. Moreover, if the disease can be carried by animals—as quite a few can—livestock will become an added danger.

Military Base: Safe

If there’s one place in the world where people follow government safety protocols, it’s a military base. Bases will have top-of-the-line equipment, skilled and disciplined personnel, and up-to-the-minute data on the spread of the pathogen.

Underground Bunker: Very Safe

The main thing you need to do is be away from other people for a while, and a bunker is a great place to do that. As long as your bunker is well-stocked with food and water, you can ride out the plague and come back out once it dies down.

Ship at Sea: Dicey

This is an all-or-nothing proposition. If no one on the ship has the disease, you’re probably safe as long as you remain at sea, because very few pathogens can spread that far through the air. On the other hand, if someone on your ship does carry the disease, you’re basically doomed.

Nuclear War

Risk: Very High

Honestly, this is the one that terrifies me. I have no way of knowing that Vladimir Putin or Xi Jinping won't wake up one morning any day now and give the order to launch a thousand nuclear missiles. (I honestly wasn't even sure Trump wouldn't, so it's a damn good thing he's out of office.) They have no reason to, but they're psychopathic enough that I can't be sure they won't.

City: Dangerous

Obviously, most of those missiles are aimed at cities. And if you happen to be in the center of such a city, this is very bad for your health. However, nukes are not the automatic death machines that they are often portrayed to be; sure, right at the blast center you’re vaporized. But Hiroshima and Nagasaki both had lots of survivors, many of whom lived on for years or even decades afterward, even despite the radiation poisoning.

Suburb: Dangerous

Being away from a city center might provide some protection, but then again it might not; it really depends on how the nukes are targeted. It's actually quite unlikely that Russia or China (or whoever) would deploy large megaton-yield missiles, as they are very expensive; so you could only have a few, making it easier to shoot them all down. The far more likely scenario is lots of kiloton-yield warheads, deployed on what is called a MIRV: a multiple independently targetable re-entry vehicle. One missile launches into space, then releases many warheads, each of which can have a different target. It's sort of like a cluster bomb, only the "little" clusters are each Hiroshima bombs. Those warheads might actually be spread over metropolitan areas relatively evenly, so being in a suburb might not save you. Or it might. Hard to say.

Rural Area: Dicey

If you are sufficiently remote from cities, the nukes probably won’t be aimed at you. And since most of the danger really happens right when the nuke hits, this is good news for you. You won’t have to worry about the blast or the radiation; your main concerns will be fallout and the resulting collapse of infrastructure. Nuclear winter could also be a risk, but recent studies suggest that’s relatively unlikely even in a full-scale nuclear exchange.

Military Base: Hopeless

The nukes are going to be targeted directly at military bases. Probably multiple nukes per base, in case some get shot down. Basically, if you are on a base at the time the missiles hit, you’re doomed. If you know the missiles are coming, your best bet would be to get as far from that base as you can, into as remote an area as you can. You’ll have a matter of minutes, so good luck.

Underground Bunker: Safe

There’s a reason we built a bunch of underground bunkers during the Cold War; they’re one of the few places you can go to really be safe from a nuclear attack. As long as your bunker is well-stocked and well-shielded, you can hide there and survive not only the initial attack, but the worst of the fallout as well.

Ship at Sea: Safe

Ships are small enough that they probably wouldn’t be targeted by nukes. Maybe if you’re on or near a major naval capital ship, like an aircraft carrier, you’d be in danger; someone might try to nuke that. (Even then, aircraft carriers are tough: Anything short of a direct hit might actually be survivable. In tests, carriers have remained afloat and largely functional even after a 100-kiloton nuclear bomb was detonated a mile away. They’re even radiation-shielded, because they have nuclear reactors.) But a civilian vessel or even a smaller naval vessel is unlikely to be targeted. Just stay miles away from any cities or any other ships, and you should be okay.

Zombies

Risk: Minuscule

Zombies per se (the literal undead) aren't even real, so that's just impossible. But something like zombies could maybe happen, in some very remote scenario in which some bizarre mutant strain of rabies or something spreads far and wide and causes people to go crazy and attack other people. Even then, if the infection is really only spread through bites, it's not clear how it could ever reach a truly apocalyptic level; more likely, it would cause a lot of damage locally and then be rapidly contained, and we'd remember it like Pearl Harbor or 9/11: That terrible, terrible day when 5,000 people became zombies in Portland, and then they all died and it was over. An airborne or mosquito-borne virus would be much more dangerous, but then we're really talking about a plague, not zombies. The 'turns people into zombies' part of the virus would be a lot less important than the 'spreads through the air and kills you' part.

Seriously, why is this such a common trope? Why do people think that this could cause an apocalypse?

City: Safe

Yes, safe, dammit. Once you have learned that zombies are on the loose, stay locked in your home, wearing heavy clothing (to block bites; a dog-handler's bite suit is ideal, but a leather jacket or puffy coat would do) with a shotgun (or a gauss rifle, see above) at the ready, and you'll probably be fine. Yes, this is the area of highest risk, due to the concentration of people who could potentially be infected with the zombie virus. But unless you are stupid—which people in these movies always seem to be—you really aren't in all that much danger. Zombies can at most be as fast and strong as humans (often, they seem to be less!), so all you need to do is shoot them before they can bite you. And unlike fake movie zombies, anything genuinely possible will go down from any mortal wound, not just a perfect headshot—I assure you, humans, however crazed by infection they might be, can't run at you if their hearts (or their legs) are gone. It might take a bit more damage to drop them than an ordinary person, if they aren't slowed down by pain; but it wouldn't require perfect marksmanship or any kind of special weaponry. Buckshot to the chest will work just fine.

Suburb: Safe

Similar to the city, only more so, because people there are more isolated.

Rural Area: Very Safe

And rural areas are even more isolated still—plus, rural areas have more guns than people, so you'll have more guns than zombies.

Military Base: Very Safe

Even more guns, plus military training and a chain of command! The zombies don’t stand a chance. A military base would be a great place to be, and indeed that’s where the containment would begin, as troops march from the bases to the cities to clear out the zombies. Shaun of the Dead (of all things!) actually got this right: One local area gets pretty bad, but then the Army comes in and takes all the zombies out.

Underground Bunker: Very Safe

A bunker remains safe in the event of zombies, just as it is in most other scenarios.

Ship at Sea: Very Safe

As long as the infection hasn’t spread to the ship you are currently on and the zombies can’t swim, you are at literally zero risk.

The real Existential Risk we should be concerned about

JDN 2457458

There is a rather large subgroup within the rationalist community (loosely defined because organizing freethinkers is like herding cats) that focuses on existential risks, also called global catastrophic risks. Prominent examples include Nick Bostrom and Eliezer Yudkowsky.

Their stated goal in life is to save humanity from destruction. And when you put it that way, it sounds pretty darn important. How can you disagree with wanting to save humanity from destruction?

Well, there are actually people who do (the Voluntary Human Extinction movement), but they are profoundly silly. It should be obvious to anyone with even a basic moral compass that saving humanity from destruction is a good thing.

It’s not the goal of fighting existential risk that bothers me. It’s the approach. Specifically, they almost all seem to focus on exotic existential risks, vivid and compelling existential risks that are the stuff of great science fiction stories. In particular, they have a rather odd obsession with AI.

Maybe it’s the overlap with Singularitarians, and their inability to understand that exponentials are not arbitrarily fast; if you just keep projecting the growth in computing power as growing forever, surely eventually we’ll have a computer powerful enough to solve all the world’s problems, right? Well, yeah, I guess… if we can actually maintain the progress that long, which we almost certainly can’t, and if the problems turn out to be computationally tractable at all (the fastest possible computer that could fit inside the observable universe could not brute-force solve the game of Go, though a heuristic AI did just beat one of the world’s best players), and/or if we find really good heuristic methods of narrowing down the solution space… but that’s an awful lot of “if”s.

But AI isn’t what we need to worry about in terms of saving humanity from destruction. Nor is it asteroid impacts; NASA has been doing a good job watching for asteroids lately, and estimates the current risk of a serious impact (by which I mean something like a city-destroyer or global climate shock, not even a global killer) at around 1/10,000 per year. Alien invasion is right out; we can’t even find clear evidence of bacteria on Mars, and the skies are so empty of voices it has been called a paradox. Gamma ray bursts could kill us, and we aren’t sure about the probability of that (we think it’s small?), but much like brain aneurysms, there really isn’t a whole lot we can do to prevent them.

There is one thing that we really need to worry about destroying humanity, and one other thing that could potentially get close over a much longer timescale. The long-range threat is ecological collapse; as global climate change gets worse and the oceans become more acidic and the aquifers are drained, we could eventually reach the point where humanity cannot survive on Earth, or at least where our population collapses so severely that civilization as we know it is destroyed. This might not seem like such a threat, since we would see this coming decades or centuries in advance—but we are seeing it coming decades or centuries in advance, and yet we can’t seem to get the world’s policymakers to wake up and do something about it. So that’s clearly the second-most important existential risk.

But the most important existential risk, by far, no question, is nuclear weapons.

Nuclear weapons are the only foreseeable, preventable means by which humanity could be destroyed in the next twenty minutes.

Yes, that is approximately the time it takes an ICBM to hit its target after launch. There are almost 4,000 ICBMs currently deployed, mostly by the US and Russia. Once we include submarine-launched missiles and bombers, the total number of global nuclear weapons is over 15,000. I apologize for terrifying you by saying that these weapons could be deployed at a moment's notice to wipe out most of human civilization within half an hour, followed by a global ecological collapse and fallout that would endanger the future of the entire human race—but it's the truth. If you're not terrified, you're not paying attention.

I’ve intentionally linked the Union of Concerned Scientists as one of those sources. Now they are people who understand existential risk. They don’t talk about AI and asteroids and aliens (how alliterative). They talk about climate change and nuclear weapons.

We must stop this. We must get rid of these weapons. Next to that, literally nothing else matters.

“What if we’re conquered by tyrants?” It won’t matter. “What if there is a genocide?” It won’t matter. “What if there is a global economic collapse?” None of these things will matter, if the human race wipes itself out with nuclear weapons.

To speak like an economist for a moment, the utility of a global nuclear war must be set at negative infinity. Any detectable reduction in the probability of that event must be considered worth paying any cost to achieve. I don’t care if it costs $20 trillion and results in us being taken over by genocidal fascists—we are talking about the destruction of humanity. We can spend $20 trillion (actually the US as a whole does every 14 months!). We can survive genocidal fascists. We cannot survive nuclear war.

The good news is, we shouldn’t actually have to pay that sort of cost. All we have to do is dismantle our nuclear arsenal, and get other countries—particularly Russia—to dismantle theirs. In the long run, we will increase our wealth as our efforts are no longer wasted maintaining doomsday machines.

The main challenge is actually a matter of game theory. The surprisingly sophisticated 1990s cartoon show Animaniacs basically got it right when they sang: "We'd beat our swords into liverwurst / Down by the East Riverside / But no one wants to be the first!"

The thinking, anyway, is that this is basically a Prisoner’s Dilemma. If the US disarms and Russia doesn’t, Russia can destroy the US. Conversely, if Russia disarms and the US doesn’t, the US can destroy Russia. If neither disarms, we’re left where we are. Whether or not the other country disarms, you’re always better off not disarming. So neither country disarms.

But I contend that it is not, in fact, a Prisoner’s Dilemma. It could be a Stag Hunt; if that’s the case, then only multilateral disarmament makes sense, because the best outcome is if we both disarm, but the worst outcome is if we disarm and they don’t. Once we expect them to disarm, we have no temptation to renege on the deal ourselves; but if we think there’s a good chance they won’t, we might not want to either. Stag Hunts have two stable Nash equilibria; one is where both arm, the other where both disarm.

But in fact, I think it may be simply the trivial game.

There aren’t actually that many possible symmetric two-player nonzero-sum games (with strict ordinal payoffs, it comes down to ranking the 4 outcomes, which gives 24 orderings; symmetry cuts that in half, leaving 12 possible games), and one that we never talk about (because it’s sort of boring) is the trivial game: If I do the right thing and you do the right thing, we’re both better off. If you do the wrong thing and I do the right thing, I’m better off. If we both do the wrong thing, we’re both worse off. So, obviously, we both do the right thing, because we’d be idiots not to. Formally, we say that cooperation is a strictly dominant strategy. There’s no dilemma, no paradox; the self-interested strategy is the optimal strategy. (I find it kind of amusing that laissez-faire economics basically amounts to assuming that all real-world games are the trivial game.)
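These three candidate games can be told apart mechanically. Here is a minimal sketch that finds the pure-strategy Nash equilibria of each; the payoff numbers are hypothetical, chosen only so that their ordering matches each game's structure:

```python
from itertools import product

def nash_equilibria(payoff):
    """Pure-strategy Nash equilibria of a symmetric 2x2 game.

    payoff[(a, b)] is the row player's payoff for playing a against b;
    by symmetry, the column player's payoff in that cell is payoff[(b, a)].
    """
    strategies = ["disarm", "arm"]
    equilibria = []
    for a, b in product(strategies, repeat=2):
        # (a, b) is an equilibrium if neither player gains by deviating alone.
        row_best = all(payoff[(a, b)] >= payoff[(alt, b)] for alt in strategies)
        col_best = all(payoff[(b, a)] >= payoff[(alt, a)] for alt in strategies)
        if row_best and col_best:
            equilibria.append((a, b))
    return equilibria

# Illustrative ordinal payoffs (hypothetical numbers; only the ordering matters).
prisoners_dilemma = {("disarm", "disarm"): 3, ("disarm", "arm"): 0,
                     ("arm", "disarm"): 4,    ("arm", "arm"): 1}
stag_hunt         = {("disarm", "disarm"): 4, ("disarm", "arm"): 0,
                     ("arm", "disarm"): 3,    ("arm", "arm"): 2}
trivial_game      = {("disarm", "disarm"): 4, ("disarm", "arm"): 3,
                     ("arm", "disarm"): 2,    ("arm", "arm"): 1}
```

Running this, the Prisoner's Dilemma has a single equilibrium where both arm; the Stag Hunt has two, mutual armament and mutual disarmament; and in the trivial game, disarming is strictly dominant, so mutual disarmament is the only equilibrium.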

That is, I don’t think the US would actually benefit from nuking Russia, even if we could do so without retaliation. Likewise, I don’t think Russia would actually benefit from nuking the US. One of the things we’ve discovered—the hardest way possible—through human history is that working together is often better for everyone than fighting. Russia could nuke NATO, and thereby destroy all of their largest trading partners, or they could continue trading with us. Even if they are despicable psychopaths who think nothing of committing mass murder (Putin might be, but surely there are people under his command who aren’t?), it’s simply not in Russia’s best interest to nuke the US and Europe. Likewise, it is not in our best interest to nuke them.

Nuclear war is a strange game: The only winning move is not to play.

So I say, let’s stop playing. Yes, let’s unilaterally disarm, the thing that so many policy analysts are terrified of because they’re so convinced we’re in a Prisoner’s Dilemma or a Stag Hunt. “What’s to stop them from destroying us, if we make it impossible for us to destroy them!?” I dunno, maybe basic human decency, or failing that, rationality?

Several other countries have already done this—South Africa unilaterally disarmed, and nobody nuked them. Japan refused to build nuclear weapons in the first place—and I think it says something that they’re the only people to ever have them used against them.

Our conventional military is plenty large enough to defend us against all realistic threats, and could even be repurposed to defend against nuclear threats as well, by a method I call credible targeted conventional response. Instead of building ever-larger nuclear arsenals to threaten devastation in the world’s most terrifying penis-measuring contest, you deploy covert operatives (perhaps Navy SEALs in submarines, or double agents, or these days even stealth drones) around the world, with the standing order that if they have reason to believe a country initiated a nuclear attack, they will stop at nothing to hunt down and kill the specific people responsible for that attack. Not the country they came from; not the city they live in; those specific people. If a leader is enough of a psychopath to be willing to kill 300 million people in another country, he’s probably enough of a psychopath to be willing to lose 150 million people in his own country. He likely has a secret underground bunker that would allow him to survive, at least if humanity as a whole does. So you should be threatening the one thing he does care about—himself. You make sure he knows that if he pushes that button, you’ll find that bunker, drop in from helicopters, and shoot him in the face.

The “targeted conventional response” should be clear by now—you use non-nuclear means to respond, and you target the particular leaders responsible—but let me say a bit more about the “credible” part. The threat of mutually-assured destruction is actually not a credible one. It’s not what we call in game theory a subgame perfect Nash equilibrium. If you know that Russia has launched 1500 ICBMs to destroy every city in America, you actually have no reason at all to retaliate with your own 1500 ICBMs, and the most important reason imaginable not to. Your people are dead either way; you can’t save them. You lose. The only question now is whether you risk taking the rest of humanity down with you. If you have even the most basic human decency, you will not push that button. You will not “retaliate” in useless vengeance that could wipe out human civilization. Thus, your threat is a bluff—it is not credible.

But if your response is targeted and conventional, it suddenly becomes credible. It’s exactly reversed; you now have every reason to retaliate, and no reason not to. Your covert operation teams aren’t being asked to destroy humanity; they’re being tasked with finding and executing the greatest mass murderer in history. They don’t have some horrific moral dilemma to resolve; they have the opportunity to become the world’s greatest heroes. Indeed, they’d very likely have the whole world (or what’s left of it) on their side; even the population of the attacking country would rise up in revolt and the double agents could use the revolt as cover. Now you have no reason to even hesitate; your threat is completely credible. The only question is whether you can actually pull it off, and if we committed the full resources of the United States military to preparing for this possibility, I see no reason to doubt that we could. If a US President can be assassinated by a lone maniac (and yes, that is actually what happened), then the world’s finest covert operations teams can assassinate whatever leader pushed that button.
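The contrast between the two threats can be made concrete with a backward-induction sketch. This is only an illustration; the payoff numbers are hypothetical, chosen so that their ordering matches the argument above:

```python
def solve_deterrence(defender_after_attack, attacker_payoffs):
    """Solve the two-stage deterrence game by backward induction.

    defender_after_attack: defender's payoffs once an attack has already
        happened, keyed by "retaliate" / "abstain".
    attacker_payoffs: attacker's payoffs keyed by ("attack", <defender's
        actual response>) and "no_attack".
    Returns (threat_is_credible, attacker_attacks).
    """
    # Stage 2: after an attack, the defender does whatever is actually best;
    # the threat is credible only if carrying it out is a best response.
    credible = defender_after_attack["retaliate"] >= defender_after_attack["abstain"]
    response = "retaliate" if credible else "abstain"
    # Stage 1: the attacker anticipates that actual response, not the threat.
    attacks = attacker_payoffs[("attack", response)] > attacker_payoffs["no_attack"]
    return credible, attacks

# MAD: nuclear retaliation destroys humanity, which is vastly worse for the
# defender than not retaliating; the threat is empty, so a psychopathic
# attacker (who gains something from the attack) is not deterred.
mad = solve_deterrence(
    {"retaliate": -1_000_000, "abstain": -1_000},
    {("attack", "retaliate"): -1_000_000, ("attack", "abstain"): 10, "no_attack": 0},
)

# Targeted conventional response: carrying out the threat is the defender's
# best response (justice at little extra cost), so the threat is credible,
# and an attacker who values his own life is deterred.
targeted = solve_deterrence(
    {"retaliate": -1_000, "abstain": -1_001},
    {("attack", "retaliate"): -100, ("attack", "abstain"): 10, "no_attack": 0},
)
```

Under the MAD payoffs the model gives a non-credible threat and an undeterred attacker; under the targeted-conventional payoffs the threat is credible and the attack never happens, which is exactly the reversal described above.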

This is a policy that works both unilaterally and multilaterally. We could even assemble an international coalition—perhaps make the UN “peacekeepers” put their money where their mouth is and train the finest special operatives in the history of the world tasked with actually keeping the peace.

Let’s not wait for someone else to save humanity from destruction. Let’s be the first.