Now is the time for CTCR

Nov 6 JDN 2459890

We live in a terrifying time. As Ukraine gains ground in its war with Russia, thanks in part to the deployment of high-tech weapons from NATO, Vladimir Putin has begun to make thinly-veiled threats of deploying his nuclear arsenal in response. No one can be sure how serious he is about this. Most analysts believe that he was referring to the possible use of small-scale tactical nuclear weapons, not a full-scale apocalyptic assault. Many think he’s just bluffing and wouldn’t resort to any nukes at all. Putin has bluffed in the past, and could be doing so again. Honestly, “this is not a bluff” is exactly the sort of thing you say when you’re bluffing—people who aren’t bluffing have better ways of showing it. (It’s like whenever Trump would say “Trust me”, and you’d know immediately that this was an especially good time not to. Of course, any time is a good time not to trust Trump.)

(By the way, financial news is a really weird thing: I actually found this article discussing how a nuclear strike would be disastrous for the economy. Dude, if there’s a nuclear strike, we’ve got much bigger things to worry about than the economy. It reminds me of this XKCD.)

But if Russia did launch nuclear weapons, and NATO responded with its own, it could trigger a nuclear war that would kill millions in a matter of hours. So we need to be prepared, and think very carefully about the best way to respond.

The current debate seems to be over whether to use economic sanctions, conventional military retaliation, or our own nuclear weapons. Well, we already have economic sanctions, and they aren’t making Russia back down. (Though they probably are hurting its war effort, so I’m all for keeping them in place.) And if we were to use our own nuclear weapons, that would only further undermine the global taboo against nuclear weapons and could quite possibly trigger that catastrophic nuclear war. Right now, NATO seems to be going for a bluff of our own: We’ll threaten an overwhelming nuclear response, but then we obviously won’t actually carry it out because that would be murder-suicide on a global scale.

That leaves conventional military retaliation. What sort of retaliation? Several years ago I came up with a very specific method of conventional retaliation I call credible targeted conventional response (CTCR, which you can pronounce “cut-core”). I believe that now would be an excellent time to carry it out.

The basic principle of CTCR is really quite simple: Don’t try to threaten entire nations. A nation is an abstract entity. Threaten people. Decisions are made by people. The response to Vladimir Putin launching nuclear weapons shouldn’t be to kill millions of innocent people in Russia that probably mean even less to Putin than they do to us. It should be to kill Vladimir Putin.

How exactly to carry this out is a matter for military strategists to decide. There are a variety of weapons at our disposal, ranging from the prosaic (covert agents) to the exotic (precision strikes from high-altitude stealth drones). Indeed, I think we should leave it purposefully vague, so that Putin can’t try to defend himself against some particular mode of attack. The whole gamut of conventional military responses should be considered on the table, from a single missile strike to a full-scale invasion.

But the basic goal is quite simple: Launching a nuclear weapon is one of the worst possible war crimes, and it must be met with an absolute commitment to bring the perpetrator to justice. We should be willing to accept some collateral damage, even a lot of collateral damage; carpet-bombing a city shouldn’t be considered out of the question. (If that sounds extreme, consider that we’ve done it before for much weaker reasons.) The only thing that we should absolutely refuse to do is deploy nuclear weapons ourselves.

The great advantage of this strategy—even aside from being obviously more humane than nuclear retaliation—is that it is more credible. It sounds more like something we’d actually be willing to do. And in fact we likely could even get help from insiders in Russia, because there are surely many people in the Russian government who aren’t so loyal to Putin that they’d want him to get away with mass murder. It might not just be an assassination; it might end up turning into a coup. (Also something we’ve done for far weaker reasons.)


This is how we preserve the taboo on nuclear weapons: We refuse to use them, but otherwise stop at nothing to kill anyone who does use them.

I therefore call upon the world to make this threat:

Launch a nuclear weapon, Vladimir Putin, and we will kill you. Not your armies, not your generals—you. It could be a Tomahawk missile at the Kremlin. It could be a car bomb in your limousine, or a Stinger missile at Aircraft One. It could be a sniper at one of your speeches. Or perhaps we’ll poison your drink with polonium, like you do to your enemies. You won’t know when or where. You will live the rest of your short and miserable life in terror. There will be nowhere for you to hide. We will stop at nothing. We will deploy every available resource around the world, and it will be our top priority. And you will die.

That’s how you threaten a psychopath. And it’s what we must do in order to keep the world safe from nuclear war.

A guide to surviving the apocalypse

Aug 21 JDN 2459820

Some have characterized the COVID pandemic as an apocalypse, though it clearly isn’t. But a real apocalypse is certainly possible, and its low probability is offset by its extreme importance. The destruction of human civilization would be quite literally the worst thing that ever happened, and if it led to outright human extinction or civilization was never rebuilt, it could prevent a future that would have trillions if not quadrillions of happy, prosperous people.
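To make that expected-value logic concrete, here's a back-of-the-envelope sketch in Python. The specific numbers are purely illustrative assumptions, not estimates I'm defending:

```python
# Back-of-the-envelope expected-value calculation (illustrative numbers only):
# even a small probability of permanent civilizational collapse carries an
# enormous expected cost once you count the future people it would erase.

p_apocalypse = 0.001      # assume a 0.1% chance per century (hypothetical)
future_people = 10 ** 15  # "quadrillions" of potential future lives

expected_lives_lost = p_apocalypse * future_people
print(f"{expected_lives_lost:,.0f}")  # 1,000,000,000,000 — a trillion lives in expectation
```

Even with a probability a thousand times smaller, the expected toll would still rival history's worst wars, which is why "low probability" doesn't mean "safe to ignore."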

So let’s talk about things people like you and me could do to survive such a catastrophe, and hopefully work to rebuild civilization. I’ll try to inject a somewhat light-hearted tone into this otherwise extraordinarily dark topic; we’ll see how well it works. What specifically we would want—or be able—to do will depend on the specific scenario that causes the apocalypse, so I’ll address those specifics shortly. But first, let’s talk about general stuff that should be useful in most, if not all, apocalypse scenarios.

It turns out that these general preparations are also pretty good advice for much smaller-scale disasters such as fires, tornadoes, or earthquakes—all of which are far more likely to occur. Your top priority is to provide for the following basic needs:

1. Water: You will need water to drink. You should have some kind of stockpile of clean water; bottled water is fine but overpriced, and you’d do just as well to bottle tap water (as long as you do it before the crisis occurs and the water system goes down). Better still would be to have water filtration and purification equipment so that you can simply gather whatever water is available and make it drinkable.

2. Food: You will need nutritious, non-perishable food. Canned vegetables and beans are ideal, but you can also get a lot of benefit from dry staples such as crackers. Processed foods and candy are not as nutritious, but they do tend to keep well, so they can do in a pinch. Avoid anything that spoils quickly or requires sophisticated cooking. In the event of a disaster, you will be able to make fire and possibly run a microwave on a solar panel or portable generator—but you can’t rely on the electrical or gas mains to stay operational, and even boiling will require precious water.

3. Shelter: Depending on the disaster, your home may or may not remain standing—and even if it is standing, it may not be fit for habitation. Consider backup options for shelter: Do you have a basement? Do you own any tents? Do you know people you could move in with, if their homes survive and yours doesn’t?

4. Defense: It actually makes sense to own a gun or two in the event of a crisis. (In general it’s a big risk, though, so keep that in mind: the person your gun is most likely to kill is you.) Just don’t go overboard and do what we all did in Oregon Trail, stocking plenty of bullets but not enough canned food. Ammo will be hard to replace, though; your best option may be a gauss rifle (yes, those are real, and yes, I want one), because all it needs for ammo is ferromagnetic metal of the appropriate shape and size. Then, all you need is a solar panel to charge its battery and some machine tools to convert scrap metal into ammo.

5. Community: Humans are highly social creatures, and we survive much better in groups. Get to know your neighbors. Stay in touch with friends and family. Not only will this improve your life in general, it will also give you people to reach out to if you need help during the crisis and the government is indisposed (or toppled). Having a portable radio that runs on batteries, solar power, or hand-crank operation will also be highly valuable for staying in touch with people during a crisis. (Likewise flashlights!)

Now, on to the specific scenarios. I will consider the following potential causes of apocalypse: Alien Invasion, Artificial Intelligence Uprising, Climate Disaster, Conventional War, Gamma-Ray Burst, Meteor Impact, Plague, Nuclear War, and last (and, honestly, least), Zombies.

I will rate each apocalypse by its risk level, based on its probability of occurring within the next 100 years (roughly the time I think it will take us to meaningfully colonize space and thereby change the game):

Very High: 1% or more

High: 0.1% – 1%

Moderate: 0.01% – 0.1%

Low: 0.001% – 0.01%

Very Low: 0.0001% – 0.001%

Tiny: 0.00001% – 0.0001%

Minuscule: less than 0.00001%
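Since these bands are just order-of-magnitude cutoffs, the whole scale fits in a few lines of Python. Here's a minimal sketch, assuming each band includes its lower bound (which the ranges above leave ambiguous):

```python
# Probability bands from the scale above, expressed as fractions (1% = 0.01).
# Each band includes its lower bound; anything below the smallest cutoff
# falls through to "Minuscule".
RISK_BANDS = [
    ("Very High", 1e-2),  # 1% or more
    ("High",      1e-3),  # 0.1% – 1%
    ("Moderate",  1e-4),  # 0.01% – 0.1%
    ("Low",       1e-5),  # 0.001% – 0.01%
    ("Very Low",  1e-6),  # 0.0001% – 0.001%
    ("Tiny",      1e-7),  # 0.00001% – 0.0001%
]

def risk_category(p: float) -> str:
    """Map a 100-year probability to its risk band."""
    for name, cutoff in RISK_BANDS:
        if p >= cutoff:
            return name
    return "Minuscule"

print(risk_category(0.02))  # "Very High" — the nuclear-war territory
print(risk_category(5e-8))  # "Minuscule" — the zombie territory
```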

I will also rate your relative safety in different possible locations you might find yourself during the crisis:

Very Safe: You will probably survive.

Safe: You will likely survive if you are careful.

Dicey: You may survive, you may not. Hard to say.

Dangerous: You will likely die unless you are very careful.

Very Dangerous: You will probably die.

Hopeless: You will definitely die.

I’ll rate the following locations for each, with some explanation: City, Suburb, Rural Area, Military Base, Underground Bunker, Ship at Sea. Certain patterns will emerge—but some results may surprise you. This may tell you where to go to have the best chance of survival in the event of a disaster (though I admit bunkers are often in short supply).

All right, here goes!

Alien Invasion

Risk: Low

There are probably sapient aliens somewhere in this vast universe, maybe even some with advanced technology. But they are very unlikely to be willing to expend the enormous resources to travel across the stars just to conquer us. Then again, hey, it could happen; maybe they’re imperialists, or they have watched our TV commercials and heard the siren song of oregano.

City: Dangerous

Population centers are likely to be primary targets for their invasion. They probably won’t want to exterminate us outright (why would they?), but they may want to take control of our cities, and are likely to kill a lot of people when they do.

Suburb: Dicey

Outside the city centers will be a bit safer, but hardly truly safe.

Rural Area: Dicey

Where humans are spread out, we’ll present less of a target. Then again, if you own an oregano farm….

Military Base: Very Dangerous

You might think that having all those planes and guns around would help, but these will surely be prime targets in an invasion. Since the aliens are likely to be far more technologically advanced, it’s unlikely our military forces could put up much resistance. Our bases would likely be wiped out almost immediately.

Underground Bunker: Safe

This is a good place to be. Orbital and aerial weapons won’t be very effective against underground targets, and even ground troops would have trouble finding and attacking an isolated bunker. Since they probably won’t want to exterminate us, hiding in your bunker until they establish a New World Order could work out for you.

Ship at Sea: Dicey

As long as it’s a civilian vessel, you should be okay. A naval vessel is just as dangerous as a base, if not more so; they would likely strike our entire fleets from orbit almost instantly. But the aliens are unlikely to have much reason to bother attacking a cruise ship or a yacht. Then again, if they do, you’re toast.

Artificial Intelligence Uprising

Risk: Very High

While it sounds very sci-fi, this is one of the most probable apocalypse scenarios, and we should be working to defend against it. There are dozens of ways that artificial intelligence could get out of control and cause tremendous damage, particularly if the AI got control of combat drones or naval vessels. This could mean a superintelligent AI beyond human comprehension, but it need not; it could in fact be a very stupid AI that was programmed to make profits for Hasbro and decided that melting people into plastic was the best way to do that.

City: Very Dangerous

Cities don’t just have lots of people; they also have lots of machines. If the AI can hack our networks, they may be able to hack into not just phones and laptops, but even cars, homes, and power plants. Depending on the AI’s goals (which are very hard to predict), cities could become disaster zones almost immediately, as thousands of cars shut down and crash and all the power plants get set to overload.

Suburb: Dangerous

Definitely safer than the city, but still, you’ve got plenty of technology around you for the AI to exploit.

Rural Area: Dicey

The further you are from other people and their technology, the safer you’ll be. Having bad wifi out in the boonies may actually save your life. Then again, even tractors have software updates now….

Military Base: Very Dangerous

The military is extremely high-tech and all network-linked. Unless they can successfully secure their systems against the AI very well, very fast, suddenly all the guided missiles and combat drones and sentry guns will be deployed in service of the robot revolution.

Underground Bunker: Safe

As long as your bunker is off the grid, you should be okay. The robots won’t have any weapons we don’t already have, and bunkers exist precisely because they protect pretty well against most weapons.

Ship at Sea: Hopeless

You are surrounded by technology and you have nowhere to run. A military vessel is worse than a civilian ship, but either way, you’re pretty much doomed. The AI is going to take over the radio, the GPS system, maybe even the controls of the ship themselves. It could intentionally overload the engines, or drive you into rocks, or simply shut down everything and leave you to starve at sea. A sailing yacht with a hand-held compass and sextant should be relatively safe, if you manage to get your hands on one of those somehow.

Climate Disaster

Risk: Moderate

Let’s be clear here. Some kind of climate disaster is inevitable; indeed, it’s already in progress. But what I’m talking about is something really severe, something that puts all of human civilization in jeopardy. That, fortunately, is fairly unlikely—and even more so after the big bill that just passed!

City: Dicey

Buildings provide shelter from the elements, and cities will be the first places we defend. Dikes will be built around Manhattan like the ones around Amsterdam. You won’t need to worry about fires, snowstorms, or flooding very much. Still, a really severe crisis could cause all utility systems to break down, meaning you won’t have heating and cooling.

Suburb: Dicey

The suburbs will be about as safe as the cities, maybe a little worse because there isn’t as much shelter if you lose your home to a disaster event.

Rural Area: Dangerous

Remote areas are going to have it the worst. Especially if you’re near a coast that can flood or a forest that can burn, you’re exposed to the elements and there won’t be much infrastructure to protect you. Your best bet is to move in toward the city, where other people will try to help you against the coming storms.

Military Base: Very Safe

Military infrastructure will be prioritized in defense plans, and soldiers are already given lots of survival tools and training. If you can get yourself to a military base and they actually let you in, you really won’t have much to worry about.

Underground Bunker: Very Safe

Underground doesn’t have a lot of weather, it turns out. As long as your bunker is well sealed against flooding, earthquakes are really your only serious concern, and climate change isn’t going to affect those very much.

Ship at Sea: Safe

Increased frequency of hurricanes and other storms will make the sea more dangerous, but as long as you steer clear of storms as they come, you should be okay.

Conventional War

Risk: Moderate

Once again, I should clarify. Obviously there are going to be wars—there are wars going on this very minute. But a truly disastrous war, a World War 3 still fought with conventional weapons, is fairly unlikely. We can’t rule it out, but we don’t have to worry too much—or rather, it’s nukes we should worry about, as I’ll get to in a little bit. It’s unlikely that truly apocalyptic damage could be caused by conventional weapons alone.

City: Dicey

Cities will often be where battles are fought, as they are strategically important. Expect bombing raids and perhaps infantry or tank battalions. Still, it’s actually pretty feasible to survive in a city that is under attack by conventional weapons; while lots of people certainly die, in most wars, most people actually don’t.

Suburb: Safe

Suburbs rarely make interesting military targets, so you’ll mainly have to worry about troops passing through on their way to cities.

Rural Area: Safe

For similar reasons to the suburbs, you should be relatively safe out in the boonies. You may encounter some scattered skirmishes, but you’re unlikely to face sustained attack.

Military Base: Dicey

Whether military bases are safe really depends on whether your side is winning or not. If they are, then you’re probably okay; that’s where all the soldiers and military equipment are, there to defend you. If they aren’t, then you’re in trouble; military bases make nice, juicy targets for attack.

Ship at Sea: Safe

There’s a reason it is big news every time a civilian cruise liner gets sunk in a war (does the Lusitania ring a bell?); it really doesn’t happen that much. Transport ships are at risk of submarine raids, and of course naval vessels will face constant threats; but cruise liners aren’t strategically important, so military forces have very little reason to target them.

Gamma-Ray Burst

Risk: Tiny

While gamma-ray bursts certainly happen all the time, so far they have all been extremely remote from Earth. It is currently estimated that they only happen a few times in any given galaxy every few million years. And each one is concentrated in a narrow beam, so even when they happen they only affect a few nearby stars. This is very good news, because if it happened… well, that’s pretty much it. We’d be doomed.

If a gamma-ray burst happened within a few light-years of us, and happened to be pointed at us, it would scour the Earth, boil the water, burn the atmosphere. Our entire planet would become a dead, molten rock—if, that is, it wasn’t so close that it blew the planet up completely. And the same would be true of Mars, Mercury, and every other planet in our solar system.

Underground Bunker: Very Dangerous

Your one meager hope of survival would be to be in an underground bunker at the moment the burst hit. Since most bursts give very little warning, you are unlikely to achieve this unless you, like, live in a bunker—which sounds pretty terrible. Moreover, your bunker needs to be a 100% closed system, and deep underground; the surface will be molten and the air will be burned away. There’s honestly a pretty narrow band of the Earth’s crust that’s deep enough to protect you but not already hot enough to doom you.

Anywhere Else: Hopeless

If you aren’t deep underground at the moment the burst hits us, that’s it; you’re dead. If you are on the side of the Earth facing the burst, you will die mercifully quickly, burned to a crisp instantly. If you are not, your death will be a bit slower, as the raging firestorm that engulfs the Earth, boils the oceans, and burns away the atmosphere will take some time to hit you. But your demise is equally inevitable.

Well, that was cheery. Remember, it’s really unlikely to happen! Moving on!

Meteor Impact

Risk: Tiny

Yes, “it has happened before, and it will happen again; the only question is when.” However, meteors with sufficient size to cause a global catastrophe only seem to hit the Earth about once every couple hundred million years. Moreover, right now is the first time in human history when we might actually have a serious chance of detecting and deflecting an oncoming meteor—so even if one were on the way, we’d still have some hope of saving ourselves.

Underground Bunker: Dangerous

A meteor impact would be a lot like a gamma-ray burst, only much less so. (Almost anything is “much less so” than a gamma-ray burst, with the lone exception of a supernova, which is always “much more so”.) It would still boil a lot of ocean and start a massive firestorm, but it wouldn’t boil all the ocean, and the firestorm wouldn’t burn away all the oxygen in the atmosphere. Underground is clearly the safest place to be, preferably on the other side of the planet from the impact.

Anywhere Else: Very Dangerous

If you are above ground, it wouldn’t otherwise matter too much where you are, at least not in any way that’s easy to predict. Further from the impact is obviously better than closer, but the impact could be almost anywhere. After the initial destruction there would be a prolonged impact winter, which could cause famines and wars. Rural areas might be a bit safer than cities, but then again if you are in a remote area, you are less likely to get help if you need it.

Plague

Risk: Low

Obviously, the probability of a pandemic is 100%. You best start believing in pandemics; we’re in one. But pandemics aren’t apocalyptic plagues. To really jeopardize human civilization, there would have to be a superbug that spreads and mutates rapidly, has a high fatality rate, and remains highly resistant to treatment and vaccination. Fortunately, there aren’t a lot of bacteria or viruses like that; the last one we had was the Black Death, and humanity made it through that one. In fact, there is good reason to believe that with modern medical technology, even a pathogen like the Black Death wouldn’t be nearly as bad this time around.

City: Dangerous

Assuming the pathogen spreads from human to human, concentrations of humans are going to be the most dangerous places to be. Staying indoors and following whatever lockdown/mask/safety protocols that authorities recommend will surely help you; but if the plague gets bad enough, infrastructure could start falling apart and even those things will stop working.

Suburb: Safe

In a suburb, you are much more isolated from other people. You can stay in your home and be fairly safe from the plague, as long as you are careful.

Rural Area: Dangerous

The remoteness of a rural area means that you’d think you wouldn’t have to worry as much about human-to-human transmission. But as we’ve learned from COVID, rural areas are full of stubborn right-wing people who refuse to follow government safety protocols. There may not be many people around, but they probably will be taking stupid risks and spreading the disease all over the place. Moreover, if the disease can be carried by animals—as quite a few can—livestock will become an added danger.

Military Base: Safe

If there’s one place in the world where people follow government safety protocols, it’s a military base. Bases will have top-of-the-line equipment, skilled and disciplined personnel, and up-to-the-minute data on the spread of the pathogen.

Underground Bunker: Very Safe

The main thing you need to do is be away from other people for a while, and a bunker is a great place to do that. As long as your bunker is well-stocked with food and water, you can ride out the plague and come back out once it dies down.

Ship at Sea: Dicey

This is an all-or-nothing proposition. If no one on the ship has the disease, you’re probably safe as long as you remain at sea, because very few pathogens can spread that far through the air. On the other hand, if someone on your ship does carry the disease, you’re basically doomed.

Nuclear War

Risk: Very High

Honestly, this is the one that terrifies me. I have no way of knowing that Vladimir Putin or Xi Jinping won’t wake up one morning any day now and give the order to launch a thousand nuclear missiles. (I honestly wasn’t even sure Trump wouldn’t, so it’s a damn good thing he’s out of office.) They have no reason to, but they’re psychopathic enough that I can’t be sure they won’t.

City: Dangerous

Obviously, most of those missiles are aimed at cities. And if you happen to be in the center of such a city, this is very bad for your health. However, nukes are not the automatic death machines that they are often portrayed to be; sure, right at the blast center you’re vaporized. But Hiroshima and Nagasaki both had lots of survivors, many of whom lived on for years or even decades afterward, even despite the radiation poisoning.

Suburb: Dangerous

Being away from a city center might provide some protection, but then again it might not; it really depends on how the nukes are targeted. It’s actually quite unlikely that Russia or China (or whoever) would deploy large megaton-yield warheads, as they are very expensive; an attacker could afford only a few, making it easier to shoot them all down. The far more likely scenario is lots of kiloton-yield warheads, delivered by what is called a MIRV: a multiple independently targetable re-entry vehicle. One missile launches into space, then splits into many warheads, each of which can have a different target. It’s sort of like a cluster bomb, only the “little” clusters are each Hiroshima bombs. Those warheads might actually be spread over metropolitan areas relatively evenly, so being in a suburb might not save you. Or it might. Hard to say.

Rural Area: Dicey

If you are sufficiently remote from cities, the nukes probably won’t be aimed at you. And since most of the danger really happens right when the nuke hits, this is good news for you. You won’t have to worry about the blast or the radiation; your main concerns will be fallout and the resulting collapse of infrastructure. Nuclear winter could also be a risk, but recent studies suggest that’s relatively unlikely even in a full-scale nuclear exchange.

Military Base: Hopeless

The nukes are going to be targeted directly at military bases. Probably multiple nukes per base, in case some get shot down. Basically, if you are on a base at the time the missiles hit, you’re doomed. If you know the missiles are coming, your best bet would be to get as far from that base as you can, into as remote an area as you can. You’ll have a matter of minutes, so good luck.

Underground Bunker: Safe

There’s a reason we built a bunch of underground bunkers during the Cold War; they’re one of the few places you can go to really be safe from a nuclear attack. As long as your bunker is well-stocked and well-shielded, you can hide there and survive not only the initial attack, but the worst of the fallout as well.

Ship at Sea: Safe

Ships are small enough that they probably wouldn’t be targeted by nukes. Maybe if you’re on or near a capital ship, like an aircraft carrier, you’d be in danger; someone might try to nuke that. (Even then, aircraft carriers are tough: Anything short of a direct hit might actually be survivable. In tests, carriers have remained afloat and largely functional even after a 100-kiloton nuclear bomb was detonated a mile away. They’re even radiation-shielded, because they have nuclear reactors.) But a civilian vessel or even a smaller naval vessel is unlikely to be targeted. Just stay miles away from any cities or any other ships, and you should be okay.

Zombies

Risk: Minuscule

Zombies per se—the literal undead—aren’t even real, so that’s just impossible. But something like zombies could maybe happen, in some very remote scenario in which some bizarre mutant strain of rabies or something spreads far and wide and causes people to go crazy and attack other people. Even then, if the infection is really only spread through bites, it’s not clear how it could ever reach a truly apocalyptic level; more likely, it would cause a lot of damage locally and then be rapidly contained, and we’d remember it like Pearl Harbor or 9/11: That terrible, terrible day when 5,000 people became zombies in Portland, and then they all died and it was over. An airborne or mosquito-borne virus would be much more dangerous, but then we’re really talking about a plague, not zombies. The ‘turns people into zombies’ part of the virus would be a lot less important than the ‘spreads through the air and kills you’ part.

Seriously, why is this such a common trope? Why do people think that this could cause an apocalypse?

City: Safe

Yes, safe, dammit. Once you have learned that zombies are on the loose, stay locked in your home, wearing heavy clothing (to block bites; a dog suit is ideal, but a leather jacket or puffy coat would do) with a shotgun (or a gauss rifle, see above) at the ready, and you’ll probably be fine. Yes, this is the area of highest risk, due to the concentration of people who could potentially be infected with the zombie virus. But unless you are stupid—which people in these movies always seem to be—you really aren’t in all that much danger. Zombies can at most be as fast and strong as humans (often, they seem to be less!), so all you need to do is shoot them before they can bite you. And unlike fake movie zombies, anything genuinely possible will go down from any mortal wound, not just a perfect headshot—I assure you, humans, however crazed by infection they might be, can’t run at you if their hearts (or their legs) are gone. It might take a bit more damage to drop them than an ordinary person, if they aren’t slowed down by pain; but it wouldn’t require perfect marksmanship or any kind of special weaponry. Buckshot to the chest will work just fine.

Suburb: Safe

Similar to the city, only more so, because people there are more isolated.

Rural Area: Very Safe

And rural areas are even more isolated still—plus you have more guns than people, so you’ll have more guns than zombies.

Military Base: Very Safe

Even more guns, plus military training and a chain of command! The zombies don’t stand a chance. A military base would be a great place to be, and indeed that’s where the containment would begin, as troops march from the bases to the cities to clear out the zombies. Shaun of the Dead (of all things!) actually got this right: One local area gets pretty bad, but then the Army comes in and takes all the zombies out.

Underground Bunker: Very Safe

A bunker remains safe in the event of zombies, just as it is in most other scenarios.

Ship at Sea: Very Safe

As long as the infection hasn’t spread to the ship you are currently on and the zombies can’t swim, you are at literally zero risk.

Russia has invaded Ukraine.

Mar 6 JDN 2459645

Russia has invaded Ukraine. No doubt you have heard it by now, as it’s all over the news in dozens of outlets, from CNN to NBC to The Guardian to Al-Jazeera. And as well it should be, as this is the first time in history that a nuclear power has annexed another country. Yes, nuclear powers have fought wars before—the US just got out of one in Afghanistan as you may recall. They have even started wars and led invasions—the US did that in Iraq. And certainly, countries have been annexing and conquering other countries for millennia. But never before—never before, in human history—has a nuclear-armed state invaded another country simply to claim it as part of itself. (Trump said he thought the US should have done something like that, and the world was rightly horrified.)

Ukraine is not a nuclear power—not anymore. The Soviet Union built up a great deal of its nuclear production in Ukraine, and in 1991 when Ukraine became independent it still had a sizable nuclear arsenal. But starting in 1994 Ukraine began disarming that arsenal, and now it is gone. Now that Russia has invaded them, the government of Ukraine has begun publicly reconsidering their agreements to disarm their nuclear arsenal.

Russia’s invasion of Ukraine has just disproved the most optimistic models of international relations, which basically said that major power wars for territory were over at the end of WW2. Some thought it was nuclear weapons, others the United Nations, still others a general improvement in trade integration and living standards around the world. But they’ve all turned out to be wrong; maybe such wars are rarer, but they can clearly still happen, because one just did.

I would say that only two major theories of the Long Peace are still left standing in light of this invasion, and those are nuclear deterrence and the democratic peace. Ukraine gave up its nuclear arsenal and later got attacked—that’s consistent with nuclear deterrence. Russia under Putin is nearly as authoritarian as the Soviet Union, and Ukraine is a “hybrid regime” (let’s call it a solid D), so there’s no reason the democratic peace would stop this invasion. But any model which posits that trade or the UN prevent war is pretty much off the table now, as Ukraine had very extensive trade with both Russia and the EU and the UN has been utterly toothless so far. (Maybe we could say the UN prevents wars except those led by permanent Security Council members.)

Well, then, what if the nuclear deterrence theory is right? What would have happened if Ukraine had kept its nuclear weapons? Would that have made this situation better, or worse? It could have made it better, if it acted as a deterrent against Russian aggression. But it could also have made it much, much worse, if it resulted in a nuclear exchange between Russia and Ukraine.

This is the problem with nukes. They are not a guarantee of safety. They are a guarantee of fat tails. To explain what I mean by that, let’s take a brief detour into statistics.

A fat-tailed distribution is one for which very extreme events have non-negligible probability. For some distributions, like a uniform distribution, events are clearly contained within a certain interval and nothing outside is even possible. For others, like a normal distribution or lognormal distribution, extreme events are theoretically possible, but so vanishingly improbable they aren’t worth worrying about. But for fat-tailed distributions like a Cauchy distribution or a Pareto distribution, extreme events are not so improbable. They may be unlikely, but they are not so unlikely they can simply be ignored. Indeed, they can actually dominate the average—most of what happens, happens in a handful of extreme events.
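To see what “dominated by a handful of extreme events” looks like in practice, here is a quick simulation; it is purely illustrative (I picked the Pareto tail index arbitrarily), not fit to any real-world data:

```python
import random

random.seed(0)
n = 100_000

# Thin-tailed sample: absolute values from a normal distribution.
normal = [abs(random.gauss(0, 1)) for _ in range(n)]
# Fat-tailed sample: Pareto with tail index 1.1 (chosen arbitrarily for illustration).
pareto = [random.paretovariate(1.1) for _ in range(n)]

def top_share(xs, k=100):
    """Fraction of the total accounted for by the k largest draws."""
    return sum(sorted(xs)[-k:]) / sum(xs)

# The top 100 of 100,000 normal draws are a sliver of the total;
# the top 100 Pareto draws can be a large chunk of it.
print(f"normal: {top_share(normal):.4f}")
print(f"pareto: {top_share(pareto):.4f}")
```

Under these assumptions the normal sample’s top 0.1% of draws contribute well under 1% of the total, while the Pareto sample’s top 0.1% contribute a sizeable fraction: exactly the pattern where most of what happens, happens in a handful of extreme events.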

Deaths in war seem to be fat-tailed, even in conventional warfare. They seem to follow a Pareto distribution. There are lots of tiny skirmishes, relatively frequent regional conflicts, occasional major wars, and a handful of super-deadly global wars. This kind of pattern tends to emerge when a phenomenon is self-reinforcing by positive feedback—hence why we also see it in distributions of income and wildfire intensity.

Fat-tailed distributions typically (though not always—it’s easy to construct counterexamples, like the Cauchy distribution with low values truncated off) have another property as well, which is that minor events are common. More common, in fact, than they would be under a normal distribution. What seems to happen is that the probability mass moves away from the moderate outcomes and shifts to both the extreme outcomes and the minor ones.

Nuclear weapons fit this pattern perfectly. They may in fact reduce the probability of moderate, regional conflicts, in favor of increasing the probability of tiny skirmishes or peaceful negotiations. But they also increase the probability of utterly catastrophic outcomes—a full-scale nuclear war could kill billions of people. It probably wouldn’t wipe out all of humanity, and more recent analyses suggest that a catastrophic “nuclear winter” is unlikely. But even 2 billion people dead would be literally the worst thing that has ever happened, and nukes could make it happen in hours when such a death toll by conventional weapons would take years.

If we could somehow guarantee that such an outcome would never occur, then the lower rate of moderate conflicts nuclear weapons provide would justify their existence. But we can’t. It hasn’t happened yet, but it doesn’t have to happen often to be terrible. Really, just once would be bad enough.

Let us hope, then, that the democratic peace turns out to be the theory that’s right. Because a more democratic world would clearly be better—while a more nuclearized world could be better, but could also be much, much worse.

Slides from my presentation at Worldcon

Whether you are a regular reader curious about my Worldcon talk, or a Worldcon visitor interested in seeing them, the slides from my presentation, “How do we get to the Federation from here?”, can be found here.

Why risking nuclear war should be a war crime

Nov 19, JDN 2458078

“What is the value of a human life?” is a notoriously difficult question, probably because people keep trying to answer it in terms of dollars, and it rightfully offends our moral sensibilities to do so. We shouldn’t be valuing people in terms of dollars—we should be valuing dollars in terms of their benefits to people.

So let me ask a simpler question: Does the value of an individual human life increase, decrease, or stay the same, as we increase the number of people in the world?

A case can be made that it should stay the same: Why should my value as a person depend upon how many other people there are? Everything that I am, I still am, whether there are a billion other people or a thousand.

But in fact I think the correct answer is that it decreases. This is for two reasons: First, anything that I can do is less valuable if there are other people who can do it better. This is true whether we’re talking about writing blog posts or ending world hunger. Second, and most importantly, if the number of humans in the world gets small enough, we begin to face danger of total permanent extinction.

If the value of a human life is constant, then 1,000 deaths is equally bad whether it happens in a population of 10,000 or a population of 10 billion. That doesn’t seem right, does it? It seems more reasonable to say that losing ten percent should have a roughly constant effect; in that case losing 1,000 people in a population of 10,000 is equally bad as losing 1 billion in a population of 10 billion. If that seems too strong, we could choose some value in between, and say perhaps that losing 1,000 out of 10,000 is equally bad as losing 1 million out of 1 billion. This would mean that the value of 1 person’s life today is about 1/1,000 of what it was immediately after the Toba Event.
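For the curious, here is that scaling argument spelled out; the numbers are the same illustrative ones as above, and the power-law form for the value of a life is my assumption, not an established result:

```python
from math import log10

# Assume the badness of losing k people out of N is k * V(N),
# with per-life value V(N) proportional to N ** (-alpha).

# The intermediate assumption from the text: losing 1,000 of 10,000
# is equally bad as losing 1 million of 1 billion.
k1, N1 = 1_000, 10_000
k2, N2 = 1_000_000, 1_000_000_000

# k1 * N1**(-alpha) = k2 * N2**(-alpha)  =>  alpha = log(k2/k1) / log(N2/N1)
alpha = log10(k2 / k1) / log10(N2 / N1)

# Per-life value with a population of ~10^9 (roughly today) relative to
# a population of ~10^4 (roughly just after the Toba Event):
ratio = (N2 / N1) ** (-alpha)

print(alpha)  # 0.6
print(ratio)  # ~0.001, i.e. about 1/1,000
```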

Of course, with such uncertainty, perhaps it’s safest to assume constant value. This seems the fairest, and it is certainly a reasonable approximation.

In any case, I think it should be obvious that the inherent value of a human life does not increase as you add more human lives. Losing 1,000 people out of a population of 7 billion is not worse than losing 1,000 people out of a population of 10,000. That way lies nonsense.

Yet if we agree that the value of a human life is not increasing, this has a very important counter-intuitive consequence: It means that increasing the risk of a global catastrophe is at least as bad as causing a proportional number of deaths. Specifically, it implies that a 1% risk of global nuclear war is worse than killing 10 million people outright.

The calculation is simple: If the value of a human life is a constant V, then the expected utility (admittedly, expected utility theory has its flaws) from killing 10 million people is -10 million V. But the expected utility from a 1% risk of global nuclear war is 1% times -V times the expected number of deaths from such a nuclear war—and I think even 2 billion is a conservative estimate. (0.01)(-2 billion) V = -20 million V.

This probably sounds too abstract, or even cold, so let me put it another way. Suppose we had the choice between two worlds, and these were the only worlds we could choose from. In world A, there are 100 leaders who each make choices that result in 10 million deaths. In world B, there are 100 leaders who each make choices that result in a 1% chance of nuclear war. Which world should we choose?

The choice is a terrible one, to be sure.

In world A, 1 billion people die.

Yet what happens in world B?

If the risks are independent, we can’t just multiply by 100 to get a guarantee of nuclear war. The actual probability is 1-(1-0.01)^100 = 63%. Yet even so, (0.63)(2 billion) = 1.26 billion. The expected number of deaths is higher in world B. Indeed, the most likely scenario is that 2 billion people die.
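Here is the arithmetic for the two worlds, for anyone who wants to check it:

```python
p = 0.01                        # each leader's chance of triggering nuclear war
n = 100                         # number of leaders
deaths_per_war = 2_000_000_000  # conservative death toll for a global nuclear war

# World A: 100 leaders each make choices that kill 10 million outright.
world_a_deaths = 100 * 10_000_000          # 1 billion

# World B: probability that at least one leader triggers a war,
# assuming the risks are independent.
p_war = 1 - (1 - p) ** n                   # ~0.63
world_b_expected = p_war * deaths_per_war  # ~1.27 billion

print(f"P(war) = {p_war:.2f}")
print(f"World A: {world_a_deaths:,} deaths")
print(f"World B: {world_b_expected:,.0f} expected deaths")
```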

Yet this is probably too conservative. The risks are most likely positively correlated; two world leaders who each take a 1% chance of nuclear war probably do so in response to one another. Therefore maybe adding up the chances isn’t actually so unreasonable—for all practical intents and purposes, we may be best off considering nuclear war in world B as guaranteed to happen. In that case, world B is even worse.

And that is all assuming that the nuclear war is relatively contained. Major cities are hit, then a peace treaty is signed, and we manage to rebuild human civilization more or less as it was. This is what most experts on the issue believe would happen; but I for one am not so sure. The nuclear winter and total collapse of institutions and infrastructure could result in a global apocalypse culminating in human extinction—not 2 billion deaths but 7 billion, and an end to all of humanity’s projects once and forever. This is the kind of outcome we should be prepared to do almost anything to prevent.

What does this imply for global policy? It means that we should be far more aggressive in punishing any action that seems to bring the world closer to nuclear war. Even tiny increases in risk, of the sort that would ordinarily be considered negligible, are as bad as murder. A measurably large increase is as bad as genocide.

Of course, in practice, we have to be able to measure something in order to punish it. We can’t have politicians imprisoned over 0.000001% chances of nuclear war, because such a chance is so tiny that there would be no way to attain even reasonable certainty that such a change had even occurred, much less who was responsible.

Even for very large chances—and in this context, 1% is very large—it would be highly problematic to directly penalize increasing the probability, as we have no consistent, fair, objective measure of that probability.

Therefore in practice what I think we must do is severely and mercilessly penalize certain types of actions that would be reasonably expected to increase the probability of catastrophic nuclear war.

If we had the chance to start over from the Manhattan Project, maybe simply building a nuclear weapon should be considered a war crime. But at this point, nuclear proliferation has already proceeded far enough that this is no longer a viable option. At least the US and Russia for the time being seem poised to maintain their nuclear arsenals, and in fact it’s probably better for them to keep maintaining and updating them rather than leaving decades-old ICBMs to rot.

What can we do instead?

First, we probably need to penalize speech that would tend to incite war between nuclear powers. Normally I am fiercely opposed to restrictions on speech, but this is nuclear war we’re talking about. We can’t take any chances on this one. If there is even a slight chance that a leader’s rhetoric might trigger a nuclear conflict, they should be censored, punished, and probably even imprisoned. Making even a veiled threat of nuclear war is like pointing a gun at someone’s head and threatening to shoot them—only the gun is pointed at everyone’s head simultaneously. This isn’t just yelling “fire” in a crowded theater; it’s literally threatening to burn down every theater in the world at once.

Such a regulation must be designed to allow speech that is necessary for diplomatic negotiations, as conflicts will invariably arise between any two countries. We need to find a way to draw the line so that it’s possible for a US President to criticize Russia’s intervention in Ukraine or for a Chinese President to challenge US trade policy, without being accused of inciting war between nuclear powers. But one thing is quite clear: Wherever we draw that line, President Trump’s statement about “fire and fury” definitely crosses it. This is a direct threat of nuclear war, and it should be considered a war crime. That reason by itself—let alone his web of Russian entanglements and violations of the Emoluments Clause—should be sufficient to not only have Trump removed from office, but to have him tried at the Hague. Impulsiveness and incompetence are no excuse when weapons of mass destruction are involved.

Second, any nuclear policy that would tend to increase first-strike capability rather than second-strike capability should be considered a violation of international law. In case you are unfamiliar with such terms: First-strike capability consists of weapons such as ICBMs that are only viable to use as the opening salvo of an attack, because their launch sites can be easily located and targeted. Second-strike capability consists of weapons such as submarines that are more concealable, so it’s much more likely that they could wait for an attack to happen, confirm who was responsible and how much damage was done, and then retaliate afterward.
Even that retaliation would be difficult to justify: It’s effectively answering genocide with genocide, the ultimate expression of “an eye for an eye” writ large upon humanity’s future. I’ve previously written about my Credible Targeted Conventional Response strategy that makes it both more ethical and more credible to respond to a nuclear attack with a non-nuclear retaliation. But at least second-strike weapons are not inherently only functional at starting a nuclear war. A first-strike weapon can theoretically be fired in response to a surprise attack, but only before the attack hits you—which gives you literally minutes to decide the fate of the world, most likely with only the sketchiest of information upon which to base your decision. Second-strike weapons allow deliberation. They give us a chance to think carefully for a moment before we unleash irrevocable devastation.

All the launch codes should of course be randomized one-time pads for utmost security. But in addition to the launch codes themselves, I believe that anyone who wants to launch a nuclear weapon should be required to type, letter by letter (no copy-pasting), and then have the machine read aloud, Oppenheimer’s line about Shiva, “Now I am become Death, the destroyer of worlds.” Perhaps the passphrase should conclude with something like “I hereby sentence millions of innocent children to death by fire, and millions more to death by cancer.” I want it to be as salient as possible in the heads of every single soldier and technician just exactly how many innocent people they are killing. And if that means they won’t turn the key—so be it. (Indeed, I wouldn’t mind if every Hellfire missile required a passphrase of “By the authority vested in me by the United States of America, I hereby sentence you to death or dismemberment.” Somehow I think our drone strike numbers might go down. And don’t tell me they couldn’t; this isn’t like shooting a rifle in a firefight. These strikes are planned days in advance and specifically designed to be unpredictable by their targets.)

If everyone is going to have guns pointed at each other, at least in a second-strike world they’re wearing body armor and the first one to pull the trigger won’t automatically be the last one left standing.

Third, nuclear non-proliferation treaties need to be strengthened into disarmament treaties, with rapid but achievable timelines for disarmament of all nuclear weapons, starting with the nations that have the largest arsenals. Random inspections of the disarmament should be performed without warning on a frequent schedule. Any nation that is so much as a day late on their disarmament deadlines needs to have its leaders likewise hauled off to the Hague. If there is any doubt at all in your mind whether your government will meet its deadlines, you need to double your disarmament budget. And if your government is too corrupt or too bureaucratic to meet its deadlines even if they try, well, you’d better shape up fast. We’ll keep removing and imprisoning your leaders until you do. Once again, nothing can be left to chance.

We might want to maintain some small nuclear arsenal for the sole purpose of deflecting asteroids from colliding with the Earth. If so, that arsenal should be jointly owned and frequently inspected by both the United States and Russia—not just the nuclear superpowers, but also the only two nations with sufficient rocket launch capability in any case. The launch of the deflection missiles should require joint authorization from the presidents of both nations. But in fact nuclear weapons are probably not necessary for such a deflection; nuclear rockets would probably be a better option. Vaporizing the asteroid wouldn’t accomplish much, even if you could do it; what you actually want to do is impart as much sideways momentum as possible.

What I’m saying probably sounds extreme. It may even seem unjust or irrational. But look at those numbers again. Think carefully about the value of a human life. When we are talking about a risk of total human extinction, this is what rationality looks like. Zero tolerance for drug abuse or even terrorism is a ridiculous policy that does more harm than good. Zero tolerance for risk of nuclear war may be the only hope for humanity’s ongoing survival.

Throughout the vastness of the universe, there are probably billions of civilizations—I need only assume one civilization for every hundred galaxies. Of the civilizations that were unwilling to adopt zero tolerance policies on weapons of mass destruction and bear any cost, however unthinkable, to prevent their own extinction, there is almost boundless diversity, but they all have one thing in common: None of them will exist much longer. The only civilizations that last are the ones that refuse to tolerate weapons of mass destruction.

I think I know what the Great Filter is now

Sep 3, JDN 2458000

One of the most plausible solutions to the Fermi Paradox of why we have not found any other intelligent life in the universe is called the Great Filter: Somewhere in the process of evolving from unicellular prokaryotes to becoming an interstellar civilization, there is some highly-probable event that breaks the process, a “filter” that screens out all but the luckiest species—or perhaps literally all of them.

I previously thought that this filter was the invention of nuclear weapons; I now realize that this theory is incomplete. Nuclear weapons by themselves are only an existential threat because they co-exist with widespread irrationality and bigotry. The Great Filter is the combination of the two.

Yet there is a deep reason why we would expect that this is precisely the combination that would emerge in most species (as it has certainly emerged in our own): The rationality of a species is not uniform. Some individuals in a species will always be more rational than others, so as a species increases its level of rationality, it does not do so all at once.

Indeed, the processes of economic development and scientific advancement that make a species more rational are unlikely to be spread evenly; some cultures will develop faster than others, and some individuals within a given culture will be further along than others. While the mean level of rationality increases, the variance will also tend to increase.
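One simple way to get that rising-mean, rising-variance behavior is multiplicative growth with uneven rates. Here is a toy model of the idea; every number in it is invented purely for illustration, loosely matching the arbitrary scale below:

```python
import random
import statistics

random.seed(1)

# Start everyone in a narrow band of rationality levels.
before = [random.uniform(1, 3) for _ in range(10_000)]

def advance(pop, generations=5):
    """Each generation, each individual's level grows by an uneven random factor."""
    for _ in range(generations):
        pop = [x * random.uniform(1.0, 1.6) for x in pop]
    return pop

after = advance(before)

# Both the mean and the spread increase: uneven growth stretches
# the distribution as it shifts right.
print(statistics.mean(before), statistics.pstdev(before))
print(statistics.mean(after), statistics.pstdev(after))
```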

On some arbitrary and oversimplified scale where 1 is the level of rationality needed to maintain a hunter-gatherer tribe, and 20 is the level of rationality needed to invent nuclear weapons, the distribution of rationality in a population starts something like this:

[Figure: Great_Filter_1, rationality distribution of an early society]

Most of the population is between levels 1 and 3, which we might think of as lying between the bare minimum for a tribe to survive and the level at which one can start to make advances in knowledge and culture.

Then, as the society advances, it goes through a phase like this:

[Figure: Great_Filter_2, rationality distribution circa Periclean Athens]

This is about where we were in Periclean Athens. Most of the population is between levels 2 and 8. Level 2 used to be the average level of rationality back when we were hunter-gatherers. Level 8 is the level of philosophers like Archimedes and Pythagoras.

Today, our society looks like this:
[Figure: Great_Filter_3, rationality distribution today]

Most of the society is between levels 4 and 20. As I said, level 20 is the point at which it becomes feasible to develop nuclear weapons. Some of the world’s people are extremely intelligent and rational, and almost everyone is more rational than even the smartest people in hunter-gatherer times, but now there is enormous variation.

Where on this chart are racism and nationalism? Importantly, I think they are above the level of rationality that most people had in ancient times. Even Greek philosophers had attitudes toward slaves and other cultures that the modern KKK would find repulsive. I think on this scale racism is about a 10 and nationalism is about a 12.

If we had managed to uniformly increase the rationality of our society, with everyone gaining at the same rate, our distribution would instead look like this:
[Figure: Great_Filter_4, hypothetical distribution with uniform gains]

If that were the case, we’d be fine. The lowest level of rationality widespread in the population would be 14, which is already beyond racism and nationalism. (Maybe it’s about the level of humanities professors today? That makes them substantially below quantum physicists who are 20 by construction… but hey, still almost twice as good as the Greek philosophers they revere.) We would have our nuclear technology, but it would not endanger our future—we wouldn’t even use it for weapons, we’d use it for power generation and space travel. Indeed, this lower-variance high-rationality state seems to be about what they have in the Star Trek universe.

But since we didn’t, a large chunk of our population is between 10 and 12—that is, still racist or nationalist. We have the nuclear weapons, and we have people who might actually be willing to use them.

[Figure: Great_Filter_5, today’s distribution with nuclear capability and lingering nationalism]

I think this is what happens to most advanced civilizations around the galaxy. By the time they invent space travel, they have also invented nuclear weapons—but they still have their equivalent of racism and nationalism. And most of the time, the two combine into a volatile mix that results in the destruction or regression of their entire civilization.

If this is right, then we may be living at the most important moment in human history. It may be right here, right now, that we have the only chance we’ll ever get to turn the tide. We have to find a way to reduce the variance, to raise the rest of the world’s population past nationalism to a cosmopolitan morality. And we may have very little time.

Nuclear power is safe. Why don’t people like it?

Sep 24, JDN 2457656

This post will have two parts, corresponding to each sentence. First, I hope to convince you that nuclear power is safe. Second, I’ll try to analyze some of the reasons why people don’t like it and what we might be able to do about that.

Depending on how familiar you are with the statistics on nuclear power, the idea that nuclear power is safe may strike you as either a completely ridiculous claim or an egregious understatement. If your primary familiarity with nuclear power safety is via the widely-publicized examples of Chernobyl, Three Mile Island, and more recently Fukushima, you may have the impression that nuclear power carries huge, catastrophic risks. (You may also be confusing nuclear power with nuclear weapons—nuclear weapons are indeed the greatest catastrophic risk on Earth today, but equating the two is like equating automobiles and machine guns because both of them are made of metal and contain lubricant, flammable materials, and springs.)

But in fact nuclear energy is astonishingly safe. Indeed, even those examples aren’t nearly as bad as people have been led to believe. Guess how many people died as a result of Three Mile Island, including estimated increased cancer deaths from radiation exposure?

Zero. There are zero confirmed deaths and the consensus estimate of excess deaths caused by the Three Mile Island incident by all causes combined is zero.

What about Fukushima? Didn’t 10,000 people die there? From the tsunami, yes. But the nuclear accident resulted in zero fatalities. If anything, those 10,000 people were killed by coal—by climate change. They certainly weren’t killed by nuclear.

Chernobyl, on the other hand, did actually kill a lot of people. Chernobyl caused 31 confirmed direct deaths, as well as an estimated 4,000 excess deaths by all causes. On the one hand, that’s more than 9/11; on the other hand, it’s about a month of US car accidents. Imagine if people had the same level of panic and outrage at automobiles after a month of accidents that they did at nuclear power after Chernobyl.

The vast majority of nuclear accidents cause zero fatalities; other than Chernobyl, none have ever caused more than 10. Deepwater Horizon killed 11 people, and yet for some reason Americans did not unite in opposition against ever using oil (or even offshore drilling!) ever again.

In fact, even that isn’t fair to nuclear power, because we’re not including the thousands of lives saved every year by using nuclear instead of coal and oil.

Keep in mind, the WHO estimates 10 to 100 million excess deaths due to climate change over the 21st century. That’s an average of 100,000 to 1 million deaths every year. Nuclear power currently produces about 11% of the world’s energy, so let’s do a back-of-the-envelope calculation for how many lives that’s saving. Assuming that additional climate change would be worse in direct proportion to the additional carbon emissions (which is conservative), and assuming that half that energy would be replaced by coal or oil (also conservative, using Germany’s example), we’re looking at about a 6% increase in deaths due to climate change if all those nuclear power plants were closed. That’s 6,000 to 60,000 lives that nuclear power plants save every year.
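For transparency, here is that envelope written out. The 11% share, the 50% fossil replacement, and the WHO death range are the figures from the text; the 5.5% that comes out is what gets rounded up to 6% above:

```python
# WHO-based range of excess climate deaths per year.
climate_deaths_low = 100_000
climate_deaths_high = 1_000_000

nuclear_share = 0.11       # nuclear's share of world energy
fossil_replacement = 0.5   # fraction assumed replaced by coal/oil if plants closed

# Additional fossil share if all nuclear plants closed (rounded to ~6% in the text).
extra_fraction = nuclear_share * fossil_replacement   # 0.055

lives_saved_low = extra_fraction * climate_deaths_low    # ~5,500 per year
lives_saved_high = extra_fraction * climate_deaths_high  # ~55,000 per year

print(f"{lives_saved_low:,.0f} to {lives_saved_high:,.0f} lives per year")
```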

I also haven’t included deaths due to pollution—note that nuclear power plants don’t pollute air or water whatsoever, and only produce very small amounts of waste that can be quite safely stored. Air pollution in all its forms is responsible for one in eight deaths worldwide. Let me say that again: One in eight of all deaths in the world is caused by air pollution—so this is on the order of 7 million deaths per year, every year. We burn our way through more than a Holocaust of deaths annually. Most of this pollution is actually caused by burning wood—fireplaces, wood stoves, and bonfires are terrible for the air—and many countries would actually see a substantial reduction in their toxic pollution if they gave up wood in favor of oil or even coal. But a large part of that pollution is caused by coal, and a nontrivial amount is caused by oil. Coal-burning factories and power plants are responsible for about 1 million deaths per year in China alone. Most of that pollution could be prevented if those power plants were nuclear instead.

Factor all that in, and nuclear power currently saves tens if not hundreds of thousands of lives per year, and expanding it to replace all fossil fuels could save millions more. Indeed, a more precise estimate of the benefits of nuclear power published a few years ago in Environmental Science and Technology is that nuclear power plants have saved some 1.8 million human lives since their invention, putting them on a par with penicillin and the polio vaccine.

So, I hope I’ve convinced you of the first proposition: Nuclear power plants are safe—and not just safe, but heroic, in fact one of the greatest life-saving technologies ever invented. So, why don’t people like them?

Unfortunately, I suspect that no amount of statistical data by itself will convince those who still feel a deep-seated revulsion to nuclear power. Even many environmentalists, people who could be nuclear energy’s greatest advocates, are often opposed to it. I read all the way through Naomi Klein’s This Changes Everything and never found even a single cogent argument against nuclear power; she simply takes it as obvious that nuclear power is “more of the same line of thinking that got us in this mess”. Perhaps because nuclear power could be enormously profitable for certain corporations (which is true; but then, it’s also true of solar and wind power)? Or because it also fits this narrative of “raping and despoiling the Earth” (sort of, I guess)? She never really does explain; I’m guessing she assumes that her audience will simply share her “gut feeling” intuition that nuclear power is dangerous and untrustworthy. One of the most important inconvenient truths for environmentalists is that nuclear power is not only safe, it is almost certainly our best hope for stopping climate change.

Perhaps all this is less baffling when we recognize that other heroic technologies are often also feared or despised for similarly bizarre reasons—vaccines, for instance.

First of all, human beings fear what we cannot understand, and while the human immune system is certainly immensely complicated, nuclear power is based on quantum mechanics, a realm of scientific knowledge so difficult and esoteric that it is frequently used as the paradigm example of something that is hard to understand. (As Feynman famously said, “I think I can safely say that nobody understands quantum mechanics.”) Nor does it help that popular treatments of quantum physics typically bear about as much resemblance to the actual content of the theory as the X-Men films do to evolutionary biology, and con artists like Deepak Chopra take advantage of this confusion to peddle their quackery.

Nuclear radiation is also particularly terrifying because it is invisible and silent; while a properly-functioning nuclear power plant emits less ionizing radiation than the Capitol Building and eating a banana poses substantially higher radiation risk than talking on a cell phone, nonetheless there is real danger posed by ionizing radiation, and that danger is particularly terrifying because it takes a form that human senses cannot detect. When you are burned by fire or cut by a knife, you know immediately; but gamma rays could be coursing through you right now and you’d feel no different. (Huge quantities of neutrinos are coursing through you, but fear not, for they’re completely harmless.) The symptoms of severe acute radiation poisoning also take a particularly horrific form: After the initial phase of nausea wears off, you can enter a “walking ghost phase”, where your eventual death is almost certain due to your compromised immune and digestive systems, but your current condition is almost normal. This makes the prospect of death by nuclear accident a particularly vivid and horrible image.

Vividness makes ideas more available to our memory; and thus, by the availability heuristic, we automatically infer that it must be more probable than it truly is. You can think of horrific nuclear accidents like Chernobyl, and all the carnage they caused; but all those millions of people choking to death in China don’t make for a compelling TV news segment (or at least, our TV news doesn’t seem to think so). Vividness doesn’t actually seem to make things more persuasive, but it does make them more memorable.

Yet even if we allow for the possibility that death by radiation poisoning is somewhat worse than death by coal pollution (if I had to choose between the two, okay, maybe I’d go with the coal), surely it’s not ten thousand times worse? Surely it’s not worth sacrificing entire cities full of people to coal in order to prevent a handful of deaths by nuclear energy?

Another reason that has been proposed is a sense that we can control risk from other sources, but a nuclear meltdown would be totally outside our control. Perhaps that is the perception, but if you think about it, it really doesn’t make a lot of sense. If there’s a nuclear meltdown, emergency services will report it, and you can evacuate the area. Yes, the radiation moves at the speed of light; but it also dissipates as the inverse square of distance, so if you just move further away you can get a lot safer quite quickly. (Think about the brightness of a lamp in your face versus across a football field. Radiation works the same way.) The damage is also cumulative, so the radiation risk from a meltdown is only going to be serious if you stay close to the reactor for a sustained period of time. Indeed, it’s much easier to avoid nuclear radiation than it is to avoid air pollution; you can’t just stand behind a concrete wall to shield against air pollution, and moving further away isn’t possible if you don’t know where it’s coming from. Control would explain why we fear cars less than airplanes (which is also statistically absurd), but it really can’t explain why nuclear power scares people more than coal and oil.
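The inverse-square argument above is easy to make concrete with a few lines of arithmetic. This is a minimal sketch; the distances are arbitrary, and the point-source, no-shielding setup is a simplifying assumption:

```python
# Inverse-square law: intensity from a point source falls off as 1/distance^2,
# so doubling your distance cuts your dose rate to a quarter.

def relative_intensity(distance, reference=1.0):
    """Radiation intensity at `distance` relative to the intensity at
    `reference` distance (idealized point source, no shielding)."""
    return (reference / distance) ** 2

at_1m = relative_intensity(1)      # baseline
at_10m = relative_intensity(10)    # 100 times weaker
at_100m = relative_intensity(100)  # 10,000 times weaker
print(at_1m, at_10m, at_100m)
```

This is the lamp analogy in numbers: walking from one meter to a hundred meters away cuts your exposure rate by a factor of ten thousand, which is why evacuation is such an effective response to a meltdown.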

Another important factor may be an odd sort of bipartisan consensus: While the Left hates nuclear power because it makes corporations profitable or because it’s unnatural and despoils the Earth or something, the Right hates nuclear power because it requires substantial government involvement and might displace their beloved fossil fuels. (The Right’s deep, deep love of the fossil fuel industry now borders on the pathological. Even now that they are obviously economically inefficient and environmentally disastrous, right-wing parties around the world continue to defend enormous subsidies for oil and coal companies. Corruption and regulatory capture could partly explain this, but only partly. Campaign contributions can’t explain why someone would write a book praising how wonderful fossil fuels are and angrily denouncing anyone who would dare criticize them.) So while the two sides may hate each other in general and disagree on most other issues—including of course climate change itself—they can at least agree that nuclear power is bad and must be stopped.

Where do we go from here, then? I’m not entirely sure. As I said, statistical data by itself clearly won’t be enough. We need to find out what it is that makes people so uniquely terrified of nuclear energy, and we need to find a way to assuage those fears.

And we must do this now. For every day we don’t—every day we postpone the transition to a zero-carbon energy grid—is another thousand people dead.

The facts will not speak for themselves, so we must speak for them

August 3, JDN 2457604

I finally began to understand the bizarre and terrifying phenomenon that is the Donald Trump Presidential nomination when I watched this John Oliver episode:

https://www.youtube.com/watch?v=U-l3IV_XN3c

These lines in particular, near the end, finally helped me put it all together:

What is truly revealing is his implication that believing something to be true is the same as it being true. Because if anything, that was the theme of the Republican Convention this week; it was a four-day exercise in emphasizing feelings over facts.

The facts against Donald Trump are absolutely overwhelming. He is not even a competent businessman, just a spectacularly manipulative one—and even then, it’s not clear he made any more money than he would have just keeping his inheritance in a diversified stock portfolio. His casinos were too fraudulent for Atlantic City. His university was fraudulent. He has the worst honesty rating Politifact has ever given a candidate. (Bernie Sanders, Barack Obama, and Hillary Clinton are statistically tied for some of the best.)

More importantly, almost every policy he has proposed or even suggested is terrible, and several of them could be truly catastrophic.

Let’s start with economic policy: His trade policy would set back decades of globalization and dramatically increase global poverty, while doing little or nothing to expand employment in the US, especially if it sparks a trade war. His fiscal policy would permanently balloon the deficit by giving one of the largest tax breaks to the rich in history. His infamous wall would probably cost about as much as the federal government currently spends on all basic scientific research combined, and his only proposal for funding it fundamentally misunderstands how remittances and trade deficits work. He doesn’t believe in climate change, and would roll back what little progress we have made at reducing carbon emissions, thereby endangering millions of lives. He could very likely cause a global economic collapse comparable to the Great Depression.

His social policy is equally terrible: He has proposed criminalizing abortion (in express violation of Roe v. Wade), which even many pro-life people find too extreme. He wants to deport all Muslims and ban Muslims from entering the US, which is not just a direct First Amendment violation but also literally involves jackbooted soldiers breaking into the homes of law-abiding US citizens to kidnap them and take them out of the country. He wants to deport 11 million undocumented immigrants, the largest deportation in US history.

Yet it is in foreign policy above all that Trump is truly horrific. He has explicitly endorsed targeting the families of terrorists, which is a war crime (though not as bad as what Ted Cruz wanted to do, which is carpet-bombing cities). Speaking of war crimes, he thinks our torture policy wasn’t severe enough, and doesn’t even care if it is ineffective. He has made the literally mercantilist assertion that the purpose of military alliances is to create trade surpluses, and if European countries will not provide us with trade surpluses (read: tribute), he will no longer commit to defending them, thereby undermining decades of global stability that is founded upon America’s unwavering commitment to defend our allies. And worst of all, he will not rule out the first-strike deployment of nuclear weapons.

I want you to understand that I am not exaggerating when I say that a Donald Trump Presidency carries a nontrivial risk of triggering global nuclear war. Will this probably happen? No. It has a probability of perhaps 1%. But a 1% chance of a billion deaths is not a risk anyone should be prepared to take.
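The expected-value arithmetic behind that claim is worth spelling out. Both numbers below are the rough assumptions from the text, not independent estimates:

```python
# Back-of-the-envelope expected-value calculation for the risk claim above.
p_war = 0.01          # assumed probability of a Trump presidency triggering nuclear war
deaths_if_war = 1e9   # assumed death toll of a global nuclear war
expected_deaths = p_war * deaths_if_war
print(f"{expected_deaths:,.0f}")  # on the order of ten million expected deaths
```

Ten million expected deaths is comparable to a world war; that is the scale of gamble being described, even at "only" a 1% probability.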

All of these facts scream at us that Donald Trump would be a catastrophe for America and the world. Why, then, are so many people voting for him? Why do our best election forecasts give him a good chance of winning the election?

Because facts don’t speak for themselves.

This is how the left, especially the center-left, has dropped the ball in recent decades. We joke that reality has a liberal bias, because so many of the facts are so obviously on our side. But meanwhile the right wing has nodded and laughed, even mockingly called us the “reality-based community”, because they know how to manipulate feelings.

Donald Trump has essentially no other skills—but he has that one, and it is enough. He knows how to fan the flames of anger and hatred and point them at his chosen targets. He knows how to rally people behind meaningless slogans like “Make America Great Again” and convince them that he has their best interests at heart.

Indeed, Trump’s persuasiveness is one of his many parallels with Adolf Hitler; I am not yet prepared to accuse Donald Trump of seeking genocide, yet at the same time I am not yet willing to put it past him. I don’t think it would take much of a spark at this point to trigger a conflagration of hatred that launches a genocide against Muslims in the United States, and I don’t trust Trump not to light such a spark.

Meanwhile, liberal policy wonks are looking on in horror, wondering how anyone could be so stupid as to believe him—and even publicly basically calling people stupid for believing him. Or sometimes we say they’re not stupid, they’re just racist. But people don’t believe Donald Trump because they are stupid; they believe Donald Trump because he is persuasive. He knows the inner recesses of the human mind and can harness our heuristics to his will. Do not mistake your unique position that protects you—some combination of education, intellect, and sheer willpower—for some inherent superiority. You are not better than Trump’s followers; you are more resistant to Trump’s powers of persuasion. Yes, statistically, Trump voters are more likely to be racist; but racism is a deep-seated bias in the human mind that to some extent we all share. Trump simply knows how to harness it.

Our enemies are persuasive—and therefore we must be as well. We can no longer act as though facts will automatically convince everyone by the power of pure reason; we must learn to stir emotions and rally crowds just as they do.

Or rather, not just as they do—not quite. When we see lies being so effective, we may be tempted to lie ourselves. When we see people being manipulated against us, we may be tempted to manipulate them in return. But in the long run, we can’t afford to do that. We do need to use reason, because reason is the only way to ensure that the beliefs we instill are true.

Therefore our task must be to make people see reason. Let me be clear: Not demand they see reason. Not hope they see reason. Not lament that they don’t. This will require active investment on our part. We must actually learn to persuade people in such a manner that their minds become more open to reason. This will mean using tools other than reason, but it will also mean treading a very fine line, using irrationality only when rationality is insufficient.

We will be tempted to take the easier, quicker path to the Dark Side, but we must resist. Our goal must be not to make people do what we want them to—but to do what they would want to if they were fully rational and fully informed. We will need rhetoric; we will need oratory; we may even need some manipulation. But as we fight our enemy, we must be vigilant not to become them.

This means not using bad arguments—strawmen and conmen—but pointing out the flaws in our opponents’ arguments even when they seem obvious to us—bananamen. It means not overstating our case about free trade or using implausible statistical results simply because they support our case.

But it also means not understating our case, not hiding in page 17 of an opaque technical report that if we don’t do something about climate change right now millions of people will die. It means not presenting our ideas as “political opinions” when they are demonstrated, indisputable scientific facts. It means taking the media to task for their false balance that must find a way to criticize a Democrat every time they criticize a Republican: Sure, he is a pathological liar and might trigger global economic collapse or even nuclear war, but she didn’t secure her emails properly. If you objectively assess the facts and find that Republicans lie three times as often as Democrats, maybe that’s something you should be reporting on instead of trying to compensate for by changing your criteria.

Speaking of the media, we should be pressuring them to include a regular—preferably daily, preferably primetime—segment on climate change, because yes, it is that important. How about after the weather report every day, you show a climate scientist explaining why we keep having record-breaking summer heat and more frequent natural disasters? If we suffer a global ecological collapse, this other stuff you’re constantly talking about really isn’t going to matter—that is, if it mattered in the first place. When ISIS kills 200 people in an attack, you don’t just report that a bunch of people died without examining the cause or talking about responses. But when a typhoon triggered by climate change kills 7,000, suddenly it’s just a random event, an “act of God” that nobody could have predicted or prevented. Having an appropriate caution about whether climate change caused any particular disaster should not prevent us from drawing the very real links between more carbon emissions and more natural disasters—and sometimes there’s just no other explanation.

It means demanding fact-checks immediately, not as some kind of extra commentary that happens after the debate, but as something the moderator says right then and there. (You have a staff, right? And they have Google access, right?) When a candidate says something that is blatantly, demonstrably false, they should receive a warning. After three warnings, their mic should be cut for that question. After ten, they should be kicked off the stage for the remainder of the debate. Donald Trump wouldn’t have lasted five minutes. But instead, they not only let him speak, they spent the next week repeating what he said in bold, exciting headlines. At least CNN finally realized that their headlines could actually fact-check Trump’s statements rather than just repeat them.

Above all, we will need to understand why people think the way they do, and learn to speak to them persuasively and truthfully but without elitism or condescension. This is one I know I’m not very good at myself; sometimes I get so frustrated with people who think the Earth is 6,000 years old (over 40% of Americans) or don’t believe in climate change (35% don’t think it is happening at all, another 30% don’t think it’s a big deal) that I come off as personally insulting them—and of course from that point forward they turn off. But irrational beliefs are not proof of defective character, and we must make that clear to ourselves as well as to others. We must not say that people are stupid or bad; but we absolutely must say that they are wrong. We must also remember that despite our best efforts, some amount of reactance will be inevitable; people simply don’t like having their beliefs challenged.

Yet even all this is probably not enough. Many people don’t watch mainstream media, or don’t believe it when they do (not without reason). Many people won’t even engage with friends or family members who challenge their political views, and will defriend or even disown them. We need some means of reaching these people too, and the hardest part may be simply getting them to listen to us in the first place. Perhaps we need more grassroots action—more protest marches, or even activists going door to door like Jehovah’s Witnesses. Perhaps we need to establish new media outlets that will be as widely accessible but held to a higher standard.

But we must find a way–and we have little time to waste.

Alien invasions: Could they happen, and could we survive?

July 30, JDN 2457600

[Image: alien invasion]

It’s not actually the top-grossing film in the US right now (that would be The Secret Life of Pets), but Independence Day: Resurgence made a quite respectable gross of $343 million worldwide, giving it an ROI of 108% over its budget of $165 million. It speaks to something deep in our minds—and since most of the money came from outside the US, apparently not just Americans, though it is a deeply American film—about the fear, but perhaps also the excitement, of a possible alien invasion.

So, how likely are alien invasions anyway?

Well, first of all, how likely are aliens?

One of the great mysteries of astronomy is the Fermi Paradox: Everything we know about astronomy, biology, and probability tells us that there should be, somewhere out in the cosmos, a multitude of extraterrestrial species, and some of them should even be intelligent enough to form civilizations and invent technology. So why haven’t we found any clear evidence of any of them?

Indeed, the Fermi Paradox became even more baffling in just the last two years, as we found literally thousands of new extrasolar planets, many of them quite likely to be habitable. More extrasolar planets have been found since 2014 than in all previous years of human civilization. Perhaps this is less surprising when we remember that no extrasolar planets had ever been confirmed before 1992—but personally I think that just makes it this much more amazing that we are lucky enough to live in such a golden age of astronomy.

The Drake equation was supposed to tell us how probable it is that we should encounter an alien civilization, but the equation isn’t much use to us because so many of its terms are so wildly uncertain. Maybe we can pin down how many planets there are soon, but we still don’t know what proportion of planets can support life, what proportion of those actually have life, or above all what proportion of ecosystems ever manage to evolve a technological civilization or how long such a civilization is likely to last. All possibilities from “they’re everywhere but we just don’t notice or they actively hide from us” to “we are actually the only ones in the last million years” remain on the table.

But let’s suppose that aliens do exist, and indeed have technology sufficient to reach our solar system. Faster-than-light capability would certainly do it, but it isn’t strictly necessary; with long lifespans, cryonic hibernation, or relativistic propulsion aliens could reasonably expect to travel at least between nearby stars within their lifetimes. The Independence Day aliens appear to have FTL travel, but interestingly it makes the most sense if they do not have FTL communication—it took them 20 years to get the distress call because it was sent at lightspeed. (Or perhaps the ansible was damaged in the war, and they fell back to a lightspeed emergency system?) Otherwise I don’t quite get why it would take the Queen 20 years to deploy her personal battlecruiser after the expeditionary force she sent was destroyed—maybe she was just too busy elsewhere to bother with our backwater planet? What did she want from our planet again?

That brings me to my next point: Just what motivation would aliens have for attacking us? We often take it for granted that if aliens exist, and have the capability to attack us, they would do so. But that really doesn’t make much sense. Do they just enjoy bombarding primitive planets? I guess it’s possible they’re all sadistic psychopaths, but it seems like any civilization stable enough to invent interstellar travel has got to have some kind of ethical norms. Maybe they see us as savages or even animals, and are therefore willing to kill us—but that still means they need a reason.

Another idea, taken seriously in V and less so in Cowboys & Aliens, is that there is some sort of resource we have that they want, and they’re willing to kill us to get it. This is probably such a common trope because it has been a common part of human existence; we are very familiar with people killing other people in order to secure natural resources such as gold, spices, or oil. (Indeed, to some extent it continues to this day.)

But this actually doesn’t make a lot of sense on an interstellar scale. Certainly water (V) and gold (Cowboys & Aliens) are not things they would have even the slightest reason to try to claim from an inhabited planet, as comets are a better source of water and asteroids are a better source of gold. Indeed, almost nothing inorganic could really be cost-effective to obtain from an inhabited planet; far easier to just grab it from somewhere that won’t fight back, and may even have richer veins and lower gravity.

It’s possible they would want something organic—lumber or spices, I guess. But I’m not sure why they’d want those things, and it seems kind of baffling that they wouldn’t just trade if they really want them. I’m sure we’d gladly give up a great deal of oregano and white pine in exchange for nanotechnology and FTL. I guess I could see this happening because they assume we’re too stupid to be worth trading with, or they can’t establish reliable means of communication. But one of the reasons why globalization has succeeded where colonialism failed is that trade is a lot more efficient than theft, and I find it unlikely that aliens this advanced would have failed to learn that lesson.

Media that imagines they’d enslave us makes even less sense; slavery is wildly inefficient, and they probably have such ludicrously high productivity that they are already coping with a massive labor glut. (I suppose maybe they send off unemployed youths to go conquer random planets just to give them something to do with their time? Helps with overpopulation too.)

I actually thought Independence Day: Resurgence did a fairly good job of finding a resource that is scarce enough to be worth fighting over while also not being something we would willingly trade. Spoiler alert, I suppose:

Molten cores. Now, I haven’t the foggiest what one does with molten planet cores that somehow justifies the expenditure of all that energy flying between solar systems and digging halfway through planets with gigantic plasma drills, but hey, maybe they are actually tremendously useful somehow. They certainly do contain huge amounts of energy, provided you can extract it efficiently. Moreover, they are scarce; of planets we know about, most of them do not have molten cores. Earth, Venus, and Mercury do, and we think Mars once did; but none of the gas giants do, and even if they did, it’s quite plausible that the Queen’s planet-cracker drill just can’t drill that far down. Venus sounds like a nightmare to drill, so really the only planet I’d expect them to extract before Earth would be Mercury. And maybe they figured they needed both cores to justify the trip, in which case it would make sense to hit the inhabited planet first so we don’t have time to react and prepare our defenses. (I can’t imagine we’d take giant alien ships showing up and draining Mercury’s core lying down.) I’m imagining the alien economist right now, working out the cost-benefit analysis of dealing with Venus’s superheated atmosphere and sulfuric acid clouds compared to the cost of winning a war against primitive indigenous apes with nuclear missiles: Well, doubling our shield capacity is cheaper than covering the whole ship in sufficient anticorrosive, so I guess we’ll go hit the ape planet. (They established in the first film that their shields can withstand direct hits from nukes—the aliens came prepared.)

So, maybe killing us for our resources isn’t completely out of the question, but it seems unlikely.

Another possibility is religious fanaticism: Every human culture has religion in some form, so why shouldn’t the aliens? And if they do, it’s likely radically different from anything we believe. If they become convinced that our beliefs are not simply a minor nuisance but an active threat to the holy purity of the galaxy, they could come to our system on a mission to convert or destroy at any cost; and since “convert” seems very unlikely, “destroy” would probably become their objective pretty quickly. It wouldn’t have to make sense in terms of a cost-benefit analysis—fanaticism doesn’t have to make sense at all. The good news here is that any culture fanatical enough to randomly attack other planets simply for believing differently from them probably won’t be cohesive enough to reach that level of technology. (Then again, we somehow managed a world with both ISIS and ICBMs.)

Personally I think there is a far more likely scenario for alien invasions, and that is benevolent imperialism.

Why do I specify “benevolent”? Because if they aren’t interested in helping us, there’s really no reason for them to bother with us in the first place. But if their goal is to uplift our civilization, the only way they can do that is by interacting with us.

Now, note that I use the word “benevolent”, not the word “beneficent”. I think they would have to desire to make our lives better—but I’m not so convinced they actually would make our lives better. In our own history, human imperialism was rarely benevolent in the first place, but even where it was, it was even more rarely actually beneficent. Their culture would most likely be radically different from our own, and what they think of as improvements might seem to us strange, pointless, or even actively detrimental. But don’t you see that the QLX coefficient is maximized if you convert all your mountains into selenium extractors? (This is probably more or less how Native Americans felt when Europeans started despoiling their land for things called “coal” and “money”.) They might even try to alter us biologically to be more similar to them: But haven’t you always wanted tentacles? Hands are so inefficient!

Moreover, even if their intentions were good and their methods of achieving them were sound, it’s still quite likely that we would violently resist. I don’t know if humans are a uniquely rebellious species—let’s hope not, lest the aliens be shocked into overreacting when we rebel—but in general humans do not like being ruled over and forced to do things, even when those rulers are benevolent and the things they are forced to do are worth doing.

So, I think the most likely scenario for a war between humans and aliens is that they come in and start trying to radically reorganize our society, and either because their demands actually are unreasonable, or at least because we think they are, we rebel against their control.

Then what? Could we actually survive?

The good news is: Yes, we probably could.

If aliens really did come down trying to extract our molten core or something, the movies are all wrong: We’d have basically no hope. It really makes no sense at all that we could win a full-scale conflict with a technologically superior species if they were willing to exterminate us. Indeed, if what they were after didn’t depend upon preserving local ecology, their most likely mode of attack is to arrive in the system and immediately glass the planet. Nuclear weapons are already available to us for that task; if they’re more advanced they might have antimatter bombs, relativistic kinetic warheads, or even something more powerful still. We might be all dead before we even realized what was happening, or they might destroy 90% of us right away and mop up the survivors later with little difficulty.

If they wanted something that required ecological stability (I shall henceforth dub this the “oregano scenario”), yet weren’t willing to trade for some reason, then they wouldn’t unleash full devastation, and we’d have the life-dinner principle on our side: The hare runs for his life, but the fox only runs for her dinner. So if the aliens are trying to destroy us to get our delicious spices, we have a certain advantage from the fact that we are willing to win at essentially any cost, while at some point that alien economist is going to run the numbers and say, “This isn’t cost-effective. Let’s cut our losses and hit another system instead.”

If they wanted to convert us to their religion, well, we’d better hope enough people convert, because otherwise they’re going to revert to, you guessed it, glassing the planet. At least this means they would probably at least try to communicate first, so we’d have some time to prepare; but it’s unlikely that even if their missionaries spent decades trying to convert us we could seriously reduce our disadvantage in military technology during that time. So really, our best bet is to adopt the alien religion. I guess what I’m really trying to say here is “All Hail Xemu.”

But in the most likely scenario that their goal is actually to make our lives better, or at least better as they see it, they will not be willing to utilize their full military capability against us. They might use some lethal force, especially if they haven’t found reliable means of nonlethal force on sufficient scale; but they aren’t going to try to slaughter us outright. Maybe they kill a few dissenters to set an example, or fire into a crowd to disperse a riot. But they are unlikely to level a city, and they certainly wouldn’t glass the entire planet.

Our best bet would probably actually be nonviolent resistance, as this has a much better track record against benevolent imperialism. Gandhi probably couldn’t have won a war against Britain, but he achieved India’s independence because he was smart enough to fight on the front of public opinion. Likewise, even with one tentacle tied behind their backs by their benevolence, the aliens would still probably be able to win any full-scale direct conflict; but if our nonviolent resistance grew strong enough, they might finally take the hint and realize we don’t want their so-called “help”.

So, how about someone makes that movie? Aliens come to our planet, not to kill us, but to change us, make us “better” according to their standards. QLX coefficients are maximized, and an intrepid few even get their tentacles installed. But the Resistance arises, and splits into two factions: One tries to use violence, and is rapidly crushed by overwhelming firepower, while the other uses nonviolent resistance. Ultimately the Resistance grows strong enough to overthrow the alien provisional government, and they decide to cut their losses and leave our planet. Then, decades later, we go back to normal, and wonder if we made the right decision, or if maybe QLX coefficients really were the most important thing after all.

[The image is released under a CC0 copyleft from Pixabay.]

The real Existential Risk we should be concerned about

JDN 2457458

There is a rather large subgroup within the rationalist community (loosely defined because organizing freethinkers is like herding cats) that focuses on existential risks, also called global catastrophic risks. Prominent examples include Nick Bostrom and Eliezer Yudkowsky.

Their stated goal in life is to save humanity from destruction. And when you put it that way, it sounds pretty darn important. How can you disagree with wanting to save humanity from destruction?

Well, there are actually people who do (the Voluntary Human Extinction movement), but they are profoundly silly. It should be obvious to anyone with even a basic moral compass that saving humanity from destruction is a good thing.

It’s not the goal of fighting existential risk that bothers me. It’s the approach. Specifically, they almost all seem to focus on exotic existential risks, vivid and compelling existential risks that are the stuff of great science fiction stories. In particular, they have a rather odd obsession with AI.

Maybe it’s the overlap with Singularitarians, and their inability to understand that exponentials are not arbitrarily fast; if you just keep projecting the growth in computing power as growing forever, surely eventually we’ll have a computer powerful enough to solve all the world’s problems, right? Well, yeah, I guess… if we can actually maintain the progress that long, which we almost certainly can’t, and if the problems turn out to be computationally tractable at all (the fastest possible computer that could fit inside the observable universe could not brute-force solve the game of Go, though a heuristic AI did just beat one of the world’s best players), and/or if we find really good heuristic methods of narrowing down the solution space… but that’s an awful lot of “if”s.
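The Go claim is easy to sanity-check with integer arithmetic. Merely counting static board configurations—each of the 361 points empty, black, or white, which overcounts legal positions but gets the order of magnitude—already dwarfs anything physically computable:

```python
# Raw board configurations for a 19x19 Go board: 3 states per point, 361 points.
configurations = 3 ** 361
atoms_in_observable_universe = 10 ** 80  # commonly cited rough estimate

# ~10^172 configurations vs ~10^80 atoms: even a computer using every atom
# in the universe could not enumerate this space, no matter how fast.
print(len(str(configurations)))  # number of decimal digits in 3^361
```

The configuration count exceeds the *square* of the number of atoms in the observable universe, which is why heuristic search (as in AlphaGo) rather than brute force was the only way forward.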

But AI isn’t what we need to worry about in terms of saving humanity from destruction. Nor is it asteroid impacts; NASA has been doing a good job watching for asteroids lately, and estimates the current risk of a serious impact (by which I mean something like a city-destroyer or global climate shock, not even a global killer) at around 1/10,000 per year. Alien invasion is right out; we can’t even find clear evidence of bacteria on Mars, and the skies are so empty of voices it has been called a paradox. Gamma ray bursts could kill us, and we aren’t sure about the probability of that (we think it’s small?), but much like brain aneurysms, there really isn’t a whole lot we can do to prevent them.
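Taking that 1/10,000-per-year figure at face value, and assuming (simplistically) that years are independent, it compounds to only about a 1% chance per century:

```python
# How the ~1/10,000-per-year impact risk accumulates over a century,
# assuming independent years (a simplification).
annual_risk = 1 / 10_000
years = 100
century_risk = 1 - (1 - annual_risk) ** years
print(f"{century_risk:.2%}")  # roughly 1% per century
```

A roughly 1% chance per century of a city-destroyer is worth monitoring—which NASA is doing—but it is small next to the risks discussed below.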

There is one threat we really do need to worry about destroying humanity, and one other that could potentially come close over a much longer timescale. The long-range threat is ecological collapse; as global climate change gets worse and the oceans become more acidic and the aquifers are drained, we could eventually reach the point where humanity cannot survive on Earth, or at least where our population collapses so severely that civilization as we know it is destroyed. This might not seem like such a threat, since we would see it coming decades or centuries in advance—but we are seeing it coming decades or centuries in advance, and yet we can’t seem to get the world’s policymakers to wake up and do something about it. So that’s clearly the second-most important existential risk.

But the most important existential risk, by far, no question, is nuclear weapons.

Nuclear weapons are the only foreseeable, preventable means by which humanity could be destroyed in the next twenty minutes.

Yes, that is approximately the time it takes an ICBM to hit its target after launch. There are almost 4,000 ICBMs currently deployed, mostly by the US and Russia. Once we include submarine-launched missiles and bombers, the total number of global nuclear weapons is over 15,000. I apologize for terrifying you by saying that these weapons could be deployed at a moment’s notice to wipe out most of human civilization within half an hour, followed by a global ecological collapse and fallout that would endanger the future of the entire human race—but it’s the truth. If you’re not terrified, you’re not paying attention.

I’ve intentionally linked the Union of Concerned Scientists as one of those sources. Now they are people who understand existential risk. They don’t talk about AI and asteroids and aliens (how alliterative). They talk about climate change and nuclear weapons.

We must stop this. We must get rid of these weapons. Next to that, literally nothing else matters.

“What if we’re conquered by tyrants?” It won’t matter. “What if there is a genocide?” It won’t matter. “What if there is a global economic collapse?” None of these things will matter, if the human race wipes itself out with nuclear weapons.

To speak like an economist for a moment, the utility of a global nuclear war must be set at negative infinity. Any detectable reduction in the probability of that event must be considered worth paying any cost to achieve. I don’t care if it costs $20 trillion and results in us being taken over by genocidal fascists—we are talking about the destruction of humanity. We can spend $20 trillion (in fact, the US economy produces that much output every 14 months!). We can survive genocidal fascists. We cannot survive nuclear war.

The good news is, we shouldn’t actually have to pay that sort of cost. All we have to do is dismantle our nuclear arsenal, and get other countries—particularly Russia—to dismantle theirs. In the long run, we will increase our wealth as our efforts are no longer wasted maintaining doomsday machines.

The main challenge is actually a matter of game theory. The surprisingly sophisticated 1990s cartoon show Animaniacs basically got it right when they sang: “We’d beat our swords into liverwurst / Down by the East Riverside / But no one wants to be the first!”

The thinking, anyway, is that this is basically a Prisoner’s Dilemma. If the US disarms and Russia doesn’t, Russia can destroy the US. Conversely, if Russia disarms and the US doesn’t, the US can destroy Russia. If neither disarms, we’re left where we are. Whether or not the other country disarms, you’re always better off not disarming. So neither country disarms.

But I contend that it is not, in fact, a Prisoner’s Dilemma. It could be a Stag Hunt; if that’s the case, then only multilateral disarmament makes sense, because the best outcome is if we both disarm, but the worst outcome is if we disarm and they don’t. Once we expect them to disarm, we have no temptation to renege on the deal ourselves; but if we think there’s a good chance they won’t, we might not want to disarm either. Stag Hunts have two stable Nash equilibria: one where both stay armed, the other where both disarm.

But in fact, I think it may be simply the trivial game.

There aren’t actually that many possible symmetric two-player nonzero-sum games (basically it’s a question of ordering 4 possible payoffs, which gives 24 orderings, and symmetry cuts that in half, leaving 12 distinct games), and one that we never talk about (because it’s sort of boring) is the trivial game: If I do the right thing and you do the right thing, we’re both better off. If you do the wrong thing and I do the right thing, I’m better off. If we both do the wrong thing, we’re both worse off. So, obviously, we both do the right thing, because we’d be idiots not to. Formally, we say that cooperation is a strictly dominant strategy. There’s no dilemma, no paradox; the self-interested strategy is the optimal strategy. (I find it kind of amusing that laissez-faire economics basically amounts to assuming that all real-world games are the trivial game.)
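The differences between these three candidate games can be checked mechanically. Here is a minimal sketch (the payoff numbers are illustrative ordinal rankings I picked for the example, not real estimates) that finds the pure-strategy Nash equilibria of each symmetric game and tests whether disarming is strictly dominant:

```python
# Strategies: 0 = stay armed, 1 = disarm.
# payoff[i][j] = row player's payoff when row plays i and column plays j.
# Numbers are illustrative ordinal rankings only.
GAMES = {
    # Staying armed is always strictly better for you:
    # a Prisoner's Dilemma, with mutual armament the only equilibrium.
    "prisoners_dilemma": [[1, 3], [0, 2]],
    # Mutual disarmament is best, but unilateral disarmament is worst:
    # a Stag Hunt, with two pure equilibria.
    "stag_hunt":         [[2, 3], [0, 4]],
    # Disarming is strictly better no matter what the other side does:
    # the "trivial game".
    "trivial":           [[0, 1], [2, 3]],
}

def pure_nash(payoff):
    """Pure-strategy Nash equilibria of a symmetric 2x2 game."""
    eqs = []
    for i in range(2):        # row player's strategy
        for j in range(2):    # column player's strategy
            # Neither player can gain by unilaterally switching.
            row_ok = payoff[i][j] >= payoff[1 - i][j]
            col_ok = payoff[j][i] >= payoff[1 - j][i]
            if row_ok and col_ok:
                eqs.append((i, j))
    return eqs

def disarm_strictly_dominant(payoff):
    """Is disarming (1) strictly better than arming (0) in every case?"""
    return all(payoff[1][j] > payoff[0][j] for j in range(2))

for name, payoff in GAMES.items():
    print(name, pure_nash(payoff), disarm_strictly_dominant(payoff))
```

Run it and you see the three structures the text describes: the Prisoner’s Dilemma’s only equilibrium is mutual armament, the Stag Hunt has both mutual armament and mutual disarmament as equilibria, and in the trivial game disarming is strictly dominant, so mutual disarmament is the only equilibrium.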

That is, I don’t think the US would actually benefit from nuking Russia, even if we could do so without retaliation. Likewise, I don’t think Russia would actually benefit from nuking the US. One of the things we’ve discovered—the hardest way possible—through human history is that working together is often better for everyone than fighting. Russia could nuke NATO, and thereby destroy all of their largest trading partners, or they could continue trading with us. Even if they are despicable psychopaths who think nothing of committing mass murder (Putin might be, but surely there are people under his command who aren’t?), it’s simply not in Russia’s best interest to nuke the US and Europe. Likewise, it is not in our best interest to nuke them.

Nuclear war is a strange game: The only winning move is not to play.

So I say, let’s stop playing. Yes, let’s unilaterally disarm, the thing that so many policy analysts are terrified of because they’re so convinced we’re in a Prisoner’s Dilemma or a Stag Hunt. “What’s to stop them from destroying us, if we make it impossible for us to destroy them!?” I dunno, maybe basic human decency, or failing that, rationality?

Several other countries have already done this—South Africa unilaterally disarmed, and nobody nuked them. Japan refused to build nuclear weapons in the first place—and I think it says something that they’re the only people to ever have them used against them.

Our conventional military is plenty large enough to defend us against all realistic threats, and could even be repurposed to defend against nuclear threats as well, by a method I call credible targeted conventional response. Instead of building ever-larger nuclear arsenals to threaten devastation in the world’s most terrifying penis-measuring contest, you deploy covert operatives (perhaps Navy SEALs in submarines, or double agents, or these days even stealth drones) around the world, with the standing order that if they have reason to believe a country initiated a nuclear attack, they will stop at nothing to hunt down and kill the specific people responsible for that attack. Not the country they came from; not the city they live in; those specific people. If a leader is enough of a psychopath to be willing to kill 300 million people in another country, he’s probably enough of a psychopath to be willing to lose 150 million people in his own country. He likely has a secret underground bunker that would let him survive, so long as humanity as a whole does. So you should be threatening the one thing he does care about—himself. You make sure he knows that if he pushes that button, you’ll find that bunker, drop in from helicopters, and shoot him in the face.

The “targeted conventional response” should be clear by now—you use non-nuclear means to respond, and you target the particular leaders responsible—but let me say a bit more about the “credible” part. The threat of mutually-assured destruction is actually not a credible one. It’s not what we call in game theory a subgame perfect Nash equilibrium. If you know that Russia has launched 1500 ICBMs to destroy every city in America, you actually have no reason at all to retaliate with your own 1500 ICBMs, and the most important reason imaginable not to. Your people are dead either way; you can’t save them. You lose. The only question now is whether you risk taking the rest of humanity down with you. If you have even the most basic human decency, you will not push that button. You will not “retaliate” in useless vengeance that could wipe out human civilization. Thus, your threat is a bluff—it is not credible.
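This credibility argument is just backward induction on the post-attack subgame: once the attack has already happened, what is the attacked side’s actual best response under each doctrine? A minimal sketch (the payoff values are illustrative orderings I chose for the example, nothing more):

```python
# Backward induction on the post-attack subgame.
# Payoffs are illustrative ordinal values for the attacked side,
# evaluated AFTER the enemy's missiles have already launched.

# Doctrine: massive nuclear retaliation (mutually-assured destruction).
# Your cities are lost either way; retaliating risks destroying the
# rest of humanity too, so it makes the outcome strictly worse.
mad = {"retaliate": -100, "stand_down": -10}

# Doctrine: credible targeted conventional response (CTCR).
# Hunting down the specific leaders responsible is better than doing
# nothing: justice is served, future attackers are deterred, and no
# additional global destruction is inflicted.
ctcr = {"retaliate": -5, "stand_down": -10}

def best_response(payoffs):
    """The action the attacked side actually prefers in the subgame."""
    return max(payoffs, key=payoffs.get)

# A threat is credible (subgame perfect) only if carrying it out is
# the best response once the subgame is actually reached.
for name, payoffs in [("MAD", mad), ("CTCR", ctcr)]:
    choice = best_response(payoffs)
    print(f"{name}: best response is {choice}; "
          f"threat credible: {choice == 'retaliate'}")
```

Under MAD the best response is to stand down, so the retaliatory threat is a bluff; under targeted conventional response, carrying out the threat is the best response, which is exactly what makes it credible.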

But if your response is targeted and conventional, it suddenly becomes credible. It’s exactly reversed; you now have every reason to retaliate, and no reason not to. Your covert operation teams aren’t being asked to destroy humanity; they’re being tasked with finding and executing the greatest mass murderer in history. They don’t have some horrific moral dilemma to resolve; they have the opportunity to become the world’s greatest heroes. Indeed, they’d very likely have the whole world (or what’s left of it) on their side; even the population of the attacking country would rise up in revolt and the double agents could use the revolt as cover. Now you have no reason to even hesitate; your threat is completely credible. The only question is whether you can actually pull it off, and if we committed the full resources of the United States military to preparing for this possibility, I see no reason to doubt that we could. If a US President can be assassinated by a lone maniac (and yes, that is actually what happened), then the world’s finest covert operations teams can assassinate whatever leader pushed that button.

This is a policy that works both unilaterally and multilaterally. We could even assemble an international coalition—perhaps make the UN “peacekeepers” put their money where their mouth is and train the finest special operatives in the history of the world tasked with actually keeping the peace.

Let’s not wait for someone else to save humanity from destruction. Let’s be the first.