I think I know what the Great Filter is now

Sep 3, JDN 2458000

One of the most plausible solutions to the Fermi Paradox of why we have not found any other intelligent life in the universe is called the Great Filter: Somewhere in the process of evolving from unicellular prokaryotes to becoming an interstellar civilization, there is some highly-probable event that breaks the process, a “filter” that screens out all but the luckiest species—or perhaps literally all of them.

I previously thought that this filter was the invention of nuclear weapons; I now realize that this theory is incomplete. Nuclear weapons by themselves are only an existential threat because they co-exist with widespread irrationality and bigotry. The Great Filter is the combination of the two.

Yet there is a deep reason why we would expect that this is precisely the combination that would emerge in most species (as it has certainly emerged in our own): The rationality of a species is not uniform. Some individuals in a species will always be more rational than others, so as a species increases its level of rationality, it does not do so all at once.

Indeed, the processes of economic development and scientific advancement that make a species more rational are unlikely to be spread evenly; some cultures will develop faster than others, and some individuals within a given culture will be further along than others. While the mean level of rationality increases, the variance will also tend to increase.

On some arbitrary and oversimplified scale where 1 is the level of rationality needed to maintain a hunter-gatherer tribe, and 20 is the level of rationality needed to invent nuclear weapons, the distribution of rationality in a population starts something like this:

[Figure Great_Filter_1: the distribution of rationality in a hunter-gatherer society]

Most of the population is between levels 1 and 3, which we might think of as lying between the bare minimum for a tribe to survive and the level at which one can start to make advances in knowledge and culture.

Then, as the society advances, it goes through a phase like this:

[Figure Great_Filter_2: the distribution of rationality around the time of Periclean Athens]

This is about where we were in Periclean Athens. Most of the population is between levels 2 and 8. Level 2 used to be the average level of rationality back when we were hunter-gatherers. Level 8 is the level of philosophers like Archimedes and Pythagoras.

Today, our society looks like this:
[Figure Great_Filter_3: the distribution of rationality today]

Most of the society is between levels 4 and 20. As I said, level 20 is the point at which it becomes feasible to develop nuclear weapons. Some of the world’s people are extremely intelligent and rational, and almost everyone is more rational than even the smartest people in hunter-gatherer times, but now there is enormous variation.

Where on this chart are racism and nationalism? Importantly, I think they are above the level of rationality that most people had in ancient times. Even Greek philosophers had attitudes toward slaves and other cultures that the modern KKK would find repulsive. I think on this scale racism is about a 10 and nationalism is about a 12.

If we had managed to uniformly increase the rationality of our society, with everyone gaining at the same rate, our distribution would instead look like this:
[Figure Great_Filter_4: the distribution of rationality under a uniform increase]

If that were the case, we’d be fine. The lowest level of rationality widespread in the population would be 14, which is already beyond racism and nationalism. (Maybe it’s about the level of humanities professors today? That makes them substantially below quantum physicists who are 20 by construction… but hey, still almost twice as good as the Greek philosophers they revere.) We would have our nuclear technology, but it would not endanger our future—we wouldn’t even use it for weapons, we’d use it for power generation and space travel. Indeed, this lower-variance high-rationality state seems to be about what they have in the Star Trek universe.

But since we didn’t, a large chunk of our population is between 10 and 12—that is, still racist or nationalist. We have the nuclear weapons, and we have people who might actually be willing to use them.

[Figure Great_Filter_5: today’s high-variance distribution, with nuclear weapons at 20 and a large share of the population still in the racism-to-nationalism range]
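To make the variance argument concrete, here is a minimal numerical sketch in Python. The 1-to-20 scale and the rough thresholds (racism around 10, nationalism around 12, humanities professors around 14, nuclear weapons at 20) come from this post; the normal distributions and the particular means and standard deviations are illustrative assumptions of mine, not fitted to any data.

```python
# A minimal sketch of the variance argument. The scale and thresholds are the
# post's; the distribution parameters below are illustrative guesses, chosen
# only to roughly match the shapes of the figures above.
from statistics import NormalDist

uniform_increase = NormalDist(mu=17, sigma=1.5)   # everyone gains at the same rate
high_variance    = NormalDist(mu=12, sigma=4.0)   # the mean rises, but so does the variance

for name, dist in [("uniform increase", uniform_increase),
                   ("high variance (today)", high_variance)]:
    below_14    = dist.cdf(14)                    # still short of cosmopolitan morality
    in_10_to_12 = dist.cdf(12) - dist.cdf(10)     # the racism-to-nationalism band
    print(f"{name}: {below_14:.0%} below level 14, {in_10_to_12:.0%} between 10 and 12")
```

With these illustrative numbers, the uniform-increase society has essentially no one left below level 14, while the high-variance society still has roughly a fifth of its population sitting in the racism-to-nationalism band, which is exactly the problem.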

I think this is what happens to most advanced civilizations around the galaxy. By the time they invent space travel, they have also invented nuclear weapons—but they still have their equivalent of racism and nationalism. And most of the time, the two combine into a volatile mix that results in the destruction or regression of their entire civilization.

If this is right, then we may be living at the most important moment in human history. It may be right here, right now, that we have the only chance we’ll ever get to turn the tide. We have to find a way to reduce the variance, to raise the rest of the world’s population past nationalism to a cosmopolitan morality. And we may have very little time.

Alien invasions: Could they happen, and could we survive?

July 30, JDN 2457600

[Image: alien-invasion]

It’s not actually the top-grossing film in the US right now (that would be The Secret Life of Pets), but Independence Day: Resurgence made a quite respectable gross of $343 million worldwide, giving it an ROI of 108% over its budget of $165 million. It speaks to something deep in our minds—and since most of the money came from outside the US, apparently not just Americans, though it is a deeply American film—about the fear, but perhaps also the excitement, of a possible alien invasion.

So, how likely are alien invasions anyway?

Well, first of all, how likely are aliens?

One of the great mysteries of astronomy is the Fermi Paradox: Everything we know about astronomy, biology, and probability tells us that there should be, somewhere out in the cosmos, a multitude of extraterrestrial species, and some of them should even be intelligent enough to form civilizations and invent technology. So why haven’t we found any clear evidence of any of them?

Indeed, the Fermi Paradox became even more baffling in just the last two years, as we found literally thousands of new extrasolar planets, many of them quite likely to be habitable. More extrasolar planets have been found since 2014 than in all previous years of human civilization. Perhaps this is less surprising when we remember that no extrasolar planets had ever been confirmed before 1992—but personally I think that just makes it this much more amazing that we are lucky enough to live in such a golden age of astronomy.

The Drake equation was supposed to tell us how probable it is that we should encounter an alien civilization, but the equation isn’t much use to us because so many of its terms are so wildly uncertain. Maybe we can pin down how many planets there are soon, but we still don’t know what proportion of planets can support life, what proportion of those actually have life, or above all what proportion of ecosystems ever manage to evolve a technological civilization or how long such a civilization is likely to last. All possibilities from “they’re everywhere but we just don’t notice or they actively hide from us” to “we are actually the only ones in the last million years” remain on the table.
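For concreteness, here is a minimal sketch of the Drake equation in Python. The form of the equation (N = R* · fp · ne · fl · fi · fc · L) is the standard one; the two sets of parameter values are deliberately extreme guesses I picked to illustrate the uncertainty, not serious estimates.

```python
# A minimal sketch of the Drake equation. The parameter values below are
# deliberately extreme, made-up guesses meant only to show how wildly the
# answer swings with terms we simply do not know.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * f_p * n_e * f_l * f_i * f_c * L: expected number of
    detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Pessimistic guesses: life, intelligence, and technology are all rare and short-lived.
pessimistic = drake(R_star=1, f_p=0.2, n_e=0.1, f_l=1e-3, f_i=1e-3, f_c=0.1, L=100)

# Optimistic guesses: habitable planets usually produce long-lived technological life.
optimistic = drake(R_star=3, f_p=1.0, n_e=0.4, f_l=1.0, f_i=0.5, f_c=0.5, L=1e6)

print(f"pessimistic: {pessimistic:.1e} civilizations")   # about 2e-07
print(f"optimistic:  {optimistic:.1e} civilizations")    # about 3e+05
```

Depending on which guesses you prefer, the expected number of civilizations in the galaxy ranges from far less than one to hundreds of thousands, which is another way of saying the equation cannot settle the question yet.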

But let’s suppose that aliens do exist, and indeed have technology sufficient to reach our solar system. Faster-than-light capability would certainly do it, but it isn’t strictly necessary; with long lifespans, cryonic hibernation, or relativistic propulsion aliens could reasonably expect to travel at least between nearby stars within their lifetimes. The Independence Day aliens appear to have FTL travel, but interestingly it makes the most sense if they do not have FTL communication—it took them 20 years to get the distress call because it was sent at lightspeed. (Or perhaps the ansible was damaged in the war, and they fell back to a lightspeed emergency system?) Otherwise I don’t quite get why it would take the Queen 20 years to deploy her personal battlecruiser after the expeditionary force she sent was destroyed—maybe she was just too busy elsewhere to bother with our backwater planet? What did she want from our planet again?

That brings me to my next point: Just what motivation would aliens have for attacking us? We often take it for granted that if aliens exist, and have the capability to attack us, they would do so. But that really doesn’t make much sense. Do they just enjoy bombarding primitive planets? I guess it’s possible they’re all sadistic psychopaths, but it seems like any civilization stable enough to invent interstellar travel has got to have some kind of ethical norms. Maybe they see us as savages or even animals, and are therefore willing to kill us—but that still means they need a reason.

Another idea, taken seriously in V and less so in Cowboys & Aliens, is that there is some sort of resource we have that they want, and they’re willing to kill us to get it. This is probably such a common trope because it has been a common part of human existence; we are very familiar with people killing other people in order to secure natural resources such as gold, spices, or oil. (Indeed, to some extent it continues to this day.)

But this actually doesn’t make a lot of sense on an interstellar scale. Certainly water (V) and gold (Cowboys & Aliens) are not things they would have even the slightest reason to try to claim from an inhabited planet, as comets are a better source of water and asteroids are a better source of gold. Indeed, almost nothing inorganic could really be cost-effective to obtain from an inhabited planet; far easier to just grab it from somewhere that won’t fight back, and may even have richer veins and lower gravity.

It’s possible they would want something organic—lumber or spices, I guess. But I’m not sure why they’d want those things, and it seems kind of baffling that they wouldn’t just trade if they really want them. I’m sure we’d gladly give up a great deal of oregano and white pine in exchange for nanotechnology and FTL. I guess I could see this happening because they assume we’re too stupid to be worth trading with, or they can’t establish reliable means of communication. But one of the reasons why globalization has succeeded where colonialism failed is that trade is a lot more efficient than theft, and I find it unlikely that aliens this advanced would have failed to learn that lesson.

Media that imagines they’d enslave us makes even less sense; slavery is wildly inefficient, and they probably have such ludicrously high productivity that they are already coping with a massive labor glut. (I suppose maybe they send off unemployed youths to go conquer random planets just to give them something to do with their time? Helps with overpopulation too.)

I actually thought Independence Day: Resurgence did a fairly good job of finding a resource that is scarce enough to be worth fighting over while also not being something we would willingly trade. Spoiler alert, I suppose:

Molten cores. Now, I haven’t the foggiest what one does with molten planet cores that somehow justifies the expenditure of all that energy flying between solar systems and digging halfway through planets with gigantic plasma drills, but hey, maybe they are actually tremendously useful somehow. They certainly do contain huge amounts of energy, provided you can extract it efficiently.

Moreover, they are scarce; of planets we know about, most of them do not have molten cores. Earth, Venus, and Mercury do, and we think Mars once did; but none of the gas giants do, and even if they did, it’s quite plausible that the Queen’s planet-cracker drill just can’t drill that far down. Venus sounds like a nightmare to drill, so really the only planet I’d expect them to extract before Earth would be Mercury.

And maybe they figured they needed both cores to justify the trip, in which case it would make sense to hit the inhabited planet first so we don’t have time to react and prepare our defenses. (I can’t imagine we’d take giant alien ships showing up and draining Mercury’s core lying down.) I’m imagining the alien economist right now, working out the cost-benefit analysis of dealing with Venus’s superheated atmosphere and sulfuric acid clouds compared to the cost of winning a war against primitive indigenous apes with nuclear missiles: Well, doubling our shield capacity is cheaper than covering the whole ship in sufficient anticorrosive, so I guess we’ll go hit the ape planet. (They established in the first film that their shields can withstand direct hits from nukes—the aliens came prepared.)

So, maybe killing us for our resources isn’t completely out of the question, but it seems unlikely.

Another possibility is religious fanaticism: Every human culture has religion in some form, so why shouldn’t the aliens? And if they do, it’s likely radically different from anything we believe. If they become convinced that our beliefs are not simply a minor nuisance but an active threat to the holy purity of the galaxy, they could come to our system on a mission to convert or destroy at any cost; and since “convert” seems very unlikely, “destroy” would probably become their objective pretty quickly. It wouldn’t have to make sense in terms of a cost-benefit analysis—fanaticism doesn’t have to make sense at all. The good news here is that any culture fanatical enough to randomly attack other planets simply for believing differently from them probably won’t be cohesive enough to reach that level of technology. (Then again, we somehow managed a world with both ISIS and ICBMs.)

Personally I think there is a far more likely scenario for alien invasions, and that is benevolent imperialism.

Why do I specify “benevolent”? Because if they aren’t interested in helping us, there’s really no reason for them to bother with us in the first place. But if their goal is to uplift our civilization, the only way they can do that is by interacting with us.

Now, note that I use the word “benevolent”, not the word “beneficent”. I think they would have to desire to make our lives better—but I’m not so convinced they actually would make our lives better. In our own history, human imperialism was rarely benevolent in the first place, but even where it was, it was even more rarely actually beneficent. Their culture would most likely be radically different from our own, and what they think of as improvements might seem to us strange, pointless, or even actively detrimental. But don’t you see that the QLX coefficient is maximized if you convert all your mountains into selenium extractors? (This is probably more or less how Native Americans felt when Europeans started despoiling their land for things called “coal” and “money”.) They might even try to alter us biologically to be more similar to them: But haven’t you always wanted tentacles? Hands are so inefficient!

Moreover, even if their intentions were good and their methods of achieving them were sound, it’s still quite likely that we would violently resist. I don’t know if humans are a uniquely rebellious species—let’s hope not, lest the aliens be shocked into overreacting when we rebel—but in general humans do not like being ruled over and forced to do things, even when those rulers are benevolent and the things they are forced to do are worth doing.

So, I think the most likely scenario for a war between humans and aliens is that they come in and start trying to radically reorganize our society, and either because their demands actually are unreasonable, or at least because we think they are, we rebel against their control.

Then what? Could we actually survive?

The good news is: Yes, we probably could.

If aliens really did come down trying to extract our molten core or something, the movies are all wrong: We’d have basically no hope. It really makes no sense at all that we could win a full-scale conflict with a technologically superior species if they were willing to exterminate us. Indeed, if what they were after didn’t depend upon preserving local ecology, their most likely mode of attack is to arrive in the system and immediately glass the planet. Nuclear weapons are already available to us for that task; if they’re more advanced they might have antimatter bombs, relativistic kinetic warheads, or even something more powerful still. We might be all dead before we even realized what was happening, or they might destroy 90% of us right away and mop up the survivors later with little difficulty.

If they wanted something that required ecological stability (I shall henceforth dub this the “oregano scenario”), yet weren’t willing to trade for some reason, then they wouldn’t unleash full devastation, and we’d have the life-dinner principle on our side: The hare runs for his life, but the fox only runs for her dinner. So if the aliens are trying to destroy us to get our delicious spices, we have a certain advantage from the fact that we are willing to win at essentially any cost, while at some point that alien economist is going to run the numbers and say, “This isn’t cost-effective. Let’s cut our losses and hit another system instead.”

If they wanted to convert us to their religion, well, we’d better hope enough people convert, because otherwise they’re going to resort to, you guessed it, glassing the planet. This at least means they would probably try to communicate first, so we’d have some time to prepare; but it’s unlikely that even if their missionaries spent decades trying to convert us we could seriously reduce our disadvantage in military technology during that time. So really, our best bet is to adopt the alien religion. I guess what I’m really trying to say here is “All Hail Xemu.”

But in the most likely scenario that their goal is actually to make our lives better, or at least better as they see it, they will not be willing to utilize their full military capability against us. They might use some lethal force, especially if they haven’t found reliable means of nonlethal force on sufficient scale; but they aren’t going to try to slaughter us outright. Maybe they kill a few dissenters to set an example, or fire into a crowd to disperse a riot. But they are unlikely to level a city, and they certainly wouldn’t glass the entire planet.

Our best bet would probably actually be nonviolent resistance, as this has a much better track record against benevolent imperialism. Gandhi probably couldn’t have won a war against Britain, but he achieved India’s independence because he was smart enough to fight on the front of public opinion. Likewise, even with one tentacle tied behind their backs by their benevolence, the aliens would still probably be able to win any full-scale direct conflict; but if our nonviolent resistance grew strong enough, they might finally take the hint and realize we don’t want their so-called “help”.

So, how about someone makes that movie? Aliens come to our planet, not to kill us, but to change us, make us “better” according to their standards. QLX coefficients are maximized, and an intrepid few even get their tentacles installed. But the Resistance arises, and splits into two factions: One tries to use violence, and is rapidly crushed by overwhelming firepower, while the other uses nonviolent resistance. Ultimately the Resistance grows strong enough to overthrow the alien provisional government, and they decide to cut their losses and leave our planet. Then, decades later, we go back to normal, and wonder if we made the right decision, or if maybe QLX coefficients really were the most important thing after all.

[The image is released under a CC0 license from Pixabay.]

The real Existential Risk we should be concerned about

Mar 10, JDN 2457458

There is a rather large subgroup within the rationalist community (loosely defined because organizing freethinkers is like herding cats) that focuses on existential risks, also called global catastrophic risks. Prominent examples include Nick Bostrom and Eliezer Yudkowsky.

Their stated goal in life is to save humanity from destruction. And when you put it that way, it sounds pretty darn important. How can you disagree with wanting to save humanity from destruction?

Well, there are actually people who do (the Voluntary Human Extinction movement), but they are profoundly silly. It should be obvious to anyone with even a basic moral compass that saving humanity from destruction is a good thing.

It’s not the goal of fighting existential risk that bothers me. It’s the approach. Specifically, they almost all seem to focus on exotic existential risks, vivid and compelling existential risks that are the stuff of great science fiction stories. In particular, they have a rather odd obsession with AI.

Maybe it’s the overlap with Singularitarians, and their inability to understand that exponentials are not arbitrarily fast; if you just keep projecting the growth in computing power as growing forever, surely eventually we’ll have a computer powerful enough to solve all the world’s problems, right? Well, yeah, I guess… if we can actually maintain the progress that long, which we almost certainly can’t, and if the problems turn out to be computationally tractable at all (the fastest possible computer that could fit inside the observable universe could not brute-force solve the game of Go, though a heuristic AI did just beat one of the world’s best players), and/or if we find really good heuristic methods of narrowing down the solution space… but that’s an awful lot of “if”s.
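To see why brute force is hopeless no matter how long Moore’s Law holds, here is a back-of-the-envelope sketch. Both numbers are commonly cited order-of-magnitude estimates rather than exact values: roughly 2×10^170 legal positions on a 19×19 Go board, and roughly 10^120 elementary operations as a physical upper bound on all the computation the observable universe could ever have performed.

```python
# A back-of-the-envelope sketch of why brute-forcing Go is physically hopeless.
# Both constants are rough, commonly cited order-of-magnitude estimates.
import math

LEGAL_GO_POSITIONS = 2.1e170   # legal positions on a 19x19 board (rough estimate)
UNIVERSE_OPS_BOUND = 1e120     # rough bound on all operations the observable universe could perform

orders_short = math.log10(LEGAL_GO_POSITIONS / UNIVERSE_OPS_BOUND)
print(f"Visiting each position just once overshoots the physical bound by ~10^{orders_short:.0f}")
```

Even granting a single operation per position, exhaustive search comes up short by around fifty orders of magnitude; no realistic exponential growth in hardware closes that kind of gap, which is why heuristics had to do the work.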

But AI isn’t what we need to worry about in terms of saving humanity from destruction. Nor is it asteroid impacts; NASA has been doing a good job watching for asteroids lately, and estimates the current risk of a serious impact (by which I mean something like a city-destroyer or global climate shock, not even a global killer) at around 1/10,000 per year. Alien invasion is right out; we can’t even find clear evidence of bacteria on Mars, and the skies are so empty of voices it has been called a paradox. Gamma ray bursts could kill us, and we aren’t sure about the probability of that (we think it’s small?), but much like brain aneurysms, there really isn’t a whole lot we can do to prevent them.

There is one thing that really could destroy humanity and that we need to worry about, and one other that could potentially get close over a much longer timescale. The long-range threat is ecological collapse; as global climate change gets worse and the oceans become more acidic and the aquifers are drained, we could eventually reach the point where humanity cannot survive on Earth, or at least where our population collapses so severely that civilization as we know it is destroyed. This might not seem like such a threat, since we would see it coming decades or centuries in advance—but we are seeing it coming decades or centuries in advance, and yet we can’t seem to get the world’s policymakers to wake up and do something about it. So that’s clearly the second-most important existential risk.

But the most important existential risk, by far, no question, is nuclear weapons.

Nuclear weapons are the only foreseeable, preventable means by which humanity could be destroyed in the next twenty minutes.

Yes, that is approximately the time it takes an ICBM to hit its target after launch. There are roughly 4,000 nuclear warheads currently deployed, mostly by the US and Russia, many of them on ICBMs and submarine-launched missiles that can be fired within minutes. Once we include reserve and stockpiled warheads, the total global arsenal is over 15,000. I apologize for terrifying you by saying that these weapons could be launched at a moment’s notice to wipe out most of human civilization within half an hour, followed by a global ecological collapse and fallout that would endanger the future of the entire human race—but it’s the truth. If you’re not terrified, you’re not paying attention.

I’ve intentionally linked the Union of Concerned Scientists as one of those sources. Now they are people who understand existential risk. They don’t talk about AI and asteroids and aliens (how alliterative). They talk about climate change and nuclear weapons.

We must stop this. We must get rid of these weapons. Next to that, literally nothing else matters.

“What if we’re conquered by tyrants?” It won’t matter. “What if there is a genocide?” It won’t matter. “What if there is a global economic collapse?” None of these things will matter, if the human race wipes itself out with nuclear weapons.

To speak like an economist for a moment, the utility of a global nuclear war must be set at negative infinity. Any detectable reduction in the probability of that event must be considered worth paying any cost to achieve. I don’t care if it costs $20 trillion and results in us being taken over by genocidal fascists—we are talking about the destruction of humanity. We can spend $20 trillion (actually the US as a whole does every 14 months!). We can survive genocidal fascists. We cannot survive nuclear war.
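To put that in explicit expected-utility terms (a toy formalization of my own, not a result from the literature): suppose a policy costs C and reduces the probability of global nuclear war by Δp. Then the change in expected utility is

$$\Delta \mathbb{E}[U] \;=\; \Delta p \cdot \lvert U_{\text{war}} \rvert \;-\; C.$$

If we set the utility of global nuclear war to negative infinity, this is positive for every Δp > 0 and every finite C: any policy that detectably reduces the probability of nuclear war is worth any finite price, which is exactly the claim above.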

The good news is, we shouldn’t actually have to pay that sort of cost. All we have to do is dismantle our nuclear arsenal, and get other countries—particularly Russia—to dismantle theirs. In the long run, we will increase our wealth as our efforts are no longer wasted maintaining doomsday machines.

The main challenge is actually a matter of game theory. The surprisingly sophisticated 1990s cartoon Animaniacs basically got it right when they sang: “We’d beat our swords into liverwurst / Down by the East Riverside / But no one wants to be the first!”

The thinking, anyway, is that this is basically a Prisoner’s Dilemma. If the US disarms and Russia doesn’t, Russia can destroy the US. Conversely, if Russia disarms and the US doesn’t, the US can destroy Russia. If neither disarms, we’re left where we are. Whether or not the other country disarms, you’re always better off not disarming. So neither country disarms.

But I contend that it is not, in fact, a Prisoner’s Dilemma. It could be a Stag Hunt; if that’s the case, then only multilateral disarmament makes sense, because the best outcome is if we both disarm, but the worst outcome is if we disarm and they don’t. Once we expect them to disarm, we have no temptation to renege on the deal ourselves; but if we think there’s a good chance they won’t, we might not want to either. Stag Hunts have two stable Nash equilibria; one is where both arm, the other where both disarm.

But in fact, I think it may be simply the trivial game.

There aren’t actually that many possible symmetric two-player nonzero-sum games (basically it’s a question of ordering 4 possibilities, and it’s symmetric, so 12 possible games), and one that we never talk about (because it’s sort of boring) is the trivial game: If I do the right thing and you do the right thing, we’re both better off. If you do the wrong thing and I do the right thing, I’m better off. If we both do the wrong thing, we’re both worse off. So, obviously, we both do the right thing, because we’d be idiots not to. Formally, we say that cooperation is a strictly dominant strategy. There’s no dilemma, no paradox; the self-interested strategy is the optimal strategy. (I find it kind of amusing that laissez-faire economics basically amounts to assuming that all real-world games are the trivial game.)
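To make those three cases concrete, here is a minimal sketch in Python that enumerates the pure-strategy Nash equilibria of a symmetric 2x2 disarmament game. The payoff numbers are ordinal values I made up to match the orderings described above for the Prisoner’s Dilemma, the Stag Hunt, and the trivial game; they are not estimates of anything real.

```python
# A minimal sketch: pure-strategy Nash equilibria of three symmetric 2x2
# disarmament games. Payoffs are made-up ordinal values matching the
# orderings described in the post, not empirical estimates.
from itertools import product

ACTIONS = ["disarm", "arm"]

def pure_nash_equilibria(payoff):
    """payoff[a][b] is a player's payoff for choosing action a when the
    opponent chooses b; by symmetry the opponent's payoff at (a, b) is
    payoff[b][a]. Returns all pure-strategy Nash equilibria."""
    equilibria = []
    for a, b in product(range(2), repeat=2):
        row_best = payoff[a][b] >= max(payoff[x][b] for x in range(2))
        col_best = payoff[b][a] >= max(payoff[y][a] for y in range(2))
        if row_best and col_best:
            equilibria.append((ACTIONS[a], ACTIONS[b]))
    return equilibria

games = {
    # Prisoner's Dilemma ordering: arming strictly dominates, so (arm, arm) is the only equilibrium.
    "Prisoner's Dilemma": [[3, 0],
                           [5, 1]],
    # Stag Hunt ordering: two equilibria, mutual disarmament and mutual armament.
    "Stag Hunt":          [[4, 0],
                           [3, 2]],
    # "Trivial" game: disarming strictly dominates, so (disarm, disarm) is the only equilibrium.
    "Trivial game":       [[4, 2],
                           [3, 1]],
}

for name, payoff in games.items():
    print(f"{name}: {pure_nash_equilibria(payoff)}")
```

Running it confirms the claims above: the Prisoner’s Dilemma has mutual armament as its only equilibrium, the Stag Hunt has two (both arm, both disarm), and in the trivial game disarming is strictly dominant, so mutual disarmament is the only equilibrium.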

That is, I don’t think the US would actually benefit from nuking Russia, even if we could do so without retaliation. Likewise, I don’t think Russia would actually benefit from nuking the US. One of the things we’ve discovered—the hardest way possible—through human history is that working together is often better for everyone than fighting. Russia could nuke NATO, and thereby destroy all of their largest trading partners, or they could continue trading with us. Even if they are despicable psychopaths who think nothing of committing mass murder (Putin might be, but surely there are people under his command who aren’t?), it’s simply not in Russia’s best interest to nuke the US and Europe. Likewise, it is not in our best interest to nuke them.

Nuclear war is a strange game: The only winning move is not to play.

So I say, let’s stop playing. Yes, let’s unilaterally disarm, the thing that so many policy analysts are terrified of because they’re so convinced we’re in a Prisoner’s Dilemma or a Stag Hunt. “What’s to stop them from destroying us, if we make it impossible for us to destroy them!?” I dunno, maybe basic human decency, or failing that, rationality?

Several other countries have already done this—South Africa unilaterally disarmed, and nobody nuked them. Japan refused to build nuclear weapons in the first place—and I think it says something that they are the only nation ever to have had nuclear weapons used against them.

Our conventional military is plenty large enough to defend us against all realistic threats, and could even be repurposed to defend against nuclear threats as well, by a method I call credible targeted conventional response. Instead of building ever-larger nuclear arsenals to threaten devastation in the world’s most terrifying penis-measuring contest, you deploy covert operatives (perhaps Navy SEALs in submarines, or double agents, or these days even stealth drones) around the world, with the standing order that if they have reason to believe a country initiated a nuclear attack, they will stop at nothing to hunt down and kill the specific people responsible for that attack. Not the country they came from; not the city they live in; those specific people.

If a leader is enough of a psychopath to be willing to kill 300 million people in another country, he’s probably enough of a psychopath to be willing to lose 150 million people in his own country. He likely has a secret underground bunker that would allow him to survive, at least if humanity as a whole does. So you should be threatening the one thing he does care about—himself. You make sure he knows that if he pushes that button, you’ll find that bunker, drop in from helicopters, and shoot him in the face.

The “targeted conventional response” should be clear by now—you use non-nuclear means to respond, and you target the particular leaders responsible—but let me say a bit more about the “credible” part. The threat of mutually-assured destruction is actually not a credible one. It’s not what we call in game theory a subgame perfect Nash equilibrium. If you know that Russia has launched 1,500 nuclear warheads to destroy every city in America, you actually have no reason at all to retaliate with your own 1,500 warheads, and the most important reason imaginable not to. Your people are dead either way; you can’t save them. You lose. The only question now is whether you risk taking the rest of humanity down with you. If you have even the most basic human decency, you will not push that button. You will not “retaliate” in useless vengeance that could wipe out human civilization. Thus, your threat is a bluff—it is not credible.

But if your response is targeted and conventional, it suddenly becomes credible. It’s exactly reversed; you now have every reason to retaliate, and no reason not to. Your covert operation teams aren’t being asked to destroy humanity; they’re being tasked with finding and executing the greatest mass murderer in history. They don’t have some horrific moral dilemma to resolve; they have the opportunity to become the world’s greatest heroes. Indeed, they’d very likely have the whole world (or what’s left of it) on their side; even the population of the attacking country would rise up in revolt and the double agents could use the revolt as cover. Now you have no reason to even hesitate; your threat is completely credible. The only question is whether you can actually pull it off, and if we committed the full resources of the United States military to preparing for this possibility, I see no reason to doubt that we could. If a US President can be assassinated by a lone maniac (and yes, that is actually what happened), then the world’s finest covert operations teams can assassinate whatever leader pushed that button.
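Here is the credibility argument as a minimal two-stage sketch solved by backward induction. All the payoff numbers are illustrative stand-ins I chose to encode the assumptions above (being nuked is catastrophic, destroying the rest of humanity on top of that is even worse, and punishing the leaders responsible is a modest gain); nothing about them is calibrated to reality.

```python
# A minimal backward-induction sketch of why MAD is not credible but a
# targeted conventional response is. Payoffs are illustrative stand-ins only.

def best_response(options):
    """Return the stage-two action with the highest payoff for the defender."""
    return max(options, key=options.get)

# Stage 2: the defender has already been hit. What is its best response?
mad_doctrine = {
    "retaliate massively": -1000,   # your people are dead AND you destroy the rest of humanity
    "do not retaliate":     -900,   # your people are dead, but humanity survives
}
targeted_conventional = {
    "hunt down the leaders": -890,  # your people are dead, but the attackers' leadership is punished
    "do not retaliate":      -900,  # your people are dead, and the leadership gets away with it
}

print("Best response under MAD:                  ", best_response(mad_doctrine))
print("Best response under targeted conventional:", best_response(targeted_conventional))

# Stage 1: a would-be attacker reasons backward from those best responses.
# Under MAD the rational post-attack move is NOT to retaliate, so the threat
# is a bluff (not subgame perfect); under the targeted conventional doctrine
# retaliation IS the best response, so the threat is credible.
```

Under MAD the stage-two best response is not to retaliate, so a rational attacker has no reason to believe the threat; under a targeted conventional response retaliation is the stage-two best response, so the threat survives backward induction, which is what subgame perfection means here.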

This is a policy that works both unilaterally and multilaterally. We could even assemble an international coalition—perhaps make the UN “peacekeepers” put their money where their mouth is and train the finest special operatives in the history of the world tasked with actually keeping the peace.

Let’s not wait for someone else to save humanity from destruction. Let’s be the first.