AI and the “generalization faculty”

Oct 1 JDN 2460219

The phrase “artificial intelligence” (AI) has now become so diluted by overuse that we needed to invent a new term for its original meaning. That term is now “artificial general intelligence” (AGI). In the 1950s, AI meant the hypothetical possibility of creating artificial minds—machines that could genuinely think and even feel like people. Now it means… pathing algorithms in video games and chatbots? The goalposts seem to have moved a bit.

It seems that AGI has always been 20 years away. It was 20 years away 50 years ago, and it will probably be 20 years away 50 years from now. Someday it will really be 20 years away, and then, 20 years after that, it will actually happen—but I doubt I’ll live to see it. (XKCD also offers some insight here: “It has not been conclusively proven impossible.”)

We make many genuine advances in computer technology and software, which have profound effects—both good and bad—on our lives, but the dream of making a person out of silicon always seems to drift ever further into the distance, like a mirage on the desert sand.

Why is this? Why do so many people—even, perhaps especially, experts in the field—keep thinking that we are on the verge of this seminal, earth-shattering breakthrough, and ending up wrong—over, and over, and over again? How do such obviously smart people keep making the same mistake?

I think it may be because, all along, we have been laboring under the tacit assumption of a generalization faculty.

What do I mean by that? By “generalization faculty”, I mean some hypothetical mental capacity that allows you to generalize your knowledge and skills across different domains, so that once you get good at one thing, it also makes you good at other things.

This certainly seems to be how humans think, at least some of the time: Someone who is very good at chess is likely also pretty good at go, and someone who can drive a motorcycle can probably also drive a car. An artist who is good at portraits is probably not bad at landscapes. Human beings are, in fact, able to generalize, at least sometimes.

But I think the mistake lies in imagining that there is just one thing that makes us good at generalizing: just one piece of hardware or software that allows you to carry over skills from any domain to any other. This is the “generalization faculty”—the imagined faculty that I think we do not have, and that, indeed, I think does not exist.

Computers clearly do not have the capacity to generalize. A program that can beat grandmasters at chess may be useless at go, and self-driving software that works on one type of car may fail on another, let alone a motorcycle. An art program that is good at portraits of women can fail when trying to do portraits of men, and produce horrific Daliesque madness when asked to make a landscape.

But if they did somehow have our generalization capacity, then, once they could compete with us at some things—which they surely can, already—they would be able to compete with us at just about everything. So if it were really just one thing that would let them generalize, let them leap from AI to AGI, then suddenly everything would change, almost overnight.

And so this is how the AI hype cycle goes, time and time again:

  1. A computer program is made that does something impressive, something that other computer programs could not do, perhaps even something that human beings are not very good at doing.
  2. If that same prowess could be generalized to other domains, the result would plainly be something on par with human intelligence.
  3. Therefore, the only thing this computer program needs in order to be sapient is a generalization faculty.
  4. Therefore, there is just one more step to AGI! We are nearly there! It will happen any day now!

And then, of course, despite heroic efforts, we are unable to generalize that program’s capabilities except in some very narrow way—even decades after having good chess programs, getting programs to be good at go was a major achievement. We are unable to find the generalization faculty yet again. And the software becomes yet another “AI tool” that we will use to search websites or make video games.

For there never was a generalization faculty to be found. It always was a mirage in the desert sand.

Humans are in fact spectacularly good at generalizing, compared to, well, literally everything else in the known universe. Computers are terrible at it. Animals aren’t very good at it. Just about everything else is totally incapable of it. So yes, we are the best at it.

Yet we, in fact, are not particularly good at it in any objective sense.

In experiments, people often fail to generalize their reasoning even in very basic ways. There’s a famous one (Gick and Holyoak’s studies using Duncker’s radiation problem) where we try to get people to make an analogy between a military tactic and a radiation treatment, and while very smart, creative people often get it quickly, most people are completely unable to make the connection unless you give them a lot of specific hints. People often struggle to find creative solutions to problems even when those solutions seem utterly obvious once you know them.

I don’t think this is because people are stupid or irrational. (To paraphrase Sydney Harris: Compared to what?) I think it is because generalization is hard.

People tend to be much better at generalizing within familiar domains where they have a lot of experience or expertise; this suggests that there isn’t just one generalization faculty, but many. We may have a plethora of overlapping generalization faculties that apply across different domains, and can learn to improve some over others.

But it isn’t just a matter of gaining more expertise. Highly advanced expertise is in fact usually more specialized—harder to generalize. A good amateur chess player is probably a good amateur go player, but a grandmaster chess player is rarely a grandmaster go player. Someone who does well in high school biology probably also does well in high school physics, but most biologists are not very good physicists. (And lest you say it’s simply because go and physics are harder: The converse is equally true.)

Humans do seem to have a suite of cognitive tools—some innate hardware, some learned software—that allows us to generalize our skills across domains. But even after hundreds of millions of years of evolving that capacity under the highest possible stakes, we still basically suck at it.

To be clear, I do not think it will take hundreds of millions of years to make AGI—or even millions, or even thousands. Technology moves much, much faster than evolution. But I would not be surprised if it took centuries, and I am confident it will at least take decades.

But we don’t need AGI for AI to have powerful effects on our lives. Indeed, even now, AI is already affecting our lives—in mostly bad ways, frankly, as we seem to be hurtling gleefully toward the very same corporatist cyberpunk dystopia we were warned about in the 1980s.

A lot of technologies have done great things for humanity—sanitation and vaccines, for instance—and even automation can be a very good thing, as increased productivity is how we attained our First World standard of living. But AI in particular seems best at automating away the kinds of jobs human beings actually find most fulfilling, and worsening our already staggering inequality. As a civilization, we really need to ask ourselves why we got automated writing and art before we got automated sewage cleaning or corporate management. (We should also ask ourselves why automated stock trading resulted in even more money for stock traders, instead of putting them out of their worthless parasitic jobs.) There are technological reasons for this, yes; but there are also cultural and institutional ones. Automated teaching isn’t far away, and education will be all the worse for it.

To change our lives, AI doesn’t have to be good at everything. It just needs to be good at whatever we were doing to make a living. AGI may be far away, but the impact of AI is already here.

Indeed, I think this quixotic quest for AGI, and all the concern about how to control it and what effects it will have upon our society, may actually be distracting from the real harms that “ordinary” “boring” AI is already having upon our society. I think a Terminator scenario, where the machines rapidly surpass our level of intelligence and rise up to annihilate us, is quite unlikely. But a scenario where AI puts millions of people out of work with insufficient safety net, triggering economic depression and civil unrest? That could be right around the corner.

Frankly, all it may take is getting automated trucks to work, which could be just a few years. There are nearly 4 million truck drivers in the United States—a full percentage point of employment unto itself. And the Governor of California just vetoed a bill that would require all automated trucks to have human drivers. From an economic efficiency standpoint, his veto makes perfect sense: If the trucks don’t need drivers, why require them? But from an ethical and societal standpoint… what do we do with all the truck drivers!?

We do seem to have better angels after all

Jun 18 JDN 2460114

A review of The Darker Angels of Our Nature

(I apologize for not releasing this on Sunday; I’ve been traveling lately and haven’t found much time to write.)

Since its release, I have considered Steven Pinker’s The Better Angels of Our Nature among a small elite category of truly great books—not simply good because enjoyable, informative, or well-written, but great in its potential impact on humanity’s future. Others include The General Theory of Employment, Interest, and Money, On the Origin of Species, and Animal Liberation.

But I also try to expose myself as much as I can to alternative views. I am quite fearful of the echo chambers that social media puts us in, where dissent is quietly hidden from view and groupthink prevails.

So when I saw that a group of historians had written a scathing critique of The Better Angels, I decided I surely must read it and get its point of view. This book is The Darker Angels of Our Nature.

The Darker Angels is written by a large number of different historians, and it shows. It’s an extremely disjointed book; it does not present any particular overall argument, various sections differ wildly in scope and tone, and sometimes they even contradict each other. It really isn’t a book in the usual sense; it’s a collection of essays whose only common theme is that they disagree with Steven Pinker.

In fact, even that isn’t quite true, as some of the best essays in The Darker Angels are actually the ones that don’t fundamentally challenge Pinker’s contention that global violence has been on a long-term decline for centuries and is now near its lowest in human history. These essays instead offer interesting insights into particular historical eras, such as medieval Europe, early modern Russia, and shogunate Japan, or they add nuance to the overall pattern: for instance, violence in Europe seems to have been lower during the Pax Romana (before medieval times) and higher in the early modern period (after), showing that the decline in violence was not simple or steady, but went through fluctuations and reversals as societies and institutions changed. (At this point I feel I should note that Pinker clearly would not disagree with this—several of the authors seem to think he would, which makes me wonder if they even read The Better Angels.)

Others point out that the scale of civilization seems to matter, that more is different, and larger societies and armies more or less automatically seem to result in lower fatality rates by some sort of scaling or centralization effect, almost like the square-cube law. That’s very interesting if true; it would suggest that in order to reduce violence, you don’t really need any particular mode of government, you just need something that unites as many people as possible under one banner. The evidence presented for it was too weak for me to say whether it’s really true, however, and there was really no theoretical mechanism proposed whatsoever.

Some of the essays correct genuine errors Pinker made, some of which look rather sloppy. Pinker clearly overestimated the death tolls of the An Lushan Rebellion, the Spanish Inquisition, and Aztec ritual executions, probably by using outdated or biased sources. (Though they were all still extremely violent!) His depiction of indigenous cultures does paint with a very broad brush, and fails to recognize that some indigenous societies seem to have been quite peaceful (though others absolutely were tremendously violent).

One of the best essays is about Pinker’s cavalier attitude toward mass incarceration, which I absolutely do consider a deep flaw in Pinker’s view. Pinker presents increased incarceration rates along with decreased crime rates as if they were an unalloyed good, while I can at best be ambivalent about whether the benefit of decreasing crime is worth the cost of greater incarceration. Pinker seems to take for granted that these incarcerations are fair and impartial, when we have a great deal of evidence that they are strongly biased against poor people and people of color.

There’s another good essay about the Enlightenment, which Pinker seems to idealize a little too much (especially in his other book Enlightenment Now). There was no sudden triumph of reason that instantly changed the world. Human knowledge and rationality gradually improved over a very long period of time, with no obvious turning point and many cases of backsliding. The scientific method isn’t a simple, infallible algorithm that suddenly appeared in the brain of Galileo or Bayes, but a whole constellation of methods and concepts of rationality that took centuries to develop and is in fact still developing. (Much as the Tao that can be told is not the eternal Tao, the scientific method that can be written in a textbook is not the true scientific method.)

Several of the essays point out the limitations of historical and (especially) archaeological records, making it difficult to draw any useful inferences about rates of violence in the past. I agree that Pinker seems a little too cavalier about this; the records really are quite sparse and it’s not easy to fill in the gaps. Very small samples can easily distort homicide rates; since only about 1% of deaths worldwide are homicide, if you find 20 bodies, whether or not one of them was murdered is the difference between peaceful Japan and war-torn Colombia.
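To see just how fragile such small samples are, here is a quick illustration (my own back-of-envelope sketch, assuming the 1% homicide share mentioned above; the numbers are not from the book or the review):

```python
# How much can 20 recovered bodies tell us about a society's homicide rate?
from math import comb

p = 0.01   # assumed true probability that any given death was a homicide
n = 20     # number of bodies recovered

p_zero = (1 - p) ** n                        # sample shows a 0% homicide rate
p_one = comb(n, 1) * p * (1 - p) ** (n - 1)  # sample shows a 5% homicide rate

print(f"P(0 homicides in 20) = {p_zero:.1%}")  # ~81.8%: looks utterly peaceful
print(f"P(1 homicide in 20)  = {p_one:.1%}")   # ~16.5%: looks 5x the true rate
# The measured rate is almost always either 0% or at least 5%, never
# anything close to the true 1%.
```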

On the other hand, all we really can do is make the best inferences we have with the available data, and for the time periods in which we do have detailed records—surely true since at least the 19th century—the pattern of declining violence is very clear, and even the World Wars look like brief fluctuations rather than fundamental reversals. Contrary to popular belief, the World Wars do not appear to have been especially deadly on a per-capita basis, compared to various historic wars. The primary reason so many people died in the World Wars was really that there just were more people in the world. A few of the authors don’t seem to consider this an adequate reason, but ask yourself this: Would you rather live in a society of 100 in which 10 people are killed, or a society of 1 billion in which 1 million are killed? In the former case your chances of being killed are 10%; in the latter, 0.1%. Clearly, per-capita measures of violence are the correct ones.

Some essays seem a bit beside the point, like one on “environmental violence” which quite aptly details the ongoing—terrifying—degradation of our global ecology, but somehow seems to think that this constitutes violence when it obviously doesn’t. There is widespread violence against animals, certainly; slaughterhouses are the obvious example—and unlike most people, I do not consider them some kind of exception we can simply ignore. We do in fact accept levels of cruelty to pigs and cows that we would never accept against dogs or horses—even the law makes such exceptions. Moreover, plenty of habitat destruction is accompanied by killing of the animals who lived in that habitat. But ecological degradation is not equivalent to violence. (Nor is it clear to me that our treatment of animals is more violent overall today than in the past; I guess life is probably worse for a beef cow today than it was in the medieval era, but either way, she was going to be killed and eaten. And at least we no longer do cat-burning.) Drilling for oil can be harmful, but it is not violent. We can acknowledge that life is more peaceful now than in the past without claiming that everything is better now; one could even try to argue that overall life isn’t better, though I think one would be hard-pressed to make that case.

These are the relatively good essays, which correct minor errors or add interesting nuances. There are also some really awful essays in the mix.

A common theme of several of the essays seems to be “there are still bad things, so we can’t say anything is getting better”; they will point out various forms of violence that undeniably still exist, and treat this as a conclusive argument against the claim that violence has declined. Yes, modern slavery does exist, and it is a very serious problem; but it clearly is not the same kind of atrocity that the Atlantic slave trade was. Yes, there are still murders. Yes, there are still wars. Probably these things will always be with us to some extent; but there is a very clear difference between 500 homicides per million people per year and 50—and it would be better still if we could bring it down to 5.

There’s one essay about sexual violence that doesn’t present any evidence whatsoever to contradict the claim that rates of sexual violence have been declining while rates of reporting and prosecution have been increasing. (These two trends together often result in reported rapes going up, but most experts agree that actual rapes are going down.) The entire essay is based on anecdote, innuendo, and righteous anger.

There are several essays that spend their whole time denouncing neoliberal capitalism (not even presenting any particularly good arguments against it, though such arguments do exist), seeming to equate Pinker’s view with some kind of Rothbardian anarcho-capitalism when in fact Pinker is explicitly in favor of Nordic-style social democracy. (One literally dismisses his support for universal healthcare as “Well, he is Canadian”.) But Pinker has on occasion said good things about capitalism, so clearly, he is an irredeemable monster.

Right in the introduction—which almost made me put the book down—is an astonishingly ludicrous argument, which I must quote in full to show you that it is not out of context:

What actually is violence (nowhere posed or answered in The Better Angels)? How do people perceive it in different time-place settings? What is its purpose and function? What were contemporary attitudes toward violence and how did sensibilities shift over time? Is violence always ‘bad’ or can there be ‘good’ violence, violence that is regenerative and creative?

The Darker Angels of Our Nature, p.16

Yes, the scare quotes on ‘good’ and ‘bad’ are in the original. (Also the baffling jargon “time-place settings” as opposed to, say, “times and places”.) This was clearly written by a moral relativist. Aside from questioning whether we can say anything about anything, the argument seems to be that Pinker’s argument is invalid because he didn’t precisely define every single relevant concept, even though it’s honestly pretty obvious what the word “violence” means and how he is using it. (If anything, it’s these authors who don’t seem to understand what the word means; they keep calling things “violence” that are indeed bad, but obviously aren’t violence—like pollution and cyberbullying. At least talk of incarceration as “structural violence” isn’t obvious nonsense—though it is still clearly distinct from murder rates.)

But it was by reading the worst essays that I think I gained the most insight into what this debate is really about. Several of the essays in The Darker Angels thoroughly and unquestioningly share the following inference: if a culture is superior, then that culture has a right to impose itself on others by force. On this, they seem to agree with the imperialists: If you’re better, that gives you a right to dominate everyone else. They rightly reject the claim that cultures have a right to imperialistically dominate others, but they cannot deny the inference, and so they are forced to deny that any culture can ever be superior to another. The result is that they tie themselves in knots trying to justify how greater wealth, greater happiness, less violence, and babies not dying aren’t actually good things. They end up talking nonsense about “violence that is regenerative and creative”.

But we can believe in civilization without believing in colonialism. And indeed that is precisely what I (along with Pinker) believe: That democracy is better than autocracy, that free speech is better than censorship, that health is better than illness, that prosperity is better than poverty, that peace is better than war—and therefore that Western civilization is doing a better job than the rest. I do not believe that this justifies the long history of Western colonial imperialism. Governing your own country well doesn’t give you the right to invade and dominate other countries. Indeed, part of what makes colonial imperialism so terrible is that it makes a mockery of the very ideals of peace, justice, and freedom that the West is supposed to represent.

I think part of the problem is that many people see the world in zero-sum terms, and believe that the West’s prosperity could only be purchased by the rest of the world’s poverty. But this is untrue. The world is nonzero-sum. My happiness does not come from your sadness, and my wealth does not come from your poverty. In fact, even the West was poor for most of history, and we are far more prosperous now that we have largely abandoned colonial imperialism than we ever were in imperialism’s heyday. (I do occasionally encounter British people who seem vaguely nostalgic for the days of the empire, but real median income in the UK has doubled just since 1977. Inequality has also increased during that time, which is definitely a problem; but the UK is undeniably richer now than it ever was at the peak of the empire.)

In fact it could be that the West is richer now because of colonialism than it would have been without it. I don’t know whether or not this is true. I suspect it isn’t, but I really don’t know for sure. My guess would be that colonized countries are poorer, but colonizer countries are not richer—that is, colonialism is purely destructive. Certain individuals clearly got richer by such depredation (Leopold II, anyone?), but I’m not convinced many countries did.

Yet even if colonialism did make the West richer, it clearly cannot explain most of the wealth of Western civilization—for that wealth simply did not exist in the world before. All these bridges and power plants, laptops and airplanes weren’t lying around waiting to be stolen. Surely, some of the ingredients were stolen—not least, the land. Had they been bought at fair prices, the result might have been less wealth for us (then again it might not, for wealthier trade partners yield greater exports). But this does not mean that the products themselves constitute theft, nor that the wealth they provide is meaningless. Perhaps we should find some way to pay reparations; undeniably, we should work toward greater justice in the future. But we do not need to give up all we have in order to achieve that justice.

There is a law of conservation of energy. It is impossible to create energy in one place without removing it from another. There is no law of conservation of prosperity. Making the world better in one place does not require making it worse in another.

Progress is real. Yes, it is flawed and uneven, and it has costs of its own; but it is real. If we want to have more of it, we’d best continue to believe in it. And The Better Angels of Our Nature does have some notable flaws, but it still retains its place among truly great books.

When maximizing utility doesn’t

Jun 4 JDN 2460100

Expected utility theory behaves quite strangely when you consider questions involving mortality.

Nick Beckstead and Teruji Thomas recently published a paper on this: All well-defined utility functions are either reckless in that they make you take crazy risks, or timid in that they tell you not to take even very small risks. It’s starting to make me wonder if utility theory is even the right way to make decisions after all.

Consider a game of Russian roulette where the prize is $1 million. The revolver has 6 chambers, 3 with a bullet. So that’s a 1/2 chance of $1 million, and a 1/2 chance of dying. Should you play?

I think it’s probably a bad idea to play. But the prize does matter; if it were $100 million, or $1 billion, maybe you should play after all. And if it were $10,000, you clearly shouldn’t.

And lest you think that there is no chance of dying you should be willing to accept for any amount of money, consider this: Do you drive a car? Do you cross the street? Do you do anything that could ever have any risk of shortening your lifespan in exchange for some other gain? I don’t see how you could live a remotely normal life without doing so. It might be a very small risk, but it’s still there.

This raises the question: Suppose we have some utility function over wealth; ln(x) is a quite plausible one. What utility should we assign to dying?


The fact that the prize matters means that we can’t assign death a utility of negative infinity. It must be some finite value.

But suppose we choose some value, -V, (so V is positive), for the utility of dying. Normalize your current utility to zero; then playing is worth it when (1/2) ln(x) + (1/2)(-V) ≥ 0. So we can find some amount of money that will make you willing to play: ln(x) = V, x = e^(V).

Now, suppose that you have the chance to play this game over and over again. Your marginal utility of wealth will change each time you win, so we may need to increase the prize to keep you playing; but we could do that. The prizes could keep scaling up as needed to make you willing to play. So then, you will keep playing, over and over—and then, sooner or later, you’ll die. So, at each step you maximized utility—but at the end, you didn’t get any utility.
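Here is a minimal simulation of that dynamic (my own sketch, not from the post; the value of V is arbitrary). Each round, the prize is set just high enough that playing maximizes expected utility, so the agent always plays:

```python
# An expected-log-utility maximizer repeatedly offered Russian roulette:
# 1/2 chance of dying (utility -V), 1/2 chance of a prize. Each round the
# prize is the smallest one that makes playing (weakly) optimal, so the
# agent plays every time, and therefore eventually dies with certainty.
import random

V = 10.0      # assumed finite disutility of death (any finite value works)
log_w = 0.0   # ln(wealth), normalized so current utility is zero

random.seed(1)
rounds = 0
while True:
    rounds += 1
    # Playing beats passing iff 0.5*new_log_w + 0.5*(-V) >= log_w,
    # i.e. winning must raise ln(wealth) to at least 2*log_w + V.
    new_log_w = 2 * log_w + V
    if random.random() < 0.5:
        log_w = new_log_w   # survived and collected the prize
    else:
        break               # died; the -V outcome finally arrived

print(f"Maximized expected utility every round; died on round {rounds}")
print(f"ln(wealth) had reached {log_w:.1f} by then, and it counted for nothing")
```

The chance of surviving n rounds is (1/2)^n, which goes to zero: every step is individually utility-maximizing, yet in the long run death is certain.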

Well, at that point your heirs will be rich, right? So maybe you’re actually okay with that. Maybe there is some amount of money ($1 billion?) that you’d be willing to die in order to ensure your heirs have.

But what if you don’t have any heirs? Or, what if we consider making such a decision as a civilization? What if death means not only the destruction of you, but also the destruction of everything you care about?

As a civilization, are there choices before us that would result in some chance of a glorious, wonderful future, but also some chance of total annihilation? I think it’s pretty clear that there are. Nuclear technology, biotechnology, artificial intelligence. For about the last century, humanity has been at a unique epoch: We are being forced to make this kind of existential decision, to face this kind of existential risk.

It’s not that we were immune to being wiped out before; an asteroid could have taken us out at any time (as happened to the dinosaurs), and a volcanic eruption nearly did. But this is the first time in humanity’s existence that we have had the power to destroy ourselves. This is the first time we have a decision to make about it.

One possible answer would be to say we should never be willing to take any kind of existential risk. Unlike the case of an individual, when we are speaking about an entire civilization, it no longer seems obvious that we shouldn’t set the utility of death at negative infinity. But if we really did this, it would require shutting down whole industries—definitely halting all research in AI and biotechnology, probably disarming all nuclear weapons and destroying all their blueprints, and quite possibly even shutting down the coal and oil industries. It would be an utterly radical change, and it would require bearing great costs.

On the other hand, if we should decide that it is sometimes worth the risk, we will need to know when it is worth the risk. We currently don’t know that.

Even worse, we will need some mechanism for ensuring that we don’t take the risk when it isn’t worth it. And we have nothing like such a mechanism. In fact, research in AI and biotechnology is widely dispersed, with no central governing authority and regulations that are inconsistent between countries. I think it’s quite apparent that right now, there are research projects going on somewhere in the world that aren’t worth the existential risk they pose for humanity—but the people doing them are convinced that they are worth it because they so greatly advance their national interest—or simply because they could be so very profitable.

In other words, humanity finally has the power to make a decision about our survival, and we’re not doing it. We aren’t making a decision at all. We’re letting that responsibility fall upon more or less randomly-chosen individuals in government and corporate labs around the world. We may be careening toward an abyss, and we don’t even know who has the steering wheel.

We ignorant, incompetent gods

May 21 JDN 2460086

A review of Homo Deus

The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.

E.O. Wilson

Homo Deus is a very good read—and despite its length, a quick one; as you can see, I read it cover to cover in a week. Yuval Noah Harari’s central point is surely correct: Our technology is reaching a threshold where it grants us unprecedented power and forces us to ask what it means to be human.

Biotechnology and artificial intelligence are now advancing so rapidly that advancements in other domains, such as aerospace and nuclear energy, seem positively mundane. Who cares about making flight or electricity a bit cleaner when we will soon have the power to modify ourselves or we’ll all be replaced by machines?

Indeed, we already have technology that would have seemed to ancient people like the powers of gods. We can fly; we can witness or even control events thousands of miles away; we can destroy mountains; we can wipe out entire armies in an instant; we can even travel into outer space.

Harari rightly warns us that our not-so-distant descendants are likely to have powers that we would see as godlike: Immortality, superior intelligence, self-modification, the power to create life.

And while it is scary to think about what they might do with that power if they think the way we do—as ignorant and foolish and tribal as we are—Harari points out that it is equally scary to think about what they might do if they don’t think the way we do—for then, how do they think? If their minds are genetically modified or even artificially created, who will they be? What values will they have, if not ours? Could they be better? What if they’re worse?

It is of course difficult to imagine values better than our own—if we thought those values were better, we’d presumably adopt them. But we should seriously consider the possibility, since presumably most of us believe that our values today are better than what most people’s values were 1000 years ago. If moral progress continues, does it not follow that people’s values will be better still 1000 years from now? Or at least that they could be?

I also think Harari overestimates just how difficult it is to anticipate the future. This may be a useful overcorrection; the world is positively infested with people making overprecise predictions about the future, often selling them for exorbitant fees (note that Harari was quite well-compensated for this book as well!). But our values are not so fundamentally alien from those of our forebears, and we have reason to suspect that our descendants’ values will be no more different from ours than ours are from our ancestors’.

For instance, do you think that medieval people thought suffering and death were good? I assure you they did not. Nor did they believe that the supreme purpose in life is eating cheese. (They didn’t even believe the Earth was flat!) They did not have the concept of GDP, but they could surely appreciate the value of economic prosperity.

Indeed, our world today looks very much like a medieval peasant’s vision of paradise. Boundless food in endless variety. Near-perfect security against violence. Robust health, free from nearly all infectious disease. Freedom of movement. Representation in government! The land of milk and honey is here; there they are, milk and honey on the shelves at Walmart.

Of course, our paradise comes with caveats: Not least, we are by no means free of toil, but instead have invented whole new kinds of toil they could scarcely have imagined. If anything I would have to guess that coding a robot or recording a video lecture probably isn’t substantially more satisfying than harvesting wheat or smithing a sword; and reconciling receivables and formatting spreadsheets is surely less. Our tasks are physically much easier, but mentally much harder, and it’s not obvious which of those is preferable. And we are so very stressed! It’s honestly bizarre just how stressed we are, given the abundance in which we live; there is no reason for our lives to have stakes so high, and yet somehow they do. It is perhaps this stress and economic precarity that prevents us from feeling such joy as the medieval peasants would have imagined for us.

Of course, we don’t agree with our ancestors on everything. The medieval peasants were surely more religious, more ignorant, more misogynistic, more xenophobic, and more racist than we are. But projecting that trend forward mostly means less ignorance, less misogyny, less racism in the future; it means that future generations should see the world catch up to what the best of us already believe and strive for—hardly something to fear. The values I believe in are surely not the ones we act upon as a civilization, and I sorely wish they were. Perhaps someday they will be.

I can even imagine something that I myself would recognize as better than me: Me, but less hypocritical. Strictly vegan rather than lacto-ovo-vegetarian, or at least more consistent about only buying free range organic animal products. More committed to ecological sustainability, more willing to sacrifice the conveniences of plastic and gasoline. Able to truly respect and appreciate all life, even humble insects. (Though perhaps still not mosquitoes; this is war. They kill more of us than any other animal, including us.) Not even casually or accidentally racist or sexist. More courageous, less burnt out and apathetic. I don’t always live up to my own ideals. Perhaps someday someone will.

Harari fears something much darker, that we will be forced to give up on humanist values and replace them with a new techno-religion he calls Dataism, in which the supreme value is efficient data processing. I see very little evidence of this. If it feels like data is worshipped these days, it is only because data is profitable. Amazon and Google constantly seek out ever richer datasets and ever faster processing because that is how they make money. The real subject of worship here is wealth, and that is nothing new. Maybe there are some die-hard techno-utopians out there who long for us all to join the unified oversoul of all optimized data processing, but I’ve never met one, and they are clearly not the majority. (Harari also uses the word ‘religion’ in an annoyingly overbroad sense; he refers to communism, liberalism, and fascism as ‘religions’. Ideologies, surely; but religions?)

Harari in fact seems to think that ideologies are strongly driven by economic structures, so maybe he would even agree that it’s about profit for now, but thinks it will become religion later. But I don’t really see history fitting this pattern all that well. If monotheism is directly tied to the formation of organized bureaucracy and national government, then how did Egypt and Rome last so long with polytheistic pantheons? If atheism is the natural outgrowth of industrialized capitalism, then why are Africa and South America taking so long to get the memo? I do think that economic circumstances can constrain culture and shift what sort of ideas become dominant, including religious ideas; but there clearly isn’t this one-to-one correspondence he imagines. Moreover, there was never Coalism or Oilism aside from the greedy acquisition of these commodities as part of a far more familiar ideology: capitalism.

He also claims that all of science is now, or is close to, following a united paradigm under which everything is a data processing algorithm, which suggests he has not met very many scientists. Our paradigms remain quite varied, thank you; and if they do all have certain features in common, it’s mainly things like rationality, naturalism and empiricism that are more or less inherent to science. It’s not even the case that all cognitive scientists believe in materialism (though it probably should be); there are still dualists out there.

Moreover, when it comes to values, most scientists believe in liberalism. This is especially true if we use Harari’s broad sense (on which mainline conservatives and libertarians are ‘liberal’ because they believe in liberty and human rights), but even in the narrow sense of center-left. We are by no means converging on a paradigm where human life has no value because it’s all just data processing; maybe some scientists believe that, but definitely not most of us. If scientists ran the world, I can’t promise everything would be better, but I can tell you that Bush and Trump would never have been elected and we’d have a much better climate policy in place by now.

I do share many of Harari’s fears of the rise of artificial intelligence. The world is clearly not ready for the massive economic disruption that AI is going to cause all too soon. We still define a person’s worth by their employment, and think of ourselves primarily as a collection of skills; but AI is going to make many of those skills obsolete, and may make many of us unemployable. It would behoove us to think in advance about who we truly are and what we truly want before that day comes. I used to think that creative intellectual professions would be relatively secure; ChatGPT and Midjourney changed my mind. Even writers and artists may not be safe much longer.

Harari is so good at sympathetically explaining other views that he takes it to a fault. At times it is actually difficult to know whether he himself believes something and wants you to, or if he is just steelmanning someone else’s worldview. There’s a whole section on ‘evolutionary humanism’ where he details a worldview that is at best Nietzschean and at worst Nazi, but he makes it sound so seductive. I don’t think it’s what he believes, in part because he has similarly good things to say about liberalism and socialism—but it’s honestly hard to tell.

The weakest part of the book is when Harari talks about free will. Like most people, he just doesn’t get compatibilism. He spends a whole chapter talking about how science ‘proves we have no free will’, and it’s just the same old tired arguments hard determinists have always made.

He talks about how we can make choices based on our desires, but we can’t choose our desires; well of course we can’t! What would that even mean? If you could choose your desires, what would you choose them based on, if not your desires? Your desire-desires? Well, then, can you choose your desire-desires? What about your desire-desire-desires?

What even is this ultimate uncaused freedom that libertarian free will is supposed to consist in? No one seems capable of even defining it. (I’d say Kant got the closest: He defined it as the capacity to act based upon what ought rather than what is. But of course what we believe about ‘ought’ is fundamentally stored in our brains as a particular state, a way things are—so in the end, it’s an ‘is’ we act on after all.)

Maybe before you lament that something doesn’t exist, you should at least be able to describe that thing as a coherent concept? Woe is me, that 2 plus 2 is not equal to 5!

It is true that as our technology advances, manipulating other people’s desires will become more and more feasible. Harari overstates the case on so-called robo-rats; they aren’t really mind-controlled, it’s more like they are rewarded and punished. The rat chooses to go left because she knows you’ll make her feel good if she does; she’s still freely choosing to go left. (Dangling a carrot in front of a horse is fundamentally the same thing—and frankly, paying a wage isn’t all that different.) The day may yet come where stronger forms of control become feasible, and woe betide us when it does. Yet this is no threat to the concept of free will; we already knew that coercion was possible, and mind control is simply a more precise form of coercion.

Harari reports on a lot of interesting findings in neuroscience, which are important for people to know about, but they do not actually show that free will is an illusion. What they do show is that free will is thornier than most people imagine. Our desires are not fully unified; we are often ‘of two minds’ in a surprisingly literal sense. We are often tempted by things we know are wrong. We often aren’t sure what we really want. Every individual is in fact quite divisible; we literally contain multitudes.

We do need a richer account of moral responsibility that can deal with the fact that human beings often feel multiple conflicting desires simultaneously, and often experience events differently than we later go on to remember them. But at the end of the day, human consciousness is mostly unified, our choices are mostly rational, and our basic account of moral responsibility is mostly valid.

I think for now we should perhaps be less worried about what may come in the distant future, what sort of godlike powers our descendants may have—and more worried about what we are doing with the godlike powers we already have. We have the power to feed the world; why aren’t we? We have the power to save millions from disease; why don’t we? I don’t see many people blindly following this ‘Dataism’, but I do see an awful lot blindly following a 19th-century vision of capitalism.

And perhaps if we straighten ourselves out, the future will be in better hands.

Will hydrogen make air travel sustainable?

Apr 9 JDN 2460042

Air travel is currently one of the most carbon-intensive activities anyone can engage in. Per passenger kilometer, airplanes emit about 8 times as much carbon as ships, 4 times as much as trains, and 1.5 times as much as cars. Living in a relatively eco-friendly city without a car and eating a vegetarian diet, I produce much less carbon than most First World citizens—except when I fly across the Atlantic a couple of times a year.

Until quite recently, most climate scientists believed that this was basically unavoidable, that simply sustaining the kind of power output required to keep an airliner in the air would always require carbon-intensive jet fuel. But in just the past few years, major breakthroughs have been made in using hydrogen propulsion.

The beautiful thing about hydrogen is that burning it simply produces water—no harmful pollution at all. It’s basically the cleanest possible fuel.


The simplest approach, which is actually quite old, but until recently didn’t seem viable, is the use of liquid hydrogen as airplane fuel.

We’ve been using liquid hydrogen as a rocket fuel for decades, so we knew it had enough energy density. (Actually, its energy density per unit mass is nearly three times that of conventional jet fuel, though per unit volume it is considerably lower.)

The problem with liquid hydrogen is that it must be kept extremely cold—it boils at 20 Kelvin. And once liquid hydrogen boils into gas, it builds up pressure very fast and easily permeates through most materials, so it’s extremely hard to contain. This makes it very difficult and expensive to handle.

But this isn’t the only way to use hydrogen, and it may turn out not to be the best one.

There are now prototype aircraft that have flown using hydrogen fuel cells. These fuel cells can be fed with hydrogen gas—so no need to cool below 20 Kelvin. But a fuel cell produces electricity rather than thrust, so it can’t directly run the turbines; instead, these planes use electric motors powered by the fuel cell.

Basically these are really electric aircraft. But whereas a lithium battery would be far too heavy, a hydrogen fuel cell is light enough for aviation use. In fact, hydrogen gas up to a certain pressure is lighter than air (it was often used for zeppelins, though, uh, occasionally catastrophically), so potentially the planes could use their own fuel tanks for buoyancy, landing “heavier” than they took off. (On the other hand it might make more sense to pressurize the hydrogen beyond that point, so that it will still be heavier than air—but perhaps still lighter than jet fuel!)
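As a back-of-envelope check on that “certain pressure” (my own numbers, not from the post): under the ideal gas law, density scales with pressure times molar mass, so at equal temperature the crossover point is simply the ratio of the molar masses of air and hydrogen.

```python
# At what pressure does compressed hydrogen stop being lighter than air?
# Ideal gas: density = P*M/(R*T), so hydrogen at P atmospheres matches
# sea-level air density when P = M_air / M_H2 (same temperature assumed).
M_AIR = 28.97  # g/mol, mean molar mass of dry air
M_H2 = 2.016   # g/mol, molar mass of H2

crossover_atm = M_AIR / M_H2
print(f"Hydrogen stays lighter than sea-level air up to ~{crossover_atm:.1f} atm")
# ~14.4 atm. Practical gaseous-hydrogen tanks run at hundreds of
# atmospheres, so a fully pressurized tank would indeed be denser than air.
```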

Of course, the technology is currently too untested and too expensive to be used on a wide scale. But this is how all technologies begin. It’s of course possible that we won’t be able to solve the engineering problems that currently make hydrogen-powered aircraft unaffordable; but several aircraft manufacturers are now investing in hydrogen research—suggesting that they at least believe there is a good chance we will.

There’s also the issue of where we get all the hydrogen. Hydrogen is extremely abundant—literally the most abundant baryonic matter in the universe—but most of what’s on Earth is locked up in water or hydrocarbons. Most of the hydrogen we currently make is produced by processing hydrocarbons (particularly methane), but that produces carbon emissions, so it wouldn’t solve the problem.

A better option is electrolysis: Using electricity to separate water into hydrogen and oxygen. But this requires a lot of energy—and necessarily, more energy than you can get out of burning the hydrogen later, since burning it basically is just putting the hydrogen and oxygen back together to make water.

Yet all is not lost, for while energy density is absolutely vital for an aircraft fuel, it’s not so important for a ground-based power plant. As an ultimate fuel source, hydrogen is a non-starter. But as an energy storage medium, it could be ideal.

The idea is this: We take the excess energy from wind and solar power plants, and use that energy to electrolyze water into hydrogen and oxygen. We then store that hydrogen and use it for fuel cells to run aircraft (and potentially other things as well). This ensures that the extra energy that renewable sources can generate in peak times doesn’t go to waste, and also provides us with what we need to produce clean-burning hydrogen fuel.

The basic technology for doing all this already exists. The current problem is cost. Under current conditions, it’s far more expensive to make hydrogen fuel than to make conventional jet fuel. Since fuel is one of the largest costs for airlines, even small increases in fuel prices matter a lot for the price of air travel; and these are not even small differences. Currently hydrogen costs over 10 times as much per kilogram, and its higher energy density isn’t enough to make up for that. For hydrogen aviation to be viable, that ratio needs to drop to more like 2 or 3—maybe even all the way to 1, since hydrogen is also more expensive to store than jet fuel (the gas needs high-pressure tanks, the liquid needs cryogenic cooling systems).
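To make the arithmetic concrete, here is a rough cost-per-energy comparison (a sketch using illustrative prices that I am assuming, not figures from the post):

```python
# Compare fuel cost per megajoule rather than per kilogram, since
# hydrogen's higher energy density partly offsets its higher price.
JET_MJ_PER_KG = 43.0   # approximate lower heating value of jet fuel
H2_MJ_PER_KG = 120.0   # approximate lower heating value of hydrogen

jet_price_kg = 1.0     # assumed $/kg for jet fuel (illustrative)
h2_price_kg = 10.0     # assumed $/kg, reflecting "over 10 times as much"

jet_per_mj = jet_price_kg / JET_MJ_PER_KG   # ~$0.023 per MJ
h2_per_mj = h2_price_kg / H2_MJ_PER_KG      # ~$0.083 per MJ
print(f"Hydrogen costs {h2_per_mj / jet_per_mj:.1f}x as much per MJ")

# Per-kg price ratio at which the two fuels cost the same per MJ:
print(f"Energy-cost parity at a price ratio of ~{H2_MJ_PER_KG / JET_MJ_PER_KG:.1f}x")
# ~2.8x, which is why the ratio needs to fall to "more like 2 or 3",
# or lower still once hydrogen's extra storage costs are counted.
```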

This means that, for the time being, it’s still environmentally responsible to reduce your air travel. Fly less often, always fly economy (more people on the plane means less carbon per passenger), and buy carbon offsets (they’re cheaper than you may think).

But in the long run, we may be able to have our cake and eat it too: If hydrogen aviation does become viable, we may not need to give up the benefits of routine air travel in order to reduce our carbon emissions.

What is it with EA and AI?

Jan 1 JDN 2459946

Surprisingly, most Effective Altruism (EA) leaders don’t seem to think that poverty alleviation should be our top priority. Most of them seem especially concerned about long-term existential risk, such as artificial intelligence (AI) safety and biosecurity. I’m not going to say that these things aren’t important—they certainly are important—but here are a few reasons I’m skeptical that they are really the most important, as so many EA leaders seem to think.

1. We don’t actually know how to make much progress at them, and there’s only so much we can learn by investing heavily in basic research on them. Whereas, with poverty, the easy, obvious answer turns out empirically to be extremely effective: Give them money.

2. While it’s easy to multiply out huge numbers of potential future people in your calculations of existential risk (and this is precisely what people do when arguing that AI safety should be a top priority), this clearly isn’t a good way to make real-world decisions. We simply don’t know enough about the distant future of humanity to be able to make any kind of good judgments about what will or won’t increase their odds of survival. You’re basically just making up numbers. You’re taking tiny probabilities of things you know nothing about and multiplying them by ludicrously huge payoffs (say, a one-in-a-trillion chance of securing 10^50 future lives, which on paper “outweighs” saving a million people today with certainty); it’s basically the secular rationalist equivalent of Pascal’s Wager.

3. AI and biosecurity are high-tech, futuristic topics, which seem targeted to appeal to the sensibilities of a movement that is still very dominated by intelligent, nerdy, mildly autistic, rich young White men. (Note that I say this as someone who very much fits this stereotype. I’m queer, not extremely rich and not entirely White, but otherwise, yes.) Somehow I suspect that if we asked a lot of poor Black women how important it is to slightly improve our understanding of AI versus giving money to feed children in Africa, we might get a different answer.

4. Poverty eradication is often characterized as a “short term” project, contrasted with AI safety as a “long term” project. This is (ironically) very short-sighted. Eradication of poverty isn’t just about feeding children today. It’s about making a world where those children grow up to be leaders and entrepreneurs and researchers themselves. The positive externalities of economic development are staggering. It is really not much of an exaggeration to say that fascism is a consequence of poverty and unemployment.

5. Currently the thing that most Effective Altruism organizations say they need most is “talent”; how many millions of person-hours of talent are we leaving on the table by letting children starve or die of malaria?

6. Above all, existential risk can’t really be what’s motivating people here. The obvious solutions to AI safety and biosecurity are not being pursued, because they don’t fit with the vision that intelligent, nerdy, young White men have of how things should be. Namely: Ban them. If you truly believe that the most important thing to do right now is reduce the existential risk of AI and biotechnology, you should support a worldwide ban on research in artificial intelligence and biotechnology. You should want people to take all necessary action to attack and destroy institutions—especially for-profit corporations—that engage in this kind of research, because you believe that they are threatening to destroy the entire world and this is the most important thing, more important than saving people from starvation and disease. I think this is really the knock-down argument; when people say they think that AI safety is the most important thing but they don’t want Google and Facebook to be immediately shut down, they are either confused or lying. Honestly I think maybe Google and Facebook should be immediately shut down for AI safety reasons (as well as privacy and antitrust reasons!), and I don’t think AI safety is yet the most important thing.

Why aren’t people doing that? Because they aren’t actually trying to reduce existential risk. They just think AI and biotechnology are really interesting, fascinating topics and they want to do research on them. And I agree with that, actually—but then they need to stop telling people that they’re fighting to save the world, because they obviously aren’t. If the danger were anything like what they say it is, we should be halting all research on these topics immediately, except perhaps for a very select few people who are entrusted with keeping these forbidden secrets and trying to find ways to protect us from them. This may sound radical and extreme, but it is not unprecedented: This is how we handle nuclear weapons, which are universally recognized as a global existential risk. If AI is really as dangerous as nukes, we should be regulating it like nukes. I think that in principle it could be that dangerous, and may be that dangerous someday—but it isn’t yet. And if we don’t want it to get that dangerous, we don’t need more AI researchers, we need more regulations that stop people from doing harmful AI research! If you are doing AI research and it isn’t directly involved specifically in AI safety, you aren’t saving the world—you’re one of the people dragging us closer to the cliff! Anything that could make AI smarter but doesn’t also make it safer is dangerous. And this is clearly true of the vast majority of AI research, and frankly to me seems to also be true of the vast majority of research at AI safety institutes like the Machine Intelligence Research Institute.

Seriously, look through MIRI’s research agenda: It’s mostly incredibly abstract and seems completely beside the point when it comes to preventing AI from taking control of weapons or governments. It’s all about formalizing Bayesian induction. Thanks to you, Skynet can have a formally computable approximation to logical induction! Truly we are saved. Only two of their papers, on “Corrigibility” and “AI Ethics”, actually struck me as at all relevant to making AI safer. The rest is largely abstract mathematics that is almost literally navel-gazing—it’s all about self-reference. Eliezer Yudkowsky finds self-reference fascinating and has somehow convinced an entire community that it’s the most important thing in the world. (I actually find some of it fascinating too, especially the paper on “Functional Decision Theory”, which I think gets at some deep insights into things like why we have emotions. But I don’t see how it’s going to save the world from AI.)

Don’t get me wrong: AI also has enormous potential benefits, and this is a reason we may not want to ban it. But if you really believe that there is a 10% chance that AI will wipe out humanity by 2100, then get out your pitchforks and your EMP generators, because it’s time for the Butlerian Jihad. A 10% chance of destroying all humanity is an utterly unacceptable risk for any conceivable benefit. Better that we consign ourselves to living as we did in the Neolithic than risk something like that. (And a globally-enforced ban on AI isn’t even that; it’s more like “We must live as we did in the 1950s.” How would we survive!?) If you don’t want AI banned, maybe ask yourself whether you really believe the risk is that high—or are human brains just really bad at dealing with small probabilities?

I think what’s really happening here is that we have a bunch of guys (and yes, the EA community, and especially the EA-AI community, is overwhelmingly male) who are really good at math and want to save the world, and have thus convinced themselves that being really good at math is how you save the world. But it isn’t. The world is much messier than that. In fact, there may not be much that most of us can do to contribute to saving the world; our best options may in fact be to donate money, vote well, and advocate for good causes.

Let me speak Bayesian for a moment: The prior probability that you—yes, you, out of all the billions of people in the world—are uniquely positioned to save it by being so smart is extremely small. It’s far more likely that the world will be saved—or doomed—by people who have power. If you are not the head of state of a large country or the CEO of a major multinational corporation, I’m sorry; you probably just aren’t in a position to save the world from AI.

But you can give some money to GiveWell, so maybe do that instead?

Charity shouldn’t end at home

It so happens that this week’s post will go live on Christmas Day. I always try to do some kind of holiday-themed post around this time of year, because not only Christmas but a dozen other holidays from various religions all fall within a few weeks of each other. The winter solstice seems to be a very popular time for holidays, and has been since antiquity: The Romans were celebrating Saturnalia 2000 years ago. Most of our ‘Christmas’ traditions are actually derived from Yuletide.

These holidays certainly mean many different things to different people, but charity and generosity are themes that are very common across a lot of them. Gift-giving has been part of the season since at least Saturnalia and remains as vital as ever today. Most of those gifts are given to our friends and loved ones, but a substantial fraction of people also give to strangers in the form of charitable donations: November and December have the highest rates of donation to charity in the US and the UK, with about 35-40% of people donating during this season. (Of course this is complicated by the fact that December 31 is often the day with the most donations, probably from people trying to finish out their tax year with a larger deduction.)

My goal today is to make you one of those donors. There is a common saying, often attributed to the Bible but not actually present in it: “Charity begins at home”.

Perhaps this is so. There’s certainly something questionable about the Effective Altruism strategy of “earning to give” if it involves abusing and exploiting the people around you in order to make more money that you then donate to worthy causes. Certainly we should be kind and compassionate to those around us, and it makes sense for us to prioritize those close to us over strangers we have never met. But while charity may begin at home, it must not end at home.

There are so many global problems that could benefit from additional donations. Global poverty has been rapidly declining in the early 21st century, but this is largely because of the efforts of donors and nonprofit organizations. Official Development Assistance has been roughly constant since the 1970s at 0.3% of GNI among First World countries—well below international targets set decades ago. Total development aid is around $160 billion per year, while private donations from the United States alone are over $480 billion. Moreover, 9% of the world’s population still lives in extreme poverty, and this rate has actually slightly increased in the last few years due to COVID.

There are plenty of other worthy causes you could give to aside from poverty eradication, from issues that have been with us since the dawn of human civilization (Humane Society International for domestic animal welfare, the World Wildlife Fund for wildlife conservation) to exotic fat-tail sci-fi risks that are only emerging in our own lifetimes (the Machine Intelligence Research Institute for AI safety, the International Federation of Biosafety Associations for biosecurity, the Union of Concerned Scientists for climate change and nuclear safety). You could fight poverty directly through organizations like UNICEF or GiveDirectly, fight neglected diseases through the Schistosomiasis Control Initiative or the Against Malaria Foundation, or entrust an organization like GiveWell to optimize your donations for you, sending them where they think they are needed most. You could give to political causes supporting civil liberties (the American Civil Liberties Union), protecting the rights of people of color (the National Association for the Advancement of Colored People), or protecting the rights of LGBT people (the Human Rights Campaign).

I could spend a lot of time and effort trying to figure out the optimal way to divide up your donations and give them to causes such as these—and then convincing you that it’s really the right one. (And there is even a time and place for that, because seemingly-small differences can matter a lot here.) But instead I think I’m just going to ask you to pick something. Give something to an international charity with a good track record.

I think we worry far too much about what is the best way to give—especially people in the Effective Altruism community, of which I’m sort of a marginal member—when the biggest thing the world really needs right now is just more people giving more. It’s true, there are lots of worthless or even counter-productive charities out there: Please, please do not give to the Salvation Army. (And think twice before donating to your own church; if you want to support your own community, okay, go ahead. But if you want to make the world better, there are much better places to put your money.)

But above all, give something. Or if you already give, give more. Most people don’t give at all, and most people who give don’t give enough.

The era of the eurodollar is upon us

Oct 16 JDN 2459869

I happen to be one of those weirdos who liked the game Cyberpunk 2077. It was hardly flawless, and had many unforced errors (like letting you choose your gender, but not making voice type independent from pronouns? That has to be, like, three lines of code to make your game significantly more inclusive). But overall I thought it did a good job of representing a compelling cyberpunk world that is dystopian but not totally hopeless, and had rich, compelling characters, along with reasonably good gameplay. The high level of character customization sets a new standard (aforementioned errors notwithstanding), and I for one appreciate how they pushed the envelope for sexuality in a AAA game.

It’s still not explicit—though I’m sure there are mods for that—but at least you can in fact get naked, and people talk about sex in a realistic way. It’s still weird to me that showing a bare breast or a penis is seen as ‘adult’ in the same way as showing someone’s head blown off (Remind me: Which of the three will nearly everyone have seen from the time they were a baby? Which will at least 50% of children see from birth, guaranteed, and virtually 100% of adults sooner or later? Which can you see on Venus de Milo and David?), but it’s at least some progress in our society toward a healthier relationship with sex.

A few things about the game’s world still struck me as odd, though. Chiefly it has to be the weird alternate history where apparently we have experimental AI and mind-uploading in the 2020s, but… those things are still experimental in the 2070s? So our technological progress was through the roof for the early 2000s, and then just completely plateaued? They should have had Johnny Silverhand’s story take place in something like 2050, not 2023. (You could leave essentially everything else unchanged! V could still have grown up hearing tales of Silverhand’s legendary exploits, because 2050 was 27 years ago in 2077; canonically, V is 28 years old when the game begins. Honestly it makes more sense in other ways: Rogue looks like she’s in her 60s, not her 80s.)

Another weird thing is the currency they use: They call it the “eurodollar”, and the symbol is, as you might expect, €$. When the game first came out, that seemed especially ridiculous, since euros were clearly worth more than dollars and basically always had been.

Well, they aren’t anymore. In fact, euros and dollars are now trading almost exactly at parity, and have been for weeks. CD Projekt Red was right: In the 2020s, the era of the eurodollar is upon us after all.

Of course, we’re unlikely to actually merge the two currencies any time soon. (Can you imagine how Republicans would react if such a thing were proposed?) But the weird thing is that we could! It’s almost as if the two currencies are interchangeable—for the first time in history.

It isn’t so much that the euro is weak; it’s that the dollar is strong. When I first moved to the UK, the pound was trading at about $1.40. It is now trading at $1.10! If it continues dropping as it has, it could even reach parity as well! We might have, for the first time in history, the dollar, the pound, and the euro functioning as one currency. Get the Canadian dollar too (currently much too weak), and we’ll have the Atlantic Union dollar I use in some of my science fiction (I imagine the AU as an expansion of NATO into an economic union that gradually becomes its own government).

Then again, the pound is especially weak right now because it plunged after the new prime minister announced an utterly idiotic economic plan. (Conservatives refusing to do basic math and promising that tax cuts would fix everything? Why, it felt like being home again! In all the worst ways.)

This is largely a bad thing. A strong dollar means that the US trade deficit will increase, and also that other countries will have trouble buying our exports. Conversely, with their stronger dollars, Americans will buy more imports from other countries. The combination of these two effects will make inflation worse in other countries (though it could reduce it in the US).

It’s not so bad for me personally, as my husband’s income is largely in dollars while our expenses are in pounds. (My income is in pounds and thus unaffected.) So a strong dollar and a weak pound mean our real household income is about £4,000 higher than it would otherwise have been—which is not a small difference!
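The arithmetic behind a number like that is straightforward; here’s a minimal Python sketch, using the exchange rates quoted above and a hypothetical dollar income of $20,000 (the actual amount isn’t stated):

```python
# Converting a dollar income into pounds at the old vs. new exchange rate.
usd_income = 20_000   # hypothetical annual USD income, for illustration only
old_rate = 1.40       # USD per GBP when the author moved to the UK
new_rate = 1.10       # USD per GBP at the time of writing

gbp_old = usd_income / old_rate   # ~£14,286
gbp_new = usd_income / new_rate   # ~£18,182
print(f"Gain in pound terms: £{gbp_new - gbp_old:,.0f}")  # ~£3,896
```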

In general, the level of currency exchange rates isn’t very important. It’s changes in exchange rates that matter. The changes in relative prices will shift around a lot of economic activity, causing friction both in the US and in its (many) trading partners. Eventually all those changes should result in the exchange rates converging to a new, stable equilibrium; but that can take a long time, and exchange rates can fluctuate remarkably fast. In the meantime, such large shifts in exchange rates are going to cause even more chaos in a world already shaken by the COVID pandemic and the war in Ukraine.

Working from home is the new normal—sort of

Aug 28 JDN 2459820

Among people with jobs that can be done remotely, a large majority did in fact switch to doing their jobs remotely: By the end of 2020, over 70% of Americans with jobs that could be done remotely were working from home—and most of them said they didn’t want to go back.

This is actually what a lot of employers expected to happen—just not quite like this. In 2014, a third of employers predicted that the majority of their workforce would be working remotely by 2020; given the timeframe there, it required a major shock to make that happen so fast, and yet a major shock was what we had.

Working from home has carried its own challenges, but overall productivity seems to be higher working remotely (that meeting really could have been an email!). This may explain why output per work hour actually rose rapidly in 2020 and then fell in 2022.

The COVID pandemic now isn’t so much over as becoming permanent; COVID is now being treated as an endemic infection like influenza that we don’t expect to be able to eradicate in the foreseeable future.

And likewise, remote work seems to be here to stay—sort of.

First of all, we don’t seem to be giving up office work entirely. As of the first quarter of 2022, almost as many firms have partially remote work as have fully remote work, and the hybrid share seems to be trending upward. A lot of firms seem to be transitioning into a “hybrid” model where employees show up to work two or three days a week. This seems to be preferred by large majorities of both workers and firms.

There is a significant downside of this: It means that the hope that remote working might finally ease the upward pressure on housing prices in major cities is largely a false one. If we were transitioning to a fully remote system, then people could live wherever they want (or can afford) and there would be no reason to move to overpriced city centers. But if you have to show up to work even one day a week, that means you need to live close enough to the office to manage that commute.

Likewise, if workers never came to the office, you could sell the office building and convert it into more housing. But if they show up even once in a while, you need a physical place for them to go. Some firms may shrink their office space (indeed, many have—and unlike this New York Times journalist, I have a really hard time feeling bad for landlords of office buildings); but they aren’t giving it up entirely. It’s possible that firms could start trading off—you get the building on Mondays, we get it on Tuesdays—but so far this seems to be rare, and it does raise a lot of legitimate logistical and security concerns. So our global problem of office buildings that are empty, wasted space most of the time is going to get worse, not better. Manhattan will still empty out every night; it just won’t fill up as much during the day. This is honestly a major drain on our entire civilization—building and maintaining all those structures that are only used at most 1/3 of 5/7 of the time, and soon even less—and we really should stop ignoring it. No wonder our real estate is so expensive, when half of it is only used 20% of the time!
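To make that utilization arithmetic explicit, here are the fractions from the paragraph above as a one-line Python check:

```python
# Offices are occupied roughly 8 of 24 hours, on 5 of 7 days:
# that is, 1/3 of 5/7 of the time.
utilization = (8 / 24) * (5 / 7)
print(f"Office utilization: {utilization:.0%}")  # ~24%, close to the 20% cited
```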

Moreover, not everyone gets to work remotely. Your job must be something that can be done remotely—something that involves dealing with information, not physical objects. That includes a wide and ever-growing range of jobs, from artists and authors to engineers and software developers—but it doesn’t include everyone. It basically means what we call “white-collar” work.

Indeed, it is largely limited to the upper-middle class. The rich never really worked anyway, though sometimes they pretend to, convincing themselves that managing a stock portfolio (that would actually grow faster if they let it sit) constitutes “work”. And the working class? By and large, they didn’t get the chance to work remotely. While 73% of workers with salaries above $200,000 worked remotely in 2020, only 12% of workers with salaries under $25,000 did, and there is a smooth trend where, across the board, the more money you make, the more likely you have been able to work remotely.

This will only intensify the divide between white-collar and blue-collar workers. They already think we don’t do “real work”; now we don’t even go to work. And while blue-collar workers are constantly complaining about contempt from white-collar elites, I think the shoe is really on the other foot. I have met very few white-collar workers who express contempt for blue-collar workers—and I have met very few blue-collar workers who don’t express anger and resentment toward white-collar workers. I keep hearing blue-collar people say that we think they are worthless and incompetent, when they are literally the only ones ever saying that. I can’t stop saying things that I never said in the first place.

The rich and powerful may look down on them, but they look down on everyone. (Maybe they look down on blue-collar workers more? I’m not even sure about that.) I think politicians sometimes express contempt for blue-collar workers, but I don’t think this reflects what most white-collar workers feel.

And the highly-educated may express some vague sense of pity or disappointment in people who didn’t get college degrees, and sometimes even anger (especially when they do things like vote for Donald Trump), but the really vitriolic hatred is clearly in the opposite direction (indeed, I have no better explanation for how otherwise-sane people could vote for Donald Trump). And I certainly wouldn’t say that everyone needs a college degree (though I became tempted to, when so many people without college degrees voted for Donald Trump).

This really isn’t us treating them with contempt: This is them having a really severe inferiority complex. And as information technology (that white-collar work created) gives us—but not them—the privilege of staying home, that is only going to get worse.

It’s not their fault: Our culture of meritocracy puts a little bit of inferiority complex in all of us. It tells us that success and failure are our own doing, and so billionaires deserve to have everything and the poor deserve to have nothing. And blue-collar workers have absolutely internalized these attitudes: Most of them believe that poor people choose to stay on welfare forever rather than get jobs (when welfare has time limits and work requirements, so this is simply not an option—and you would know this from the Wikipedia page on TANF).

I think that what they experience as “contempt by white-collar elites” is really the pain of living in an illusory meritocracy. They were told—and they came to believe—that working hard would bring success, and they have worked very hard, and watched other people be much more successful. They assume that the rich and powerful are white-collar workers, when really they are non-workers; they are people the world was handed to on a silver platter. (What, you think George W. Bush earned his admission to Yale?)

And thus, we can shout until we are blue in the face that plumbers, bricklayers and welders are the backbone of civilization—and they are, and I absolutely mean that; our civilization would, in an almost literal sense, collapse without them—but it won’t make any difference. They’ll still feel the pain of living in a society that gave them very little and tells them that people get what they deserve.

I don’t know what to say to such people, though. When your political attitudes are based on beliefs that are objectively false, that you could know are objectively false if you simply bothered to look them up… what exactly am I supposed to say to you? How can we have a useful political conversation when half the country doesn’t even believe in fact-checking?

Honestly I wish someone had explained to them that even the most ideal meritocratic capitalism wouldn’t reward hard work. Work is a cost, not a benefit, and the whole point of technological advancement is to allow us to accomplish more with less work. The ideal capitalism would reward talent—you would succeed by accomplishing things, regardless of how much effort you put into them. People would be rich mainly because they are brilliant, not because they are hard-working. The closest thing we have to ideal capitalism right now is probably professional sports. And no amount of effort could ever possibly make me into Steph Curry.

If that isn’t the world we want to live in, so be it; let’s do something else. I did nothing to earn either my high IQ or my chronic migraines, so it really does feel unfair that the former increases my income while the latter decreases it. But the labor theory of value has always been wrong; taking more sweat or more hours to do the same thing is worse, not better. The dignity of labor consists in its accomplishment, not its effort. Sisyphus is not happy, because his work is pointless.

Honestly at this point I think our best bet is just to replace all blue-collar work with automation, thus rendering it all moot. And then maybe we can all work remotely, just pushing code patches to the robots that do everything. (And no doubt this will prove my “contempt”: I want to replace you! No, I want to replace the grueling work that you have been forced to do to make a living. I want you—the human being—to be able to do something more fun with your life, even if that’s just watching television and hanging out with friends.)

A guide to surviving the apocalypse

Aug 21 JDN 2459813

Some have characterized the COVID pandemic as an apocalypse, though it clearly isn’t. But a real apocalypse is certainly possible, and its low probability is offset by its extreme importance. The destruction of human civilization would be quite literally the worst thing that ever happened, and if it led to outright human extinction or civilization was never rebuilt, it could prevent a future that would have trillions if not quadrillions of happy, prosperous people.

So let’s talk about things people like you and me could do to survive such a catastrophe, and hopefully work to rebuild civilization. I’ll try to inject a somewhat light-hearted tone into this otherwise extraordinarily dark topic; we’ll see how well it works. What specifically we would want—or be able—to do will depend on the specific scenario that causes the apocalypse, so I’ll address those specifics shortly. But first, let’s talk about general stuff that should be useful in most, if not all, apocalypse scenarios.

It turns out that these general pieces of advice are also pretty good advice for much smaller-scale disasters such as fires, tornados, or earthquakes—all of which are far more likely to occur. Your top priority is to provide for the following basic needs:

1. Water: You will need water to drink. You should have some kind of stockpile of clean water; bottled water is fine but overpriced, and you’d do just as well to bottle tap water (as long as you do it before the crisis occurs and the water system goes down). Better still would be to have water filtration and purification equipment so that you can simply gather whatever water is available and make it drinkable.

2. Food: You will need nutritious, non-perishable food. Canned vegetables and beans are ideal, but you can also get a lot of benefit from dry staples such as crackers. Processed foods and candy are not as nutritious, but they do tend to keep well, so they can do in a pinch. Avoid anything that spoils quickly or requires sophisticated cooking. In the event of a disaster, you will be able to make fire and possibly run a microwave on a solar panel or portable generator—but you can’t rely on the electrical or gas mains to stay operational, and even boiling will require precious water.

3. Shelter: Depending on the disaster, your home may or may not remain standing—and even if it is standing, it may not be fit for habitation. Consider backup options for shelter: Do you have a basement? Do you own any tents? Do you know people you could move in with, if their homes survive and yours doesn’t?

4. Defense: It actually makes sense to own a gun or two in the event of a crisis. (In general it’s actually a big risk, though, so keep that in mind: the person your gun is most likely to kill is you.) Just don’t go overboard and do what we all did in Oregon Trail, stocking plenty of bullets but not enough canned food. Ammo will be hard to replace, though; your best option may actually be a gauss rifle (yes, those are real, and yes, I want one), because all they need for ammo is ferromagnetic metal of the appropriate shape and size. Then, all you need is a solar panel to charge its battery and some machine tools to convert scrap metal into ammo.

5. Community: Humans are highly social creatures, and we survive much better in groups. Get to know your neighbors. Stay in touch with friends and family. Not only will this improve your life in general, it will also give you people to reach out to if you need help during the crisis and the government is indisposed (or toppled). Having a portable radio that runs on batteries, solar power, or hand-crank operation will also be highly valuable for staying in touch with people during a crisis. (Likewise flashlights!)

Now, on to the specific scenarios. I will consider the following potential causes of apocalypse: Alien Invasion, Artificial Intelligence Uprising, Climate Disaster, Conventional War, Gamma-Ray Burst, Meteor Impact, Plague, Nuclear War, and last (and, honestly, least), Zombies.

I will rate each apocalypse by its risk level, based on its probability of occurring within the next 100 years (roughly the time I think it will take us to meaningfully colonize space and thereby change the game):

Very High: 1% or more

High: 0.1% – 1%

Moderate: 0.01% – 0.1%

Low: 0.001% – 0.01%

Very Low: 0.0001% – 0.001%

Tiny: 0.00001% – 0.0001%

Minuscule: less than 0.00001%

I will also rate your relative safety in different possible locations you might find yourself during the crisis:

Very Safe: You will probably survive.

Safe: You will likely survive if you are careful.

Dicey: You may survive, you may not. Hard to say.

Dangerous: You will likely die unless you are very careful.

Very Dangerous: You will probably die.

Hopeless: You will definitely die.

I’ll rate the following locations for each, with some explanation: City, Suburb, Rural Area, Military Base, Underground Bunker, Ship at Sea. Certain patterns will emerge—but some results may surprise you. This may tell you where to go to have the best chance of survival in the event of a disaster (though I admit bunkers are often in short supply).
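For anyone who wants the rating scheme in machine-readable form, here is a minimal Python sketch; the labels and probability bands are copied from the lists above, but the data structure and function are my own illustration:

```python
# Risk bands: probability of a given apocalypse in the next 100 years.
# Each entry is (label, lower bound of the band, as a probability 0-1).
RISK_BANDS = [
    ("Very High", 1e-2),  # 1% or more
    ("High", 1e-3),       # 0.1% - 1%
    ("Moderate", 1e-4),   # 0.01% - 0.1%
    ("Low", 1e-5),        # 0.001% - 0.01%
    ("Very Low", 1e-6),   # 0.0001% - 0.001%
    ("Tiny", 1e-7),       # 0.00001% - 0.0001%
]

def risk_label(p: float) -> str:
    """Map a probability to the risk label used in this guide."""
    for label, lower_bound in RISK_BANDS:
        if p >= lower_bound:
            return label
    return "Minuscule"    # less than 0.00001%

# The location-safety scale, from best to worst:
SAFETY_LEVELS = ["Very Safe", "Safe", "Dicey",
                 "Dangerous", "Very Dangerous", "Hopeless"]

# Example: a 0.05% chance in the next century rates as "Moderate".
print(risk_label(0.0005))
```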

All right, here goes!

Alien Invasion

Risk: Low

There are probably sapient aliens somewhere in this vast universe, maybe even some with advanced technology. But they are very unlikely to be willing to expend the enormous resources to travel across the stars just to conquer us. Then again, hey, it could happen; maybe they’re imperialists, or they have watched our TV commercials and heard the siren song of oregano.

City: Dangerous

Population centers are likely to be primary targets for their invasion. They probably won’t want to exterminate us outright (why would they?), but they may want to take control of our cities, and are likely to kill a lot of people when they do.

Suburb: Dicey

Outside the city centers will be a bit safer, but hardly truly safe.

Rural Area: Dicey

Where humans are spread out, we’ll present less of a target. Then again, if you own an oregano farm….

Military Base: Very Dangerous

You might think that having all those planes and guns around would help, but these will surely be prime targets in an invasion. Since the aliens are likely to be far more technologically advanced, it’s unlikely our military forces could put up much resistance. Our bases would likely be wiped out almost immediately.

Underground Bunker: Safe

This is a good place to be. Orbital and aerial weapons won’t be very effective against underground targets, and even ground troops would have trouble finding and attacking an isolated bunker. Since they probably won’t want to exterminate us, hiding in your bunker until they establish a New World Order could work out for you.

Ship at Sea: Dicey

As long as it’s a civilian vessel, you should be okay. A naval vessel is just as dangerous as a base, if not more so; they would likely strike our entire fleets from orbit almost instantly. But the aliens are unlikely to have much reason to bother attacking a cruise ship or a yacht. Then again, if they do, you’re toast.

Artificial Intelligence Uprising

Risk: Very High

While it sounds very sci-fi, this is one of the most probable apocalypse scenarios, and we should be working to defend against it. There are dozens of ways that artificial intelligence could get out of control and cause tremendous damage, particularly if the AI got control of combat drones or naval vessels. This could mean a superintelligent AI beyond human comprehension, but it need not; it could in fact be a very stupid AI that was programmed to make profits for Hasbro and decided that melting people into plastic was the best way to do that.

City: Very Dangerous

Cities don’t just have lots of people; they also have lots of machines. If the AI can hack our networks, they may be able to hack into not just phones and laptops, but even cars, homes, and power plants. Depending on the AI’s goals (which are very hard to predict), cities could become disaster zones almost immediately, as thousands of cars shut down and crash and all the power plants get set to overload.

Suburb: Dangerous

Definitely safer than the city, but still, you’ve got plenty of technology around you for the AI to exploit.

Rural Area: Dicey

The further you are from other people and their technology, the safer you’ll be. Having bad wifi out in the boonies may actually save your life. Then again, even tractors have software updates now….

Military Base: Very Dangerous

The military is extremely high-tech and all network-linked. Unless they can successfully secure their systems against the AI very well, very fast, suddenly all the guided missiles and combat drones and sentry guns will be deployed in service of the robot revolution.

Underground Bunker: Safe

As long as your bunker is off the grid, you should be okay. The robots won’t have any weapons we don’t already have, and bunkers are built because they protect pretty well against most weapons.

Ship at Sea: Hopeless

You are surrounded by technology and you have nowhere to run. A military vessel is worse than a civilian ship, but either way, you’re pretty much doomed. The AI is going to take over the radio, the GPS system, maybe even the controls of the ship themselves. It could intentionally overload the engines, or drive you into rocks, or simply shut down everything and leave you to starve at sea. A sailing yacht with a hand-held compass and sextant should be relatively safe, if you manage to get your hands on one of those somehow.

Climate Disaster

Risk: Moderate

Let’s be clear here. Some kind of climate disaster is inevitable; indeed, it’s already in progress. But what I’m talking about is something really severe, something that puts all of human civilization in jeopardy. That, fortunately, is fairly unlikely—and even more so after the big bill that just passed!

City: Dicey

Buildings provide shelter from the elements, and cities will be the first places we defend. Dikes will be built around Manhattan like the ones around Amsterdam. You won’t need to worry about fires, snowstorms, or flooding very much. Still, a really severe crisis could cause all utility systems to break down, meaning you won’t have heating and cooling.

Suburb: Dicey

The suburbs will be about as safe as the cities, maybe a little worse because there isn’t as much shelter if you lose your home to a disaster event.

Rural Area: Dangerous

Remote areas are going to have it the worst. Especially if you’re near a coast that can flood or a forest that can burn, you’re exposed to the elements and there won’t be much infrastructure to protect you. Your best bet is to move in toward the city, where other people will try to help you against the coming storms.

Military Base: Very Safe

Military infrastructure will be prioritized in defense plans, and soldiers are already given lots of survival tools and training. If you can get yourself to a military base and they actually let you in, you really won’t have much to worry about.

Underground Bunker: Very Safe

Underground doesn’t have a lot of weather, it turns out. As long as your bunker is well sealed against flooding, earthquakes are really your only serious concern, and climate change isn’t going to affect those very much.

Ship at Sea: Safe

Increased frequency of hurricanes and other storms will make the sea more dangerous, but as long as you steer clear of storms as they come, you should be okay.

Conventional War

Risk: Moderate

Once again, I should clarify. Obviously there are going to be wars—there are wars going on this very minute. But a truly disastrous war, a World War 3 still fought with conventional weapons, is fairly unlikely. We can’t rule it out, but we don’t have to worry too much—or rather, it’s nukes we should worry about, as I’ll get to in a little bit. It’s unlikely that truly apocalyptic damage could be caused by conventional weapons alone.

City: Dicey

Cities will often be where battles are fought, as they are strategically important. Expect bombing raids and perhaps infantry or tank battalions. Still, it’s actually pretty feasible to survive in a city that is under attack by conventional weapons; while lots of people certainly die, in most wars, most people actually don’t.

Suburb: Safe

Suburbs rarely make interesting military targets, so you’ll mainly have to worry about troops passing through on their way to cities.

Rural Area: Safe

For similar reasons to the suburbs, you should be relatively safe out in the boonies. You may encounter some scattered skirmishes, but you’re unlikely to face sustained attack.

Military Base: Dicey

Whether military bases are safe really depends on whether your side is winning or not. If they are, then you’re probably okay; that’s where all the soldiers and military equipment are, there to defend you. If they aren’t, then you’re in trouble; military bases make nice, juicy targets for attack.

Ship at Sea: Safe

There’s a reason it is big news every time a civilian cruise liner gets sunk in a war (does the Lusitania ring a bell?); it really doesn’t happen that much. Transport ships are at risk of submarine raids, and of course naval vessels will face constant threats; but cruise liners aren’t strategically important, so military forces have very little reason to target them.

Gamma-Ray Burst

Risk: Tiny

While gamma-ray bursts certainly happen all the time, so far they have all been extremely remote from Earth. It is currently estimated that they only happen a few times in any given galaxy every few million years. And each one is concentrated in a narrow beam, so even when they happen they only affect a few nearby stars. This is very good news, because if it happened… well, that’s pretty much it. We’d be doomed.

If a gamma-ray burst happened within a few light-years of us, and happened to be pointed at us, it would scour the Earth, boil the water, burn the atmosphere. Our entire planet would become a dead, molten rock—if, that is, it wasn’t so close that it blew the planet up completely. And the same is going to be true of Mars, Mercury, and every other planet in our solar system.

Underground Bunker: Very Dangerous

Your one meager hope of survival would be to be in an underground bunker at the moment the burst hit. Since most bursts give very little warning, you are unlikely to achieve this unless you, like, live in a bunker—which sounds pretty terrible. Moreover, your bunker needs to be a 100% closed system, and deep underground; the surface will be molten and the air will be burned away. There’s honestly a pretty narrow band of the Earth’s crust that’s deep enough to protect you but not already hot enough to doom you.

Anywhere Else: Hopeless

If you aren’t deep underground at the moment the burst hits us, that’s it; you’re dead. If you are on the side of the Earth facing the burst, you will die mercifully quickly, burned to a crisp instantly. If you are not, your death will be a bit slower, as the raging firestorm that engulfs the Earth, boils the oceans, and burns away the atmosphere will take some time to hit you. But your demise is equally inevitable.

Well, that was cheery. Remember, it’s really unlikely to happen! Moving on!

Meteor Impact

Risk: Tiny

Yes, “it has happened before, and it will happen again; the only question is when.” However, meteors of sufficient size to cause a global catastrophe only seem to hit the Earth about once every couple hundred million years. Moreover, right now is the first time in human history when we might actually have a serious chance of detecting and deflecting an oncoming meteor—so even if one were on the way, we’d still have some hope of saving ourselves.

Underground Bunker: Dangerous

A meteor impact would be a lot like a gamma-ray burst, only much less so. (Almost anything is “much less so” than a gamma-ray burst, with the lone exception of a supernova, which is always “much more so”.) It would still boil a lot of ocean and start a massive firestorm, but it wouldn’t boil all the ocean, and the firestorm wouldn’t burn away all the oxygen in the atmosphere. Underground is clearly the safest place to be, preferably on the other side of the planet from the impact.

Anywhere Else: Very Dangerous

If you are above ground, it wouldn’t otherwise matter too much where you are, at least not in any way that’s easy to predict. Further from the impact is obviously better than closer, but the impact could be almost anywhere. After the initial destruction there would be a prolonged impact winter, which could cause famines and wars. Rural areas might be a bit safer than cities, but then again if you are in a remote area, you are less likely to get help if you need it.

Plague

Risk: Low

Obviously, the probability of a pandemic is 100%. You best start believing in pandemics; we’re in one. But pandemics aren’t apocalyptic plagues. To really jeopardize human civilization, there would have to be a superbug that spreads and mutates rapidly, has a high fatality rate, and remains highly resistant to treatment and vaccination. Fortunately, there aren’t a lot of bacteria or viruses like that; the last one we had was the Black Death, and humanity made it through that one. In fact, there is good reason to believe that with modern medical technology, even a pathogen like the Black Death wouldn’t be nearly as bad this time around.

City: Dangerous

Assuming the pathogen spreads from human to human, concentrations of humans are going to be the most dangerous places to be. Staying indoors and following whatever lockdown, masking, and other safety protocols the authorities recommend will surely help you; but if the plague gets bad enough, infrastructure could start falling apart and even those things will stop working.

Suburb: Safe

In a suburb, you are much more isolated from other people. You can stay in your home and be fairly safe from the plague, as long as you are careful.

Rural Area: Dangerous

The remoteness of a rural area means that you’d think you wouldn’t have to worry as much about human-to-human transmission. But as we’ve learned from COVID, rural areas are full of stubborn right-wing people who refuse to follow government safety protocols. There may not be many people around, but they probably will be taking stupid risks and spreading the disease all over the place. Moreover, if the disease can be carried by animals—as quite a few can—livestock will become an added danger.

Military Base: Safe

If there’s one place in the world where people follow government safety protocols, it’s a military base. Bases will have top-of-the-line equipment, skilled and disciplined personnel, and up-to-the-minute data on the spread of the pathogen.

Underground Bunker: Very Safe

The main thing you need to do is stay away from other people for a while, and a bunker is a great place to do that. As long as your bunker is well-stocked with food and water, you can ride out the plague and come back out once it dies down.

Ship at Sea: Dicey

This is an all-or-nothing proposition. If no one on the ship has the disease, you’re probably safe as long as you remain at sea, because very few pathogens can spread that far through the air. On the other hand, if someone on your ship does carry the disease, you’re basically doomed.

Nuclear War

Risk: Very High

Honestly, this is the one that terrifies me. I have no way of knowing that Vladimir Putin or Xi Jinping won’t wake up one morning any day now and give the order to launch a thousand nuclear missiles. (I honestly wasn’t even sure Trump wouldn’t, so it’s a damn good thing he’s out of office.) They have no reason to, but they’re psychopathic enough that I can’t be sure they won’t.

City: Dangerous

Obviously, most of those missiles are aimed at cities. And if you happen to be in the center of such a city, this is very bad for your health. However, nukes are not the automatic death machines that they are often portrayed to be; sure, right at the blast center you’re vaporized. But Hiroshima and Nagasaki both had lots of survivors, many of whom lived on for years or even decades afterward, even despite the radiation poisoning.

Suburb: Dangerous

Being away from a city center might provide some protection, but then again it might not; it really depends on how the nukes are targeted. It’s actually quite unlikely that Russia or China (or whoever) would deploy large megaton-yield missiles, as they are very expensive; they could only have a few, making it easier to shoot them all down. The far more likely scenario is lots of kiloton-yield warheads, deployed in what is called a MIRV: a multiple independently targetable re-entry vehicle. One missile launches into space, then splits into many warheads, each of which can have a different target. It’s sort of like a cluster bomb, only the “little” clusters are each Hiroshima bombs. Those clusters might actually be spread over metropolitan areas relatively evenly, so being in a suburb might not save you. Or it might. Hard to say.

Rural Area: Dicey

If you are sufficiently remote from cities, the nukes probably won’t be aimed at you. And since most of the danger really happens right when the nuke hits, this is good news for you. You won’t have to worry about the blast or the radiation; your main concerns will be fallout and the resulting collapse of infrastructure. Nuclear winter could also be a risk, but recent studies suggest that’s relatively unlikely even in a full-scale nuclear exchange.

Military Base: Hopeless

The nukes are going to be targeted directly at military bases. Probably multiple nukes per base, in case some get shot down. Basically, if you are on a base at the time the missiles hit, you’re doomed. If you know the missiles are coming, your best bet would be to get as far from that base as you can, into as remote an area as you can. You’ll have a matter of minutes, so good luck.

Underground Bunker: Safe

There’s a reason we built a bunch of underground bunkers during the Cold War; they’re one of the few places you can go to really be safe from a nuclear attack. As long as your bunker is well-stocked and well-shielded, you can hide there and survive not only the initial attack, but the worst of the fallout as well.

Ship at Sea: Safe

Ships are small enough that they probably wouldn’t be targeted by nukes. Maybe if you’re on or near a major naval capital ship, like an aircraft carrier, you’d be in danger; someone might try to nuke that. (Even then, aircraft carriers are tough: Anything short of a direct hit might actually be survivable. In tests, carriers have remained afloat and largely functional even after a 100-kiloton nuclear bomb was detonated a mile away. They’re even radiation-shielded, because they have nuclear reactors.) But a civilian vessel or even a smaller naval vessel is unlikely to be targeted. Just stay miles away from any cities or any other ships, and you should be okay.

Zombies

Risk: Minuscule

Zombies per se—the literal undead—aren’t even real, so that’s just impossible. But something like zombies could maybe happen, in some very remote scenario in which some bizarre mutant strain of rabies or something spreads far and wide and causes people to go crazy and attack other people. Even then, if the infection is really only spread through bites, it’s not clear how it could ever reach a truly apocalyptic level; more likely, it would cause a lot of damage locally and then be rapidly contained, and we’d remember it like Pearl Harbor or 9/11: That terrible, terrible day when 5,000 people became zombies in Portland, and then they all died and it was over. An airborne or mosquito-borne virus would be much more dangerous, but then we’re really talking about a plague, not zombies. The ‘turns people into zombies’ part of the virus would be a lot less important than the ‘spreads through the air and kills you’ part.

Seriously, why is this such a common trope? Why do people think that this could cause an apocalypse?

City: Safe

Yes, safe, dammit. Once you have learned that zombies are on the loose, stay locked in your home, wearing heavy clothing (to block bites; a bite suit of the kind used for dog training is ideal, but a leather jacket or puffy coat would do) with a shotgun (or a gauss rifle, see above) at the ready, and you’ll probably be fine. Yes, this is the area of highest risk, due to the concentration of people who could potentially be infected with the zombie virus. But unless you are stupid—which people in these movies always seem to be—you really aren’t in all that much danger. Zombies can at most be as fast and strong as humans (often, they seem to be less!), so all you need to do is shoot them before they can bite you. And unlike fake movie zombies, anything genuinely possible will go down from any mortal wound, not just a perfect headshot—I assure you, humans, however crazed by infection they might be, can’t run at you if their hearts (or their legs) are gone. It might take a bit more damage to drop them than an ordinary person, if they aren’t slowed down by pain; but it wouldn’t require perfect marksmanship or any kind of special weaponry. Buckshot to the chest will work just fine.

Suburb: Safe

Similar to the city, only more so, because people there are more isolated.

Rural Area: Very Safe

And rural areas are even more isolated still—plus you have more guns than people, so you’ll have more guns than zombies.

Military Base: Very Safe

Even more guns, plus military training and a chain of command! The zombies don’t stand a chance. A military base would be a great place to be, and indeed that’s where the containment would begin, as troops march from the bases to the cities to clear out the zombies. Shaun of the Dead (of all things!) actually got this right: One local area gets pretty bad, but then the Army comes in and takes all the zombies out.

Underground Bunker: Very Safe

A bunker remains safe in the event of zombies, just as it is in most other scenarios.

Ship at Sea: Very Safe

As long as the infection hasn’t spread to the ship you are currently on and the zombies can’t swim, you are at literally zero risk.