When maximizing utility doesn’t

Jun 4 JDN 2460100

Expected utility theory behaves quite strangely when you consider questions involving mortality.

Nick Beckstead and Teruji Thomas recently published a paper on this: All well-defined utility functions are either reckless in that they make you take crazy risks, or timid in that they tell you not to take even very small risks. It’s starting to make me wonder if utility theory is even the right way to make decisions after all.

Consider a game of Russian roulette where the prize is $1 million. The revolver has 6 chambers, 3 with a bullet. So that’s a 1/2 chance of $1 million, and a 1/2 chance of dying. Should you play?

I think it’s probably a bad idea to play. But the prize does matter; if it were $100 million, or $1 billion, maybe you should play after all. And if it were $10,000, you clearly shouldn’t.

And lest you think that there is no chance of dying you should be willing to accept for any amount of money, consider this: Do you drive a car? Do you cross the street? Do you do anything that could ever have any risk of shortening your lifespan in exchange for some other gain? I don’t see how you could live a remotely normal life without doing so. It might be a very small risk, but it’s still there.

This raises the question: Suppose we have some utility function over wealth; ln(x) is a quite plausible one. What utility should we assign to dying?

The fact that the prize matters means that we can’t assign death a utility of negative infinity. It must be some finite value.

But suppose we choose some value, -V (so V is positive), for the utility of dying. Then we can find some amount of money that will make you willing to play: setting the expected utility of playing to zero, (1/2) ln(x) - (1/2) V = 0, gives ln(x) = V, so x = e^V. Any prize larger than that makes playing worthwhile.
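As a sanity check on the threshold, here's a quick computation. The value V = 20 is an arbitrary assumption of mine (nothing in the argument pins it down), and I'm normalizing the utility of not playing to zero:

```python
import math

V = 20.0  # assumed (finite) disutility of death, in utils

# Indifference: 0.5 * ln(x) + 0.5 * (-V) = 0  =>  ln(x) = V  =>  x = e^V
threshold = math.exp(V)
print(f"${threshold:,.0f}")  # about $485 million
```

Pick a bigger V and the threshold explodes exponentially; but as long as V is finite, some prize is always big enough.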

Now, suppose that you have the chance to play this game over and over again. Your marginal utility of wealth will change each time you win, so we may need to increase the prize to keep you playing; but we could do that. The prizes could keep scaling up as needed to make you willing to play. So then, you will keep playing, over and over—and then, sooner or later, you’ll die. So, at each step you maximized utility—but at the end, you didn’t get any utility.
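The arithmetic here is stark: surviving n rounds of a 50/50 lethal game has probability 0.5^n, which collapses fast. A minimal sketch:

```python
# Probability of still being alive after n rounds of a game
# that kills you with probability 1/2 each round.
for n in (1, 10, 30):
    print(f"after {n:2d} rounds: {0.5**n:.2e}")
# after  1 rounds: 5.00e-01
# after 10 rounds: 9.77e-04
# after 30 rounds: 9.31e-10
```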

Well, at that point your heirs will be rich, right? So maybe you’re actually okay with that. Maybe there is some amount of money ($1 billion?) that you’d be willing to die in order to ensure your heirs have.

But what if you don’t have any heirs? Or, what if we consider making such a decision as a civilization? What if death means not only the destruction of you, but also the destruction of everything you care about?

As a civilization, are there choices before us that would result in some chance of a glorious, wonderful future, but also some chance of total annihilation? I think it’s pretty clear that there are. Nuclear technology, biotechnology, artificial intelligence. For about the last century, humanity has been at a unique epoch: We are being forced to make this kind of existential decision, to face this kind of existential risk.

It’s not that we were immune to being wiped out before; an asteroid could have taken us out at any time (as happened to the dinosaurs), and a volcanic eruption nearly did. But this is the first time in humanity’s existence that we have had the power to destroy ourselves. This is the first time we have a decision to make about it.

One possible answer would be to say we should never be willing to take any kind of existential risk. Unlike the case of an individual, when we are speaking about an entire civilization, it no longer seems obvious that we shouldn't set the utility of death at negative infinity. But if we really did this, it would require shutting down whole industries—definitely halting all research in AI and biotechnology, probably disarming all nuclear weapons and destroying all their blueprints, and quite possibly even shutting down the coal and oil industries. It would be an utterly radical change, and it would require bearing great costs.

On the other hand, if we should decide that it is sometimes worth the risk, we will need to know when it is worth the risk. We currently don’t know that.

Even worse, we will need some mechanism for ensuring that we don't take the risk when it isn't worth it. And we have nothing like such a mechanism. In fact, most research in AI and biotechnology is widely dispersed, with no central governing authority and regulations that are inconsistent between countries. I think it's quite apparent that right now, there are research projects going on somewhere in the world that aren't worth the existential risk they pose for humanity—but the people doing them are convinced that they are worth it because they so greatly advance their national interest—or simply because they could be so very profitable.

In other words, humanity finally has the power to make a decision about our survival, and we’re not doing it. We aren’t making a decision at all. We’re letting that responsibility fall upon more or less randomly-chosen individuals in government and corporate labs around the world. We may be careening toward an abyss, and we don’t even know who has the steering wheel.

A guide to surviving the apocalypse

Aug 21 JDN 2459820

Some have characterized the COVID pandemic as an apocalypse, though it clearly isn’t. But a real apocalypse is certainly possible, and its low probability is offset by its extreme importance. The destruction of human civilization would be quite literally the worst thing that ever happened, and if it led to outright human extinction or civilization was never rebuilt, it could prevent a future that would have trillions if not quadrillions of happy, prosperous people.

So let’s talk about things people like you and me could do to survive such a catastrophe, and hopefully work to rebuild civilization. I’ll try to inject a somewhat light-hearted tone into this otherwise extraordinarily dark topic; we’ll see how well it works. What specifically we would want—or be able—to do will depend on the specific scenario that causes the apocalypse, so I’ll address those specifics shortly. But first, let’s talk about general stuff that should be useful in most, if not all, apocalypse scenarios.

It turns out that these general pieces of advice are also pretty good advice for much smaller-scale disasters such as fires, tornados, or earthquakes—all of which are far more likely to occur. Your top priority is to provide for the following basic needs:

1. Water: You will need water to drink. You should have some kind of stockpile of clean water; bottled water is fine but overpriced, and you’d do just as well to bottle tap water (as long as you do it before the crisis occurs and the water system goes down). Better still would be to have water filtration and purification equipment so that you can simply gather whatever water is available and make it drinkable.

2. Food: You will need nutritious, non-perishable food. Canned vegetables and beans are ideal, but you can also get a lot of benefit from dry staples such as crackers. Processed foods and candy are not as nutritious, but they do tend to keep well, so they can do in a pinch. Avoid anything that spoils quickly or requires sophisticated cooking. In the event of a disaster, you will be able to make fire and possibly run a microwave on a solar panel or portable generator—but you can’t rely on the electrical or gas mains to stay operational, and even boiling will require precious water.

3. Shelter: Depending on the disaster, your home may or may not remain standing—and even if it is standing, it may not be fit for habitation. Consider backup options for shelter: Do you have a basement? Do you own any tents? Do you know people you could move in with, if their homes survive and yours doesn’t?

4. Defense: It actually makes sense to own a gun or two in the event of a crisis. (In general it’s actually a big risk, though, so keep that in mind: the person your gun is most likely to kill is you.) Just don’t go overboard and do what we all did in Oregon Trail, stocking plenty of bullets but not enough canned food. Ammo will be hard to replace, though; your best option may actually be a gauss rifle (yes, those are real, and yes, I want one), because all they need for ammo is ferromagnetic metal of the appropriate shape and size. Then, all you need is a solar panel to charge its battery and some machine tools to convert scrap metal into ammo.

5. Community: Humans are highly social creatures, and we survive much better in groups. Get to know your neighbors. Stay in touch with friends and family. Not only will this improve your life in general, it will also give you people to reach out to if you need help during the crisis and the government is indisposed (or toppled). Having a portable radio that runs on batteries, solar power, or hand-crank operation will also be highly valuable for staying in touch with people during a crisis. (Likewise flashlights!)

Now, on to the specific scenarios. I will consider the following potential causes of apocalypse: Alien Invasion, Artificial Intelligence Uprising, Climate Disaster, Conventional War, Gamma-Ray Burst, Meteor Impact, Plague, Nuclear War, and last (and, honestly, least), Zombies.

I will rate each apocalypse by its risk level, based on its probability of occurring within the next 100 years (roughly the time I think it will take us to meaningfully colonize space and thereby change the game):

Very High: 1% or more

High: 0.1% – 1%

Moderate: 0.01% – 0.1%

Low: 0.001% – 0.01%

Very Low: 0.0001% – 0.001%

Tiny: 0.00001% – 0.0001%

Minuscule: 0.00001% or less

I will also rate your relative safety in different possible locations you might find yourself during the crisis:

Very Safe: You will probably survive.

Safe: You will likely survive if you are careful.

Dicey: You may survive, you may not. Hard to say.

Dangerous: You will likely die unless you are very careful.

Very Dangerous: You will probably die.

Hopeless: You will definitely die.

I’ll rate the following locations for each, with some explanation: City, Suburb, Rural Area, Military Base, Underground Bunker, Ship at Sea. Certain patterns will emerge—but some results may surprise you. This may tell you where to go to have the best chance of survival in the event of a disaster (though I admit bunkers are often in short supply).

All right, here goes!

Alien Invasion

Risk: Low

There are probably sapient aliens somewhere in this vast universe, maybe even some with advanced technology. But they are very unlikely to be willing to expend the enormous resources to travel across the stars just to conquer us. Then again, hey, it could happen; maybe they’re imperialists, or they have watched our TV commercials and heard the siren song of oregano.

City: Dangerous

Population centers are likely to be primary targets for their invasion. They probably won’t want to exterminate us outright (why would they?), but they may want to take control of our cities, and are likely to kill a lot of people when they do.

Suburb: Dicey

Outside the city centers will be a bit safer, but hardly truly safe.

Rural Area: Dicey

Where humans are spread out, we’ll present less of a target. Then again, if you own an oregano farm….

Military Base: Very Dangerous

You might think that having all those planes and guns around would help, but these will surely be prime targets in an invasion. Since the aliens are likely to be far more technologically advanced, it’s unlikely our military forces could put up much resistance. Our bases would likely be wiped out almost immediately.

Underground Bunker: Safe

This is a good place to be. Orbital and aerial weapons won’t be very effective against underground targets, and even ground troops would have trouble finding and attacking an isolated bunker. Since they probably won’t want to exterminate us, hiding in your bunker until they establish a New World Order could work out for you.

Ship at Sea: Dicey

As long as it’s a civilian vessel, you should be okay. A naval vessel is just as dangerous as a base, if not more so; they would likely strike our entire fleets from orbit almost instantly. But the aliens are unlikely to have much reason to bother attacking a cruise ship or a yacht. Then again, if they do, you’re toast.

Artificial Intelligence Uprising

Risk: Very High

While it sounds very sci-fi, this is one of the most probable apocalypse scenarios, and we should be working to defend against it. There are dozens of ways that artificial intelligence could get out of control and cause tremendous damage, particularly if the AI got control of combat drones or naval vessels. This could mean a superintelligent AI beyond human comprehension, but it need not; it could in fact be a very stupid AI that was programmed to make profits for Hasbro and decided that melting people into plastic was the best way to do that.

City: Very Dangerous

Cities don’t just have lots of people; they also have lots of machines. If the AI can hack our networks, they may be able to hack into not just phones and laptops, but even cars, homes, and power plants. Depending on the AI’s goals (which are very hard to predict), cities could become disaster zones almost immediately, as thousands of cars shut down and crash and all the power plants get set to overload.

Suburb: Dangerous

Definitely safer than the city, but still, you’ve got plenty of technology around you for the AI to exploit.

Rural Area: Dicey

The further you are from other people and their technology, the safer you’ll be. Having bad wifi out in the boonies may actually save your life. Then again, even tractors have software updates now….

Military Base: Very Dangerous

The military is extremely high-tech and all network-linked. Unless they can successfully secure their systems against the AI very well, very fast, suddenly all the guided missiles and combat drones and sentry guns will be deployed in service of the robot revolution.

Underground Bunker: Safe

As long as your bunker is off the grid, you should be okay. The robots won’t have any weapons we don’t already have, and bunkers are built because they protect pretty well against most weapons.

Ship at Sea: Hopeless

You are surrounded by technology and you have nowhere to run. A military vessel is worse than a civilian ship, but either way, you’re pretty much doomed. The AI is going to take over the radio, the GPS system, maybe even the controls of the ship themselves. It could intentionally overload the engines, or drive you into rocks, or simply shut down everything and leave you to starve at sea. A sailing yacht with a hand-held compass and sextant should be relatively safe, if you manage to get your hands on one of those somehow.

Climate Disaster

Risk: Moderate

Let’s be clear here. Some kind of climate disaster is inevitable; indeed, it’s already in progress. But what I’m talking about is something really severe, something that puts all of human civilization in jeopardy. That, fortunately, is fairly unlikely—and even more so after the big bill that just passed!

City: Dicey

Buildings provide shelter from the elements, and cities will be the first places we defend. Dikes will be built around Manhattan like the ones around Amsterdam. You won’t need to worry about fires, snowstorms, or flooding very much. Still, a really severe crisis could cause all utility systems to break down, meaning you won’t have heating and cooling.

Suburb: Dicey

The suburbs will be about as safe as the cities, maybe a little worse because there isn’t as much shelter if you lose your home to a disaster event.

Rural Area: Dangerous

Remote areas are going to have it the worst. Especially if you’re near a coast that can flood or a forest that can burn, you’re exposed to the elements and there won’t be much infrastructure to protect you. Your best bet is to move in toward the city, where other people will try to help you against the coming storms.

Military Base: Very Safe

Military infrastructure will be prioritized in defense plans, and soldiers are already given lots of survival tools and training. If you can get yourself to a military base and they actually let you in, you really won’t have much to worry about.

Underground Bunker: Very Safe

Underground doesn’t have a lot of weather, it turns out. As long as your bunker is well sealed against flooding, earthquakes are really your only serious concern, and climate change isn’t going to affect those very much.

Ship at Sea: Safe

Increased frequency of hurricanes and other storms will make the sea more dangerous, but as long as you steer clear of storms as they come, you should be okay.

Conventional War

Risk: Moderate

Once again, I should clarify. Obviously there are going to be wars—there are wars going on this very minute. But a truly disastrous war, a World War 3 still fought with conventional weapons, is fairly unlikely. We can’t rule it out, but we don’t have to worry too much—or rather, it’s nukes we should worry about, as I’ll get to in a little bit. It’s unlikely that truly apocalyptic damage could be caused by conventional weapons alone.

City: Dicey

Cities will often be where battles are fought, as they are strategically important. Expect bombing raids and perhaps infantry or tank battalions. Still, it’s actually pretty feasible to survive in a city that is under attack by conventional weapons; while lots of people certainly die, in most wars, most people actually don’t.

Suburb: Safe

Suburbs rarely make interesting military targets, so you’ll mainly have to worry about troops passing through on their way to cities.

Rural Area: Safe

For similar reasons to the suburbs, you should be relatively safe out in the boonies. You may encounter some scattered skirmishes, but you’re unlikely to face sustained attack.

Military Base: Dicey

Whether military bases are safe really depends on whether your side is winning or not. If they are, then you’re probably okay; that’s where all the soldiers and military equipment are, there to defend you. If they aren’t, then you’re in trouble; military bases make nice, juicy targets for attack.

Ship at Sea: Safe

There’s a reason it is big news every time a civilian cruise liner gets sunk in a war (does the Lusitania ring a bell?); it really doesn’t happen that much. Transport ships are at risk of submarine raids, and of course naval vessels will face constant threats; but cruise liners aren’t strategically important, so military forces have very little reason to target them.

Gamma-Ray Burst

Risk: Tiny

While gamma-ray bursts certainly happen all the time, so far they have all been extremely remote from Earth. It is currently estimated that they only happen a few times in any given galaxy every few million years. And each one is concentrated in a narrow beam, so even when they happen they only affect a few nearby stars. This is very good news, because if it happened… well, that’s pretty much it. We’d be doomed.

If a gamma-ray burst happened within a few light-years of us, and happened to be pointed at us, it would scour the Earth, boil the water, burn the atmosphere. Our entire planet would become a dead, molten rock—if, that is, it wasn’t so close that it blew the planet up completely. And the same is going to be true of Mars, Mercury, and every other planet in our solar system.

Underground Bunker: Very Dangerous

Your one meager hope of survival would be to be in an underground bunker at the moment the burst hit. Since most bursts give very little warning, you are unlikely to achieve this unless you, like, live in a bunker—which sounds pretty terrible. Moreover, your bunker needs to be a 100% closed system, and deep underground; the surface will be molten and the air will be burned away. There’s honestly a pretty narrow band of the Earth’s crust that’s deep enough to protect you but not already hot enough to doom you.

Anywhere Else: Hopeless

If you aren’t deep underground at the moment the burst hits us, that’s it; you’re dead. If you are on the side of the Earth facing the burst, you will die mercifully quickly, burned to a crisp instantly. If you are not, your death will be a bit slower, as the raging firestorm that engulfs the Earth, boils the oceans, and burns away the atmosphere will take some time to hit you. But your demise is equally inevitable.

Well, that was cheery. Remember, it’s really unlikely to happen! Moving on!

Meteor Impact

Risk: Tiny

Yes, “it has happened before, and it will happen again; the only question is when.” However, meteors with sufficient size to cause a global catastrophe only seem to hit the Earth about once every couple hundred million years. Moreover, right now is the first time in human history when we might actually have a serious chance of detecting and deflecting an oncoming meteor—so even if one were on the way, we’d still have some hope of saving ourselves.

Underground Bunker: Dangerous

A meteor impact would be a lot like a gamma-ray burst, only much less so. (Almost anything is “much less so” than a gamma-ray burst, with the lone exception of a supernova, which is always “much more so”.) It would still boil a lot of ocean and start a massive firestorm, but it wouldn’t boil all the ocean, and the firestorm wouldn’t burn away all the oxygen in the atmosphere. Underground is clearly the safest place to be, preferably on the other side of the planet from the impact.

Anywhere Else: Very Dangerous

If you are above ground, it wouldn’t otherwise matter too much where you are, at least not in any way that’s easy to predict. Further from the impact is obviously better than closer, but the impact could be almost anywhere. After the initial destruction there would be a prolonged impact winter, which could cause famines and wars. Rural areas might be a bit safer than cities, but then again if you are in a remote area, you are less likely to get help if you need it.

Plague

Risk: Low

Obviously, the probability of a pandemic is 100%. You best start believing in pandemics; we’re in one. But pandemics aren’t apocalyptic plagues. To really jeopardize human civilization, there would have to be a superbug that spreads and mutates rapidly, has a high fatality rate, and remains highly resistant to treatment and vaccination. Fortunately, there aren’t a lot of bacteria or viruses like that; the last one we had was the Black Death, and humanity made it through that one. In fact, there is good reason to believe that with modern medical technology, even a pathogen like the Black Death wouldn’t be nearly as bad this time around.

City: Dangerous

Assuming the pathogen spreads from human to human, concentrations of humans are going to be the most dangerous places to be. Staying indoors and following whatever lockdown/mask/safety protocols that authorities recommend will surely help you; but if the plague gets bad enough, infrastructure could start falling apart and even those things will stop working.

Suburb: Safe

In a suburb, you are much more isolated from other people. You can stay in your home and be fairly safe from the plague, as long as you are careful.

Rural Area: Dangerous

The remoteness of a rural area means that you’d think you wouldn’t have to worry as much about human-to-human transmission. But as we’ve learned from COVID, rural areas are full of stubborn right-wing people who refuse to follow government safety protocols. There may not be many people around, but they probably will be taking stupid risks and spreading the disease all over the place. Moreover, if the disease can be carried by animals—as quite a few can—livestock will become an added danger.

Military Base: Safe

If there’s one place in the world where people follow government safety protocols, it’s a military base. Bases will have top-of-the-line equipment, skilled and disciplined personnel, and up-to-the-minute data on the spread of the pathogen.

Underground Bunker: Very Safe

The main thing you need to do is be away from other people for awhile, and a bunker is a great place to do that. As long as your bunker is well-stocked with food and water, you can ride out the plague and come back out once it dies down.

Ship at Sea: Dicey

This is an all-or-nothing proposition. If no one on the ship has the disease, you’re probably safe as long as you remain at sea, because very few pathogens can spread that far through the air. On the other hand, if someone on your ship does carry the disease, you’re basically doomed.

Nuclear War

Risk: Very High

Honestly, this is the one that terrifies me. I have no way of knowing that Vladimir Putin or Xi Jinping won’t wake up one morning any day now and give the order to launch a thousand nuclear missiles. (I honestly wasn’t even sure Trump wouldn’t, so it’s a damn good thing he’s out of office.) They have no reason to, but they’re psychopathic enough that I can’t be sure they won’t.

City: Dangerous

Obviously, most of those missiles are aimed at cities. And if you happen to be in the center of such a city, this is very bad for your health. However, nukes are not the automatic death machines that they are often portrayed to be; sure, right at the blast center you’re vaporized. But Hiroshima and Nagasaki both had lots of survivors, many of whom lived on for years or even decades afterward, even despite the radiation poisoning.

Suburb: Dangerous

Being away from a city center might provide some protection, but then again it might not; it really depends on how the nukes are targeted. It’s actually quite unlikely that Russia or China (or whoever) would deploy large megaton-yield missiles, as they are very expensive; so you could only have a few, making it easier to shoot them all down. The far more likely scenario is lots of kiloton-yield warheads, deployed in what is called a MIRV: a multiple independently targetable re-entry vehicle. One missile launches into space, then releases many warheads, each of which can be aimed at a different target. It’s sort of like a cluster bomb, only each “little” cluster is a Hiroshima bomb. Those warheads might actually be spread over metropolitan areas relatively evenly, so being in a suburb might not save you. Or it might. Hard to say.

Rural Area: Dicey

If you are sufficiently remote from cities, the nukes probably won’t be aimed at you. And since most of the danger really happens right when the nuke hits, this is good news for you. You won’t have to worry about the blast or the radiation; your main concerns will be fallout and the resulting collapse of infrastructure. Nuclear winter could also be a risk, but recent studies suggest that’s relatively unlikely even in a full-scale nuclear exchange.

Military Base: Hopeless

The nukes are going to be targeted directly at military bases. Probably multiple nukes per base, in case some get shot down. Basically, if you are on a base at the time the missiles hit, you’re doomed. If you know the missiles are coming, your best bet would be to get as far from that base as you can, into as remote an area as you can. You’ll have a matter of minutes, so good luck.

Underground Bunker: Safe

There’s a reason we built a bunch of underground bunkers during the Cold War; they’re one of the few places you can go to really be safe from a nuclear attack. As long as your bunker is well-stocked and well-shielded, you can hide there and survive not only the initial attack, but the worst of the fallout as well.

Ship at Sea: Safe

Ships are small enough that they probably wouldn’t be targeted by nukes. Maybe if you’re on or near a major naval capital ship, like an aircraft carrier, you’d be in danger; someone might try to nuke that. (Even then, aircraft carriers are tough: Anything short of a direct hit might actually be survivable. In tests, carriers have remained afloat and largely functional even after a 100-kiloton nuclear bomb was detonated a mile away. They’re even radiation-shielded, because they have nuclear reactors.) But a civilian vessel or even a smaller naval vessel is unlikely to be targeted. Just stay miles away from any cities or any other ships, and you should be okay.

Zombies

Risk: Minuscule

Zombies per se—the literal undead—aren’t even real, so that’s just impossible. But something like zombies could maybe happen, in some very remote scenario in which some bizarre mutant strain of rabies or something spreads far and wide and causes people to go crazy and attack other people. Even then, if the infection is really only spread through bites, it’s not clear how it could ever reach a truly apocalyptic level; more likely, it would cause a lot of damage locally and then be rapidly contained, and we’d remember it like Pearl Harbor or 9/11: That terrible, terrible day when 5,000 people became zombies in Portland, and then they all died and it was over. An airborne or mosquito-borne virus would be much more dangerous, but then we’re really talking about a plague, not zombies. The ‘turns people into zombies’ part of the virus would be a lot less important than the ‘spreads through the air and kills you’ part.

Seriously, why is this such a common trope? Why do people think that this could cause an apocalypse?

City: Safe

Yes, safe, dammit. Once you have learned that zombies are on the loose, stay locked in your home, wearing heavy clothing (to block bites; a dog suit is ideal, but a leather jacket or puffy coat would do) with a shotgun (or a gauss rifle, see above) at the ready, and you’ll probably be fine. Yes, this is the area of highest risk, due to the concentration of people who could potentially be infected with the zombie virus. But unless you are stupid—which people in these movies always seem to be—you really aren’t in all that much danger. Zombies can at most be as fast and strong as humans (often, they seem to be less!), so all you need to do is shoot them before they can bite you. And unlike fake movie zombies, anything genuinely possible will go down from any mortal wound, not just a perfect headshot—I assure you, humans, however crazed by infection they might be, can’t run at you if their hearts (or their legs) are gone. It might take a bit more damage to drop them than an ordinary person, if they aren’t slowed down by pain; but it wouldn’t require perfect marksmanship or any kind of special weaponry. Buckshot to the chest will work just fine.

Suburb: Safe

Similar to the city, only more so, because people there are more isolated.

Rural Area: Very Safe

And rural areas are even more isolated still—plus you have more guns than people, so you’ll have more guns than zombies.

Military Base: Very Safe

Even more guns, plus military training and a chain of command! The zombies don’t stand a chance. A military base would be a great place to be, and indeed that’s where the containment would begin, as troops march from the bases to the cities to clear out the zombies. Shaun of the Dead (of all things!) actually got this right: One local area gets pretty bad, but then the Army comes in and takes all the zombies out.

Underground Bunker: Very Safe

A bunker remains safe in the event of zombies, just as it is in most other scenarios.

Ship at Sea: Very Safe

As long as the infection hasn’t spread to the ship you are currently on and the zombies can’t swim, you are at literally zero risk.

Risk compensation is not a serious problem

Nov 28 JDN 2459547

Risk compensation. It’s one of those simple but counter-intuitive ideas that economists love, and it has been a major consideration in regulatory policy since the 1970s.

The idea is this: The risk we face in our actions is partly under our control. It requires effort to reduce risk, and effort is costly. So when an external source, such as a government regulation, reduces our risk, we will compensate by reducing the effort we expend, and thus our risk will decrease less, or maybe not at all. Indeed, perhaps we’ll even overcompensate and make our risk worse!

It’s often used as an argument against various kinds of safety efforts: Airbags will make people drive worse! Masks will make people go out and get infected!

The basic theory here is sound: Effort to reduce risk is costly, and people try to reduce costly things.

Indeed, it’s theoretically possible that risk compensation could yield the exact same risk, or even more risk than before—or at least, I wasn’t able to prove that it could never happen, for any possible risk profile and cost function.

But I wasn’t able to find any actual risk profiles or cost functions that would yield this result, even for a quite general form. Here, let me show you.

Let’s say there’s some possible harm H. There is also some probability that it will occur, which you can mitigate with some choice x. For simplicity let’s say that the relationship is one-to-one, so that your risk of H occurring is precisely 1-x. Since probabilities must be between 0 and 1, so must x.

Reducing that risk costs effort. I won’t say much about that cost, except to call it c(x) and assume the following:

(1) It is increasing: Expending more effort (that is, reducing risk further) costs more.

(2) It is convex: Reducing risk from a high level to a low level (e.g. 0.9 to 0.8) costs less than reducing it from a low level to an even lower level (e.g. 0.2 to 0.1).

These both seem like eminently plausible—indeed, nigh-unassailable—assumptions. And they result in the following total expected cost (the opposite of your expected utility):

(1-x)H + c(x)

Now let’s suppose there’s some policy which will reduce your risk by a factor r, which must be between 0 and 1. Your cost then becomes:

r(1-x)H + c(x)

Minimizing this yields the following result:

rH = c'(x)

where c'(x) is the derivative of c(x). Since c(x) is increasing and convex, c'(x) is positive and increasing.

Thus, if I make r smaller—an external source of less risk—then I will reduce the optimal choice of x. This is risk compensation.
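As a quick numerical sanity check of this first-order condition (a sketch: the convex cost function c(x) = exp(x) - 1 and the value H = 2 are my own illustrative choices, not part of the model):

```python
import math

def argmin_cost(r, H, c, n=20_000):
    """Brute-force the x in [0, 1] minimizing r*(1-x)*H + c(x)."""
    xs = [i / n for i in range(n + 1)]
    return min(xs, key=lambda x: r * (1 - x) * H + c(x))

c = lambda x: math.exp(x) - 1   # increasing and convex, so c'(x) = exp(x)
H = 2.0

x_full = argmin_cost(1.0, H, c)  # no external risk reduction
x_half = argmin_cost(0.5, H, c)  # risk halved externally

# The FOC rH = c'(x) gives x* = ln(rH) for this cost function:
print(round(x_full, 3))  # 0.693, i.e. ln(2)
print(round(x_half, 3))  # 0.0, i.e. ln(1): effort drops when r drops
```

The brute-force minimum lands right on ln(rH), and halving r does indeed reduce the chosen effort—risk compensation in action.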

But have I reduced or increased the amount of risk?

The total risk is r(1-x); since r decreased and so did x, it’s not clear whether this went up or down. Indeed, it’s theoretically possible to have cost functions that would make it go up—but I’ve never seen one.

For instance, suppose we assume that c(x) = ax^b, where a and b are constants. This seems like a pretty general form, doesn’t it? To maintain the assumption that c(x) is increasing and convex, I need a > 0 and b > 1. (If 0 < b < 1, you get a function that’s increasing but concave. If b=1, you get a linear function and some weird corner solutions where you either expend no effort at all or all possible effort.)

Then I’m trying to minimize:

r(1-x)H + ax^b

This results in a closed-form solution for x:

x = (rH/ab)^(1/(b-1))

Since b>1, 1/(b-1) > 0.


Thus, the optimal choice of x is increasing in rH and decreasing in ab. That is, reducing the harm H or the overall risk r will make me put in less effort, while reducing the cost of effort (via either a or b) will make me put in more effort. These all make sense.

Can I ever increase the overall risk by reducing r? Let’s see.


My total risk r(1-x) is therefore:

r(1-x) = r[1-(rH/ab)^(1/(b-1))]

Can making r smaller ever make this larger?

Well, let’s compare it against the case when r=1. We want to see if there’s a case where it’s actually larger.

r[1-(rH/ab)^(1/(b-1))] > [1-(H/ab)^(1/(b-1))]

r – r^(b/(b-1)) (H/ab)^(1/(b-1)) > 1 – (H/ab)^(1/(b-1))

Rearranging, this requires (1-r) < (H/ab)^(1/(b-1)) [1 – r^(b/(b-1))]. A little calculus shows this can only happen if the effort you would choose even without the intervention, x = (H/ab)^(1/(b-1)), already exceeds (b-1)/b—that is, only if you were already expending so much effort that most of the risk was eliminated. For any more moderate level of effort, reducing risk externally reduces total risk even after compensation.
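To make the power-law case concrete, here is a small numerical sweep (a sketch; the values H = 1, a = 1, b = 2 are my own illustrative choices, picked so that the optimal effort stays strictly between 0 and 1):

```python
def optimal_effort(r, H, a, b):
    """x* = (rH/ab)^(1/(b-1)), clamped to the unit interval."""
    x = (r * H / (a * b)) ** (1.0 / (b - 1.0))
    return min(max(x, 0.0), 1.0)

def total_risk(r, H, a, b):
    """Residual risk r*(1 - x*) after optimal compensation."""
    return r * (1.0 - optimal_effort(r, H, a, b))

H, a, b = 1.0, 1.0, 2.0  # illustrative values; x* = r/2 here
risks = [total_risk(r, H, a, b) for r in (1.0, 0.8, 0.6, 0.4, 0.2)]
print([round(x, 2) for x in risks])  # [0.5, 0.48, 0.42, 0.32, 0.18]
```

With these parameters, total risk falls monotonically as r falls, even though the chosen effort x* is falling too.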

Now, to be fair, this isn’t a fully general model. I had to assume some specific functional forms. But I didn’t assume much, did I?

Indeed, there is a fully general argument that externally reduced risk will never harm you. It’s quite simple.

There are three states to consider: In state A, you have your original level of risk and your original level of effort to reduce it. In state B, you have an externally reduced level of risk and your original level of effort. In state C, you have an externally reduced level of risk, and you compensate by reducing your effort.

Which states make you better off?

Well, clearly state B is better than state A: You get reduced risk at no cost to you.

Furthermore, state C must be at least as good as state B: You voluntarily chose to risk-compensate precisely because doing so made you better off (or at least no worse off).

Therefore, as long as your preferences are rational, state C is better than state A.

Externally reduced risk will never make you worse off.

QED. That’s it. That’s the whole proof.

But I’m a behavioral economist, am I not? What if people aren’t being rational? Perhaps there’s some behavioral bias that causes people to overcompensate for reduced risks. That’s ultimately an empirical question.

So, what does the empirical data say? Risk compensation is almost never a serious problem in the real world. Measures designed to increase safety, lo and behold, actually increase safety. Removing safety regulations, astonishingly enough, makes people less safe and worse off.

If we ever do find a case where risk compensation is very large, then I guess we can remove that safety measure, or find some way to get people to stop overcompensating. But in the real world this has basically never happened.

It’s still a fair question whether any given safety measure is worth the cost: Implementing regulations can be expensive, after all. And while many people would like to think that “no amount of money is worth a human life”, nobody does—or should, or even can—act like that in the real world. You wouldn’t drive to work or get out of bed in the morning if you honestly believed that.

If it would cost $4 billion to save one expected life, it’s definitely not worth it. Indeed, you should still be able to see that even if you don’t think lives can be compared with other things—because $4 billion could save an awful lot of lives if you spent it more efficiently. (Probably over a million, in fact, as current estimates of the marginal cost to save one life are about $2,300.) Inefficient safety interventions don’t just cost money—they prevent us from doing other, more efficient safety interventions.

And as for airbags and wearing masks to prevent COVID? Yes, definitely 100% worth it, as both interventions have already saved tens if not hundreds of thousands of lives.

Why is cryptocurrency popular?

May 30 JDN 2459365

At the time of writing, the price of most cryptocurrencies has crashed, likely due to a ban on conventional banks using cryptocurrency in China (though perhaps also due to Elon Musk personally refusing to accept Bitcoin at his businesses). But for all I know by the time this post goes live the price will surge again. Or maybe they’ll crash even further. Who knows? The prices of popular cryptocurrencies have been extremely volatile.

This post isn’t really about the fluctuations of cryptocurrency prices. It’s about something a bit deeper: Why are people willing to put money into cryptocurrencies at all?

The comparison is often made to fiat currency: “Bitcoin isn’t backed by anything, but neither is the US dollar.”

But the US dollar is backed by something: It’s backed by the US government. Yes, it’s not tradeable for gold at a fixed price, but so what? You can use it to pay taxes. The government requires it to be legal tender for all debts. There are certain guaranteed exchange rights built into the US dollar, which underpin the value that the dollar takes on in other exchanges. Moreover, the US Federal Reserve carefully manages the supply of US dollars so as to keep their value roughly constant.

Bitcoin does not have this (nor does Dogecoin, or Ethereum, or any of the other hundreds of lesser-known cryptocurrencies). There is no central bank. There is no government making them legal tender for any debts at all, let alone all of them. Nobody collects taxes in Bitcoin.

And so, because its value is untethered, Bitcoin’s price rises and falls, often in huge jumps, more or less randomly. If you look all the way back to when it was introduced, Bitcoin does seem to have an overall upward price trend, but this honestly seems like a statistical inevitability: If you start out being worthless, the only way your price can change is upward. While some people have become quite rich by buying into Bitcoin early on, there’s no particular reason to think that it will rise in value from here on out.

Nor does Bitcoin have any intrinsic value. You can’t eat it, or build things out of it, or use it for scientific research. It won’t even entertain you (unless you have a very weird sense of entertainment). Bitcoin doesn’t even have “intrinsic value” the way gold does (which is honestly an abuse of the term, since gold isn’t actually especially useful): It isn’t innately scarce. It was made scarce by its design: Through the blockchain, a clever application of encryption technology, generating new Bitcoins (“mining”) was made difficult, and exponentially more difficult over time. But the decision of what encryption algorithm to use was utterly arbitrary. Bitcoin mining could just as well have been made a thousand times easier or a thousand times harder. They seem to have hit a sweet spot where they made it just hard enough to make Bitcoin seem scarce while still feeling feasible to get.

We could actually make a cryptocurrency that does something useful, by tying its mining to a genuinely valuable pursuit, like analyzing scientific data or proving mathematical theorems. Perhaps I should suggest a partnership with Folding@Home to make FoldCoin, the crypto coin you mine by folding proteins. There are some technical details there that would be a bit tricky, but I think it would probably be feasible. And then at least all this computing power would accomplish something, and the money people make would be to compensate them for their contribution.

But Bitcoin is not useful. No institution exists to stabilize its value. It constantly rises and falls in price. Why do people buy it?

In a word, FOMO. The fear of missing out. People buy Bitcoin because they see that a handful of other people have become rich by buying and selling Bitcoin. Bitcoin symbolizes financial freedom: The chance to become financially secure without having to participate any longer in our (utterly broken) labor market.

In this, volatility is not a bug but a feature: A stable currency won’t change much in value, so you’d only buy into it because you plan on spending it. But an unstable currency, now, there you might manage to get lucky speculating on its value and get rich quick for nothing. Or, more likely, you’ll end up poorer. You really have no way of knowing.

That makes cryptocurrency fundamentally like gambling. A few people make a lot of money playing poker, too; but most people who play poker lose money. Indeed, those people who get rich are only able to get rich because other people lose money. The game is zero-sum—and likewise so is cryptocurrency.

Note that this is not how the stock market works, or at least not how it’s supposed to work (sometimes maybe). When you buy a stock, you are buying a share of the profits of a corporation—a real, actual corporation that produces and sells goods or services. You’re (ostensibly) supplying capital to fund the operations of that corporation, so that they might make and sell more goods in order to earn more profit, which they will then share with you.

Likewise when you buy a bond: You are lending money to an institution (usually a corporation or a government) that intends to use that money to do something—some real actual thing in the world, like building a factory or a bridge. They are willing to pay interest on that debt in order to get the money now rather than having to wait.

Initial Coin Offerings were supposed to be a way to turn cryptocurrency into a genuine investment, but at least in their current virtually unregulated form, they are basically indistinguishable from a Ponzi scheme. Unless the value of the coin is somehow tied to actual ownership of the corporation or shares of its profits (the way stocks are), there’s nothing to ensure that the people who buy into the coin will actually receive anything in return for the capital they invest. There’s really very little stopping a startup from running an ICO, receiving a bunch of cash, and then absconding to the Cayman Islands. If they made it really obvious like that, maybe a lawsuit would succeed; but as long as they can create even the appearance of a good-faith investment—or even actually make their business profitable!—there’s nothing forcing them to pay a cent to the owners of their cryptocurrency.

The really frustrating thing for me about all this is that, sometimes, it works. There actually are now thousands of people who made decisions that by any objective standard were irrational and irresponsible, and then came out of it millionaires. It’s much like the lottery: Playing the lottery is clearly and objectively a bad idea, but every once in a while it will work and make you massively better off.

It’s like I said in a post about a year ago: Glorifying superstars glorifies risk. When a handful of people can massively succeed by making a decision, that makes a lot of other people think that it was a good decision. But quite often, it wasn’t a good decision at all; they just got spectacularly lucky.

I can’t exactly say you shouldn’t buy any cryptocurrency. It probably has better odds than playing poker or blackjack, and it certainly has better odds than playing the lottery. But what I can say is this: It’s about odds. It’s gambling. It may be relatively smart gambling (poker and blackjack are certainly a better idea than roulette or slot machines), with relatively good odds—but it’s still gambling. It’s a zero-sum high-risk exchange of money that makes a few people rich and lots of other people poorer.

With that in mind, don’t put any money into cryptocurrency that you couldn’t afford to lose at a blackjack table. If you’re looking for something to seriously invest your savings in, the answer remains the same: Stocks. All the stocks.

I doubt this particular crash will be the end for cryptocurrency, but I do think it may be the beginning of the end. I think people are finally beginning to realize that cryptocurrencies are really not the spectacular innovation that they were hyped to be, but more like a high-tech iteration of the ancient art of the Ponzi scheme. Maybe blockchain technology will ultimately prove useful for something—hey, maybe we should actually try making FoldCoin. But the future of money remains much as it has been for quite some time: Fiat currency managed by central banks.

Glorifying superstars glorifies excessive risk

Apr 26 JDN 2458964

Suppose you were offered the choice of the following two gambles; which one would you take?

Gamble A: 99.9% chance of $0; 0.1% chance of $100 million

Gamble B: 10% chance of $50,000; 80% chance of $100,000; 10% chance of $1 million

I think it’s pretty clear that you should choose gamble B.

If you were risk-neutral, the expected payoffs would be $100,000 for gamble A and $185,000 for gamble B. So clearly gamble B is the better deal.

But you’re probably risk-averse. If you have logarithmic utility with a baseline at your current wealth of $10,000, the difference is even larger:

0.001*ln(10001) ≈ 0.009

0.1*ln(6) + 0.8*ln(11) + 0.1*ln(101) ≈ 2.56
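Those two calculations are easy to reproduce (a minimal sketch; each payoff is added to the $10,000 baseline wealth before taking the log, as above):

```python
import math

WEALTH = 10_000  # current wealth, used as the log-utility baseline

def expected_value(gamble):
    """Risk-neutral expected payoff over (probability, payoff) pairs."""
    return sum(p * x for p, x in gamble)

def expected_log_utility(gamble):
    """E[ln((wealth + payoff) / wealth)] over (probability, payoff) pairs."""
    return sum(p * math.log((WEALTH + x) / WEALTH) for p, x in gamble)

gamble_a = [(0.999, 0), (0.001, 100_000_000)]
gamble_b = [(0.10, 50_000), (0.80, 100_000), (0.10, 1_000_000)]

print(round(expected_value(gamble_a)), round(expected_value(gamble_b)))  # 100000 185000
print(round(expected_log_utility(gamble_a), 3))  # 0.009
print(round(expected_log_utility(gamble_b), 2))  # 2.56
```

Gamble B wins by either criterion, but the log-utility gap is far wider than the expected-value gap.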

Yet suppose this is a gamble that a lot of people get to take. And furthermore suppose that what you read about in the news every day is always the people who are the very richest. Then you will read, over and over again, about people who took gamble A and got lucky enough to get the $100 million. You’d probably start to wonder if maybe you should be taking gamble A instead.

This is more or less the world we live in. A handful of billionaires own staggering amounts of wealth, and we are constantly hearing about them. Even aside from the fact that most of them inherited a large portion of it and all of them had plenty of advantages that most of us will never have, it’s still not clear that they were actually smart about taking the paths they did—it could simply be that they got spectacularly lucky.

Or perhaps there’s an even clearer example: Professional athletes. The vast majority of athletes make basically no money at sports. Even most paid athletes are in minor leagues and make only a modest living.

There’s certainly nothing wrong with being an amateur who plays sports for fun. But if you were to invest a large proportion of your time training in sports in the hopes of becoming a professional athlete, you would most likely find yourself gravely disappointed, as your chances of actually getting into the major leagues and becoming a multi-millionaire are exceedingly small. Yet you can probably name at least a few major league athletes who are multi-millionaires—perhaps dozens, if you’re a serious fan—and I doubt you can name anywhere near as many minor league players or players who never made it into paid leagues in the first place.

When we spend all of our time focused on the superstars, what we are effectively assessing is the maximum possible income available on a given career track. And it’s true; the maximum for professional athletes and especially entrepreneurs is extremely high. But the maximum isn’t what you should care about; you should really be concerned about the average or even the median.

And it turns out that the same professions that offer staggeringly high incomes at the very top also tend to be professions with extremely high risk attached. The average income for an athlete is very small; the median is almost certainly zero. Entrepreneurs do better; their average and median income aren’t too much worse than most jobs. But this moderate average comes with a great deal of risk; yes, you could become a billionaire—but far more likely, you could become bankrupt.

This is a deeply perverse result: The careers that our culture most glorifies, the ones that we inspire people to dream about, are precisely those that are the most likely to result in financial ruin.

Realizing this changes your perspective on a lot of things. For instance, there is a common lament that teachers aren’t paid the way professional athletes are. I for one am extremely grateful that this is the case. If teachers were paid like athletes, yes, 0.1% would be millionaires, but only 4.9% would make a decent living, and the remaining 95% would be utterly broke. Indeed, this is precisely what might happen if MOOCs really take off, and a handful of superstar teachers are able to produce all the content while the vast majority of teaching mostly amounts to showing someone else’s slideshows. Teachers are much better off in a world where they almost all make a decent living even though none of them ever get spectacularly rich. (Are many teachers still underpaid? Sure. How do I know this? Because there are teacher shortages. A chronic shortage of something is a surefire sign that its price is too low.) And clearly the idea that we could make all teachers millionaires is just ludicrous: Do you want to pay $1 million a year for your child’s education?

Is there a way that we could change this perverse pattern? Could we somehow make it feel more inspiring to choose a career that isn’t so risky? Well, I doubt we’ll ever get children to dream of being accountants or middle managers. But there are a wide range of careers that are fulfilling and meaningful while still making a decent living—like, well, teaching. Even working in creative arts can be like this: While very few authors are millionaires, the median income for an author is quite respectable. (On the other hand there’s some survivor bias here: We don’t count you as an author if you can’t get published at all.) Software engineers are generally quite satisfied with their jobs, and they manage to get quite high incomes with low risk. I think the real answer here is to spend less time glorifying obscene hoards of wealth and more time celebrating lives that are rich and meaningful.

I don’t know if Jeff Bezos is truly happy. But I do know that you and I are more likely to be happy if instead of trying to emulate him, we focus on making our own lives meaningful.

Why are humans so bad with probability?

Apr 29 JDN 2458238

In previous posts on deviations from expected utility and cumulative prospect theory, I’ve detailed some of the myriad ways in which human beings deviate from optimal rational behavior when it comes to probability.

This post is going to be a bit different: Yes, we behave irrationally when it comes to probability. Why?

Why aren’t we optimal expected utility maximizers?
This question is not as simple as it sounds. Some of the ways that human beings deviate from neoclassical behavior are simply because neoclassical theory requires levels of knowledge and intelligence far beyond what human beings are capable of; basically anything requiring “perfect information” qualifies, as does any game theory prediction that involves solving extensive-form games with infinite strategy spaces by backward induction. (Don’t feel bad if you have no idea what that means; that’s kind of my point. Solving infinite extensive-form games by backward induction is an unsolved problem in game theory; just this past week I saw a new paper presented that offered a partial potential solution. And yet we expect people to do it optimally every time?)

I’m also not going to include questions of fundamental uncertainty, like “Will Apple stock rise or fall tomorrow?” or “Will the US go to war with North Korea in the next ten years?” where it isn’t even clear how we would assign a probability. (Though I will get back to them, for reasons that will become clear.)

No, let’s just look at the absolute simplest cases, where the probabilities are all well-defined and completely transparent: Lotteries and casino games. Why are we so bad at that?

Lotteries are not a computationally complex problem. You figure out how much the prize is worth to you, multiply it by the probability of winning—which is clearly spelled out for you—and compare that to how much the ticket price is worth to you. The most challenging part lies in specifying your marginal utility of wealth—the “how much it’s worth to you” part—but that’s something you basically had to do anyway, to make any kind of trade-offs on how to spend your time and money. Maybe you didn’t need to compute it quite so precisely over that particular range of parameters, but you need at least some idea how much $1 versus $10,000 is worth to you in order to get by in a market economy.
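That computation fits in a few lines of code (a sketch; the jackpot, odds, ticket price, and wealth figures here are hypothetical round numbers, not any real lottery’s, and log utility stands in for “how much it’s worth to you”):

```python
import math

def worth_playing(prize, p_win, ticket, wealth):
    """Compare expected log utility of buying one ticket vs. abstaining."""
    eu_play = (p_win * math.log(wealth - ticket + prize)
               + (1 - p_win) * math.log(wealth - ticket))
    return eu_play > math.log(wealth)

# Illustrative numbers: a $100M jackpot at 1-in-292-million odds,
# a $2 ticket, and $50,000 of wealth.
print(worth_playing(100_000_000, 1 / 292_000_000, 2, 50_000))  # False
```

Even a nine-figure jackpot can’t rescue odds that long; the tiny chance of the prize never makes up for the near-certain loss of the ticket price.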

Casino games are a bit more complicated, but not much, and most of the work has been done for you; you can look on the Internet and find tables of probability calculations for poker, blackjack, roulette, craps and more. Memorizing all those probabilities might take some doing, but human memory is astonishingly capacious, and part of being an expert card player, especially in blackjack, seems to involve memorizing a lot of those probabilities.

Furthermore, by any plausible expected utility calculation, lotteries and casino games are a bad deal. Unless you’re an expert poker player or blackjack card-counter, your expected income from playing at a casino is always negative—and the casino set it up that way on purpose.

Why, then, can lotteries and casinos stay in business? Why are we so bad at such a simple problem?

Clearly we are using some sort of heuristic judgment in order to save computing power, and the people who make lotteries and casinos have designed formal models that can exploit those heuristics to pump money from us. (Shame on them, really; I don’t fully understand why this sort of thing is legal.)

In another previous post I proposed what I call “categorical prospect theory”, which I think is a decently accurate description of the heuristics people use when assessing probability (though I’ve not yet had the chance to test it experimentally).

But why use this particular heuristic? Indeed, why use a heuristic at all for such a simple problem?

I think it’s helpful to keep in mind that these simple problems are weird; they are absolutely not the sort of thing a tribe of hunter-gatherers is likely to encounter on the savannah. It doesn’t make sense for our brains to be optimized to solve poker or roulette.

The sort of problems that our ancestors encountered—indeed, the sort of problems that we encounter, most of the time—were not problems of calculable risk; they were problems of fundamental uncertainty. And they were frequently matters of life or death (which is why we’d expect them to be highly evolutionarily optimized): “Was that sound a lion, or just the wind?” “Is this mushroom safe to eat?” “Is that meat spoiled?”

In fact, many of the uncertainties most important to our ancestors are still important today: “Will these new strangers be friendly, or dangerous?” “Is that person attracted to me, or am I just projecting my own feelings?” “Can I trust you to keep your promise?” These sorts of social uncertainties are even deeper; it’s not clear that any finite being could ever totally resolve its uncertainty surrounding the behavior of other beings with the same level of intelligence, as the cognitive arms race continues indefinitely. The better I understand you, the better you understand me—and if you’re trying to deceive me, as I get better at detecting deception, you’ll get better at deceiving.

Personally, I think that it was precisely this sort of feedback loop that resulted in human beings getting such ridiculously huge brains in the first place. Chimpanzees are pretty good at dealing with the natural environment, maybe even better than we are; but even young children can outsmart them in social tasks any day. And once you start evolving for social cognition, it’s very hard to stop; basically you need to be constrained by something very fundamental, like, say, maximum caloric intake or the shape of the birth canal. Where chimpanzees look like their brains were what we call an “interior solution”, where evolution optimized toward a particular balance between cost and benefit, human brains look more like a “corner solution”, where the evolutionary pressure was entirely in one direction until we hit up against a hard constraint. That’s exactly what one would expect to happen if we were caught in a cognitive arms race.

What sort of heuristic makes sense for dealing with fundamental uncertainty—as opposed to precisely calculable probability? Well, you don’t want to compute a utility function and multiply it by a probability, because that adds all sorts of extra computation and you have no idea what probability to assign anyway. But you’ve got to do something like that in some sense, because that really is the optimal way to respond.

So here’s a heuristic you might try: Separate events into some broad categories based on how frequently they seem to occur, and what sort of response would be necessary.

Some things, like the sun rising each morning, seem to always happen. So you should act as if those things are going to happen pretty much always, because they do happen… pretty much always.

Other things, like rain, seem to happen frequently but not always. So you should look for signs that those things might happen, and prepare for them when the signs point in that direction.

Still other things, like being attacked by lions, happen very rarely, but are a really big deal when they do. You can’t go around expecting those to happen all the time, that would be crazy; but you need to be vigilant, and if you see any sign that they might be happening, even if you’re pretty sure they’re not, you may need to respond as if they were actually happening, just in case. The cost of a false positive is much lower than the cost of a false negative.

And still other things, like people sprouting wings and flying, never seem to happen. So you should act as if those things are never going to happen, and you don’t have to worry about them.

This heuristic is quite simple to apply once set up: It can simply slot in memories of when things did and didn’t happen in order to decide which category they go in—i.e. the availability heuristic. If you can remember a lot of examples of “almost never”, maybe you should move it to “unlikely” instead. If you get a really big number of examples, you might even want to move it all the way to “likely”.
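Here’s a toy sketch of how that slotting might work (the four category labels and every threshold below are my own illustrative choices, not part of the theory):

```python
def categorize(times_seen, opportunities):
    """Slot an event into a coarse frequency category from remembered outcomes."""
    freq = times_seen / opportunities
    if freq >= 0.95:
        return "always"
    if freq >= 0.3:
        return "likely"
    if freq > 0:
        return "rare but vigilant"
    return "never"

print(categorize(365, 365))  # sunrise
print(categorize(120, 365))  # rain
print(categorize(1, 365))    # lion attack
print(categorize(0, 365))    # people sprouting wings
```

Note how cheap this is: no utilities, no probabilities, just a count of remembered outcomes mapped to a handful of behavioral modes.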

Another large advantage of this heuristic is that by combining utility and probability into one metric—we might call it “importance”, though Bayesian econometricians might complain about that—we can save on memory space and computing power. I don’t need to separately compute a utility and a probability; I just need to figure out how much effort I should put into dealing with this situation. A high probability of a small cost and a low probability of a large cost may be equally worth my time.

How might these heuristics go wrong? Well, if your environment changes sufficiently, the probabilities could shift and what seemed certain no longer is. For most of human history, “people walking on the Moon” would seem about as plausible as sprouting wings and flying away, and yet it has happened. Being attacked by lions is now exceedingly rare except in very specific places, but we still harbor a certain awe and fear before lions. And of course availability heuristic can be greatly distorted by mass media, which makes people feel like terrorist attacks and nuclear meltdowns are common and deaths by car accidents and influenza are rare—when exactly the opposite is true.

How many categories should you set, and what frequencies should they be associated with? This part I’m still struggling with, and it’s an important piece of the puzzle I will need before I can take this theory to experiment. There is probably a trade-off between more categories giving you more precision in tailoring your optimal behavior, but costing more cognitive resources to maintain. Is the optimal number 3? 4? 7? 10? I really don’t know. Even if I could specify the number of categories, I’d still need to figure out precisely what categories to assign.

The right (and wrong) way to buy stocks

Jul 9 JDN 2457944

Most people don’t buy stocks at all. Stock equity is the quintessential form of financial wealth, and 42% of financial net wealth in the United States is held by the top 1%, while the bottom 80% owns essentially none.

Half of American households do not have any private retirement savings at all, and are depending either on employee pensions or Social Security for their retirement plans.

This is not necessarily irrational. In order to save for retirement, one must first have sufficient income to live on. Indeed, I got very annoyed at a “financial planning seminar” for grad students I attended recently, trying to scare us about the fact that almost none of us had any meaningful retirement savings. No, we shouldn’t have meaningful retirement savings, because our income is currently much lower than what we can expect to get once we graduate and enter our professions. It doesn’t make sense for someone scraping by on a $20,000 per year graduate student stipend to be saving up for retirement, when they can quite reasonably expect to be making $70,000-$100,000 per year once they finally get that PhD and become a professional economist (or sociologist, or psychologist or physicist or statistician or political scientist or material, mechanical, chemical, or aerospace engineer, or college professor in general, etc.). Even social workers, historians, and archaeologists make a lot more money than grad students. If you are already in the workforce and only expect to be getting small raises in the future, maybe you should start saving for retirement in your 20s. If you’re a grad student, don’t bother. It’ll be a lot easier to save once your income triples after graduation. (Personally, I keep about $700 in stocks mostly to get a feel for what it is like owning and trading stocks that I will apply later, not out of any serious expectation to support a retirement fund. Even at Warren Buffett-level returns I wouldn’t make more than $200 a year this way.)

Total US retirement savings are over $25 trillion, which… does actually sound low to me. In a country with a GDP now over $19 trillion, that means we’ve only saved a year and change of total income. If we had a rapidly growing population this might be fine, but we don’t; our population is fairly stable. People seem to be relying on economic growth to provide for their retirement, and since we are almost certainly at steady-state capital stock and fairly near full employment, that means waiting for technological advancement.

So basically people are hoping that we get to the Wall-E future where the robots will provide for us. And hey, maybe we will; but assuming that we haven’t abandoned capitalism by then (as they certainly haven’t in Wall-E), maybe you should try to make sure you own some assets to pay for robots with?

But okay, let’s set all that aside, and say you do actually want to save for retirement. How should you go about doing it?

Stocks are clearly the way to go. A certain proportion of government bonds also makes sense as a hedge against risk, and maybe you should even throw in the occasional commodity future. I wouldn’t recommend oil or coal at this point—either we do something about climate change and those prices plummet, or we don’t and we’ve got bigger problems—but it’s hard to go wrong with corn or steel, and for this one purpose it can even make sense to buy gold. Gold is not a magical panacea or the foundation of all wealth, but its price does tend to correlate negatively with stock returns, so it’s not a bad risk hedge.

Don’t buy exotic derivatives unless you really know what you’re doing—they can make a lot of money, but they can lose it just as fast—and never buy non-portfolio assets as a financial investment. If your goal is to buy something to make money, make it something you can trade at the click of a button. Buy a house because you want to live in that house. Buy wine because you like drinking wine. Don’t buy a house in the hopes of making a financial return—you’ll have leveraged your entire portfolio 10 to 1 while leaving it completely undiversified. And the problem with investing in wine, ironically, is its lack of liquidity.

The core of your investment portfolio should definitely be stocks. The biggest reason for this is the equity premium; equities—that is, stocks—get returns so much higher than other assets that it’s actually baffling to most economists. Bond returns are currently terrible, while stock returns are currently fantastic. The former is currently near 0% in inflation-adjusted terms, while the latter is closer to 16%. If this continues for the next 10 years, that means that $1000 put in bonds would be worth… $1000, while $1000 put in stocks would be worth $4400. So, do you want to keep the same amount of money, or quadruple your money? It’s up to you.
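The arithmetic behind that comparison is just compound growth. Here is a quick sketch (assuming, of course, that those rates stayed constant for the whole decade, which they won’t):

```python
def future_value(principal, annual_rate, years):
    """Future value of a lump sum under annual compounding."""
    return principal * (1 + annual_rate) ** years

bonds = future_value(1000, 0.00, 10)   # ~0% real return
stocks = future_value(1000, 0.16, 10)  # ~16% real return
print(f"Bonds after 10 years:  ${bonds:,.0f}")
print(f"Stocks after 10 years: ${stocks:,.0f}")  # about $4,411
```

At 16% per year, $1000 compounds to about $4,411 after ten years, which is where the “quadruple your money” figure comes from.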

Higher risk is generally associated with higher return, because rational investors will only accept additional risk when they get some additional benefit from it; and stocks are indeed riskier than most other assets, but not that much riskier. For this to be rational, people would need to be extremely risk-averse, to the point where they should never drive a car or eat a cheeseburger. (Of course, human beings are terrible at assessing risk, so what I really think is going on is that people wildly underestimate the risk of driving a car and wildly overestimate the risk of buying stocks.)

Next, you may be asking: How does one buy stocks? This doesn’t seem to be something people teach in school.

You will need a brokerage of some sort. There are many such brokerages, but they are basically all equivalent except for the fees they charge. Some of them will try to offer you various bells and whistles to justify whatever additional cut they get of your trades, but they are almost never worth it. You should choose one with as low a trade fee as possible, because even a few dollars here and there can add up surprisingly quickly.

Fortunately, there is now at least one well-established reliable stock brokerage available to almost anyone that has a standard trade fee of zero. They are called Robinhood, and I highly recommend them. If they have any downside, it is ironically that they make trading too easy, so you can be tempted to do it too often. Learn to resist that urge, and they will serve you well and cost you nothing.

Now, which stocks should you buy? There are a lot of them out there. The answer I’m going to give may sound strange: All of them. You should buy all the stocks.

All of them? How can you buy all of them? Wouldn’t that be ludicrously expensive?

No, it’s quite affordable in fact. In my little $700 portfolio, I own every single stock in the S&P 500 and the NASDAQ. If I get a little extra money to save, I may expand to own every stock in Europe and China as well.

How? A clever little arrangement called an exchange-traded fund, or ETF for short. An ETF is actually a form of mutual fund, where the fund purchases shares in a huge array of stocks, and adjusts what they own to precisely track the behavior of an entire stock market (such as the S&P 500). Then what you can buy is shares in that mutual fund, which are usually priced somewhere between $100 and $300 each. As the price of stocks in the market rises, the price of shares in the mutual fund rises to match, and you can reap the same capital gains they do.
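As a toy illustration of the mechanics (the tickers, index weights, and prices here are all invented for the example):

```python
# Invented tickers, index weights, and prices, purely for illustration.
index_weights = {"AAA": 0.5, "BBB": 0.3, "CCC": 0.2}
prices_day1 = {"AAA": 100.0, "BBB": 50.0, "CCC": 20.0}
prices_day2 = {"AAA": 110.0, "BBB": 50.0, "CCC": 18.0}

def fund_nav(prices, weights):
    """Net asset value per fund share: the weighted sum of holding prices."""
    return sum(weights[t] * prices[t] for t in weights)

nav1 = fund_nav(prices_day1, index_weights)
nav2 = fund_nav(prices_day2, index_weights)
print(f"NAV day 1: {nav1:.2f}  day 2: {nav2:.2f}  return: {nav2 / nav1 - 1:+.2%}")
```

The fund’s share price is tied to the weighted value of its holdings, so when the index rises, the fund share rises with it.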

A major advantage of this arrangement, especially for a typical person who isn’t well-versed in stock markets, is that it requires almost no attention at your end. You can buy into a few ETFs and then leave your money to sit there, knowing that it will grow as long as the overall stock market grows.

But there is an even more important advantage, which is that it maximizes your diversification. I said earlier that you shouldn’t buy a house as an investment, because it’s not at all diversified. What I mean by this is that the price of that house depends only on one thing—that house itself. If the price of that house changes, the full change is reflected immediately in the value of your asset. In fact, if you have 10% down on a mortgage, the full change is reflected ten times over in your net wealth, because you are leveraged 10 to 1.

An ETF is basically the opposite of that. Instead of its price depending on only one thing, it depends on a vast array of things, averaging over the prices of literally hundreds or thousands of different corporations. When some fall, others will rise. On average, as long as the economy continues to grow, they will rise.

The result is that you can get the same average return you would from owning stocks, while dramatically reducing the risk you bear.
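You can see this averaging effect in a quick simulation. This is only a sketch: it assumes each stock’s annual return is independent with the same mean and standard deviation (pure idiosyncratic risk, no market risk), which is exactly the component that diversification can remove; the 7% mean and 30% standard deviation are arbitrary round numbers:

```python
import random

random.seed(1)

def simulate(n_stocks, n_trials=10000, mean=0.07, sd=0.30):
    """Mean and spread of returns for an equal-weighted portfolio of
    n_stocks independent stocks (idiosyncratic risk only)."""
    results = []
    for _ in range(n_trials):
        portfolio = sum(random.gauss(mean, sd) for _ in range(n_stocks)) / n_stocks
        results.append(portfolio)
    avg = sum(results) / n_trials
    spread = (sum((r - avg) ** 2 for r in results) / n_trials) ** 0.5
    return avg, spread

for n in (1, 10, 100):
    avg, spread = simulate(n)
    print(f"{n:4d} stocks: mean return {avg:+.3f}, std dev {spread:.3f}")
```

The mean return is the same whether you hold 1 stock or 100, but the spread shrinks roughly with the square root of the number of stocks: same average return, far less risk.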

To see how this works, consider the past year’s performance of Apple (AAPL), which has done very well, versus Fitbit (FIT), which has done very poorly, compared with the NASDAQ as a whole, of which they are both part.

AAPL has grown over 50% (40 log points) in the last year; so if you’d bought $1000 of their stock a year ago it would be worth $1500. FIT has fallen over 60% (92 log points) in the same time, so if you’d bought $1000 of their stock instead, it would be worth only $400. That’s the risk you’re taking by buying individual stocks.

Whereas, if you had simply bought a NASDAQ ETF a year ago, your return would be 35%, so that $1000 would be worth $1350.

Of course, that does mean you don’t get as high a return as you would if you had managed to choose the highest-performing stock on that index. But you’re unlikely to be able to do that, as even professional financial forecasters are worse than random chance. So, would you rather take a 50-50 shot between gaining $500 and losing $600, or would you prefer a guaranteed $350?
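For reference, the “log points” used above are just 100 times the natural log of the price ratio; unlike simple percentages, they add up across periods. A small check of the figures, using the round dollar values from the text:

```python
import math

def log_points(start, end):
    """Return measured in log points: 100 * ln(end / start)."""
    return 100 * math.log(end / start)

# Value of a $1000 position one year later, using the figures above:
for name, value in [("AAPL", 1500), ("FIT", 400), ("NASDAQ ETF", 1350)]:
    simple = (value - 1000) / 1000
    print(f"{name:10s}: {simple:+.0%} simple = {log_points(1000, value):+.1f} log points")
```

Note the asymmetry: a 50% gain is only about 41 log points, while a 60% loss is about 92, which is why losses are harder to recover from than they look in percentage terms.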

If higher return is not your only goal, and you want to be socially responsible in your investments, there are ETFs for that too. Instead of buying the whole stock market, these funds buy only a section of the market that is associated with some social benefit, such as lower carbon emissions or better representation of women in management. On average, you can expect a slightly lower return this way; but you are also helping to make a better world. And still your average return is generally going to be better than it would be if you tried to pick individual stocks yourself. In fact, certain classes of socially-responsible funds—particularly green tech and women’s representation—actually perform better than conventional ETFs, probably because most investors undervalue renewable energy and, well, also undervalue women. Companies with women CEOs tend to perform better while trading at lower prices; why would you not want to buy them?

In fact ETFs are not literally guaranteed—the market as a whole does move up and down, so it is possible to lose money even by buying ETFs. But because the risk is so much lower, your odds of losing money are considerably reduced. And on average, an ETF will, by construction, perform exactly as well as the average performance of a randomly-chosen stock from that market.

Indeed, I am quite convinced that most people don’t take enough risk on their investment portfolios, because they confuse two very different types of risk.

The kind you should be worried about is idiosyncratic risk, which is risk tied to a particular investment—the risk of having chosen the Fitbit instead of Apple. But a lot of the time people seem to be avoiding market risk, which is the risk tied to changes in the market as a whole. Avoiding market risk does reduce your chances of losing money, but it does so at the cost of reducing your chances of making money even more.

Idiosyncratic risk is basically all downside. Yeah, you could get lucky; but you could just as well get unlucky. Far better if you could somehow average over that risk and get the average return. But with diversification, that is exactly what you can do. Then you are left only with market risk, which is the kind of risk that is directly tied to higher average returns.

Young people should especially be willing to take more risk in their portfolios. As you get closer to retirement, it becomes important to have more certainty about how much money will really be available to you once you retire. But if retirement is still 30 years away, the thing you should care most about is maximizing your average return. That means taking on a lot of market risk, which is then less risky overall if you diversify away the idiosyncratic risk.

I hope that by now I have convinced you to avoid buying individual stocks. For most people most of the time, this is the advice you need to hear. Don’t try to forecast the market, don’t try to outperform the indexes; just buy and hold some ETFs and leave your money alone to grow.

But if you really must buy individual stocks, either because you think you are savvy enough to beat the forecasters or because you enjoy the gamble, here’s some additional advice I have for you.

My first piece of advice is that you should still buy ETFs. Even if you’re willing to risk some of your wealth on greater gambles, don’t risk all of it that way.

My second piece of advice is to buy primarily large, well-established companies (like Apple or Microsoft or Ford or General Electric). Their stocks certainly do rise and fall, but they are unlikely to completely crash and burn the way that young companies like Fitbit can.

My third piece of advice is to watch the price-earnings ratio (P/E for short). Roughly speaking, this is the number of years it would take for the profits of this corporation to pay off the value of its stock. If they pay most of their profits in dividends, it is approximately how many years you’d need to hold the stock in order to get as much in dividends as you paid for the shares.

Do you want P/E to be large or small? You want it to be small. This is called value investing, but it really should just be called “investing”. The alternatives to value investing are actually not investment but speculation and arbitrage. If you are actually investing, you are buying into companies that are currently undervalued; you want them to be cheap.

Of course, it is not always easy to tell whether a company is undervalued. A common rule-of-thumb is that you should aim for a P/E around 20 (20 years to pay off means about 5% return in dividends); if the P/E is below 10, it’s a fantastic deal, and if it is above 30, it might not be worth the price. But reality is of course more complicated than this. You don’t actually care about current earnings, you care about future earnings, and it could be that a company which is earning very little now will earn more later, or vice-versa. The more you can learn about a company, the better judgment you can make about their future profitability; this is another reason why it makes sense to buy large, well-known companies rather than tiny startups.
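The rule of thumb in the previous paragraph is just the reciprocal relationship between P/E and earnings yield. A one-line sketch:

```python
def earnings_yield(pe_ratio):
    """Earnings per dollar of share price: the reciprocal of P/E."""
    return 1 / pe_ratio

for pe in (10, 20, 30):
    print(f"P/E {pe:2d}: earnings yield {earnings_yield(pe):.1%}")
```

A P/E of 20 corresponds to a 5% earnings yield, a P/E of 10 to a fantastic 10%, and a P/E of 30 to a rather thin 3.3%.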

My final piece of advice is not to trade too frequently. Especially with something like Robinhood where trades are instant and free, it can be tempting to try to ride every little ripple in the market. Up 0.5%? Sell! Down 0.3%? Buy! And yes, in principle, if you could perfectly forecast every such fluctuation, this would be optimal—and make you an almost obscene amount of money. But you can’t. We know you can’t. You need to remember that you can’t. You should only trade if one of two things happens: Either your situation changes, or the company’s situation changes. If you need the money, sell, to get the money. If you have extra savings, buy, to give those savings a good return. If something bad happened to the company and their profits are going to fall, sell. If something good happened to the company and their profits are going to rise, buy. Otherwise, hold. In the long run, those who hold stocks longer are better off.

The credit rating agencies to be worried about aren’t the ones you think

JDN 2457499

John Oliver is probably the best investigative journalist in America today, despite being neither American nor officially a journalist; last week he took on the subject of credit rating agencies, a classic example of his mantra “If you want to do something evil, put it inside something boring.” (Note that the segment is on HBO, so there is foul language.)

As ever, his analysis of the subject is quite good—it’s absurd how much power these agencies have over our lives, and how little accountability they have for even assuring accuracy.

But I couldn’t help but feel that he was kind of missing the point. The credit rating agencies to really be worried about aren’t Equifax, Experian, and TransUnion, the ones that assess credit ratings on individuals. They are Standard & Poor’s, Moody’s, and Fitch (which would have been even easier to skewer the way John Oliver did—perhaps we can get them confused with Standardly Poor, Moody, and Filch), the agencies which assess credit ratings on institutions.

These credit rating agencies have almost unimaginable power over our society. They are responsible for rating the risk of corporate bonds, certificates of deposit, stocks, derivatives such as mortgage-backed securities and collateralized debt obligations, and even municipal and government bonds.

S&P, Moody’s, and Fitch don’t just rate the creditworthiness of Goldman Sachs and J.P. Morgan Chase; they rate the creditworthiness of Detroit and Greece. (Indeed, they played an important role in the debt crisis of Greece, which I’ll talk about more in a later post.)

Moreover, they are proven corrupt. It’s a matter of public record.

Standard and Poor’s is the worst; they have been successfully sued for fraud by small banks in Pennsylvania and by the State of New Jersey; they have also settled fraud cases with the Securities and Exchange Commission and the Department of Justice.

Moody’s has also been sued for fraud by the Department of Justice, and all three have been prosecuted for fraud by the State of New York.

But in fact this underestimates the corruption, because the worst conflicts of interest aren’t even illegal, or weren’t until Dodd-Frank was passed in 2010. The basic structure of this credit rating system is fundamentally broken; the agencies are private, for-profit corporations, and they get their revenue entirely from the banks that pay them to assess their risk. If they rate a bank’s asset as too risky, the bank stops paying them, and instead goes to another agency that will offer a higher rating—and simply the threat of doing so keeps them in line. As a result their ratings are basically uncorrelated with real risk—they failed to predict the collapse of Lehman Brothers or the failure of mortgage-backed CDOs, and they didn’t “predict” the European debt crisis so much as cause it by their panic.

Then of course there’s the fact that they are obviously an oligopoly, and furthermore one that is explicitly protected under US law. But then it dawns upon you: Wait… US law? US law decides the structure of credit rating agencies that set the bond rates of entire nations? Yes, that’s right. You’d think that such ratings would be set by the World Bank or something, but they’re not; in fact here’s a paper published by the World Bank in 2004 about how rather than reform our credit rating system, we should instead tell poor countries to reform themselves so they can better impress the private credit rating agencies.

In fact the whole concept of “sovereign debt risk” is fundamentally defective; a country that borrows in its own currency should never have to default on debt under any circumstances. National debt is almost nothing like personal or corporate debt. Such a country’s fears should be inflation and unemployment—its monetary policy should be set to minimize the harm of these two basic macroeconomic problems, understanding that policies which mitigate one may inflame the other. There is such a thing as bad fiscal policy, but it has nothing to do with “running out of money to pay your debt” unless you are forced to borrow in a currency you can’t control (as Greece is, because they are on the Euro—their debt is less like the US national debt and more like the debt of Puerto Rico, which is suffering an ongoing debt crisis you may not have heard about). If you borrow in your own currency, you should be worried about excessive borrowing creating inflation and devaluing your currency—but not about suddenly being unable to repay your creditors. The whole concept of giving a sovereign nation a credit rating makes no sense. You will be repaid on time and in full, in nominal terms; if inflation or currency exchange has devalued the currency you are repaid in, that’s sort of like a partial default, but it’s a fundamentally different kind of “default” than simply not paying back the money—and credit ratings have no way of capturing that difference.

In particular, it makes no sense for interest rates on government bonds to go up when a country is suffering some kind of macroeconomic problem.

The basic argument for why interest rates go up when risk is higher is that lenders expect to be paid more by those who do pay to compensate for what they lose from those who don’t pay. This is already much more problematic than most economists appreciate; I’ve been meaning to write a paper on how this system creates self-fulfilling prophecies of default and moral hazard from people who pay their debts being forced to subsidize those who don’t. But it at least makes some sense.

But if a country is a “high risk” in the sense of macroeconomic instability undermining the real value of their debt, we want to ensure that they can restore macroeconomic stability. But we know that when there is a surge in interest rates on government bonds, instability gets worse, not better. Fiscal policy is suddenly shifted away from real production into higher debt payments, and this creates unemployment and makes the economic crisis worse. As Paul Krugman writes about frequently, these policies of “austerity” cause enormous damage to national economies and ultimately benefit no one because they destroy the source of wealth that would have been used to repay the debt.

By letting credit rating agencies decide the rates at which governments must borrow, we are effectively treating national governments as a special case of corporations. But corporations, by design, act for profit and can go bankrupt. National governments are supposed to act for the public good and persist indefinitely. We can’t simply let Greece fail as we might let a bank fail (and of course we’ve seen that there are serious downsides even to that). We have to restructure the sovereign debt system so that it benefits the development of nations rather than detracting from it. The first step is removing the power of private for-profit corporations in the US to decide the “creditworthiness” of entire countries. If we need to assess such risks at all, they should be done by international institutions like the UN or the World Bank.

But right now people are so stuck in the idea that national debt is basically the same as personal or corporate debt that they can’t even understand the problem. For after all, one must repay one’s debts.

The Cognitive Science of Morality Part II: Molly Crockett

JDN 2457140 EDT 20:16.

This weekend has been very busy for me, so this post is going to be shorter than most—which is probably a good thing anyway, since my posts tend to run a bit long.

In an earlier post I discussed the Weinberg Cognitive Science Conference and my favorite speaker in the lineup, Joshua Greene. After a brief interlude from Capybara Day, it’s now time to talk about my second-favorite speaker, Molly Crockett. (Is it just me, or does the name “Molly” somehow seem incongruous with a person of such prestige?)

Molly Crockett is a neuroeconomist, though you’d never hear her say that. She doesn’t think of herself as an economist at all, but purely as a neuroscientist. I suspect this is because when she hears the word “economist” she thinks of only mainstream neoclassical economists, and she doesn’t want to be associated with such things.

Still, what she studies is clearly neuroeconomics—I in fact first learned of her work by reading the textbook Neuroeconomics, though I really got interested in her work after watching her TED Talk. It’s one of the better TED talks (they put out so many of them now that the quality is mixed at best); she talks about news reporting on neuroscience, how it is invariably ridiculous and sensationalist. This is particularly frustrating because of how amazing and important neuroscience actually is.

I could almost forgive the sensationalism if they were talking about something that’s actually fantastically boring, like, say, tax codes, or financial regulations. Of course, even then there is the Oliver Effect: You can hide a lot of evil by putting it in something boring. But Dodd-Frank is 2300 pages long; I read an earlier draft that was only (“only”) 600 pages, and it literally contained a three-page section explaining how to define the word “bank”. (Assuming direct proportionality, I would infer that there is now a twelve-page section defining the word “bank”. Hopefully not?) It doesn’t get a whole lot more snoozeworthy than that. So if you must be a bit sensationalist in order to get people to see why eliminating margin requirements and the swaps pushout rule are terrible, terrible ideas, so be it.

But neuroscience is not boring, and so sensationalism only means that news outlets are making up exciting things that aren’t true instead of saying the actually true things that are incredibly exciting.

Here, let me express without sensationalism what Molly Crockett does for a living: Molly Crockett experimentally determines how psychoactive drugs modulate moral judgments. The effects she observes are small, but they are real; and since these experiments are done using small doses for a short period of time, if these effects scale up they could be profound. This is the basic research component—when it comes to technological fruition it will be literally A Clockwork Orange. But it may be A Clockwork Orange in the best possible way: It could be, at last, a medical cure for psychopathy, a pill to make us not just happier or healthier, but better. We are not there yet by any means, but this is clearly the first step: Molly Crockett is to A Clockwork Orange roughly as Michael Faraday is to the Internet.

In one of the experiments she talked about at the conference, Crockett found that serotonin reuptake inhibitors enhance harm aversion. Serotonin reuptake inhibitors are very commonly used drugs—you are likely familiar with one called Prozac. So basically what this study means is that Prozac makes people more averse to causing pain in themselves or others. It doesn’t necessarily make them more altruistic, let alone more ethical; but it does make them more averse to causing pain. (To see the difference, imagine a 19th-century field surgeon dealing with a wounded soldier; there is no anesthetic, but an amputation must be made. Sometimes being ethical requires causing pain.)

The experiment is actually what Crockett calls “the honest Milgram Experiment”; under Milgram, the experimenters told their subjects they would be causing shocks, but no actual shocks were administered. Under Crockett, the shocks are absolutely 100% real (though they are restricted to a much lower voltage, of course). Subjects are given competing offers that contain an amount of money and a number of shocks to be delivered, either to themselves or to the other subject. They decide how much it’s worth to them to bear the shocks—or to make someone else bear them. It’s a classic willingness-to-pay paradigm, applied to the Milgram Experiment.

What Crockett found did not surprise me, nor do I expect it will surprise you if you imagine yourself in the same place; but it would totally knock the socks off of any neoclassical economist. People are much more willing to bear shocks for money than they are to give shocks for money. They are what Crockett terms hyper-altruistic; I would say that they are exhibiting an apparent solidarity coefficient greater than 1. They seem to be valuing others more than they value themselves.
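One way to make the “solidarity coefficient” idea concrete is a toy linear utility model. To be clear, this is entirely my own sketch, not Crockett’s actual model, and the parameter values are arbitrary:

```python
def min_acceptable_payment(shocks, pain_cost=1.0, solidarity=1.0):
    """Smallest payment worth taking in a toy linear model:
    net utility = money - solidarity * pain_cost * shocks.
    solidarity = 1 when the shocks go to yourself; a value above 1
    means 'hyper-altruism' (weighting another's pain above your own)."""
    return solidarity * pain_cost * shocks

# With an arbitrary pain cost of $2 per shock:
print("Bear 10 shocks myself:     $", min_acceptable_payment(10, pain_cost=2.0))
print("Give 10 shocks to another: $", min_acceptable_payment(10, pain_cost=2.0, solidarity=1.5))
```

A solidarity coefficient above 1 reproduces the observed pattern: subjects demand more money to shock someone else than to be shocked themselves.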

Normally I’d say that this makes no sense at all—why would you value some random stranger more than yourself? Valuing them equally, perhaps; and obviously only a psychopath would value them not at all; but more? And there’s no way you can actually live this way in your daily life; you’d give away all your possessions and perhaps even starve yourself to death. (I guess maybe Jesus lived that way.) But Crockett came up with a model that explains it pretty well: We are morally risk-averse. If we knew we were dealing with someone very strong who had no trouble dealing with shocks, we’d be willing to shock them a fairly large amount. But we might actually be dealing with someone very vulnerable who would suffer greatly; and we don’t want to take that chance.

I think there’s some truth to that. But her model leaves something else out that I think is quite important: We are also averse to unfairness. We don’t like the idea of raising one person while lowering another. (Obviously not so averse as to never do it—we do it all the time—but without a compelling reason we consider it morally unjustified.) So if the two subjects are in roughly the same condition (being two undergrads at Oxford, they probably are), then helping one while hurting the other is likely to create inequality where none previously existed. But if you hurt yourself in order to help yourself, no such inequality is created; all you do is raise yourself up, provided that you do believe that the money is good enough to be worth the shocks. It’s actually quite Rawlsian; lifting one person up while not affecting the other is exactly the sort of inequality you’re allowed to create according to the Difference Principle.

There’s also the fact that the subjects can’t communicate; I think if I could make a deal to share the money afterward, I’d feel better about shocking someone more in order to get us both more money. So perhaps with communication people would actually be willing to shock others more. (And the sensationalist headline would of course be: “Talking makes people hurt each other.”)

But all of these ideas are things that could be tested in future experiments! And maybe I’ll do those experiments someday, or Crockett, or one of her students. And with clever experimental paradigms we might find out all sorts of things about how the human mind works, how moral intuitions are structured, and ultimately how chemical interventions can actually change human moral behavior. The potential for both good and evil is so huge, it’s both wondrous and terrifying—but can you deny that it is exciting?

And that’s not even getting into the Basic Fact of Cognitive Science, which undermines all concepts of afterlife and theistic religion. I already talked about it before—as the sort of thing that I sort of wish I could say when I introduce myself as a cognitive scientist—but I think it bears repeating.

As Patricia Churchland said on the Colbert Report: Colbert asked, “Are you saying I have no soul?” and she answered, “Yes.” I actually prefer Daniel Dennett’s formulation: “Yes, we have a soul, but it’s made of lots of tiny robots.”

We don’t have a magical, supernatural soul (whatever that means); we don’t have an immortal soul that will rise into Heaven or be reincarnated in someone else. But we do have something worth preserving: We have minds that are capable of consciousness. We love and hate, exalt and suffer, remember and imagine, understand and wonder. And yes, we are born and we die. Once the unique electrochemical pattern that defines your consciousness is sufficiently degraded, you are gone. Nothing remains of what you were—except perhaps the memories of others, or things you have created. But even this legacy is unlikely to last forever. One day it is likely that all of us—and everything we know, and everything we have built, from the Great Pyramids to Hamlet to Beethoven’s Ninth to Principia Mathematica to the US Interstate Highway System—will be gone. I don’t have any consolation to offer you on that point; I can’t promise you that anything will survive a thousand years, much less a million. There is a chance—even a chance that at some point in the distant future, whatever humanity has become will find a way to reverse the entropic decay of the universe itself—but nothing remotely like a guarantee. In all probability you, and I, and all of this will be gone someday, and that is absolutely terrifying.

But it is also undeniably true. The fundamental link between the mind and the brain is one of the basic facts of cognitive science; indeed I like to call it The Basic Fact of Cognitive Science. We know specifically which kinds of brain damage will make you unable to form memories, comprehend language, speak language (a totally different area), see, hear, smell, feel anger, integrate emotions with logic… do I need to go on? Everything that you are is done by your brain—because you are your brain.

Now why can’t the science journalists write about that? Instead we get “The Simple Trick That Can Boost Your Confidence Immediately” and “When it Comes to Picking Art, Men & Women Just Don’t See Eye to Eye.” HuffPo is particularly awful of course; the New York Times is better, but still hardly as good as one might like. They keep trying to find ways to make it exciting—but so rarely seem to grasp how exciting it already is.

What do we mean by “risk”?

JDN 2457118 EDT 20:50.

In an earlier post I talked about how, empirically, expected utility theory can’t explain the fact that we buy both insurance and lottery tickets, and how, normatively, it really doesn’t make a lot of sense to buy lottery tickets precisely because of what expected utility theory says about them.

But today I’d like to talk about one of the major problems with expected utility theory, which I consider one of the major unexplored frontiers of economics: Expected utility theory treats all kinds of risk exactly the same.

In reality there are three kinds of risk. The first is what I’ll call classical risk, which is like the game of roulette: the odds are well-defined and known in advance, and you can play the game a large number of times and average out the results. This is where expected utility theory really shines; if you are dealing with classical risk, expected utility is obviously the way to go, and von Neumann and Morgenstern quite literally proved, mathematically, that anything else is irrational.
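Under classical risk the prescription is mechanical: weight each outcome’s utility by its probability and pick the larger sum. Here is a minimal sketch in Python, assuming the log utility of wealth discussed earlier; the dollar figures are made-up round numbers chosen so the gamble and the sure thing have equal expected money:

```python
import math

def expected_log_utility(outcomes):
    """Expected utility of a gamble under ln(wealth) utility.

    `outcomes` is a list of (probability, wealth) pairs; the
    probabilities must sum to 1.
    """
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * math.log(w) for p, w in outcomes)

# A fair-odds gamble: 50% chance of $150, 50% chance of $50.
# Its expected *money* ($100) matches the sure thing exactly,
# but log utility penalizes the spread, so the sure $100 wins.
gamble = expected_log_utility([(0.5, 150.0), (0.5, 50.0)])
sure = expected_log_utility([(1.0, 100.0)])
print(gamble < sure)  # True: a log-utility agent declines the fair gamble
```

This is the textbook sense in which concave utility encodes risk aversion: the same expected money, spread over a wider range of outcomes, yields strictly less expected utility.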

The second is uncertainty, a distinction most famously expounded by Frank Knight, an economist at the University of Chicago. (Chicago is a funny place; on the one hand they are a haven for the madness that is Austrian economics; on the other hand they have led the charge in behavioral and cognitive economics. Knight was a perfect fit, because he was a little of both.) Uncertainty is risk under ill-defined or unknown probabilities, where there is no way to play the game twice. Most real-world “risk” is actually uncertainty: Will the People’s Republic of China collapse in the 21st century? How many deaths will global warming cause? Will human beings ever colonize Mars? Is P = NP? None of those questions have known answers, but neither can we clearly assign probabilities to them. Either P = NP or it doesn’t, as a matter of mathematical fact (or, like the continuum hypothesis, it’s independent of ZFC, the most bizarre possibility of all), and it’s not as if someone is rolling dice to decide how many people global warming will kill.

You can think of this in terms of “possible worlds”, though actually most modal theorists would tell you that we can’t even say that P=NP is possible (nor can we say it isn’t possible!), because, as a necessary statement, it can only be possible if it is actually true; this follows from the S5 axiom of modal logic, and you know what, even I am already bored with that sentence. Clearly there is some sense in which P=NP is possible, and if that’s not what modal logic says, then so much the worse for modal logic. I am not a modal realist (not to be confused with a moral realist, which I am); I don’t think possible worlds are real things out there somewhere. I think possibility is ultimately a statement about ignorance, and since we don’t know that P=NP is false, I contend that it is possible that it is true.
Put another way, it would not be obviously irrational to place a bet that P=NP will be proved true by 2100; but if we can’t even say that it is possible, how can that be?

Anyway, that’s the mess that uncertainty puts us in, and almost everything is made of uncertainty. Expected utility theory basically falls apart under uncertainty; it doesn’t even know how to give an answer, let alone one that is correct. In reality what we usually end up doing is waving our hands and trying to assign a probability anyway—because we simply don’t know what else to do.

The third one is not one that’s usually talked about, yet I think it’s quite important; I will call it one-shot risk. The probabilities are known or at least reasonably well approximated, but you only get to play the game once. You can also generalize to few-shot risk, where you can play a small number of times, where “small” is defined relative to the probabilities involved; this is a little vaguer, but basically what I have in mind is that even though you can play more than once, you can’t play enough times to realistically expect the rarest outcomes to occur. Expected utility theory almost works on one-shot and few-shot risk, but you have to be very careful about taking it literally.

I think an example makes things clearer: Playing the lottery is a few-shot risk. You can play the lottery multiple times, yes; potentially hundreds of times, in fact. But hundreds of plays are nothing compared to the 1 in 400 million chance you have of actually winning. You know that probability; it can be computed exactly from the rules of the game. But nonetheless expected utility theory runs into some serious problems here.
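For concreteness, here is what expected utility theory itself says about a single ticket. Only the 1-in-400-million odds come from the text; the jackpot size, ticket price, and starting wealth are invented round numbers, and ln(wealth) is assumed as the utility function:

```python
import math

p = 1 / 400_000_000       # jackpot odds, as in the text
wealth = 40_000           # hypothetical starting wealth
ticket = 2                # hypothetical ticket price
jackpot = 300_000_000     # hypothetical jackpot

no_play = math.log(wealth)
play = ((1 - p) * math.log(wealth - ticket)
        + p * math.log(wealth - ticket + jackpot))

# The near-certain loss of $2 costs about 5e-5 utils; the jackpot
# term contributes only about 2e-8 utils, three orders of magnitude less.
print(play < no_play)  # True: log utility says don't buy the ticket
```

So the standard theory already condemns the ticket; the deeper problem, as below, is whether averaging over two worlds you will only ever experience one of is even the right computation.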

If we were playing a classical risk game, expected utility would obviously be right. Suppose, for example, that you know you will live one billion years, and you are offered the chance to play a game (somehow compensating for the mind-boggling levels of inflation, economic growth, transhuman transcendence, and/or total extinction that will occur during that vast expanse of time). Each year you can either have a guaranteed $40,000 of inflation-adjusted income, or take a 99.999,999,75% chance of $39,999 of inflation-adjusted income and a 0.000,000,25% chance of $100 million in inflation-adjusted income—which will disappear at the end of the year, along with everything you bought with it, so that each year you start afresh. Should you take the second option? Absolutely not, and expected utility theory explains why: measuring QALY per year as the base-10 logarithm of income, that one or two years where you’ll experience 8 QALY per year isn’t worth dropping from 4.602056 QALY per year to 4.602049 QALY per year for the other nine hundred and ninety-eight million years. (Can you even fathom how long that is? From here, one billion years is all the way back to the Mesoproterozoic Era, which we think is when single-celled organisms first began to reproduce sexually. The gain is to be Mitt Romney for a year or two; the loss is the value of a dollar each year, over and over again, for the entire time that has elapsed since the existence of gamete meiosis.) I think it goes without saying that this whole situation is almost unimaginably bizarre. Yet that is implicitly what we’re assuming when we use expected utility theory to assess whether you should buy lottery tickets.
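The arithmetic behind those QALY figures can be checked directly. The sketch below assumes QALY per year equals the base-10 logarithm of annual income, which is the only assumption that reproduces the numbers in the text (log10 of $40,000 is about 4.602056, and log10 of $100 million is exactly 8):

```python
import math

# Assumption: QALY/year = log10(annual income), matching the text's figures.
def qaly(income):
    return math.log10(income)

p_win = 0.25e-8  # 0.000,000,25% expressed as a probability

safe = qaly(40_000)
risky = (1 - p_win) * qaly(39_999) + p_win * qaly(100_000_000)

# The jackpot term adds less than 1e-8 QALY per year, nowhere near
# the roughly 7e-6 QALY per year lost to the $1 pay cut.
print(safe > risky)  # True: take the guaranteed $40,000 every year
```

Per-year expected utility is the right yardstick here precisely because, over a billion years, you really do experience each outcome in proportion to its probability.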

The real situation is more like this: There’s one world you can end up in, and almost certainly will, in which you buy lottery tickets every year and end up with an income of $39,999 instead of $40,000. There is another world, so unlikely as to be barely worth considering, yet not totally impossible, in which you get $100 million and you are completely set for life and able to live however you want for the rest of your life. Averaging over those two worlds is a really weird thing to do; what do we even mean by doing that? You don’t experience one world 0.000,000,25% as much as the other (whereas in the billion-year scenario, that is exactly what you do); you only experience one world or the other.

In fact, it’s worse than this, because if a classical risk game is such that you can play it as many times as you want, as quickly as you want, we don’t even need expected utility theory—expected money theory will do. If you can play a game where you have a 50% chance of winning $200,000 and a 50% chance of losing $50,000, which you can play up to once an hour for the next 48 hours, and you will be extended any credit necessary to cover any losses, you’d be insane not to play; your 99.9% confidence interval for wealth at the end of the two days runs from $850,000 to $6,350,000. While you may lose money for a while, it is vanishingly unlikely that you will end up losing more than you gain.

Yet if you are offered the chance to play this game only once, you probably should not take it, and the reason comes back to expected utility. If you have good access to credit you might consider it, because going $50,000 into debt is bad but not unbearably so (I did, going to college), and gaining $200,000 might actually be enough better to justify the risk. Then the effect can be averaged over your lifetime: let’s say you make $50,000 per year over 40 years. Losing $50,000 makes your average income $48,750, while gaining $200,000 makes your average income $55,000; so your QALY per year go from a guaranteed 4.70 to a 50% chance of 4.69 and a 50% chance of 4.74, which raises your expected utility from 4.70 to 4.715.

But if you don’t have good access to credit and your income for this year is $50,000, then losing $50,000 means losing everything you have and living in poverty or even starving to death. The benefits of raising your income to $250,000 this year aren’t nearly great enough to take that chance: your expected utility goes from 4.70 to a 50% chance of 5.40 and a 50% chance of zero.
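Both cases can be reproduced with the same log-income utility. The 40-year horizon and the dollar amounts come from the text; assigning zero utility to the ruin outcome is the text’s own assumption, carried over here:

```python
import math

def qaly(income):
    # Assumption: QALY/year = log10(income); matches the text's 4.70 for $50,000.
    return math.log10(income)

# With credit: spread the one-shot gain or loss over 40 years at $50,000/yr.
lose = qaly((40 * 50_000 - 50_000) / 40)   # average income $48,750 -> ~4.69
win  = qaly((40 * 50_000 + 200_000) / 40)  # average income $55,000 -> ~4.74
with_credit = 0.5 * lose + 0.5 * win       # ~4.715 > 4.70: worth playing

# Without credit: this single year is either ruin (utility 0) or $250,000.
without_credit = 0.5 * 0.0 + 0.5 * qaly(250_000)  # ~2.70, far below 4.70

print(with_credit > qaly(50_000) > without_credit)  # True
```

The same gamble, at the same odds, flips from rational to irrational depending only on whether its consequences can be amortized, which is exactly the few-shot problem: expected utility gives an answer in both cases, but only the amortized case resembles the repeated play that justifies the averaging.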

So expected utility theory only seems to properly apply if we can play the game enough times that the improbable events are likely to happen a few times, but not so many times that we can be sure our money will approach the average. And that’s assuming we know the odds and we aren’t just stuck with uncertainty.

Unfortunately, I don’t have a good alternative; so far expected utility theory may actually be the best we have. But it remains deeply unsatisfying, and I like to think we’ll one day come up with something better.