We ignorant, incompetent gods

May 21 JDN 2460086

A review of Homo Deus

The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.

E.O. Wilson

Homo Deus is a very good read—and despite its length, a quick one; as you can see, I read it cover to cover in a week. Yuval Noah Harari’s central point is surely correct: Our technology is reaching a threshold where it grants us unprecedented power and forces us to ask what it means to be human.

Biotechnology and artificial intelligence are now advancing so rapidly that advancements in other domains, such as aerospace and nuclear energy, seem positively mundane. Who cares about making flight or electricity a bit cleaner when we will soon have the power to modify ourselves or we’ll all be replaced by machines?

Indeed, we already have technology that would have seemed to ancient people like the powers of gods. We can fly; we can witness or even control events thousands of miles away; we can destroy mountains; we can wipe out entire armies in an instant; we can even travel into outer space.

Harari rightly warns us that our not-so-distant descendants are likely to have powers that we would see as godlike: Immortality, superior intelligence, self-modification, the power to create life.

And while it is scary to think about what they might do with that power if they think the way we do—as ignorant and foolish and tribal as we are—Harari points out that it is equally scary to think about what they might do if they don’t think the way we do—for then, how do they think? If their minds are genetically modified or even artificially created, who will they be? What values will they have, if not ours? Could they be better? What if they’re worse?

It is of course difficult to imagine values better than our own—if we thought those values were better, we’d presumably adopt them. But we should seriously consider the possibility, since presumably most of us believe that our values today are better than what most people’s values were 1000 years ago. If moral progress continues, does it not follow that people’s values will be better still 1000 years from now? Or at least that they could be?

I also think Harari overestimates just how difficult it is to anticipate the future. This may be a useful overcorrection; the world is positively infested with people making overprecise predictions about the future, often selling them for exorbitant fees (note that Harari was quite well-compensated for this book as well!). But our values are not so fundamentally alien to those of our forebears, and we have reason to suspect that our descendants’ values will be no more different from our own.

For instance, do you think that medieval people thought suffering and death were good? I assure you they did not. Nor did they believe that the supreme purpose in life is eating cheese. (They didn’t even believe the Earth was flat!) They did not have the concept of GDP, but they could surely appreciate the value of economic prosperity.

Indeed, our world today looks very much like a medieval peasant’s vision of paradise. Boundless food in endless variety. Near-perfect security against violence. Robust health, free from nearly all infectious disease. Freedom of movement. Representation in government! The land of milk and honey is here; there they are, milk and honey on the shelves at Walmart.

Of course, our paradise comes with caveats: Not least, we are by no means free of toil, but instead have invented whole new kinds of toil they could scarcely have imagined. If anything, I would have to guess that coding a robot or recording a video lecture probably isn’t substantially more satisfying than harvesting wheat or smithing a sword; and reconciling receivables and formatting spreadsheets is surely less. Our tasks are physically much easier, but mentally much harder, and it’s not obvious which of those is preferable. And we are so very stressed! It’s honestly bizarre just how stressed we are, given the abundance in which we live; there is no reason for our lives to have stakes so high, and yet somehow they do. It is perhaps this stress and economic precarity that prevents us from feeling such joy as the medieval peasants would have imagined for us.

Of course, we don’t agree with our ancestors on everything. The medieval peasants were surely more religious, more ignorant, more misogynistic, more xenophobic, and more racist than we are. But projecting that trend forward mostly means less ignorance, less misogyny, less racism in the future; it means that future generations should see the whole world catch up to what the best of us already believe and strive for—hardly something to fear. The values that I believe in are surely not what we as a civilization act upon, and I sorely wish they were. Perhaps someday they will be.

I can even imagine something that I myself would recognize as better than me: Me, but less hypocritical. Strictly vegan rather than lacto-ovo-vegetarian, or at least more consistent about only buying free range organic animal products. More committed to ecological sustainability, more willing to sacrifice the conveniences of plastic and gasoline. Able to truly respect and appreciate all life, even humble insects. (Though perhaps still not mosquitoes; this is war. They kill more of us than any other animal, including us.) Not even casually or accidentally racist or sexist. More courageous, less burnt out and apathetic. I don’t always live up to my own ideals. Perhaps someday someone will.

Harari fears something much darker, that we will be forced to give up on humanist values and replace them with a new techno-religion he calls Dataism, in which the supreme value is efficient data processing. I see very little evidence of this. If it feels like data is worshipped these days, it is only because data is profitable. Amazon and Google constantly seek out ever richer datasets and ever faster processing because that is how they make money. The real subject of worship here is wealth, and that is nothing new. Maybe there are some die-hard techno-utopians out there who long for us all to join the unified oversoul of all optimized data processing, but I’ve never met one, and they are clearly not the majority. (Harari also uses the word ‘religion’ in an annoyingly overbroad sense; he refers to communism, liberalism, and fascism as ‘religions’. Ideologies, surely; but religions?)

Harari in fact seems to think that ideologies are strongly driven by economic structures, so maybe he would even agree that it’s about profit for now, but thinks it will become religion later. But I don’t really see history fitting this pattern all that well. If monotheism is directly tied to the formation of organized bureaucracy and national government, then how did Egypt and Rome last so long with polytheistic pantheons? If atheism is the natural outgrowth of industrialized capitalism, then why are Africa and South America taking so long to get the memo? I do think that economic circumstances can constrain culture and shift what sort of ideas become dominant, including religious ideas; but there clearly isn’t this one-to-one correspondence he imagines. Moreover, there was never Coalism or Oilism aside from the greedy acquisition of these commodities as part of a far more familiar ideology: capitalism.

He also claims that all of science is now, or is close to, following a united paradigm under which everything is a data processing algorithm, which suggests he has not met very many scientists. Our paradigms remain quite varied, thank you; and if they do all have certain features in common, it’s mainly things like rationality, naturalism and empiricism that are more or less inherent to science. It’s not even the case that all cognitive scientists believe in materialism (though it probably should be); there are still dualists out there.

Moreover, when it comes to values, most scientists believe in liberalism. This is especially true if we use Harari’s broad sense (on which mainline conservatives and libertarians are ‘liberal’ because they believe in liberty and human rights), but even in the narrow sense of center-left. We are by no means converging on a paradigm where human life has no value because it’s all just data processing; maybe some scientists believe that, but definitely not most of us. If scientists ran the world, I can’t promise everything would be better, but I can tell you that Bush and Trump would never have been elected and we’d have a much better climate policy in place by now.

I do share many of Harari’s fears of the rise of artificial intelligence. The world is clearly not ready for the massive economic disruption that AI is going to cause all too soon. We still define a person’s worth by their employment, and think of ourselves primarily as a collection of skills; but AI is going to make many of those skills obsolete, and may make many of us unemployable. It would behoove us to think in advance about who we truly are and what we truly want before that day comes. I used to think that creative intellectual professions would be relatively secure; ChatGPT and Midjourney changed my mind. Even writers and artists may not be safe much longer.

Harari is so good at sympathetically explaining other views that he takes it to a fault. At times it is actually difficult to know whether he himself believes something and wants you to, or if he is just steelmanning someone else’s worldview. There’s a whole section on ‘evolutionary humanism’ where he details a worldview that is at best Nietzschean and at worst Nazi, but he makes it sound so seductive. I don’t think it’s what he believes, in part because he has similarly good things to say about liberalism and socialism—but it’s honestly hard to tell.

The weakest part of the book is when Harari talks about free will. Like most people, he just doesn’t get compatibilism. He spends a whole chapter talking about how science ‘proves we have no free will’, and it’s just the same old tired arguments hard determinists have always made.

He talks about how we can make choices based on our desires, but we can’t choose our desires; well of course we can’t! What would that even mean? If you could choose your desires, what would you choose them based on, if not your desires? Your desire-desires? Well, then, can you choose your desire-desires? What about your desire-desire-desires?

What even is this ultimate uncaused freedom that libertarian free will is supposed to consist in? No one seems capable of even defining it. (I’d say Kant got the closest: He defined it as the capacity to act based upon what ought rather than what is. But of course what we believe about ‘ought’ is fundamentally stored in our brains as a particular state, a way things are—so in the end, it’s an ‘is’ we act on after all.)

Maybe before you lament that something doesn’t exist, you should at least be able to describe that thing as a coherent concept? Woe is me, that 2 plus 2 is not equal to 5!

It is true that as our technology advances, manipulating other people’s desires will become more and more feasible. Harari overstates the case on so-called robo-rats; they aren’t really mind-controlled, it’s more like they are rewarded and punished. The rat chooses to go left because she knows you’ll make her feel good if she does; she’s still freely choosing to go left. (Dangling a carrot in front of a horse is fundamentally the same thing—and frankly, paying a wage isn’t all that different.) The day may yet come where stronger forms of control become feasible, and woe betide us when it does. Yet this is no threat to the concept of free will; we already knew that coercion was possible, and mind control is simply a more precise form of coercion.

Harari reports on a lot of interesting findings in neuroscience, which are important for people to know about, but they do not actually show that free will is an illusion. What they do show is that free will is thornier than most people imagine. Our desires are not fully unified; we are often ‘of two minds’ in a surprisingly literal sense. We are often tempted by things we know are wrong. We often aren’t sure what we really want. Every individual is in fact quite divisible; we literally contain multitudes.

We do need a richer account of moral responsibility that can deal with the fact that human beings often feel multiple conflicting desires simultaneously, and often experience events differently than we later go on to remember them. But at the end of the day, human consciousness is mostly unified, our choices are mostly rational, and our basic account of moral responsibility is mostly valid.

I think for now we should perhaps be less worried about what may come in the distant future, what sort of godlike powers our descendants may have—and more worried about what we are doing with the godlike powers we already have. We have the power to feed the world; why aren’t we? We have the power to save millions from disease; why don’t we? I don’t see many people blindly following this ‘Dataism’, but I do see an awful lot blindly following a 19th-century vision of capitalism.

And perhaps if we straighten ourselves out, the future will be in better hands.

Reckoning costs in money distorts them

May 7 JDN 2460072

Consider for a moment what it means when an economic news article reports “rising labor costs”. What are they actually saying?

They’re saying that wages are rising—perhaps in some industry, perhaps in the economy as a whole. But this is not a cost. It’s a price. As I’ve written about before, the two are fundamentally distinct.

The cost of labor is measured in effort, toil, and time. It’s the pain of having to work instead of whatever else you’d like to do with your time.

The price of labor is a monetary amount, which is delivered in a transaction.

This may seem perfectly obvious, but it has important and oft-neglected implications. A cost, once paid, is gone. That value has been destroyed. We hope that it was worth it for some benefit we gained. A price, when paid, is simply transferred: One person had that money before, now someone else has it. Nothing was gained or lost.

So in fact when reports say that “labor costs have risen”, what they are really saying is that income is being transferred from owners to workers without any change in real value taking place. They are framing as a loss what is fundamentally a zero-sum redistribution.

In fact, it is disturbingly common to see a fundamentally good redistribution of income framed in the press as a bad outcome because of its expression as “costs”; the “cost” of chocolate is feared to go up if we insist upon enforcing bans on forced labor—when in fact it is only the price that goes up, and the cost actually goes down: chocolate would no longer include complicity in an atrocity. The real suffering of making chocolate would be thereby reduced, not increased. Even when they aren’t literally enslaved, those workers are astonishingly poor, and giving them even a few more cents per hour would make a real difference in their lives. But God forbid we pay a few cents more for a candy bar!

If labor costs were to rise, that would mean that work had suddenly gotten harder, or more painful; or else, that some outside circumstance had made it more difficult to work. Having a child increases your labor costs—you now have the opportunity cost of not caring for the child. COVID increased the cost of labor, by making it suddenly dangerous just to go outside in public. That could also increase prices—you may demand a higher wage, and people do seem to have demanded higher wages after COVID. But these are two separate effects, and you can have one without the other. In fact, women typically see wage stagnation or even reduction after having kids (but men largely don’t), despite their real opportunity cost of labor having obviously greatly increased.

On an individual level, it’s not such a big mistake to equate price and cost. If you are buying something, its cost to you basically just is its price, plus a little bit of transaction cost for actually finding and buying it. But on a societal level, it makes an enormous difference. It distorts our policy priorities and can even lead to actively trying to suppress things that are beneficial—such as rising wages.

This false equivalence between price and cost seems to be at least as common among economists as it is among laypeople. Economists will often justify it on the grounds that in an ideal perfect competitive market the two would be in some sense equated. But of course we don’t live in that ideal perfect market, and even if we did, they would only be proportional at the margin, not fundamentally equal across the board. It would still be obviously wrong to characterize the total value or cost of work by the price paid for it; only the last unit of effort would be priced so that marginal value equals price equals marginal cost. The first 39 hours of your work would cost you less than what you were paid, and produce more than you were paid; only that 40th hour would set the three equal.
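That marginal point is easy to see numerically. Here is a minimal sketch, with an entirely hypothetical wage and cost-of-effort function chosen only so that the margin lands at hour 40:

```python
wage = 20.0  # hypothetical price of an hour of labor

def marginal_cost(h):
    # Hypothetical rising disutility of the h-th hour worked
    return wage * h / 40

# Work every hour whose marginal cost is at most the wage
hours_worked = max(h for h in range(1, 81) if marginal_cost(h) <= wage)

# Surplus on the first 39 hours: each is paid more than it cost
surplus = sum(wage - marginal_cost(h) for h in range(1, hours_worked))

print(hours_worked)  # 40 — only here does marginal cost equal the price
print(surplus)       # 390.0 — total cost and total price are far apart
```

Even in this perfectly well-behaved market, the total paid ($800) and the total cost in effort ($410 worth) diverge; equality holds only at the last hour.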

Once you account for all the various market distortions in the world, there’s no particular relationship between what something costs—in terms of real effort and suffering—and its price—in monetary terms. Things can be expensive and easy, or cheap and awful. In fact, they often seem to be; for some reason, there seems to be a pattern where the most terrible, miserable jobs (e.g. coal mining) actually pay the least, and the easiest, most pleasant jobs (e.g. stock trading) pay the most. Some jobs that benefit society pay well (e.g. doctors) and others pay terribly or not at all (e.g. climate activists). Some actions that harm the world get punished (e.g. armed robbery) and others get rewarded with riches (e.g. oil drilling). In the real world, whether a job is good or bad and whether it is paid well or poorly seem to be almost unrelated.

In fact, sometimes they seem even negatively related, where we often feel tempted to “sell out” and do something destructive in order to get higher pay. This is likely due to Berkson’s paradox: If people are willing to do jobs if they are either high-paying or beneficial to humanity, then we should expect that, on average, most of the high-paying jobs people do won’t be beneficial to humanity. Even if there were inherently no correlation or a small positive one, people’s refusal to do harmful low-paying work removes those jobs from our sample and results in a negative correlation in what remains.

I think that the best solution, ultimately, is to stop reckoning costs in money entirely. We should reckon them in happiness.

This is of course much more difficult than simply using prices; it’s not easy to say exactly how many QALY are sacrificed in the extraction of cocoa beans or the drilling of offshore oil wells. But if we actually did find a way to count them, I strongly suspect we’d find that it was far more than we ought to be willing to pay.

A very rough approximation, surely flawed but at least a start, would be to simply convert all payments into proportions of their recipient’s income: For full-time wages, this would result in basically everyone being counted the same, as 1 hour of work if you work 40 hours per week, 50 weeks per year is precisely 0.05% of your annual income. So we could say that whatever is equivalent to your hourly wage constitutes 500 microQALY.
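A minimal sketch of this conversion (the function name is mine; the figures reuse the examples discussed in this post):

```python
def payment_in_qaly(amount, annual_income):
    """Rough welfare weight of a payment: its size as a fraction
    of the recipient's (or payer's) annual income, read as QALY."""
    return amount / annual_income

# One hour of full-time work: 1 / (40 h/wk * 50 wk/yr) of annual income
print(payment_in_qaly(1, 40 * 50) * 1e6)    # 500.0 microQALY

# HSBC's £70 million fine against £1.5 billion in net income
print(payment_in_qaly(70e6, 1.5e9) * 1e3)   # ~47 milliQALY

# Trump's $1.3 million settlement against ~$125 million investment income
print(payment_in_qaly(1.3e6, 125e6) * 1e3)  # ~10 milliQALY
```

The same dollar figure thus lands with wildly different force depending on whose income it is measured against, which is exactly the point.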

This automatically implies that every time a rich person pays a poor person, QALY increase, while every time a poor person pays a rich person, QALY decrease. This is not an error in the calculation. It is a fact of the universe. We ignore it only at our own peril. All wealth redistributed downward is a benefit, while all wealth redistributed upward is a harm. That benefit may cause some other harm, or that harm may be compensated by some other benefit; but they are still there.

This would also put some things in perspective. When HSBC was fined £70 million for its crimes, that can be compared against its £1.5 billion in net income; if it were an individual, it would have been hurt about 50 milliQALY, which is about what I would feel if I lost $2000. Of course, it’s not a person, and it’s not clear exactly how this loss was passed through to employees or shareholders; but that should give us at least some sense of how small that loss was for them. They probably felt it… a little.

When Trump was ordered to pay a $1.3 million settlement, based on his $2.5 billion net wealth (corresponding to roughly $125 million in annual investment income), that cost him about 10 milliQALY; for me that would be about $500.

At the other extreme, if someone goes from making $1 per day to making $1.50 per day, that’s a 50% increase in their income—500 milliQALY per year.

For those who have no income at all, this becomes even trickier; for them I think we should probably use their annual consumption, since everyone needs to eat and that costs something, though likely not very much. Or we could try to measure their happiness directly, trying to determine how much it hurts to not eat enough and work all day in sweltering heat.

Properly shifting this whole cultural norm will take a long time. For now, I leave you with this: Any time you see a monetary figure, ask yourself: “How much is that worth to them?” The world will seem quite different once you get in the habit of that.

What behavioral economics needs

Apr 16 JDN 2460049

The transition from neoclassical to behavioral economics has been a vital step forward in science. But lately we seem to have reached a plateau, with no major advances in the paradigm in quite some time.

It could be that there is work already being done which will, in hindsight, turn out to be significant enough to make that next step forward. But my fear is that we are getting bogged down by our own methodological limitations.

Neoclassical economics shared with us its obsession with mathematical sophistication. To some extent this was inevitable; in order to impress neoclassical economists enough to convert some of them, we had to use fancy math. We had to show that we could do it their way in order to convince them why we shouldn’t—otherwise, they’d just have dismissed us the way they had dismissed psychologists for decades, as too “fuzzy-headed” to do the “hard work” of putting everything into equations.

But the truth is, putting everything into equations was never the right approach. Because human beings clearly don’t think in equations. Once we write down a utility function and get ready to take its derivative and set it equal to zero, we have already distanced ourselves from how human thought actually works.

When dealing with a simple physical system, like an atom, equations make sense. Nobody thinks that the electron knows the equation and is following it intentionally. That equation simply describes how the forces of the universe operate, and the electron is subject to those forces.

But human beings do actually know things and do things intentionally. And while an equation could be useful for analyzing human behavior in the aggregate—I’m certainly not objecting to statistical analysis—it really never made sense to say that people make their decisions by optimizing the value of some function. Most people barely even know what a function is, much less remember calculus well enough to optimize one.

Yet right now, behavioral economics is still all based in that utility-maximization paradigm. We don’t use the same simplistic utility functions as neoclassical economists; we make them more sophisticated and realistic. Yet in that very sophistication we make things more complicated, more difficult—and thus in at least that respect, even further removed from how actual human thought must operate.

The worst offender here is surely Prospect Theory. I recognize that Prospect Theory predicts human behavior better than conventional expected utility theory; nevertheless, it makes absolutely no sense to suppose that human beings actually do some kind of probability-weighting calculation in their heads when they make judgments. Most of my students—who are well-trained in mathematics and economics—can’t even do that probability-weighting calculation on paper, with a calculator, on an exam. (There’s also absolutely no reason to do it! All it does is make your decisions worse!) This is a totally unrealistic model of human thought.
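For the record, here is the sort of calculation Prospect Theory posits—the Tversky–Kahneman (1992) probability weighting function, with their fitted parameter γ ≈ 0.61. Judge for yourself whether anyone does this in their head:

```python
def prospect_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Small probabilities get overweighted, large ones underweighted:
print(prospect_weight(0.01))  # ~0.055, not 0.01
print(prospect_weight(0.90))  # ~0.71, not 0.90
```

The function fits lab data well, but as a claim about cognitive mechanism, it asks people to exponentiate probabilities to a fractional power on the fly.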

This is not to say that human beings are stupid. We are still smarter than any other entity in the known universe—computers are rapidly catching up, but they haven’t caught up yet. It is just that whatever makes us smart must not be easily expressible as an equation that maximizes a function. Our thoughts are bundles of heuristics, each of which may be individually quite simple, but all of which together make us capable of not only intelligence, but something computers still sorely, pathetically lack: wisdom. Computers optimize functions better than we ever will, but we still make better decisions than they do.

I think that what behavioral economics needs now is a new unifying theory of these heuristics, which accounts for not only how they work, but how we select which one to use in a given situation, and perhaps even where they come from in the first place. This new theory will of course be complex; there’s a lot of things to explain, and human behavior is a very complex phenomenon. But it shouldn’t be—mustn’t be—reliant on sophisticated advanced mathematics, because most people can’t do advanced mathematics (almost by construction—we would call it something different otherwise). If your model assumes that people are taking derivatives in their heads, your model is already broken. 90% of the world’s people can’t take a derivative.

I guess it could be that our cognitive processes in some sense operate as if they are optimizing some function. This is commonly posited for the human motor system, for instance; clearly baseball players aren’t actually solving differential equations when they throw and catch balls, but the trajectories that balls follow do in fact obey such equations, and the reliability with which baseball players can catch and throw suggests that they are in some sense acting as if they can solve them.

But I think that a careful analysis of even this classic example reveals some deeper insights that should call this whole notion into question. How do baseball players actually do what they do? They don’t seem to be calculating at all—in fact, if you asked them to try to calculate while they were playing, it would destroy their ability to play. They learn. They engage in practiced motions, acquire skills, and notice patterns. I don’t think there is anywhere in their brains that is actually doing anything like solving a differential equation. It’s all a process of throwing and catching, throwing and catching, over and over again, watching and remembering and subtly adjusting.

One thing that is particularly interesting to me about that process is that it is astonishingly flexible. It doesn’t really seem to matter what physical process you are interacting with; as long as it is sufficiently orderly, such a method will allow you to predict and ultimately control that process. You don’t need to know anything about differential equations in order to learn in this way—and, indeed, I really can’t emphasize this enough, baseball players typically don’t.
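That kind of learning is easy to sketch. In this toy model (the “physics” function is entirely made up, standing in for drag, spin, and everything else), a learner repeatedly throws, sees where the ball lands, and nudges its angle—and it converges on the right throw without ever seeing the governing equation:

```python
def landing_spot(angle):
    # Physics the learner never sees: a made-up law of where the ball lands
    return 10 * angle * (2 - angle)

target = 9.0
angle = 0.1
for _ in range(1000):
    miss = target - landing_spot(angle)
    angle += 0.01 * miss  # nudge in whichever direction shrinks the miss

print(abs(landing_spot(angle) - target))  # ~0: on target, no equations solved
```

Swap in any other sufficiently orderly `landing_spot` and the same loop still works—that is the flexibility: the procedure knows nothing about the system it is mastering.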

In fact, learning is so flexible that it can even perform better than calculation. The usual differential equations most people would think to use to predict the throw of a ball would assume ballistic motion in a vacuum, which is absolutely not what a curveball is. In order to throw a curveball, the ball must interact with the air, and it must be launched with spin; curving a baseball relies very heavily on the Magnus Effect. I think it’s probably possible to construct an equation that would fully predict the motion of a curveball, but it would be a tremendously complicated one, and might not even have an exact closed-form solution. In fact, I think it would require solving the Navier-Stokes equations, for which there is an outstanding Millennium Prize. Since the viscosity of air is very low, maybe you could get away with approximating using the Euler fluid equations.

To be fair, a learning process that is adapting to a system that obeys an equation will yield results that become an ever-closer approximation of that equation. And it is in that sense that a baseball player can be said to be acting as if solving a differential equation. But this relies heavily on the system in question being one that obeys an equation—and when it comes to economic systems, is that even true?

What if the reason we can’t find a simple set of equations that accurately describe the economy (as opposed to equations of ever-escalating complexity that still utterly fail to describe the economy) is that there isn’t one? What if the reason we can’t find the utility function people are maximizing is that they aren’t maximizing anything?

What behavioral economics needs now is a new approach, something less constrained by the norms of neoclassical economics and more aligned with psychology and cognitive science. We should be modeling human beings based on how they actually think, not some weird mathematical construct that bears no resemblance to human reasoning but is designed to impress people who are obsessed with math.

I’m of course not the first person to have suggested this. I probably won’t be the last, or even the one who most gets listened to. But I hope that I might get at least a few more people to listen to it, because I have gone through the mathematical gauntlet and earned my bona fides. It is too easy to dismiss this kind of reasoning from people who don’t actually understand advanced mathematics. But I do understand differential equations—and I’m telling you, that’s not how people think.

Optimization is unstable. Maybe that’s why we satisfice.

Feb 26 JDN 2460002

Imagine you have become stranded on a deserted island. You need to find shelter, food, and water, and then perhaps you can start working on a way to get help or escape the island.

Suppose you are programmed to be an optimizer: to get the absolute best solution to any problem. At first this may seem to be a boon: You’ll build the best shelter, find the best food, get the best water, find the best way off the island.

But you’ll also expend an enormous amount of effort trying to make it the best. You could spend hours just trying to decide what the best possible shelter would be. You could pass up dozens of viable food sources because you aren’t sure that any of them are the best. And you’ll never get any rest because you’re constantly trying to improve everything.

In principle your optimization could include that: The cost of thinking too hard or searching too long could be one of the things you are optimizing over. But in practice, this sort of bounded optimization is often remarkably intractable.

And what if you forgot about something? You were so busy optimizing your shelter you forgot to treat your wounds. You were so busy seeking out the perfect food source that you didn’t realize you’d been bitten by a venomous snake.

This is not the way to survive. You don’t want to be an optimizer.

No, the person who survives is a satisficer: they make sure that what they have is good enough and then they move on to the next thing. Their shelter is lopsided and ugly. Their food is tasteless and bland. Their water is hard. But they have them.

Once they have shelter and food and water, they will have time and energy to do other things. They will notice the snakebite. They will treat the wound. Once all their needs are met, they will get enough rest.

Empirically, humans are satisficers. We seem to be happier because of it—in fact, the people who are the happiest satisfice the most. And really this shouldn’t be so surprising: our ancestral environment wasn’t so different from being stranded on a deserted island.
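The computational difference is easy to see. In this sketch (the threshold and numbers are arbitrary), the optimizer must examine every option to certify the best one, while the satisficer stops at the first option that clears a “good enough” bar:

```python
import random

random.seed(1)
options = [random.random() for _ in range(1000)]  # quality of each option

# Optimizer: must examine all 1000 options to be sure it found the best
best = max(options)
optimizer_search_cost = len(options)

# Satisficer: takes the first option above the "good enough" bar
threshold = 0.9
satisficer_search_cost = next(
    i + 1 for i, quality in enumerate(options) if quality >= threshold
)

print(optimizer_search_cost)   # 1000
print(satisficer_search_cost)  # a handful — far less search for a decent result
```

With a bar this high, the satisficer still walks away with a top-decile option, having paid a tiny fraction of the optimizer’s search cost—and the gap only grows as the option set does.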

Good enough is perfect. Perfect is bad.

Let’s consider another example. Suppose that you have created a powerful artificial intelligence, an AGI with the capacity to surpass human reasoning. (It hasn’t happened yet—but it probably will someday, and maybe sooner than most people think.)

What do you want that AI’s goals to be?

Okay, ideally maybe they would be something like “Maximize goodness”, where we actually somehow include all the panoply of different factors that go into goodness, like beneficence, harm, fairness, justice, kindness, honesty, and autonomy. Do you have any idea how to do that? Do you even know what your own full moral framework looks like at that level of detail?

Far more likely, the goals you program into the AGI will be much simpler than that. You’ll have something you want it to accomplish, and you’ll tell it to do that well.

Let’s make this concrete and say that you own a paperclip company. You want to make more profits by selling paperclips.

First of all, let me note that this is not an unreasonable thing for you to want. It is not an inherently evil goal for one to have. The world needs paperclips, and it’s perfectly reasonable for you to want to make a profit selling them.

But it’s also not a true ultimate goal: There are a lot of other things that matter in life besides profits and paperclips. Anyone who isn’t a complete psychopath will realize that.

But the AI won’t. Not unless you tell it to. And so if we tell it to optimize, we would need to actually include in its optimization all of the things we genuinely care about—not missing a single one—or else whatever choices it makes are probably not going to be the ones we want. Oops, we forgot to say we need clean air, and now we’re all suffocating. Oops, we forgot to say that puppies don’t like to be melted down into plastic.

The simplest cases to consider are obviously horrific: Tell it to maximize the number of paperclips produced, and it starts tearing the world apart to convert everything to paperclips. (This is the original “paperclipper” concept from Less Wrong.) Tell it to maximize the amount of money you make, and it seizes control of all the world’s central banks and starts printing $9 quintillion for itself. (Why that amount? I’m assuming it uses 64-bit signed integers, and 2^63 is over 9 quintillion. If it uses long ints, we’re even more doomed.) No, inflation-adjusting won’t fix that; even hyperinflation typically still results in more real seigniorage for the central banks doing the printing (which is, you know, why they do it). The AI won’t ever be able to own more than all the world’s real GDP—but it will be able to own that if it prints enough and we can’t stop it.
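The 64-bit arithmetic behind that figure is easy to check:

```python
# The largest value a 64-bit signed integer can hold is 2^63 - 1.
INT64_MAX = 2**63 - 1
print(INT64_MAX)  # 9223372036854775807, a bit over 9.2 quintillion
```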

But even if we try to come up with some more sophisticated optimization for it to perform (what I’m really talking about here is specifying its utility function), it becomes vital for us to include everything we genuinely care about: Anything we forget to include will be treated as a resource to be consumed in the service of maximizing everything else.

Consider instead what would happen if we programmed the AI to satisfice. The goal would be something like, “Produce at least 400,000 paperclips at a price of at most $0.002 per paperclip.”

Given such an instruction, in all likelihood, it would in fact produce exactly 400,000 paperclips at a price of exactly $0.002 per paperclip. And maybe that’s not strictly the best outcome for your company. But if it’s better than what you were previously doing, it will still increase your profits.
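As a toy sketch of that instruction (the plan names and numbers are invented), a satisficing controller simply takes the first plan that meets the spec, in order of escalating drasticness:

```python
def satisfice_plan(plans, quota, max_unit_cost):
    """Accept the first plan meeting the quota at or under the cost cap."""
    for plan in plans:
        if plan["units"] >= quota and plan["unit_cost"] <= max_unit_cost:
            return plan
    return None  # only if nothing qualifies would it need to get creative

# Invented candidate plans, from least to most drastic.
plans = [
    {"name": "current machinery",  "units": 200_000, "unit_cost": 0.004},
    {"name": "modified machinery", "units": 400_000, "unit_cost": 0.002},
    {"name": "convert the planet", "units": 10**15,  "unit_cost": 0.0001},
]

satisfice_plan(plans, quota=400_000, max_unit_cost=0.002)
# -> the "modified machinery" plan; the drastic one is never even reached
```

An optimizer, by contrast, would always prefer the last plan, because it produces the most paperclips at the lowest unit cost.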

Moreover, such an instruction is far less likely to result in the end of the world.

If the AI has a particular target to meet for its production quota and price limit, the first thing it would probably try is to use your existing machinery. If that’s not good enough, it might start trying to modify the machinery, or acquire new machines, or develop its own techniques for making paperclips. But there are quite strict limits on how creative it is likely to be—because there are quite strict limits on how creative it needs to be. If you were previously producing 200,000 paperclips at $0.004 per paperclip, all it needs to do is double production and halve the cost. That’s a very standard sort of industrial innovation—in computing hardware (admittedly an extreme case), we do this sort of thing every couple of years.

It certainly won’t tear the world apart making paperclips—at most it’ll tear apart enough of the world to make 400,000 paperclips, which is a pretty small chunk of the world, because paperclips aren’t that big. A paperclip weighs about a gram, so you’ve only destroyed about 400 kilos of stuff. (You might even survive the lawsuits!)

Are you leaving money on the table relative to the optimization scenario? Eh, maybe. One, it’s a small price to pay for not ending the world. But two, if 400,000 at $0.002 was too easy, next time try 600,000 at $0.001. Over time, you can gently increase its quotas and tighten its price requirements until your company becomes more and more successful—all without risking the AI going completely rogue and doing something insane and destructive.

Of course this is no guarantee of safety—and I absolutely want us to use every safeguard we possibly can when it comes to advanced AGI. But the simple change from optimizing to satisficing seems to solve the most severe problems immediately and reliably, at very little cost.

Good enough is perfect; perfect is bad.

I see broader implications here for behavioral economics. When all of our models are based on optimization, but human beings overwhelmingly seem to satisfice, maybe it’s time to stop assuming that the models are right and the humans are wrong.

Optimization is perfect if it works—and awful if it doesn’t. Satisficing is always pretty good. Optimization is unstable, while satisficing is robust.

In the real world, that probably means that satisficing is better.

Good enough is perfect; perfect is bad.

Where is the money going in academia?

Feb 19 JDN 2459995

A quandary for you:

My salary is £41,000.

Annual tuition for a full-time full-fee student in my department is £23,000.

I teach roughly the equivalent of one full-time course (about 1/2 of one and 1/4 of two others; this is typically counted as “teaching 3 courses”, but if I used that figure, it would underestimate the number of faculty needed).

Each student takes about 5 or 6 courses at a time.

Why do I have 200 students?

If you multiply this out, the 200 students I teach, divided by the 6 instructors they have at one time, times the £23,000 they are paying… I should be bringing in over £760,000 for the university. Why am I paid only 5% of that?

Granted, there are other costs a university must bear aside from paying instructors. There are facilities, and administration, and services. And most of my students are not full-fee paying; that £23,000 figure really only applies to international students.

Students from Scotland pay only £1,820, but there aren’t very many of them, and public funding is supposed to make up that difference. Even students from the rest of the UK pay £9,250. And surely the average tuition paid has got to be close to that? Yet if we multiply that out, £9,000 times 200 divided by 6, we’re still looking at £300,000. So I’m still getting only 14%.
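For anyone who wants to check the arithmetic, here it is with the figures above (fees rounded as in the text):

```python
students = 200
courses_per_student = 6
my_salary = 41_000  # £ per year

def revenue_per_instructor(annual_fee):
    """Tuition attributable to one full-time instructor's share of teaching."""
    return students * annual_fee / courses_per_student

revenue_per_instructor(23_000)            # international fee: ~£766,667
revenue_per_instructor(9_000)             # rest-of-UK fee, rounded: £300,000
my_salary / revenue_per_instructor(9_000) # ~0.14, i.e. about 14%
```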

Where is the rest going?

This isn’t specific to my university by any means. It seems to be a global phenomenon. The best data on this seems to be from the US.

According to salary.com, the median salary for an adjunct professor in the US is about $63,000. This actually sounds high, given what I’ve heard from other entry-level faculty. But okay, let’s take that as our figure. (My pay is below this average, though how much depends upon the strength of the pound against the dollar. Currently the pound is weak, so quite a bit.)

Yet average tuition for out-of-state students at public college is $23,000 per year.

This means that an adjunct professor in the US with 200 students takes in $760,000 but receives $63,000. Where does that other $700,000 go?

If you think that it’s just a matter of paying for buildings, service staff, and other costs of running a university, consider this: It wasn’t always this way.

Since 1970, inflation-adjusted salaries for US academic faculty at public universities have risen a paltry 3.1%. In other words, basically not at all.

This is considerably slower than the growth of real median household income, which has risen almost 40% in that same time.

Over the same interval, nominal tuition has risen by over 2000%; adjusted for inflation, this is a still-staggering increase of 250%.

In other words, over the last 50 years, college has gotten three times as expensive, but faculty are still paid basically the same. Where is all this extra money going?

Part of the explanation is that public funding for colleges has fallen over time, and higher tuition partly makes up the difference. But private school tuition has risen just as fast, and their faculty salaries haven’t kept up either.

In their annual budget report, the University of Edinburgh proudly declares that their income increased by 9% last year. Let me assure you, my salary did not. (In fact, inflation-adjusted, my salary went down.) And their EBITDA—earnings before interest, taxes, depreciation, and amortization—was £168 million. Of that, £92 million was lost to interest and depreciation, but they don’t pay taxes at all, so their real net income was about £76 million. In the report, they include price changes of their endowment and pension funds to try to make this number look smaller, ending up with only £37 million, but that’s basically fiction; these are just stock market price drops, and they will bounce back.
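Stripping out the financial alchemy, the net-income figure works out as follows (all amounts in £ millions, taken from the budget report as described above):

```python
ebitda = 168                   # earnings before interest, taxes, depreciation, amortization
interest_and_depreciation = 92 # lost to interest and depreciation
taxes = 0                      # universities don't pay taxes
net_income = ebitda - interest_and_depreciation - taxes
print(net_income)  # 76
```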

Using similar financial alchemy, they’ve been trying to cut our pensions lately, because they say they “are too expensive” (because the stock market went down—nevermind that it’ll bounce back in a year or two). Fortunately, the unions are fighting this pretty hard. I wish they’d also fight harder to make them put people like me on the tenure track.

Had that £76 million been distributed evenly among all 5,000 of us faculty, we’d each get an extra £15,200.

Well, then, that solves part of the mystery in perhaps the most obvious, corrupt way possible: They’re literally just hoarding it.

And Edinburgh is far from the worst offender here. No, that would be Harvard, who are sitting on over $50 billion in assets. Since they have 21,000 students, that is over $2 million per student. With even a moderate return on its endowment, Harvard wouldn’t need to charge tuition at all.

But even then, raising my salary to £56,000 wouldn’t explain why I need to teach 200 students. Even that is still only 19% of the £300,000 those students are bringing in. But hey, then at least the primary service those students are here for might actually account for one-fifth of what they’re paying!

Now let’s consider administrators. Median salary for a university administrator in the US is about $138,000—twice what adjunct professors make.


Since 1970, that same time interval when faculty salaries were rising a pitiful 3% and tuition was rising a staggering 250%, how much did chancellors’ salaries increase? Over 60%.

Of course, the number of administrators is not fixed. You might imagine that with technology allowing us to automate a lot of administrative tasks, the number of administrators could be reduced over time. If that’s what you thought happened, you would be very, very wrong. The number of university administrators in the US has more than doubled since the 1980s. This is far faster growth than the number of students—and quite frankly, why should the number of administrators even grow with the number of students? There is a clear economy of scale here, yet it doesn’t seem to matter.

Combine those two facts: 60% higher pay times twice as many administrators means that universities now spend at least 3 times as much on administration as they did 50 years ago. (Why, that’s just about the proportional increase in tuition! Coincidence? I think not.)

Edinburgh isn’t even so bad in this regard. They have 6,000 administrative staff versus 5,000 faculty. If that already sounds crazy—more admins than instructors?—consider that the University of Michigan has 7,000 faculty but 19,000 administrators.

Michigan is hardly exceptional in this regard: Illinois UC has 2,500 faculty but nearly 8,000 administrators, while Ohio State has 7,300 faculty and 27,000 administrators. UCLA is even worse, with only 4,000 faculty but 26,000 administrators—a ratio of more than 6 to 1. It’s not the UC system in general, though: My (other?) alma mater of UC Irvine somehow supports 5,600 faculty with only 6,400 administrators. Yes, that’s right; compared to UCLA, UCI has 40% more faculty but 75% fewer administrators. (As far as students? UCLA has 47,000 while UCI has 36,000.)
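A quick check of those administrator-to-faculty ratios, using the headcounts cited above (Illinois UC’s admin figure rounded to 8,000):

```python
# (faculty, administrators) headcounts as cited above
schools = {
    "Edinburgh":   (5_000, 6_000),
    "Michigan":    (7_000, 19_000),
    "Illinois UC": (2_500, 8_000),
    "Ohio State":  (7_300, 27_000),
    "UCLA":        (4_000, 26_000),
    "UC Irvine":   (5_600, 6_400),
}
for name, (faculty, admins) in schools.items():
    print(f"{name}: {admins / faculty:.1f} administrators per faculty member")
```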

At last, I think we’ve solved the mystery! Where is all the money in academia going? Administrators.

They keep hiring more and more of them, and paying them higher and higher salaries. Meanwhile, they stop hiring tenure-track faculty and replace them with adjuncts that they can get away with paying less. And then, whatever they manage to save that way, they just squirrel away into the endowment.

A common right-wing talking point is that more institutions should be “run like a business”. Well, universities seem to have taken that to heart. Overpay your managers, underpay your actual workers, and pocket the savings.

The role of police in society

Feb 12 JDN 2459988

What do the police do? Not in theory, in practice. Not what are they supposed to do—what do they actually do?

Ask someone right-wing and they’ll say something like “uphold the law”. Ask someone left-wing and they’ll say something like “protect the interests of the rich”. Both of these are clearly inaccurate. They don’t fit the pattern of how the police actually behave.

What is that pattern? Well, let’s consider some examples.

If you rob a bank, the police will definitely arrest you. That would be consistent with either upholding the law or protecting the interests of the rich, so it’s not a very useful example.

If you run a business with unsafe, illegal working conditions, and someone tells the police about it, the police will basically ignore it and do nothing. At best they might forward it to some regulatory agency who might at some point get around to issuing a fine.

If you strike against your unsafe working conditions and someone calls the police to break up your picket line, they’ll immediately come in force and break up your picket line.

So that definitively refutes the “uphold the law” theory; by ignoring OSHA violations and breaking up legal strikes, the police are actively making it harder to enforce the law. It seems to fit the “protect the interests of the rich” theory. Let’s try some other examples.

If you run a fraudulent business that cons people out of millions of dollars, the police might arrest you, eventually, if they ever actually bother to get around to investigating the fraud. That certainly doesn’t look like upholding the law—but you can get very rich and they’ll still arrest you, as Bernie Madoff discovered. So being rich doesn’t grant absolute immunity from the police.

If your negligence in managing the safety systems of your factory or oil rig kills a dozen people, the police will do absolutely nothing. Some regulatory agency may eventually get around to issuing you a fine. That also looks like protecting the interests of the rich. So far the left-wing theory is holding up.

If you are homeless and camping out on city property, the police will often come to remove you. Sometimes there’s a law against such camping, but there isn’t always; and even when there is, the level of force used often seems wildly disproportionate to the infraction. This also seems to support the left-wing account.

But now suppose you go out and murder several homeless people. That is, if anything, advancing the interests of the rich; it’s certainly not harming them. Yet the police would in fact investigate. It might be low on their priorities, especially if they have a lot of other homicides; but they would, in fact, investigate it and ultimately arrest you. That doesn’t look like advancing the interests of the rich. It looks a lot more like upholding the law, in fact.

Or suppose you are the CEO of a fraudulent company that is about to be revealed and thus collapse, and instead of accepting the outcome or absconding to the Caribbean (as any sane rich psychopath would), you decide to take some SEC officials hostage and demand that they certify your business as legitimate. Are the police going to take that lying down? No. They’re going to consider you a terrorist, and go in guns blazing. So they don’t just protect the interests of the rich after all; that also looks a lot like they’re upholding the law.

I didn’t even express this as the left-wing view earlier, because I’m trying to steelman that view; but there are also those on the left who would say that the primary function of the police is to uphold White supremacy. I’d be a fool to deny that there are a lot of White supremacist cops; but notice that in the above scenarios I didn’t even specify the race of the people involved, and didn’t have to. The cops are no more likely to arrest a fraudulent banker because he’s Black, and no more likely to let a hostage-taker go free because he’s White. (They might be less likely to shoot the White hostage-taker—maybe, the data on that actually isn’t as clear-cut as people think—but they’d definitely still arrest him.) While racism is a widespread problem in the police, it doesn’t dictate their behavior all the time—and it certainly isn’t their core function.

What does categorically explain how the police react in all these scenarios?

The police uphold order.

Not law. Order. They don’t actually much seem to care whether what you’re doing is illegal or harmful or even deadly. They care whether it violates civil order.

This is how we can explain the fact that police would investigate murders, but ignore oil rig disasters—even if the latter causes more deaths. The former is a violation of civil order, the latter is not.

It also explains why they would be so willing to tear apart homeless camps and break up protests and strikes. Those are actually often legal, or at worst involve minor infractions; but they’re also disruptive and disorderly.

The police seem to see their core mission as keeping the peace. It could be an unequal, unjust peace full of illegal policies that cause grievous harm and death—but what matters to them is that it’s peace. They will stomp out any violence they see with even greater violence of their own. They have a monopoly on the use of force, and they intend to defend it.

I think that realizing this can help us take a nuanced view of the police. They aren’t monsters or tools of oppression. But they also aren’t brave heroes who uphold the law and keep us safe. They are instruments of civil order.

We do need civil order; there are a lot of very important things in society that simply can’t function if civil order collapses. In places where civil order does fall apart, life becomes entirely about survival; the security that civil order provides is necessary not only for economic activity, but also for much of what gives our lives value.

But nor is civil order all that matters. And sometimes injustice truly does become so grave that it’s worth sacrificing some order in order to redress it. Strikes and protests genuinely are disruptive; society couldn’t function if they were happening everywhere all the time. But sometimes we need to disrupt the way things are going in order to get people to clearly see the injustice around them and do something about it.

I hope that this more realistic, nuanced assessment of the role police play in society may help to pull people away from both harmful political extremes. We can’t simply abolish the police; we need some system for maintaining civil order, and whatever system we have is probably going to end up looking a lot like police. (#ScandinaviaIsBetter, truly, but there are still cops in Norway.) But we also can’t afford to lionize the police or ignore their failures and excesses. When they fight to maintain civil order at the expense of social justice, they become part of the problem.

The mythology mindset

Feb 5 JDN 2459981

I recently finished reading Steven Pinker’s latest book Rationality. It’s refreshing, well-written, enjoyable, and basically correct with some small but notable errors that seem sloppy—but then you could have guessed all that from the fact that it was written by Steven Pinker.

What really makes the book interesting is an insight Pinker presents near the end, regarding the difference between the “reality mindset” and the “mythology mindset”.

It’s a pretty simple notion, but a surprisingly powerful one.

In the reality mindset, a belief is a model of how the world actually functions. It must be linked to the available evidence and integrated into a coherent framework of other beliefs. You can logically infer from how some parts work to how other parts must work. You can predict the outcomes of various actions. You live your daily life in the reality mindset; you couldn’t function otherwise.

In the mythology mindset, a belief is a narrative that fulfills some moral, emotional, or social function. It’s almost certainly untrue or even incoherent, but that doesn’t matter. The important thing is that it sends the right messages. It has the right moral overtones. It shows you’re a member of the right tribe.

The idea is similar to Dennett’s “belief in belief”, which I’ve written about before; but I think this characterization may actually be a better one, not least because people would be more willing to use it as a self-description. If you tell someone “You don’t really believe in God, you believe in believing in God”, they will object vociferously (which is, admittedly, what the theory would predict). But if you tell them, “Your belief in God is a form of the mythology mindset”, I think they are at least less likely to immediately reject your claim out of hand. “You believe in God a different way than you believe in cyanide” isn’t as obviously threatening to their identity.

A similar notion came up in a Psychology of Religion course I took, in which the professor discussed “anomalous beliefs” linked to various world religions. He picked on a bunch of obscure religions, often held by various small tribes. He asked for more examples from the class. Knowing he was nominally Catholic and not wanting to let mainstream religion off the hook, I presented my example: “This bread and wine are the body and blood of Christ.” To his credit, he immediately acknowledged it as a very good example.

It’s also not quite the same thing as saying that religion is a “metaphor”; that’s not a good answer for a lot of reasons, but perhaps chief among them is that people don’t say they believe metaphors. If I say something metaphorical and then you ask me, “Hang on; is that really true?” I will immediately acknowledge that it is not, in fact, literally true. Love is a rose with all its sweetness and all its thorns—but no, love isn’t really a rose. And when it comes to religious belief, saying that you think it’s a metaphor is basically a roundabout way of saying you’re an atheist.

From all these different directions, we seem to be converging on a single deeper insight: when people say they believe something, quite often, they clearly mean something very different by “believe” than what I would ordinarily mean.

I’m tempted even to say that they don’t really believe it—but in common usage, the word “belief” is used at least as often to refer to the mythology mindset as the reality mindset. (In fact, it sounds less weird to say “I believe in transubstantiation” than to say “I believe in gravity”.) So if they don’t really believe it, then they at least mythologically believe it.

Both mindsets seem to come very naturally to human beings, in particular contexts. And not just modern people, either. Humans have always been like this.

Ask that psychology professor about Jesus, and he’ll tell you a tall tale of life, death, and resurrection by a demigod. But ask him about the Stroop effect, and he’ll provide a detailed explanation of rigorous experimental protocol. He believes something about God; but he knows something about psychology.

Ask a hunter-gatherer how the world began, and he’ll surely spin you a similarly tall tale about some combination of gods and spirits and whatever else, and it will all be peculiarly particular to his own tribe and no other. But ask him how to gut a fish, and he’ll explain every detail with meticulous accuracy, with almost the same rigor as that scientific experiment. He believes something about the sky-god; but he knows something about fish.

To be a rationalist, then, is to aspire to live your whole life in the reality mindset. To seek to know rather than believe.

This isn’t about certainty. A rationalist can be uncertain about many things—in fact, it’s rationalists of all people who are most willing to admit and quantify their uncertainty.

This is about whether you allow your beliefs to float free as bare, almost meaningless assertions that you profess to show you are a member of the tribe, or you make them pay rent, directly linked to other beliefs and your own experience.

As long as I can remember, I have always aspired to do this. But not everyone does. In fact, I dare say most people don’t. And that raises a very important question: Should they? Is it better to live the rationalist way?

I believe that it is. I suppose I would, temperamentally. But say what you will about the Enlightenment and the scientific revolution, they have clearly revolutionized human civilization and made life much better today than it was for most of human existence. We are peaceful, safe, and well-fed in a way that our not-so-distant ancestors could only dream of, and it’s largely thanks to systems built under the principles of reason and rationality—that is, the reality mindset.

We would never have industrialized agriculture if we still thought in terms of plant spirits and sky gods. We would never have invented vaccines and antibiotics if we still believed disease was caused by curses and witchcraft. We would never have built power grids and the Internet if we still saw energy as a mysterious force permeating the world and not as a measurable, manipulable quantity.

This doesn’t mean that ancient people who saw the world in a mythological way were stupid. In fact, it doesn’t even mean that people today who still think this way are stupid. This is not about some innate, immutable mental capacity. It’s about a technology—or perhaps the technology, the meta-technology that makes all other technology possible. It’s about learning to think the same way about the mysterious and the familiar, using the same kind of reasoning about energy and death and sunlight as we already did about rocks and trees and fish. When encountering something new and mysterious, someone in the mythology mindset quickly concocts a fanciful tale about magical beings that inevitably serves to reinforce their existing beliefs and attitudes, without the slightest shred of evidence for any of it. In their place, someone in the reality mindset looks closer and tries to figure it out.

Still, this gives me some compassion for people with weird, crazy ideas. I can better make sense of how someone living in the modern world could believe that the Earth is 6,000 years old or that the world is ruled by lizard-people. Because they probably don’t really believe it, they just mythologically believe it—and they don’t understand the difference.

There should be a glut of nurses.

Jan 15 JDN 2459960

It will not be news to most of you that there is a worldwide shortage of healthcare staff, especially nurses and emergency medical technicians (EMTs). I would like you to stop and think about the utterly terrible policy failure this represents. Maybe if enough people do, we can figure out a way to fix it.

It goes without saying—yet bears repeating—that people die when you don’t have enough nurses and EMTs. Indeed, surely a large proportion of the 2.6 million (!) deaths each year from medical errors are attributable to this. It is likely that at least one million lives per year could be saved by fixing this problem worldwide. In the US alone, over 250,000 deaths per year are caused by medical errors; so we’re looking at something like 100,000 lives we could save each year by removing staffing shortages.

Precisely because these jobs have such high stakes, the mere fact that we would ever see the word “shortage” beside “nurse” or “EMT” was already clear evidence of dramatic policy failure.

This is not like other jobs. A shortage of accountants or baristas or even teachers, while a bad thing, is something that market forces can be expected to correct in time, and it wouldn’t be unreasonable to simply let them do so—meaning, let wages rise on their own until the market is restored to equilibrium. A “shortage” of stockbrokers or corporate lawyers would in fact be a boon to our civilization. But a shortage of nurses or EMTs or firefighters (yes, there are those too!) is a disaster.

Partly this is due to the COVID pandemic, which has been longer and more severe than any but the most pessimistic analysts predicted. But there were shortages of nurses before COVID. There should not have been. There should have been a massive glut.

Even if there hadn’t been a shortage of healthcare staff before the pandemic, the fact that there wasn’t a glut was already a problem.

This is what a properly-functioning healthcare policy would look like: Most nurses are bored most of the time. They are widely regarded as overpaid. People go into nursing because it’s a comfortable, easy career with very high pay and usually not very much work. Hospitals spend most of their time with half their beds empty and half of their ambulances parked while the drivers and EMTs sit around drinking coffee and watching football games.

Why? Because healthcare, especially emergency care, involves risk, and the stakes couldn’t be higher. If the number of severely sick people doubles—as in, say, a pandemic—a hospital that usually runs at 98% capacity won’t be able to deal with them. But a hospital that usually runs at 50% capacity will.
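The arithmetic of that comparison, as a toy illustration:

```python
def survives_surge(utilization, surge=2.0):
    """Can a hospital absorb its patient load multiplied by `surge`?"""
    return utilization * surge <= 1.0

survives_surge(0.98)  # False: the doubled load would hit 196% of capacity
survives_surge(0.50)  # True: the doubled load hits exactly 100% of capacity
```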

COVID exposed to the world what a careful analysis would already have shown: There was not nearly enough redundancy in our healthcare system. We had been optimizing for a narrow-minded, short-sighted notion of “efficiency” over what we really needed, which was resiliency and robustness.

I’d like to compare this to two other types of jobs.

The first is stockbrokers. Set aside for a moment the fact that most of what they do is worthless if not actively detrimental to human society. Suppose that their most adamant boosters are correct and what they do is actually really important and beneficial.

Their experience is almost like what I just said nurses ought to be. They are widely regarded (correctly) as very overpaid. There is never any shortage of them; there are people lining up to be hired. People go into the work not because they care about it or even because they are particularly good at it, but because they know it’s an easy way to make a lot of money.

The one thing that seems to be different from my image may not be as different as it seems. Stockbrokers work long hours, but nobody can really explain why. Frankly, most of what they do can be—and has been—successfully automated. Since there simply isn’t that much work for them to do, my guess is that most of the time they spend “working” 60-80 hour weeks is not actually working, but sitting around pretending to work. Since most financial forecasters are outperformed by a simple diversified portfolio, the most profitable action for most stock analysts to take most of the time would be nothing.

It may also be that stockbrokers work hard at sales—trying to convince people to buy and sell for bad reasons in order to earn sales commissions. This would at least explain why they work so many hours, though it would make it even harder to believe that what they do benefits society. So if we imagine our “ideal” stockbroker who makes the world a better place, I think they mostly just use a simple algorithm and maybe adjust it every month or two. They make better returns than their peers, but spend 38 hours a week goofing off.

There is a massive glut of stockbrokers. This is what it looks like when a civilization is really optimized to be good at something.

The second is soldiers. Say what you will about them, no one can dispute that their job has stakes of life and death. A lot of people seem to think that the world would be better off without them, but that’s at best only true if everyone got rid of them; if you don’t have soldiers but other countries do, you’re going to be in big trouble. (“We’ll beat our swords into liverwurst / Down by the East Riverside; / But no one wants to be the first!”) So unless and until we can solve that mother of all coordination problems, we need to have soldiers around.

What is life like for a soldier? Well, they don’t seem overpaid; if anything, underpaid. (Maybe some of the officers are overpaid, but clearly not most of the enlisted personnel. Part of the problem there is that “pay grade” is nearly synonymous with “rank”—it’s a primate hierarchy, not a rational wage structure. Then again, so are most industries; the military just makes it more explicit.) But there do seem to be enough of them. Military officials may lament “shortages” of soldiers, but they never actually seem to want for troops to deploy when they really need them. And if a major war really did start that required all available manpower, the draft could be reinstated and then suddenly they’d have it—the authority to coerce compliance is precisely how you can avoid having a shortage while keeping your workers underpaid. (Russia’s soldier shortage is genuine—something about being utterly outclassed by your enemy’s technological superiority in an obviously pointless imperialistic war seems to hurt your recruiting numbers.)

What is life like for a typical soldier? The answer may surprise you. The overwhelming answer in surveys and interviews (which also fits with the experiences I’ve heard about from friends and family in the military) is that life as a soldier is boring: “All you do is wake up in the morning and push rubbish around camp. Bosnia was scary for about 3 months. After that it was boring. That is pretty much day to day life in the military. You are bored.”

This isn’t new, nor even an artifact of not being in any major wars: Union soldiers in the US Civil War had the same complaint. Even in World War I, a typical soldier spent only half the time on the front, and when on the front only saw combat 1/5 of the time. War is boring.

In other words, there is a massive glut of soldiers. Most of them don’t even know what to do with themselves most of the time.

This makes perfect sense. Why? Because an army needs to be resilient. And to be resilient, you must be redundant. If you only had exactly enough soldiers to deploy in a typical engagement, you’d never have enough for a really severe engagement. If on average you had enough, that means you’d spend half the time with too few. And the costs of having too few soldiers are utterly catastrophic.

This is probably an evolutionary outcome, in fact; civilizations may have tried to have “leaner” militaries that didn’t have so much redundancy, and those civilizations were conquered by other civilizations that were more profligate. (This is not to say that we couldn’t afford to cut military spending at all; it’s one thing to have the largest military in the world—I support that, actually—but quite another to have more than the next 10 combined.)

What’s the policy solution here? It’s actually pretty simple.

Pay nurses and EMTs more. A lot more. Whatever it takes to get to the point where we not only have enough, but have so many people lining up to join we don’t even know what to do with them all. If private healthcare firms won’t do it, force them to—or, all the more reason to nationalize healthcare. The stakes are far too high to leave things as they are.

Would this be expensive? Sure.

Removing the shortage of EMTs wouldn’t even be that expensive. There are only about 260,000 EMTs in the US, and they get paid the appallingly low median salary of $36,000. That means we’re currently spending only about $9 billion per year on EMTs. We could double their salaries and double their numbers for only an extra $27 billion—about 0.1% of US GDP.

Nurses would cost more. There are about 5 million nurses in the US, with an average salary of about $78,000, so we’re currently spending about $390 billion a year on nurses. We probably can’t afford to double both salary and staffing. But maybe we could increase both by 20%, costing about an extra $170 billion per year.

Altogether that would cost about $200 billion per year. To save one hundred thousand lives.

That’s $2 million per life saved, or about $40,000 per QALY. The usual estimate for the value of a statistical life is about $10 million, and the usual threshold for a cost-effective medical intervention is $50,000-$100,000 per QALY; so we’re well under both. This isn’t as efficient as buying malaria nets in Africa, but it’s more efficient than plenty of other things we’re spending on. And this isn’t even counting additional benefits of better care that go beyond lives saved.
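For transparency, here’s the back-of-envelope arithmetic as a quick script. All figures are the rough estimates quoted above, and the 50 QALYs per life is an assumption implied by the $2 million per life and ~$40,000 per QALY figures, not an official statistic:

```python
# Back-of-envelope check of the staffing cost estimates above.
# All figures are the rough estimates quoted in the text.

# EMTs: doubling both salaries and headcount quadruples spending,
# i.e. an extra ~3x current spending.
emt_count, emt_salary = 260_000, 36_000
emt_current = emt_count * emt_salary                          # ~$9 billion/year
emt_extra = (2 * emt_count) * (2 * emt_salary) - emt_current  # ~$28 billion/year

# Nurses: raising both salaries and headcount by 20% costs 1.2^2 = 1.44x.
nurse_count, nurse_salary = 5_000_000, 78_000
nurse_current = nurse_count * nurse_salary       # ~$390 billion/year
nurse_extra = nurse_current * (1.2 * 1.2 - 1)    # ~$170 billion/year

total_extra = emt_extra + nurse_extra            # ~$200 billion/year
lives_saved = 100_000                            # the estimate used in the text
cost_per_life = total_extra / lives_saved        # ~$2 million per life
cost_per_qaly = cost_per_life / 50               # assumes ~50 QALYs per life saved

print(f"Extra spending: ${total_extra / 1e9:.0f}B/yr")
print(f"Cost per life:  ${cost_per_life / 1e6:.1f}M")
print(f"Cost per QALY:  ${cost_per_qaly:,.0f}")
```

None of these numbers need to be precise for the conclusion to hold; even if each estimate is off by 50%, we stay well under the $10 million value of a statistical life.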

In fact if we nationalized US healthcare we could get more than these amounts in savings from not wasting our money on profits for insurance and drug companies—simply making the US healthcare system as cost-effective as Canada’s would save $6,000 per American per year, or a whopping $1.9 trillion. At that point we could double the number of nurses and their salaries and still be spending less.

No, it’s not because nurses and doctors are paid much less in Canada than the US. That’s true in some countries, but not Canada. The median salary for nurses in Canada is about $95,500 CAD, which is $71,000 US at current exchange rates. Doctors in Canada can make anywhere from $80,000 to $400,000 CAD, which is $60,000 to $300,000 US. Nor are healthcare outcomes in Canada worse than the US; if anything, they’re better, as Canadians live an average of four years longer than Americans. No, the radical difference in cost—a factor of 2 to 1—between Canada and the US comes from privatization. Privatization is supposed to make things more efficient and lower costs, but it has absolutely not done that in US healthcare.

And if our choice is between spending more money and letting hundreds of thousands or millions of people die every year, that’s no choice at all.

Good enough is perfect, perfect is bad

Jan 8 JDN 2459953

Not too long ago, I read the book How to Keep House While Drowning by KC Davis, which I highly recommend. It offers a great deal of useful and practical advice, especially for someone neurodivergent and depressed living through an interminable pandemic (which I am, but honestly, odds are, you may be too). And to say it is a quick and easy read is actually an unfair understatement; it is explicitly designed to be readable in short bursts by people with ADHD, and it has a level of accessibility that most other books don’t even aspire to and I honestly hadn’t realized was possible. (The extreme contrast between this and academic papers is particularly apparent to me.)

One piece of advice that really stuck with me was this: Good enough is perfect.

At first, it sounded like nonsense; no, perfect is perfect, good enough is just good enough. But in fact there is a deep sense in which it is absolutely true.

Indeed, let me make it a bit stronger: Good enough is perfect; perfect is bad.

I doubt Davis thought of it in these terms, but this is a concise, elegant statement of the principles of bounded rationality. Sometimes it can be optimal not to optimize.

Suppose that you are trying to optimize something, but you have limited computational resources with which to do so. This is actually not a lot to suppose—it’s literally true of basically everyone at basically every moment of every day.

But let’s make it a bit more concrete, and say that you need to find the solution to the following math problem: “What is the product of 2419 and 1137?” (Pretend you don’t have a calculator, as it would trivialize the exercise. I thought about using a problem you couldn’t do with a standard calculator, but I realized that would also make it much weirder and more obscure for my readers.)

Now, suppose that there are some quick, simple ways to get reasonably close to the correct answer, and some slow, difficult ways to actually get the answer precisely.

In this particular problem, the former is to approximate: What’s 2500 times 1000? 2,500,000. So it’s probably about 2,500,000.

Or we could approximate a bit more closely: Say 2400 times 1100. That’s 24 times 11 (times 10,000); and 24 times 11 is 2 times 12 times 11, which is 2 times (110 plus 22), which is 2 times 132, or 264. So: about 2,640,000.

Or, we could actually go through all the steps to do the full multiplication (remember, I’m assuming you have no calculator): multiply digit by digit, carry the 1s, add up all four partial products, re-check everything and probably fix it because you messed up somewhere; and then eventually you will get: 2,750,403.

So, our really fast method was only off by about 10%. Our moderately-fast method was only off by 4%. And both of them were a lot faster than getting the exact answer by hand.

Which of these methods you’d actually want to use depends on the context and the tools at hand. If you had a calculator, sure, get the exact answer. Even if you didn’t, but you were balancing the budget for a corporation, I’m pretty sure they’d care about that extra $110,403. (Then again, they might not care about the $403 or at least the $3.) But just as an intellectual exercise, you really didn’t need to do anything; the optimal choice may have been to take my word for it. Or, if you were at all curious, you might be better off choosing the quick approximation rather than the precise answer. Since nothing of any real significance hinged on getting that answer, it may be simply a waste of your time to bother finding it.
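If you want to check my work, here’s the same comparison as a tiny script (ironically, with the calculator I asked you to pretend not to have):

```python
# The same multiplication done three ways, from crudest to exact.
exact = 2419 * 1137    # the full by-hand computation: 2,750,403

rough = 2500 * 1000    # round both numbers aggressively: 2,500,000
better = 2400 * 1100   # round less aggressively: 2,640,000

def pct_error(approx, true):
    """Relative error of an approximation, as a percentage."""
    return abs(approx - true) / true * 100

print(f"exact:  {exact:,}")
print(f"rough:  {rough:,}  ({pct_error(rough, exact):.1f}% off)")
print(f"better: {better:,}  ({pct_error(better, exact):.1f}% off)")
```

The point isn’t the answers themselves, of course; it’s that each extra digit of precision costs more effort for less gain.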

This is of course a contrived example. But it’s not so far from many choices we make in real life.

Yes, if you are making a big choice—which job to take, what city to move to, whether to get married, which car or house to buy—you should get a precise answer. In fact, I make spreadsheets with formal utility calculations whenever I make a big choice, and I haven’t regretted it yet. (Did I really make a spreadsheet for getting married? You’re damn right I did; there were a lot of big financial decisions to make there—taxes, insurance, the wedding itself! I didn’t decide whom to marry that way, of course; but we always had the option of staying unmarried.)

But most of the choices we make from day to day are small choices: What should I have for lunch today? Should I vacuum the carpet now? What time should I go to bed? In the aggregate they may all add up to important things—but each one of them really won’t matter that much. If you were to construct a formal model to optimize your decision of everything to do each day, you’d spend your whole day doing nothing but constructing formal models. Perfect is bad.

In fact, even for big decisions, you can’t really get a perfect answer. There are just too many unknowns. Sometimes you can spend more effort gathering additional information—but that’s costly too, and sometimes the information you would most want simply isn’t available. (You can look up the weather in a city, visit it, ask people about it—but you can’t really know what it’s like to live there until you do.) Even those spreadsheet models I use to make big decisions contain error bars and robustness checks, and if, even after investing a lot of effort trying to get precise results, I still find two or more choices just can’t be clearly distinguished to within a good margin of error, I go with my gut. And that seems to have been the best choice for me to make. Good enough is perfect.
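For the curious, here’s a toy sketch of that decision rule; the option names and numbers are invented for illustration, not taken from any actual spreadsheet of mine:

```python
# Toy version of the "spreadsheet with error bars" rule: pick the best
# option only if it beats every rival by more than the combined margin
# of error; otherwise the comparison is a wash and you go with your gut.
options = {
    # name: (estimated utility, margin of error) -- invented numbers
    "city A": (72, 8),
    "city B": (75, 10),
    "city C": (55, 6),
}

# Find the option with the highest point estimate.
best_name, (best_score, best_err) = max(options.items(), key=lambda kv: kv[1][0])

# Is any rival within the combined error bars of the best option?
too_close = any(
    best_score - score < best_err + err
    for name, (score, err) in options.items()
    if name != best_name
)

if too_close:
    print("Indistinguishable within the margin of error; go with your gut.")
else:
    print(f"Clear winner: {best_name}")
```

With these made-up numbers, the top two options overlap, so the script defers to intuition—which is exactly the point.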

I think that being gifted as a child trained me to be dangerously perfectionist as an adult. (Many of you may find this familiar.) When it came to solving math problems, or answering quizzes, perfection really was an attainable goal a lot of the time.

As I got older and progressed further in my education, maybe getting every answer right was no longer feasible; but I still could get the best possible grade, and did, in most of my undergraduate classes and all of my graduate classes. To be clear, I’m not trying to brag here; if anything, I’m a little embarrassed. What it mainly shows is that I had learned the wrong priorities. In fact, one of the main reasons why I didn’t get a 4.0 average in undergrad is that I spent a lot more time back then writing novels and nonfiction books, which to this day I still consider my most important accomplishments and grieve that I’ve not (yet?) been able to get them commercially published. I did my best work when I wasn’t trying to be perfect. Good enough is perfect; perfect is bad.

Now here I am on the other side of the academic system, trying to carve out a career, and suddenly, there is no perfection. When my exam was being graded by someone else, there was a way to get the most points. When I’m the one grading the exams, there is no “correct answer” anymore. There is no one scoring me to see if I did the grading the “right way”—and so, no way to be sure I did it right.

Actually, here at Edinburgh, there are other instructors who moderate grades and often require me to revise them, which feels a bit like “getting it wrong”; but it’s really more like we had different ideas of what the grade curve should look like (not to mention US versus UK grading norms). There is no longer an objectively correct answer the way there is for, say, the derivative of x^3, the capital of France, or the definition of comparative advantage. (Or, one question I got wrong on an undergrad exam because I had zoned out of that lecture to write a book on my laptop: Whether cocaine is a dopamine reuptake inhibitor. It is. And the fact that I still remember that because I got it wrong over a decade ago tells you a lot about me.)

And then when it comes to research, it’s even worse: What even constitutes “good” research, let alone “perfect” research? What would be most scientifically rigorous isn’t what journals would be most likely to publish—and without much bigger grants, I can afford neither. I find myself longing for the research paper that will be so spectacular that top journals have to publish it, removing all risk of rejection and failure—in other words, perfect.

Yet such a paper plainly does not exist. Even if I were to do something that would win me a Nobel or a Fields Medal (this is, shall we say, unlikely), it probably wouldn’t be recognized as such immediately—a typical Nobel isn’t awarded until 20 or 30 years after the work that spawned it, and while Fields Medals are faster, they’re by no means instant or guaranteed. In fact, a lot of ground-breaking, paradigm-shifting research was originally relegated to minor journals because the top journals considered it too radical to publish.

Or I could try to do something trendy—feed into DSGE or GTFO—and try to get published that way. But I know my heart wouldn’t be in it, and so I’d be miserable the whole time. In fact, because it is neither my passion nor my expertise, I probably wouldn’t even do as good a job as someone who really buys into the core assumptions. I already have trouble speaking frequentist sometimes: Are we allowed to say “almost significant” for p = 0.06? Maximizing the likelihood is still kosher, right? Just so long as I don’t impose a prior? But speaking DSGE fluently and sincerely? I’d have an easier time speaking in Latin.

What I know—on some level at least—I ought to be doing is finding the research that I think is most worthwhile, given the resources I have available, and then getting it published wherever I can. Or, in fact, I should probably constrain a little by what I know about journals: I should do the most worthwhile research that is feasible for me and has a serious chance of getting published in a peer-reviewed journal. It’s sad that those two things aren’t the same, but they clearly aren’t. This constraint binds, and its Lagrange multiplier is measured in humanity’s future.

But one thing is very clear: By trying to find the perfect paper, I have floundered and, for the last year and a half, not written any papers at all. The right choice would surely have been to write something.

Because good enough is perfect, and perfect is bad.

Charity shouldn’t end at home

It so happens that this week’s post will go live on Christmas Day. I always try to do some kind of holiday-themed post around this time of year, because not only Christmas but a dozen other holidays from various religions fall around it. The winter solstice seems to be a very popular time for holidays, and has been since antiquity: The Romans were celebrating Saturnalia 2000 years ago. Most of our ‘Christmas’ traditions are actually derived from Yuletide.

These holidays certainly mean many different things to different people, but charity and generosity are themes that are very common across a lot of them. Gift-giving has been part of the season since at least Saturnalia and remains as vital as ever today. Most of those gifts are given to our friends and loved ones, but a substantial fraction of people also give to strangers in the form of charitable donations: November and December have the highest rates of donation to charity in the US and the UK, with about 35-40% of people donating during this season. (Of course this is complicated by the fact that December 31 is often the day with the most donations, probably from people trying to finish out their tax year with a larger deduction.)

My goal today is to make you one of those donors. There is a common saying, often attributed to the Bible but not actually present in it: “Charity begins at home”.

Perhaps this is so. There’s certainly something questionable about the Effective Altruism strategy of “earning to give” if it involves abusing and exploiting the people around you in order to make more money that you then donate to worthy causes. Certainly we should be kind and compassionate to those around us, and it makes sense for us to prioritize those close to us over strangers we have never met. But while charity may begin at home, it must not end at home.

There are so many global problems that could benefit from additional donations. While global poverty has been rapidly declining in the early 21st century, this is largely because of the efforts of donors and nonprofit organizations. Official Development Assistance has been roughly constant since the 1970s at 0.3% of GNI among First World countries—well below international targets set decades ago. Total development aid is around $160 billion per year, while private donations from the United States alone are over $480 billion. Moreover, 9% of the world’s population still lives in extreme poverty, and this rate has actually slightly increased in the last few years due to COVID.

There are plenty of other worthy causes you could give to aside from poverty eradication, from issues that have been with us since the dawn of human civilization (Humane Society International for domestic animal welfare, the World Wildlife Fund for wildlife conservation) to exotic fat-tail sci-fi risks that are only emerging in our own lifetimes (the Machine Intelligence Research Institute for AI safety, the International Federation of Biosafety Associations for biosecurity, the Union of Concerned Scientists for climate change and nuclear safety). You could fight poverty directly through organizations like UNICEF or GiveDirectly, fight neglected diseases through the Schistosomiasis Control Initiative or the Against Malaria Foundation, or entrust an organization like GiveWell to optimize your donations for you, sending them where they think they are needed most. You could give to political causes supporting civil liberties (the American Civil Liberties Union) or protecting the rights of people of color (the National Association for the Advancement of Colored People) or LGBT people (the Human Rights Campaign).

I could spend a lot of time and effort trying to figure out the optimal way to divide up your donations and give them to causes such as these—and then convincing you that it’s really the right one. (And there is even a time and place for that, because seemingly-small differences can matter a lot here.) But instead I think I’m just going to ask you to pick something. Give something to an international charity with a good track record.

I think we worry far too much about what is the best way to give—especially people in the Effective Altruism community, of which I’m sort of a marginal member—when the biggest thing the world really needs right now is just more people giving more. It’s true, there are lots of worthless or even counter-productive charities out there: Please, please do not give to the Salvation Army. (And think twice before donating to your own church; if you want to support your own community, okay, go ahead. But if you want to make the world better, there are much better places to put your money.)

But above all, give something. Or if you already give, give more. Most people don’t give at all, and most people who give don’t give enough.