# When maximizing utility doesn’t

Jun 4 JDN 2460100

Expected utility theory behaves quite strangely when you consider questions involving mortality.

Nick Beckstead and Teruji Thomas recently published a paper on this: All well-defined utility functions are either reckless in that they make you take crazy risks, or timid in that they tell you not to take even very small risks. It’s starting to make me wonder if utility theory is even the right way to make decisions after all.

Consider a game of Russian roulette where the prize is \$1 million. The revolver has 6 chambers, 3 with a bullet. So that’s a 1/2 chance of \$1 million, and a 1/2 chance of dying. Should you play?

I think it’s probably a bad idea to play. But the prize does matter; if it were \$100 million, or \$1 billion, maybe you should play after all. And if it were \$10,000, you clearly shouldn’t.

And lest you think that there is no chance of dying you should be willing to accept for any amount of money, consider this: Do you drive a car? Do you cross the street? Do you do anything that could ever have any risk of shortening your lifespan in exchange for some other gain? I don’t see how you could live a remotely normal life without doing so. It might be a very small risk, but it’s still there.

This raises the question: Suppose we have some utility function over wealth; ln(x) is a quite plausible one. What utility should we assign to dying?

The fact that the prize matters means that we can’t assign death a utility of negative infinity. It must be some finite value.

But suppose we choose some finite value, -V (so V is positive), for the utility of dying. Then we can find some amount of money that will make you willing to play: normalizing your current utility to zero, you are indifferent when (1/2)ln(x) + (1/2)(-V) = 0, i.e. when ln(x) = V, so x = e^V.

Now, suppose that you have the chance to play this game over and over again. Your marginal utility of wealth will change each time you win, so we may need to increase the prize to keep you playing; but we could do that. The prizes could keep scaling up as needed to make you willing to play. So then, you will keep playing, over and over—and then, sooner or later, you’ll die. So, at each step you maximized utility—but at the end, you didn’t get any utility.
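To make this concrete, here is a minimal sketch in Python. The utility function ln(x), the starting wealth of 1, and the death disutility V = 10 are all arbitrary assumptions, chosen just for illustration; the sketch computes the smallest prize that makes each round worth playing, and shows how the prizes must explode while the survival probability collapses.

```python
import math

def min_prize(w, V):
    """Smallest prize x making the gamble worth it at wealth w,
    under u(w) = ln(w): indifference when
    0.5*ln(w + x) + 0.5*(-V) = ln(w), i.e. x = w**2 * exp(V) - w."""
    return w**2 * math.exp(V) - w

V = 10.0  # assumed (finite) disutility of dying
w = 1.0   # starting wealth, arbitrary units
for round_no in range(1, 6):
    x = min_prize(w, V)
    print(f"round {round_no}: prize needed {x:.3g}")
    w += x  # suppose you win and play again

# But the probability of surviving n rounds is (1/2)**n:
print(f"chance of surviving 20 rounds: {0.5**20:.2e}")
```

Each round maximizes expected utility, yet the probability of ever enjoying the winnings shrinks toward zero; that is the paradox in miniature.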

Well, at that point your heirs will be rich, right? So maybe you’re actually okay with that. Maybe there is some amount of money (\$1 billion?) that you’d be willing to die in order to ensure your heirs have.

But what if you don’t have any heirs? Or, what if we consider making such a decision as a civilization? What if death means not only the destruction of you, but also the destruction of everything you care about?

As a civilization, are there choices before us that would result in some chance of a glorious, wonderful future, but also some chance of total annihilation? I think it’s pretty clear that there are. Nuclear technology, biotechnology, artificial intelligence. For about the last century, humanity has been at a unique epoch: We are being forced to make this kind of existential decision, to face this kind of existential risk.

It’s not that we were immune to being wiped out before; an asteroid could have taken us out at any time (as happened to the dinosaurs), and a volcanic eruption nearly did. But this is the first time in humanity’s existence that we have had the power to destroy ourselves. This is the first time we have a decision to make about it.

One possible answer would be to say we should never be willing to take any kind of existential risk. Unlike the case of an individual, when we are speaking about an entire civilization, it no longer seems obvious that we shouldn’t set the utility of death at negative infinity. But if we really did this, it would require shutting down whole industries—definitely halting all research in AI and biotechnology, probably disarming all nuclear weapons and destroying all their blueprints, and quite possibly even shutting down the coal and oil industries. It would be an utterly radical change, and it would require bearing great costs.

On the other hand, if we should decide that it is sometimes worth the risk, we will need to know when it is worth the risk. We currently don’t know that.

Even worse, we will need some mechanism for ensuring that we don’t take the risk when it isn’t worth it. And we have nothing like such a mechanism. In fact, most research in AI and biotechnology is widely dispersed, with no central governing authority and regulations that are inconsistent between countries. I think it’s quite apparent that right now, there are research projects going on somewhere in the world that aren’t worth the existential risk they pose for humanity—but the people doing them are convinced that they are worth it because they so greatly advance their national interest—or simply because they could be so very profitable.

In other words, humanity finally has the power to make a decision about our survival, and we’re not doing it. We aren’t making a decision at all. We’re letting that responsibility fall upon more or less randomly-chosen individuals in government and corporate labs around the world. We may be careening toward an abyss, and we don’t even know who has the steering wheel.

# We ignorant, incompetent gods

May 21 JDN 2460086

A review of Homo Deus

> The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.
>
> E.O. Wilson

Homo Deus is a very good read—and despite its length, a quick one; as you can see, I read it cover to cover in a week. Yuval Noah Harari’s central point is surely correct: Our technology is reaching a threshold where it grants us unprecedented power and forces us to ask what it means to be human.

Biotechnology and artificial intelligence are now advancing so rapidly that advancements in other domains, such as aerospace and nuclear energy, seem positively mundane. Who cares about making flight or electricity a bit cleaner when we will soon have the power to modify ourselves or we’ll all be replaced by machines?

Indeed, we already have technology that would have seemed to ancient people like the powers of gods. We can fly; we can witness or even control events thousands of miles away; we can destroy mountains; we can wipe out entire armies in an instant; we can even travel into outer space.

Harari rightly warns us that our not-so-distant descendants are likely to have powers that we would see as godlike: Immortality, superior intelligence, self-modification, the power to create life.

And while it is scary to think about what they might do with that power if they think the way we do—as ignorant and foolish and tribal as we are—Harari points out that it is equally scary to think about what they might do if they don’t think the way we do—for then, how do they think? If their minds are genetically modified or even artificially created, who will they be? What values will they have, if not ours? Could they be better? What if they’re worse?

It is of course difficult to imagine values better than our own—if we thought those values were better, we’d presumably adopt them. But we should seriously consider the possibility, since presumably most of us believe that our values today are better than what most people’s values were 1000 years ago. If moral progress continues, does it not follow that people’s values will be better still 1000 years from now? Or at least that they could be?

I also think Harari overestimates just how difficult it is to anticipate the future. This may be a useful overcorrection; the world is positively infested with people making overprecise predictions about the future, often selling them for exorbitant fees (note that Harari was quite well-compensated for this book as well!). But our values are not so fundamentally alien from those of our forebears, and we have reason to suspect that our descendants’ values will be no more different from ours than ours are from theirs.

For instance, do you think that medieval people thought suffering and death were good? I assure you they did not. Nor did they believe that the supreme purpose in life is eating cheese. (They didn’t even believe the Earth was flat!) They did not have the concept of GDP, but they could surely appreciate the value of economic prosperity.

Indeed, our world today looks very much like a medieval peasant’s vision of paradise. Boundless food in endless variety. Near-perfect security against violence. Robust health, free from nearly all infectious disease. Freedom of movement. Representation in government! The land of milk and honey is here; there they are, milk and honey on the shelves at Walmart.

Of course, our paradise comes with caveats: Not least, we are by no means free of toil, but instead have invented whole new kinds of toil they could scarcely have imagined. If anything I would have to guess that coding a robot or recording a video lecture probably isn’t substantially more satisfying than harvesting wheat or smithing a sword; and reconciling receivables and formatting spreadsheets is surely less. Our tasks are physically much easier, but mentally much harder, and it’s not obvious which of those is preferable. And we are so very stressed! It’s honestly bizarre just how stressed we are, given the abundance in which we live; there is no reason for our lives to have stakes so high, and yet somehow they do. It is perhaps this stress and economic precarity that prevents us from feeling such joy as the medieval peasants would have imagined for us.

Of course, we don’t agree with our ancestors on everything. The medieval peasants were surely more religious, more ignorant, more misogynistic, more xenophobic, and more racist than we are. But projecting that trend forward mostly means less ignorance, less misogyny, less racism in the future; it means that future generations should see the world catch up to what the best of us already believe and strive for—hardly something to fear. The values I believe in are surely not the values we as a civilization act upon, and I sorely wish they were. Perhaps someday they will be.

I can even imagine something that I myself would recognize as better than me: Me, but less hypocritical. Strictly vegan rather than lacto-ovo-vegetarian, or at least more consistent about only buying free range organic animal products. More committed to ecological sustainability, more willing to sacrifice the conveniences of plastic and gasoline. Able to truly respect and appreciate all life, even humble insects. (Though perhaps still not mosquitoes; this is war. They kill more of us than any other animal, including us.) Not even casually or accidentally racist or sexist. More courageous, less burnt out and apathetic. I don’t always live up to my own ideals. Perhaps someday someone will.

Harari fears something much darker, that we will be forced to give up on humanist values and replace them with a new techno-religion he calls Dataism, in which the supreme value is efficient data processing. I see very little evidence of this. If it feels like data is worshipped these days, it is only because data is profitable. Amazon and Google constantly seek out ever richer datasets and ever faster processing because that is how they make money. The real subject of worship here is wealth, and that is nothing new. Maybe there are some die-hard techno-utopians out there who long for us all to join the unified oversoul of all optimized data processing, but I’ve never met one, and they are clearly not the majority. (Harari also uses the word ‘religion’ in an annoyingly overbroad sense; he refers to communism, liberalism, and fascism as ‘religions’. Ideologies, surely; but religions?)

Harari in fact seems to think that ideologies are strongly driven by economic structures, so maybe he would even agree that it’s about profit for now, but thinks it will become religion later. But I don’t really see history fitting this pattern all that well. If monotheism is directly tied to the formation of organized bureaucracy and national government, then how did Egypt and Rome last so long with polytheistic pantheons? If atheism is the natural outgrowth of industrialized capitalism, then why are Africa and South America taking so long to get the memo? I do think that economic circumstances can constrain culture and shift what sort of ideas become dominant, including religious ideas; but there clearly isn’t this one-to-one correspondence he imagines. Moreover, there was never Coalism or Oilism aside from the greedy acquisition of these commodities as part of a far more familiar ideology: capitalism.

He also claims that all of science is now, or is close to, following a united paradigm under which everything is a data processing algorithm, which suggests he has not met very many scientists. Our paradigms remain quite varied, thank you; and if they do all have certain features in common, it’s mainly things like rationality, naturalism and empiricism that are more or less inherent to science. It’s not even the case that all cognitive scientists believe in materialism (though it probably should be); there are still dualists out there.

Moreover, when it comes to values, most scientists believe in liberalism. This is especially true if we use Harari’s broad sense (on which mainline conservatives and libertarians are ‘liberal’ because they believe in liberty and human rights), but even in the narrow sense of center-left. We are by no means converging on a paradigm where human life has no value because it’s all just data processing; maybe some scientists believe that, but definitely not most of us. If scientists ran the world, I can’t promise everything would be better, but I can tell you that Bush and Trump would never have been elected and we’d have a much better climate policy in place by now.

I do share many of Harari’s fears of the rise of artificial intelligence. The world is clearly not ready for the massive economic disruption that AI is going to cause all too soon. We still define a person’s worth by their employment, and think of ourselves primarily as a collection of skills; but AI is going to make many of those skills obsolete, and may make many of us unemployable. It would behoove us to think in advance about who we truly are and what we truly want before that day comes. I used to think that creative intellectual professions would be relatively secure; ChatGPT and Midjourney changed my mind. Even writers and artists may not be safe much longer.

Harari is so good at sympathetically explaining other views that he takes it to a fault. At times it is actually difficult to know whether he himself believes something and wants you to, or if he is just steelmanning someone else’s worldview. There’s a whole section on ‘evolutionary humanism’ where he details a worldview that is at best Nietzschean and at worst Nazi, but he makes it sound so seductive. I don’t think it’s what he believes, in part because he has similarly good things to say about liberalism and socialism—but it’s honestly hard to tell.

The weakest part of the book is when Harari talks about free will. Like most people, he just doesn’t get compatibilism. He spends a whole chapter talking about how science ‘proves we have no free will’, and it’s just the same old tired arguments hard determinists have always made.

He talks about how we can make choices based on our desires, but we can’t choose our desires; well of course we can’t! What would that even mean? If you could choose your desires, what would you choose them based on, if not your desires? Your desire-desires? Well, then, can you choose your desire-desires? What about your desire-desire-desires?

What even is this ultimate uncaused freedom that libertarian free will is supposed to consist in? No one seems capable of even defining it. (I’d say Kant got the closest: He defined it as the capacity to act based upon what ought rather than what is. But of course what we believe about ‘ought’ is fundamentally stored in our brains as a particular state, a way things are—so in the end, it’s an ‘is’ we act on after all.)

Maybe before you lament that something doesn’t exist, you should at least be able to describe that thing as a coherent concept? Woe is me, that 2 plus 2 is not equal to 5!

It is true that as our technology advances, manipulating other people’s desires will become more and more feasible. Harari overstates the case on so-called robo-rats; they aren’t really mind-controlled, it’s more like they are rewarded and punished. The rat chooses to go left because she knows you’ll make her feel good if she does; she’s still freely choosing to go left. (Dangling a carrot in front of a horse is fundamentally the same thing—and frankly, paying a wage isn’t all that different.) The day may yet come when stronger forms of control become feasible, and woe betide us when it does. Yet this is no threat to the concept of free will; we already knew that coercion was possible, and mind control is simply a more precise form of coercion.

Harari reports on a lot of interesting findings in neuroscience, which are important for people to know about, but they do not actually show that free will is an illusion. What they do show is that free will is thornier than most people imagine. Our desires are not fully unified; we are often ‘of two minds’ in a surprisingly literal sense. We are often tempted by things we know are wrong. We often aren’t sure what we really want. Every individual is in fact quite divisible; we literally contain multitudes.

We do need a richer account of moral responsibility that can deal with the fact that human beings often feel multiple conflicting desires simultaneously, and often experience events differently than we later go on to remember them. But at the end of the day, human consciousness is mostly unified, our choices are mostly rational, and our basic account of moral responsibility is mostly valid.

I think for now we should perhaps be less worried about what may come in the distant future, what sort of godlike powers our descendants may have—and more worried about what we are doing with the godlike powers we already have. We have the power to feed the world; why aren’t we? We have the power to save millions from disease; why don’t we? I don’t see many people blindly following this ‘Dataism’, but I do see an awful lot blindly following a 19th-century vision of capitalism.

And perhaps if we straighten ourselves out, the future will be in better hands.

# Why does democracy work?

May 14 JDN 2460079

A review of Democracy for Realists

I don’t think it can be seriously doubted that democracy does, in fact, work. Not perfectly, by any means; but the evidence is absolutely overwhelming that more democratic societies are better than more authoritarian societies by just about any measure you could care to use.

When I first started reading Democracy for Realists and saw their scathing, at times frothing criticism of mainstream ideas of democracy, I thought they were going to try to disagree with that; but in the end they don’t. Achen and Bartels do agree that democracy works; they simply think that why and how it works is radically different from what most people think.

But it is a very long-winded book, and in dire need of better editing. Most of the middle section of the book is taken up by a deluge of empirical analysis, most of which amounts to over-interpreting the highly ambiguous results of underpowered linear regressions on extremely noisy data. The sheer quantity of them seems intended to overwhelm any realization that no particular one is especially compelling. But a hundred weak arguments don’t add up to a single strong one.

To their credit, the authors often include the actual scatter plots; but when you look at those scatter plots, you find yourself wondering how anyone could be so convinced these effects are real and important. Many of them look less like evidence of an effect and more like star fields in which you could just as easily draw new constellations.

Their econometric techniques are a bit dubious, as well; at one point they said they “removed outliers”, but the examples they gave as “outliers” were the observations most distant from their regression line rather than from the rest of the data. Removing the things furthest from your regression line will always—always—make your regression seem stronger. But that’s not what outliers are. Other times, they add weird controls or exclude parts of the sample for dubious reasons, and I get the impression that these are the cherry-picked results of a much larger exploration. (Why in the world would you exclude Catholics from a study of abortion attitudes? And this study on shark attacks seems awfully specific….) And of course if you try 20 regressions at random, you should expect at least one of them to show up with p < 0.05 by chance alone. I think they are mainly just following the norms of their discipline—but those norms are quite questionable.
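Both problems are easy to demonstrate with simulated data. The numbers below are invented for illustration, not taken from the book: dropping the points farthest from your own regression line mechanically inflates the fit, and running 20 regressions gives a large chance of at least one spurious p < 0.05.

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(x, y):
    """Fit y = a*x + b by least squares and return R^2."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return 1 - resid.var() / y.var()

# A weak true effect buried in noise (made-up data):
x = rng.uniform(0, 10, 200)
y = 0.2 * x + rng.normal(0, 1, 200)

r2_full = r_squared(x, y)

# "Removing outliers" the wrong way: drop the 20% of points
# farthest from the fitted line, then refit on the rest.
a, b = np.polyfit(x, y, 1)
resid = np.abs(y - (a * x + b))
keep = resid < np.quantile(resid, 0.8)
r2_trimmed = r_squared(x[keep], y[keep])

print(f"R^2 with all data:       {r2_full:.3f}")
print(f"R^2 after 'outlier' cut: {r2_trimmed:.3f}")  # looks stronger

# Multiple comparisons: with 20 independent regressions of pure noise,
# the chance that at least one clears p < 0.05 is 1 - 0.95**20.
print(f"P(at least one false positive in 20) = {1 - 0.95**20:.2f}")
```

The trimmed fit always looks tighter, because the procedure deletes exactly the points that made the fit look loose; and with 20 shots at p < 0.05, a "significant" result is more likely than not even when there is nothing there.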

They don’t ever get into much detail as to what sort of practical institutional changes they would recommend, so it’s hard to know whether I would agree with those. Some of their suggestions, such as more stringent rules on campaign spending, I largely agree with. Others, such as their opposition to popular referenda and recommendation for longer term limits, I have more mixed feelings about. But none seem totally ridiculous or even particularly radical, and they really don’t offer much detail about any of them. I thought they were going to tell me that appointment of judges is better than election (a view many experts share), or that the Electoral College is a good system (which far fewer experts would assent to, at least since George W. Bush and Donald Trump). In fact they didn’t do that; they remain eerily silent on substantive questions like this.

Honestly, what little they have to say about institutional policy feels a bit tacked on at the end, as if they suddenly realized that they ought to say something useful rather than just spend the whole time tearing down another theory.

In fact, I came to wonder if they really were tearing down anyone’s actual theory, or if this whole book was really just battering a strawman. Does anyone really think that voters are completely rational? At one point they speak of an image of the ‘sovereign omnicompetent voter’; is that something anyone really believes in?

It does seem like many people believe in making government more responsive to the people, whereas Achen and Bartels seem to have the rather distinct goal of making government make better decisions. They were able to find at least a few examples—though I know not how far and wide they had to search—where it seemed like more popular control resulted in worse outcomes, such as water fluoridation and funding for fire departments. So maybe the real substantive disagreement here is over whether more or less direct democracy is a good idea. And that is indeed a reasonable question. But one need not believe that voters are superhuman geniuses to think that referenda are better than legislation. Simply showing that voters are limited in their capacity and bound to group identity is not enough to answer that question.

In fact, I think that Achen and Bartels seriously overestimate the irrationality of voters, because they don’t seem to appreciate that group identity is often a good proxy for policy—in fact, they don’t even really seem to see social policy as policy at all. Consider this section (p. 238):

> “In this pre-Hitlerian age it must have seemed to most Jews that there were no crucial issues dividing the major parties” (Fuchs 1956, 63). Yet by 1923, a very substantial majority of Jews had abandoned their Republican loyalties and begun voting for the Democrats. What had changed was not foreign policy, but rather the social status of Jews within one of America’s major political parties. In a very visible way, the Democrats had become fully accepting and incorporating of religious minorities, both Catholics and Jews. The result was a durable Jewish partisan realignment grounded in “ethnic solidarity”, in Gamm’s characterization.

Gee, I wonder why Jews would suddenly care a great deal which party was more respectful toward people like them? Okay, the Holocaust hadn’t happened yet, but anti-Semitism is very old indeed, and it was visibly creeping upward during that era. And just in general, if one party is clearly more anti-Semitic than the other, why wouldn’t Jews prefer the one that is less hateful toward them? How utterly blinded by privilege do you need to be to not see that this is an important policy difference?

Perhaps because they are both upper-middle-class straight White cisgender men (I would also venture a guess nominally but not devoutly Protestant), Achen and Bartels seem to have no concept that social policy directly affects people of minority identity, that knowing that one party accepts people like you and the other doesn’t is a damn good reason to prefer one over the other. This is not a game where we are rooting for our home team. This directly affects our lives.

I know quite a few transgender people, and not a single one is a Republican. It’s not because all trans people hate low taxes. It’s because the Republican Party has declared war on trans people.

This may also lead to trans people being more left-wing generally, as once you’re in a group you tend to absorb some views from others in that group (and, I’ll admit, Marxists and anarcho-communists seem overrepresented among LGBT people). But I absolutely know some LGBT people who would like to vote conservative for economic policy reasons, but realize they can’t, because it means voting for bigots who hate them and want to actively discriminate against them. There is nothing irrational or even particularly surprising about this choice. It would take a very powerful overriding reason for anyone to want to vote for someone who publicly announces hatred toward them.

Indeed, for me the really baffling thing is that there are political parties that publicly announce hatred toward particular groups. It seems like a really weird strategy for winning elections. That is the thing that needs to be explained here; why isn’t inclusiveness—at least a smarmy lip-service toward inclusiveness, like ‘Diversity, Equity, and Inclusion’ offices at universities—the default behavior of all successful politicians? Why don’t they all hug a Latina trans woman after kissing a baby and taking a selfie with the giant butter cow? Why is not being an obvious bigot considered a left-wing position?

Since it obviously is the case that many voters don’t want this hatred (at the very least, its targets!), in order for it not to damage electoral chances, it must be that some other voters do want this hatred. Perhaps they themselves define their own identity in opposition to other people’s identities. They certainly talk that way a lot: We hear White people fearing ‘replacement’ by shifting racial demographics, when no sane forecaster thinks that European haplotypes are in any danger of disappearing any time soon. The central argument against gay marriage was always that it would somehow destroy straight marriage, by some mechanism never explained.

Indeed, perhaps it is this very blindness toward social policy that makes Achen and Bartels unable to see the benefits of more direct democracy. When you are laser-focused on economic policy, as they are, then it seems to you as though policy questions are mainly technical matters of fact, and thus what we need are qualified experts. (Though even then, it is not purely a matter of fact whether we should care more about inequality than growth, or more about unemployment than inflation.)

But once you include social policy, you see that politics often involves very real, direct struggles between conflicting interests and differing moral views, and that by the time you’ve decided which view is the correct one, you already have your answer for what must be done. There is no technical question of gay marriage; there is only a moral one. We don’t need expertise on such questions; we need representation. (Then again, it’s worth noting that courts have sometimes advanced rights more effectively than direct democratic votes; so having your interests represented isn’t as simple as getting an equal vote.)

Achen and Bartels even include a model in the appendix where politicians are modeled as either varying in competence or controlled by incentives; never once does it consider that they might differ in whose interests they represent. Yet I don’t vote for a particular politician just because I think they are more intelligent, or as part of some kind of deterrence mechanism to keep them from misbehaving (I certainly hope the courts do a better job of that!); I vote for them because I think they represent the goals and interests I care about. We aren’t asking who is smarter, we are asking who is on our side.

The central question that I think the book raises is one that the authors don’t seem to have much to offer on: If voters are so irrational, why does democracy work? I do think there is strong evidence that voters are irrational, though maybe not as irrational as Achen and Bartels seem to think. Honestly, I don’t see how anyone can watch Donald Trump get elected President of the United States and not think that voters are irrational. (The book was written before that; apparently there’s a new edition with a preface about Trump, but my copy doesn’t have that.) But it isn’t at all obvious to me what to do with that information, because even if so-called elites are in fact more competent than average citizens—which may or may not be true—the fact remains that their interests are never completely aligned. Thus far, representative democracy of one stripe or another seems to be the best mechanism we have for finding people who have sufficient competence while also keeping them on a short enough leash.

And perhaps that’s why democracy works as well as it does; it gives our leaders enough autonomy to let them generally advance their goals, but also places limits on how badly misaligned our leaders’ goals can be from our own.

# Reckoning costs in money distorts them

May 7 JDN 2460072

Consider for a moment what it means when an economic news article reports “rising labor costs”. What are they actually saying?

They’re saying that wages are rising—perhaps in some industry, perhaps in the economy as a whole. But this is not a cost. It’s a price. As I’ve written about before, the two are fundamentally distinct.

The cost of labor is measured in effort, toil, and time. It’s the pain of having to work instead of whatever else you’d like to do with your time.

The price of labor is a monetary amount, which is delivered in a transaction.

This may seem perfectly obvious, but it has important and oft-neglected implications. A cost, once paid, is gone. That value has been destroyed. We hope that it was worth it for some benefit we gained. A price, when paid, is simply transferred: One person had that money before, now someone else has it. Nothing was gained or lost.

So in fact when reports say that “labor costs have risen”, what they are really saying is that income is being transferred from owners to workers without any change in real value taking place. They are framing as a loss what is fundamentally a zero-sum redistribution.

In fact, it is disturbingly common to see a fundamentally good redistribution of income framed in the press as a bad outcome because of its expression as “costs”; the “cost” of chocolate is feared to go up if we insist upon enforcing bans on forced labor—when in fact it is only the price that goes up, and the cost actually goes down: chocolate would no longer include complicity in an atrocity. The real suffering of making chocolate would be thereby reduced, not increased. Even when they aren’t literally enslaved, those workers are astonishingly poor, and giving them even a few more cents per hour would make a real difference in their lives. But God forbid we pay a few cents more for a candy bar!

If labor costs were to rise, that would mean that work had suddenly gotten harder, or more painful; or else, that some outside circumstance had made it more difficult to work. Having a child increases your labor costs—you now have the opportunity cost of not caring for the child. COVID increased the cost of labor, by making it suddenly dangerous just to go outside in public. That could also increase prices—you may demand a higher wage, and people do seem to have demanded higher wages after COVID. But these are two separate effects, and you can have one without the other. In fact, women typically see wage stagnation or even reduction after having kids (but men largely don’t), despite their real opportunity cost of labor having obviously greatly increased.

On an individual level, it’s not such a big mistake to equate price and cost. If you are buying something, its cost to you basically just is its price, plus a little bit of transaction cost for actually finding and buying it. But on a societal level, it makes an enormous difference. It distorts our policy priorities and can even lead to actively trying to suppress things that are beneficial—such as rising wages.

This false equivalence between price and cost seems to be at least as common among economists as it is among laypeople. Economists will often justify it on the grounds that in an ideal perfect competitive market the two would be in some sense equated. But of course we don’t live in that ideal perfect market, and even if we did, they would only be proportional at the margin, not fundamentally equal across the board. It would still be obviously wrong to characterize the total value or cost of work by the price paid for it; only the last unit of effort would be priced so that marginal value equals price equals marginal cost. The first 39 hours of your work would cost you less than what you were paid, and produce more than you were paid; only that 40th hour would set the three equal.

Once you account for all the various market distortions in the world, there’s no particular relationship between what something costs—in terms of real effort and suffering—and its price—in monetary terms. Things can be expensive and easy, or cheap and awful. In fact, they often seem to be; for some reason, there seems to be a pattern where the most terrible, miserable jobs (e.g. coal mining) actually pay the least, and the easiest, most pleasant jobs (e.g. stock trading) pay the most. Some jobs that benefit society pay well (e.g. doctors) and others pay terribly or not at all (e.g. climate activists). Some actions that harm the world get punished (e.g. armed robbery) and others get rewarded with riches (e.g. oil drilling). In the real world, whether a job is good or bad and whether it is paid well or poorly seem to be almost unrelated.

In fact, sometimes they seem even negatively related, where we often feel tempted to “sell out” and do something destructive in order to get higher pay. This is likely due to Berkson’s paradox: If people are willing to do jobs if they are either high-paying or beneficial to humanity, then we should expect that, on average, most of the high-paying jobs people do won’t be beneficial to humanity. Even if there were inherently no correlation or a small positive one, people’s refusal to do harmful low-paying work removes those jobs from our sample and results in a negative correlation in what remains.
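The selection effect behind Berkson’s paradox is easy to demonstrate with a toy simulation. This is a sketch with made-up numbers (the 0.5 thresholds and the `pearson` helper are my own illustrative choices, not anything from real labor data): pay and social benefit are drawn independently, yet among the jobs people actually take, the correlation comes out negative.

```python
import random

random.seed(42)
n = 100_000

# Pay and social benefit of each job, drawn independently (zero true correlation).
jobs = [(random.random(), random.random()) for _ in range(n)]

# Selection rule: people only take a job if it is either well-paid OR beneficial.
taken = [(p, b) for p, b in jobs if p > 0.5 or b > 0.5]

def pearson(pairs):
    """Plain Pearson correlation coefficient for a list of (x, y) pairs."""
    m = len(pairs)
    mx = sum(p for p, _ in pairs) / m
    my = sum(b for _, b in pairs) / m
    cov = sum((p - mx) * (b - my) for p, b in pairs) / m
    vx = sum((p - mx) ** 2 for p, _ in pairs) / m
    vy = sum((b - my) ** 2 for _, b in pairs) / m
    return cov / (vx * vy) ** 0.5

print(f"all jobs:   {pearson(jobs):+.3f}")   # roughly zero
print(f"taken jobs: {pearson(taken):+.3f}")  # clearly negative
```

Dropping the jobs that are both low-paying and harmful carves the lower-left corner out of the sample, and that alone manufactures the negative correlation.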

I think that the best solution, ultimately, is to stop reckoning costs in money entirely. We should reckon them in happiness.

This is of course much more difficult than simply using prices; it’s not easy to say exactly how many QALY are sacrificed in the extraction of cocoa beans or the drilling of offshore oil wells. But if we actually did find a way to count them, I strongly suspect we’d find that it was far more than we ought to be willing to pay.

A very rough approximation, surely flawed but at least a start, would be to simply convert all payments into proportions of their recipient’s income. For full-time wages, this would result in basically everyone being counted the same: 1 hour of work, if you work 40 hours per week, 50 weeks per year, is precisely 0.05% of your annual income. So we could say that whatever is equivalent to your hourly wage constitutes 500 microQALY.

This automatically implies that every time a rich person pays a poor person, QALY increase, while every time a poor person pays a rich person, QALY decrease. This is not an error in the calculation. It is a fact of the universe. We ignore it only at our own peril. All wealth redistributed downward is a benefit, while all wealth redistributed upward is a harm. That benefit may cause some other harm, or that harm may be compensated by some other benefit; but they are still there.

This would also put some things in perspective. When HSBC was fined £70 million for its crimes, that can be compared against its £1.5 billion in net income; if it were an individual, it would have been hurt about 50 milliQALY, which is about what I would feel if I lost \$2000. Of course, it’s not a person, and it’s not clear exactly how this loss was passed through to employees or shareholders; but that should give us at least some sense of how small that loss was for them. They probably felt it… a little.

When Trump was ordered to pay a \$1.3 million settlement, based on his \$2.5 billion net wealth (corresponding to roughly \$125 million in annual investment income), that cost him about 10 milliQALY; for me that would be about \$500.

At the other extreme, if someone goes from making \$1 per day to making \$1.50 per day, that’s a 50% increase in their income—500 milliQALY per year.
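The conversion in all three examples is the same one-line calculation, under the simplifying assumption that a full year of income is worth about one QALY to its recipient. A minimal sketch (the function name is mine; the figures are the ones quoted above):

```python
def payment_in_qaly(amount, annual_income):
    """Rough QALY value of a payment to its recipient: its share of the
    recipient's annual income, assuming one year of income ~ 1 QALY."""
    return amount / annual_income

# HSBC: £70 million fine against £1.5 billion net income -> ~47 milliQALY
print(payment_in_qaly(70e6, 1.5e9))    # ~0.047

# Trump: $1.3 million settlement, ~$125 million investment income -> ~10 milliQALY
print(payment_in_qaly(1.3e6, 125e6))   # ~0.0104

# One hour's wage in a 2,000-hour work year -> 500 microQALY
print(payment_in_qaly(1, 2000))        # 0.0005
```

The same payment thus registers as a very different welfare change depending on who receives it, which is the whole point of the exercise.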

For those who have no income at all, this becomes even trickier; for them I think we should probably use their annual consumption, since everyone needs to eat and that costs something, though likely not very much. Or we could try to measure their happiness directly, trying to determine how much it hurts to not eat enough and work all day in sweltering heat.

Properly shifting this whole cultural norm will take a long time. For now, I leave you with this: Any time you see a monetary figure, ask yourself: “How much is that worth to them?” The world will seem quite different once you get in the habit of that.

# Optimization is unstable. Maybe that’s why we satisfice.

Feb 26 JDN 2460002

Imagine you have become stranded on a deserted island. You need to find shelter, food, and water, and then perhaps you can start working on a way to get help or escape the island.

Suppose you are programmed to be an optimizer, seeking the absolute best solution to any problem. At first this may seem to be a boon: You’ll build the best shelter, find the best food, get the best water, find the best way off the island.

But you’ll also expend an enormous amount of effort trying to make it the best. You could spend hours just trying to decide what the best possible shelter would be. You could pass up dozens of viable food sources because you aren’t sure that any of them are the best. And you’ll never get any rest because you’re constantly trying to improve everything.

In principle your optimization could include that: The cost of thinking too hard or searching too long could be one of the things you are optimizing over. But in practice, this sort of bounded optimization is often remarkably intractable.
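The search-cost difference can be sketched in a few lines. This is a toy model, with made-up “quality” scores and an arbitrary 0.9 acceptability threshold, not a claim about real decision procedures: the optimizer must inspect every option to certify the best one, while the satisficer stops at the first option that clears the bar.

```python
import random

random.seed(1)
# Quality of 1,000 possible shelter sites, unknown until inspected.
options = [random.random() for _ in range(1000)]

def optimize(options):
    """Inspect every option to guarantee finding the best; cost = all of them."""
    return max(options), len(options)

def satisfice(options, good_enough=0.9):
    """Inspect options in order; stop at the first acceptable one."""
    for cost, quality in enumerate(options, start=1):
        if quality >= good_enough:
            return quality, cost
    return max(options), len(options)  # fall back if nothing qualifies

best, full_cost = optimize(options)
ok, small_cost = satisfice(options)
print(best, full_cost)   # best quality, but 1,000 inspections
print(ok, small_cost)    # slightly worse quality, far fewer inspections
```

The satisficer gives up a sliver of quality in exchange for an enormous saving in search effort, which it can spend on the snakebite.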

And what if you forgot about something? You were so busy optimizing your shelter you forgot to treat your wounds. You were so busy seeking out the perfect food source that you didn’t realize you’d been bitten by a venomous snake.

This is not the way to survive. You don’t want to be an optimizer.

No, the person who survives is a satisficer: They make sure that what they have is good enough and then they move on to the next thing. Their shelter is lopsided and ugly. Their food is tasteless and bland. Their water is hard. But they have them.

Once they have shelter and food and water, they will have time and energy to do other things. They will notice the snakebite. They will treat the wound. Once all their needs are met, they will get enough rest.

Empirically, humans are satisficers. We seem to be happier because of it—in fact, the people who are the happiest satisfice the most. And really this shouldn’t be so surprising, because our ancestral environment wasn’t so different from being stranded on a desert island.

Good enough is perfect. Perfect is bad.

Let’s consider another example. Suppose that you have created a powerful artificial intelligence, an AGI with the capacity to surpass human reasoning. (It hasn’t happened yet—but it probably will someday, and maybe sooner than most people think.)

What do you want that AI’s goals to be?

Okay, ideally maybe they would be something like “Maximize goodness”, where we actually somehow include all the panoply of different factors that go into goodness, like beneficence, harm avoidance, fairness, justice, kindness, honesty, and autonomy. Do you have any idea how to do that? Do you even know what your own full moral framework looks like at that level of detail?

Far more likely, the goals you program into the AGI will be much simpler than that. You’ll have something you want it to accomplish, and you’ll tell it to do that well.

Let’s make this concrete and say that you own a paperclip company. You want to make more profits by selling paperclips.

First of all, let me note that this is not an unreasonable thing for you to want. It is not an inherently evil goal for one to have. The world needs paperclips, and it’s perfectly reasonable for you to want to make a profit selling them.

But it’s also not a true ultimate goal: There are a lot of other things that matter in life besides profits and paperclips. Anyone who isn’t a complete psychopath will realize that.

But the AI won’t. Not unless you tell it to. And so if we tell it to optimize, we would need to actually include in its optimization all of the things we genuinely care about—not missing a single one—or else whatever choices it makes are probably not going to be the ones we want. Oops, we forgot to say we need clean air, and now we’re all suffocating. Oops, we forgot to say that puppies don’t like to be melted down into plastic.

The simplest cases to consider are obviously horrific: Tell it to maximize the number of paperclips produced, and it starts tearing the world apart to convert everything to paperclips. (This is the original “paperclipper” concept from Less Wrong.) Tell it to maximize the amount of money you make, and it seizes control of all the world’s central banks and starts printing \$9 quintillion for itself. (Why that amount? I’m assuming it uses 64-bit signed integers, and 2^63 is over 9 quintillion. If it uses long ints, we’re even more doomed.) No, inflation-adjusting won’t fix that; even hyperinflation typically still results in more real seigniorage for the central banks doing the printing (which is, you know, why they do it). The AI won’t ever be able to own more than all the world’s real GDP—but it will be able to own that if it prints enough and we can’t stop it.

But even if we try to come up with some more sophisticated optimization for it to perform (what I’m really talking about here is specifying its utility function), it becomes vital for us to include everything we genuinely care about: Anything we forget to include will be treated as a resource to be consumed in the service of maximizing everything else.

Consider instead what would happen if we programmed the AI to satisfice. The goal would be something like, “Produce at least 400,000 paperclips at a price of at most \$0.002 per paperclip.”

Given such an instruction, in all likelihood, it would in fact produce exactly 400,000 paperclips at a price of exactly \$0.002 per paperclip. And maybe that’s not strictly the best outcome for your company. But if it’s better than what you were previously doing, it will still increase your profits.
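The difference in goal specification can be made concrete: a satisficing goal is a pass/fail constraint, not a quantity to maximize, so the agent has no reason to keep going once the constraint is met. A toy sketch, using the quota and price from the text (the batch size and function names are my own illustrative choices):

```python
def satisficing_goal(quantity, unit_cost,
                     min_quantity=400_000, max_unit_cost=0.002):
    """A satisficing goal: a constraint to satisfy, not a score to maximize."""
    return quantity >= min_quantity and unit_cost <= max_unit_cost

def produce_until_satisfied(batch_size=50_000, unit_cost=0.002):
    """Make paperclips batch by batch, stopping as soon as the goal is met."""
    quantity = 0
    while not satisficing_goal(quantity, unit_cost):
        quantity += batch_size  # make another batch, then re-check the goal
    return quantity, unit_cost

print(produce_until_satisfied())  # (400000, 0.002)
```

Because the loop condition is a boolean, the program halts at exactly 400,000 paperclips; a maximizer’s loop condition, by contrast, is never satisfied.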

Moreover, such an instruction is far less likely to result in the end of the world.

If the AI has a particular target to meet for its production quota and price limit, the first thing it would probably try is to use your existing machinery. If that’s not good enough, it might start trying to modify the machinery, or acquire new machines, or develop its own techniques for making paperclips. But there are quite strict limits on how creative it is likely to be—because there are quite strict limits on how creative it needs to be. If you were previously producing 200,000 paperclips at \$0.004 per paperclip, all it needs to do is double production and halve the cost. That’s a very standard sort of industrial innovation—in computing hardware (admittedly an extreme case), we do this sort of thing every couple of years.

It certainly won’t tear the world apart making paperclips—at most it’ll tear apart enough of the world to make 400,000 paperclips, which is a pretty small chunk of the world, because paperclips aren’t that big. A paperclip weighs about a gram, so you’ve only destroyed about 400 kilos of stuff. (You might even survive the lawsuits!)

Are you leaving money on the table relative to the optimization scenario? Eh, maybe. One, it’s a small price to pay for not ending the world. But two, if 400,000 at \$0.002 was too easy, next time try 600,000 at \$0.001. Over time, you can gently increase its quotas and tighten its price requirements until your company becomes more and more successful—all without risking the AI going completely rogue and doing something insane and destructive.

Of course this is no guarantee of safety—and I absolutely want us to use every safeguard we possibly can when it comes to advanced AGI. But the simple change from optimizing to satisficing seems to solve the most severe problems immediately and reliably, at very little cost.

Good enough is perfect; perfect is bad.

I see broader implications here for behavioral economics. When all of our models are based on optimization, but human beings overwhelmingly seem to satisfice, maybe it’s time to stop assuming that the models are right and the humans are wrong.

Optimization is perfect if it works—and awful if it doesn’t. Satisficing is always pretty good. Optimization is unstable, while satisficing is robust.

In the real world, that probably means that satisficing is better.

Good enough is perfect; perfect is bad.

# What is it with EA and AI?

Jan 1 JDN 2459946

Surprisingly, most Effective Altruism (EA) leaders don’t seem to think that poverty alleviation should be our top priority. Most of them seem especially concerned about long-term existential risk, such as artificial intelligence (AI) safety and biosecurity. I’m not going to say that these things aren’t important—they certainly are important—but here are a few reasons I’m skeptical that they are really the most important in the way that so many EA leaders seem to think.

1. We don’t actually know how to make much progress at them, and there’s only so much we can learn by investing heavily in basic research on them. Whereas, with poverty, the easy, obvious answer turns out empirically to be extremely effective: Give them money.

2. While it’s easy to multiply out huge numbers of potential future people in your calculations of existential risk (and this is precisely what people do when arguing that AI safety should be a top priority), this clearly isn’t actually a good way to make real-world decisions. We simply don’t know enough about the distant future of humanity to be able to make any kind of good judgments about what will or won’t increase their odds of survival. You’re basically just making up numbers. You’re taking tiny probabilities of things you know nothing about and multiplying them by ludicrously huge payoffs; it’s basically the secular rationalist equivalent of Pascal’s Wager.

3. AI and biosecurity are high-tech, futuristic topics, which seem targeted to appeal to the sensibilities of a movement that is still very dominated by intelligent, nerdy, mildly autistic, rich young White men. (Note that I say this as someone who very much fits this stereotype. I’m queer, not extremely rich and not entirely White, but otherwise, yes.) Somehow I suspect that if we asked a lot of poor Black women how important it is to slightly improve our understanding of AI versus giving money to feed children in Africa, we might get a different answer.

4. Poverty eradication is often characterized as a “short term” project, contrasted with AI safety as a “long term” project. This is (ironically) very short-sighted. Eradication of poverty isn’t just about feeding children today. It’s about making a world where those children grow up to be leaders and entrepreneurs and researchers themselves. The positive externalities of economic development are staggering. It is really not much of an exaggeration to say that fascism is a consequence of poverty and unemployment.

5. Currently the main thing that most Effective Altruism organizations say they need most is “talent”; how many millions of person-hours of talent are we leaving on the table by letting children starve or die of malaria?

6. Above all, existential risk can’t really be what’s motivating people here. The obvious solutions to AI safety and biosecurity are not being pursued, because they don’t fit with the vision that intelligent, nerdy, young White men have of how things should be. Namely: Ban them. If you truly believe that the most important thing to do right now is reduce the existential risk of AI and biotechnology, you should support a worldwide ban on research in artificial intelligence and biotechnology. You should want people to take all necessary action to attack and destroy institutions—especially for-profit corporations—that engage in this kind of research, because you believe that they are threatening to destroy the entire world and this is the most important thing, more important than saving people from starvation and disease. I think this is really the knock-down argument; when people say they think that AI safety is the most important thing but they don’t want Google and Facebook to be immediately shut down, they are either confused or lying. Honestly I think maybe Google and Facebook should be immediately shut down for AI safety reasons (as well as privacy and antitrust reasons!), and I don’t think AI safety is yet the most important thing.

Why aren’t people doing that? Because they aren’t actually trying to reduce existential risk. They just think AI and biotechnology are really interesting, fascinating topics and they want to do research on them. And I agree with that, actually—but then they need to stop telling people that they’re fighting to save the world, because they obviously aren’t. If the danger were anything like what they say it is, we should be halting all research on these topics immediately, except perhaps for a very select few people who are entrusted with keeping these forbidden secrets and trying to find ways to protect us from them. This may sound radical and extreme, but it is not unprecedented: This is how we handle nuclear weapons, which are universally recognized as a global existential risk. If AI is really as dangerous as nukes, we should be regulating it like nukes. I think that in principle it could be that dangerous, and may be that dangerous someday—but it isn’t yet. And if we don’t want it to get that dangerous, we don’t need more AI researchers, we need more regulations that stop people from doing harmful AI research! If you are doing AI research and it isn’t directly involved specifically in AI safety, you aren’t saving the world—you’re one of the people dragging us closer to the cliff! Anything that could make AI smarter but doesn’t also make it safer is dangerous. And this is clearly true of the vast majority of AI research, and frankly to me seems to also be true of the vast majority of research at AI safety institutes like the Machine Intelligence Research Institute.

Seriously, look through MIRI’s research agenda: It’s mostly incredibly abstract and seems completely beside the point when it comes to preventing AI from taking control of weapons or governments. It’s all about formalizing Bayesian induction. Thanks to you, Skynet can have a formally computable approximation to logical induction! Truly we are saved. Only two of their papers, on “Corrigibility” and “AI Ethics”, actually struck me as at all relevant to making AI safer. The rest is largely abstract mathematics that is almost literally navel-gazing—it’s all about self-reference. Eliezer Yudkowsky finds self-reference fascinating and has somehow convinced an entire community that it’s the most important thing in the world. (I actually find some of it fascinating too, especially the paper on “Functional Decision Theory”, which I think gets at some deep insights into things like why we have emotions. But I don’t see how it’s going to save the world from AI.)

Don’t get me wrong: AI also has enormous potential benefits, and this is a reason we may not want to ban it. But if you really believe that there is a 10% chance that AI will wipe out humanity by 2100, then get out your pitchforks and your EMP generators, because it’s time for the Butlerian Jihad. A 10% chance of destroying all humanity is an utterly unacceptable risk for any conceivable benefit. Better that we consign ourselves to living as we did in the Neolithic than risk something like that. (And a globally-enforced ban on AI isn’t even that; it’s more like “We must live as we did in the 1950s.” How would we survive!?) If you don’t want AI banned, maybe ask yourself whether you really believe the risk is that high—or are human brains just really bad at dealing with small probabilities?

I think what’s really happening here is that we have a bunch of guys (and yes, the EA community, and especially its AI wing, is overwhelmingly male) who are really good at math and want to save the world, and have thus convinced themselves that being really good at math is how you save the world. But it isn’t. The world is much messier than that. In fact, there may not be much that most of us can do to contribute to saving the world; our best options may in fact be to donate money, vote well, and advocate for good causes.

Let me speak Bayesian for a moment: The prior probability that you—yes, you, out of all the billions of people in the world—are uniquely positioned to save it by being so smart is extremely small. It’s far more likely that the world will be saved—or doomed—by people who have power. If you are not the head of state of a large country or the CEO of a major multinational corporation, I’m sorry; you probably just aren’t in a position to save the world from AI.

But you can give some money to GiveWell, so maybe do that instead?

# In defense of civility

Dec 18 JDN 2459932

Civility is in short supply these days. Perhaps it has always been in short supply; certainly much of the nostalgia for past halcyon days of civility is ill-founded. Wikipedia has an entire article on hundreds of recorded incidents of violence in legislative assemblies, in dozens of countries, dating all the way from the Roman Senate in 44 BC to Bosnia in 2019. But the Internet seems to bring about its own special kind of incivility, one which exposes nearly everyone to some of the worst vitriol the entire world has to offer. I think it’s worth talking about why this is bad, and perhaps what we might do about it.

For some, the benefits of civility seem so self-evident that they don’t even bear mentioning. For others, the idea of defending civility may come across as tone-deaf or even offensive. I would like to speak to both of those camps today: If you think the benefits of civility are obvious, I assure you, they aren’t to everyone. And if you think that civility is just a tool of the oppressive status quo, I hope I can make you think again.

A lot of the argument against civility seems to be founded in the notion that these issues are important, lives are at stake, and so we shouldn’t waste time and effort being careful how we speak to each other. How dare you concern yourself with the formalities of argumentation when people are dying?

But this is totally wrongheaded. It is precisely because these issues are important that civility is vital. It is precisely because lives are at stake that we must make the right decisions. And shouting and name-calling (let alone actual fistfights or drawn daggers—which have happened!) are not conducive to good decision-making.

If you shout someone down when choosing what restaurant to have dinner at, you have been very rude and people may end up unhappy with their dining experience—but very little of real value has been lost. But if you shout someone down when making national legislation, you may cause the wrong policy to be enacted, and this could lead to the suffering or death of thousands of people.

Think about how court proceedings work. Why are they so rigid and formal, with rules upon rules upon rules? Because the alternative was capricious violence. In the absence of the formal structure of a court system, so-called ‘justice’ was handed out arbitrarily, by whoever was in power, or by mobs of vigilantes. All those seemingly-overcomplicated rules were made in order to resolve various conflicts of interest and hopefully lead toward more fair, consistent results in the justice system. (And don’t get me wrong; they still could stand to be greatly improved!)

Legislatures have complex rules of civility for the same reason: Because the outcome is so important, we need to make sure that the decision process is as reliable as possible. And as flawed as existing legislatures still are, and as silly as it may seem to insist upon addressing ‘the Honorable Representative from the Great State of Vermont’, it’s clearly a better system than simply letting them duke it out with their fists.

A related argument I would like to address is that of ‘tone policing’. If someone objects, not to the content of what you are saying, but to the tone in which you have delivered it, are they arguing in bad faith?

Well, possibly. Certainly, arguments about tone can be used that way. In particular I remember that this was basically the only coherent objection anyone could come up with against the New Atheism movement: “Well, sure, obviously, God isn’t real and religion is ridiculous; but why do you have to be so mean about it!?”

But it’s also quite possible for tone to be itself a problem. If your tone is overly aggressive and you don’t give people a chance to even seriously consider your ideas before you accuse them of being immoral for not agreeing with you—which happens all the time—then your tone really is the problem.

So, how can we tell which is which? I think a good way to reply to what you think might be bad-faith tone policing is this: “What sort of tone do you think would be better?”

I think there are basically three possible responses:

1. They can’t offer one, because there is actually no tone in which they would accept the substance of your argument. In that case, the tone policing really is in bad faith; they don’t want you to be nicer, they want you to shut up. This was clearly the case for New Atheism: As Daniel Dennett aptly remarked, “There’s simply no polite way to tell someone they have dedicated their lives to an illusion.” But sometimes, such things need to be said all the same.

2. They offer an alternative argument you could make, but it isn’t actually expressing your core message. Either they have misunderstood your core message, or they actually disagree with the substance of your argument and should be addressing it on those terms.

3. They offer an alternative way of expressing your core message in a milder, friendlier tone. This means that they are arguing in good faith and actually trying to help you be more persuasive!

I don’t know how common each of these three possibilities is; it could well be that the first one is the most frequent occurrence. That doesn’t change the fact that I have definitely been at the other end of the third one, where I absolutely agree with your core message and want your activism to succeed, but I can see that you’re acting like a jerk and nobody will want to listen to you.

Here, let me give some examples of the type of argument I’m talking about:

1. “Defund the police”: This slogan polls really badly. Probably because most people have genuine concerns about crime and want the police to protect them. Also, as more and more social services (like for mental health and homelessness) get co-opted into policing, this slogan makes it sound like you’re just going to abandon those people. But do we need serious, radical police reform? Absolutely. So how about “Reform the police”, “Put police money back into the community”, or even “Replace the police”?

2. “All Cops Are Bastards”: Speaking of police reform, did I mention we need it? A lot of it? Okay. Now, let me ask you: All cops? Every single one of them? There is not a single one out of the literally millions of police officers on this planet who is a good person? Not one who is fighting to take down police corruption from within? Not a single individual who is trying to fix the system while preserving public safety? Now, clearly, it’s worth pointing out, some cops are bastards—but hey, that even makes a better acronym: SCAB. In fact, it really is largely a few bad apples—the key point here is that you need to finish the aphorism: “A few bad apples spoil the whole barrel.” The number of police who are brutal and corrupt is relatively small, but as long as the other police continue to protect them, the system will be broken. Either you get those bad apples out pronto, or your whole barrel is bad. But demonizing the very people who are in the best position to implement those reforms—good police officers—is not helping.

3. “Be gay, do crime”: I know it’s tongue-in-cheek and ironic. I get that. It’s still a really dumb message. I am absolutely on board with LGBT rights. Even aside from being queer myself, I probably have more queer and trans friends than straight friends at this point. But why in the world would you want to associate us with petty crime? Why are you lumping us in with people who harm others at best out of desperation and at worst out of sheer greed? Even if you are literally an anarchist—which I absolutely am not—you’re really not selling anarchism well if the vision you present of it is a world of unfettered crime! There are dozens of better pro-LGBT slogans out there; pick one. Frankly even “do gay, be crime” is better, because it’s more clearly ironic. (Also, you can take it to mean something like this: Don’t just be gay, do gay—live your fullest gay life. And if you can be crime, that means that the system is fundamentally unjust: You can be criminalized just for who you are. And this is precisely what life is like for millions of LGBT people on this planet.)

A lot of people seem to think that if you aren’t immediately convinced by the most vitriolic, aggressive form of an argument, then you were never going to be convinced anyway and we should just write you off as a potential ally. This isn’t just obviously false; it’s incredibly dangerous.

The whole point of activism is that not everyone already agrees with you. You are trying to change minds. If it were really true that all reasonable, ethical people already agreed with your view, you wouldn’t need to be an activist. The whole point of making political arguments is that people can be reasonable and ethical and still be mistaken about things, and when we work hard to persuade them, we can eventually win them over. In fact, on some things we’ve actually done spectacularly well.

And what about the people who aren’t reasonable and ethical? They surely exist. But fortunately, they aren’t the majority. They don’t rule the whole world. If they did, we’d basically be screwed: If violence is really the only solution, then it’s basically a coin flip whether things get better or worse over time. But in fact, unreasonable people are outnumbered by reasonable people. Most of the things that are wrong with the world are mistakes, errors that can be fixed—not conflicts between irreconcilable factions. Our goal should be to fix those mistakes wherever we can, and that means being patient, compassionate educators—not angry, argumentative bullies.

# The case against phys ed

Dec 4 JDN 2459918

If I want to stop someone from engaging in an activity, what should I do? I could tell them it’s wrong, and if they believe me, that would work. But what if they don’t believe me? Or I could punish them for doing it, and as long as I can continue to do that reliably, that should deter them from doing it. But what happens after I remove the punishment?

If I really want to make someone not do something, the best way to accomplish that is to make them not want to do it. Make them dread doing it. Make them hate the very thought of it. And to accomplish that, a very efficient method would be to first force them to do it, but make that experience as miserable and humiliating as possible. Give them a wide variety of painful or outright traumatic experiences that are directly connected with the undesired activity, to carry with them for the rest of their life.

This is precisely what physical education does, with regard to exercise. Phys ed is basically optimized to make people hate exercise.

Oh, sure, some students enjoy phys ed. These are the students who are already athletic and fit, who already engage in regular exercise and enjoy doing so. They may enjoy phys ed, may even benefit a little from it—but they didn’t really need it in the first place.

The kids who need more physical activity are the kids who are obese, or have asthma, or suffer from various other disabilities that make exercising difficult and painful for them. And what does phys ed do to those kids? It makes them compete in front of their peers at various athletic tasks at which they will inevitably fail and be humiliated.

Even the kids who are otherwise healthy but just don’t get enough exercise will go into phys ed class at a disadvantage, and instead of being carefully trained to improve their skills and physical condition at their own level, they will be publicly shamed by their peers for their inferior performance.

I know this, because I was one of those kids. I have exercise-induced bronchoconstriction, a lung condition similar to asthma (actually there’s some debate as to whether it should be considered a form of asthma), in which intense aerobic exercise causes the airways of my lungs to become constricted and inflamed, making me unable to get enough air to continue.

It’s really quite remarkable I wasn’t diagnosed with this as a child; I actually once collapsed while running in gym class, and all they thought to do at the time was give me water and let me rest for the remainder of the class. Nobody thought to call the nurse. I was never put on a beta agonist or an inhaler. (In fact at one point I was put on a beta blocker for my migraines; I now understand why I felt so fatigued when taking it—it was literally the opposite of the drug my lungs needed.)

Actually it’s been a few years since I had an attack. This is of course partly due to me generally avoiding intense aerobic exercise; but even when I do get intense exercise, I rarely seem to get bronchoconstriction attacks. My working hypothesis is that the norepinephrine reuptake inhibition of my antidepressant acts like a beta agonist; both drugs mimic norepinephrine.

But as a child, I got such attacks quite frequently; and even when I didn’t, my overall athletic performance was always worse than most of the other kids. They knew it, I knew it, and while only a few actively tried to bully me for it, none of the others did anything to make me feel better. So gym class was always a humiliating and painful experience that I came to dread.

As a result, as soon as I got out of school and had my own autonomy in how to structure my own life, I basically avoided exercise whenever I could. Even knowing that it was good for me—really, exercise is ridiculously good for you; it honestly doesn’t even make sense to me how good it is for you—I could rarely get myself to actually go out and exercise. I certainly couldn’t do it with anyone else; sometimes, if I was very disciplined, I could manage to maintain an exercise routine by myself, as long as there was no one else there who could watch me, judge me, or compare themselves to me.

In fact, I’d probably have avoided exercise even more, had I not also had some more positive experiences with it outside of school. I trained in martial arts for a few years, getting almost to a black belt in tae kwon do; I quit precisely when it started becoming very competitive and thus began to feel humiliated again when I performed worse than others. Part of me wishes I had stuck with it long enough to actually get the black belt; but the rest of me knows that even if I’d managed it, I would have been miserable the whole time and it probably would have made me dread exercise even more.

The details of my story are of course individual to me; but the general pattern is disturbingly common. A kid does poorly in gym class, or even suffers painful attacks of whatever disabling condition they have, but nobody sees it as a medical problem; they just see the kid as weak and lazy. Or even if the adults are sympathetic, the other kids aren’t; they just see a peer who performed worse than them, and they have learned by various subtle (and not-so-subtle) cultural pressures that anyone who performs worse at a culturally-important task is worthy of being bullied and shunned.

Even outside the directly competitive environment of sports, the very structure of a phys ed class, where a large group of students are all expected to perform the same athletic tasks and can directly compare their performance against each other, invites this kind of competition. Kids can see, right in their faces, who is doing better and who is doing worse. And our culture is astonishingly bad at teaching children (or anyone else, for that matter) how to be sympathetic to others who perform worse. Worse performance is worse character. Being bad at running, jumping and climbing is just being bad.

Part of the problem is that school administrators seem to see physical education as a training and selection regimen for their sports programs. (In fact, some of them seem to see their entire school as existing to serve their sports programs.) Here is a UK government report bemoaning the fact that “only a minority of schools play competitive sport to a high level”, apparently not realizing that this is necessarily true because high-level sports performance is a relative concept. Only one team can win the championship each year. Only 10% of students will ever be in the top 10% of athletes. No matter what. Anything else is literally mathematically impossible. We do not live in Lake Wobegon; not all the children can be above average.

There are good phys ed programs out there. They have highly-trained instructors and they focus on matching tasks to a student’s own skill level, as well as actually educating them—teaching them about anatomy and physiology rather than just making them run laps. Indeed, the one phys ed class I actually enjoyed was an anatomy and physiology class; we didn’t do any physical exercise in it at all. But well-taught phys ed classes are clearly the exception, not the norm.

Of course, it could be that some students actually benefit from phys ed, perhaps even enough to offset the harms to people like me. (Though then the question should be asked whether phys ed should be compulsory for all students—if an intervention helps some and hurts others, maybe only give it to the ones it helps?) But I know very few people who actually described their experiences of phys ed class as positive ones. While many students describe their experiences of math class in similarly-negative terms (which is also a problem with how math classes are taught), I definitely do know people who actually enjoyed and did well in math class. Still, my sample is surely biased—it’s composed of people similar to me, and I hated gym and loved math. So let’s look at the actual data.

Or rather, I’d like to, but there isn’t that much out there. The empirical literature on the effects of physical education is surprisingly limited.

A lot of analyses of physical education simply take as axiomatic that more phys ed means more exercise, and so they use the—overwhelming, unassailable—evidence that exercise is good to support an argument for more phys ed classes. But they never seem to stop and take a look at whether phys ed classes are actually making kids exercise more, particularly once those kids grow up and become adults.

In fact, the surprisingly weak correlations between higher physical activity and better mental health among adolescents (despite really strong correlations in adults) could be because exercise among adolescents is largely coerced via phys ed, and the misery of being coerced into physical humiliation counteracts any benefits that might have been obtained from increased exercise.

The best long-term longitudinal study I can find did show positive effects of phys ed on long-term health, though by a rather odd mechanism: Women exercised more as adults if they had phys ed in primary school, but men didn’t; they just smoked less. And this study was back in 1999, studying a cohort of adults who had phys ed quite a long time ago, when it was better funded.

The best experiment I can find actually testing whether phys ed programs work used a very carefully designed phys ed program with features that the vast majority of actual gym classes lack: carefully structured activities with specific developmental goals, and, perhaps most importantly, children taught to track and evaluate their own individual progress rather than compare themselves to others.

And even then, the effects are not all that large. The physical activity scores of the treatment group rose from 932 minutes per week to 1108 minutes per week for first-graders, and from 1212 to 1454 for second-graders. But the physical activity scores of the control group rose from 906 to 996 for first-graders, and 1105 to 1211 for second-graders. So of the 176 minutes per week gained by first-graders, 90 would have happened anyway. Likewise, of the 242 minutes per week gained by second-graders, 106 were not attributable to the treatment. Only about half of the gains were due to the intervention, and they amount to about a 10% increase in overall physical activity. It also seems a little odd to me that the control groups both started worse off than the experimental groups and both groups gained; it raises some doubts about the randomization.

The researchers also measured psychological effects, and these effects are even smaller and honestly a little weird. On a scale of “somatic anxiety” (basically, how bad do you feel about your body’s physical condition?), this well-designed phys ed program only reduced scores in the treatment group from 4.95 to 4.55 among first-graders, and from 4.50 to 4.10 among second-graders. Seeing as the scores for second-graders also fell in the control group from 4.63 to 4.45, only about half of the observed reduction—0.2 points on a 10-point scale—is really attributable to the treatment. And the really baffling part is that the measure of social anxiety actually fell more, which makes me wonder if they’re really measuring what they think they are.
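The difference-in-differences arithmetic in the last two paragraphs is easy to check with a short script. (The function and variable names here are mine, not the study’s; the only inputs are the numbers quoted above.)

```python
# Difference-in-differences: the treatment group's gain, minus the gain
# the control group shows would have happened anyway.

def attributable_gain(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Portion of the treatment group's change attributable to the
    intervention, under the usual parallel-trends assumption."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Physical activity, minutes per week
g1 = attributable_gain(932, 1108, 906, 996)     # first-graders: 86 min
g2 = attributable_gain(1212, 1454, 1105, 1211)  # second-graders: 136 min
print(g1 / (1108 - 932))   # ~0.49: about half the raw gain
print(g2 / (1454 - 1212))  # ~0.56: about half the raw gain
print(g1 / 932, g2 / 1212) # ~9% and ~11% of baseline activity

# Somatic anxiety, second-graders (10-point scale)
a2 = attributable_gain(4.50, 4.10, 4.63, 4.45)
print(round(a2, 2))        # -0.22: about 0.2 points attributable
```

So the intervention accounts for roughly half of each measured change, consistent with the figures in the text.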

Clearly, exercise is good. We should be trying to get people to exercise more. Actually, this is more important than almost anything else we could do for public health, with the possible exception of vaccinations. All of these campaigns trying to get kids to lose weight should be removed and replaced with programs to get them to exercise more, because losing weight doesn’t benefit health and exercising more does.

But I am not convinced that physical education as we know it actually makes people exercise more. In the short run, it forces kids to exercise, when there were surely ways to get kids to exercise that didn’t require such coercion; and in the long run, it gives them painful, even traumatic memories of exercise that make them not want to continue it once they get older. It’s too competitive, too one-size-fits-all. It doesn’t account for innate differences in athletic ability or match challenge levels to skill levels. It doesn’t help kids cope with having less ability, or even teach kids to be compassionate toward others with less ability than them.

And it makes kids miserable.

# Now is the time for CTCR

Nov 6 JDN 2459890

We live in a terrifying time. As Ukraine gains ground in its war with Russia, thanks in part to the deployment of high-tech weapons from NATO, Vladimir Putin has begun to make thinly-veiled threats of deploying his nuclear arsenal in response. No one can be sure how serious he is about this. Most analysts believe that he was referring to the possible use of small-scale tactical nuclear weapons, not a full-scale apocalyptic assault. Many think he’s just bluffing and wouldn’t resort to any nukes at all. Putin has bluffed in the past, and could be doing so again. Honestly, “this is not a bluff” is exactly the sort of thing you say when you’re bluffing—people who aren’t bluffing have better ways of showing it. (It’s like whenever Trump would say “Trust me”, and you’d know immediately that this was an especially good time not to. Of course, any time is a good time not to trust Trump.)

(By the way, financial news is a really weird thing: I actually found this article discussing how a nuclear strike would be disastrous for the economy. Dude, if there’s a nuclear strike, we’ve got much bigger things to worry about than the economy. It reminds me of this XKCD.)

But if Russia did launch nuclear weapons, and NATO responded with its own, it could trigger a nuclear war that would kill millions in a matter of hours. So we need to be prepared, and think very carefully about the best way to respond.

The current debate seems to be over whether to use economic sanctions, conventional military retaliation, or our own nuclear weapons. Well, we already have economic sanctions, and they aren’t making Russia back down. (Though they probably are hurting its war effort, so I’m all for keeping them in place.) And if we were to use our own nuclear weapons, that would only further undermine the global taboo against nuclear weapons and could quite possibly trigger that catastrophic nuclear war. Right now, NATO seems to be going for a bluff of our own: We’ll threaten an overwhelming nuclear response, but then we obviously won’t actually carry it out because that would be murder-suicide on a global scale.

That leaves conventional military retaliation. What sort of retaliation? Several years ago I came up with a very specific method of conventional retaliation I call credible targeted conventional response (CTCR, which you can pronounce “cut-core”). I believe that now would be an excellent time to carry it out.

The basic principle of CTCR is really quite simple: Don’t try to threaten entire nations. A nation is an abstract entity. Threaten people. Decisions are made by people. The response to Vladimir Putin launching nuclear weapons shouldn’t be to kill millions of innocent people in Russia who probably mean even less to Putin than they do to us. It should be to kill Vladimir Putin.

How exactly to carry this out is a matter for military strategists to decide. There are a variety of weapons at our disposal, ranging from the prosaic (covert agents) to the exotic (precision strikes from high-altitude stealth drones). Indeed, I think we should leave it purposefully vague, so that Putin can’t try to defend himself against some particular mode of attack. The whole gamut of conventional military responses should be considered on the table, from a single missile strike to a full-scale invasion.

But the basic goal is quite simple: Launching a nuclear weapon is one of the worst possible war crimes, and it must be met with an absolute commitment to bring the perpetrator to justice. We should be willing to accept some collateral damage, even a lot of collateral damage; carpet-bombing a city shouldn’t be considered out of the question. (If that sounds extreme, consider that we’ve done it before for much weaker reasons.) The only thing that we should absolutely refuse to do is deploy nuclear weapons ourselves.

The great advantage of this strategy—even aside from being obviously more humane than nuclear retaliation—is that it is more credible. It sounds more like something we’d actually be willing to do. And in fact we likely could even get help from insiders in Russia, because there are surely many people in the Russian government who aren’t so loyal to Putin that they’d want him to get away with mass murder. It might not just be an assassination; it might end up turning into a coup. (Also something we’ve done for far weaker reasons.)

This is how we preserve the taboo on nuclear weapons: We refuse to use them, but otherwise stop at nothing to kill anyone who does use them.

I therefore call upon the world to make this threat:

Launch a nuclear weapon, Vladimir Putin, and we will kill you. Not your armies, not your generals—you. It could be a Tomahawk missile at the Kremlin. It could be a car bomb in your limousine, or a Stinger missile at Aircraft One. It could be a sniper at one of your speeches. Or perhaps we’ll poison your drink with polonium, like you do to your enemies. You won’t know when or where. You will live the rest of your short and miserable life in terror. There will be nowhere for you to hide. We will stop at nothing. We will deploy every available resource around the world, and it will be our top priority. And you will die.

That’s how you threaten a psychopath. And it’s what we must do in order to keep the world safe from nuclear war.

Oct 23 JDN 2459876

I’ve noticed an odd tendency among politically active people, particularly social media slacktivists (a term I do not use pejoratively: slacktivism is highly cost-effective). They adopt new ideas very rapidly, trying to stay on the cutting edge of moral and political discourse—and then they denigrate and disparage anyone who fails to do the same as an irredeemable monster.

This can take many forms, such as “if you don’t buy into my specific take on Critical Race Theory, you are a racist”, “if you have any uncertainty about the widespread use of puberty blockers you are a transphobic bigot”, “if you give any credence to the medical consensus on risks of obesity you are fatphobic”, “if you think disabilities should be cured you’re an ableist”, and “if you don’t support legalizing abortion in all circumstances you are a misogynist”.

My intention here is not to evaluate any particular moral belief, though I’ll say the following: I am skeptical of Critical Race Theory, especially the 1619 project, which seems to include substantial distortions of history. I am cautiously supportive of puberty blockers, because the medical data on their risks are ambiguous—while the sociological data on how much happier trans kids are when accepted are totally unambiguous. I am well aware of the medical data saying that the risks of obesity are overblown (but also not negligible, particularly for those who are very obese). Speaking as someone with a disability that causes me frequent, agonizing pain, yes, I want disabilities to be cured, thank you very much; accommodations are nice in the meantime, but the best long-term solution is to not need accommodations. (I’ll admit to some grey areas regarding certain neurodivergences such as autism and ADHD, and I would never want to force cures on people who don’t want them; but paralysis, deafness, blindness, diabetes, depression, and migraine are all absolutely worth finding cures for—the QALY at stake here are massive—and it’s silly to say otherwise.) I think abortion should generally be legal and readily available in the first trimester (which is when most abortions happen anyway), but much more strictly regulated thereafter—but denying it to children and rape victims is a human rights violation.

What I really want to talk about today is not the details of the moral belief, but the attitude toward those who don’t share it. There are genuine racists, transphobes, fatphobes, ableists, and misogynists in the world. There are also structural institutions that can lead to discrimination despite most of the people involved having no particular intention to discriminate. It’s worthwhile to talk about these things, and to try to find ways to fix them. But does calling anyone who disagrees with you a monster accomplish that goal?

This seems particularly bad precisely when your own beliefs are so cutting-edge. If you have a really basic, well-established sort of progressive belief like “hiring based on race should be illegal”, “women should be allowed to work outside the home” or “sodomy should be legal”, then people who disagree with you pretty much are bigots. But when you’re talking about new, controversial ideas, there is bound to be some lag; people who adopted the last generation’s—or even the last year’s—progressive beliefs may not yet be ready to accept the new beliefs, and that doesn’t make them bigots.

Consider this: Were you born believing in your current moral and political beliefs?

I contend that you were not. You may have been born intelligent, open-minded, and empathetic. You may have been born into a progressive, politically-savvy family. But the fact remains that any particular belief you hold about race, or gender, or ethics was something you had to learn. And if you learned it, that means that at some point you didn’t already know it. How would you have felt back then, if, instead of calmly explaining it to you, people called you names for not believing in it?

Now, perhaps it is true that as soon as you heard your current ideas, you immediately adopted them. But that may not be the case—it may have taken you some time to learn or change your mind—and even if it was, it’s still not fair to denigrate anyone who takes a bit longer to come around. There are many reasons why someone might not be willing to change their beliefs immediately, and most of them are not indicative of bigotry or deep moral failings.

It may be helpful to think about this in terms of updating your moral software. You were born with a very minimal moral operating system (emotions such as love and guilt, the capacity for empathy), and over time you have gradually installed more and more sophisticated software on top of that OS. If someone literally wasn’t born with the right OS—we call these people psychopaths—then, yes, you have every right to hate, fear, and denigrate them. But most of the people we’re talking about do have that underlying operating system, they just haven’t updated all their software to the same version as yours. It’s both unfair and counterproductive to treat them as irredeemably defective simply because they haven’t updated to the newest version yet. They have the hardware, they have the operating system; maybe their download is just a little slower than yours.

In fact, if you are very fast to adopt new, trendy moral beliefs, you may in fact be adopting them too quickly—they haven’t been properly vetted by human experience just yet. You can think of this as like a beta version: The newest update has some great new features, but it’s also buggy and unstable. It may need to be fixed before it is really ready for widespread release. If that’s the case, then people aren’t even wrong not to adopt them yet! It isn’t necessarily bad that you have adopted the new beliefs; we need beta testers. But you should be aware of your status as a beta tester and be prepared both to revise your own beliefs if needed, and also to cut other people slack if they disagree with you.

I understand that it can be immensely frustrating to be thoroughly convinced that something is true and important and yet see so many people disagreeing with it. (I am an atheist activist after all, so I absolutely know what that feels like.) I understand that it can be immensely painful to watch innocent people suffer because they have to live in a world where other people have harmful beliefs. But you aren’t changing anyone’s mind or saving anyone from harm by calling people names. Patience, tact, and persuasion will win the long game, and the long game is really all we have.

And if it makes you feel any better, the long game may not be as long as it seems. The arc of history may have tighter curvature than we imagine. We certainly managed a complete flip of the First World consensus on gay marriage in just a single generation. We may be able to achieve similarly fast social changes in other areas too. But we haven’t accomplished the progress we have so far by being uncharitable or aggressive toward those who disagree.

I am emphatically not saying you should stop arguing for your beliefs. We need you to argue for your beliefs. We need you to argue forcefully and passionately. But when doing so, try not to attack the people who don’t yet agree with you—for they are precisely the people we need to listen to you.