The mythology mindset

Feb 5 JDN 2459981

I recently finished reading Steven Pinker’s latest book Rationality. It’s refreshing, well-written, enjoyable, and basically correct with some small but notable errors that seem sloppy—but then you could have guessed all that from the fact that it was written by Steven Pinker.

What really makes the book interesting is an insight Pinker presents near the end, regarding the difference between the “reality mindset” and the “mythology mindset”.

It’s a pretty simple notion, but a surprisingly powerful one.

In the reality mindset, a belief is a model of how the world actually functions. It must be linked to the available evidence and integrated into a coherent framework of other beliefs. You can logically infer from how some parts work to how other parts must work. You can predict the outcomes of various actions. You live your daily life in the reality mindset; you couldn’t function otherwise.

In the mythology mindset, a belief is a narrative that fulfills some moral, emotional, or social function. It’s almost certainly untrue or even incoherent, but that doesn’t matter. The important thing is that it sends the right messages. It has the right moral overtones. It shows you’re a member of the right tribe.

The idea is similar to Dennett’s “belief in belief”, which I’ve written about before; but I think this characterization may actually be a better one, not least because people would be more willing to use it as a self-description. If you tell someone “You don’t really believe in God, you believe in believing in God”, they will object vociferously (which is, admittedly, what the theory would predict). But if you tell them, “Your belief in God is a form of the mythology mindset”, I think they are at least less likely to immediately reject your claim out of hand. “You believe in God a different way than you believe in cyanide” isn’t as obviously threatening to their identity.

A similar notion came up in a Psychology of Religion course I took, in which the professor discussed “anomalous beliefs” linked to various world religions. He picked on a bunch of obscure religions, often held by various small tribes. He asked for more examples from the class. Knowing he was nominally Catholic and not wanting to let mainstream religion off the hook, I presented my example: “This bread and wine are the body and blood of Christ.” To his credit, he immediately acknowledged it as a very good example.

It’s also not quite the same thing as saying that religion is a “metaphor”; that’s not a good answer for a lot of reasons, but perhaps chief among them is that people don’t say they believe metaphors. If I say something metaphorical and then you ask me, “Hang on; is that really true?” I will immediately acknowledge that it is not, in fact, literally true. Love is a rose with all its sweetness and all its thorns—but no, love isn’t really a rose. And when it comes to religious belief, saying that you think it’s a metaphor is basically a roundabout way of saying you’re an atheist.

From all these different directions, we seem to be converging on a single deeper insight: when people say they believe something, quite often, they clearly mean something very different by “believe” than what I would ordinarily mean.

I’m tempted even to say that they don’t really believe it—but in common usage, the word “belief” is used at least as often to refer to the mythology mindset as the reality mindset. (In fact, it sounds less weird to say “I believe in transubstantiation” than to say “I believe in gravity”.) So if they don’t really believe it, then they at least mythologically believe it.

Both mindsets seem to come very naturally to human beings, in particular contexts. And not just modern people, either. Humans have always been like this.

Ask that psychology professor about Jesus, and he’ll tell you a tall tale of life, death, and resurrection by a demigod. But ask him about the Stroop effect, and he’ll provide a detailed explanation of rigorous experimental protocol. He believes something about God; but he knows something about psychology.

Ask a hunter-gatherer how the world began, and he’ll surely spin you a similarly tall tale about some combination of gods and spirits and whatever else, and it will all be peculiarly particular to his own tribe and no other. But ask him how to gut a fish, and he’ll explain every detail with meticulous accuracy, with almost the same rigor as that scientific experiment. He believes something about the sky-god; but he knows something about fish.

To be a rationalist, then, is to aspire to live your whole life in the reality mindset. To seek to know rather than believe.

This isn’t about certainty. A rationalist can be uncertain about many things—in fact, it’s rationalists of all people who are most willing to admit and quantify their uncertainty.

This is about whether you allow your beliefs to float free as bare, almost meaningless assertions that you profess to show you are a member of the tribe, or you make them pay rent, directly linked to other beliefs and your own experience.

As long as I can remember, I have always aspired to do this. But not everyone does. In fact, I dare say most people don’t. And that raises a very important question: Should they? Is it better to live the rationalist way?

I believe that it is. I suppose I would, temperamentally. But say what you will about the Enlightenment and the scientific revolution, they have clearly revolutionized human civilization and made life much better today than it was for most of human existence. We are peaceful, safe, and well-fed in a way that our not-so-distant ancestors could only dream of, and it’s largely thanks to systems built under the principles of reason and rationality—that is, the reality mindset.

We would never have industrialized agriculture if we still thought in terms of plant spirits and sky gods. We would never have invented vaccines and antibiotics if we still believed disease was caused by curses and witchcraft. We would never have built power grids and the Internet if we still saw energy as a mysterious force permeating the world and not as a measurable, manipulable quantity.

This doesn’t mean that ancient people who saw the world in a mythological way were stupid. In fact, it doesn’t even mean that people today who still think this way are stupid. This is not about some innate, immutable mental capacity. It’s about a technology—or perhaps the technology, the meta-technology that makes all other technology possible. It’s about learning to think the same way about the mysterious and the familiar, using the same kind of reasoning about energy and death and sunlight as we already did about rocks and trees and fish. When encountering something new and mysterious, someone in the mythology mindset quickly concocts a fanciful tale about magical beings that inevitably serves to reinforce their existing beliefs and attitudes, without the slightest shred of evidence for any of it. In their place, someone in the reality mindset looks closer and tries to figure it out.

Still, this gives me some compassion for people with weird, crazy ideas. I can better make sense of how someone living in the modern world could believe that the Earth is 6,000 years old or that the world is ruled by lizard-people. Because they probably don’t really believe it, they just mythologically believe it—and they don’t understand the difference.

Home price targeting

Jan 29 JDN 2459973

One of the largest divides in opinion between economists and the general population concerns the question of rent control. While the general public mostly supports rent control (and often votes for it in referenda), economists almost universally oppose it. It’s hard to get a consensus among economists on almost anything, and yet here we have one; but people don’t seem to care.

Why? I think it’s because high rents are a genuine and serious problem, which economists have invested remarkably little effort in trying to solve. Housing prices are one of the chief drivers of long-term inflation, and with most people spending over a third of their income on housing, even relatively small increases in housing prices can cause a lot of suffering.

One thing we do know is that rent control does not work as a long-term solution. Maybe in response to some short-term shock it would make sense. Maybe you do it for a while as you wait for better long-term solutions to take effect. But simply putting an arbitrary cap on prices will create shortages in the long run—and it is not a coincidence that cities with strict rent control have the worst housing shortages and the greatest rates of homelessness. Rent control doesn’t even do a good job of helping the people who need it most.

Price ceilings in general are just… not a good idea. If people are selling something at a price that you think is too high and you just insist that they aren’t allowed to, they don’t generally sell at a lower price—they just don’t sell at all. There are a few exceptions; in a very monopolistic market, a well-targeted price ceiling might actually work. And short-run housing supply is inelastic enough that rent control isn’t the worst kind of price ceiling. But as a general strategy, price ceilings just aren’t an effective way of making things cheaper.

This is why we so rarely use them as a policy intervention. When the Federal Reserve wants to achieve a certain interest rate on bonds, do they simply demand that people buy the bonds at that price? No. They adjust the supply of bonds in the market until the market price goes to what they want it to be.

Prices aren’t set in a vacuum by the fiat of evil corporations. They are an equilibrium outcome of a market system. There are things you can do to intervene and shift that equilibrium, but if you just outlaw certain prices, it will result in a new equilibrium—it won’t simply be the same amount sold at the new price you wanted.

Maybe some graphs would help explain this. In each graph, the red line is the demand and the blue line is the supply.

Here is what the market looks like before intervention: The price is $6. We’ll say that’s too high; people can’t afford it.

[no_intervention.png]

Now suppose we impose a price ceiling at $4 (the green line). You aren’t allowed to charge more than $4. What will happen? Companies will charge $4. But they will also produce and sell a smaller quantity than before.

Far better would be to increase the supply of the good, shifting to a new supply curve (the purple line). Then you would reduce the price and increase the amount of the good available.

[supply_intervention.png]
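To make the two interventions concrete, here is a minimal sketch with made-up linear supply and demand curves. The specific slopes and intercepts are assumptions chosen only so that the initial equilibrium price comes out at $6, matching the graphs:

```python
# Illustrative linear curves (the numbers are assumptions, not data).

def demand(p):
    """Quantity buyers want at price p."""
    return 12 - p

def supply(p, shift=0):
    """Quantity sellers offer at price p; `shift` models new construction."""
    return p + shift

# 1. No intervention: equilibrium where supply meets demand.
assert demand(6) == supply(6)        # Q = 6 at P = $6

# 2. Price ceiling at $4: sellers only supply what's profitable at $4,
#    so LESS is sold, and unmet demand (a shortage) appears.
q_sold = supply(4)                    # 4 units sold
shortage = demand(4) - q_sold         # 8 demanded - 4 sold = 4 unmet

# 3. Supply shift (build more housing): new equilibrium at a lower
#    price with MORE sold, not less.
assert demand(4) == supply(4, shift=4)   # Q = 8 at P = $4

print(q_sold, shortage)
```

The point of the sketch is the contrast between steps 2 and 3: both get the price down to $4, but only the supply shift does it while increasing the quantity of housing people actually get.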

This is precisely what we do with government bonds when we want to raise interest rates. (A greater supply of bonds makes their prices lower, which makes their yields higher.) And when we want to lower interest rates, we do the opposite.

Of course, with bonds, it’s easy to control the supply; it’s all just numbers in a network. Increasing the supply of housing is a much greater undertaking; you actually need to build new housing. But ultimately, the only way to ensure that housing is available and affordable for everyone is in fact to build more housing.

There are various ways we might accomplish that; one of the simplest would be to simply relax zoning restrictions that make it difficult to build high-density housing in cities. Those are bad laws anyway; they only benefit a small number of people a little bit while harming a large number of people a lot. (The problem is that the people they benefit are the local homeowners who show up to city council meetings.)

But we could do much more. I propose that we really use interest-rate targeting as our model and introduce home price targeting. I want the federal government to exercise eminent domain and order the construction of new high-density housing in any city that has rents above a certain threshold—if you like, the same threshold you were thinking of setting the rent control at.

Is this an extreme solution? Perhaps. But housing affordability is an extreme problem. And I keep hearing from the left wing that economists aren’t willing to consider “radical enough” solutions to housing (by which they always seem to mean the tried-and-failed strategy of rent control). So here’s a radical solution for you. If cities refuse to build enough housing for their people, make them do it. Buy up and bulldoze their “lovely” “historic” suburban neighborhoods that are ludicrous wastes of land (and also environmentally damaging), and replace them with high-rise apartments. (Get rid of the golf courses while you’re at it.)

This would be expensive, of course; we have to pay to build all those new apartments. But hardly so expensive as living in a society where people can’t afford to live where they want.

In fact, estimates suggest that we are losing over one trillion dollars per year in unrealized productivity because people can’t afford to live in the highest-rent cities. Average income per worker in the US has been reduced by nearly $7000 per year because of high housing prices. So that’s the budget you should be comparing against. Keeping things as they are is like taxing our whole population about 9%. (And it’s probably regressive, so more than that for poor people.)
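These figures can be sanity-checked with some back-of-the-envelope arithmetic. The workforce size and counterfactual income per worker below are illustrative assumptions, not cited figures:

```python
# Rough check of the productivity-loss claims above.
workers = 155_000_000        # assumed approx. size of the US labor force
loss_per_worker = 7_000      # reduced income per worker per year (from text)

total_loss = workers * loss_per_worker
# ~ $1.09 trillion per year, consistent with "over one trillion dollars"

counterfactual_income = 78_000   # assumed income per worker absent the loss
implicit_tax = loss_per_worker / counterfactual_income
# ~ 0.09, i.e. roughly a 9% tax on the whole working population
```

The exact inputs do not matter much; any plausible labor-force size and income level lands in the same ballpark.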

Would this destroy the “charm” of the city? I dunno, maybe a little. But if the only thing your city had going for it was some old houses that are clearly not an efficient use of space, that’s pretty sad. And it is quite possible to build a city at high density and have it still be beautiful and a major draw for tourists; Paris is a lot denser than far-less-picturesque Houston. (Though I’ll admit, Houston is far more affordable than Paris. It’s not just about density.) And is the “charm” of your city really worth making it so unaffordable that people can’t move there without risking becoming homeless?

There are a lot of details to be worked out: How serious must things get before the federal government steps in? (Wherever we draw the line, San Francisco is surely well past it.) It takes a long time to build houses and let prices adjust, so how do we account for that time-lag? Where does the money come from, actually? Debt? Taxes? But these could all be resolved.

Of course, it’s a pipe dream; we’re never going to implement this policy, because homeowners dread the idea of their home values going down (even though it would actually make their property taxes cheaper!). I’d even be willing to consider some kind of program that would let people refinance underwater mortgages to write off the lost equity, if that’s what it takes to actually build enough housing.

Because there is really only one thing that’s ever going to solve the (global!) housing crisis:

Build more homes.

I’m old enough to be President now.

Jan 22 JDN 2459967

When this post goes live, I will have passed my 35th birthday. This is old enough to be President of the United States, at least by law. (In practice, no POTUS has been younger than 42.)

Not that I will ever be President. I have neither the wealth nor the charisma to run any kind of national political campaign. I might be able to get elected to some kind of local office at some point, like a school board or a city water authority. But I’ve been eligible to run for such offices for quite a while now, and haven’t done so; nor do I feel particularly inclined at the moment.

No, the reason this birthday feels so significant is the milestone it represents. By this age, most people have spouses, children, careers. I have a spouse. I don’t have kids. I sort of have a career.

I have a job, certainly. I work for relatively decent pay. Not excellent, not what I was hoping for with a PhD in economics, but enough to live on (anywhere but an overpriced coastal metropolis). But I can’t really call that job a career, because I find large portions of it unbearable and I have absolutely no job security. In fact, I have the exact opposite: My job came with an explicit termination date from the start. (Do the people who come up with these short-term postdoc positions understand how that feels? It doesn’t seem like they do.)

I missed the window to apply for academic jobs that start next year. If I were happy here, this would be fine; I still have another year left on my contract. But I’m not happy here, and that is a grievous understatement. Working here is clearly the most important situational factor contributing to my ongoing depression. So I really ought to be applying to every alternative opportunity I can find—but I can’t find the will to try it, or the self-confidence to believe that my attempts could succeed if I did.

Then again, I’m not sure I should be applying to academic positions at all. If I did apply to academic positions, they’d probably be teaching-focused ones, since that’s the one part of my job I’m actually any good at. I’ve more or less written off applying to major research institutions; I don’t think I would get hired anyway, and even if I did, the pressure to publish is so unbearable that I think I’d be just as miserable there as I am here.

On the other hand, I can’t be sure that I would be so miserable even at another research institution; maybe with better mentoring and better administration I could be happy and successful in academic research after all.

The truth is, I really don’t know how much of my misery is due to academia in general, versus the British academic system, versus Edinburgh as an institution, versus starting work during the pandemic, versus the experience of being untenured faculty, versus simply my own particular situation. I don’t know if working at another school would be dramatically better, a little better, or just the same. (If it were somehow worse—which frankly seems hard to arrange—I would literally just quit immediately.)

I guess if the University of Michigan offered me an assistant professor job right now, I would take it. But I’m confident enough that they wouldn’t offer it to me that I can’t see the point in applying. (Besides, I missed the application windows this year.) And I’m not even sure that I would be happy there, despite the fact that just a few years ago I would have called it a dream job.

That’s really what I feel most acutely about turning 35: The shattering of dreams.

I thought I had some idea of how my life would go. I thought I knew what I wanted. I thought I knew what would make me happy.

The weirdest part is that it isn’t even that different from how I’d imagined it. If you’d asked me 10 or even 20 years ago what my career would be like at 35, I probably would have correctly predicted that I would have a PhD and be working at a major research university. 10 years ago I would have correctly expected it to be a PhD in economics; 20, I probably would have guessed physics. In both cases I probably would have thought I’d be tenured by now, or at least on the tenure track. But a postdoc or adjunct position (this is sort of both?) wouldn’t have been utterly shocking, just vaguely disappointing.

The biggest error by my past self was thinking that I’d be happy and successful in this career, instead of barely, desperately hanging on. I thought I’d have published multiple successful papers by now, and be excited to work on a new one. I imagined I’d also have published a book or two. (The fact that I self-published a nonfiction book at 16 but haven’t published any nonfiction ever since would be particularly baffling to my 15-year-old self, and is particularly depressing to me now.) I imagined myself becoming gradually recognized as an authority in my field, not languishing in obscurity; I imagined myself feeling successful and satisfied, not hopeless and depressed.

It’s like the dark Mirror Universe version of my dream job. It’s so close to what I thought I wanted, but it’s also all wrong. I finally get to touch my dreams, and they shatter in my hands.

When you are young, birthdays are a sincere cause for celebration; you look forward to the new opportunities the future will bring you. I seem to be now at the age where it no longer feels that way.

There should be a glut of nurses.

Jan 15 JDN 2459960

It will not be news to most of you that there is a worldwide shortage of healthcare staff, especially nurses and emergency medical technicians (EMTs). I would like you to stop and think about the utterly terrible policy failure this represents. Maybe if enough people do, we can figure out a way to fix it.

It goes without saying—yet bears repeating—that people die when you don’t have enough nurses and EMTs. Indeed, surely a large proportion of the 2.6 million (!) deaths each year from medical errors are attributable to this. It is likely that at least one million lives per year could be saved by fixing this problem worldwide. In the US alone, over 250,000 deaths per year are caused by medical errors; so we’re looking at something like 100,000 lives we could save each year by removing staffing shortages.

Precisely because these jobs have such high stakes, the mere fact that we would ever see the word “shortage” beside “nurse” or “EMT” was already clear evidence of dramatic policy failure.

This is not like other jobs. A shortage of accountants or baristas or even teachers, while a bad thing, is something that market forces can be expected to correct in time, and it wouldn’t be unreasonable to simply let them do so—meaning, let wages rise on their own until the market is restored to equilibrium. A “shortage” of stockbrokers or corporate lawyers would in fact be a boon to our civilization. But a shortage of nurses or EMTs or firefighters (yes, there are those too!) is a disaster.

Partly this is due to the COVID pandemic, which has been longer and more severe than any but the most pessimistic analysts predicted. But there were shortages of nurses before COVID. There should not have been. There should have been a massive glut.

Even if there hadn’t been a shortage of healthcare staff before the pandemic, the fact that there wasn’t a glut was already a problem.

This is what a properly-functioning healthcare policy would look like: Most nurses are bored most of the time. They are widely regarded as overpaid. People go into nursing because it’s a comfortable, easy career with very high pay and usually not very much work. Hospitals spend most of their time with half their beds empty and half of their ambulances parked while the drivers and EMTs sit around drinking coffee and watching football games.

Why? Because healthcare, especially emergency care, involves risk, and the stakes couldn’t be higher. If the number of severely sick people doubles—as in, say, a pandemic—a hospital that usually runs at 98% capacity won’t be able to deal with them. But a hospital that usually runs at 50% capacity will.

COVID exposed to the world what a careful analysis would already have shown: There was not nearly enough redundancy in our healthcare system. We had been optimizing for a narrow-minded, short-sighted notion of “efficiency” over what we really needed, which was resiliency and robustness.

I’d like to compare this to two other types of jobs.

The first is stockbrokers. Set aside for a moment the fact that most of what they do is worthless if not actively detrimental to human society. Suppose that their most adamant boosters are correct and what they do is actually really important and beneficial.

Their experience is almost like what I just said nurses ought to be. They are widely regarded (correctly) as very overpaid. There is never any shortage of them; there are people lining up to be hired. People go into the work not because they care about it or even because they are particularly good at it, but because they know it’s an easy way to make a lot of money.

The one thing that seems to be different from my image may not be as different as it seems. Stockbrokers work long hours, but nobody can really explain why. Frankly most of what they do can be—and has been—successfully automated. Since there simply isn’t that much work for them to do, my guess is that most of the time they spend “working” 60-80 hour weeks is not actually working, but sitting around pretending to work. Since most financial forecasters are outperformed by a simple diversified portfolio, the most profitable action for most stock analysts to take most of the time would be nothing.

It may also be that stockbrokers work hard at sales—trying to convince people to buy and sell for bad reasons in order to earn sales commissions. This would at least explain why they work so many hours, though it would make it even harder to believe that what they do benefits society. So if we imagine our “ideal” stockbroker who makes the world a better place, I think they mostly just use a simple algorithm and maybe adjust it every month or two. They make better returns than their peers, but spend 38 hours a week goofing off.

There is a massive glut of stockbrokers. This is what it looks like when a civilization is really optimized to be good at something.

The second is soldiers. Say what you will about them, no one can dispute that their job has stakes of life and death. A lot of people seem to think that the world would be better off without them, but that’s at best only true if everyone got rid of them; if you don’t have soldiers but other countries do, you’re going to be in big trouble. (“We’ll beat our swords into liverwurst / Down by the East Riverside; / But no one wants to be the first!”) So unless and until we can solve that mother of all coordination problems, we need to have soldiers around.

What is life like for a soldier? Well, they don’t seem overpaid; if anything, underpaid. (Maybe some of the officers are overpaid, but clearly not most of the enlisted personnel. Part of the problem there is that “pay grade” is nearly synonymous with “rank”—it’s a primate hierarchy, not a rational wage structure. Then again, so are most industries; the military just makes it more explicit.) But there do seem to be enough of them. Military officials may lament of “shortages” of soldiers, but they never actually seem to want for troops to deploy when they really need them. And if a major war really did start that required all available manpower, the draft could be reinstated and then suddenly they’d have it—the authority to coerce compliance is precisely how you can avoid having a shortage while keeping your workers underpaid. (Russia’s soldier shortage is genuine—something about being utterly outclassed by your enemy’s technological superiority in an obviously pointless imperialistic war seems to hurt your recruiting numbers.)

What is life like for a typical soldier? The answer may surprise you. The overwhelming answer in surveys and interviews (which also fits with the experiences I’ve heard about from friends and family in the military) is that life as a soldier is boring: “All you do is wake up in the morning and push rubbish around camp. Bosnia was scary for about 3 months. After that it was boring. That is pretty much day to day life in the military. You are bored.”

This isn’t new, nor even an artifact of not being in any major wars: Union soldiers in the US Civil War had the same complaint. Even in World War I, a typical soldier spent only half the time on the front, and when on the front only saw combat 1/5 of the time. War is boring.

In other words, there is a massive glut of soldiers. Most of them don’t even know what to do with themselves most of the time.

This makes perfect sense. Why? Because an army needs to be resilient. And to be resilient, you must be redundant. If you only had exactly enough soldiers to deploy in a typical engagement, you’d never have enough for a really severe engagement. If on average you had enough, that means you’d spend half the time with too few. And the costs of having too few soldiers are utterly catastrophic.

This is probably an evolutionary outcome, in fact; civilizations may have tried to have “leaner” militaries that didn’t have so much redundancy, and those civilizations were conquered by other civilizations that were more profligate. (This is not to say that we couldn’t afford to cut military spending at all; it’s one thing to have the largest military in the world—I support that, actually—but quite another to have more than the next 10 combined.)

What’s the policy solution here? It’s actually pretty simple.

Pay nurses and EMTs more. A lot more. Whatever it takes to get to the point where we not only have enough, but have so many people lining up to join we don’t even know what to do with them all. If private healthcare firms won’t do it, force them to—or, all the more reason to nationalize healthcare. The stakes are far too high to leave things as they are.

Would this be expensive? Sure.

Removing the shortage of EMTs wouldn’t even be that expensive. There are only about 260,000 EMTs in the US, and they get paid the appallingly low median salary of $36,000. That means we’re currently spending only about $9 billion per year on EMTs. We could double their salaries and double their numbers for only an extra $27 billion—about 0.1% of US GDP.

Nurses would cost more. There are about 5 million nurses in the US, with an average salary of about $78,000, so we’re currently spending about $390 billion a year on nurses. We probably can’t afford to double both salary and staffing. But maybe we could increase both by 20%, costing about an extra $170 billion per year.

Altogether that would cost about $200 billion per year. To save one hundred thousand lives.

That’s $2 million per life saved, or about $40,000 per QALY. The usual estimate for the value of a statistical life is about $10 million, and the usual threshold for a cost-effective medical intervention is $50,000-$100,000 per QALY; so we’re well under both. This isn’t as efficient as buying malaria nets in Africa, but it’s more efficient than plenty of other things we’re spending on. And this isn’t even counting additional benefits of better care that go beyond lives saved.
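The cost arithmetic in the last few paragraphs can be laid out explicitly. The staffing and salary inputs are the figures quoted above; the 50 QALYs per life saved is an assumption implied by the $2 million per life and $40,000 per QALY figures:

```python
# EMTs: doubling both salary and headcount means 4x total spending.
emt_count, emt_salary = 260_000, 36_000
emt_now = emt_count * emt_salary            # ~ $9 billion/year
emt_extra = emt_now * (2 * 2 - 1)           # ~ $28 billion/year extra

# Nurses: raising both salary and headcount by 20% means 1.44x spending.
nurse_count, nurse_salary = 5_000_000, 78_000
nurse_now = nurse_count * nurse_salary      # $390 billion/year
nurse_extra = nurse_now * (1.2 * 1.2 - 1)   # ~ $170 billion/year extra

total_extra = emt_extra + nurse_extra       # ~ $200 billion/year

# Cost-effectiveness, assuming ~100,000 US lives saved per year
# and ~50 QALYs per life saved (an assumption, see lead-in).
lives_saved = 100_000
cost_per_life = total_extra / lives_saved   # ~ $2 million per life
cost_per_qaly = cost_per_life / 50          # ~ $40,000 per QALY
```

Both figures come out comfortably under the usual thresholds quoted above ($10 million per statistical life; $50,000-$100,000 per QALY).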

In fact if we nationalized US healthcare we could get more than these amounts in savings from not wasting our money on profits for insurance and drug companies—simply making the US healthcare system as cost-effective as Canada’s would save $6,000 per American per year, or a whopping $1.9 trillion. At that point we could double the number of nurses and their salaries and still be spending less.

No, it’s not because nurses and doctors are paid much less in Canada than the US. That’s true in some countries, but not Canada. The median salary for nurses in Canada is about $95,500 CAD, which is $71,000 US at current exchange rates. Doctors in Canada can make anywhere from $80,000 to $400,000 CAD, which is $60,000 to $300,000 US. Nor are healthcare outcomes in Canada worse than the US; if anything, they’re better, as Canadians live an average of four years longer than Americans. No, the radical difference in cost—a factor of 2 to 1—between Canada and the US comes from privatization. Privatization is supposed to make things more efficient and lower costs, but it has absolutely not done that in US healthcare.

And if our choice is between spending more money and letting hundreds of thousands or millions of people die every year, that’s no choice at all.

Good enough is perfect, perfect is bad

Jan 8 JDN 2459953

Not too long ago, I read the book How to Keep House While Drowning by KC Davis, which I highly recommend. It offers a great deal of useful and practical advice, especially for someone neurodivergent and depressed living through an interminable pandemic (which I am, but honestly, odds are, you may be too). And to say it is a quick and easy read is actually an unfair understatement; it is explicitly designed to be readable in short bursts by people with ADHD, and it has a level of accessibility that most other books don’t even aspire to and I honestly hadn’t realized was possible. (The extreme contrast between this and academic papers is particularly apparent to me.)

One piece of advice that really stuck with me was this: Good enough is perfect.

At first, it sounded like nonsense; no, perfect is perfect, good enough is just good enough. But in fact there is a deep sense in which it is absolutely true.

Indeed, let me make it a bit stronger: Good enough is perfect; perfect is bad.

I doubt Davis thought of it in these terms, but this is a concise, elegant statement of the principles of bounded rationality. Sometimes it can be optimal not to optimize.

Suppose that you are trying to optimize something, but you have limited computational resources with which to do so. This is hardly a supposition at all; it’s literally true of basically everyone at basically every moment of every day.

But let’s make it a bit more concrete, and say that you need to find the solution to the following math problem: “What is the product of 2419 and 1137?” (Pretend you don’t have a calculator, as that would trivialize the exercise. I thought about using a problem you couldn’t do with a standard calculator, but I realized that would also make it much weirder and more obscure for my readers.)

Now, suppose that there are some quick, simple ways to get reasonably close to the correct answer, and some slow, difficult ways to actually get the answer precisely.

In this particular problem, the former is to approximate: What’s 2500 times 1000? 2,500,000. So it’s probably about 2,500,000.

Or we could approximate a bit more closely: Say 2400 times 1100. That’s 24 times 11, times 10,000; and 24 times 11 is 2 times 12 times 11, which is 2 times (110 plus 22), which is 2 times 132, or 264. Times 10,000, that’s 2,640,000.

Or, we could actually go through all the steps to do the full multiplication (remember I’m assuming you have no calculator), multiply, carry the 1s, add all four sums, re-check everything and probably fix it because you messed up somewhere; and then eventually you will get: 2,750,403.

So, our really fast method was only off by about 10%. Our moderately-fast method was only off by 4%. And both of them were a lot faster than getting the exact answer by hand.
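As a quick check of those error figures, here is a minimal sketch; the numbers are the ones from the text, and the helper function name is my own:

```python
# The slow, by-hand result from the text: 2,750,403.
exact = 2419 * 1137

rough = 2500 * 1000    # one-step rounding: 2,500,000
closer = 2400 * 1100   # two-digit rounding: 2,640,000

def relative_error(approx, true):
    """Fraction by which the approximation misses the true value."""
    return abs(approx - true) / true

print(f"rough:  off by {relative_error(rough, exact):.1%}")   # about 9.1%
print(f"closer: off by {relative_error(closer, exact):.1%}")  # about 4.0%
```

The quick method’s 9.1% error rounds to the “about 10%” in the text, and the moderately-fast method really is within 4%.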

Which of these methods you’d actually want to use depends on the context and the tools at hand. If you had a calculator, sure, get the exact answer. Even if you didn’t, but you were balancing the budget for a corporation, I’m pretty sure they’d care about that extra $110,403. (Then again, they might not care about the $403 or at least the $3.) But just as an intellectual exercise, you really didn’t need to do anything; the optimal choice may have been to take my word for it. Or, if you were at all curious, you might be better off choosing the quick approximation rather than the precise answer. Since nothing of any real significance hinged on getting that answer, it may be simply a waste of your time to bother finding it.

This is of course a contrived example. But it’s not so far from many choices we make in real life.

Yes, if you are making a big choice—which job to take, what city to move to, whether to get married, which car or house to buy—you should get a precise answer. In fact, I make spreadsheets with formal utility calculations whenever I make a big choice, and I haven’t regretted it yet. (Did I really make a spreadsheet for getting married? You’re damn right I did; there were a lot of big financial decisions to make there—taxes, insurance, the wedding itself! I didn’t decide whom to marry that way, of course; but we always had the option of staying unmarried.)

But most of the choices we make from day to day are small choices: What should I have for lunch today? Should I vacuum the carpet now? What time should I go to bed? In the aggregate they may all add up to important things—but each one of them really won’t matter that much. If you were to construct a formal model to optimize your decision of everything to do each day, you’d spend your whole day doing nothing but constructing formal models. Perfect is bad.

In fact, even for big decisions, you can’t really get a perfect answer. There are just too many unknowns. Sometimes you can spend more effort gathering additional information—but that’s costly too, and sometimes the information you would most want simply isn’t available. (You can look up the weather in a city, visit it, ask people about it—but you can’t really know what it’s like to live there until you do.) Even those spreadsheet models I use to make big decisions contain error bars and robustness checks, and if, even after investing a lot of effort trying to get precise results, I still find two or more choices just can’t be clearly distinguished to within a good margin of error, I go with my gut. And that seems to have been the best choice for me to make. Good enough is perfect.

I think that being gifted as a child trained me to be dangerously perfectionist as an adult. (Many of you may find this familiar.) When it came to solving math problems, or answering quizzes, perfection really was an attainable goal a lot of the time.

As I got older and progressed further in my education, maybe getting every answer right was no longer feasible; but I still could get the best possible grade, and did, in most of my undergraduate classes and all of my graduate classes. To be clear, I’m not trying to brag here; if anything, I’m a little embarrassed. What it mainly shows is that I had learned the wrong priorities. In fact, one of the main reasons why I didn’t get a 4.0 average in undergrad is that I spent a lot more time back then writing novels and nonfiction books, which to this day I still consider my most important accomplishments and grieve that I’ve not (yet?) been able to get them commercially published. I did my best work when I wasn’t trying to be perfect. Good enough is perfect; perfect is bad.

Now here I am on the other side of the academic system, trying to carve out a career, and suddenly, there is no perfection. When my exams were graded by someone else, there was a way to get the most points. Now that I’m the one grading the exams, there is no “correct answer” anymore. There is no one scoring me to see if I did the grading the “right way”—and so, no way to be sure I did it right.

Actually, here at Edinburgh, there are other instructors who moderate grades and often require me to revise them, which feels a bit like “getting it wrong”; but it’s really more like we had different ideas of what the grade curve should look like (not to mention US versus UK grading norms). There is no longer an objectively correct answer the way there is for, say, the derivative of x^3, the capital of France, or the definition of comparative advantage. (Or, one question I got wrong on an undergrad exam because I had zoned out of that lecture to write a book on my laptop: Whether cocaine is a dopamine reuptake inhibitor. It is. And the fact that I still remember that because I got it wrong over a decade ago tells you a lot about me.)

And then when it comes to research, it’s even worse: What even constitutes “good” research, let alone “perfect” research? What would be most scientifically rigorous isn’t what journals would be most likely to publish—and without much bigger grants, I can afford neither. I find myself longing for the research paper that will be so spectacular that top journals have to publish it, removing all risk of rejection and failure—in other words, perfect.

Yet such a paper plainly does not exist. Even if I were to do something that would win me a Nobel or a Fields Medal (this is, shall we say, unlikely), it probably wouldn’t be recognized as such immediately—a typical Nobel isn’t awarded until 20 or 30 years after the work that spawned it, and while Fields Medals are faster, they’re by no means instant or guaranteed. In fact, a lot of ground-breaking, paradigm-shifting research was originally relegated to minor journals because the top journals considered it too radical to publish.

Or I could try to do something trendy—feed into DSGE or GTFO—and try to get published that way. But I know my heart wouldn’t be in it, and so I’d be miserable the whole time. In fact, because it is neither my passion nor my expertise, I probably wouldn’t even do as good a job as someone who really buys into the core assumptions. I already have trouble speaking frequentist sometimes: Are we allowed to say “almost significant” for p = 0.06? Maximizing the likelihood is still kosher, right? Just so long as I don’t impose a prior? But speaking DSGE fluently and sincerely? I’d have an easier time speaking in Latin.

What I know—on some level at least—I ought to be doing is finding the research that I think is most worthwhile, given the resources I have available, and then getting it published wherever I can. Or, in fact, I should probably constrain a little by what I know about journals: I should do the most worthwhile research that is feasible for me and has a serious chance of getting published in a peer-reviewed journal. It’s sad that those two things aren’t the same, but they clearly aren’t. This constraint binds, and its Lagrange multiplier is measured in humanity’s future.

But one thing is very clear: By trying to find the perfect paper, I have floundered and, for the last year and a half, not written any papers at all. The right choice would surely have been to write something.

Because good enough is perfect, and perfect is bad.

What is it with EA and AI?

Jan 1 JDN 2459946

Surprisingly, most Effective Altruism (EA) leaders don’t seem to think that poverty alleviation should be our top priority. Most of them seem especially concerned about long-term existential risk, such as artificial intelligence (AI) safety and biosecurity. I’m not going to say that these things aren’t important—they certainly are—but here are a few reasons I’m skeptical that they are really the most important, as so many EA leaders seem to think.

1. We don’t actually know how to make much progress at them, and there’s only so much we can learn by investing heavily in basic research on them. Whereas, with poverty, the easy, obvious answer turns out empirically to be extremely effective: Give them money.

2. While it’s easy to multiply out huge numbers of potential future people in your calculations of existential risk (and this is precisely what people do when arguing that AI safety should be a top priority), this clearly isn’t actually a good way to make real-world decisions. We simply don’t know enough about the distant future of humanity to be able to make any kind of good judgments about what will or won’t increase their odds of survival. You’re basically just making up numbers. You’re taking tiny probabilities of things you know nothing about and multiplying them by ludicrously huge payoffs; it’s basically the secular rationalist equivalent of Pascal’s Wager.

3. AI and biosecurity are high-tech, futuristic topics, which seem targeted to appeal to the sensibilities of a movement that is still very dominated by intelligent, nerdy, mildly autistic, rich young White men. (Note that I say this as someone who very much fits this stereotype. I’m queer, not extremely rich and not entirely White, but otherwise, yes.) Somehow I suspect that if we asked a lot of poor Black women how important it is to slightly improve our understanding of AI versus giving money to feed children in Africa, we might get a different answer.

4. Poverty eradication is often characterized as a “short term” project, contrasted with AI safety as a “long term” project. This is (ironically) very short-sighted. Eradication of poverty isn’t just about feeding children today. It’s about making a world where those children grow up to be leaders and entrepreneurs and researchers themselves. The positive externalities of economic development are staggering. It is really not much of an exaggeration to say that fascism is a consequence of poverty and unemployment.

5. Currently the main thing that most Effective Altruism organizations say they need most is “talent”; how many millions of person-hours of talent are we leaving on the table by letting children starve or die of malaria?

6. Above all, existential risk can’t really be what’s motivating people here. The obvious solutions to AI safety and biosecurity are not being pursued, because they don’t fit with the vision that intelligent, nerdy, young White men have of how things should be. Namely: Ban them. If you truly believe that the most important thing to do right now is reduce the existential risk of AI and biotechnology, you should support a worldwide ban on research in artificial intelligence and biotechnology. You should want people to take all necessary action to attack and destroy institutions—especially for-profit corporations—that engage in this kind of research, because you believe that they are threatening to destroy the entire world and this is the most important thing, more important than saving people from starvation and disease. I think this is really the knock-down argument; when people say they think that AI safety is the most important thing but they don’t want Google and Facebook to be immediately shut down, they are either confused or lying. Honestly I think maybe Google and Facebook should be immediately shut down for AI safety reasons (as well as privacy and antitrust reasons!), and I don’t think AI safety is yet the most important thing.

Why aren’t people doing that? Because they aren’t actually trying to reduce existential risk. They just think AI and biotechnology are really interesting, fascinating topics and they want to do research on them. And I agree with that, actually—but then they need to stop telling people that they’re fighting to save the world, because they obviously aren’t. If the danger were anything like what they say it is, we should be halting all research on these topics immediately, except perhaps for a very select few people who are entrusted with keeping these forbidden secrets and trying to find ways to protect us from them. This may sound radical and extreme, but it is not unprecedented: This is how we handle nuclear weapons, which are universally recognized as a global existential risk. If AI is really as dangerous as nukes, we should be regulating it like nukes. I think that in principle it could be that dangerous, and may be that dangerous someday—but it isn’t yet. And if we don’t want it to get that dangerous, we don’t need more AI researchers, we need more regulations that stop people from doing harmful AI research! If you are doing AI research and it isn’t directly involved specifically in AI safety, you aren’t saving the world—you’re one of the people dragging us closer to the cliff! Anything that could make AI smarter but doesn’t also make it safer is dangerous. And this is clearly true of the vast majority of AI research, and frankly to me seems to also be true of the vast majority of research at AI safety institutes like the Machine Intelligence Research Institute.

Seriously, look through MIRI’s research agenda: It’s mostly incredibly abstract and seems completely beside the point when it comes to preventing AI from taking control of weapons or governments. It’s all about formalizing Bayesian induction. Thanks to you, Skynet can have a formally computable approximation to logical induction! Truly we are saved. Only two of their papers, on “Corrigibility” and “AI Ethics”, actually struck me as at all relevant to making AI safer. The rest is largely abstract mathematics that is almost literally navel-gazing—it’s all about self-reference. Eliezer Yudkowsky finds self-reference fascinating and has somehow convinced an entire community that it’s the most important thing in the world. (I actually find some of it fascinating too, especially the paper on “Functional Decision Theory”, which I think gets at some deep insights into things like why we have emotions. But I don’t see how it’s going to save the world from AI.)

Don’t get me wrong: AI also has enormous potential benefits, and this is a reason we may not want to ban it. But if you really believe that there is a 10% chance that AI will wipe out humanity by 2100, then get out your pitchforks and your EMP generators, because it’s time for the Butlerian Jihad. A 10% chance of destroying all humanity is an utterly unacceptable risk for any conceivable benefit. Better that we consign ourselves to living as we did in the Neolithic than risk something like that. (And a globally-enforced ban on AI isn’t even that; it’s more like “We must live as we did in the 1950s.” How would we survive!?) If you don’t want AI banned, maybe ask yourself whether you really believe the risk is that high—or are human brains just really bad at dealing with small probabilities?

I think what’s really happening here is that we have a bunch of guys (and yes, the EA community, and especially its AI-focused wing, is overwhelmingly male) who are really good at math and want to save the world, and have thus convinced themselves that being really good at math is how you save the world. But it isn’t. The world is much messier than that. In fact, there may not be much that most of us can do to contribute to saving the world; our best options may in fact be to donate money, vote well, and advocate for good causes.

Let me speak Bayesian for a moment: The prior probability that you—yes, you, out of all the billions of people in the world—are uniquely positioned to save it by being so smart is extremely small. It’s far more likely that the world will be saved—or doomed—by people who have power. If you are not the head of state of a large country or the CEO of a major multinational corporation, I’m sorry; you probably just aren’t in a position to save the world from AI.

But you can give some money to GiveWell, so maybe do that instead?

Charity shouldn’t end at home

It so happens that this week’s post will go live on Christmas Day. I always try to do some kind of holiday-themed post around this time of year, because not only Christmas, but a dozen other holidays from various religions all fall around this time of year. The winter solstice seems to be a very popular time for holidays, and has been since antiquity: The Romans were celebrating Saturnalia 2000 years ago. Most of our ‘Christmas’ traditions are actually derived from Yuletide.

These holidays certainly mean many different things to different people, but charity and generosity are themes that are very common across a lot of them. Gift-giving has been part of the season since at least Saturnalia and remains as vital as ever today. Most of those gifts are given to our friends and loved ones, but a substantial fraction of people also give to strangers in the form of charitable donations: November and December have the highest rates of donation to charity in the US and the UK, with about 35-40% of people donating during this season. (Of course this is complicated by the fact that December 31 is often the day with the most donations, probably from people trying to finish out their tax year with a larger deduction.)

My goal today is to make you one of those donors. There is a common saying, often attributed to the Bible but not actually present in it: “Charity begins at home”.

Perhaps this is so. There’s certainly something questionable about the Effective Altruism strategy of “earning to give” if it involves abusing and exploiting the people around you in order to make more money that you then donate to worthy causes. Certainly we should be kind and compassionate to those around us, and it makes sense for us to prioritize those close to us over strangers we have never met. But while charity may begin at home, it must not end at home.

There are so many global problems that could benefit from additional donations. While global poverty has been rapidly declining in the early 21st century, this is largely because of the efforts of donors and nonprofit organizations. Official Development Assistance has been roughly constant since the 1970s at 0.3% of GNI among First World countries—well below international targets set decades ago. Total development aid is around $160 billion per year, while private donations from the United States alone are over $480 billion. Moreover, 9% of the world’s population still lives in extreme poverty, and this rate has actually slightly increased over the last few years due to COVID.

There are plenty of other worthy causes you could give to aside from poverty eradication, from issues that have been with us since the dawn of human civilization (Humane Society International for domestic animal welfare, the World Wildlife Fund for wildlife conservation) to exotic fat-tail sci-fi risks that are only emerging in our own lifetimes (the Machine Intelligence Research Institute for AI safety, the International Federation of Biosafety Associations for biosecurity, the Union of Concerned Scientists for climate change and nuclear safety). You could fight poverty directly through organizations like UNICEF or GiveDirectly, fight neglected diseases through the Schistosomiasis Control Initiative or the Against Malaria Foundation, or entrust an organization like GiveWell to optimize your donations for you, sending them where they think they are needed most. You could give to political causes supporting civil liberties (the American Civil Liberties Union) or protecting the rights of people of color (the National Association for the Advancement of Colored People) or LGBT people (the Human Rights Campaign).

I could spend a lot of time and effort trying to figure out the optimal way to divide up your donations among causes such as these—and then try to convince you that it’s really the right one. (And there is even a time and place for that, because seemingly-small differences can matter a lot here.) But instead I think I’m just going to ask you to pick something. Give something to an international charity with a good track record.

I think we worry far too much about what is the best way to give—especially people in the Effective Altruism community, of which I’m sort of a marginal member—when the biggest thing the world really needs right now is just more people giving more. It’s true, there are lots of worthless or even counter-productive charities out there: Please, please do not give to the Salvation Army. (And think twice before donating to your own church; if you want to support your own community, okay, go ahead. But if you want to make the world better, there are much better places to put your money.)

But above all, give something. Or if you already give, give more. Most people don’t give at all, and most people who give don’t give enough.

In defense of civility

Dec 18 JDN 2459932

Civility is in short supply these days. Perhaps it has always been in short supply; certainly much of the nostalgia for past halcyon days of civility is ill-founded. Wikipedia has an entire article on hundreds of recorded incidents of violence in legislative assemblies, in dozens of countries, dating all the way from the Roman Senate in 44 BC to Bosnia in 2019. But the Internet seems to bring about its own special kind of incivility, one which exposes nearly everyone to some of the worst vitriol the entire world has to offer. I think it’s worth talking about why this is bad, and perhaps what we might do about it.

For some, the benefits of civility seem so self-evident that they don’t even bear mentioning. For others, the idea of defending civility may come across as tone-deaf or even offensive. I would like to speak to both of those camps today: If you think the benefits of civility are obvious, I assure you, they aren’t to everyone. And if you think that civility is just a tool of the oppressive status quo, I hope I can make you think again.

A lot of the argument against civility seems to be founded in the notion that these issues are important, lives are at stake, and so we shouldn’t waste time and effort being careful how we speak to each other. How dare you concern yourself with the formalities of argumentation when people are dying?

But this is totally wrongheaded. It is precisely because these issues are important that civility is vital. It is precisely because lives are at stake that we must make the right decisions. And shouting and name-calling (let alone actual fistfights or drawn daggers—which have happened!) are not conducive to good decision-making.

If you shout someone down when choosing what restaurant to have dinner at, you have been very rude and people may end up unhappy with their dining experience—but very little of real value has been lost. But if you shout someone down when making national legislation, you may cause the wrong policy to be enacted, and this could lead to the suffering or death of thousands of people.

Think about how court proceedings work. Why are they so rigid and formal, with rules upon rules upon rules? Because the alternative was capricious violence. In the absence of the formal structure of a court system, so-called ‘justice’ was handed out arbitrarily, by whoever was in power, or by mobs of vigilantes. All those seemingly-overcomplicated rules were made in order to resolve various conflicts of interest and hopefully lead toward more fair, consistent results in the justice system. (And don’t get me wrong; they still could stand to be greatly improved!)

Legislatures have complex rules of civility for the same reason: Because the outcome is so important, we need to make sure that the decision process is as reliable as possible. And as flawed as existing legislatures still are, and as silly as it may seem to insist upon addressing ‘the Honorable Representative from the Great State of Vermont’, it’s clearly a better system than simply letting them duke it out with their fists.

A related argument I would like to address is that of ‘tone policing’. If someone objects, not to the content of what you are saying, but to the tone in which you have delivered it, are they arguing in bad faith?

Well, possibly. Certainly, arguments about tone can be used that way. In particular I remember that this was basically the only coherent objection anyone could come up with against the New Atheism movement: “Well, sure, obviously, God isn’t real and religion is ridiculous; but why do you have to be so mean about it!?”

But it’s also quite possible for tone to be itself a problem. If your tone is overly aggressive and you don’t give people a chance to even seriously consider your ideas before you accuse them of being immoral for not agreeing with you—which happens all the time—then your tone really is the problem.

So, how can we tell which is which? I think a good way to reply to what you think might be bad-faith tone policing is this: “What sort of tone do you think would be better?”

I think there are basically three possible responses:

1. They can’t offer one, because there is actually no tone in which they would accept the substance of your argument. In that case, the tone policing really is in bad faith; they don’t want you to be nicer, they want you to shut up. This was clearly the case for New Atheism: As Daniel Dennett aptly remarked, “There’s simply no polite way to tell someone they have dedicated their lives to an illusion.” But sometimes, such things need to be said all the same.

2. They offer an alternative argument you could make, but it isn’t actually expressing your core message. Either they have misunderstood your core message, or they actually disagree with the substance of your argument and should be addressing it on those terms.

3. They offer an alternative way of expressing your core message in a milder, friendlier tone. This means that they are arguing in good faith and actually trying to help you be more persuasive!

I don’t know how common each of these three possibilities is; it could well be that the first one is the most frequent occurrence. That doesn’t change the fact that I have definitely been at the other end of the third one, where I absolutely agree with your core message and want your activism to succeed, but I can see that you’re acting like a jerk and nobody will want to listen to you.

Here, let me give some examples of the type of argument I’m talking about:

1. “Defund the police”: This slogan polls really badly. Probably because most people have genuine concerns about crime and want the police to protect them. Also, as more and more social services (like for mental health and homelessness) get co-opted into policing, this slogan makes it sound like you’re just going to abandon those people. But do we need serious, radical police reform? Absolutely. So how about “Reform the police”, “Put police money back into the community”, or even “Replace the police”?

2. “All Cops Are Bastards”: Speaking of police reform, did I mention we need it? A lot of it? Okay. Now, let me ask you: All cops? Every single one of them? There is not a single one out of the literally millions of police officers on this planet who is a good person? Not one who is fighting to take down police corruption from within? Not a single individual who is trying to fix the system while preserving public safety? Now, clearly, it’s worth pointing out, some cops are bastards—but hey, that even makes a better acronym: SCAB. In fact, it really is largely a few bad apples—the key point here is that you need to finish the aphorism: “A few bad apples spoil the whole barrel.” The number of police who are brutal and corrupt is relatively small, but as long as the other police continue to protect them, the system will be broken. Either you get those bad apples out pronto, or your whole barrel is bad. But demonizing the very people who are in the best position to implement those reforms—good police officers—is not helping.

3. “Be gay, do crime”: I know it’s tongue-in-cheek and ironic. I get that. It’s still a really dumb message. I am absolutely on board with LGBT rights. Even aside from being queer myself, I probably have more queer and trans friends than straight friends at this point. But why in the world would you want to associate us with petty crime? Why are you lumping us in with people who harm others at best out of desperation and at worst out of sheer greed? Even if you are literally an anarchist—which I absolutely am not—you’re really not selling anarchism well if the vision you present of it is a world of unfettered crime! There are dozens of better pro-LGBT slogans out there; pick one. Frankly even “do gay, be crime” is better, because it’s more clearly ironic. (Also, you can take it to mean something like this: Don’t just be gay, do gay—live your fullest gay life. And if you can be crime, that means that the system is fundamentally unjust: You can be criminalized just for who you are. And this is precisely what life is like for millions of LGBT people on this planet.)

A lot of people seem to think that if you aren’t immediately convinced by the most vitriolic, aggressive form of an argument, then you were never going to be convinced anyway and we should just write you off as a potential ally. This isn’t just obviously false; it’s incredibly dangerous.

The whole point of activism is that not everyone already agrees with you. You are trying to change minds. If it were really true that all reasonable, ethical people already agreed with your view, you wouldn’t need to be an activist. The whole point of making political arguments is that people can be reasonable and ethical and still be mistaken about things, and when we work hard to persuade them, we can eventually win them over. In fact, on some things we’ve actually done spectacularly well.

And what about the people who aren’t reasonable and ethical? They surely exist. But fortunately, they aren’t the majority. They don’t rule the whole world. If they did, we’d basically be screwed: If violence is really the only solution, then it’s basically a coin flip whether things get better or worse over time. But in fact, unreasonable people are outnumbered by reasonable people. Most of the things that are wrong with the world are mistakes, errors that can be fixed—not conflicts between irreconcilable factions. Our goal should be to fix those mistakes wherever we can, and that means being patient, compassionate educators—not angry, argumentative bullies.

Inequality-adjusted GDP and median income

Dec 11 JDN 2459925

There are many problems with GDP as a measure of a nation’s prosperity. For one, GDP ignores natural resources and ecological degradation; so a tree is only counted in GDP once it is cut down. For another, it doesn’t value unpaid work, so caring for a child only increases GDP if you are a paid nanny rather than the child’s parents.

But one of the most obvious problems is the use of an average to evaluate overall prosperity, without considering the level of inequality.

Consider two countries. In Alphania, everyone has an income of about $50,000. In Betavia, 99% of people have an income of $1,000 and 1% have an income of $10 million. What is the per-capita GDP of each country? Alphania’s is $50,000 of course; but Betavia’s is $100,990. Does it really make sense to say that Betavia is a more prosperous country? Maybe it has more wealth overall, but its huge inequality means that it is really not at a high level of development. It honestly sounds like an awful place to live.
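
The arithmetic is easy to check in a few lines (the country names and income figures are the hypothetical ones above):

```python
# Per-capita GDP is just the population-weighted mean income.
def per_capita_gdp(shares_and_incomes):
    """Average income, weighting each group by its population share."""
    return sum(share * income for share, income in shares_and_incomes)

alphania = [(1.00, 50_000)]                    # everyone earns $50,000
betavia = [(0.99, 1_000), (0.01, 10_000_000)]  # 99% poor, 1% very rich

print(per_capita_gdp(alphania))  # 50000.0
print(per_capita_gdp(betavia))   # 100990.0
```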

A much more sensible measure would be something like median income: How much does a typical person have? In Alphania this is still $50,000; but in Betavia it is only $1,000.

Yet even this leaves out most of the actual distribution: by definition, the median is determined only by the 50th percentile. We could vary all the other incomes a great deal without changing the median.


A better measure would be some sort of inequality-adjusted per-capita GDP, which rescales GDP based on the level of inequality in a country. But we would need a good way of making that adjustment.

I contend that the most sensible way would be to adopt some kind of model of marginal utility of income, and then figure out what income would correspond to the overall average level of utility.

In other words, average over the level of happiness that people in a country get from their income, and then figure out what level of income would correspond to that level of happiness. If we magically gave everyone the same amount of money, how much would they need to get in order for the average happiness in the country to remain the same?

This is clearly going to be less than the average level of income, because marginal utility of income is decreasing; a dollar is not worth as much in real terms to a rich person as it is to a poor person. So if we could somehow redistribute all income evenly while keeping the average the same, that would actually increase overall happiness (though, for many reasons, we can’t simply do that).

For example, suppose that utility of income is logarithmic: U = ln(I).

This means that the marginal utility of an additional dollar is inversely proportional to how many dollars you already have: U'(I) = 1/I.

It also means that a 1% gain or loss in your income feels about the same regardless of how much income you have: ln((1+r)I) = ln(I) + ln(1+r). This seems like a quite reasonable, maybe even a bit conservative, assumption; I suspect that losing 1% of your income actually hurts more when you are poor than when you are rich.

Then the inequality-adjusted GDP Y is the value such that ln(Y) equals the overall average level of utility: E[U] = ln(Y), so Y = exp(E[U]).
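
As a sketch, here is that construction applied to the hypothetical Betavia from earlier, assuming the log-utility model just described:

```python
import math

# Equivalent income Y solves ln(Y) = E[ln(I)]: the income that, given to
# everyone, would leave average utility unchanged. This is the geometric
# mean of the income distribution.
def equivalent_income(shares_and_incomes):
    avg_utility = sum(share * math.log(income)
                      for share, income in shares_and_incomes)
    return math.exp(avg_utility)

betavia = [(0.99, 1_000), (0.01, 10_000_000)]
print(round(equivalent_income(betavia)))  # ~1096: far below the $100,990 mean
```

Note how sharply the adjustment bites: Betavia's equivalent income is barely above the $1,000 that 99% of its people earn.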

This sounds like a very difficult thing to calculate. But fortunately, the distribution of actual income seems to quite closely follow a log-normal distribution. This means that when we take the logarithm of income to get utility, we just get back a very nice, convenient normal distribution!

In fact, it turns out that for a log-normal distribution, the following holds: exp(E[ln(Y)]) = median(Y).

The income which corresponds to the average utility turns out to simply be the median income! We went looking for a better measure than median income, and ended up finding out that median income was the right measure all along.
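
The identity is easy to verify numerically. Here is a quick simulation (the mu and sigma values are arbitrary assumptions for illustration, not estimates for any real country):

```python
import math
import random
import statistics

# For a log-normal distribution, exp(E[ln Y]) — the geometric mean —
# coincides with the median. Check it by simulation.
random.seed(0)
mu, sigma = 10.0, 0.8   # median income is exp(mu) ≈ $22,026
incomes = [random.lognormvariate(mu, sigma) for _ in range(200_000)]

geo_mean = math.exp(sum(math.log(y) for y in incomes) / len(incomes))
median = statistics.median(incomes)

print(round(geo_mean), round(median))  # both close to 22026
```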

This wouldn’t hold for most other distributions; and since real-world economies don’t perfectly follow a log-normal distribution, a more precise estimate would need to be adjusted accordingly. But the approximation is quite good for most countries we have good data on, so even for the ones we don’t, median income is likely a very good estimate.

The ranking of countries by median income isn’t radically different from the ranking by per-capita GDP; rich countries are still rich and poor countries are still poor. But it is different enough to matter.

Luxembourg is in 1st place on both lists. Scandinavian countries and the US are in the top 10 in both cases. So it’s fair to say that #ScandinaviaIsBetter for real, and the US really is so rich that our higher inequality doesn’t make our median income lower than the rest of the First World.

But some countries are quite different. Ireland looks quite good in per-capita GDP, but quite bad in median income. This is because a lot of the GDP in Ireland is actually profits by corporations that are only nominally headquartered in Ireland and don’t actually employ very many people there.

The comparison between the US, the UK, and Canada seems particularly instructive. If you look at per-capita GDP PPP, the US looks much richer at $75,000 compared to Canada’s $57,800 (a difference of 29% or 26 log points). But if you look at median personal income, they are nearly equal: $19,300 in the US and $18,600 in Canada (3.7% or 3.7 log points).

On the other hand, in per-capita GDP PPP, the UK looks close to Canada at $55,800 (3.6% or 3.6 lp); but in median income it is dramatically worse, at only $14,800 (26% or 23 lp). So Canada and the UK have similar overall levels of wealth, but life for a typical Canadian is much better than life for a typical Briton because of the higher inequality in Britain. And the US has more wealth than Canada, but it doesn’t meaningfully improve the lifestyle of a typical American relative to a typical Canadian.
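
For reference, a log point is just one hundredth of the natural log of the ratio; a small helper reproduces the comparisons above (income figures as quoted in the text):

```python
import math

def pct_and_log_points(a, b):
    """Percent difference and log-point difference of a relative to b."""
    return 100 * (a / b - 1), 100 * math.log(a / b)

print(pct_and_log_points(75_000, 57_800))  # GDP PPP, US vs Canada: ~30%, ~26 lp
print(pct_and_log_points(19_300, 18_600))  # median income, US vs Canada: ~3.8%, ~3.7 lp
print(pct_and_log_points(18_600, 14_800))  # median income, Canada vs UK: ~26%, ~23 lp
```

Log points are handy precisely because they are symmetric: a gain of 26 lp and a loss of 26 lp exactly cancel, which is not true of percentages.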

The case against phys ed

Dec 4 JDN 2459918

If I want to stop someone from engaging in an activity, what should I do? I could tell them it’s wrong, and if they believe me, that would work. But what if they don’t believe me? Or I could punish them for doing it, and as long as I can continue to do that reliably, that should deter them from doing it. But what happens after I remove the punishment?

If I really want to make someone not do something, the best way to accomplish that is to make them not want to do it. Make them dread doing it. Make them hate the very thought of it. And to accomplish that, a very efficient method would be to first force them to do it, but make that experience as miserable and humiliating as possible. Give them a wide variety of painful or outright traumatic experiences that are directly connected with the undesired activity, to carry with them for the rest of their life.

This is precisely what physical education does, with regard to exercise. Phys ed is basically optimized to make people hate exercise.

Oh, sure, some students enjoy phys ed. These are the students who are already athletic and fit, who already engage in regular exercise and enjoy doing so. They may enjoy phys ed, may even benefit a little from it—but they didn’t really need it in the first place.

The kids who need more physical activity are the kids who are obese, or have asthma, or suffer from various other disabilities that make exercising difficult and painful for them. And what does phys ed do to those kids? It makes them compete in front of their peers at various athletic tasks at which they will inevitably fail and be humiliated.

Even the kids who are otherwise healthy but just don’t get enough exercise will go into phys ed class at a disadvantage, and instead of being carefully trained to improve their skills and physical condition at their own level, they will be publicly shamed by their peers for their inferior performance.

I know this, because I was one of those kids. I have exercise-induced bronchoconstriction, a lung condition similar to asthma (actually there’s some debate as to whether it should be considered a form of asthma), in which intense aerobic exercise causes the airways of my lungs to become constricted and inflamed, making me unable to get enough air to continue.

It’s really quite remarkable I wasn’t diagnosed with this as a child; I actually once collapsed while running in gym class, and all they thought to do at the time was give me water and let me rest for the remainder of the class. Nobody thought to call the nurse. I was never put on a beta agonist or an inhaler. (In fact at one point I was put on a beta blocker for my migraines; I now understand why I felt so fatigued when taking it—it was literally the opposite of the drug my lungs needed.)

Actually it’s been a few years since I had an attack. This is of course partly due to me generally avoiding intense aerobic exercise; but even when I do get intense exercise, I rarely seem to get bronchoconstriction attacks. My working hypothesis is that the norepinephrine reuptake inhibition of my antidepressant acts like a beta agonist; both drugs mimic norepinephrine.

But as a child, I got such attacks quite frequently; and even when I didn’t, my overall athletic performance was always worse than most of the other kids. They knew it, I knew it, and while only a few actively tried to bully me for it, none of the others did anything to make me feel better. So gym class was always a humiliating and painful experience that I came to dread.

As a result, as soon as I got out of school and had my own autonomy in how to structure my own life, I basically avoided exercise whenever I could. Even knowing that it was good for me—really, exercise is ridiculously good for you; it honestly doesn’t even make sense to me how good it is for you—I could rarely get myself to actually go out and exercise. I certainly couldn’t do it with anyone else; sometimes, if I was very disciplined, I could manage to maintain an exercise routine by myself, as long as there was no one else there who could watch me, judge me, or compare themselves to me.

In fact, I’d probably have avoided exercise even more, had I not also had some more positive experiences with it outside of school. I trained in martial arts for a few years, getting almost to a black belt in tae kwon do; I quit precisely when it started becoming very competitive and thus began to feel humiliated again when I performed worse than others. Part of me wishes I had stuck with it long enough to actually get the black belt; but the rest of me knows that even if I’d managed it, I would have been miserable the whole time and it probably would have made me dread exercise even more.

The details of my story are of course individual to me; but the general pattern is disturbingly common. A kid does poorly in gym class, or even suffers painful attacks of whatever disabling condition they have, but nobody sees it as a medical problem; they just see the kid as weak and lazy. Or even if the adults are sympathetic, the other kids aren’t; they just see a peer who performed worse than them, and they have learned by various subtle (and not-so-subtle) cultural pressures that anyone who performs worse at a culturally-important task is worthy of being bullied and shunned.

Even outside the directly competitive environment of sports, the very structure of a phys ed class, where a large group of students are all expected to perform the same athletic tasks and can directly compare their performance against each other, invites this kind of competition. Kids can see, right in their faces, who is doing better and who is doing worse. And our culture is astonishingly bad at teaching children (or anyone else, for that matter) how to be sympathetic to others who perform worse. Worse performance is worse character. Being bad at running, jumping and climbing is just being bad.

Part of the problem is that school administrators seem to see physical education as a training and selection regimen for their sports programs. (In fact, some of them seem to see their entire school as existing to serve their sports programs.) Here is a UK government report bemoaning the fact that “only a minority of schools play competitive sport to a high level”, apparently not realizing that this is necessarily true because high-level sports performance is a relative concept. Only one team can win the championship each year. Only 10% of students will ever be in the top 10% of athletes. No matter what. Anything else is literally mathematically impossible. We do not live in Lake Wobegon; not all the children can be above average.

There are good phys ed programs out there. They have highly-trained instructors and they focus on matching tasks to a student’s own skill level, as well as actually educating them—teaching them about anatomy and physiology rather than just making them run laps. The one phys ed class I took that I genuinely enjoyed was an anatomy and physiology class; we didn’t do any physical exercise in it at all. But well-taught phys ed classes are clearly the exception, not the norm.

Of course, it could be that some students actually benefit from phys ed, perhaps even enough to offset the harms to people like me. (Though then the question should be asked whether phys ed should be compulsory for all students—if an intervention helps some and hurts others, maybe only give it to the ones it helps?) But I know very few people who actually described their experiences of phys ed class as positive ones. While many students describe their experiences of math class in similarly negative terms (which is also a problem with how math classes are taught), I definitely do know people who actually enjoyed and did well in math class. Still, my sample is surely biased—it’s composed of people similar to me, and I hated gym and loved math. So let’s look at the actual data.

Or rather, I’d like to, but there isn’t that much out there. The empirical literature on the effects of physical education is surprisingly limited.

A lot of analyses of physical education simply take as axiomatic that more phys ed means more exercise, and so they use the—overwhelming, unassailable—evidence that exercise is good to support an argument for more phys ed classes. But they never seem to stop and take a look at whether phys ed classes are actually making kids exercise more, particularly once those kids grow up and become adults.

In fact, the surprisingly weak correlations between higher physical activity and better mental health among adolescents (despite really strong correlations in adults) could be because exercise among adolescents is largely coerced via phys ed, and the misery of being coerced into physical humiliation counteracts any benefits that might have been obtained from increased exercise.

The best long-term longitudinal study I can find did show positive effects of phys ed on long-term health, though by a rather odd mechanism: Women exercised more as adults if they had phys ed in primary school, but men didn’t; they just smoked less. And this study was back in 1999, studying a cohort of adults who had phys ed quite a long time ago, when it was better funded.

The best experiment I can find that actually tests whether phys ed programs work used a very carefully designed program with features that would be really nice to have but that the vast majority of actual gym classes lack: carefully structured activities with specific developmental goals, and, perhaps most importantly, children taught to track and evaluate their own individual progress rather than compare themselves to others.

And even then, the effects are not all that large. The physical activity scores of the treatment group rose from 932 minutes per week to 1108 minutes per week for first-graders, and from 1212 to 1454 for second-graders. But the physical activity scores of the control group rose from 906 to 996 for first-graders, and 1105 to 1211 for second-graders. So of the 176 minutes per week gained by first-graders, 90 would have happened anyway. Likewise, of the 242 minutes per week gained by second-graders, 106 were not attributable to the treatment. Only about half of the gains were due to the intervention, and they amount to about a 10% increase in overall physical activity. It also seems a little odd to me that the control groups both started worse off than the experimental groups and both groups gained; it raises some doubts about the randomization.
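
The arithmetic here is a simple difference-in-differences: subtract the control group's gain from the treatment group's gain. Using the minutes-per-week figures quoted above:

```python
# Treatment effect = (treatment gain) - (control gain), i.e. the part of
# the improvement not explained by whatever would have happened anyway.
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

grade1 = diff_in_diff(932, 1108, 906, 996)     # 176 - 90  = 86 min/week
grade2 = diff_in_diff(1212, 1454, 1105, 1211)  # 242 - 106 = 136 min/week

print(grade1, round(100 * grade1 / 932))   # 86, ~9% of baseline
print(grade2, round(100 * grade2 / 1212))  # 136, ~11% of baseline
```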

The researchers also measured psychological effects, and these effects are even smaller and honestly a little weird. On a scale of “somatic anxiety” (basically, how bad do you feel about your body’s physical condition?), this well-designed phys ed program only reduced scores in the treatment group from 4.95 to 4.55 among first-graders, and from 4.50 to 4.10 among second-graders. Seeing as the scores for second-graders also fell in the control group from 4.63 to 4.45, only about half of the observed reduction—0.2 points on a 10-point scale—is really attributable to the treatment. And the really baffling part is that the measure of social anxiety actually fell more, which makes me wonder if they’re really measuring what they think they are.

Clearly, exercise is good. We should be trying to get people to exercise more. Actually, this is more important than almost anything else we could do for public health, with the possible exception of vaccinations. All of these campaigns trying to get kids to lose weight should be removed and replaced with programs to get them to exercise more, because losing weight doesn’t benefit health and exercising more does.

But I am not convinced that physical education as we know it actually makes people exercise more. In the short run, it forces kids to exercise, when there were surely ways to get kids to exercise that didn’t require such coercion; and in the long run, it gives them painful, even traumatic memories of exercise that make them not want to continue it once they get older. It’s too competitive, too one-size-fits-all. It doesn’t account for innate differences in athletic ability or match challenge levels to skill levels. It doesn’t help kids cope with having less ability, or even teach kids to be compassionate toward others with less ability than them.

And it makes kids miserable.