How Effective Altruism hurt me

May 12 JDN 2460443

I don’t want this to be taken the wrong way. I still strongly believe in the core principles of Effective Altruism. Indeed, it’s shockingly hard to deny them, because basically they come out to this:

Doing more good is better than doing less good.

Then again, most people want to do good. Basically everyone agrees that more good is better than less good. So what’s the big deal about Effective Altruism?

Well, in practice, most people put shockingly little effort into trying to ensure that they are doing the most good they can. A lot of people just try to be nice people, without ever concerning themselves with the bigger picture. Many of these people don’t give to charity at all.

Then, even people who do give to charity typically give to charities more or less at random—or worse, in proportion to how much mail those charities send them begging for donations. (Surely you can see how that is a perverse incentive?) They donate to religious organizations, which sometimes do good things, but fundamentally are founded upon ignorance, patriarchy, and lies.

Effective Altruism is a movement intended to fix this, to get people to see the bigger picture and focus their efforts on where they will do the most good. Vet charities not just for their honesty, but also their efficiency and cost-effectiveness:

Just how many mQALY can you buy with that $1?

That part I still believe in. There is a lot of value in assessing which charities are the most effective, and trying to get more people to donate to those high-impact charities.

But there is another side to Effective Altruism, which I now realize has severely damaged my mental health.

That is the sense of obligation to give as much as you possibly can.

Peter Singer is the most extreme example of this. He seems to have mellowed—a little—in more recent years, but in some of his most famous books he uses the following thought experiment:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.

I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.

Basically everyone agrees with this particular decision: Even if you are wearing a very expensive suit that will be ruined, even if you’ll miss something really important like a job interview or even a wedding—most people agree that if you ever come across a drowning child, you should save them.

(Oddly enough, when contemplating this scenario, nobody ever seems to consider the advice that most lifeguards give, which is to throw a life preserver and then go find someone qualified to save the child—because saving someone who is drowning is a lot harder and a lot riskier than most people realize. (“Reach or throw, don’t go.”) But that’s a bit beside the point.)

But Singer argues that we are basically in this position all the time. For somewhere between $500 and $3000, you—yes, you—could donate to a high-impact charity, and thereby save a child’s life.

Does it matter that many other people are better positioned to donate than you are? Does it matter that the child is thousands of miles away and you’ll never see them? Does it matter that there are actually millions of children, and you could never save them all by yourself? Does it matter that you’ll only save a child in expectation, rather than saving some specific child with certainty?

Singer says that none of this matters. For a long time, I believed him.

Now, I don’t.

For, if you actually walked by a drowning child that you could save, only at the cost of missing a wedding and ruining your tuxedo, you clearly should do that. (If it would risk your life, maybe not—and as I alluded to earlier, that’s more likely than you might imagine.) If you wouldn’t, there’s something wrong with you. You’re a bad person.

But most people don’t donate everything they could to high-impact charities. Even Peter Singer himself doesn’t. So if donating is the same as saving the drowning child, it follows that we are all bad people.

(Note: In general, if an ethical theory results in the conclusion that the whole of humanity is evil, there is probably something wrong with that ethical theory.)

Singer has tried to get out of this by saying we shouldn’t “sacrifice things of comparable importance”, and then somehow cashing out what “comparable importance” means in such a way that it doesn’t require you to live on the street and eat scraps from trash cans. (Even though the people you’d be donating to largely do live that way.)

I’m not sure that really works, but okay, let’s say it does. Even so, it’s pretty clear that anything you spend money on purely for enjoyment would have to go. You would never eat out at restaurants, unless you could show that the time saved allowed you to get more work done and therefore donate more. You would never go to movies or buy video games, unless you could show that it was absolutely necessary for your own mental functioning. Your life would be work, work, work, then donate, donate, donate, and then do the absolute bare minimum to recover from working and donating so you can work and donate some more.

You would enslave yourself.

And all the while, you’d believe that you were never doing enough, you were never good enough, you are always a terrible person because you try to cling to any personal joy in your own life rather than giving, giving, giving all you have.

I now realize that Effective Altruism, as a movement, had been basically telling me to do that. And I’d been listening.

I now realize that Effective Altruism has given me this voice in my head, which I hear whenever I want to apply for a job or submit work for publication:

If you try, you will probably fail. And if you fail, a child will die.

The “if you try, you will probably fail” is just an objective fact. It’s inescapable. Any given job application or writing submission will probably fail.

Yes, maybe there’s some sort of bundling we could do to reframe that, as I discussed in an earlier post. But basically, this is correct, and I need to accept it.

Now, what about the second part? “If you fail, a child will die.” To most of you, that probably sounds crazy. And it is crazy. It’s way more pressure than any ordinary person should have in their daily life. This kind of pressure should be reserved for neurosurgeons and bomb squads.

But this is essentially what Effective Altruism taught me to believe. It taught me that every few thousand dollars I don’t donate is a child I am allowing to die. And since I can’t donate what I don’t have, it follows that every few thousand dollars I fail to get is another dead child.

And since Effective Altruism is so laser-focused on results above all else, it taught me that it really doesn’t matter whether I apply for the job and don’t get it, or never apply at all; the outcome is the same, and that outcome is that children suffer and die because I had no money to save them.

I think part of the problem here is that Effective Altruism is utilitarian through and through, and utilitarianism has very little place for good enough. There is better and there is worse; but there is no threshold at which you can say that your moral obligations are discharged and you are free to live your life as you wish. There is always more good that you could do, and therefore always more that you should do.

Do we really want to live in a world where to be a good person is to owe your whole life to others?

I do not believe in absolute selfishness. I believe that we owe something to other people. But I no longer believe that we owe everything. Sacrificing my own well-being at the altar of altruism has been incredibly destructive to my mental health, and I don’t think I’m the only one.

By all means, give to high-impact charities. But give a moderate amount—at most, tithe—and then go live your life. You don’t owe the world more than that.

Against average utilitarianism

Jul 30 JDN 2460156

Content warning: Suicide and suicidal ideation

There are two broad strands of utilitarianism, known as average utilitarianism and total utilitarianism. As utilitarianism, both versions concern themselves with maximizing happiness and minimizing suffering. And for many types of ethical question, they yield the same results.

Under average utilitarianism, the goal is to maximize the average level of happiness minus suffering: It doesn’t matter how many people there are in the world, only how happy they are.

Under total utilitarianism, the goal is to maximize the total level of happiness minus suffering: Adding another person is a good thing, as long as their life is worth living.

Mathematically, it’s the difference between taking the sum of net happiness (total utilitarianism), and taking that sum and dividing it by the population (average utilitarianism).
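If it helps to see that concretely, here is a minimal sketch in code, using made-up happiness numbers, just to show how adding one more person moves each measure:

```python
# A toy sketch with invented numbers: each entry is one person's net happiness.
population = [0.9, 0.6, 0.4, 0.2]

total_utility = sum(population)                      # total utilitarianism: 2.1
average_utility = sum(population) / len(population)  # average utilitarianism: 0.525

# Now add one more person whose life is (barely) worth living:
new_person = 0.1
total_after = total_utility + new_person                              # 2.2  -> better
average_after = (total_utility + new_person) / (len(population) + 1)  # 0.44 -> worse

print(total_after > total_utility)      # True: total utilitarianism welcomes the new person
print(average_after > average_utility)  # False: average utilitarianism does not
```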

It would make for too long a post to discuss the validity of utilitarianism in general. Overall I will say briefly that I think utilitarianism is basically correct, but there are some particular issues with it that need to be resolved, and usually end up being resolved by heading slightly in the direction of a more deontological ethics—in short, rule utilitarianism.

But for today, I want to focus on the difference between average and total utilitarianism, because average utilitarianism is a very common ethical view despite having appalling, horrifying implications.

Above all: under average utilitarianism, if you are considering suicide, you should probably do it.

Why? Because anyone who is considering suicide is probably of below-average happiness. And average utilitarianism necessarily implies that anyone who expects to be of below-average happiness should be immediately killed as painlessly as possible.

Note that this does not require that your life be one of endless suffering, so that it isn’t even worth going on living. Even a total utilitarian would be willing to commit suicide, if their life is expected to be so full of suffering that it isn’t worth going on.

Indeed, I suspect that most actual suicidal ideation by depressed people takes this form: My life will always be endless suffering. I will never be happy again. My life is worthless.

The problem with such suicidal ideation is not the ethical logic, which is valid: If indeed your existence from this point forward would be nothing but endless suffering, suicide actually makes sense. (Imagine someone who is being held in a dungeon being continually mercilessly tortured with no hope of escape; it doesn’t seem unreasonable for them to take a cyanide pill.) The problem is the prediction, which says that your life from this point forward will be nothing but endless suffering. Most people with depression do, eventually, feel better. They may never be quite as happy overall as people who aren’t depressed, but they do, in fact, have happy times. And most people who considered suicide but didn’t go through with it end up glad that they went on living.

No, an average utilitarian says you should commit suicide as long as your happiness is below average.

We could be living in a glorious utopia, where almost everyone is happy almost all the time, and people are only occasionally annoyed by minor inconveniences—and average utilitarianism would say that if you expect to suffer a more than average rate of such inconveniences, the world would be better off if you ceased to exist.

Moreover, average utilitarianism says that you should commit suicide if your life is expected to get worse—even if it’s still going to be good, adding more years to your life will just bring your average happiness down. If you had a very happy childhood and adulthood is going just sort of okay, you may as well end it now.

Average utilitarianism also implies that we should bomb Third World countries into oblivion, because their people are less happy than ours and thus their deaths will raise the population average.

Are there ways an average utilitarian can respond to these problems? Perhaps. But every response I’ve seen is far too weak to resolve the real problem.

One approach would be to say that the killing itself is bad, or will cause sufficient grief as to offset the loss of the unhappy person. (An average utilitarian is inherently committed to the claim that losing an unhappy person is itself an inherent good. There is something to be offset.)

This might work for the utopia case: The grief from losing someone you love is much worse than even a very large number of minor inconveniences.

It may even work for the case of declining happiness over your lifespan: Presumably some other people would be sad to lose you, even if they agreed that your overall happiness is expected to gradually decline. Then again, if their happiness is also expected to decline… should they, too, shuffle off this mortal coil?

But does it work for the question of bombing? Would most Americans really be so aggrieved at the injustice of bombing Burundi or Somalia to oblivion? Most of them don’t seem particularly aggrieved at the actual bombings of literally dozens of countries—including, by the way, Somalia. Granted, these bombings were ostensibly justified by various humanitarian or geopolitical objectives, but some of those justifications (e.g. Kosovo) seem a lot stronger than others (e.g. Grenada). And quite frankly, I care more about this sort of thing than most people, and I still can’t muster anything like the same kind of grief for random strangers in a foreign country that I feel when a friend or relative dies. Indeed, I can’t muster the same grief for one million random strangers in a foreign country that I feel for one lost loved one. Human grief just doesn’t seem to work that way. Sometimes I wish it did—but then, I’m not quite sure what our lives would be like in such a radically different world.

Moreover, the whole point is that an average utilitarian should consider it an intrinsically good thing to eliminate the existence of unhappy people, as long as it can be done swiftly and painlessly. So why, then, should people be aggrieved at the deaths of millions of innocent strangers they know are mostly unhappy? Under average utilitarianism, the greatest harm of war is the survivors you leave, because they will feel grief—so your job is to make sure you annihilate them as thoroughly as possible, presumably with nuclear weapons. Killing a soldier is bad as long as his family is left alive to mourn him—but if you kill an entire country, that’s good, because their country was unhappy.

Enough about killing and dying. Let’s talk about something happier: Babies.

At least, total utilitarians are happy about babies. When a new person is brought into the world, a total utilitarian considers this a good thing, as long as the baby is expected to have a life worth living and their existence doesn’t harm the rest of the world too much.

I think that fits with most people’s notions of what is good. Generally the response when someone has a baby is “Congratulations!” rather than “I’m sorry”. We see adding another person to the world as generally a good thing.

But under average utilitarianism, babies must reach a much higher standard in order to be a good thing. Your baby only deserves to exist if they will be happier than average.

Granted, this is the average for the whole world, so perhaps First World people can justify the existence of their children by pointing out that unless things go very badly, they should end up happier than the world average. (Then again, if you have a family history of depression….)

But for Third World families, quite the opposite: The baby may well bring joy to all around them, but unless that joy is enough to bring someone above the global average, it would still be better if the baby did not exist. Adding one more person of moderately-low happiness will just bring the world average down.

So in fact, on a global scale, an average utilitarian should always expect that babies are nearly as likely to be bad as they are good, unless we have some reason to think that the next generation would be substantially happier than this one.

And while I’m not aware of anyone who sincerely believes that we should nuke Third World countries for their own good, I have heard people speak this way about population growth in Third World countries: such discussions of “overpopulation” are usually ostensibly about ecological sustainability, even though the ecological impact of First World countries is dramatically higher—and such talk often shades very quickly into eugenics.

Of course, we wouldn’t want to say that having babies is always good, lest we all be compelled to crank out as many babies as possible and genuinely overpopulate the world. But total utilitarianism can solve this problem: It’s worth adding more people to the world unless the harm of adding those additional people is sufficient to offset the benefit of adding another person whose life is worth living.

Moreover, total utilitarianism can say that it would be good to delay adding another person to the world, until the situation is better. Potentially this delay could be quite long: Perhaps it is best for us not to have too many children until we can colonize the stars. For now, let’s just keep our population sustainable while we develop the technology for interstellar travel. If having more children now would increase the risk that we won’t ever manage to colonize distant stars, total utilitarianism would absolutely say we shouldn’t do it.

There’s also a subtler problem here, which is that it may seem good for any particular individual to have more children, but the net result is that the higher total population is harmful. Then what I think is happening is that we are unaware of, or uncertain about, or simply inattentive to, the small harm to many other people caused by adding one new person to the world. Alternatively, we may not be entirely altruistic, and a benefit that accrues to our own family may be taken as greater than a harm that accrues to many other people far away. If we really knew the actual marginal costs and benefits, and we really agreed on that utility function, we would in fact make the right decision. It’s our ignorance or disagreement that makes us fail, not total utilitarianism in principle. In practice, this means coming up with general rules that seem to result in a fair and reasonable outcome, like “families who want to have kids should aim for two or three”—and again we’re at something like rule utilitarianism.

Another case where average utilitarianism seems tempting is in resolving the mere addition paradox.

Consider three possible worlds, A, B, and C:

In world A, there is a population of 1 billion, and everyone is living an utterly happy, utopian life.

In world B, there is a population of 1 billion living in a utopia, and a population of 2 billion living mediocre lives.

In world C, there is a population of 3 billion living good, but not utopian, lives.

The mere addition paradox is that, to many people, world B seems worse than world A, even though all we’ve done is add 2 billion people whose lives are worth living.

And yet, when asked to compare the worlds one pair at a time, many people seem to accept the following ordering:


World B is better than world A, because all we’ve done is add more people whose lives are worth living.

World C is better than world B, because it’s fairer, and overall happiness is higher.

World A is better than world C, because everyone is happier, and all we’ve done is reduce the population.


This is intransitive: We have A > C > B > A. Our preferences over worlds are incoherent.

Average utilitarianism resolves this by saying that A > C is true, and C > B is true—but it says that B > A is false. Since average happiness is higher in world A, A > B.

But of course this results in the conclusion that if we are faced with world B, we should do whatever we can to annihilate the 2 billion extra unhappy people, so that we can get to world A. And the whole point of this post is that this is an utterly appalling conclusion we should immediately reject.

What does total utilitarianism say? It says that indeed C > B and B > A, but it denies that A > C. Rather, since there are more people in world C, it’s okay that people aren’t quite as happy.

Derek Parfit argues that this leads to what he calls the “repugnant conclusion”: If we keep increasing the population by a large amount while decreasing happiness by a small amount, the best possible world ends up being one where population is utterly massive but our lives are only barely worth living.

I do believe that total utilitarianism results in this outcome. I can live with that.

Under average utilitarianism, the best possible world is precisely one person who is immortal and absolutely ecstatic 100% of the time. Adding even one person who is not quite that happy will make things worse.

Under total utilitarianism, adding more people who are still very happy would be good, even if it makes that one ecstatic person a bit less ecstatic. And adding more people would continue to be good, as long as it didn’t bring the average down too quickly.

If you find this conclusion repugnant, as Parfit does, I submit that it is because it is difficult to imagine just how large a population we are talking about. Maybe putting some numbers on it will help.

Let’s say the lifetime well-being of an average person in the world today is 35 quality-adjusted life years—our life expectancy of 70, times an average happiness level of 0.5.

So right now we have a world of 8 billion people at 35 QALY each, for a total of 280 GQALY. (That’s giga-QALY: 1 billion QALY.)

(Note: I’m not addressing inequality here. If you believe that a world where one person has 100 QALY and another has 50 QALY is worse than one where both have 75 QALY, you should adjust your scores accordingly—which mainly serves to make the current world look worse, due to our utterly staggering inequality. In fact, I don’t think I believe this—in my view, the problem is not that happiness is unequal, but that staggering inequality of wealth creates much greater suffering among the poor in exchange for very little added happiness among the rich.)

Average utilitarianism says that we should eliminate the less happy people, so we can raise the average QALY higher, maybe to something like 60. I’ve already said why I find this appalling.

So now consider what total utilitarianism asks of us. If we could raise that figure above 280 GQALY, we should. Say we could increase our population to 10 billion, at the cost of reducing average happiness to 30 QALY; should we? Yes, we should, because that’s 300 GQALY.

But notice that in this scenario we’re still 85% as happy as we were. That doesn’t sound so bad. Parfit is worried about a scenario where our lives are barely worth living. So let’s consider what that would require.

“Barely worth living” sounds like maybe 1 QALY. This wouldn’t mean we all live exactly one year; that’s not sustainable, because babies can’t have babies. So it would be more like a life expectancy of 33, with a happiness of 0.03—pretty bad, but still worth living.

In that case, we would need a population of over 280 billion—about 35 times what we have now—just to match our current existence. Fill, say, 100 other planets as full as we’ve filled Earth, for roughly 800 billion people in all, and we would clearly surpass it.

In fact, I think this 1 QALY life was something like what human beings had at the dawn of agriculture (which by some estimates was actually worse than ancient hunter-gatherer life; we were sort of forced into early agriculture, rather than choosing it because it was better): Nasty, brutish, and short, but still, worth living.

So, Parfit’s repugnant conclusion is that filling 100 planets with people who live like the ancient Babylonians would be at least as good as life on Earth is now? I don’t really see how this is obviously horrible. Certainly not to the same degree that saying we should immediately nuke Somalia is obviously horrible.

Moreover, total utilitarianism absolutely still says that if we can make those 800 billion people happier, we should. A world of 800 billion people each getting 35 QALY is 100 times better than the way things are now—and doesn’t that seem right, at least?
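For anyone who wants to check my arithmetic, here is the back-of-the-envelope version, using only the rough figures I assumed above (not real demographic data):

```python
# Back-of-the-envelope check of the figures above (rough assumptions, not data).
current_pop = 8e9                 # people
qaly_per_person_now = 35          # 70 years of life * 0.5 average happiness
current_total = current_pop * qaly_per_person_now       # 2.8e11 QALY = 280 GQALY

# A life "barely worth living": about 1 QALY in total (33 years * 0.03).
break_even_pop = current_total / 1                       # 280 billion people
print(break_even_pop / current_pop)                      # 35.0 -> about 35 Earths' worth

# 100 fully-populated planets of such lives comfortably beats the current world:
print((100 * current_pop * 1) / current_total)           # ~2.9x

# And 800 billion people each at today's 35 QALY really is 100x better:
print((800e9 * qaly_per_person_now) / current_total)     # 100.0
```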


Yet if you indeed believe that copying a good world 100 times gives you a 100 times better world, you are basically committed to total utilitarianism.

There are actually other views that would allow you to escape this conclusion without being an average utilitarian.

One way, naturally, is to not be a utilitarian. You could be a deontologist or something. I don’t have time to go into that in this post, so let’s save it for another time. For now, let me say that, historically, utilitarianism has led the charge in positive moral change, from feminism to gay rights, from labor unions to animal welfare. We tend to drag stodgy deontologists kicking and screaming toward a better world. (I vaguely recall an excellent tweet on this, though not who wrote it: “Yes, historically, almost every positive social change has been spearheaded by utilitarians. But sometimes utilitarianism seems to lead to weird conclusions in bizarre thought experiments, and surely that’s more important!”)

Another way, which has gotten surprisingly little attention, is to use an aggregating function that is neither a sum nor an average. For instance, you could add up all utility and divide by the square root of population, so that larger populations get penalized for being larger, but you aren’t simply trying to maximize average happiness. That does still seem to tell some people to die even though their lives were worth living, but at least it doesn’t require us to exterminate all who are below average. And it may also avoid the conclusion Parfit considers repugnant, by requiring our galactic civilization to span well over a thousand worlds before it counts as an improvement. Of course, why square root? Why not a cube root, or a logarithm? Maybe the arbitrariness is why it hasn’t been seriously considered. But honestly, I think dividing by anything is suspicious; how can adding someone else who is happy ever make things worse?
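Here is a toy comparison of the three aggregation rules—sum, average, and the square-root version—on two imaginary worlds, just to show how the penalty behaves (all numbers invented):

```python
import math

# Three candidate scoring rules for a world, given each person's lifetime QALY.
def total(qalys):
    return sum(qalys)

def average(qalys):
    return sum(qalys) / len(qalys)

def sqrt_penalized(qalys):
    return sum(qalys) / math.sqrt(len(qalys))

happy_few = [35] * 8    # a small world of happy people
huge_meek = [1] * 800   # a much larger world of lives barely worth living

for rule in (total, average, sqrt_penalized):
    print(rule.__name__, round(rule(happy_few), 1), round(rule(huge_meek), 1))
# total           280.0  800.0   -> the huge world wins
# average          35.0    1.0   -> the small world wins
# sqrt_penalized   99.0   28.3   -> the huge world loses unless it gets much larger still
```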

But if I must admit that a sufficiently large galactic civilization would be better than our current lives, even if everyone there is mostly pretty unhappy? That’s a bullet I’m prepared to bite. At least I’m not saying we should annihilate everyone who is unhappy.

How much should we give of ourselves?

Jul 23 JDN 2460149

This is a question I’ve written about before, but it’s a very important one—perhaps the most important question I deal with on this blog—so today I’d like to come back to it from a slightly different angle.

Suppose you could sacrifice all the happiness in the rest of your life, making your own existence barely worth living, in exchange for saving the lives of 100 people you will never meet.

  1. Would it be good for you to do so?
  2. Should you do so?
  3. Are you a bad person if you don’t?
  4. Are all of the above really the same question?

Think carefully about your answer. It may be tempting to say “yes”. It feels righteous to say “yes”.

But in fact this is not hypothetical. It is the actual situation you are in.

This GiveWell article is entitled “Why is it so expensive to save a life?” but that’s incredibly weird, because the actual figure they give is astonishingly, mind-bogglingly, frankly disgustingly cheap: It costs about $4500 to save one human life. I don’t know how you can possibly find that expensive. I don’t understand how anyone can think, “Saving this person’s life might max out a credit card or two; boy, that sure seems expensive!”

The standard for healthcare policy in the US is that something is worth doing if it is able to save one quality-adjusted life year for less than $50,000. That’s one year for ten times as much. Even accounting for the shorter lifespans and worse lives in poor countries, saving someone from a poor country for $4500 is at least one hundred times as cost-effective as that.

To put it another way, if you are a typical middle-class person in the First World, with an after-tax income of about $25,000 per year, and you were to donate 90% of that after-tax income to high-impact charities, you could be expected to save 5 lives every year. Over the course of a 30-year career, that’s 150 lives saved.

You would of course be utterly miserable for those 30 years, having given away all the money you could possibly have used for any kind of entertainment or enjoyment, not to mention living in the cheapest possible housing—maybe even a tent in a homeless camp—and eating the cheapest possible food. But you could do it, and you would in fact be expected to save over 100 lives by doing so.
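If you want to check that arithmetic yourself, here it is spelled out; the $4,500 and $50,000 figures are the ones cited above, while the ~30 QALY gained per life saved is my own rough, conservative guess:

```python
# The arithmetic behind those claims, using the text's figures plus one guess of mine.
cost_per_life = 4500              # USD, GiveWell's rough figure
us_threshold_per_qaly = 50_000    # the usual US cost-effectiveness benchmark

qaly_per_life_saved = 30          # rough, conservative assumption
cost_per_qaly = cost_per_life / qaly_per_life_saved     # $150 per QALY
print(us_threshold_per_qaly / cost_per_qaly)            # ~333x as cost-effective

# Donating 90% of a $25,000 after-tax income:
annual_donation = 0.9 * 25_000                          # $22,500
lives_per_year = annual_donation / cost_per_life        # 5.0
print(lives_per_year * 30)                              # 150 lives over a 30-year career
```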

So let me ask you again:

  1. Would it be good for you to do so?
  2. Should you do so?
  3. Are you a bad person if you don’t?
  4. Are all of the above really the same question?

Peter Singer often writes as though the answer to all these questions is “yes”. But even he doesn’t actually live that way. He gives a great deal to charity, mind you; no one seems to know exactly how much, but estimates range from at least 10% to up to 50% of his income. My general impression is that he gives about 10% of his ordinary income and more like 50% of big prizes he receives (which are in fact quite numerous). Over the course of his life he has certainly donated at least a couple million dollars. Yet he clearly could give more than he does: He lives a comfortable, upper-middle-class life.

Peter Singer’s original argument for his view, from his essay “Famine, Affluence, and Morality”, is actually astonishingly weak. It involves imagining a scenario where a child is drowning in a lake and you could go save them, but only at the cost of ruining your expensive suit.

Obviously, you should save the child. We all agree on that. You are in fact a terrible person if you wouldn’t save the child.

But Singer tries to generalize this into a principle that requires us to donate most or even all of our income to international charities, and that just doesn’t follow.

First of all, that suit is not worth $4500. Not if you’re a middle-class person. That’s a damn Armani. No one who isn’t a millionaire wears suits like that.

Second, in the imagined scenario, you’re the only one who can help the kid. All I have to do is change that one thing and already the answer is different: If right next to you there is a trained, certified lifeguard, they should save the kid, not you. And if there are a hundred other people at the lake, and none of them is saving the kid… probably there’s a good reason for that? (It could be the bystander effect, but actually that’s much weaker than a lot of people think.) The responsibility doesn’t uniquely fall upon you.

Third, the drowning child is a one-off, emergency scenario that almost certainly will never happen to you, and if it does ever happen, will almost certainly only happen once. But donation is something you could always do, and you could do over and over and over again, until you have depleted all your savings and run up massive debts.

Fourth, in the hypothetical scenario, there is only one child. What if there were ten—or a hundred—or a thousand? What if you couldn’t possibly save them all by yourself? Should you keep going out there and saving children until you become exhausted and you yourself drown? Even if there is a lifeguard and a hundred other bystanders right there doing nothing?

And finally, in the drowning child scenario, you are right there. This isn’t some faceless stranger thousands of miles away. You can actually see that child in front of you. Peter Singer thinks that doesn’t matter—actually his central point seems to be that it doesn’t matter. But I think it does.

Singer writes:

It makes no moral difference whether the person I can help is a neighbor’s child ten yards away from me or a Bengali whose name I shall never know, ten thousand miles away.

That’s clearly wrong, isn’t it? Relationships mean nothing? Community means nothing? There is no moral value whatsoever to helping people close to us rather than random strangers on the other side of the planet?

One answer might be to say that the answer to question 4 is “no”. You aren’t a bad person for not doing everything you should, and even though something would be good if you did it, that doesn’t necessarily mean you should do it.

Perhaps some things are above and beyond the call of duty: Good, perhaps even heroic, if you’re willing to do them, but not something we are all obliged to do. The formal term for this is supererogatory. While I think that overall utilitarianism is basically correct and has done great things for human society, one thing I think most utilitarians miss is that they seem to deny that supererogatory actions exist.

Even then, I’m not entirely sure it is good to be this altruistic.

Someone who really believed that we owe as much to random strangers as we do to our friends and family would never show up to any birthday parties, because any time spent at a birthday party would be more efficiently spent earning-to-give to some high-impact charity. They would never visit their family on Christmas, because plane tickets are expensive and airplanes burn a lot of carbon.

They also wouldn’t concern themselves with whether their job is satisfying or even not totally miserable; they would only care about maximizing the total positive impact they can have on the world, either directly through their work or by raising as much money as possible and donating it all to charity.

They would rest only the minimum amount they require to remain functional, eat only the barest minimum of nutritious food, and otherwise work, work, work, constantly, all the time. If their body was capable of doing the work, they would continue doing the work. For there is not a moment to waste when lives are on the line!

A world full of people like that would be horrible. We would all live our entire lives in miserable drudgery trying to maximize the amount we can donate to faceless strangers on the other side of the planet. There would be no joy or friendship in that world, only endless, endless toil.

When I bring this up in the Effective Altruism community, I’ve heard people try to argue otherwise, basically saying that we would never need everyone to devote themselves to the cause at this level, because we’d soon solve all the big problems and be able to go back to enjoying our lives. I think that’s probably true—but it also kind of misses the point.

Yes, if everyone gave their fair share, that fair share wouldn’t have to be terribly large. But we know for a fact that most people are not giving their fair share. So what now? What should we actually do? Do you really want to live in a world where the morally best people are miserable all the time sacrificing themselves at the altar of altruism?

Yes, clearly, most people don’t do enough. In fact, most people give basically nothing to high-impact charities. We should be trying to fix that. But if I am already giving far more than my fair share, far more than I would have to give if everyone else were pitching in as they should—isn’t there some point at which I’m allowed to stop? Do I have to give everything I can or else I’m a monster?

The conclusion that we ought to make ourselves utterly miserable in order to save distant strangers feels deeply unsettling. It feels even worse if we say that we ought to do so, and worse still if we feel we are bad people if we don’t.

One solution would be to say that we owe absolutely nothing to these distant strangers. Yet that clearly goes too far in the opposite direction. There are so many problems in this world that could be fixed if more people cared just a little bit about strangers on the other side of the planet. Poverty, hunger, war, climate change… if everyone in the world (or really even just everyone in power) cared even 1% as much about random strangers as they do about themselves, all these would be solved.

Should you donate to charity? Yes! You absolutely should. Please, I beseech you, give some reasonable amount to charity—perhaps 5% of your income, or if you can’t manage that, maybe 1%.

Should you make changes in your life to make the world better? Yes! Small ones. Eat less meat. Take public transit instead of driving. Recycle. Vote.

But I can’t ask you to give 90% of your income and spend your entire life trying to optimize your positive impact. Even if it worked, it would be utter madness, and the world would be terrible if all the good people tried to do that.

I feel quite strongly that this is the right approach: Give something. Your fair share, or perhaps even a bit more, because you know not everyone will.

Yet it’s surprisingly hard to come up with a moral theory on which this is the right answer.

It’s much easier to develop a theory on which we owe absolutely nothing: egoism, or any deontology on which charity is not an obligation. And of course Singer-style utilitarianism says that we owe virtually everything: As long as QALYs can be purchased cheaper by GiveWell than by spending on yourself, you should continue donating to GiveWell.

I think part of the problem is that we have developed all these moral theories as if we were isolated beings, who act in a world that is simply beyond our control. It’s much like the assumption of perfect competition in economics: I am but one producer among thousands, so whatever I do won’t affect the price.

But what we really needed was a moral theory that could work for a whole society. Something that would still make sense if everyone did it—or better yet, still make sense if half the people did it, or 10%, or 5%. The theory cannot depend upon the assumption that you are the only one following it. It cannot simply “hold constant” the rest of society.

I have come to realize that the Effective Altruism movement, while probably mostly good for the world as a whole, has actually been quite harmful to the mental health of many of its followers, including myself. It has made us feel guilty for not doing enough, pressured us to burn ourselves out working ever harder to save the world. Because we do not give our last dollar to charity, we are told that we are murderers.

But there are real murderers in this world. While you were beating yourself up over not donating enough, Vladimir Putin was continuing his invasion of Ukraine, ExxonMobil was expanding its offshore drilling, Daesh was carrying out hundreds of terrorist attacks, QAnon was deluding millions of people, and the human trafficking industry was making $150 billion per year.

In other words, by simply doing nothing you are considerably better than the real monsters responsible for most of the world’s horror.

In fact, those starving children in Africa that you’re sending money to help? They wouldn’t need it, were it not for centuries of colonial imperialism followed by a series of corrupt and/or incompetent governments ruled mainly by psychopaths.

Indeed the best way to save those people, in the long run, would be to fix their governments—as has been done in places like Namibia and Botswana. According to the World Development Indicators, the proportion of people living below the UN extreme poverty line (currently $2.15 per day at purchasing power parity) has fallen from 36% to 16% in Namibia since 2003, and from 42% to 15% in Botswana since 1984. Compare this to some countries that haven’t had good governments over that time: In Cote d’Ivoire the same poverty rate was 8% in 1985 but is 11% today (and was actually as high as 33% in 2015), while in Congo it remains at 35%. Then there are countries that are trying, but just started out so poor it’s a long way to go: Burkina Faso’s extreme poverty rate has fallen from 82% in 1994 to 30% today.

In other words, if you’re feeling bad about not giving enough, remember this: if everyone in the world were as good as you, you wouldn’t need to give a cent.

Of course, simply feeling good about yourself for not being a psychopath doesn’t accomplish very much either. Somehow we have to find a balance: Motivate people enough so that they do something, get them to do their share; but don’t pressure them to sacrifice themselves at the altar of altruism.

I think part of the problem here—and not just here—is that the people who most need to change are the ones least likely to listen. The kind of person who reads Peter Singer is already probably in the top 10% of most altruistic people, and really doesn’t need much more than a slight nudge to be doing their fair share. And meanwhile the really terrible people in the world have probably never picked up an ethics book in their lives, or if they have, they ignored everything it said.

I don’t quite know what to do about that. But I hope I can at least convince you—and myself—to take some of the pressure off when it feels like we’re not doing enough.

Reckoning costs in money distorts them

May 7 JDN 2460072

Consider for a moment what it means when an economic news article reports “rising labor costs”. What are they actually saying?

They’re saying that wages are rising—perhaps in some industry, perhaps in the economy as a whole. But this is not a cost. It’s a price. As I’ve written about before, the two are fundamentally distinct.

The cost of labor is measured in effort, toil, and time. It’s the pain of having to work instead of whatever else you’d like to do with your time.

The price of labor is a monetary amount, which is delivered in a transaction.

This may seem perfectly obvious, but it has important and oft-neglected implications. A cost, once paid, is gone. That value has been destroyed. We hope that it was worth it for some benefit we gained. A price, when paid, is simply transferred: One person had that money before, now someone else has it. Nothing was gained or lost.

So in fact when reports say that “labor costs have risen”, what they are really saying is that income is being transferred from owners to workers without any change in real value taking place. They are framing as a loss what is fundamentally a zero-sum redistribution.

In fact, it is disturbingly common to see a fundamentally good redistribution of income framed in the press as a bad outcome because of its expression as “costs”; the “cost” of chocolate is feared to go up if we insist upon enforcing bans on forced labor—when in fact it is only the price that goes up, and the cost actually goes down: chocolate would no longer include complicity in an atrocity. The real suffering of making chocolate would be thereby reduced, not increased. Even when they aren’t literally enslaved, those workers are astonishingly poor, and giving them even a few more cents per hour would make a real difference in their lives. But God forbid we pay a few cents more for a candy bar!

If labor costs were to rise, that would mean that work had suddenly gotten harder, or more painful; or else, that some outside circumstance had made it more difficult to work. Having a child increases your labor costs—you now have the opportunity cost of not caring for the child. COVID increased the cost of labor, by making it suddenly dangerous just to go outside in public. That could also increase prices—you may demand a higher wage, and people do seem to have demanded higher wages after COVID. But these are two separate effects, and you can have one without the other. In fact, women typically see wage stagnation or even reduction after having kids (but men largely don’t), despite their real opportunity cost of labor having obviously greatly increased.

On an individual level, it’s not such a big mistake to equate price and cost. If you are buying something, its cost to you basically just is its price, plus a little bit of transaction cost for actually finding and buying it. But on a societal level, it makes an enormous difference. It distorts our policy priorities and can even lead to actively trying to suppress things that are beneficial—such as rising wages.

This false equivalence between price and costs seems to be at least as common among economists as it is among laypeople. Economists will often justify it on the grounds that in an ideal, perfectly competitive market the two would be in some sense equated. But of course we don’t live in that ideal perfect market, and even if we did, they would only be proportional at the margin, not fundamentally equal across the board. It would still be obviously wrong to characterize the total value or cost of work by the price paid for it; only the last unit of effort would be priced so that marginal value equals price equals marginal cost. The first 39 hours of your work would cost you less than what you were paid, and produce more than you were paid; only that 40th hour would set the three equal.
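Here is a toy numerical version of that point, with invented numbers, just to make the margin-versus-total distinction concrete:

```python
# Invented numbers: suppose the pain of each successive hour of work rises
# steadily, and the wage settles where the 40th hour's pain just equals it.
wage = 40.0                                             # dollars per hour
pain_per_hour = [hour * 1.0 for hour in range(1, 41)]   # $1-worth of pain for hour 1 ... $40 for hour 40

total_cost = sum(pain_per_hour)    # 820: the real cost of the week's work
total_price = wage * 40            # 1600: the price paid for it

print(total_price - total_cost)    # 780: the surplus earned on all the earlier hours
print(pain_per_hour[-1] == wage)   # True: only the final hour is priced at its true cost
```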

Once you account for all the various market distortions in the world, there’s no particular relationship between what something costs—in terms of real effort and suffering—and its price—in monetary terms. Things can be expensive and easy, or cheap and awful. In fact, they often seem to be; for some reason, there seems to be a pattern where the most terrible, miserable jobs (e.g. coal mining) actually pay the least, and the easiest, most pleasant jobs (e.g. stock trading) pay the most. Some jobs that benefit society pay well (e.g. doctors) and others pay terribly or not at all (e.g. climate activists). Some actions that harm the world get punished (e.g. armed robbery) and others get rewarded with riches (e.g. oil drilling). In the real world, whether a job is good or bad and whether it is paid well or poorly seem to be almost unrelated.

In fact, sometimes they seem even negatively related, where we often feel tempted to “sell out” and do something destructive in order to get higher pay. This is likely due to Berkson’s paradox: If people are willing to do jobs if they are either high-paying or beneficial to humanity, then we should expect that, on average, most of the high-paying jobs people do won’t be beneficial to humanity. Even if there were inherently no correlation or a small positive one, people’s refusal to do harmful low-paying work removes those jobs from our sample and results in a negative correlation in what remains.
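If you want to see Berkson’s paradox in action, here is a quick simulation; it is purely illustrative—pay and social benefit are drawn independently, and the selection rule is made up:

```python
import random

# Pay and social benefit are generated independently; people simply refuse jobs
# that are low in both. Among the jobs that actually get done, the correlation
# between pay and benefit turns negative.
random.seed(0)
all_jobs = [(random.random(), random.random()) for _ in range(100_000)]  # (pay, benefit)
taken_jobs = [(p, b) for p, b in all_jobs if p > 0.7 or b > 0.7]

def correlation(pairs):
    n = len(pairs)
    mean_p = sum(p for p, _ in pairs) / n
    mean_b = sum(b for _, b in pairs) / n
    cov = sum((p - mean_p) * (b - mean_b) for p, b in pairs) / n
    var_p = sum((p - mean_p) ** 2 for p, _ in pairs) / n
    var_b = sum((b - mean_b) ** 2 for _, b in pairs) / n
    return cov / (var_p * var_b) ** 0.5

print(round(correlation(all_jobs), 2))    # ~0.0: no underlying relationship
print(round(correlation(taken_jobs), 2))  # clearly negative among the jobs people take
```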

I think that the best solution, ultimately, is to stop reckoning costs in money entirely. We should reckon them in happiness.

This is of course much more difficult than simply using prices; it’s not easy to say exactly how many QALY are sacrificed in the extraction of cocoa beans or the drilling of offshore oil wells. But if we actually did find a way to count them, I strongly suspect we’d find that it was far more than we ought to be willing to pay.

A very rough approximation, surely flawed but at least a start, would be to simply convert all payments into proportions of their recipient’s income—taking, say, one year of a person’s income as worth roughly 1 QALY. For full-time wages, this would result in basically everyone being counted the same: 1 hour of work, if you work 40 hours per week for 50 weeks per year, is precisely 0.05% of your annual income. So we could say that whatever is equivalent to your hourly wage constitutes about 500 microQALY.

This automatically implies that every time a rich person pays a poor person, QALY increase, while every time a poor person pays a rich person, QALY decrease. This is not an error in the calculation. It is a fact of the universe. We ignore it only at our own peril. All wealth redistributed downward is a benefit, while all wealth redistributed upward is a harm. That benefit may cause some other harm, or that harm may be compensated by some other benefit; but they are still there.

This would also put some things in perspective. When HSBC was fined £70 million for its crimes, that can be compared against its £1.5 billion in net income; if it were an individual, it would have been hurt about 50 milliQALY, which is about what I would feel if I lost $2000. Of course, it’s not a person, and it’s not clear exactly how this loss was passed through to employees or shareholders; but that should give us at least some sense of how small that loss was for them. They probably felt it… a little.

When Trump was ordered to pay a $1.3 million settlement, based on his $2.5 billion net wealth (corresponding to roughly $125 million in annual investment income), that cost him about 10 milliQALY; for me that would be about $500.

At the other extreme, if someone goes from making $1 per day to making $1.50 per day, that’s a 50% increase in their income—500 milliQALY per year.
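Here is the same conversion written out as a tiny function, under the assumption made above that one year of a person’s income corresponds to roughly 1 QALY (the illustrative $20/hour wage against a $40,000 income is mine):

```python
# Taking one year of the recipient's income as roughly 1 QALY, the conversion
# is just a ratio of the payment to annual income.
def payment_in_qaly(amount, annual_income):
    return amount / annual_income

print(payment_in_qaly(70e6, 1.5e9))             # HSBC's fine:        ~0.047, i.e. ~50 milliQALY
print(payment_in_qaly(1.3e6, 125e6))            # Trump's settlement: ~0.010, i.e. ~10 milliQALY
print(payment_in_qaly(0.50 * 365, 1.00 * 365))  # $1/day -> $1.50/day: 0.5 QALY gained per year
print(payment_in_qaly(20, 40_000))              # one hour's wage:     0.0005, i.e. ~500 microQALY
```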

For those who have no income at all, this becomes even trickier; for them I think we should probably use their annual consumption, since everyone needs to eat and that costs something, though likely not very much. Or we could try to measure their happiness directly, trying to determine how much it hurts to not eat enough and work all day in sweltering heat.

Properly shifting this whole cultural norm will take a long time. For now, I leave you with this: Any time you see a monetary figure, ask yourself: “How much is that worth to them?” The world will seem quite different once you get in the habit of that.

There should be a glut of nurses.

Jan 15 JDN 2459960

It will not be news to most of you that there is a worldwide shortage of healthcare staff, especially nurses and emergency medical technicians (EMTs). I would like you to stop and think about the utterly terrible policy failure this represents. Maybe if enough people do, we can figure out a way to fix it.

It goes without saying—yet bears repeating—that people die when you don’t have enough nurses and EMTs. Indeed, surely a large proportion of the 2.6 million (!) deaths each year from medical errors are attributable to this. It is likely that at least one million lives per year could be saved by fixing this problem worldwide. In the US alone, over 250,000 deaths per year are caused by medical errors; so we’re looking at something like 100,000 lives we could save each year by removing staffing shortages.

Precisely because these jobs have such high stakes, the mere fact that we would ever see the word “shortage” beside “nurse” or “EMT” was already clear evidence of dramatic policy failure.

This is not like other jobs. A shortage of accountants or baristas or even teachers, while a bad thing, is something that market forces can be expected to correct in time, and it wouldn’t be unreasonable to simply let them do so—meaning, let wages rise on their own until the market is restored to equilibrium. A “shortage” of stockbrokers or corporate lawyers would in fact be a boon to our civilization. But a shortage of nurses or EMTs or firefighters (yes, there are those too!) is a disaster.

Partly this is due to the COVID pandemic, which has been longer and more severe than any but the most pessimistic analysts predicted. But there were shortages of nurses before COVID. There should not have been. There should have been a massive glut.

Even if there hadn’t been a shortage of healthcare staff before the pandemic, the fact that there wasn’t a glut was already a problem.

This is what a properly-functioning healthcare policy would look like: Most nurses are bored most of the time. They are widely regarded as overpaid. People go into nursing because it’s a comfortable, easy career with very high pay and usually not very much work. Hospitals spend most of their time with half their beds empty and half of their ambulances parked while the drivers and EMTs sit around drinking coffee and watching football games.

Why? Because healthcare, especially emergency care, involves risk, and the stakes couldn’t be higher. If the number of severely sick people doubles—as in, say, a pandemic—a hospital that usually runs at 98% capacity won’t be able to deal with them. But a hospital that usually runs at 50% capacity will.

COVID exposed to the world what a careful analysis would already have shown: There was not nearly enough redundancy in our healthcare system. We had been optimizing for a narrow-minded, short-sighted notion of “efficiency” over what we really needed, which was resiliency and robustness.

I’d like to compare this to two other types of jobs.

The first is stockbrokers. Set aside for a moment the fact that most of what they do is worthless, if not actively detrimental, to human society. Suppose that their most adamant boosters are correct and what they do is actually really important and beneficial.

Their experience is almost like what I just said nurses ought to be. They are widely regarded (correctly) as very overpaid. There is never any shortage of them; there are people lining up to be hired. People go into the work not because they care about it or even because they are particularly good at it, but because they know it’s an easy way to make a lot of money.

The one thing that seems to be different from my image may not be as different as it seems. Stockbrokers work long hours, but nobody can really explain why. Frankly, most of what they do can be—and has been—successfully automated. Since there simply isn’t that much work for them to do, my guess is that most of the time they spend “working” 60-80 hour weeks is not actually spent working, but sitting around pretending to work. Since most financial forecasters are outperformed by a simple diversified portfolio, the most profitable action for most stock analysts to take most of the time would be nothing.

It may also be that stockbrokers work hard at sales—trying to convince people to buy and sell for bad reasons in order to earn sales commissions. This would at least explain why they work so many hours, though it would make it even harder to believe that what they do benefits society. So if we imagine our “ideal” stockbroker who makes the world a better place, I think they mostly just use a simple algorithm and maybe adjust it every month or two. They make better returns than their peers, but spend 38 hours a week goofing off.

There is a massive glut of stockbrokers. This is what it looks like when a civilization is really optimized to be good at something.

The second is soldiers. Say what you will about them, no one can dispute that their job has stakes of life and death. A lot of people seem to think that the world would be better off without them, but that’s at best only true if everyone got rid of them; if you don’t have soldiers but other countries do, you’re going to be in big trouble. (“We’ll beat our swords into liverwurst / Down by the East Riverside; / But no one wants to be the first!”) So unless and until we can solve that mother of all coordination problems, we need to have soldiers around.

What is life like for a soldier? Well, they don’t seem overpaid; if anything, underpaid. (Maybe some of the officers are overpaid, but clearly not most of the enlisted personnel. Part of the problem there is that “pay grade” is nearly synonymous with “rank”—it’s a primate hierarchy, not a rational wage structure. Then again, so are most industries; the military just makes it more explicit.) But there do seem to be enough of them. Military officials may lament “shortages” of soldiers, but they never actually seem to want for troops to deploy when they really need them. And if a major war really did start that required all available manpower, the draft could be reinstated and then suddenly they’d have it—the authority to coerce compliance is precisely how you can avoid having a shortage while keeping your workers underpaid. (Russia’s soldier shortage is genuine—something about being utterly outclassed by your enemy’s technological superiority in an obviously pointless imperialistic war seems to hurt your recruiting numbers.)

What is life like for a typical soldier? The answer may surprise you. The overwhelming answer in surveys and interviews (which also fits with the experiences I’ve heard about from friends and family in the military) is that life as a soldier is boring. “All you do is wake up in the morning and push rubbish around camp.” “Bosnia was scary for about 3 months. After that it was boring. That is pretty much day to day life in the military. You are bored.”

This isn’t new, nor even an artifact of not being in any major wars: Union soldiers in the US Civil War had the same complaint. Even in World War I, a typical soldier spent only half the time on the front, and when on the front only saw combat 1/5 of the time. War is boring.

In other words, there is a massive glut of soldiers. Most of them don’t even know what to do with themselves most of the time.

This makes perfect sense. Why? Because an army needs to be resilient. And to be resilient, you must be redundant. If you only had exactly enough soldiers to deploy in a typical engagement, you’d never have enough for a really severe engagement. If on average you had enough, that means you’d spend half the time with too few. And the costs of having too few soldiers are utterly catastrophic.

This is probably an evolutionary outcome, in fact; civilizations may have tried to have “leaner” militaries that didn’t have so much redundancy, and those civilizations were conquered by other civilizations that were more profligate. (This is not to say that we couldn’t afford to cut military spending at all; it’s one thing to have the largest military in the world—I support that, actually—but quite another to have more than the next 10 combined.)

What’s the policy solution here? It’s actually pretty simple.

Pay nurses and EMTs more. A lot more. Whatever it takes to get to the point where we not only have enough, but have so many people lining up to join we don’t even know what to do with them all. If private healthcare firms won’t do it, force them to—or, all the more reason to nationalize healthcare. The stakes are far too high to leave things as they are.

Would this be expensive? Sure.

Removing the shortage of EMTs wouldn’t even be that expensive. There are only about 260,000 EMTs in the US, and they get paid the appallingly low median salary of $36,000. That means we’re currently spending only about $9 billion per year on EMTs. We could double their salaries and double their numbers for only an extra $27 billion—about 0.1% of US GDP.

Nurses would cost more. There are about 5 million nurses in the US, with an average salary of about $78,000, so we’re currently spending about $390 billion a year on nurses. We probably can’t afford to double both salary and staffing. But maybe we could increase both by 20%, costing about an extra $170 billion per year.

Altogether that would cost about $200 billion per year. To save one hundred thousand lives.

That’s $2 million per life saved, or about $40,000 per QALY. The usual estimate for the value of a statistical life is about $10 million, and the usual threshold for a cost-effective medical intervention is $50,000-$100,000 per QALY; so we’re well under both. This isn’t as efficient as buying malaria nets in Africa, but it’s more efficient than plenty of other things we’re spending on. And this isn’t even counting additional benefits of better care that go beyond lives saved.
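
If you want to check the arithmetic above, here is a minimal sketch in Python. All the inputs—headcounts, salaries, the 100,000-lives figure, and the 50-QALY-per-life convention—are the rough estimates quoted in the text, not precise data.

```python
# Back-of-the-envelope check of the EMT/nurse arithmetic above.
# All inputs are the rough estimates from the text, not precise data.

emt_count, emt_salary = 260_000, 36_000          # ~$9 billion/year currently
nurse_count, nurse_salary = 5_000_000, 78_000    # ~$390 billion/year currently

emt_current = emt_count * emt_salary
emt_extra = (2 * emt_count) * (2 * emt_salary) - emt_current            # double pay and staffing

nurse_current = nurse_count * nurse_salary
nurse_extra = (1.2 * nurse_count) * (1.2 * nurse_salary) - nurse_current  # +20% pay and staffing

total_extra = emt_extra + nurse_extra            # ~$200 billion/year
lives_saved = 100_000                            # the estimate used in the text
qaly_per_life = 50

print(f"Extra spending: ${total_extra/1e9:.0f} billion/year")
print(f"Cost per life saved: ${total_extra/lives_saved/1e6:.1f} million")
print(f"Cost per QALY: ${total_extra/(lives_saved*qaly_per_life):,.0f}")
```

Running it gives roughly $200 billion per year, $2 million per life, and about $40,000 per QALY—the figures above.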

In fact if we nationalized US healthcare we could get more than these amounts in savings from not wasting our money on profits for insurance and drug companies—simply making the US healthcare system as cost-effective as Canada’s would save $6,000 per American per year, or a whopping $1.9 trillion. At that point we could double the number of nurses and their salaries and still be spending less.

No, it’s not because nurses and doctors are paid much less in Canada than the US. That’s true in some countries, but not Canada. The median salary for nurses in Canada is about $95,500 CAD, which is $71,000 US at current exchange rates. Doctors in Canada can make anywhere from $80,000 to $400,000 CAD, which is $60,000 to $300,000 US. Nor are healthcare outcomes in Canada worse than the US; if anything, they’re better, as Canadians live an average of four years longer than Americans. No, the radical difference in cost—a factor of 2 to 1—between Canada and the US comes from privatization. Privatization is supposed to make things more efficient and lower costs, but it has absolutely not done that in US healthcare.

And if our choice is between spending more money and letting hundreds of thousands or millions of people die every year, that’s no choice at all.

Mind reading is not optional

Nov 20 JDN 2459904

I have great respect for cognitive-behavioral therapy (CBT), and it has done a lot of good for me. (It is also astonishingly cost-effective; its QALY per dollar rate compares favorably to almost any other First World treatment, and loses only to treating high-impact Third World diseases like malaria and schistosomiasis.)

But there are certain aspects of it that have always been frustrating to me. Standard CBT techniques often present as ‘cognitive distortions’ what are in fact clearly necessary heuristics without which it would be impossible to function.

Perhaps the worst of these is so-called ‘mind reading’. The very phrasing of it makes it sound ridiculous: Are you suggesting that you have some kind of extrasensory perception? Are you claiming to be a telepath?

But in fact ‘mind reading’ is simply the use of internal cognitive models to forecast the thoughts, behaviors, and expectations of other human beings. And without it, it would be completely impossible to function in human society.

For instance, I have had therapists tell me that it is ‘mind reading’ for me to anticipate that people will have tacit expectations for my behavior that they will judge me for failing to meet, and I should simply wait for people to express their expectations rather than assuming them. I admit, life would be much easier if I could do that. But I know for a fact that I can’t. Indeed, I used to do that, as a child, and it got me in trouble all the time. People were continually upset at me for not doing things they had expected me to do but never bothered to actually mention. They thought these expectations were “obvious”; they were not, at least not to me.

It was often little things, and in hindsight some of these things seem silly: I didn’t know what a ‘made bed’ was supposed to look like, so I put it in a state that was functional for me, but that was not considered ‘making the bed’. (I have since learned that my way was actually better: It’s good to let sheets air out before re-using them.) I was asked to ‘clear the sink’, so I moved the dishes out of the sink and left them on the counter, not realizing that the implicit command was for me to wash those dishes, dry them, and put them away. I was asked to ‘bring the dinner plates to the table’, so I did that, and left them in a stack there, not realizing that I should be setting them out in front of each person’s chair and also bringing flatware. Of course I know better now. But how was I supposed to know then? It seems like I was expected to, though.

Most people just really don’t seem to realize how many subtle, tacit expectations are baked into every single task. I think neurodivergence is quite relevant here; I have a mild autism spectrum disorder, and so I think rather differently than most people. If you are neurotypical, then you probably can forecast other people’s expectations fairly well automatically, and so they may seem obvious to you. In fact, they may seem so obvious that you don’t even realize you’re doing it. Then when someone like me comes along and is consciously, actively trying to forecast other people’s expectations, and sometimes doing it poorly, you go and tell them to stop trying to forecast. But if they were to do that, they’d end up even worse off than they are. What you really need to be telling them is how to forecast better—but that would require insight into your own forecasting methods which you aren’t even consciously aware of.

Seriously, stop and think for a moment about all of the things other people expect you to do every day that are rarely if ever explicitly stated. How you are supposed to dress, how you are supposed to speak, how close you are supposed to stand to other people, how long you are supposed to hold eye contact—all of these are standards you will be expected to meet, whether or not any of them have ever been explicitly explained to you. You may do this automatically; or you may learn to do it consciously after being criticized for failing to do it. But one way or another, you must forecast what other people will expect you to do.

To my knowledge, no one has ever explicitly told me not to wear a Starfleet uniform to work. I am not aware of any part of the university dress code that explicitly forbids such attire. But I’m fairly sure it would not be a good idea. To my knowledge, no one has ever explicitly told me not to burst out into song in the middle of a meeting. But I’m still pretty sure I shouldn’t do that. To my knowledge, no one has ever explicitly told me what the ‘right of way’ rules are for walking down a crowded sidewalk, who should be expected to move out of the way of whom. But people still get mad if you mess up and bump into them.

Even when norms are stated explicitly, it is often as a kind of last resort, and the mere fact that you needed to have a norm stated is often taken as a mark against your character. I have been explicitly told in various contexts not to talk to myself or engage in stimming leg movements; but the way I was told has generally suggested that I would have been judged better if I hadn’t had to be told, if I had simply known the way that other people seem to know. (Or is it that they never felt any particular desire to stim?)

In fact, I think a major part of developing social skills and becoming more functional, to the point where a lot of people actually now seem a bit surprised to learn I have an autism spectrum disorder, has been improving my ability to forecast other people’s expectations for my behavior. There are dozens if not hundreds of norms that people expect you to follow at any given moment; most people seem to intuit them so easily that they don’t even realize they are there. But they are there all the same, and this is painfully evident to those of us who aren’t always able to immediately intuit them all.

Now, the fact remains that my current mental models are surely imperfect. I am often wrong about what other people expect of me. I’m even prepared to believe that some of my anxiety comes from believing that people have expectations more demanding than what they actually have. But I can’t simply abandon the idea of forecasting other people’s expectations. Don’t tell me to stop doing it; tell me how to do it better.

Moreover, there is a clear asymmetry here: If you think people want more from you than they actually do, you’ll be anxious, but people will like you and be impressed by you. If you think people want less from you than they actually do, people will be upset at you and look down on you. So, in the presence of uncertainty, there’s a lot of pressure to assume that the expectations are high. It would be best to get it right, of course; but when you aren’t sure you can get it right, you’re often better off erring on the side of caution—which is to say, the side of anxiety.

In short, mind reading isn’t optional. If you think it is, that’s only because you do it automatically.

Updating your moral software

Oct 23 JDN 2459876

I’ve noticed an odd tendency among politically active people, particularly social media slacktivists (a term I do not use pejoratively: slacktivism is highly cost-effective). They adopt new ideas very rapidly, trying to stay on the cutting edge of moral and political discourse—and then they denigrate and disparage anyone who fails to do the same as an irredeemable monster.

This can take many forms, such as “if you don’t buy into my specific take on Critical Race Theory, you are a racist”, “if you have any uncertainty about the widespread use of puberty blockers you are a transphobic bigot”, “if you give any credence to the medical consensus on risks of obesity you are fatphobic”, “if you think disabilities should be cured you’re an ableist”, and “if you don’t support legalizing abortion in all circumstances you are a misogynist”.

My intention here is not to evaluate any particular moral belief, though I’ll say the following: I am skeptical of Critical Race Theory, especially the 1619 project, which seems to me to include substantial distortions of history. I am cautiously supportive of puberty blockers, because the medical data on their risks are ambiguous—while the sociological data on how much happier trans kids are when accepted are totally unambiguous. I am well aware of the medical data saying that the risks of obesity are overblown (but also not negligible, particularly for those who are very obese). Speaking as someone with a disability that causes me frequent, agonizing pain, yes, I want disabilities to be cured, thank you very much; accommodations are nice in the meantime, but the best long-term solution is to not need accommodations. (I’ll admit to some grey areas regarding certain neurodivergences such as autism and ADHD, and I would never want to force cures on people who don’t want them; but paralysis, deafness, blindness, diabetes, depression, and migraine are all absolutely worth finding cures for—the QALY at stake here are massive—and it’s silly to say otherwise.) I think abortion should generally be legal and readily available in the first trimester (which is when most abortions happen anyway), but much more strictly regulated thereafter—but denying it to children and rape victims is a human rights violation.

What I really want to talk about today is not the details of the moral belief, but the attitude toward those who don’t share it. There are genuine racists, transphobes, fatphobes, ableists, and misogynists in the world. There are also structural institutions that can lead to discrimination despite most of the people involved having no particular intention to discriminate. It’s worthwhile to talk about these things, and to try to find ways to fix them. But does calling anyone who disagrees with you a monster accomplish that goal?

This seems particularly bad precisely when your own beliefs are so cutting-edge. If you have a really basic, well-established sort of progressive belief like “hiring based on race should be illegal”, “women should be allowed to work outside the home” or “sodomy should be legal”, then people who disagree with you pretty much are bigots. But when you’re talking about new, controversial ideas, there is bound to be some lag; people who adopted the last generation’s—or even the last year’s—progressive beliefs may not yet be ready to accept the new beliefs, and that doesn’t make them bigots.

Consider this: Were you born believing in your current moral and political beliefs?

I contend that you were not. You may have been born intelligent, open-minded, and empathetic. You may have been born into a progressive, politically-savvy family. But the fact remains that any particular belief you hold about race, or gender, or ethics was something you had to learn. And if you learned it, that means that at some point you didn’t already know it. How would you have felt back then, if, instead of calmly explaining it to you, people called you names for not believing in it?

Now, perhaps it is true that as soon as you heard your current ideas, you immediately adopted them. But that may not be the case—it may have taken you some time to learn or change your mind—and even if it was, it’s still not fair to denigrate anyone who takes a bit longer to come around. There are many reasons why someone might not be willing to change their beliefs immediately, and most of them are not indicative of bigotry or deep moral failings.

It may be helpful to think about this in terms of updating your moral software. You were born with a very minimal moral operating system (emotions such as love and guilt, the capacity for empathy), and over time you have gradually installed more and more sophisticated software on top of that OS. If someone literally wasn’t born with the right OS—we call these people psychopaths—then, yes, you have every right to hate, fear, and denigrate them. But most of the people we’re talking about do have that underlying operating system, they just haven’t updated all their software to the same version as yours. It’s both unfair and counterproductive to treat them as irredeemably defective simply because they haven’t updated to the newest version yet. They have the hardware, they have the operating system; maybe their download is just a little slower than yours.

In fact, if you are very fast to adopt new, trendy moral beliefs, you may in fact be adopting them too quickly—they haven’t been properly vetted by human experience just yet. You can think of this as like a beta version: The newest update has some great new features, but it’s also buggy and unstable. It may need to be fixed before it is really ready for widespread release. If that’s the case, then people aren’t even wrong not to adopt them yet! It isn’t necessarily bad that you have adopted the new beliefs; we need beta testers. But you should be aware of your status as a beta tester and be prepared both to revise your own beliefs if needed, and also to cut other people slack if they disagree with you.

I understand that it can be immensely frustrating to be thoroughly convinced that something is true and important and yet see so many people disagreeing with it. (I am an atheist activist after all, so I absolutely know what that feels like.) I understand that it can be immensely painful to watch innocent people suffer because they have to live in a world where other people have harmful beliefs. But you aren’t changing anyone’s mind or saving anyone from harm by calling people names. Patience, tact, and persuasion will win the long game, and the long game is really all we have.

And if it makes you feel any better, the long game may not be as long as it seems. The arc of history may have tighter curvature than we imagine. We certainly managed a complete flip of the First World consensus on gay marriage in just a single generation. We may be able to achieve similarly fast social changes in other areas too. But we haven’t accomplished the progress we have so far by being uncharitable or aggressive toward those who disagree.

I am emphatically not saying you should stop arguing for your beliefs. We need you to argue for your beliefs. We need you to argue forcefully and passionately. But when doing so, try not to attack the people who don’t yet agree with you—for they are precisely the people we need to listen to you.

How we measure efficiency affects our efficiency

Jun 21 JDN 2459022

Suppose we are trying to minimize carbon emissions, and we can afford one of the two following policies to improve fuel efficiency:

  1. Policy A will replace 10,000 cars that average 25 MPG with hybrid cars that average 100 MPG.
  2. Policy B will replace 5,000 diesel trucks that average 5 MPG with turbocharged, aerodynamic diesel trucks that average 10 MPG.

Assume that both cars and trucks last about 100,000 miles (in reality this of course depends on a lot of factors), and diesel and gas pollute about the same amount per gallon (this isn’t quite true, but it’s close). Which policy should we choose?

It seems obvious: Policy A, right? 10,000 vehicles, each increasing efficiency by 75 MPG or a factor of 4, instead of 5,000 vehicles, each increasing efficiency by only 5 MPG or a factor of 2.

And yet—in fact the correct answer is definitely policy B, because the use of MPG has distorted our perception of what constitutes efficiency. We should have been using the inverse: gallons per hundred miles.

  1. Policy A will replace 10,000 cars that average 4 GPHM with cars that average 1 GPHM.
  2. Policy B will replace 5,000 trucks that average 20 GPHM with trucks that average 10 GPHM.

This means that policy A will save (10,000)(100,000/100)(4-1) = 30 million gallons, while policy B will save (5,000)(100,000/100)(20-10) = 50 million gallons.

A gallon of gasoline produces about 9 kg of CO2 when burned. This means that by choosing the right policy here, we’ll have saved 450,000 tons of CO2—or by choosing the wrong one we would only have saved 270,000.
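
Here is the same calculation as a short Python sketch, using exactly the assumptions stated above (100,000-mile vehicle lifetimes, about 9 kg of CO2 per gallon). Converting MPG to gallons per hundred miles first is what makes the comparison come out right.

```python
# Sketch of the fuel arithmetic above: 100,000-mile lifetimes,
# ~9 kg of CO2 per gallon burned (both from the text).

LIFETIME_MILES = 100_000
KG_CO2_PER_GALLON = 9

def gallons_saved(vehicles, mpg_old, mpg_new):
    """Lifetime gallons saved by replacing `vehicles` vehicles."""
    gphm_old = 100 / mpg_old   # gallons per hundred miles
    gphm_new = 100 / mpg_new
    return vehicles * (LIFETIME_MILES / 100) * (gphm_old - gphm_new)

policy_a = gallons_saved(10_000, 25, 100)   # 30 million gallons
policy_b = gallons_saved(5_000, 5, 10)      # 50 million gallons

for name, gallons in [("A", policy_a), ("B", policy_b)]:
    tons_co2 = gallons * KG_CO2_PER_GALLON / 1000   # metric tons
    print(f"Policy {name}: {gallons/1e6:.0f} million gallons, {tons_co2:,.0f} tons CO2")
```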

The simple choice of which efficiency measure to use when making our judgment—GPHM versus MPG—has had a profound effect on the real impact of our choices.

Let’s try applying the same reasoning to charities. Again suppose we can choose one of two policies.

  1. Policy C will move $10 million that currently goes to local community charities which can save one QALY for $1 million to medical-research charities that can save one QALY for $50,000.
  2. Policy D will move $10 million that currently goes to direct-transfer charities which can save one QALY for $1000 to anti-malaria net charities that can save one QALY for $800.

Policy C means moving funds from charities that are almost useless ($1 million per QALY!?) to charities that meet a basic notion of cost-effectiveness (most public health agencies in the First World have a standard threshold of about $50,000 or $100,000 per QALY).

Policy D means moving funds from charities that are already highly cost-effective to other charities that are only a bit more cost-effective. It almost seems pedantic to even concern ourselves with the difference between $1000 per QALY and $800 per QALY.

It’s the same $10 million either way. So, which policy should we pick?

If the lesson you took from the MPG example is that we should always be focused on increasing the efficiency of the least efficient, you’ll get the wrong answer. The correct answer is based on actually using the right measure of efficiency.

Here, it’s not dollars per QALY we should care about; it’s QALY per million dollars.

  1. Policy C will move $10 million from charities which get 1 QALY per million dollars to charities which get 20 QALY per million dollars.
  2. Policy D will move $10 million from charities which get 1000 QALY per million dollars to charities which get 1250 QALY per million dollars.

Multiply that out, and policy C will gain (10)(20-1) = 190 QALY, while policy D will gain (10)(1250-1000) = 2500 QALY. Assuming that “saving a life” means about 50 QALY, this is the difference between saving 4 lives and saving 50 lives.
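
The same multiplication, as a small Python sketch; the cost-per-QALY figures are the hypothetical ones from the example, and the 50-QALY-per-life figure is the rough convention used throughout these posts.

```python
# Sketch of the charity arithmetic above.  Cost-per-QALY figures are the
# hypothetical ones from the example, converted to QALY per million dollars.

BUDGET_MILLIONS = 10
QALY_PER_LIFE = 50   # rough convention used in the text

def qaly_gained(budget_millions, cost_per_qaly_old, cost_per_qaly_new):
    """QALY gained by moving a budget from one charity to another."""
    per_million_old = 1_000_000 / cost_per_qaly_old
    per_million_new = 1_000_000 / cost_per_qaly_new
    return budget_millions * (per_million_new - per_million_old)

policy_c = qaly_gained(BUDGET_MILLIONS, 1_000_000, 50_000)  # 190 QALY
policy_d = qaly_gained(BUDGET_MILLIONS, 1_000, 800)         # 2,500 QALY

for name, qaly in [("C", policy_c), ("D", policy_d)]:
    print(f"Policy {name}: {qaly:,.0f} QALY ≈ {qaly/QALY_PER_LIFE:.0f} lives")
```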

My intuition actually failed me on this one; before I actually did the math, I had assumed that it would be far more important to move funds from utterly useless charities to ones that meet a basic standard. But it turns out that it’s actually far more important to make sure that the funds being targeted at the most efficient charities are really the most efficient—even apparently tiny differences matter a great deal.

Of course, if we can move that $10 million from the useless charities to the very best charities, that’s the best of all; it would save (10)(1250-1) = 12,490 QALY. This is nearly 250 lives.

In the fuel economy example, there’s no feasible way to upgrade a semitrailer to get 100 MPG. If we could, we totally should; but nobody has any idea how to do that. Even an electric semi probably won’t be that efficient, depending on how the grid produces electricity. (Obviously if the grid were all nuclear, wind, and solar, it would be; but very few places are like that.)

But when we’re talking about charities, this is just money; it is by definition fungible. So it is absolutely feasible in an economic sense to get all the money currently going towards nearly-useless charities like churches and museums and move that money directly toward high-impact charities like anti-malaria nets and vaccines.

Then again, it may not be feasible in a practical or political sense. Someone who currently donates to their local church may simply not be motivated by the same kind of cosmopolitan humanitarianism that motivates Effective Altruism. They may care more about supporting their local community, or be motivated by genuine religious devotion. This isn’t even inherently a bad thing; nobody is a cosmopolitan in everything they do, nor should they be—we have good reasons to care more about our own friends, family, and community than we do about random strangers in foreign countries thousands of miles away. (And while I’m fairly sure Jesus himself would have been an Effective Altruist if he’d been alive today, I’m well aware that most Christians aren’t—and this doesn’t make them “false Christians”.) There might be some broader social or cultural change that could make this happen—but it’s not something any particular person can expect to accomplish.

Whereas, getting people who are already Effective Altruists giving to efficient charities to give to a slightly more efficient charity is relatively easy: Indeed, it’s basically the whole purpose for which GiveWell exists. And there are analysts working at GiveWell right now whose job it is to figure out exactly which charities yield the most QALY per dollar and publish that information. One person doing that job even slightly better can save hundreds or even thousands of lives.

Indeed, I’m seriously considering applying to be one myself—it sounds both more pleasant and more important than anything I’d be likely to get in academia.

The cost of illness

Feb 2 JDN 2458882

As I write this I am suffering from some sort of sinus infection, most likely some strain of rhinovirus. So far it has just been basically a bad cold, so there isn’t much to do aside from resting and waiting it out. But it did get me thinking about healthcare—we’re so focused on the costs of providing it that we often forget the costs of not providing it.

The United States is the only First World country without a universal healthcare system. It is not a coincidence that we also have some of the highest rates of preventable mortality and burden of disease.

We in the United States spend about $3.5 trillion per year on healthcare, the most of any country in the world, even as a proportion of GDP. Yet this is not the cost of disease; this is how much we were willing to pay to avoid the cost of disease. Whatever harm that would have been caused without all that treatment must actually be worth more than $3.5 trillion to us—because we paid that much to avoid it.

Globally, the disease burden is about 30,000 disability-adjusted life-years (DALY) per 100,000 people per year—that is to say, the average person is about 30% disabled by disease. I’ve spoken previously about quality-adjusted life years (QALY); the two measures take slightly different approaches to the same overall goal, and are largely interchangeable for most purposes.

Of course this result relies upon the disability weights; it’s not so obvious how we should be comparing across different conditions. How many years of normal life would you be willing to trade to avoid ten years of Alzheimer’s? But it’s probably not too far off to say that if we could somehow wave a magic wand and cure all disease, we would really increase our GDP by something like 30%. This would be over $6 trillion in the US, and over $26 trillion worldwide.
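
To make that back-of-the-envelope step explicit, here is the multiplication as a tiny Python sketch. The GDP figures are my assumptions (roughly the 2019 values), not numbers stated above.

```python
# Rough sketch of the "30% disabled by disease" arithmetic.
# GDP figures are assumptions (roughly 2019 values), not from the text.

DISEASE_BURDEN = 0.30     # ~30,000 DALY per 100,000 people per year
US_GDP = 21e12            # assumed: ~$21 trillion
WORLD_GDP = 87e12         # assumed: ~$87 trillion

print(f"US: ~${DISEASE_BURDEN * US_GDP / 1e12:.1f} trillion")
print(f"World: ~${DISEASE_BURDEN * WORLD_GDP / 1e12:.1f} trillion")
```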

Of course, we can’t actually do that. But we can ask what kinds of policies are most likely to promote health in a cost-effective way.

Unsurprisingly, the biggest improvements to be made are in the poorest countries, where it can be astonishingly cheap to improve health. Malaria prevention has a cost of around $30 per DALY—by donating to the Against Malaria Foundation you can buy a year of life for less than the price of a new video game. Compare this to the standard threshold in the US of $50,000 per QALY: Targeting healthcare in the poorest countries can increase cost-effectiveness a thousandfold. In humanitarian terms, it would be well worth diverting spending from our own healthcare to provide public health interventions in poor countries. (Fortunately, we have even better options than that, like raising taxes on billionaires or diverting military spending instead.)

We in the United States spend about twice as much (per person per year) on healthcare as other First World countries. Are our health outcomes twice as good? Clearly not. Are they any better at all? That really isn’t clear. We certainly don’t have a particularly high life expectancy. We spend more on administrative costs than we do on preventative care—unlike every other First World country except Australia. Almost all of our drugs and therapies are more expensive here than they are everywhere else in the world.

The obvious answer here is to make our own healthcare system more like those of other First World countries. There are a variety of universal health care systems in the world that we could model ourselves on, ranging from the single-payer government-run system in the UK to the universal mandate system of Switzerland. The amazing thing is that it almost doesn’t matter which one we choose: We could copy basically any other First World country and get better healthcare for less spending. Obamacare was in many ways similar to the Swiss system, but we never fully implemented it and the Republicans have been undermining it every way they can. Under President Trump, they have made significant progress in undermining it, and as a result, there are now 3 million more Americans without health insurance than there were before Trump took office. The Republican Party is intentionally increasing the harm of disease.

Valuing harm without devaluing the harmed

June 9 JDN 2458644

In last week’s post I talked about the matter of “putting a value on a human life”. I explained how we don’t actually need to make a transparently absurd statement like “a human life is worth $5 million” to do cost-benefit analysis; we simply need to ask ourselves what else we could do with any given amount of money. We don’t actually need to put a dollar value on human lives; we need only value them in terms of other lives.

But there is a deeper problem to face here, which is how we ought to value not simply life, but quality of life. The notion is built into the concept of quality-adjusted life-years (QALY), but how exactly do we make such a quality adjustment?

Indeed, much like cost-benefit analysis in general or the value of a statistical life, the very concept of QALY can be repugnant to many people. The problem seems to be that it violates our deeply-held belief that all lives are of equal value: If I say that saving one person adds 2.5 QALY and saving another adds 68 QALY, I seem to be saying that the second person is worth more than the first.

But this is not really true. QALY aren’t associated with a particular individual. They are associated with the duration and quality of life.

It should be fairly easy to convince yourself that duration matters: Saving a newborn baby who will go on to live to be 84 years old adds an awful lot more in terms of human happiness than extending the life of a dying person by a single hour. To call each of these things “saving a life” is actually very unequal: It’s implying that 1 hour for the second person is worth 84 years for the first.

Quality, on the other hand, poses much thornier problems. Presumably, we’d like to be able to say that being wheelchair-bound is a bad thing, and if we can make people able to walk we should want to do that. But this means that we need to assign some sort of QALY cost to being in a wheelchair, which then seems to imply that people in wheelchairs are worth less than people who can walk.

And the same goes for any disability or disorder: Assigning a QALY cost to depression, or migraine, or cystic fibrosis, or diabetes, or blindness, or pneumonia, always seems to imply that people with the condition are worth less than people without. This is a deeply unsettling result.

Yet I think the mistake is in how we are using the concept of “worth”. We are not saying that the happiness of someone with depression is less important than the happiness of someone without; we are saying that the person with depression experiences less happiness—which, in this case of depression especially, is basically true by construction.

Does this imply, however, that if we are given the choice between saving two people, one of whom has a disability, we should save the one without?

Well, here’s an extreme example: Suppose there is a plague which kills 50% of its victims within one year. There are two people in a burning building. One of them has the plague, the other does not. You only have time to save one: Which do you save? I think it’s quite obvious you save the person who doesn’t have the plague.

But that only relies upon duration, which wasn’t so difficult. All right, fine; say the plague doesn’t kill you. Instead, it renders you paralyzed and in constant pain for the rest of your life. Is it really that far-fetched to say that we should save the person who won’t have that experience?

We really shouldn’t think of it as valuing people; we should think of it as valuing actions. QALY are a way of deciding which actions we should take, not which people are more important or more worthy. “Is a person who can walk worth more than a person who needs a wheelchair?” is a fundamentally bizarre and ultimately useless question. ‘Worth more’ in what sense? “Should we spend $100 million developing this technology that will allow people who use wheelchairs to walk?” is the question we should be asking. The QALY cost we assign to a condition isn’t about how much people with that condition are worth; it’s about what resources we should be willing to commit in order to treat that condition. If you have a given condition, you should want us to assign a high QALY cost to it, to motivate us to find better treatments.

I think it’s also important to consider which individuals are having QALY added or subtracted. In last week’s post I talked about how some people read “the value of a statistical life is $5 million” to mean “it’s okay to kill someone as long as you profit at least $5 million”; but this doesn’t follow at all. We don’t say that it’s all right to steal $1,000 from someone just because they lose $1,000 and you gain $1,000. We wouldn’t say it was all right if you had a better investment strategy and would end up with $1,100 afterward. We probably wouldn’t even say it was all right if you were much poorer and desperate for the money (though then we might at least be tempted). If a billionaire kills people to make $10 million each (sadly I’m quite sure that oil executives have killed for far less), that’s still killing people. And in fact since he is a billionaire, his marginal utility of wealth is so low that his value of a statistical life isn’t $5 million; it’s got to be in the billions. So the net happiness of the world has not increased, in fact.

Above all, it’s vital to appreciate the benefits of doing good cost-benefit analysis. Cost-benefit analysis tells us to stop fighting wars. It tells us to focus our spending on medical research and foreign aid instead of yet more corporate subsidies or aircraft carriers. It tells us how to allocate our public health resources so as to save the most lives. It emphasizes how vital our environmental regulations are in making our lives better and longer.

Could we do all these things without QALY? Maybe—but I suspect we would not do them as well, and when millions of lives are on the line, “not as well” is thousands of innocent people dead. Sometimes we really are faced with two choices for a public health intervention, and we need to decide which one will help the most people. Sometimes we really do have to set a pollution target, and decide just what amount of risk is worth accepting for the economic benefits of industry. These are very difficult questions, and without good cost-benefit analysis we could get the answers dangerously wrong.