How Effective Altruism hurt me

May 12 JDN 2460443

I don’t want this to be taken the wrong way. I still strongly believe in the core principles of Effective Altruism. Indeed, it’s shockingly hard to deny them, because basically they come out to this:

Doing more good is better than doing less good.

Then again, most people want to do good. Basically everyone agrees that more good is better than less good. So what’s the big deal about Effective Altruism?

Well, in practice, most people put shockingly little effort into trying to ensure that they are doing the most good they can. A lot of people just try to be nice people, without ever concerning themselves with the bigger picture. Many of these people don’t give to charity at all.

Then there are the people who do give to charity, but typically give more or less at random—or worse, in proportion to how much mail those charities send them begging for donations. (Surely you can see how that is a perverse incentive?) They donate to religious organizations, which sometimes do good things, but fundamentally are founded upon ignorance, patriarchy, and lies.

Effective Altruism is a movement intended to fix this, to get people to see the bigger picture and focus their efforts on where they will do the most good. Vet charities not just for their honesty, but also their efficiency and cost-effectiveness:

Just how many mQALY can you buy with that $1?

That part I still believe in. There is a lot of value in assessing which charities are the most effective, and trying to get more people to donate to those high-impact charities.

But there is another side to Effective Altruism, which I now realize has severely damaged my mental health.

That is the sense of obligation to give as much as you possibly can.

Peter Singer is the most extreme example of this. He seems to have mellowed—a little—in more recent years, but in some of his most famous books he uses the following thought experiment:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.

I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.

Basically everyone agrees with this particular decision: Even if you are wearing a very expensive suit that will be ruined, even if you’ll miss something really important like a job interview or even a wedding—most people agree that if you ever come across a drowning child, you should save them.

(Oddly enough, when contemplating this scenario, nobody ever seems to consider the advice that most lifeguards give, which is to throw a life preserver and then go find someone qualified to save the child—because saving someone who is drowning is a lot harder and a lot riskier than most people realize. (“Reach or throw, don’t go.”) But that’s a bit beside the point.)

But Singer argues that we are basically in this position all the time. For somewhere between $500 and $3000, you—yes, you—could donate to a high-impact charity, and thereby save a child’s life.

Does it matter that many other people are better positioned to donate than you are? Does it matter that the child is thousands of miles away and you’ll never see them? Does it matter that there are actually millions of children, and you could never save them all by yourself? Does it matter that you’ll only save a child in expectation, rather than saving some specific child with certainty?

Singer says that none of this matters. For a long time, I believed him.

Now, I don’t.

For, if you actually walked by a drowning child that you could save, only at the cost of missing a wedding and ruining your tuxedo, you clearly should save them. (If it would risk your life, maybe not—and as I alluded to earlier, that’s more likely than you might imagine.) If you wouldn’t, there’s something wrong with you. You’re a bad person.

But most people don’t donate everything they could to high-impact charities. Even Peter Singer himself doesn’t. So if donating is the same as saving the drowning child, it follows that we are all bad people.

(Note: In general, if an ethical theory results in the conclusion that the whole of humanity is evil, there is probably something wrong with that ethical theory.)

Singer has tried to get out of this by saying we shouldn’t “sacrifice things of comparable importance”, and then somehow cash out what “comparable importance” means in such a way that it doesn’t require you to live on the street and eat scraps from trash cans. (Even though the people you’d be donating to largely do live that way.)

I’m not sure that really works, but okay, let’s say it does. Even so, it’s pretty clear that anything you spend money on purely for enjoyment would have to go. You would never eat out at restaurants, unless you could show that the time saved allowed you to get more work done and therefore donate more. You would never go to movies or buy video games, unless you could show that it was absolutely necessary for your own mental functioning. Your life would be work, work, work, then donate, donate, donate, and then do the absolute bare minimum to recover from working and donating so you can work and donate some more.

You would enslave yourself.

And all the while, you’d believe that you were never doing enough, you were never good enough, you are always a terrible person because you try to cling to any personal joy in your own life rather than giving, giving, giving all you have.

I now realize that Effective Altruism, as a movement, had been basically telling me to do that. And I’d been listening.

I now realize that Effective Altruism has given me this voice in my head, which I hear whenever I want to apply for a job or submit work for publication:

If you try, you will probably fail. And if you fail, a child will die.

The “if you try, you will probably fail” part is just an objective fact. It’s inescapable. Any given job application or writing submission will probably fail.

Yes, maybe there’s some sort of bundling we could do to reframe that, as I discussed in an earlier post. But basically, this is correct, and I need to accept it.

Now, what about the second part? “If you fail, a child will die.” To most of you, that probably sounds crazy. And it is crazy. It’s way more pressure than any ordinary person should have in their daily life. This kind of pressure should be reserved for neurosurgeons and bomb squads.

But this is essentially what Effective Altruism taught me to believe. It taught me that every few thousand dollars I don’t donate is a child I am allowing to die. And since I can’t donate what I don’t have, it follows that every few thousand dollars I fail to get is another dead child.

And since Effective Altruism is so laser-focused on results above all else, it taught me that it really doesn’t matter whether I apply for the job and don’t get it, or never apply at all; the outcome is the same, and that outcome is that children suffer and die because I had no money to save them.

I think part of the problem here is that Effective Altruism is utilitarian through and through, and utilitarianism has very little place for good enough. There is better and there is worse; but there is no threshold at which you can say that your moral obligations are discharged and you are free to live your life as you wish. There is always more good that you could do, and therefore always more that you should do.

Do we really want to live in a world where to be a good person is to owe your whole life to others?

I do not believe in absolute selfishness. I believe that we owe something to other people. But I no longer believe that we owe everything. Sacrificing my own well-being at the altar of altruism has been incredibly destructive to my mental health, and I don’t think I’m the only one.

By all means, give to high-impact charities. But give a moderate amount—at most, tithe—and then go live your life. You don’t owe the world more than that.

Time and How to Use It

Nov 5 JDN 2460254

A review of Four Thousand Weeks by Oliver Burkeman

The central message of Four Thousand Weeks: Time and How to Use It seems so obvious in hindsight that it’s difficult to understand why it feels so new and unfamiliar. It’s a much-needed reaction to the obsessive culture of “efficiency” and “productivity” that dominates the self-help genre. Its core message is remarkably simple:

You don’t have time to do everything you want, so stop trying.

I actually think Burkeman understands the problem incorrectly. He argues repeatedly that it is our mortality which makes our lives precious—that it is because we only get four thousand weeks of life that we must use our time well. But this strikes me as just yet more making excuses for the dragon.

Our lives would not be less precious if we lived a thousand years or a million. Indeed, our time would hardly be any less scarce! You still can’t read every book ever written if you live a million years—for every one of those million years, another 500,000 books will be published. You could visit every one of the 10,000 cities in the world, surely; but if you spend a week in each one, by the time you get back to Paris for a second visit, centuries will have passed—I must imagine you’ll have missed quite a bit of change in that time. (And this assumes that our population remains the same—do we really think it would, if humans could live a million years?)

Even a truly immortal being that will live until the end of time needs to decide where to be at 7 PM this Saturday.

Yet Burkeman does grasp—and I fear that too many of us do not—that our time is precious, and when we try to do everything that seems worth doing, we end up failing to prioritize what really matters most.

What do most of us spend most of our lives doing? Whatever our bosses tell us to do. Aside from sleeping, the activity that human beings spend the largest chunk of their lives on is working.

This has made us tremendously, mind-bogglingly productive—our real GDP per capita is four times what it was in just 1950, and about eight times what it was in the 1920s. Projecting back further than that is a bit dicier, but assuming even 1% annual growth, it should be about twenty times what it was at the dawn of the Industrial Revolution. We could surely live better than medieval peasants did by working only a few hours per week; yet in fact on average we work more hours than they did—by some estimates, nearly twice as much. Rather than getting the same wealth for 5% of the work, or twice the wealth for 10%, we chose to get 40 times the wealth for twice the work.
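The compounding arithmetic behind these multiples is easy to check. Here is a minimal sketch in Python; the growth rates and year counts are illustrative round numbers of my own, not figures from any official series:

```python
def growth_factor(annual_rate, years):
    """Cumulative growth from compounding a fixed annual rate."""
    return (1 + annual_rate) ** years

# About 1.9% average annual growth sustained since 1950
# compounds to roughly the fourfold figure cited above:
print(round(growth_factor(0.019, 73), 1))   # ≈ 4.0

# And a mere 1% sustained for three centuries compounds to
# roughly the twentyfold figure above:
print(round(growth_factor(0.01, 300), 1))   # ≈ 19.8
```

The point of the sketch is just that small rates sustained for a long time produce enormous cumulative gains, which is why the long-run projection is so much larger than the post-1950 one.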

It would be one thing if all this wealth and productivity actually seemed to make us happy. But does it?

Our physical health is excellent: We are tall, we live long lives—we are smarter, even, than people of the not-so-distant past. We have largely conquered disease as the ancients knew it. Even a ‘catastrophic’ global pandemic today kills a smaller share of the population than would die in a typical year from disease in ancient times. Even many of our most common physical ailments, such as obesity, heart disease, and diabetes, are more symptoms of abundance than poverty. Our higher rates of dementia and cancer are largely consequences of living longer lives—most medieval peasants simply didn’t make it long enough to get Alzheimer’s. I wonder sometimes how ancient people dealt with other common ailments such as migraine and sleep apnea; but my guess is that they basically just didn’t—since treatment was impossible, they learned to live with it. Maybe they consoled themselves with whatever placebo treatments the healers of their local culture offered.

Yet our mental health seems to be no better than ever—and depending on how you measure it, may actually be getting worse over time. Some of the measured increase is surely due to more sensitive diagnosis; but some of it may be a genuine increase—especially as a result of the COVID pandemic. I wasn’t able to find any good estimates of rates of depression or anxiety disorders in ancient or medieval times, so I guess I really can’t say whether this is a problem that’s getting worse. But it sure doesn’t seem to be getting better. We clearly have not solved the problem of depression the way we have solved the problem of infectious disease.

Burkeman doesn’t tell us to all quit our jobs and stop working. But he does suggest that if you are particularly unhappy at your current job (as I am), you may want to quit it and begin searching for something else (as I have). He reminds us that we often get stuck in a particular pattern and underestimate the possibilities that may be available to us.

And he has advice for those who want to stay in their current jobs, too: Do less. Don’t take on everything that is asked of you. Don’t work yourself to the bone. The rewards for working harder are far smaller than our society will tell you, and the costs of burning out are far higher. Do the work that is genuinely most important, and let the rest go.

Unlike most self-help books, Four Thousand Weeks offers very little in the way of practical advice. It’s more like a philosophical treatise, exhorting you to adopt a whole new outlook on time and how you use it. But he does offer a little bit of advice, near the end of the book, in “Ten Tools for Embracing Your Finitude” and “Five Questions”.

The ten tools are as follows:


Adopt a ‘fixed volume’ approach to productivity. Limit the number of tasks on your to-do list. Set aside a particular amount of time for productive work, and work only during that time.

I am relatively good at this one; I work only during certain hours on weekdays, and I resist the urge to work other times.

Serialize, serialize, serialize. Do one major project at a time.

I am terrible at this one; I constantly flit between different projects, leaving most of them unfinished indefinitely. But I’m not entirely convinced I’d do better trying to focus on one in particular. I switch projects because I get stalled on the current one, not because I’m anxious about not doing the others. Unless I can find a better way to break those stalls, switching projects still gets more done than staying stuck on the same one.

Decide in advance what to fail at. Prioritize your life and accept that some things will fail.

We all, inevitably, fail to achieve everything we want to. What Burkeman is telling us to do is choose in advance which achievements we will fail at. Ask yourself: How much do you really care about keeping the kitchen clean and the lawn mowed? If you’re doing these things to satisfy other people’s expectations but you don’t truly care about them yourself, maybe you should just accept that people will frown upon you for your messy kitchen and overgrown lawn.

Focus on what you’ve already completed, not just on what’s left to complete. Make a ‘done list’ of tasks you have completed today—even small ones like “brushed teeth” and “made breakfast”—to remind yourself that you do in fact accomplish things.

I may try this one for a while. It feels a bit hokey to congratulate yourself on making breakfast—but when you are severely depressed, even small tasks like that can in fact feel like an ordeal.

Consolidate your caring. Be generous and kind, but pick your battles.

I’m not very good at this one either. Spending less time on social media has helped; I am no longer bombarded quite so constantly by worthy causes and global crises. Yet I still have a vague sense that I am not doing enough, that I should be giving more of myself to help others. For me this is partly colored by a feeling that I have failed to build a career that would have both allowed me to have direct impact on some issues and also made enough money to afford large donations.

Embrace boring and single-purpose technology. Downgrade your technology to reduce distraction.

I don’t do this one, but I also don’t see it as particularly good advice. Maybe taking Facebook and (the-platform-formerly-known-as-) Twitter off your phone home screen is a good idea. But the reason you go to social media isn’t that they are so easy to access. It’s that you are expected to, and that you try to use them to fill some kind of need in your life—though it’s unclear they ever actually fill it.

Seek out novelty in the mundane. Cultivate awareness and appreciation of the ordinary things around you.

This one is basically a stripped-down meditation technique. It does work, but it’s also a lot harder to do than most people seem to think. It is especially hard to do when you are severely depressed. One technique I’ve learned from therapy that is surprisingly helpful is to replace “I have to” with “I get to” whenever you can: You don’t have to scoop cat litter, you get to because you have an adorable cat. You don’t have to catch the bus to work, you get to because you have a job. You don’t have to make breakfast for your family, you get to because you have a loving family.

Be a ‘researcher’ in relationships. Cultivate curiosity rather than anxiety or judgment.

Human beings are tremendously varied and often unpredictable. If you worry about whether or not people will do what you want, you’ll be constantly worried. And I have certainly been there. It can help to try to take a stance of detachment, where you concern yourself less with getting the right outcome and more with learning about the people you are with. I think this can be taken too far—you can become totally detached from relationships, or you could put yourself in danger by failing to pass judgment on obviously harmful behaviors—but in moderation, it’s surprisingly powerful. The first time I ever enjoyed going to a nightclub (at my therapist’s suggestion), I went as a social scientist, tasked with observing and cataloguing the behavior around me. I still didn’t feel fully integrated into the environment (and the music was still too damn loud!), but for once, I wasn’t anxious and miserable.

Cultivate instantaneous generosity. If you feel like doing something good for someone, just do it.

I’m honestly not sure whether this one is good advice. I used to follow it much more than I do now. Interacting with the Effective Altruism community taught me to temper these impulses, and instead of giving to every random charity or homeless person that asks for money, instead concentrate my donations into a few highly cost-effective charities. Objectively, concentrating donations in this way produces a larger positive impact on the world. But subjectively, it doesn’t feel as good, it makes people sad, and sometimes it can make you feel like a very callous person. Maybe there’s a balance to be had here: Give a little when the impulse strikes, but save up most of it for the really important donations.

Practice doing nothing.

This one is perhaps the most subversive, the most opposed to all standard self-help advice. Do nothing? Just rest? How can you say such a thing, when you just reminded us that we have only four thousand weeks to live? Yet this is in fact the advice most of us need to hear. We burn ourselves out because we forget how to rest.

I am also terrible at this one. I tend to get most anxious when I have between 15 and 45 minutes of free time before an activity, because 45 minutes doesn’t feel long enough to do anything, and 15 minutes feels too long to do nothing. Logically this doesn’t really make sense: Either you have time to do something, or you don’t. But it can be hard to find good ways to fill that sort of interval, because it requires the emotional overhead of starting and stopping a task.

Then, there are the five questions:

Where in your life or work are you currently pursuing comfort, when what’s called for is a little discomfort?

It seems odd to recommend discomfort as a goal, but I think what Burkeman is getting at is that we tend to get stuck in the comfortable and familiar, even when we would be better off reaching out and exploring into the unknown. I know that for me, finally deciding to quit this job was very uncomfortable; it required taking a big risk and going outside the familiar and expected. But I am now convinced it was the right decision.

Are you holding yourself to, and judging yourself by, standards of productivity or performance that are impossible to meet?

In a word? Yes. I’m sure I am. But this one is also slipperier than it may seem—for how do we really know what’s possible? And possible for whom? If you see someone else who seems to be living the life you think you want, is it just an illusion? Are they really suffering as badly as you? Or do they perhaps have advantages you don’t, which made it possible for them, but not for you? When people say they work 60 hours per week and you can barely manage 20, are they lying? Are you truly not investing enough effort? Or do you suffer from ailments they don’t, which make it impossible for you to commit those same hours?

In what ways have you yet to accept the fact that you are who you are, not the person you think you ought to be?

I think most of us have a lot of ways that we fail to accept ourselves: physically, socially, psychologically. We are never the perfect beings we aspire to be. And constantly aspiring to an impossible ideal will surely drain you. But I also fear that self-acceptance could be a dangerous thing: What if it makes us stop striving to improve? What if we could be better than we are, but we don’t bother? Would you want a murderous psychopath to practice self-acceptance? (Then again, do they already, whether we want them to or not?) How are we to know which flaws in ourselves should be accepted, and which repaired?

In which areas of your life are you still holding back until you feel like you know what you’re doing?

This one cut me very deep. I have several areas of my life where this accusation would be apt, and one in particular where I am plainly guilty as charged: Parenting. In a same-sex marriage, offspring don’t emerge automatically without intervention. If we want to have kids, we must do a great deal of work to secure adoption. And it has been much easier—safer, more comfortable—to simply put off that work, avoid the risk. I told myself we’d adopt once I finished grad school; but then I only got a temporary job, so I put it off again, saying we’d adopt once I found stability in my career. But what if I never find that stability? What if the rest of my career is always this precarious? What if I can always find some excuse to delay? The pain of never fulfilling that lifelong dream of parenthood might continue to gnaw at me forever.

How would you spend your days differently if you didn’t care so much about seeing your actions reach fruition?

This one is frankly useless. I hate it. It’s like when people say “What would you do if you knew you’d die tomorrow?” Obviously, you wouldn’t go to work, you wouldn’t pay your bills, you wouldn’t clean your bathroom. You might devote yourself single-mindedly to a single creative task you hoped to make a legacy, or gather your family and friends to share one last day of love, or throw yourself into meaningless hedonistic pleasure. Those might even be things worth doing, on occasion. But you can’t do them every day. If you knew you were about to die, you absolutely would not live in any kind of sustainable way.

Similarly, if I didn’t care about seeing my actions reach fruition, I would continue to write stories and never worry about publishing them. I would make little stabs at research when I got curious, then give up once it started getting difficult or boring, and never bother writing the paper. I would continue flitting between a dozen random projects at once and never finish any of them. I might well feel happier—at least until it all came crashing down—but I would get absolutely nothing done.

Above all, I would never apply for any jobs, because applying for jobs is absolutely not about enjoying the journey. If you know for a fact that you won’t get an offer, you’re an idiot to bother applying. That is a task that is only worth doing if I believe that it will yield results—and indeed, a big part of why it’s so hard to bring myself to do it is that I have a hard time maintaining that belief.

If you read the surrounding context, Burkeman actually seems to intend something quite different than the actual question he wrote. He suggests devoting more time to big, long-term projects that require whole communities to complete. He likens this to laying bricks in a cathedral that we will never see finished.

I do think there is wisdom in this. But it isn’t a simple matter of not caring about results. Indeed, if you don’t care at all about whether the cathedral will stand, you won’t bother laying the bricks correctly. In some sense Burkeman is actually asking us to do the opposite: To care more about results, but specifically results that we may never live to see. Maybe he really intends to emphasize the word see—you care about your actions reaching fruition, but not whether or not you’ll ever see it.

Yet this, I am quite certain, is not my problem. When a psychiatrist once asked me, “What do you really want most in life?” I gave a very thoughtful answer: “To be remembered in a thousand years for my contribution to humanity.” (His response was glib: “You can’t control that.”) I still stand by that answer: If I could have whatever I want, no limits at all, three wishes from an all-powerful genie, two of them would be to solve some of the world’s greatest problems, and the third would be for the chance to live my life in a way that I knew would be forever remembered.

But I am slowly coming to realize that maybe I should abandon that answer. That psychiatrist’s answer was far too glib (he was in fact not a very good fit for me; I quickly switched to a different psychiatrist), but maybe it wasn’t fundamentally wrong. It may be impossible to predict, let alone control, whether our lives have that kind of lasting impact—and, almost by construction, most lives can’t.

Perhaps, indeed, I am too worried about whether the cathedral will stand. I only have a few bricks to lay myself, and while I can lay them the best I can, that ultimately will not be what decides the fate of the cathedral. A fire, or an earthquake, or simply some other bricklayer’s incompetence, could bring about its destruction—and there is nothing at all I can do to prevent that.

This post is already getting too long, so I should try to bring it to a close.

As the adage goes, perhaps if I had more time, I’d make it shorter.

How much should we give of ourselves?

Jul 23 JDN 2460149

This is a question I’ve written about before, but it’s a very important one—perhaps the most important question I deal with on this blog—so today I’d like to come back to it from a slightly different angle.

Suppose you could sacrifice all the happiness in the rest of your life, making your own existence barely worth living, in exchange for saving the lives of 100 people you will never meet.

  1. Would it be good for you to do so?
  2. Should you do so?
  3. Are you a bad person if you don’t?
  4. Are all of the above really the same question?

Think carefully about your answer. It may be tempting to say “yes”. It feels righteous to say “yes”.

But in fact this is not hypothetical. It is the actual situation you are in.

This GiveWell article is entitled “Why is it so expensive to save a life?”, but that’s incredibly weird, because the actual figure they give is astonishingly, mind-bogglingly, frankly disgustingly cheap: It costs about $4500 to save one human life. I don’t know how you can possibly find that expensive. I don’t understand how anyone can think, “Saving this person’s life might max out a credit card or two; boy, that sure seems expensive!”

The standard for healthcare policy in the US is that something is worth doing if it is able to save one quality-adjusted life year for less than $50,000. That’s one year for ten times as much. Even accounting for the shorter lifespans and worse lives in poor countries, saving someone from a poor country for $4500 is at least one hundred times as cost-effective as that.
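To make that comparison concrete, here is a back-of-the-envelope sketch; the 10 QALYs per life saved is my own deliberately conservative assumption (to account for those shorter lifespans and worse lives), not a figure from GiveWell:

```python
US_THRESHOLD = 50_000   # USD per QALY, the standard US policy benchmark
COST_PER_LIFE = 4_500   # GiveWell's figure cited above

# Conservative, purely illustrative assumption: saving one child's
# life buys about 10 quality-adjusted life years.
qalys_per_life = 10
cost_per_qaly = COST_PER_LIFE / qalys_per_life   # $450 per QALY

# Ratio of the US threshold to the charity's cost per QALY:
print(US_THRESHOLD / cost_per_qaly)   # ≈ 111: over a hundred times as cost-effective
```

Even on that deliberately lowballed QALY estimate, the hundredfold figure holds; more generous assumptions only widen the gap.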

To put it another way, if you are a typical middle-class person in the First World, with an after-tax income of about $25,000 per year, and you were to donate 90% of that after-tax income to high-impact charities, you could be expected to save 5 lives every year. Over the course of a 30-year career, that’s 150 lives saved.

You would of course be utterly miserable for those 30 years, having given away all the money you could possibly have used for any kind of entertainment or enjoyment, not to mention living in the cheapest possible housing—maybe even a tent in a homeless camp—and eating the cheapest possible food. But you could do it, and you would in fact be expected to save over 100 lives by doing so.
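For anyone who wants to check the arithmetic, here it is, using the figures above:

```python
after_tax_income = 25_000    # USD per year, typical First World middle class
donated = 0.9 * after_tax_income   # $22,500 per year
cost_per_life = 4_500              # GiveWell's figure

lives_per_year = donated / cost_per_life
print(lives_per_year)        # 5.0 lives per year
print(lives_per_year * 30)   # 150.0 lives over a 30-year career
```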

So let me ask you again:

  1. Would it be good for you to do so?
  2. Should you do so?
  3. Are you a bad person if you don’t?
  4. Are all of the above really the same question?

Peter Singer often writes as though the answer to all these questions is “yes”. But even he doesn’t actually live that way. He gives a great deal to charity, mind you; no one seems to know exactly how much, but estimates range from at least 10% to up to 50% of his income. My general impression is that he gives about 10% of his ordinary income and more like 50% of big prizes he receives (which are in fact quite numerous). Over the course of his life he has certainly donated at least a couple million dollars. Yet he clearly could give more than he does: He lives a comfortable, upper-middle-class life.

Peter Singer’s original argument for his view, from his essay “Famine, Affluence, and Morality”, is actually astonishingly weak. It involves imagining a scenario where a child is drowning in a lake and you could go save them, but only at the cost of ruining your expensive suit.

Obviously, you should save the child. We all agree on that. You are in fact a terrible person if you wouldn’t save the child.

But Singer tries to generalize this into a principle that requires us to donate almost all of our income to international charities, and that just doesn’t follow.

First of all, that suit is not worth $4500. Not if you’re a middle-class person. That’s a damn Armani. No one who isn’t a millionaire wears suits like that.

Second, in the imagined scenario, you’re the only one who can help the kid. All I have to do is change that one thing and already the answer is different: If right next to you there is a trained, certified lifeguard, they should save the kid, not you. And if there are a hundred other people at the lake, and none of them is saving the kid… probably there’s a good reason for that? (It could be bystander effect, but actually that’s much weaker than a lot of people think.) The responsibility doesn’t uniquely fall upon you.

Third, the drowning child is a one-off, emergency scenario that almost certainly will never happen to you, and if it does ever happen, will almost certainly only happen once. But donation is something you could always do, and you could do over and over and over again, until you have depleted all your savings and run up massive debts.

Fourth, in the hypothetical scenario, there is only one child. What if there were ten—or a hundred—or a thousand? What if you couldn’t possibly save them all by yourself? Should you keep going out there and saving children until you become exhausted and you yourself drown? Even if there is a lifeguard and a hundred other bystanders right there doing nothing?

And finally, in the drowning child scenario, you are right there. This isn’t some faceless stranger thousands of miles away. You can actually see that child in front of you. Peter Singer thinks that doesn’t matter—actually his central point seems to be that it doesn’t matter. But I think it does.

Singer writes:

It makes no moral difference whether the person I can help is a neighbor’s child ten yards away from me or a Bengali whose name I shall never know, ten thousand miles away.

That’s clearly wrong, isn’t it? Relationships mean nothing? Community means nothing? There is no moral value whatsoever to helping people close to us rather than random strangers on the other side of the planet?

One answer might be to say that the answer to question 4 is “no”. You aren’t a bad person for not doing everything you should, and even though something would be good if you did it, that doesn’t necessarily mean you should do it.

Perhaps some things are above and beyond the call of duty: Good, perhaps even heroic, if you’re willing to do them, but not something we are all obliged to do. The formal term for this is supererogatory. While I think that overall utilitarianism is basically correct and has done great things for human society, one thing I think most utilitarians miss is that they seem to deny that supererogatory actions exist.

Even then, I’m not entirely sure it is good to be this altruistic.

Someone who really believed that we owe as much to random strangers as we do to our friends and family would never show up to any birthday parties, because any time spent at a birthday party would be more efficiently spent earning-to-give to some high-impact charity. They would never visit their family on Christmas, because plane tickets are expensive and airplanes burn a lot of carbon.

They also wouldn’t concern themselves with whether their job is satisfying or even tolerable; they would only care about maximizing their total positive impact on the world, whether directly through their work or by raising as much money as possible and donating it all to charity.

They would rest only the minimum amount they require to remain functional, eat only the barest minimum of nutritious food, and otherwise work, work, work, constantly, all the time. If their body was capable of doing the work, they would continue doing the work. For there is not a moment to waste when lives are on the line!

A world full of people like that would be horrible. We would all live our entire lives in miserable drudgery trying to maximize the amount we can donate to faceless strangers on the other side of the planet. There would be no joy or friendship in that world, only endless, endless toil.

When I bring this up in the Effective Altruism community, I’ve heard people try to argue otherwise, basically saying that we would never need everyone to devote themselves to the cause at this level, because we’d soon solve all the big problems and be able to go back to enjoying our lives. I think that’s probably true—but it also kind of misses the point.

Yes, if everyone gave their fair share, that fair share wouldn’t have to be terribly large. But we know for a fact that most people are not giving their fair share. So what now? What should we actually do? Do you really want to live in a world where the morally best people are miserable all the time sacrificing themselves at the altar of altruism?

Yes, clearly, most people don’t do enough. In fact, most people give basically nothing to high-impact charities. We should be trying to fix that. But if I am already giving far more than my fair share, far more than I would have to give if everyone else were pitching in as they should—isn’t there some point at which I’m allowed to stop? Do I have to give everything I can or else I’m a monster?

The conclusion that it would be good to make ourselves utterly miserable in order to save distant strangers feels deeply unsettling. It feels even worse if we say that we ought to do so, and worse still if we feel we are bad people if we don’t.

One solution would be to say that we owe absolutely nothing to these distant strangers. Yet that clearly goes too far in the opposite direction. There are so many problems in this world that could be fixed if more people cared just a little bit about strangers on the other side of the planet. Poverty, hunger, war, climate change… if everyone in the world (or really even just everyone in power) cared even 1% as much about random strangers as they do about themselves, all these would be solved.

Should you donate to charity? Yes! You absolutely should. Please, I beseech you, give some reasonable amount to charity—perhaps 5% of your income, or if you can’t manage that, maybe 1%.

Should you make changes in your life to make the world better? Yes! Small ones. Eat less meat. Take public transit instead of driving. Recycle. Vote.

But I can’t ask you to give 90% of your income and spend your entire life trying to optimize your positive impact. Even if it worked, it would be utter madness, and the world would be terrible if all the good people tried to do that.

I feel quite strongly that this is the right approach: Give something. Your fair share, or perhaps even a bit more, because you know not everyone will.

Yet it’s surprisingly hard to come up with a moral theory on which this is the right answer.

It’s much easier to develop a theory on which we owe absolutely nothing: egoism, or any deontology on which charity is not an obligation. And of course Singer-style utilitarianism says that we owe virtually everything: As long as QALYs can be purchased more cheaply through GiveWell’s recommended charities than by spending on yourself, you should continue donating to GiveWell.

I think part of the problem is that we have developed all these moral theories as if we were isolated beings, who act in a world that is simply beyond our control. It’s much like the assumption of perfect competition in economics: I am but one producer among thousands, so whatever I do won’t affect the price.

But what we really needed was a moral theory that could work for a whole society. Something that would still make sense if everyone did it—or better yet, still make sense if half the people did it, or 10%, or 5%. The theory cannot depend upon the assumption that you are the only one following it. It cannot simply “hold constant” the rest of society.

I have come to realize that the Effective Altruism movement, while probably mostly good for the world as a whole, has actually been quite harmful to the mental health of many of its followers, including myself. It has made us feel guilty for not doing enough, pressured us to burn ourselves out working ever harder to save the world. Because we do not give our last dollar to charity, we are told that we are murderers.

But there are real murderers in this world. While you were beating yourself up over not donating enough, Vladimir Putin was continuing his invasion of Ukraine, ExxonMobil was expanding its offshore drilling, Daesh was carrying out hundreds of terrorist attacks, QAnon was deluding millions of people, and the human trafficking industry was making $150 billion per year.

In other words, by simply doing nothing you are considerably better than the real monsters responsible for most of the world’s horror.

In fact, those starving children in Africa that you’re sending money to help? They wouldn’t need it, were it not for centuries of colonial imperialism followed by a series of corrupt and/or incompetent governments ruled mainly by psychopaths.

Indeed the best way to save those people, in the long run, would be to fix their governments—as has been done in places like Namibia and Botswana. According to the World Development Indicators, the proportion of people living below the UN extreme poverty line (currently $2.15 per day at purchasing power parity) has fallen from 36% to 16% in Namibia since 2003, and from 42% to 15% in Botswana since 1984. Compare this to some countries that haven’t had good governments over that time: In Cote d’Ivoire the same poverty rate was 8% in 1985 but is 11% today (and was actually as high as 33% in 2015), while in Congo it remains at 35%. Then there are countries that are trying, but started out so poor that they still have a long way to go: Burkina Faso’s extreme poverty rate has fallen from 82% in 1994 to 30% today.

In other words, if you’re feeling bad about not giving enough, remember this: if everyone in the world were as good as you, you wouldn’t need to give a cent.

Of course, simply feeling good about yourself for not being a psychopath doesn’t accomplish very much either. Somehow we have to find a balance: Motivate people enough so that they do something, get them to do their share; but don’t pressure them to sacrifice themselves at the altar of altruism.

I think part of the problem here—and not just here—is that the people who most need to change are the ones least likely to listen. The kind of person who reads Peter Singer is already probably in the top 10% of most altruistic people, and really doesn’t need much more than a slight nudge to be doing their fair share. And meanwhile the really terrible people in the world have probably never picked up an ethics book in their lives, or if they have, they ignored everything it said.

I don’t quite know what to do about that. But I hope I can at least convince you—and myself—to take some of the pressure off when it feels like we’re not doing enough.

What is it with EA and AI?

Jan 1 JDN 2459946

Surprisingly, most Effective Altruism (EA) leaders don’t seem to think that poverty alleviation should be our top priority. Most of them seem especially concerned about long-term existential risk, such as artificial intelligence (AI) safety and biosecurity. I’m not going to say that these things aren’t important—they certainly are important—but here are a few reasons I’m skeptical that they are really the most important, as so many EA leaders seem to think.

1. We don’t actually know how to make much progress at them, and there’s only so much we can learn by investing heavily in basic research on them. Whereas, with poverty, the easy, obvious answer turns out empirically to be extremely effective: Give them money.

2. While it’s easy to multiply out huge numbers of potential future people in your calculations of existential risk (and this is precisely what people do when arguing that AI safety should be a top priority), this clearly isn’t actually a good way to make real-world decisions. We simply don’t know enough about the distant future of humanity to be able to make any kind of good judgments about what will or won’t increase their odds of survival. You’re basically just making up numbers. You’re taking tiny probabilities of things you know nothing about and multiplying them by ludicrously huge payoffs; it’s basically the secular rationalist equivalent of Pascal’s Wager.

3. AI and biosecurity are high-tech, futuristic topics, which seem targeted to appeal to the sensibilities of a movement that is still very dominated by intelligent, nerdy, mildly autistic, rich young White men. (Note that I say this as someone who very much fits this stereotype. I’m queer, not extremely rich and not entirely White, but otherwise, yes.) Somehow I suspect that if we asked a lot of poor Black women how important it is to slightly improve our understanding of AI versus giving money to feed children in Africa, we might get a different answer.

4. Poverty eradication is often characterized as a “short term” project, contrasted with AI safety as a “long term” project. This is (ironically) very short-sighted. Eradication of poverty isn’t just about feeding children today. It’s about making a world where those children grow up to be leaders and entrepreneurs and researchers themselves. The positive externalities of economic development are staggering. It is really not much of an exaggeration to say that fascism is a consequence of poverty and unemployment.

5. Currently, the thing most Effective Altruism organizations say they need most is “talent”; how many millions of person-hours of talent are we leaving on the table by letting children starve or die of malaria?

6. Above all, existential risk can’t really be what’s motivating people here. The obvious solutions to AI safety and biosecurity are not being pursued, because they don’t fit with the vision that intelligent, nerdy, young White men have of how things should be. Namely: Ban them. If you truly believe that the most important thing to do right now is reduce the existential risk of AI and biotechnology, you should support a worldwide ban on research in artificial intelligence and biotechnology. You should want people to take all necessary action to attack and destroy institutions—especially for-profit corporations—that engage in this kind of research, because you believe that they are threatening to destroy the entire world and this is the most important thing, more important than saving people from starvation and disease. I think this is really the knock-down argument; when people say they think that AI safety is the most important thing but they don’t want Google and Facebook to be immediately shut down, they are either confused or lying. Honestly I think maybe Google and Facebook should be immediately shut down for AI safety reasons (as well as privacy and antitrust reasons!), and I don’t think AI safety is yet the most important thing.

Why aren’t people doing that? Because they aren’t actually trying to reduce existential risk. They just think AI and biotechnology are really interesting, fascinating topics and they want to do research on them. And I agree with that, actually—but then they need to stop telling people that they’re fighting to save the world, because they obviously aren’t. If the danger were anything like what they say it is, we should be halting all research on these topics immediately, except perhaps for a very select few people who are entrusted with keeping these forbidden secrets and trying to find ways to protect us from them. This may sound radical and extreme, but it is not unprecedented: This is how we handle nuclear weapons, which are universally recognized as a global existential risk. If AI is really as dangerous as nukes, we should be regulating it like nukes. I think that in principle it could be that dangerous, and may be that dangerous someday—but it isn’t yet. And if we don’t want it to get that dangerous, we don’t need more AI researchers, we need more regulations that stop people from doing harmful AI research! If you are doing AI research and it isn’t directly involved specifically in AI safety, you aren’t saving the world—you’re one of the people dragging us closer to the cliff! Anything that could make AI smarter but doesn’t also make it safer is dangerous. And this is clearly true of the vast majority of AI research, and frankly to me seems to also be true of the vast majority of research at AI safety institutes like the Machine Intelligence Research Institute.

Seriously, look through MIRI’s research agenda: It’s mostly incredibly abstract and seems completely beside the point when it comes to preventing AI from taking control of weapons or governments. It’s all about formalizing Bayesian induction. Thanks to you, Skynet can have a formally computable approximation to logical induction! Truly we are saved. Only two of their papers, on “Corrigibility” and “AI Ethics”, actually struck me as at all relevant to making AI safer. The rest is largely abstract mathematics that is almost literally navel-gazing—it’s all about self-reference. Eliezer Yudkowsky finds self-reference fascinating and has somehow convinced an entire community that it’s the most important thing in the world. (I actually find some of it fascinating too, especially the paper on “Functional Decision Theory”, which I think gets at some deep insights into things like why we have emotions. But I don’t see how it’s going to save the world from AI.)

Don’t get me wrong: AI also has enormous potential benefits, and this is a reason we may not want to ban it. But if you really believe that there is a 10% chance that AI will wipe out humanity by 2100, then get out your pitchforks and your EMP generators, because it’s time for the Butlerian Jihad. A 10% chance of destroying all humanity is an utterly unacceptable risk for any conceivable benefit. Better that we consign ourselves to living as we did in the Neolithic than risk something like that. (And a globally-enforced ban on AI isn’t even that; it’s more like “We must live as we did in the 1950s.” How would we survive!?) If you don’t want AI banned, maybe ask yourself whether you really believe the risk is that high—or are human brains just really bad at dealing with small probabilities?

I think what’s really happening here is that we have a bunch of guys (and yes, the EA community, and especially its AI-safety wing, is overwhelmingly male) who are really good at math and want to save the world, and have thus convinced themselves that being really good at math is how you save the world. But it isn’t. The world is much messier than that. In fact, there may not be much that most of us can do to contribute to saving the world; our best options may in fact be to donate money, vote well, and advocate for good causes.

Let me speak Bayesian for a moment: The prior probability that you—yes, you, out of all the billions of people in the world—are uniquely positioned to save it by being so smart is extremely small. It’s far more likely that the world will be saved—or doomed—by people who have power. If you are not the head of state of a large country or the CEO of a major multinational corporation, I’m sorry; you probably just aren’t in a position to save the world from AI.

But you can give some money to GiveWell, so maybe do that instead?

Charity shouldn’t end at home

It so happens that this week’s post will go live on Christmas Day. I always try to do some kind of holiday-themed post around this time of year, because not only Christmas, but a dozen other holidays from various religions all fall around this time of year. The winter solstice seems to be a very popular time for holidays, and has been since antiquity: The Romans were celebrating Saturnalia 2000 years ago. Most of our ‘Christmas’ traditions are actually derived from Yuletide.

These holidays certainly mean many different things to different people, but charity and generosity are themes that are very common across a lot of them. Gift-giving has been part of the season since at least Saturnalia and remains as vital as ever today. Most of those gifts are given to our friends and loved ones, but a substantial fraction of people also give to strangers in the form of charitable donations: November and December have the highest rates of donation to charity in the US and the UK, with about 35-40% of people donating during this season. (Of course this is complicated by the fact that December 31 is often the day with the most donations, probably from people trying to finish out their tax year with a larger deduction.)

My goal today is to make you one of those donors. There is a common saying, often attributed to the Bible but not actually present in it: “Charity begins at home”.

Perhaps this is so. There’s certainly something questionable about the Effective Altruism strategy of “earning to give” if it involves abusing and exploiting the people around you in order to make more money that you then donate to worthy causes. Certainly we should be kind and compassionate to those around us, and it makes sense for us to prioritize those close to us over strangers we have never met. But while charity may begin at home, it must not end at home.

There are so many global problems that could benefit from additional donations. While global poverty has been rapidly declining in the early 21st century, this is largely because of the efforts of donors and nonprofit organizations. Official Development Assistance has been roughly constant since the 1970s at 0.3% of GNI among First World countries—well below international targets set decades ago. Total development aid is around $160 billion per year, while private donations from the United States alone are over $480 billion. Moreover, 9% of the world’s population still lives in extreme poverty, and this rate has actually increased slightly over the last few years due to COVID.

There are plenty of other worthy causes you could give to aside from poverty eradication, from issues that have been with us since the dawn of human civilization (Humane Society International for domestic animal welfare, the World Wildlife Fund for wildlife conservation) to exotic fat-tail sci-fi risks that are only emerging in our own lifetimes (the Machine Intelligence Research Institute for AI safety, the International Federation of Biosafety Associations for biosecurity, the Union of Concerned Scientists for climate change and nuclear safety). You could fight poverty directly through organizations like UNICEF or GiveDirectly, fight neglected diseases through the Schistosomiasis Control Initiative or the Against Malaria Foundation, or entrust an organization like GiveWell to optimize your donations for you, sending them where they think they are needed most. You could give to political causes supporting civil liberties (the American Civil Liberties Union) or protecting the rights of people of color (the National Association for the Advancement of Colored People) or LGBT people (the Human Rights Campaign).

I could spend a lot of time and effort trying to figure out the optimal way to divide up your donations and give them to causes such as these—and then convincing you that it’s really the right one. (And there is even a time and place for that, because seemingly-small differences can matter a lot in this.) But instead I think I’m just going to ask you to pick something. Give something to an international charity with a good track record.

I think we worry far too much about what is the best way to give—especially people in the Effective Altruism community, of which I’m sort of a marginal member—when the biggest thing the world really needs right now is just more people giving more. It’s true, there are lots of worthless or even counter-productive charities out there: Please, please do not give to the Salvation Army. (And think twice before donating to your own church; if you want to support your own community, okay, go ahead. But if you want to make the world better, there are much better places to put your money.)

But above all, give something. Or if you already give, give more. Most people don’t give at all, and most people who give don’t give enough.

How we measure efficiency affects our efficiency

Jun 21 JDN 2459022

Suppose we are trying to minimize carbon emissions, and we can afford one of the two following policies to improve fuel efficiency:

  1. Policy A will replace 10,000 cars that average 25 MPG with hybrid cars that average 100 MPG.
  2. Policy B will replace 5,000 diesel trucks that average 5 MPG with turbocharged, aerodynamic diesel trucks that average 10 MPG.

Assume that both cars and trucks last about 100,000 miles (in reality this of course depends on a lot of factors), and diesel and gas pollute about the same amount per gallon (this isn’t quite true, but it’s close). Which policy should we choose?

It seems obvious: Policy A, right? 10,000 vehicles, each increasing efficiency by 75 MPG or a factor of 4, instead of 5,000 vehicles, each increasing efficiency by only 5 MPG or a factor of 2.

And yet—in fact the correct answer is definitely policy B, because the use of MPG has distorted our perception of what constitutes efficiency. We should have been using the inverse: gallons per hundred miles.

  1. Policy A will replace 10,000 cars that average 4 GPHM with cars that average 1 GPHM.
  2. Policy B will replace 5,000 trucks that average 20 GPHM with trucks that average 10 GPHM.

This means that policy A will save (10,000)(100,000/100)(4-1) = 30 million gallons, while policy B will save (5,000)(100,000/100)(20-10) = 50 million gallons.

A gallon of gasoline produces about 9 kg of CO2 when burned. This means that by choosing the right policy here, we’ll have saved 450,000 tons of CO2—or by choosing the wrong one we would only have saved 270,000.

The simple choice of which efficiency measure to use when making our judgment—GPHM versus MPG—has had a profound effect on the real impact of our choices.
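The arithmetic above can be checked with a short script. This is just a sketch of the worked example, using the vehicle counts, 100,000-mile lifetime, and rough 9 kg CO2 per gallon figure assumed in the text:

```python
def gallons_saved(vehicles, lifetime_miles, mpg_old, mpg_new):
    """Lifetime fuel savings, computed via gallons per hundred miles (GPHM)."""
    gphm_old = 100 / mpg_old  # invert MPG to get consumption per distance
    gphm_new = 100 / mpg_new
    return vehicles * (lifetime_miles / 100) * (gphm_old - gphm_new)

policy_a = gallons_saved(10_000, 100_000, 25, 100)  # 30 million gallons
policy_b = gallons_saved(5_000, 100_000, 5, 10)     # 50 million gallons

KG_CO2_PER_GALLON = 9  # rough figure from the text
tons_a = policy_a * KG_CO2_PER_GALLON / 1000  # 270,000 metric tons of CO2
tons_b = policy_b * KG_CO2_PER_GALLON / 1000  # 450,000 metric tons of CO2
```

Note that the MPG-based intuition (a 75 MPG gain versus a 5 MPG gain) never appears in the calculation at all; only the GPHM difference matters.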

Let’s try applying the same reasoning to charities. Again suppose we can choose one of two policies.

  1. Policy C will move $10 million that currently goes to local community charities which can save one QALY for $1 million to medical-research charities that can save one QALY for $50,000.
  2. Policy D will move $10 million that currently goes to direct-transfer charities which can save one QALY for $1000 to anti-malaria net charities that can save one QALY for $800.

Policy C means moving funds from charities that are almost useless ($1 million per QALY!?) to charities that meet a basic notion of cost-effectiveness (most public health agencies in the First World have a standard threshold of about $50,000 or $100,000 per QALY).

Policy D means moving funds from charities that are already highly cost-effective to other charities that are only a bit more cost-effective. It almost seems pedantic to even concern ourselves with the difference between $1000 per QALY and $800 per QALY.

It’s the same $10 million either way. So, which policy should we pick?

If the lesson you took from the MPG example is that we should always be focused on increasing the efficiency of the least efficient, you’ll get the wrong answer. The correct answer is based on actually using the right measure of efficiency.

Here, it’s not dollars per QALY we should care about; it’s QALY per million dollars.

  1. Policy C will move $10 million from charities which get 1 QALY per million dollars to charities which get 20 QALY per million dollars.
  2. Policy D will move $10 million from charities which get 1000 QALY per million dollars to charities which get 1250 QALY per million dollars.

Multiply that out, and policy C will gain (10)(20-1) = 190 QALY, while policy D will gain (10)(1250-1000) = 2500 QALY. Assuming that “saving a life” means about 50 QALY, this is the difference between saving 4 lives and saving 50 lives.

My intuition actually failed me on this one; before I actually did the math, I had assumed that it would be far more important to move funds from utterly useless charities to ones that meet a basic standard. But it turns out that it’s actually far more important to make sure that the funds being targeted at the most efficient charities are really the most efficient—even apparently tiny differences matter a great deal.

Of course, if we can move that $10 million from the useless charities to the very best charities, that’s the best of all; it would save (10)(1250-1) = 12,490 QALY. This is nearly 250 lives.
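The same calculation can be sketched in a few lines, using the cost-effectiveness figures assumed above and taking “a life” to be 50 QALY:

```python
def qaly_gained(millions_moved, qaly_per_million_from, qaly_per_million_to):
    """QALY gained by moving funds, measured in QALY per $1 million donated."""
    return millions_moved * (qaly_per_million_to - qaly_per_million_from)

policy_c = qaly_gained(10, 1, 20)       # useless -> adequate: 190 QALY
policy_d = qaly_gained(10, 1000, 1250)  # great -> slightly better: 2,500 QALY
best     = qaly_gained(10, 1, 1250)     # useless -> best: 12,490 QALY

QALY_PER_LIFE = 50
lives_c = policy_c / QALY_PER_LIFE  # about 4 lives
lives_d = policy_d / QALY_PER_LIFE  # 50 lives
```

As with the fuel example, the intuitively dramatic improvement (a factor of 20 versus a factor of 1.25) is irrelevant; only the difference in QALY per dollar matters.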

In the fuel economy example, there’s no feasible way to upgrade a semitrailer to get 100 MPG. If we could, we totally should; but nobody has any idea how to do that. Even an electric semi probably won’t be that efficient, depending on how the grid produces electricity. (Obviously if the grid were all nuclear, wind, and solar, it would be; but very few places are like that.)

But when we’re talking about charities, this is just money; it is by definition fungible. So it is absolutely feasible in an economic sense to get all the money currently going towards nearly-useless charities like churches and museums and move that money directly toward high-impact charities like anti-malaria nets and vaccines.

Then again, it may not be feasible in a practical or political sense. Someone who currently donates to their local church may simply not be motivated by the same kind of cosmopolitan humanitarianism that motivates Effective Altruism. They may care more about supporting their local community, or be motivated by genuine religious devotion. This isn’t even inherently a bad thing; nobody is a cosmopolitan in everything they do, nor should we be—we have good reasons to care more about our own friends, family, and community than we do about random strangers in foreign countries thousands of miles away. (And while I’m fairly sure Jesus himself would have been an Effective Altruist if he’d been alive today, I’m well aware that most Christians aren’t—and this doesn’t make them “false Christians”.) There might be some broader social or cultural change that could make this happen—but it’s not something any particular person can expect to accomplish.

Whereas, getting people who are already Effective Altruists giving to efficient charities to give to a slightly more efficient charity is relatively easy: Indeed, it’s basically the whole purpose for which GiveWell exists. And there are analysts working at GiveWell right now whose job it is to figure out exactly which charities yield the most QALY per dollar and publish that information. One person doing that job even slightly better can save hundreds or even thousands of lives.

Indeed, I’m seriously considering applying to be one myself—it sounds both more pleasant and more important than anything I’d be likely to get in academia.

Scope neglect and the question of optimal altruism

JDN 2457090 EDT 16:15.

We’re now on Eastern Daylight Time because of this bizarre tradition of shifting our time zone forward for half of the year. It’s supposed to save energy, but a natural experiment in India suggests it actually increases energy demand. So why do we do it? Like every ridiculous tradition (have you ever tried to explain Groundhog Day to someone from another country?), we do it because we’ve always done it.

This week’s topic is scope neglect, one of the most pervasive—and pernicious—cognitive heuristics human beings face. Scope neglect raises a great many challenges, both practical and theoretical; among them is what I call the question of optimal altruism.

The question is simple to ask yet remarkably challenging to answer: How much should we be willing to sacrifice in order to benefit others? If we think of this as a number, your solidarity coefficient (s), it is the largest ratio of cost to benefit at which you are still willing to act: you help whenever s B > C.

This is analogous to the biological concept of relatedness (r), to which Hamilton’s Rule applies: r B > C. Solidarity is the psychological analogue; instead of valuing people based on their genetic similarity to you, you value them based on… well, that’s the problem.
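The decision rule itself is tiny; it can be written as a one-line function (the dollar amounts below are purely illustrative, not from any study):

```python
def should_help(s, benefit, cost):
    """Help whenever the solidarity-discounted benefit to the other person
    exceeds your own cost: s * B > C (analogous to Hamilton's rule, r * B > C)."""
    return s * benefit > cost

# With s = 0.1: paying $100 to produce $2,000 of benefit for a stranger passes,
should_help(0.1, 2000, 100)  # True (0.1 * 2000 = 200 > 100)
# but paying $100 to produce only $500 of benefit does not.
should_help(0.1, 500, 100)   # False (0.1 * 500 = 50 < 100)
```

All the difficulty, of course, is hidden in choosing s.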

I can easily place upper and lower bounds: The lower bound is zero: You should definitely be willing to sacrifice something to help other people—otherwise you are a psychopath. The upper bound is one: There’s no point in paying more cost than you produce in benefit, and in fact even paying the same cost to yourself as you yield in benefits for other people doesn’t make a lot of sense, because it means that your own self-interest is meaningless and the fact that you understand your own needs better than the needs of others is also irrelevant.

But beyond that, it gets a lot harder—and that may explain why we suffer scope neglect in the first place. Should it be 90%? 50%? 10%? 1%? How should it vary between friends versus family versus strangers? It’s really hard to say. And this inability to precisely decide how much other people should be worth to us may be part of why we suffer scope neglect.

Scope neglect is the fact that we are not willing to expend effort or money in direct proportion to the benefit it would have. When different groups were asked how much they would be willing to donate in order to save the lives of 2,000 birds, 20,000 birds, or 200,000 birds, the answers they gave were statistically indistinguishable—always about $80. But however much a bird’s life is worth to you, shouldn’t 200,000 birds be worth, well, 200,000 times as much? In fact, more than that, because the marginal utility of wealth is decreasing, but I see no reason to think that the marginal utility of birds decreases nearly as fast.

But therein lies the problem: Usually we can’t pay 200,000 times as much. I’d feel like a horrible person if I weren’t willing to expend at least $10 or an equivalent amount of effort in order to save a bird. To save 200,000 birds that means I’d owe $2 million—and I simply don’t have $2 million.

You can get similar results to the bird experiment if you use children—though, as one might hope, the absolute numbers are a bit bigger, usually more like $500 to $1000. (And this, it turns out, is about how much it actually costs to save a child’s life by particularly efficient means, such as anti-malaria nets, de-worming, or direct cash transfers. So please, by all means, give $1000 to UNICEF or the Against Malaria Foundation. If you can’t give $1000, give $100; if you can’t give $100, give $10.) It doesn’t much matter whether you say that the project will save 500 children, 5,000 children, or 50,000 children—people still will give about $500 to $1000. But once again, if I’m willing to spend $1000 to save a child—and I definitely am—how much should I be willing to spend to end malaria, which kills 500,000 children a year? Apparently $500 million, which I not only don’t have, but almost certainly will never earn cumulatively over my entire life. ($2 million, on the other hand, I almost certainly will make cumulatively—the median income of an economist is $90,000 per year, so if I work for at least 22 years with that as my average income I’ll have cumulatively made about $2 million. My net wealth may never be that high—though if I get better positions, or I’m lucky enough or clever enough with the stock market, it might—but my cumulative income almost certainly will. Indeed, the average gain in cumulative income from a college degree is about $1 million. Because it takes time—time is money—and loans carry interest, this gives it a net present value of about $300,000.)
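Just to check that arithmetic, here is the back-of-the-envelope calculation in code, using only the figures quoted above:

```python
# Figures quoted in the paragraph above.
cost_per_child = 1_000            # dollars to save one life via efficient charities
malaria_deaths_per_year = 500_000

# What strictly proportional scaling would demand of a single donor:
implied_obligation = cost_per_child * malaria_deaths_per_year
print(implied_obligation)         # 500000000, i.e. $500 million

# Versus a plausible cumulative income for an economist:
median_income = 90_000            # dollars per year
years_worked = 22
cumulative_income = median_income * years_worked
print(cumulative_income)          # 1980000, i.e. roughly $2 million
```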

But maybe scope neglect isn’t such a bad thing after all. There is a very serious problem with this sort of moral dilemma: The question didn’t say I would single-handedly save 200,000 birds—and indeed, that notion seems quite ridiculous. If I knew that I could actually save 200,000 birds and I were the only one who could do it, dammit, I would try to come up with that $2 million. I might not succeed, but I really would try as hard as I could.

And if I could single-handedly end malaria, I hereby vow that I would do anything it took to achieve that. Short of mass murder, anything I could do couldn’t be a higher cost to the world than malaria itself. I have no idea how I’d come up with $500 million, but I’d certainly try. Bill Gates could easily come up with that $500 million—so he did. In fact he endowed the Gates Foundation with $28 billion, and they’ve spent $1.3 billion of that on fighting malaria, saving hundreds of thousands of lives.

With this in mind, what is scope neglect really about? I think it’s about coordination. It’s not that people don’t care more about 200,000 birds than they do about 2,000; and it’s certainly not that they don’t care more about 50,000 children than they do about 500. Rather, the problem is that people don’t know how many other people are likely to donate, or how expensive the total project is likely to be; and we don’t know how much we should be willing to pay to save the life of a bird or a child.

Hence, what we basically do is give up; since we can’t actually assess the marginal utility of our donation dollars, we fall back on our automatic emotional response. Our mind focuses itself on visualizing that single bird covered in oil, or that single child suffering from malaria. We then hope that the representativeness heuristic will guide us in how much to give. Or we follow social norms, and give as much as we think others would expect us to give.

While many in the effective altruism community take this to be a failing, they never actually say what we should do—they never give us a figure for how much money we should be willing to donate to save the life of a child. Instead they retreat to abstraction, saying that whatever it is we’re willing to give to save a child, we should be willing to give 50,000 times as much to save 50,000 children.

But it’s not that simple. A bigger project may attract more supporters; if support scales in direct proportion to the size of the project, then giving a constant amount is actually the optimal response. Since support probably isn’t exactly proportional, you likely should give somewhat more to causes that affect more people; but exactly how much more is an astonishingly difficult question. I really don’t blame people—or myself—for only giving a little bit more to causes with larger impact, because actually getting the right answer is so incredibly hard. This is why it’s so important that we have institutions like GiveWell and Charity Navigator which do the hard work to research the effectiveness of charities and tell us which ones we should give to.

Yet even if we can properly prioritize which charities to give to first, that still leaves the question of how much each of us should give. 1% of our income? 5%? 10%? 20%? 50%? Should we give so much that we throw ourselves into the same poverty we are trying to save others from?

In his earlier work Peter Singer seemed to think we should give so much that it throws us into poverty ourselves; he asked us to literally compare every single purchase and ask ourselves whether a year of lattes or a nicer car is worth a child’s life. Of course even he doesn’t live that way; in his later books Singer seems to have realized this, and now recommends the far more modest standard that everyone give at least 1% of their income. (He himself gives about 33%, but he’s also very rich so he doesn’t feel it nearly as much.) I think he may have overcompensated; while if literally everyone gave at least 1% that would be more than enough to end world hunger and solve many other problems—world nominal GDP is over $70 trillion, so 1% of that is $700 billion a year—we know that this won’t happen. Some will give more, others less; most will give nothing at all. Hence I think those of us who give should give more than our share; that is why I lean toward figures more like 5% or 10%.

But then, why not 50% or 90%? It is very difficult for me to argue on principle why we shouldn’t be expected to give that much. Because my income is such a small proportion of the total donations, the marginal utility of each dollar I give is basically constant—and quite high; if it takes about $1000 to save a child’s life on average, and each of these children will then live about 60 more years at about half the world average happiness, that’s about 30 QALY per $1000, or about 30 milliQALY per dollar. Even at my current level of income (incidentally about as much as I think the US basic income should be), I’m benefiting myself only about 150 microQALY per dollar—so my money is worth about 200 times as much to those children as it is to me.
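Spelling that arithmetic out (all of these are the rough estimates from the paragraph above, not precise measurements):

```python
# Benefit to the recipients: $1000 saves a child who then lives
# ~60 more years at ~half the world-average happiness.
qaly_per_donation = 60 * 0.5                        # 30 QALY per $1000
qaly_per_dollar_them = qaly_per_donation / 1_000    # 30 milliQALY per dollar

# Benefit to me of my own marginal spending, per the estimate above:
qaly_per_dollar_me = 150e-6                         # 150 microQALY per dollar

ratio = qaly_per_dollar_them / qaly_per_dollar_me
print(round(ratio))                                 # 200
```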

So now we have to ask ourselves the really uncomfortable question: How much do I value those children, relative to myself? If I am at all honest, the value is not 1; I’m not prepared to die for someone I’ve never met 10,000 kilometers away in a nation I’ve never even visited, nor am I prepared to give away all my possessions and throw myself into the same starvation I am hoping to save them from. I value my closest friends and family approximately the same as myself, but I have to admit that I value random strangers considerably less.

Do I really value them at less than 1%, as these figures would seem to imply? I feel like a monster saying that, but maybe it really isn’t so terrible—after all, most economists seem to think that the optimal solidarity coefficient is in fact zero. Maybe we need to become more comfortable admitting that random strangers aren’t worth that much to us, simply so that we can coherently acknowledge that they aren’t worth nothing. Very few of us actually give away all our possessions, after all.

Then again, what do we mean by worth? I can say from direct experience that a single migraine causes me vastly more pain than learning about the death of 200,000 people in an earthquake in Southeast Asia. And while I gave about $100 to the relief efforts involved in that earthquake, I’ve spent considerably more on migraine treatments—thousands, once you include health insurance. But given the chance, would I be willing to suffer a migraine to prevent such an earthquake? Without hesitation. So the amount of pain we feel is not the same as the amount of money we pay, which is not the same as what we would be willing to sacrifice. I think the latter is more indicative of how much people’s lives are really worth to us—but then, what we pay is what has the most direct effect on the world.

It’s actually possible to justify not dying or selling all my possessions even if my solidarity coefficient is much higher—it just leads to some really questionable conclusions. Essentially the argument is this: I am an asset. I have what economists call “human capital”—my health, my intelligence, my education—that gives me the opportunity to affect the world in ways those children cannot. In my ideal imagined future (albeit improbable) in which I actually become President of the World Bank and have the authority to set global development policy, I myself could actually have a marginal impact of megaQALY—millions of person-years of better life. In the far more likely scenario in which I attain some mid-level research or advisory position, I could be one of thousands of people who together have that sort of impact—which still means my own marginal effect is on the order of kiloQALY. And clearly it’s true that if I died, or even if I sold all my possessions, these events would no longer be possible.

The problem with that reasoning is that it’s wildly implausible to say that everyone in the First World is in this same sort of position—Peter Singer can say that, and maybe I can say that, and indeed hundreds of development economists can say that—but at least 99.9% of the First World population are not development economists, nor are they physicists likely to invent cold fusion, nor biomedical engineers likely to cure HIV, nor aid workers who distribute anti-malaria nets and polio vaccines, nor politicians who set national policy, nor diplomats who influence international relations, nor authors whose bestselling books raise worldwide consciousness. Yet I am not comfortable saying that all the world’s teachers, secretaries, airline pilots and truck drivers should give away their possessions either. (Maybe all the world’s bankers and CEOs should—or at least most of them.)

Is it enough that our economy would collapse without teachers, secretaries, airline pilots and truck drivers? But this seems rather like the fact that if everyone in the world visited the same restaurant there wouldn’t be enough room. Surely we could do without any individual teacher, any individual truck driver? If everyone gave the same proportion of their income, 1% would be more than enough to end malaria and world hunger. But we know that everyone won’t give, and the job won’t get done if those of us who do give contribute only 1%.

Moreover, it’s also clearly not the case that everything I spend money on makes me more likely to become a successful and influential development economist. Buying a suit and a car clearly does—it’s much easier to get good jobs that way. Even leisure can be justified to some extent, since human beings need leisure and there’s no sense burning myself out before I get anything done. But do I need both of my video game systems? Couldn’t I buy a bit less Coke Zero? What if I watched a 20-inch TV instead of a 40-inch one? I still have free time; could I get another job and donate that money? This is the sort of question Peter Singer tells us to ask ourselves, and it quickly leads to a painfully spartan existence in which most of our time is spent thinking about whether what we’re doing is advancing or damaging the cause of ending world hunger. But then the cost of that stress and cognitive effort must be included as well; and how do you optimize your own cognitive effort? You need to think about the cost of thinking about the cost of thinking… and on and on. This is why bounded rationality modeling is hard, even though it’s plainly essential to both cognitive science and computer science. (John Stuart Mill wrote an essay that resonates deeply with me about how the pressure to change the world drove him into depression, and how he learned to accept that he could still change the world even if he weren’t constantly pressuring himself to do so—and indeed he did. James Mill set out to create in his son, John Stuart Mill, the greatest philosopher in the history of the world—and I believe that he succeeded.)

Perhaps we should figure out what proportion of the world’s people are likely to give, and how much we need altogether, and then assign the amount we expect from each of them based on that? The more money you ask from each, the fewer people are likely to give. This creates an optimization problem akin to setting the price of a product under monopoly—monopolies maximize profits by carefully balancing the quantity sold with the price at which they sell, and perhaps a similar balance would allow us to maximize development aid. But wouldn’t it be better if we could simply increase the number of people who give, so that we don’t have to ask so much of those who are generous? That means tax-funded foreign aid is the way to go, because it ensures coordination. And indeed I do favor increasing foreign aid to about 1% of GDP—in the US it is currently about $50 billion, 0.3% of GDP, a little more than 1% of the Federal budget. (Most people who say we should “cut” foreign aid don’t realize how small it already is.) But foreign aid is coercive; wouldn’t it be better if people would give voluntarily?
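That monopoly analogy can be made concrete with a toy model. Everything here is a made-up illustration (the linear drop-off in willingness and the $2000 cutoff are my assumptions, not data): we search for the ask that maximizes total giving, just as a monopolist searches for the revenue-maximizing price.

```python
def fraction_willing(amount, max_ask=2000.0):
    """Toy assumption: willingness to give falls off linearly,
    hitting zero once the ask reaches max_ask dollars."""
    return max(0.0, 1.0 - amount / max_ask)

def total_raised(amount, population=1_000_000):
    """Total donations if each still-willing person gives `amount`."""
    return amount * population * fraction_willing(amount)

# Grid-search the ask that maximizes total giving, analogous to
# a monopolist balancing price against quantity sold.
best_ask = max(range(0, 2001, 10), key=total_raised)
print(best_ask)   # 1000: with a linear drop-off, the optimum is half the cutoff
```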

I don’t have a simple answer. I don’t know how much other people’s lives ought to be worth to us, or what it means for our decisions once we assign that value. But I hope I’ve convinced you that this problem is an important one—and made you think a little more about scope neglect and why we have it.