What is it with EA and AI?

Jan 1 JDN 2459946

Surprisingly, most Effective Altruism (EA) leaders don’t seem to think that poverty alleviation should be our top priority. Most of them seem especially concerned about long-term existential risk, such as artificial intelligence (AI) safety and biosecurity. I’m not going to say that these things aren’t important—they certainly are important—but here are a few reasons I’m skeptical that they are really the most important causes, in the way that so many EA leaders seem to think.

1. We don’t actually know how to make much progress at them, and there’s only so much we can learn by investing heavily in basic research on them. Whereas, with poverty, the easy, obvious answer turns out empirically to be extremely effective: Give people money.

2. While it’s easy to multiply out huge numbers of potential future people in your calculations of existential risk (and this is precisely what people do when arguing that AI safety should be a top priority), this clearly isn’t actually a good way to make real-world decisions. We simply don’t know enough about the distant future of humanity to be able to make any kind of good judgments about what will or won’t increase their odds of survival. You’re just making up numbers: taking tiny probabilities of things you know nothing about and multiplying them by ludicrously huge payoffs. It’s basically the secular rationalist equivalent of Pascal’s Wager.

3. AI and biosecurity are high-tech, futuristic topics, which seem targeted to appeal to the sensibilities of a movement that is still very dominated by intelligent, nerdy, mildly autistic, rich young White men. (Note that I say this as someone who very much fits this stereotype. I’m queer, not extremely rich and not entirely White, but otherwise, yes.) Somehow I suspect that if we asked a lot of poor Black women how important it is to slightly improve our understanding of AI versus giving money to feed children in Africa, we might get a different answer.

4. Poverty eradication is often characterized as a “short term” project, contrasted with AI safety as a “long term” project. This is (ironically) very short-sighted. Eradication of poverty isn’t just about feeding children today. It’s about making a world where those children grow up to be leaders and entrepreneurs and researchers themselves. The positive externalities of economic development are staggering. It is really not much of an exaggeration to say that fascism is a consequence of poverty and unemployment.

5. Currently, the thing most Effective Altruism organizations say they need most is “talent”: how many millions of person-hours of talent are we leaving on the table by letting children starve or die of malaria?

6. Above all, existential risk can’t really be what’s motivating people here. The obvious solutions to AI safety and biosecurity are not being pursued, because they don’t fit with the vision that intelligent, nerdy, young White men have of how things should be. Namely: Ban them. If you truly believe that the most important thing to do right now is reduce the existential risk of AI and biotechnology, you should support a worldwide ban on research in artificial intelligence and biotechnology. You should want people to take all necessary action to attack and destroy institutions—especially for-profit corporations—that engage in this kind of research, because you believe that they are threatening to destroy the entire world and this is the most important thing, more important than saving people from starvation and disease. I think this is really the knock-down argument; when people say they think that AI safety is the most important thing but they don’t want Google and Facebook to be immediately shut down, they are either confused or lying. Honestly I think maybe Google and Facebook should be immediately shut down for AI safety reasons (as well as privacy and antitrust reasons!), and I don’t think AI safety is yet the most important thing.

Why aren’t people doing that? Because they aren’t actually trying to reduce existential risk. They just think AI and biotechnology are really interesting, fascinating topics and they want to do research on them. And I agree with that, actually—but then they need to stop telling people that they’re fighting to save the world, because they obviously aren’t. If the danger were anything like what they say it is, we should be halting all research on these topics immediately, except perhaps for a very select few people who are entrusted with keeping these forbidden secrets and trying to find ways to protect us from them. This may sound radical and extreme, but it is not unprecedented: This is how we handle nuclear weapons, which are universally recognized as a global existential risk. If AI is really as dangerous as nukes, we should be regulating it like nukes. I think that in principle it could be that dangerous, and may be that dangerous someday—but it isn’t yet. And if we don’t want it to get that dangerous, we don’t need more AI researchers, we need more regulations that stop people from doing harmful AI research! If you are doing AI research and it isn’t specifically directed at AI safety, you aren’t saving the world—you’re one of the people dragging us closer to the cliff! Anything that could make AI smarter but doesn’t also make it safer is dangerous. And this is clearly true of the vast majority of AI research, and frankly it seems to me also true of the vast majority of research at AI safety institutes like the Machine Intelligence Research Institute.

Seriously, look through MIRI’s research agenda: It’s mostly incredibly abstract and seems completely beside the point when it comes to preventing AI from taking control of weapons or governments. It’s all about formalizing logical induction. Thanks to you, Skynet can have a formally computable approximation to logical induction! Truly we are saved. Only two of their papers, on “Corrigibility” and “AI Ethics”, actually struck me as at all relevant to making AI safer. The rest is largely abstract mathematics that is almost literally navel-gazing—it’s all about self-reference. Eliezer Yudkowsky finds self-reference fascinating and has somehow convinced an entire community that it’s the most important thing in the world. (I actually find some of it fascinating too, especially the paper on “Functional Decision Theory”, which I think gets at some deep insights into things like why we have emotions. But I don’t see how it’s going to save the world from AI.)

Don’t get me wrong: AI also has enormous potential benefits, and this is a reason we may not want to ban it. But if you really believe that there is a 10% chance that AI will wipe out humanity by 2100, then get out your pitchforks and your EMP generators, because it’s time for the Butlerian Jihad. A 10% chance of destroying all humanity is an utterly unacceptable risk for any conceivable benefit. Better that we consign ourselves to living as we did in the Neolithic than risk something like that. (And a globally-enforced ban on AI isn’t even that; it’s more like “We must live as we did in the 1950s.” How would we survive!?) If you don’t want AI banned, maybe ask yourself whether you really believe the risk is that high—or are human brains just really bad at dealing with small probabilities?

I think what’s really happening here is that we have a bunch of guys (and yes, the EA community, and especially its AI-focused wing, is overwhelmingly male) who are really good at math and want to save the world, and have thus convinced themselves that being really good at math is how you save the world. But it isn’t. The world is much messier than that. In fact, there may not be much that most of us can do to contribute to saving the world; our best options may in fact be to donate money, vote well, and advocate for good causes.

Let me speak Bayesian for a moment: The prior probability that you—yes, you, out of all the billions of people in the world—are uniquely positioned to save it by being so smart is extremely small. It’s far more likely that the world will be saved—or doomed—by people who have power. If you are not the head of state of a large country or the CEO of a major multinational corporation, I’m sorry; you probably just aren’t in a position to save the world from AI.

But you can give some money to GiveWell, so maybe do that instead?

Charity shouldn’t end at home

It so happens that this week’s post will go live on Christmas Day. I always try to do some kind of holiday-themed post around this time of year, because not only Christmas but a dozen other holidays from various religions fall around this season. The winter solstice seems to be a very popular time for holidays, and has been since antiquity: The Romans were celebrating Saturnalia 2000 years ago. Most of our ‘Christmas’ traditions are actually derived from Yuletide.

These holidays certainly mean many different things to different people, but charity and generosity are themes that are very common across a lot of them. Gift-giving has been part of the season since at least Saturnalia and remains as vital as ever today. Most of those gifts are given to our friends and loved ones, but a substantial fraction of people also give to strangers in the form of charitable donations: November and December have the highest rates of donation to charity in the US and the UK, with about 35-40% of people donating during this season. (Of course this is complicated by the fact that December 31 is often the day with the most donations, probably from people trying to finish out their tax year with a larger deduction.)

My goal today is to make you one of those donors. There is a common saying, often attributed to the Bible but not actually present in it: “Charity begins at home”.

Perhaps this is so. There’s certainly something questionable about the Effective Altruism strategy of “earning to give” if it involves abusing and exploiting the people around you in order to make more money that you then donate to worthy causes. Certainly we should be kind and compassionate to those around us, and it makes sense for us to prioritize those close to us over strangers we have never met. But while charity may begin at home, it must not end at home.

There are so many global problems that could benefit from additional donations. While global poverty has been rapidly declining in the early 21st century, this is largely because of the efforts of donors and nonprofit organizations. Official Development Assistance has been roughly constant since the 1970s at 0.3% of GNI among First World countries—well below international targets set decades ago. Total development aid is around $160 billion per year, while private donations from the United States alone are over $480 billion. Moreover, 9% of the world’s population still lives in extreme poverty, and this rate has actually increased slightly in the last few years due to COVID.

There are plenty of other worthy causes you could give to aside from poverty eradication, from issues that have been with us since the dawn of human civilization (Humane Society International for domestic animal welfare, the World Wildlife Fund for wildlife conservation) to exotic fat-tail sci-fi risks that are only emerging in our own lifetimes (the Machine Intelligence Research Institute for AI safety, the International Federation of Biosafety Associations for biosecurity, the Union of Concerned Scientists for climate change and nuclear safety). You could fight poverty directly through organizations like UNICEF or GiveDirectly, fight neglected diseases through the Schistosomiasis Control Initiative or the Against Malaria Foundation, or entrust an organization like GiveWell to optimize your donations for you, sending them where they think they are needed most. You could give to political causes supporting civil liberties (the American Civil Liberties Union), protecting the rights of people of color (the National Association for the Advancement of Colored People), or protecting the rights of LGBT people (the Human Rights Campaign).

I could spend a lot of time and effort trying to figure out the optimal way to divide up your donations and give them to causes such as these—and then convincing you that it’s really the right one. (And there is even a time and place for that, because seemingly-small differences can matter a lot in this.) But instead I think I’m just going to ask you to pick something. Give something to an international charity with a good track record.

I think we worry far too much about what is the best way to give—especially people in the Effective Altruism community, of which I’m sort of a marginal member—when the biggest thing the world really needs right now is just more people giving more. It’s true, there are lots of worthless or even counter-productive charities out there: Please, please do not give to the Salvation Army. (And think twice before donating to your own church; if you want to support your own community, okay, go ahead. But if you want to make the world better, there are much better places to put your money.)

But above all, give something. Or if you already give, give more. Most people don’t give at all, and most people who give don’t give enough.

How we measure efficiency affects our efficiency

Jun 21 JDN 2459022

Suppose we are trying to minimize carbon emissions, and we can afford one of the two following policies to improve fuel efficiency:

  1. Policy A will replace 10,000 cars that average 25 MPG with hybrid cars that average 100 MPG.
  2. Policy B will replace 5,000 diesel trucks that average 5 MPG with turbocharged, aerodynamic diesel trucks that average 10 MPG.

Assume that both cars and trucks last about 100,000 miles (in reality this of course depends on a lot of factors), and diesel and gas pollute about the same amount per gallon (this isn’t quite true, but it’s close). Which policy should we choose?

It seems obvious: Policy A, right? 10,000 vehicles, each increasing efficiency by 75 MPG or a factor of 4, instead of 5,000 vehicles, each increasing efficiency by only 5 MPG or a factor of 2.

And yet—in fact the correct answer is definitely policy B, because the use of MPG has distorted our perception of what constitutes efficiency. We should have been using the inverse: gallons per hundred miles.

  1. Policy A will replace 10,000 cars that average 4 GPHM with cars that average 1 GPHM.
  2. Policy B will replace 5,000 trucks that average 20 GPHM with trucks that average 10 GPHM.

This means that policy A will save (10,000)(100,000/100)(4-1) = 30 million gallons, while policy B will save (5,000)(100,000/100)(20-10) = 50 million gallons.

A gallon of gasoline produces about 9 kg of CO2 when burned. This means that by choosing the right policy here, we’ll have saved 450,000 tons of CO2—or by choosing the wrong one we would only have saved 270,000.
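If you want to verify this, the arithmetic is easy to check. Here is a minimal sketch in Python, using the lifetime and emissions figures assumed above:

```python
# Check the fuel-policy arithmetic using gallons per hundred miles (GPHM).
LIFETIME_MILES = 100_000   # assumed vehicle lifetime, as above
KG_CO2_PER_GALLON = 9      # rough emissions from burning one gallon of fuel

def lifetime_gallons_saved(vehicles, old_gphm, new_gphm):
    """Gallons saved over each vehicle's lifetime when consumption drops
    from old_gphm to new_gphm (gallons per hundred miles)."""
    return vehicles * (LIFETIME_MILES / 100) * (old_gphm - new_gphm)

policy_a = lifetime_gallons_saved(10_000, 4, 1)    # 25 MPG -> 100 MPG
policy_b = lifetime_gallons_saved(5_000, 20, 10)   # 5 MPG -> 10 MPG

print(policy_a, policy_b)                   # 30,000,000 vs 50,000,000 gallons
print(policy_a * KG_CO2_PER_GALLON / 1000)  # 270,000 metric tons of CO2
print(policy_b * KG_CO2_PER_GALLON / 1000)  # 450,000 metric tons of CO2
```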

The simple choice of which efficiency measure to use when making our judgment—GPHM versus MPG—has had a profound effect on the real impact of our choices.

Let’s try applying the same reasoning to charities. Again suppose we can choose one of two policies.

  1. Policy C will move $10 million that currently goes to local community charities which can save one QALY for $1 million to medical-research charities that can save one QALY for $50,000.
  2. Policy D will move $10 million that currently goes to direct-transfer charities which can save one QALY for $1000 to anti-malaria net charities that can save one QALY for $800.

Policy C means moving funds from charities that are almost useless ($1 million per QALY!?) to charities that meet a basic notion of cost-effectiveness (most public health agencies in the First World have a standard threshold of about $50,000 or $100,000 per QALY).

Policy D means moving funds from charities that are already highly cost-effective to other charities that are only a bit more cost-effective. It almost seems pedantic to even concern ourselves with the difference between $1000 per QALY and $800 per QALY.

It’s the same $10 million either way. So, which policy should we pick?

If the lesson you took from the MPG example is that we should always be focused on increasing the efficiency of the least efficient, you’ll get the wrong answer. The correct answer is based on actually using the right measure of efficiency.

Here, it’s not dollars per QALY we should care about; it’s QALY per million dollars.

  1. Policy C will move $10 million from charities which get 1 QALY per million dollars to charities which get 20 QALY per million dollars.
  2. Policy D will move $10 million from charities which get 1000 QALY per million dollars to charities which get 1250 QALY per million dollars.

Multiply that out, and policy C will gain (10)(20-1) = 190 QALY, while policy D will gain (10)(1250-1000) = 2500 QALY. Assuming that “saving a life” means about 50 QALY, this is the difference between saving 4 lives and saving 50 lives.

My intuition failed me on this one; before I actually did the math, I had assumed that it would be far more important to move funds from utterly useless charities to ones that meet a basic standard. But it turns out that it’s actually far more important to make sure that the funds being targeted at the most efficient charities are really the most efficient—even apparently tiny differences matter a great deal.

Of course, if we can move that $10 million from the useless charities to the very best charities, that’s the best of all; it would save (10)(1250-1) = 12,490 QALY. This is nearly 250 lives.
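The same check works for the charity version, once efficiency is measured the right way around (QALY per million dollars). A minimal sketch, using the figures above and the rough 50-QALY-per-life conversion:

```python
# QALY gained by moving money between charities of different efficiency.
QALY_PER_LIFE = 50  # rough conversion used in the text

def qaly_gained(millions_moved, old_qaly_per_million, new_qaly_per_million):
    """QALY gained by moving `millions_moved` million dollars from charities
    yielding old_qaly_per_million to charities yielding new_qaly_per_million."""
    return millions_moved * (new_qaly_per_million - old_qaly_per_million)

policy_c = qaly_gained(10, 1, 20)       # local charities -> medical research
policy_d = qaly_gained(10, 1000, 1250)  # cash transfers -> anti-malaria nets
best     = qaly_gained(10, 1, 1250)     # useless charities -> the very best

print(policy_c, policy_c / QALY_PER_LIFE)  # 190 QALY, ~4 lives
print(policy_d, policy_d / QALY_PER_LIFE)  # 2500 QALY, 50 lives
print(best, best / QALY_PER_LIFE)          # 12,490 QALY, ~250 lives
```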

In the fuel economy example, there’s no feasible way to upgrade a semitrailer to get 100 MPG. If we could, we totally should; but nobody has any idea how to do that. Even an electric semi probably won’t be that efficient, depending on how the grid produces electricity. (Obviously if the grid were all nuclear, wind, and solar, it would be; but very few places are like that.)

But when we’re talking about charities, this is just money; it is by definition fungible. So it is absolutely feasible in an economic sense to get all the money currently going towards nearly-useless charities like churches and museums and move that money directly toward high-impact charities like anti-malaria nets and vaccines.

Then again, it may not be feasible in a practical or political sense. Someone who currently donates to their local church may simply not be motivated by the same kind of cosmopolitan humanitarianism that motivates Effective Altruism. They may care more about supporting their local community, or be motivated by genuine religious devotion. This isn’t even inherently a bad thing; nobody is a cosmopolitan in everything they do, nor should we be—we have good reasons to care more about our own friends, family, and community than we do about random strangers in foreign countries thousands of miles away. (And while I’m fairly sure Jesus himself would have been an Effective Altruist if he’d been alive today, I’m well aware that most Christians aren’t—and this doesn’t make them “false Christians”.) There might be some broader social or cultural change that could make this happen—but it’s not something any particular person can expect to accomplish.

Whereas, getting people who are already Effective Altruists giving to efficient charities to give to a slightly more efficient charity is relatively easy: Indeed, it’s basically the whole purpose for which GiveWell exists. And there are analysts working at GiveWell right now whose job it is to figure out exactly which charities yield the most QALY per dollar and publish that information. One person doing that job even slightly better can save hundreds or even thousands of lives.

Indeed, I’m seriously considering applying to be one myself—it sounds both more pleasant and more important than anything I’d be likely to get in academia.

Scope neglect and the question of optimal altruism

JDN 2457090 EDT 16:15.

We’re now on Eastern Daylight Time because of this bizarre tradition of shifting our time zone forward for half of the year. It’s supposed to save energy, but a natural experiment in Indiana suggests it actually increases energy demand. So why do we do it? Like every ridiculous tradition (have you ever tried to explain Groundhog Day to someone from another country?), we do it because we’ve always done it.

This week’s topic is scope neglect, one of the most pervasive—and pernicious—cognitive heuristics human beings face. Scope neglect raises a great many challenges, not only practical but also theoretical; among them is what I call the question of optimal altruism.

The question is simple to ask yet remarkably challenging to answer: How much should we be willing to sacrifice in order to benefit others? If we think of this as a number, your solidarity coefficient (s), it is the maximum cost you are willing to pay per unit of benefit your action confers on someone else: you take an altruistic action whenever s B > C.

This is analogous to the biological concept of relatedness (r), on which Hamilton’s Rule is based: an organism is selected to help kin whenever r B > C. Solidarity is the psychological analogue; instead of valuing people based on their genetic similarity to you, you value them based on… well, that’s the problem.
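To make the parallel explicit (the side-by-side formalization is mine, but it follows directly from the definitions above):

```latex
% B: benefit to the other; C: cost to you; r: genetic relatedness;
% s: solidarity, how much you value the beneficiary relative to yourself.
\[
  \underbrace{r\,B > C}_{\text{Hamilton's Rule}}
  \qquad\longleftrightarrow\qquad
  \underbrace{s\,B > C}_{\text{solidarity rule}},
  \qquad 0 \le s \le 1.
\]
```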

I can easily place upper and lower bounds. The lower bound is zero: You should definitely be willing to sacrifice something to help other people—otherwise you are a psychopath. The upper bound is one: There’s no point in paying more in cost than you produce in benefit, and in fact even paying exactly as much cost to yourself as you yield in benefit for others doesn’t make much sense, because it means that your own self-interest counts for nothing and that the fact that you understand your own needs better than the needs of others is irrelevant.

But beyond that, it gets a lot harder. Should it be 90%? 50%? 10%? 1%? How should it vary between friends versus family versus strangers? It’s really hard to say; and this inability to precisely decide how much other people should be worth to us may be part of why we suffer scope neglect in the first place.

Scope neglect is the fact that we are not willing to expend effort or money in direct proportion to the benefit it would have. When different groups were asked how much they would be willing to donate in order to save the lives of 2,000 birds, 20,000 birds, or 200,000 birds, the answers they gave were statistically indistinguishable—always about $80. But however much a bird’s life is worth to you, shouldn’t 200,000 birds be worth, well, 200,000 times as much? In fact, more than that, because the marginal utility of wealth is decreasing, but I see no reason to think that the marginal utility of birds decreases nearly as fast.

But therein lies the problem: Usually we can’t pay 200,000 times as much. I’d feel like a horrible person if I weren’t willing to expend at least $10 or an equivalent amount of effort in order to save a bird. To save 200,000 birds that means I’d owe $2 million—and I simply don’t have $2 million.

You can get similar results to the bird experiment if you use children—though, as one might hope, the absolute numbers are a bit bigger, usually more like $500 to $1000. (And this, it turns out, is about how much it actually costs to save a child’s life by a particularly efficient means, such as anti-malaria nets, de-worming, or direct cash transfer. So please, by all means, give $1000 to UNICEF or the Against Malaria Foundation. If you can’t give $1000, give $100; if you can’t give $100, give $10.) It doesn’t much matter whether you say that the project will save 500 children, 5,000 children, or 50,000 children—people will still give about $500 to $1000. But once again, if I’m willing to spend $1000 to save a child—and I definitely am—how much should I be willing to spend to end malaria, which kills 500,000 children a year? Apparently $500 million, which not only do I not have, I almost certainly will not make that much money cumulatively through my entire life. ($2 million, on the other hand, I almost certainly will make cumulatively—the median income of an economist is $90,000 per year, so if I work for at least 22 years with that as my average income I’ll have cumulatively made $2 million. My net wealth may never be that high—though if I get better positions, or I’m lucky enough or clever enough with the stock market it might—but my cumulative income almost certainly will. Indeed, the average gain in cumulative income from a college degree is about $1 million. Because it takes time—time is money—and loans carry interest, this gives it a net present value of about $300,000.)
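For what it’s worth, the implied commitments in that paragraph multiply out as follows (a quick sketch, using only the rough figures quoted above):

```python
# Scope-neglect arithmetic: what caring *proportionally* would commit me to.
print(10 * 200_000)     # $2,000,000: valuing 200,000 birds at $10 each
print(1_000 * 500_000)  # $500,000,000: valuing each of the 500,000 children
                        # who die of malaria each year at $1,000

# The cumulative-income comparison from the parenthetical:
print(90_000 * 22)      # $1,980,000: ~22 years at the median economist income
```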

But maybe scope neglect isn’t such a bad thing after all. There is a very serious problem with these sort of moral dilemmas: The question didn’t say I would single-handedly save 200,000 birds—and indeed, that notion seems quite ridiculous. If I knew that I could actually save 200,000 birds and I were the only one who could do it, dammit, I would try to come up with that $2 million. I might not succeed, but I really would try as hard as I could.

And if I could single-handedly end malaria, I hereby vow that I would do anything it took to achieve that. Short of mass murder, anything I could do couldn’t be a higher cost to the world than malaria itself. I have no idea how I’d come up with $500 million, but I’d certainly try. Bill Gates could easily come up with that $500 million—so he did. In fact he endowed the Gates Foundation with $28 billion, and they’ve spent $1.3 billion of that on fighting malaria, saving hundreds of thousands of lives.

With this in mind, what is scope neglect really about? I think it’s about coordination. It’s not that people don’t care more about 200,000 birds than they do about 2,000; and it’s certainly not that they don’t care more about 50,000 children than they do about 500. Rather, the problem is that people don’t know how many other people are likely to donate, or how expensive the total project is likely to be; and we don’t know how much we should be willing to pay to save the life of a bird or a child.

Hence, what we basically do is give up; since we can’t actually assess the marginal utility of our donation dollars, we fall back on our automatic emotional response. Our minds focus on visualizing that single bird covered in oil, or that single child suffering from malaria. We then hope that the representativeness heuristic will guide us in how much to give. Or we follow social norms, and give as much as we think others would expect us to give.

While many in the effective altruism community take this to be a failing, they never actually say what we should do—they never give us a figure for how much money we should be willing to donate to save the life of a child. Instead they retreat to abstraction, saying that whatever it is we’re willing to give to save a child, we should be willing to give 50,000 times as much to save 50,000 children.

But it’s not that simple. A bigger project may attract more supporters; if the two occur in direct proportion, then constant donation is the optimal response. Since it’s probably not actually proportional, you likely should give somewhat more to causes that affect more people; but exactly how much more is an astonishingly difficult question. I really don’t blame people—or myself—for only giving a little bit more to causes with larger impact, because actually getting the right answer is so incredibly hard. This is why it’s so important that we have institutions like GiveWell and Charity Navigator which do the hard work to research the effectiveness of charities and tell us which ones we should give to.

Yet even if we can properly prioritize which charities to give to first, that still leaves the question of how much each of us should give. 1% of our income? 5%? 10%? 20%? 50%? Should we give so much that we throw ourselves into the same poverty we are trying to save others from?

In his earlier work Peter Singer seemed to think we should give so much that it throws us into poverty ourselves; he asked us to literally compare every single purchase and ask ourselves whether a year of lattes or a nicer car is worth a child’s life. Of course even he doesn’t live that way, and in his later books Singer seems to have realized this, and now recommends the far more modest standard that everyone give at least 1% of their income. (He himself gives about 33%, but he’s also very rich so he doesn’t feel it nearly as much.) I think he may have overcompensated; while if literally everyone gave at least 1% that would be more than enough to end world hunger and solve many other problems—world nominal GDP is over $70 trillion, so 1% of that is $700 billion a year—we know that this won’t happen. Some will give more, others less; most will give nothing at all. Hence I think those of us who give should give more than our share; hence I lean toward figures more like 5% or 10%.

But then, why not 50% or 90%? It is very difficult for me to argue on principle why we shouldn’t be expected to give that much. Because my income is such a small proportion of the total donations, the marginal utility of each dollar I give is basically constant—and quite high; if it takes about $1000 to save a child’s life on average, and each of these children will then live about 60 more years at about half the world average happiness, that’s about 30 QALY per $1000, or about 30 milliQALY per dollar. Even at my current level of income (incidentally about as much as I think the US basic income should be), I’m benefiting myself only about 150 microQALY per dollar—so my money is worth about 200 times as much to those children as it is to me.
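Worked out explicitly (same rough figures as in the paragraph above, so treat the exact numbers as illustrative):

```python
# Marginal utility of a donated dollar: distant child vs. myself.
cost_per_life = 1_000      # dollars to save one child's life, on average
years_gained = 60          # additional years the child lives
relative_happiness = 0.5   # half the world-average quality of life

qaly_per_dollar_child = years_gained * relative_happiness / cost_per_life
qaly_per_dollar_self = 150e-6   # my own benefit: ~150 microQALY per dollar

print(qaly_per_dollar_child)                         # 0.03 = 30 milliQALY/dollar
print(qaly_per_dollar_child / qaly_per_dollar_self)  # ~200: worth ~200x more to them
```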

So now we have to ask ourselves the really uncomfortable question: How much do I value those children, relative to myself? If I am at all honest, the value is not 1; I’m not prepared to die for someone I’ve never met 10,000 kilometers away in a nation I’ve never even visited, nor am I prepared to give away all my possessions and throw myself into the same starvation I am hoping to save them from. I value my closest friends and family approximately the same as myself, but I have to admit that I value random strangers considerably less.

Do I really value them at less than 1%, as these figures would seem to imply? I feel like a monster saying that, but maybe it really isn’t so terrible—after all, most economists seem to think that the optimal solidarity coefficient is in fact zero. Maybe we need to become more comfortable admitting that random strangers aren’t worth that much to us, simply so that we can coherently acknowledge that they aren’t worth nothing. Very few of us actually give away all our possessions, after all.

Then again, what do we mean by worth? I can say from direct experience that a single migraine causes me vastly more pain than learning about the death of 200,000 people in an earthquake in Southeast Asia. And while I gave about $100 to the relief efforts involved in that earthquake, I’ve spent considerably more on migraine treatments—thousands, once you include health insurance. But given the chance, would I be willing to suffer a migraine to prevent such an earthquake? Without hesitation. So the amount of pain we feel is not the same as the amount of money we pay, which is not the same as what we would be willing to sacrifice. I think the latter is more indicative of how much people’s lives are really worth to us—but then, what we pay is what has the most direct effect on the world.

It’s actually possible to justify not dying or selling all my possessions even if my solidarity coefficient is much higher—it just leads to some really questionable conclusions. Essentially the argument is this: I am an asset. I have what economists call “human capital”—my health, my intelligence, my education—that gives me the opportunity to affect the world in ways those children cannot. In my ideal (albeit improbable) imagined future in which I actually become President of the World Bank and have the authority to set global development policy, I myself could have a marginal impact on the order of megaQALY—millions of person-years of better life. In the far more likely scenario in which I attain some mid-level research or advisory position, I could be one of thousands of people who together have that sort of impact—which still means my own marginal effect is on the order of kiloQALY. And clearly it’s true that if I died, or even if I sold all my possessions, these events would no longer be possible.

The problem with that reasoning is that it’s wildly implausible to say that everyone in the First World is in this same sort of position—Peter Singer can say that, and maybe I can say that, and indeed hundreds of development economists can say that—but at least 99.9% of the First World population are not development economists, nor are they physicists likely to invent cold fusion, nor biomedical engineers likely to cure HIV, nor aid workers who distribute anti-malaria nets and polio vaccines, nor politicians who set national policy, nor diplomats who influence international relations, nor authors whose bestselling books raise worldwide consciousness. Yet I am not comfortable saying that all the world’s teachers, secretaries, airline pilots and truck drivers should give away their possessions either. (Maybe all the world’s bankers and CEOs should—or at least most of them.)

Is it enough that our economy would collapse without teachers, secretaries, airline pilots and truck drivers? But this seems rather like the fact that if everyone in the world visited the same restaurant there wouldn’t be enough room. Surely we could do without any individual teacher, any individual truck driver? If everyone gave the same proportion of their income, 1% would be more than enough to end malaria and world hunger. But we know that everyone won’t give, and the job won’t get done if those of us who do give contribute only 1%.

Moreover, it’s also clearly not the case that everything I spend money on makes me more likely to become a successful and influential development economist. Buying a suit and a car actually clearly does—it’s much easier to get good jobs that way. Even leisure can be justified to some extent, since human beings need leisure and there’s no sense burning myself out before I get anything done. But do I need both of my video game systems? Couldn’t I buy a bit less Coke Zero? What if I watched a 20-inch TV instead of a 40-inch one? I still have free time; could I get another job and donate that money? This is the sort of question Peter Singer tells us to ask ourselves, and it quickly leads to a painfully spartan existence in which most of our time is spent thinking about whether what we’re doing is advancing or damaging the cause of ending world hunger. But then the cost of that stress and cognitive effort must itself be included; and how do you optimize your own cognitive effort? You need to think about the cost of thinking about the cost of thinking… and on and on. This is why bounded rationality modeling is hard, even though it’s plainly essential to both cognitive science and computer science. (John Stuart Mill wrote an essay that resonates deeply with me about how the pressure to change the world drove him into depression, and how he learned to accept that he could still change the world even if he weren’t constantly pressuring himself to do so—and indeed he did. James Mill set out to create in his son, John Stuart Mill, the greatest philosopher in the history of the world—and I believe that he succeeded.)

Perhaps we should figure out what proportion of the world’s people are likely to give, and how much we need altogether, and then assign the amount we expect from each of them based on that? The more money you ask from each, the fewer people are likely to give. This creates an optimization problem akin to setting the price of a product under monopoly—monopolies maximize profits by carefully balancing the quantity sold with the price at which they sell, and perhaps a similar balance would allow us to maximize development aid. But wouldn’t it be better if we could simply increase the number of people who give, so that we don’t have to ask so much of those who are generous? That means tax-funded foreign aid is the way to go, because it ensures coordination. And indeed I do favor increasing foreign aid to about 1% of GDP—in the US it is currently about $50 billion, 0.3% of GDP, a little more than 1% of the Federal budget. (Most people who say we should “cut” foreign aid don’t realize how small it already is.) But foreign aid is coercive; wouldn’t it be better if people would give voluntarily?
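To see why this resembles monopoly pricing, here is a toy model. It is purely illustrative: it assumes, with no empirical basis, that the fraction of people willing to give falls off linearly as the requested pledge rises.

```python
# Toy model: total giving as a function of the pledge asked of everyone.
# Like a monopolist's revenue, it is maximized at an interior "price".
def total_giving(pledge, max_tolerated=0.10):
    """Total donations (as a fraction of total income) if each person gives
    the pledge iff it is below their personal threshold, which is assumed
    to be uniformly distributed on [0, max_tolerated]."""
    fraction_willing = max(0.0, 1 - pledge / max_tolerated)
    return fraction_willing * pledge

best = max((total_giving(p / 1000), p / 1000) for p in range(101))
print(best)  # (0.025, 0.05): under these assumptions, asking for 5% is optimal
```

The real willingness curve is unknown, of course; that is exactly why a coordination mechanism like tax-funded aid is so attractive.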

I don’t have a simple answer. I don’t know how much other people’s lives ought to be worth to us, or what it means for our decisions once we assign that value. But I hope I’ve convinced you that this problem is an important one—and made you think a little more about scope neglect and why we have it.