Sheepskin effect doesn’t prove much

Sep 20 JDN 2459113

The sheepskin effect is the observation that the increase in income from graduating from college after four years, relative to going through college for only three years, is much larger than the increase in income from going through college for three years instead of two.

It has been suggested that this provides strong evidence that the returns to education are primarily due to signaling, and that education doesn’t provide any actual value. In this post I’m going to show why this view is mistaken. The sheepskin effect in fact tells us very little about the true value of college. (Noah Smith actually made a pretty decent argument that it provides evidence against signaling!)

To see this, consider two very simple models.

In both models, we’ll assume that markets are competitive but productivity is not directly observable, so employers sort you based on your education level and then pay a wage equal to the average productivity of people at your education level, compensated for the cost of getting that education.

Model 1:

In this model, people all start with the same productivity, and are randomly assigned by their life circumstances to go to either 0, 1, 2, 3, or 4 years of college. College itself has no long-term cost.

The first year of college you learn a lot, the next couple of years you don’t learn much because you’re trying to find your way, and then in the last year of college you learn a lot of specialized skills that directly increase your productivity.

So this is your productivity after x years of college:

Years of college    Productivity
0                   10
1                   17
2                   22
3                   25
4                   31

We assumed that you’d get paid your productivity, so these are also your wages.

The increase in income each year goes from +7, to +5, to +3, then jumps up to +6. So if you compare the 4-year-minus-3-year gap (+6) with the 3-year-minus-2-year gap (+3), you get a sheepskin effect.
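To make the arithmetic explicit, here is a quick Python sketch (mine, using only the table above):

```python
# Model 1: productivity (and hence wage) after x years of college.
wages = {0: 10, 1: 17, 2: 22, 3: 25, 4: 31}

# Year-over-year wage gains: +7, +5, +3, +6.
gaps = {x: wages[x] - wages[x - 1] for x in range(1, 5)}
print(gaps)  # {1: 7, 2: 5, 3: 3, 4: 6}

# The sheepskin effect: the senior-year gain (+6) exceeds the junior-year
# gain (+3), even though every year here genuinely raises productivity.
print(gaps[4] - gaps[3])  # 3
```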

Model 2:

In this model, college is useless and provides no actual benefits. People vary in their intrinsic productivity, which is also directly correlated with the difficulty of making it through college.

In particular, there are five types of people:

Type    Productivity    Cost per year of college
0       10              8
1       11              6
2       14              4
3       19              3
4       31              0

The wages for different levels of college education are as follows:

Years of college    Wage
0                   10
1                   17
2                   22
3                   25
4                   31

Notice that these are exactly the same wages as in Model 1. This is of course entirely intentional. In a moment I’ll show why this is a Nash equilibrium.

Consider the choice of how many years of college to attend. You know your type, so you know the cost of college to you. You want to maximize your net benefit, which is the wage you’ll get minus the total cost of going to college.

Let’s assume that if a given year of college isn’t worth it to you, you won’t continue past it to see whether further years might be.

For a type-0 person, they could get 10 by not going to college at all, or 17-(1)(8) = 9 by going for 1 year, so they don’t go at all.

For a type-1 person, they could get 10 by not going to college at all, or 17-(1)(6) = 11 by going for 1 year, or 22-(2)(6) = 10 by going for 2 years, so they stop after 1 year.
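Here is the same calculation as a quick Python sketch (mine; the wages and costs come from the tables above), applied to the first two types:

```python
# Net benefit = wage at y years of college, minus y times your type's per-year cost.
wages = [10, 17, 22, 25, 31]  # wage with 0..4 years of college
cost = {0: 8, 1: 6}           # per-year costs for types 0 and 1

for t, c in cost.items():
    print(f"type {t}:", [wages[y] - c * y for y in range(3)])
# type 0: [10, 9, 6]   -> the 1st year isn't worth it, so no college
# type 1: [10, 11, 10] -> 1 year is worth it, a 2nd is not, so stop at 1
```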

Filling out all the possibilities yields this table:

Years \ Type    0     1     2     3     4
0              10    10    10    10    10
1               9    11    13    14    17
2               -    10    14    16    22
3               -     -    13    19    25
4               -     -     -    19    30

I’d actually like to point out that it was much harder to find numbers that allowed me to make the sheepskin effect work in the second model, where education was all signaling. In the model where education provides genuine benefit, all I need to do is posit that the last year of college is particularly valuable (perhaps because high-level specialized courses are more beneficial to productivity). I could pretty much vary that parameter however I wanted, and get whatever magnitude of sheepskin effect I chose.

For the signaling model, I had to carefully calibrate the parameters so that the costs and benefits lined up just right to make sure that each type chose exactly the amount of college I wanted them to choose while still getting the desired sheepskin effect. It took me about two hours of very frustrating fiddling just to get numbers that worked. And that’s with the assumption that someone who finds 2 years of college not worth it won’t consider trying for 4 years of college (which, given the numbers above, they actually might want to), as well as the assumption that when type-3 individuals are indifferent between staying and dropping out they drop out.

And yet the sheepskin effect is supposed to be evidence that the world works like the signaling model?

I’m sure a more sophisticated model could make the signaling explanation a little more robust. The biggest limitation of these models is that once you observe someone’s education level, you immediately know their true productivity, whether it came from college or not. Realistically we should be allowing for unobserved variation that can’t be sorted out by years of college.

Maybe it seems implausible that the last year of college is actually more beneficial to your productivity than the previous years. This is probably the intuition behind the idea that sheepskin effects are evidence of signaling rather than genuine learning.

So how about this model?

Model 3:

As in the second model, there are five types of people, types 0, 1, 2, 3, and 4. They all start with the same level of productivity, and they have the same cost of going to college; but they get different benefits from going to college.

The problem is, people don’t start out knowing what type they are. Nor can they observe their productivity directly. All they can do is observe their experience of going to college and then try to figure out what type they must be.

Type 0s don’t benefit from college at all, and they know they are type 0; so they don’t go to college.

Type 1s benefit a tiny amount from college (+1 productivity per year), but don’t realize they are type 1s until after one year of college.

Type 2s benefit a little from college (+2 productivity per year), but don’t realize they are type 2s until after two years of college.

Type 3s benefit a moderate amount from college (+3 productivity per year), but don’t realize they are type 3s until after three years of college.

Type 4s benefit a great deal from college (+5 productivity per year), but don’t realize they are type 4s until after three years of college.

What then will happen? Type 0s will not go to college. Type 1s will go one year and then drop out. Type 2s will go two years and then drop out. Type 3s will go three years and then drop out. And type 4s will actually graduate.

That results in the following before-and-after productivity:

Type    Productivity before college    Years of college    Productivity after college
0       10                             0                   10
1       10                             1                   11
2       10                             2                   14
3       10                             3                   19
4       10                             4                   30

If each person is paid a wage equal to their productivity, there will be a huge sheepskin effect; wages only go up +1 for 1 year, +3 for 2 years, +5 for 3 years, but then they jump up to +11 for graduation. It appears that the benefit of that last year of college is more than the other three combined. But in fact it’s not; for any given individual, the benefits of college are the same each year. It’s just that college is more beneficial to the people who decided to stay longer.
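A small Python sketch (mine, using just the parameters above) reproduces the whole effect:

```python
# Model 3: everyone starts at productivity 10; type t gains `gain[t]` per year
# of college, and learns their type (and stops) after `years[t]` years.
base = 10
gain = {0: 0, 1: 1, 2: 2, 3: 3, 4: 5}  # productivity gain per year, by type
years = {t: t for t in range(5)}        # type t completes t years

wage = {t: base + gain[t] * years[t] for t in range(5)}
print(wage)  # {0: 10, 1: 11, 2: 14, 3: 19, 4: 30}

# Wage gains per additional year of schooling: +1, +3, +5, then +11 at graduation.
# A big sheepskin effect, even though each person's per-year benefit is constant.
print([wage[t] - wage[t - 1] for t in range(1, 5)])  # [1, 3, 5, 11]
```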

And I could of course change that assumption too, making the early years more beneficial, or varying the distribution of types, or adding more uncertainty—and so on. But it’s really not hard at all to make a model where college is beneficial and you observe a large sheepskin effect.

In reality, I am confident that some of the observed benefit of college is due to sorting—not the same thing as signaling—rather than the direct benefits of education. The earnings advantage of going to a top-tier school may be as much about the selection of students as it is about the actual quality of the education, since once you control for measures of student ability like GPA and test scores those benefits drop dramatically.

Moreover, I agree that it’s worth looking at this: Insofar as college is about sorting or signaling, it’s wasteful from a societal perspective, and we should be trying to find more efficient sorting mechanisms.

But I highly doubt that all the benefits of college are due to sorting or signaling; there definitely are a lot of important things that people learn in college, not just conventional academic knowledge like how to do calculus, but also broader skills like how to manage time, how to work in groups, and how to present ideas to others. Colleges also cultivate friendships and provide opportunities for networking and exposure to a diverse community. Judging by voting patterns, I’m going to go out on a limb and say that college also makes you a better citizen, which would be well worth it by itself.

The truth is, we don’t know exactly why college is beneficial. We certainly know that it is beneficial: Unemployment rates and median earnings are directly sorted by education level. Yes, even PhDs in philosophy and sociology have lower unemployment and higher incomes (on average) than the general population. (And of course PhDs in economics do better still.)

What we could, what we should, and what we must

May 27 JDN 2458266

In one of the most famous essays in all of ethical philosophy, Peter Singer argued that we are morally obligated to give so much to charity that we would effectively reduce ourselves to poverty only slightly better than the poverty our donations sought to prevent. His argument is a surprisingly convincing one, especially for such a radical proposition. Indeed, one of the core activities of the Effective Altruism movement has basically been finding ways to moderate Singer’s argument without giving up on its core principles, because it’s so obvious both that we ought to do much more to help people around the world and that there’s no way we’re ever going to do what that argument actually asks of us.

The most cost-effective charities in the world can save a human life for an average cost of under $4,000. The maneuver that Singer basically makes is quite simple: If you know that you could save someone’s life for $4,000, you have $4,000 to spend, and instead you spend that $4,000 on something else, aren’t you saying that whatever you did spend it on was more important than saving that person’s life? And is that really something you believe?

But if you think a little more carefully, it becomes clear that things are not quite so simple. You aren’t being paid $4,000 to kill someone, first of all. If you were willing to accept $4,000 as sufficient payment to commit a murder, you would be, quite simply, a monster. Implicitly the “infinite identical psychopath” of neoclassical rational agent models would be willing to do such a thing, but very few actual human beings—even actual psychopaths—are that callous.

Obviously, we must refrain from murdering people, even for amounts far in excess of $4,000. If you were offered the chance to murder someone for $4 billion, I can understand why you would be tempted to do such a thing. Think of what you could do with all that money! Not only would you and everyone in your immediate family be independently wealthy for life, you could donate billions of dollars to charity and save as much as a million lives. What’s one life for a million? Even then, I have a strong intuition that you shouldn’t commit this murder—but I have never been able to find a compelling moral argument for why. The best I’ve been able to come up with is a sort of Kantian notion: What if everyone did this?

Since the most plausible scenario is that the $4 billion comes from existing wealth, all those murders would simply be transferring wealth around, from unknown sources. If you stipulate where the wealth comes from, the dilemma can change quite a bit.

Suppose for example the $4 billion is confiscated from Bashar Al-Assad. That would be in itself a good thing, lessening the power of a genocidal tyrant. So we need to add that to the positive side of the ledger. It is probably worth killing one innocent person just to undermine Al-Assad’s power; indeed, the US Air Force certainly seems to think so, as they average more than one civilian fatality every day in airstrikes.

Now suppose the wealth was extracted by clever financial machinations that took just a few dollars out of every bank account in America. This would be in itself a bad thing, but perhaps not a terrible thing, especially since we’re planning on giving most of it to UNICEF. Those people should have given it anyway, right? This sounds like a pretty good movie, actually; a cyberpunk Robin Hood basically.

Next, suppose it was obtained by stealing the life savings of a million poor people in Africa. Now the method of obtaining the money is so terrible that it’s not clear that funneling it through UNICEF would compensate, even if you didn’t have to murder someone to get it.

Finally, suppose that the wealth is actually created anew—not printed money from the Federal Reserve, but some new technology that will increase the world’s wealth by billions of dollars yet requires the death of an innocent person to create. In this scenario, the murder has become something more like the inherent risk in human subjects biomedical research, and actually seems justifiable. And indeed, that fits with the Kantian answer, for if we all had the chance to kill one person in order to create something that would increase the wealth of the world by $4 billion, we could turn this planet into a post-scarcity utopia within a generation for fewer deaths than are currently caused by diabetes.

Anyway, my point here is that the detailed context of a decision actually matters a great deal. We can’t simply abstract away from everything else in the world and ask whether the money is worth the life.

When we consider this broader context with regard to the world’s most cost-effective charities, it becomes apparent that a small proportion of very dedicated people giving huge proportions of their income to charity is not the kind of world we want to see.

If I actually gave so much that I equalized my marginal utility of wealth to that of a child dying of malaria in Ghana, I would have to donate over 95% of my income—and well before that point, I would be homeless and impoverished. This actually seems penny-wise and pound-foolish even from the perspective of total altruism: If I stop paying rent, it gets a lot harder for me to finish my doctorate and become a development economist. And even if I never donated another dollar, the world would be much better off with one more good development economist than with even another $23,000 to the Against Malaria Foundation. Once you factor in the higher income I’ll have (and proportionately higher donations I’ll make), it’s obviously the wrong decision for me to give 95% of $25,000 today rather than 10% of $70,000 every year for the next 20 years after I graduate.
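The arithmetic behind that comparison (figures from the text; constant income and no discounting, just to keep it simple):

```python
# One-time gift of 95% of a grad-student income, versus a 10% tithe of a
# post-graduation income for 20 years (no discounting, for simplicity).
give_now = 0.95 * 25_000         # $23,750: roughly the $23,000 figure above
give_later = 0.10 * 70_000 * 20  # $140,000
print(give_now, give_later)
```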

But the optimal amount for me to donate from that perspective is whatever the maximum would be that I could give without jeopardizing my education and career prospects. This is almost certainly more than I am presently giving. Exactly how much more is actually not all that apparent: It’s not enough to say that I need to be able to pay rent, eat three meals a day, and own a laptop that’s good enough for programming and statistical analysis. There’s also a certain amount that I need for leisure, to keep myself at optimal cognitive functioning for the next several years. Do I need that specific video game, that specific movie? Surely not—but if I go the next ten years without ever watching another movie or playing another video game, I’m probably going to be in trouble psychologically. But what exactly is the minimum amount to keep me functioning well? And how much should I be willing to spend attending conferences? Those can be important career-building activities, but they can also be expensive wastes of time.

Singer acts as though jeopardizing your career prospects is no big deal, but this is clearly wrong: The harm isn’t just to your own well-being, but also to your productivity and earning power that could have allowed you to donate more later. You are a human capital asset, and you are right to invest in yourself. Exactly how much you should invest in yourself is a much harder question.
Such calculations are extremely difficult to do. There are all sorts of variables I simply don’t know, and don’t have any clear way of finding out. It’s not a good sign for an ethical theory when even someone with years of education and expertise on specifically that topic still can’t figure out the answer. Ethics is supposed to be something we can apply to everyone.

So I think it’s most helpful to think in those terms: What could we apply to everyone? What standard of donation would be high enough if we could get everyone on board?

World poverty is rapidly declining. The direct poverty gap at the UN poverty line of $1.90 per day is now only $80 billion. Realistically, we couldn’t simply close that gap precisely (there would also be all sorts of perverse incentives if we tried to do it that way). But the standard estimate that it would take about $300 billion per year in well-targeted spending to eliminate world hunger is looking very good.

How much would each person, just those in the middle class or above within the US or the EU, have to give in order to raise this much?
89% of US income is received by the top 60% of households (who I would say are unambiguously “middle class or above”). Income inequality is not as extreme within the EU, so the proportion of income received by the top 60% seems to be more like 75%.

89% of US GDP plus 75% of EU GDP is altogether about $29 trillion per year. This means that in order to raise $300 billion, each person in the middle class or above would need to donate just over one percent of their income.
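Here is that back-of-the-envelope calculation in full; the GDP figures ($20 trillion for the US, $15 trillion for the EU) are my assumed round numbers, chosen to match the $29 trillion total:

```python
# Income received by the "middle class or above" in the US and EU combined.
us_income = 0.89 * 20e12   # 89% of ~$20T US GDP
eu_income = 0.75 * 15e12   # 75% of ~$15T EU GDP
total = us_income + eu_income   # ~ $29 trillion per year

needed = 300e9                  # ~$300B/year to eliminate world hunger
print(needed / total)           # ~ 0.0103: just over 1% of income
```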

Not 95%. Not 25%. Not even 10%. Just 1%. That would be enough.

Of course, more is generally better—at least until you start jeopardizing your career prospects. So by all means, give 2% or 5% or even 10%. But I really don’t think it’s helpful to make people feel guilty about not giving 95% when all we really needed was for everyone to give 1%.

There is an important difference between what we could do, what we should do, and what we must do.

What we must do are moral obligations so strong they are essentially inviolable: We must not murder people. There may be extreme circumstances where exceptions can be made (such as collateral damage in war), and we can always come up with hypothetical scenarios that would justify almost anything, but for the vast majority of people the vast majority of time, these ethical rules are absolutely binding.

What we should do are moral obligations that are strong enough to be marks against your character if you break them, but not so absolutely binding that you have to be a monster not to follow them. This is where I put donating at least 1% of your income. (This is also where I put being vegetarian, but perhaps that is a topic for another time.) You really ought to do it, and you are doing something wrongful if you don’t—but most people don’t, and you are not a terrible person if you don’t.

This latter category is in part socially constructed, based on the norms people actually follow. Today, slavery is obviously a grave crime, and to be a human trafficker who participates in it you must be a psychopath. But two hundred years ago, things were somewhat different: Slavery was still wrong, yes, but it was quite possible to be an ordinary person who was generally an upstanding citizen in most respects and yet still own slaves. I would still condemn people who owned slaves back then, but not nearly as forcefully as I would condemn someone who owned slaves today. Two hundred years from now, perhaps vegetarianism will move up a category: The norm will be that everyone eats only plants, and someone who went out of their way to kill and eat a pig would have to be a psychopath. Eating meat is already wrong today—but it will be more wrong in the future. I’d say the same about donating 1% of your income, but actually I’m hoping that by two hundred years from now there will be no more poverty left to eradicate, and donation will no longer be necessary.

Finally, there is what we could do—supererogatory, even heroic actions of self-sacrifice that would make the world a better place, but cannot be reasonably expected of us. This is where donating 95% or even 25% of your income would fall. Yes, absolutely, that would help more people than donating 1%; but you don’t owe the world that much. It’s not wrong for you to contribute less than this. You don’t need to feel guilty for not giving this much.

But I do want to make you feel guilty if you don’t give at least 1%. Don’t tell me you can’t. You can. If your income is $30,000 per year, that’s $300 per year. If you needed that much for a car repair, or dental work, or fixing your roof, you’d find a way to come up with it. No one in the First World middle class is that liquidity-constrained. It is true that half of Americans say they couldn’t come up with $400 in an emergency, but I frankly don’t believe it. (I believe it for the bottom 25% or so, who are actually poor; but not half of Americans.) If you have even one credit card that’s not maxed out, you can do this—and frankly even if a card is maxed out, you can probably call them and get them to raise your limit. There is something you could cut out of your spending that would allow you to get back 1% of your annual income. I don’t know what it is, necessarily: Restaurants? Entertainment? Clothes? But I’m not asking you to give a third of your income—I’m asking you to give one penny out of every dollar.

I give considerably more than that; my current donation target is 8% and I’m planning on raising it to 10% or more once I get a high-paying job. I live on a grad student salary which is less than the median personal income in the US. So I know it can be done. But I am very intentionally not asking you to give this much; that would be above and beyond the call of duty. I’m only asking you to give 1%.

Moral responsibility does not inherit across generations

JDN 2457548

In last week’s post I made a sharp distinction between believing in human progress and believing that colonialism was justified. To make this argument, I relied upon a moral assumption that seems to me perfectly obvious, and probably would to most ethicists as well: Moral responsibility does not inherit across generations, and people are only responsible for their individual actions.

But in fact this principle is controversial in many circles. When I read utterly nonsensical arguments like this one from the aptly-named Race Baitr saying that White people have no role to play in the liberation of Black people, apparently because our blood is somehow tainted by the crimes of our ancestors, it becomes apparent to me that this principle is not obvious to everyone, and therefore is worth defending. Indeed, many applications of the concept of “White Privilege” seem to ignore this principle, speaking as though racism is not something one does or participates in, but something that one is simply by being born with less melanin. Here’s a Salon interview specifically rejecting the proposition that racism is something one does:

For white people, their identities rest on the idea of racism as about good or bad people, about moral or immoral singular acts, and if we’re good, moral people we can’t be racist – we don’t engage in those acts. This is one of the most effective adaptations of racism over time—that we can think of racism as only something that individuals either are or are not “doing.”

If racism isn’t something one does, then what in the world is it? It’s all well and good to talk about systems and social institutions, but ultimately systems and social institutions are made of human behaviors. If you think most White people aren’t doing enough to combat racism (which sounds about right to me!), say that—don’t make some bizarre accusation that simply by existing we are inherently racist. (Also: We? I’m only 75% White, so am I only 75% inherently racist?) And please, stop redefining the word “racism” to mean something other than what everyone uses it to mean; “White people are snakes” is in fact a racist sentiment (and yes, one I’ve actually heard–indeed, here is the late Muhammad Ali comparing all White people to rattlesnakes, and Huffington Post fawning over him for it).

Racism is clearly more common and typically worse when performed by White people against Black people—but contrary to the claims of some social justice activists the White perpetrator and Black victim are not part of the definition of racism. Similarly, sexism is more common and more severe committed by men against women, but that doesn’t mean that “men are pigs” is not a sexist statement (and don’t tell me you haven’t heard that one). I don’t have a good word for bigotry by gay people against straight people (“heterophobia”?) but it clearly does happen on occasion, and similarly cannot be defined out of existence.

I wouldn’t care so much that you make this distinction between “racism” and “racial prejudice”, except that it’s not the normal usage of the word “racism” and therefore confuses people, and also this redefinition clearly is meant to serve a political purpose that is quite insidious, namely making excuses for the most extreme and hateful prejudice as long as it’s committed by people of the appropriate color. If “White people are snakes” is not racism, then the word has no meaning.

Not all discussions of “White Privilege” are like this, of course; this article from Occupy Wall Street actually does a fairly good job of making “White Privilege” into a sensible concept, albeit still not a terribly useful one in my opinion. I think the useful concept is oppression—the problem here is not how we are treating White people, but how we are treating everyone else. What privilege gives you is “the freedom to be who you are.” Shouldn’t everyone have that?

Almost all the so-called “benefits” or “perks” associated with privilege are actually forgone harms—they are not good things done to you, but bad things not done to you. “But benefitting from racist systems doesn’t mean that everything is magically easy for us. It just means that as hard as things are, they could always be worse.” No, that is not what the word “benefit” means. The word “benefit” means you would be worse off without it—and in most cases that simply isn’t true. Many White people obviously think that it is true—which is probably a big reason why so many White people fight so hard to defend racism: you’ve convinced them it is in their self-interest. But, with rare exceptions, it is not; most racial discrimination has literally zero long-run benefit. It’s just bad. Maybe if we helped people appreciate that more, they would be less resistant to fighting racism!

The only features of “privilege” that really make sense as benefits are those that occur in a state of competition—like being more likely to be hired for a job or get a loan—but one of the most important insights of economics is that competition is nonzero-sum, and fairer competition ultimately means a more efficient economy and thus more prosperity for everyone.

But okay, let’s set that aside and talk about this core question of what sort of responsibility we bear for the acts of our ancestors. Many White people clearly do feel deep shame about what their ancestors (or people the same color as their ancestors!) did hundreds of years ago. The psychological reactance to that shame may actually be what makes so many White people deny that racism even exists (or exists anymore)—though a majority of Americans of all races do believe that racism is still widespread.

We also apply some sense of moral responsibility applied to whole races quite frequently. We speak of a policy “benefiting White people” or “harming Black people” and quickly elide the distinction between harming specific people who are Black, and somehow harming “Black people” as a group. The former happens all the time—the latter is utterly nonsensical. Similarly, we speak of a “debt owed by White people to Black people” (which might actually make sense in the very narrow sense of economic reparations, because people do inherit money! They probably shouldn’t, that is literally feudalist, but in the existing system they in fact do), which makes about as much sense as a debt owed by tall people to short people. As Walter Michaels pointed out in The Trouble with Diversity (which I highly recommend), because of this bizarre sense of responsibility we are often in the habit of “apologizing for something you didn’t do to people to whom you didn’t do it (indeed to whom it wasn’t done)”. It is my responsibility to condemn colonialism (which I indeed do), to fight to ensure that it never happens again; it is not my responsibility to apologize for colonialism.

This makes some sense in evolutionary terms; it’s part of the all-encompassing tribal paradigm, wherein human beings come to identify themselves with groups and treat those groups as the meaningful moral agents. It’s much easier to maintain the cohesion of a tribe against the slings and arrows (sometimes quite literal) of outrageous fortune if everyone believes that the tribe is one moral agent worthy of ultimate concern.

This concept of racial responsibility is clearly deeply ingrained in human minds, for it appears in some of our oldest texts, including the Bible: “You shall not bow down to them or worship them; for I, the Lord your God, am a jealous God, punishing the children for the sin of the parents to the third and fourth generation of those who hate me” (Exodus 20:5).

Why is inheritance of moral responsibility across generations nonsensical? Any number of reasons, take your pick. The economist in me leaps to “Ancestry cannot be incentivized.” There’s no point in holding people responsible for things they can’t control, because in doing so you will not in any way alter behavior. The Stanford Encyclopedia of Philosophy article on moral responsibility takes it as so obvious that people are only responsible for actions they themselves did that they don’t even bother to mention it as an assumption. (Their big question is how to reconcile moral responsibility with determinism, which turns out to be not all that difficult.)

An interesting counter-argument might be that descent can be incentivized: You could use rewards and punishments applied to future generations to motivate current actions. But this is actually one of the ways that incentives clearly depart from moral responsibilities; you could incentivize me to do something by threatening to murder 1,000 children in China if I don’t, but even if it was in fact something I ought to do, it wouldn’t be those children’s fault if I didn’t do it. They wouldn’t deserve punishment for my inaction—I might, and you certainly would for using such a cruel incentive.

Moreover, there’s a problem with dynamic consistency here: Once the action is already done, what’s the sense in carrying out the punishment? This is why a moral theory of punishment can’t merely be based on deterrence—the fact that you could deter a bad action by some other less-bad action doesn’t make the less-bad action necessarily a deserved punishment, particularly if it is applied to someone who wasn’t responsible for the action you sought to deter. In any case, people aren’t thinking that we should threaten to punish future generations if people are racist today; they are feeling guilty that their ancestors were racist generations ago. That doesn’t make any sense even on this deterrence theory.

There’s another problem with trying to inherit moral responsibility: People have lots of ancestors. Some of my ancestors were most likely rapists and murderers; most were ordinary folk; a few may have been great heroes—and this is true of just about anyone anywhere. We all have bad ancestors, great ancestors, and, mostly, pretty good ancestors. 75% of my ancestors are European, but 25% are Native American; so if I am to apologize for colonialism, should I be apologizing to myself? (Only 75%, perhaps?) If you go back enough generations, literally everyone is related—and you may only have to go back about 4,000 years. That’s historical time.

Of course, we wouldn’t be different colors in the first place if there weren’t some differences in ancestry, but there is a huge amount of gene flow between different human populations. The US is a particularly mixed place; because most Black Americans are quite genetically mixed, it is about as likely that any randomly-selected Black person in the US is descended from a slaveowner as it is that any randomly-selected White person is. (Especially since there were a large number of Black slaveowners in Africa and even some in the United States.) What moral significance does this have? Basically none! That’s the whole point; your ancestors don’t define who you are.

If these facts do have any moral significance, it is to undermine the sense most people seem to have that there are well-defined groups called “races” that exist in reality, to which culture responds. No; races were created by culture. I’ve said this before, but it bears repeating: The “races” we hold most dear in the US, White and Black, are in fact the most nonsensical. “Asian” and “Native American” at least almost make sense as categories, though Chippewa are more closely related to Ainu than Ainu are to Papuans. “Latino” isn’t utterly incoherent, though it includes as much Aztec as it does Iberian. But “White” is a club one can join or be kicked out of, while “Black” encompasses the majority of human genetic diversity.

Sex is a real thing—while there are intermediate cases of course, broadly speaking humans, like most metazoa, are sexually dimorphic and come in “male” and “female” varieties. So sexism took a real phenomenon and applied cultural dynamics to it; but that’s not what happened with racism. Insofar as there was a real phenomenon, it was extremely superficial—quite literally skin deep. In that respect, race is more like class—a categorization that is itself the result of social institutions.

To be clear: Does the fact that we don’t inherit moral responsibility from our ancestors absolve us from doing anything to rectify the inequities of racism? Absolutely not. Not only is there plenty of present discrimination going on we should be fighting, there are also inherited inequities due to the way that assets and skills are passed on from one generation to the next. If my grandfather stole a painting from your grandfather and both our grandfathers are dead but I am now hanging that painting in my den, I don’t owe you an apology—but I damn well owe you a painting.

The further we get from past discrimination, the harder it becomes to make reparations, but all hope is not lost; we still have the option of trying to reset everyone’s status to the same at birth and maintaining equality of opportunity from there. Of course we’ll never achieve total equality of opportunity—but we can get much closer than we presently are.

We could start by establishing an extremely high estate tax—on the order of 99%—because no one has a right to be born rich. Free public education is another good way of equalizing the distribution of “human capital” that would otherwise be concentrated in particular families, and expanding it to higher education would make it that much better. It even makes sense, at least in the short run, to establish some affirmative action policies that are race-conscious and sex-conscious, because there are so many biases in the opposite direction that sometimes you must fight bias with bias.

Actually what I think we should do in hiring, for example, is assemble a pool of applicants based on demographic quotas to ensure a representative sample, and then anonymize the applications and assess them on merit. This way we do ensure representation and reduce bias, but don’t ever end up hiring anyone other than the most qualified candidate. But nowhere should we think that this is something that White men “owe” to women or Black people; it’s something that people should do in order to correct the biases that otherwise exist in our society. Similarly with regard to sexism: Women exhibit just as much unconscious bias against other women as men do. This is not “men” hurting “women”—this is a set of unconscious biases found in almost everyone, and social structures found almost everywhere, that systematically discriminate against people because they are women.

Perhaps by understanding that this is not about which “team” you’re on (which tribe you’re in), but what policy we should have, we can finally make these biases disappear, or at least fade to the point of being negligible.

Scope neglect and the question of optimal altruism

JDN 2457090 EDT 16:15.

We’re now on Eastern Daylight Time because of this bizarre tradition of shifting our time zone forward for half of the year. It’s supposed to save energy, but a natural experiment in Indiana suggests it actually increases energy demand. So why do we do it? Like every ridiculous tradition (have you ever tried to explain Groundhog Day to someone from another country?), we do it because we’ve always done it.
This week’s topic is scope neglect, one of the most pervasive—and pernicious—cognitive heuristics human beings face. It poses a great many challenges not only practically but also theoretically—it raises what I call the question of optimal altruism.

The question is simple to ask yet remarkably challenging to answer: How much should we be willing to sacrifice in order to benefit others? If we think of this as a number, your solidarity coefficient (s) is the ratio of the cost you are willing to pay to the benefit your action has for someone else: you are willing to take an action that helps someone else whenever s B > C.

This is analogous to the biological concept of relatedness (r), to which Hamilton’s Rule applies: r B > C. Solidarity is the psychological analogue; instead of valuing people based on their genetic similarity to you, you value them based on… well, that’s the problem.
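In case the notation isn’t clear, here is a trivial Python sketch of the decision rule (the particular numbers are just for illustration):

```python
# With solidarity coefficient s, you help whenever s * B > C: the benefit to
# them, discounted by how much you value them, exceeds your own cost.
def should_help(s: float, benefit: float, cost: float) -> bool:
    return s * benefit > cost

# With s = 0.1, you would pay up to $100 to give someone else $1,000 of benefit.
print(should_help(0.1, 1000, 99))   # True
print(should_help(0.1, 1000, 150))  # False
```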

I can easily place upper and lower bounds: The lower bound is zero: You should definitely be willing to sacrifice something to help other people—otherwise you are a psychopath. The upper bound is one: There’s no point in paying more cost than you produce in benefit, and in fact even paying the same cost to yourself as you yield in benefits for other people doesn’t make a lot of sense, because it means that your own self-interest is meaningless and the fact that you understand your own needs better than the needs of others is also irrelevant.

But beyond that, it gets a lot harder. Should it be 90%? 50%? 10%? 1%? How should it vary between friends versus family versus strangers? It’s really hard to say. And this inability to precisely decide how much other people should be worth to us may be part of why we suffer scope neglect in the first place.

Scope neglect is the fact that we are not willing to expend effort or money in direct proportion to the benefit it would have. When different groups were asked how much they would be willing to donate in order to save the lives of 2,000 birds, 20,000 birds, or 200,000 birds, the answers they gave were statistically indistinguishable—always about $80. But however much a bird’s life is worth to you, shouldn’t 200,000 birds be worth, well, 200,000 times as much? In fact, more than that, because the marginal utility of wealth is decreasing, but I see no reason to think that the marginal utility of birds decreases nearly as fast.

But therein lies the problem: Usually we can’t pay 200,000 times as much. I’d feel like a horrible person if I weren’t willing to expend at least $10 or an equivalent amount of effort in order to save a bird. To save 200,000 birds that means I’d owe $2 million—and I simply don’t have $2 million.

You can get similar results to the bird experiment if you use children—though, as one might hope, the absolute numbers are a bit bigger, usually more like $500 to $1000. (And this, it turns out, is about how much it actually costs to save a child’s life by a particularly efficient means, such as anti-malaria nets, de-worming, or direct cash transfer. So please, by all means, give $1000 to UNICEF or the Against Malaria Foundation. If you can’t give $1000, give $100; if you can’t give $100, give $10.) It doesn’t much matter whether you say that the project will save 500 children, 5,000 children, or 50,000 children—people still will give about $500 to $1000. But once again, if I’m willing to spend $1000 to save a child—and I definitely am—how much should I be willing to spend to end malaria, which kills 500,000 children a year? Apparently $500 million, which I not only don’t have, but almost certainly will never make cumulatively through my entire life. ($2 million, on the other hand, I almost certainly will make cumulatively—the median income of an economist is $90,000 per year, so if I work for at least 22 years with that as my average income I’ll have cumulatively made $2 million. My net wealth may never be that high—though if I get better positions, or I’m lucky enough or clever enough with the stock market it might—but my cumulative income almost certainly will. Indeed, the average gain in cumulative income from a college degree is about $1 million. Because it takes time—time is money—and loans carry interest, this gives it a net present value of about $300,000.)
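For the record, here is the back-of-the-envelope arithmetic (all figures from the text):

```python
# Scaling "what one life is worth to me" up to the full scope of the problem
# quickly exceeds what any individual actually has.
print(10 * 200_000)     # $2,000,000 to save 200,000 birds at $10 each
print(1_000 * 500_000)  # $500,000,000 per year at $1,000 per child saved
print(90_000 * 22)      # $1,980,000: ~22 years of a median economist's income
```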

But maybe scope neglect isn’t such a bad thing after all. There is a very serious problem with these sorts of moral dilemmas: The question didn’t say I would single-handedly save 200,000 birds—and indeed, that notion seems quite ridiculous. If I knew that I could actually save 200,000 birds and I were the only one who could do it, dammit, I would try to come up with that $2 million. I might not succeed, but I really would try as hard as I could.

And if I could single-handedly end malaria, I hereby vow that I would do anything it took to achieve that. Short of mass murder, anything I could do couldn’t be a higher cost to the world than malaria itself. I have no idea how I’d come up with $500 million, but I’d certainly try. Bill Gates could easily come up with that $500 million—so he did. In fact he endowed the Gates Foundation with $28 billion, and they’ve spent $1.3 billion of that on fighting malaria, saving hundreds of thousands of lives.

With this in mind, what is scope neglect really about? I think it’s about coordination. It’s not that people don’t care more about 200,000 birds than they do about 2,000; and it’s certainly not that they don’t care more about 50,000 children than they do about 500. Rather, the problem is that people don’t know how many other people are likely to donate, or how expensive the total project is likely to be; and we don’t know how much we should be willing to pay to save the life of a bird or a child.

Hence, what we basically do is give up; since we can’t actually assess the marginal utility of our donation dollars, we fall back on our automatic emotional response. Our mind focuses itself on visualizing that single bird covered in oil, or that single child suffering from malaria. We then hope that the representativeness heuristic will guide us in how much to give. Or we follow social norms, and give as much as we think others would expect us to give.

While many in the effective altruism community take this to be a failing, they never actually say what we should do—they never give us a figure for how much money we should be willing to donate to save the life of a child. Instead they retreat to abstraction, saying that whatever it is we’re willing to give to save a child, we should be willing to give 50,000 times as much to save 50,000 children.

But it’s not that simple. A bigger project may attract more supporters; if the two occur in direct proportion, then constant donation is the optimal response. Since it’s probably not actually proportional, you likely should give somewhat more to causes that affect more people; but exactly how much more is an astonishingly difficult question. I really don’t blame people—or myself—for only giving a little bit more to causes with larger impact, because actually getting the right answer is so incredibly hard. This is why it’s so important that we have institutions like GiveWell and Charity Navigator which do the hard work to research the effectiveness of charities and tell us which ones we should give to.

Yet even if we can properly prioritize which charities to give to first, that still leaves the question of how much each of us should give. 1% of our income? 5%? 10%? 20%? 50%? Should we give so much that we throw ourselves into the same poverty we are trying to save others from?

In his earlier work Peter Singer seemed to think we should give so much that it throws us into poverty ourselves; he asked us to literally compare every single purchase and ask ourselves whether a year of lattes or a nicer car is worth a child’s life. Of course even he doesn’t live that way, and in his later books Singer seems to have realized this, and now recommends the far more modest standard that everyone give at least 1% of their income. (He himself gives about 33%, but he’s also very rich so he doesn’t feel it nearly as much.) I think he may have overcompensated; while if literally everyone gave at least 1% that would be more than enough to end world hunger and solve many other problems—world nominal GDP is over $70 trillion, so 1% of that is $700 billion a year—we know that this won’t happen. Some will give more, others less; most will give nothing at all. Hence I think those of us who give should give more than our share; hence I lean toward figures more like 5% or 10%.

But then, why not 50% or 90%? It is very difficult for me to argue on principle why we shouldn’t be expected to give that much. Because my income is such a small proportion of the total donations, the marginal utility of each dollar I give is basically constant—and quite high; if it takes about $1000 to save a child’s life on average, and each of these children will then live about 60 more years at about half the world average happiness, that’s about 30 QALY per $1000, or about 30 milliQALY per dollar. Even at my current level of income (incidentally about as much as I think the US basic income should be), I’m benefiting myself only about 150 microQALY per dollar—so my money is worth about 200 times as much to those children as it is to me.
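Spelled out (all the figures are the ones I just gave):

```python
# Comparing the marginal utility of my dollars to me versus to those children.
qaly_per_1000 = 60 * 0.5                 # 60 extra life-years at half average happiness
per_dollar_them = qaly_per_1000 / 1000   # 0.03 QALY = 30 milliQALY per dollar
per_dollar_me = 150e-6                   # ~150 microQALY per dollar for myself
print(per_dollar_them / per_dollar_me)   # 200.0: worth ~200 times as much to them
```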

So now we have to ask ourselves the really uncomfortable question: How much do I value those children, relative to myself? If I am at all honest, the value is not 1; I’m not prepared to die for someone I’ve never met 10,000 kilometers away in a nation I’ve never even visited, nor am I prepared to give away all my possessions and throw myself into the same starvation I am hoping to save them from. I value my closest friends and family approximately the same as myself, but I have to admit that I value random strangers considerably less.

Do I really value them at less than 1%, as these figures would seem to imply? I feel like a monster saying that, but maybe it really isn’t so terrible—after all, most economists seem to think that the optimal solidarity coefficient is in fact zero. Maybe we need to become more comfortable admitting that random strangers aren’t worth that much to us, simply so that we can coherently acknowledge that they aren’t worth nothing. Very few of us actually give away all our possessions, after all.

Then again, what do we mean by worth? I can say from direct experience that a single migraine causes me vastly more pain than learning about the death of 200,000 people in an earthquake in Southeast Asia. And while I gave about $100 to the relief efforts involved in that earthquake, I’ve spent considerably more on migraine treatments—thousands, once you include health insurance. But given the chance, would I be willing to suffer a migraine to prevent such an earthquake? Without hesitation. So the amount of pain we feel is not the same as the amount of money we pay, which is not the same as what we would be willing to sacrifice. I think the latter is more indicative of how much people’s lives are really worth to us—but then, what we pay is what has the most direct effect on the world.

It’s actually possible to justify not dying or selling all my possessions even if my solidarity coefficient is much higher—it just leads to some really questionable conclusions. Essentially the argument is this: I am an asset. I have what economists call “human capital”—my health, my intelligence, my education—that gives me the opportunity to affect the world in ways those children cannot. In my ideal (albeit improbable) imagined future in which I actually become President of the World Bank and have the authority to set global development policy, I myself could have a marginal impact measured in megaQALY—millions of person-years of better life. In the far more likely scenario in which I attain some mid-level research or advisory position, I could be one of thousands of people who together have that sort of impact—which still means my own marginal effect is on the order of kiloQALY. And clearly it’s true that if I died, or even if I sold all my possessions, these events would no longer be possible.

The problem with that reasoning is that it’s wildly implausible to say that everyone in the First World is in this same sort of position—Peter Singer can say that, and maybe I can say that, and indeed hundreds of development economists can say that—but at least 99.9% of the First World population are not development economists, nor are they physicists likely to invent cold fusion, nor biomedical engineers likely to cure HIV, nor aid workers who distribute anti-malaria nets and polio vaccines, nor politicians who set national policy, nor diplomats who influence international relations, nor authors whose bestselling books raise worldwide consciousness. Yet I am not comfortable saying that all the world’s teachers, secretaries, airline pilots and truck drivers should give away their possessions either. (Maybe all the world’s bankers and CEOs should—or at least most of them.)

Is it enough that our economy would collapse without teachers, secretaries, airline pilots and truck drivers? But this seems rather like the fact that if everyone in the world visited the same restaurant there wouldn’t be enough room. Surely we could do without any individual teacher, any individual truck driver? If everyone gave the same proportion of their income, 1% would be more than enough to end malaria and world hunger. But we know that everyone won’t give, and the job won’t get done if those of us who do give only give 1%.

Moreover, it’s also clearly not the case that everything I spend money on makes me more likely to become a successful and influential development economist. Buying a suit and a car actually clearly does—it’s much easier to get good jobs that way. Even leisure can be justified to some extent, since human beings need leisure and there’s no sense burning myself out before I get anything done. But do I need both of my video game systems? Couldn’t I buy a bit less Coke Zero? What if I watched a 20-inch TV instead of a 40-inch one? I still have free time; could I get another job and donate that money? This is the sort of question Peter Singer tells us to ask ourselves, and it quickly leads to a painfully spartan existence in which most of our time is spent thinking about whether what we’re doing is advancing or damaging the cause of ending world hunger. But then the cost of that stress and cognitive effort must be included as well; and how do you optimize your own cognitive effort? You need to think about the cost of thinking about the cost of thinking… and on and on. This is why bounded rationality modeling is hard, even though it’s plainly essential to both cognitive science and computer science. (John Stuart Mill wrote an essay that resonates deeply with me about how the pressure to change the world drove him into depression, and how he learned to accept that he could still change the world even if he weren’t constantly pressuring himself to do so—and indeed he did. James Mill set out to create in his son, John Stuart Mill, the greatest philosopher in the history of the world—and I believe that he succeeded.)

Perhaps we should figure out what proportion of the world’s people are likely to give, and how much we need altogether, and then assign the amount we expect from each of them based on that? The more money you ask from each, the fewer people are likely to give. This creates an optimization problem akin to setting the price of a product under monopoly—monopolies maximize profits by carefully balancing the quantity sold with the price at which they sell, and perhaps a similar balance would allow us to maximize development aid. But wouldn’t it be better if we could simply increase the number of people who give, so that we don’t have to ask so much of those who are generous? That means tax-funded foreign aid is the way to go, because it ensures coordination. And indeed I do favor increasing foreign aid to about 1% of GDP—in the US it is currently about $50 billion, 0.3% of GDP, a little more than 1% of the Federal budget. (Most people who say we should “cut” foreign aid don’t realize how small it already is.) But foreign aid is coercive; wouldn’t it be better if people would give voluntarily?
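To see the analogy, here is a toy Python sketch; the linear participation curve is entirely made up for illustration, not an empirical claim:

```python
# Toy model: ask everyone for a fraction `a` of income; assume (purely for
# illustration) that participation falls off linearly as the ask rises.
def total_given(a: float, total_income: float = 29e12) -> float:
    participation = max(0.0, 1 - 20 * a)   # at a 5% ask, nobody gives
    return a * participation * total_income

# Like a monopolist balancing price against quantity, pick the ask that
# maximizes total donations:
best = max((a / 1000 for a in range(0, 51)), key=total_given)
print(best, total_given(best))  # 0.025: a 2.5% ask is optimal under these assumptions
```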

I don’t have a simple answer. I don’t know how much other people’s lives ought to be worth to us, or what it means for our decisions once we assign that value. But I hope I’ve convinced you that this problem is an important one—and made you think a little more about scope neglect and why we have it.