Moral responsibility does not inherit across generations

JDN 2457548

In last week’s post I made a sharp distinction between believing in human progress and believing that colonialism was justified. To make this argument, I relied upon a moral assumption that seems to me perfectly obvious, and probably would to most ethicists as well: Moral responsibility does not inherit across generations, and people are only responsible for their individual actions.

But in fact this principle is not uncontroversial in many circles. When I read utterly nonsensical arguments like this one from the aptly-named Race Baitr saying that White people have no role to play in the liberation of Black people, apparently because our blood is somehow tainted by the crimes of our ancestors, it becomes apparent to me that this principle is not obvious to everyone, and is therefore worth defending. Indeed, many applications of the concept of “White Privilege” seem to ignore this principle, speaking as though racism is not something one does or participates in, but something that one simply is by being born with less melanin. Here’s a Salon interview specifically rejecting the proposition that racism is something one does:

For white people, their identities rest on the idea of racism as about good or bad people, about moral or immoral singular acts, and if we’re good, moral people we can’t be racist – we don’t engage in those acts. This is one of the most effective adaptations of racism over time—that we can think of racism as only something that individuals either are or are not “doing.”

If racism isn’t something one does, then what in the world is it? It’s all well and good to talk about systems and social institutions, but ultimately systems and social institutions are made of human behaviors. If you think most White people aren’t doing enough to combat racism (which sounds about right to me!), say that—don’t make some bizarre accusation that simply by existing we are inherently racist. (Also: We? I’m only 75% White, so am I only 75% inherently racist?) And please, stop redefining the word “racism” to mean something other than what everyone uses it to mean; “White people are snakes” is in fact a racist sentiment (and yes, one I’ve actually heard–indeed, here is the late Muhammad Ali comparing all White people to rattlesnakes, and Huffington Post fawning over him for it).

Racism is clearly more common and typically worse when performed by White people against Black people—but contrary to the claims of some social justice activists, the White perpetrator and Black victim are not part of the definition of racism. Similarly, sexism is more common and more severe when committed by men against women, but that doesn’t mean that “men are pigs” is not a sexist statement (and don’t tell me you haven’t heard that one). I don’t have a good word for bigotry by gay people against straight people (“heterophobia”?), but it clearly does happen on occasion, and similarly cannot be defined out of existence.

I wouldn’t care so much that you make this distinction between “racism” and “racial prejudice”, except that it’s not the normal usage of the word “racism” and therefore confuses people, and also this redefinition clearly is meant to serve a political purpose that is quite insidious, namely making excuses for the most extreme and hateful prejudice as long as it’s committed by people of the appropriate color. If “White people are snakes” is not racism, then the word has no meaning.

Not all discussions of “White Privilege” are like this, of course; this article from Occupy Wall Street actually does a fairly good job of making “White Privilege” into a sensible concept, albeit still not a terribly useful one in my opinion. I think the useful concept is oppression—the problem here is not how we are treating White people, but how we are treating everyone else. What privilege gives you is “the freedom to be who you are.” Shouldn’t everyone have that?

Almost all the so-called “benefits” or “perks” associated with “privilege” are actually forgone harms—they are not good things done to you, but bad things not done to you. “But benefitting from racist systems doesn’t mean that everything is magically easy for us. It just means that as hard as things are, they could always be worse.” No, that is not what the word “benefit” means. The word “benefit” means you would be worse off without it—and in most cases that simply isn’t true. Many White people obviously think that it is true—which is probably a big reason why so many White people fight so hard to defend racism: you’ve convinced them it is in their self-interest. But, with rare exceptions, it is not; most racial discrimination has literally zero long-run benefit. It’s just bad. Maybe if we helped people appreciate that more, they would be less resistant to fighting racism!

The only features of “privilege” that really make sense as benefits are those that occur in a state of competition—like being more likely to be hired for a job or get a loan—but one of the most important insights of economics is that competition is nonzero-sum, and fairer competition ultimately means a more efficient economy and thus more prosperity for everyone.

But okay, let’s set that aside and talk about this core question of what sort of responsibility we bear for the acts of our ancestors. Many White people clearly do feel deep shame about what their ancestors (or people the same color as their ancestors!) did hundreds of years ago. The psychological reactance to that shame may actually be what makes so many White people deny that racism even exists (or exists anymore)—though a majority of Americans of all races do believe that racism is still widespread.

We also apply some sense of moral responsibility to whole races quite frequently. We speak of a policy “benefiting White people” or “harming Black people” and quickly elide the distinction between harming specific people who are Black, and somehow harming “Black people” as a group. The former happens all the time—the latter is utterly nonsensical. Similarly, we speak of a “debt owed by White people to Black people” (which might actually make sense in the very narrow sense of economic reparations, because people do inherit money! They probably shouldn’t—that is literally feudalist—but in the existing system they in fact do), which makes about as much sense as a debt owed by tall people to short people. As Walter Michaels pointed out in The Trouble with Diversity (which I highly recommend), because of this bizarre sense of responsibility we are often in the habit of “apologizing for something you didn’t do to people to whom you didn’t do it (indeed to whom it wasn’t done)”. It is my responsibility to condemn colonialism (which I indeed do) and to fight to ensure that it never happens again; it is not my responsibility to apologize for colonialism.

This makes some sense in evolutionary terms; it’s part of the all-encompassing tribal paradigm, wherein human beings come to identify themselves with groups and treat those groups as the meaningful moral agents. It’s much easier to maintain the cohesion of a tribe against the slings and arrows (sometimes quite literal) of outrageous fortune if everyone believes that the tribe is one moral agent worthy of ultimate concern.

This concept of racial responsibility is clearly deeply ingrained in human minds, for it appears in some of our oldest texts, including the Bible: “You shall not bow down to them or worship them; for I, the Lord your God, am a jealous God, punishing the children for the sin of the parents to the third and fourth generation of those who hate me” (Exodus 20:5).

Why is inheritance of moral responsibility across generations nonsensical? Any number of reasons, take your pick. The economist in me leaps to “Ancestry cannot be incentivized.” There’s no point in holding people responsible for things they can’t control, because in doing so you will not in any way alter behavior. The Stanford Encyclopedia of Philosophy article on moral responsibility takes it as so obvious that people are only responsible for actions they themselves did that they don’t even bother to mention it as an assumption. (Their big question is how to reconcile moral responsibility with determinism, which turns out to be not all that difficult.)

An interesting counter-argument might be that descent can be incentivized: You could use rewards and punishments applied to future generations to motivate current actions. But this is actually one of the ways that incentives clearly depart from moral responsibilities; you could incentivize me to do something by threatening to murder 1,000 children in China if I don’t, but even if it was in fact something I ought to do, it wouldn’t be those children’s fault if I didn’t do it. They wouldn’t deserve punishment for my inaction—I might, and you certainly would for using such a cruel incentive.

Moreover, there’s a problem with dynamic consistency here: Once the action is already done, what’s the sense in carrying out the punishment? This is why a moral theory of punishment can’t merely be based on deterrence—the fact that you could deter a bad action by some other less-bad action doesn’t make the less-bad action necessarily a deserved punishment, particularly if it is applied to someone who wasn’t responsible for the action you sought to deter. In any case, people aren’t thinking that we should threaten to punish future generations if people are racist today; they are feeling guilty that their ancestors were racist generations ago. That doesn’t make any sense even on this deterrence theory.

There’s another problem with trying to inherit moral responsibility: People have lots of ancestors. Some of my ancestors were most likely rapists and murderers; most were ordinary folk; a few may have been great heroes—and this is true of just about anyone anywhere. We all have bad ancestors, great ancestors, and, mostly, pretty good ancestors. 75% of my ancestors are European, but 25% are Native American; so if I am to apologize for colonialism, should I be apologizing to myself? (Only 75%, perhaps?) If you go back enough generations, literally everyone is related—and you may only have to go back about 4,000 years. That’s historical time.

Of course, we wouldn’t be different colors in the first place if there weren’t some differences in ancestry, but there is a huge amount of gene flow between different human populations. The US is a particularly mixed place; because most Black Americans are quite genetically mixed, it is about as likely that any randomly-selected Black person in the US is descended from a slaveowner as it is that any randomly-selected White person is. (Especially since there were a large number of Black slaveowners in Africa and even some in the United States.) What moral significance does this have? Basically none! That’s the whole point; your ancestors don’t define who you are.

If these facts do have any moral significance, it is to undermine the sense most people seem to have that there are well-defined groups called “races” that exist in reality, to which culture responds. No; races were created by culture. I’ve said this before, but it bears repeating: The “races” we hold most dear in the US, White and Black, are in fact the most nonsensical. “Asian” and “Native American” at least almost make sense as categories, though Chippewa are more closely related to Ainu than Ainu are to Papuans. “Latino” isn’t utterly incoherent, though it includes as much Aztec as it does Iberian. But “White” is a club one can join or be kicked out of, while “Black” encompasses the majority of human genetic diversity.

Sex is a real thing—while there are intermediate cases of course, broadly speaking humans, like most metazoa, are sexually dimorphic and come in “male” and “female” varieties. So sexism took a real phenomenon and applied cultural dynamics to it; but that’s not what happened with racism. Insofar as there was a real phenomenon, it was extremely superficial—quite literally skin deep. In that respect, race is more like class—a categorization that is itself the result of social institutions.

To be clear: Does the fact that we don’t inherit moral responsibility from our ancestors absolve us from doing anything to rectify the inequities of racism? Absolutely not. Not only is there plenty of present discrimination going on we should be fighting, there are also inherited inequities due to the way that assets and skills are passed on from one generation to the next. If my grandfather stole a painting from your grandfather and both our grandfathers are dead but I am now hanging that painting in my den, I don’t owe you an apology—but I damn well owe you a painting.

The further we get from past discrimination, the harder it becomes to make reparations, but all hope is not lost; we still have the option of trying to reset everyone’s status to the same at birth and maintaining equality of opportunity from there. Of course we’ll never achieve total equality of opportunity—but we can get much closer than we presently are.

We could start by establishing an extremely high estate tax—on the order of 99%—because no one has a right to be born rich. Free public education is another good way of equalizing the distribution of “human capital” that would otherwise be concentrated in particular families, and expanding it to higher education would make it that much better. It even makes sense, at least in the short run, to establish some affirmative action policies that are race-conscious and sex-conscious, because there are so many biases in the opposite direction that sometimes you must fight bias with bias.

Actually, what I think we should do in hiring, for example, is assemble a pool of applicants based on demographic quotas to ensure a representative sample, and then anonymize the applications and assess them on merit. This way we ensure representation and reduce bias, but never end up hiring anyone other than the most qualified candidate. But nowhere should we think that this is something White men “owe” to women or Black people; it’s something that people should do in order to correct the biases that otherwise exist in our society. Similarly with regard to sexism: Women exhibit just as much unconscious bias against other women as men do. This is not “men” hurting “women”—this is a set of unconscious biases found in people almost everywhere, and in social structures almost everywhere, that systematically discriminate against people because they are women.
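The two-stage procedure described above can be sketched in code. This is a minimal illustration, not a real hiring system; the applicant records, merit scores, and quota numbers are all hypothetical:

```python
import random

def shortlist(applicants, quota_per_group, n_hires=1, seed=0):
    """Stage 1: build a demographically representative pool via
    per-group quotas. Stage 2: anonymize and rank purely on merit."""
    rng = random.Random(seed)
    pool = []
    for group, quota in quota_per_group.items():
        members = [a for a in applicants if a["group"] == group]
        # Randomly sample up to `quota` applicants from each group.
        pool += rng.sample(members, min(quota, len(members)))
    # Anonymize: drop the demographic field before assessment.
    anonymized = [{"id": a["id"], "merit": a["merit"]} for a in pool]
    # Assess on merit alone.
    anonymized.sort(key=lambda a: a["merit"], reverse=True)
    return [a["id"] for a in anonymized[:n_hires]]

applicants = [
    {"id": 1, "group": "A", "merit": 88},
    {"id": 2, "group": "A", "merit": 75},
    {"id": 3, "group": "B", "merit": 92},
    {"id": 4, "group": "B", "merit": 70},
]
print(shortlist(applicants, {"A": 2, "B": 2}, n_hires=1))  # → [3]
```

Note the key design property: demographics affect only who gets into the pool, never the final ranking, so the most qualified candidate in the pool is always the one hired.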

Perhaps by understanding that this is not about which “team” you’re on (which tribe you’re in), but about what policy we should have, we can finally make these biases disappear, or at least fade until they are negligible.

Believing in civilization without believing in colonialism

JDN 2457541

In a post last week I presented some of the overwhelming evidence that society has been getting better over time, particularly since the start of the Industrial Revolution. I focused mainly on infant mortality rates—babies not dying—but there are lots of other measures you could use as well. Despite popular belief, poverty is rapidly declining, and is now the lowest it’s ever been. War is rapidly declining. Crime is rapidly declining in First World countries, and to the best of our knowledge crime rates are stable worldwide. Public health is rapidly improving. Lifespans are getting longer. And so on, and so on. It’s not quite true to say that every indicator of human progress is on an upward trend, but the vast majority of really important indicators are.

Moreover, there is every reason to believe that this great progress is largely the result of what we call “civilization”, even Western civilization: Stable, centralized governments, strong national defense, representative democracy, free markets, openness to global trade, investment in infrastructure, science and technology, secularism, a culture that values innovation, and freedom of speech and the press. We did not get here by Marxism, nor agrarian socialism, nor primitivism, nor anarcho-capitalism. We did not get here by fascism, nor theocracy, nor monarchy. This progress was built by the center-left welfare state, “social democracy”, “modified capitalism”, the system where free, open markets are coupled with a strong democratic government to protect and steer them.

This fact is basically beyond dispute; the evidence is overwhelming. The serious debate in development economics is over which parts of the Western welfare state are most conducive to raising human well-being, and which parts of the package are more optional. And even then, some things are fairly obvious: Stable government is clearly necessary, while speaking English is clearly optional.

Yet many people are resistant to this conclusion, or even offended by it, and I think I know why: They are confusing the results of civilization with the methods by which it was established.

The results of civilization are indisputably positive: Everything I just named above, especially babies not dying.

But the methods by which civilization was established are not; indeed, some of the greatest atrocities in human history are attributable at least in part to attempts to “spread civilization” to “primitive” or “savage” people.
It is therefore vital to distinguish between the result, civilization, and the processes by which it was effected, such as colonialism and imperialism.

First, it’s important not to overstate the link between civilization and colonialism.

We tend to associate colonialism and imperialism with White people from Western European cultures conquering other people in other cultures; but in fact colonialism and imperialism are basically universal to any human culture that attains sufficient size and centralization. India engaged in colonialism, Persia engaged in imperialism, China engaged in imperialism, the Mongols were of course major imperialists, and don’t forget the Ottoman Empire; and did you realize that Tibet and Mali were at one time imperialists as well? And of course there are a whole bunch of empires you’ve probably never heard of, like the Parthians and the Ghaznavids and the Umayyads. Even many of the people we’re accustomed to thinking of as innocent victims of colonialism were themselves imperialists—the Aztecs certainly were (they even sold people into slavery and used them for human sacrifice!), as were the Pequot, and the Iroquois may not have outright conquered anyone but were definitely at least “soft imperialists” the way that the US is today, spreading their influence around and using economic and sometimes military pressure to absorb other cultures into their own.

Of course, those were all civilizations, at least in the broadest sense of the word; but before that, it’s not that there wasn’t violence, it just wasn’t organized enough to be worthy of being called “imperialism”. The more general concept of intertribal warfare is a human universal, and some hunter-gatherer tribes actually engage in an essentially constant state of warfare we call “endemic warfare”. People have been grouping together to kill other people they perceived as different for at least as long as there have been people to do so.

This is of course not to excuse what European colonial powers did when they set up bases on other continents and exploited, enslaved, or even murdered the indigenous population. And the absolute numbers of people enslaved or killed are typically larger under European colonialism, mainly because European cultures became so powerful and conquered almost the entire world. Even if European societies were not uniquely predisposed to be violent (and I see no evidence to say that they were—humans are pretty much humans), they were more successful in their violent conquering, and so more people suffered and died. It’s also a first-mover effect: If the Ming Dynasty had supported Zheng He more in his colonial ambitions, I’d probably be writing this post in Mandarin and reflecting on why Asian cultures have engaged in so much colonial oppression.

While there is a deeply condescending paternalism (and often post-hoc rationalization of your own self-interested exploitation) involved in saying that you are conquering other people in order to civilize them, humans are also perfectly capable of committing atrocities for far less noble-sounding motives. There are holy wars such as the Crusades and ethnic genocides like in Rwanda, and the Arab slave trade was purely for profit and didn’t even have the pretense of civilizing people (not that the Atlantic slave trade was ever really about that anyway).

Indeed, I think it’s important to distinguish between colonialists who really did make some effort at civilizing the populations they conquered (like Britain, and also the Mongols actually) and those that clearly were just using that as an excuse to rape and pillage (like Spain and Portugal). This is similar to but not quite the same thing as the distinction between settler colonialism, where you send colonists to live there and build up the country, and exploitation colonialism, where you send military forces to take control of the existing population and exploit them to get their resources. Countries that experienced settler colonialism (such as the US and Australia) have fared a lot better in the long run than countries that experienced exploitation colonialism (such as Haiti and Zimbabwe).

The worst consequences of colonialism weren’t even really anyone’s fault, actually. The reason something like 98% of all Native Americans died as a result of European colonization was not that Europeans killed them—they did kill thousands of course, and I hope it goes without saying that that’s terrible, but it was a small fraction of the total deaths. The reason such a huge number died and whole cultures were depopulated was disease, and the inability of medical technology in any culture at that time to handle such a catastrophic plague. The primary cause was therefore accidental, and not really foreseeable given the state of scientific knowledge at the time. (I therefore think it’s wrong to consider it genocide—maybe democide.) Indeed, what really would have saved these people would be if Europe had advanced even faster into industrial capitalism and modern science, or else waited to colonize until they had; and then they could have distributed vaccines and antibiotics when they arrived. (Of course, there is evidence that a few European colonists used the diseases intentionally as biological weapons, which no amount of vaccine technology would prevent—and that is indeed genocide. But again, this was a small fraction of the total deaths.)

However, even with all those caveats, I hope we can all agree that colonialism and imperialism were morally wrong. No nation has the right to invade and conquer other nations; no one has the right to enslave people; no one has the right to kill people based on their culture or ethnicity.

My point is that it is entirely possible to recognize that and still appreciate that Western civilization has dramatically improved the standard of human life over the last few centuries. It simply doesn’t follow from the fact that British government and culture were more advanced and pluralistic that British soldiers can just go around taking over other people’s countries and planting their own flag (follow the link if you need some comic relief from this dark topic). That was the moral failing of colonialism; not that they thought their society was better—for in many ways it was—but that they thought that gave them the right to terrorize, slaughter, enslave, and conquer people.

Indeed, the “justification” of colonialism is a lot like that bizarre pseudo-utilitarianism I mentioned in my post on torture, where the mere presence of some benefit is taken to justify any possible action toward achieving that benefit. No, that’s not how morality works. You can’t justify unlimited evil by any good—it has to be a greater good, as in actually greater.

So let’s suppose that you do find yourself encountering another culture which is clearly more primitive than yours; their inferior technology results in them living in poverty and having very high rates of disease and death, especially among infants and children. What, if anything, are you justified in doing to intervene to improve their condition?

One idea would be to hold to the Prime Directive: No intervention, no sir, not ever. This is clearly what Gene Roddenberry thought of imperialism, hence why he built it into the Federation’s core principles.

But does that really make sense? Even as the Star Trek shows progressed, the writers kept coming up with situations where the Prime Directive really seemed like it should have an exception, and sometimes decided that the honorable crew of Enterprise or Voyager really should intervene in this more primitive society to save them from some terrible fate. And I hope I’m not committing a Fictional Evidence Fallacy when I say that if your fictional universe, specifically designed not to let that happen, keeps making it happen anyway, well… maybe it’s something we should be considering.

What if people are dying of a terrible disease that you could easily cure? Should you really deny them access to your medicine to avoid intervening in their society?

What if the primitive culture is ruled by a horrible tyrant that you could easily depose with little or no bloodshed? Should you let him continue to rule with an iron fist?

What if the natives are engaged in slavery, or even their own brand of imperialism against other indigenous cultures? Can you fight imperialism with imperialism?

And then we have to ask, does it really matter whether their babies are being murdered by the tyrant or simply dying from malnutrition and infection? The babies are just as dead, aren’t they? Even if we say that being murdered by a tyrant is worse than dying of malnutrition, it can’t be that much worse, can it? Surely 10 babies dying of malnutrition is at least as bad as 1 baby being murdered?

But then it begins to seem like we have a duty to intervene, and moreover a duty that applies in almost every circumstance! If you are on opposite sides of the technology threshold where infant mortality drops from 30% to 1%, how can you justify not intervening?

I think the best answer here is to keep in mind the very large costs of intervention as well as the potentially large benefits. The answer sounds simple, but is actually perhaps the hardest possible answer to apply in practice: You must do a cost-benefit analysis. Furthermore, you must do it well. We can’t demand perfection, but it must actually be a serious good-faith effort to predict the consequences of different intervention policies.

We know that people tend to resist most outside interventions, especially if you have the intention of toppling their leaders (even if they are indeed tyrannical). Even the simple act of offering people vaccines could be met with resistance, as the native people might think you are poisoning them or somehow trying to control them. But in general, opening contact with gifts and trade is almost certainly going to trigger less hostility and therefore be more effective than going in guns blazing.

If you do use military force, it must be targeted at the particular leaders who are most harmful, and it must be designed to achieve swift, decisive victory with minimal collateral damage. (Basically I’m talking about just war theory.) If you really have such an advanced civilization, show it by exhibiting total technological dominance and minimizing the number of innocent people you kill. The NATO interventions in Kosovo and Libya mostly got this right. The Vietnam War and Iraq War got it totally wrong.

As you change their society, you should be prepared to bear most of the cost of transition; you are, after all, much richer than they are, and also the ones responsible for effecting the transition. You should not expect to see short-term gains for your own civilization, only long-term gains once their culture has advanced to a level near your own. You can’t bear all the costs of course—transition is just painful, no matter what you do—but at least the fungible economic costs should be borne by you, not by the native population. Examples of doing this wrong include basically all the standard examples of exploitation colonialism: Africa, the Caribbean, South America. Examples of doing this right include West Germany and Japan after WW2, and South Korea after the Korean War—which is to say, the greatest economic successes in the history of the human race. This was humanity winning at development. Do this again everywhere and we will have not only ended world hunger, but achieved global prosperity.

What happens if we apply these principles to real-world colonialism? It does not fare well. Nor should it, as we’ve already established that most if not all real-world colonialism was morally wrong.

15th and 16th century colonialism fails immediately; it offers no benefit to speak of. Europe’s technological superiority was enough to give them gunpowder but not enough to drop their infant mortality rate. Maybe life was better in 16th century Spain than it was in the Aztec Empire, but honestly not by all that much; and life in the Iroquois Confederacy was in many ways better than life in 15th century England. (Though maybe that justifies some Iroquois imperialism, at least their “soft imperialism”?)

If these principles did justify any real-world imperialism—and I am not convinced that it does—it would only be much later imperialism, like the British Empire in the 19th and 20th century. And even then, it’s not clear that the talk of “civilizing” people and “the White Man’s Burden” was much more than rationalization, an attempt to give a humanitarian justification for what were really acts of self-interested economic exploitation. Even though India and South Africa are probably better off now than they were when the British first took them over, it’s not at all clear that this was really the goal of the British government so much as a side effect, and there are a lot of things the British could have done differently that would obviously have made them better off still—you know, like not implementing the precursors to apartheid, or making India a parliamentary democracy immediately instead of starting with the Raj and only conceding to democracy after decades of protest. What actually happened doesn’t exactly look like Britain cared nothing for actually improving the lives of people in India and South Africa (they did build a lot of schools and railroads, and sought to undermine slavery and the caste system), but it also doesn’t look like that was their only goal; it was more like one goal among several which also included the strategic and economic interests of Britain. It isn’t enough that Britain was a better society or even that they made South Africa and India better societies than they were; if the goal wasn’t really about making people’s lives better where you are intervening, it’s clearly not justified intervention.

And that’s the relatively beneficent imperialism; the really horrific imperialists throughout history made only the barest pretense of spreading civilization and were clearly interested in nothing more than maximizing their own wealth and power. This is probably why we get things like the Prime Directive; we saw how bad it can get, and overreacted a little by saying that intervening in other cultures is always, always wrong, no matter what. It was only a slight overreaction—intervening in other cultures is usually wrong, and almost all historical examples of it were wrong—but it is still an overreaction. There are exceptional cases where intervening in another culture can be not only morally right but obligatory.

Indeed, one underappreciated consequence of colonialism and imperialism is that they have triggered a backlash against real good-faith efforts toward economic development. People in Africa, Asia, and Latin America see economists from the US and the UK (and most of the world’s top economists are in fact educated in the US or the UK) come in and tell them that they need to do this and that to restructure their society for greater prosperity, and they understandably ask: “Why should I trust you this time?” The last two or four or seven batches of people coming from the US and Europe to intervene in their countries exploited them or worse, so why is this time any different?

It is different, of course; UNDP is not the East India Company, not by a longshot. For all their faults, the IMF isn’t the East India Company either. Indeed, while these people largely come from the same places as the imperialists, and may be descended from them, they are in fact completely different people, and moral responsibility does not inherit across generations. While the suspicion is understandable, it is ultimately unjustified; whatever happened hundreds of years ago, this time most of us really are trying to help—and it’s working.

What is progress? How far have we really come?

JDN 2457534

It is a controversy that has lasted throughout the ages: Is the world getting better? Is it getting worse? Or is it more or less staying the same, changing in ways that don’t really constitute improvements or detriments?

The most obvious and indisputable change in human society over the course of history has been the advancement of technology. At one extreme there are techno-utopians, who believe that technology will solve all the world’s problems and bring about a glorious future; at the other extreme are anarcho-primitivists, who maintain that civilization, technology, and industrialization were all grave mistakes, removing us from our natural state of peace and harmony.

I am not a techno-utopian—I do not believe that technology will solve all our problems—but I am much closer to that end of the scale. Technology has solved a lot of our problems, and will continue to solve a lot more. My aim in this post is to convince you that progress is real, that things really are, on the whole, getting better.

One of the more baffling arguments against progress comes from none other than Jared Diamond, the social scientist most famous for Guns, Germs and Steel (which oddly enough is mainly about horses and goats). About seven months before I was born, Diamond wrote an essay for Discover magazine arguing quite literally that agriculture—and by extension, civilization—was a mistake.

Diamond fortunately avoids the usual argument based solely on modern hunter-gatherers, which is a selection bias if ever I heard one. Instead his main argument seems to be that paleontological evidence shows an overall decrease in health around the same time as agriculture emerged. But that’s still an endogeneity problem, albeit a subtler one. Maybe agriculture emerged as a response to famine and disease. Or maybe they were both triggered by rising populations; higher populations increase disease risk, and are also basically impossible to sustain without agriculture.

I am similarly dubious of the claim that hunter-gatherers are always peaceful and egalitarian. It does seem to be the case that herders are more violent than other cultures, as they tend to form honor cultures that punish all slights with overwhelming violence. Even after the Industrial Revolution there were herder honor cultures—the Wild West. Yet as Steven Pinker keeps trying to tell people, the death rates due to homicide in all human cultures appear to have steadily declined for thousands of years.

I read an article just a few days ago on the Scientific American blog which included a claim so astonishingly nonsensical that it makes me wonder whether the authors can even do arithmetic or read statistical tables correctly:

As I keep reminding readers (see Further Reading), the evidence is overwhelming that war is a relatively recent cultural invention. War emerged toward the end of the Paleolithic era, and then only sporadically. A new study by Japanese researchers published in the Royal Society journal Biology Letters corroborates this view.

Six Japanese scholars led by Hisashi Nakao examined the remains of 2,582 hunter-gatherers who lived 12,000 to 2,800 years ago, during Japan’s so-called Jomon Period. The researchers found bashed-in skulls and other marks consistent with violent death on 23 skeletons, for a mortality rate of 0.89 percent.

That is supposed to be evidence that ancient hunter-gatherers were peaceful? The global homicide rate today is 62 homicides per million people per year. Using the worldwide life expectancy of 71 years (which is biasing against modern civilization because our life expectancy is longer), that means that the worldwide lifetime homicide rate is 4,400 homicides per million people, or 0.44%—that’s less than half the homicide rate of these “peaceful” hunter-gatherers.

If you compare just against First World countries, the difference is even starker; let’s use the US, which has the highest homicide rate in the First World. Our homicide rate is 38 homicides per million people per year, which at our life expectancy of 79 years is 3,000 homicides per million people, or an overall homicide rate of 0.3%, slightly more than a third of this “peaceful” ancient culture. The most peaceful societies today—notably Japan, where these remains were found—have homicide rates as low as 3 per million people per year, which is a lifetime homicide rate of 0.02%, forty times smaller than their supposedly utopian ancestors. (Yes, all of Japan has fewer total homicides than Chicago. I’m sure it has nothing to do with their extremely strict gun control laws.)

Indeed, to get a modern homicide rate as high as these hunter-gatherers, you need to go to a country like Congo, Myanmar, or the Central African Republic. To get a substantially higher homicide rate, you essentially have to be in Latin America. Honduras, the murder capital of the world, has a lifetime homicide rate of about 6.7%.

Again, how did I figure these things out? By reading basic information from publicly-available statistical tables and then doing some simple arithmetic. Apparently these paleoanthropologists couldn’t be bothered to do that, or didn’t know how to do it correctly, before they started proclaiming that human nature is peaceful and civilization is the source of violence. After an oversight as egregious as that, it feels almost petty to note that a sample size of a few thousand people from one particular region and culture isn’t sufficient data to draw such sweeping judgments or speak of “overwhelming” evidence.
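Since the whole point is that this only takes simple arithmetic, here is the entire calculation spelled out as a short Python sketch (the annual rates and life expectancies are the round figures quoted above, not precise statistics):

```python
# Lifetime homicide rate ≈ annual homicides per million people × life expectancy.

def lifetime_homicide_rate(annual_per_million, life_expectancy_years):
    """Approximate fraction of people who will eventually die by homicide."""
    return annual_per_million * life_expectancy_years / 1_000_000

world = lifetime_homicide_rate(62, 71)   # ≈ 0.0044, i.e. 0.44%
us    = lifetime_homicide_rate(38, 79)   # ≈ 0.0030, i.e. 0.30%
japan = lifetime_homicide_rate(3, 71)    # ≈ 0.0002, i.e. 0.02%
jomon = 23 / 2582                        # ≈ 0.0089, i.e. 0.89% (the study's sample)

for name, rate in [("World", world), ("US", us), ("Japan", japan), ("Jomon", jomon)]:
    print(f"{name}: {rate:.2%}")
```

Nothing fancy is happening here; the point is simply that an annual rate has to be multiplied by a lifetime before it can be compared to a lifetime mortality figure like the one in the study.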

Of course, in order to decide whether progress is a real phenomenon, we need a clearer idea of what we mean by progress. It would be presumptuous to simply use per-capita GDP, though there can be absolutely no doubt that technology and capitalism do in fact raise per-capita GDP. If we measure by inequality, modern society clearly fares much worse (our top 1% share and Gini coefficient may be higher than those of Classical Rome!), but that measure is clearly biased in the opposite direction, because the main way we have raised inequality is by raising the ceiling, not lowering the floor. Most of our really good measures (like the Human Development Index) only exist for the last few decades and can barely even be extrapolated back through the 20th century.

How about babies not dying? This is my preferred measure of a society’s value. It seems like something that should be totally uncontroversial: Babies dying is bad. All other things equal, a society is better if fewer babies die.

I suppose it doesn’t immediately follow that, all things considered, a society is better if fewer babies die; maybe the dying babies could be offset by some greater good. Perhaps a totalitarian society where no babies die is in fact worse than a free society in which a few babies die, or perhaps we should be prepared to accept some small number of babies dying in order to save adults from poverty, or something like that. But without some really powerful overriding reason, babies not dying probably means your society is doing something right. (And since most ancient societies were in a state of universal poverty and quite frequently tyranny, these exceptions would only strengthen my case.)

Well, get ready for some high-yield truth bombs about infant mortality rates.

It’s hard to get good data for prehistoric cultures, but the best data we have says that infant mortality in ancient hunter-gatherer cultures was about 20-50%, with a best estimate around 30%. This is statistically indistinguishable from early agricultural societies.

Indeed, 30% seems to be the figure humanity had for most of history: just shy of a third of all babies died.

In Medieval times, infant mortality was about 30%.

This same rate (fluctuating based on various plagues) persisted into the Enlightenment—Sweden has the best records, and their infant mortality rate in 1750 was about 30%.

The decline in infant mortality began slowly: During the Industrial Era, infant mortality was about 15% in isolated villages, but still as high as 40% in major cities due to high population densities with poor sanitation.

Even as recently as 1900, there were US cities with infant mortality rates as high as 30%, though the overall rate was more like 10%.

Most of the decline was recent and rapid: Just within the US since WW2, infant mortality fell from about 5.5% to 0.7%, though there remains a substantial disparity between White and Black people.

Globally, the infant mortality rate fell from 6.3% to 3.2% within my lifetime, and in Africa today, the region where it is worst, it is about 5.5%—or what it was in the US in the 1940s.

This precipitous decline in babies dying is the main reason ancient societies have such low life expectancies; in fact, people who survived to adulthood typically lived to be about 70 years old, not much less than we do today. So my multiplying everything by 71 earlier actually isn’t too far off even for ancient societies.
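Before the graphs, it may help to tabulate the rough figures just cited. This is only a sketch (in Python), and the rates are the approximate values from the sources above, not precise data:

```python
# Approximate infant mortality (fraction of babies dying) across history,
# using the rough figures cited in this post.
infant_mortality = {
    "hunter-gatherer":        0.30,   # best estimate; range roughly 20-50%
    "early agricultural":     0.30,   # statistically indistinguishable
    "Medieval":               0.30,
    "Sweden, 1750":           0.30,
    "industrial-era village": 0.15,
    "industrial-era city":    0.40,   # high density, poor sanitation
    "US, 1900":               0.10,
    "US, post-WW2":           0.055,
    "US, today":              0.007,
    "world, today":           0.032,
    "Africa, today":          0.055,  # about the US rate in the 1940s
}

# Out of every million births, roughly how many babies who would have died
# at the historical 30% rate survive at today's global rate?
saved = 1_000_000 * (infant_mortality["Sweden, 1750"] - infant_mortality["world, today"])
print(f"{saved:,.0f} babies saved per million births")
```

The table makes the shape of the curve obvious even without plotting it: a flat line at roughly 30% for ten thousand years, then a sudden collapse over the last two centuries.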

Let me make a graph for you here, of the approximate rate of babies dying over time from 10,000 BC to today:

Infant_mortality.png

Let’s zoom in on the last 250 years, where the data is much more solid:

Infant_mortality_recent.png

I think you may notice something in these graphs. There is quite literally a turning point for humanity, a kink in the curve where we suddenly begin a rapid decline from an otherwise constant mortality rate.

That point occurs around or shortly before 1800—that is, right at the dawn of industrial capitalism. Adam Smith (not to mention Thomas Jefferson) was writing at just about the point in time when humanity made a sudden and unprecedented shift toward saving the lives of millions of babies.

So now, think about that the next time you are tempted to say that capitalism is an evil system that destroys the world; the evidence points to capitalism quite literally saving babies from dying.

How would it do so? Well, there’s that rising per-capita GDP we previously set aside, for one thing. But more important seems to be the way that industrialization and free markets support technological innovation, and in this case especially medical innovation: antibiotics and vaccines. Our higher rates of literacy and better communication, also results of a rising standard of living and improved technology, surely didn’t hurt. I’m not often in agreement with the Cato Institute, but they’re right about this one: Industrial capitalism is the chief source of human progress.

Billions of babies would have died but we saved them. So yes, I’m going to call that progress. Civilization, and in particular industrialization and free markets, have dramatically improved human life over the last few hundred years.

In a future post I’ll address one of the common retorts to this basically indisputable fact: “You’re making excuses for colonialism and imperialism!” No, I’m not. Saying that modern capitalism is a better system (not least because it saves babies) is not at all the same thing as saying that our ancestors were justified in using murder, slavery, and tyranny to force people into it.

Is Equal Unfair?

JDN 2457492

Much as you are officially a professional when people start paying you for what you do, I think you are officially a book reviewer when people start sending you books for free asking you to review them for publicity. This has now happened to me, with the book Equal Is Unfair by Don Watkins and Yaron Brook. This post is longer than usual, but in order to be fair to the book’s virtues as well as its flaws, I felt a need to explain quite thoroughly.

It’s a very frustrating book, because at times I find myself agreeing quite strongly with the first part of a paragraph, and then reaching the end of that same paragraph and wanting to press my forehead firmly into the desk in front of me. It makes some really good points, and for the most part uses economic statistics reasonably accurately—but then it rides gleefully down a slippery slope fallacy like a waterslide. But I guess that’s what I should have expected; it’s by leaders of the Ayn Rand Institute, and my experience with reading Ayn Rand is similar to that of Randall Munroe (I’m mainly referring to the alt-text, which uses slightly foul language).

As I kept being jostled between “That’s a very good point,” “Hmm, that’s an interesting perspective,” and “How can anyone as educated as you believe anything that stupid!?”, I realized that there are actually three books here, interleaved:

1. A decent economics text on the downsides of taxation and regulation and the great success of technology and capitalism at raising the standard of living in the United States, which could have been written by just about any mainstream centrist neoclassical economist—I’d say it reads most like John Taylor or Ken Galbraith. My reactions to this book were things like “That’s a very good point.”, and “Sure, but any economist would agree with that.”

2. An interesting philosophical treatise on the meanings of “equality” and “opportunity” and their application to normative economic policy, as well as about the limitations of statistical data in making political and ethical judgments. It could have been written by Robert Nozick (actually I think much of it was based on Robert Nozick). Some of the arguments are convincing, others are not, and many of the conclusions are taken too far; but it’s well within the space of reasonable philosophical arguments. My reactions to this book were things like “Hmm, that’s an interesting perspective.” and “Your argument is valid, but I think I reject the second premise.”

3. A delusional rant of the sort that could only be penned by a True Believer in the One True Gospel of Ayn Rand, about how poor people are lazy moochers, billionaires are world-changing geniuses whose superior talent and great generosity we should all bow down before, and anyone who would dare suggest that perhaps Steve Jobs got lucky or owes something to the rest of society is an authoritarian Communist who hates all achievement and wants to destroy the American Dream. It was this book that gave me reactions like “How can anyone as educated as you believe anything that stupid!?” and “You clearly have no idea what poverty is like, do you?” and “[expletive] you, you narcissistic ingrate!”

Given that the two co-authors are Executive Director and a fellow of the Ayn Rand Institute, I suppose I should really be pleasantly surprised that books 1 and 2 exist, rather than disappointed by book 3.

As evidence of each of the three books interleaved, I offer the following quotations:

Book 1:

“All else being equal, taxes discourage production and prosperity.” (p. 30)

No reasonable economist would disagree. The key is all else being equal—it rarely is.

“For most of human history, our most pressing problem was getting enough food. Now food is abundant and affordable.” (p.84)

Correct! And worth pointing out, especially to anyone who thinks that economic progress is an illusion or we should go back to pre-industrial farming practices—and such people do exist.

“Wealth creation is first and foremost knowledge creation. And this is why you can add to the list of people who have created the modern world, great thinkers: people such as Euclid, Aristotle, Galileo, Newton, Darwin, Einstein, and a relative handful of others.” (p.90, emph. in orig.)

Absolutely right, though as I’ll get to below there’s something rather notable about that list.

“To be sure, there is competition in an economy, but it’s not a zero-sum game in which some have to lose so that others can win—not in the big picture.” (p. 97)

Yes! Precisely! I wish I could explain to more people—on both the Left and the Right, by the way—that economics is nonzero-sum, and that in the long run competitive markets improve the standard of living of society as a whole, not just the people who win that competition.

Book 2:

“Even opportunities that may come to us without effort on our part—affluent parents, valuable personal connections, a good education—require enormous effort to capitalize on.” (p. 66)

This is sometimes true, but clearly doesn’t apply to things like the Waltons’ inherited billions, for which all they had to do was be born in the right family and not waste their money too extravagantly.

“But life is not a game, and achieving equality of initial chances means forcing people to play by different rules.” (p. 79)

This is an interesting point, and one that I think we should acknowledge; we must treat those born rich differently from those born poor, because their unequal starting positions mean that treating them equally from this point forward would lead to a wildly unfair outcome. If my grandfather stole your grandfather’s wealth and passed it on to me, the fair thing to do is not to treat you and me equally from this point forward—it’s to force me to return what was stolen, insofar as that is possible. And even if we suppose that my grandfather earned far vaster wealth than yours, I think a more limited redistribution remains justified simply to put you and me on a level playing field and ensure fair competition and economic efficiency.

“The key error in this argument is that it totally mischaracterizes what it means to earn something. For the egalitarians, the results of our actions don’t merely have to be under our control, but entirely of our own making. […] But there is nothing like that in reality, and so what the egalitarians are ultimately doing is wiping out the very possibility of earning something.” (p. 193)

The way they use “egalitarian” as an insult is a bit grating, but there clearly are some actual egalitarian philosophers whose views are this extreme, such as G.A. Cohen, James Kwak and Peter Singer. I strongly agree that we need to make a principled distinction between gains that are earned and gains that are unearned, such that both sets are nonempty. Yet while Cohen would seem to make “earned” an empty set, Watkins and Brook very nearly make “unearned” empty—you get what you get, and you deserve it. The only exceptions they seem willing to make are outright theft and, what they consider equivalent, taxation. They have no concept of exploitation, excessive market power, or arbitrage—and while they claim they oppose fraud, they seem to think that only government is capable of it.

Book 3:

“What about government handouts (usually referred to as ‘transfer payments’)?” (p. 23)

Because Social Security is totally just a handout—it’s not like you pay into it your whole life or anything.

“No one cares whether the person who fixes his car or performs his brain surgery or applies for a job at his company is male or female, Indian or Pakistani—he wants to know whether they are competent.” (p.61)

Yes they do. We have direct experimental evidence of this.

“The notion that ‘spending drives the economy’ and that rich people spend less than others isn’t a view seriously entertained by economists,[…]” (p. 110)

The New Synthesis is Keynesian! This is what Milton Friedman was talking about when he said, “We’re all Keynesians now.”

“Because mobility statistics don’t distinguish between those who don’t rise and those who can’t, they are useless when it comes to assessing how healthy mobility is.” (p. 119)

So, if Black people have much lower odds of achieving high incomes even controlling for education, we can’t assume that they are disadvantaged or discriminated against; maybe Black people are just lazy or stupid? Is that what you’re saying here? (I think it might be.)

“Payroll taxes alone amount to 15.3 percent of your income; money that is taken from you and handed out to the elderly. This means that you have to spend more than a month and a half each year working without pay in order to fund other people’s retirement and medical care.” (p. 127)

That is not even close to how taxes work. Taxes are not “taken” from money you’d otherwise get—taxation changes prices and the monetary system depends upon taxation.

“People are poor, in the end, because they have not created enough wealth to make themselves prosperous.” (p. 144)

This sentence was so awful that when I showed it to my boyfriend, he assumed it must be out of context. When I showed him the context, he started swearing the most I’ve heard him swear in a long time, because the context was even worse than it sounds. Yes, this book is literally arguing that the reason people are poor is that they’re just too lazy and stupid to work their way out of poverty.

“No society has fully implemented the egalitarian doctrine, but one came as close as any society can come: Cambodia’s Khmer Rouge.” (p. 207)

Because obviously the problem with the Khmer Rouge was their capital gains taxes. They were just too darn fair, and if they’d been more selfish they would never have committed genocide. (The authors literally appear to believe this.)

 

So there are my extensive quotations, to show that this really is what the book is saying. Now, a little more summary of the good, the bad, and the ugly.

One good thing is that the authors really do seem to understand fairly well the arguments of their opponents. They quote their opponents extensively, and only a few times did it feel meaningfully out of context. Their use of economic statistics is also fairly good, though occasionally they present misleading numbers or compare two obviously incomparable measures.

One of the core points in Equal is Unfair is quite weak: They argue against the “shared-pie assumption,” which is the idea that we create wealth as a society, and thus the rest of society is owed some portion of the fruits of our efforts. They maintain that this assumption is fundamentally authoritarian and immoral; essentially they embrace a totalizing false dichotomy between absolute laissez-faire and Stalinist Communism.

But the “shared-pie assumption” is not false; we do create wealth as a society. Human cognition is fundamentally social cognition; as they themselves say, we depend upon the discoveries of people like Newton and Einstein for our way of life. But it should be obvious that we can never pay Einstein back; so instead we must pay it forward, helping some child born in the ghetto to rise to become the next Einstein. I agree that we must build a society where opportunity is maximized—and that means, necessarily, redistributing wealth from its current state of absurd and immoral inequality.

I do however agree with another core point, which is that most discussions of inequality rely upon a tacit assumption which is false: They call it the “fixed-pie assumption”.

When you talk about the share of income going to different groups in a population, you have to be careful about the fact that there is not a fixed amount of wealth in a society to be distributed—not a “fixed pie” that we are cutting up and giving around. If it were really true that the rising income share of the top 1% were necessary to maximize the absolute benefits of the bottom 99%, we probably should tolerate that, because the alternative means harming everyone. (In arguing this they quote John Rawls several times with disapprobation, which is baffling because that is exactly what Rawls says.)

Even if that’s true, there is still a case to be made against inequality, because too much wealth in the hands of a few people will give them more power—and unequal power can be dangerous even if wealth is earned, exchanges are uncoerced, and the distribution is optimally efficient. (Watkins and Brook dismiss this contention out of hand, essentially defining beneficent exploitation out of existence.)

Of course, in the real world, there’s no reason to think that the ballooning income share of the top 0.01% in the US is actually associated with improved standard of living for everyone else.

I’ve shown these graphs before, but they bear repeating:

Income shares for the top 1% and especially the top 0.1% and 0.01% have risen dramatically in the last 30 years.

top_income_shares_adjusted

But real median income has only slightly increased during the same period.

US_median_household_income

Thus, mean income has risen much faster than median income.

median_mean

While theoretically it could be that the nature of our productivity technology has shifted in such a way that it suddenly became necessary to heap more and more wealth on the top 1% in order to continue increasing national output, there is actually very little evidence of this. On the contrary, as Joseph Stiglitz (Nobel Laureate, you may recall) has documented, the leading cause of our rising inequality appears to be a dramatic increase in rent-seeking, which is to say corruption, exploitation, and monopoly power. (This probably has something to do with why I found in my master’s thesis that rising top income shares correlate quite strongly with rising levels of corruption.)

Now to be fair, the authors of Equal is Unfair do say that they are opposed to rent-seeking, and would like to see it removed. But they have a very odd concept of what rent-seeking entails, and it basically seems to amount to saying that whatever the government does is rent-seeking, while whatever corporations do is fair free-market competition. On page 38 they warn us not to assume that government is good and corporations are bad—but they themselves assume precisely that government is bad and corporations are good. (The mainstream opinion actually appears to be that both are bad, and we should replace them both with… er… something.)

They do make some other good points I wish more leftists would appreciate, such as the point that while colonialism and imperialism can damage countries that suffer them and make them poorer, they generally do not benefit the countries that commit them and make them richer. The notion that Europe is rich because of imperialism is simply wrong; Europe is rich because of education, technology, and good governance. Indeed, the greatest surge in Europe’s economic growth occurred as the period of imperialism was winding down—when Europeans realized that they would be better off trying to actually invent and produce things rather than stealing them from others.

Likewise, they rightfully demolish notions of primitivism and anti-globalization that I often see bouncing around from folks like Naomi Klein. But these are book 1 messages; any economist would agree that primitivism is a terrible idea, and very few are opposed to globalization per se.

The end of Equal is Unfair gives a five-part plan for unleashing opportunity in America:

1. Abolish all forms of corporate welfare so that no business can gain unfair advantage.

2. Abolish government barriers to work so that every individual can enjoy the dignity of earned success.

3. Phase out the welfare state so that America can once again become the land of self-reliance.

4. Unleash the power of innovation in education by ending the government monopoly on schooling.

5. Liberate innovators from the regulatory shackles that are strangling them.

Number 1 is hard to disagree with, except that they include literally everything the government does that benefits a corporation as corporate welfare, including things like subsidies for solar power that the world desperately needs (or millions of people will die).

Number 2 sounds really great until you realize that they are including all labor standards, environmental standards, and safety regulations as “barriers to work”; because it’s such a barrier that children can’t work in factories where their arms can get cut off, and such a barrier that we’ve eliminated lead from gasoline emissions and thereby cut crime in half.

Number 3 could mean a lot of things; if it means replacing the existing system with a basic income I’m all for it. But in fact it seems to mean removing all social insurance whatsoever. Indeed, Watkins and Brook do not appear to believe in social insurance at all. The whole concept of “less fortunate”, “there but for the grace of God go I” seems to elude them. They have no sense that being fortunate in their own lives gives them some duty to help others who were not; they feel no pang of moral obligation whatsoever to help anyone else who needs help. Indeed, they literally mock the idea that human beings are “all in this together”.

They also don’t even seem to believe in public goods, or somehow imagine that rational self-interest could lead people to pay for public goods without any enforcement whatsoever despite the overwhelming incentives to free-ride. (What if you allow people to freely enter a contract that provides such enforcement mechanisms? Oh, you mean like social democracy?)

Regarding number 4, I’d first like to point out that private schools exist. Moreover, so do charter schools in most states, and in states without charter schools there are usually vouchers parents can use to offset the cost of private schools. So while the government has a monopoly in the market share sense—the vast majority of education in the US is public—it does not actually appear to be enforcing a monopoly in the anti-competitive sense—you can go to private school, it’s just too expensive or not as good. Why, it’s almost as if education is a public good or a natural monopoly.

Number 5 also sounds all right, until you see that they actually seem most opposed to antitrust laws of all things. Why would antitrust laws be the ones that bother you? They are designed to increase competition and lower barriers, and largely succeed in doing so (when they are actually enforced, which is rare of late). If you really want to end barriers to innovation and government-granted monopolies, why is it not patents that draw your ire?

They also seem to have trouble with the difference between handicapping and redistribution—they seem to think that the only way to make outcomes more equal is to bring the top down and leave the bottom where it is, and they often use ridiculous examples like “Should we ban reading to your children, because some people don’t?” But of course no serious egalitarian would suggest such a thing. Education isn’t fungible, so it can’t be redistributed. You can take it away (and sometimes you can add it, e.g. public education, which Watkins and Brook adamantly oppose); but you can’t simply transfer it from one person to another. Money, on the other hand, is by definition fungible—that’s kind of what makes it money, really. So when we take a dollar from a rich person and give it to a poor person, the poor person now has an extra dollar. We haven’t simply lowered the top; we’ve also raised the bottom. (In practice it’s a bit more complicated than that, as redistribution can introduce inefficiencies. So realistically maybe we take $1.00 and give $0.90; that’s still worth doing in a lot of cases.)
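The dollar example in that parenthetical can be made concrete with a toy calculation (purely illustrative; the starting balances are made up, and the 10% loss is just the hypothetical figure from the paragraph above):

```python
# Toy redistribution with a 10% deadweight loss: take $1.00, deliver $0.90.
rich, poor = 100.0, 10.0   # hypothetical starting wealth
tax = 1.00                 # taken from the rich person
delivered = 0.90 * tax     # what actually reaches the poor person

rich -= tax
poor += delivered

# Unlike "handicapping," which only brings the top down, the transfer also
# raises the bottom: total wealth shrinks by $0.10, but the floor rises.
print(rich, poor)
```

The design point is simply that money, being fungible, lets a loss at the top become a gain at the bottom, which is exactly what non-fungible goods like education cannot do.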

If attributes like intelligence were fungible, I think we’d have a very serious moral question on our hands! It is not obvious to me that the world is better off with its current range of intelligence, compared to a world where geniuses had their excess IQ somehow sucked out and transferred to mentally disabled people. Or if you think that the marginal utility of intelligence is increasing, then maybe we should redistribute IQ upward—take it from some mentally disabled children who aren’t really using it for much and add it onto some geniuses to make them super-geniuses. Of course, the whole notion is ridiculous; you can’t do that. But whereas Watkins and Brook seem to think it’s obvious that we shouldn’t even if we could, I don’t find that obvious at all. You didn’t earn your IQ (for the most part); you don’t seem to deserve it in any deep sense; so why should you get to keep it, if the world would be much better off if you didn’t? Why should other people barely be able to feed themselves so I can be good at calculus? At best, maybe I’m free to keep it—but given the stakes, I’m not even sure that would be justifiable. Peter Singer is right about one thing: You’re not free to let a child drown in a lake just to keep your suit from getting wet.

Ultimately, if you really want to understand what’s going on with Equal is Unfair, consider the following sentence, which I find deeply revealing as to the true objectives of these Objectivists:

“Today, meanwhile, although we have far more liberty than our feudal ancestors, there are countless ways in which the government restricts our freedom to produce and trade including minimum wage laws, rent control, occupational licensing laws, tariffs, union shop laws, antitrust laws, government monopolies such as those granted to the post office and education system, subsidies for industries such as agriculture or wind and solar power, eminent domain laws, wealth redistribution via the welfare state, and the progressive income tax.” (p. 114)

Some of these are things no serious economist would disagree with: We should stop subsidizing agriculture and tariffs should be reduced or removed. Many occupational licenses are clearly unnecessary (though this has a very small impact on inequality in real terms—licensing may stop you from becoming a barber, but it’s not what stops you from becoming a CEO). Others are legitimately controversial: Economists are currently quite divided over whether minimum wage is beneficial or harmful (I lean toward beneficial, but I’d prefer a better solution), as well as how to properly regulate unions so that they give workers much-needed bargaining power without giving unions too much power. But a couple of these are totally backward, exactly contrary to what any mainstream economist would say: Antitrust laws need to be enforced more, not eliminated (don’t take it from me; take it from that well-known Marxist rag The Economist). Subsidies for wind and solar power make the economy more efficient, not less—and suspiciously Watkins and Brook omitted the competing subsidies that actually are harmful, namely those to coal and oil.

Moreover, I think it’s very revealing that they included the word progressive when talking about taxation. In what sense does making a tax progressive undermine our freedom? None, so far as I can tell. The presence of a tax undermines freedom—your freedom to spend that money some other way. Making the tax higher undermines freedom—it’s more money you lose control over. But making the tax progressive increases freedom for some and decreases it for others—and since rich people have lower marginal utility of wealth and are generally more free in substantive terms to begin with, the sensible conclusion is that, holding revenue constant, making a tax more progressive generally makes people as a whole more free.

But there’s one thing that making taxes progressive does do: It benefits poor people and hurts rich people. And thus the true agenda of Equal is Unfair becomes clear: They aren’t actually interested in maximizing freedom—if they were, they wouldn’t be complaining about occupational licensing and progressive taxation, they’d be outraged by forced labor, mass incarceration, indefinite detention, and the very real loss of substantive freedom that comes from being born into poverty. They wouldn’t want less redistribution, they’d want more efficient and transparent redistribution—a shift from the current hodgepodge welfare state to a basic income system. They would be less concerned about the “freedom” to pollute the air and water with impunity, and more concerned about the freedom to breathe clean air and drink clean water.

No, what they really believe is that rich people are better. They believe that billionaires attained their status not by luck or circumstance, not by corruption or ruthlessness, but by the sheer force of their genius. (This is essentially the entire subject of chapter 6, “The Money-Makers and the Money-Appropriators”, and it’s nauseating.) They describe our financial industry as “fundamentally moral and productive” (p. 156)—the industry that you may recall stole millions of homes and laundered money for terrorists. They assert that no sane person could believe that Steve Wozniak got lucky—I maintain no sane person could think otherwise. Yes, he was brilliant; yes, he invented good things. But he had to be at the right place at the right time, in a society that supported and educated him and provided him with customers and employees. You didn’t build that.

Indeed, perhaps most baffling is that they themselves seem to admit that the really great innovators, such as Newton, Einstein, and Darwin, were scientists—but scientists are almost never billionaires. Even the common counterexample, Thomas Edison, is largely false; he mainly plagiarized from Nikola Tesla and appropriated the ideas of his employees. Newton, Einstein and Darwin were all at least upper-middle class (as was Tesla, by the way—he did not die poor as is sometimes portrayed), but they weren’t spectacularly mind-bogglingly rich the way that Steve Jobs and Andrew Carnegie were and Bill Gates and Jeff Bezos are.

Some people clearly have more talent than others, and some people clearly work harder than others, and some people clearly produce more than others. But I just can’t wrap my head around the idea that a single man can work so hard, be so talented, produce so much that he can deserve to have as much wealth as a nation of millions of people produces in a year. Yet, Mark Zuckerberg has that much wealth. Remind me again what he did? Did he cure a disease that was killing millions? Did he colonize another planet? Did he discover a fundamental law of nature? Oh yes, he made a piece of software that’s particularly convenient for talking to your friends. Clearly that is worth the GDP of Latvia. Not that silly Darwin fellow, who only uncovered the fundamental laws of life itself.

In the grand tradition of reducing complex systems to simple numerical values, I give book 1 a 7/10, book 2 a 5/10, and book 3 a 2/10. Equal is Unfair is about 25% book 1, 25% book 2, and 50% book 3, so altogether their final score is, drumroll please: 4/10. Maybe read the first half, I guess? That’s where most of the good stuff is.

So what can we actually do about sweatshops?

JDN 2457489

(The topic of this post was chosen by a vote of my Patreons.) There seem to be two major camps on most political issues: One camp says “This is not a problem, stop worrying about it.” The other says “This is a huge problem, it must be fixed right away, and here’s the easy solution.” Typically neither of these things is true, and the correct answer is actually “This is a huge problem, well worth fixing—but we need to do a lot of work to figure out exactly how.”

Sweatshop labor is a very good example of this phenomenon.

Camp A is represented here by the American Enterprise Institute, which even goes as far as to defend child labor on the grounds that “we used to do it before”. (Note that we also used to do slavery before. Also protectionism, but of course AEI doesn’t think that was good. Who needs logical consistency when you have ideological purity?) The College Conservative uses ECON 101 to defend sweatshops, perhaps not realizing that economics courses continue past ECON 101.

Camp B is represented here by Buycott, telling us to buy “made in the USA” products and boycott all companies that use sweatshops. Other commonly listed strategies include buying used clothes (I mean, there may be some ecological benefits to this, but clearly not all clothes can be used clothes) and “buy union-made” which is next to impossible for most products. Also in this camp is LaborVoices, a Silicon Valley tech company that seems convinced they can somehow solve the problem of sweatshops by means of smartphone apps, because apparently Silicon Valley people believe that smartphones are magical and not, say, one type of product that performs services similar to many other pre-existing products but somewhat more efficiently. (This would also explain how Uber can say with a straight face that they are “revolutionary” when all they actually do is mediate unlicensed taxi services, and Airbnb is “innovative” because it makes it slightly more convenient to rent out rooms in your home.)

Of course I am in that third camp, people who realize that sweatshops—and exploitative labor practices in general—are a serious problem, but a very complex and challenging one that does not have any easy, obvious solutions.

One thing we absolutely cannot do is return to protectionism or get American consumers to only buy from American companies (a sort of “soft protectionism” by social construction). This would not only be inefficient for us—it would be devastating for people in Third World countries. Sweatshops typically provide substantially better living conditions than the alternatives available to their workers.

Yet this does not mean that sweatshops are morally acceptable or should simply be left alone, contrary to the assertions of many economists—most famously Benjamin Powell. Anyone who doubts this must immediately read “Wrongful Beneficence” by Chris Meyers; the mere fact that an act benefits someone—or even everyone—does not prove that the act was morally acceptable. If someone is starving to death and you offer them bread in exchange for doing whatever you want them to do for the next year, you are benefiting them, surely—but what you are doing is morally wrong. And this is basically what sweatshops are; they provide survival in exchange for exploitation.

It can be remarkably difficult to even tell which companies are using sweatshops—and this is by design. While in response to public pressure corporations often try to create the image of improving their labor standards, they seem quite averse to actually improving labor standards, and even more averse to establishing systems of enforcement to make those labor standards followed consistently. Almost no sweatshops are directly owned by the retailers whose products they make; instead there is a chain of outsourced vendors and distributors, a chain that creates diffusion of responsibility and plausible deniability. When international labor organizations do get the chance to investigate the labor conditions of factories operated by multinational corporations, they invariably find that regulations are more honored in the breach than the observance.

So, what would a long-run solution to sweatshops look like? In a word: Development. The only sustainable solution to oppressive labor conditions is a world where everyone is healthy enough, educated enough, and provided with enough resources that their productivity is at a First World level; furthermore it is a world where workers have enough bargaining power that they are actually paid according to that productivity. (The US has lately been finding out what happens if you do the former but not the latter—the result is that you generate an enormous amount of wealth, but it all ends up in the hands of the top 0.1%. Yet it is quite possible to do the latter, as Denmark has figured out, #ScandinaviaIsBetter.)

To achieve this, we need more factories in Third World countries, not fewer—more investment, not less. We need to buy more of China’s exports, hire more factory workers in Bangladesh.

But it’s not enough to provide incentives to build factories—we must also provide incentives to give workers at those factories more bargaining power.

To see how we can pull this off, I offer a case study of a (qualified) success: Nike.

In the 1990s, Nike’s subcontractors had some of the worst labor conditions in the shoe industry. Today, they actually have some of the best. How did that happen?

It began with people noticing a problem—activists and investigative journalists documented the abuses in Nike’s factories. They drew public attention, which undermined Nike’s efforts at mass advertising (which was basically their entire business model—their shoes aren’t actually especially good). Nike tried to clean up their image with obviously biased reports, which triggered a backlash. Finally Nike decided to actually do something about the problem, and became a founding member of the Fair Labor Association. They established new labor standards, and they audit regularly to ensure that those standards are being complied with. Today they publish an annual corporate social responsibility report that appears to be quite transparent and accurate, showing both the substantial improvements that have been made and the remaining problems. Activist campaigns turned Nike around almost completely.

In short, consumer pressure led to private regulation. Many development economists are increasingly convinced that this is what we need—we must put pressure on corporations to regulate themselves.

The pressure is a key part of this process; Willem Buiter wasn’t wrong when he quipped that “self-regulation stands in relation to regulation the way self-importance stands in relation to importance and self-righteousness to righteousness.” For any regulation to work, it must have an enforcement mechanism; for private regulation to work, that enforcement mechanism comes from the consumers.

Yet even this is not enough, because there are too many incentives for corporations to lie and cheat if they only have to be responsive to consumers. It’s unreasonable to expect every consumer to take the time—let alone have the expertise—to perform extensive research on the supply chain of every corporation they buy a product from. I also think it’s unreasonable to expect most people to engage in community organizing or shareholder activism as Green America suggests, though it certainly wouldn’t hurt if some did. But there are just too many corporations to keep track of! Like it or not, we live in a globalized capitalist economy where you almost certainly buy from a hundred different corporations over the course of a year.

Instead we need governments to step up—and the obvious choice is the government of the United States, which remains the world’s economic and military hegemon. We should be pressuring our legislators to make new regulations on international trade that will raise labor standards around the globe.

Note that this undermines the most basic argument corporations use against improving their labor standards: “If we raise wages, we won’t be able to compete.” Not if we force everyone to raise wages, around the globe. “If it’s cheaper to build a factory in Indonesia, why shouldn’t we?” It won’t be cheaper, unless Indonesia actually has a real comparative advantage in producing that product. You won’t be able to artificially hold down your expenses by exploiting your workers—you’ll have to actually be more efficient in order to be more profitable, which is how capitalism is supposed to work.

There’s another argument we often hear that is more legitimate, which is that raising wages would also force corporations to raise prices. But as I discussed in a previous post on this subject, the amount by which prices would need to rise is remarkably small, and nowhere near large enough to justify panic about dangerous global inflation. Paying 10% or even 20% more for our products is well worth it to reduce the corruption and exploitation that abuses millions of people—a remarkable number of them children—around the globe. Also, it doesn’t take a mathematical savant to realize that if increasing wages by a factor of 10 only increases prices by 20%, workers will in fact be better off.
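The arithmetic here is worth making explicit (the tenfold wage increase and 20% price increase are the hypothetical figures from the paragraph above, not empirical estimates):

```python
# If nominal wages rise 10x while prices rise only 20%,
# real purchasing power still rises dramatically.
wage_multiplier = 10.0    # nominal wages increase by a factor of 10
price_multiplier = 1.2    # prices rise by 20%

real_wage_multiplier = wage_multiplier / price_multiplier
print(round(real_wage_multiplier, 2))  # real wages rise more than 8-fold
```

In other words, even after the price increase, those workers can buy over eight times as much as before.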

Where would all that extra money come from? Now we come to the real reason why corporations don’t want to raise their labor standards: It would come from profits. Right now profits are extraordinarily large, much larger than they have any right to be in a fair market. It was recently estimated that 74% of billionaire wealth comes from economic rent—that is to say, from deception, exploitation, and market manipulation, rather than actual productivity. (There’s a lot of uncertainty in this estimate; the true figure is probably somewhere between 50% and 90%—it’s almost certainly a majority, and could be the vast majority.) In fact, I really shouldn’t say “money”, which we can just print; what we really want to know is where the extra wealth would come from to give that money value. But by paying workers more, improving their standard of living, and creating more consumer demand, we would in fact dramatically increase the amount of real wealth in the world.

So, we need regulations to improve global labor standards. But we must first be clear: What should these regulations say?

First, we must rule out protectionist regulations that would give unfair advantages to companies that produce locally. These would only result in economic inefficiency at best, and trade wars throwing millions back into poverty at worst. (Some advantage makes sense to internalize the externalities of shipping, but really that should be created by a carbon tax, not by trade tariffs. It’s a lot more expensive and carbon-intensive to ship from Detroit to LA than from Detroit to Windsor, but the latter is the “international” trade.)

Second, we should not naively assume that every country should have the same minimum wage. (I am similarly skeptical of Hillary Clinton’s proposal to include people with severe mental or physical disabilities in the US federal minimum wage; I too am concerned about people with disabilities being exploited, but the fact is many people with severe disabilities really aren’t as productive, and it makes sense for wages to reflect that.) If we’re going to have minimum wages at all—basic income and wage subsidies both make a good deal more sense than a hard price floor; see also my earlier post on minimum wage—they should reflect the productivity and prices of the region. I applaud California and New York for adopting $15 minimum wages, but I’d be a bit skeptical of doing the same in Mississippi, and adamantly opposed to doing so in Bangladesh.

It may not even be reasonable to expect all countries to have the same safety standards; workers who are less skilled and in more dire poverty may rationally be willing to accept more risk to remain employed, rather than be laid off because their employer could not afford to meet safety standards and still pay them a sufficient wage. For some safety standards this is ridiculous; providing enough exits with doors that swing outward and maintaining smoke detectors are not expensive things to do. (And yet factories in Bangladesh often fail to meet such basic requirements, which kills hundreds of workers each year.) But other safety standards may be justifiably relaxed; OSHA compliance in the US costs about $70 billion per year, about $200 per person, which many countries simply couldn’t afford. (On the other hand, OSHA saves thousands of lives, does not increase unemployment, and may actually benefit employers when compared with the high cost of private injury lawsuits.) We should have expert economists perform careful cost-benefit analyses of proposed safety regulations to determine which ones are cost-effective at protecting workers and which ones are too expensive to be viable.

While we’re at it, these regulations should include environmental standards, or a global carbon tax that’s used to fund climate change mitigation efforts around the world. Here there isn’t much excuse for not being strict; pollution and environmental degradation harms the poor the most. Yes, we do need to consider the benefits of production that is polluting; but we have plenty of profit incentives for that already. Right now the balance is clearly tipped far too much in favor of more pollution than the optimum rather than less. Even relatively heavy-handed policies like total bans on offshore drilling and mountaintop removal might be in order; in general I’d prefer to tax rather than ban, but these activities are so enormously damaging that if the choice is between a ban and doing nothing, I’ll take the ban. (I’m less convinced of this with regard to fracking; yes, earthquakes and polluted groundwater are bad—but are they Saudi Arabia bad? Because buying more oil from Saudi Arabia is our leading alternative.)

It should go without saying (but unfortunately it doesn’t seem to) that our regulations must include an absolute zero-tolerance policy for forced labor. If we find out that a company is employing forced labor, they should have to not only free every single enslaved worker, but pay each one a million dollars (PPP 2005 chained CPI of course). If they can’t do that and they go bankrupt, good riddance; remind me to play them the world’s saddest song on the world’s tiniest violin. Of course, first we need to find out, which brings me to the most important point.

Above all, these regulations must be enforced. We could start with enforceable multilateral trade agreements, where tariff reductions are tied to human rights and labor standards. This is something the President of the United States could do, right now, as an addendum to the Trans-Pacific Partnership. (What he should have done is made the TPP contingent on this, but it’s too late for that.) Future trade agreements should include these as a matter of course. If countries want to reap the benefits of free trade, they must be held accountable for sharing those benefits equitably with their people.

But ultimately we should not depend upon multilateral agreements between nations—we need truly international standards with global enforcement. We should empower the International Labor Organization to enact sanctions and inspections (right now it mostly enacts suggestions which are promptly and dutifully ignored), and possibly even to arrest executives for trial at the International Criminal Court. We should double if not triple or quadruple their funding—and if member nations will not pay this voluntarily, we should make them—the United Nations should be empowered to collect taxes in support of global development, which should be progressive with per-capita GDP. Coercion, you say? National sovereignty, you say? Millions of starving little girls is my reply.

Right now, the ability of multinational corporations to move between countries to find the ones that let them pay the least has created a race to the floor; it’s time for us to raise that floor.

What can you yourself do, assuming you’re not a head of state? (If you are, I’m honored. Also, any openings on your staff?) Well, you can vote—and you can use that vote to put pressure on your legislators to support these kinds of policies. There are also some other direct actions you can take that I discussed in a previous post; but mainly what we need is policy. Consumer pressure and philanthropy are good, and by all means, don’t stop; but to really achieve global justice we will need nothing short of global governance.

What can we do to make the world a better place?

JDN 2457475

There are an awful lot of big problems in the world: war, poverty, oppression, disease, terrorism, crime… I could go on for a while, but I think you get the idea. Solving or even mitigating these huge global problems could improve or even save the lives of millions of people.

But precisely because these problems are so big, they can also make us feel powerless. What can one person, or even a hundred people, do against problems on this scale?

The answer is quite simple: Do your share.

No one person can solve any of these problems—not even someone like Bill Gates, though he at least can have a significant impact on poverty and disease, because he is so spectacularly mind-bogglingly rich; the Gates Foundation has a huge impact because it has as much wealth as the annual budget of the NIH.

But all of us together can have an enormous impact. This post today is about helping you see just how cheap and easy it would be to end world hunger and cure poverty-related diseases, if we simply got enough people to contribute.

The Against Malaria Foundation releases annual reports for all their regular donors. I recently got a report that my donations personally account for 1/100,000 of their total assets. That’s terrible. The global population is 7 billion people; in the First World alone it’s over 1 billion. I am the 0.01%, at least when it comes to donations to the Against Malaria Foundation.

I’ve given them only $850. Their total assets are only $80 million. They shouldn’t have $80 million—they should have $80 billion. So, please, if you do nothing else as a result of this post, go make a donation to the Against Malaria Foundation. I am entirely serious; if you think you might forget or change your mind, do it right now. Even a dollar would be worth it. If everyone in the First World gave $1, they would get 12 times as much as they currently have.
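The arithmetic behind that last claim is simple (using the rough figure of 1 billion people in the First World from the previous paragraph):

```python
first_world_population = 1e9   # rough population of the First World
amf_assets = 80e6              # Against Malaria Foundation's assets: $80 million
donation_each = 1.00           # everyone gives just $1

raised = first_world_population * donation_each
print(raised / amf_assets)     # multiple of their current total assets
```

A single dollar from each person would raise $1 billion—over twelve times everything the foundation currently has.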

GiveWell is an excellent source for other places you should donate; they rate charities around the world for their cost-effectiveness in the only way worth doing: Lives saved per dollar donated. They don’t just naively look at what percentage goes to administrative costs; they look at how everything is being spent and how many children have their diseases cured.

Until the end of April, UNICEF is offering an astonishing five times matching funds—meaning that if you donate $10, a full $50 goes to UNICEF projects. I have really mixed feelings about donors that offer matching funds (So what you’re saying is, you won’t give if we don’t?), but when they are being offered, use them.

All those charities are focused on immediate poverty reduction; if you’re looking for somewhere to give that fights Existential Risk, I highly recommend the Union of Concerned Scientists—one of the few Existential Risk organizations that uses evidence-based projections and recognizes that nuclear weapons and climate change are the threats we need to worry about.

And let’s not be too anthropocentric; there are a lot of other sentient beings on this planet, and Animal Charity Evaluators can help you find which charities will best improve the lives of other animals.

I’ve just listed a whole bunch of ways you can give money—and that probably is the best thing for you to give; your time is probably most efficiently used working in your own profession, whatever that may be—but there are other ways you can contribute as well.

One simple but important change you can make, if you haven’t already, is to become vegetarian. Even aside from the horrific treatment of animals in industrial farming, you don’t have to believe that animals deserve rights—or that meat is murder—to support this change: meat production is a larger contributor to global greenhouse gas emissions than transportation, so everyone becoming vegetarian would have a larger impact against climate change than taking literally every car and truck in the world off the road. Since the world population is less than 10 billion, meat accounts for 18% of greenhouse emissions, and the IPCC projects that climate change will kill between 10 and 100 million people over the next century, roughly every 500 to 5,000 new vegetarians saves a life.
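That "500 to 5,000" range can be reproduced from the numbers just given; this is a rough back-of-envelope estimate, assuming deaths averted scale linearly with emissions reduced:

```python
population = 10e9                 # world population (upper bound from the text)
meat_share = 0.18                 # meat's share of greenhouse gas emissions
climate_deaths = (10e6, 100e6)    # IPCC range: climate deaths over the next century

# Deaths attributable to meat, if deaths scale with emissions:
meat_deaths = [meat_share * d for d in climate_deaths]   # 1.8M to 18M

# Vegetarians needed per life saved, at each end of the range:
per_life = [population / d for d in reversed(meat_deaths)]
print([round(x) for x in per_life])  # roughly [556, 5556] — i.e. 500 to 5,000
```

The endpoints land at about 560 and 5,600, which rounds to the "500 to 5,000" figure in the text.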

You can move your money from a bank to a credit union, as even the worst credit unions are generally better than the best for-profit banks, and the worst for-profit banks are very, very bad. The actual transition can be fairly inconvenient, but a good credit union will provide you with all the same services, and most credit unions link their networks and have online banking, so for example I can still deposit and withdraw from my University of Michigan Credit Union account while in California.

Another thing you can do is reduce your consumption of sweatshop products in favor of products manufactured under fair labor standards. This is harder than it sounds; it can be very difficult to tell what a company’s true labor conditions are like, as the worst companies work very hard to hide them (now, if they worked half as hard to improve them… it reminds me of how many students seem willing to do twice as much work to cheat as they would to simply learn the material in the first place).

You should not simply stop buying products that say “Made in China”; in fact, this could be counterproductive. We want products to be made in China; we need products to be made in China. What we have to do is improve labor standards in China, so that products made in China are like products made in Japan or Korea—skilled workers with high-paying jobs in high-tech factories. Presumably it doesn’t bother you when something says “Made in Switzerland” or “Made in the UK”, because you know their labor standards are at least as high as our own; that’s where I’d like to get with “Made in China”.

The simplest way to do this is of course to buy Fair Trade products, particularly coffee and chocolate. But most products are not available Fair Trade (there are no Fair Trade computers, and only loose analogues for clothing and shoes).

Moreover, we must not let the perfect be the enemy of the good; companies that have done terrible things in the past may still be the best companies to support, because there are no alternatives that are any better. In order to incentivize improvement, we must buy from the least of all evils for a while, until the new competitive pressure makes non-evil corporations viable. With this in mind, the Fair Labor Association may not be wrong to endorse companies like Adidas and Apple, even though they surely have substantial room to improve. Similarly, few companies on the Ethisphere list are spotless, but they probably are genuinely better than their competitors. (Well, those that have competitors; Hasbro is on there. Name a well-known board game, and odds are it’s made by a Hasbro subsidiary: they own Parker Brothers, Milton Bradley, and Wizards of the Coast. Wikipedia even has a category for them, Hasbro subsidiaries. Maybe they’ve been trying to tell us something with all those versions of Monopoly?)

I’m not very happy with the current state of labor standards reporting (much less labor standards enforcement), so I don’t want to recommend any of these sources too highly. But if you are considering buying from one of three companies and only one of them is endorsed by the Fair Labor Association, it couldn’t hurt to buy from that one instead of the others.

Buying from ethical companies will generally be more expensive—but rarely prohibitively so, and this is part of how we use price signals to incentivize better behavior. For about a year, BP gasoline was clearly cheaper than other gasoline, because nobody wanted to buy from BP and they were forced to sell at a discount after the Deepwater Horizon disaster. Their profits tanked as a result. That’s the kind of outcome we want—preferably for a longer period of time.

I suppose you could also save money by buying cheaper products and then donate the difference, and in the short run this would actually be most cost-effective for global utility; but (1) nobody really does that; people who buy Fair Trade also tend to donate more, maybe just because they are more generous in general, and (2) in the long run what we actually want is more ethical businesses, not a system where businesses exploit everyone and then we rely upon private charity to compensate us for our exploitation. For similar reasons, philanthropy is a stopgap—and a much-needed one—but not a solution.

Of course, you can vote. And don’t just vote in the big name elections like President of the United States. Your personal impact may actually be larger from voting in legislatures and even local elections and ballot proposals. Certainly your probability of being a deciding vote is far larger, though this is compensated by the smaller effect of the resulting policies. Most US states have a website where you can look up any upcoming ballots you’ll be eligible to vote on, so you can plan out your decisions well in advance.
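
The trade-off between pivot probability and policy stakes can be sketched with a toy model (my own illustration, using made-up turnout and policy-value numbers, not figures from any study): assume the chance of casting the deciding vote in a roughly even race scales like 1/N, where N is the number of voters, and multiply by a rough dollar value of the policy at stake.

```python
# Toy expected-impact model for voting (an illustration, not a rigorous
# political-science estimate): crude pivotality ~ 1/N for a close race,
# times a rough dollar value of the policy being decided.

def expected_impact(n_voters, policy_value):
    p_pivotal = 1 / n_voters        # chance your vote is the deciding one
    return p_pivotal * policy_value

# Hypothetical numbers: a presidential race with ~130 million voters and
# $1 trillion of policy at stake, versus a city council race with 20,000
# voters and $50 million at stake.
national = expected_impact(130_000_000, 1_000_000_000_000)
local = expected_impact(20_000, 50_000_000)

print(f"national: ${national:,.0f} per vote")
print(f"local:    ${local:,.0f} per vote")
```

On these made-up numbers the two races come out within the same order of magnitude: the local pivot probability is thousands of times larger, and the smaller stakes mostly (but not entirely) cancel it out, which is exactly the trade-off described above.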

You may even want to consider running for office at the local level, though I realize this is a very large commitment. But many local officials run unopposed, which means there is no real democracy at work there at all.

Finally, you can contribute in some small way to making the world a better place simply by spreading the word, as I hope I’m doing right now.

Is America uniquely… mean?

JDN 2457454

I read this article yesterday which I found both very resonant and very disturbing: At least among First World countries, the United States really does seem uniquely, for lack of a better word, mean.

The formal psychological terminology is social dominance orientation; the political science term is authoritarianism. In economics, we notice the difference due to its effect on income inequality. But all of these concepts are capturing part of a deeper underlying reality that in the age of Trump I am finding increasingly hard to deny. The best predictor of support for Trump is authoritarianism.

Of course I’ve already talked about our enormous military budget; and then there’s Tennessee, which made its official state rifle a .50-caliber weapon capable of destroying light tanks. There is something especially dominant, aggressive, and violent about American culture.

We are certainly not unique in the world as a whole—actually I think the amount of social dominance orientation, authoritarianism, and inequality in the US is fairly similar to the world average. We are unique in our gun ownership, but our military spending as a share of GDP is not particularly high by world standards—we’re just an extremely rich country. Yet in all these respects we are a unique outlier among First World countries; in many ways we resemble a rich authoritarian petrostate like Qatar rather than a European social democracy like France or the UK. (At least we’re not Saudi Arabia?)

More than other First World cultures, Americans believe in hierarchy; they believe that someone should be on top and other people should be on the bottom. More than that, they believe that people “like us” should be on top and people “not like us” should be on the bottom, however that is defined—often in terms of race or religion, but not necessarily.

Indeed, one of the things I find most baffling about this is that it is often more important to people that others be held down than that they themselves be lifted up. This is the only way I can make sense of the fact that people who have watched their wages be drained into the pockets of billionaires for a generation can think that the most important things to do right now are to keep out illegal immigrants and deport Muslims.

It seems that people become convinced that their own status, whatever it may be, is deserved: If they are rich, it is obviously because they are so brilliant and hard-working (something Trump clearly believes about himself, being a textbook example of Narcissistic Personality Disorder); if they are poor, it is obviously because they are so incompetent and lazy. Thus, being lifted up doesn’t make sense; why would you give me things I don’t deserve?

But then when they see people who are different from them, they know automatically that those people must be by definition inferior, as all who are Not of Our Tribe are by definition inferior. And therefore, any of them who are rich gained their position through corruption or injustice, and all of them who are poor deserve their fate for being so inferior. Thus, it is most vital to ensure that these Not of Our Tribe are held down from reaching high positions they so obviously do not deserve.

I’m fairly sure that most of this happens at a very deep unconscious level; it calls upon ancient evolutionary instincts to love our own tribe, to serve the alpha male, to fear and hate those of other tribes. These instincts may well have served us 200,000 years ago (then again, they may just have been the best our brains could manage at the time); but they are becoming a dangerous liability today.

As E.O. Wilson put it: “The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology.”

Yet this cannot be a complete explanation, for there is variation in these attitudes. A purely instinctual theory should say that all human cultures have this to an essentially equal degree; but I started this post by pointing out that the United States appears to have a particularly large amount relative to Europe.

So, there must be something in the cultures or institutions of different nations that makes them either enhance or suppress this instinctual tribalism. There must be something that Europe is doing right, the US is doing wrong, and Saudi Arabia is doing very, very wrong.

Well, the obvious one that sticks out at me is religion. It seems fairly obvious to me that Sweden is less religious than the US, which is less religious than Saudi Arabia.

Data does back me up on this. Religiosity isn’t easy to measure, but we have methods of doing so. If we ask people in various countries if religion is very important in their lives, the percentage of people who say yes gives us an indication of how religious that country is.

In Saudi Arabia, 93% say yes. In the United States, 65% say yes. In Sweden, only 17% say yes.

Religiosity tends to be highest in the poorest countries, but the US is an outlier, far too rich for our religion (or too religious for our wealth).

Religiosity also tends to be highest in countries with high inequality—this time, the US fits right in.

The link between religion and inequality is quite clear. It’s harder to say which way the causation runs. Perhaps high inequality makes people cling more to religion as a comfort, and getting rid of religion would only mean taking that comfort away. Or, perhaps religion actually makes people believe more in social dominance, and thus is part of what keeps that high inequality in place. It could also be a feedback loop, in which higher inequality leads to higher religiosity which leads to higher inequality.

That said, I think we actually have some evidence that causality runs from religion to inequality, rather than the other way around. The secularization of France took place around the same time as the French Revolution that overthrew the existing economic system and replaced it with one that had substantially less inequality. Iran’s government became substantially more based on religion in the latter half of the 20th century, and their inequality soared thereafter.

Above all, Donald Trump dominates the evangelical vote, which makes absolutely no sense if religion is a comfort against inequality—but perfect sense if religion solidifies the tendency of people to think in terms of hierarchy and authoritarianism.

This also makes sense in terms of the content of religion, especially Abrahamic religion; read the Bible and the Qur’an, and you will see that their primary goal seems to be to convince you that some people, namely people who believe in this book, are just better than other people, and we should be in charge because God says so. (And you wouldn’t try to argue with God, would you?) They really make no particular effort to convince you that God actually exists; they spend all their argumentative effort on what God wants you to do and who God wants you to put in charge—and for some strange reason it always seems to be the same guys who are writing down “God’s words” in the book! What a coincidence!

If religion is indeed the problem, or a large part of the problem, what can we do about it? That’s the most difficult part. We’ve been making absolutely conclusive rational arguments against religion since literally 300 years before Jesus was even born (there has never been a time in human history in which it was rational for an educated person to believe in Christianity or Islam, for the religions did not come into existence until well after the arguments to refute them were well-known!), and the empirical evidence against theism has only gotten stronger ever since; so that clearly isn’t enough.

I think what we really need to do at this point is confront the moral monopoly that religion has asserted for itself. The “Moral Majority” was neither, but its name still sort of makes sense to us because we so strongly associate being moral with being religious. We use terms like “Christian” and “generous” almost interchangeably. And whenever you get into a debate about religion, shortly after you have thoroughly demolished any shred of empirical credibility religion still had left, you can basically guarantee that the response will be: “But without God, how can you know right from wrong?”

What is perhaps most baffling about this concept of morality so commonplace in our culture is that not only is the command of a higher authority that rewards and punishes you not the highest level of moral development—it is literally the lowest. Of the six stages of moral thinking Kohlberg documented in children, the reward and punishment orientation exemplified by the Bible and the Qur’an is the very first. I think many of these people really truly haven’t gotten past level 1, which is why when you start trying to explain how you base your moral judgments on universal principles of justice and consequences (level 6) they don’t seem to have any idea what you’re talking about.

Perhaps this is a task for our education system (philosophy classes in middle school?), perhaps we need something more drastic than that, or perhaps it is enough that we keep speaking about it in public. But somehow we need to break up the monopoly that religion has on moral concepts, so that people no longer feel ashamed to say that something is morally wrong without being able to cite a particular passage from a particular book from the Iron Age. Perhaps once we can finally make people realize that morality does not depend on religion, we can finally free them from the grip of religion—and therefore from the grip of authoritarianism and social dominance.

If this is right, then the reason America is so mean is that we are so Christian—and people need to realize that this is not a paradoxical statement.

Will robots take our jobs?

JDN 2457451

I briefly discussed this topic before, but I thought it deserved a little more depth. Also, the SF author in me really likes writing this sort of post where I get to speculate about futures that are utopian, dystopian, or (most likely) somewhere in between.

The fear is quite widespread, but how realistic is it? Will robots in fact take all our jobs?

Most economists do not think so. Robert Solow famously quipped, “You can see the computer age everywhere but in the productivity statistics.” (It never quite seemed to occur to him that this might be a flaw in the way we measure productivity statistics.)

By the usual measure of labor productivity, robots do not appear to have had a large impact. Indeed, their impact appears to have been smaller than almost any other major technological innovation.

Using BLS data (which was formatted badly and thus a pain to clean, by the way—albeit not as bad as the World Bank data I used on my master’s thesis, which was awful), I made this graph of the growth rate of labor productivity as usually measured:

Productivity_growth

The fluctuations are really jagged due to measurement errors, so I also made an annually smoothed version:

Productivity_growth_smooth

Based on this standard measure, productivity has grown more or less steadily during my lifetime, fluctuating with the business cycle around a value of about 3.5% per year (3.4 log points). If anything, the growth rate seems to be slowing down; in recent years it’s been around 1.5% (1.5 lp).
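
That smoothing step can be sketched roughly as follows, with made-up quarterly growth figures standing in for the BLS series, and a simple four-quarter moving average (which may not be the exact method used for the graph above):

```python
import numpy as np

# Hypothetical quarterly labor-productivity growth figures (annualized %),
# standing in for the jagged BLS series described above.
quarterly = np.array([2.1, 5.0, -1.3, 4.2, 3.8, 0.5, 6.1, 2.9])

# A four-quarter trailing moving average smooths out measurement noise:
# each output point is the mean of the four most recent quarters.
smoothed = np.convolve(quarterly, np.ones(4) / 4, mode="valid")

print(np.round(smoothed, 2))
```

The `mode="valid"` option drops the partial windows at the edges, so the smoothed series is three points shorter than the raw one.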

This was clearly the time during which robots became ubiquitous—autonomous robots did not emerge until the 1970s and 1980s, and robots became widespread in factories in the 1980s. Then there’s the fact that computing power has been doubling every 1.5 years during this period, which is an annual growth rate of 59% (46 lp). So why hasn’t productivity grown at anywhere near that rate?
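
The conversion between percent growth and log points used above works like this: log points are just 100 times the natural log of the growth factor, so they nearly coincide with percent growth at small rates but diverge as rates get large.

```python
import math

def growth_stats(factor):
    """Percent growth and log points implied by a one-year growth factor."""
    pct = (factor - 1) * 100            # conventional percent growth
    log_points = math.log(factor) * 100  # 100 * ln(growth factor)
    return pct, log_points

# Productivity growing at about 3.5% per year:
pct, lp = growth_stats(1.035)            # roughly 3.5% and 3.4 lp

# Computing power doubling every 1.5 years:
annual_factor = 2 ** (1 / 1.5)
pct2, lp2 = growth_stats(annual_factor)  # roughly 59% and 46 lp

print(f"{pct:.1f}% ~ {lp:.1f} lp; {pct2:.0f}% ~ {lp2:.0f} lp")
```

This reproduces the figures quoted above: at 3.5% the two measures are nearly identical, while a 59% growth rate compresses to only 46 log points.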

I think the main problem is that we’re measuring productivity all wrong. We measure it in terms of money instead of in terms of services. Yes, we try to correct for inflation; but we fail to account for the fact that computers have allowed us to perform literally billions of services every day that could not have been performed without them. You can’t adjust that away by plugging into the CPI or the GDP deflator.

Think about it: Your computer provides you the services of all the following:

  1. A decent typesetter and layout artist
  2. A truly spectacular computer (remember, that used to be a profession!)
  3. A highly skilled statistician (who takes no initiative—you must tell her what calculations to do)
  4. A painting studio
  5. A photographer
  6. A video camera operator
  7. A professional orchestra of the highest quality
  8. A decent audio recording studio
  9. Thousands of books, articles, and textbooks
  10. Ideal seats at every sports stadium in the world

And that’s not even counting things like social media and video games that can’t even be readily compared to services that were provided before computers.

If you added up the value of all of those jobs, the amount you would have had to pay in order to hire all those people to do all those things for you before computers existed, your computer easily provides you with at least $1 million in professional services every year. Put another way, your computer has taken jobs that would have provided $1 million in wages. You do the work of a hundred people with the help of your computer.

This isn’t counted in our productivity statistics precisely because it’s so efficient. If we still had to pay that much for all these services, it would be included in our GDP and then our GDP per worker would properly reflect all this work that is getting done. But then… whom would we be paying? And how would we have enough to pay that? Capitalism isn’t actually set up to handle this sort of dramatic increase in productivity—no system is, really—and thus the market price for work has almost no real relation to the productive capacity of the technology that makes that work possible.

Instead it has to do with scarcity of work—if you are the only one in the world who can do something (e.g. write Harry Potter books), you can make an awful lot of money doing that thing, while something that is far more important but can be done by almost anyone (e.g. feed babies) will pay nothing or next to nothing. At best we could say it has to do with marginal productivity, but marginal in the sense of your additional contribution over and above what everyone else could already do—not in the sense of the value actually provided by the work that you are doing. Anyone who thinks that markets automatically reward hard work or “pay you what you’re worth” clearly does not understand how markets function in the real world.

So, let’s ask again: Will robots take our jobs?

Well, they’ve already taken many jobs. There isn’t even a clear high-skill/low-skill dichotomy here; robots are just as likely to make pharmacists obsolete as they are truck drivers, just as likely to replace surgeons as they are cashiers.

Labor force participation is declining, though slowly:

Labor_force_participation

Yet I think this also underestimates the effect of technology. As David Graeber points out, most of the new jobs we’ve been creating seem to be, for lack of a better term, bullshit jobs—jobs that really don’t seem like they need to be done, other than to provide people with something to do so that we can justify paying them salaries.

As he puts it:

Again, an objective measure is hard to find, but one easy way to get a sense is to ask: what would happen were this entire class of people to simply disappear? Say what you like about nurses, garbage collectors, or mechanics, it’s obvious that were they to vanish in a puff of smoke, the results would be immediate and catastrophic. A world without teachers or dock-workers would soon be in trouble, and even one without science fiction writers or ska musicians would clearly be a lesser place. It’s not entirely clear how humanity would suffer were all private equity CEOs, lobbyists, PR researchers, actuaries, telemarketers, bailiffs or legal consultants to similarly vanish. (Many suspect it might markedly improve.)

The paragon of all bullshit jobs is sales. Sales is a job that simply should not exist. If something is worth buying, you should be able to present it to the market and people should choose to buy it. If there are many choices for a given product, maybe we could have some sort of independent product rating agencies that decide which ones are the best. But sales means trying to convince people to buy your product—you have an absolutely overwhelming conflict of interest that makes your statements to customers so utterly unreliable that they are literally not even information anymore. The vast majority of advertising, marketing, and sales is thus, in a fundamental sense, literally noise. Sales contributes absolutely nothing to our economy, and because we spend so much effort on it, and advertising occupies so much of our time and attention, it actually takes a great deal away. But sales is one of our most steadily growing labor sectors; once we figure out how to make things without people, we employ the people in trying to convince customers to buy the new things we’ve made. Sales is also absolutely miserable for many of the people who do it, as I know from personal experience in two different sales jobs that I had to quit before the end of the first week.

Fortunately we have not yet reached the point where sales is the fastest growing labor sector. Currently the fastest-growing jobs fall into three categories: Medicine, green energy, and of course computers—but actually mostly medicine. Yet even this is unlikely to last; one of the easiest ways to reduce medical costs would be to replace more and more medical staff with automated systems. A nursing robot may not be quite as pleasant as a real professional nurse—but if by switching to robots the hospital can save several million dollars a year, they’re quite likely to do so.

Certain tasks are harder to automate than others—particularly anything requiring creativity and originality is very hard to replace, which is why I believe that in the 2050s or so there will be a Revenge of the Humanities Majors as all the supposedly so stable and forward-thinking STEM jobs disappear and the only jobs that are left are for artists, authors, musicians, game designers and graphic designers. (Also, by that point, very likely holographic designers, VR game designers, and perhaps even neurostim artists.) Being good at math won’t mean anything anymore—frankly it probably shouldn’t right now. No human being, not even great mathematical savants, is anywhere near as good at arithmetic as a pocket calculator. There will still be a place for scientists and mathematicians, but it will be the creative aspects of science and math that persist—design of experiments, development of new theories, mathematical intuition to develop new concepts. The grunt work of cleaning data and churning through statistical models will be fully automated.

Most economists appear to believe that we will continue to find tasks for human beings to perform, and this improved productivity will simply raise our overall standard of living. As any ECON 101 textbook will tell you, “scarcity is a fundamental fact of the universe, because human needs are unlimited and resources are finite.”

In fact, neither of those claims is true. Human needs are not unlimited; indeed, on Maslow’s hierarchy of needs, First World countries have essentially reached the point where we could provide the entire population with the whole pyramid, guaranteed, all the time—if we were willing and able to fundamentally reform our economic system.

Resources are not even finite; what constitutes a “resource” depends on technology, as does how accessible or available any given source of resources will be. When we were hunter-gatherers, our only resources were the plants and animals around us. Agriculture turned seeds and arable land into a vital resource. Whale oil used to be a major scarce resource, until we found ways to use petroleum. Petroleum in turn is becoming increasingly irrelevant (and cheap) as solar and wind power mature. Soon the waters of the oceans themselves will be our power source as we refine the deuterium for fusion. Eventually we’ll find we need something for interstellar travel that we used to throw away as garbage (perhaps it will in fact be dilithium!). I suppose that if the universe is finite or if FTL is impossible, we will be bound by what is available in the cosmic horizon… but even that is not finite, as the universe continues to expand! If the universe is open (as it probably is) and one day we can harness the dark energy that seethes through the ever-expanding vacuum, our total energy consumption can grow without bound just as the universe does. Perhaps we could even stave off the heat death of the universe this way—after all, we have billions of years to figure out how.

If scarcity were indeed this fundamental law that we could rely on, then more jobs would always continue to emerge, producing whatever is next on the list of needs ordered by marginal utility. Life would always get better, but there would always be more work to be done. But in fact, we are basically already at the point where our needs are satiated; we continue to try to make more not because there isn’t enough stuff, but because nobody will let us have it unless we do enough work to convince them that we deserve it.

We could continue on this route, making more and more bullshit jobs, pretending that this is work that needs to be done so that we don’t have to adjust our moral framework, which requires that people be constantly working for money in order to deserve to live. It’s quite likely in fact that we will, at least for the foreseeable future. In this future, robots will not take our jobs, because we’ll make up excuses to create more.

But that future is more on the dystopian end, in my opinion; there is another way, a better way, the world could be. As technology makes it ever easier to produce as much wealth as we need, we could learn to share that wealth. As robots take our jobs, we could get rid of the idea of jobs as something people must have in order to live. We could build a new economic system: One where we don’t ask ourselves whether children deserve to eat before we feed them, where we don’t expect adults to spend most of their waking hours pushing papers around in order to justify letting them have homes, where we don’t require students to take out loans they’ll need decades to repay before we teach them history and calculus.

This second vision is admittedly utopian, and perhaps in the worst way—perhaps there’s simply no way to make human beings actually live like this. Perhaps our brains, evolved for the all-too-real scarcity of the ancient savannah, simply are not plastic enough to live without that scarcity, and so create imaginary scarcity by whatever means they can. It is indeed hard to believe that we can make so fundamental a shift. But for a Homo erectus in 500,000 BP, the idea that our descendants would one day turn rocks into thinking machines that travel to other worlds would be pretty hard to believe too.

Will robots take our jobs? Let’s hope so.

Why are all our Presidents war criminals?

JDN 2457443

Today I take on a topic that we really don’t like to talk about. It creates grave cognitive dissonance in our minds, forcing us to deeply question the moral character of our entire nation.

Yet it is undeniably a fact:

Most US Presidents are war criminals.

There is a long tradition of war crimes by US Presidents which includes Obama, Bush, Nixon, and above all Johnson and Truman.

Barack Obama has ordered so-called “double-tap” drone strikes, which kill medics and first responders, in express violation of the Geneva Convention.

George W. Bush orchestrated a global program of torture and indefinite detention.

Bill Clinton ordered “extraordinary renditions” in which suspects were detained without trial and transferred to other countries for interrogation, where we knew they would most likely be tortured.

I actually had trouble finding any credible accusations of war crimes by George H.W. Bush (there are definitely accusations, but none of them are credible—seriously, people are listening to Manuel Noriega?), even as Director of the CIA. He might not be a war criminal.

Ronald Reagan supported a government in Guatemala that was engaged in genocide. He knew this was happening and did not seem to care. This was only one of many tyrannical, murderous regimes supported by Reagan’s administration. In fact, the International Court of Justice ruled against the United States under Reagan for its unlawful use of force in Nicaragua. Chomsky isn’t wrong about this one; strictly speaking the ICJ judges states rather than individuals, but Reagan’s government was, in effect, convicted of war crimes.

Jimmy Carter is a major exception to the rule; not only are there no credible accusations of war crimes against him, he has actively fought to pursue war crimes investigations against Israel and even publicly discussed the war crimes of George W. Bush.

I also wasn’t able to find any credible accusations of war crimes by Gerald Ford, so he might be clean.

But then we get to Richard Nixon, who deployed chemical weapons against civilians in Vietnam. (Calling Agent Orange “herbicide” probably shouldn’t matter morally—but it might legally, as tactical “herbicides” are not always war crimes.) But Nixon does deserve some credit for banning biological weapons.

Indeed, most of the responsibility for war crimes in Vietnam falls upon Johnson. The US deployed something very close to a “total war” strategy involving carpet bombing—more bombs were dropped by the US in Vietnam than by all countries in WW2—as well as napalm and of course chemical weapons; basically it was everything short of nuclear weapons. Kennedy and Johnson also substantially expanded the US biological weapons program.

Speaking of weapons of mass destruction, I’m not sure if it was actually illegal to expand the US nuclear arsenal as dramatically as Kennedy did, but it definitely should have been. Kennedy brought our nuclear arsenal up to its greatest peak, a horrifying 30,000 deployable warheads—more than enough to wipe out human civilization, and possibly enough to destroy the entire human race.

While Eisenhower was accused of the gravest crime on this list, namely the deaths of over 1 million German prisoners of war, most historians do not consider this accusation credible. Rather, his war crimes were committed as Supreme Allied Commander in Europe, in the form of carpet bombing of German cities, above all Dresden, which had no apparent military significance and even held a number of Allied POWs. (The firebombing of Tokyo, which killed as many as 100,000 people, was comparably horrific, but it fell under the Pacific command, not Eisenhower’s.)

But then we get to Truman, the coup de grace, the only man in history to order the use of nuclear weapons in warfare. Truman gave the order to deploy nuclear weapons against civilians. He was the only person in the history of the world to ever give such an order. It wasn’t Hitler; it wasn’t Stalin. It was Harry S. Truman.

Then of course there’s Roosevelt’s internment of over 100,000 Japanese Americans. It really pales in comparison to Truman’s order to vaporize an equal number of Japanese civilians in the blink of an eye.

I think it will suffice to end the list here, though I could definitely go on. I think Truman is a really good one to focus on, for two reasons that pull quite strongly in opposite directions.

1. The use of nuclear weapons against civilians is among the gravest possible crimes. It may be second to genocide, but then again it may not, as genocide does not risk the destruction of the entire human race. If we only had the option of outlawing one thing in war, and had to allow everything else, we would have no choice but to ban the use of nuclear weapons against civilians.

2. Truman’s decision may have been justified. To this day it is still hotly debated whether the atomic bombings were justifiable; mainstream historians have taken both sides. On Debate.org, the vote is almost exactly divided—51% yes, 49% no. Many historians believe that had Truman not deployed nuclear weapons, there would have been an additional 5 million deaths as a result of the continuation of the war.

Perhaps now you can see why this matter makes me so ambivalent.

There is a part of me that wants to take an absolute hard line against war crimes, and say that they must never be tolerated, that even otherwise good Presidents like Clinton and Obama deserve to be tried at the Hague for what they have done. (Truman and Eisenhower are dead, so it’s too late for them.)

But another part of me wonders what would happen if we did this. What if the world really is so dangerous that we have no choice but to allow our leaders to commit horrible atrocities in order to defend us?

There are easy cases—Bush’s torture program didn’t even result in very much useful intelligence, so it was simply a pointless degradation of our national character. The same amount of effort invested in more humane intelligence gathering would very likely have provided more reliable information. And in any case, terrorism is such a minor threat in the scheme of things that the effort would be better spent on improving environmental regulations or auto safety.

Similarly, there’s no reason to engage in “extraordinary rendition” to a country that tortures people when you could simply conduct a legitimate trial in absentia and then arrest the convicted terrorist with special forces and imprison him in a US maximum-security prison until his execution. (Or even carry out the execution directly by the special forces; as long as the trial is legitimate, I see no problem with that.) At that point, the atrocities are being committed simply to avoid inconvenience.

But especially when we come to the WW2 examples, where the United States—nay, the world—was facing a genuine threat of being conquered by genocidal tyrants, I do begin to wonder if “victory by any means necessary” is a legitimate choice.

There is a way to cut the Gordian knot here, and say that yes, these are crimes, and should be punished; but yes, they were morally justified. Then, the moral calculus any President must undergo when contemplating such an atrocity is that he himself will be tried and executed if he goes through with it. If your situation is truly so dire that you are willing to kill 100,000 civilians, perhaps you should be willing to go down with the ship. (Roger Fisher made a similar argument when he suggested implanting the nuclear launch codes inside the body of a US military officer. If you’re not willing to tear one man apart with a knife, why are you willing to vaporize an entire city?)

But if your actions really were morally justified… what sense does it make to punish you for them? And if we hold up this threat of punishment, could it cause a President to flinch when we really need him to take such drastic action?

Another possibility to consider is that perhaps our standards for war crimes really are too strict, and some—not all, but some—of the actions I just listed are in fact morally justifiable and should be made legal under international law. Perhaps the US government is right to fight the UN convention against cluster munitions; maybe we need cluster bombs to successfully defend national security. Perhaps it should not be illegal to kill the combat medics who directly serve under the command of enemy military forces—as opposed to civilian first-responders or Medecins Sans Frontieres. Perhaps our tolerance for civilian casualties is unrealistically low, and it is impossible to fight a war in the real world without killing a large number of civilians.

Then again, perhaps not. Perhaps we are too willing to engage in war in the first place, too accustomed to deploying military force as our primary response to international conflict. Perhaps the prospect of facing a war crimes tribunal in a couple of years should be an extra layer of deterrent against any President ordering yet another war—by some estimates we have been at war 93% of the time since our founding as a nation, and it is a well-documented fact that we have by far the highest military spending in the world. Why is it that so many Americans see diplomacy as foolish, see compromise as weakness?

Perhaps the most terrifying thing is not that so many US Presidents are war criminals; it is that so many Americans don’t seem to have any problem with that.

Do we always want to internalize externalities?

JDN 2457437

I often talk about the importance of externalities; I gave a full discussion in this earlier post, and covered one of their important implications, the tragedy of the commons, in another. Briefly, externalities are consequences of actions incurred upon people who did not perform those actions. Anything I do affecting you that you had no say in, is an externality.

Usually I’m talking about how we want to internalize externalities, meaning that we set up a system of incentives to make it so that the consequences fall upon the people who chose the actions instead of anyone else. If you pollute a river, you should have to pay to clean it up. If you assault someone, you should serve jail time as punishment. If you invent a new technology, you should be rewarded for it. These are all attempts to internalize externalities.

But today I’m going to push back a little, and ask whether we really always want to internalize externalities. If you think carefully, it’s not hard to come up with scenarios where it actually seems fairer to leave the externality in place, or perhaps reduce it somewhat without eliminating it.

For example, suppose indeed that someone invents a great new technology. To be specific, let’s think about Jonas Salk, inventing the polio vaccine. This vaccine saved the lives of thousands of people and saved millions more from pain and suffering. Its value to society is enormous, and of course Salk deserved to be rewarded for it.

But we did not actually fully internalize the externality. If we had, every family whose child was saved from polio would have had to pay Jonas Salk an amount equal to what they saved on medical treatments as a result, or even an amount somehow equal to the value of their child’s life (imagine how offended people would get if you asked that on a survey!). Those millions of people spared from suffering would need to each pay, at minimum, thousands of dollars to Jonas Salk, making him of course a billionaire.

And indeed this is more or less what would have happened, if he had been willing and able to enforce a patent on the vaccine. The inability of some to pay for the vaccine at its monopoly prices would add some deadweight loss, but even that could be removed if Salk Industries had found a way to offer targeted price vouchers that let them precisely price-discriminate so that every single customer paid exactly what they could afford to pay. If that had happened, we would have fully internalized the externality and therefore maximized economic efficiency.

But doesn’t that sound awful? Doesn’t it sound much worse than what we actually did, where Jonas Salk received a great deal of funding and support from governments and universities, and lived out his life comfortably upper-middle class as a tenured university professor?

Now, perhaps he should have been awarded a Nobel Prize—I take that back, there’s no “perhaps” about it, he definitely should have been awarded a Nobel Prize in Medicine, it’s absurd that he did not—which means that I at least do feel the externality should have been internalized a bit more than it was. But a Nobel Prize is only 10 million SEK, about $1.1 million. That’s about enough to be independently wealthy and live comfortably for the rest of your life; but it’s a small fraction of the roughly $7 billion he could have gotten if he had patented the vaccine. Yet while the possible world in which he wins a Nobel is better than this one, I’m fairly well convinced that the possible world in which he patents the vaccine and becomes a billionaire is considerably worse.

Internalizing externalities makes sense if your goal is to maximize total surplus (a concept I explain further in the linked post), but total surplus is actually a terrible measure of human welfare.

Total surplus counts every dollar of willingness-to-pay exactly the same across different people, regardless of whether they live on $400 per year or $4 billion.

It also takes no account whatsoever of how wealth is distributed. Suppose a new technology adds $10 billion in wealth to the world. As far as total surplus is concerned, it makes no difference whether that $10 billion is spread evenly across the entire planet, distributed among a city of a million people, concentrated in a small town of 2,000, or even held entirely in the bank account of a single man.

Particularly apropos of the Salk example, total surplus makes no distinction between these two scenarios: a perfectly-competitive market where everything is sold at a fair price, and a perfectly price-discriminating monopoly, where everything is sold at the very highest possible price each person would be willing to pay.

This is a perfectly-competitive market, where the benefits are split more or less equally (in this case exactly equally, though that need not be true in real life) between sellers and buyers:

[Figure: elastic supply in a perfectly-competitive market, with consumer and producer surplus labeled]

This is a perfectly price-discriminating monopoly, where the benefits accrue entirely to the corporation selling the good:

[Figure: elastic supply under a perfectly price-discriminating monopoly, with all surplus going to the seller]

In the former case, the company profits, consumers are better off, everyone is happy. In the latter case, the company reaps all the benefits and everyone else is left exactly as they were. In real terms those are obviously very different outcomes—the former being what we want, the latter being the cyberpunk dystopia we seem to be hurtling mercilessly toward. But in terms of total surplus, and therefore the kind of “efficiency” that is maximized by internalizing all externalities, they are indistinguishable.

In fact (as I hope to publish a paper about at some point), the way willingness-to-pay works, it weights rich people more. Redistributing goods from the poor to the rich will typically increase total surplus.

Here’s an example. Suppose there is a cake, which is sufficiently delicious that it offers 2 milliQALY in utility to whoever consumes it (this is a truly fabulous cake). Suppose there are two people to whom we might give this cake: Richie, who has $10 million in annual income, and Hungry, who has only $1,000 in annual income. How much will each of them be willing to pay?

Well, assuming logarithmic utility of wealth—so that marginal utility is inversely proportional to wealth, which is itself probably biasing slightly in favor of the rich—1 milliQALY is about $1 to Hungry, so Hungry will be willing to pay $2 for the cake. To Richie, however, 1 milliQALY is about $10,000; so he will be willing to pay a whopping $20,000 for this cake.

What this means is that the cake will almost certainly be sold to Richie; and if we proposed a policy to redistribute the cake from Richie to Hungry, economists would emerge to tell us that we have just reduced total surplus by $19,998 and thereby committed a great sin against economic efficiency. They will cajole us into returning the cake to Richie and thus raising total surplus by $19,998 once more.

This despite the fact that I stipulated that the cake is worth just as much in real terms to Hungry as it is to Richie; the difference is due to their wildly differing marginal utility of wealth.

Indeed, it gets worse, because even if we suppose that the cake is worth much more in real utility to Hungry—because he is in fact hungry—it can still easily turn out that Richie’s willingness-to-pay is substantially higher. Suppose that Hungry actually gets 20 milliQALY out of eating the cake, while Richie still only gets 2 milliQALY. Hungry’s willingness-to-pay is now $20, but Richie is still going to end up with the cake.
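The arithmetic in the cake example can be sketched directly. Under logarithmic utility of wealth, the dollar value of a fixed amount of utility scales linearly with income; calibrating, per the assumption above, so that 1 milliQALY is worth $1 at a $1,000 annual income:

```python
def wtp(utility_mqaly, annual_income):
    """Willingness-to-pay in dollars for a good yielding `utility_mqaly`,
    assuming logarithmic utility of wealth, calibrated so that 1 milliQALY
    is worth $1 to someone with a $1,000 annual income."""
    return utility_mqaly * annual_income / 1000

print(wtp(2, 1_000))        # Hungry values the cake at $2
print(wtp(2, 10_000_000))   # Richie values the same cake at $20,000
print(wtp(20, 1_000))       # even a starving Hungry only bids $20 -- Richie still wins
```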

Now, your thought might be: “Why would Richie pay $20,000, when he can go to another store and get another cake that’s just as good for $20?” He wouldn’t. But in the sense relevant to total surplus, willingness-to-pay isn’t what you’d actually pay given the market prices of goods; it is the absolute maximum price you’d be willing to pay to get that good under any circumstances, which works out to the marginal utility of the good divided by your marginal utility of wealth. In this sense the cake is “worth” $20,000 to Richie, and “worth” substantially less to Hungry—not because it’s actually worth less to him in real terms, but simply because Richie has so much more money.

Even economists often equate the two, implicitly assuming that we spend our money up to the point where our marginal willingness-to-pay equals the price we actually pay; but in general our willingness-to-pay exceeds the price for any good we are willing to buy at all. The consumer surplus we get from goods is in fact equal to the difference between willingness-to-pay and actual price paid, summed up over all the goods we have purchased.
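That last definition of consumer surplus is a one-liner (hypothetical numbers; the third purchase, bought exactly at willingness-to-pay, contributes nothing):

```python
# (willingness-to-pay, price actually paid) for goods you chose to buy.
purchases = [(20, 5), (100, 80), (7, 7)]

# You only buy when wtp >= price; consumer surplus is the summed gap.
consumer_surplus = sum(wtp - price for wtp, price in purchases)
print(consumer_surplus)  # 15 + 20 + 0 = 35
```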

Internalizing all externalities would definitely maximize total surplus—but would it actually maximize happiness? Probably not.

If you asked most people what their marginal utility of wealth is, they’d have no idea what you’re talking about. But most people do actually have an intuitive sense that a dollar is worth more to a homeless person than it is to a millionaire, and that’s really all we mean by diminishing marginal utility of wealth.

I think the reason we’re uncomfortable with the idea of Jonas Salk getting $7 billion from selling the polio vaccine, rather than the same number of people getting the polio vaccine and Jonas Salk only getting the $1.1 million from a Nobel Prize, is that we intuitively grasp that after that $1.1 million makes him independently wealthy, the rest of the money is just going to sit in some stock account and continue making even more money, while if we’d let the families keep it they would have put it to much better use raising their children who are now protected from polio. We do want to reward Salk for his great accomplishment, but we don’t see why we should keep throwing cash at him when it could obviously be spent in better ways.

And indeed I think this intuition is correct; great accomplishments—which is to say, large positive externalities—should be rewarded, but not in direct proportion. Maybe there should be some threshold above which we say, “You know what? You’re rich enough now; we can stop giving you money.” Or maybe it should simply damp down very quickly, so that a contribution which is worth $10 billion to the world pays only slightly more than one that is worth $100 million, but a contribution that is worth $100,000 pays considerably more than one which is only worth $10,000.
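One hypothetical shape with exactly that damping property is a saturating reward, reward(v) = cap · v / (v + scale): roughly proportional for small contributions, flattening out near a cap for enormous ones. The particular numbers here (a $5 million cap, a $1 million scale) are purely illustrative, not a policy proposal:

```python
def reward(value, cap=5e6, scale=1e6):
    """Saturating reward: approximately proportional to `value` when it is
    small relative to `scale`, approaching `cap` for very large values."""
    return cap * value / (value + scale)

print(reward(1e4))    # ~$49,500 for a $10,000 contribution
print(reward(1e5))    # ~$454,500 for $100,000 -- roughly 9x more
print(reward(1e8))    # ~$4.95 million for $100 million
print(reward(1e10))   # ~$4.9995 million for $10 billion -- only slightly more
```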

What it ultimately comes down to is that if we make all the benefits accrue to the person who created them, there aren’t any benefits anymore. The whole point of Jonas Salk inventing the polio vaccine (or Einstein discovering relativity, or Darwin figuring out natural selection, or any great achievement) is that it will benefit the rest of humanity, preferably down to future generations. If you managed to fully internalize that externality, this would no longer be true; Salk and Einstein and Darwin would have become fabulously wealthy, and then somehow we’d all have to continue paying into their estates or something an amount equal to the benefits we received from their discoveries. (Every time you use your GPS, pay a royalty to the Einsteins. Every time you take a pill, pay a royalty to the Darwins.) At some point we’d probably get fed up and decide we’re no better off with them than without them—which is exactly, by construction, how we should feel if the externality were fully internalized.

Internalizing negative externalities is much less problematic—it’s your mess, clean it up. We don’t want other people to be harmed by your actions, and if we can pull that off that’s fantastic. (In reality, we usually can’t fully internalize negative externalities, but we can at least try.)

But maybe internalizing positive externalities really isn’t so great after all.