Two terms in marginal utility of wealth

JDN 2457569

This post is going to be a little wonkier than most; I’m actually trying to sort out my thoughts and draw some public comment on a theory that has been dancing around my head for a while. The original idea of separating terms in marginal utility of wealth was actually suggested by my boyfriend, and from there I’ve been trying to give it some more mathematical precision to see if I can come up with a way to test it experimentally. My thinking is also influenced by a paper Miles Kimball wrote about the distinction between happiness and utility.

There are lots of ways one could conceivably spend money—everything from watching football games to buying refrigerators to building museums to inventing vaccines. But insofar as we are rational (and we are after all about 90% rational), we’re going to try to spend our money in such a way that its marginal utility is approximately equal across various activities. You’ll buy one refrigerator, maybe two, but not seven, because the marginal utility of refrigerators drops off pretty fast; instead you’ll spend that money elsewhere. You probably won’t buy a house that’s twice as large if it means you can’t afford groceries anymore. I don’t think our spending is truly optimal at maximizing utility, but I think it’s fairly good.

Therefore, it doesn’t make much sense to break down marginal utility of wealth into all these different categories—cars, refrigerators, football games, shoes, and so on—because we already do a fairly good job of equalizing marginal utility across all those different categories. I could see breaking it down into a few specific categories, such as food, housing, transportation, medicine, and entertainment (and this definitely seems useful for making your own household budget); but even then, I don’t get the impression that most people routinely spend too much on one of these categories and not enough on the others.

However, I can think of two quite different fundamental motives behind spending money, which I think are distinct enough to be worth separating.

One way to spend money is on yourself, raising your own standard of living, making yourself more comfortable. This would include both football games and refrigerators, really anything that makes your life better. We could call this the consumption motive, or maybe simply the self-directed motive.

The other way is to spend it on other people, which, depending on your personality, can take the form either of philanthropy to help others or of self-aggrandizement to raise your own relative status. It’s also possible to do both at the same time in various combinations; while the Gates Foundation is almost entirely philanthropic and Trump Tower is almost entirely self-aggrandizing, Carnegie Hall falls somewhere in between, being at once a significant contribution to our society and an obvious attempt to bring praise and adulation to Carnegie himself. I would also include spending on Veblen goods that are mainly to show off your own wealth and status in this category. We can call this spending the philanthropic/status motive, or simply the other-directed motive.

There is some spending which combines both motives: A car is surely useful, but a Ferrari is mainly for show—but then, a Lexus or a BMW could be either to show off or really because you like the car better. Some form of housing is a basic human need, and bigger, fancier houses are often better, but the main reason one builds mansions in Beverly Hills is to demonstrate to the world that one is fabulously rich. This complicates the theory somewhat, but basically I think the best approach is to try to separate a sort of “spending proportion” on such goods, so that say $20,000 of the Lexus is for usefulness and $15,000 is for show. Empirically this might be hard to do, but theoretically it makes sense.

One of the central mysteries in cognitive economics right now is that self-reported happiness rises very little, if at all, as income increases (a finding recently replicated even in poor countries, where we might not have expected it to hold), and yet self-reported satisfaction continues to rise indefinitely. A number of theories have been proposed to explain this apparent paradox.

This model might just be able to account for that, if by “happiness” we’re really talking about the self-directed motive, and by “satisfaction” we’re talking about the other-directed motive. Self-reported happiness seems to obey a rule that $100 is worth as much to someone with $10,000 as $25 is to someone with $5,000, or $400 to someone with $20,000.

Self-reported satisfaction seems to obey a different rule, such that each unit of additional satisfaction requires a roughly equal proportional increase in income.

By having a utility function with two terms, we can account for both of these effects. Total utility will be u(x), happiness h(x), and satisfaction s(x).

u(x) = h(x) + s(x)

To obey the above rule, happiness must obey harmonic utility, like this, for some constants h0 and r:

h(x) = h0 – r/x

Proof of this is straightforward, though to keep it simple I’ve hand-waved the step showing that marginal happiness must be a power law. The rule above says that when your wealth doubles, a given dollar buys only a quarter as much happiness, so:

Given

h'(2x) = 1/4 h'(x)

Let

h'(x) = r x^n

h'(2x) = r (2x)^n

r (2x)^n = 1/4 r x^n

2^n = 1/4

n = -2

h'(x) = r/x^2

h(x) = – r x^(-1) + C

h(x) = h0 – r/x

(writing the constant of integration C as h0)
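For readers who want to check the arithmetic rather than redo the calculus, here is a minimal numeric sketch (in Python; the choice of language and the constants are mine, not anything from the post) verifying both the $100/$25/$400 equivalence and the quartering rule for h(x) = h0 – r/x:

```python
# Numeric sanity check of harmonic happiness h(x) = h0 - r/x.
# h0 and r are arbitrary illustrative constants (my assumption).
h0, r = 0.0, 1.0

def h(x):
    return h0 - r / x

def gain(x, dx):
    """Happiness gained from an extra dx dollars at wealth x."""
    return h(x + dx) - h(x)

# The rule: $100 at $10,000 ~ $25 at $5,000 ~ $400 at $20,000.
print(gain(10_000, 100))   # ~9.9e-07
print(gain(5_000, 25))     # ~1.0e-06
print(gain(20_000, 400))   # ~9.8e-07

# The derivative form of the rule: h'(2x) = (1/4) h'(x).
def h_prime(x, eps=1e-3):
    return (h(x + eps) - h(x - eps)) / (2 * eps)

x = 7_500.0
print(h_prime(2 * x) / h_prime(x))  # ~0.25
```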

Miles Kimball also has some more discussion on his blog about how a utility function of this form works. (His statement about redistribution at the end is kind of baffling though; sure, dollar for dollar, redistributing wealth from the middle class to the poor would produce a higher gain in utility than redistributing wealth from the rich to the middle class. But neither is as good as redistributing from the rich to the poor, and the rich have a lot more dollars to redistribute.)

Satisfaction, however, must obey logarithmic utility, like this, for some constants s0 and k:

s(x) = s0 + k ln(x)

Proof of this is very simple, almost trivial; the rule says that marginal satisfaction must be inversely proportional to wealth, so:

Given

s'(x) = k/x

s(x) = k ln(x) + s0

Both of these functions actually have a serious problem: as x approaches zero, they go to negative infinity. For self-directed utility this almost makes sense (if your real consumption goes to zero, you die), but it makes no sense at all for other-directed utility; and since there are causes most of us would willingly die for, the disutility of dying should be large, but not infinite.

Therefore I think it’s probably better to use x+1 in place of x. The +1 means it takes slightly less proportionally to have the same effect as your wealth increases, but it keeps both functions finite at x = 0 instead of letting them go to negative infinity:

h(x) = h0 – r/(x+1)

s(x) = s0 + k ln(x+1)

This makes s0 the baseline satisfaction of having no other-directed spending, though the baseline happiness of zero self-directed spending is actually h0 – r rather than just h0. If we want it to be h0, we could use this form instead:

h(x) = h0 + r x/(x+1)

This looks quite different, but actually only differs by a constant: since r x/(x+1) = r – r/(x+1), we have h0 + r x/(x+1) = (h0 + r) – r/(x+1), which is just the previous form shifted up by the constant r.

Therefore, my final answer for the utility of wealth (or possibly income, or spending? I’m not sure which interpretation is best just yet) is actually this:

u(x) = h(x) + s(x)

h(x) = h0 + r x/(x+1)

s(x) = s0 + k ln(x+1)

Marginal utility is then the derivatives of these:

h'(x) = r/(x+1)^2

s'(x) = k/(x+1)
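As a quick symbolic check of those derivatives, here is a sketch using sympy (my own choice of tool, not something from the post):

```python
import sympy as sp

x, r, k, h0, s0 = sp.symbols("x r k h_0 s_0", positive=True)

h = h0 + r * x / (x + 1)    # happiness term
s = s0 + k * sp.log(x + 1)  # satisfaction term

# Check that the derivatives match the marginal utilities claimed above.
print(sp.simplify(sp.diff(h, x) - r / (x + 1)**2) == 0)  # True
print(sp.simplify(sp.diff(s, x) - k / (x + 1)) == 0)     # True
```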

Let’s assign some values to the constants so that we can actually graph these.

Let h0 = s0 = 0, so our baseline is just zero.

Furthermore, let r = k = 1, which would mean that the value of $1 is the same whether spent either on yourself or on others, if $1 is all you have. (This is probably wrong, actually, but it’s the simplest to start with. Shortly I’ll discuss what happens as you vary the ratio k/r.)

Here is the result graphed on a linear scale:

[Figure: Utility_linear]

And now, graphed with wealth on a logarithmic scale:

[Figure: Utility_log]
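For anyone who wants to reproduce graphs like these, here is a minimal sketch (Python with numpy and matplotlib, which are my own choices; the original figures may have been made differently) using the same constants h0 = s0 = 0 and r = k = 1:

```python
import numpy as np
import matplotlib.pyplot as plt

# Constants as chosen above: zero baselines, equal weights.
h0, s0, r, k = 0.0, 0.0, 1.0, 1.0

def h(x):
    """Self-directed (happiness) term."""
    return h0 + r * x / (x + 1)

def s(x):
    """Other-directed (satisfaction) term."""
    return s0 + k * np.log(x + 1)

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))

# Linear wealth scale.
x_lin = np.linspace(0, 100, 1_000)
ax_lin.plot(x_lin, h(x_lin), label="h(x): happiness")
ax_lin.plot(x_lin, s(x_lin), label="s(x): satisfaction")
ax_lin.plot(x_lin, h(x_lin) + s(x_lin), label="u(x) = h + s")
ax_lin.set_xlabel("wealth x")
ax_lin.legend()

# Logarithmic wealth scale.
x_log = np.geomspace(0.01, 10_000, 1_000)
ax_log.plot(x_log, h(x_log), label="h(x)")
ax_log.plot(x_log, s(x_log), label="s(x)")
ax_log.plot(x_log, h(x_log) + s(x_log), label="u(x)")
ax_log.set_xscale("log")
ax_log.set_xlabel("wealth x (log scale)")
ax_log.legend()

plt.tight_layout()
plt.show()
```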

As you can see, self-directed marginal utility drops off much faster than other-directed marginal utility, so the amount you spend on others relative to yourself rapidly increases as your wealth increases. If that doesn’t sound right, remember that I’m including Veblen goods as “other-directed”; when you buy a Ferrari, it’s not really for yourself. While proportional rates of charitable donation do not increase as wealth increases (it’s actually a U-shaped pattern, largely driven by poor people giving to religious institutions), they probably should (people should really stop giving to religious institutions! Even the good ones aren’t cost-effective, and some are very, very bad.). Furthermore, if you include spending on relative power and status as the other-directed motive, that kind of spending clearly does proportionally increase as wealth increases—gotta keep up with those Joneses.

If r/k = 1, that basically means you value others exactly as much as yourself, which I think is implausible (maybe some extreme altruists do that, and Peter Singer seems to think this would be morally optimal). r/k < 1 would mean you value others more than yourself, in which case you should spend nothing on yourself until your wealth exceeds k/r – 1, and very little even after that, which not even Peter Singer believes. I think r/k = 10 is a more reasonable estimate.

For any given value of r/k, there is an optimal ratio of self-directed versus other-directed spending, which can vary based on your total wealth.

Actually deriving what the optimal proportion would be requires a whole lot of algebra in a post that probably already has too much algebra, but the point is, there is one, and it will depend strongly on the ratio r/k, that is, the overall relative importance of self-directed versus other-directed motivation.

Take a look at this graph, which uses r/k = 10.

[Figure: Utility_marginal]

If you only have 2 to spend, you should spend it entirely on yourself, because over that whole range the marginal utility of self-directed spending is higher. If you have 3 to spend, you should spend most of it on yourself, but a little bit on other people, because once you’ve spent about 2.7 on yourself (the point where r/(x+1)^2 falls below k/(W – x + 1)), there is more marginal utility in spending on others than on yourself.

If your available wealth is W, you would spend some amount x on yourself, and then W-x on others:

u(x) = h(x) + s(W-x)

u(x) = r x/(x+1) + k ln(W – x + 1)

Then you take the derivative and set it equal to zero to find the local maximum; the first-order condition is r/(x+1)^2 = k/(W – x + 1), which is a quadratic in (x+1). I’ll spare you the rest of the algebra, but this is the result of that optimization:

x = – 1 – r/(2k) + sqrt(r/k) sqrt(2 + W + r/(4k))

As long as k <= r (which more or less means that you care at least as much about yourself as about others—I think this is true of basically everyone) then as long as W > 0 (as long as you have some money to spend) we also have x > 0 (you will spend at least something on yourself).

Below a certain threshold (specifically W < sqrt(r/k) – 1, which is about 2.2 when r/k = 10), the optimal value of x is greater than W, which means that, if possible, you should be receiving donations from other people and spending them on yourself. (Otherwise, just spend everything on yourself.) After that, x < W, which means that you should be donating to others. The proportion that you should be donating smoothly increases as W increases, as you can see on this graph (which uses r/k = 10, a figure I find fairly plausible):

[Figure: Utility_donation]
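And here is a sketch of the calculation behind this last graph (again Python/numpy/matplotlib, my own choices): it evaluates the closed-form optimum above, clips it to the feasible range 0 ≤ x ≤ W, and plots the resulting donated share (W – x)/W for r/k = 10:

```python
import numpy as np
import matplotlib.pyplot as plt

r, k = 10.0, 1.0  # r/k = 10, as in the discussion above

def optimal_self_spending(W, r, k):
    """Unconstrained maximizer of u(x) = r*x/(x+1) + k*ln(W - x + 1),
    from the closed form derived above, clipped to 0 <= x <= W."""
    a = r / k
    x = -1.0 - a / 2.0 + np.sqrt(a) * np.sqrt(2.0 + W + a / 4.0)
    return np.clip(x, 0.0, W)

# Sanity check against the worked example: with W = 3 you spend about 2.7
# on yourself and donate the remaining ~0.3.
print(optimal_self_spending(3.0, r, k))

W = np.linspace(0.01, 200, 2_000)
x_self = optimal_self_spending(W, r, k)
donated_share = (W - x_self) / W

plt.plot(W, donated_share)
plt.xlabel("available wealth W")
plt.ylabel("optimal donated share (W - x)/W")
plt.title("Donated share vs. wealth (r/k = 10)")
plt.show()
```

The clipping reproduces the corner behavior described above: below the threshold W ≈ 2.2 the donated share is zero, and above it the share rises smoothly with wealth.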

While I’m sure no one literally does this calculation, most people do seem to have an intuitive sense that you should donate an increasing proportion of your income to others as your income increases, and similarly that you should pay a higher proportion in taxes. This utility function would justify that—which is something that most proposed utility functions cannot do. In most models there is a hard cutoff where you should donate nothing up to the point where your marginal utility is equal to the marginal utility of donating, and then from that point forward you should donate absolutely everything. Maybe a case can be made for that ethically, but psychologically I think it’s a non-starter.

I’m still not sure exactly how to test this empirically. It’s already quite difficult to get people to answer questions about marginal utility in a way that is meaningful and coherent (people just don’t think about questions like “Which is worth more? $4 to me now or $10 if I had twice as much wealth?” on a regular basis). I’m thinking maybe they could play some sort of game where they have the opportunity to make money at the game, but must perform tasks or bear risks to do so, and can then keep the money or donate it to charity. The biggest problem I see with that is that the amounts would probably be too small to really cover a significant part of anyone’s total wealth, and therefore couldn’t cover much of their marginal utility of wealth function either. (This is actually a big problem with a lot of experiments that use risk aversion to try to tease out marginal utility of wealth.) But maybe with a variety of experimental participants, all of whom we get income figures on?

Selling debt goes against everything the free market stands for

JDN 2457555

I don’t think most people—or even most economists—have any concept of just how fundamentally perverse and destructive our financial system has become, and a large chunk of it ultimately boils down to one thing: Selling debt.

Certainly collateralized debt obligations (CDOs), and their meta-form, CDO2s (pronounced “see-dee-oh squareds”), are nothing more than selling debt, and along with credit default swaps (CDS; they are basically insurance, but without those pesky regulations against things like fraud and conflicts of interest) they were directly responsible for the 2008 financial crisis and the ensuing Great Recession and Second Depression.

But selling debt continues in a more insidious way, underpinning the entire debt collection industry, which collects tens of billions of dollars per year through harassment, intimidation, and extortion, especially of the poor and helpless. Frankly, I think what’s most shocking is how little money they make, given the huge number of people they harass and intimidate.

John Oliver did a great segment on debt collections (with a very nice surprise at the end):

But perhaps most baffling to me is the number of people who defend the selling of debt on the grounds that it is a “free market” activity which must be protected from government “interference in personal liberty”. To show this is not a strawman, here’s the American Enterprise Institute saying exactly that.

So let me say this in no uncertain terms: Selling debt goes against everything the free market stands for.

One of the most basic principles of free markets, one of the founding precepts of capitalism laid down by no less than Adam Smith (and before him by great political philosophers like John Locke), is the freedom of contract. This is the good part of capitalism, the part that makes sense, the reason we shouldn’t tear it all down but should instead try to reform it around the edges.

Indeed, the freedom of contract is so fundamental to human liberty that laws can only be considered legitimate insofar as they do not infringe upon it without a compelling public interest. Freedom of contract is right up there with freedom of speech, freedom of the press, freedom of religion, and the right of due process.

The freedom of contract is the right to make agreements, including financial agreements, with anyone you please, and under conditions that you freely and rationally impose in a state of good faith and transparent discussion. Conversely, it is the right not to make agreements with those you choose not to, and to not be forced into agreements under conditions of fraud, intimidation, or impaired judgment.

Freedom of contract is the basis of my right to take on debt, provided that I am honest about my circumstances and I can find a lender who is willing to lend to me. So taking on debt is a fundamental part of freedom of contract.

But selling debt is something else entirely. Far from exercising the freedom of contract, it violates it. When I take out a loan from bank A, and then they turn around and sell that loan to bank B, I suddenly owe money to bank B, but I never agreed to do that. I had nothing to do with their decision to work with bank B as opposed to keeping the loan or selling it to bank C.

Current regulations prohibit banks from “changing the terms of the loan”, but in practice they change them all the time—they can’t change the principal balance, the loan term, or the interest rate, but they can change the late fees, the payment schedule, and lots of subtler things about the loan that can still make a very big difference. Indeed, as far as I’m concerned they have changed the terms of the loan—one of the terms of the loan was that I was to pay X amount to bank A, not that I was to pay X amount to bank B. I may or may not have good reasons not to want to pay bank B—they might be far less trustworthy than bank A, for instance, or have a far worse social responsibility record—and in any case it doesn’t matter; it is my choice whether or not I want anything to do with bank B, whatever my reasons might be.

I take this matter quite personally, for it is by the selling of debt that, in moral (albeit not legal) terms, a British bank stole my parents’ house. Indeed, not just any British bank; it was none other than HSBC, the money launderers for terrorists.

When they first obtained their mortgage, my parents did not actually know that HSBC was quite so evil as to literally launder money for terrorists, but they did already know that HSBC was involved in a great many shady dealings, and they even specifically told their lender that they did not want the loan sold, and that if it were to be sold, it was absolutely never to be sold to HSBC in particular. Their mistake (which was rather like the “mistake” of someone who leaves their car unlocked and has it stolen, or forgets to arm the home alarm system and suffers a burglary) was not getting this written into the formal contract, instead of simply making it a verbal agreement with the bankers. Such verbal contracts are enforceable under the law, at least in theory; but enforcing this one would have required proof of the verbal contract (and what proof could we provide?), and would probably have cost as much as the house in litigation fees.

Oh, by the way, they were given a subprime interest rate of 8% despite being middle-class professionals with good credit, no doubt to maximize the broker’s closing commission. Most banks reserved such behavior for racial minorities, but apparently this one was equal-opportunity in the worst way. Perhaps my parents were naive to trust bankers any further than they could throw them.

As a result, I think you know what happened next: They sold the loan to HSBC.

Now, had it ended there, with my parents unwittingly forced into supporting a bank that launders money for terrorists, that would have been bad enough. But it assuredly did not.

By a series of subtle and manipulative practices that poked through one loophole after another, HSBC proceeded to raise my parents’ payments higher and higher. One particularly insidious tactic they used was to sit on the checks until just after the due date had passed, so that they could charge late fees on the payments, which they then recapitalized into the balance. My parents caught on to this particular trick after a few months, and started mailing the checks certified so they would be date-stamped; and lo and behold, all the payments were suddenly on time! By several other similarly devious tactics, all of which were technically legal or at least not provable, they managed to raise my parents’ monthly mortgage payments by over 50%.

Note that it was a fixed-rate, fixed-term mortgage. The initial payments—which should have remained the payments for the life of the loan; that is the whole point of a fixed-rate, fixed-term mortgage—were under $2000 per month. By the end they were paying over $3000 per month. HSBC forced my parents to overpay on their mortgage by an amount per year roughly equal to the US individual poverty line, or the per-capita GDP of Peru.

They tried to make the payments, but after being wildly over budget and hit by other unexpected expenses (including defects in the house’s foundation that they had to pay to fix, but because of the “small” amount at stake and the overwhelming legal might of the construction company, no lawyer was willing to sue over), they simply couldn’t do it anymore, and gave up. They gave the house to the bank with a deed in lieu of foreclosure.

And that is the story of how a bank that my parents never agreed to work with, never would have agreed to work with, indeed specifically said they would not work with, still ended up claiming their house—our house, the house I grew up in from the age of 12. Legally, I cannot prove they did anything against the law. (I mean, other than laundered money for terrorists.) But morally, how is this any less than theft? Would we not be victimized less had a burglar broken into our home, vandalized the walls and stolen our furniture?

Indeed, that would probably be covered under our insurance! Where can I buy insurance against the corrupt and predatory financial system? Where are my credit default swaps to pay me when everything goes wrong?

And all of this could have been prevented, if banks simply weren’t allowed to violate our freedom of contract by selling their loans to other banks.

Indeed, the Second Depression could probably have been likewise prevented. Without selling debt, there is no securitization. Without securitization, there is far less leverage. Without leverage, there are no bank failures. Without bank failures, there is no depression. A decade of global economic growth was lost because we allowed banks to sell debt whenever they please.

I have heard the counter-arguments many times:

“But what if banks need the liquidity?” Easy. They can take out their own loans with those other banks. If bank A finds they need more cashflow, they should absolutely feel free to take out a loan from bank B. They can even point to their projected revenues from the mortgage payments we owe them, as a means of repaying that loan. But they should not be able to involve us in that transaction. If you want to trust HSBC, that’s your business (you’re an idiot, but it’s a free country). But you have no right to force me to trust HSBC.

“But banks might not be willing to make those loans, if they knew they couldn’t sell or securitize them!” THAT’S THE POINT. Banks wouldn’t take on all these ridiculous risks in their lending practices that they did (“NINJA loans” and mortgages with payments larger than their buyers’ annual incomes), if they knew they couldn’t just foist the debt off on some Greater Fool later on. They would only make loans they actually expect to be repaid. Obviously any loan carries some risk, but banks would only take on risks they thought they could bear, as opposed to risks they thought they could convince someone else to bear—which is the definition of moral hazard.

“Homes would be unaffordable if people couldn’t take out large loans!” First of all, I’m not against mortgages—I’m against securitization of mortgages. Yes, of course, people need to be able to take out loans. But they shouldn’t be forced to pay those loans to whoever their bank sees fit. If indeed the loss of subprime securitized mortgages made it harder for people to get homes, that’s a problem; but the solution to that problem was never to make it easier for people to get loans they can’t afford—it is clearly either to reduce the price of homes or increase the incomes of buyers. Subsidized housing construction, public housing, changes in zoning regulation, a basic income, lower property taxes, an expanded earned-income tax credit—these are the sort of policies that one implements to make housing more affordable, not “go ahead and let banks exploit people however they want”.

Remember, a regulation against selling debt would protect the freedom of contract. It would remove a way for private individuals and corporations to violate that freedom, like regulations against fraud, intimidation, and coercion. It should be uncontroversial that no one has any right to force you to do business with someone you would not voluntarily do business with, certainly not in a private transaction between for-profit corporations. Maybe that sort of mandate makes sense in rare circumstances by the government, but even then it should really be implemented as a tax, not a mandate to do business with a particular entity. The right to buy what you choose is the foundation of a free market—and implicit in it is the right not to buy what you do not choose.

There are many regulations on debt that do impose upon freedom of contract: As horrific as payday loans are, if someone really honestly knowingly wants to take on short-term debt at 400% APR I’m not sure it’s my business to stop them. And some people may really be in such dire circumstances that they need money that urgently and no one else will lend to them. Insofar as I want payday loans regulated, it is to ensure that they are really lending in good faith—as many surely are not—and ultimately I want to outcompete them by providing desperate people with more reasonable loan terms. But a ban on securitization is like a ban on fraud; it is the sort of law that protects our rights.

The many varieties of argument “men”

JDN 2457552

After several long, intense, and very likely controversial posts in a row, I decided to take a break with a post that is short and fun.

You have probably already heard of a “strawman” argument, but I think there are many more “materials” an argument can be made of which would be useful terms to have, so I have proposed a taxonomy of similar argument “men”. Perhaps this will help others in the future to more precisely characterize where arguments have gone wrong and how they should have gone differently.

For examples of each, I’m using a hypothetical argument about the gold standard, based on the actual arguments I refute in my previous post on the subject.

This is an argument actually given by a proponent of the gold standard, upon which my “men” shall be built:

1) A gold standard is key to achieving a period of sustained, 4% real economic growth.

The U.S. dollar was created as a defined weight of gold and silver in 1792. As detailed in the booklet, The 21st Century Gold Standard (available free at http://agoldenage.com), I co-authored with fellow Forbes.com columnist Ralph Benko, a dollar as good as gold endured until 1971 with the relatively brief exceptions of the War of 1812, the Civil War and Reconstruction, and 1933, the year President Franklin Roosevelt suspended dollar/gold convertibility until January 31, 1934 when the dollar/gold link was re-established at $35 an ounce, a 40% devaluation from the prior $20.67 an ounce. Over that entire 179 years, the U.S. economy grew at a 3.9% average annual rate, including all of the panics, wars, industrialization and a myriad other events. During the post World War II Bretton Woods gold standard, the U.S. economy also grew on average 4% a year.

By contrast, during the 40-years since going off gold, U.S. economic growth has averaged an anemic 2.8% a year. The only 40-year periods in which the economic growth was slower were those ending in the Great Depression, from 1930 to 1940.

2) A gold standard reduces the risk of recessions and financial crises.

Critics of the gold standard point out, correctly, that it would prohibit the Federal Reserve from manipulating interest rates and the value of the dollar in hopes of stimulating demand. In fact, the idea that a paper dollar would lead to a more stable economy was one of the key selling points for abandoning the gold standard in 1971.

However, this power has done far more harm than good. Under the paper dollar, recessions have become more severe and financial crises more frequent. During the post World War II gold standard, unemployment averaged less than 5% and never rose above 7% during a calendar year. Since going off gold, unemployment has averaged more than 6%, and has been above 8% now for nearly 3.5 years.

And now, the argument men:

Fallacious (Bad) Argument Men

These argument “men” are harmful and irrational; they are to be avoided, and destroyed wherever they are found. Maybe in some very extreme circumstances they would be justifiable—but only in circumstances where it is justifiable to be dishonest and manipulative. You can use a strawman argument to convince a terrorist to let the hostages go; you can’t use one to convince your uncle not to vote Republican.

Strawman: The familiar fallacy in which instead of trying to address someone else’s argument, you make up your own fake version of that argument which is easier to defeat. The image is of making an effigy of your opponent out of straw and beating on the effigy to avoid confronting the actual opponent.

You can’t possibly think that going to the gold standard would make the financial system perfect! There will still be corrupt bankers, a banking oligopoly, and an unpredictable future. The gold standard would do nothing to remove these deep flaws in the system.

Hitman: An even worse form of the strawman, in which you misrepresent not only your opponent’s argument, but your opponent themselves, using your distortion of their view as an excuse for personal attacks against their character.

Oh, you would favor the gold standard, wouldn’t you? A rich, middle-aged White man, presumably straight and nominally Christian? You have all the privileges in life, so you don’t care if you take away the protections that less-fortunate people depend upon. You don’t care if other people become unemployed, so long as you don’t have to bear inflation reducing the real value of your precious capital assets.

Conman: An argument for your own view which you don’t actually believe, but expect to be easier to explain or more persuasive to this particular audience than the true reasons for your beliefs.

Back when we were on the gold standard, it was the era of “Robber Barons”. Poverty was rampant. If we go back to that system, it will just mean handing over all the hard-earned money of working people to billionaire capitalists.

Vaporman: Not even an argument, just a forceful assertion of your view that takes the place or shape of an argument.

The gold standard is madness! It makes no sense at all! How can you even think of going back to such a ridiculous monetary system?

Honest (Acceptable) Argument Men

These argument “men” are perfectly acceptable, and should be the normal expectation in honest discourse.

Woodman: The actual argument your opponent made, addressed and refuted honestly using sound evidence.

There is very little evidence that going back to the gold standard would in any way improve the stability of the currency or the financial system. While long-run inflation was very low under the gold standard, this fact obscures the volatility of inflation, which was extremely high; bouts of inflation were followed by bouts of deflation, swinging the value of the dollar up or down as much as 15% in a single year. Nor is there any evidence that the gold standard prevented financial crises, as dozens of financial crises occurred under the gold standard, if anything more often than they have since the full-fiat monetary system established in 1971.

Bananaman: An actual argument your opponent made that you honestly refute, which nonetheless is so ridiculous that it seems like a strawman, even though it isn’t. Named in “honor” of Ray Comfort’s Banana Argument. Of course, some bananas are squishier than others, and the only one I could find here was at least relatively woody–though still recognizable as a banana:

You said “A gold standard is key to achieving a period of sustained, 4% real economic growth.” based on several distorted, misunderstood, or outright false historical examples. The 4% annual growth in total GDP during the early part of the United States was due primarily to population growth, not a rise in real standard of living, while the rapid growth during WW2 was obviously due to the enormous and unprecedented surge in government spending (and by the way, we weren’t even really on the gold standard during that period). In a blatant No True Scotsman fallacy, you specifically exclude the Great Depression from the “true gold standard” so that you don’t have to admit that the gold standard contributed significantly to the severity of the depression.

Middleman: An argument that synthesizes your view and your opponent’s view, in an attempt to find a compromise position that may be acceptable, if not preferred, by all.

Unlike the classical gold standard, the Bretton Woods gold standard in place from 1945 to 1971 was not obviously disastrous. If you want to go back to a system of international exchange rates fixed by gold similar to Bretton Woods, I would consider that a reasonable position to take.

Virtuous (Good) Argument Men

These argument “men” go above and beyond the call of duty; rather than simply seek to win arguments honestly, they actively seek the truth behind the veil of opposing arguments. These cannot be expected in all circumstances, but they are to be aspired to, and commended when found.

Ironman: Your opponent’s actual argument, but improved, with some of its flaws shored up. The same basic thinking as your opponent, but done more carefully, filling in the proper gaps.

The gold standard might not reduce short-run inflation, but it would reduce long-run inflation, making our currency more stable over long periods of time. We would be able to track long-term price trends in goods such as housing and technology much more easily, and people would have an easier time psychologically grasping the real prices of goods as they change during their lifetime. No longer would we hear people complain, “How can you want a minimum wage of $15? As a teenager in 1955, I got paid $3 an hour and I was happy with that!” when that $3 in 1955, adjusted for inflation, is $26.78 in today’s money.

Steelman: Not the argument your opponent made, but the one they should have made. The best possible argument you are aware of that would militate in favor of their view, the one that sometimes gives you pause about your own opinions, the real and tangible downside of what you believe in.

Tying currency to gold or any other commodity may not be very useful directly, but it could serve one potentially vital function, which is as a commitment mechanism to prevent the central bank from manipulating the currency to enrich themselves or special interests. It may not be the optimal commitment mechanism, but it is a psychologically appealing one for many people, and is also relatively easy to define and keep track of. It is also not subject to as much manipulation as something like nominal GDP targeting or a Taylor Rule, which could be fudged by corrupt statisticians. And while it might cause moderate volatility, it can also protect against the most extreme forms of volatility such as hyperinflation. In countries with very corrupt governments, a gold standard might actually be a good idea, if you could actually enforce it, because it would at least limit the damage that can be done by corrupt central bank officials. Had such a system been in place in Zimbabwe in the 1990s, the hyperinflation might have been prevented. The US is not nearly as corrupt as Zimbabwe, so we probably do not need a gold standard; but it may be wise to recommend the use of gold standards or similar fixed-exchange currencies in Third World countries so that corrupt leaders cannot abuse the monetary system to gain at the expense of their people.

Moral responsibility does not inherit across generations

JDN 2457548

In last week’s post I made a sharp distinction between believing in human progress and believing that colonialism was justified. To make this argument, I relied upon a moral assumption that seems to me perfectly obvious, and probably would to most ethicists as well: Moral responsibility does not inherit across generations, and people are only responsible for their individual actions.

But in fact this principle is not uncontroversial in many circles. When I read utterly nonsensical arguments like this one from the aptly-named Race Baitr saying that White people have no role to play in the liberation of Black people, apparently because our blood is somehow tainted by the crimes of our ancestors, it becomes apparent to me that this principle is not obvious to everyone, and therefore is worth defending. Indeed, many applications of the concept of “White Privilege” seem to ignore this principle, speaking as though racism is not something one does or participates in, but something that one is simply by being born with less melanin. Here’s a Salon interview specifically rejecting the proposition that racism is something one does:

For white people, their identities rest on the idea of racism as about good or bad people, about moral or immoral singular acts, and if we’re good, moral people we can’t be racist – we don’t engage in those acts. This is one of the most effective adaptations of racism over time—that we can think of racism as only something that individuals either are or are not “doing.”

If racism isn’t something one does, then what in the world is it? It’s all well and good to talk about systems and social institutions, but ultimately systems and social institutions are made of human behaviors. If you think most White people aren’t doing enough to combat racism (which sounds about right to me!), say that—don’t make some bizarre accusation that simply by existing we are inherently racist. (Also: We? I’m only 75% White, so am I only 75% inherently racist?) And please, stop redefining the word “racism” to mean something other than what everyone uses it to mean; “White people are snakes” is in fact a racist sentiment (and yes, one I’ve actually heard–indeed, here is the late Muhammad Ali comparing all White people to rattlesnakes, and Huffington Post fawning over him for it).

Racism is clearly more common and typically worse when performed by White people against Black people—but contrary to the claims of some social justice activists the White perpetrator and Black victim are not part of the definition of racism. Similarly, sexism is more common and more severe committed by men against women, but that doesn’t mean that “men are pigs” is not a sexist statement (and don’t tell me you haven’t heard that one). I don’t have a good word for bigotry by gay people against straight people (“heterophobia”?) but it clearly does happen on occasion, and similarly cannot be defined out of existence.

I wouldn’t care so much that you make this distinction between “racism” and “racial prejudice”, except that it’s not the normal usage of the word “racism” and therefore confuses people, and also this redefinition clearly is meant to serve a political purpose that is quite insidious, namely making excuses for the most extreme and hateful prejudice as long as it’s committed by people of the appropriate color. If “White people are snakes” is not racism, then the word has no meaning.

Not all discussions of “White Privilege” are like this, of course; this article from Occupy Wall Street actually does a fairly good job of making “White Privilege” into a sensible concept, albeit still not a terribly useful one in my opinion. I think the useful concept is oppression—the problem here is not how we are treating White people, but how we are treating everyone else. What privilege gives you is “the freedom to be who you are.” Shouldn’t everyone have that?

Almost all the so-called “benefits” or “perks” associated with “privilege” are actually forgone harms—they are not good things done to you, but bad things not done to you. “But benefitting from racist systems doesn’t mean that everything is magically easy for us. It just means that as hard as things are, they could always be worse.” No, that is not what the word “benefit” means. The word “benefit” means you would be worse off without it—and in most cases that simply isn’t true. Many White people obviously think that it is true—which is probably a big reason why so many White people fight so hard to defend racism, you know; you’ve convinced them it is in their self-interest. But, with rare exceptions, it is not; most racial discrimination has literally zero long-run benefit. It’s just bad. Maybe if we helped people appreciate that more, they would be less resistant to fighting racism!

The only features of “privilege” that really make sense as benefits are those that occur in a state of competition—like being more likely to be hired for a job or get a loan—but one of the most important insights of economics is that competition is nonzero-sum, and fairer competition ultimately means a more efficient economy and thus more prosperity for everyone.

But okay, let’s set that aside and talk about this core question of what sort of responsibility we bear for the acts of our ancestors. Many White people clearly do feel deep shame about what their ancestors (or people the same color as their ancestors!) did hundreds of years ago. The psychological reactance to that shame may actually be what makes so many White people deny that racism even exists (or exists anymore)—though a majority of Americans of all races do believe that racism is still widespread.

We also apply some sense of moral responsibility to whole races quite frequently. We speak of a policy “benefiting White people” or “harming Black people” and quickly elide the distinction between harming specific people who are Black, and somehow harming “Black people” as a group. The former happens all the time—the latter is utterly nonsensical. Similarly, we speak of a “debt owed by White people to Black people” (which might actually make sense in the very narrow sense of economic reparations, because people do inherit money! They probably shouldn’t, that is literally feudalist, but in the existing system they in fact do), which makes about as much sense as a debt owed by tall people to short people. As Walter Michaels pointed out in The Trouble with Diversity (which I highly recommend), because of this bizarre sense of responsibility we are often in the habit of “apologizing for something you didn’t do to people to whom you didn’t do it (indeed to whom it wasn’t done)”. It is my responsibility to condemn colonialism (which I indeed do), to fight to ensure that it never happens again; it is not my responsibility to apologize for colonialism.

This makes some sense in evolutionary terms; it’s part of the all-encompassing tribal paradigm, wherein human beings come to identify themselves with groups and treat those groups as the meaningful moral agents. It’s much easier to maintain the cohesion of a tribe against the slings and arrows (sometimes quite literal) of outrageous fortune if everyone believes that the tribe is one moral agent worthy of ultimate concern.

This concept of racial responsibility is clearly deeply ingrained in human minds, for it appears in some of our oldest texts, including the Bible: “You shall not bow down to them or worship them; for I, the Lord your God, am a jealous God, punishing the children for the sin of the parents to the third and fourth generation of those who hate me,” (Exodus 20:5)

Why is inheritance of moral responsibility across generations nonsensical? Any number of reasons, take your pick. The economist in me leaps to “Ancestry cannot be incentivized.” There’s no point in holding people responsible for things they can’t control, because in doing so you will not in any way alter behavior. The Stanford Encyclopedia of Philosophy article on moral responsibility takes it as so obvious that people are only responsible for actions they themselves did that they don’t even bother to mention it as an assumption. (Their big question is how to reconcile moral responsibility with determinism, which turns out to be not all that difficult.)

An interesting counter-argument might be that descent can be incentivized: You could use rewards and punishments applied to future generations to motivate current actions. But this is actually one of the ways that incentives clearly depart from moral responsibilities; you could incentivize me to do something by threatening to murder 1,000 children in China if I don’t, but even if it was in fact something I ought to do, it wouldn’t be those children’s fault if I didn’t do it. They wouldn’t deserve punishment for my inaction—I might, and you certainly would for using such a cruel incentive.

Moreover, there’s a problem with dynamic consistency here: Once the action is already done, what’s the sense in carrying out the punishment? This is why a moral theory of punishment can’t merely be based on deterrence—the fact that you could deter a bad action by some other less-bad action doesn’t make the less-bad action necessarily a deserved punishment, particularly if it is applied to someone who wasn’t responsible for the action you sought to deter. In any case, people aren’t thinking that we should threaten to punish future generations if people are racist today; they are feeling guilty that their ancestors were racist generations ago. That doesn’t make any sense even on this deterrence theory.

There’s another problem with trying to inherit moral responsibility: People have lots of ancestors. Some of my ancestors were most likely rapists and murderers; most were ordinary folk; a few may have been great heroes—and this is true of just about anyone anywhere. We all have bad ancestors, great ancestors, and, mostly, pretty good ancestors. 75% of my ancestors are European, but 25% are Native American; so if I am to apologize for colonialism, should I be apologizing to myself? (Only 75%, perhaps?) If you go back enough generations, literally everyone is related—and you may only have to go back about 4,000 years. That’s historical time.

Of course, we wouldn’t be different colors in the first place if there weren’t some differences in ancestry, but there is a huge amount of gene flow between different human populations. The US is a particularly mixed place; because most Black Americans are quite genetically mixed, it is about as likely that any randomly-selected Black person in the US is descended from a slaveowner as it is that any randomly-selected White person is. (Especially since there were a large number of Black slaveowners in Africa and even some in the United States.) What moral significance does this have? Basically none! That’s the whole point; your ancestors don’t define who you are.

If these facts do have any moral significance, it is to undermine the sense most people seem to have that there are well-defined groups called “races” that exist in reality, to which culture responds. No; races were created by culture. I’ve said this before, but it bears repeating: The “races” we hold most dear in the US, White and Black, are in fact the most nonsensical. “Asian” and “Native American” at least almost make sense as categories, though Chippewa are more closely related to Ainu than Ainu are to Papuans. “Latino” isn’t utterly incoherent, though it includes as much Aztec as it does Iberian. But “White” is a club one can join or be kicked out of, while “Black” encompasses the majority of human genetic diversity.

Sex is a real thing—while there are intermediate cases of course, broadly speaking humans, like most metazoa, are sexually dimorphic and come in “male” and “female” varieties. So sexism took a real phenomenon and applied cultural dynamics to it; but that’s not what happened with racism. Insofar as there was a real phenomenon, it was extremely superficial—quite literally skin deep. In that respect, race is more like class—a categorization that is itself the result of social institutions.

To be clear: Does the fact that we don’t inherit moral responsibility from our ancestors absolve us from doing anything to rectify the inequities of racism? Absolutely not. Not only is there plenty of present discrimination going on we should be fighting, there are also inherited inequities due to the way that assets and skills are passed on from one generation to the next. If my grandfather stole a painting from your grandfather and both our grandfathers are dead but I am now hanging that painting in my den, I don’t owe you an apology—but I damn well owe you a painting.

The further we get from past discrimination, the harder it becomes to make reparations, but all hope is not lost; we still have the option of trying to reset everyone’s status at birth to the same level and maintaining equality of opportunity from there. Of course we’ll never achieve total equality of opportunity—but we can get much closer than we presently are.

We could start by establishing an extremely high estate tax—on the order of 99%—because no one has a right to be born rich. Free public education is another good way of equalizing the distribution of “human capital” that would otherwise be concentrated in particular families, and expanding it to higher education would make it that much better. It even makes sense, at least in the short run, to establish some affirmative action policies that are race-conscious and sex-conscious, because there are so many biases in the opposite direction that sometimes you must fight bias with bias.

Actually what I think we should do in hiring, for example, is assemble a pool of applicants based on demographic quotas to ensure a representative sample, and then anonymize the applications and assess them on merit. This way we ensure representation and reduce bias, but never end up hiring anyone other than the most qualified candidate. But nowhere should we think that this is something that White men “owe” to women or Black people; it’s something that people should do in order to correct the biases that otherwise exist in our society. Similarly with regard to sexism: Women exhibit just as much unconscious bias against other women as men do. This is not “men” hurting “women”—this is a set of unconscious biases found in almost everyone, and social structures found almost everywhere, that systematically discriminate against people because they are women.

Perhaps by understanding that this is not about which “team” you’re on (which tribe you’re in), but what policy we should have, we can finally make these biases disappear, or at least fade so small that they are negligible.

Believing in civilization without believing in colonialism

JDN 2457541

In a post last week I presented some of the overwhelming evidence that society has been getting better over time, particularly since the start of the Industrial Revolution. I focused mainly on infant mortality rates—babies not dying—but there are lots of other measures you could use as well. Despite popular belief, poverty is rapidly declining, and is now the lowest it’s ever been. War is rapidly declining. Crime is rapidly declining in First World countries, and to the best of our knowledge crime rates are stable worldwide. Public health is rapidly improving. Lifespans are getting longer. And so on, and so on. It’s not quite true to say that every indicator of human progress is on an upward trend, but the vast majority of really important indicators are.

Moreover, there is every reason to believe that this great progress is largely the result of what we call “civilization”, even Western civilization: Stable, centralized governments, strong national defense, representative democracy, free markets, openness to global trade, investment in infrastructure, science and technology, secularism, a culture that values innovation, and freedom of speech and the press. We did not get here by Marxism, nor agrarian socialism, nor primitivism, nor anarcho-capitalism. We did not get here by fascism, nor theocracy, nor monarchy. This progress was built by the center-left welfare state, “social democracy”, “modified capitalism”, the system where free, open markets are coupled with a strong democratic government to protect and steer them.

This fact is basically beyond dispute; the evidence is overwhelming. The serious debate in development economics is over which parts of the Western welfare state are most conducive to raising human well-being, and which parts of the package are more optional. And even then, some things are fairly obvious: Stable government is clearly necessary, while speaking English is clearly optional.

Yet many people are resistant to this conclusion, or even offended by it, and I think I know why: They are confusing the results of civilization with the methods by which it was established.

The results of civilization are indisputably positive: Everything I just named above, especially babies not dying.

But the methods by which civilization was established are not; indeed, some of the greatest atrocities in human history are attributable at least in part to attempts to “spread civilization” to “primitive” or “savage” people.

It is therefore vital to distinguish between the result, civilization, and the processes by which it was effected, such as colonialism and imperialism.

First, it’s important not to overstate the link between civilization and colonialism.

We tend to associate colonialism and imperialism with White people from Western European cultures conquering other people in other cultures; but in fact colonialism and imperialism are basically universal to any human culture that attains sufficient size and centralization. India engaged in colonialism, Persia engaged in imperialism, China engaged in imperialism, the Mongols were of course major imperialists, and don’t forget the Ottoman Empire; and did you realize that Tibet and Mali were at one time imperialists as well? And of course there are a whole bunch of empires you’ve probably never heard of, like the Parthians and the Ghaznavids and the Umayyads. Even many of the people we’re accustomed to thinking of as innocent victims of colonialism were themselves imperialists—the Aztecs certainly were (they even sold people into slavery and used them for human sacrifice!), as were the Pequot, and the Iroquois may not have outright conquered anyone but were definitely at least “soft imperialists” the way that the US is today, spreading their influence around and using economic and sometimes military pressure to absorb other cultures into their own.

Of course, those were all civilizations, at least in the broadest sense of the word; but before that, it’s not that there wasn’t violence, it just wasn’t organized enough to be worthy of being called “imperialism”. The more general concept of intertribal warfare is a human universal, and some hunter-gatherer tribes actually engage in an essentially constant state of warfare we call “endemic warfare”. People have been grouping together to kill other people they perceived as different for at least as long as there have been people to do so.

This is of course not to excuse what European colonial powers did when they set up bases on other continents and exploited, enslaved, or even murdered the indigenous population. And the absolute numbers of people enslaved or killed are typically larger under European colonialism, mainly because European cultures became so powerful and conquered almost the entire world. Even if European societies were not uniquely predisposed to be violent (and I see no evidence to say that they were—humans are pretty much humans), they were more successful in their violent conquering, and so more people suffered and died. It’s also a first-mover effect: If the Ming Dynasty had supported Zheng He more in his colonial ambitions, I’d probably be writing this post in Mandarin and reflecting on why Asian cultures have engaged in so much colonial oppression.

While there is a deeply condescending paternalism (and often post-hoc rationalization of your own self-interested exploitation) involved in saying that you are conquering other people in order to civilize them, humans are also perfectly capable of committing atrocities for far less noble-sounding motives. There are holy wars such as the Crusades and ethnic genocides like in Rwanda, and the Arab slave trade was purely for profit and didn’t even have the pretense of civilizing people (not that the Atlantic slave trade was ever really about that anyway).

Indeed, I think it’s important to distinguish between colonialists who really did make some effort at civilizing the populations they conquered (like Britain, and also the Mongols actually) and those that clearly were just using that as an excuse to rape and pillage (like Spain and Portugal). This is similar to but not quite the same thing as the distinction between settler colonialism, where you send colonists to live there and build up the country, and exploitation colonialism, where you send military forces to take control of the existing population and exploit them to get their resources. Countries that experienced settler colonialism (such as the US and Australia) have fared a lot better in the long run than countries that experienced exploitation colonialism (such as Haiti and Zimbabwe).

The worst consequences of colonialism weren’t even really anyone’s fault, actually. The reason something like 98% of all Native Americans died as a result of European colonization was not that Europeans killed them—they did kill thousands of course, and I hope it goes without saying that that’s terrible, but it was a small fraction of the total deaths. The reason such a huge number died and whole cultures were depopulated was disease, and the inability of medical technology in any culture at that time to handle such a catastrophic plague. The primary cause was therefore accidental, and not really foreseeable given the state of scientific knowledge at the time. (I therefore think it’s wrong to consider it genocide—maybe democide.) Indeed, what really would have saved these people would be if Europe had advanced even faster into industrial capitalism and modern science, or else waited to colonize until they had; and then they could have distributed vaccines and antibiotics when they arrived. (Of course, there is evidence that a few European colonists used the diseases intentionally as biological weapons, which no amount of vaccine technology would prevent—and that is indeed genocide. But again, this was a small fraction of the total deaths.)

However, even with all those caveats, I hope we can all agree that colonialism and imperialism were morally wrong. No nation has the right to invade and conquer other nations; no one has the right to enslave people; no one has the right to kill people based on their culture or ethnicity.

My point is that it is entirely possible to recognize that and still appreciate that Western civilization has dramatically improved the standard of human life over the last few centuries. It simply doesn't follow from the fact that British government and culture were more advanced and pluralistic that British soldiers could just go around taking over other people's countries and planting their own flag (follow the link if you need some comic relief from this dark topic). That was the moral failing of colonialism; not that they thought their society was better—for in many ways it was—but that they thought that gave them the right to terrorize, slaughter, enslave, and conquer people.

Indeed, the “justification” of colonialism is a lot like that bizarre pseudo-utilitarianism I mentioned in my post on torture, where the mere presence of some benefit is taken to justify any possible action toward achieving that benefit. No, that’s not how morality works. You can’t justify unlimited evil by any good—it has to be a greater good, as in actually greater.

So let’s suppose that you do find yourself encountering another culture which is clearly more primitive than yours; their inferior technology results in them living in poverty and having very high rates of disease and death, especially among infants and children. What, if anything, are you justified in doing to intervene to improve their condition?

One idea would be to hold to the Prime Directive: No intervention, no sir, not ever. This is clearly what Gene Roddenberry thought of imperialism, which is why he built it into the Federation's core principles.

But does that really make sense? Even as Star Trek shows progressed, the writers kept coming up with situations where the Prime Directive really seemed like it should have an exception, and sometimes decided that the honorable crew of Enterprise or Voyager really should intervene in this more primitive society to save them from some terrible fate. And I hope I'm not committing a Fictional Evidence Fallacy when I say that if a fictional universe specifically designed not to let that happen keeps making it happen, well… maybe it's something we should be considering.

What if people are dying of a terrible disease that you could easily cure? Should you really deny them access to your medicine to avoid intervening in their society?

What if the primitive culture is ruled by a horrible tyrant that you could easily depose with little or no bloodshed? Should you let him continue to rule with an iron fist?

What if the natives are engaged in slavery, or even their own brand of imperialism against other indigenous cultures? Can you fight imperialism with imperialism?

And then we have to ask, does it really matter whether their babies are being murdered by the tyrant or simply dying from malnutrition and infection? The babies are just as dead, aren’t they? Even if we say that being murdered by a tyrant is worse than dying of malnutrition, it can’t be that much worse, can it? Surely 10 babies dying of malnutrition is at least as bad as 1 baby being murdered?

But then it begins to seem like we have a duty to intervene, and moreover a duty that applies in almost every circumstance! If your two societies are on opposite sides of the technology threshold where infant mortality drops from 30% to 1%, how can you justify not intervening?

I think the best answer here is to keep in mind the very large costs of intervention as well as the potentially large benefits. The answer sounds simple, but is actually perhaps the hardest possible answer to apply in practice: You must do a cost-benefit analysis. Furthermore, you must do it well. We can’t demand perfection, but it must actually be a serious good-faith effort to predict the consequences of different intervention policies.

We know that people tend to resist most outside interventions, especially if you have the intention of toppling their leaders (even if they are indeed tyrannical). Even the simple act of offering people vaccines could be met with resistance, as the native people might think you are poisoning them or somehow trying to control them. But in general, opening contact with gifts and trade is almost certainly going to trigger less hostility and therefore be more effective than going in guns blazing.

If you do use military force, it must be targeted at the particular leaders who are most harmful, and it must be designed to achieve swift, decisive victory with minimal collateral damage. (Basically I’m talking about just war theory.) If you really have such an advanced civilization, show it by exhibiting total technological dominance and minimizing the number of innocent people you kill. The NATO interventions in Kosovo and Libya mostly got this right. The Vietnam War and Iraq War got it totally wrong.

As you change their society, you should be prepared to bear most of the cost of transition; you are, after all, much richer than they are, and also the ones responsible for effecting the transition. You should not expect to see short-term gains for your own civilization, only long-term gains once their culture has advanced to a level near your own. You can't bear all the costs of course—transition is just painful, no matter what you do—but at least the fungible economic costs should be borne by you, not by the native population. Examples of doing this wrong include basically all the standard examples of exploitation colonialism: Africa, the Caribbean, South America. Examples of doing this right include West Germany and Japan after WW2, and South Korea after the Korean War—which is to say, the greatest economic successes in the history of the human race. That was humanity getting development right. Do it again everywhere and we will not only end world hunger, but achieve global prosperity.

What happens if we apply these principles to real-world colonialism? It does not fare well. Nor should it, as we’ve already established that most if not all real-world colonialism was morally wrong.

Colonialism in the 15th and 16th centuries fails immediately; it offered no benefit to speak of. Europe's technological superiority was enough to give them gunpowder but not enough to drop their infant mortality rate. Maybe life was better in 16th century Spain than it was in the Aztec Empire, but honestly not by all that much; and life in the Iroquois Confederacy was in many ways better than life in 15th century England. (Though maybe that justifies some Iroquois imperialism, at least their "soft imperialism"?)

If these principles did justify any real-world imperialism—and I am not convinced that they do—it would only be much later imperialism, like the British Empire in the 19th and 20th centuries. And even then, it's not clear that the talk of "civilizing" people and "the White Man's Burden" was much more than rationalization, an attempt to give a humanitarian justification for what were really acts of self-interested economic exploitation. Even though India and South Africa are probably better off now than they were when the British first took them over, it's not at all clear that this was really the goal of the British government so much as a side effect, and there are a lot of things the British could have done differently that would obviously have made them better off still—you know, like not implementing the precursors to apartheid, or making India a parliamentary democracy immediately instead of starting with the Raj and only conceding to democracy after decades of protest. What actually happened doesn't look like Britain cared nothing for improving the lives of people in India and South Africa (they did build a lot of schools and railroads, and sought to undermine slavery and the caste system), but it also doesn't look like that was their only goal; it was more like one goal among several, which also included the strategic and economic interests of Britain. It isn't enough that Britain was a better society or even that they made South Africa and India better societies than they were; if the goal wasn't really about making people's lives better where you are intervening, the intervention clearly isn't justified.

And that’s the relatively beneficent imperialism; the really horrific imperialists throughout history made only the barest pretense of spreading civilization and were clearly interested in nothing more than maximizing their own wealth and power. This is probably why we get things like the Prime Directive; we saw how bad it can get, and overreacted a little by saying that intervening in other cultures is always, always wrong, no matter what. It was only a slight overreaction—intervening in other cultures is usually wrong, and almost all historical examples of it were wrong—but it is still an overreaction. There are exceptional cases where intervening in another culture can be not only morally right but obligatory.

Indeed, one underappreciated consequence of colonialism and imperialism is that they have triggered a backlash against real good-faith efforts toward economic development. People in Africa, Asia, and Latin America see economists from the US and the UK (and most of the world’s top economists are in fact educated in the US or the UK) come in and tell them that they need to do this and that to restructure their society for greater prosperity, and they understandably ask: “Why should I trust you this time?” The last two or four or seven batches of people coming from the US and Europe to intervene in their countries exploited them or worse, so why is this time any different?

It is different, of course; UNDP is not the East India Company, not by a long shot. Even for all their faults, the IMF isn't the East India Company either. Indeed, while these people largely come from the same places as the imperialists, and may be descended from them, they are in fact completely different people, and moral responsibility does not inherit across generations. While the suspicion is understandable, it is ultimately unjustified; whatever happened hundreds of years ago, this time most of us really are trying to help—and it's working.

Actually, our economic growth has been fairly ecologically sustainable lately!

JDN 2457538

Environmentalists have a reputation for being pessimists, and it is not entirely undeserved. While, as Paul Samuelson said, Wall Street indexes have predicted nine out of the last five recessions, environmentalists have predicted more like twenty out of the last zero ecological collapses.

Some fairly serious scientists have endorsed predictions of imminent collapse that haven’t panned out, and many continue to do so. This Guardian article should be hilarious to statisticians, as it literally takes trends that are going one direction, maps them onto a theory that arbitrarily decides they’ll suddenly reverse, and then says “the theory fits the data”. This should be taught in statistics courses as a lesson in how not to fit models. More data distortion occurs in this Scientific American article, which contains the phrase “food per capita is decreasing”; well, that’s true if you just look at the last couple of years, but according to FAOSTAT, food production per capita in 2012 (the most recent data in FAOSTAT) was higher than literally every other year on record except 2011. So if you allow for even the slightest amount of random fluctuation, it’s very clear that food per capita is increasing, not decreasing.

[Figure: global_food.png — global food production per capita over time, from FAOSTAT]
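If you want to see the fluctuation-versus-trend point concretely, here is a minimal Python sketch; the series is a made-up stand-in (an upward trend plus noise), not the actual FAOSTAT index, which is what you would substitute in to do this for real:

```python
# Looking only at the last two observations can say "decreasing" even when
# the long-run trend is clearly upward. The series here is hypothetical
# (trend plus noise), standing in for the FAOSTAT food-per-capita index.
import numpy as np

years = np.arange(1990, 2013)
rng = np.random.default_rng(0)
food_index = 90 + 1.0 * (years - 1990) + rng.normal(0, 1.5, len(years))

last_two_change = food_index[-1] - food_index[-2]      # noisy; can easily be negative
trend_per_year = np.polyfit(years, food_index, 1)[0]   # clearly positive

print(f"Change over the last two years: {last_two_change:+.2f}")
print(f"Fitted long-run trend: {trend_per_year:+.2f} per year")
```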

So many people are predicting the imminent collapse of human civilization. And yet, for some reason, all the people predicting this go about their lives as if it weren't happening! Why, it's almost as if they don't really believe it, and just say it to get attention. Nobody gets on the news by saying "Civilization is doing fine; things are mostly getting better."

There's a long history of these sorts of gloom and doom predictions; perhaps the paradigm example is Thomas Malthus in 1798 predicting the imminent destruction of civilization by inevitable famine—just in time for global infant mortality rates to start plummeting and economic output to surge beyond anyone's wildest dreams.

Still, when I sat down to study this it was remarkable to me just how good the outlook is for future sustainability. The Index of Sustainable Economic Welfare was created essentially in an attempt to show how our economic growth is largely an illusion driven by our rapacious natural resource consumption, but it has since been discontinued, perhaps because it didn’t show that. Using the US as an example, I reconstructed the index as best I could from World Bank data, and here’s what came out for the period since 1990:

[Figure: ISEW — reconstructed Index of Sustainable Economic Welfare plotted against US GDP, 1990 to present]

The top line is US GDP as normally measured. The bottom line is the ISEW. The gap between those lines expands on a linear scale, but not on a logarithmic scale; that is to say, GDP and ISEW grow at almost exactly the same rate, so ISEW is always a constant (and large) proportion of GDP. By construction it is necessarily smaller (it basically takes GDP and subtracts things out of it), but the fact that it is growing at the same rate shows that our economic growth is not being driven by depletion of natural resources or the military-industrial complex; it's being driven by real improvements in education and technology.
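In case the "widening gap but identical growth rates" point seems paradoxical, here is a minimal numerical sketch; the GDP level, ISEW ratio, and growth rate below are arbitrary placeholders, not the reconstructed series:

```python
# If ISEW is a constant fraction of GDP, the two series grow at exactly the
# same rate even though the absolute gap between them keeps widening.
# All numbers here are arbitrary placeholders, not the reconstructed data.

gdp0, ratio, growth, years = 100.0, 0.6, 0.03, 25

gdp = [gdp0 * (1 + growth) ** t for t in range(years)]
isew = [ratio * g for g in gdp]  # constant proportion of GDP

for t in range(1, years):
    # Year-over-year growth rates are identical for both series.
    assert abs((gdp[t] / gdp[t - 1]) - (isew[t] / isew[t - 1])) < 1e-12

print("Absolute gap in year 1:", round(gdp[1] - isew[1], 2))     # small
print("Absolute gap in year 24:", round(gdp[-1] - isew[-1], 2))  # much larger
```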

The Human Development Index has grown in almost every country (albeit at quite different rates) since 1990. Global poverty is the lowest it has ever been. We are living in a golden age of prosperity. This is such a golden age for our civilization that our happiness rating has maxed out and now we're getting +20% production and extra gold from every source. (Sorry, gamer in-joke.)

Now, it is said that pride goeth before a fall; so perhaps our current mind-boggling improvements in human welfare have only been purchased on borrowed time as we further drain our natural resources.

There is some cause for alarm: We're literally running out of fish, and groundwater tables are falling rapidly. Due to poor land use, deserts are expanding. Huge quantities of garbage now float in our oceans. And of course, climate change is poised to kill millions of people, and Arctic sea ice may soon be melting away almost completely every summer.

And yet, global carbon emissions have not been increasing the last few years, despite strong global economic growth. We need to be reducing emissions, not just keeping them flat (in a previous post I talked about some policies to do that); but even keeping them flat while still raising standard of living is something a lot of environmentalists kept telling us we couldn’t possibly do. Despite constant talk of “overpopulation” and a “population bomb”, population growth rates are declining and world population is projected to level off around 9 billion. Total solar power production in the US expanded by a factor of 40 in just the last 10 years.

Of course, I don’t deny that there are serious environmental problems, and we need to make policies to combat them; but we are doing that. Humanity is not mindlessly plunging headlong into an abyss; we are taking steps to improve our future.

And in fact I think environmentalists deserve a lot of credit for that! Raising awareness of environmental problems has made most Americans recognize that climate change is a serious problem. Further pressure might make them realize it should be one of our top priorities (presently most Americans do not).

And who knows, maybe the extremist doomsayers are necessary to set the Overton Window for the rest of us. I think we of the center-left (toward which reality has a well-known bias) often underestimate how much we rely upon the radical left to pull the discussion away from the radical right and make us seem more reasonable by comparison. It could well be that “climate change will kill tens of millions of people unless we act now to institute a carbon tax and build hundreds of nuclear power plants” is easier to swallow after hearing “climate change will destroy humanity unless we act now to transform global capitalism to agrarian anarcho-socialism.” Ultimately I wish people could be persuaded simply by the overwhelming scientific evidence in favor of the carbon tax/nuclear power argument, but alas, humans are simply not rational enough for that; and you must go to policy with the public you have. So maybe irrational levels of pessimism are a worthwhile corrective to the irrational levels of optimism coming from the other side, like the execrable sophistry of “in praise of fossil fuels” (yes, we know our economy was built on coal and oil—that’s the problem. We’re “rolling drunk on petroleum”; when we’re trying to quit drinking, reminding us how much we enjoy drinking is not helpful.).

But I worry that this sort of irrational pessimism carries its own risks. First there is the risk of simply giving up, succumbing to learned helplessness and deciding there's nothing we can possibly do to save ourselves. Second is the risk that we will do something needlessly drastic (like a radical socialist revolution) that impoverishes or even kills millions of people for no reason. The extreme fear that we are on the verge of ecological collapse could lead people to take a "by any means necessary" stance and end up with a cure worse than the disease. So far the word "ecoterrorism" has mainly been applied to what was really ecovandalism; but if we were in fact on the verge of total civilizational collapse, I can understand why someone would think quite literal terrorism was justified (actually the main reason I don't is that I just don't see how it could actually help). Just about anything is worth it to save humanity from destruction.

What is progress? How far have we really come?

JDN 2457534

It is a controversy that has lasted throughout the ages: Is the world getting better? Is it getting worse? Or is it more or less staying the same, changing in ways that don’t really constitute improvements or detriments?

The most obvious and indisputable change in human society over the course of history has been the advancement of technology. At one extreme there are techno-utopians, who believe that technology will solve all the world’s problems and bring about a glorious future; at the other extreme are anarcho-primitivists, who maintain that civilization, technology, and industrialization were all grave mistakes, removing us from our natural state of peace and harmony.

I am not a techno-utopian—I do not believe that technology will solve all our problems—but I am much closer to that end of the scale. Technology has solved a lot of our problems, and will continue to solve a lot more. My aim in this post is to convince you that progress is real, that things really are, on the whole, getting better.

One of the more baffling arguments against progress comes from none other than Jared Diamond, the social scientist most famous for Guns, Germs and Steel (which oddly enough is mainly about horses and goats). About seven months before I was born, Diamond wrote an essay for Discover magazine arguing quite literally that agriculture—and by extension, civilization—was a mistake.

Diamond fortunately avoids the usual argument based solely on modern hunter-gatherers, which is a selection bias if ever I heard one. Instead his main argument seems to be that paleontological evidence shows an overall decrease in health around the same time as agriculture emerged. But that’s still an endogeneity problem, albeit a subtler one. Maybe agriculture emerged as a response to famine and disease. Or maybe they were both triggered by rising populations; higher populations increase disease risk, and are also basically impossible to sustain without agriculture.

I am similarly dubious of the claim that hunter-gatherers are always peaceful and egalitarian. It does seem to be the case that herders are more violent than other cultures, as they tend to form honor cultures that punish all slights with overwhelming violence. Even after the Industrial Revolution there were herder honor cultures—the Wild West. Yet as Steven Pinker keeps trying to tell people, the death rates due to homicide in all human cultures appear to have steadily declined for thousands of years.

I read an article just a few days ago on the Scientific American blog which included a claim so astonishingly nonsensical that it makes me wonder whether the authors can even do arithmetic or read statistical tables correctly:

As I keep reminding readers (see Further Reading), the evidence is overwhelming that war is a relatively recent cultural invention. War emerged toward the end of the Paleolithic era, and then only sporadically. A new study by Japanese researchers published in the Royal Society journal Biology Letters corroborates this view.

Six Japanese scholars led by Hisashi Nakao examined the remains of 2,582 hunter-gatherers who lived 12,000 to 2,800 years ago, during Japan’s so-called Jomon Period. The researchers found bashed-in skulls and other marks consistent with violent death on 23 skeletons, for a mortality rate of 0.89 percent.

That is supposed to be evidence that ancient hunter-gatherers were peaceful? The global homicide rate today is 62 homicides per million people per year. Using the worldwide life expectancy of 71 years (which is biasing against modern civilization because our life expectancy is longer), that means that the worldwide lifetime homicide rate is 4,400 homicides per million people, or 0.44%—that’s less than half the homicide rate of these “peaceful” hunter-gatherers. If you compare just against First World countries, the difference is even starker; let’s use the US, which has the highest homicide rate in the First World. Our homicide rate is 38 homicides per million people per year, which at our life expectancy of 79 years is 3,000 homicides per million people, or an overall homicide rate of 0.3%, slightly more than a third of this “peaceful” ancient culture. The most peaceful societies today—notably Japan, where these remains were found—have homicide rates as low as 3 per million people per year, which is a lifetime homicide rate of 0.02%, forty times smaller than their supposedly utopian ancestors. (Yes, all of Japan has fewer total homicides than Chicago. I’m sure it has nothing to do with their extremely strict gun control laws.) Indeed, to get a modern homicide rate as high as these hunter-gatherers, you need to go to a country like Congo, Myanmar, or the Central African Republic. To get a substantially higher homicide rate, you essentially have to be in Latin America. Honduras, the murder capital of the world, has a lifetime homicide rate of about 6.7%.
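For the record, here is that arithmetic spelled out as a tiny Python sketch, using exactly the figures quoted above (annual homicides per million multiplied by life expectancy, which ignores age structure but is the same rough approximation used in the text):

```python
# Rough lifetime homicide risk = (annual homicides per million) x (life expectancy),
# using the figures quoted above. Ignoring age structure makes this only an
# approximation, but it is the same approximation used in the text.

def lifetime_rate(per_million_per_year, life_expectancy):
    return per_million_per_year * life_expectancy / 1_000_000

print("World:", f"{lifetime_rate(62, 71):.2%}")   # ~0.44%
print("US:", f"{lifetime_rate(38, 79):.2%}")      # ~0.30%
print("Japan:", f"{lifetime_rate(3, 79):.2%}")    # ~0.02%, using a rough First World life expectancy
print("Jomon sample:", f"{23 / 2582:.2%}")        # ~0.89% (23 violent deaths out of 2,582 skeletons)
```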

Again, how did I figure these things out? By reading basic information from publicly-available statistical tables and then doing some simple arithmetic. Apparently these paleoanthropologists couldn’t be bothered to do that, or didn’t know how to do it correctly, before they started proclaiming that human nature is peaceful and civilization is the source of violence. After an oversight as egregious as that, it feels almost petty to note that a sample size of a few thousand people from one particular region and culture isn’t sufficient data to draw such sweeping judgments or speak of “overwhelming” evidence.

Of course, in order to decide whether progress is a real phenomenon, we need a clearer idea of what we mean by progress. It would be question-begging to use per-capita GDP, since there can be absolutely no doubt that technology and capitalism do in fact raise per-capita GDP. If we measure by inequality, modern society clearly fares much worse (our top 1% share and Gini coefficient may be higher than Classical Rome's!), but that is clearly biased in the opposite direction, because the main way we have raised inequality is by raising the ceiling, not lowering the floor. Most of our really good measures (like the Human Development Index) only exist for the last few decades and can barely even be extrapolated back through the 20th century.

How about babies not dying? This is my preferred measure of a society’s value. It seems like something that should be totally uncontroversial: Babies dying is bad. All other things equal, a society is better if fewer babies die.

I suppose it doesn't immediately follow that all things considered a society is better if fewer babies die; maybe the dying babies could be offset by some greater good. Perhaps a totalitarian society where no babies die is in fact worse than a free society in which a few babies die, or perhaps we should be prepared to accept some small number of babies dying in order to save adults from poverty, or something like that. But without some really powerful overriding reason, babies not dying probably means your society is doing something right. (And since most ancient societies were in a state of universal poverty and quite frequently tyranny, these exceptions would only strengthen my case.)

Well, get ready for some high-yield truth bombs about infant mortality rates.

It’s hard to get good data for prehistoric cultures, but the best data we have says that infant mortality in ancient hunter-gatherer cultures was about 20-50%, with a best estimate around 30%. This is statistically indistinguishable from early agricultural societies.

Indeed, 30% seems to be the figure humanity had for most of history. Just shy of a third of all babies died for most of history.

In Medieval times, infant mortality was about 30%.

This same rate (fluctuating based on various plagues) persisted into the Enlightenment—Sweden has the best records, and their infant mortality rate in 1750 was about 30%.

The decline in infant mortality began slowly: During the Industrial Era, infant mortality was about 15% in isolated villages, but still as high as 40% in major cities due to high population densities with poor sanitation.

Even as recently as 1900, there were US cities with infant mortality rates as high as 30%, though the overall rate was more like 10%.

Most of the decline was recent and rapid: Just within the US since WW2, infant mortality fell from about 5.5% to 0.7%, though there remains a substantial disparity between White and Black people.

Globally, the infant mortality rate fell from 6.3% to 3.2% within my lifetime, and in Africa today, the region where it is worst, it is about 5.5%—or what it was in the US in the 1940s.

This precipitous decline in babies dying is the main reason ancient societies have such low life expectancies; actually once they reached adulthood they lived to be about 70 years old, not much worse than we do today. So my multiplying everything by 71 actually isn’t too far off even for ancient societies.

Let me make a graph for you here, of the approximate rate of babies dying over time from 10,000 BC to today:

[Figure: Infant_mortality.png — approximate infant mortality rate, 10,000 BC to today]

Let’s zoom in on the last 250 years, where the data is much more solid:

[Figure: Infant_mortality_recent.png — infant mortality rate over the last 250 years]
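If you want to reproduce rough versions of these graphs yourself, here is a minimal matplotlib sketch; the flat plateau and the recent global figures come from the numbers quoted above, while the values in between are illustrative interpolations I made up just to show the shape of the curve, not real data:

```python
# Rough sketch of the two infant mortality graphs described above. The ~30%
# plateau and the recent global figures (6.3% -> 3.2%) come from the text;
# the intermediate points are illustrative guesses to show the kink near 1800.
import matplotlib.pyplot as plt

years     = [-10000, 0, 1500, 1800, 1850, 1900, 1950, 1990, 2016]
mortality = [30, 30, 30, 30, 27, 20, 12, 6.3, 3.2]  # percent of infants dying

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(years, mortality)
ax1.set_title("Infant mortality, 10,000 BC to today (approx.)")
ax1.set_xlabel("Year")
ax1.set_ylabel("Infant mortality (%)")

recent = [(y, m) for y, m in zip(years, mortality) if y >= 1766]
ax2.plot([y for y, _ in recent], [m for _, m in recent])
ax2.set_title("The last 250 years (approx.)")
ax2.set_xlabel("Year")
ax2.set_ylabel("Infant mortality (%)")

plt.tight_layout()
plt.show()
```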

I think you may notice something in these graphs. There is quite literally a turning point for humanity, a kink in the curve where we suddenly begin a rapid decline from an otherwise constant mortality rate.

That point occurs around or shortly before 1800—that is, it occurs at industrial capitalism. Adam Smith (not to mention Thomas Jefferson) was writing at just about the point in time when humanity made a sudden and unprecedented shift toward saving the lives of millions of babies.

So now, think about that the next time you are tempted to say that capitalism is an evil system that destroys the world; the evidence points to capitalism quite literally saving babies from dying.

How would it do so? Well, there’s that rising per-capita GDP we previously ignored, for one thing. But more important seems to be the way that industrialization and free markets support technological innovation, and in this case especially medical innovation—antibiotics and vaccines. Our higher rates of literacy and better communication, also a result of raised standard of living and improved technology, surely didn’t hurt. I’m not often in agreement with the Cato Institute, but they’re right about this one: Industrial capitalism is the chief source of human progress.

Billions of babies would have died but we saved them. So yes, I’m going to call that progress. Civilization, and in particular industrialization and free markets, have dramatically improved human life over the last few hundred years.

In a future post I’ll address one of the common retorts to this basically indisputable fact: “You’re making excuses for colonialism and imperialism!” No, I’m not. Saying that modern capitalism is a better system (not least because it saves babies) is not at all the same thing as saying that our ancestors were justified in using murder, slavery, and tyranny to force people into it.

Why it matters that torture is ineffective

JDN 2457531

Like "longest-serving Republican Speaker of the House sexually abused teenagers" and "NSA spy program is trying to monitor the entire telephone and email system", "the US government systematically tortures suspects" is an egregious violation that goes to the highest levels of our government—and one that, for some reason, most Americans don't particularly seem to care about.

The good news is that President Obama signed an executive order in 2009 banning torture domestically, reversing official policy under the Bush Administration, and then better yet in 2014 expanded the order to apply to all US interests worldwide. If this is properly enforced, perhaps our history of hypocrisy will finally be at its end. (Well, not if Trump wins…)

Yet as often seems to happen, there are two extremes in this debate and I think they're both wrong.

The really disturbing side is "Torture works and we have to use it!" The preferred mode of argumentation for this is the "ticking time bomb scenario", in which we have some urgent disaster to prevent (such as a nuclear bomb about to go off) and torture is the only way to stop it from happening. Surely then torture is justified? This argument may sound plausible, but as I'll get to below, this is a lot like saying, "If aliens were attacking from outer space trying to wipe out humanity, nuclear bombs would probably be justified against them; therefore nuclear bombs are always justified and we can use them whenever we want." If you can't wait for my explanation, The Atlantic skewers the argument nicely.

Yet the opponents of torture have brought this sort of argument on themselves, by staking out a position so extreme as “It doesn’t matter if torture works! It’s wrong, wrong, wrong!” This kind of simplistic deontological reasoning is very appealing and intuitive to humans, because it casts the world into simple black-and-white categories. To show that this is not a strawman, here are several different people all making this same basic argument, that since torture is illegal and wrong it doesn’t matter if it works and there should be no further debate.

But the truth is, if it really were true that the only way to stop a nuclear bomb from leveling Los Angeles was to torture someone, it would be entirely justified—indeed obligatory—to torture that suspect and stop that nuclear bomb.

The problem with that argument is not just that this is not our usual scenario (though it certainly isn’t); it goes much deeper than that:

That scenario makes no sense. It wouldn’t happen.

To use the example the late Antonin Scalia used from an episode of 24 (perhaps the most egregious Fictional Evidence Fallacy ever committed), if there ever were a nuclear bomb planted in Los Angeles, that would be one of the worst things ever to happen in the history of the human race—literally a Holocaust in the blink of an eye. We should be prepared to cause extreme suffering and death in order to prevent it. But not only is that event (fortunately) very unlikely, torture would not help us.

Why? Because torture just doesn’t work that well.

It would be too strong to say that it doesn’t work at all; it’s possible that it could produce some valuable intelligence—though clear examples of such results are amazingly hard to come by. There are some social scientists who have found empirical results showing some effectiveness of torture, however. We can’t say with any certainty that it is completely useless. (For obvious reasons, a randomized controlled experiment in torture is wildly unethical, so none have ever been attempted.) But to justify torture it isn’t enough that it could work sometimes; it has to work vastly better than any other method we have.

And our empirical data is in fact reliable enough to show that that is not the case. Torture often produces unreliable information, as we would expect from the game theory involved—your incentive is to stop the pain, not provide accurate intel; the psychological trauma that torture causes actually distorts memory and reasoning; and as a matter of fact basically all the useful intelligence obtained in the War on Terror was obtained through humane interrogation methods. As interrogation experts agree, torture just isn’t that effective.

In principle, there are four basic cases to consider:

1. Torture is vastly more effective than the best humane interrogation methods.

2. Torture is slightly more effective than the best humane interrogation methods.

3. Torture is as effective as the best humane interrogation methods.

4. Torture is less effective than the best humane interrogation methods.

The evidence points most strongly to case 4, which would make rejecting torture a no-brainer; if it doesn't even work as well as other methods, it's absurd to use it. You're basically kicking puppies at that point—purely sadistic violence that accomplishes nothing. But the data isn't clear enough for us to rule out case 3 or even case 2. There is only one case we can strictly rule out, and that is case 1.

But it was only in case 1 that torture could ever be justified!

If you’re trying to justify doing something intrinsically horrible, it’s not enough that it has some slight benefit.

People seem to have this bizarre notion that we have only two choices in morality:

Either we are strict deontologists, and wrong actions can never be justified by good outcomes ever, in which case apparently vaccines are morally wrong, because stabbing children with needles is wrong. To be fair, some people seem to actually believe this; but then, some people believe the Earth is less than 10,000 years old.

Or alternatively we adopt the bizarre strawman version of utilitarianism most people seem to have, under which any wrong action can be justified by even the slightest good outcome, in which case all you need to do to justify slavery is show that it would lead to a 1% increase in per-capita GDP. Sadly, there honestly do seem to be economists who believe this sort of thing. Here's one arguing that US chattel slavery was economically efficient, and some of the more extreme arguments for why sweatshops are good can take on this character. Sweatshops may be a necessary evil for the time being, but they are still an evil.

But what utilitarianism actually says (and I consider myself some form of nuanced rule-utilitarian, though actually I sometimes call it “deontological consequentialism” to emphasize that I mean to synthesize the best parts of the two extremes) is not that the ends always justify the means, but that the ends can justify the means—that it can be morally good or even obligatory to do something intrinsically bad (like stabbing children with needles) if it is the best way to accomplish some greater good (like saving them from measles and polio). But the good actually has to be greater, and it has to be the best way to accomplish that good.

To see why this latter proviso is important, consider the real-world ethical issues involved in psychology experiments. The benefits of psychology experiments are already quite large, and poised to grow as the science improves; one day the benefits of cognitive science to humanity may be even larger than the benefits of physics and biology are today. Imagine a world without mood disorders or mental illness of any kind; a world without psychopathy, where everyone is compassionate; a world where everyone is achieving their full potential for happiness and self-actualization. Cognitive science may yet make that world possible—and I haven't even gotten into its applications in artificial intelligence.

To achieve that world, we will need a great many psychology experiments. But does that mean we can just corral people off the street and throw them into psychology experiments without their consent—or perhaps even their knowledge? That we can do whatever we want in those experiments, as long as it’s scientifically useful? No, it does not. We have ethical standards in psychology experiments for a very good reason, and while those ethical standards do slightly reduce the efficiency of the research process, the reduction is small enough that the moral choice is obviously to retain the ethics committees and accept the slight reduction in research efficiency. Yes, randomly throwing people into psychology experiments might actually be slightly better in purely scientific terms (larger and more random samples)—but it would be terrible in moral terms.

Along similar lines, even if torture works about as well or even slightly better than other methods, that’s simply not enough to justify it morally. Making a successful interrogation take 16 days instead of 17 simply wouldn’t be enough benefit to justify the psychological trauma to the suspect (and perhaps the interrogator!), the risk of harm to the falsely accused, or the violation of international human rights law. And in fact a number of terrorism suspects were waterboarded for months, so even the idea that it could shorten the interrogation is pretty implausible. If anything, torture seems to make interrogations take longer and give less reliable information—case 4.

A lot of people seem to have this impression that torture is amazingly, wildly effective, that a suspect who won’t crack after hours of humane interrogation can be tortured for just a few minutes and give you all the information you need. This is exactly what we do not find empirically; if he didn’t crack after hours of talk, he won’t crack after hours of torture. If you literally only have 30 minutes to find the nuke in Los Angeles, I’m sorry; you’re not going to find the nuke in Los Angeles. No adversarial interrogation is ever going to be completed that quickly, no matter what technique you use. Evacuate as many people to safe distances or underground shelters as you can in the time you have left.

This is why the “ticking time-bomb” scenario is so ridiculous (and so insidious); that’s simply not how interrogation works. The best methods we have for “rapid” interrogation of hostile suspects take hours or even days, and they are humane—building trust and rapport is the most important step. The goal is to get the suspect to want to give you accurate information.

For the purposes of the thought experiment, okay, you can stipulate that it would work (this is what the Stanford Encyclopedia of Philosophy does). But now all you’ve done is made the thought experiment more distant from the real-world moral question. The closest real-world examples we’ve ever had involved individual crimes, probably too small to justify the torture (as bad as a murdered child is, think about what you’re doing if you let the police torture people). But by the time the terrorism to be prevented is large enough to really be sufficient justification, it (1) hasn’t happened in the real world and (2) surely involves terrorists who are sufficiently ideologically committed that they’ll be able to resist the torture. If such a situation arises, of course we should try to get information from the suspects—but what we try should be our best methods, the ones that work most consistently, not the ones that “feel right” and maybe happen to work on occasion.

Indeed, the best explanation I have for why people use torture at all, given its horrible effects and mediocre effectiveness at best, is that it feels right.

When someone does something terrible (such as an act of terrorism), we rightfully reduce our moral valuation of them relative to everyone else. If you are even tempted to deny this, suppose a terrorist and a random civilian are both inside a burning building and you only have time to save one. Of course you save the civilian and not the terrorist. And that’s still true even if you know that once the terrorist was rescued he’d go to prison and never be a threat to anyone else. He’s just not worth as much.

In the most extreme circumstances, a person can be so terrible that their moral valuation should be effectively zero: If the only person in a burning building is Stalin, I'm not sure you should save him even if you easily could. But it is a grave moral mistake to think that a person's moral valuation should ever go negative, yet I think this is something that people do when confronted with someone they truly hate. The federal agents torturing those terrorists didn't merely think of them as worthless—they thought of them as having negative worth. They felt it was a positive good to harm them. But this is fundamentally wrong; no sentient being has negative worth. Some may be so terrible as to have essentially zero worth; and we are often justified in causing harm to some in order to save others. It would have been entirely justified to kill Stalin (as a matter of fact he died of a stroke in old age), to remove the continued threat he posed; but to torture him would not have made the world a better place, and actually might well have made it worse.

Yet I can see how psychologically it could be useful to have a mechanism in our brains that makes us hate someone so much we view them as having negative worth. It makes it a lot easier to harm them when necessary, makes us feel a lot better about ourselves when we do. The idea that any act of homicide is a tragedy but some of them are necessary tragedies is a lot harder to deal with than the idea that some people are just so evil that killing or even torturing them is intrinsically good. But some of the worst things human beings have ever done ultimately came from that place in our brains—and torture is one of them.

The powerful persistence of bigotry

JDN 2457527

Bigotry has been a part of human society since the beginning—people have been hating people they perceive as different since as long as there have been people, and maybe even before that. I wouldn’t be surprised to find that different tribes of chimpanzees or even elephants hold bigoted beliefs about each other.

Yet it may surprise you that neoclassical economics has basically no explanation for this. There is a long-standing famous argument that bigotry is inherently irrational: If you hire based on anything aside from actual qualifications, you are leaving money on the table for your company. Because women CEOs are paid less and perform better, simply ending discrimination against women in top executive positions could save any typical large multinational corporation tens of millions of dollars a year. And yet, corporations don't do it! Fancy that.

More recently there has been work on the concept of statistical discrimination, under which it is rational (in the sense of narrowly-defined economic self-interest) to discriminate because categories like race and gender may provide some statistically valid stereotype information. For example, “Black people are poor” is obviously not true across the board, but race is strongly correlated with wealth in the US; “Asians are smart” is not a universal truth, but Asian-Americans do have very high educational attainment. In the absence of more reliable information that might be your best option for making good decisions. Of course, this creates a vicious cycle where people in the positive stereotype group are better off and have more incentive to improve their skills than people in the negative stereotype group, thus perpetuating the statistical validity of the stereotype.

But of course that assumes that the stereotypes are statistically valid, and that employers don't have more reliable information. Yet many stereotypes aren't even true statistically: If "women are bad drivers", then why do men cause 75% of traffic fatalities? Furthermore, in most cases employers have more reliable information—resumes with education and employment records. Asian-Americans are indeed more likely to have bachelor's degrees than Latino Americans, but when it says right on Mr. Lorenzo's resume that he has a B.A. and on Mr. Suzuki's resume that he doesn't, that racial stereotype no longer provides you with any further information. Yet even if the resumes are identical, employers will be more likely to hire a White applicant than a Black applicant, and more likely to hire a male applicant than a female applicant—we have directly tested this in experiments. In an experiment where employers had direct performance figures in front of them, they were still more likely to choose the man when they had the same scores—and sometimes even when the woman had a higher score!
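To make that "no further information" point concrete, here is a toy Python sketch; all of the probabilities are hypothetical numbers I made up purely for illustration:

```python
# Toy model: statistical discrimination is base-rate inference, and direct
# information like a resume "screens off" the group entirely. All numbers
# here are hypothetical, chosen only to illustrate the logic.

p_degree = {"group_A": 0.5, "group_B": 0.3}   # hypothetical degree rates by group
p_productive_if_degree = 0.8
p_productive_if_no_degree = 0.5               # productivity depends only on the degree

def expected_productivity(group, has_degree=None):
    if has_degree is None:
        # No resume: all you can do is average over the group's base rate.
        p = p_degree[group]
        return p * p_productive_if_degree + (1 - p) * p_productive_if_no_degree
    # With the resume in hand, group membership adds no further information.
    return p_productive_if_degree if has_degree else p_productive_if_no_degree

print(expected_productivity("group_A"), expected_productivity("group_B"))              # 0.65 vs 0.59
print(expected_productivity("group_A", True), expected_productivity("group_B", True))  # 0.8 vs 0.8
```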

Even our assessments of competence are often biased, probably subconsciously; given the same essay to review, most reviewers find more spelling errors and are more concerned about those errors if they are told that the author is Black. If they thought the author was White, they thought of the errors as “minor mistakes” by a student with “otherwise good potential”; but if they thought the author was Black, they “can’t believe he got into this school in the first place”. These reviewers were reading the same essay. The alleged author’s race was decided randomly. Most if not all of these reviewers were not consciously racist. Subconscious racial biases are all over the place; almost everyone exhibits some subconscious racial bias.

No, discrimination isn’t just rational inference based on valid (if unfortunate and self-reinforcing) statistical trends. There is a significant component of just outright irrational bigotry.

We’re seeing this play out in North Carolina; due to their arbitrary discrimination against lesbian, gay, bisexual and especially transgender people, they are now hemorrhaging jobs as employers pull out, and their federal funding for student loans is now in jeopardy due to the obvious Title IX violation. This is obviously not in the best interest of the people of North Carolina (even the ones who aren’t LGBT!); and it’s all being justified on the grounds of an epidemic of sexual assaults by people pretending to be trans that doesn’t even exist. It turns out that more Republican Senators have been arrested for sexual misconduct in bathrooms than transgender people—and while the number of transgender people in the US is surprisingly hard to measure, it’s clearly a lot larger than the number of Republican Senators!

In fact, discrimination is even more irrational than it may seem, because empirically the benefits of discrimination (such as they are—short-term narrow economic self-interest) fall almost entirely on the rich while the harms fall mainly on the poor, yet poor people are much more likely to be racist! Since income and education are highly correlated, education accounts for some of this effect. This is reason to be hopeful, for as educational attainment has soared, we have found that racism has decreased.

But education doesn't seem to explain the full effect. One theory to account for this is what's called last-place aversion, a highly pernicious heuristic where people are less concerned about their own absolute status than they are about not having the worst status. In economic experiments, people are usually more willing to give money to people worse off than them than to those better off than them—unless giving it to the worse-off would make those people better off than they themselves are. I think we actually need to do further study to see what happens if it would make those other people exactly as well-off as they are, because that turns out to be absolutely critical to whether people would be willing to support a basic income. In other words, do people count "tied for last" as last? Would they rather play a game where everyone gets $100, or one where they get $50 but everyone else only gets $10?

I would hope that humanity is better than that—that we would want to play the $100 game, which is analogous to a basic income. But when I look at the extreme and persistent inequality that has plagued human society for millennia, I begin to wonder if perhaps there really are a lot of people who think of the world in such zero-sum, purely relative terms, and care more about being better than others than they do about doing well themselves. Perhaps the horrific poverty of Sub-Saharan Africa and Southeast Asia is, for many First World people, not a bug but a feature; we feel richer when we know they are poorer. Scarcity seems to amplify this zero-sum thinking; racism gets worse whenever we have economic downturns. Precisely because discrimination is economically inefficient, this can create a vicious cycle where poverty causes bigotry which worsens poverty.

There is also something deeper going on, something evolutionary; bigotry is part of what I call the tribal paradigm, the core aspect of human psychology that defines identity in terms of in-groups which are good and out-groups which are bad. We will probably never fully escape the tribal paradigm, but this is not a reason to give up hope; we have made substantial progress in reducing bigotry in many places. What seems to happen is that people learn to expand their mental tribe, so that it encompasses larger and larger groups—not just White Americans but all Americans, or not just Americans but all human beings. Peter Singer calls this the Expanding Circle (also the title of his book on it). We may one day be able to make our tribe large enough to encompass all sentient beings in the universe; at that point, it’s just fine if we are only interested in advancing the interests of those in our tribe, because our tribe would include everyone. Yet I don’t think any of us are quite there yet, and some people have a really long way to go.

But with these expanding tribes in mind, perhaps I can leave you with a fact that is as counter-intuitive as it is encouraging, and even easier to take out of context: Racism was better than what came before it. What I mean by this is not that racism is good—of course it's terrible—but that in order to be racism, to define the whole world into a small number of "racial groups", people already had to enormously expand their mental tribe from where it started. When we evolved on the African savannah millions of years ago, our tribe was 150 people; to this day, that's about the number of people we actually feel close to and interact with on a personal level. We could have stopped there, and for millennia we did. But over time we managed to expand beyond that number, to a village of 1,000, a town of 10,000, a city of 100,000. More recently we attained mental tribes of whole nations, in some cases hundreds of millions of people. Racism is about that same scale, if not a bit larger; what most people (rather arbitrarily, and in a way that changes over time) call "White" constitutes about a billion people. "Asian" (including South Asian) is almost four billion. These are astonishingly huge figures, some seven orders of magnitude larger than what we originally evolved to handle. The ability to feel empathy for all "White" people is just a little bit smaller than the ability to feel empathy for all people period. Similarly, while today the gender in "all men are created equal" is jarring to us, the idea at the time really was an incredibly radical broadening of the moral horizon—Half the world? Are you mad?

Therefore I am confident that one day, not too far from now, the world will take that next step, that next order of magnitude, which many of us already have (or try to), and we will at last conquer bigotry, and if not eradicate it entirely then force it completely into the most distant shadows and deny it its power over our society.

Is there hope for stopping climate change?

JDN 2457523

This topic was decided by vote of my Patreons (there are still few enough that the vote usually has only two or three people, but hey, what else can I do?).

When it comes to climate change, I have good news and bad news.

First, the bad news:

We are not going to be able to stop climate change, or even stop making it worse, any time soon. Because of this, millions of people are going to die and there’s nothing we can do about it.

Now, the good news:

We can do a great deal to slow down our contribution to climate change, reduce its impact on human society, and save most of the people who would otherwise have been killed by it. It is currently forecasted that climate change will cause somewhere between 10 million and 100 million deaths over the next century; if we can hold to the lower end of that error bar instead of the upper end, that’s half a dozen Holocausts prevented.

There are three basic approaches to take, and we will need all of them:

1. Emission reduction: Put less carbon in

2. Geoengineering: Take more carbon out

3. Adaptation: Protect more humans from the damage

Strategies 1 and 2 are classified as mitigation, while strategy 3 is classified as adaptation. Mitigation is reducing climate change; adaptation is reducing the effect of climate change on people.

Let’s start with strategy 1, emission reduction. It’s probably the most important; without it the others are clearly doomed to fail.

So, what are our major sources of emissions, and what can we do to reduce them?

While within the US and most other First World countries the primary sources of emissions are electricity and transportation, worldwide transportation is less important and agriculture is about as large a source of emissions as electricity. 25% of global emissions are due to electricity, 24% are due to agriculture, 21% are due to industry, 14% are due to transportation, only 6% are due to buildings, and everything else adds up to 10%.

[Figure: global_emissions_sector_2015 — global greenhouse gas emissions by sector, 2015]

1A. Both within the First World and worldwide, the leading source of emissions is electricity. Our first priority is therefore electrical grid reform.

Energy efficiency can help—and it already is helping, as electricity consumption in most First World countries has stopped growing despite growth in population and GDP. Energy intensity of GDP is declining. But the main thing we need to do is reform the way that electricity is produced.

Let's take a look at how the world currently produces energy more broadly. The leading source is "liquids", an odd euphemism for oil: about 175 quadrillion BTU per year, 30% of all production. This is closely followed by coal, at about 160 quadrillion BTU per year (28%). Then we have natural gas, about 130 quadrillion BTU per year (23%); wind, solar, hydroelectric, and geothermal, altogether about 60 quadrillion BTU per year (11%); and nuclear fission, only about 40 quadrillion BTU per year (7%).

This list basically needs to be reversed. We will probably not be able to completely stop using oil for transportation, but we have no excuse for using it for electricity production. We also need to stop using coal for, well, just about anything. There are a few industrial processes that basically have to use coal; fine, use it for that. But just as something to burn, coal is one of the most heavily-polluting technologies in existence—the only things we burn that are worse are wood and animal dung. Simply ending the burning of coal, wood, and dung would by itself save 4 million lives a year just from reduced pollution.

Natural gas burns cleaner than coal or oil, but it still produces a lot of carbon emissions. Even worse, natural gas is mostly methane, which is itself a far more potent greenhouse gas than carbon dioxide—and so natural gas leaks are a major source of greenhouse emissions. Last year a single massive leak accounted for 25% of California’s methane emissions. Like oil, natural gas is something we’ll want to use quite sparingly.

The best power source is solar power, hands-down. In the long run, the goal should be to convert as much as possible of the grid to solar. Wind, hydroelectric, and geothermal are also very useful, though wind power peaks at the wrong time of day for high energy demand and hydro and geothermal require specific geography to work. Solar is also the most scalable; as long as you have the raw materials and the technology, you can keep expanding solar production all the way up to a Dyson Sphere.

But solar is intermittent, and we don’t have good enough energy storage methods right now to ensure a steady grid on solar alone. The bulk of our grid is therefore going to have to be made of the one energy source we have with negligible carbon emissions, mature technology, and virtually unlimited and fully controllable output: Nuclear fission. At least until fusion matures or we solve the solar energy storage problem, nuclear fission is our best option for minimizing carbon emissions immediately: not waiting for some new technology to come save us, but building efficient reactors now. Why does France only emit 6 tonnes of carbon per person per year while the UK emits 9, Germany emits 10, and the US emits a whopping 17? Because France’s electricity grid is almost entirely nuclear.

“But nuclear power is dangerous!” people will say. France has indeed had several nuclear accidents in the last 40 years; guess how many deaths those accidents have caused? Zero. Deepwater Horizon killed more people than the sum total of all nuclear accidents in all First World countries. Worldwide, there was one Black Swan horrible nuclear event—Chernobyl (which still only killed about as many people as die in the US each year from car accidents or lung cancer)—and other than that, nuclear power is safer than every form of fossil fuel.

“Where will we store the nuclear waste?” Well, that’s a more legitimate question, but you know what? It can wait. Nuclear waste doesn’t accumulate very fast, precisely because fission is thousands of times more efficient than combustion; so we’ll have plenty of room in existing facilities or easily-built expansions for the next century. By that point, we should have fusion or a good way of converting the whole grid to solar. We should of course invest in R&D in the meantime. But right now, we need fission.

So, after we’ve converted the electricity grid to nuclear, what next?

1B. To reduce emissions from agriculture, we need to eat less meat; among agricultural sources, livestock is the leading contributor of greenhouse emissions, followed by land-use “emissions” (i.e. deforestation). Both could be reduced by shifting more of our agricultural production from meat to vegetables, because vegetables are much more land-efficient (and just-about-everything-else-efficient).

1C. To reduce emissions from transportation, we need huge investments in public transit, as well as more fuel-efficient vehicles like hybrids and electric cars. Switching to public transit could cut private transportation-related emissions in half. A 100% electric fleet is too much to hope for, but by implementing a high carbon tax, we might at least raise the cost of gasoline enough to push makers and buyers of cars toward more fuel-efficient models.
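To give a rough sense of scale for that carbon tax, here is a minimal sketch, assuming a hypothetical rate of $100 per tonne of CO2 and the standard estimate that burning a gallon of gasoline releases roughly 8.9 kg of CO2; the tax rate is purely illustrative, not a figure from this post.

```python
# Rough sketch: how a carbon tax would show up at the gas pump.
CO2_PER_GALLON_KG = 8.9     # approximate CO2 released by burning one gallon of gasoline
TAX_PER_TONNE_USD = 100.0   # hypothetical tax rate, dollars per tonne of CO2 (illustrative)

tax_per_gallon = TAX_PER_TONNE_USD * CO2_PER_GALLON_KG / 1000.0
print(f"A ${TAX_PER_TONNE_USD:.0f}/tonne carbon tax adds about ${tax_per_gallon:.2f} per gallon")
# With these assumptions, that comes to roughly $0.89 per gallon of gasoline.
```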

The biggest gains in fuel efficiency happen on the most gas-guzzling vehicles—indeed, so much so that our usual measure “miles per gallon” is highly misleading.

Quick: Which of the following changes would reduce emissions the most, assuming all the vehicles drive the same amount? Switching from a hybrid of 50 MPG to a zero-emission electric (infinity MPG!), switching from a normal sedan of 20 MPG to a hybrid of 50 MPG, or switching from an inefficient diesel truck of 3 MPG to a modern diesel truck of 7 MPG?

The diesel truck, by far.

If each vehicle drives 10,000 miles per year: The first switch will take us from consuming 200 gallons to consuming 0 gallons—saving 200 gallons. The second switch will take us from consuming 500 gallons to consuming 200 gallons—saving 300 gallons. But the third switch will take us from consuming 3,334 gallons to consuming only 1,429 gallons—saving a whopping 1,905 gallons. Even slight increases in the fuel efficiency of highly inefficient vehicles have a huge impact, while you can raise an already-efficient vehicle to perfect efficiency and barely notice a difference.

We really should measure in gallons per mile—or better yet, liters per megameter. (Most of the rest of the world already uses liters per 100 kilometers, which is almost the same thing.)
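Here is the same fuel arithmetic as a minimal sketch in Python, using the 10,000-miles-per-year assumption from above; an electric car is modeled as infinite MPG, i.e. zero gallons of gasoline.

```python
# Annual gallons saved by each switch, at a fixed 10,000 miles per year.
# Fuel use is miles / MPG, so what really matters is gallons per mile (1 / MPG).
MILES_PER_YEAR = 10_000

def gallons_per_year(mpg):
    """Annual fuel consumption; float('inf') MPG models a zero-gasoline electric car."""
    return MILES_PER_YEAR / mpg

switches = [
    ("hybrid (50 MPG) -> electric", 50, float("inf")),
    ("sedan (20 MPG) -> hybrid (50 MPG)", 20, 50),
    ("old diesel truck (3 MPG) -> newer diesel truck (7 MPG)", 3, 7),
]

for label, old_mpg, new_mpg in switches:
    saved = gallons_per_year(old_mpg) - gallons_per_year(new_mpg)
    print(f"{label}: saves about {saved:,.0f} gallons per year")
```

As above, the truck switch dominates: about 1,905 gallons a year, versus 300 for the sedan and 200 for the hybrid.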

All right, let’s assume we’ve done all that: The whole grid is nuclear, and everyone is a vegetarian driving an electric car. That’s a good start. But we can’t stop there. Because of the feedback loops involved, if we only reduce our emissions—even to near zero—the amount of carbon dioxide will continue to increase for decades. We need to somehow take out the carbon that is already there, which brings me to strategy 2, geoengineering.

2A. There are some exotic proposals out there for geoengineering (putting sulfur into the air to block out the Sun; what could possibly go wrong?), and maybe we’ll end up using some of them. I think iron fertilization of the oceans is one of the more promising options. But we need to be careful to make sure we actually know what these projects will do; we got into this mess by doing things without appreciating their long-run environmental impact, so let’s not make the same mistake again.

2B. But really, the most effective form of geoengineering is simply reforestation. Trees are very good at capturing carbon from the atmosphere; it’s what they evolved to do. So let’s plant trees—lots of trees. Many countries already have net positive forestation (such as the US as a matter of fact), but the world still has net deforestation, and that needs to be reversed.

But even if we do all that, at this point we probably can’t do enough fast enough to actually stop climate change from causing damage. After we’ve done our best to slow it down, we’re still going to need to respond to its effects and find ways to minimize the harm. That’s strategy 3, adaptation.

3A. Coastal regions around the world are going to have to turn into the Netherlands, surrounded by dikes and polders. First World countries already have the resources to do this, and will most likely do it on their own (many cities already have plans to); but other countries need to be given the resources to do it. We’re responsible for most of the emissions, and we have the most wealth, so we should pick up the tab for most of the adaptation.

3B. Some places aren’t going to be worth saving—so that means saving the people, by moving them somewhere else. We’re going to have global refugee crises, and we need to prepare for them, not in the usual way of “How can I clear my conscience while xenophobically excluding these people?” but by welcoming them with open arms. We are going to need to resettle tens of millions—possibly hundreds of millions—of people, and we need a process for doing that efficiently and integrating these people into the societies they end up living in. We must stop presuming that closed borders are the default and realize that the burden of proof was always on anyone who says that people should have different rights based on whether they were born on the proper side of an imaginary line. If open borders are utopian, then it is utopian we must be.

The bad news is that even if we do all these things, millions of people are still going to die from climate change—but a lot fewer millions than would if we didn’t.

And the really good news is that people are finally starting to do these things. It took a lot longer than it should have, and there are still a lot of holdouts; but significant progress is already being made. There are a lot of reasons to be hopeful.