Lukewarm support is a lot better than opposition

July 23, JDN 2457593

Depending on your preconceptions, this statement may seem either eminently trivial or offensively wrong: Lukewarm support is a lot better than opposition.

I’ve always been in the “trivial” camp, so it has taken me a while to really understand where people are coming from when they say things like the following.

From a civil rights activist blogger (“POC” being “person of color” in case you didn’t know):

Many of my POC friends would actually prefer to hang out with an Archie Bunker-type who spits flagrantly offensive opinions, rather than a colorblind liberal whose insidious paternalism, dehumanizing tokenism, and cognitive indoctrination ooze out between superficially progressive words.

From the Daily Kos:

Right-wing racists are much more honest, and thus easier to deal with, than liberal racists.

From a Libertarian blogger:

I can deal with someone opposing me because of my politics. I can deal with someone who attacks me because of my religious beliefs. I can deal with open hostility. I know where I stand with people like that.

They hate me or my actions for (insert reason here). Fine, that is their choice. Let’s move onto the next bit. I’m willing to live and let live if they are.

But I don’t like someone buttering me up because they need my support, only to drop me the first chance they get. I don’t need sweet talk to distract me from the knife at my back. I don’t need someone promising the world just so they can get a boost up.

In each of these cases, people are expressing a preference for dealing with someone who actively opposes them, rather than someone who mostly supports them. That’s really weird.

That lukewarm support is better than opposition is essentially a mathematical theorem. In a democracy or anything resembling one, if a majority of the population supports you, even if they are all lukewarm, you win; if a majority of the population opposes you, even if the remaining minority is extremely committed to your cause, you lose.

Yes, okay, it does get slightly more complicated than that, as in most real-world democracies small but committed interest groups actually can pressure policy more than lukewarm majorities (the special interest effect); but even then, you are talking about the choice between no special interests and a special interest actively against you.

There is a valid question of whether it is more worthwhile to build a small, committed coalition or a large, lukewarm one; but at the individual level, it is absolutely undeniable that someone supporting you is better for you than someone opposing you, full stop. I mean that in the same sense that the Pythagorean theorem is undeniable; it’s a theorem, it has to be true.

If you had the opportunity to immediately replace every single person who opposes you with someone who supports you but is lukewarm about it, you’d be insane not to take it. Indeed, this is basically how all social change actually happens: Committed supporters persuade committed opponents to become lukewarm supporters, until they get a majority and start winning policy votes.

If this is indeed so obvious and undeniable, why are there so many people… trying to deny it?

I came to realize that there is a deep psychological effect at work here. I could find very little in the literature describing this effect, which I’m going to call heretic effect (though the literature on betrayal aversion, several examples of which are linked in this sentence, is at least somewhat related).

Heretic effect is the deeply-ingrained sense human beings tend to have (as part of the overall tribal paradigm) that one of the worst things you can possibly do is betray your tribe. It is worse than being in an enemy tribe, worse even than murdering someone. The one absolutely inviolable principle is that you must side with your tribe.

This is one of the biggest barriers to police reform, by the way: The Blue Wall of Silence is the result of police officers identifying themselves as a tight-knit tribe and refusing to betray one of their own for anything. I think the best option for convincing police officers to support reform is to reframe violations of police conduct as themselves betrayals—the betrayal is not the IA taking away your badge, the betrayal is you shooting an unarmed man because he was Black.

Heretic effect is a particular form of betrayal aversion, where we treat those who are similar to our tribe but not quite part of it as the very worst sort of people, worse than even our enemies, because at least our enemies are not betrayers. In fact it isn’t really betrayal, but it feels like betrayal.

I call it “heretic effect” because of the way that exclusivist religions (including all the Abrahamic religions, and especially Christianity and Islam) focus so much of their energy on rooting out “heretics”, people who almost believe the same as you do but not quite. The Spanish Inquisition wasn’t targeted at Buddhists or even Muslims; it was targeted at Christians who slightly disagreed with Catholicism. Why? Because while Buddhists might be the enemy, Protestants were betrayers. You can still see this in the way that Muslim societies treat “apostates”, those who once believed in Islam but don’t anymore. Indeed, the very fact that Christianity and Islam are at each other’s throats, rather than Hinduism and atheism, shows that it’s the people who almost agree with you that really draw your hatred, not the people whose worldview is radically distinct.

This is the effect that makes people dislike lukewarm supporters; like heresy, lukewarm support feels like betrayal. You can clearly hear that in the last quote: “I don’t need sweet talk to distract me from the knife at my back.” Believe it or not, Libertarians, my support for replacing the social welfare state with a basic income, decriminalizing drugs, and dramatically reducing our incarceration rate is not deception. Nor do I think I’ve been particularly secretive about my desire to make taxes more progressive and environmental regulations stronger, the things you absolutely don’t agree with. Agreeing with you on some things but not on other things is not in fact the same thing as lying to you about my beliefs or infiltrating and betraying your tribe.

That said, I do sort of understand why it feels that way. When I agree with you on one thing (decriminalizing cannabis, for instance), it sends you a signal: “This person thinks like me.” You may even subconsciously tag me as a fellow Libertarian. But then I go and disagree with you on something else that’s just as important (strengthening environmental regulations), and it feels to you like I have worn your Libertarian badge only to stab you in the back with my treasonous environmentalism. I thought you were one of us!

Similarly, if you are a social justice activist who knows all the proper lingo and is constantly aware of “checking your privilege”, and I start by saying, yes, racism is real and terrible, and we should definitely be working to fight it, but then I question something about your language and approach, that feels like a betrayal. At least if I’d come in wearing a Trump hat you could have known which side I was really on. (And indeed, I have had people unfriend me or launch into furious rants at me for questioning the orthodoxy in this way. And sure, it’s not as bad as actually being harassed on the street by bigots—a thing that has actually happened to me, by the way—but it’s still bad.)

But if you can resist this deep-seated impulse and really think carefully about what’s happening here, agreeing with you partially clearly is much better than not agreeing with you at all. Indeed, there’s a fairly smooth function there, wherein the more I agree with your goals the more our interests are aligned and the better we should get along. It’s not completely smooth, because certain things are sort of package deals: I wouldn’t want to eliminate the social welfare system without replacing it with a basic income, whereas many Libertarians would. I wouldn’t want to ban fracking unless we had established a strong nuclear infrastructure, but many environmentalists would. But on the whole, more agreement is better than less agreement—and really, even these examples are actually surface-level results of deeper disagreement.

Getting this reaction from social justice activists is particularly frustrating, because I am on your side. Bigotry corrupts our society at a deep level and holds back untold human potential, and I want to do my part to undermine and hopefully one day destroy it. When I say that maybe “privilege” isn’t the best word to use and warn you about not implicitly ascribing moral responsibility across generations, this is not me being a heretic against your tribe; this is a strategic policy critique. If you are writing a letter to the world, I’m telling you to leave out paragraph 2 and correcting your punctuation errors, not crumpling up the paper and throwing it into a fire. I’m doing this because I want you to win, and I think that your current approach isn’t working as well as it should. Maybe I’m wrong about that—maybe paragraph 2 really needs to be there, and you put that semicolon there on purpose—in which case, go ahead and say so. If you argue well enough, you may even convince me; if not, this is the sort of situation where we can respectfully agree to disagree. But please, for the love of all that is good in the world, stop saying that I’m worse than the guys in the KKK hoods. Resist that feeling of betrayal so that we can have a constructive critique of our strategy. Don’t do it for me; do it for the cause.

“The cake is a lie”: The fundamental distortions of inequality

July 13, JDN 2457583

Inequality of wealth and income, especially when it is very large, fundamentally and radically distorts outcomes in a capitalist market. I’ve already alluded to this matter in previous posts on externalities and marginal utility of wealth, but it is so important I think it deserves to have its own post. In many ways this marks a paradigm shift: You can’t think about economics the same way once you realize it is true.

To motivate what I’m getting at, I’ll expand upon an example from a previous post.

Suppose there are only two goods in the world; let’s call them “cake” (K) and “money” (M). Then suppose there are three people, Baker, who makes cakes, Richie, who is very rich, and Hungry, who is very poor. Furthermore, suppose that Baker, Richie and Hungry all have exactly the same utility function, which exhibits diminishing marginal utility in cake and money. To make it more concrete, let’s suppose that this utility function is logarithmic, specifically: U = 10*ln(K+1) + ln(M+1)

The only difference between them is in their initial endowments: Baker starts with 10 cakes, Richie starts with $100,000, and Hungry starts with $10.

Therefore their starting utilities are:

U(B) = 10*ln(10+1) + ln(0+1) = 23.98

U(R) = 10*ln(0+1) + ln(100,000+1) = 11.51

U(H) = 10*ln(0+1) + ln(10+1) = 2.40

Thus, the total happiness is the sum of these: U = 37.89
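For the skeptical, these figures are easy to check; here is a quick Python verification of the setup (the function and variable names are my own):

```python
import math

def utility(cake, money):
    """The shared utility function: U = 10*ln(K+1) + ln(M+1)."""
    return 10 * math.log(cake + 1) + math.log(money + 1)

# Initial endowments: Baker has 10 cakes, Richie has $100,000, Hungry has $10.
u_baker = utility(10, 0)        # 10*ln(11)
u_richie = utility(0, 100_000)  # ln(100,001)
u_hungry = utility(0, 10)       # ln(11)
total = u_baker + u_richie + u_hungry

print(round(u_baker, 2), round(u_richie, 2), round(u_hungry, 2), round(total, 2))
# 23.98 11.51 2.4 37.89
```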

Now let’s ask two very simple questions:

1. What redistribution would maximize overall happiness?
2. What redistribution will actually occur if the three agents trade rationally?

If multiple agents have the same diminishing marginal utility function, it’s actually a simple and deep theorem that the total will be maximized if they split the wealth exactly evenly. In the following blockquote I’ll prove the simplest case, which is two agents and one good; it’s an incredibly elegant proof:

Given: for all x, f(x) > 0, f'(x) > 0, f''(x) < 0.

Maximize: f(x) + f(A-x) for fixed A

Setting the derivative to zero:

f'(x) - f'(A-x) = 0

f'(x) = f'(A-x)

Since f''(x) < 0, the second derivative of the objective, f''(x) + f''(A-x), is negative, so this critical point is a maximum.

Since f''(x) < 0, f' is strictly decreasing; therefore f' is injective.

x = A-x

x = A/2

QED

This can be generalized to any number of agents, and for multiple goods. Thus, in this case overall happiness is maximized if the cakes and money are both evenly distributed, so that each person gets 3 1/3 cakes and $33,336.66.

The total utility in that case is:

3 * (10*ln(10/3+1) + ln(33,336.66+1)) = 3 * (14.66 + 10.414) = 3 * 25.074 = 75.22
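As a numerical sanity check on the theorem (my own addition, not part of the original argument), we can verify that random transfers away from the even split never increase total utility:

```python
import math
import random

def utility(cake, money):
    return 10 * math.log(cake + 1) + math.log(money + 1)

def total_utility(allocation):
    # allocation: a list of (cake, money) pairs, one per person
    return sum(utility(k, m) for k, m in allocation)

even_split = [(10 / 3, 100_010 / 3)] * 3
best = total_utility(even_split)

# Transfer a random amount of cake and money between two agents;
# by concavity of the utility function, total utility can only fall.
random.seed(0)
for _ in range(1000):
    dk = random.uniform(-1, 1)
    dm = random.uniform(-1000, 1000)
    perturbed = [(10 / 3 + dk, 100_010 / 3 + dm),
                 (10 / 3 - dk, 100_010 / 3 - dm),
                 (10 / 3, 100_010 / 3)]
    assert total_utility(perturbed) <= best

print(round(best, 2))  # 75.23
```

(Exact arithmetic gives 75.23; the 75.22 above comes from rounding the intermediate terms.)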

That’s considerably better than our initial distribution (almost twice as good). Now, how close do we get by rational trade?

Each person is willing to trade up until the point where their marginal utility of cake is equal to their marginal utility of money. The price of cake will be set by the respective marginal utilities.

In particular, let’s look at the trade that will occur between Baker and Richie. They will trade until their marginal rate of substitution is the same.

The actual algebra involved is obnoxious (if you’re really curious, here are some solved exercises of similar trade problems), so let’s just skip to the end. (I rushed through, so I’m not actually totally sure I got it right, but to make my point the precise numbers aren’t important.)

Basically what happens is that Richie pays an exorbitant price of $10,000 per cake, buying half the cakes with half of his money.

Baker’s new utility and Richie’s new utility are thus the same:

U(R) = U(B) = 10*ln(5+1) + ln(50,000+1) = 17.92 + 10.82 = 28.74

What about Hungry? Yeah, well, he doesn’t have $10,000. If cakes are infinitely divisible, he can buy up to 1/1000 of a cake. But it turns out that even that isn’t worth doing (it would cost too much for what he gains from it), so he may as well buy nothing, and his utility remains 2.40.
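Taking the text’s $10,000-per-cake price as given (I haven’t re-solved the equilibrium here), a quick check confirms both the post-trade utilities and that Hungry really is better off buying nothing:

```python
import math

def utility(cake, money):
    return 10 * math.log(cake + 1) + math.log(money + 1)

price = 10_000  # dollars per cake

# Richie buys 5 of Baker's 10 cakes; both end up with 5 cakes and $50,000.
u_baker = utility(5, 50_000)
u_richie = utility(5, 100_000 - 5 * price)

# Hungry's two options: spend all $10 on 1/1000 of a cake, or keep the money.
u_hungry_buys = utility(10 / price, 0)
u_hungry_keeps = utility(0, 10)
u_hungry = max(u_hungry_buys, u_hungry_keeps)

total = u_baker + u_richie + u_hungry
print(round(u_baker, 2), round(u_hungry, 2), round(total, 2))
# 28.74 2.4 59.87
```

(The total prints as 59.87 rather than 59.88 below because the text sums the already-rounded individual utilities.)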

Hungry wanted cake just as much as Richie did, and because Richie already has so much more money, Hungry would have gotten more happiness from each new bite. Neoclassical economists promised him that markets were efficient and optimal, and so he thought he’d get the cake he needs—but the cake is a lie.

The total utility is therefore:

U = U(B) + U(R) + U(H)

U = 28.74 + 28.74 + 2.40

U = 59.88

Note three things about this result: First, it is more than where we started at 37.89—trade increases utility. Second, both Richie and Baker are better off than they were—trade is Pareto-improving. Third, the total is less than the optimal value of 75.22—trade is not utility-maximizing in the presence of inequality. This is a general theorem that I could prove formally, if I wanted to bore and confuse all my readers. (Perhaps someday I will try to publish a paper doing that.)

This result is incredibly radical—it basically goes against the core of neoclassical welfare theory, or at least of all its applications to real-world policy—so let me be absolutely clear about what I’m saying, and what assumptions I had to make to get there.

I am saying that if people start with different amounts of wealth, the trades they would willfully engage in, acting purely under their own self-interest, would not maximize the total happiness of the population. Redistribution of wealth toward equality would increase total happiness.

First, I had to assume that we could simply redistribute goods however we like without affecting the total amount of goods. This is wildly unrealistic, which is why I’m not actually saying we should reduce inequality to zero (as would follow if you took this result completely literally). Ironically, this is an assumption that most neoclassical welfare theory agrees with—the Second Welfare Theorem only makes any sense in a world where wealth can be magically redistributed between people without any harmful economic effects. If you weaken this assumption, what you find is basically that we should redistribute wealth toward equality, but beware of the tradeoff between too much redistribution and too little.

Second, I had to assume that there’s such a thing as “utility”—specifically, interpersonally comparable cardinal utility. In other words, I had to assume that there’s some way of measuring how much happiness each person has, and meaningfully comparing them so that I can say whether taking something from one person and giving it to someone else is good or bad in any given circumstance.

This is the assumption neoclassical welfare theory generally does not accept; instead they use ordinal utility, on which we can only say whether things are better or worse, but never by how much. Thus, their only way of determining whether a situation is better or worse is Pareto efficiency, which I discussed in a post a couple years ago. The change from the situation where Baker and Richie trade and Hungry is left in the lurch to the situation where all share cake and money equally in socialist utopia is not a Pareto-improvement. Richie and Baker are slightly worse off with 25.07 utilons in the latter scenario, while they had 28.74 utilons in the former.

Third, I had to assume selfishness—which is again fairly unrealistic, but again not something neoclassical theory disagrees with. If you weaken this assumption and say that people are at least partially altruistic, you can get the result where instead of buying things for themselves, people donate money to help others out, and eventually the whole system achieves optimal utility by willful actions. (It depends just how altruistic people are, as well as how unequal the initial endowments are.) This actually is basically what I’m trying to make happen in the real world—I want to show people that markets won’t do it on their own, but we have the chance to do it ourselves. But even then, it would go a lot faster if we used the power of government instead of waiting on private donations.

Also, I’m ignoring externalities, which are a different type of market failure which in no way conflicts with this type of failure. Indeed, there are three basic functions of government in my view: One is to maintain security. The second is to cancel externalities. The third is to redistribute wealth. The DOD, the EPA, and the SSA, basically. One could also add macroeconomic stability as a fourth core function—the Fed.

One way to escape my theorem would be to deny interpersonally comparable utility, but this makes measuring welfare in any way (including the usual methods of consumer surplus and GDP) meaningless, and furthermore results in the ridiculous claim that we have no way of being sure whether Bill Gates is happier than a child starving and dying of malaria in Burkina Faso, because they are two different people and we can’t compare different people. Far more reasonable is not to believe in cardinal utility, meaning that we can say an extra dollar makes you better off, but we can’t put a number on how much.

And indeed, the difficulty of even finding a unit of measure for utility would seem to support this view: Should I use QALY? DALY? A Likert scale from 0 to 10? There is no known measure of utility that is without serious flaws and limitations.

But it’s important to understand just how strong your denial of cardinal utility needs to be in order for this theorem to fail. It’s not enough that we can’t measure precisely; it’s not even enough that we can’t measure with current knowledge and technology. It must be fundamentally impossible to measure. It must be literally meaningless to say that taking a dollar from Bill Gates and giving it to the starving Burkinabe would do more good than harm, as if you were asserting that triangles are greener than schadenfreude.

Indeed, the whole project of welfare theory doesn’t make a whole lot of sense if all you have to work with is ordinal utility. Yes, in principle there are policy changes that could make absolutely everyone better off, or make some better off while harming absolutely no one; and the Pareto criterion can indeed tell you that those would be good things to do.

But in reality, such policies almost never exist. In the real world, almost anything you do is going to harm someone. The Nuremberg trials harmed Nazi war criminals. The invention of the automobile harmed horse trainers. The discovery of scientific medicine took jobs away from witch doctors. Inversely, almost any policy is going to benefit someone. The Great Leap Forward was a pretty good deal for Mao. The purges advanced the self-interest of Stalin. Slavery was profitable for plantation owners. So if you can only evaluate policy outcomes based on the Pareto criterion, you are literally committed to saying that there is no difference in welfare between the Great Leap Forward and the invention of the polio vaccine.

One way around it (that might actually be a good kludge for now, until we get better at measuring utility) is to broaden the Pareto criterion: We could use a majoritarian criterion, where you care about the number of people benefited versus harmed, without worrying about magnitudes—but this can lead to Tyranny of the Majority. Or you could use the Difference Principle developed by Rawls: find an ordering where we can say that some people are better or worse off than others, and then make the system so that the worst-off people are benefited as much as possible. I can think of a few cases where I wouldn’t want to apply this criterion (essentially they are circumstances where autonomy and consent are vital), but in general it’s a very good approach.

Neither of these depends upon cardinal utility, so have you escaped my theorem? Well, no, actually. You’ve weakened it, to be sure—it is no longer a statement about the fundamental impossibility of welfare-maximizing markets. But applied to the real world, people in Third World poverty are obviously the worst off, and therefore worthy of our help by the Difference Principle; and there are an awful lot of them and very few billionaires, so majority rule says take from the billionaires. The basic conclusion that it is a moral imperative to dramatically reduce global inequality remains—as does the realization that the “efficiency” and “optimality” of unregulated capitalism is a chimera.

Should we give up on growth?

JDN 2457572

Recently I read this article published by the Post Carbon Institute, “How to Shrink the Economy without Crashing It”, which has been going around environmentalist circles. (I posted on Facebook that I’d answer it in more detail, so here goes.)

This is the far left view on climate change, which is wrong, but not nearly as wrong as even the “mainstream” right-wing view that climate change is not a serious problem and we should continue with business as usual. Most of the Republicans who ran for President this year didn’t believe in using government action to fight climate change, and Donald Trump doesn’t even believe it exists.

The article’s core message, however, is clearly correct:

We know this because Global Footprint Network, which methodically tracks the relevant data, informs us that humanity is now using 1.5 Earths’ worth of resources.

We can temporarily use resources faster than Earth regenerates them only by borrowing from the future productivity of the planet, leaving less for our descendants. But we cannot do this for long.

To be clear, “using 1.5 Earths” is not as bad as it sounds; spending is allowed to exceed income at times, as long as you have reason to think that future income will exceed future spending, and this is true not just of money but also of natural resources. You can in fact “borrow from the future”, provided you do actually have a plan to pay it back. And indeed there has been some theoretical work by environmental economists suggesting that we are rightly still in the phase of net ecological dissaving, and won’t enter the phase of net ecological saving until the mid-21st century when our technology has made us two or three times as productive. This optimal path is defined by a “weak sustainability” condition where total real wealth never falls over time, so any natural wealth depleted is replaced by at least as much artificial wealth.

Of course some things can’t be paid back; while forests depleted can be replanted, if you drive species to extinction, only very advanced technology could restore them. And we are driving thousands of species to extinction every single year. Even if we should be optimally dissaving, we are almost certainly depleting natural resources too fast, and depleting natural resources that will be difficult if not impossible to later restore. In that sense, the Post Carbon Institute is right: We must change course toward ecological sustainability.

Unfortunately, their specific ideas of how to do so leave much to be desired. Beyond ecological sustainability, they really argue for two propositions: one is radical but worth discussing, but the other is totally absurd.

The absurd claim is that we should somehow force the world to de-urbanize and regress into living in small farming villages. To show this is a bananaman and not a strawman, I quote:

8. Re-localize. One of the difficulties in the transition to renewable energy is that liquid fuels are hard to substitute. Oil drives nearly all transportation currently, and it is highly unlikely that alternative fuels will enable anything like current levels of mobility (electric airliners and cargo ships are non-starters; massive production of biofuels is a mere fantasy). That means communities will be obtaining fewer provisions from far-off places. Of course trade will continue in some form: even hunter-gatherers trade. Re-localization will merely reverse the recent globalizing trade trend until most necessities are once again produced close by, so that we—like our ancestors only a century ago—are once again acquainted with the people who make our shoes and grow our food.

9. Re-ruralize. Urbanization was the dominant demographic trend of the 20th century, but it cannot be sustained. Indeed, without cheap transport and abundant energy, megacities will become increasingly dysfunctional. Meanwhile, we’ll need lots more farmers. Solution: dedicate more societal resources to towns and villages, make land available to young farmers, and work to revitalize rural culture.

First of all: Are electric cargo ships non-starters? The Ford-class aircraft carrier is electrically powered, specifically by nuclear reactors. Nuclear-powered cargo ships would raise a number of issues in terms of practicality, safety, and regulation, but they aren’t fundamentally infeasible. Massive efficient production of biofuels is a fantasy as long as the energy to do it is provided by coal power, but not if it’s provided by nuclear. Perhaps this author’s concept of “infeasible” really just means “infeasible if I can’t get over my irrational fear of nuclear power”. Even electric airliners are not necessarily out of the question; NASA has been experimenting with electric aircraft.

The most charitable reading I can give of this (in my terminology of argument “men”, I’m trying to make a banana out of iron), is as promoting slightly deurbanizing and going back to more like say the 1950s United States, with 64% of people in cities instead of 80% today. Even then this makes less than no sense, as higher urbanization is associated with lower per-capita ecological impact, which frankly shouldn’t even be surprising because cities have such huge economies of scale. Instead of everyone needing a car to get around in the suburbs, we can all share a subway system in the city. If that subway system is powered by a grid of nuclear, solar, and wind power, it could produce essentially zero carbon emissions—which is absolutely impossible for rural or suburban transportation. Urbanization is also associated with slower population growth (or even population decline), and indeed the reason population growth is declining is that rising standard of living and greater urbanization have reduced birth rates and will continue to do so as poor countries reach higher levels of development. Far from being a solution to ecological unsustainability, deurbanization would make it worse.

And that’s not even getting into the fact that you would have to force urban white-collar workers to become farmers, because if we wanted to be farmers we already would be (the converse is not as true), and now you’re actually talking about some kind of massive forced labor-shift policy like the Great Leap Forward. Normally I’m annoyed when people accuse environmentalists of being totalitarian communists, but in this case, I think the accusation might be onto something.

Moving on, the radical but not absurd claim is that we must turn away from economic growth and even turn toward economic shrinkage:

One way or another, the economy (and here we are talking mostly about the economies of industrial nations) must shrink until it subsists on what Earth can provide long-term.

[…]

If nothing is done deliberately to reverse growth or pre-adapt to inevitable economic stagnation and contraction, the likely result will be an episodic, protracted, and chaotic process of collapse continuing for many decades or perhaps centuries, with innumerable human and non-human casualties.

I still don’t think this is right, but I understand where it’s coming from, and like I said it’s worth talking about.

The biggest mistake here lies in assuming that GDP is directly correlated to natural resource depletion, so that the only way to reduce natural resource depletion is to reduce GDP. This is not even remotely true; indeed, countries vary almost as much in their GDP-per-carbon-emission ratio as they do in their per-capita GDP. As usual, #ScandinaviaIsBetter; Norway and Sweden produce about $8,000 in GDP per ton of carbon, while the US produces only about $2,000 per ton. Both poor and rich countries can be found among both the inefficient and the efficient. Saudi Arabia is very rich and produces about $900 per ton, while Liberia is exceedingly poor and produces about $800 per ton. I already mentioned how Norway produces $8,000 per ton, and they are as rich as Saudi Arabia. Yet above them is Mali, which produces almost $11,000 per ton, and is as poor as Liberia. Other notable facts: France is head and shoulders above the UK and Germany at almost $6,000 per ton instead of $4,300 and $3,600 respectively—because France runs almost entirely on nuclear power.

So the real conclusion to draw from this is not that we need to shrink GDP, but that we need to make GDP more like how they do it in Norway or at least how they do it in France, rather than how we do in the US, and definitely not how they do it in Saudi Arabia. Total world emissions are currently about 36 billion tons per year, producing about $108 trillion in GDP, averaging about $3,000 of GDP per ton of carbon emissions. If we could raise the entire world to the ecological efficiency of Norway, we could double world GDP and still be producing less CO2 than we currently are. Turning the entire planet into a bunch of Norways would indeed raise CO2 output, by about a factor of 2; but it would raise standard of living by a factor of 5, and indeed bring about a utopian future with neither war nor hunger. Compare this to the prospect of cutting world GDP in half, but producing it as inefficiently as in Saudi Arabia: This would actually increase global CO2 emissions, almost as much as turning every country into Norway.
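The back-of-the-envelope arithmetic here can be laid out explicitly (all figures are the rough ones quoted above):

```python
# Rough figures from the text: GDP per ton of CO2, in dollars.
efficiency = {"Norway": 8_000, "US": 2_000, "Saudi Arabia": 900,
              "France": 6_000, "World average": 3_000}

world_emissions = 36e9  # tons of CO2 per year
world_gdp = world_emissions * efficiency["World average"]  # ≈ $108 trillion

# Scenario 1: double world GDP at Norwegian efficiency.
emissions_doubled_norway = 2 * world_gdp / efficiency["Norway"]

# Scenario 2: raise the world standard of living 5x at Norwegian efficiency.
emissions_planet_of_norways = 5 * world_gdp / efficiency["Norway"]

# Scenario 3: halve world GDP, but produce it at Saudi efficiency.
emissions_half_gdp_saudi = 0.5 * world_gdp / efficiency["Saudi Arabia"]

print(world_gdp / 1e12, emissions_doubled_norway / 1e9,
      emissions_planet_of_norways / 1e9, emissions_half_gdp_saudi / 1e9)
# 108.0 27.0 67.5 60.0
```

Scenario 1 comes in well under today’s 36 billion tons; Scenario 2 is just under double today’s emissions; and Scenario 3, despite halving GDP, still emits far more than today, nearly as much as Scenario 2.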

But ultimately we will in fact need to slow down or even end economic growth. I ran a little model for you, which shows a reasonable trajectory for global economic growth.

This graph shows the growth rate in productivity slowly declining, along with a much more rapidly declining GDP growth:

[Figure: Solow_growth]

This graph shows the growth trajectory for total real capital and GDP:

[Figure: Solow_capital]

And finally, this is the long-run trend for GDP graphed on a log scale:

[Figure: Solow_logGDP]

The units are arbitrary, though it’s not unreasonable to imagine them as being years and hundreds of dollars in per-capita GDP. If that is indeed what you imagine them to be, my model shows us the Star Trek future: In about 300 years, we rise from a per-capita GDP of $10,000 to one of $165,000—from a world much like today to a world where everyone is a millionaire.

Notice that the growth rate slows down a great deal fairly quickly; by the end of 100 years (i.e., the end of the 21st century), growth has slowed from its peak over 10% to just over 2% per year. By the end of the 300-year period, the growth rate is a crawl of only 0.1%.

Of course this model is very simplistic, but I chose it for a very specific reason: This is not a radical left-wing environmentalist model involving “limits to growth” or “degrowth”. This is the Solow-Swan model, the paradigm example of neoclassical models of economic growth. It is sometimes in fact called simply “the neoclassical growth model”, because it is that influential. I made one very small change from the usual form, which was to assume that the rate of productivity growth would decline exponentially over time. Since productivity growth is exogenous to the model, this is a very simple change to make; it amounts to saying that productivity-enhancing technology is subject to diminishing returns, which fits recent data fairly well but could be totally wrong if something like artificial intelligence or neural enhancement ever takes off.
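For readers who want to tinker, here is a minimal sketch of this kind of model: standard Solow-Swan capital accumulation with an exogenous productivity growth rate that decays exponentially. All the parameter values are my own illustrative guesses, not the ones behind the graphs above:

```python
import math

# Solow-Swan with exponentially decaying productivity growth (illustrative).
alpha = 0.3    # capital share of output
s     = 0.25   # savings rate
delta = 0.05   # depreciation rate
g0    = 0.03   # initial rate of productivity growth
lam   = 0.01   # decay rate of productivity growth

A, k = 1.0, 1.0           # initial productivity and capital per capita
gdp = []
for t in range(300):
    y = A * k ** alpha    # Cobb-Douglas output per capita
    gdp.append(y)
    k += s * y - delta * k             # capital accumulation
    A *= 1 + g0 * math.exp(-lam * t)   # productivity growth slows over time

growth = [gdp[t + 1] / gdp[t] - 1 for t in range(len(gdp) - 1)]
print(growth[0], growth[-1])   # growth starts fast and slows to a crawl
```

Plotting gdp on a log scale reproduces the qualitative shape of the graphs above: nearly linear early on, flattening out as productivity growth dies away.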

I chose this because many environmentalists seem to think that economists have this delusional belief that we can maintain a rate of economic growth equal to today indefinitely. David Attenborough famously said “Anyone who believes in indefinite growth in anything physical, on a physically finite planet, is either mad – or an economist.”

Another physicist argued that if we increase energy consumption 2.3% per year for 400 years, we’d literally boil the Earth. Yes, we would, and no economist I know of believes that this is what will happen. Economic growth doesn’t require energy growth, and we do not think growth can or should continue indefinitely—we just think it can and should continue a little while longer. We don’t think that a world standard of living 1000 times as good as Norway is going to happen; we think that a world standard of living equal to Norway is worth fighting for.

Indeed, we are often the ones trying to explain to leaders that they need to adapt to slower growth rates—this is a particular problem in China, where nationalism and groupthink seem to have convinced many people that 7% annual growth is the result of some brilliant unique feature of the great Chinese system, when it is in fact simply the expected high growth rate for an economy that is very poor and still catching up by establishing a capital base. (It’s not so much what they are doing right now as what they were doing wrong before. Just as you feel a lot better when you stop hitting yourself in the head, countries tend to grow quite fast after they transition out of horrifically terrible economic policy—and it doesn’t get much more terrible than Mao.) Even a lot of the IMF projections are now believed to be too optimistic, because they didn’t account for how China was fudging the numbers and rapidly depleting natural resources.

Some of the specific policies recommended in the article are reasonable, while others go too far.

1. Energy: cap, reduce, and ration it. Energy is what makes the economy go, and expanded energy consumption is what makes it grow. Climate scientists advocate capping and reducing carbon emissions to prevent planetary disaster, and cutting carbon emissions inevitably entails reducing energy from fossil fuels. However, if we aim to shrink the size of the economy, we should restrain not just fossil energy, but all energy consumption. The fairest way to do that would probably be with tradable energy quotas.

I strongly support cap-and-trade on fossil fuels, but I can’t support it on energy in general, unless we get so advanced that we’re seriously concerned about significantly altering the entropy of the universe. Solar power does not have negative externalities, and therefore should not be taxed or capped.

The shift to renewable energy sources is a no-brainer, and I know of no ecologist and few economists who would disagree.

This one is rich, coming from someone who goes on to argue for nonsensical deurbanization:

However, this is a complicated process. It will not be possible merely to unplug coal power plants, plug in solar panels, and continue with business as usual: we have built our immense modern industrial infrastructure of cities, suburbs, highways, airports, and factories to take advantage of the unique qualities and characteristics of fossil fuels.

How will we make our industrial infrastructure run off a solar grid? Urbanization. When everything is in one place, you can use public transportation and plug everything into the grid. We could replace the interstate highway system with a network of maglev lines, provided that almost everyone lived in major cities that were along those lines. We can’t do that if people move out of cities and go back to being farmers.

Here’s another weird one:

Without continued economic growth, the market economy probably can’t function long. This suggests we should run the transformational process in reverse by decommodifying land, labor, and money.

“Decommodifying money”? That’s like skinning leather or dehydrating water. The whole point of money is that it is a maximally fungible commodity. I support the idea of a land tax to provide a basic income, which could go a long way to decommodifying land and labor; but you can’t decommodify money.

The next one starts off sounding ridiculous, but then gets more reasonable:

4. Get rid of debt. Decommodifying money means letting it revert to its function as an inert medium of exchange and store of value, and reducing or eliminating the expectation that money should reproduce more of itself. This ultimately means doing away with interest and the trading or manipulation of currencies. Make investing a community-mediated process of directing capital toward projects that are of unquestioned collective benefit. The first step: cancel existing debt. Then ban derivatives, and tax and tightly regulate the buying and selling of financial instruments of all kinds.

No, we’re not going to get rid of debt. But should we regulate it more? Absolutely. A ban on derivatives is strong, but shouldn’t be out of the question; it’s not clear that even the most useful derivatives (like interest rate swaps and stock options) bring more benefit than they cause harm.

The next proposal, to reform our monetary system so that it is no longer based on debt, is one I broadly agree with, though you need to be clear about how you plan to do that. Positive Money’s plan to make central banks democratically accountable, establish full-reserve banking, and print money without trying to hide it in arcane accounting mechanisms sounds pretty good to me. Going back to the gold standard or something would be a terrible idea. The article links to a couple of “alternative money theorists”, but doesn’t explain further.

Sooner or later, we absolutely will need to restructure our macroeconomic policy so that 4% or even 2% real growth is no longer the expectation in First World countries. We will need to ensure that constant growth isn’t necessary to maintain stability and full employment.

But I believe we can do that, and in any case we do not want to stop global growth just yet—far from it. We are now on the verge of ending world hunger, and if we manage to do it, it will be from economic growth above all else.

Two terms in marginal utility of wealth

JDN 2457569

This post is going to be a little wonkier than most; I’m actually trying to sort out my thoughts and draw some public comment on a theory that has been dancing around my head for a while. The original idea of separating terms in the marginal utility of wealth was actually suggested by my boyfriend, and from there I’ve been trying to give it some more mathematical precision to see if I can come up with a way to test it experimentally. My thinking is also influenced by a paper Miles Kimball wrote about the distinction between happiness and utility.

There are lots of ways one could conceivably spend money—everything from watching football games to buying refrigerators to building museums to inventing vaccines. But insofar as we are rational (and we are after all about 90% rational), we’re going to try to spend our money in such a way that its marginal utility is approximately equal across various activities. You’ll buy one refrigerator, maybe two, but not seven, because the marginal utility of refrigerators drops off pretty fast; instead you’ll spend that money elsewhere. You probably won’t buy a house that’s twice as large if it means you can’t afford groceries anymore. I don’t think our spending is truly optimal at maximizing utility, but I think it’s fairly good.

Therefore, it doesn’t make much sense to break down marginal utility of wealth into all these different categories—cars, refrigerators, football games, shoes, and so on—because we already do a fairly good job of equalizing marginal utility across all those different categories. I could see breaking it down into a few specific categories, such as food, housing, transportation, medicine, and entertainment (and this definitely seems useful for making your own household budget); but even then, I don’t get the impression that most people routinely spend too much on one of these categories and not enough on the others.

However, I can think of two quite different fundamental motives behind spending money, which I think are distinct enough to be worth separating.

One way to spend money is on yourself, raising your own standard of living, making yourself more comfortable. This would include both football games and refrigerators, really anything that makes your life better. We could call this the consumption motive, or maybe simply the self-directed motive.

The other way is to spend it on other people, which, depending on your personality, can take the form either of philanthropy to help others or of self-aggrandizement to raise your own relative status. It’s also possible to do both at the same time in various combinations; while the Gates Foundation is almost entirely philanthropic and Trump Tower is almost entirely self-aggrandizing, Carnegie Hall falls somewhere in between, being at once a significant contribution to our society and an obvious attempt to bring praise and adulation to its builder. I would also include spending on Veblen goods that are mainly to show off your own wealth and status in this category. We can call this spending the philanthropic/status motive, or simply the other-directed motive.

There is some spending which combines both motives: A car is surely useful, but a Ferrari is mainly for show—but then, a Lexus or a BMW could be either to show off or really because you like the car better. Some form of housing is a basic human need, and bigger, fancier houses are often better, but the main reason one builds mansions in Beverly Hills is to demonstrate to the world that one is fabulously rich. This complicates the theory somewhat, but basically I think the best approach is to try to separate a sort of “spending proportion” on such goods, so that say $20,000 of the Lexus is for usefulness and $15,000 is for show. Empirically this might be hard to do, but theoretically it makes sense.

One of the central mysteries in cognitive economics right now is an apparent paradox: as income increases, self-reported happiness rises very little, if at all—a finding which was recently replicated even in poor countries, where we might not have expected it to hold—and yet self-reported satisfaction continues to rise indefinitely. A number of theories have been proposed to explain this.

This model might just be able to account for that, if by “happiness” we’re really talking about the self-directed motive, and by “satisfaction” we’re talking about the other-directed motive. Self-reported happiness seems to obey a rule that $100 is worth as much to someone with $10,000 as $25 is to someone with $5,000, or $400 to someone with $20,000.

Self-reported satisfaction seems to obey a different rule, such that each unit of additional satisfaction requires a roughly equal proportional increase in income.

By having a utility function with two terms, we can account for both of these effects. Total utility will be u(x), happiness h(x), and satisfaction s(x).

u(x) = h(x) + s(x)

To obey the above rule, happiness must obey harmonic utility, like this, for some constants h0 and r:

h(x) = h0 – r/x

Proof of this is straightforward, though to keep it simple I’ve hand-waved why it’s a power law:

Given

h'(2x) = 1/4 h'(x)

Let

h'(x) = r x^n

h'(2x) = r (2x)^n

r (2x)^n = 1/4 r x^n

n = -2

h'(x) = r/x^2

h(x) = – r x^(-1) + C

h(x) = h0 – r/x
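A quick numeric check (with arbitrary constants, r = 1 and h0 = 0) confirms that this harmonic form reproduces the dollar-equivalence rule above:

```python
# With h(x) = h0 - r/x, a windfall of $25 at $5,000, $100 at $10,000,
# and $400 at $20,000 should all yield (approximately) the same utility gain.
r, h0 = 1.0, 0.0

def h(x):
    return h0 - r / x

def gain(x, d):
    """Utility gain from a windfall of d at wealth x."""
    return h(x + d) - h(x)

print(gain(5_000, 25))     # all three values nearly equal
print(gain(10_000, 100))
print(gain(20_000, 400))
```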

Miles Kimball also has some more discussion on his blog about how a utility function of this form works. (His statement about redistribution at the end is kind of baffling though; sure, dollar for dollar, redistributing wealth from the middle class to the poor would produce a higher gain in utility than redistributing wealth from the rich to the middle class. But neither is as good as redistributing from the rich to the poor, and the rich have a lot more dollars to redistribute.)

Satisfaction, however, must obey logarithmic utility, like this, for some constants s0 and k:

s(x) = s0 + k ln(x)

Proof of this is very simple, almost trivial:

Given

s'(x) = k/x

s(x) = k ln(x) + s0

Both of these functions actually have a serious problem that as x approaches zero, they go to negative infinity. For self-directed utility this almost makes sense (if your real consumption goes to zero, you die), but it makes no sense at all for other-directed utility, and since there are causes most of us would willingly die for, the disutility of dying should be large, but not infinite.

Therefore I think it’s probably better to use x + 1 in place of x:

h(x) = h0 – r/(x+1)

s(x) = s0 + k ln(x+1)

This makes s0 the baseline satisfaction of having no other-directed spending, though the baseline happiness of zero self-directed spending is actually h0 – r rather than just h0. If we want it to be h0, we could use this form instead:

h(x) = h0 + r x/(x+1)

This looks quite different, but actually only differs by a constant.

Therefore, my final answer for the utility of wealth (or possibly income, or spending? I’m not sure which interpretation is best just yet) is actually this:

u(x) = h(x) + s(x)

h(x) = h0 + r x/(x+1)

s(x) = s0 + k ln(x+1)

Marginal utility is then the derivatives of these:

h'(x) = r/(x+1)^2

s'(x) = k/(x+1)

Let’s assign some values to the constants so that we can actually graph these.

Let h0 = s0 = 0, so our baseline is just zero.

Furthermore, let r = k = 1, which would mean that the value of $1 is the same whether spent either on yourself or on others, if $1 is all you have. (This is probably wrong, actually, but it’s the simplest to start with. Shortly I’ll discuss what happens as you vary the ratio k/r.)
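Before the graphs, a quick table (same constants, r = k = 1) shows how differently the two marginal utilities decay:

```python
# h'(x) = r/(x+1)^2 versus s'(x) = k/(x+1); the ratio s'/h' = x + 1,
# so other-directed spending gains relative weight as wealth grows.
r = k = 1.0
for x in [0, 1, 10, 100]:
    h_m = r / (x + 1) ** 2
    s_m = k / (x + 1)
    print(x, h_m, s_m, s_m / h_m)
```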

Here is the result graphed on a linear scale:

Utility_linear

And now, graphed with wealth on a logarithmic scale:

Utility_log

As you can see, self-directed marginal utility drops off much faster than other-directed marginal utility, so the amount you spend on others relative to yourself rapidly increases as your wealth increases. If that doesn’t sound right, remember that I’m including Veblen goods as “other-directed”; when you buy a Ferrari, it’s not really for yourself. While proportional rates of charitable donation do not increase as wealth increases (it’s actually a U-shaped pattern, largely driven by poor people giving to religious institutions), they probably should. (People should really stop giving to religious institutions! Even the good ones aren’t cost-effective, and some are very, very bad.) Furthermore, if you include spending on relative power and status as the other-directed motive, that kind of spending clearly does proportionally increase as wealth increases—gotta keep up with those Joneses.

If r/k = 1, that basically means you value others exactly as much as yourself, which I think is implausible (maybe some extreme altruists do that, and Peter Singer seems to think this would be morally optimal). r/k < 1 would mean you should never spend anything on yourself, which not even Peter Singer believes. I think r/k = 10 is a more reasonable estimate.

For any given value of r/k, there is an optimal ratio of self-directed versus other-directed spending, which can vary based on your total wealth.

Actually deriving what the optimal proportion would be requires a whole lot of algebra in a post that probably already has too much algebra, but the point is, there is one, and it will depend strongly on the ratio r/k, that is, the overall relative importance of self-directed versus other-directed motivation.

Take a look at this graph, which uses r/k = 10.

Utility_marginal

If you only have 2 to spend, you should spend it entirely on yourself: below about W = 2.2 (precisely, sqrt(r/k) - 1, about 2.16), the marginal utility of self-directed spending is higher the whole way up. If you have 3 to spend, you should spend most of it on yourself, but a little bit on other people: once you’ve spent about 2.7 on yourself, there is more marginal utility in spending the remainder on others than on yourself.

If your available wealth is W, you would spend some amount x on yourself, and then W-x on others:

u(x) = h(x) + s(W-x)

u(x) = r x/(x+1) + k ln(W – x + 1)

Then you take the derivative and set it equal to zero to find the local maximum. I’ll spare you the algebra, but this is the result of that optimization:

x = – 1 – r/(2k) + sqrt(r/k) sqrt(2 + W + r/(4k))
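As a sanity check on that closed form, here is a small script (with my own arbitrary choices r/k = 10 and W = 10) comparing it to a brute-force maximization:

```python
import math

# Maximize u(x) = r*x/(x+1) + k*ln(W - x + 1) over x in [0, W] by brute
# force, and compare with the closed-form optimum.
r, k, W = 10.0, 1.0, 10.0

def u(x):
    return r * x / (x + 1) + k * math.log(W - x + 1)

x_star = -1 - r / (2 * k) + math.sqrt(r / k) * math.sqrt(2 + W + r / (4 * k))

grid = [i * W / 100_000 for i in range(100_001)]
x_num = max(grid, key=u)

print(x_star, x_num)   # both about 6.04
```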

As long as k <= r (which more or less means that you care at least as much about yourself as about others—I think this is true of basically everyone) then as long as W > 0 (as long as you have some money to spend) we also have x > 0 (you will spend at least something on yourself).

Below a certain threshold (depending on r/k), the optimal value of x is greater than W, which means that, if possible, you should be receiving donations from other people and spending them on yourself. (Otherwise, just spend everything on yourself.) After that, x < W, which means that you should be donating to others. The proportion that you should be donating smoothly increases as W increases, as you can see on this graph (which uses r/k = 10, a figure I find fairly plausible):

Utility_donation
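To be concrete, here is a sketch of the donation schedule this implies (again using my assumed r/k = 10): below roughly W = sqrt(r/k) - 1, about 2.16, the unconstrained optimum exceeds W and you keep everything, and above it the donated fraction rises smoothly with wealth.

```python
import math

r, k = 10.0, 1.0

def donated_fraction(W):
    # Closed-form optimal self-spending, capped at total wealth W
    x = -1 - r / (2 * k) + math.sqrt(r / k) * math.sqrt(2 + W + r / (4 * k))
    x = min(x, W)
    return (W - x) / W

fractions = [donated_fraction(W) for W in [1, 2, 3, 10, 100, 1000]]
print([round(f, 3) for f in fractions])   # rises from 0 toward 1 with wealth
```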

While I’m sure no one literally does this calculation, most people do seem to have an intuitive sense that you should donate an increasing proportion of your income to others as your income increases, and similarly that you should pay a higher proportion in taxes. This utility function would justify that—which is something that most proposed utility functions cannot do. In most models there is a hard cutoff where you should donate nothing up to the point where your marginal utility is equal to the marginal utility of donating, and then from that point forward you should donate absolutely everything. Maybe a case can be made for that ethically, but psychologically I think it’s a non-starter.

I’m still not sure exactly how to test this empirically. It’s already quite difficult to get people to answer questions about marginal utility in a way that is meaningful and coherent (people just don’t think about questions like “Which is worth more? $4 to me now or $10 if I had twice as much wealth?” on a regular basis). I’m thinking maybe they could play some sort of game where they have the opportunity to make money at the game, but must perform tasks or bear risks to do so, and can then keep the money or donate it to charity. The biggest problem I see with that is that the amounts would probably be too small to really cover a significant part of anyone’s total wealth, and therefore couldn’t cover much of their marginal utility of wealth function either. (This is actually a big problem with a lot of experiments that use risk aversion to try to tease out marginal utility of wealth.) But maybe with a variety of experimental participants, all of whom we get income figures on?

Selling debt goes against everything the free market stands for

JDN 2457555

I don’t think most people—or even most economists—have any concept of just how fundamentally perverse and destructive our financial system has become, and a large chunk of it ultimately boils down to one thing: Selling debt.

Certainly collateralized debt obligations (CDOs), and their meta-form, CDO2s (pronounced “see-dee-oh squareds”), are nothing more than selling debt, and along with credit default swaps (CDS; they are basically insurance, but without those pesky regulations against things like fraud and conflicts of interest) they were directly responsible for the 2008 financial crisis and the ensuing Great Recession and Second Depression.

But selling debt continues in a more insidious way, underpinning the entire debt collection industry, which collects tens of billions of dollars per year through harassment, intimidation, and extortion, especially of the poor and helpless. Frankly, I think what’s most shocking is how little money they make, given the huge number of people they harass and intimidate.

John Oliver did a great segment on debt collections (with a very nice surprise at the end):

But perhaps most baffling to me is the number of people who defend the selling of debt on the grounds that it is a “free market” activity which must be protected from government “interference in personal liberty”. To show this is not a strawman, here’s the American Enterprise Institute saying exactly that.

So let me say this in no uncertain terms: Selling debt goes against everything the free market stands for.

One of the most basic principles of free markets, one of the founding precepts of capitalism laid down by no less than Adam Smith (and before him by great political philosophers like John Locke), is the freedom of contract. This is the good part of capitalism, the part that makes sense, the reason we shouldn’t tear it all down but should instead try to reform it around the edges.

Indeed, the freedom of contract is so fundamental to human liberty that laws can only be considered legitimate insofar as they do not infringe upon it without a compelling public interest. Freedom of contract is right up there with freedom of speech, freedom of the press, freedom of religion, and the right of due process.

The freedom of contract is the right to make agreements, including financial agreements, with anyone you please, and under conditions that you freely and rationally impose in a state of good faith and transparent discussion. Conversely, it is the right not to make agreements with those you choose not to, and to not be forced into agreements under conditions of fraud, intimidation, or impaired judgment.

Freedom of contract is the basis of my right to take on debt, provided that I am honest about my circumstances and I can find a lender who is willing to lend to me. So taking on debt is a fundamental part of freedom of contract.

But selling debt is something else entirely. Far from exercising the freedom of contract, it violates it. When I take out a loan from bank A, and then they turn around and sell that loan to bank B, I suddenly owe money to bank B, but I never agreed to do that. I had nothing to do with their decision to work with bank B as opposed to keeping the loan or selling it to bank C.

Current regulations prohibit banks from “changing the terms of the loan”, but in practice they change them all the time—they can’t change the principal balance, the loan term, or the interest rate, but they can change the late fees, the payment schedule, and lots of subtler things about the loan that can still make a very big difference. Indeed, as far as I’m concerned they have changed the terms of the loan—one of the terms of the loan was that I was to pay X amount to bank A, not that I was to pay X amount to bank B. I may or may not have good reasons not to want to pay bank B—they might be far less trustworthy than bank A, for instance, or have a far worse social responsibility record—and in any case it doesn’t matter; it is my choice whether or not I want anything to do with bank B, whatever my reasons might be.

I take this matter quite personally, for it is by the selling of debt that, in moral (albeit not legal) terms, a British bank stole my parents’ house. Indeed, not just any British bank; it was none other than HSBC, the money launderers for terrorists.

When they first obtained their mortgage, my parents did not actually know that HSBC was quite so evil as to literally launder money for terrorists, but they did already know that HSBC was involved in a great many shady dealings, and they even specifically told their lender that they did not want the loan sold, and that if it was to be sold, it was absolutely never to be sold to HSBC in particular. Their mistake (which was rather like the “mistake” of someone who leaves their car unlocked and has it stolen, or forgets to arm the home alarm system and suffers a burglary) was failing to get this written into the formal contract, instead leaving it as a verbal agreement with the bankers. Such verbal contracts are enforceable under the law, at least in theory; but enforcement would require proof of the verbal contract (and what proof could we provide?), and would probably have cost as much as the house in litigation fees.

Oh, by the way, they were given a subprime interest rate of 8% despite being middle-class professionals with good credit, no doubt to maximize the broker’s closing commission. Most banks reserved such behavior for racial minorities, but apparently this one was equal-opportunity in the worst way. Perhaps my parents were naive to trust bankers any further than they could throw them.

As a result, I think you know what happened next: They sold the loan to HSBC.

Now, had it ended there, with my parents unwittingly forced into supporting a bank that launders money for terrorists, that would have been bad enough. But it assuredly did not.

By a series of subtle and manipulative practices that poked through one loophole after another, HSBC proceeded to raise my parents’ payments higher and higher. One particularly insidious tactic they used was to sit on the checks until just after the due date passed, so they could charge late fees on the payments, then they recapitalized the late fees. My parents caught on to this particular trick after a few months, and started mailing the checks certified so they would be date-stamped; and lo and behold, all the payments were suddenly on time! By several other similarly devious tactics, all of which were technically legal or at least not provable, they managed to raise my parents’ monthly mortgage payments by over 50%.

Note that it was a fixed-rate, fixed-term mortgage. The initial payments—which should have been the payments all along; that’s the point of a fixed-rate, fixed-term mortgage—were under $2000 per month. By the end they were paying over $3000 per month. HSBC forced my parents to overpay on their mortgage by an amount equal to the US individual poverty line, or the per-capita GDP of Peru.

They tried to make the payments, but after being wildly over budget and hit by other unexpected expenses (including defects in the house’s foundation that they had to pay to fix, but because of the “small” amount at stake and the overwhelming legal might of the construction company, no lawyer was willing to sue over), they simply couldn’t do it anymore, and gave up. They gave the house to the bank with a deed in lieu of foreclosure.

And that is the story of how a bank that my parents never agreed to work with, never would have agreed to work with, indeed specifically said they would not work with, still ended up claiming their house—our house, the house I grew up in from the age of 12. Legally, I cannot prove they did anything against the law. (I mean, other than laundered money for terrorists.) But morally, how is this any less than theft? Would we not be victimized less had a burglar broken into our home, vandalized the walls and stolen our furniture?

Indeed, that would probably be covered under our insurance! Where can I buy insurance against the corrupt and predatory financial system? Where are my credit default swaps to pay me when everything goes wrong?

And all of this could have been prevented, if banks simply weren’t allowed to violate our freedom of contract by selling their loans to other banks.

Indeed, the Second Depression could probably have been likewise prevented. Without selling debt, there is no securitization. Without securitization, there is far less leverage. Without leverage, there are no bank failures. Without bank failures, there is no depression. A decade of global economic growth was lost because we allowed banks to sell debt whenever they please.

I have heard the counter-arguments many times:

“But what if banks need the liquidity?” Easy. They can take out their own loans with those other banks. If bank A finds they need more cashflow, they should absolutely feel free to take out a loan from bank B. They can even point to their projected revenues from the mortgage payments we owe them, as a means of repaying that loan. But they should not be able to involve us in that transaction. If you want to trust HSBC, that’s your business (you’re an idiot, but it’s a free country). But you have no right to force me to trust HSBC.

“But banks might not be willing to make those loans, if they knew they couldn’t sell or securitize them!” THAT’S THE POINT. Banks wouldn’t take on all these ridiculous risks in their lending practices that they did (“NINJA loans” and mortgages with payments larger than their buyers’ annual incomes), if they knew they couldn’t just foist the debt off on some Greater Fool later on. They would only make loans they actually expect to be repaid. Obviously any loan carries some risk, but banks would only take on risks they thought they could bear, as opposed to risks they thought they could convince someone else to bear—which is the definition of moral hazard.

“Homes would be unaffordable if people couldn’t take out large loans!” First of all, I’m not against mortgages—I’m against securitization of mortgages. Yes, of course, people need to be able to take out loans. But they shouldn’t be forced to pay those loans to whoever their bank sees fit. If indeed the loss of subprime securitized mortgages made it harder for people to get homes, that’s a problem; but the solution to that problem was never to make it easier for people to get loans they can’t afford—it is clearly either to reduce the price of homes or increase the incomes of buyers. Subsidized housing construction, public housing, changes in zoning regulation, a basic income, lower property taxes, an expanded earned-income tax credit—these are the sort of policies that one implements to make housing more affordable, not “go ahead and let banks exploit people however they want”.

Remember, a regulation against selling debt would protect the freedom of contract. It would remove a way for private individuals and corporations to violate that freedom, like regulations against fraud, intimidation, and coercion. It should be uncontroversial that no one has any right to force you to do business with someone you would not voluntarily do business with, certainly not in a private transaction between for-profit corporations. Maybe that sort of mandate makes sense in rare circumstances by the government, but even then it should really be implemented as a tax, not a mandate to do business with a particular entity. The right to buy what you choose is the foundation of a free market—and implicit in it is the right not to buy what you do not choose.

There are many regulations on debt that do impose upon freedom of contract: As horrific as payday loans are, if someone really honestly knowingly wants to take on short-term debt at 400% APR I’m not sure it’s my business to stop them. And some people may really be in such dire circumstances that they need money that urgently and no one else will lend to them. Insofar as I want payday loans regulated, it is to ensure that they are really lending in good faith—as many surely are not—and ultimately I want to outcompete them by providing desperate people with more reasonable loan terms. But a ban on securitization is like a ban on fraud; it is the sort of law that protects our rights.

The many varieties of argument “men”

JDN 2457552

After several long, intense, and very likely controversial posts in a row, I decided to take a break with a post that is short and fun.

You have probably already heard of a “strawman” argument, but I think there are many more “materials” an argument can be made of which would be useful terms to have, so I have proposed a taxonomy of similar argument “men”. Perhaps this will help others in the future to more precisely characterize where arguments have gone wrong and how they should have gone differently.

For examples of each, I’m using a hypothetical argument about the gold standard, based on the actual arguments I refute in my previous post on the subject.

This is an argument actually given by a proponent of the gold standard, upon which my “men” shall be built:

1) A gold standard is key to achieving a period of sustained, 4% real economic growth.

The U.S. dollar was created as a defined weight of gold and silver in 1792. As detailed in the booklet, The 21st Century Gold Standard (available free at http://agoldenage.com), I co-authored with fellow Forbes.com columnist Ralph Benko, a dollar as good as gold endured until 1971 with the relatively brief exceptions of the War of 1812, the Civil War and Reconstruction, and 1933, the year President Franklin Roosevelt suspended dollar/gold convertibility until January 31, 1934 when the dollar/gold link was re-established at $35 an ounce, a 40% devaluation from the prior $20.67 an ounce. Over that entire 179 years, the U.S. economy grew at a 3.9% average annual rate, including all of the panics, wars, industrialization and a myriad other events. During the post World War II Bretton Woods gold standard, the U.S. economy also grew on average 4% a year.

By contrast, during the 40-years since going off gold, U.S. economic growth has averaged an anemic 2.8% a year. The only 40-year periods in which the economic growth was slower were those ending in the Great Depression, from 1930 to 1940.

2) A gold standard reduces the risk of recessions and financial crises.

Critics of the gold standard point out, correctly, that it would prohibit the Federal Reserve from manipulating interest rates and the value of the dollar in hopes of stimulating demand. In fact, the idea that a paper dollar would lead to a more stable economy was one of the key selling points for abandoning the gold standard in 1971.

However, this power has done far more harm than good. Under the paper dollar, recessions have become more severe and financial crises more frequent. During the post World War II gold standard, unemployment averaged less than 5% and never rose above 7% during a calendar year. Since going off gold, unemployment has averaged more than 6%, and has been above 8% now for nearly 3.5 years.

And now, the argument men:

Fallacious (Bad) Argument Men

These argument “men” are harmful and irrational; they are to be avoided, and destroyed wherever they are found. Maybe in some very extreme circumstances they would be justifiable—but only in circumstances where it is justifiable to be dishonest and manipulative. You can use a strawman argument to convince a terrorist to let the hostages go; you can’t use one to convince your uncle not to vote Republican.

Strawman: The familiar fallacy in which instead of trying to address someone else’s argument, you make up your own fake version of that argument which is easier to defeat. The image is of making an effigy of your opponent out of straw and beating on the effigy to avoid confronting the actual opponent.

You can’t possibly think that going to the gold standard would make the financial system perfect! There will still be corrupt bankers, a banking oligopoly, and an unpredictable future. The gold standard would do nothing to remove these deep flaws in the system.

Hitman: An even worse form of the strawman, in which you misrepresent not only your opponent’s argument, but your opponent themselves, using your distortion of their view as an excuse for personal attacks against their character.

Oh, you would favor the gold standard, wouldn’t you? A rich, middle-aged White man, presumably straight and nominally Christian? You have all the privileges in life, so you don’t care if you take away the protections that less-fortunate people depend upon. You don’t care if other people become unemployed, so long as you don’t have to bear inflation reducing the real value of your precious capital assets.

Conman: An argument for your own view which you don’t actually believe, but expect to be easier to explain or more persuasive to this particular audience than the true reasons for your beliefs.

Back when we were on the gold standard, it was the era of “Robber Barons”. Poverty was rampant. If we go back to that system, it will just mean handing over all the hard-earned money of working people to billionaire capitalists.

Vaporman: Not even an argument, just a forceful assertion of your view that takes the place of an argument.

The gold standard is madness! It makes no sense at all! How can you even think of going back to such a ridiculous monetary system?

Honest (Acceptable) Argument Men

These argument “men” are perfectly acceptable, and should be the normal expectation in honest discourse.

Woodman: The actual argument your opponent made, addressed and refuted honestly using sound evidence.

There is very little evidence that going back to the gold standard would in any way improve the stability of the currency or the financial system. While long-run inflation was very low under the gold standard, this fact obscures the volatility of inflation, which was extremely high; bouts of inflation were followed by bouts of deflation, swinging the value of the dollar up or down as much as 15% in a single year. Nor is there any evidence that the gold standard prevented financial crises, as dozens of financial crises occurred under the gold standard, if anything more often than they have since the full-fiat monetary system was established in 1971.

Bananaman: An actual argument your opponent made that you honestly refute, which nonetheless is so ridiculous that it seems like a strawman, even though it isn’t. Named in “honor” of Ray Comfort’s Banana Argument. Of course, some bananas are squishier than others, and the only one I could find here was at least relatively woody, though still recognizable as a banana:

You said “A gold standard is key to achieving a period of sustained, 4% real economic growth.” based on several distorted, misunderstood, or outright false historical examples. The 4% annual growth in total GDP during the early part of the United States was due primarily to population growth, not a rise in real standard of living, while the rapid growth during WW2 was obviously due to the enormous and unprecedented surge in government spending (and by the way, we weren’t even really on the gold standard during that period). In a blatant No True Scotsman fallacy, you specifically exclude the Great Depression from the “true gold standard” so that you don’t have to admit that the gold standard contributed significantly to the severity of the depression.
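The distinction between total and per-capita growth in that refutation is simple arithmetic: total real GDP growth decomposes into population growth and per-capita growth. A minimal sketch of the decomposition, using made-up illustrative rates rather than actual historical estimates:

```python
# Real GDP growth decomposes as (1 + g_total) = (1 + g_pop) * (1 + g_percap),
# which is approximately g_total = g_pop + g_percap for small rates.
# The rates below are illustrative assumptions, not historical figures.

def per_capita_growth(total_growth, pop_growth):
    """Back out per-capita growth from total growth and population growth."""
    return (1 + total_growth) / (1 + pop_growth) - 1

# If total GDP grows 4%/yr while population grows 3%/yr,
# living standards rise only about 1%/yr.
g = per_capita_growth(0.04, 0.03)
print(f"per-capita growth: {g:.4%}")
```

The point is that a headline 4% figure says little about living standards until you subtract how fast the population itself was growing.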

Middleman: An argument that synthesizes your view and your opponent’s view, in an attempt to find a compromise position that may be acceptable, if not preferred, by all.

Unlike the classical gold standard, the Bretton Woods gold standard in place from 1945 to 1971 was not obviously disastrous. If you want to go back to a system of international exchange rates fixed by gold similar to Bretton Woods, I would consider that a reasonable position to take.

Virtuous (Good) Argument Men

These argument “men” go above and beyond the call of duty; rather than simply seek to win arguments honestly, they actively seek the truth behind the veil of opposing arguments. These cannot be expected in all circumstances, but they are to be aspired to, and commended when found.

Ironman: Your opponent’s actual argument, but improved, with some of its flaws shored up. The same basic thinking as your opponent, but done more carefully, filling in the proper gaps.

The gold standard might not reduce short-run inflation, but it would reduce long-run inflation, making our currency more stable over long periods of time. We would be able to track long-term price trends in goods such as housing and technology much more easily, and people would have an easier time psychologically grasping the real prices of goods as they change during their lifetime. No longer would we hear people complain, “How can you want a minimum wage of $15? As a teenager in 1955, I got paid $3 an hour and I was happy with that!” when that $3 in 1955, adjusted for inflation, is $26.78 in today’s money.
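That inflation adjustment is just a ratio of price levels. A minimal sketch of the calculation, with approximate CPI figures assumed purely for illustration:

```python
# Convert a 1955 dollar amount into today's dollars using a price-index ratio.
# CPI values here are rough annual averages, assumed for illustration only.
CPI_1955 = 26.8
CPI_TODAY = 239.3

def inflation_adjust(amount, cpi_then, cpi_now):
    """Scale a dollar amount from one period's prices to another's."""
    return amount * (cpi_now / cpi_then)

wage_1955 = 3.00
wage_today = inflation_adjust(wage_1955, CPI_1955, CPI_TODAY)
print(f"${wage_1955:.2f} in 1955 is about ${wage_today:.2f} today")
```

With these assumed index values the result lands near the $26.78 cited in the example; the exact figure depends on which year and index you use.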

Steelman: Not the argument your opponent made, but the one they should have made. The best possible argument you are aware of that would militate in favor of their view, the one that sometimes gives you pause about your own opinions, the real and tangible downside of what you believe in.

Tying currency to gold or any other commodity may not be very useful directly, but it could serve one potentially vital function, which is as a commitment mechanism to prevent the central bank from manipulating the currency to enrich themselves or special interests. It may not be the optimal commitment mechanism, but it is a psychologically appealing one for many people, and is also relatively easy to define and keep track of. It is also not subject to as much manipulation as something like nominal GDP targeting or a Taylor Rule, which could be fudged by corrupt statisticians. And while it might cause moderate volatility, it can also protect against the most extreme forms of volatility such as hyperinflation. In countries with very corrupt governments, a gold standard might actually be a good idea, if you could actually enforce it, because it would at least limit the damage that can be done by corrupt central bank officials. Had such a system been in place in Zimbabwe in the 1990s, the hyperinflation might have been prevented. The US is not nearly as corrupt as Zimbabwe, so we probably do not need a gold standard; but it may be wise to recommend the use of gold standards or similar fixed-exchange currencies in Third World countries so that corrupt leaders cannot abuse the monetary system to gain at the expense of their people.

Moral responsibility does not inherit across generations

JDN 2457548

In last week’s post I made a sharp distinction between believing in human progress and believing that colonialism was justified. To make this argument, I relied upon a moral assumption that seems to me perfectly obvious, and probably would to most ethicists as well: Moral responsibility does not inherit across generations, and people are only responsible for their individual actions.

But in fact this principle is not uncontroversial in many circles. When I read utterly nonsensical arguments like this one from the aptly-named Race Baitr saying that White people have no role to play in the liberation of Black people, apparently because our blood is somehow tainted by the crimes of our ancestors, it becomes apparent to me that this principle is not obvious to everyone, and therefore is worth defending. Indeed, many applications of the concept of “White Privilege” seem to ignore this principle, speaking as though racism is not something one does or participates in, but something that one is simply by being born with less melanin. Here’s a Salon interview specifically rejecting the proposition that racism is something one does:

For white people, their identities rest on the idea of racism as about good or bad people, about moral or immoral singular acts, and if we’re good, moral people we can’t be racist – we don’t engage in those acts. This is one of the most effective adaptations of racism over time—that we can think of racism as only something that individuals either are or are not “doing.”

If racism isn’t something one does, then what in the world is it? It’s all well and good to talk about systems and social institutions, but ultimately systems and social institutions are made of human behaviors. If you think most White people aren’t doing enough to combat racism (which sounds about right to me!), say that—don’t make some bizarre accusation that simply by existing we are inherently racist. (Also: We? I’m only 75% White, so am I only 75% inherently racist?) And please, stop redefining the word “racism” to mean something other than what everyone uses it to mean; “White people are snakes” is in fact a racist sentiment (and yes, one I’ve actually heard–indeed, here is the late Muhammad Ali comparing all White people to rattlesnakes, and Huffington Post fawning over him for it).

Racism is clearly more common and typically worse when performed by White people against Black people—but contrary to the claims of some social justice activists the White perpetrator and Black victim are not part of the definition of racism. Similarly, sexism is more common and more severe committed by men against women, but that doesn’t mean that “men are pigs” is not a sexist statement (and don’t tell me you haven’t heard that one). I don’t have a good word for bigotry by gay people against straight people (“heterophobia”?) but it clearly does happen on occasion, and similarly cannot be defined out of existence.

I wouldn’t care so much that you make this distinction between “racism” and “racial prejudice”, except that it’s not the normal usage of the word “racism” and therefore confuses people, and also this redefinition clearly is meant to serve a political purpose that is quite insidious, namely making excuses for the most extreme and hateful prejudice as long as it’s committed by people of the appropriate color. If “White people are snakes” is not racism, then the word has no meaning.

Not all discussions of “White Privilege” are like this, of course; this article from Occupy Wall Street actually does a fairly good job of making “White Privilege” into a sensible concept, albeit still not a terribly useful one in my opinion. I think the useful concept is oppression—the problem here is not how we are treating White people, but how we are treating everyone else. If what privilege gives you is “the freedom to be who you are”, shouldn’t everyone have that?

Almost all the so-called “benefits” or “perks” associated with “privilege” are actually forgone harms—they are not good things done to you, but bad things not done to you. The article says that “benefitting from racist systems doesn’t mean that everything is magically easy for us. It just means that as hard as things are, they could always be worse.” No, that is not what the word “benefit” means. The word “benefit” means you would be worse off without it—and in most cases that simply isn’t true. Many White people obviously think that it is true—which is probably a big reason why so many White people fight so hard to defend racism: you’ve convinced them it is in their self-interest. But, with rare exceptions, it is not; most racial discrimination has literally zero long-run benefit. It’s just bad. Maybe if we helped people appreciate that more, they would be less resistant to fighting racism!

The only features of “privilege” that really make sense as benefits are those that occur in a state of competition—like being more likely to be hired for a job or get a loan—but one of the most important insights of economics is that competition is nonzero-sum, and fairer competition ultimately means a more efficient economy and thus more prosperity for everyone.

But okay, let’s set that aside and talk about this core question of what sort of responsibility we bear for the acts of our ancestors. Many White people clearly do feel deep shame about what their ancestors (or people the same color as their ancestors!) did hundreds of years ago. The psychological reactance to that shame may actually be what makes so many White people deny that racism even exists (or exists anymore)—though a majority of Americans of all races do believe that racism is still widespread.

We also apply some sense of moral responsibility to whole races quite frequently. We speak of a policy “benefiting White people” or “harming Black people” and quickly elide the distinction between harming specific people who are Black, and somehow harming “Black people” as a group. The former happens all the time—the latter is utterly nonsensical. Similarly, we speak of a “debt owed by White people to Black people” (which might actually make sense in the very narrow sense of economic reparations, because people do inherit money! They probably shouldn’t, which is literally feudal, but in the existing system they in fact do), which makes about as much sense as a debt owed by tall people to short people. As Walter Michaels pointed out in The Trouble with Diversity (which I highly recommend), because of this bizarre sense of responsibility we are often in the habit of “apologizing for something you didn’t do to people to whom you didn’t do it (indeed to whom it wasn’t done)”. It is my responsibility to condemn colonialism (which I indeed do), to fight to ensure that it never happens again; it is not my responsibility to apologize for colonialism.

This makes some sense in evolutionary terms; it’s part of the all-encompassing tribal paradigm, wherein human beings come to identify themselves with groups and treat those groups as the meaningful moral agents. It’s much easier to maintain the cohesion of a tribe against the slings and arrows (sometimes quite literal) of outrageous fortune if everyone believes that the tribe is one moral agent worthy of ultimate concern.

This concept of racial responsibility is clearly deeply ingrained in human minds, for it appears in some of our oldest texts, including the Bible: “You shall not bow down to them or worship them; for I, the Lord your God, am a jealous God, punishing the children for the sin of the parents to the third and fourth generation of those who hate me,” (Exodus 20:5)

Why is inheritance of moral responsibility across generations nonsensical? Any number of reasons, take your pick. The economist in me leaps to “Ancestry cannot be incentivized.” There’s no point in holding people responsible for things they can’t control, because in doing so you will not in any way alter behavior. The Stanford Encyclopedia of Philosophy article on moral responsibility takes it as so obvious that people are only responsible for actions they themselves did that they don’t even bother to mention it as an assumption. (Their big question is how to reconcile moral responsibility with determinism, which turns out to be not all that difficult.)

An interesting counter-argument might be that descent can be incentivized: You could use rewards and punishments applied to future generations to motivate current actions. But this is actually one of the ways that incentives clearly depart from moral responsibilities; you could incentivize me to do something by threatening to murder 1,000 children in China if I don’t, but even if it was in fact something I ought to do, it wouldn’t be those children’s fault if I didn’t do it. They wouldn’t deserve punishment for my inaction—I might, and you certainly would for using such a cruel incentive.

Moreover, there’s a problem with dynamic consistency here: Once the action is already done, what’s the sense in carrying out the punishment? This is why a moral theory of punishment can’t merely be based on deterrence—the fact that you could deter a bad action by some other less-bad action doesn’t make the less-bad action necessarily a deserved punishment, particularly if it is applied to someone who wasn’t responsible for the action you sought to deter. In any case, people aren’t thinking that we should threaten to punish future generations if people are racist today; they are feeling guilty that their ancestors were racist generations ago. That doesn’t make any sense even on this deterrence theory.

There’s another problem with trying to inherit moral responsibility: People have lots of ancestors. Some of my ancestors were most likely rapists and murderers; most were ordinary folk; a few may have been great heroes—and this is true of just about anyone anywhere. We all have bad ancestors, great ancestors, and, mostly, pretty good ancestors. 75% of my ancestors are European, but 25% are Native American; so if I am to apologize for colonialism, should I be apologizing to myself? (Only 75%, perhaps?) If you go back enough generations, literally everyone is related—and you may only have to go back about 4,000 years. That’s historical time.
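The “everyone is related” point follows from simple doubling: n generations back you have 2^n ancestor slots, which outruns any plausible historical population within a few dozen generations, forcing the same ancestors to fill many slots (what genealogists call pedigree collapse). A quick sketch, assuming roughly 25 years per generation and an illustrative population figure:

```python
# Find how many generations back your 2**n ancestor slots must exceed
# a given population, forcing shared ancestors. Assumes ~25 years per
# generation; the population figure is an illustrative assumption.

def generations_until_overlap(population):
    n = 0
    while 2 ** n <= population:
        n += 1
    return n

pop_ancient = 300_000_000  # rough order of magnitude for the ancient world
n = generations_until_overlap(pop_ancient)
print(f"{n} generations (~{n * 25} years) already forces shared ancestors")
```

Slot-counting alone forces overlap within a few dozen generations; the longer figure of roughly 4,000 years is an estimate for when everyone alive shares common ancestors, which additionally depends on migration between populations.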

Of course, we wouldn’t be different colors in the first place if there weren’t some differences in ancestry, but there is a huge amount of gene flow between different human populations. The US is a particularly mixed place; because most Black Americans are quite genetically mixed, it is about as likely that any randomly-selected Black person in the US is descended from a slaveowner as it is that any randomly-selected White person is. (Especially since there were a large number of Black slaveowners in Africa and even some in the United States.) What moral significance does this have? Basically none! That’s the whole point; your ancestors don’t define who you are.

If these facts do have any moral significance, it is to undermine the sense most people seem to have that there are well-defined groups called “races” that exist in reality, to which culture responds. No; races were created by culture. I’ve said this before, but it bears repeating: The “races” we hold most dear in the US, White and Black, are in fact the most nonsensical. “Asian” and “Native American” at least almost make sense as categories, though Chippewa are more closely related to Ainu than Ainu are to Papuans. “Latino” isn’t utterly incoherent, though it includes as much Aztec as it does Iberian. But “White” is a club one can join or be kicked out of, while “Black” is the majority of genetic diversity.

Sex is a real thing—while there are intermediate cases of course, broadly speaking humans, like most metazoa, are sexually dimorphic and come in “male” and “female” varieties. So sexism took a real phenomenon and applied cultural dynamics to it; but that’s not what happened with racism. Insofar as there was a real phenomenon, it was extremely superficial—quite literally skin deep. In that respect, race is more like class—a categorization that is itself the result of social institutions.

To be clear: Does the fact that we don’t inherit moral responsibility from our ancestors absolve us from doing anything to rectify the inequities of racism? Absolutely not. Not only is there plenty of present discrimination going on we should be fighting, there are also inherited inequities due to the way that assets and skills are passed on from one generation to the next. If my grandfather stole a painting from your grandfather and both our grandfathers are dead but I am now hanging that painting in my den, I don’t owe you an apology—but I damn well owe you a painting.

The further we become from the past discrimination the harder it gets to make reparations, but all hope is not lost; we still have the option of trying to reset everyone’s status to the same at birth and maintaining equality of opportunity from there. Of course we’ll never achieve total equality of opportunity—but we can get much closer than we presently are.

We could start by establishing an extremely high estate tax—on the order of 99%—because no one has a right to be born rich. Free public education is another good way of equalizing the distribution of “human capital” that would otherwise be concentrated in particular families, and expanding it to higher education would make it that much better. It even makes sense, at least in the short run, to establish some affirmative action policies that are race-conscious and sex-conscious, because there are so many biases in the opposite direction that sometimes you must fight bias with bias.

Actually what I think we should do in hiring, for example, is assemble a pool of applicants based on demographic quotas to ensure a representative sample, and then anonymize the applications and assess them on merit. This way we do ensure representation and reduce bias, but don’t ever end up hiring anyone other than the most qualified candidate. But nowhere should we think that this is something that White men “owe” to women or Black people; it’s something that people should do in order to correct the biases that otherwise exist in our society. Similarly with regard to sexism: Women exhibit just as much unconscious bias against other women as men do. This is not “men” hurting “women”—this is a set of unconscious biases found in people almost everywhere, and in social structures almost everywhere, that systematically discriminates against people because they are women.
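The two-stage hiring procedure described above (representative pool first, then anonymized merit ranking) can be made concrete. Everything in this sketch is a hypothetical illustration: the group labels, the quota size, and the single “merit score” all stand in for a real application process.

```python
# Hypothetical applicant records: (name, demographic group, merit score).
applicants = [
    ("A", "group1", 88), ("B", "group1", 92), ("C", "group1", 75),
    ("D", "group2", 90), ("E", "group2", 81), ("F", "group2", 95),
]

def build_pool(applicants, quota_per_group):
    """Stage 1: assemble a representative pool with a fixed quota per group."""
    pool = []
    groups = sorted({group for _, group, _ in applicants})
    for g in groups:
        members = [a for a in applicants if a[1] == g]
        members.sort(key=lambda a: a[2], reverse=True)
        pool.extend(members[:quota_per_group])
    return pool

def anonymize_and_rank(pool):
    """Stage 2: strip name and group, then rank on merit alone."""
    scores = [score for _, _, score in pool]
    return sorted(scores, reverse=True)

pool = build_pool(applicants, quota_per_group=2)
print(anonymize_and_rank(pool))  # → [95, 92, 90, 88]
```

The design point is that the quota operates only on who enters the pool; once applications are anonymized, the final ranking cannot see group membership at all, so the top of the list is simply the most qualified candidate in the pool.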

Perhaps by understanding that this is not about which “team” you’re on (which tribe you’re in), but what policy we should have, we can finally make these biases disappear, or at least fade so small that they are negligible.

Believing in civilization without believing in colonialism

JDN 2457541

In a post last week I presented some of the overwhelming evidence that society has been getting better over time, particularly since the start of the Industrial Revolution. I focused mainly on infant mortality rates—babies not dying—but there are lots of other measures you could use as well. Despite popular belief, poverty is rapidly declining, and is now the lowest it’s ever been. War is rapidly declining. Crime is rapidly declining in First World countries, and to the best of our knowledge crime rates are stable worldwide. Public health is rapidly improving. Lifespans are getting longer. And so on, and so on. It’s not quite true to say that every indicator of human progress is on an upward trend, but the vast majority of really important indicators are.

Moreover, there is every reason to believe that this great progress is largely the result of what we call “civilization”, even Western civilization: Stable, centralized governments, strong national defense, representative democracy, free markets, openness to global trade, investment in infrastructure, science and technology, secularism, a culture that values innovation, and freedom of speech and the press. We did not get here by Marxism, nor agrarian socialism, nor primitivism, nor anarcho-capitalism. We did not get here by fascism, nor theocracy, nor monarchy. This progress was built by the center-left welfare state, “social democracy”, “modified capitalism”, the system where free, open markets are coupled with a strong democratic government to protect and steer them.

This fact is basically beyond dispute; the evidence is overwhelming. The serious debate in development economics is over which parts of the Western welfare state are most conducive to raising human well-being, and which parts of the package are more optional. And even then, some things are fairly obvious: Stable government is clearly necessary, while speaking English is clearly optional.

Yet many people are resistant to this conclusion, or even offended by it, and I think I know why: They are confusing the results of civilization with the methods by which it was established.

The results of civilization are indisputably positive: Everything I just named above, especially babies not dying.

But the methods by which civilization was established are not; indeed, some of the greatest atrocities in human history are attributable at least in part to attempts to “spread civilization” to “primitive” or “savage” people.

It is therefore vital to distinguish between the result, civilization, and the processes by which it was effected, such as colonialism and imperialism.

First, it’s important not to overstate the link between civilization and colonialism.

We tend to associate colonialism and imperialism with White people from Western European cultures conquering other people in other cultures; but in fact colonialism and imperialism are basically universal to any human culture that attains sufficient size and centralization. India engaged in colonialism, Persia engaged in imperialism, China engaged in imperialism, the Mongols were of course major imperialists, and don’t forget the Ottoman Empire; and did you realize that Tibet and Mali were at one time imperialists as well? And of course there are a whole bunch of empires you’ve probably never heard of, like the Parthians and the Ghaznavids and the Umayyads. Even many of the people we’re accustomed to thinking of as innocent victims of colonialism were themselves imperialists—the Aztecs certainly were (they even sold people into slavery and used them for human sacrifice!), as were the Pequot, and the Iroquois may not have outright conquered anyone but were definitely at least “soft imperialists” the way that the US is today, spreading their influence around and using economic and sometimes military pressure to absorb other cultures into their own.

Of course, those were all civilizations, at least in the broadest sense of the word; but before that, it’s not that there wasn’t violence, it just wasn’t organized enough to be worthy of being called “imperialism”. The more general concept of intertribal warfare is a human universal, and some hunter-gatherer tribes actually engage in an essentially constant state of warfare we call “endemic warfare”. People have been grouping together to kill other people they perceived as different for at least as long as there have been people to do so.

This is of course not to excuse what European colonial powers did when they set up bases on other continents and exploited, enslaved, or even murdered the indigenous population. And the absolute numbers of people enslaved or killed are typically larger under European colonialism, mainly because European cultures became so powerful and conquered almost the entire world. Even if European societies were not uniquely predisposed to be violent (and I see no evidence to say that they were—humans are pretty much humans), they were more successful in their violent conquering, and so more people suffered and died. It’s also a first-mover effect: If the Ming Dynasty had supported Zheng He more in his colonial ambitions, I’d probably be writing this post in Mandarin and reflecting on why Asian cultures have engaged in so much colonial oppression.

While there is a deeply condescending paternalism (and often post-hoc rationalization of your own self-interested exploitation) involved in saying that you are conquering other people in order to civilize them, humans are also perfectly capable of committing atrocities for far less noble-sounding motives. There are holy wars such as the Crusades and ethnic genocides like in Rwanda, and the Arab slave trade was purely for profit and didn’t even have the pretense of civilizing people (not that the Atlantic slave trade was ever really about that anyway).

Indeed, I think it’s important to distinguish between colonialists who really did make some effort at civilizing the populations they conquered (like Britain, and also the Mongols actually) and those that clearly were just using that as an excuse to rape and pillage (like Spain and Portugal). This is similar to but not quite the same thing as the distinction between settler colonialism, where you send colonists to live there and build up the country, and exploitation colonialism, where you send military forces to take control of the existing population and exploit them to get their resources. Countries that experienced settler colonialism (such as the US and Australia) have fared a lot better in the long run than countries that experienced exploitation colonialism (such as Haiti and Zimbabwe).

The worst consequences of colonialism weren’t even really anyone’s fault, actually. The reason something like 98% of all Native Americans died as a result of European colonization was not that Europeans killed them—they did kill thousands of course, and I hope it goes without saying that that’s terrible, but it was a small fraction of the total deaths. The reason such a huge number died and whole cultures were depopulated was disease, and the inability of medical technology in any culture at that time to handle such a catastrophic plague. The primary cause was therefore accidental, and not really foreseeable given the state of scientific knowledge at the time. (I therefore think it’s wrong to consider it genocide—maybe democide.) Indeed, what really would have saved these people would be if Europe had advanced even faster into industrial capitalism and modern science, or else waited to colonize until they had; and then they could have distributed vaccines and antibiotics when they arrived. (Of course, there is evidence that a few European colonists used the diseases intentionally as biological weapons, which no amount of vaccine technology would prevent—and that is indeed genocide. But again, this was a small fraction of the total deaths.)

However, even with all those caveats, I hope we can all agree that colonialism and imperialism were morally wrong. No nation has the right to invade and conquer other nations; no one has the right to enslave people; no one has the right to kill people based on their culture or ethnicity.

My point is that it is entirely possible to recognize that and still appreciate that Western civilization has dramatically improved the standard of human life over the last few centuries. It simply doesn’t follow from the fact that British government and culture were more advanced and pluralistic that British soldiers can just go around taking over other people’s countries and planting their own flag (follow the link if you need some comic relief from this dark topic). That was the moral failing of colonialism; not that they thought their society was better—for in many ways it was—but that they thought that gave them the right to terrorize, slaughter, enslave, and conquer people.

Indeed, the “justification” of colonialism is a lot like that bizarre pseudo-utilitarianism I mentioned in my post on torture, where the mere presence of some benefit is taken to justify any possible action toward achieving that benefit. No, that’s not how morality works. You can’t justify unlimited evil by any good—it has to be a greater good, as in actually greater.

So let’s suppose that you do find yourself encountering another culture which is clearly more primitive than yours; their inferior technology results in them living in poverty and having very high rates of disease and death, especially among infants and children. What, if anything, are you justified in doing to intervene to improve their condition?

One idea would be to hold to the Prime Directive: No intervention, no sir, not ever. This is clearly what Gene Roddenberry thought of imperialism, hence why he built it into the Federation’s core principles.

But does that really make sense? Even as Star Trek shows progressed, the writers kept coming up with situations where the Prime Directive really seemed like it should have an exception, and sometimes decided that the honorable crew of Enterprise or Voyager really should intervene in this more primitive society to save them from some terrible fate. And I hope I’m not committing a Fictional Evidence Fallacy when I say that if even a fictional universe specifically designed to prevent such situations keeps generating them, well… maybe it’s something we should be considering.

What if people are dying of a terrible disease that you could easily cure? Should you really deny them access to your medicine to avoid intervening in their society?

What if the primitive culture is ruled by a horrible tyrant that you could easily depose with little or no bloodshed? Should you let him continue to rule with an iron fist?

What if the natives are engaged in slavery, or even their own brand of imperialism against other indigenous cultures? Can you fight imperialism with imperialism?

And then we have to ask, does it really matter whether their babies are being murdered by the tyrant or simply dying from malnutrition and infection? The babies are just as dead, aren’t they? Even if we say that being murdered by a tyrant is worse than dying of malnutrition, it can’t be that much worse, can it? Surely 10 babies dying of malnutrition is at least as bad as 1 baby being murdered?

But then it begins to seem like we have a duty to intervene, and moreover a duty that applies in almost every circumstance! If your two societies are on opposite sides of the technology threshold where infant mortality drops from 30% to 1%, how can you justify not intervening?

I think the best answer here is to keep in mind the very large costs of intervention as well as the potentially large benefits. The answer sounds simple, but is actually perhaps the hardest possible answer to apply in practice: You must do a cost-benefit analysis. Furthermore, you must do it well. We can’t demand perfection, but it must actually be a serious good-faith effort to predict the consequences of different intervention policies.

We know that people tend to resist most outside interventions, especially if you have the intention of toppling their leaders (even if they are indeed tyrannical). Even the simple act of offering people vaccines could be met with resistance, as the native people might think you are poisoning them or somehow trying to control them. But in general, opening contact with gifts and trade is almost certainly going to trigger less hostility and therefore be more effective than going in guns blazing.

If you do use military force, it must be targeted at the particular leaders who are most harmful, and it must be designed to achieve swift, decisive victory with minimal collateral damage. (Basically I’m talking about just war theory.) If you really have such an advanced civilization, show it by exhibiting total technological dominance and minimizing the number of innocent people you kill. The NATO interventions in Kosovo and Libya mostly got this right. The Vietnam War and Iraq War got it totally wrong.

As you change their society, you should be prepared to bear most of the cost of transition; you are, after all, much richer than they are, and also the ones responsible for effecting the transition. You should not expect to see short-term gains for your own civilization, only long-term gains once their culture has advanced to a level near your own. You can’t bear all the costs of course—transition is just painful, no matter what you do—but at least the fungible economic costs should be borne by you, not by the native population. Examples of doing this wrong include basically all the standard examples of exploitation colonialism: Africa, the Caribbean, South America. Examples of doing this right include West Germany and Japan after WW2, and South Korea after the Korean War—which is to say, the greatest economic successes in the history of the human race. This was us winning development, humanity. Do this again everywhere and we will have not only ended world hunger, but achieved global prosperity.

What happens if we apply these principles to real-world colonialism? It does not fare well. Nor should it, as we’ve already established that most if not all real-world colonialism was morally wrong.

15th and 16th century colonialism fails immediately; it offers no benefit to speak of. Europe’s technological superiority was enough to give them gunpowder but not enough to drop their infant mortality rate. Maybe life was better in 16th century Spain than it was in the Aztec Empire, but honestly not by all that much; and life in the Iroquois Confederacy was in many ways better than life in 15th century England. (Though maybe that justifies some Iroquois imperialism, at least their “soft imperialism”?)

If these principles did justify any real-world imperialism—and I am not convinced they do—it would only be much later imperialism, like the British Empire in the 19th and 20th century. And even then, it’s not clear that the talk of “civilizing” people and “the White Man’s Burden” was much more than rationalization, an attempt to give a humanitarian justification for what were really acts of self-interested economic exploitation. Even though India and South Africa are probably better off now than they were when the British first took them over, it’s not at all clear that this was really the goal of the British government so much as a side effect. And there are a lot of things the British could have done differently that would obviously have made those countries better off still—you know, like not implementing the precursors to apartheid, or making India a parliamentary democracy immediately instead of starting with the Raj and only conceding to democracy after decades of protest. What actually happened doesn’t look like Britain cared nothing for improving the lives of people in India and South Africa (they did build a lot of schools and railroads, and sought to undermine slavery and the caste system), but it also doesn’t look like that was their only goal; it was more like one goal among several, alongside Britain’s own strategic and economic interests. It isn’t enough that Britain was a better society, or even that it made South Africa and India better societies than they were; if the goal wasn’t really to make people’s lives better where you are intervening, it’s clearly not justified intervention.

And that’s the relatively beneficent imperialism; the really horrific imperialists throughout history made only the barest pretense of spreading civilization and were clearly interested in nothing more than maximizing their own wealth and power. This is probably why we get things like the Prime Directive; we saw how bad it can get, and overreacted a little by saying that intervening in other cultures is always, always wrong, no matter what. It was only a slight overreaction—intervening in other cultures is usually wrong, and almost all historical examples of it were wrong—but it is still an overreaction. There are exceptional cases where intervening in another culture can be not only morally right but obligatory.

Indeed, one underappreciated consequence of colonialism and imperialism is that they have triggered a backlash against real good-faith efforts toward economic development. People in Africa, Asia, and Latin America see economists from the US and the UK (and most of the world’s top economists are in fact educated in the US or the UK) come in and tell them that they need to do this and that to restructure their society for greater prosperity, and they understandably ask: “Why should I trust you this time?” The last two or four or seven batches of people coming from the US and Europe to intervene in their countries exploited them or worse, so why is this time any different?

It is different, of course; UNDP is not the East India Company, not by a longshot. Even for all their faults, the IMF isn’t the East India Company either. Indeed, while these people largely come from the same places as the imperialists, and may be descended from them, they are in fact completely different people, and moral responsibility does not inherit across generations. While the suspicion is understandable, it is ultimately unjustified; whatever happened hundreds of years ago, this time most of us really are trying to help—and it’s working.

Actually, our economic growth has been fairly ecologically sustainable lately!

JDN 2457538

Environmentalists have a reputation for being pessimists, and it is not entirely undeserved. While, as Paul Samuelson said, Wall Street indexes have predicted nine out of the last five recessions, environmentalists have predicted more like twenty out of the last zero ecological collapses.

Some fairly serious scientists have endorsed predictions of imminent collapse that haven’t panned out, and many continue to do so. This Guardian article should be hilarious to statisticians, as it literally takes trends that are going one direction, maps them onto a theory that arbitrarily decides they’ll suddenly reverse, and then says “the theory fits the data”. This should be taught in statistics courses as a lesson in how not to fit models. More data distortion occurs in this Scientific American article, which contains the phrase “food per capita is decreasing”; well, that’s true if you just look at the last couple of years, but according to FAOSTAT, food production per capita in 2012 (the most recent data in FAOSTAT) was higher than literally every other year on record except 2011. So if you allow for even the slightest amount of random fluctuation, it’s very clear that food per capita is increasing, not decreasing.
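The problem with eyeballing “the last couple of years” is easy to demonstrate with a toy example. Here is a minimal sketch using synthetic numbers (these are made up for illustration, not the FAOSTAT series): a series with a steady upward trend plus ordinary year-to-year noise will still show occasional single-year declines, which is why a two-year window tells you essentially nothing about the trend.

```python
import random

random.seed(1)

# Synthetic stand-in for something like food production per capita:
# a steady upward trend plus year-to-year noise. Made-up numbers,
# NOT the actual FAOSTAT data.
years = range(1990, 2013)
series = [100 + (y - 1990) + random.uniform(-3, 3) for y in years]

# Long-run comparison: the last five years clearly beat the first five.
early = sum(series[:5]) / 5
late = sum(series[-5:]) / 5
print(f"mean of first 5 years: {early:.1f}, last 5 years: {late:.1f}")

# Short-run comparison: noise still produces some year-over-year declines,
# even though the underlying trend is unambiguously up.
declines = sum(1 for a, b in zip(series, series[1:]) if b < a)
print(f"single-year declines: {declines} out of {len(series) - 1}")
```

If you allow for even modest random fluctuation, the only honest way to read such a series is against its long-run trend, not its most recent wiggle.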

[Figure: world food production per capita by year (FAOSTAT)]

So many people are predicting the imminent collapse of human civilization. And yet, for some reason, all the people predicting this go about their lives as if it weren’t happening! Why, it’s almost as if they don’t really believe it, and just say it to get attention. Nobody gets on the news by saying “Civilization is doing fine; things are mostly getting better.”

There’s a long history of these sorts of gloom and doom predictions; perhaps the paradigm example is Thomas Malthus in 1798 predicting the imminent destruction of civilization by inevitable famine—just in time for global infant mortality rates to start plummeting and economic output to surge beyond anyone’s wildest dreams.

Still, when I sat down to study this it was remarkable to me just how good the outlook is for future sustainability. The Index of Sustainable Economic Welfare was created essentially in an attempt to show how our economic growth is largely an illusion driven by our rapacious natural resource consumption, but it has since been discontinued, perhaps because it didn’t show that. Using the US as an example, I reconstructed the index as best I could from World Bank data, and here’s what came out for the period since 1990:

[Figure: US GDP and the reconstructed ISEW, 1990 to present]

The top line is US GDP as normally measured. The bottom line is the ISEW. The gap between those lines expands on a linear scale, but not on a logarithmic scale; that is to say, GDP and ISEW grow at almost exactly the same rate, so ISEW is always a constant (and large) proportion of GDP. By construction it is necessarily smaller (it basically takes GDP and subtracts things out of it), but the fact that it is growing at the same rate shows that our economic growth is not being driven by depletion of natural resources or the military-industrial complex; it’s being driven by real improvements in education and technology.
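The relationship between “same growth rate” and “constant ratio” is easy to verify with made-up numbers (these are illustrative, not the actual GDP or ISEW figures): two series compounding at the same rate keep a constant ratio even as the absolute gap between them widens.

```python
# If two series grow at the same exponential rate, their ratio stays
# constant even though the absolute gap between them widens.
# Hypothetical starting values and growth rate, for illustration only.
growth = 1.025            # 2.5% annual growth for both series
gdp, isew = 100.0, 60.0   # ISEW starts smaller, by construction

for year in range(30):
    gdp *= growth
    isew *= growth

gap = gdp - isew
ratio = isew / gdp
print(f"gap after 30 years: {gap:.1f}")      # much larger than the initial 40
print(f"ratio after 30 years: {ratio:.2f}")  # still 0.60
```

That widening linear gap alongside a flat ratio is exactly the pattern in the graph above.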

The Human Development Index has grown in almost every country (albeit at quite different rates) since 1990. Global poverty is the lowest it has ever been. We are living in a golden age of prosperity. This is such a golden age for our civilization, our happiness rating maxed out and now we’re getting +20% production and extra gold from every source. (Sorry, gamer in-joke.)

Now, it is said that pride cometh before a fall; so perhaps our current mind-boggling improvements in human welfare have only been purchased on borrowed time as we further drain our natural resources.

There is some cause for alarm: We’re literally running out of fish, and groundwater tables are falling rapidly. Due to poor land use, deserts are expanding. Huge quantities of garbage now float in our oceans. And of course, climate change is poised to kill millions of people. Arctic sea ice may soon be melting completely every summer.

And yet, global carbon emissions have not been increasing the last few years, despite strong global economic growth. We need to be reducing emissions, not just keeping them flat (in a previous post I talked about some policies to do that); but even keeping them flat while still raising standard of living is something a lot of environmentalists kept telling us we couldn’t possibly do. Despite constant talk of “overpopulation” and a “population bomb”, population growth rates are declining and world population is projected to level off around 9 billion. Total solar power production in the US expanded by a factor of 40 in just the last 10 years.

Of course, I don’t deny that there are serious environmental problems, and we need to make policies to combat them; but we are doing that. Humanity is not mindlessly plunging headlong into an abyss; we are taking steps to improve our future.

And in fact I think environmentalists deserve a lot of credit for that! Raising awareness of environmental problems has made most Americans recognize that climate change is a serious problem. Further pressure might make them realize it should be one of our top priorities (presently most Americans do not).

And who knows, maybe the extremist doomsayers are necessary to set the Overton Window for the rest of us. I think we of the center-left (toward which reality has a well-known bias) often underestimate how much we rely upon the radical left to pull the discussion away from the radical right and make us seem more reasonable by comparison. It could well be that “climate change will kill tens of millions of people unless we act now to institute a carbon tax and build hundreds of nuclear power plants” is easier to swallow after hearing “climate change will destroy humanity unless we act now to transform global capitalism to agrarian anarcho-socialism.” Ultimately I wish people could be persuaded simply by the overwhelming scientific evidence in favor of the carbon tax/nuclear power argument, but alas, humans are simply not rational enough for that; and you must go to policy with the public you have. So maybe irrational levels of pessimism are a worthwhile corrective to the irrational levels of optimism coming from the other side, like the execrable sophistry of “in praise of fossil fuels” (yes, we know our economy was built on coal and oil—that’s the problem. We’re “rolling drunk on petroleum”; when we’re trying to quit drinking, reminding us how much we enjoy drinking is not helpful).

But I worry that this sort of irrational pessimism carries its own risks. First there is the risk of simply giving up, succumbing to learned helplessness and deciding there’s nothing we can possibly do to save ourselves. Second is the risk that we will do something needlessly drastic (like a radical socialist revolution) that impoverishes or even kills millions of people for no reason. The extreme fear that we are on the verge of ecological collapse could lead people to take a “by any means necessary” stance and end up with a cure worse than the disease. So far the word “ecoterrorism” has mainly been applied to what was really ecovandalism; but if we were in fact on the verge of total civilizational collapse, I can understand why someone would think quite literal terrorism was justified (actually the main reason I don’t is that I just don’t see how it could actually help). Just about anything is worth it to save humanity from destruction.

What is progress? How far have we really come?

JDN 2457534

It is a controversy that has lasted throughout the ages: Is the world getting better? Is it getting worse? Or is it more or less staying the same, changing in ways that don’t really constitute improvements or detriments?

The most obvious and indisputable change in human society over the course of history has been the advancement of technology. At one extreme there are techno-utopians, who believe that technology will solve all the world’s problems and bring about a glorious future; at the other extreme are anarcho-primitivists, who maintain that civilization, technology, and industrialization were all grave mistakes, removing us from our natural state of peace and harmony.

I am not a techno-utopian—I do not believe that technology will solve all our problems—but I am much closer to that end of the scale. Technology has solved a lot of our problems, and will continue to solve a lot more. My aim in this post is to convince you that progress is real, that things really are, on the whole, getting better.

One of the more baffling arguments against progress comes from none other than Jared Diamond, the social scientist most famous for Guns, Germs, and Steel (which oddly enough is mainly about horses and goats). About seven months before I was born, Diamond wrote an essay for Discover magazine arguing quite literally that agriculture—and by extension, civilization—was a mistake.

Diamond fortunately avoids the usual argument based solely on modern hunter-gatherers, which is a selection bias if ever I heard one. Instead his main argument seems to be that paleontological evidence shows an overall decrease in health around the same time as agriculture emerged. But that’s still an endogeneity problem, albeit a subtler one. Maybe agriculture emerged as a response to famine and disease. Or maybe they were both triggered by rising populations; higher populations increase disease risk, and are also basically impossible to sustain without agriculture.

I am similarly dubious of the claim that hunter-gatherers are always peaceful and egalitarian. It does seem to be the case that herders are more violent than other cultures, as they tend to form honor cultures that punish all slights with overwhelming violence. Even after the Industrial Revolution there were herder honor cultures—the Wild West. Yet as Steven Pinker keeps trying to tell people, the death rates due to homicide in all human cultures appear to have steadily declined for thousands of years.

I read an article just a few days ago on the Scientific American blog which included the following claim, one so astonishingly nonsensical that it makes me wonder whether the authors can even do arithmetic or read statistical tables correctly:

As I keep reminding readers (see Further Reading), the evidence is overwhelming that war is a relatively recent cultural invention. War emerged toward the end of the Paleolithic era, and then only sporadically. A new study by Japanese researchers published in the Royal Society journal Biology Letters corroborates this view.

Six Japanese scholars led by Hisashi Nakao examined the remains of 2,582 hunter-gatherers who lived 12,000 to 2,800 years ago, during Japan’s so-called Jomon Period. The researchers found bashed-in skulls and other marks consistent with violent death on 23 skeletons, for a mortality rate of 0.89 percent.

That is supposed to be evidence that ancient hunter-gatherers were peaceful? The global homicide rate today is 62 homicides per million people per year. Using the worldwide life expectancy of 71 years (which is biasing against modern civilization because our life expectancy is longer), that means that the worldwide lifetime homicide rate is 4,400 homicides per million people, or 0.44%—that’s less than half the homicide rate of these “peaceful” hunter-gatherers. If you compare just against First World countries, the difference is even starker; let’s use the US, which has the highest homicide rate in the First World. Our homicide rate is 38 homicides per million people per year, which at our life expectancy of 79 years is 3,000 homicides per million people, or an overall homicide rate of 0.3%, slightly more than a third of this “peaceful” ancient culture. The most peaceful societies today—notably Japan, where these remains were found—have homicide rates as low as 3 per million people per year, which is a lifetime homicide rate of 0.02%, forty times smaller than their supposedly utopian ancestors. (Yes, all of Japan has fewer total homicides than Chicago. I’m sure it has nothing to do with their extremely strict gun control laws.) Indeed, to get a modern homicide rate as high as these hunter-gatherers, you need to go to a country like Congo, Myanmar, or the Central African Republic. To get a substantially higher homicide rate, you essentially have to be in Latin America. Honduras, the murder capital of the world, has a lifetime homicide rate of about 6.7%.

Again, how did I figure these things out? By reading basic information from publicly-available statistical tables and then doing some simple arithmetic. Apparently these paleoanthropologists couldn’t be bothered to do that, or didn’t know how to do it correctly, before they started proclaiming that human nature is peaceful and civilization is the source of violence. After an oversight as egregious as that, it feels almost petty to note that a sample size of a few thousand people from one particular region and culture isn’t sufficient data to draw such sweeping judgments or speak of “overwhelming” evidence.
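For anyone who wants to check my work, here is that arithmetic in a few lines of Python (the helper function is mine; the rates and life expectancies are the ones quoted above):

```python
# Back-of-the-envelope conversion from an annual homicide rate to a
# lifetime one: homicides per million people per year, times life
# expectancy in years, gives lifetime homicides per million people.
def lifetime_rate(per_million_per_year: float, life_expectancy: float) -> float:
    """Approximate lifetime homicide rate, as a percentage."""
    return per_million_per_year * life_expectancy / 1_000_000 * 100

# The study's figure: 23 violent deaths among 2,582 Jomon skeletons.
jomon = 23 / 2582 * 100
print(f"Jomon:  {jomon:.2f}%")                   # ~0.89%

print(f"World:  {lifetime_rate(62, 71):.2f}%")   # ~0.44%
print(f"US:     {lifetime_rate(38, 79):.2f}%")   # ~0.30%
print(f"Japan:  {lifetime_rate(3, 71):.3f}%")    # ~0.021%
```

The modern figures all come out well below the 0.89% rate of the supposedly peaceful Jomon hunter-gatherers.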

Of course, in order to decide whether progress is a real phenomenon, we need a clearer idea of what we mean by progress. It would be presumptuous to use per-capita GDP, though there can be absolutely no doubt that technology and capitalism do in fact raise per-capita GDP. If we measure by inequality, modern society clearly fares much worse (our top 1% share and Gini coefficient may be higher than Classical Rome!), but that is clearly biased in the opposite direction, because the main way we have raised inequality is by raising the ceiling, not lowering the floor. Most of our really good measures (like the Human Development Index) only exist for the last few decades and can barely even be extrapolated back through the 20th century.

How about babies not dying? This is my preferred measure of a society’s value. It seems like something that should be totally uncontroversial: Babies dying is bad. All other things equal, a society is better if fewer babies die.

I suppose it doesn’t immediately follow that all things considered a society is better if fewer babies die; maybe the dying babies could be offset by some greater good. Perhaps a totalitarian society where no babies die is in fact worse than a free society in which a few babies die, or perhaps we should be prepared to accept some small number of babies dying in order to save adults from poverty, or something like that. But without some really powerful overriding reason, babies not dying probably means your society is doing something right. (And since most ancient societies were in a state of universal poverty and quite frequently tyranny, these exceptions would only strengthen my case.)

Well, get ready for some high-yield truth bombs about infant mortality rates.

It’s hard to get good data for prehistoric cultures, but the best data we have says that infant mortality in ancient hunter-gatherer cultures was about 20-50%, with a best estimate around 30%. This is statistically indistinguishable from early agricultural societies.

Indeed, 30% seems to be the figure humanity had for most of history. Just shy of a third of all babies died for most of history.

In Medieval times, infant mortality was about 30%.

This same rate (fluctuating based on various plagues) persisted into the Enlightenment—Sweden has the best records, and their infant mortality rate in 1750 was about 30%.

The decline in infant mortality began slowly: During the Industrial Era, infant mortality was about 15% in isolated villages, but still as high as 40% in major cities due to high population densities with poor sanitation.

Even as recently as 1900, there were US cities with infant mortality rates as high as 30%, though the overall rate was more like 10%.

Most of the decline was recent and rapid: Just within the US since WW2, infant mortality fell from about 5.5% to 0.7%, though there remains a substantial disparity between White and Black people.

Globally, the infant mortality rate fell from 6.3% to 3.2% within my lifetime, and in Africa today, the region where it is worst, it is about 5.5%—or what it was in the US in the 1940s.

This precipitous decline in babies dying is the main reason ancient societies have such low life expectancies; actually once they reached adulthood they lived to be about 70 years old, not much worse than we do today. So my multiplying everything by 71 actually isn’t too far off even for ancient societies.

Let me make a graph for you here, of the approximate rate of babies dying over time from 10,000 BC to today:

[Figure: approximate infant mortality rate, 10,000 BC to today]

Let’s zoom in on the last 250 years, where the data is much more solid:

[Figure: infant mortality rate over the last 250 years]

I think you may notice something in these graphs. There is quite literally a turning point for humanity, a kink in the curve where we suddenly begin a rapid decline from an otherwise constant mortality rate.

That point occurs around or shortly before 1800—that is, it occurs at industrial capitalism. Adam Smith (not to mention Thomas Jefferson) was writing at just about the point in time when humanity made a sudden and unprecedented shift toward saving the lives of millions of babies.

So now, think about that the next time you are tempted to say that capitalism is an evil system that destroys the world; the evidence points to capitalism quite literally saving babies from dying.

How would it do so? Well, there’s that rising per-capita GDP we previously ignored, for one thing. But more important seems to be the way that industrialization and free markets support technological innovation, and in this case especially medical innovation—antibiotics and vaccines. Our higher rates of literacy and better communication, also a result of raised standard of living and improved technology, surely didn’t hurt. I’m not often in agreement with the Cato Institute, but they’re right about this one: Industrial capitalism is the chief source of human progress.

Billions of babies would have died but we saved them. So yes, I’m going to call that progress. Civilization, and in particular industrialization and free markets, have dramatically improved human life over the last few hundred years.

In a future post I’ll address one of the common retorts to this basically indisputable fact: “You’re making excuses for colonialism and imperialism!” No, I’m not. Saying that modern capitalism is a better system (not least because it saves babies) is not at all the same thing as saying that our ancestors were justified in using murder, slavery, and tyranny to force people into it.