The urban-rural divide runs deep

Feb 5, JDN 2457790

Are urban people worth less than rural people?

That probably sounds like a ridiculous thing to ask; of course not, all people are worth the same (other things equal of course—philanthropists are worth more than serial murderers). But then, if you agree with that, you’re probably an urban person, as I’m sure most of my readers are (and as indeed most people in highly-developed countries are).

A disturbing number of rural people, however, honestly do seem to believe this. They think that our urban lifestyles (whatever they imagine those to be) devalue us as citizens and human beings.

That is the key subtext to understand in the terrifying phenomenon that is Donald Trump. Most of the people who voted for him can’t possibly have thought he was actually trustworthy, and many probably didn’t actually support his policies of bigotry and authoritarianism (though he was very popular among bigots and authoritarians). From speaking with family members and acquaintances who proudly voted for Trump, one thing came through very clearly: This was a gigantic middle finger pointed at cities. They didn’t even really want Trump; they just knew we didn’t, and so they voted for him out of spite as much as anything else. They also have really confused views about free trade, so some of them voted for him because he promised to bring back jobs lost to trade (that weren’t lost to trade, can’t be brought back, and shouldn’t be even if they could). Talk with a Trump voter for a few minutes, and sneers of “latte-sipping liberal” (I don’t even like coffee) and “coastal elite” (I moved here to get educated; I wasn’t born here) are sure to follow.

There has always been some conflict between rural and urban cultures, for as long as there have been urban cultures for rural cultures to be in conflict with. It is found not just in the US, but in most if not all countries around the world. It was relatively calm during the postwar boom in the 20th century, as incomes everywhere (or at least everywhere within highly-developed countries) were improving more or less in lockstep. But the 21st century has brought us much more unequal growth, concentrated on particular groups of people and particular industries. This has brought more resentment. And that divide, above all else, is what brought us Trump; the correlation between population density and voting behavior is enormous.

Of course, “urban” is sometimes a dog-whistle for “Black”; but sometimes I think it actually really means “urban”—and yet there’s still a lot of hatred embedded in it. Indeed, perhaps that’s why the dog-whistle works; a White man from a rural town can sneer at “urban” people and it’s not entirely clear whether he’s being racist or just being anti-urban.

The assumption that rural lifestyles are superior runs so deep in our culture that even articles by urban people (like this one from the LA Times) supposedly reflecting on how to resolve this divide contain long paeans to the “hard work” and “sacrifice” and “autonomy” of rural life, and mockery of “urban elites” for their “disproportionate” (by which you can only mean almost proportionate) power over government.

Well, guess what? If you want to live in a rural area, go live in a rural area. Don’t pine for it. Don’t tell me how great farm life is. If you want to live on a farm, go live on a farm. I have nothing against it; we need farmers, after all. I just want you to shut up about how great it is, especially if you’re not going to actually do it. Pining for someone else’s lifestyle when you could easily take on that lifestyle if you really wanted it just shows that you think the grass is greener on the other side.

Because the truth is, farm living isn’t so great for most people. The world’s poorest people are almost all farmers. 70% of people below the UN poverty line live in rural areas, even as more and more of the world’s population moves into cities. If you use a broader poverty measure, as many as 85% of the world’s poor live in rural areas.

The kind of “autonomy” that means defending your home with a shotgun is normally what we would call anarchy—it’s a society that has no governance, no security. (Of course, in the US that’s pure illusion; crime rates in general are low and falling, and lower in rural areas than urban areas. But in some parts of the world, that anarchy is very real.) One of the central goals of global economic development is to get people away from subsistence farming into far more efficient manufacturing and service jobs.

At least in the US, farm life is a lot better than it used to be, now that agricultural technology has improved so that one farmer can now do the work of hundreds. Despite increased population and increased food consumption per person, the number of farmers in the US is now the smallest it has been since before the Civil War. The share of employment devoted to agriculture has fallen from over 80% in 1800 to under 2% today. Even just since the 1960s labor productivity of US farms has more than tripled.

Some 80% of Americans have chosen to live in cities—and yes, I can say “chosen”, because cities are more expensive, which makes urban living a voluntary act. Most of us who live in the city right now could move to the country if we really wanted to. We choose not to, because we know our lives would be worse if we did.

Indeed, I dare say that a lot of the hatred of city-dwellers has got to be envy. Our (median) incomes are higher and our (mean) lifespans are longer. Fewer of our children are in poverty. Life is better here—we know it, and deep down, they know it too.

We also have better Internet access, unsurprisingly—though rural areas are only a few years behind, and the technology improves so rapidly that twice as many rural homes in the US have Internet access today as urban homes did in 1998.

Now, a rational solution to this problem would be either to improve the lives of people in rural areas or else move everyone to urban areas—and both of those things have been happening, not only in the US but around the world. But in order to do that, you need to be willing to change things. You have to give up the illusion that farm life is some wonderful thing we should all be emulating, rather than the necessary toil that humanity was forced to go through for centuries until civilization could advance beyond it. You have to be willing to replace farmers with robots, so that people who would have been farmers can go do something better with their lives. You need to give up the illusion that there is something noble or honorable about hard labor on a farm—indeed, you need to give up the illusion that there is anything noble or honorable about hard work in general. Work is not a benefit; work is a cost. Work is what we do because we have to—and when we no longer have to do it, we should stop. Wanting to escape toil and suffering doesn’t make you lazy or selfish—it makes you rational.

We could surely be more welcoming—but cities are obviously more welcoming to newcomers than rural areas are. Our housing is too expensive, but that’s in part because so many people want to live here—supply hasn’t been able to keep up with demand.

I may seem to be presenting this issue as one-sided; don’t urban people devalue rural people too? Sometimes. Insults like “hick” and “yokel” and “redneck” do of course exist. But I’ve never heard anyone from a city seriously argue that people who live in rural areas should have votes that systematically count for less than those of people who live in cities—yet the reverse is literally what people are saying when they defend the Electoral College. If you honestly think that the Electoral College deserves to exist in anything like its present form, you must believe that some Americans are worth more than others, and the people who are worth more are almost all in rural areas while the people who are worth less are almost all in urban areas.

No, National Review, the Electoral College doesn’t “save” America from California’s imperial power; it gives imperial power to a handful of swing states. The only reason California would be more important than any other state is that more Americans live here. Indeed, a lot of Republicans in California are disenfranchised, because they know that their votes will never overcome the overwhelming Democratic majority for the state as a whole and the system is winner-takes-all. Indeed, about 30% of California votes Republican (well, not in the last election, because that was Trump—Orange County went Democrat for the first time in decades), so the number of disenfranchised Republicans alone in California is larger than the population of Michigan, which in turn is larger than the population of Wyoming, North Dakota, South Dakota, Montana, Nebraska, West Virginia, and Kansas combined. Indeed, there are more people in California than there are in Canada. So yeah, I’m thinking maybe we should get a lot of votes?

But it’s easy for you to drum up fear over “imperial rule” by California in particular, because we’re so liberal—and so urban, indeed an astonishing 95% urban, the most of any US state (or frankly probably any major regional entity on the planet Earth! To beat that you have to be something like Singapore, which literally just is a single city).

In fact, while insults thrown at urban people get thrown at basically all of us regardless of what we do, the insults thrown at rural people are mainly aimed at uneducated rural people. (And statistically, while many people in rural areas are educated and many people in urban areas are not, there’s definitely a positive correlation between urbanization and education.) It’s still unfair in many ways, not least because education isn’t entirely a choice, not in a society where tuition at an average private university costs more than the median individual income. Many of the people we mock as being stupid were really just born poor. It may not be their fault, but they can’t believe that the Earth is only 10,000 years old and not have some substantial failings in their education. I still don’t think mockery is the right answer; it’s really kicking them while they’re down. But clearly there is something wrong with our society when 40% of people believe something so obviously ludicrous—and those beliefs are very much concentrated in the same Southern states that have the most rural populations. “They think we’re ignorant just because we believe that God made the Earth 6,000 years ago!” I mean… yes? I’m gonna have to own up to that one, I guess. I do in fact think that people who believe things that were disproven centuries ago are ignorant.

So really this issue is one-sided. We who live in cities are being systematically degraded and disenfranchised, and when we challenge that system we are accused of being selfish or elitist or worse. We are told that our lifestyles are inferior and shameful, and when we speak out about the positive qualities of our lives—our education, our acceptance of diversity, our flexibility in the face of change—we are again accused of elitism and condescension.

We could simply stew in that resentment. But we can do better. We can reach out to people in rural areas, show them not just that our lives are better—as I said, they already know this—but that they can have these lives too. And we can make policy so that this really can happen for people. Envy doesn’t automatically lead to resentment; that only happens when combined with a lack of mobility. The way urban people pine for the countryside is baffling, since we could go there any time; but the way that country people long for the city is perfectly understandable, as our lives really are better but our rent is too high for them to afford. We need to bring that rent down, not just for the people already living in cities, but also for the people who want to but can’t.

And of course we don’t want to move everyone to cities, either. Many people won’t want to live in cities, and we need a certain population of farmers to make our food after all. We can work to improve infrastructure in rural areas—particularly when it comes to hospitals, which are a basic necessity that is increasingly underfunded. We shouldn’t stop using cost-effectiveness calculations, but we need to compare against the right things. If that hospital isn’t worth building, it should be because there’s another, better hospital we could make for the same amount or cheaper—not because we think that this town doesn’t deserve to have a hospital. We can expand our public transit systems over a wider area, and improve their transit speeds so that people can more easily travel to the city from further away.

We should seriously face up to the costs that free trade has imposed upon many rural areas. We can’t give up on free trade—but that doesn’t mean we need to keep our trade policy exactly as it is. We can do more to ensure that multinational corporations don’t have overwhelming bargaining power against workers and small businesses. We can establish a tax system that would redistribute more of the gains from free trade to the people and places most hurt by the transition. Right now, poor people in the US are often the most fiercely opposed to redistribution of wealth, because somehow they perceive that wealth will be redistributed from them when it would in fact be redistributed to them. They are in a scarcity mindset, their whole worldview shaped by the fact that they struggle to get by. They see every change as a threat, every stranger as an enemy.

Somehow we need to fight that mindset, to get them to see that there are many positive changes that can be made, many things that we can achieve together that none of us could achieve alone.

Wrong answers are better than no answer

Nov 6, JDN 2457699

I’ve been hearing some disturbing sentiments from some surprising places lately, things like “Economics is not a science, it’s just an extension of politics” and “There’s no such thing as a true model”. I’ve now met multiple economists who speak this way, who seem to be some sort of “subjectivists” or “anti-realists” (those links are to explanations of moral subjectivism and anti-realism, which are also mistaken, but in a much less obvious way, and are far more common views to express). It is possible to read most of the individual statements in a non-subjectivist way, but in the context of all of them together, it really gives me the general impression that many of these economists… don’t believe in economics. (Nor do they even believe in believing it, or they’d put up a better show.)

I think what has happened is that in the wake of the Second Depression, economists have had a sort of “crisis of faith”. The models we thought were right were wrong, so we may as well give up; there’s no such thing as a true model. The science of economics failed, so maybe economics was never a science at all.

I never really thought I’d be in this position, but in these circumstances I feel strongly inclined to defend neoclassical economics. Neoclassical economics is wrong; but subjectivism is not even wrong.

If a model is wrong, you can fix it. You can make it right, or at least less wrong. But if you give up on modeling altogether, your theory avoids being disproven only by making itself totally detached from reality. I can’t prove you wrong, but only because you’ve given up on the whole idea of being right or wrong.

As Isaac Asimov wrote, “when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

What we might call “folk economics”, what most people seem to believe about economics, is like thinking the Earth is flat—it’s fundamentally wrong, but not so obviously inaccurate on an individual scale that it can’t be a useful approximation for your daily life. Neoclassical economics is like thinking the Earth is spherical—it’s almost right, but still wrong in some subtle but important ways. Thinking that economics isn’t a science is wronger than both of them put together.

The sense in which “there’s no such thing as a true model” is true is a trivial one: There’s no such thing as a perfect model, because by the time you included everything you’d just get back the world itself. But there are better and worse models, and some of our very best models (quantum mechanics, Darwinian evolution) are really good enough that I think it’s quite perverse not to call them simply true. Economics doesn’t have such models yet for more than a handful of phenomena—but we’re working on it (at least, I thought that’s what we were doing!).

Indeed, a key point I like to make about realism—in science, morality, or whatever—is that if you think something can be wrong, you must be a realist. In order for an idea to be wrong, there must be some objective reality to compare it to that it can fail to match. If everything is just subjective beliefs and sociopolitical pressures, there is no such thing as “wrong”, only “unpopular”. I’ve heard many people say things like “Well, that’s just your opinion; you could be wrong.” No, if it’s just my opinion, then I cannot possibly be wrong. So choose a lane! Either you think I’m wrong, or you think it’s just my opinion—but you can’t have it both ways.

Now, it’s clearly true in the real world that there is a lot of very bad and unscientific economics going on. The worst is surely the stuff that comes out of right-wing think-tanks that are paid almost explicitly to come up with particular results that are convenient for their right-wing funders. (As Krugman puts it, “there are liberal professional economists, conservative professional economists, and professional conservative economists.”) But there’s also a lot of really unscientific economics done without such direct and obvious financial incentives. Economists get blinded by their own ideology, they choose what topics to work on based on what will garner the most prestige, they use fundamentally defective statistical techniques because journals won’t publish them if they don’t.

But of course, the same is true of many other fields, particularly in social science. Sociologists also get blinded by their pet theories; psychologists also abuse statistics because the journals make them do it; political scientists are influenced by their funding sources; anthropologists also choose what to work on based on what’s prestigious in the field.

Moreover, natural sciences do this too. String theorists are (almost by definition) blinded by their favorite theory. Biochemists are manipulated by the financial pressures of the pharmaceutical industry. Neuroscientists publish all sorts of statistically nonsensical research. I’d be very surprised if even geologists were immune to the social norms of academia telling them to work on the most prestigious problems. If this is enough reason to abandon a field as a science, it is a reason to abandon science, full stop. That is what you are arguing for here.

And really, this should be fairly obvious. Are workers and factories and televisions actual things that are actually here? Obviously they are. Therefore you can be right or wrong about how they interact. There is an obvious objective reality here that one can have more or less accurate beliefs about.

For socially-constructed phenomena like money, markets, and prices, this isn’t as obvious; if everyone stopped believing in the US Dollar, like Tinkerbell the US Dollar would cease to exist. But there does remain some objective reality (or if you like, intersubjective reality) here: I can be right or wrong about the price of a dishwasher or the exchange rate from dollars to pounds.

So, in order to abandon the possibility of scientifically accurate economics, you have to say that even though there is this obvious physical reality of workers and factories and televisions, we can’t actually study that scientifically, even when it sure looks like we’re studying it scientifically by performing careful observations, rigorous statistics, and even randomized controlled experiments. Even when I perform my detailed Bayesian analysis of my randomized controlled experiment, nope, that’s not science. It doesn’t count, for some reason.

The only at all principled way I can see you could justify such a thing is to say that once you start studying other humans you lose all possibility of scientific objectivity—but notice that by making such a claim you haven’t just thrown out psychology and economics, you’ve also thrown out anthropology and neuroscience. The statements “DNA evidence shows that all modern human beings descend from a common migration out of Africa” and “Human nerve conduction speed reaches up to about 120 meters per second” aren’t scientific? Then what in the world are they?

Or is it specifically behavioral sciences that bother you? Now perhaps you can leave out biological anthropology and basic neuroscience; there’s some cultural anthropology and behavioral neuroscience you have to still include, but maybe that’s a bullet you’re willing to bite. There is perhaps something intuitively appealing here: Since science is a human behavior, you can’t use science to study human behavior without an unresolvable infinite regress.

But there are still two very big problems with this idea.

First, you’ve got to explain how there can be this obvious objective reality of human behavior that is nonetheless somehow forever beyond our understanding. Even though people actually do things, and we can study those things using the usual tools of science, somehow we’re not really doing science, and we can never actually learn anything about how human beings behave.

Second, you’ve got to explain why we’ve done as well as we have. For some reason, people seem to have this impression that psychology and especially economics have been dismal failures, they’ve brought us nothing but nonsense and misery.

But where exactly do you think we got the lowest poverty rate in the history of the world? That just happened by magic, or by accident while we were doing other things? No, economists did that, on purpose—the UN Millennium Goals were designed, implemented, and evaluated by economists. Against staunch opposition from both ends of the political spectrum, we have managed to bring free trade to the world, and with it, some measure of prosperity.

The only other science I can think of that has been more successful at its core mission is biology; as xkcd pointed out, the biologists killed a Horseman of the Apocalypse while the physicists were busy making a new one. Congratulations on beating Pestilence, biologists; we economists think we finally have Famine on the ropes now. Hey political scientists, how is War going? Oh, not bad, actually? War deaths per capita are near their lowest levels in history? But clearly it would be foolhardy to think that economics and political science are actually sciences!

I can at least see why people might think psychology is a failure, because rates of diagnosis of mental illness keep rising higher and higher; but the key word there is diagnosis. People were already suffering from anxiety and depression across the globe; it’s just that nobody was giving them therapy or medication for it. Some people argue that all we’ve done is pathologize normal human experience—but this wildly underestimates the severity of many mental disorders. Wanting to end your own life for reasons you yourself cannot understand is not normal human experience being pathologized. (And the fact that 40,000 Americans commit suicide every year may make it common, but it does not make it normal. Is trying to keep people from dying of influenza “pathologizing normal human experience”? Well, suicide kills almost as many.) It’s possible there is some overdiagnosis; but there is also an awful lot of real mental illness that previously went untreated—and yes, meta-analysis shows that treatment can and does work.

Of course, we’ve made a lot of mistakes. We will continue to make mistakes. Many of our existing models are seriously flawed in very important ways, and many economists continue to use those models incautiously, blind to their defects. The Second Depression was largely the fault of economists, because it was economists who told everyone that markets are efficient, banks will regulate themselves, leave it alone, don’t worry about it.

But we can do better. We will do better. And we can only do that because economics is a science, it does reflect reality, and therefore we make ourselves less wrong.

No, Scandinavian countries aren’t parasites. They’re just… better.

Oct 1, JDN 2457663

If you’ve been reading my blogs for a while, you likely have noticed me occasionally drop the hashtag #ScandinaviaIsBetter; I am in fact quite enamored of the Scandinavian (or Nordic more generally) model of economic and social policy.

But this is not a consensus view (except perhaps within Scandinavia itself), and I haven’t actually gotten around to presenting a detailed argument for just what it is that makes these countries so great.

I was inspired to do this by discussion with a classmate of mine (who shall remain nameless) who emphatically disagreed; he actually seems to think that American economic policy is somewhere near optimal (and to be fair, it might actually be near optimal, in the broad space of all possible economic policies—we are not Maoist China, we are not Somalia, we are not a nuclear wasteland). He couldn’t disagree with the statistics on how wealthy and secure and happy Scandinavian countries are, so instead he came up with this: “They are parasites.”

What he seemed to mean by this is that somehow Scandinavian countries achieve their success by sapping wealth from other countries, perhaps the rest of Europe, perhaps the world more generally. On this view, it’s not that Norway and Denmark are rich because they have basically figured out economic policy; no, they are somehow draining those riches from elsewhere.

This could scarcely be further from the truth.

But first, consider a couple of countries that are parasites, at least partially: Luxembourg and Singapore.

Singapore has an enormous trade surplus: 5.5 billion SGD per month, which is $4 billion per month, so almost $50 billion per year. They also have a positive balance of payments of $61 billion per year. Singapore’s total GDP is about $310 billion, so these are not small amounts. What does this mean? It means that Singapore is taking in a lot more money than they are spending out. They are effectively acting as mercantilists, or if you like as a profit-seeking corporation.

Moreover, Singapore is totally dependent on trade: their exports are over $330 billion per year, and their imports are over $280 billion. You may recognize each of these figures as comparable to the entire GDP of the country. Yes, their total trade is 200% of GDP. They aren’t really so much a country as a gigantic trading company.

What about Luxembourg? Well, they have a trade deficit of about $560 million per year: their imports total about $2 billion per year, and their exports about $1.5 billion. Since Luxembourg’s total GDP is $56 billion, these aren’t unreasonably huge figures (total trade is about 6% of GDP); so Luxembourg isn’t a parasite in the sense that Singapore is.
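The trade-openness arithmetic in the last few paragraphs can be checked with a quick sketch. These are the round figures quoted above (billions of USD per year), not independently verified data:

```python
def trade_openness(exports: float, imports: float, gdp: float) -> float:
    """Total trade (exports + imports) as a fraction of GDP."""
    return (exports + imports) / gdp

# Singapore: exports ~$330B, imports ~$280B, GDP ~$310B
singapore = trade_openness(330, 280, 310)

# Luxembourg: exports ~$1.5B, imports ~$2B, GDP ~$56B
luxembourg = trade_openness(1.5, 2, 56)

print(f"Singapore:  {singapore:.0%} of GDP")
print(f"Luxembourg: {luxembourg:.0%} of GDP")
```

An openness near 200% of GDP—possible only because exports and imports are each counted against GDP, which nets out intermediate trade—is what marks Singapore as an entrepôt economy rather than a typical country.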

No, what makes Luxembourg a parasite is the fact that 36% of their GDP is due to finance. Compare the US, where 12% of our GDP is finance—and we are clearly overfinancialized. Over a third of Luxembourg’s income doesn’t involve actually… doing anything. They hold onto other people’s money and place bets with it. Even insofar as finance can be useful, it should be only very slightly profitable, and definitely not more than 10% of GDP. As Stiglitz and Krugman agree (and both are Nobel Laureate economists), banking should be boring.

Do either of these arguments apply to Scandinavia? Let’s look at trade first. Denmark’s imports total about 42 billion DKK per month, which is about $70 billion per year. Their exports total about $90 billion per year. Denmark’s total GDP is $330 billion, so these numbers are quite reasonable. What are their main sectors? Manufacturing, farming, and fuel production. Notably, not finance.

Similar arguments hold for Sweden and Norway. They may be small countries, but they have diversified economies and strong production of real economic goods. Norway is probably overly dependent on oil exports, but they are specifically trying to move away from that right now. Even as it is, only about $90 billion of their $150 billion in exports is related to oil, and exports in general are only about 35% of GDP, so oil is about 20% of Norway’s GDP. Compare that to Saudi Arabia, where oil makes up 90% of exports and accounts for 45% of GDP. If oil were to suddenly disappear, Norway would lose 20% of their GDP, dropping their per-capita GDP… all the way to the same as the US. (Terrifying!) But Saudi Arabia would suffer a total economic collapse, and their per-capita GDP would fall from where it is now at about the same as the US to about the same as Greece.
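The oil-dependence figure for Norway follows from two ratios quoted above; a minimal sketch of that arithmetic, using the text’s approximate numbers:

```python
# Norway, per the figures quoted in the text (approximate, in billions USD):
oil_exports = 90            # oil-related exports
total_exports = 150         # total exports
exports_share_of_gdp = 0.35 # exports as a share of GDP

# Oil's share of GDP = (oil share of exports) x (exports' share of GDP)
oil_share_of_gdp = (oil_exports / total_exports) * exports_share_of_gdp
print(f"Norway: oil is roughly {oil_share_of_gdp:.0%} of GDP")
```

This gives about 21%, consistent with the “about 20%” claim; Saudi Arabia’s 45% is quoted directly rather than derived.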

And at least oil actually does things. Oil exporting countries aren’t parasites so much as they are drug dealers. The world is “rolling drunk on petroleum”, and until we manage to get sober we’re going to continue to need that sweet black crude. Better we buy it from Norway than Saudi Arabia.

So, what is it that makes Scandinavia so great? Why do they have the highest happiness ratings, the lowest poverty rates, the best education systems, the lowest unemployment rates, the best social mobility and the highest incomes? To be fair, in most of these not literally every top spot is held by a Scandinavian country; Canada does well, Germany does well, the UK does well, even the US does well. Unemployment rates in particular deserve further explanation, because a lot of very poor countries report surprisingly low unemployment rates, such as Cambodia and Laos.

It’s also important to recognize that even great countries can have serious flaws, and the remnants of the feudal system in Scandinavia—especially in Sweden—still contribute to substantial inequality of wealth and power.

But in general, I think if you assembled a general index of overall prosperity of a country (or simply used one that already exists like the Human Development Index), you would find that Scandinavian countries are disproportionately represented at the very highest rankings. This calls out for some sort of explanation.

Is it simply that they are so small? They are certainly quite small; Norway and Denmark each have fewer people than the core of New York City, and Sweden has slightly more people than the Chicago metropolitan area. Put them all together, add in Finland and Iceland (which aren’t quite Scandinavia), and all together you have about the population of the New York City Combined Statistical Area.

But some of the world’s smallest countries are also its poorest. Samoa and Kiribati each have populations comparable to the city of Ann Arbor and per-capita GDPs 1/10 that of the US. Eritrea is the same size as Norway, and 70 times poorer. Burundi is slightly larger than Sweden, and has a per-capita GDP PPP of only $3.14 per day.

There’s actually a good statistical reason to expect that the smallest countries should vary the most in their incomes; you’re averaging over a smaller sample so you get more variance in the estimate. But this doesn’t explain why Norway is rich and Eritrea is poor. Incomes aren’t assigned randomly. This might be a reason to try comparing Norway to specifically New York City or Los Angeles rather than to the United States as a whole (Norway still does better, in case you were wondering—especially compared to LA); but it’s not a reason to say that Norway’s wealth doesn’t really count.

Is it because they are ethnically homogeneous? Yes, relatively speaking; but perhaps not as much as you imagine. 14% of Sweden’s population are immigrants, of whom 64% are from outside the EU. 10% of Denmark’s population are immigrants, of whom 66% came from non-Western countries. Immigrants are 13% of Norway’s population, of whom half are from non-Western countries.

That’s certainly more ethnically homogeneous than the United States; 13% of our population is immigrants, which may sound comparable, but almost all non-immigrants in Scandinavia are of indigenous Nordic descent, all “White” by the usual classification. Meanwhile the United States is 64% non-Hispanic White, 16% Hispanic, 12% Black, 5% Asian, and 1% Native American or Pacific Islander.

Scandinavian countries are actually by some measures less homogeneous than the US in terms of religion, however; only 4% of Americans are not Christian (78.5%), atheist (16.1%), or Jewish (1.7%), and only 0.6% are Muslim. In Sweden, on the other hand, 60% of the population is nominally Lutheran, but 80% is atheist, and 5% of the population is Muslim. So if you think of Christian/Muslim as the sharp divide (theologically this doesn’t make a whole lot of sense, but it seems to be the salient cultural divide right now), then Sweden has more religious conflict to worry about than the US does.

Moreover, there are some very ethnically homogeneous countries that are in horrible shape. North Korea is almost completely ethnically homogeneous, for example, as is Haiti. There does seem to be a correlation between higher ethnic diversity and lower economic prosperity, but Canada and the US are vastly more diverse than Japan and South Korea yet significantly richer. So clearly ethnicity is not the whole story here.

I do think ethnic homogeneity can partly explain why Scandinavian countries have the good policies they do; because humans are tribal, ethnic homogeneity engenders a sense of unity and cooperation, a notion that “we are all in this together”. That egalitarian attitude makes people more comfortable with some of the policies that make Scandinavia what it is, which I will get into at the end of this post.

What about culture? Is there something about Nordic ideas, those Viking traditions, that makes Scandinavia better? Miles Kimball has argued this; he says we need to import “hard work, healthy diets, social cohesion and high levels of trust—not Socialism”. And truth be told, it’s hard to refute this assertion, since it’s very difficult to isolate and control for cultural variables even though we know they are important.

But this difficulty in falsification is a reason to be cautious about such a hypothesis; it should be a last resort when all the more testable theories have been ruled out. I’m not saying culture doesn’t matter; it clearly does. But unless you can test it, “culture” becomes a theory that can explain just about anything—which means that it really explains nothing.

The “social cohesion and high levels of trust” part actually can be tested to some extent—and it is fairly well supported. High levels of trust are strongly correlated with economic prosperity. But we don’t really need to “import” that; the US is already near the top of the list in countries with the highest levels of trust.

I can’t really disagree with “good diet”, except to say that almost everywhere eats a better diet than the United States. The homeland of McDonald’s and Coca-Cola is frankly quite dystopian when it comes to rates of heart disease and diabetes. Given our horrible diet and ludicrously inefficient healthcare system, the only reason we live as long as we do is that we are an extremely rich country (so we can afford to pay the most for healthcare, for certain definitions of “afford”), and almost no one here smokes anymore. But good diet isn’t so much Scandinavian as it is… un-American.

But as for “hard work”, he’s got it backwards; the average number of work hours per week is 33 in Denmark and Norway, compared to 38 in the US. Among full-time workers in the US, the average number of hours per week is a whopping 47. Working hours in the US are much more intensive than anywhere in Europe, including Scandinavia. Though of course we are nowhere near the insane work addiction suffered by most East Asian countries; lately South Korea and Japan have been instituting massive reforms to try to get people to stop working themselves to death. And not surprisingly, work-related stress is a leading cause of death in the United States. If anything, we need to import some laziness, or at least a sense of work-life balance. (Indeed, I’m fairly sure that the only reason he said “hard work” is that it’s a cultural Applause Light in the US; being against hard work is like being against the American Flag or homemade apple pie. At this point, “we need more hard work” isn’t so much an assertion as it is a declaration of tribal membership.)

But none of these things adequately explains why poverty and inequality are so much lower in Scandinavia than in the United States, and there’s really a quite simple explanation.

Why is it that #ScandinaviaIsBetter? They’re not afraid to make rich people pay higher taxes so they can help poor people.

In the US, this idea of “redistribution of wealth” is anathema, even taboo; simply accusing a policy of being “redistributive” or “socialist” is for many Americans a knock-down argument against that policy. In Denmark, “socialist” is a meaningful descriptor; some policies are “socialist”, others “capitalist”, and these aren’t particularly weighted terms; it’s like saying here that a policy is “Keynesian” or “Monetarist”, or if that’s too obscure, saying that it’s “liberal” or “conservative”. People will definitely take sides, and it is a matter of political importance—but it’s inside the Overton Window, not almost unthinkable as it is here.

If culture has an effect here, it likely comes from Scandinavia’s long traditions of egalitarianism. Going at least back to the Vikings, in theory at least (clearly not always in practice), people—or at least fellow Scandinavians—were considered equal participants in society, no one “better” or “higher” than anyone else. Even today, it is impolite in Denmark to express pride at your own accomplishments; there’s a sense that you are trying to present yourself as somehow more deserving than others. Honestly this attitude seems unhealthy to me, though perhaps preferable to the unrelenting narcissism of American society; but insofar as culture is making Scandinavia better, it’s almost certainly because this thoroughgoing sense of egalitarianism underlies all their economic policy. In the US, the rich are brilliant and the poor are lazy; in Denmark, the rich are fortunate and the poor are unlucky. (Which theory is more accurate? Donald Trump. I rest my case.)

To be clear, Scandinavia is not communist; and they are certainly not Stalinist. They don’t believe in total collectivization of industry, or complete government control over the economy. They don’t believe in complete, total equality, or even a hard cap on wealth: Stefan Persson is an 11-figure billionaire. Does he pay high taxes, living in Sweden? Yes he does, considerably higher than he’d pay in the US. He seems to be okay with that. Why, it’s almost like his marginal utility of wealth is now negligible.

Scandinavian countries also don’t try to micromanage your life in the way often associated with “socialism”—in fact I’d say they do it less than we do in the US. Here we have Republicans who want to require drug tests for food stamps even though that literally wastes money and helps no one; there they just provide a long list of government benefits for everyone free of charge. They just held a conference in Copenhagen to discuss the possibility of transitioning many of these benefits into a basic income; and basic income is the least intrusive means of redistributing wealth.

In fact, because Scandinavian countries tax differently, it’s not necessarily the case that people always pay higher taxes there. But they pay more transparent taxes, and taxes with sharper incidence. Denmark’s corporate tax rate is only 22% compared to 35% in the US; but their top personal income tax bracket is 59% while ours is only 39.6% (though it can rise over 50% with some state taxes). Denmark also has a land value tax and a VAT, both of which most economists have advocated for generations. (The land value tax I totally agree with; the VAT I’m a little more ambivalent about.) Moreover, filing your taxes in Denmark is not a month-long stress marathon of gathering paperwork, filling out forms, and fearing that you’ll get something wrong and be audited as it is in the US; they literally just send you a bill. You can contest it, but most people don’t. You just pay it and you’re done.

Now, that does mean the government is keeping track of your income; and I might think that Americans would never tolerate such extreme surveillance… and then I remember that PRISM is a thing. Apparently we’re totally fine with the NSA reading our emails, but God forbid the IRS just fill out our 1040s for us (that they are going to read anyway). And there’s no surveillance involved in requiring retail stores to incorporate sales tax into listed price like they do in Europe instead of making us do math at the cash register like they do here. It’s almost like Americans are trying to make taxes as painful as possible.

Indeed, I think Scandinavian socialism is a good example of how high taxes are a sign of a free society, not an authoritarian one. Taxes are a minimal incursion on liberty. High taxes are how you fund a strong government and maintain extensive infrastructure and public services while still being fair and following the rule of law. The lowest tax rates in the world are in North Korea, which has ostensibly no taxes at all; the government just confiscates whatever they decide they want. Taxes in Venezuela are quite low, because the government just owns all the oil refineries (and also uses multiple currency exchange rates to arbitrage seigniorage). US taxes are low by First World standards, but not by world standards, because we combine a free society with a staunch opposition to excessive taxation. Most of the rest of the free world is fine with paying a lot more taxes than we do. In fact, even using Heritage Foundation data, there is a clear positive correlation between higher tax rates and higher economic freedom:
Graph: Heritage Foundation Economic Freedom Index and tax burden

What’s really strange, though, is that most Americans actually support higher taxes on the rich. They often have strange or even incoherent ideas about what constitutes “rich”; I have extended family members who have said they think $100,000 is an unreasonable amount of money for someone to make, yet somehow are totally okay with Donald Trump making $300,000,000. The chant “we are the 99%” has always been off by a couple orders of magnitude; the plutocrat rentier class is the top 0.01%, not the top 1%. The top 1% consists mainly of doctors and lawyers and engineers; the top 0.01%, to a man—and they are nearly all men, in fact White men—either own corporations or work in finance. But even adjusting for all this, it seems like at least a bare majority of Americans are all right with “redistributive” “socialist” policies—as long as you don’t call them that.

So I suppose that’s sort of what I’m trying to do; don’t think of it as “socialism”. Think of it as #ScandinaviaIsBetter.

We need to be honest about free trade’s costs, and clearer about its benefits

August 6, JDN 2457607

I discussed in a post a while ago the fact that economists overwhelmingly favor free trade but most people don’t. There are some deep psychological reasons for this, particularly loss aversion, which makes people feel losses about twice as intensely as they feel equivalent gains. Free trade requires change; it creates some jobs and destroys others. Those forced transitions can be baffling and painful.
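
To put a number on that “about twice”, here is a sketch of the value function from Kahneman and Tversky’s prospect theory, using their standard parameter estimates (curvature alpha = 0.88 and loss-aversion coefficient lambda = 2.25; lambda is the “about twice” factor):

```python
# Prospect-theory value function with Tversky & Kahneman's 1992
# estimates: alpha = 0.88, lambda = 2.25. Losses are scaled up by lambda.
def value(x, alpha=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A $100 loss is felt more than twice as strongly as a $100 gain.
print(abs(value(-100)) > 2 * value(100))  # True
```

So a trade deal that destroys one job and creates one job of equal value reads, psychologically, as a net loss.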

The good news is that views on trade in the US are actually getting more positive in recent years—which makes Trump that much more baffling. I honestly can’t make much sense of the fact that candidates who are against free trade have been so big in this election (and let’s face it, even Bernie Sanders is largely against free trade!), in light of polls showing that free trade is actually increasingly popular.

Partly this can be explained by the fact that people are generally more positive about free trade in general than they are about particular trade agreements, and understandably so, as free trade agreements often include some really awful provisions that in no way advance free trade. But that doesn’t really explain the whole effect here. Maybe it’s a special interest effect: People who hate trade are much more passionate about hating trade than people who like trade are passionate about liking trade. If that’s the case, then this is what we need to change.

Today I’d like to focus on what we as economists and the economically literate more generally can do to help people understand what free trade is and why it is so important. This means two things:

First, of course, we must be clearer about the benefits of free trade. Many economists seem to think that it is simply so obvious that they don’t even bother to explain it, and end up seeming like slogan-chanting ideologues. “Free trade! Free trade! Free trade!”

Above all, we need to talk about how it was primarily through free trade that global extreme poverty is now at the lowest level it has ever been. This benefit needs to be repeated over and over, and anyone who argues for protectionism needs to be confronted with the millions of people they will throw back into poverty. Most people don’t even realize that global poverty is declining, so first of all, they need to be shown that it is.

American ideas are often credited with fighting global poverty, but that’s not so convincing, since most of the improvement in poverty has happened in China (not exactly a paragon of free markets, much less liberal democracy); what really seems to have made the difference is American dollars, spent in free trade. Imports to the US from China have risen from $3.8 billion in 1985 to $483 billion in 2015. Extreme poverty in China fell from 61% of the population in 1990 to 4% in 2015. Coincidence? I think not. Indeed, that $483 billion is just about $1 per day for every man, woman, and child in China—and the UN extreme poverty line is $1.25 per person per day.
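
The per-day figure is simple arithmetic (the $483 billion comes from the text; the population figure of roughly 1.37 billion is my own assumption for 2015):

```python
# Rough check: 2015 US imports from China, spread over China's population.
imports_usd = 483e9     # $483 billion per year (figure from the text)
population = 1.37e9     # ~1.37 billion people (assumed 2015 population)

per_person_per_day = imports_usd / population / 365
print(round(per_person_per_day, 2))  # 0.97 -- just about $1 per day
```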

We need to be talking about the jobs that are created by trade—if need be, making TV commercials interviewing workers at factories who make products for export. “Most of our customers are in Japan,” they might say. “Without free trade, I’d be out of a job.” Interview business owners saying things like, “Two years ago we opened up sales to China. Now I need to double my workforce just to keep up with demand.” Unlike a lot of other economic policies where the benefits are diffuse and hard to keep track of, free trade is one where you can actually point to specific people and see that they are now better off because they make more selling exports. From there, we just need to point out that imports and exports are two sides of the same transaction—so if you like exports, you’d better have imports.

We need to make it clear that the economic gains from trade are just as real as the losses from transition, even if they may not be as obvious. William Poole put it very well in this article on attitudes toward free trade:

Economists are sometimes charged with insensitivity over job losses, when in fact most of us are extremely sensitive to such losses. What good economics tells us is that saving jobs in one industry does not save jobs in the economy as a whole. We urge people to be as sensitive to the jobs indirectly lost as a consequence of trade restriction as to those lost as a consequence of changing trade patterns.

Second, just as importantly, we must be honest about the costs of free trade. We need to stop eliding the distinction between net aggregate benefits and benefits for everyone everywhere. There are winners and losers, and we need to face up to that.

For example, we need to stop saying things like “Free trade will not send jobs to Mexico and China.” No, it absolutely will, and has, and does—and that is part of what it’s for. Because people in Mexico and China are people, and they deserve to have better jobs just as much as we do. Sending jobs to China is not a bug; it’s a feature. China needs jobs particularly badly.

Then comes the next part: “But if our jobs get sent to China, what will we do?” Better jobs, created here by the economic benefits of free trade. No longer will American workers toil in factories assembling parts; instead they will work in brightly-lit offices designing those parts on CAD software.

Of course this raises another problem: What happens to people who were qualified to toil in factories, but aren’t qualified to design parts on CAD software? Well, they’ll need to learn. And we should be paying for that education (though in large part, we are; altogether US federal, state, and local governments spend over $1 trillion a year on education).

And what if they can’t learn, can’t find another job somewhere else? What if they’re just not cut out for the kind of work we need in a 21st century economy? Then here comes my most radical statement of all: Then they shouldn’t have to.

The whole point of expanding economic efficiency—which free trade most certainly does—is to create more stuff. But if you create more stuff, you then have the opportunity to redistribute that stuff, in such a way that no one is harmed by that transition. This is what we have been failing to do in the United States. We need to set up our unemployment and pension systems so that people who lose their jobs due to free trade are not harmed by it, but instead feel like it is an opportunity to change careers or retire. We should have a basic income so that even people who can’t work at all can still live with dignity. This redistribution will not happen automatically; it is a policy choice we must make.


In theory there is a way around it, which is often proposed as an alternative to a basic income: a job guarantee. Simply giving everyone free money makes people uncomfortable for some reason (I never could quite fathom why: Donald Trump inheriting capital income from his father is fine, but all of us inheriting shared capital income as a nation is a handout?), so instead we give everyone a job, so they can earn their money!

Well, here’s the thing: They won’t actually be earning it—or else it’s not a job guarantee. If you just want an active labor-market program to retrain workers and match them with jobs, that sounds great; Denmark has had great success with such things, and after all #ScandinaviaIsBetter. But no matter how good your program is, some people are going to not have any employable skills, or have disabilities too severe to do any productive work, or simply be too lazy to actually work. And now you’ve got a choice to make: Do you give those people jobs, or not?

If you don’t, it’s not a job guarantee. If you do, they’re not earning it anymore. Either employment is tied to actual productivity, or it isn’t; if you are guaranteed a certain wage no matter what you do, then some people are going to get that wage for doing nothing. As The Economist put it:

However, there are two alternatives: give people money with no strings attached (through a guaranteed basic income, unemployment insurance, disability payments, and so forth), or just make unemployed people survive on whatever miserable scraps they can cobble together.

If it’s really a job guarantee, we would still need to give jobs to people who can’t work or simply won’t. How is this different from a basic income? Well, it isn’t, except you added all these extra layers of bureaucracy so that you could feel like you weren’t just giving a handout. You’ve added additional costs for monitoring and administration, as well as additional opportunities for people to slip through the cracks. Either you are going to leave some people in poverty, or you are going to give money to people who don’t work—so why not give money to people who don’t work?

Another cost we need to be honest about is ecological. In our rush to open free trade, we are often lax in ensuring that this trade will not accelerate environmental degradation and climate change. This is often justified in the name of helping the world’s poorest people; but they will be hurt far more when their homes are leveled by hurricanes than by waiting a few more years to get the trade agreement right. That’s one where Poole actually loses me:

Few Americans favor a world trading system in which U.S. policies on environmental and other conditions could be controlled by foreign governments through their willingness to accept goods exported by the United States.

Really? You think we should be able to force other countries to accept our goods, regardless of whether they consider them ecologically sustainable? You think most Americans think that? It’s easy to frame it as other people imposing on us, but trade restrictions on ecologically harmful goods are actually a very minimal—indeed, almost certainly insufficient—regulation against environmental harm. Oil can still kill a lot of people even if it never crosses borders (or at least never crosses them in liquid form; the carbon dioxide from burning it crosses every border). We desperately need global standards on ecological sustainability, and while we must balance environmental regulations with economic efficiency, currently that balance is tipped way too far against the environment—and millions will die if it remains this way.

This is the kernel of truth in otherwise economically-ignorant environmentalist diatribes like Naomi Klein’s This Changes Everything; free trade in principle doesn’t say anything about being environmentally unsustainable, but free trade in practice has often meant cutting corners and burning coal. Where we currently have diesel-powered container ships built in coal-powered factories and Klein wants no container ships and perhaps even no factories, what we really need are nuclear-powered container ships and solar-powered factories. Klein points out cases where free trade agreements have shut down solar projects that tried to create local jobs—but neither side seems to realize that a good free trade agreement would expand that solar project to create global jobs. Instead of building solar panels in Canada to sell only in Canada, we’d build solar panels in Canada to sell in China and India—and build ten times as many. That is what free trade could be, if we did it right.

“The cake is a lie”: The fundamental distortions of inequality

July 13, JDN 2457583

Inequality of wealth and income, especially when it is very large, fundamentally and radically distorts outcomes in a capitalist market. I’ve already alluded to this matter in previous posts on externalities and marginal utility of wealth, but it is so important I think it deserves to have its own post. In many ways this marks a paradigm shift: You can’t think about economics the same way once you realize it is true.

To motivate what I’m getting at, I’ll expand upon an example from a previous post.

Suppose there are only two goods in the world; let’s call them “cake” (K) and “money” (M). Then suppose there are three people, Baker, who makes cakes, Richie, who is very rich, and Hungry, who is very poor. Furthermore, suppose that Baker, Richie and Hungry all have exactly the same utility function, which exhibits diminishing marginal utility in cake and money. To make it more concrete, let’s suppose that this utility function is logarithmic, specifically: U = 10*ln(K+1) + ln(M+1)

The only difference between them is in their initial endowments: Baker starts with 10 cakes, Richie starts with $100,000, and Hungry starts with $10.

Therefore their starting utilities are:

U(B) = 10*ln(10+1) = 23.98

U(R) = ln(100,000+1) = 11.51

U(H) = ln(10+1) = 2.40

Thus, the total happiness is the sum of these: U = 37.89
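
For anyone who wants to check the arithmetic, the whole setup fits in a few lines of Python:

```python
from math import log

# Shared utility function from the text: U = 10*ln(K+1) + ln(M+1),
# where K is cakes and M is dollars.
def utility(cakes, money):
    return 10 * log(cakes + 1) + log(money + 1)

u_baker  = utility(10, 0)        # 23.98
u_richie = utility(0, 100_000)   # 11.51
u_hungry = utility(0, 10)        # 2.40

total = u_baker + u_richie + u_hungry
print(round(total, 2))  # 37.89
```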

Now let’s ask two very simple questions:

1. What redistribution would maximize overall happiness?
2. What redistribution will actually occur if the three agents trade rationally?

If multiple agents have the same diminishing marginal utility function, it’s actually a simple and deep theorem that the total will be maximized if they split the wealth exactly evenly. In the following blockquote I’ll prove the simplest case, which is two agents and one good; it’s an incredibly elegant proof:

Given: for all x, f(x) > 0, f'(x) > 0, f''(x) < 0.

Maximize: f(x) + f(A-x) for fixed A

f'(x) - f'(A-x) = 0

f'(x) = f'(A-x)

Since f''(x) < 0, f' is strictly decreasing, and therefore injective.

x = A - x

x = A/2

Since f''(x) < 0, the second derivative f''(x) + f''(A-x) is negative, so this critical point is indeed a maximum.

QED
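
The result is also easy to confirm numerically; here is a crude grid search for the two-agent, one-good case, using f(x) = ln(x+1) as one example of an increasing concave function:

```python
from math import log

# Split an amount A of one good between two agents who share the
# concave utility f; search for the split maximizing total utility.
A = 10.0
def f(x):
    return log(x + 1)

grid = [i * A / 10_000 for i in range(10_001)]
best_x = max(grid, key=lambda x: f(x) + f(A - x))
print(best_x)  # 5.0 -- the even split x = A/2
```

Swap in any other increasing concave f (a square root, say) and the maximizer stays at A/2.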

This can be generalized to any number of agents, and for multiple goods. Thus, in this case overall happiness is maximized if the cakes and money are both evenly distributed, so that each person gets 3 1/3 cakes and $33,336.66.

The total utility in that case is:

3 * (10*ln(10/3 + 1) + ln(33,336.66 + 1)) = 3 * (14.66 + 10.414) = 3 * 25.074 = 75.22

That’s considerably better than our initial distribution (almost twice as good). Now, how close do we get by rational trade?

Each person is willing to trade up until the point where their marginal utility of cake is equal to their marginal utility of money. The price of cake will be set by the respective marginal utilities.

In particular, let’s look at the trade that will occur between Baker and Richie. They will trade until their marginal rate of substitution is the same.

The actual algebra involved is obnoxious (if you’re really curious, here are some solved exercises of similar trade problems), so let’s just skip to the end. (I rushed through, so I’m not actually totally sure I got it right, but to make my point the precise numbers aren’t important.) Basically what happens is that Richie pays an exorbitant price of $10,000 per cake, buying half the cakes with half of his money.

Baker’s new utility and Richie’s new utility are thus the same:

U(R) = U(B) = 10*ln(5+1) + ln(50,000+1) = 17.92 + 10.82 = 28.74

What about Hungry? Yeah, well, he doesn’t have $10,000. If cakes are infinitely divisible, he can buy up to 1/1000 of a cake. But it turns out that even that isn’t worth doing (it would cost too much for what he gains from it), so he may as well buy nothing, and his utility remains 2.40.
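
You can verify Hungry’s choice directly: spending his entire $10 on 1/1000 of a cake would actually lower his utility.

```python
from math import log

# Shared utility function from the text: U = 10*ln(K+1) + ln(M+1).
def utility(cakes, money):
    return 10 * log(cakes + 1) + log(money + 1)

u_keep_money = utility(0, 10)        # ~2.40: keep the $10, buy nothing
u_buy_crumb  = utility(1 / 1000, 0)  # ~0.01: blow it all on 1/1000 cake

print(u_buy_crumb < u_keep_money)  # True
```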

Hungry wanted cake just as much as Richie, and because Richie has so much more Hungry would have gotten more happiness from each new bite. Neoclassical economists promised him that markets were efficient and optimal, and so he thought he’d get the cake he needs—but the cake is a lie.

The total utility is therefore:

U = U(B) + U(R) + U(H)

U = 28.74 + 28.74 + 2.40

U = 59.88

Note three things about this result: First, it is more than where we started at 37.89—trade increases utility. Second, both Richie and Baker are better off than they were—trade is Pareto-improving. Third, the total is less than the optimal value of 75.22—trade is not utility-maximizing in the presence of inequality. This is a general theorem that I could prove formally, if I wanted to bore and confuse all my readers. (Perhaps someday I will try to publish a paper doing that.)
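
All three observations can be checked in a few lines, again using the shared utility function:

```python
from math import log

def utility(cakes, money):
    # U = 10*ln(K+1) + ln(M+1), as defined in the text.
    return 10 * log(cakes + 1) + log(money + 1)

initial = utility(10, 0) + utility(0, 100_000) + utility(0, 10)  # the 37.89 above
after_trade = 2 * utility(5, 50_000) + utility(0, 10)            # the ~59.88 after trade
optimal = 3 * utility(10 / 3, 100_010 / 3)                       # the ~75.22 optimum

# Trade improves on the starting point, but falls short of the optimum.
print(initial < after_trade < optimal)  # True
```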

This result is incredibly radical—it basically goes against the core of neoclassical welfare theory, or at least of all its applications to real-world policy—so let me be absolutely clear about what I’m saying, and what assumptions I had to make to get there.

I am saying that if people start with different amounts of wealth, the trades they would willfully engage in, acting purely under their own self interest, would not maximize the total happiness of the population. Redistribution of wealth toward equality would increase total happiness.

First, I had to assume that we could simply redistribute goods however we like without affecting the total amount of goods. This is wildly unrealistic, which is why I’m not actually saying we should reduce inequality to zero (as would follow if you took this result completely literally). Ironically, this is an assumption that most neoclassical welfare theory agrees with—the Second Welfare Theorem only makes any sense in a world where wealth can be magically redistributed between people without any harmful economic effects. If you weaken this assumption, what you find is basically that we should redistribute wealth toward equality, but beware of the tradeoff between too much redistribution and too little.

Second, I had to assume that there’s such a thing as “utility”—specifically, interpersonally comparable cardinal utility. In other words, I had to assume that there’s some way of measuring how much happiness each person has, and meaningfully comparing them so that I can say whether taking something from one person and giving it to someone else is good or bad in any given circumstance.

This is the assumption neoclassical welfare theory generally does not accept; instead they use ordinal utility, on which we can only say whether things are better or worse, but never by how much. Thus, their only way of determining whether a situation is better or worse is Pareto efficiency, which I discussed in a post a couple years ago. The change from the situation where Baker and Richie trade and Hungry is left in the lurch to the situation where all share cake and money equally in socialist utopia is not a Pareto-improvement. Richie and Baker are slightly worse off with 25.07 utilons in the latter scenario, while they had 28.74 utilons in the former.

Third, I had to assume selfishness—which is again fairly unrealistic, but again not something neoclassical theory disagrees with. If you weaken this assumption and say that people are at least partially altruistic, you can get the result where instead of buying things for themselves, people donate money to help others out, and eventually the whole system achieves optimal utility by willful actions. (It depends just how altruistic people are, as well as how unequal the initial endowments are.) This actually is basically what I’m trying to make happen in the real world—I want to show people that markets won’t do it on their own, but we have the chance to do it ourselves. But even then, it would go a lot faster if we used the power of government instead of waiting on private donations.

Also, I’m ignoring externalities, which are a different type of market failure which in no way conflicts with this type of failure. Indeed, there are three basic functions of government in my view: One is to maintain security. The second is to cancel externalities. The third is to redistribute wealth. The DOD, the EPA, and the SSA, basically. One could also add macroeconomic stability as a fourth core function—the Fed.

One way to escape my theorem would be to deny interpersonally comparable utility, but this makes measuring welfare in any way (including the usual methods of consumer surplus and GDP) meaningless, and furthermore results in the ridiculous claim that we have no way of being sure whether Bill Gates is happier than a child starving and dying of malaria in Burkina Faso, because they are two different people and we can’t compare different people. Far more reasonable is not to believe in cardinal utility, meaning that we can say an extra dollar makes you better off, but we can’t put a number on how much.

And indeed, the difficulty of even finding a unit of measure for utility would seem to support this view: Should I use QALY? DALY? A Likert scale from 0 to 10? There is no known measure of utility that is without serious flaws and limitations.

But it’s important to understand just how strong your denial of cardinal utility needs to be in order for this theorem to fail. It’s not enough that we can’t measure precisely; it’s not even enough that we can’t measure with current knowledge and technology. It must be fundamentally impossible to measure. It must be literally meaningless to say that taking a dollar from Bill Gates and giving it to the starving Burkinabe would do more good than harm, as if you were asserting that triangles are greener than schadenfreude.

Indeed, the whole project of welfare theory doesn’t make a whole lot of sense if all you have to work with is ordinal utility. Yes, in principle there are policy changes that could make absolutely everyone better off, or make some better off while harming absolutely no one; and the Pareto criterion can indeed tell you that those would be good things to do.

But in reality, such policies almost never exist. In the real world, almost anything you do is going to harm someone. The Nuremberg trials harmed Nazi war criminals. The invention of the automobile harmed horse trainers. The discovery of scientific medicine took jobs away from witch doctors. Inversely, almost any policy is going to benefit someone. The Great Leap Forward was a pretty good deal for Mao. The purges advanced the self-interest of Stalin. Slavery was profitable for plantation owners. So if you can only evaluate policy outcomes based on the Pareto criterion, you are literally committed to saying that there is no difference in welfare between the Great Leap Forward and the invention of the polio vaccine.

One way around it (that might actually be a good kludge for now, until we get better at measuring utility) is to broaden the Pareto criterion: We could use a majoritarian criterion, where you care about the number of people benefited versus harmed, without worrying about magnitudes—but this can lead to Tyranny of the Majority. Or you could use the Difference Principle developed by Rawls: find an ordering where we can say that some people are better or worse off than others, and then make the system so that the worst-off people are benefited as much as possible. I can think of a few cases where I wouldn’t want to apply this criterion (essentially they are circumstances where autonomy and consent are vital), but in general it’s a very good approach.
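To make the contrast concrete, here is a minimal sketch in Python with made-up welfare numbers (the numbers are purely illustrative; both functions use only the ordering of outcomes, not their magnitudes, so no cardinal utility is assumed):

```python
def majoritarian(before, after):
    """Approve a policy if more people gain than lose (magnitudes ignored)."""
    gains = sum(1 for b, a in zip(before, after) if a > b)
    losses = sum(1 for b, a in zip(before, after) if a < b)
    return gains > losses

def difference_principle(before, after):
    """Approve a policy if the worst-off person ends up better off."""
    return min(after) > min(before)

# A transfer from one very-well-off person to three badly-off people:
before = [1, 1, 1, 100]
after = [3, 3, 3, 90]

print(majoritarian(before, after))          # True: 3 gain, 1 loses
print(difference_principle(before, after))  # True: worst-off rises from 1 to 3
```

Note that the two criteria can disagree: a transfer that takes from the single worst-off person to benefit everyone else passes the majoritarian test but fails the Difference Principle, which is exactly the Tyranny of the Majority worry.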

Neither of these depends upon cardinal utility, so have you escaped my theorem? Well, no, actually. You’ve weakened it, to be sure—it is no longer a statement about the fundamental impossibility of welfare-maximizing markets. But applied to the real world, people in Third World poverty are obviously the worst off, and therefore worthy of our help by the Difference Principle; and there are an awful lot of them and very few billionaires, so majority rule says take from the billionaires. The basic conclusion that it is a moral imperative to dramatically reduce global inequality remains—as does the realization that the “efficiency” and “optimality” of unregulated capitalism is a chimera.

Believing in civilization without believing in colonialism

JDN 2457541

In a post last week I presented some of the overwhelming evidence that society has been getting better over time, particularly since the start of the Industrial Revolution. I focused mainly on infant mortality rates—babies not dying—but there are lots of other measures you could use as well. Despite popular belief, poverty is rapidly declining, and is now the lowest it’s ever been. War is rapidly declining. Crime is rapidly declining in First World countries, and to the best of our knowledge crime rates are stable worldwide. Public health is rapidly improving. Lifespans are getting longer. And so on, and so on. It’s not quite true to say that every indicator of human progress is on an upward trend, but the vast majority of really important indicators are.

Moreover, there is every reason to believe that this great progress is largely the result of what we call “civilization”, even Western civilization: Stable, centralized governments, strong national defense, representative democracy, free markets, openness to global trade, investment in infrastructure, science and technology, secularism, a culture that values innovation, and freedom of speech and the press. We did not get here by Marxism, nor agrarian socialism, nor primitivism, nor anarcho-capitalism. We did not get here by fascism, nor theocracy, nor monarchy. This progress was built by the center-left welfare state, “social democracy”, “modified capitalism”, the system where free, open markets are coupled with a strong democratic government to protect and steer them.

This fact is basically beyond dispute; the evidence is overwhelming. The serious debate in development economics is over which parts of the Western welfare state are most conducive to raising human well-being, and which parts of the package are more optional. And even then, some things are fairly obvious: Stable government is clearly necessary, while speaking English is clearly optional.

Yet many people are resistant to this conclusion, or even offended by it, and I think I know why: They are confusing the results of civilization with the methods by which it was established.

The results of civilization are indisputably positive: Everything I just named above, especially babies not dying.

But the methods by which civilization was established are not; indeed, some of the greatest atrocities in human history are attributable at least in part to attempts to “spread civilization” to “primitive” or “savage” people.
It is therefore vital to distinguish between the result, civilization, and the processes by which it was effected, such as colonialism and imperialism.

First, it’s important not to overstate the link between civilization and colonialism.

We tend to associate colonialism and imperialism with White people from Western European cultures conquering other people in other cultures; but in fact colonialism and imperialism are basically universal to any human culture that attains sufficient size and centralization. India engaged in colonialism, Persia engaged in imperialism, China engaged in imperialism, the Mongols were of course major imperialists, and don’t forget the Ottoman Empire; and did you realize that Tibet and Mali were at one time imperialists as well? And of course there are a whole bunch of empires you’ve probably never heard of, like the Parthians and the Ghaznavids and the Umayyads. Even many of the people we’re accustomed to thinking of as innocent victims of colonialism were themselves imperialists—the Aztecs certainly were (they even sold people into slavery and used them for human sacrifice!), as were the Pequot, and the Iroquois may not have outright conquered anyone but were definitely at least “soft imperialists” the way that the US is today, spreading their influence around and using economic and sometimes military pressure to absorb other cultures into their own.

Of course, those were all civilizations, at least in the broadest sense of the word; but before that, it’s not that there wasn’t violence, it just wasn’t organized enough to be worthy of being called “imperialism”. The more general concept of intertribal warfare is a human universal, and some hunter-gatherer tribes actually engage in an essentially constant state of warfare we call “endemic warfare”. People have been grouping together to kill other people they perceived as different for at least as long as there have been people to do so.

This is of course not to excuse what European colonial powers did when they set up bases on other continents and exploited, enslaved, or even murdered the indigenous population. And the absolute numbers of people enslaved or killed are typically larger under European colonialism, mainly because European cultures became so powerful and conquered almost the entire world. Even if European societies were not uniquely predisposed to be violent (and I see no evidence to say that they were—humans are pretty much humans), they were more successful in their violent conquering, and so more people suffered and died. It’s also a first-mover effect: If the Ming Dynasty had supported Zheng He more in his colonial ambitions, I’d probably be writing this post in Mandarin and reflecting on why Asian cultures have engaged in so much colonial oppression.

While there is a deeply condescending paternalism (and often post-hoc rationalization of your own self-interested exploitation) involved in saying that you are conquering other people in order to civilize them, humans are also perfectly capable of committing atrocities for far less noble-sounding motives. There are holy wars such as the Crusades and ethnic genocides like in Rwanda, and the Arab slave trade was purely for profit and didn’t even have the pretense of civilizing people (not that the Atlantic slave trade was ever really about that anyway).

Indeed, I think it’s important to distinguish between colonialists who really did make some effort at civilizing the populations they conquered (like Britain, and also the Mongols actually) and those that clearly were just using that as an excuse to rape and pillage (like Spain and Portugal). This is similar to but not quite the same thing as the distinction between settler colonialism, where you send colonists to live there and build up the country, and exploitation colonialism, where you send military forces to take control of the existing population and exploit them to get their resources. Countries that experienced settler colonialism (such as the US and Australia) have fared a lot better in the long run than countries that experienced exploitation colonialism (such as Haiti and Zimbabwe).

The worst consequences of colonialism weren’t even really anyone’s fault, actually. The reason something like 98% of all Native Americans died as a result of European colonization was not that Europeans killed them—they did kill thousands of course, and I hope it goes without saying that that’s terrible, but it was a small fraction of the total deaths. The reason such a huge number died and whole cultures were depopulated was disease, and the inability of medical technology in any culture at that time to handle such a catastrophic plague. The primary cause was therefore accidental, and not really foreseeable given the state of scientific knowledge at the time. (I therefore think it’s wrong to consider it genocide—maybe democide.) Indeed, what really would have saved these people would be if Europe had advanced even faster into industrial capitalism and modern science, or else waited to colonize until they had; and then they could have distributed vaccines and antibiotics when they arrived. (Of course, there is evidence that a few European colonists used the diseases intentionally as biological weapons, which no amount of vaccine technology would prevent—and that is indeed genocide. But again, this was a small fraction of the total deaths.)

However, even with all those caveats, I hope we can all agree that colonialism and imperialism were morally wrong. No nation has the right to invade and conquer other nations; no one has the right to enslave people; no one has the right to kill people based on their culture or ethnicity.

My point is that it is entirely possible to recognize that and still appreciate that Western civilization has dramatically improved the standard of human life over the last few centuries. It simply doesn’t follow from the fact that British government and culture were more advanced and pluralistic that British soldiers can just go around taking over other people’s countries and planting their own flag (follow the link if you need some comic relief from this dark topic). That was the moral failing of colonialism; not that they thought their society was better—for in many ways it was—but that they thought that gave them the right to terrorize, slaughter, enslave, and conquer people.

Indeed, the “justification” of colonialism is a lot like that bizarre pseudo-utilitarianism I mentioned in my post on torture, where the mere presence of some benefit is taken to justify any possible action toward achieving that benefit. No, that’s not how morality works. You can’t justify unlimited evil by any good—it has to be a greater good, as in actually greater.

So let’s suppose that you do find yourself encountering another culture which is clearly more primitive than yours; their inferior technology results in them living in poverty and having very high rates of disease and death, especially among infants and children. What, if anything, are you justified in doing to intervene to improve their condition?

One idea would be to hold to the Prime Directive: No intervention, no sir, not ever. This is clearly what Gene Roddenberry thought of imperialism, which is why he built it into the Federation’s core principles.

But does that really make sense? Even as Star Trek shows progressed, the writers kept coming up with situations where the Prime Directive really seemed like it should have an exception, and sometimes decided that the honorable crew of Enterprise or Voyager really should intervene in this more primitive society to save them from some terrible fate. And I hope I’m not committing a Fictional Evidence Fallacy when I say that if your fictional universe, specifically designed not to let that happen, keeps making it happen, well… maybe it’s something we should be considering.

What if people are dying of a terrible disease that you could easily cure? Should you really deny them access to your medicine to avoid intervening in their society?

What if the primitive culture is ruled by a horrible tyrant that you could easily depose with little or no bloodshed? Should you let him continue to rule with an iron fist?

What if the natives are engaged in slavery, or even their own brand of imperialism against other indigenous cultures? Can you fight imperialism with imperialism?

And then we have to ask, does it really matter whether their babies are being murdered by the tyrant or simply dying from malnutrition and infection? The babies are just as dead, aren’t they? Even if we say that being murdered by a tyrant is worse than dying of malnutrition, it can’t be that much worse, can it? Surely 10 babies dying of malnutrition is at least as bad as 1 baby being murdered?

But then it begins to seem like we have a duty to intervene, and moreover a duty that applies in almost every circumstance! If you are on opposite sides of the technology threshold where infant mortality drops from 30% to 1%, how can you justify not intervening?

I think the best answer here is to keep in mind the very large costs of intervention as well as the potentially large benefits. The answer sounds simple, but is actually perhaps the hardest possible answer to apply in practice: You must do a cost-benefit analysis. Furthermore, you must do it well. We can’t demand perfection, but it must actually be a serious good-faith effort to predict the consequences of different intervention policies.

We know that people tend to resist most outside interventions, especially if you have the intention of toppling their leaders (even if they are indeed tyrannical). Even the simple act of offering people vaccines could be met with resistance, as the native people might think you are poisoning them or somehow trying to control them. But in general, opening contact with gifts and trade is almost certainly going to trigger less hostility and therefore be more effective than going in guns blazing.

If you do use military force, it must be targeted at the particular leaders who are most harmful, and it must be designed to achieve swift, decisive victory with minimal collateral damage. (Basically I’m talking about just war theory.) If you really have such an advanced civilization, show it by exhibiting total technological dominance and minimizing the number of innocent people you kill. The NATO interventions in Kosovo and Libya mostly got this right. The Vietnam War and Iraq War got it totally wrong.

As you change their society, you should be prepared to bear most of the cost of transition; you are, after all, much richer than they are, and also the ones responsible for effecting the transition. You should not expect to see short-term gains for your own civilization, only long-term gains once their culture has advanced to a level near your own. You can’t bear all the costs of course—transition is just painful, no matter what you do—but at least the fungible economic costs should be borne by you, not by the native population. Examples of doing this wrong include basically all the standard examples of exploitation colonialism: Africa, the Caribbean, South America. Examples of doing this right include West Germany and Japan after WW2, and South Korea after the Korean War—which is to say, the greatest economic successes in the history of the human race. This was us winning development, humanity. Do this again everywhere and we will have not only ended world hunger, but achieved global prosperity.

What happens if we apply these principles to real-world colonialism? It does not fare well. Nor should it, as we’ve already established that most if not all real-world colonialism was morally wrong.

15th- and 16th-century colonialism fails immediately; it offered no benefit to speak of. Europe’s technological superiority was enough to give them gunpowder but not enough to drop their infant mortality rate. Maybe life was better in 16th-century Spain than it was in the Aztec Empire, but honestly not by all that much; and life in the Iroquois Confederacy was in many ways better than life in 15th-century England. (Though maybe that justifies some Iroquois imperialism, at least their “soft imperialism”?)

If these principles did justify any real-world imperialism—and I am not convinced that it does—it would only be much later imperialism, like the British Empire in the 19th and 20th century. And even then, it’s not clear that the talk of “civilizing” people and “the White Man’s Burden” was much more than rationalization, an attempt to give a humanitarian justification for what were really acts of self-interested economic exploitation. Even though India and South Africa are probably better off now than they were when the British first took them over, it’s not at all clear that this was really the goal of the British government so much as a side effect, and there are a lot of things the British could have done differently that would obviously have made them better off still—you know, like not implementing the precursors to apartheid, or making India a parliamentary democracy immediately instead of starting with the Raj and only conceding to democracy after decades of protest. What actually happened doesn’t exactly look like Britain cared nothing for actually improving the lives of people in India and South Africa (they did build a lot of schools and railroads, and sought to undermine slavery and the caste system), but it also doesn’t look like that was their only goal; it was more like one goal among several which also included the strategic and economic interests of Britain. It isn’t enough that Britain was a better society or even that they made South Africa and India better societies than they were; if the goal wasn’t really about making people’s lives better where you are intervening, it’s clearly not justified intervention.

And that’s the relatively beneficent imperialism; the really horrific imperialists throughout history made only the barest pretense of spreading civilization and were clearly interested in nothing more than maximizing their own wealth and power. This is probably why we get things like the Prime Directive; we saw how bad it can get, and overreacted a little by saying that intervening in other cultures is always, always wrong, no matter what. It was only a slight overreaction—intervening in other cultures is usually wrong, and almost all historical examples of it were wrong—but it is still an overreaction. There are exceptional cases where intervening in another culture can be not only morally right but obligatory.

Indeed, one underappreciated consequence of colonialism and imperialism is that they have triggered a backlash against real good-faith efforts toward economic development. People in Africa, Asia, and Latin America see economists from the US and the UK (and most of the world’s top economists are in fact educated in the US or the UK) come in and tell them that they need to do this and that to restructure their society for greater prosperity, and they understandably ask: “Why should I trust you this time?” The last two or four or seven batches of people coming from the US and Europe to intervene in their countries exploited them or worse, so why is this time any different?

It is different, of course; UNDP is not the East India Company, not by a long shot. Even for all their faults, the IMF isn’t the East India Company either. Indeed, while these people largely come from the same places as the imperialists, and may be descended from them, they are in fact completely different people, and moral responsibility does not inherit across generations. While the suspicion is understandable, it is ultimately unjustified; whatever happened hundreds of years ago, this time most of us really are trying to help—and it’s working.

Actually, our economic growth has been fairly ecologically sustainable lately!

JDN 2457538

Environmentalists have a reputation for being pessimists, and it is not entirely undeserved. While, as Paul Samuelson quipped, Wall Street indexes have predicted nine out of the last five recessions, environmentalists have predicted more like twenty out of the last zero ecological collapses.

Some fairly serious scientists have endorsed predictions of imminent collapse that haven’t panned out, and many continue to do so. This Guardian article should be hilarious to statisticians, as it literally takes trends that are going one direction, maps them onto a theory that arbitrarily decides they’ll suddenly reverse, and then says “the theory fits the data”. This should be taught in statistics courses as a lesson in how not to fit models. More data distortion occurs in this Scientific American article, which contains the phrase “food per capita is decreasing”; well, that’s true if you just look at the last couple of years, but according to FAOSTAT, food production per capita in 2012 (the most recent data in FAOSTAT) was higher than literally every other year on record except 2011. So if you allow for even the slightest amount of random fluctuation, it’s very clear that food per capita is increasing, not decreasing.

[Figure: global food production per capita over time, FAOSTAT]

So many people predicting imminent collapse of human civilization. And yet, for some reason, all the people predicting this go about their lives as if it weren’t happening! Why, it’s almost as if they don’t really believe it, and just say it to get attention. Nobody gets on the news by saying “Civilization is doing fine; things are mostly getting better.”

There’s a long history of these sorts of gloom-and-doom predictions; perhaps the paradigm example is Thomas Malthus in 1798 predicting the imminent destruction of civilization by inevitable famine—just in time for global infant mortality rates to start plummeting and economic output to surge beyond anyone’s wildest dreams.

Still, when I sat down to study this it was remarkable to me just how good the outlook is for future sustainability. The Index of Sustainable Economic Welfare was created essentially in an attempt to show how our economic growth is largely an illusion driven by our rapacious natural resource consumption, but it has since been discontinued, perhaps because it didn’t show that. Using the US as an example, I reconstructed the index as best I could from World Bank data, and here’s what came out for the period since 1990:

[Figure: US GDP and reconstructed ISEW, 1990–present]

The top line is US GDP as normally measured. The bottom line is the ISEW. The gap between those lines expands on a linear scale, but not on a logarithmic scale; that is to say, GDP and ISEW grow at almost exactly the same rate, so ISEW is always a constant (and large) proportion of GDP. By construction it is necessarily smaller (it basically takes GDP and subtracts out from it), but the fact that it is growing at the same rate shows that our economic growth is not being driven by depletion of natural resources or the military-industrial complex; it’s being driven by real improvements in education and technology.
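The scale point can be checked directly. Here is a sketch with stylized numbers (not the actual World Bank series), assuming both series grow at the same 3% rate with ISEW starting at 70% of GDP:

```python
import math

years = range(1990, 2016)
gdp = [100 * 1.03 ** (t - 1990) for t in years]
isew = [70 * 1.03 ** (t - 1990) for t in years]

linear_gap = [g - i for g, i in zip(gdp, isew)]
log_gap = [math.log(g) - math.log(i) for g, i in zip(gdp, isew)]

print(linear_gap[-1] > linear_gap[0])        # True: gap widens on a linear scale
print(abs(log_gap[-1] - log_gap[0]) < 1e-9)  # True: constant on a log scale
print(all(abs(i / g - 0.7) < 1e-9
          for g, i in zip(gdp, isew)))       # True: ISEW is a constant share of GDP
```

Equal growth rates mean a constant ratio, which appears as a constant vertical gap on a log scale even while the linear gap balloons.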

The Human Development Index has grown in almost every country (albeit at quite different rates) since 1990. Global poverty is the lowest it has ever been. We are living in a golden age of prosperity. It’s such a golden age for our civilization that our happiness rating has maxed out, and now we’re getting +20% production and extra gold from every source. (Sorry, gamer in-joke.)

Now, it is said that pride cometh before a fall; so perhaps our current mind-boggling improvements in human welfare have only been purchased on borrowed time as we further drain our natural resources.

There is some cause for alarm: We’re literally running out of fish, and groundwater tables are falling rapidly. Due to poor land use, deserts are expanding. Huge quantities of garbage now float in our oceans. And of course, climate change is poised to kill millions of people; Arctic sea ice may soon be melting away entirely every summer.

And yet, global carbon emissions have not been increasing the last few years, despite strong global economic growth. We need to be reducing emissions, not just keeping them flat (in a previous post I talked about some policies to do that); but even keeping them flat while still raising standard of living is something a lot of environmentalists kept telling us we couldn’t possibly do. Despite constant talk of “overpopulation” and a “population bomb”, population growth rates are declining and world population is projected to level off around 9 billion. Total solar power production in the US expanded by a factor of 40 in just the last 10 years.

Of course, I don’t deny that there are serious environmental problems, and we need to make policies to combat them; but we are doing that. Humanity is not mindlessly plunging headlong into an abyss; we are taking steps to improve our future.

And in fact I think environmentalists deserve a lot of credit for that! Raising awareness of environmental problems has made most Americans recognize that climate change is a serious problem. Further pressure might make them realize it should be one of our top priorities (presently most Americans do not).

And who knows, maybe the extremist doomsayers are necessary to set the Overton Window for the rest of us. I think we of the center-left (toward which reality has a well-known bias) often underestimate how much we rely upon the radical left to pull the discussion away from the radical right and make us seem more reasonable by comparison. It could well be that “climate change will kill tens of millions of people unless we act now to institute a carbon tax and build hundreds of nuclear power plants” is easier to swallow after hearing “climate change will destroy humanity unless we act now to transform global capitalism to agrarian anarcho-socialism.” Ultimately I wish people could be persuaded simply by the overwhelming scientific evidence in favor of the carbon tax/nuclear power argument, but alas, humans are simply not rational enough for that; and you must go to policy with the public you have. So maybe irrational levels of pessimism are a worthwhile corrective to the irrational levels of optimism coming from the other side, like the execrable sophistry of “in praise of fossil fuels” (yes, we know our economy was built on coal and oil—that’s the problem. We’re “rolling drunk on petroleum”; when we’re trying to quit drinking, reminding us how much we enjoy drinking is not helpful.).

But I worry that this sort of irrational pessimism carries its own risks. First there is the risk of simply giving up, succumbing to learned helplessness and deciding there’s nothing we can possibly do to save ourselves. Second is the risk that we will do something needlessly drastic (like a radical socialist revolution) that impoverishes or even kills millions of people for no reason. The extreme fear that we are on the verge of ecological collapse could lead people to take a “by any means necessary” stance and end up with a cure worse than the disease. So far the word “ecoterrorism” has mainly been applied to what was really ecovandalism; but if we were in fact on the verge of total civilizational collapse, I can understand why someone would think quite literal terrorism was justified (actually the main reason I don’t is that I just don’t see how it could actually help). Just about anything is worth it to save humanity from destruction.

What is progress? How far have we really come?

JDN 2457534

It is a controversy that has lasted throughout the ages: Is the world getting better? Is it getting worse? Or is it more or less staying the same, changing in ways that don’t really constitute improvements or detriments?

The most obvious and indisputable change in human society over the course of history has been the advancement of technology. At one extreme there are techno-utopians, who believe that technology will solve all the world’s problems and bring about a glorious future; at the other extreme are anarcho-primitivists, who maintain that civilization, technology, and industrialization were all grave mistakes, removing us from our natural state of peace and harmony.

I am not a techno-utopian—I do not believe that technology will solve all our problems—but I am much closer to that end of the scale. Technology has solved a lot of our problems, and will continue to solve a lot more. My aim in this post is to convince you that progress is real, that things really are, on the whole, getting better.

One of the more baffling arguments against progress comes from none other than Jared Diamond, the social scientist most famous for Guns, Germs and Steel (which oddly enough is mainly about horses and goats). About seven months before I was born, Diamond wrote an essay for Discover magazine arguing quite literally that agriculture—and by extension, civilization—was a mistake.

Diamond fortunately avoids the usual argument based solely on modern hunter-gatherers, which is a selection bias if ever I heard one. Instead his main argument seems to be that paleontological evidence shows an overall decrease in health around the same time as agriculture emerged. But that’s still an endogeneity problem, albeit a subtler one. Maybe agriculture emerged as a response to famine and disease. Or maybe they were both triggered by rising populations; higher populations increase disease risk, and are also basically impossible to sustain without agriculture.

I am similarly dubious of the claim that hunter-gatherers are always peaceful and egalitarian. It does seem to be the case that herders are more violent than other cultures, as they tend to form honor cultures that punish all slights with overwhelming violence. Even after the Industrial Revolution there were herder honor cultures—the Wild West. Yet as Steven Pinker keeps trying to tell people, the death rates due to homicide in all human cultures appear to have steadily declined for thousands of years.

I read an article just a few days ago on the Scientific American blog which included a claim so astonishingly nonsensical it makes me wonder whether the authors can even do arithmetic or read statistical tables correctly:

As I keep reminding readers (see Further Reading), the evidence is overwhelming that war is a relatively recent cultural invention. War emerged toward the end of the Paleolithic era, and then only sporadically. A new study by Japanese researchers published in the Royal Society journal Biology Letters corroborates this view.

Six Japanese scholars led by Hisashi Nakao examined the remains of 2,582 hunter-gatherers who lived 12,000 to 2,800 years ago, during Japan’s so-called Jomon Period. The researchers found bashed-in skulls and other marks consistent with violent death on 23 skeletons, for a mortality rate of 0.89 percent.

That is supposed to be evidence that ancient hunter-gatherers were peaceful? The global homicide rate today is 62 homicides per million people per year. Using the worldwide life expectancy of 71 years (which is biasing against modern civilization because our life expectancy is longer), that means that the worldwide lifetime homicide rate is 4,400 homicides per million people, or 0.44%—that’s less than half the homicide rate of these “peaceful” hunter-gatherers. If you compare just against First World countries, the difference is even starker; let’s use the US, which has the highest homicide rate in the First World. Our homicide rate is 38 homicides per million people per year, which at our life expectancy of 79 years is 3,000 homicides per million people, or an overall homicide rate of 0.3%, slightly more than a third of this “peaceful” ancient culture. The most peaceful societies today—notably Japan, where these remains were found—have homicide rates as low as 3 per million people per year, which is a lifetime homicide rate of 0.02%, forty times smaller than their supposedly utopian ancestors. (Yes, all of Japan has fewer total homicides than Chicago. I’m sure it has nothing to do with their extremely strict gun control laws.) Indeed, to get a modern homicide rate as high as these hunter-gatherers, you need to go to a country like Congo, Myanmar, or the Central African Republic. To get a substantially higher homicide rate, you essentially have to be in Latin America. Honduras, the murder capital of the world, has a lifetime homicide rate of about 6.7%.
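The arithmetic here is simple enough to spell out. For rates this small, lifetime risk is well approximated by the annual rate times life expectancy (a quick sketch using the figures quoted above):

```python
# Approximate lifetime homicide risk: annual rate (per million) x years lived.
# Valid when rates are small; exact compounding would change little.
def lifetime_risk(per_million_per_year, life_expectancy):
    return per_million_per_year * 1e-6 * life_expectancy

print(f"world: {lifetime_risk(62, 71):.2%}")  # ~0.44%
print(f"US:    {lifetime_risk(38, 79):.2%}")  # ~0.30%
print(f"Japan: {lifetime_risk(3, 71):.2%}")   # ~0.02%
# Compare: 23 violent deaths among 2,582 Jomon skeletons:
print(f"Jomon: {23 / 2582:.2%}")              # ~0.89%
```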

Again, how did I figure these things out? By reading basic information from publicly-available statistical tables and then doing some simple arithmetic. Apparently these paleoanthropologists couldn’t be bothered to do that, or didn’t know how to do it correctly, before they started proclaiming that human nature is peaceful and civilization is the source of violence. After an oversight as egregious as that, it feels almost petty to note that a sample size of a few thousand people from one particular region and culture isn’t sufficient data to draw such sweeping judgments or speak of “overwhelming” evidence.

Of course, in order to decide whether progress is a real phenomenon, we need a clearer idea of what we mean by progress. It would be presumptuous to use per-capita GDP, though there can be absolutely no doubt that technology and capitalism do in fact raise per-capita GDP. If we measure by inequality, modern society clearly fares much worse (our top 1% share and Gini coefficient may be higher than Classical Rome!), but that is clearly biased in the opposite direction, because the main way we have raised inequality is by raising the ceiling, not lowering the floor. Most of our really good measures (like the Human Development Index) only exist for the last few decades and can barely even be extrapolated back through the 20th century.

How about babies not dying? This is my preferred measure of a society’s value. It seems like something that should be totally uncontroversial: Babies dying is bad. All other things equal, a society is better if fewer babies die.

I suppose it doesn’t immediately follow that all things considered a society is better if fewer babies die; maybe the dying babies could be offset by some greater good. Perhaps a totalitarian society where no babies die is in fact worse than a free society in which a few babies die, or perhaps we should be prepared to accept some small amount of babies dying in order to save adults from poverty, or something like that. But without some really powerful overriding reason, babies not dying probably means your society is doing something right. (And since most ancient societies were in a state of universal poverty and quite frequently tyranny, these exceptions would only strengthen my case.)

Well, get ready for some high-yield truth bombs about infant mortality rates.

It’s hard to get good data for prehistoric cultures, but the best data we have says that infant mortality in ancient hunter-gatherer cultures was about 20-50%, with a best estimate around 30%. This is statistically indistinguishable from early agricultural societies.

Indeed, 30% seems to be the figure humanity had for most of history. Just shy of a third of all babies died for most of history.

In Medieval times, infant mortality was about 30%.

This same rate (fluctuating based on various plagues) persisted into the Enlightenment—Sweden has the best records, and their infant mortality rate in 1750 was about 30%.

The decline in infant mortality began slowly: During the Industrial Era, infant mortality was about 15% in isolated villages, but still as high as 40% in major cities due to high population densities with poor sanitation.

Even as recently as 1900, there were US cities with infant mortality rates as high as 30%, though the overall rate was more like 10%.

Most of the decline was recent and rapid: Just within the US since WW2, infant mortality fell from about 5.5% to 0.7%, though there remains a substantial disparity between White and Black people.

Globally, the infant mortality rate fell from 6.3% to 3.2% within my lifetime, and in Africa today, the region where it is worst, it is about 5.5%—or what it was in the US in the 1940s.

All those dying babies are the main reason ancient societies have such low life expectancies; once people reached adulthood, they typically lived to be about 70 years old, not much less than we do today. So my multiplying everything by 71 earlier actually isn’t too far off even for ancient societies.
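To see just how much infant mortality alone drags down life expectancy at birth, consider a toy two-point model (my own illustration, ignoring child and adult mortality): some fraction of babies die in infancy, and everyone else lives to about 70.

```python
# Toy model: a fraction die in infancy (around age 0.5), the rest reach 70.
def life_expectancy_at_birth(infant_mortality, adult_lifespan=70, infant_death_age=0.5):
    return infant_mortality * infant_death_age + (1 - infant_mortality) * adult_lifespan

print(life_expectancy_at_birth(0.30))   # ancient: ~49 years at birth
print(life_expectancy_at_birth(0.007))  # modern US: ~69.5 years at birth
```

So a society where adults routinely reached 70 still shows a life expectancy at birth of around 49 once 30% of babies die.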

Let me make a graph for you here, of the approximate rate of babies dying over time from 10,000 BC to today:

[Infant_mortality.png]

Let’s zoom in on the last 250 years, where the data is much more solid:

[Infant_mortality_recent.png]

I think you may notice something in these graphs. There is quite literally a turning point for humanity, a kink in the curve where we suddenly begin a rapid decline from an otherwise constant mortality rate.

That point occurs around or shortly before 1800—that is, it occurs at industrial capitalism. Adam Smith (not to mention Thomas Jefferson) was writing at just about the point in time when humanity made a sudden and unprecedented shift toward saving the lives of millions of babies.

So now, think about that the next time you are tempted to say that capitalism is an evil system that destroys the world; the evidence points to capitalism quite literally saving babies from dying.

How would it do so? Well, there’s that rising per-capita GDP we previously ignored, for one thing. But more important seems to be the way that industrialization and free markets support technological innovation, and in this case especially medical innovation—antibiotics and vaccines. Our higher rates of literacy and better communication, also a result of raised standard of living and improved technology, surely didn’t hurt. I’m not often in agreement with the Cato Institute, but they’re right about this one: Industrial capitalism is the chief source of human progress.

Billions of babies would have died but we saved them. So yes, I’m going to call that progress. Civilization, and in particular industrialization and free markets, have dramatically improved human life over the last few hundred years.

In a future post I’ll address one of the common retorts to this basically indisputable fact: “You’re making excuses for colonialism and imperialism!” No, I’m not. Saying that modern capitalism is a better system (not least because it saves babies) is not at all the same thing as saying that our ancestors were justified in using murder, slavery, and tyranny to force people into it.

Why is Tatooine poor?

JDN 2457513—May 4, 2016

May the Fourth be with you.

In honor of International Star Wars Day, this post is going to be about Star Wars!

[I wanted to include some images from Star Wars, but here are the copyright issues that made me decide it ultimately wasn’t a good idea.]

But this won’t be as frivolous as it may sound. Star Wars has a lot of important lessons to teach us about economics and other social sciences, and its universal popularity gives us common ground to start with. I could use Zimbabwe and Botswana as examples, and sometimes I do; but a lot of people don’t know much about Zimbabwe and Botswana. A lot more people know about Tatooine and Naboo, so sometimes it’s better to use those instead.

In fact, this post is just a small sample of a much larger work to come; several friends of mine who are social scientists in different fields (I am of course the economist, and we also have a political scientist, a historian, and a psychologist) are writing a book about this; we are going to use Star Wars as a jumping-off point to explain some real-world issues in social science.

So, my topic for today, which may end up forming the basis for a chapter of the book, is quite simple:
Why is Tatooine poor?

First, let me explain why this is such a mystery to begin with. We’re so accustomed to poverty that we expect to see it; we think of it as normal—and for most of human history, that was probably the correct attitude to have. Up until at least the Industrial Revolution, there simply was no way of raising the standard of living of most people much beyond bare subsistence. A wealthy few could sometimes live better, and most societies have had such an elite; but it was never more than about 1% of the population—and sometimes as little as 0.01%. They could have distributed wealth more evenly than they did, but there simply wasn’t that much to go around.

The “prosperous” “democracy” of Periclean Athens for example was really an aristocratic oligarchy, in which the top 1%—the ones who could read and write, and hence whose opinions we read—owned just about everything (including a fair number of the people—slavery). Their “democracy” was a voting system that only applied to a small portion of the population.

But now we live in a very different age, the Information Age, where we are absolutely basking in wealth thanks to enormous increases in productivity. Indeed, the standard of living of an Athenian philosopher was in many ways worse than that of a single mother on Welfare in the United States today; certainly the single mom has far better medicine, communication, and transportation than the philosopher, but she may even have better nutrition and higher education. Really the only things I can think of that the philosopher has more of are jewelry and real estate. The single mom also surely spends a lot more time doing housework, but a good chunk of her work is automated (dishwasher, microwave, washing machine), while the philosopher simply has slaves for that sort of thing. The smartphone in her pocket (81% of poor households in the US have a cellphone, and about half of these are smartphones) and the car in her driveway (75% of poor households in the US own at least one car) may be older models in disrepair, but they would still be unimaginable marvels to that ancient philosopher.

How is it, then, that we still have poverty in this world? Even if we argued that the poverty line in First World countries is too high because they have cars and smartphones (not an argument I agree with by the way—given our enormous productivity there’s no reason everyone shouldn’t have a car and a smartphone, and the main thing that poor people still can’t afford is housing), there are still over a billion people in the world today who live on less than $2 per day in purchasing-power-adjusted real income. That is poverty, no doubt about it. Indeed, it may in fact be a lower standard of living than most human beings had when we were hunter-gatherers. It may literally be a step downward from the Paleolithic.

Here is where Tatooine may give us some insights.

Productivity in the Star Wars universe is clearly enormous; indeed the proportional gap between Star Wars and us appears to be about the same as the proportional gap between us and hunter-gatherer times. The Death Star II had a diameter of 160 kilometers. Its cost is listed as “over 1 trillion credits”, but that’s almost meaningless because we have no idea what the exchange rate is or how the price of spacecraft varies relative to the price of other goods. (Spacecraft actually seem to be astonishingly cheap; in A New Hope it seems that a drink costs a couple of credits while 10,000 credits is almost enough to buy an inexpensive starship. Basically their prices seem to be similar to ours for most goods, but spaceships are so cheap they are priced like cars instead of like, well, spacecraft.)

So let’s look at it another way: How much metal would it take to build such a thing, and how much would that cost in today’s money?

We actually see quite a bit of the inner structure of the Death Star II in Return of the Jedi, so I can hazard a guess that about 5% of the volume of the space station is taken up by solid material. Who knows what it’s actually made out of, but for a ballpark figure let’s assume it’s high-grade steel. The volume of a 160 km diameter sphere is (4/3)*pi*r^3 = (4/3)*(3.1416)*(80,000)^3 = 2.14 quadrillion cubic meters. If 5% is filled with material, that’s 107 trillion cubic meters. High-strength steel has a density of about 8000 kg/m^3, so that’s about 860 quadrillion kilograms of steel. A kilogram of high-grade steel costs about $2, so we’re looking at $1.7 quintillion as the total price just for the raw material of the Death Star II. That’s $1,700,000,000,000,000,000. I’m not even including the labor (droid labor, that is) and transportation costs (oh, the transportation costs!), so this is a very conservative estimate.

To get a sense of how ludicrously much money this is, the population of Coruscant is said to be over 1 trillion people, which is just about plausible for a city that covers an entire planet. The population of the entire galaxy is supposed to be about 400 quadrillion.

Suppose that instead of building the Death Star II, Emperor Palpatine had decided to give a windfall to everyone on Coruscant. How much would he have given each person (in our money)? About $1.7 million.

Suppose instead he had offered the windfall to everyone in the galaxy? About $4 per person. That’s 50 million worlds with an average population of 8 billion each. Instead of building the Death Star II, Palpatine could have bought the whole galaxy lunch.

Put another way, the cost I just estimated for the Death Star II is over 20,000 times the current world GDP, and that is raw material alone, for a single Imperial project. In order to build the Death Star II in secret, it must be a small portion of the budget, maybe 5% tops. In order for only a small number of systems to revolt, the tax rates can’t be more than say 50%, if that; so annual galactic output must be at least 40 times the cost of the Death Star II, on the order of a million times what we currently produce on Earth. And that is only a lower bound: the fact that starships are priced like cars suggests the true figure is vastly higher, with per-capita output plausibly in the hundreds of thousands of dollars per person per year.
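The whole back-of-envelope estimate fits in a few lines (the 5% solid fraction, the steel density, and the $2/kg price are the assumptions stated above, not established facts about battle stations):

```python
# Rough raw-material cost of the Death Star II:
# 160 km diameter, 5% of volume solid, steel at 8000 kg/m^3 and $2/kg.
from math import pi

radius_m = 80_000
volume_m3 = (4 / 3) * pi * radius_m ** 3  # ~2.14e15 m^3
steel_kg = 0.05 * volume_m3 * 8000        # ~8.6e17 kg
cost_usd = 2 * steel_kg                   # ~1.7e18 dollars

print(f"raw steel: ${cost_usd:.2e}")
print(f"windfall per Coruscant resident: ${cost_usd / 1e12:,.0f}")  # ~$1.7 million
print(f"windfall per galactic citizen: ${cost_usd / 4e17:.2f}")     # ~$4.29
```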

So, economic output is extremely high in the Star Wars universe. Then why is Tatooine poor? If there’s enough output to make basically everyone a millionaire, why haven’t they?

In a word? Power.

Political power is of course very unequally distributed in the Star Wars universe, especially under the Empire but also even under the Old Republic and New Republic.

Core Worlds like Coruscant appear to have fairly strong centralized governments, and at least until the Emperor seized power and dissolved the Senate (as Tarkin announces in A New Hope) they also seemed to have fairly good representation in the Galactic Senate (though how you make a functioning Senate with millions of member worlds I have no idea—honestly, maybe they didn’t). As a result, Core Worlds are prosperous. Actually, even Naboo seems to be doing all right despite being in the Mid Rim, because of their strong and well-managed constitutional monarchy (“elected queen” is not as weird as it sounds—Sweden did that until the 16th century). They often talk about being a “democracy” even though they’re technically a constitutional monarchy—but the UK and Norway do the same thing with if anything less justification.

But Outer Rim Worlds like Tatooine seem to be out of reach of the central galactic government. (Oh, by the way, what hyperspace route drops you off at Tatooine if you’re going from Naboo to Coruscant? Did they take a wrong turn in addition to having engine trouble? “I knew we should have turned left at Christophsis!”) They even seem to be out of range of the monetary system (“Republic credits are no good out here,” said Watto in The Phantom Menace.), which is pretty extreme. That doesn’t usually happen—if there is a global hegemon, usually their money is better than gold. (“good as gold” isn’t strong enough—US money is better than gold, and that’s why people will accept negative real interest rates to hold onto it.) I guarantee you that if you want to buy something with a US $20 bill in Somalia or Zimbabwe, someone will take it. They might literally take it—i.e. steal it from you, and the government may not do anything to protect you—but it clearly will have value.
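For clarity, a negative real interest rate just means the nominal yield doesn’t keep up with inflation, so holders are effectively paying for safety and liquidity. The numbers below are illustrative, not actual yields:

```python
# Fisher relation: (1 + nominal) = (1 + real) * (1 + inflation).
# A yield below inflation means a negative real return on holding the asset.
nominal, inflation = 0.005, 0.02
real = (1 + nominal) / (1 + inflation) - 1
print(f"real return: {real:.2%}")  # about -1.47%
```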

So, the Outer Rim worlds are extremely isolated from the central government, and therefore have their own local institutions that operate independently. Tatooine in particular appears to be controlled by the Hutts, who in turn seem to have a clan-based system of organized crime, similar to the Mafia. We never get much detail about the ins and outs of Hutt politics, but it seems pretty clear that Jabba is particularly powerful and may actually be the de facto monarch of a sizeable region or even the whole planet.

Jabba’s government is at the very far extreme of what Daron Acemoglu calls extractive institutions (I’ve been reading his tome Why Nations Fail, written with James Robinson, and while I agree with its core message, honestly it’s not very well-written or well-argued), systems of government that exist not to achieve overall prosperity or the public good, but to enrich a small elite at the expense of everyone else. The opposite is inclusive institutions, under which power is widely shared and government exists to advance the public good. Real-world systems are usually somewhere in between; the US is still largely inclusive, but we’ve been getting more extractive over the last few decades and that’s a big problem.

Jabba himself appears to be fantastically wealthy, although even his huge luxury hover-yacht (…thing) is extremely ugly and spartan inside. I infer that he could have made it look however he wanted, and simply has baffling tastes in decor. The fact that he seems to be attracted to female humanoids is already pretty baffling, given the obvious total biological incompatibility; so Jabba is, shall we say, a weird dude. Eccentricity is quite common among despots of extractive regimes, as evidenced by Muammar Qaddafi’s ostentatious outfits, Idi Amin’s love of oranges and Kentucky Fried Chicken, and Kim Jong-Un’s fear of barbers and bond with Dennis Rodman. Maybe we would all be this eccentric if we had unlimited power, but our need to fit in with the rest of society suppresses it.

It’s difficult to put a figure on just how wealthy Jabba is, but it isn’t implausible to say that he has a million times as much as the average person on Tatooine, just as Bill Gates has a million times as much as the average person in the US. Like Qaddafi before he was killed, Jabba probably feared that establishing more inclusive governance would only reduce his power and wealth and spread it to others, even if it did increase overall prosperity.

It’s not hard to make the figures work out that way. Suppose that for every 1% of the economy that is claimed by a single rentier despot, overall economic output drops by the same 1%. Then for concreteness, suppose that at optimal efficiency, the whole economy could produce $1 trillion. The amount of money that the despot can claim is determined by the portion he tries to claim, p, times the total amount that the economy will produce, which is (1-p) trillion dollars. So the despot’s wealth will be maximized when p(1-p) is maximized, which occurs at p = 1/2; so the despot would maximize his own wealth at $250 billion by claiming half of the economy, even though that also means the economy produces half as much as it could. If he loosened his grip and claimed a smaller share, millions of his subjects would benefit; but he himself would lose more money than he gained. (You can also adjust these figures so that the “optimal” amount for the despot to claim is larger or smaller than half, depending on how severely the rent-seeking disrupts overall productivity.)
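The despot’s problem above can be written out directly; the one-for-one output loss is the assumption stated in the text, not a general law:

```python
# Despot claims fraction p of an economy whose output shrinks to (1 - p)
# of its $1 trillion potential; his take is p * (1 - p) * potential.
def despot_take(p, potential=1.0e12):
    return p * (1 - p) * potential

# Search claim shares in 1% steps; p * (1 - p) peaks at p = 1/2.
best = max((p / 100 for p in range(101)), key=despot_take)
print(best, f"${despot_take(best):,.0f}")  # 0.5 $250,000,000,000
```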

It’s important to note that it is not simply geography (galactography?) that makes Tatooine poor. Their sparse, hot desert may be less productive agriculturally, but that doesn’t mean that Tatooine is doomed to poverty. Indeed, today many of the world’s richest countries (such as Qatar) are in deserts, because they produce huge quantities of oil.

I doubt that oil would actually be useful in the Old Republic or the Empire, but energy more generally seems like something you’d always need. Tatooine has enormous flat desert plains and two suns, meaning that its potential to produce solar energy has to be huge. They couldn’t export the energy directly of course, but they could do so indirectly—the cheaper energy could allow them to build huge factories and produce starships at a fraction of the cost that other planets do. They could then sell these starships as exports and import water from planets where it is abundant like Naboo, instead of trying to produce their own water locally through those silly (and surely inefficient) moisture vaporators.

But Jabba likely has fought any efforts to invest in starship production, because it would require a more educated workforce that’s more likely to unionize and less likely to obey his every command. He probably has established a high tariff on water imports (or even banned them outright), so that he can maintain control by rationing the water supply. (Actually one thing I would have liked to see in the movies was Jabba being periodically doused by slaves with vats of expensive imported water. It would not only show an ostentatious display of wealth for a desert culture, but also serve the much more mundane function of keeping his sensitive gastropod skin from dangerously drying out. That’s why salt kills slugs, after all.) He also probably suppressed any attempt to establish new industries of any kind on Tatooine, fearing that with new industry could come a new balance of power.

The weirdest part to me is that the Old Republic didn’t do something about it. The Empire, okay, sure; they don’t much care about humanitarian concerns, so as long as Tatooine is paying its Imperial taxes and staying out of the Emperor’s way maybe he leaves them alone. But surely the Republic would care that this whole planet of millions if not billions of people is being oppressed by the Hutts? And surely the Republic Navy is more than a match for whatever pitiful military forces Jabba and his friends can muster, precisely because they haven’t established themselves as the shipbuilding capital of the galaxy? So why hasn’t the Republic deployed a fleet to Tatooine to unseat the Hutts and establish democracy? (It could be over pretty fast; we’ve seen that one good turbolaser can destroy Jabba’s hover-yacht—and it looks big enough to target from orbit.)

But then, we come full circle, back to the real world: Why hasn’t the US done the same thing in Zimbabwe? Would it not actually work? We sort of tried it in Libya—a lot of people died, and results are still pending I guess. But doesn’t it seem like we should be doing something?

What really happened in Greece

JDN 2457506

I said I’d get back to this issue, so here goes.

Let’s start with what is uncontroversial: Greece is in trouble.

Their per-capita GDP PPP has fallen from a peak of over $32,000 in 2007 to a trough of just over $24,000 in 2013, and only just began to recover over the last 2 years. That’s a fall of 29 log points. Put another way, the average person in Greece has about the same real income now that they had in the year 2000—a decade and a half of economic growth disappeared.
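A log point is 100 times the change in the natural log; economists like it because rises and falls of the same size are symmetric. The figure above checks out:

```python
# Log-point change between peak (~$32,000) and trough (~$24,000) per-capita GDP PPP.
from math import log

fall = 100 * log(32_000 / 24_000)
print(f"{fall:.1f} log points")  # ~28.8, versus a 25% arithmetic fall
```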

Their unemployment rate surged from about 7% in 2007 to almost 28% in 2013. It remains over 24%. That is, almost one quarter of the Greek labor force is seeking jobs and not finding them. The US has not seen an unemployment rate that high since the Great Depression.

Most shocking of all, over 40% of the population in Greece is now below the national poverty line. They define poverty as 60% of the inflation-adjusted average income in 2009, which works out to 665 Euros per person ($756 at current exchange rates) per month, or about $9000 per year. They also have an absolute poverty line, which 14% of Greeks now fall below, but only 2% did before the crash.

So now, let’s talk about why.

There’s a standard narrative you’ve probably heard many times, which goes something like this:

The Greek government spent too profligately, heaping social services on the population without the tax base to support them. Unemployment insurance was too generous; pensions were too large; it was too hard to fire workers or cut wages. Thus, work incentives were too weak, and there was no way to sustain a high GDP. But they refused to cut back on these social services, and as a result went further and further into debt until it finally became unsustainable. Now they are cutting spending and raising taxes like they needed to, and it will eventually allow them to repay their debt.

Here’s a fellow of the Cato Institute spreading this narrative on the BBC. Here’s ABC with a five bullet-point list: Pension system, benefits, early retirement, “high unemployment and work culture issues” (yes, seriously), and tax evasion. Here the Telegraph says that Greece “went on a spending spree” and “stopped paying taxes”.

That story is almost completely wrong. Almost nothing about it is true. Cato and the Telegraph got basically everything wrong. The only one ABC got right was tax evasion.

Here’s someone else arguing that Greece has a problem with corruption and failed governance; there is something to be said for this, as Greece is fairly corrupt by European standards—though hardly by world standards. For being only a generation removed from an authoritarian military junta, they’re doing quite well actually. They’re about as corrupt as a typical upper-middle income country like Libya or Botswana; and Botswana is widely regarded as the shining city on a hill of transparency as far as Sub-Saharan Africa is concerned. So corruption may have made things worse, but it can’t be the whole story.

First of all, social services in Greece were not particularly extensive compared to the rest of Europe.

Before the crisis, Greece’s government spending was about 44% of GDP.

That was about the same as Germany. It was slightly more than the UK. It was less than Denmark and France, both of which have government spending of about 50% of GDP.

Greece even tried to cut spending to pay down their debt—it didn’t work, because they simply ended up worsening the economic collapse and undermining the tax base they needed to do that.

Europe has fairly extensive social services by world standards—but that’s a major part of why it’s the First World. Even the US, despite spending far less than Europe on social services, still spends a great deal more than most countries—about 36% of GDP.

Second, if work incentives were a problem, you would not have high unemployment. People don’t seem to grasp what the word unemployment actually means, which is part of why I can’t stand it when news outlets just arbitrarily substitute “jobless” to save a couple of syllables. Unemployment does not mean simply that you don’t have a job. It means that you don’t have a job and are trying to get one.

The word you’re looking for to describe simply not having a job is nonemployment, and that’s such a rarely used term my spell-checker complains about it. Yet economists rarely use this term precisely because it doesn’t matter; a high nonemployment rate is not a symptom of a failing economy but a result of high productivity moving us toward the post-scarcity future (kicking and screaming, evidently). If the problem with Greece were that they were too lazy and they retire too early (which is basically what ABC was saying in slightly more polite language), there would be high nonemployment, but there would not be high unemployment. “High unemployment and work culture issues” is actually a contradiction.

Before the crisis, Greece had an employment-to-population ratio of 49%, meaning a nonemployment rate of 51%. If that sounds ludicrously high, you’re not accustomed to nonemployment figures. During the same time, the United States had an employment-to-population ratio of 52% and thus a nonemployment rate of 48%. So the number of people in Greece who were voluntarily choosing to drop out of work before the crisis was just slightly larger than the number in the US—and actually when you adjust for the fact that the US is full of young immigrants and Greece is full of old people (their median age is 10 years older than ours), it begins to look like it’s we Americans who are lazy. (Actually, it’s that we are studious—the US has an extremely high rate of college enrollment and the best colleges in the world. Full-time students are nonemployed, but they are certainly not unemployed.)
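The two rates are easy to confuse but simple to define. A sketch with round numbers in the spirit of the figures above (illustrative, not official statistics):

```python
# Unemployment: share of the labor force (employed + seeking) without work.
# Nonemployment: share of the whole adult population without work.
def rates(employed, seeking, population):
    unemployment = seeking / (employed + seeking)
    nonemployment = 1 - employed / population
    return unemployment, nonemployment

u, n = rates(employed=49, seeking=16, population=100)  # a Greece-like case
print(f"unemployment {u:.0%}, nonemployment {n:.0%}")  # ~25% vs 51%
```

The same 51% nonemployment rate is compatible with 25% unemployment (a crisis) or 3% unemployment (students, retirees, and homemakers), which is exactly why the two words can’t be substituted for each other.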

But Greece does have an enormously high debt, right? Yes—but it was actually not as bad before the crisis. Their government debt surged from 105% of GDP to almost 180% today. 105% of GDP is about what we have right now in the US; it’s less than what we had right after WW2. This is a little high, but really nothing to worry about, especially if you’ve incurred the debt for the right reasons. (The famous paper by Reinhart and Rogoff arguing that 90% of GDP is a horrible point of no return was literally based on spreadsheet errors.)

Moreover, Ireland and Spain suffered much the same fate as Greece, despite running primary budget surpluses.

So… what did happen? If it wasn’t their profligate spending that put them in this mess, what was it?

Well, first of all, there was the Second Depression, a worldwide phenomenon triggered by the collapse of derivatives markets in the United States. (You want unsustainable debt? Try 20 to 1 leveraged CDO-squareds and one quadrillion dollars in notional value. Notional value isn’t everything, but it’s a lot.) So it’s mainly our fault, or rather the fault of our largest banks. As for us voters, it’s “our fault” in the way that if your car gets stolen it’s “your fault” for not locking the doors and installing a LoJack. We could have regulated against this and enforced those regulations, but we didn’t. (Fortunately, Dodd-Frank looks like it might be working.)

Greece was hit particularly hard because they are highly dependent on trade, particularly in services like tourism that are highly sensitive to the business cycle. Before the crash they imported 36% of GDP and exported 23% of GDP. Now they import 35% of GDP and export 33% of GDP—but it’s a much smaller GDP. Their exports have only slightly increased while their imports have plummeted. (This has reduced their “trade deficit”, but that has always been a silly concept. I guess it’s less silly if you don’t control your own currency, but it’s still silly.)
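Those shares are of very different GDPs, which is the point. Using the roughly 25% real GDP fall from the figures at the top of this post (rough, illustrative numbers):

```python
gdp_pre, gdp_post = 100.0, 75.0  # real GDP index, ~25% fall
exports_pre, exports_post = 0.23 * gdp_pre, 0.33 * gdp_post
imports_pre, imports_post = 0.36 * gdp_pre, 0.35 * gdp_post
print(f"exports: {exports_post / exports_pre - 1:+.0%}")  # up only ~8%
print(f"imports: {imports_post / imports_pre - 1:+.0%}")  # down ~27%
```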

Once the crash happened, the US had sovereign monetary policy and the wherewithal to actually use that monetary policy effectively, so we weathered the crash fairly well, all things considered. Our unemployment rate barely went over 10%. But Greece did not have sovereign monetary policy—they are tied to the Euro—and that severely limited their options for expanding the money supply in response to the crisis. Raising spending and cutting taxes was the best tool they had left.

But the bank(st?)ers and their derivatives schemes caused the Greek debt crisis a good deal more directly than just that. One condition of joining the Euro was that countries limit their fiscal deficits to no more than 3% of GDP (which is a totally arbitrary figure with no economic basis, in case you were wondering). Greece was unwilling or unable to do so, but wanted to look like they were following the rules—so they called up Goldman Sachs and got them to make some special derivatives that Greece could use to continue borrowing without looking like they were borrowing. The bank could have refused; they could have even reported it to the European Central Bank. But of course they didn’t; they got their brokerage fee, and they knew they’d sell it off to some other bank long before they had to worry about whether Greece could ever actually repay it. And then (as I said I’d get back to in a previous post) they paid off the credit rating agencies to get them to rate these newfangled securities as low-risk.

In other words, Greece is not broke; they are being robbed.

Like homeowners in the US, Greece was offered loans they couldn’t afford to pay, but the banks told them they could, because the banks had lost all incentive to actually bother with the question of whether loans can be repaid. They had “moved on”; their “financial innovation” of securitization and collateralized debt obligations meant that they could collect origination fees and brokerage fees on loans that could never possibly be repaid, then sell them off to some Greater Fool down the line who would end up actually bearing the default. As long as the system was complex enough and opaque enough, the buyers would never realize the garbage they were getting until it was too late. The entire concept of loans was thereby broken: The basic assumption that you only loan money you expect to be repaid no longer held.

And it worked, for a while, until finally the unpayable loans tried to create more money than there was in the world, and people started demanding repayment that simply wasn’t possible. Then the whole scheme fell apart, and banks began to go under—but of course we saved them, because you’ve got to save the banks, how can you not save the banks?

Honestly, I don’t even disagree with saving the banks. It was probably necessary. What bothers me is that we did nothing to save everyone else. We did nothing to keep people in their homes, nothing to stop businesses from collapsing and workers from losing their jobs. Precisely because of the absurd over-leveraging of the financial system, the cost of simply refinancing every mortgage in America would have been less than the amount we loaned out in bank bailouts. The banks probably would have done fine anyway, but if they hadn’t, so what? The banks exist to serve the people—not the other way around.

We can stop this from happening again—here in the US, in Greece, in the rest of Europe, everywhere. But in order to do that we must first understand what actually happened; we must stop blaming the victims and start blaming the perpetrators.