Several of the world’s largest banks are known to have committed large-scale fraud. Why have we done so little about it?

July 16, JDN 2457951

In 2014, JPMorgan Chase paid a $614 million settlement for fraudulent mortgage lending that contributed to the financial crisis; but this was spare change compared to the $16.5 billion Bank of America paid in settlements for their fraudulent mortgages.

In 2015, Citibank paid $700 million in restitution and $35 million in penalties for fraudulent advertising of “payment protection” services.

In 2016, Wells Fargo paid $190 million in settlements for defrauding their customers with fake accounts.

Even PayPal has paid $25 million in settlements over abuses of their “PayPal Credit” system.

In 2016, Goldman Sachs paid $5.1 billion in settlements over their fraudulent sales of mortgage-backed securities.

But the worst offender of course is HSBC, which has paid $2.5 billion in settlements over fraud, as well as $1.9 billion in settlements for laundering money for terrorists. The US Justice Department has kept its report on HSBC’s money-laundering safeguards classified, on the grounds that they are so weak that simply revealing them to the public could invite vast amounts of criminal abuse.

These are some of the world’s largest banks. JPMorgan Chase alone owns 8.0% of all investment banking worldwide; Goldman Sachs owns 6.6%; Citi owns 4.9%; Wells Fargo 2.5%; and HSBC 1.8%. That means that between them, these five corporations—all proven to have engaged in large-scale fraud—own almost one-fourth of all the world’s investment banking assets.

What shocks me the most about this is that hardly anyone seems to care. It’s seen as “normal”, as “business as usual” that a quarter of the world’s investment banking system is owned by white-collar criminals. When the issue is even brought up, often the complaint seems to be that the government is being somehow overzealous. The Economist even went so far as to characterize the prosecution of Wall Street fraud as a “shakedown”. Apparently the idea that our world’s most profitable companies shouldn’t be able to launder money for terrorists is just ridiculous. These are rich people; you expect them to follow rules? What is this, some kind of democracy?

Is this just always how it has been? Has corruption always been so thoroughly infused with finance that we don’t even know how to separate them? Has the oligarchy of the top 0.01% become so strong that we can’t even bring ourselves to challenge them when they commit literal treason? For, in case you’ve forgotten, that is what money-laundering for terrorists is: HSBC gave aid and comfort to the enemies of the free world. Like “freedom” and “terrorism”, the word “treason” has been so overused that we begin to forget its meaning; but one of the groups that HSBC gladly loaned money to is an organization that has financed Hezbollah and Al-Qaeda. These are people that American and British soldiers have died fighting against, and when a British bank was found colluding with them, the penalty was… a few weeks of profits, no personal responsibility, and not a single day of prison time. The settlement was in fact less than the profits gained from the criminal enterprise, so this wasn’t even a fine; it was a tax. Our response to treason was to impose a tax.

And this of course was not the result of some newfound leniency in American government in general. No, we are still the nation that imprisons 700 out of every 100,000 people, the nation with more prisoners than any other nation on Earth. Our police officers still kill young Black men with impunity, including at least three dozen unarmed Black men every year, many of them for no apparent reason at all. (The precise number is still unknown, as the police refuse to keep an official database of all the citizens they kill.) Decades of “law and order” politicians promising to stop the “rising crime” (that is actually falling) have made the United States very close to a police state, especially in poor neighborhoods that are primarily inhabited by Black and Hispanic people. We don’t even have an especially high crime rate, except for gun homicides (and that because we have so many guns, also more than any other nation on Earth). We are, if anything, an especially vindictive society, cruel, unforgiving, and violent towards those we perceive as transgressors.

Except, that is, when the criminals are rich. Even the racial biases seem to go away in such circumstances; there is no reasonable doubt as to the guilt of O.J. Simpson or Bill Cosby, but Simpson only ended up in prison years later on a completely unrelated offense, and after Cosby’s mistrial it’s unclear if he’ll ever see any prison time. I don’t see how either man could have been less punished for his crimes had he been White; but can anyone seriously doubt that both men would be punished more had they not been rich?

I do not think that capitalism is an irredeemable system. I think that, in themselves, free markets are very useful, and we should not remove or restrict them unnecessarily. But capitalism isn’t supposed to be a system where the rich can do whatever they want and the poor have to accept it. Capitalism is supposed to be a system where everyone is free to do as they choose, unless they are harming others—and the rules are supposed to be the same for everyone. A free market is not one where you can buy the right to take away other people’s freedom.

Is this just some utopian idealism? It would surely be utopian to imagine a world where fraud never happens, that much is true. Someone, somewhere, will always be defrauding someone else. But a world where fraud is punished most of the time? Where our most powerful institutions are still subject to the basic rule of law? Is that a pipe dream as well?

The difference between price, cost, and value

JDN 2457559

This topic has been on the voting list for my Patreons for several months, but it never quite seems to win the vote. Well, this time it did. I’m glad, because I was tempted to do it anyway.

“Price”, “cost”, and “value”; the words are often used more or less interchangeably, not only by regular people but even by economists. I’ve read papers that talked about “rising labor costs” when what they clearly meant was rising wages—rising labor prices. I’ve read papers that tried to assess the projected “cost” of climate change by using the prices of different commodity futures. And hardly a day goes by that I don’t see a TV commercial listing one (purely theoretical) price, cutting it in half (to the actual price), and saying they’re now giving you “more value”.

As I’ll get to, there are reasons to think they would be approximately the same for some purposes. Indeed, they would be equal, at the margin, in a perfectly efficient market—that may be why so many economists use them this way, because they implicitly or explicitly assume efficient markets. But they are fundamentally different concepts, and it’s dangerous to equate them casually.

Price

Price is exactly what you think it is: The number of dollars you must pay to purchase something. Most of the time when we talk about “cost” or “value” and then give a dollar figure, we’re actually talking about some notion of price.

Generally we speak in terms of nominal prices, which are the usual concept of prices in actual dollars paid, but sometimes we do also speak in terms of real prices, which are relative prices of different things once you’ve adjusted for overall inflation. “Inflation-adjusted price” can be a somewhat counter-intuitive concept; if a good’s (nominal) price rises, but by less than most other prices have risen, its real price has actually fallen.
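The arithmetic behind that counter-intuitive result is just deflation by a general price index. Here is a minimal sketch with made-up numbers (not actual data; the function name is mine):

```python
def real_price(nominal_price, price_index, base_index=100.0):
    """Convert a nominal price into base-period dollars using a price index."""
    return nominal_price * base_index / price_index

# Made-up numbers: a good's nominal price rises from $10 to $12,
# while the overall price index rises from 100 to 130 over the same period.
then = real_price(10.0, 100.0)  # $10.00 in base-period dollars
now = real_price(12.0, 130.0)   # about $9.23 in base-period dollars
# The nominal price rose 20%, yet the real price fell by almost 8%.
```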

You also need to be careful about just what price you’re looking at. When we look at labor prices, for example, we need to consider not only cash wages, but also fringe benefits and other compensation such as stock options. But other than that, prices are fairly straightforward.

Cost

Cost is probably not at all what you think it is. The real cost of something has nothing to do with money; saying that a candy bar “costs $2” or a computer “costs $2,000” is at best a somewhat sloppy shorthand and at worst a fundamental distortion of what cost is and why it matters. No, those are prices. The cost of a candy bar is the toil of children in cocoa farms in Cote d’Ivoire. The cost of a computer is the ecological damage and displaced indigenous people caused by coltan mining in Congo.

The cost of something is the harm that it does to human well-being (or for that matter to the well-being of any sentient being). It is not measured in money but in “the sweat of our laborers, the genius of our scientists, the hopes of our children” (to quote Eisenhower, who understood real cost better than most economists). There is also opportunity cost, the real cost we pay not by what we did, but by what we didn’t do—what we could have done instead.

This is important precisely because while costs should always be reduced when possible, prices can in fact be too low—and indeed, artificially low prices of goods due to externalities are probably the leading reason why humanity bears so many excess real costs. If the price of that chocolate bar accurately reflected the suffering of those African children (perhaps by—Gasp! Paying them a fair wage?), and the price of that computer accurately reflected the ecological damage of those coltan mines (a carbon tax, at least?), you might not want to buy them anymore; in which case, you should not have bought them. In fact, as I’ll get to once I discuss value, there is reason to think that even if you would buy them at a price that accurately reflected the dollar value of the real cost to their producers, we would still buy more than we should.

There is a point at which we should still buy things even though people get hurt making them; if you deny this, stop buying literally anything ever again. We don’t like to think about it, but any product we buy did cause some person, in some place, some degree of discomfort or unpleasantness in production. And many quite useful products will in fact cause death to a nonzero number of human beings.

For some products this is only barely true—it’s hard to feel bad for bestselling authors and artists who sell their work for millions: whatever toil they put into their work, and whatever their elevated suicide rate (which is clearly endogenous; people aren’t randomly assigned to be writers), they surely also enjoy it a good deal of the time, and even if they didn’t, their work sells for millions. But for many products it is quite obviously true: A certain proportion of roofers, steelworkers, and truck drivers will die doing their jobs. We can either accept that, recognizing that it’s worth it to have roofs, steel, and trucking—and by extension, industrial capitalism, and its whole babies-not-dying thing—or we can give up on the entire project of human civilization and go back to hunting and gathering; and even if we somehow managed to avoid the direct homicide most hunter-gatherers engage in, far more people would simply die of disease or get eaten by predators.

Of course, we should have safety standards; but the benefits of higher safety must be carefully weighed against the potential costs of inefficiency, unemployment, and poverty. Safety regulations can reduce some real costs and increase others, even if they almost always increase prices. A good balance is struck when real cost is minimized, where any additional regulation would increase inefficiency more than it improves safety.

Actually, OSHA is an unsung hero for its excellent performance at striking this balance, just as the EPA is an unsung hero for its balance in environmental regulations (and that whole cutting-crime-in-half business). If activists are mad at you for not banning everything bad and business owners are mad at you for not letting them do whatever they want, you’re probably doing it right. Would you rather have people saved from fires, or fires prevented by good safety procedures? Would you rather have murderers imprisoned, or boys who grow up healthy and never become murderers? If an ounce of prevention is worth a pound of cure, why does everyone love firefighters and hate safety regulators? So let me take this opportunity to say thank you, OSHA and EPA, for doing the jobs of firefighters and police way better than they do, and, unlike them, never expecting to be lauded for it.

And now back to our regularly scheduled programming. Markets are supposed to reflect costs in prices, which is why it’s not totally nonsensical to say “cost” when you mean “price”; but in fact they aren’t very good at that, for reasons I’ll get to in a moment.

Value

Value is how much something is worth—not to sell it (that’s the price again), but to use it. One of the core principles of economics is that trade is nonzero-sum, because people can exchange goods that they value differently and thereby make everyone better off. They can’t price them differently—the buyer and the seller must agree upon a price to make the trade. But they can value them differently.

To see how this works, let’s look at a very simple toy model, the simplest essence of trade: Alice likes chocolate ice cream, but all she has is a gallon of vanilla ice cream. Bob likes vanilla ice cream, but all he has is a gallon of chocolate ice cream. So Alice and Bob agree to trade their ice cream, and both of them are happier.

We can measure value in “willingness-to-pay” (WTP), the highest price you’d willingly pay for something. That makes value look more like a price; but there are several reasons we must be careful when we do that. The obvious reason is that WTP varies with overall inflation; since $5 isn’t worth as much in 2016 as it was in 1956, something with a WTP of $5 in 1956 would have a much higher WTP in 2016. The not-so-obvious reason is that money is worth less to you the more you have, so we also need to take into account the effect of wealth, and the marginal utility of wealth: the more money you have, the more money you’ll be willing to pay in order to get the same amount of real benefit. (This actually creates some very serious market distortions in the presence of high income inequality, which I may make the subject of a post or even a paper at some point.) Similarly, there is “willingness-to-accept” (WTA), the lowest price you’d willingly accept for something. In theory these should be equal; in practice, WTA is usually slightly higher than WTP, a discrepancy known as the endowment effect.

So to make our model a bit more quantitative, we could suppose that Alice values vanilla at $5 per gallon and chocolate at $10 per gallon, while Bob also values vanilla at $5 per gallon but only values chocolate at $4 per gallon. (I’m using these numbers to point out that not all the valuations have to be different for trade to be beneficial, as long as some are.) Therefore, if Alice sells her vanilla ice cream to Bob for $5, both will (just barely) accept that deal; and then Alice can buy chocolate ice cream from Bob for anywhere between $4 and $10 and still make both people better off. Let’s say they agree to also sell for $5, so that no net money is exchanged and it is effectively the same as just trading ice cream for ice cream. In that case, Alice has gained $5 in consumer surplus (her WTP of $10 minus the $5 she paid) while Bob has gained $1 in producer surplus (the $5 he received minus his $4 WTP). The total surplus will be $6 no matter what price they choose, which we can compute directly from Alice’s WTP of $10 minus Bob’s WTA of $4. The price ultimately decides how that total surplus is distributed between the two parties, and in the real world it would very likely be the result of which one is the better negotiator.
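The arithmetic of that trade is easy to check directly. Here is a small sketch of the surplus split (the function name is mine, not a standard one):

```python
def surplus(buyer_wtp, seller_wta, price):
    """Split the gains from a single trade at a given price.

    Trade only happens when the price lies between the seller's
    willingness-to-accept and the buyer's willingness-to-pay.
    """
    assert seller_wta <= price <= buyer_wtp, "no trade outside this range"
    consumer = buyer_wtp - price    # buyer's gain
    producer = price - seller_wta   # seller's gain
    return consumer, producer

# Alice buys Bob's chocolate: her WTP is $10, his WTA is $4.
for price in (4, 5, 10):
    cs, ps = surplus(10, 4, price)
    print(f"price ${price}: consumer ${cs}, producer ${ps}, total ${cs + ps}")
# The total is $6 at every price; the price only divides it between them.
```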

The enormous cost of our distorted understanding

(See what I did there?) If markets were perfectly efficient, prices would automatically adjust so that, at the margin, value is equal to price is equal to cost. What I mean by “at the margin” might be clearer with an example: Suppose we’re selling apples. How many apples do you decide to buy? Well, the value of each successive apple to you is lower, the more apples you have (the law of diminishing marginal utility, which unlike most “laws” in economics is actually almost always true). At some point, the value of the next apple will be just barely above what you have to pay for it, so you’ll stop there. By a similar argument, the cost of producing apples increases the more apples you produce (the law of diminishing returns, which is a lot less reliable, more like the Pirate Code), and the producers of apples will keep selling them until the price they can get is only just barely larger than the cost of production. Thus, in the theoretical limit of infinitely-divisible apples and perfect rationality, marginal value = price = marginal cost. In such a world, markets are perfectly efficient and they maximize surplus, which is the difference between value and cost.
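The buyer’s side of that stopping rule can be sketched in a few lines; the numbers here are purely illustrative, not drawn from any data:

```python
def quantity_demanded(marginal_values, price):
    """Buy successive units while each unit is still worth at least the price.

    `marginal_values` is assumed to be a decreasing sequence,
    reflecting diminishing marginal utility.
    """
    quantity = 0
    for value in marginal_values:
        if value < price:
            break  # the next unit isn't worth what it costs; stop buying
        quantity += 1
    return quantity

# Illustrative marginal values of successive apples to one buyer:
apples = [5.00, 3.00, 2.00, 1.50, 1.20, 1.00, 0.80]
print(quantity_demanded(apples, 1.10))  # 5: the sixth apple isn't worth $1.10
```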

But in the real world of course, none of those assumptions are true. No product is infinitely divisible (though the gasoline in a car is obviously a lot more divisible than the car itself). No one is perfectly rational. And worst of all, we’re not measuring value in the same units. As a result, there is basically no reason to think that markets are optimizing anything; their optimization mechanism is setting two things equal that aren’t measured the same way, like trying to achieve thermal equilibrium by matching the temperature of one thing in Celsius to the temperature of other things in Fahrenheit.

An implicit assumption of the above argument that didn’t even seem worth mentioning was that when I set value equal to price and set price equal to cost, I’m setting value equal to cost; transitive property of equality, right? Wrong. The value is equal to the price, as measured by the buyer. The cost is equal to the price, as measured by the seller.

If the buyer and seller have the same marginal utility of wealth, no problem; they are measuring in the same units. But if not, we convert from utility to money and then back to utility, using a different function to convert each time. In the real world, wealth inequality is massive, so it’s wildly implausible that we all have anything close to the same marginal utility of wealth. Maybe that’s close enough if you restrict yourself to middle-class people in the First World; so when a tutoring client pays me, we might really be getting close to setting marginal value equal to marginal cost. But once you include corporations that are owned by billionaires and people who live on $2 per day, there’s simply no way that those price-to-utility conversions are the same at each end. For Bill Gates, a million dollars is a rounding error. For me, it would buy a house, give me more flexible work options, and keep me out of debt, but not radically change the course of my life. For a child on a cocoa farm in Cote d’Ivoire, it could change her life in ways she can probably not even comprehend.
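One standard (though by no means unique) way to model this is logarithmic utility of wealth; under that assumption, the same dollar windfall produces wildly different utility gains at different wealth levels:

```python
import math

def utility_gain(wealth, windfall):
    """Utility gained from a windfall, assuming logarithmic (Bernoulli) utility.

    Log utility is a modeling assumption here, not a measured fact.
    """
    return math.log(wealth + windfall) - math.log(wealth)

MILLION = 1_000_000
# The same $1 million, received at three very different wealth levels:
print(utility_gain(100_000_000_000, MILLION))  # a billionaire: ~0.00001
print(utility_gain(100_000, MILLION))          # a middle-class household: ~2.4
print(utility_gain(1_000, MILLION))            # deep poverty: ~6.9
```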

The market distortions created by this are huge; indeed, most of the fundamental flaws in capitalism as we know it are ultimately traceable to this. Why do Americans throw away enough food to feed all the starving children in Africa? Marginal utility of wealth. Why are Silicon Valley programmers driving the prices for homes in San Francisco higher than most Americans will make in their lifetimes? Marginal utility of wealth. Why are the Koch brothers spending more on this year’s elections than the nominal GDP of the Gambia? Marginal utility of wealth. It’s the sort of pattern that once you see it suddenly seems obvious and undeniable, a paradigm shift a bit like the heliocentric model of the solar system. Forget trade barriers, immigration laws, and taxes; the most important market distortions around the world are all created by wealth inequality. Indeed, the wonder is that markets work as well as they do.

The real challenge is what to do about it, how to reduce this huge inequality of wealth and therefore marginal utility of wealth, without giving up entirely on the undeniable successes of free market capitalism. My hope is that once more people fully appreciate the difference between price, cost, and value, this paradigm shift will be much easier to make; and then perhaps we can all work together to find a solution.

Selling debt goes against everything the free market stands for

JDN 2457555

I don’t think most people—or even most economists—have any concept of just how fundamentally perverse and destructive our financial system has become, and a large chunk of it ultimately boils down to one thing: Selling debt.

Certainly collateralized debt obligations (CDOs), and their meta-form, CDO2s (pronounced “see-dee-oh squareds”), are nothing more than selling debt, and along with credit default swaps (CDS; they are basically insurance, but without those pesky regulations against things like fraud and conflicts of interest) they were directly responsible for the 2008 financial crisis and the ensuing Great Recession and Second Depression.

But selling debt continues in a more insidious way, underpinning the entire debt collection industry which raises tens of billions of dollars per year by harassment, intimidation and extortion, especially of the poor and helpless. Frankly, I think what’s most shocking is how little money they make, given the huge number of people they harass and intimidate.

John Oliver did a great segment on debt collection (with a very nice surprise at the end).

But perhaps most baffling to me is the number of people who defend the selling of debt on the grounds that it is a “free market” activity which must be protected from government “interference in personal liberty”. To show this is not a strawman, here’s the American Enterprise Institute saying exactly that.

So let me say this in no uncertain terms: Selling debt goes against everything the free market stands for.

One of the most basic principles of free markets, one of the founding precepts of capitalism laid down by no less than Adam Smith (and before him by great political philosophers like John Locke), is the freedom of contract. This is the good part of capitalism, the part that makes sense, the reason we shouldn’t tear it all down but should instead try to reform it around the edges.

Indeed, the freedom of contract is so fundamental to human liberty that laws can only be considered legitimate insofar as they do not infringe upon it without a compelling public interest. Freedom of contract is right up there with freedom of speech, freedom of the press, freedom of religion, and the right of due process.

The freedom of contract is the right to make agreements, including financial agreements, with anyone you please, and under conditions that you freely and rationally impose in a state of good faith and transparent discussion. Conversely, it is the right not to make agreements with those you choose not to, and to not be forced into agreements under conditions of fraud, intimidation, or impaired judgment.

Freedom of contract is the basis of my right to take on debt, provided that I am honest about my circumstances and I can find a lender who is willing to lend to me. So taking on debt is a fundamental part of freedom of contract.

But selling debt is something else entirely. Far from exercising the freedom of contract, it violates it. When I take out a loan from bank A, and then they turn around and sell that loan to bank B, I suddenly owe money to bank B, but I never agreed to do that. I had nothing to do with their decision to work with bank B as opposed to keeping the loan or selling it to bank C.

Current regulations prohibit banks from “changing the terms of the loan”, but in practice they change them all the time—they can’t change the principal balance, the loan term, or the interest rate, but they can change the late fees, the payment schedule, and lots of subtler things about the loan that can still make a very big difference. Indeed, as far as I’m concerned they have changed the terms of the loan—one of the terms of the loan was that I was to pay X amount to bank A, not that I was to pay X amount to bank B. I may or may not have good reasons not to want to pay bank B—they might be far less trustworthy than bank A, for instance, or have a far worse social responsibility record—and in any case it doesn’t matter; it is my choice whether or not I want anything to do with bank B, whatever my reasons might be.

I take this matter quite personally, for it is by the selling of debt that, in moral (albeit not legal) terms, a British bank stole my parents’ house. Indeed, not just any British bank; it was none other than HSBC, the money launderers for terrorists.

When they first obtained their mortgage, my parents did not actually know that HSBC was quite so evil as to literally launder money for terrorists, but they did already know that the bank was involved in a great many shady dealings, and they even specifically told their lender that they did not want the loan sold—and that if it was to be sold, it was absolutely never to be sold to HSBC in particular. Their mistake (which was rather like the “mistake” of someone who leaves their car unlocked and has it stolen, or forgets to arm the home alarm system and suffers a burglary) was failing to get this written into the formal contract, instead of simply making it a verbal agreement with the bankers. Such verbal contracts are enforceable under the law, at least in theory; but that would require proof of the verbal contract (and what proof could we provide?), and litigation would probably have cost as much as the house.

Oh, by the way, they were given a subprime interest rate of 8% despite being middle-class professionals with good credit, no doubt to maximize the broker’s closing commission. Most banks reserved such behavior for racial minorities, but apparently this one was equal-opportunity in the worst way. Perhaps my parents were naive to trust bankers any further than they could throw them.

As a result, I think you know what happened next: They sold the loan to HSBC.

Now, had it ended there, with my parents unwittingly forced into supporting a bank that launders money for terrorists, that would have been bad enough. But it assuredly did not.

By a series of subtle and manipulative practices that poked through one loophole after another, HSBC proceeded to raise my parents’ payments higher and higher. One particularly insidious tactic they used was to sit on the checks until just after the due date passed, so they could charge late fees on the payments, then they recapitalized the late fees. My parents caught on to this particular trick after a few months, and started mailing the checks certified so they would be date-stamped; and lo and behold, all the payments were suddenly on time! By several other similarly devious tactics, all of which were technically legal or at least not provable, they managed to raise my parents’ monthly mortgage payments by over 50%.

Note that it was a fixed-rate, fixed-term mortgage. The initial payments—what should have been always the payments, that’s the point of a fixed-rate fixed-term mortgage—were under $2000 per month. By the end they were paying over $3000 per month. HSBC forced my parents to overpay on a mortgage an amount equal to the US individual poverty line, or the per-capita GDP of Peru.

They tried to make the payments, but after being wildly over budget and hit by other unexpected expenses (including defects in the house’s foundation that they had to pay to fix, but because of the “small” amount at stake and the overwhelming legal might of the construction company, no lawyer was willing to sue over), they simply couldn’t do it anymore, and gave up. They gave the house to the bank with a deed in lieu of foreclosure.

And that is the story of how a bank that my parents never agreed to work with, never would have agreed to work with, indeed specifically said they would not work with, still ended up claiming their house—our house, the house I grew up in from the age of 12. Legally, I cannot prove they did anything against the law. (I mean, other than laundered money for terrorists.) But morally, how is this any less than theft? Would we not be victimized less had a burglar broken into our home, vandalized the walls and stolen our furniture?

Indeed, that would probably be covered under our insurance! Where can I buy insurance against the corrupt and predatory financial system? Where are my credit default swaps to pay me when everything goes wrong?

And all of this could have been prevented, if banks simply weren’t allowed to violate our freedom of contract by selling their loans to other banks.

Indeed, the Second Depression could probably have been likewise prevented. Without selling debt, there is no securitization. Without securitization, there is far less leverage. Without leverage, there are no bank failures. Without bank failures, there is no depression. A decade of global economic growth was lost because we allowed banks to sell debt whenever they pleased.

I have heard the counter-arguments many times:

“But what if banks need the liquidity?” Easy. They can take out their own loans with those other banks. If bank A finds they need more cashflow, they should absolutely feel free to take out a loan from bank B. They can even point to their projected revenues from the mortgage payments we owe them, as a means of repaying that loan. But they should not be able to involve us in that transaction. If you want to trust HSBC, that’s your business (you’re an idiot, but it’s a free country). But you have no right to force me to trust HSBC.

“But banks might not be willing to make those loans, if they knew they couldn’t sell or securitize them!” THAT’S THE POINT. Banks wouldn’t take on all these ridiculous risks in their lending practices that they did (“NINJA loans” and mortgages with payments larger than their buyers’ annual incomes), if they knew they couldn’t just foist the debt off on some Greater Fool later on. They would only make loans they actually expect to be repaid. Obviously any loan carries some risk, but banks would only take on risks they thought they could bear, as opposed to risks they thought they could convince someone else to bear—which is the definition of moral hazard.

“Homes would be unaffordable if people couldn’t take out large loans!” First of all, I’m not against mortgages—I’m against securitization of mortgages. Yes, of course, people need to be able to take out loans. But they shouldn’t be forced to pay those loans to whoever their bank sees fit. If indeed the loss of subprime securitized mortgages made it harder for people to get homes, that’s a problem; but the solution to that problem was never to make it easier for people to get loans they can’t afford—it is clearly either to reduce the price of homes or increase the incomes of buyers. Subsidized housing construction, public housing, changes in zoning regulation, a basic income, lower property taxes, an expanded earned-income tax credit—these are the sort of policies that one implements to make housing more affordable, not “go ahead and let banks exploit people however they want”.

Remember, a regulation against selling debt would protect the freedom of contract. It would remove a way for private individuals and corporations to violate that freedom, like regulations against fraud, intimidation, and coercion. It should be uncontroversial that no one has any right to force you to do business with someone you would not voluntarily do business with, certainly not in a private transaction between for-profit corporations. Maybe that sort of mandate makes sense in rare circumstances by the government, but even then it should really be implemented as a tax, not a mandate to do business with a particular entity. The right to buy what you choose is the foundation of a free market—and implicit in it is the right not to buy what you do not choose.

There are many possible regulations on debt that would impose upon freedom of contract: As horrific as payday loans are, if someone honestly and knowingly wants to take on short-term debt at 400% APR, I’m not sure it’s my business to stop them. And some people may really be in such dire circumstances that they need money that urgently and no one else will lend to them. Insofar as I want payday loans regulated, it is to ensure that they are really lending in good faith—as many surely are not—and ultimately I want to outcompete them by providing desperate people with more reasonable loan terms. But a ban on securitization is like a ban on fraud; it is the sort of law that protects our rights.

Believing in civilization without believing in colonialism

JDN 2457541

In a post last week I presented some of the overwhelming evidence that society has been getting better over time, particularly since the start of the Industrial Revolution. I focused mainly on infant mortality rates—babies not dying—but there are lots of other measures you could use as well. Despite popular belief, poverty is rapidly declining, and is now the lowest it’s ever been. War is rapidly declining. Crime is rapidly declining in First World countries, and to the best of our knowledge crime rates are stable worldwide. Public health is rapidly improving. Lifespans are getting longer. And so on, and so on. It’s not quite true to say that every indicator of human progress is on an upward trend, but the vast majority of really important indicators are.

Moreover, there is every reason to believe that this great progress is largely the result of what we call “civilization”, even Western civilization: Stable, centralized governments, strong national defense, representative democracy, free markets, openness to global trade, investment in infrastructure, science and technology, secularism, a culture that values innovation, and freedom of speech and the press. We did not get here by Marxism, nor agrarian socialism, nor primitivism, nor anarcho-capitalism. We did not get here by fascism, nor theocracy, nor monarchy. This progress was built by the center-left welfare state, “social democracy”, “modified capitalism”, the system where free, open markets are coupled with a strong democratic government to protect and steer them.

This fact is basically beyond dispute; the evidence is overwhelming. The serious debate in development economics is over which parts of the Western welfare state are most conducive to raising human well-being, and which parts of the package are more optional. And even then, some things are fairly obvious: Stable government is clearly necessary, while speaking English is clearly optional.

Yet many people are resistant to this conclusion, or even offended by it, and I think I know why: They are confusing the results of civilization with the methods by which it was established.

The results of civilization are indisputably positive: Everything I just named above, especially babies not dying.

But the methods by which civilization was established are not; indeed, some of the greatest atrocities in human history are attributable at least in part to attempts to “spread civilization” to “primitive” or “savage” people.
It is therefore vital to distinguish between the result, civilization, and the processes by which it was effected, such as colonialism and imperialism.

First, it’s important not to overstate the link between civilization and colonialism.

We tend to associate colonialism and imperialism with White people from Western European cultures conquering other people in other cultures; but in fact colonialism and imperialism are basically universal to any human culture that attains sufficient size and centralization. India engaged in colonialism, Persia engaged in imperialism, China engaged in imperialism, the Mongols were of course major imperialists, and don’t forget the Ottoman Empire; and did you realize that Tibet and Mali were at one time imperialists as well? And of course there are a whole bunch of empires you’ve probably never heard of, like the Parthians and the Ghaznavids and the Umayyads. Even many of the people we’re accustomed to thinking of as innocent victims of colonialism were themselves imperialists—the Aztecs certainly were (they even sold people into slavery and used them for human sacrifice!), as were the Pequot, and the Iroquois may not have outright conquered anyone but were definitely at least “soft imperialists” the way that the US is today, spreading their influence around and using economic and sometimes military pressure to absorb other cultures into their own.

Of course, those were all civilizations, at least in the broadest sense of the word; but before that, it’s not that there wasn’t violence, it just wasn’t organized enough to be worthy of being called “imperialism”. The more general concept of intertribal warfare is a human universal, and some hunter-gatherer tribes actually engage in an essentially constant state of warfare we call “endemic warfare”. People have been grouping together to kill other people they perceived as different for at least as long as there have been people to do so.

This is of course not to excuse what European colonial powers did when they set up bases on other continents and exploited, enslaved, or even murdered the indigenous population. And the absolute numbers of people enslaved or killed are typically larger under European colonialism, mainly because European cultures became so powerful and conquered almost the entire world. Even if European societies were not uniquely predisposed to be violent (and I see no evidence to say that they were—humans are pretty much humans), they were more successful in their violent conquering, and so more people suffered and died. It’s also a first-mover effect: If the Ming Dynasty had supported Zheng He more in his colonial ambitions, I’d probably be writing this post in Mandarin and reflecting on why Asian cultures have engaged in so much colonial oppression.

While there is a deeply condescending paternalism (and often post-hoc rationalization of your own self-interested exploitation) involved in saying that you are conquering other people in order to civilize them, humans are also perfectly capable of committing atrocities for far less noble-sounding motives. There are holy wars such as the Crusades and ethnic genocides like in Rwanda, and the Arab slave trade was purely for profit and didn’t even have the pretense of civilizing people (not that the Atlantic slave trade was ever really about that anyway).

Indeed, I think it’s important to distinguish between colonialists who really did make some effort at civilizing the populations they conquered (like Britain, and also the Mongols actually) and those that clearly were just using that as an excuse to rape and pillage (like Spain and Portugal). This is similar to but not quite the same thing as the distinction between settler colonialism, where you send colonists to live there and build up the country, and exploitation colonialism, where you send military forces to take control of the existing population and exploit them to get their resources. Countries that experienced settler colonialism (such as the US and Australia) have fared a lot better in the long run than countries that experienced exploitation colonialism (such as Haiti and Zimbabwe).

The worst consequences of colonialism weren’t even really anyone’s fault, actually. The reason something like 98% of all Native Americans died as a result of European colonization was not that Europeans killed them—they did kill thousands of course, and I hope it goes without saying that that’s terrible, but it was a small fraction of the total deaths. The reason such a huge number died and whole cultures were depopulated was disease, and the inability of medical technology in any culture at that time to handle such a catastrophic plague. The primary cause was therefore accidental, and not really foreseeable given the state of scientific knowledge at the time. (I therefore think it’s wrong to consider it genocide—maybe democide.) Indeed, what really would have saved these people would be if Europe had advanced even faster into industrial capitalism and modern science, or else waited to colonize until they had; and then they could have distributed vaccines and antibiotics when they arrived. (Of course, there is evidence that a few European colonists used the diseases intentionally as biological weapons, which no amount of vaccine technology would prevent—and that is indeed genocide. But again, this was a small fraction of the total deaths.)

However, even with all those caveats, I hope we can all agree that colonialism and imperialism were morally wrong. No nation has the right to invade and conquer other nations; no one has the right to enslave people; no one has the right to kill people based on their culture or ethnicity.

My point is that it is entirely possible to recognize that and still appreciate that Western civilization has dramatically improved the standard of human life over the last few centuries. It simply doesn’t follow from the fact that British government and culture were more advanced and pluralistic that British soldiers can just go around taking over other people’s countries and planting their own flag (follow the link if you need some comic relief from this dark topic). That was the moral failing of colonialism; not that they thought their society was better—for in many ways it was—but that they thought that gave them the right to terrorize, slaughter, enslave, and conquer people.

Indeed, the “justification” of colonialism is a lot like that bizarre pseudo-utilitarianism I mentioned in my post on torture, where the mere presence of some benefit is taken to justify any possible action toward achieving that benefit. No, that’s not how morality works. You can’t justify unlimited evil by any good—it has to be a greater good, as in actually greater.

So let’s suppose that you do find yourself encountering another culture which is clearly more primitive than yours; their inferior technology results in them living in poverty and having very high rates of disease and death, especially among infants and children. What, if anything, are you justified in doing to intervene to improve their condition?

One idea would be to hold to the Prime Directive: No intervention, no sir, not ever. This is clearly what Gene Roddenberry thought of imperialism, hence why he built it into the Federation’s core principles.

But does that really make sense? Even as Star Trek shows progressed, the writers kept coming up with situations where the Prime Directive really seemed like it should have an exception, and sometimes decided that the honorable crew of Enterprise or Voyager really should intervene in this more primitive society to save them from some terrible fate. And I hope I’m not committing a Fictional Evidence Fallacy when I say that if a fictional universe specifically designed not to let that happen keeps making it happen, well… maybe it’s something we should be considering.

What if people are dying of a terrible disease that you could easily cure? Should you really deny them access to your medicine to avoid intervening in their society?

What if the primitive culture is ruled by a horrible tyrant that you could easily depose with little or no bloodshed? Should you let him continue to rule with an iron fist?

What if the natives are engaged in slavery, or even their own brand of imperialism against other indigenous cultures? Can you fight imperialism with imperialism?

And then we have to ask, does it really matter whether their babies are being murdered by the tyrant or simply dying from malnutrition and infection? The babies are just as dead, aren’t they? Even if we say that being murdered by a tyrant is worse than dying of malnutrition, it can’t be that much worse, can it? Surely 10 babies dying of malnutrition is at least as bad as 1 baby being murdered?

But then it begins to seem like we have a duty to intervene, and moreover a duty that applies in almost every circumstance! If you are on opposite sides of the technology threshold where infant mortality drops from 30% to 1%, how can you justify not intervening?

I think the best answer here is to keep in mind the very large costs of intervention as well as the potentially large benefits. The answer sounds simple, but is actually perhaps the hardest possible answer to apply in practice: You must do a cost-benefit analysis. Furthermore, you must do it well. We can’t demand perfection, but it must actually be a serious good-faith effort to predict the consequences of different intervention policies.

We know that people tend to resist most outside interventions, especially if you have the intention of toppling their leaders (even if they are indeed tyrannical). Even the simple act of offering people vaccines could be met with resistance, as the native people might think you are poisoning them or somehow trying to control them. But in general, opening contact with gifts and trade is almost certainly going to trigger less hostility and therefore be more effective than going in guns blazing.

If you do use military force, it must be targeted at the particular leaders who are most harmful, and it must be designed to achieve swift, decisive victory with minimal collateral damage. (Basically I’m talking about just war theory.) If you really have such an advanced civilization, show it by exhibiting total technological dominance and minimizing the number of innocent people you kill. The NATO interventions in Kosovo and Libya mostly got this right. The Vietnam War and Iraq War got it totally wrong.

As you change their society, you should be prepared to bear most of the cost of transition; you are, after all, much richer than they are, and also the ones responsible for effecting the transition. You should not expect to see short-term gains for your own civilization, only long-term gains once their culture has advanced to a level near your own. You can’t bear all the costs of course—transition is just painful, no matter what you do—but at least the fungible economic costs should be borne by you, not by the native population. Examples of doing this wrong include basically all the standard examples of exploitation colonialism: Africa, the Caribbean, South America. Examples of doing this right include West Germany and Japan after WW2, and South Korea after the Korean War—which is to say, the greatest economic successes in the history of the human race. This was us winning development, humanity. Do this again everywhere and we will have not only ended world hunger, but achieved global prosperity.

What happens if we apply these principles to real-world colonialism? It does not fare well. Nor should it, as we’ve already established that most if not all real-world colonialism was morally wrong.

15th and 16th century colonialism fails immediately; it offers no benefit to speak of. Europe’s technological superiority was enough to give them gunpowder but not enough to drop their infant mortality rate. Maybe life was better in 16th century Spain than it was in the Aztec Empire, but honestly not by all that much; and life in the Iroquois Confederacy was in many ways better than life in 15th century England. (Though maybe that justifies some Iroquois imperialism, at least their “soft imperialism”?)

If these principles did justify any real-world imperialism—and I am not convinced that they do—it would only be much later imperialism, like the British Empire in the 19th and 20th century. And even then, it’s not clear that the talk of “civilizing” people and “the White Man’s Burden” was much more than rationalization, an attempt to give a humanitarian justification for what were really acts of self-interested economic exploitation. Even though India and South Africa are probably better off now than they were when the British first took them over, it’s not at all clear that this was really the goal of the British government so much as a side effect, and there are a lot of things the British could have done differently that would obviously have made them better off still—you know, like not implementing the precursors to apartheid, or making India a parliamentary democracy immediately instead of starting with the Raj and only conceding to democracy after decades of protest. What actually happened doesn’t exactly look like Britain cared nothing for actually improving the lives of people in India and South Africa (they did build a lot of schools and railroads, and sought to undermine slavery and the caste system), but it also doesn’t look like that was their only goal; it was more like one goal among several which also included the strategic and economic interests of Britain. It isn’t enough that Britain was a better society or even that they made South Africa and India better societies than they were; if the goal wasn’t really about making people’s lives better where you are intervening, it’s clearly not justified intervention.

And that’s the relatively beneficent imperialism; the really horrific imperialists throughout history made only the barest pretense of spreading civilization and were clearly interested in nothing more than maximizing their own wealth and power. This is probably why we get things like the Prime Directive; we saw how bad it can get, and overreacted a little by saying that intervening in other cultures is always, always wrong, no matter what. It was only a slight overreaction—intervening in other cultures is usually wrong, and almost all historical examples of it were wrong—but it is still an overreaction. There are exceptional cases where intervening in another culture can be not only morally right but obligatory.

Indeed, one underappreciated consequence of colonialism and imperialism is that they have triggered a backlash against real good-faith efforts toward economic development. People in Africa, Asia, and Latin America see economists from the US and the UK (and most of the world’s top economists are in fact educated in the US or the UK) come in and tell them that they need to do this and that to restructure their society for greater prosperity, and they understandably ask: “Why should I trust you this time?” The last two or four or seven batches of people coming from the US and Europe to intervene in their countries exploited them or worse, so why is this time any different?

It is different, of course; UNDP is not the East India Company, not by a long shot. Even for all their faults, the IMF isn’t the East India Company either. Indeed, while these people largely come from the same places as the imperialists, and may be descended from them, they are in fact completely different people, and moral responsibility does not inherit across generations. While the suspicion is understandable, it is ultimately unjustified; whatever happened hundreds of years ago, this time most of us really are trying to help—and it’s working.

What is progress? How far have we really come?

JDN 2457534

It is a controversy that has lasted throughout the ages: Is the world getting better? Is it getting worse? Or is it more or less staying the same, changing in ways that don’t really constitute improvements or detriments?

The most obvious and indisputable change in human society over the course of history has been the advancement of technology. At one extreme there are techno-utopians, who believe that technology will solve all the world’s problems and bring about a glorious future; at the other extreme are anarcho-primitivists, who maintain that civilization, technology, and industrialization were all grave mistakes, removing us from our natural state of peace and harmony.

I am not a techno-utopian—I do not believe that technology will solve all our problems—but I am much closer to that end of the scale. Technology has solved a lot of our problems, and will continue to solve a lot more. My aim in this post is to convince you that progress is real, that things really are, on the whole, getting better.

One of the more baffling arguments against progress comes from none other than Jared Diamond, the social scientist most famous for Guns, Germs and Steel (which oddly enough is mainly about horses and goats). About seven months before I was born, Diamond wrote an essay for Discover magazine arguing quite literally that agriculture—and by extension, civilization—was a mistake.

Diamond fortunately avoids the usual argument based solely on modern hunter-gatherers, which is a selection bias if ever I heard one. Instead his main argument seems to be that paleontological evidence shows an overall decrease in health around the same time as agriculture emerged. But that’s still an endogeneity problem, albeit a subtler one. Maybe agriculture emerged as a response to famine and disease. Or maybe they were both triggered by rising populations; higher populations increase disease risk, and are also basically impossible to sustain without agriculture.

I am similarly dubious of the claim that hunter-gatherers are always peaceful and egalitarian. It does seem to be the case that herders are more violent than other cultures, as they tend to form honor cultures that punish all slights with overwhelming violence. Even after the Industrial Revolution there were herder honor cultures—the Wild West. Yet as Steven Pinker keeps trying to tell people, the death rates due to homicide in all human cultures appear to have steadily declined for thousands of years.

I read an article just a few days ago on the Scientific American blog which included the following claim so astonishingly nonsensical it makes me wonder if the authors can even do arithmetic or read statistical tables correctly:

As I keep reminding readers (see Further Reading), the evidence is overwhelming that war is a relatively recent cultural invention. War emerged toward the end of the Paleolithic era, and then only sporadically. A new study by Japanese researchers published in the Royal Society journal Biology Letters corroborates this view.

Six Japanese scholars led by Hisashi Nakao examined the remains of 2,582 hunter-gatherers who lived 12,000 to 2,800 years ago, during Japan’s so-called Jomon Period. The researchers found bashed-in skulls and other marks consistent with violent death on 23 skeletons, for a mortality rate of 0.89 percent.

That is supposed to be evidence that ancient hunter-gatherers were peaceful? The global homicide rate today is 62 homicides per million people per year. Using the worldwide life expectancy of 71 years (which is biasing against modern civilization because our life expectancy is longer), that means that the worldwide lifetime homicide rate is 4,400 homicides per million people, or 0.44%—that’s less than half the homicide rate of these “peaceful” hunter-gatherers. If you compare just against First World countries, the difference is even starker; let’s use the US, which has the highest homicide rate in the First World. Our homicide rate is 38 homicides per million people per year, which at our life expectancy of 79 years is 3,000 homicides per million people, or an overall homicide rate of 0.3%, slightly more than a third of this “peaceful” ancient culture. The most peaceful societies today—notably Japan, where these remains were found—have homicide rates as low as 3 per million people per year, which is a lifetime homicide rate of 0.02%, forty times smaller than their supposedly utopian ancestors. (Yes, all of Japan has fewer total homicides than Chicago. I’m sure it has nothing to do with their extremely strict gun control laws.) Indeed, to get a modern homicide rate as high as these hunter-gatherers, you need to go to a country like Congo, Myanmar, or the Central African Republic. To get a substantially higher homicide rate, you essentially have to be in Latin America. Honduras, the murder capital of the world, has a lifetime homicide rate of about 6.7%.
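This arithmetic is easy to check for yourself. Here is a minimal Python sketch using the round figures quoted above (annual homicide rates per million people and life expectancies; the Jomon figure is 23 violent deaths out of 2,582 skeletons):

```python
# Convert an annual homicide rate (per million people) into a rough
# lifetime homicide risk by multiplying by life expectancy.
def lifetime_rate(per_million_per_year, life_expectancy):
    return per_million_per_year * life_expectancy / 1_000_000

jomon = 23 / 2582                 # violent deaths among the Jomon-period skeletons
world = lifetime_rate(62, 71)     # global average today
us    = lifetime_rate(38, 79)     # United States today
japan = lifetime_rate(3, 71)      # modern Japan

print(f"Jomon hunter-gatherers: {jomon:.2%}")  # ~0.89%
print(f"World today:            {world:.2%}")  # ~0.44%
print(f"US today:               {us:.2%}")     # ~0.30%
print(f"Japan today:            {japan:.2%}")  # ~0.02%
```

Note that the worldwide figure comes out to less than half the Jomon rate, and the modern Japanese figure to roughly a fortieth of it—exactly the comparisons made above.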

Again, how did I figure these things out? By reading basic information from publicly-available statistical tables and then doing some simple arithmetic. Apparently these paleoanthropologists couldn’t be bothered to do that, or didn’t know how to do it correctly, before they started proclaiming that human nature is peaceful and civilization is the source of violence. After an oversight as egregious as that, it feels almost petty to note that a sample size of a few thousand people from one particular region and culture isn’t sufficient data to draw such sweeping judgments or speak of “overwhelming” evidence.

Of course, in order to decide whether progress is a real phenomenon, we need a clearer idea of what we mean by progress. It would be presumptuous to use per-capita GDP, though there can be absolutely no doubt that technology and capitalism do in fact raise per-capita GDP. If we measure by inequality, modern society clearly fares much worse (our top 1% share and Gini coefficient may be higher than Classical Rome!), but that is clearly biased in the opposite direction, because the main way we have raised inequality is by raising the ceiling, not lowering the floor. Most of our really good measures (like the Human Development Index) only exist for the last few decades and can barely even be extrapolated back through the 20th century.

How about babies not dying? This is my preferred measure of a society’s value. It seems like something that should be totally uncontroversial: Babies dying is bad. All other things equal, a society is better if fewer babies die.

I suppose it doesn’t immediately follow that, all things considered, a society is better if fewer babies die; maybe the dying babies could be offset by some greater good. Perhaps a totalitarian society where no babies die is in fact worse than a free society in which a few babies die, or perhaps we should be prepared to accept a small number of babies dying in order to save adults from poverty, or something like that. But without some really powerful overriding reason, babies not dying probably means your society is doing something right. (And since most ancient societies were in a state of universal poverty and quite frequently tyranny, these exceptions would only strengthen my case.)

Well, get ready for some high-yield truth bombs about infant mortality rates.

It’s hard to get good data for prehistoric cultures, but the best data we have says that infant mortality in ancient hunter-gatherer cultures was about 20-50%, with a best estimate around 30%. This is statistically indistinguishable from early agricultural societies.

Indeed, 30% seems to be the figure humanity had for most of history. Just shy of a third of all babies died for most of history.

In Medieval times, infant mortality was about 30%.

This same rate (fluctuating based on various plagues) persisted into the Enlightenment—Sweden has the best records, and their infant mortality rate in 1750 was about 30%.

The decline in infant mortality began slowly: During the Industrial Era, infant mortality was about 15% in isolated villages, but still as high as 40% in major cities due to high population densities with poor sanitation.

Even as recently as 1900, there were US cities with infant mortality rates as high as 30%, though the overall rate was more like 10%.

Most of the decline was recent and rapid: Just within the US since WW2, infant mortality fell from about 5.5% to 0.7%, though there remains a substantial disparity between White and Black people.

Globally, the infant mortality rate fell from 6.3% to 3.2% within my lifetime, and in Africa today, the region where it is worst, it is about 5.5%—or what it was in the US in the 1940s.

This precipitous decline in babies dying is the main reason ancient societies have such low life expectancies; people who survived to adulthood typically lived to be about 70 years old, not much less than we do today. So my multiplying everything by 71 actually isn’t too far off even for ancient societies.
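For reference, here is a minimal Python sketch collecting the approximate figures quoted above in one place (the years are rough midpoints I’ve chosen for illustration, not precise data points):

```python
# Approximate infant mortality rates quoted above, as fractions of live
# births; negative years are BC.
rates = [
    (-10000, 0.30),   # hunter-gatherers and early agricultural societies
    (1300,   0.30),   # Medieval Europe
    (1750,   0.30),   # Sweden, the earliest reliable national records
    (1850,   0.15),   # Industrial-era villages (up to 0.40 in major cities)
    (1900,   0.10),   # US overall
    (1945,   0.055),  # US just after WW2
    (2016,   0.007),  # US today; the global average is about 0.032
]

# The striking fact: essentially no change for twelve millennia, then a
# roughly 98% decline in about two centuries.
decline = 1 - rates[-1][1] / rates[0][1]
print(f"Decline since the Industrial Revolution: {decline:.0%}")
```

Plot these points and you get the two graphs below: a flat line for most of human history, with a sharp kink around 1800.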

Let me make a graph for you here, of the approximate rate of babies dying over time from 10,000 BC to today:

Infant_mortality.png

Let’s zoom in on the last 250 years, where the data is much more solid:

Infant_mortality_recent.png

I think you may notice something in these graphs. There is quite literally a turning point for humanity, a kink in the curve where we suddenly begin a rapid decline from an otherwise constant mortality rate.

That point occurs around or shortly before 1800—that is, it occurs at industrial capitalism. Adam Smith (not to mention Thomas Jefferson) was writing at just about the point in time when humanity made a sudden and unprecedented shift toward saving the lives of millions of babies.

So now, think about that the next time you are tempted to say that capitalism is an evil system that destroys the world; the evidence points to capitalism quite literally saving babies from dying.

How would it do so? Well, there’s that rising per-capita GDP we previously ignored, for one thing. But more important seems to be the way that industrialization and free markets support technological innovation, and in this case especially medical innovation—antibiotics and vaccines. Our higher rates of literacy and better communication, also a result of raised standard of living and improved technology, surely didn’t hurt. I’m not often in agreement with the Cato Institute, but they’re right about this one: Industrial capitalism is the chief source of human progress.

Billions of babies would have died but we saved them. So yes, I’m going to call that progress. Civilization, and in particular industrialization and free markets, have dramatically improved human life over the last few hundred years.

In a future post I’ll address one of the common retorts to this basically indisputable fact: “You’re making excuses for colonialism and imperialism!” No, I’m not. Saying that modern capitalism is a better system (not least because it saves babies) is not at all the same thing as saying that our ancestors were justified in using murder, slavery, and tyranny to force people into it.

Meanwhile, we’ve been ending world hunger.

JDN 2457303 EDT 19:56

As reported in The Washington Post and Fortune, the World Bank recently released a report showing that for the first time on record—possibly the first time in human history—global extreme poverty has fallen below 10% of the population, based on a standard of living of $1.90 per day at 2011 purchasing power parity. That’s about $700 per year, a bit less than the average income in Malawi.

The UN World Millennium Development Goal set in 1990 was to cut extreme poverty in half by 2015; in fact we have cut it by more than two-thirds, reducing it from 37% of the world’s population in 1990 to 9.6% today. This is an estimate, based upon models of what’s going on in countries where we don’t have reliable data; ever the cautious scientists, the World Bank prefers to focus on the most recent fully reliable data, which says that we reduced extreme poverty to 12.7% in 2012 and therefore achieved the Millennium Development Goal.
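The headline numbers are worth checking with a bit of arithmetic, using the figures quoted above:

```python
# The $1.90/day extreme poverty line expressed per year.
annual_income = 1.90 * 365
print(f"Poverty line per year: ~${annual_income:.0f}")  # ~$694

# The size of the reduction relative to the Millennium Development Goal
# of cutting extreme poverty in half.
poverty_1990 = 0.37    # share of world population in extreme poverty, 1990
poverty_2015 = 0.096   # World Bank estimate for 2015

reduction = (poverty_1990 - poverty_2015) / poverty_1990
print(f"Reduction since 1990: {reduction:.0%}")  # ~74%, well past the 50% goal
```

Even using the more conservative fully-reliable figure of 12.7% for 2012, the reduction works out to about 66%—right at the two-thirds mark.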

Most of this effect comes from one very big country: China. Over 750 million people in China saw their standard of living rise above the extreme poverty level in the last 30 years.

The slowest reduction in poverty has been in Africa, specifically Sub-Saharan Africa, where extreme poverty has barely budged, from 53% in 1981 to 47% in 2011. But some particular countries in Africa have done better; thanks to good governance—including better free speech protection than the United States, shame on us—Botswana has reduced their extreme poverty rate from over 50% in 1965 to 19% today.

A lot of World Bank officials have been focusing on the fact that there is still much to be done; 10% in extreme poverty is still 10% too many, and even once everyone is above $1.90 per day that still leaves a lot of people at $3 per day and $4 per day which is still pretty darn poor. The project of global development won’t really seem complete until everyone in the world lives above not just the global poverty line, but something more like a First World poverty line, with a home to live in, a doctor to see, a school to attend, clean water, flush toilets, electricity, and probably even a smartphone with Internet access. (If the latter seems extravagant, let me remind you that more people in the world have smartphones than have flush toilets, because #weliveinthefuture.)

Pace the Heritage Foundation, the fact that what we call poverty in America typically includes having a refrigerator, a microwave, and a car doesn’t mean it isn’t actually poverty; it simply means that poverty in the First World isn’t nearly as bad as poverty in the Third World. (After all, over 9% of children in the US live in households with low food security, and 1% live in households with very low food security; hunger in America isn’t as bad as hunger in Malawi, but it’s still hunger.) Maybe it even means we should focus on the Third World, though that argument isn’t as strong as it might appear; to eliminate poverty in the US, all we’d need to do is pass a law that implements a basic income. To eliminate poverty worldwide, we’d need a global project of economic and political reforms to change how hundreds of countries are governed.

Yet, this focus on what we haven’t accomplished (as though we were going to cut funding to the UN Development Program because we’re done now or something) is not only disheartening, it’s unreasonable. We have accomplished something truly spectacular.

We are now on the verge of solving one of the great problems of human existence, a problem so deep, so ancient, and so fundamental that it’s practically a cliche: We say “end world hunger” in the same breath as “cure cancer” (which doesn’t even make sense) or “conquer death” (which is not as far off as you may think). Yet, in a very real sense, we are on the verge of ending world hunger.

While most people have been focused on other things, from a narcissistic billionaire running for President to the uniquely American tragedy of mass shootings, development economists have been focused on one thing: Conquering global poverty. What this report means is that now, at last, victory is within our grasp.

Development economists are unsung heroes; without their research, their field work, and their advice and pressure to policymakers, we would never have gotten this far. It was development economists who made the UN Millennium Development Goals, and development economists who began to achieve them.

Yet perhaps there is an even more unsung hero in all of this: Capitalism.

I often have a lot of criticisms of capitalism, at least as it operates in the real world; yet it was in the real world that extreme poverty was just brought down below 10%, and it was done primarily by capitalism. I know a lot of people who think that we need to tear down this whole system and replace it with something fundamentally different, but the kind of progress we are making in global development tells me that we need nothing of the sort. We do need to make changes in policy, but they are small changes, simple changes—many of them could be made with the passing of a few simple laws. Capitalism is not fundamentally broken; on the contrary, it is the fundamentals of capitalism that have brought humanity for the first time within arm’s reach of ending world hunger. We need to fix the system at the edges, not throw it away.

Recall that I said most of the poverty reduction occurred in China. What has China been doing lately? They’ve been opening to world trade—that “free trade” stuff I talked about before. They’ve been cutting tariffs. They’ve been privatizing industries. They’ve been letting unprofitable businesses fail so that new ones can rise in their place. They have, in short, been making themselves more capitalist. Building schools, factories, and yes, even sweatshops is what has made China’s rise out of poverty possible. They are still doing many things wrong—not least their authoritarian government, which is now gamifying oppression in truly cyberpunk fashion—but they are doing a few very important things right.

World hunger is on the way out. And I can think of no better reason to celebrate.

What is socialism?

JDN 2457265 EDT 10:47

Last night I was having a political discussion with some friends (as I am wont to do), and it became a little heated, though never uncongenial. A key point of contention was the fact that Bernie Sanders is a socialist, and what exactly that entails.

One of my friends was arguing that this makes him far-left, and thus it is fair when the news media often likes to make a comparison between Sanders on the left and Trump on the right. Donald Trump is actually oddly liberal on some issues, but his attitudes on racial purity, nativism, military unilateralism, and virtually unlimited executive power are literally fascist. Even his “liberal” views are more like the kind of populism that fascists have often used to win support in the past: Don’t you hate being disenfranchised? Give me absolute power and I’ll fix everything for you! Don’t like how our democracy has become corrupt? Don’t worry, I’ll get rid of it! (The democracy, that is.) While he certainly doesn’t align well with the Republican Party platform, I think it’s quite fair to say that Donald Trump is a far-right candidate.

Bernie Sanders, however, is not a far-left candidate. He is a center-left candidate. His views are basically consonant with the Labour Party of the UK and the Social Democratic Party of Germany. He has spoken often about the Scandinavian model (because, well, #Scandinaviaisbetter—Denmark, Sweden, and Norway are some of the happiest places on Earth). When we talk about Bernie Sanders we aren’t talking about following Cuba and the Soviet Union; we’re talking about following Norway and Sweden. As Jon Stewart put it, he isn’t a “crazy-pants cuckoo bird” as some would have you think.

But he’s a socialist, right? Well… sort of—we have to be very clear what that means.

The word “socialism” has been used to mean many things; it has been a cover for genocidal fascism (“National Socialism”) and tyrannical Communism (“Union of Soviet Socialist Republics”). It has become a pejorative thrown at Social Security, Medicare, banking regulations—basically any policy left of Milton Friedman. So apparently it means something between Medicare and the Holocaust.

Social democracy is often classified as a form of socialism—but one can actually make a pretty compelling case that social democracy is not socialism, but in fact a form of capitalism.

If we want a simple, consistent definition of “socialism”, I think I would put it thus: Socialism is a system in which the majority of economic activity is directly controlled by the government. Most, if not all, industries are nationalized; production and distribution are handled by centrally-planned quotas instead of market supply and demand. Under this definition, the USSR, Venezuela, Cuba, and (at least until recently) China are socialist—and under this definition, socialism is a very bad idea. The best-case scenario is inefficiency; the worst-case scenario is mass murder.

Social democracy, the position that Bernie Sanders espouses (and with which I basically agree), can be defined as follows: Social democracy is a system in which markets are taxed and regulated by a democratically-elected government to ensure that they promote general welfare, public goods are provided by the government, and transfer programs are used to reduce poverty and inequality.

Let’s also try to define “capitalism”: Capitalism is a system in which the majority of economic activity is handled by private sector markets.

Under the Scandinavian model, the majority of economic activity is handled by private sector markets, which are in turn regulated and taxed to promote the general welfare—that is, at least on these definitions, Scandinavia is both capitalist and social democratic.

In fact, so is the United States; while our taxes are lower and our regulations weaker, we still have substantial taxes and regulations. We do have transfer programs like WIC, SNAP, and Social Security that attempt to redistribute wealth and reduce poverty.

We could define “socialism” more broadly to mean any government intervention in the economy, in which case Bernie Sanders is a socialist and so is… almost everyone else, including most economists.

The majority of the most eminent American economists are in favor of social democracy. I don’t intend this as an argument from authority, but rather to give a sense of the scientific consensus. The consensus in economics is by no means as strong as that in biology or physics (or climatology, ahem), but there is still broad agreement on many issues.

In a survey of 264 members of the American Economics Association [pdf link], 77% opposed government ownership of enterprise (14% mixed feelings, 8% in favor) but 71% favored redistribution of wealth in some form (7% mixed feelings, 20% opposed). That’s social democracy in a nutshell. 67% favored public schools (14% mixed feelings, 17% opposed); 75% favored Keynesian monetary policy (12% mixed feelings, 12% opposed); 51% favored Keynesian fiscal policy (19% mixed feelings, 30% opposed). 58% opposed tighter immigration restrictions (16% mixed feelings, 25% in favor). 79% support anti-discrimination laws. 68% favor gun control.

The major departure from left-wing views that the majority of economists make is a near-universal opposition to protectionism, with 86.8% opposed, 7.6% with mixed feelings, and only 5.3% in favor. It seems I am not the only economist to cringe when politicians say they want to “stop sending jobs overseas”, which they do left and right. This view is quite popular; but the evidence says that it is wrong. Protectionism is not the answer; you make your trading partners poorer, they retaliate with their own protections, and you both end up worse off. We need open trade. I’ll save the details on why open trade is so important for a later post.

One issue that economists are very divided on right now is minimum wage; 47.3% favor minimum wage, 38.3% oppose it, and 14.4% have mixed feelings. This division likely reflects the ambiguity of empirical results on the employment effect of minimum wage, which have a wide margin of error but effect sizes that cluster around zero. Economists are also somewhat divided on military aid, with 36.8% in favor, 33% opposed, and 29.9% with mixed feelings. This I attribute more to the fact that military aid, like most military action, can be justified in principle but is typically unjustified in practice. And indeed perhaps “mixed feelings” is the most reasonable view to have on war and its instruments.

Since Bernie Sanders strongly supports raising minimum wage and some of his statements verge on protectionism, I do have to place him to the left of the economic consensus. A lot of economists would probably disagree on the particulars of his tax plans and such. But his core policies are entirely in line with that consensus, and being a social democrat is absolutely part of that. Compare this to the Republicans, who keep trying to out-crazy each other (apparently Scott Walker thinks we should not only build a wall against Mexico, but also against Canada?) and want policies that were abandoned decades ago by mainstream economists (like the gold standard, or a balanced-budget amendment), or simply would never be taken seriously by mainstream economists at all (the aforementioned border wall, eliminating all environmental regulation, or ending all transfer payments and social welfare programs). Even the things they supposedly agree on I’m not sure they do; when economists say they want “deregulation” Republicans seem to think that means “no rules at all” when in fact it’s supposed to mean “simple, transparent rules that can be tightly and fairly enforced”. (I think we need a new term for it, though there is a slogan I like: “Deregulate with a scalpel, not a chainsaw.”) Obama has done a very good job of deregulating in the sense that economists intend, and I think in general most economists view him positively as a leader who made the best of a bad situation.

In any case, the broad consensus of American economists (and I think most economists around the world) is that some form of capitalist social democracy is the best system we have so far. There is dispute about particular policies—how much should the tax rates be, should we tax income, consumption, real estate, capital, etc.; how large should the transfers be; what regulations should be added or removed—but the basic concept of a market economy with a government that taxes, transfers, and regulates is not in serious dispute.

Indeed, social democracy is the economic system of the free world.

Even using the conservative Heritage Foundation’s data, the correlation between tax burden and economic freedom—that’s economic freedom—is small but positive. (I’m excluding missing data, as well as Timor-Leste, because it has a “tax burden” larger than its GDP due to weird accounting of its tourism-based economy, and North Korea, because they lie to us: they theoretically have “zero taxes”, which is clearly not true, while the Heritage Foundation reports them as 100% taxes, which is clearly not true either.) See for yourself:

Graph: Heritage Foundation Economic Freedom Index and tax burden

Why is this? Do taxes automatically make you more free? No, they make you less free, because you have to pay for things you didn’t choose to buy (which I admit and the Heritage Foundation includes in their index). But taxes are how you manage a free economy. You need to control monetary policy somehow, which means adding and removing money. The way that social democracies do this is by spending on public goods and transfers to add money, and taxing income, consumption, or assets to remove money. Even if you tie your money to the gold standard, you still need to pay for public goods like military and police; and with a fixed money supply that means spending must be matched by taxes.

There are other ways to do this. You could be like Zimbabwe and print as much money as you feel like. You could be like Venezuela, and have government-owned industries form the majority of your economy. Or, actually, you could not do it; you could fail to manage your country’s economy and leave it wallowing in poverty, like Ghana. All of the countries I just listed have lower tax burdens than the United States.

Within the framework of social democracy, there are higher taxes so that spending and transfers can be higher, which means that more public goods are provided and poverty is lower, which means that real equality of opportunity and thus, real economic freedom, are higher. It’s not that raising taxes automatically makes people more free; rather, the kind of policies that make people more free tend to be the kind of social-democratic policies that involve relatively high taxes.

Worldwide, the US is 12th in terms of economic freedom and 62nd in terms of tax burden; our tax burden currently stands at 24%. That’s quite low for a First World country, but still relatively high by world standards. The highest tax burden is in Eritrea at 50%; the lowest is in Kuwait at an astonishing 0.7% (I don’t even know how that’s possible). Neither is a really wonderful place to live (though Kuwait is better).

Indeed, if you restrict the sample to North America and Europe, the correlation basically disappears; all the countries are fairly free, all the taxes are fairly high, and within that the two aren’t very much related. (It’s been a long time since I’ve seen a trendline that flat, actually!)

Graph: Heritage Foundation Economic Freedom Index and tax burden, Europe and North America

Switzerland, Canada, and Denmark all have higher economic freedom scores than the United States, as well as higher tax burdens; but on the other hand, Greece, Spain, and Austria have higher tax burdens but lower freedom scores. All of them are variations on social democracy.

Is that socialism? I’m really not sure. Why does it matter, really?