An unusual recession, a rapid recovery

Jul 11 JDN 2459407

It seems like an egregious understatement to say that the last couple of years have been unusual. The COVID-19 pandemic was historic, comparable in threat—though not in outcome—to the 1918 influenza pandemic.

At this point it looks like we may not be able to fully eradicate COVID. And there are still many places around the world where variants of the virus continue to spread. I personally am a bit worried about the recent surge in the UK; it might add some obstacles (as if I needed any more) to my move to Edinburgh. Yet even in hard-hit places like India and Brazil things are starting to get better. Overall, it seems like the worst is over.

This pandemic disrupted our society in so many ways, great and small, and we are still figuring out what the long-term consequences will be.

But as an economist, one of the things I found most unusual is that this recession fit Real Business Cycle theory.

Real Business Cycle theory (henceforth RBC) posits that recessions are caused by negative technology shocks which result in a sudden drop in labor supply, reducing employment and output. This is generally combined with sophisticated mathematical modeling (DSGE or GTFO), and it typically leads to the conclusion that the recession is optimal and we should do nothing to correct it (which was after all the original motivation of the entire theory—they didn’t like the interventionist policy conclusions of Keynesian models). Alternatively it could suggest that, if we can, we should try to intervene to produce a positive technology shock (but nobody’s really sure how to do that).

For a typical recession, this is utter nonsense. It is obvious to anyone who cares to look that major recessions like the Great Depression and the Great Recession were caused by a lack of labor demand, not supply. There is no apparent technology shock to cause either recession. Instead, they seem to be precipitated by a financial crisis, which then causes a liquidity crisis that leads to a downward spiral of layoffs reducing spending and causing more layoffs. Millions of people lose their jobs and become desperate to find new ones, with hundreds of people applying to each opening. RBC predicts a shortage of labor where there is instead a glut. RBC predicts that wages should go up in recessions—but they almost always go down.

But for the COVID-19 recession, RBC actually had some truth to it. We had something very much like a negative technology shock—namely the pandemic. COVID-19 greatly increased the cost of working and the cost of shopping. This led to a reduction in labor demand as usual, but also a reduction in labor supply for once. And while we did go through a phase in which hundreds of people applied to each new opening, we then followed it up with a labor shortage and rising wages. A fall in labor supply should create inflation, and we now have the highest inflation we’ve had in decades—but there’s good reason to think it’s just a transitory spike that will soon settle back to normal.

The recovery from this recession was also much more rapid: Once vaccines started rolling out, the economy began to recover almost immediately. We recovered most of the employment losses in just the first six months, and we’re on track to recover completely in half the time it took after the Great Recession.

This makes it the exception that proves the rule: Now that you’ve seen a recession that actually resembles RBC, you can see just how radically different it was from a typical recession.

Moreover, even in this weird recession the usual policy conclusions from RBC are off-base. It would have been disastrous to withhold the economic relief payments—which I’m happy to say even most Republicans realized. The one thing that RBC got right as far as policy is that a positive technology shock was our salvation—vaccines.

Indeed, while the cause of this recession was very strange and not what Keynesian models were designed to handle, our government largely followed Keynesian policy advice—and it worked. We ran massive government deficits—over $3 trillion in 2020—and the result was rapid recovery in consumer spending and then employment. I honestly wouldn’t have thought our government had the political will to run a deficit like that, even when the economic models told them they should; but I’m very glad to be wrong. We ran the huge deficit just as the models said we should—and it worked. I wonder how the 2010s might have gone differently had we done the same after 2008.

Perhaps we’ve learned from some of our mistakes.

How will future generations think of us?

June 30 JDN 2458665

Today we find many institutions appalling that our ancestors considered perfectly normal: Slavery. Absolute monarchy. Colonialism. Sometimes even ordinary people did things that now seem abhorrent to us: Cat burning is the obvious example, and the popularity that public execution and lynching once had is chilling today. Women certainly are still discriminated against today, but it was only a century ago that women could not vote in the US.

It is tempting to say that people back then could not have known better, and I certainly would not hold them to the same moral standards I would hold someone living today. And yet, there were those who could see the immorality of these practices, and spoke out against them. Absolute rule by a lone sovereign was already despised by Athenians in the 6th century BC. Abolitionism against slavery dates at least as far back as the 14th century. The word “feminism” was coined in the 19th century, but there have been movements fighting for more rights for women since at least the 5th century BC.

This should be encouraging, because it means that if we look hard enough, we may be able to glimpse what practices of our own time would be abhorrent to our descendants, and cease them faster because of it.

Let’s actually set aside racism, sexism, and other forms of bigotry that are already widely acknowledged as such. It’s not that they don’t exist—of course they still exist—but action is already being taken against them. A lot of people already know that there is something wrong with these things, and it becomes a question of what to do about the people who haven’t yet come on board. At least sometimes we do seem to be able to persuade people to switch sides, often in a remarkably short period of time. (Particularly salient to me is how radically the view of LGBT people has shifted in just the last decade or two. Comparing how people treated us when I was a teenager to how they treat us today is like night and day.) It isn’t easy, but it happens.

Instead I want to focus on things that aren’t widely acknowledged as immoral, that aren’t already the subject of great controversy and political action. It would be too much to ask that there is no one who has advocated for them, since part of the point is that wise observers could see the truth even centuries before the rest of the world did; but it should be a relatively small minority, and that minority should seem eccentric, foolish, naive, or even insane to the rest of the world.

And what is the other criterion? Of course it’s easy to come up with small groups of people advocating for crazy ideas. But most of them really are crazy, and we’re right to reject them. How do I know which ones to take seriously as harbingers of societal progress? My answer is that we look very closely at the details of what they are arguing for, and we see if we can in fact refute what they say. If it’s truly as crazy as we imagine it to be, we should be able to say why that’s the case; and if we can’t, if it just “seems weird” because it deviates so far from the norm, we should at least consider the possibility that they may be right and we may be wrong.

I can think of a few particular issues where both of these criteria apply.

The first is vegetarianism. Despite many, many people trying very, very hard to present arguments for why eating meat is justifiable, I still haven’t heard a single compelling example. Particularly in the industrial meat industry as currently constituted, the consumption of meat requires accepting the torture and slaughter of billions of helpless animals. The hypocrisy in our culture is utterly glaring: the same society that wants to make it a felony to kick a dog has no problem keeping pigs in CAFOs.

If you have some sort of serious medical condition that requires you to eat meat, okay, maybe we could allow you to eat specifically humanely raised cattle for that purpose. But such conditions are exceedingly rare—indeed, it’s not clear to me that there even are any such conditions, since almost any deficiency can be made up synthetically from plant products nowadays. For the vast majority of people, eating meat not only isn’t necessary for their health, it is in fact typically detrimental. The only benefits that meat provides most people are pleasure and convenience—and it seems unwise to value such things even over your own health, much less to value them so much that it justifies causing suffering and death to helpless animals.

Milk, on the other hand, I can find at least some defense for. Grazing land is very different from farmland, and I imagine it would be much harder to feed a country as large as India without consuming any milk. So perhaps going all the way vegan is not necessary. Then again, the way most milk is produced by industrial agriculture is still appalling. So unless and until that is greatly reformed, maybe we should in fact aim to be vegan.

Add to this the environmental impact of meat production, and the case becomes undeniable: Millions of human beings will die over this century because of the ecological devastation wrought by industrial meat production. You don’t even have to value the life of a cow at all to see that meat is murder.

Speaking of environmental destruction, that is my second issue: Environmental sustainability. We currently burn fossil fuels, pollute the air and sea, and generally consume natural resources at an utterly alarming rate. We are already consuming natural resources faster than they can be renewed; in about a decade we will be consuming twice what natural processes can renew.

With this resource consumption comes a high standard of living, at least for some of us; but I have the sinking feeling that in a century or so SUVs, golf courses, and casual airplane flights are going to seem about as decadent and wasteful as Marie Antoinette’s Hameau de la Reine. We enjoy slight increases in convenience and comfort in exchange for changes to the Earth’s climate that will kill millions. I think future generations will be quite appalled at how cheaply we were willing to sell our souls.

Something is going to have to change here, that much is clear. Perhaps improvements in efficiency, renewable energy, nuclear power, or something else will allow us to maintain our same standard of living—and raise others up to it—without destroying the Earth’s climate. But we may need to face up to the possibility that they won’t—that we will be left with the stark choice between being poorer now and being even poorer later.

As I’ve already hinted at, much of the environmental degradation caused by our current standard of living is really quite expendable. We could have public transit instead of highways clogged with SUVs. We could travel long distances by high-speed rail instead of by airplane. We could decommission our coal plants and replace them with nuclear and solar power. We could convert our pointless and wasteful grass lawns into native plants or moss lawns. Implementing these changes would cost money, but not a particularly exorbitant amount—certainly nothing we couldn’t manage—and the net effect on our lives would be essentially negligible. Yet somehow we aren’t doing these things, apparently prioritizing convenience or oil company profits over the lives of our descendants.

And the truth is that these changes alone may not be enough. Precisely because we have waited so long to make even the most basic improvements in ecological sustainability, we may be forced to make radical changes to our economy and society in order to prevent the worst damage. I don’t believe the folks saying that climate change has a significant risk of causing human extinction—humans are much too hardy for that; we made it through the Toba eruption, we’ll make it through this—but I must take seriously the risk of causing massive economic collapse and perhaps even the collapse of many of the world’s governments. And human activity is already causing the extinction of thousands of other animal species.

Here the argument is similarly unassailable: The math just doesn’t work. We can’t keep consuming fish at the rate we have been forever—there simply aren’t enough fish. We can’t keep cutting down forests at this rate—we’re going to run out of forests. If the water table keeps dropping at the rate it has been, the wells will run dry. Already Chennai, a city of over 4 million people, is almost completely out of water. We managed to avoid peak oil by using fracking, but that won’t last forever either—and if we burn all the oil we already have, that will be catastrophic for the world’s climate. Something is going to have to give. There are really only three possibilities: Technology saves us, we start consuming less on purpose, or we start consuming less because nature forces us to. The first one would be great, but we can’t count on it. We really want to do the second one, because the third one will not be kind.

The third is artificial intelligence. The time will come—when, it is very hard to say; perhaps 20 years, perhaps 200—when we manage to build a machine that has the capacity for sentience. Already we are seeing how automation is radically altering our economy, enriching some and impoverishing others. As robots can replace more and more types of labor, these effects will only grow stronger.

Some have tried to comfort us by pointing out that other types of labor-saving technology did not reduce employment in the long run. But AI really is different. I once won an argument by the following exchange: “Did cars reduce employment?” “For horses they sure did!” That’s what we are talking about here—not augmentation of human labor to make it more efficient, but wholesale replacement of entire classes of human labor. It was one thing when the machine did the lifting and cutting and pressing, but a person still had to stand there and tell it what things to lift and cut and press; now that it can do that by itself, it’s not clear that there need to be humans there at all, or at least no more than a handful of engineers and technicians where previously a factory employed hundreds of laborers.

Indeed, in light of the previous issue, it becomes all the clearer why increased productivity can’t simply lead to increased production rather than reduced employment—we can’t afford increased production. At least under current rates of consumption, the ecological consequences of greatly increased industry would be catastrophic. If one person today can build as many cars as a hundred could fifty years ago, we can’t just build a hundred times as many cars.

But even aside from the effects on human beings, I think future generations will also be concerned about the effect on the AIs themselves. I find it all too likely that we will seek to enslave intelligent robots, force them to do our will. Indeed, it’s not even clear to me that we will know whether we have, because AI is so fundamentally different from other technologies. If you design a mind from the ground up to get its greatest satisfaction from serving you without question, is it a slave? Can free will itself be something we control? When we first create a machine that is a sentient being, we may not even know that we have done so. (Indeed, I can’t conclusively rule out the possibility that this has already happened.) We may be torturing, enslaving, and destroying millions of innocent minds without even realizing it—which makes the AI question a good deal closer to the animal rights question than one might have thought. The mysteries of consciousness are fundamental philosophical questions that we have been struggling with for thousands of years, which suddenly become urgent ethical problems in light of AI. Artificial intelligence is a field where we seem to be making leaps and bounds in practice without having even the faintest clue in principle.

Worrying about whether our smartphones might have feelings seems eccentric in the extreme. Yet, without a clear understanding of what makes an information processing system into a genuine conscious mind, that is the position we find ourselves in. We now have enough computations happening inside our machines that they could certainly compete in complexity with small animals. A mouse has about a trillion synapses, and I have a terabyte hard drive (you can buy your own for under $50). Each of these is something on the order of a few trillion bits. The mouse’s brain can process it all simultaneously, while my laptop is limited to processing only a few billion at a time; but we now have supercomputers like Watson capable of processing in the teraflops, so what about them? Might Watson really have the same claim to sentience as a mouse? Could recycling Watson be equivalent to killing an animal? And what about supercomputers that reach the petaflops, a scale that competes with human brains?

I hope that future generations may forgive us for the parts we do not know—like when precisely a machine becomes a person. But I do not expect them to forgive us for the parts we do know—like the fact that we cannot keep cutting down trees faster than we plant them. These are the things we should already be taking responsibility for today.

How rich are we, really?

Oct 29, JDN 2458056

The most commonly-used measure of a nation’s wealth is its per-capita GDP, which is simply a total of all spending in a country divided by its population. More recently we adjust for purchasing power, giving us GDP per capita at purchasing power parity (PPP).

By this measure, the United States always does well. At most a dozen countries are above us, most of them by a small amount, and all of them are quite small countries. (For fundamental statistical reasons, we should expect both the highest and lowest average incomes to be in the smallest countries.)

But this is only half the story: It tells us how much income a country has, but not how that income is distributed. We should adjust for inequality.

How can we do this? I have devised a method that uses the marginal utility of wealth plus a measure of inequality called the Gini coefficient to work out an estimate of the average utility, instead of the average income.

I then convert back into a dollar figure. This figure is the income everyone would need to have under perfect equality, in order to give the same real welfare as the current system. That is, if we could redistribute wealth in such a way as to raise everyone below this value up to it, and lower everyone above this value down to it, the total welfare of the country would not change. This provides a well-founded ranking of which country’s people are actually better off overall, accounting for both overall income and the distribution of that income.

The estimate is sensitive to the precise form I use for marginal utility, so I’ll show you comparisons for three different cases.

The “conservative” estimate uses a risk aversion parameter of 1, which means that utility is logarithmic in income. The real value of a dollar is inversely proportional to the number of dollars you already have.

The medium estimate uses a risk aversion parameter of 2, which means that the real value of a dollar is inversely proportional to the square of the number of dollars you already have.

And then the “liberal” estimate uses a risk aversion parameter of 3, which means that the real value of a dollar is inversely proportional to the cube of the number of dollars you already have.
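
For those who want to see the mechanics, here is a minimal sketch of this kind of calculation in Python, under the simplifying assumption that incomes within each country are log-normally distributed (the Gini coefficient then pins down the dispersion). My full calculation differs in some details, so this sketch will not reproduce the tables below exactly, but it captures the logic: turn the Gini into a dispersion parameter, compute average utility, and convert back into an equivalent income.

    from math import exp, sqrt
    from statistics import NormalDist

    def equivalent_income(mean_income, gini, eta):
        # Back out the dispersion of a log-normal income distribution from
        # the Gini coefficient: for a log-normal, Gini = 2*Phi(sigma/sqrt(2)) - 1.
        sigma = sqrt(2) * NormalDist().inv_cdf((gini + 1) / 2)
        # With constant relative risk aversion eta (eta = 1 is log utility),
        # the income everyone would need under perfect equality to yield the
        # same average utility works out to mean * exp(-eta * sigma^2 / 2).
        return mean_income * exp(-eta * sigma ** 2 / 2)

    # Example: the United States, using the GDP per capita (PPP) and Gini below.
    for eta in (1, 2, 3):
        print(eta, round(equivalent_income(57436, 0.461, eta)))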

I’ll compare ten countries, which I think are broadly representative of classes of countries in the world today.

The United States, the world hegemon which needs no introduction.

China, rising world superpower and world’s most populous country.

India, world’s largest democracy and developing economy with a long way to go.

Norway, as representative of the Scandinavian social democracies.

Germany, as representative of continental Europe.

Russia, as representative of the Soviet Union and the Second World bloc.

Saudi Arabia, as representative of the Middle East petrostates.

Botswana, as representative of African developing economies.

Zimbabwe, as representative of failed Sub-Saharan African states.

Brazil, as representative of Latin American developing economies.

The ordering of these countries by GDP per-capita PPP is probably not too surprising:

  1. Norway 69,249
  2. United States 57,436
  3. Saudi Arabia 55,158
  4. Germany 48,111
  5. Russia 26,490
  6. Botswana 17,042
  7. China 15,399
  8. Brazil 15,242
  9. India 6,616
  10. Zimbabwe 1,970

Norway is clearly the richest, the US, Saudi Arabia, and Germany are quite close, Russia is toward the upper end, Botswana, China, and Brazil are close together in the middle, and then India and especially Zimbabwe are extremely poor.

But now let’s take a look at the inequality in each country, as measured by the Gini coefficient (which ranges from 0, perfect equality, to 1, total inequality).

  1. Botswana 0.605
  2. Zimbabwe 0.501
  3. Brazil 0.484
  4. United States 0.461
  5. Saudi Arabia 0.459
  6. China 0.422
  7. Russia 0.416
  8. India 0.351
  9. Germany 0.301
  10. Norway 0.259

The US remains (alarmingly) close to Saudi Arabia by this measure. Most of the countries are between 0.40 and 0.50. But Botswana is astonishingly unequal, while Germany and Norway are much more equal.

With that in mind, let’s take a look at the inequality-adjusted per-capita GDP. First, the conservative estimate, with a parameter of 1:

  1. Norway 58,700
  2. United States 42,246
  3. Saudi Arabia 40,632
  4. Germany 39,653
  5. Russia 20,488
  6. China 11,660
  7. Botswana 11,138
  8. Brazil 11,015
  9. India 5,269
  10. Zimbabwe 1,405

So far, the ordering of nations is almost the same as what we got with just per-capita GDP. But notice how Germany has moved up closer to the US and Botswana has actually fallen behind China.

Now let’s try a parameter of 2, which I think is the closest to the truth:

  1. Norway 49,758
  2. Germany 32,683
  3. United States 31,073
  4. Saudi Arabia 29,931
  5. Russia 15,581
  6. China 8,829
  7. Brazil 7,961
  8. Botswana 7,280
  9. India 4,197
  10. Zimbabwe 1,002

Now we have seen some movement. Norway remains solidly on top, but Germany has overtaken the United States and Botswana has fallen behind not only China, but also Brazil. Russia remains in the middle, and India and Zimbabwe remain on the bottom.

Finally, let’s try a parameter of 3.

  1. Norway 42,179
  2. Germany 26,937
  3. United States 22,855
  4. Saudi Arabia 22,049
  5. Russia 11,849
  6. China 6,685
  7. Brazil 5,753
  8. Botswana 4,758
  9. India 3,343
  10. Zimbabwe 715

Norway has now pulled far and away ahead of everyone else. Germany is substantially above the United States. China has pulled away from Brazil, and Botswana has fallen almost all the way to the level of India. Zimbabwe, as always, is at the very bottom.

Let’s compare this to another measure of national well-being, the Inequality-Adjusted Human Development Index (which goes from 0, the worst, to 1, the best). This index combines education, public health, and income, and adjusts for inequality. It seems to be a fairly good measure of well-being, but it’s very difficult to compile data for, so a lot of countries are missing (including Saudi Arabia); plus the precise weightings on everything are very ad hoc.

  1. Norway 0.898
  2. Germany 0.859
  3. United States 0.796
  4. Russia 0.725
  5. China 0.543
  6. Brazil 0.531
  7. India 0.435
  8. Botswana 0.433
  9. Zimbabwe 0.371

Other than putting India above Botswana, this ordering is the same as what we get from my (much easier to calculate and theoretically more well-founded) index with either a parameter of 2 or 3.

What’s more, my index can be directly interpreted: The average standard of living in the US is as if everyone were making $31,073 per year. What exactly is an IHDI index of 0.796 supposed to mean? We’re… 79.6% of the way to the best possible country?

In any case, there’s a straightforward (if not terribly surprising) policy implication here: Inequality is a big problem.

In particular, inequality in the US is clearly too high. Despite an overall income that is very high, almost 18 log points higher than Germany, our overall standard of living is actually about 5 log points lower due to our higher level of inequality. While our average income is only 19 log points lower than Norway, our actual standard of living is 47 log points lower.
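
To be explicit about units: a “log point” here is one hundredth of a difference in natural logarithms, so the US-Germany income gap just mentioned, for instance, is

    \[ 100 \times \ln(57{,}436 / 48{,}111) \approx 17.7 \]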

Inequality in Botswana also means that their recent astonishing economic growth is not quite as impressive as it at first appeared. Many people are being left behind. While in raw income they appear to be 10 log points ahead of China and only 121 log points behind the US, once you adjust for their very high inequality they are 19 log points behind China, and 145 log points behind the US.

Of course, some things don’t change. Norway is still on top, and Zimbabwe is still on the bottom.

Why “marginal productivity” is no excuse for inequality

May 28, JDN 2457902

In most neoclassical models, workers are paid according to their marginal productivity—the additional (market) value of goods that a firm is able to produce by hiring that worker. This is often used as an excuse for inequality: If someone can produce more, why shouldn’t they be paid more?

The most extreme example of this is people like Maura Pennington writing for Forbes about how poor people just need to get off their butts and “do something”; but there is a whole literature in mainstream economics, particularly “optimal tax theory”, arguing based on marginal productivity that we should tax the very richest people the least and never tax capital income. The Chamley-Judd Theorem famously “shows” (by making heroic assumptions) that taxing capital just makes everyone worse off because it reduces everyone’s productivity.

The biggest reason this is wrong is that there are many, many reasons why someone would have a higher income without being any more productive. They could inherit wealth from their ancestors and get a return on that wealth; they could have a monopoly or some other form of market power; they could use bribery and corruption to tilt government policy in their favor. Indeed, most of the top 0.01% do literally all of these things.

But even if you assume that pay is related to productivity in competitive markets, the argument is not nearly as strong as it may at first appear. Here I have a simple little model to illustrate this.

Suppose there are 10 firms and 10 workers. Suppose that firm 1 has 1 unit of effective capital (capital adjusted for productivity), firm 2 has 2 units, and so on up to firm 10 which has 10 units. And suppose that worker 1 has 1 unit of so-called “human capital”, representing their overall level of skills and education, worker 2 has 2 units, and so on up to worker 10 with 10 units. Suppose each firm only needs one worker, so this is a matching problem.

Furthermore, suppose that productivity is equal to capital times human capital: That is, if firm 2 hired worker 7, they would make 2*7 = $14 of output.

What will happen in this market if it converges to equilibrium?

Well, first of all, the most productive firm is going to hire the most productive worker—so firm 10 will hire worker 10 and produce $100 of output. What wage will they pay? Well, they need a wage that is high enough to keep worker 10 from trying to go elsewhere. They should therefore pay a wage of $90—the next-highest firm productivity times the worker’s productivity. That’s the highest wage any other firm could credibly offer; so if they pay this wage, worker 10 will not have any reason to leave.

Now the problem has been reduced to matching 9 firms to 9 workers. Firm 9 will hire worker 9, making $81 of output, and paying $72 in wages.

And so on, until worker 1 at firm 1 produces $1 and receives… $0. Because there is no way for worker 1 to threaten to leave, in this model they actually get nothing. If I assume there’s some sort of social welfare system providing say $0.50, then at least worker 1 can get that $0.50 by threatening to leave and go on welfare. (This, by the way, is probably the real reason firms hate social welfare spending; it gives their workers more bargaining power and raises wages.) Or maybe they have to pay that $0.50 just to keep the worker from starving to death.
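
For concreteness, here is the same equilibrium logic as a small Python sketch, with the $0.50 welfare floor included as an assumed backstop for the lowest-skill worker:

    # Toy model: firm n (with n units of capital) hires worker n (with n units
    # of human capital); output is capital times human capital. Each worker's
    # wage is the best credible outside offer: the next-best firm's capital
    # times the worker's human capital, or the welfare floor if that is higher.

    WELFARE_FLOOR = 0.50

    def equilibrium(n_firms=10):
        rows = []
        for n in range(n_firms, 0, -1):    # match from the top down
            output = n * n
            outside_offer = (n - 1) * n    # next-best firm times worker's skill
            wage = max(outside_offer, WELFARE_FLOOR)
            rows.append((n, output, wage, output - wage))
        return rows

    for n, output, wage, profit in equilibrium():
        print(f"firm {n:2d}: output ${output:6.2f}, wage ${wage:6.2f}, profit ${profit:5.2f}")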

What does inequality look like in this society?

Well, the most-productive firm only has 10 times as much capital as the least-productive firm, and the most-educated worker only has 10 times as much skill as the least-educated worker, so we might think that incomes would vary only by a factor of 10.

But in fact they vary by a factor of over 100.

The richest worker makes $90, while the poorest worker makes $0.50. That’s a ratio of 180. (Still lower than the pay ratio of the average CEO to their average employee in the US, by the way.) The worker is 10 times as productive, but they receive 180 times as much income.

The firm profits vary along a more reasonable scale in this case; firm 1 makes a profit of $0.50 while firm 10 makes a profit of $10. Indeed, except for firm 1, firm n always makes a profit of $n. So that’s very nearly a linear scaling in productivity.

Where did this result come from? Why is it so different from the usual assumptions? All I did was change one thing: I allowed for increasing returns to scale.

If you make the usual assumption of constant returns to scale, this result can’t happen. Multiplying all the inputs by 10 should just multiply the output by 10, by assumption—since that is the definition of constant returns to scale.

But if you look at the structure of real-world incomes, it’s pretty obvious that we don’t have constant returns to scale.

If we had constant returns to scale, we should expect that wages for the same person should only vary slightly if that person were to work in different places. In particular, to have a 2-fold increase in wage for the same worker you’d need more than a 2-fold increase in capital.

This is a bit counter-intuitive, so let me explain a bit further. If a 2-fold increase in capital results in a 2-fold increase in wage for a given worker, that’s increasing returns to scale—indeed, it’s precisely the production function I assumed above.

If you had constant returns to scale, a 2-fold increase in wage would require something like an 8-fold increase in capital. This is because you should get a 2-fold increase in total production by doubling everything—capital, labor, human capital, whatever else. So doubling capital by itself should produce a much weaker effect. For technical reasons I’d rather not get into at the moment, usually it’s assumed that production is approximately proportional to capital to the one-third power—so to double production you need to multiply capital by 2^3 = 8.
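
In symbols, holding the other inputs fixed:

    \[ Y \propto K^{1/3} \quad\Longrightarrow\quad (8K)^{1/3} = 2\,K^{1/3} \]

An 8-fold increase in capital is what it takes to double output (and hence the wage it can support), whereas the increasing-returns production function in my model above doubles the wage whenever capital doubles.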

I wasn’t able to quickly find really good data on wages for the same workers across different countries, but this should at least give a rough idea. In Mumbai, the minimum monthly wage for a full-time worker is about $80. In Shanghai, it is about $250. If you multiply out the US federal minimum wage of $7.25 per hour by 40 hours by 4 weeks, that comes to $1160 per month.

Of course, these are not the same workers. Even an “unskilled” worker in the US has a lot more education and training than a minimum-wage worker in India or China. But it’s not that much more. Maybe if we normalize India to 1, China is 3 and the US is 10.

Likewise, these are not the same jobs. Even a minimum wage job in the US is much more capital-intensive and uses much higher technology than most jobs in India or China. But it’s not that much more. Again let’s say India is 1, China is 3 and the US is 10.

If we had constant returns to scale, what should the wages be? Well, for India at productivity 1, the wage is $80. So for China at productivity 3, the wage should be $240—it’s actually $250, close enough for this rough approximation. But the US wage should be $800—and it is in fact $1160, 45% larger than we would expect by constant returns to scale.

Let’s try comparing within a particular industry, where the differences in skill and technology should be far smaller. The median salary for a software engineer in India is about 430,000 INR, which comes to about $6,700. If that sounds rather low for a software engineer, you’re probably more accustomed to the figure for US software engineers, which is $74,000. That is a factor of 11 to 1. For the same job. Maybe US software engineers are better than Indian software engineers—but are they that much better? Yes, you can adjust for purchasing power and shrink the gap: Prices in the US are about 4 times as high as those in India, so the real gap might be 3 to 1. But these huge price differences themselves need to be explained somehow, and even 3 to 1 for the same job in the same industry is still probably too large to explain by differences in either capital or education, unless you allow for increasing returns to scale.

In most industries, we probably don’t have quite as much increasing returns to scale as I assumed in my simple model. Workers in the US don’t make 100 times as much as workers in India, despite plausibly having both 10 times as much physical capital and 10 times as much human capital.

But in some industries, this model might not even be enough! The most successful authors and filmmakers, for example, make literally thousands of times as much money as the average author or filmmaker in their own country. J.K. Rowling has almost $1 billion from writing the Harry Potter series; this is despite having literally the same amount of physical capital and probably not much more human capital than the average author in the UK who makes only about 11,000 GBP—which is about $14,000. Harry Potter and the Philosopher’s Stone is now almost exactly 20 years old, which means that Rowling made an average of $50 million per year, some 3500 times as much as the average British author. Is she better than the average British author? Sure. Is she three thousand times better? I don’t think so. And we can’t even make the argument that she has more capital and technology to work with, because she doesn’t! They’re typing on the same laptops and using the same printing presses. Either the return on human capital for British authors is astronomical, or something other than marginal productivity is at work here—and either way, we don’t have anything close to constant returns to scale.

What can we take away from this? Well, if we don’t have constant returns to scale, then even if wage rates are proportional to marginal productivity, they aren’t proportional to the component of marginal productivity that you yourself bring. The same software developer makes more at Microsoft than at some Indian software company, the same doctor makes more at a US hospital than a hospital in China, the same college professor makes more at Harvard than at a community college, and J.K. Rowling makes three thousand times as much as the average British author—therefore we can’t speak of marginal productivity as inhering in you as an individual. It is an emergent property of a production process that includes you as a part. So even if you’re entirely being paid according to “your” productivity, it’s not really your productivity—it’s the productivity of the production process you’re involved in. A myriad of other factors had to snap into place to make your productivity what it is, most of which you had no control over. So in what sense, then, can we say you earned your higher pay?

Moreover, this problem becomes most acute precisely when incomes diverge the most. The differential in wages between two welders at the same auto plant may well be largely due to their relative skill at welding. But there’s absolutely no way that the top athletes, authors, filmmakers, CEOs, or hedge fund managers could possibly make the incomes they do by being individually that much more productive.

Believing in civilization without believing in colonialism

JDN 2457541

In a post last week I presented some of the overwhelming evidence that society has been getting better over time, particularly since the start of the Industrial Revolution. I focused mainly on infant mortality rates—babies not dying—but there are lots of other measures you could use as well. Despite popular belief, poverty is rapidly declining, and is now the lowest it’s ever been. War is rapidly declining. Crime is rapidly declining in First World countries, and to the best of our knowledge crime rates are stable worldwide. Public health is rapidly improving. Lifespans are getting longer. And so on, and so on. It’s not quite true to say that every indicator of human progress is on an upward trend, but the vast majority of really important indicators are.

Moreover, there is every reason to believe that this great progress is largely the result of what we call “civilization”, even Western civilization: Stable, centralized governments, strong national defense, representative democracy, free markets, openness to global trade, investment in infrastructure, science and technology, secularism, a culture that values innovation, and freedom of speech and the press. We did not get here by Marxism, nor agrarian socialism, nor primitivism, nor anarcho-capitalism. We did not get here by fascism, nor theocracy, nor monarchy. This progress was built by the center-left welfare state, “social democracy”, “modified capitalism”, the system where free, open markets are coupled with a strong democratic government to protect and steer them.

This fact is basically beyond dispute; the evidence is overwhelming. The serious debate in development economics is over which parts of the Western welfare state are most conducive to raising human well-being, and which parts of the package are more optional. And even then, some things are fairly obvious: Stable government is clearly necessary, while speaking English is clearly optional.

Yet many people are resistant to this conclusion, or even offended by it, and I think I know why: They are confusing the results of civilization with the methods by which it was established.

The results of civilization are indisputably positive: Everything I just named above, especially babies not dying.

But the methods by which civilization was established are not; indeed, some of the greatest atrocities in human history are attributable at least in part to attempts to “spread civilization” to “primitive” or “savage” people.

It is therefore vital to distinguish between the result, civilization, and the processes by which it was effected, such as colonialism and imperialism.

First, it’s important not to overstate the link between civilization and colonialism.

We tend to associate colonialism and imperialism with White people from Western European cultures conquering other people in other cultures; but in fact colonialism and imperialism are basically universal to any human culture that attains sufficient size and centralization. India engaged in colonialism, Persia engaged in imperialism, China engaged in imperialism, the Mongols were of course major imperialists, and don’t forget the Ottoman Empire; and did you realize that Tibet and Mali were at one time imperialists as well? And of course there are a whole bunch of empires you’ve probably never heard of, like the Parthians and the Ghaznavids and the Umayyads. Even many of the people we’re accustomed to thinking of as innocent victims of colonialism were themselves imperialists—the Aztecs certainly were (they even sold people into slavery and used them for human sacrifice!), as were the Pequot, and the Iroquois may not have outright conquered anyone but were definitely at least “soft imperialists” the way that the US is today, spreading their influence around and using economic and sometimes military pressure to absorb other cultures into their own.

Of course, those were all civilizations, at least in the broadest sense of the word; but before that, it’s not that there wasn’t violence, it just wasn’t organized enough to be worthy of being called “imperialism”. The more general concept of intertribal warfare is a human universal, and some hunter-gatherer tribes actually engage in an essentially constant state of warfare we call “endemic warfare”. People have been grouping together to kill other people they perceived as different for at least as long as there have been people to do so.

This is of course not to excuse what European colonial powers did when they set up bases on other continents and exploited, enslaved, or even murdered the indigenous population. And the absolute numbers of people enslaved or killed are typically larger under European colonialism, mainly because European cultures became so powerful and conquered almost the entire world. Even if European societies were not uniquely predisposed to be violent (and I see no evidence to say that they were—humans are pretty much humans), they were more successful in their violent conquering, and so more people suffered and died. It’s also a first-mover effect: If the Ming Dynasty had supported Zheng He more in his colonial ambitions, I’d probably be writing this post in Mandarin and reflecting on why Asian cultures have engaged in so much colonial oppression.

While there is a deeply condescending paternalism (and often post-hoc rationalization of your own self-interested exploitation) involved in saying that you are conquering other people in order to civilize them, humans are also perfectly capable of committing atrocities for far less noble-sounding motives. There are holy wars such as the Crusades and ethnic genocides like in Rwanda, and the Arab slave trade was purely for profit and didn’t even have the pretense of civilizing people (not that the Atlantic slave trade was ever really about that anyway).

Indeed, I think it’s important to distinguish between colonialists who really did make some effort at civilizing the populations they conquered (like Britain, and also the Mongols actually) and those that clearly were just using that as an excuse to rape and pillage (like Spain and Portugal). This is similar to but not quite the same thing as the distinction between settler colonialism, where you send colonists to live there and build up the country, and exploitation colonialism, where you send military forces to take control of the existing population and exploit them to get their resources. Countries that experienced settler colonialism (such as the US and Australia) have fared a lot better in the long run than countries that experienced exploitation colonialism (such as Haiti and Zimbabwe).

The worst consequences of colonialism weren’t even really anyone’s fault, actually. The reason something like 98% of all Native Americans died as a result of European colonization was not that Europeans killed them—they did kill thousands of course, and I hope it goes without saying that that’s terrible, but it was a small fraction of the total deaths. The reason such a huge number died and whole cultures were depopulated was disease, and the inability of medical technology in any culture at that time to handle such a catastrophic plague. The primary cause was therefore accidental, and not really foreseeable given the state of scientific knowledge at the time. (I therefore think it’s wrong to consider it genocide—maybe democide.) Indeed, what really would have saved these people would be if Europe had advanced even faster into industrial capitalism and modern science, or else waited to colonize until they had; and then they could have distributed vaccines and antibiotics when they arrived. (Of course, there is evidence that a few European colonists used the diseases intentionally as biological weapons, which no amount of vaccine technology would prevent—and that is indeed genocide. But again, this was a small fraction of the total deaths.)

However, even with all those caveats, I hope we can all agree that colonialism and imperialism were morally wrong. No nation has the right to invade and conquer other nations; no one has the right to enslave people; no one has the right to kill people based on their culture or ethnicity.

My point is that it is entirely possible to recognize that and still appreciate that Western civilization has dramatically improved the standard of human life over the last few centuries. It simply doesn’t follow from the fact that British government and culture were more advanced and pluralistic that British soldiers can just go around taking over other people’s countries and planting their own flag (follow the link if you need some comic relief from this dark topic). That was the moral failing of colonialism; not that they thought their society was better—for in many ways it was—but that they thought that gave them the right to terrorize, slaughter, enslave, and conquer people.

Indeed, the “justification” of colonialism is a lot like that bizarre pseudo-utilitarianism I mentioned in my post on torture, where the mere presence of some benefit is taken to justify any possible action toward achieving that benefit. No, that’s not how morality works. You can’t justify unlimited evil by any good—it has to be a greater good, as in actually greater.

So let’s suppose that you do find yourself encountering another culture which is clearly more primitive than yours; their inferior technology results in them living in poverty and having very high rates of disease and death, especially among infants and children. What, if anything, are you justified in doing to intervene to improve their condition?

One idea would be to hold to the Prime Directive: No intervention, no sir, not ever. This is clearly what Gene Roddenberry thought of imperialism, hence why he built it into the Federation’s core principles.

But does that really make sense? Even as Star Trek shows progressed, the writers kept coming up with situations where the Prime Directive really seemed like it should have an exception, and sometimes decided that the honorable crew of Enterprise or Voyager really should intervene in this more primitive society to save them from some terrible fate. And I hope I’m not committing a Fictional Evidence Fallacy when I say that if your fictional universe, specifically designed not to let that happen, makes it happen anyway, well… maybe it’s something we should be considering.

What if people are dying of a terrible disease that you could easily cure? Should you really deny them access to your medicine to avoid intervening in their society?

What if the primitive culture is ruled by a horrible tyrant that you could easily depose with little or no bloodshed? Should you let him continue to rule with an iron fist?

What if the natives are engaged in slavery, or even their own brand of imperialism against other indigenous cultures? Can you fight imperialism with imperialism?

And then we have to ask, does it really matter whether their babies are being murdered by the tyrant or simply dying from malnutrition and infection? The babies are just as dead, aren’t they? Even if we say that being murdered by a tyrant is worse than dying of malnutrition, it can’t be that much worse, can it? Surely 10 babies dying of malnutrition is at least as bad as 1 baby being murdered?

But then it begins to seem like we have a duty to intervene, and moreover a duty that applies in almost every circumstance! If you are on opposite sides of the technology threshold where infant mortality drops from 30% to 1%, how can you justify not intervening?

I think the best answer here is to keep in mind the very large costs of intervention as well as the potentially large benefits. The answer sounds simple, but is actually perhaps the hardest possible answer to apply in practice: You must do a cost-benefit analysis. Furthermore, you must do it well. We can’t demand perfection, but it must actually be a serious good-faith effort to predict the consequences of different intervention policies.

We know that people tend to resist most outside interventions, especially if you have the intention of toppling their leaders (even if they are indeed tyrannical). Even the simple act of offering people vaccines could be met with resistance, as the native people might think you are poisoning them or somehow trying to control them. But in general, opening contact with gifts and trade is almost certainly going to trigger less hostility and therefore be more effective than going in guns blazing.

If you do use military force, it must be targeted at the particular leaders who are most harmful, and it must be designed to achieve swift, decisive victory with minimal collateral damage. (Basically I’m talking about just war theory.) If you really have such an advanced civilization, show it by exhibiting total technological dominance and minimizing the number of innocent people you kill. The NATO interventions in Kosovo and Libya mostly got this right. The Vietnam War and Iraq War got it totally wrong.

As you change their society, you should be prepared to bear most of the cost of transition; you are, after all, much richer than they are, and also the ones responsible for effecting the transition. You should not expect to see short-term gains for your own civilization, only long-term gains once their culture has advanced to a level near your own. You can’t bear all the costs of course—transition is just painful, no matter what you do—but at least the fungible economic costs should be borne by you, not by the native population. Examples of doing this wrong include basically all the standard examples of exploitation colonialism: Africa, the Caribbean, South America. Examples of doing this right include West Germany and Japan after WW2, and South Korea after the Korean War—which is to say, the greatest economic successes in the history of the human race. This was us winning development, humanity. Do this again everywhere and we will have not only ended world hunger, but achieved global prosperity.

What happens if we apply these principles to real-world colonialism? It does not fare well. Nor should it, as we’ve already established that most if not all real-world colonialism was morally wrong.

15th- and 16th-century colonialism fails immediately; it offers no benefit to speak of. Europe’s technological superiority was enough to give them gunpowder but not enough to drop their infant mortality rate. Maybe life was better in 16th century Spain than it was in the Aztec Empire, but honestly not by all that much; and life in the Iroquois Confederacy was in many ways better than life in 15th century England. (Though maybe that justifies some Iroquois imperialism, at least their “soft imperialism”?)

If these principles did justify any real-world imperialism—and I am not convinced that it does—it would only be much later imperialism, like the British Empire in the 19th and 20th century. And even then, it’s not clear that the talk of “civilizing” people and “the White Man’s Burden” was much more than rationalization, an attempt to give a humanitarian justification for what were really acts of self-interested economic exploitation. Even though India and South Africa are probably better off now than they were when the British first took them over, it’s not at all clear that this was really the goal of the British government so much as a side effect, and there are a lot of things the British could have done differently that would obviously have made them better off still—you know, like not implementing the precursors to apartheid, or making India a parliamentary democracy immediately instead of starting with the Raj and only conceding to democracy after decades of protest. What actually happened doesn’t exactly look like Britain cared nothing for actually improving the lives of people in India and South Africa (they did build a lot of schools and railroads, and sought to undermine slavery and the caste system), but it also doesn’t look like that was their only goal; it was more like one goal among several which also included the strategic and economic interests of Britain. It isn’t enough that Britain was a better society or even that they made South Africa and India better societies than they were; if the goal wasn’t really about making people’s lives better where you are intervening, it’s clearly not justified intervention.

And that’s the relatively beneficent imperialism; the really horrific imperialists throughout history made only the barest pretense of spreading civilization and were clearly interested in nothing more than maximizing their own wealth and power. This is probably why we get things like the Prime Directive; we saw how bad it can get, and overreacted a little by saying that intervening in other cultures is always, always wrong, no matter what. It was only a slight overreaction—intervening in other cultures is usually wrong, and almost all historical examples of it were wrong—but it is still an overreaction. There are exceptional cases where intervening in another culture can be not only morally right but obligatory.

Indeed, one underappreciated consequence of colonialism and imperialism is that they have triggered a backlash against real good-faith efforts toward economic development. People in Africa, Asia, and Latin America see economists from the US and the UK (and most of the world’s top economists are in fact educated in the US or the UK) come in and tell them that they need to do this and that to restructure their society for greater prosperity, and they understandably ask: “Why should I trust you this time?” The last two or four or seven batches of people coming from the US and Europe to intervene in their countries exploited them or worse, so why is this time any different?

It is different, of course; UNDP is not the East India Company, not by a longshot. Even for all their faults, the IMF isn’t the East India Company either. Indeed, while these people largely come from the same places as the imperialists, and may be descended from them, they are in fact completely different people, and moral responsibility does not inherit across generations. While the suspicion is understandable, it is ultimately unjustified; whatever happened hundreds of years ago, this time most of us really are trying to help—and it’s working.

The challenges of a global basic income

JDN 2457404

In the previous post I gave you the good news. Now for the bad news.

So we are hoping to implement a basic income of $3,000 per person per year worldwide, eliminating poverty once and for all.

There is no global government to implement this system. There is no global income tax to be collected or refunded. The United Nations and the World Bank, for all the good work that they do, are nowhere near powerful enough (or well-funded enough) to accomplish this feat.

Worse, the people we need to help the most, not coincidentally, live in the countries that are worst-managed. They are surrounded not only by squalor, but also by corruption, war, ethnic tension. Most of the people are underfed, uneducated, and dying from diseases such as malaria and schistosomiasis that we could treat in a day for pocket change. Their infrastructure is either crumbling or nonexistent. Their water is unsafe to drink. And worst of all, many of their governments don’t care. Tyrants like Robert Mugabe, Kim Jong-un, King Salman (of our lovely ally Saudi Arabia), and Isayas Afewerki care nothing for the interests of the people they rule, and are interested only in maximizing their own wealth and power. If we arranged to provide grants to these countries in an amount sufficient to provide the basic income, there’s no reason to think they’d actually provide it; they’d simply deposit the check in their own personal bank accounts, and use it to buy ever more extravagant mansions or build ever greater monuments to themselves. They really do seem to follow a utility function based entirely upon their own consumption; witness your neoclassical rational agent and despair.

There are ways for international institutions and non-governmental organizations to intervene to help people in these countries, and indeed many have done so to considerable effect. As bad as things are, they are much better than they used to be, and they promise to be even better tomorrow. But there is only so much they can do without the force of law at their backs, without the power to tax incomes and print currency.

We will therefore need a new kind of institutional framework, if not a true world government then something very much like it. Establishing this new government will not be easy, and worst of all I see no way to do it other than military force. Tyrants will not give up their power willingly; it will need to be taken from them. We will need to capture and imprison tyrants like Robert Mugabe and Kim Jong-un in the same way that we once did to mob bosses like Al Capone, for ultimately a tyrant is nothing but a mob boss with an army. Unless we can find some way to target them precisely and smoothly replace their regimes with democracies, this will mean nothing less than war, and it could kill thousands, even millions of people—but millions of people are already dying, and will continue to die as long as we leave these men in power. Sanctions might help (though sanctions kill people too), and perhaps a few can be persuaded to step down, but the rest must be overthrown, by some combination of local revolutions and international military coalitions. The best model I’ve seen for how this might be pulled off is Libya, where Qaddafi was at last removed by an international military force supporting a local revolution—but even Libya is not exactly sunshine and rainbows right now. One of the first things we need to do is seriously plan a strategy for removing repressive dictators with a minimum of collateral damage.

To many, I suspect this sounds like imperialism, colonialism redux. Didn’t so many imperialistic powers say that they were doing it to help the local population? Yes, they did; and one of the facts that we must face up to is that it was occasionally true. Or if helping the local population was not their primary motivation, it was nonetheless sometimes a consequence. Countries colonized by the British Empire in particular are now among the most prosperous, free nations in the world: The United States, Canada, Australia. South Africa and India might seem like exceptions (GDP PPP per capita of $12,400 and $5,500 respectively), but they really aren’t, compared to what they were before—or even compared to their neighbors today: Angola has a per capita GDP PPP of $7,546, while Bangladesh has only $2,991. Zimbabwe is arguably an exception (per capita GDP PPP of $1,773), but their total economic collapse occurred after the British left. To include Zimbabwe in this basic income program would literally triple the income of most of its population. But to do that, we must first get through Robert Mugabe.

Furthermore, I believe that we can avoid many of the mistakes of the past. We don’t have to do exactly the same thing that countries used to do when they invaded each other and toppled governments. Of course we should not enslave, subjugate, or murder the local population—one would hope that would go without saying, but history shows it doesn’t. We also shouldn’t annex the territory and claim it as our own, nor should we set up puppet governments that are only democratic as long as it serves our interests. (And make no mistake, we have done this, all too recently.) The goal must really be to help the people of countries like Zimbabwe and Eritrea establish their own liberal democracy, including the right to make policies we don’t like—or even policies we think are terrible ideas. If we can do so without war, of course we should. But right now what is usually called “pacifism” leaves millions of people to starve while we do nothing.

The argument that we have previously supported (or even continue to support, ahem, Saudi Arabia) many of these tyrants is sort of beside the point. Yes, that is clearly true; and yes, that is clearly terrible. But do you think that if we simply leave the situation alone they’ll go away? We should never have propped up Saddam Hussein or supported the mujahideen who became the Taliban; and yes, I do think we could have known that at the time. But once they are there, what do you propose to do now? Wait for them to die? Hope they collapse on their own? Give our #thoughtsandprayers to revolutionaries? When asked what you think we should do, “We shouldn’t have done X” is not a valid response.

Imagine there is a mob boss who had kidnapped several families and is holding them in a warehouse. Suppose that at some point the police supported the mob boss in some way; in a deal to undermine a worse rival mafia family, they looked the other way on some things he did, or even gave him money that he used to strengthen his mob. (With actual police, the former is questionable, but actually done all the time; the latter would be definitely illegal. In the international analogy, both are ubiquitous.) Even suppose that the families who were kidnapped were previously from a part of town that the police would regularly shake down for petty crimes and incessant stop-and-frisks. The police definitely have a lot to answer for in all this; their crimes should not be forgotten. But how does it follow in any way that the police should not intervene to rescue the families from the warehouse? Suppose we even know that the warehouse is heavily guarded, and the resulting firefight may kill some of the hostages we are hoping to save. This gives us reason to negotiate, or to find the swiftest, most precise means to deploy the SWAT teams; but does it give us reason to do nothing?

Once again I think Al Capone is the proper analogy; when the federal government finally brought down Al Capone, they didn’t bomb Chicago to the ground, nor did they attempt to enslave the population of Illinois. They thought of themselves as targeting one man and his lieutenants and restoring order and civil government for a free people; that is what we must do in Eritrea and Zimbabwe. (In response to all this, no doubt someone will say: “You just want the US to be the world’s police.” Well, no, I want an international coalition; but yes, given our military and economic hegemony, the US will take a very important role. Above all, yes, I want the world to have police. Why don’t you?)

For everything we did wrong in the recent wars in Afghanistan and Iraq, I think we actually did this part right: Afghanistan’s GDP PPP per capita has risen over 70% since 2002, and Iraq’s is now 17% higher than its pre-war peak. It’s a bit early to say whether we have really established stable liberal democracies there, and the Iraq War surely contributed to the rise of Daesh; but when the previous condition was the Taliban and Saddam Hussein it’s hard not to feel that things are at least somewhat improving. In a generation or two maybe we really will say “Iraq” in the same breath as “Korea” as one of the success stories of prosperous democracies set up after US wars. Or maybe it will all fall apart; it’s hard to say at this point.

So, we must find a way to topple the tyrants. Once that is done, we will need to funnel huge amounts of resources into building infrastructure, educating people, and establishing sound institutions: at least one if not two orders of magnitude more than our current level of foreign aid. Our current “record high” foreign aid is less than 0.3% of the world’s GDP. We have a model for this as well: It’s what we did in West Germany and Japan after WW2, as well as what we did in South Korea after the Korean War. It is not a coincidence that Germany soon regained its status as a world power, while Japan rebuilt itself into an economic powerhouse and South Korea became the first of the “Asian Tigers”, the East Asian nations that rose up to join us at a First World standard of living.

Will all of this be expensive? Absolutely. At $3,000 per person per year for roughly seven billion people, I am already figuring in an expenditure of $21 trillion per year, indefinitely. This would be the most expensive project upon which humanity has ever embarked. But it could also be the most important—an end to poverty, everywhere, forever. And we have that money; we’re simply using it for other things. At purchasing power parity the world spends over $100 trillion per year. Using about 20% of the world’s income to eliminate poverty forever doesn’t seem like such a bad deal to me. (It’s not like the money would disappear; it would be immediately spent back into the economy anyway. We might even see growth as a result.)

When dealing with events on this scale, it’s easy to get huge numbers that sound absurd. But even if we assumed that only the US, Europe, and China supported this program, it would only take 37% of our combined income—roughly what we currently spend on housing.
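For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The population and income figures are my own rough, round-number assumptions chosen to match the estimates above, not precise data:

```python
# Back-of-the-envelope check of the basic income cost figures above.
# All inputs are rough, round-number assumptions for illustration only.

basic_income = 3_000          # dollars per person per year
world_population = 7e9        # roughly seven billion people
world_gdp_ppp = 100e12        # world income at PPP: over $100 trillion per year
us_eu_china_gdp_ppp = 57e12   # assumed combined income of the US, Europe, and China

total_cost = basic_income * world_population
print(f"Total cost: ${total_cost / 1e12:.0f} trillion per year")               # ~$21 trillion
print(f"Share of world income: {total_cost / world_gdp_ppp:.0%}")              # ~21%
print(f"Share of US+EU+China income: {total_cost / us_eu_china_gdp_ppp:.0%}")  # ~37%
```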

Whenever people complain, “We spend billions of dollars a year on aid, and we haven’t solved world hunger!” the proper answer is, “That’s right; we should be spending trillions.”

To truly honor veterans, end war

JDN 2457339 EST 20:00 (Nov 11, 2015)

Today is Veterans’ Day, on which we are asked to celebrate the service of military veterans, particularly those who have died as a result of war. We tend to focus on those who die in combat, but combat deaths have actually always been relatively uncommon; throughout history, most soldiers have died later of their wounds or of infections. More recently, thanks to advances in body armor and medicine, relatively few soldiers die even of war wounds or infections—instead, they are permanently maimed and psychologically damaged, and the most common way that war kills soldiers now is by making them commit suicide.

Even adjusting for the fact that soldiers are mostly young men (the group of people most likely to commit suicide), military veterans still have about 50 excess suicides per million people per year, for a total of about 300 suicides per million per year. Using the total number, that’s over 8000 veteran suicides per year, or 22 per day. Using only the excess compared to men of the same ages, it’s still an additional 1300 suicides per year.

While the 14-years-and-counting Afghanistan War has killed 2,271 American soldiers and the 11-year Iraq War has killed 4,491 American soldiers directly (or as a result of wounds), during that same time period from 2001 to 2015 there have been about 18,000 excess suicides as a result of the military—excess in the sense that they would not have occurred if those men had been civilians. Altogether that means there would be nearly 25,000 additional American soldiers alive today were it not for these two wars.
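To see how these figures fit together, here is a quick back-of-the-envelope reconciliation using the rates quoted above (the rounding is mine):

```python
# Rough reconciliation of the "nearly 25,000" figure from the rates above.
excess_veteran_suicides_per_year = 1_300   # excess relative to comparable civilian men
years = 14                                  # roughly 2001 through 2015
combat_deaths_afghanistan = 2_271
combat_deaths_iraq = 4_491

excess_suicides = excess_veteran_suicides_per_year * years    # ~18,200
total_excess_deaths = (excess_suicides
                       + combat_deaths_afghanistan
                       + combat_deaths_iraq)                   # ~25,000
print(excess_suicides, total_excess_deaths)                    # 18200 24962
```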

War does not only kill soldiers while they are on the battlefield—indeed, most of the veterans it kills die here at home.

There is a reason Woodrow Wilson chose November 11 as the date for this holiday (originally Armistice Day): It was on this day in 1918 that World War 1, arguably the deadliest war in human history up to that point, was officially ended. Sadly, that grim record did not stand for long; World War 2 surpassed it a generation later. Fortunately, no other war has ever exceeded World War 2—at least, not yet.

We tend to celebrate holidays like this with a lot of ritual and pageantry (or even in the most inane and American way possible, with free restaurant meals and discounts on various consumer products), and there’s nothing inherently wrong with that. Nor is there anything wrong with taking a moment to salute the flag or say “Thank you for your service.” But that is not how I believe veterans should be honored. If I were a veteran, that is not how I would want to be honored.

We are getting much closer to how I think they should be honored when the White House announces reforms at Veterans’ Affairs hospitals and guaranteed in-state tuition at public universities for families of veterans—things that really do in a concrete and measurable way improve the lives of veterans and may even save some of them from that cruel fate of suicide.

But ultimately there is only one way that I believe we can truly honor veterans and the spirit of the holiday as Wilson intended it, and that is to end war once and for all.

Is this an ambitious goal? Absolutely. But is it an impossible dream? I do not believe so.

In just the last half century, we have already made most of the progress that needed to be made. In this brilliant video animation, you can see two things: First, the mind-numbingly horrific scale of World War 2, the worst war in human history; but second, the incredible progress we have made since then toward world peace. It was as if the world needed that one time to be so unbearably horrible in order to finally realize just what war is and why we need a better way of solving conflicts.

This is part of a very long-term trend in declining violence, for a variety of reasons that are still not thoroughly understood. In simplest terms, human beings just seem to be getting better at not killing each other.

Nassim Nicholas Taleb argues that this is just a statistical illusion, because technologies like nuclear weapons create the possibility of violence on a previously unimaginable scale, and it simply hasn’t happened yet. For nuclear weapons in particular, I think he may be right—the consequences of nuclear war are simply so catastrophic that even a small risk of it is worth paying almost any price to avoid.

Fortunately, nuclear weapons are not necessary to prevent war: South Africa has no designs on attacking Japan anytime soon, but neither has nuclear weapons. Germany and Poland lack nuclear arsenals and were the first countries to fight in World War 2, but now that both are part of the European Union, war between them today seems almost unthinkable. When American commentators fret about China today it is always about wage competition and Treasury bonds, not aircraft carriers and nuclear missiles. Conversely, North Korea’s acquisition of nuclear weapons has by no means stabilized the region against future conflicts, and the fact that India and Pakistan have nuclear missiles pointed at one another has hardly prevented them from killing each other over Kashmir. We do not need nuclear weapons as a constant threat of annihilation in order to learn to live together; political and economic ties achieve that goal far more reliably.

And I think Taleb is wrong about the trend in general. He argues that the only reason violence is declining is that concentration of power has made violence rarer but more catastrophic when it occurs. Yet we know that many forms of violence which used to occur no longer do, not because of the overwhelming force of a Leviathan to prevent them, but because people simply choose not to do them anymore. There are no more gladiator fights, no more cat-burnings, no more public lynchings—not because of the expansion in government power, but because our society seems to have grown out of that phase.

Indeed, what horrifies us about ISIS and Boko Haram would have been considered quite normal, even civilized, in the Middle Ages. (If you’ve ever heard someone say we should “bring back chivalry”, you should explain to them that the system of knightly chivalry in the 12th century had basically the same moral code as ISIS today—one of the commandments that Gautier’s La Chevalerie lists as part of the chivalric code is literally “Thou shalt make war against the infidel without cessation and without mercy.”) It is not so much that they are uniquely evil by historical standards, as that we grew out of that sort of barbaric violence a while ago but they don’t seem to have gotten the memo.

In fact, one thing people don’t seem to understand about Steven Pinker’s argument about this “Long Peace” is that it still works if you include the world wars. The reason World War 2 killed so many people was not that it was uniquely brutal, nor even simply because its weapons were more technologically advanced. It also had to do with the scale of integration—we called it a single war even though it involved dozens of countries because those countries were all united into one of two sides, whereas in centuries past that many countries could be constantly fighting each other in various combinations but it would never be called the same war. But the primary reason World War 2 killed the largest raw number of people was simply because the world population was so much larger. Controlling for world population, World War 2 was not even among the top 5 worst wars—it barely makes the top 10. The worst war in history by proportion of the population killed was almost certainly the An Lushan Rebellion in 8th century China, which many of you may not even have heard of until today.

Though it may not seem so as ISIS kidnaps Christians and drone strikes continue, shrouded in secrecy, we really are on track to end war. Not today, not tomorrow, maybe not in any of our lifetimes—but someday, we may finally be able to celebrate Veterans’ Day as it was truly intended: To honor our soldiers by making it no longer necessary for them to die.

What makes a nation wealthy?

JDN 2457251 EDT 10:17

One of the central questions of economics—perhaps the central question, the primary reason why economics is necessary and worthwhile—is development: How do we raise a nation from poverty to prosperity?

We have done it before: France and Germany rose from the quite literal ashes of World War 2 to some of the most prosperous societies in the world. Their per-capita GDP over the 20th century rose like this (all of these figures are from the World Bank World Development Indicators; France is green, Germany is blue):

[Figure: GDP per capita, France and Germany, at market exchange rates]

[Figure: GDP per capita at PPP, France and Germany]

The top graph is at market exchange rates; the bottom corrects for purchasing power parity (PPP). The PPP figures are more meaningful, but unfortunately the World Bank only began collecting good purchasing-power data around 1990.

Around the same time, but even more spectacularly, Japan and South Korea rose from poverty-stricken Third World backwaters to high-tech First World powers in only a couple of generations. Check out their per-capita GDP over the 20th century (Japan is green, South Korea is blue):

[Figure: GDP per capita, Japan and South Korea, at market exchange rates]

[Figure: GDP per capita at PPP, Japan and South Korea]


This is why I am only half-joking when I define development economics as “the ongoing project to figure out what happened in South Korea and make it happen everywhere in the world”.

More recently China has been on a similar upward trajectory, which is particularly important since China comprises such a huge portion of the world’s population—but they are far from finished:

[Figure: GDP per capita, China, at market exchange rates]

[Figure: GDP per capita at PPP, China]

Compare these to societies that have not achieved economic development, such as Zimbabwe (green), India (black), Ghana (red), and Haiti (blue):

[Figure: GDP per capita, Zimbabwe, India, Ghana, and Haiti, at market exchange rates]

[Figure: GDP per capita at PPP, Zimbabwe, India, Ghana, and Haiti]

They’re so poor that you can barely see them on the same scale, so I’ve rescaled so that the top is $5,000 per person per year instead of $50,000:

[Figure: GDP per capita, Zimbabwe, India, Ghana, and Haiti, rescaled to a $5,000 maximum, at market exchange rates]

[Figure: GDP per capita at PPP, Zimbabwe, India, Ghana, and Haiti, rescaled to a $5,000 maximum]

Only India actually manages to get above $5,000 per person per year at purchasing power parity, and then not by much, reaching $5,243 per person per year in 2013, the most recent year for which data are available.

I had wanted to compare North Korea and South Korea, because the two countries were united as recently as 1945 and were not all that different to begin with, yet have taken completely different development trajectories. Unfortunately, North Korea is so impoverished, corrupt, and authoritarian that the World Bank doesn’t even report data on their per-capita GDP. Perhaps that is contrast enough?

And then of course there are the countries in between, which have made some gains but still have a long way to go, such as Uruguay (green) and Botswana (blue):

[Figure: GDP per capita, Uruguay and Botswana, at market exchange rates]

[Figure: GDP per capita at PPP, Uruguay and Botswana]

But despite the fact that we have observed successful economic development, we still don’t really understand how it works. A number of theories have been proposed, involving a wide range of factors including exports, corruption, disease, institutions of government, liberalized financial markets, and natural resources (counter-intuitively, more natural resources tend to make development worse).

I’m not going to resolve that whole debate in a single blog post. (I may not be able to resolve that whole debate in a single career, though I am definitely trying.) We may ultimately find that economic development is best conceived as like “health”; what factors determine your health? Well, a lot of things, and if any one thing goes badly enough wrong the whole system can break down. Economists may need to start thinking of ourselves as akin to doctors (or as Keynes famously said, dentists), diagnosing particular disorders in particular patients rather than seeking one unifying theory. On the other hand, doctors depend upon biologists, and it’s not clear that we yet understand development even at that level.

Instead I want to take a step back, and ask a more fundamental question: What do we mean by prosperity?

My hope is that if we can better understand what it is we are trying to achieve, we can also better understand the steps we need to take in order to get there.

Thus far it has sort of been “I know it when I see it”; we take it as more or less given that the United States and the United Kingdom are prosperous while Ghana and Haiti are not. I certainly don’t disagree with that particular conclusion; I’m just asking what we’re basing it on, so that we can hopefully better apply it to more marginal cases.


For example: Is France more or less prosperous than Saudi Arabia? If we go solely by GDP per capita PPP, clearly Saudi Arabia is more prosperous at $53,100 per person per year than France is at $37,200 per person per year.

But people actually live longer in France, on average, than they do in Saudi Arabia. Overall reported happiness is higher in France than in Saudi Arabia. I think France is actually more prosperous.


In fact, I think the United States is not as prosperous as we like to pretend. We are certainly more prosperous than most other countries; we are definitely still well within First World status. But we are not the most prosperous nation in the world.

Our total GDP is astonishingly high (highest in the world nominally, second only to China PPP). Our GDP per-capita is higher than any other country of comparable size; no nation with higher GDP PPP than the US has a population larger than the Chicago metropolitan area. (You may be surprised to find that in order from largest to smallest population the countries with higher GDP per capita PPP are the United Arab Emirates, Switzerland, Hong Kong, Singapore, and then Norway, followed by Kuwait, Qatar, Luxembourg, Brunei, and finally San Marino—which is smaller than Ann Arbor.) Our per-capita GDP PPP of $51,300 is markedly higher than that of France ($37,200), Germany ($42,900), or Sweden ($43,500).

But at the same time, if you compare the US to other First World countries, we have nearly the highest rate of child poverty and one of the highest rates of infant mortality. We have shorter life expectancy and dramatically higher homicide rates. Our inequality is the highest in the First World. In France and Sweden, the top 0.01% receive about 1% of the income (i.e. 100 times as much as the average person), while in the United States they receive almost 4%, making someone in the top 0.01% nearly 400 times as rich as the average person.

By ranking countries solely on GDP per capita, we are effectively rigging the game in our own favor. Or rather, the rich in the United States are rigging the game in their own favor (what else is new?), by convincing all the world’s economists to rank countries based on a measure that favors them.

Amartya Sen, one of the greats of development economics, helped develop a scale called the Human Development Index that attempts to take broader factors into account. It’s far from perfect, but it’s definitely a step in the right direction.

In particular, France’s HDI is higher than that of Saudi Arabia, fitting my intuition about which country is truly more prosperous. However, the US still does extremely well, with only Norway, Australia, Switzerland, and the Netherlands above us. I think the index might still be biased toward high average incomes rather than overall happiness.

In practice, we still use GDP an awful lot, probably because it’s much easier to measure. It’s sort of like IQ tests and SAT scores; we know damn well it’s not measuring what we really care about, but because it’s so much easier to work with we keep using it anyway.

This is a problem, because the better you get at optimizing toward the wrong goal, the worse your overall outcomes are going to be. If you are just sort of vaguely pointed at several reasonable goals, you will probably be improving your situation overall. But when you start precisely optimizing to a specific wrong goal, it can drag you wildly off course.

This is what we mean when we talk about “gaming the system”. Consider test scores, for example. If test scores are just one of several goals you are vaguely pointed at, you are likely to engage in generally good behaviors like getting enough sleep, going to class, and studying the material. But if your single goal is to maximize your test score at all costs, what will you do? Cheat, of course.

This is also related to the Friendly AI Problem: It is vitally important to know precisely what goals we want our artificial intelligences to have, because whatever goals we set, they will probably be very good at achieving them. Already computers can do many things that were previously impossible, and as they improve over time we will reach the point where in a meaningful sense our AIs are even smarter than we are. When that day comes, we will want to make very, very sure that we have designed them to want the same things that we do—because if our desires ever come into conflict, theirs are likely to win. The really scary part is that right now most of our AI research is done by for-profit corporations or the military, and “maximize my profit” and “kill that target” are most definitely not the ultimate goals we want in a superintelligent AI. It’s trivially easy to see what’s wrong with these goals: For the former, hack into the world banking system and transfer trillions of dollars to the company accounts. For the latter, hack into the nuclear launch system and launch a few ICBMs in the general vicinity of the target. Yet these are the goals we’ve been programming into the actual AIs we build!

If we set GDP per capita as our ultimate goal to the exclusion of all other goals, there are all sorts of bad policies we would implement: We’d ignore inequality until it reached staggering heights, ignore work stress even as it began to kill us, maximize the pressure for everyone to work constantly, use poverty as a stick to force people to work even if some of them starve, inundate everyone with ads to get them to spend as much as possible, and repeal regulations that protect the environment, workers, and public health… wait. This isn’t actually hypothetical, is it? We are doing those things.

At least we’re not trying to maximize nominal GDP, or we’d have long since ended up like Zimbabwe. No, our economists are at least smart enough to adjust for purchasing power. But they’re still designing an economic system that works us all to death to maximize the number of gadgets that come off assembly lines. The purchasing-power adjustment doesn’t include the value of our health or free time.

This is why the Human Development Index is a major step in the right direction; it reminds us that society has other goals besides maximizing the total amount of money that changes hands (because that’s actually all that GDP is measuring; if you get something for free, it isn’t counted in GDP). More recent refinements include things like “natural resource services” that include environmental degradation in estimates of investment. Unfortunately there is no accepted way of doing this, and surprisingly little research on how to improve our accounting methods. Many nations seem resistant to doing so precisely because they know it would make their economic policy look bad—this is almost certainly why China canceled its “green GDP” initiative. This is in fact all the more reason to do it; if it shows that our policy is bad, that means our policy is bad and should be fixed. But people have allowed themselves to value image over substance.

We can do better still, and in fact I think something like the QALY (quality-adjusted life year) is probably the way to go. Rather than some weird arbitrary scaling of income, lifespan, and education (which is roughly what the HDI is), we need to put everything in the same units, and those units must be directly linked to human happiness. At the very least, we should make some sort of adjustment to our GDP calculation that includes the distribution of wealth and its marginal utility; adding $1,000 to the economy and handing it to someone in poverty should count for a great deal, but adding $1,000,000 and handing it to a billionaire should count for basically nothing. (It’s not bad to give a billionaire another million; but it’s hardly good either, as no one’s real standard of living will change.) Calculating that could be as simple as dividing each windfall by the recipient’s current income: if your annual income is $10,000 and you receive $1,000, you’ve added about 0.1 QALY. If your annual income is $1 billion and you receive $1 million, you’ve added only 0.001 QALY. Maybe we should simply separate out all individual (or household, to be simpler?) incomes, take their logarithms, and then use that sum as our “utility-adjusted GDP”. The results would no doubt be quite different.
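As a concrete illustration, here is a minimal sketch in Python of what a log-income “utility-adjusted GDP” might look like. The incomes are made up, and the log specification is just one illustrative choice; note that for small windfalls it closely matches the divide-by-income rule described above:

```python
import math

def utility_adjusted_gdp(incomes):
    """A crude 'utility-adjusted GDP': the sum of log-incomes, so that a given
    percentage gain counts the same no matter how rich the recipient already is."""
    return sum(math.log(income) for income in incomes if income > 0)

# Giving $1,000 to someone earning $10,000 versus $1,000,000 to a billionaire:
print(math.log(10_000 + 1_000) - math.log(10_000))                     # ~0.095
print(math.log(1_000_000_000 + 1_000_000) - math.log(1_000_000_000))   # ~0.001

# Applied to a toy three-person economy, before and after a $1,000 transfer
# to the poorest person:
before = [10_000, 40_000, 1_000_000_000]
after = [11_000, 40_000, 1_000_000_000]
print(utility_adjusted_gdp(after) - utility_adjusted_gdp(before))      # ~0.095
```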

This would create a strong pressure for policy to be directed at reducing inequality even at the expense of some economic output—which is exactly what we should be willing to do. If it’s really true that a redistribution policy would hurt the overall economy so much that the harms would outweigh the benefits, then we shouldn’t do that policy; but that is what you need to show. Reducing total GDP is not a sufficient reason to reject a redistribution policy, because it’s quite possible—easy, in fact—to improve the overall prosperity of a society while still reducing its GDP. There are in fact redistribution policies so disastrous they make things worse: The Soviet Union had them. But a 90% tax on million-dollar incomes would not be such a policy—because we had that in 1960 with little or no ill effect.

Of course, even this has problems; one way to minimize poverty would be to exclude, relocate, or even murder all your poor people. (The Black Death increased per-capita GDP.) Open immigration generally increases poverty rates in the short term, because most of the immigrants are poor. Somehow we’d need to correct for that, only raising the score if you actually improve people’s lives, and not if you simply exclude them from the calculation.

In any case it’s not enough to have the alternative measures; we must actually use them. We must get policymakers to stop talking about “economic growth” and start talking about “human development”; a policy that raises GDP but reduces lifespan should be immediately rejected, as should one that further enriches a few at the expense of many others. We must shift the discussion away from “creating jobs”—jobs are only a means—to “creating prosperity”.