Why “marginal productivity” is no excuse for inequality

May 28, JDN 2457902

In most neoclassical models, workers are paid according to their marginal productivity—the additional (market) value of goods that a firm is able to produce by hiring that worker. This is often used as an excuse for inequality: If someone can produce more, why shouldn’t they be paid more?

The most extreme example of this is people like Maura Pennington writing for Forbes about how poor people just need to get off their butts and “do something”; but there is a whole literature in mainstream economics, particularly “optimal tax theory”, arguing based on marginal productivity that we should tax the very richest people the least and never tax capital income. The Chamley-Judd Theorem famously “shows” (by making heroic assumptions) that taxing capital just makes everyone worse off because it reduces everyone’s productivity.

The biggest reason this is wrong is that there are many, many reasons why someone would have a higher income without being any more productive. They could inherit wealth from their ancestors and get a return on that wealth; they could have a monopoly or some other form of market power; they could use bribery and corruption to tilt government policy in their favor. Indeed, most of the top 0.01% do literally all of these things.

But even if you assume that pay is related to productivity in competitive markets, the argument is not nearly as strong as it may at first appear. Here I have a simple little model to illustrate this.

Suppose there are 10 firms and 10 workers. Suppose that firm 1 has 1 unit of effective capital (capital adjusted for productivity), firm 2 has 2 units, and so on up to firm 10 which has 10 units. And suppose that worker 1 has 1 unit of so-called “human capital”, representing their overall level of skills and education, worker 2 has 2 units, and so on up to worker 10 with 10 units. Suppose each firm only needs one worker, so this is a matching problem.

Furthermore, suppose that productivity is equal to capital times human capital: That is, if firm 2 hired worker 7, they would make 2*7 = $14 of output.

What will happen in this market if it converges to equilibrium?

Well, first of all, the most productive firm is going to hire the most productive worker—so firm 10 will hire worker 10 and produce $100 of output. What wage will they pay? Well, they need a wage that is high enough to keep worker 10 from trying to go elsewhere. They should therefore pay a wage of $90—the next-most-productive firm’s capital (9) times the worker’s human capital (10). That’s the highest wage any other firm could credibly offer; so if they pay this wage, worker 10 will not have any reason to leave.

Now the problem has been reduced to matching 9 firms to 9 workers. Firm 9 will hire worker 9, making $81 of output, and paying $72 in wages.

And so on, until worker 1 at firm 1 produces $1 and receives… $0. Because there is no way for worker 1 to threaten to leave, in this model they actually get nothing. If I assume there’s some sort of social welfare system providing say $0.50, then at least worker 1 can get that $0.50 by threatening to leave and go on welfare. (This, by the way, is probably the real reason firms hate social welfare spending; it gives their workers more bargaining power and raises wages.) Or maybe they have to pay that $0.50 just to keep the worker from starving to death.

What does inequality look like in this society?
Well, the most-productive firm only has 10 times as much capital as the least-productive firm, and the most-educated worker only has 10 times as much skill as the least-educated worker, so we might think that incomes would vary only by a factor of 10.

But in fact they vary by a factor of over 100.

The richest worker makes $90, while the poorest worker makes $0.50. That’s a ratio of 180. (Still lower than the ratio of average CEO pay to average worker pay in the US, by the way.) The richest worker is only 10 times as productive as the poorest, but receives 180 times as much income.

The firm profits vary along a more reasonable scale in this case; firm 1 makes a profit of $0.50 while firm 10 makes a profit of $10. Indeed, except for firm 1, firm n always makes a profit of $n. So that’s very nearly a linear scaling in productivity.
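To make the model concrete, here’s a quick Python sketch of the matching equilibrium described above. The $0.50 welfare floor for worker 1 is the assumption from the text; everything else follows directly from the setup.

```python
# Assortative-matching model: firm n has n units of capital, worker n has
# n units of human capital, and a matched pair produces capital * human capital.
# Each worker's wage is set by their outside option: the offer the
# next-most-productive firm could credibly make.

WELFARE = 0.50  # assumed social-welfare floor for the worst-off worker

def equilibrium(n_firms=10):
    """Return (n, output, wage, profit) for each matched firm-worker pair."""
    results = []
    for n in range(n_firms, 0, -1):
        output = n * n              # firm n matches with worker n
        if n > 1:
            wage = (n - 1) * n      # next-best firm's capital times worker's skill
        else:
            wage = WELFARE          # worker 1 has no outside firm, only welfare
        results.append((n, output, wage, output - wage))
    return results

for n, output, wage, profit in equilibrium():
    print(f"firm {n:2d}: output ${output}, wage ${wage}, profit ${profit}")

print("wage ratio, richest to poorest:", (9 * 10) / WELFARE)  # 180.0
```

Running this reproduces the numbers above: firm 10 pays $90 and keeps $10, firm 9 pays $72 and keeps $9, and in general firm n keeps a profit of $n while worker 1 gets only the welfare floor.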

Where did this result come from? Why is it so different from the usual assumptions? All I did was change one thing: I allowed for increasing returns to scale.

If you make the usual assumption of constant returns to scale, this result can’t happen. Multiplying all the inputs by 10 should just multiply the output by 10, by assumption—since that is the definition of constant returns to scale.

But if you look at the structure of real-world incomes, it’s pretty obvious that we don’t have constant returns to scale.

If we had constant returns to scale, we should expect wages for the same person to vary only slightly if that person were to work in different places. In particular, to get a 2-fold increase in wage for the same worker you’d need much more than a 2-fold increase in capital.

This is a bit counter-intuitive, so let me explain a bit further. If a 2-fold increase in capital results in a 2-fold increase in wage for a given worker, that’s increasing returns to scale—indeed, it’s precisely the production function I assumed above.
If you had constant returns to scale, a 2-fold increase in wage would require something like an 8-fold increase in capital. This is because you should get a 2-fold increase in total production by doubling everything—capital, labor, human capital, whatever else. So doubling capital by itself should produce a much weaker effect. For technical reasons I’d rather not get into at the moment, usually it’s assumed that production is approximately proportional to capital to the one-third power—so to double production you need to multiply capital by 2^3 = 8.
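The arithmetic behind that 8-fold figure can be checked directly. This sketch assumes a Cobb-Douglas production function with equal one-third exponents on capital, labor, and human capital—one standard way to get constant returns to scale across all three inputs:

```python
# Assumed Cobb-Douglas production: Y = A * K^(1/3) * L^(1/3) * H^(1/3).
# Doubling every input doubles output (constant returns to scale),
# but doubling capital alone raises output by only 2^(1/3).

alpha = 1 / 3

def production(K, L=1.0, H=1.0, A=1.0):
    return A * K**alpha * L**alpha * H**alpha

base = production(1.0)
print(production(2.0) / base)   # doubling capital alone: ~1.26, not 2
print(production(8.0) / base)   # an 8-fold capital increase is what doubles output
```

So under this assumption, a firm that merely doubled its capital could only justify raising the worker’s marginal product by about 26%, nowhere near the 2-fold wage increases we actually observe.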

I wasn’t able to quickly find really good data on wages for the same workers across different countries, but this should at least give a rough idea. In Mumbai, the minimum monthly wage for a full-time worker is about $80. In Shanghai, it is about $250. If you multiply out the US federal minimum wage of $7.25 per hour by 40 hours by 4 weeks, that comes to $1160 per month.

Of course, these are not the same workers. Even an “unskilled” worker in the US has a lot more education and training than a minimum-wage worker in India or China. But it’s not that much more. Maybe if we normalize India to 1, China is 3 and the US is 10.

Likewise, these are not the same jobs. Even a minimum wage job in the US is much more capital-intensive and uses much higher technology than most jobs in India or China. But it’s not that much more. Again let’s say India is 1, China is 3 and the US is 10.

If we had constant returns to scale, what should the wages be? Well, for India at productivity 1, the wage is $80. So for China at productivity 3, the wage should be $240—it’s actually $250, close enough for this rough approximation. But the US wage should be $800—and it is in fact $1160, 45% larger than we would expect by constant returns to scale.
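Here is that back-of-the-envelope calculation spelled out; the productivity indices are the rough guesses from above, not measured values:

```python
# Rough check of the minimum-wage comparison, using the monthly wages
# and the guessed productivity indices (India 1, China 3, US 10) from the text.

wages = {"India": 80, "China": 250, "US": 7.25 * 40 * 4}  # US: $7.25/hr * 40 hr/wk * 4 wk
productivity = {"India": 1, "China": 3, "US": 10}

# Under constant returns to scale, wages should scale linearly with productivity.
base = wages["India"] / productivity["India"]
predicted = {c: base * productivity[c] for c in wages}

for c in wages:
    print(f"{c}: actual ${wages[c]:.0f}, CRS prediction ${predicted[c]:.0f}")
```

China comes out almost exactly on the linear prediction, while the US wage of $1160 overshoots the $800 prediction by 45%.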

Let’s try comparing within a particular industry, where the differences in skill and technology should be far smaller. The median salary for a software engineer in India is about 430,000 INR, which comes to about $6,700. If that sounds rather low for a software engineer, you’re probably more accustomed to the figure for US software engineers, which is $74,000. That is a factor of 11 to 1. For the same job. Maybe US software engineers are better than Indian software engineers—but are they that much better? Yes, you can adjust for purchasing power and shrink the gap: Prices in the US are about 4 times as high as those in India, so the real gap might be 3 to 1. But these huge price differences themselves need to be explained somehow, and even 3 to 1 for the same job in the same industry is still probably too large to explain by differences in either capital or education, unless you allow for increasing returns to scale.
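The same arithmetic for the software engineer comparison; the INR/USD exchange rate of 64 here is my assumption, chosen only to match the $6,700 conversion stated above:

```python
# Nominal vs. PPP-adjusted salary gap for software engineers,
# using the figures from the text.

india_salary_usd = 430_000 / 64    # assumed rate of 64 INR/USD, ~ $6,700
us_salary_usd = 74_000
ppp_factor = 4                     # US prices roughly 4x India's, per the text

nominal_gap = us_salary_usd / india_salary_usd
real_gap = nominal_gap / ppp_factor
print(round(nominal_gap), round(real_gap, 1))
```

The nominal gap is about 11 to 1, and even after the purchasing-power adjustment the real gap is still roughly 3 to 1 for the same job.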

In most industries, we probably don’t have quite as much increasing returns to scale as I assumed in my simple model. Workers in the US don’t make 100 times as much as workers in India, despite plausibly having both 10 times as much physical capital and 10 times as much human capital.

But in some industries, this model might not even be enough! The most successful authors and filmmakers, for example, make literally thousands of times as much money as the average author or filmmaker in their own country. J.K. Rowling has almost $1 billion from writing the Harry Potter series; this is despite having literally the same amount of physical capital and probably not much more human capital than the average author in the UK who makes only about 11,000 GBP—which is about $14,000. Harry Potter and the Philosopher’s Stone is now almost exactly 20 years old, which means that Rowling made an average of $50 million per year, some 3500 times as much as the average British author. Is she better than the average British author? Sure. Is she three thousand times better? I don’t think so. And we can’t even make the argument that she has more capital and technology to work with, because she doesn’t! They’re typing on the same laptops and using the same printing presses. Either the return on human capital for British authors is astronomical, or something other than marginal productivity is at work here—and either way, we don’t have anything close to constant returns to scale.

What can we take away from this? Well, if we don’t have constant returns to scale, then even if wage rates are proportional to marginal productivity, they aren’t proportional to the component of marginal productivity that you yourself bring. The same software developer makes more at Microsoft than at some Indian software company, the same doctor makes more at a US hospital than a hospital in China, the same college professor makes more at Harvard than at a community college, and J.K. Rowling makes three thousand times as much as the average British author—therefore we can’t speak of marginal productivity as inhering in you as an individual. It is an emergent property of a production process that includes you as a part. So even if you’re entirely being paid according to “your” productivity, it’s not really your productivity—it’s the productivity of the production process you’re involved in. A myriad of other factors had to snap into place to make your productivity what it is, most of which you had no control over. So in what sense, then, can we say you earned your higher pay?

Moreover, this problem becomes most acute precisely when incomes diverge the most. The differential in wages between two welders at the same auto plant may well be largely due to their relative skill at welding. But there’s absolutely no way that the top athletes, authors, filmmakers, CEOs, or hedge fund managers could possibly make the incomes they do by being individually that much more productive.

Believing in civilization without believing in colonialism

JDN 2457541

In a post last week I presented some of the overwhelming evidence that society has been getting better over time, particularly since the start of the Industrial Revolution. I focused mainly on infant mortality rates—babies not dying—but there are lots of other measures you could use as well. Despite popular belief, poverty is rapidly declining, and is now the lowest it’s ever been. War is rapidly declining. Crime is rapidly declining in First World countries, and to the best of our knowledge crime rates are stable worldwide. Public health is rapidly improving. Lifespans are getting longer. And so on, and so on. It’s not quite true to say that every indicator of human progress is on an upward trend, but the vast majority of really important indicators are.

Moreover, there is every reason to believe that this great progress is largely the result of what we call “civilization”, even Western civilization: Stable, centralized governments, strong national defense, representative democracy, free markets, openness to global trade, investment in infrastructure, science and technology, secularism, a culture that values innovation, and freedom of speech and the press. We did not get here by Marxism, nor agrarian socialism, nor primitivism, nor anarcho-capitalism. We did not get here by fascism, nor theocracy, nor monarchy. This progress was built by the center-left welfare state, “social democracy”, “modified capitalism”, the system where free, open markets are coupled with a strong democratic government to protect and steer them.

This fact is basically beyond dispute; the evidence is overwhelming. The serious debate in development economics is over which parts of the Western welfare state are most conducive to raising human well-being, and which parts of the package are more optional. And even then, some things are fairly obvious: Stable government is clearly necessary, while speaking English is clearly optional.

Yet many people are resistant to this conclusion, or even offended by it, and I think I know why: They are confusing the results of civilization with the methods by which it was established.

The results of civilization are indisputably positive: Everything I just named above, especially babies not dying.

But the methods by which civilization was established are not; indeed, some of the greatest atrocities in human history are attributable at least in part to attempts to “spread civilization” to “primitive” or “savage” people.
It is therefore vital to distinguish between the result, civilization, and the processes by which it was effected, such as colonialism and imperialism.

First, it’s important not to overstate the link between civilization and colonialism.

We tend to associate colonialism and imperialism with White people from Western European cultures conquering other people in other cultures; but in fact colonialism and imperialism are basically universal to any human culture that attains sufficient size and centralization. India engaged in colonialism, Persia engaged in imperialism, China engaged in imperialism, the Mongols were of course major imperialists, and don’t forget the Ottoman Empire; and did you realize that Tibet and Mali were at one time imperialists as well? And of course there are a whole bunch of empires you’ve probably never heard of, like the Parthians and the Ghaznavids and the Umayyads. Even many of the people we’re accustomed to thinking of as innocent victims of colonialism were themselves imperialists—the Aztecs certainly were (they even sold people into slavery and used them for human sacrifice!), as were the Pequot, and the Iroquois may not have outright conquered anyone but were definitely at least “soft imperialists” the way that the US is today, spreading their influence around and using economic and sometimes military pressure to absorb other cultures into their own.

Of course, those were all civilizations, at least in the broadest sense of the word; but before that, it’s not that there wasn’t violence, it just wasn’t organized enough to be worthy of being called “imperialism”. The more general concept of intertribal warfare is a human universal, and some hunter-gatherer tribes actually engage in an essentially constant state of warfare we call “endemic warfare”. People have been grouping together to kill other people they perceived as different for at least as long as there have been people to do so.

This is of course not to excuse what European colonial powers did when they set up bases on other continents and exploited, enslaved, or even murdered the indigenous population. And the absolute numbers of people enslaved or killed are typically larger under European colonialism, mainly because European cultures became so powerful and conquered almost the entire world. Even if European societies were not uniquely predisposed to be violent (and I see no evidence to say that they were—humans are pretty much humans), they were more successful in their violent conquering, and so more people suffered and died. It’s also a first-mover effect: If the Ming Dynasty had supported Zheng He more in his colonial ambitions, I’d probably be writing this post in Mandarin and reflecting on why Asian cultures have engaged in so much colonial oppression.

While there is a deeply condescending paternalism (and often post-hoc rationalization of your own self-interested exploitation) involved in saying that you are conquering other people in order to civilize them, humans are also perfectly capable of committing atrocities for far less noble-sounding motives. There are holy wars such as the Crusades and ethnic genocides like in Rwanda, and the Arab slave trade was purely for profit and didn’t even have the pretense of civilizing people (not that the Atlantic slave trade was ever really about that anyway).

Indeed, I think it’s important to distinguish between colonialists who really did make some effort at civilizing the populations they conquered (like Britain, and also the Mongols actually) and those that clearly were just using that as an excuse to rape and pillage (like Spain and Portugal). This is similar to but not quite the same thing as the distinction between settler colonialism, where you send colonists to live there and build up the country, and exploitation colonialism, where you send military forces to take control of the existing population and exploit them to get their resources. Countries that experienced settler colonialism (such as the US and Australia) have fared a lot better in the long run than countries that experienced exploitation colonialism (such as Haiti and Zimbabwe).

The worst consequences of colonialism weren’t even really anyone’s fault, actually. The reason something like 98% of all Native Americans died as a result of European colonization was not that Europeans killed them—they did kill thousands of course, and I hope it goes without saying that that’s terrible, but it was a small fraction of the total deaths. The reason such a huge number died and whole cultures were depopulated was disease, and the inability of medical technology in any culture at that time to handle such a catastrophic plague. The primary cause was therefore accidental, and not really foreseeable given the state of scientific knowledge at the time. (I therefore think it’s wrong to consider it genocide—maybe democide.) Indeed, what really would have saved these people would be if Europe had advanced even faster into industrial capitalism and modern science, or else waited to colonize until they had; and then they could have distributed vaccines and antibiotics when they arrived. (Of course, there is evidence that a few European colonists used the diseases intentionally as biological weapons, which no amount of vaccine technology would prevent—and that is indeed genocide. But again, this was a small fraction of the total deaths.)

However, even with all those caveats, I hope we can all agree that colonialism and imperialism were morally wrong. No nation has the right to invade and conquer other nations; no one has the right to enslave people; no one has the right to kill people based on their culture or ethnicity.

My point is that it is entirely possible to recognize that and still appreciate that Western civilization has dramatically improved the standard of human life over the last few centuries. It simply doesn’t follow from the fact that British government and culture were more advanced and pluralistic that British soldiers can just go around taking over other people’s countries and planting their own flag (follow the link if you need some comic relief from this dark topic). That was the moral failing of colonialism; not that they thought their society was better—for in many ways it was—but that they thought that gave them the right to terrorize, slaughter, enslave, and conquer people.

Indeed, the “justification” of colonialism is a lot like that bizarre pseudo-utilitarianism I mentioned in my post on torture, where the mere presence of some benefit is taken to justify any possible action toward achieving that benefit. No, that’s not how morality works. You can’t justify unlimited evil by any good—it has to be a greater good, as in actually greater.

So let’s suppose that you do find yourself encountering another culture which is clearly more primitive than yours; their inferior technology results in them living in poverty and having very high rates of disease and death, especially among infants and children. What, if anything, are you justified in doing to intervene to improve their condition?

One idea would be to hold to the Prime Directive: No intervention, no sir, not ever. This is clearly what Gene Roddenberry thought of imperialism, hence why he built it into the Federation’s core principles.

But does that really make sense? Even as the Star Trek shows progressed, the writers kept coming up with situations where the Prime Directive really seemed like it should have an exception, and sometimes decided that the honorable crew of the Enterprise or Voyager really should intervene in a more primitive society to save it from some terrible fate. And I hope I’m not committing a Fictional Evidence Fallacy when I say that if a fictional universe specifically designed to rule such interventions out keeps generating cases for them, well… maybe it’s something we should be considering.

What if people are dying of a terrible disease that you could easily cure? Should you really deny them access to your medicine to avoid intervening in their society?

What if the primitive culture is ruled by a horrible tyrant that you could easily depose with little or no bloodshed? Should you let him continue to rule with an iron fist?

What if the natives are engaged in slavery, or even their own brand of imperialism against other indigenous cultures? Can you fight imperialism with imperialism?

And then we have to ask, does it really matter whether their babies are being murdered by the tyrant or simply dying from malnutrition and infection? The babies are just as dead, aren’t they? Even if we say that being murdered by a tyrant is worse than dying of malnutrition, it can’t be that much worse, can it? Surely 10 babies dying of malnutrition is at least as bad as 1 baby being murdered?

But then it begins to seem like we have a duty to intervene, and moreover a duty that applies in almost every circumstance! If you are on opposite sides of the technology threshold where infant mortality drops from 30% to 1%, how can you justify not intervening?

I think the best answer here is to keep in mind the very large costs of intervention as well as the potentially large benefits. The answer sounds simple, but is actually perhaps the hardest possible answer to apply in practice: You must do a cost-benefit analysis. Furthermore, you must do it well. We can’t demand perfection, but it must actually be a serious good-faith effort to predict the consequences of different intervention policies.

We know that people tend to resist most outside interventions, especially if you have the intention of toppling their leaders (even if they are indeed tyrannical). Even the simple act of offering people vaccines could be met with resistance, as the native people might think you are poisoning them or somehow trying to control them. But in general, opening contact with gifts and trade is almost certainly going to trigger less hostility and therefore be more effective than going in guns blazing.

If you do use military force, it must be targeted at the particular leaders who are most harmful, and it must be designed to achieve swift, decisive victory with minimal collateral damage. (Basically I’m talking about just war theory.) If you really have such an advanced civilization, show it by exhibiting total technological dominance and minimizing the number of innocent people you kill. The NATO interventions in Kosovo and Libya mostly got this right. The Vietnam War and Iraq War got it totally wrong.

As you change their society, you should be prepared to bear most of the cost of transition; you are, after all, much richer than they are, and also the ones responsible for effecting the transition. You should not expect to see short-term gains for your own civilization, only long-term gains once their culture has advanced to a level near your own. You can’t bear all the costs of course—transition is just painful, no matter what you do—but at least the fungible economic costs should be borne by you, not by the native population. Examples of doing this wrong include basically all the standard examples of exploitation colonialism: Africa, the Caribbean, South America. Examples of doing this right include West Germany and Japan after WW2, and South Korea after the Korean War—which is to say, the greatest economic successes in the history of the human race. This was us winning development, humanity. Do this again everywhere and we will have not only ended world hunger, but achieved global prosperity.

What happens if we apply these principles to real-world colonialism? It does not fare well. Nor should it, as we’ve already established that most if not all real-world colonialism was morally wrong.

15th and 16th century colonialism fails immediately; it offered no benefit to speak of. Europe’s technological superiority was enough to give them gunpowder but not enough to drop their infant mortality rate. Maybe life was better in 16th century Spain than it was in the Aztec Empire, but honestly not by all that much; and life in the Iroquois Confederacy was in many ways better than life in 15th century England. (Though maybe that justifies some Iroquois imperialism, at least their “soft imperialism”?)

If these principles did justify any real-world imperialism—and I am not convinced that it does—it would only be much later imperialism, like the British Empire in the 19th and 20th century. And even then, it’s not clear that the talk of “civilizing” people and “the White Man’s Burden” was much more than rationalization, an attempt to give a humanitarian justification for what were really acts of self-interested economic exploitation. Even though India and South Africa are probably better off now than they were when the British first took them over, it’s not at all clear that this was really the goal of the British government so much as a side effect, and there are a lot of things the British could have done differently that would obviously have made them better off still—you know, like not implementing the precursors to apartheid, or making India a parliamentary democracy immediately instead of starting with the Raj and only conceding to democracy after decades of protest. What actually happened doesn’t exactly look like Britain cared nothing for actually improving the lives of people in India and South Africa (they did build a lot of schools and railroads, and sought to undermine slavery and the caste system), but it also doesn’t look like that was their only goal; it was more like one goal among several which also included the strategic and economic interests of Britain. It isn’t enough that Britain was a better society or even that they made South Africa and India better societies than they were; if the goal wasn’t really about making people’s lives better where you are intervening, it’s clearly not justified intervention.

And that’s the relatively beneficent imperialism; the really horrific imperialists throughout history made only the barest pretense of spreading civilization and were clearly interested in nothing more than maximizing their own wealth and power. This is probably why we get things like the Prime Directive; we saw how bad it can get, and overreacted a little by saying that intervening in other cultures is always, always wrong, no matter what. It was only a slight overreaction—intervening in other cultures is usually wrong, and almost all historical examples of it were wrong—but it is still an overreaction. There are exceptional cases where intervening in another culture can be not only morally right but obligatory.

Indeed, one underappreciated consequence of colonialism and imperialism is that they have triggered a backlash against real good-faith efforts toward economic development. People in Africa, Asia, and Latin America see economists from the US and the UK (and most of the world’s top economists are in fact educated in the US or the UK) come in and tell them that they need to do this and that to restructure their society for greater prosperity, and they understandably ask: “Why should I trust you this time?” The last two or four or seven batches of people coming from the US and Europe to intervene in their countries exploited them or worse, so why is this time any different?

It is different, of course; UNDP is not the East India Company, not by a longshot. Even for all their faults, the IMF isn’t the East India Company either. Indeed, while these people largely come from the same places as the imperialists, and may be descended from them, they are in fact completely different people, and moral responsibility does not inherit across generations. While the suspicion is understandable, it is ultimately unjustified; whatever happened hundreds of years ago, this time most of us really are trying to help—and it’s working.

The challenges of a global basic income

JDN 2457404

In the previous post I gave you the good news. Now for the bad news.

So we are hoping to implement a basic income of $3,000 per person per year worldwide, eliminating poverty once and for all.

There is no global government to implement this system. There is no global income tax to be collected or refunded. The United Nations and the World Bank, for all the good work that they do, are nowhere near powerful enough (or well-funded enough) to accomplish this feat.

Worse, the people we need to help the most, not coincidentally, live in the countries that are worst-managed. They are surrounded not only by squalor, but also by corruption, war, and ethnic tension. Most of the people are underfed, uneducated, and dying from diseases such as malaria and schistosomiasis that we could treat in a day for pocket change. Their infrastructure is either crumbling or nonexistent. Their water is unsafe to drink. And worst of all, many of their governments don’t care. Tyrants like Robert Mugabe, Kim Jong-un, King Salman (of our lovely ally Saudi Arabia), and Isaias Afwerki care nothing for the interests of the people they rule, and are interested only in maximizing their own wealth and power. If we arranged to provide grants to these countries in an amount sufficient to provide the basic income, there’s no reason to think they’d actually provide it; they’d simply deposit the check in their own personal bank accounts, and use it to buy ever more extravagant mansions or build ever greater monuments to themselves. They really do seem to follow a utility function based entirely upon their own consumption; witness your neoclassical rational agent and despair.

There are ways for international institutions and non-governmental organizations to intervene to help people in these countries, and indeed many have done so to considerable effect. As bad as things are, they are much better than they used to be, and they promise to be even better tomorrow. But there is only so much they can do without the force of law at their backs, without the power to tax incomes and print currency.

We will therefore need a new kind of institutional framework, if not a true world government then something very much like it. Establishing this new government will not be easy, and worst of all I see no way to do it other than military force. Tyrants will not give up their power willingly; it will need to be taken from them. We will need to capture and imprison tyrants like Robert Mugabe and Kim Jong-un in the same way that we once did gangsters like John Dillinger and Al Capone, for ultimately a tyrant is nothing but a mob boss with an army. Unless we can find some way to target them precisely and smoothly replace their regimes with democracies, this will mean nothing less than war, and it could kill thousands, even millions of people—but millions of people are already dying, and will continue to die as long as we leave these men in power. Sanctions might help (though sanctions kill people too), and perhaps a few can be persuaded to step down, but the rest must be overthrown, by some combination of local revolutions and international military coalitions. The best model I’ve seen for how this might be pulled off is Libya, where Qaddafi was at last removed by an international military force supporting a local revolution—but even Libya is not exactly sunshine and rainbows right now. One of the first things we need to do is seriously plan a strategy for removing repressive dictators with a minimum of collateral damage.

To many, I suspect this sounds like imperialism, colonialism redux. Didn’t so many imperialistic powers say that they were doing it to help the local population? Yes, they did; and one of the facts that we must face up to is that it was occasionally true. Or if helping the local population was not their primary motivation, it was nonetheless a consequence. Countries colonized by the British Empire in particular are now among the most prosperous, free nations in the world: The United States, Canada, Australia. South Africa and India might seem like exceptions (GDP PPP per capita of $12,400 and $5,500 respectively) but they really aren’t, compared to what they were before—or even compared to what is next to them today: Angola has a per capita GDP PPP of $7,546 while Bangladesh has only $2,991. Zimbabwe is arguably an exception (per capita GDP PPP of $1,773), but their total economic collapse occurred after the British left. To include Zimbabwe in this basic income program would literally triple the income of most of their population. But to do that, we must first get past Robert Mugabe.

Furthermore, I believe that we can avoid many of the mistakes of the past. We don’t have to do exactly the same thing that countries used to do when they invaded each other and toppled governments. Of course we should not enslave, subjugate, or murder the local population—one would hope that would go without saying, but history shows it doesn’t. We also shouldn’t annex the territory and claim it as our own, nor should we set up puppet governments that are only democratic as long as it serves our interests. (And make no mistake, we have done this, all too recently.) The goal must really be to help the people of countries like Zimbabwe and Eritrea establish their own liberal democracy, including the right to make policies we don’t like—or even policies we think are terrible ideas. If we can do so without war, of course we should. But right now what is usually called “pacifism” leaves millions of people to starve while we do nothing.

The argument that we have previously supported (or even continue to support, ahem, Saudi Arabia) many of these tyrants is sort of beside the point. Yes, that is clearly true; and yes, that is clearly terrible. But do you think that if we simply leave the situation alone they’ll go away? We should never have propped up Saddam Hussein or supported the mujahideen who became the Taliban; and yes, I do think we could have known that at the time. But once they are there, what do you propose to do now? Wait for them to die? Hope they collapse on their own? Give our #thoughtsandprayers to revolutionaries? When asked what you think we should do, “We shouldn’t have done X” is not a valid response.

Imagine there is a mob boss who had kidnapped several families and is holding them in a warehouse. Suppose that at some point the police supported the mob boss in some way; in a deal to undermine a worse rival mafia family, they looked the other way on some things he did, or even gave him money that he used to strengthen his mob. (With actual police, the former is questionable, but actually done all the time; the latter would be definitely illegal. In the international analogy, both are ubiquitous.) Even suppose that the families who were kidnapped were previously from a part of town that the police would regularly shake down for petty crimes and incessant stop-and-frisks. The police definitely have a lot to answer for in all this; their crimes should not be forgotten. But how does it follow in any way that the police should not intervene to rescue the families from the warehouse? Suppose we even know that the warehouse is heavily guarded, and the resulting firefight may kill some of the hostages we are hoping to save. This gives us reason to negotiate, or to find the swiftest, most precise means to deploy the SWAT teams; but does it give us reason to do nothing?

Once again I think Al Capone is the proper analogy; when the FBI captured Al Capone, they didn’t bomb Chicago to the ground, nor did they attempt to enslave the population of Illinois. They thought of themselves as targeting one man and his lieutenants and re-establishing order and civil government to a free people; that is what we must do in Eritrea and Zimbabwe. (In response to all this, no doubt someone will say: “You just want the US to be the world’s police.” Well, no, I want an international coalition; but yes, given our military and economic hegemony, the US will take a very important role. Above all, yes, I want the world to have police. Why don’t you?)

For everything we did wrong in the recent wars in Afghanistan and Iraq, I think we actually did this part right: Afghanistan’s GDP PPP per capita has risen over 70% since 2002, and Iraq’s is now 17% higher than its pre-war peak. It’s a bit early to say whether we have really established stable liberal democracies there, and the Iraq War surely contributed to the rise of Daesh; but when the previous condition was the Taliban and Saddam Hussein it’s hard not to feel that things are at least somewhat improving. In a generation or two maybe we really will say “Iraq” in the same breath as “Korea” as one of the success stories of prosperous democracies set up after US wars. Or maybe it will all fall apart; it’s hard to say at this point.

So, we must find a way to topple the tyrants. Once that is done, we will need to funnel huge amounts of resources (at least one if not two orders of magnitude more than our current level of foreign aid) into building infrastructure, educating people, and establishing sound institutions. Our current “record high” foreign aid is less than 0.3% of the world’s GDP. We have a model for this as well: It’s what we did in West Germany and Japan after WW2, as well as what we did in South Korea after the Korean War. It is not a coincidence that Germany soon regained its status as a world power while Japan and Korea were the first of the “Asian Tigers”, East Asian nations that rose up to join us at a First World standard of living.

Will all of this be expensive? Absolutely. By assuming $3,000 per person per year I am already figuring in an expenditure of $21 trillion per year, indefinitely. This would be the most expensive project upon which humanity has ever embarked. But it could also be the most important—an end to poverty, everywhere, forever. And we have that money; we’re simply using it for other things. At purchasing power parity the world spends over $100 trillion per year. Using 20% of the world’s income to eliminate poverty forever doesn’t seem like such a bad deal to me. (It’s not like the money would disappear; it would be immediately spent back into the economy anyway. We might even see growth as a result.)
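The arithmetic behind that $21 trillion figure is easy to check. (The $3,000 basic income and the roughly $100 trillion world income are from the text above; the world population of about 7 billion is my assumption, since it is what makes the stated total come out.)

```python
# Back-of-envelope check of the global basic income cost.
# Assumption (not stated explicitly in the text): world population ~7 billion.
basic_income = 3_000           # dollars per person per year
population = 7_000_000_000     # approximate world population
world_income_ppp = 100e12      # dollars per year at purchasing power parity

total_cost = basic_income * population
print(f"Total cost: ${total_cost / 1e12:.0f} trillion per year")
print(f"Share of world income: {total_cost / world_income_ppp:.0%}")
```

Since the text says world income is "over" $100 trillion, the share works out to roughly the 20% quoted.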

When dealing with events on this scale, it’s easy to get huge numbers that sound absurd. But even if we assumed that only the US, Europe, and China supported this program, it would only take 37% of our combined income—roughly what we currently spend on housing.

Whenever people complain, “We spend billions of dollars a year on aid, and we haven’t solved world hunger!” the proper answer is, “That’s right; we should be spending trillions.”

To truly honor veterans, end war

JDN 2457339 EST 20:00 (Nov 11, 2015)

Today is Veterans’ Day, on which we are asked to celebrate the service of military veterans, particularly those who have died as a result of war. We tend to focus on those who die in combat, but such deaths have always been relatively uncommon; throughout history, most soldiers have died later of their wounds or of infections. More recently, as a result of advances in body armor and medicine, relatively few soldiers die even of war wounds or infections—instead, they are permanently maimed and psychologically damaged, and the most common way that war kills soldiers now is by driving them to suicide.

Even adjusting for the fact that soldiers are mostly young men (the group of people most likely to commit suicide), military veterans still have about 50 excess suicides per million people per year, for a total of about 300 suicides per million per year. Using the total number, that’s over 8000 veteran suicides per year, or 22 per day. Using only the excess compared to men of the same ages, it’s still an additional 1300 suicides per year.

While the 14-years-and-counting Afghanistan War has killed 2,271 American soldiers and the 11-year Iraq War has killed 4,491 American soldiers directly (or as a result of wounds), during that same time period from 2001 to 2015 there have been about 18,000 excess suicides as a result of the military—excess in the sense that they would not have occurred if those men had been civilians. Altogether that means there would be nearly 25,000 additional American soldiers alive today were it not for these two wars.
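Putting the figures quoted above together (all numbers are the ones stated in the text):

```python
# Combat and wound deaths plus excess suicides from the two wars, 2001-2015,
# using the figures quoted in the text.
afghanistan_deaths = 2_271
iraq_deaths = 4_491
excess_suicides = 18_000   # excess relative to civilian men of the same ages

total = afghanistan_deaths + iraq_deaths + excess_suicides
print(total)   # 24762, i.e. "nearly 25,000 additional American soldiers"

# The per-day suicide figure, from ~8,000 veteran suicides per year:
print(round(8_000 / 365))   # about 22 per day
```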

War does not only kill soldiers while they are on the battlefield—indeed, most of the veterans it kills die here at home.

There is a reason Woodrow Wilson chose November 11 as the date for Armistice Day, the holiday that would later become Veterans’ Day: It was on this day in 1918 that World War 1, up to that point the deadliest war in modern history, was officially ended. Sadly, it did not remain the deadliest, but was surpassed by World War 2 a generation later. Fortunately, no other war has ever exceeded World War 2—at least, not yet.

We tend to celebrate holidays like this with a lot of ritual and pageantry (or even in the most inane and American way possible, with free restaurant meals and discounts on various consumer products), and there’s nothing inherently wrong with that. Nor is there anything wrong with taking a moment to salute the flag or say “Thank you for your service.” But that is not how I believe veterans should be honored. If I were a veteran, that is not how I would want to be honored.

We are getting much closer to how I think they should be honored when the White House announces reforms at Veterans’ Affairs hospitals and guaranteed in-state tuition at public universities for families of veterans—things that really do in a concrete and measurable way improve the lives of veterans and may even save some of them from that cruel fate of suicide.

But ultimately there is only one way that I believe we can truly honor veterans and the spirit of the holiday as Wilson intended it, and that is to end war once and for all.

Is this an ambitious goal? Absolutely. But is it an impossible dream? I do not believe so.

In just the last half century, we have already made most of the progress that needed to be made. In this brilliant video animation, you can see two things: First, the mind-numbingly horrific scale of World War 2, the worst war in human history; but second, the incredible progress we have made since then toward world peace. It was as if the world needed that one time to be so unbearably horrible in order to finally realize just what war is and why we need a better way of solving conflicts.

This is part of a very long-term trend in declining violence, for a variety of reasons that are still not thoroughly understood. In simplest terms, human beings just seem to be getting better at not killing each other.

Nassim Nicholas Taleb argues that this is just a statistical illusion, because technologies like nuclear weapons create the possibility of violence on a previously unimaginable scale, and it simply hasn’t happened yet. For nuclear weapons in particular, I think he may be right—the consequences of nuclear war are simply so catastrophic that even a small risk of it is worth paying almost any price to avoid.

Fortunately, nuclear weapons are not necessary to prevent war: South Africa has no designs on attacking Japan anytime soon, yet neither country has nuclear weapons. Germany and Poland lack nuclear arsenals and were the first countries to fight in World War 2, but now that both are part of the European Union, war between them seems almost unthinkable. When American commentators fret about China today it is always about wage competition and Treasury bonds, not aircraft carriers and nuclear missiles. Conversely, North Korea’s acquisition of nuclear weapons has by no means stabilized the region against future conflicts, and the fact that India and Pakistan have nuclear missiles pointed at one another has hardly prevented them from killing each other over Kashmir. We do not need nuclear weapons as a constant threat of annihilation in order to learn to live together; political and economic ties achieve that goal far more reliably.

And I think Taleb is wrong about the trend in general. He argues that the only reason violence is declining is that concentration of power has made violence rarer but more catastrophic when it occurs. Yet we know that many forms of violence which used to occur no longer do, not because of the overwhelming force of a Leviathan to prevent them, but because people simply choose not to do them anymore. There are no more gladiator fights, no more cat-burnings, no more public lynchings—not because of the expansion in government power, but because our society seems to have grown out of that phase.

Indeed, what horrifies us about ISIS and Boko Haram would have been considered quite normal, even civilized, in the Middle Ages. (If you’ve ever heard someone say we should “bring back chivalry”, you should explain to them that the system of knightly chivalry in the 12th century had basically the same moral code as ISIS today—one of the commandments that Gautier’s La Chevalerie lists as part of the chivalric code is literally “Thou shalt make war against the infidel without cessation and without mercy.”) It is not so much that they are uniquely evil by historical standards, as that we grew out of that sort of barbaric violence a while ago but they don’t seem to have gotten the memo.

In fact, one thing people don’t seem to understand about Steven Pinker’s argument about this “Long Peace” is that it still works if you include the world wars. The reason World War 2 killed so many people was not that it was uniquely brutal, nor even simply because its weapons were more technologically advanced. It also had to do with the scale of integration—we called it a single war even though it involved dozens of countries because those countries were all united into one of two sides, whereas in centuries past that many countries could be constantly fighting each other in various combinations but it would never be called the same war. But the primary reason World War 2 killed the largest raw number of people was simply because the world population was so much larger. Controlling for world population, World War 2 was not even among the top 5 worst wars—it barely makes the top 10. The worst war in history by proportion of the population killed was almost certainly the An Lushan Rebellion in 8th century China, which many of you may not even have heard of until today.

Though it may not seem so as ISIS kidnaps Christians and drone strikes continue, shrouded in secrecy, we really are on track to end war. Not today, not tomorrow, maybe not in any of our lifetimes—but someday, we may finally be able to celebrate Veterans’ Day as it was truly intended: To honor our soldiers by making it no longer necessary for them to die.

What makes a nation wealthy?

JDN 2457251 EDT 10:17

One of the central questions of economics—perhaps the central question, the primary reason why economics is necessary and worthwhile—is development: How do we raise a nation from poverty to prosperity?

We have done it before: France and Germany rose from the quite literal ashes of World War 2 to some of the most prosperous societies in the world. Their per-capita GDP over the 20th century rose like this (all of these figures are from the World Bank World Development Indicators; France is green, Germany is blue):



The top graph is at market exchange rates, the bottom is correcting for purchasing power parity (PPP). The PPP figures are more meaningful, but unfortunately they only began collecting good data on purchasing power around 1990.

Around the same time, but even more spectacularly, Japan and South Korea rose from poverty-stricken Third World backwaters to high-tech First World powers in only a couple of generations. Check out their per-capita GDP over the 20th century (Japan is green, South Korea is blue):


This is why I am only half-joking when I define development economics as “the ongoing project to figure out what happened in South Korea and make it happen everywhere in the world”.

More recently China has been on a similar upward trajectory, which is particularly important since China comprises such a huge portion of the world’s population—but they are far from finished:


Compare these to societies that have not achieved economic development, such as Zimbabwe (green), India (black), Ghana (red), and Haiti (blue):


They’re so poor that you can barely see them on the same scale, so I’ve rescaled so that the top is $5,000 per person per year instead of $50,000:


Only India actually manages to get above $5,000 per person per year at purchasing power parity, and then not by much, reaching $5,243 per person per year in 2013, the most recent data.

I had wanted to compare North Korea and South Korea, because the two countries were united as recently as 1945 and were not all that different to begin with, yet have taken completely different development trajectories. Unfortunately, North Korea is so impoverished, corrupt, and authoritarian that the World Bank doesn’t even report data on their per-capita GDP. Perhaps that is contrast enough?

And then of course there are the countries in between, which have made some gains but still have a long way to go, such as Uruguay (green) and Botswana (blue):


But despite the fact that we have observed successful economic development, we still don’t really understand how it works. A number of theories have been proposed, involving a wide range of factors including exports, corruption, disease, institutions of government, liberalized financial markets, and natural resources (counter-intuitively, more natural resources tend to make development worse, the so-called “resource curse”).

I’m not going to resolve that whole debate in a single blog post. (I may not be able to resolve that whole debate in a single career, though I am definitely trying.) We may ultimately find that economic development is best conceived as like “health”; what factors determine your health? Well, a lot of things, and if any one thing goes badly enough wrong the whole system can break down. Economists may need to start thinking of ourselves as akin to doctors (or as Keynes famously said, dentists), diagnosing particular disorders in particular patients rather than seeking one unifying theory. On the other hand, doctors depend upon biologists, and it’s not clear that we yet understand development even at that level.

Instead I want to take a step back, and ask a more fundamental question: What do we mean by prosperity?

My hope is that if we can better understand what it is we are trying to achieve, we can also better understand the steps we need to take in order to get there.

Thus far it has sort of been “I know it when I see it”; we take it as more or less given that the United States and the United Kingdom are prosperous while Ghana and Haiti are not. I certainly don’t disagree with that particular conclusion; I’m just asking what we’re basing it on, so that we can hopefully better apply it to more marginal cases.

For example: Is France more or less prosperous than Saudi Arabia? If we go solely by GDP per capita PPP, clearly Saudi Arabia is more prosperous at $53,100 per person per year than France is at $37,200 per person per year.

But people actually live longer in France, on average, than they do in Saudi Arabia. Overall reported happiness is higher in France than Saudi Arabia. I think France is actually more prosperous.

In fact, I think the United States is not as prosperous as we pretend ourselves to be. We are certainly more prosperous than most other countries; we are definitely still well within First World status. But we are not the most prosperous nation in the world.

Our total GDP is astonishingly high (highest in the world nominally, second only to China PPP). Our GDP per-capita is higher than any other country of comparable size; no nation with higher GDP PPP than the US has a population larger than the Chicago metropolitan area. (You may be surprised to find that in order from largest to smallest population the countries with higher GDP per capita PPP are the United Arab Emirates, Switzerland, Hong Kong, Singapore, and then Norway, followed by Kuwait, Qatar, Luxembourg, Brunei, and finally San Marino—which is smaller than Ann Arbor.) Our per-capita GDP PPP of $51,300 is markedly higher than that of France ($37,200), Germany ($42,900), or Sweden ($43,500).

But at the same time, if you compare the US to other First World countries, we have nearly the highest rates of child poverty and infant mortality. We have shorter life expectancy and dramatically higher homicide rates. Our inequality is the highest in the developed world. In France and Sweden, the top 0.01% receive about 1% of the income (i.e. 100 times as much as the average person), while in the United States they receive almost 4%, making someone in the top 0.01% nearly 400 times as rich as the average person.
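The 100-times and 400-times figures follow directly from the income shares, since a group’s average income relative to the overall average is just its income share divided by its population share. A minimal check (the shares are the ones quoted above):

```python
# Relative income of a group = (income share) / (population share).
def relative_income(income_share, population_share):
    return income_share / population_share

# Top 0.01% in France/Sweden: ~1% of all income
print(round(relative_income(0.01, 0.0001)))   # 100 times the average
# Top 0.01% in the United States: ~4% of all income
print(round(relative_income(0.04, 0.0001)))   # 400 times the average
```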

By ranking countries solely by GDP per capita, we are effectively rigging the game in our own favor. Or rather, the rich in the United States are rigging the game in their own favor (what else is new?), by convincing all the world’s economists to rank countries based on a measure that favors them.

Amartya Sen, one of the greats of development economics, helped develop a scale called the Human Development Index that attempts to take broader factors into account. It’s far from perfect, but it’s definitely a step in the right direction.

In particular, France’s HDI is higher than that of Saudi Arabia, fitting my intuition about which country is truly more prosperous. However, the US still does extremely well, with only Norway, Australia, Switzerland, and the Netherlands above us. I think the index might still be biased toward high average incomes rather than overall happiness.

In practice, we still use GDP an awful lot, probably because it’s much easier to measure. It’s sort of like IQ tests and SAT scores; we know damn well it’s not measuring what we really care about, but because it’s so much easier to work with we keep using it anyway.

This is a problem, because the better you get at optimizing toward the wrong goal, the worse your overall outcomes are going to be. If you are just sort of vaguely pointed at several reasonable goals, you will probably be improving your situation overall. But when you start precisely optimizing to a specific wrong goal, it can drag you wildly off course.

This is what we mean when we talk about “gaming the system”. Consider test scores, for example. If you do things that will probably increase your test scores among other things, you are likely to engage in generally good behaviors like getting enough sleep, going to class, studying the content. But if your single goal is to maximize your test score at all costs, what will you do? Cheat, of course.

This is also related to the Friendly AI Problem: It is vitally important to know precisely what goals we want our artificial intelligences to have, because whatever goals we set, they will probably be very good at achieving them. Already computers can do many things that were previously impossible, and as they improve over time we will reach the point where in a meaningful sense our AIs are even smarter than we are. When that day comes, we will want to make very, very sure that we have designed them to want the same things that we do—because if our desires ever come into conflict, theirs are likely to win. The really scary part is that right now most of our AI research is done by for-profit corporations or the military, and “maximize my profit” and “kill that target” are most definitely not the ultimate goals we want in a superintelligent AI. It’s trivially easy to see what’s wrong with these goals: For the former, hack into the world banking system and transfer trillions of dollars to the company accounts. For the latter, hack into the nuclear launch system and launch a few ICBMs in the general vicinity of the target. Yet these are the goals we’ve been programming into the actual AIs we build!

If we set GDP per capita as our ultimate goal to the exclusion of all other goals, there are all sorts of bad policies we would implement: We’d ignore inequality until it reached staggering heights, ignore work stress even as it began to kill us, constantly try to maximize the pressure for everyone to work constantly, use poverty as a stick to force people to work even if people starve, inundate everyone with ads to get them to spend as much as possible, repeal regulations that protect the environment, workers, and public health… wait. This isn’t actually hypothetical, is it? We are doing those things.

At least we’re not trying to maximize nominal GDP, or we’d have long since ended up like Zimbabwe. No, our economists are at least smart enough to adjust for purchasing power. But they’re still designing an economic system that works us all to death to maximize the number of gadgets that come off assembly lines. The purchasing-power adjustment doesn’t include the value of our health or free time.

This is why the Human Development Index is a major step in the right direction; it reminds us that society has other goals besides maximizing the total amount of money that changes hands (because that’s actually all that GDP is measuring; if you get something for free, it isn’t counted in GDP). More recent refinements include things like “natural resource services” that include environmental degradation in estimates of investment. Unfortunately there is no accepted way of doing this, and surprisingly little research on how to improve our accounting methods. Many nations seem resistant to doing so precisely because they know it would make their economic policy look bad—this is almost certainly why China canceled its “green GDP” initiative. This is in fact all the more reason to do it; if it shows that our policy is bad, that means our policy is bad and should be fixed. But people have allowed themselves to value image over substance.

We can do better still, and in fact I think something like the QALY (quality-adjusted life year) is probably the way to go. Rather than some weird arbitrary scaling of GDP with lifespan and Gini index (which is what the HDI is), we need to put everything in the same units, and those units must be directly linked to human happiness. At the very least, we should make some sort of adjustment to our GDP calculation that includes the distribution of wealth and its marginal utility; adding $1,000 to the economy and handing it to someone in poverty should count for a great deal, but adding $1,000,000 and handing it to a billionaire should count for basically nothing. (It’s not bad to give a billionaire another million; but it’s hardly good either, as no one’s real standard of living will change.) Calculating that could be as simple as dividing the gain by the recipient’s current income; if your annual income is $10,000 and you receive $1,000, you’ve added about 0.1 QALY. If your annual income is $1 billion and you receive $1 million, you’ve added only 0.001 QALY. Maybe we should simply take the logarithm of each individual (or household, for simplicity) income and use the sum of those logarithms as our “utility-adjusted GDP”. The results would no doubt be quite different.
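Here is a minimal sketch of that calculation. The two transfer examples are the ones from the text; the three-person "economy" is purely illustrative, and the divide-by-income rule is the first-order approximation the text describes (marginal utility of income proportional to 1/income, which is what a logarithmic utility function implies).

```python
import math

def utility_adjusted_gdp(incomes):
    """Sum of log incomes: a doubling counts the same for everyone."""
    return sum(math.log(y) for y in incomes)

def qaly_gain(income, transfer):
    """First-order approximation: divide the transfer by current income."""
    return transfer / income

# The two examples from the text:
print(qaly_gain(10_000, 1_000))             # 0.1
print(qaly_gain(1_000_000_000, 1_000_000))  # 0.001

# A toy three-person economy: the same utility-adjusted GDP rises far more
# when $1,000 goes to the poorest member than when $1,000,000 goes
# to the richest.
incomes = [10_000, 50_000, 1_000_000_000]
base = utility_adjusted_gdp(incomes)
gain_poor = utility_adjusted_gdp([11_000, 50_000, 1_000_000_000]) - base
gain_rich = utility_adjusted_gdp([10_000, 50_000, 1_001_000_000]) - base
print(gain_poor > gain_rich)   # True
```

Note that under the log measure, giving $1,000,000 to the billionaire adds less than giving a tenth of one percent of that to the poorest person, which is exactly the property we want.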

This would create a strong pressure for policy to be directed at reducing inequality even at the expense of some economic output—which is exactly what we should be willing to do. If it’s really true that a redistribution policy would hurt the overall economy so much that the harms would outweigh the benefits, then we shouldn’t do that policy; but that is what you need to show. Reducing total GDP is not a sufficient reason to reject a redistribution policy, because it’s quite possible—easy, in fact—to improve the overall prosperity of a society while still reducing its GDP. There are in fact redistribution policies so disastrous they make things worse: The Soviet Union had them. But a 90% tax on million-dollar incomes would not be such a policy—because we had that in 1960 with little or no ill effect.

Of course, even this has problems; one way to minimize poverty would be to exclude, relocate, or even murder all your poor people. (The Black Death increased per-capita GDP.) Open immigration generally increases poverty rates in the short term, because most of the immigrants are poor. Somehow we’d need to correct for that, raising the score only if you actually improve people’s lives, and not if you simply exclude them from the calculation.

In any case it’s not enough to have the alternative measures; we must actually use them. We must get policymakers to stop talking about “economic growth” and start talking about “human development”; a policy that raises GDP but reduces lifespan should be immediately rejected, as should one that further enriches a few at the expense of many others. We must shift the discussion away from “creating jobs”—jobs are only a means—to “creating prosperity”.