Why “marginal productivity” is no excuse for inequality

May 28, JDN 2457902

In most neoclassical models, workers are paid according to their marginal productivity—the additional (market) value of goods that a firm is able to produce by hiring that worker. This is often used as an excuse for inequality: If someone can produce more, why shouldn’t they be paid more?

The most extreme example of this is people like Maura Pennington writing for Forbes about how poor people just need to get off their butts and “do something”; but there is a whole literature in mainstream economics, particularly “optimal tax theory”, arguing based on marginal productivity that we should tax the very richest people the least and never tax capital income. The Chamley-Judd Theorem famously “shows” (by making heroic assumptions) that taxing capital just makes everyone worse off because it reduces everyone’s productivity.

The biggest reason this is wrong is that there are many, many reasons why someone would have a higher income without being any more productive. They could inherit wealth from their ancestors and get a return on that wealth; they could have a monopoly or some other form of market power; they could use bribery and corruption to tilt government policy in their favor. Indeed, most of the top 0.01% do literally all of these things.

But even if you assume that pay is related to productivity in competitive markets, the argument is not nearly as strong as it may at first appear. Here I have a simple little model to illustrate this.

Suppose there are 10 firms and 10 workers. Suppose that firm 1 has 1 unit of effective capital (capital adjusted for productivity), firm 2 has 2 units, and so on up to firm 10 which has 10 units. And suppose that worker 1 has 1 unit of so-called “human capital”, representing their overall level of skills and education, worker 2 has 2 units, and so on up to worker 10 with 10 units. Suppose each firm only needs one worker, so this is a matching problem.

Furthermore, suppose that productivity is equal to capital times human capital: That is, if firm 2 hired worker 7, they would make 2*7 = $14 of output.

What will happen in this market if it converges to equilibrium?

Well, first of all, the most productive firm is going to hire the most productive worker—so firm 10 will hire worker 10 and produce $100 of output. What wage will they pay? Well, they need a wage that is high enough to keep worker 10 from trying to go elsewhere. They should therefore pay a wage of $90—the next-best firm’s capital times the worker’s human capital (9*10 = $90). That’s the highest wage any other firm could credibly offer, since firm 9 would produce exactly $90 with worker 10 and so could offer at most that, and only at zero profit; so if they pay this wage, worker 10 will not have any reason to leave.

Now the problem has been reduced to matching 9 firms to 9 workers. Firm 9 will hire worker 9, making $81 of output and paying $72 in wages (8*9, the most firm 8 could credibly offer).

And so on, until worker 1 at firm 1 produces $1 and receives… $0. Because there is no way for worker 1 to threaten to leave, in this model they actually get nothing. If I assume there’s some sort of social welfare system providing, say, $0.50, then at least worker 1 can get that $0.50 by threatening to leave and go on welfare. (This, by the way, is probably the real reason firms hate social welfare spending; it gives their workers more bargaining power and raises wages.) Or maybe they have to pay that $0.50 just to keep the worker from starving to death.
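To make this concrete, here is a minimal sketch of the model in Python. The $0.50 welfare floor is the assumption from the previous paragraph; everything else follows the setup above.

```python
# Minimal sketch of the matching model: firm n has n units of capital,
# worker n has n units of human capital, and output = capital * human capital.
WELFARE_FLOOR = 0.50  # assumed outside option provided by the welfare system

def match_and_pay(n_firms=10):
    """Match firms to workers from the top down and compute equilibrium wages."""
    rows = []
    for k in range(n_firms, 0, -1):
        output = k * k  # firm k hires worker k (assortative matching)
        # Worker k's wage is bid up to the most the next-best firm could offer:
        # firm (k-1) would produce (k-1)*k with worker k, so it can offer at most
        # that; worker 1 has no outside firm and falls back on the welfare floor.
        wage = max((k - 1) * k, WELFARE_FLOOR)
        rows.append((k, output, wage, output - wage))
    return rows

for k, output, wage, profit in sorted(match_and_pay()):
    print(f"firm {k:2d}: output ${output:6.2f}  wage ${wage:6.2f}  profit ${profit:5.2f}")
```

Running it reproduces the numbers above: worker 10 earns $90, worker 1 earns $0.50, and every firm above the first makes a profit equal to its own capital.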

What does inequality look like in this society?
Well, the most-productive firm only has 10 times as much capital as the least-productive firm, and the most-educated worker only has 10 times as much skill as the least-educated worker, so we might think that incomes would vary only by a factor of 10.

But in fact they vary by a factor of over 100.

The richest worker makes $90, while the poorest worker makes $0.50. That’s a ratio of 180. (Still lower than the ratio of average CEO pay to average worker pay in the US, by the way.) The richest worker is only 10 times as productive as the poorest, but receives 180 times as much income.

The firm profits vary along a more reasonable scale in this case; firm 1 makes a profit of $0.50 while firm 10 makes a profit of $10. Indeed, except for firm 1, firm n always makes a profit of $n. So that’s very nearly a linear scaling in productivity.

Where did this result come from? Why is it so different from the usual assumptions? All I did was change one thing: I allowed for increasing returns to scale.

If you make the usual assumption of constant returns to scale, this result can’t happen. Multiplying all the inputs by 10 should just multiply the output by 10, by assumption—since that is the definition of constant returns to scale.

But if you look at the structure of real-world incomes, it’s pretty obvious that we don’t have constant returns to scale.

If we had constant returns to scale, we should expect the same person’s wage to vary only modestly across different workplaces, far less than proportionally to the capital they work with. In particular, to get a 2-fold increase in wage for the same worker you’d need much more than a 2-fold increase in capital.

This is a bit counterintuitive, so let me explain further. If a 2-fold increase in capital results in a 2-fold increase in wage for a given worker, that’s increasing returns to scale—indeed, it’s precisely the production function I assumed above.
If you had constant returns to scale, a 2-fold increase in wage would require something like an 8-fold increase in capital. This is because you should get a 2-fold increase in total production by doubling everything—capital, labor, human capital, whatever else. So doubling capital by itself should produce a much weaker effect. For technical reasons I’d rather not get into at the moment, usually it’s assumed that production is approximately proportional to capital to the one-third power—so to double production you need to multiply capital by 2^3 = 8.
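To spell out that arithmetic, take the standard Cobb-Douglas production function with the conventional one-third capital share (this exact calibration is an assumption, but it is the textbook one):

$$Y = A\,K^{1/3}L^{2/3} \quad\Rightarrow\quad w = \frac{\partial Y}{\partial L} = \frac{2}{3}\,A\left(\frac{K}{L}\right)^{1/3} \propto K^{1/3}$$

Holding labor fixed, the wage scales as the cube root of capital, so doubling the wage requires multiplying capital by 2^3 = 8.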

I wasn’t able to quickly find really good data on wages for the same workers across different countries, but this should at least give a rough idea. In Mumbai, the minimum monthly wage for a full-time worker is about $80. In Shanghai, it is about $250. If you multiply out the US federal minimum wage of $7.25 per hour by 40 hours by 4 weeks, that comes to $1160 per month.

Of course, these are not the same workers. Even an “unskilled” worker in the US has a lot more education and training than a minimum-wage worker in India or China. But it’s not that much more. Maybe if we normalize India to 1, China is 3 and the US is 10.

Likewise, these are not the same jobs. Even a minimum wage job in the US is much more capital-intensive and uses much higher technology than most jobs in India or China. But it’s not that much more. Again let’s say India is 1, China is 3 and the US is 10.

If we had constant returns to scale, what should the wages be? Well, for India at productivity 1, the wage is $80. So for China at productivity 3, the wage should be $240—it’s actually $250, close enough for this rough approximation. But the US wage should be $800—and it is in fact $1160, 45% larger than we would expect by constant returns to scale.
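Here is that comparison as a quick sanity check in Python; the productivity normalizations of 1, 3, and 10 are the rough guesses from the preceding paragraphs, not measured values.

```python
# Constant-returns benchmark: scaling ALL inputs by x should scale wages by x,
# so predicted wage = (Mumbai base wage) * (relative productivity).
base_wage = 80  # approximate minimum monthly wage in Mumbai, USD
observations = [("India", 1, 80), ("China", 3, 250), ("US", 10, 1160)]

for country, productivity, actual in observations:
    predicted = base_wage * productivity
    print(f"{country:5s}: predicted ${predicted:5d}, actual ${actual:5d}, "
          f"ratio {actual / predicted:.2f}")
```

India and China come out almost exactly on the constant-returns line, but the US wage overshoots the prediction by 45%.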

Let’s try comparing within a particular industry, where the differences in skill and technology should be far smaller. The median salary for a software engineer in India is about 430,000 INR, which comes to about $6,700. If that sounds rather low for a software engineer, you’re probably more accustomed to the figure for US software engineers, which is $74,000. That is a factor of 11 to 1. For the same job. Maybe US software engineers are better than Indian software engineers—but are they that much better? Yes, you can adjust for purchasing power and shrink the gap: Prices in the US are about 4 times as high as those in India, so the real gap might be 3 to 1. But these huge price differences themselves need to be explained somehow, and even 3 to 1 for the same job in the same industry is still probably too large to explain by differences in either capital or education, unless you allow for increasing returns to scale.

In most industries, we probably don’t have quite as much increasing returns to scale as I assumed in my simple model. Workers in the US don’t make 100 times as much as workers in India, despite plausibly having both 10 times as much physical capital and 10 times as much human capital.

But in some industries, this model might not even be enough! The most successful authors and filmmakers, for example, make literally thousands of times as much money as the average author or filmmaker in their own country. J.K. Rowling has almost $1 billion from writing the Harry Potter series; this is despite having literally the same amount of physical capital and probably not much more human capital than the average author in the UK who makes only about 11,000 GBP—which is about $14,000. Harry Potter and the Philosopher’s Stone is now almost exactly 20 years old, which means that Rowling made an average of $50 million per year, some 3500 times as much as the average British author. Is she better than the average British author? Sure. Is she three thousand times better? I don’t think so. And we can’t even make the argument that she has more capital and technology to work with, because she doesn’t! They’re typing on the same laptops and using the same printing presses. Either the return on human capital for British authors is astronomical, or something other than marginal productivity is at work here—and either way, we don’t have anything close to constant returns to scale.
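Just to verify that arithmetic (all figures as quoted above):

```python
# Back-of-the-envelope check of the Rowling comparison.
rowling_total = 1_000_000_000  # ~$1 billion earned over ~20 years
years = 20
avg_uk_author_income = 14_000  # ~11,000 GBP, about $14,000 per year

rowling_per_year = rowling_total / years
print(rowling_per_year)                         # 50,000,000 per year
print(rowling_per_year / avg_uk_author_income)  # ~3,571, i.e. roughly 3,500x
```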

What can we take away from this? Well, if we don’t have constant returns to scale, then even if wage rates are proportional to marginal productivity, they aren’t proportional to the component of marginal productivity that you yourself bring. The same software developer makes more at Microsoft than at some Indian software company, the same doctor makes more at a US hospital than at a hospital in China, the same college professor makes more at Harvard than at a community college, and J.K. Rowling makes three thousand times as much as the average British author—therefore we can’t speak of marginal productivity as inhering in you as an individual. It is an emergent property of a production process that includes you as a part. So even if you’re entirely being paid according to “your” productivity, it’s not really your productivity—it’s the productivity of the production process you’re involved in. A myriad of other factors had to snap into place to make your productivity what it is, most of which you had no control over. So in what sense, then, can we say you earned your higher pay?

Moreover, this problem becomes most acute precisely when incomes diverge the most. The differential in wages between two welders at the same auto plant may well be largely due to their relative skill at welding. But there’s absolutely no way that the top athletes, authors, filmmakers, CEOs, or hedge fund managers could possibly make the incomes they do by being individually that much more productive.

No, this isn’t like Watergate. It’s worse.

May 21, JDN 2457895

Make no mistake: This is a historic moment. This may be the greatest corruption scandal in the history of the United States. Donald Trump has fired the director of the FBI in order to block an investigation—and he said so himself.

It has become cliché to compare scandals to Watergate—to the point where we even stick the suffix “-gate” on things to indicate scandals. “Gamergate”, “Climategate”, and so on. So any comparison to Watergate is bound to draw some raised eyebrows.

But just as it’s not Godwin’s Law when you’re really talking about fascism and genocide, it’s not the “-gate” cliché when we are talking about a corruption scandal that goes all the way up to the President of the United States. And The Atlantic is right: this isn’t Watergate; it’s worse.

First of all, let’s talk about the crime of which Trump is accused. Nixon was accused of orchestrating burglary and fraud. These are not minor offenses, to be sure. But they are ordinary criminal offenses, felonies at worst. Trump is accused of fundamental Constitutional violations (particularly of the First Amendment and the Emoluments Clause), and above all, Trump is accused of treason. This is the highest crime recognized by the Constitution of the United States. It is the only crime the Constitution specifically defines—and under federal law it can be punished by death.

Donald Trump is being investigated not for stealing something or concealing information, but for colluding with foreign powers in the attempt to undermine American democracy. Is he guilty? I don’t know; that’s why we’re investigating. But let me say this: If he isn’t guilty of something, it’s quite baffling that he would fight so hard to stop the investigation.

Speaking of which: Trump’s intervention to stop Comey is much more direct, and much more sudden, than anything Nixon did to stop the Watergate investigations. Nixon of course tried to stonewall the investigations, but he did so subtly, cautiously, always trying to at least appear like he valued due process and rule of law. Trump made no such efforts, openly threatening Comey personally on Twitter and publicly declaring on national television that he had fired him to block the investigation.

But perhaps what makes the Trump-Comey affair most terrifying is how the supposedly “mainstream” Republican Party has reacted. The Republicans of Nixon had some honor left in them; several resigned rather than follow Nixon’s illegal orders, and dozens of Republicans in Congress supported the investigations and called for Nixon’s impeachment. Apparently that honor is gone now, as GOP leaders like Mitch McConnell and Lindsey Graham have expressed support for the President’s corrupt and illegal actions citing no principle other than party loyalty. If we needed any more proof that the Republican Party of the United States is no longer a mainstream political party, this is it. They don’t believe in democracy or rule of law anymore. They believe in winning at any cost, loyalty at any price. They have become a radical far-right organization—indeed, if they continue down this road of supporting the President in undermining the freedom of the press and consolidating his own power, I think it is fair to call them literally neo-fascist.

We are about to see whether American institutions can withstand such an onslaught, whether liberty and justice can prevail against corruption and tyranny. So far, there have been reasons to be optimistic: In particular, the judicial branch has proudly and bravely held the line, blocking Trump’s travel ban (multiple times), resisting his order to undermine sanctuary cities, and standing up to direct criticisms and even threats from the President himself. Our system of checks and balances is being challenged, but so far it is holding up against that challenge. We will find out soon enough whether the American system truly is robust enough to survive.

Our government just voted to let thousands of people die for no reason

May 14, JDN 2457888

The US House of Representatives just voted to pass a bill that will let thousands of Americans die for no reason. At the time of writing it hasn’t yet passed the Senate, but it may yet do so. And if it does, there can be little doubt that President Trump (a phrase I still feel nauseous saying) will sign it.

Some already call it Trumpcare (or “Trump-doesn’t-care”), but officially they call it the American Health Care Act. I think we should use the formal name, because it is already beginning to take on a dark irony: yes, only in America would such a terrible health care act be considered. Every other highly-developed country has a universal healthcare system; most of them have single-payer systems (and this has been true for over two decades).
The Congressional Budget Office estimates that the AHCA will increase the number of uninsured Americans by 24 million. Of these, 14 million will be people near the poverty line who lose access to Medicaid.

In 2009, a Harvard study estimated that 45,000 Americans die each year because they don’t have health insurance. This is on the higher end; other studies have estimated more like 20,000. But based on the increases in health insurance rates under Obamacare, somewhere between 5,000 and 10,000 American lives have been saved each year since it was enacted. That reduction came from insuring about 10 million people who weren’t insured before.

Making a linear projection, we can roughly estimate the number of additional Americans who will die every year if this American Health Care Act is implemented. (24 million/10 million)(5,000 to 10,000) = 12,000 to 24,000 deaths per year. For comparison, there are about 14,000 total homicides in the United States each year (and we have an exceptionally high homicide rate for a highly-developed country).
Indeed, morally, it might make sense to count these deaths as homicides (by the principle of “depraved indifference”); Trump therefore intends to double our homicide rate.
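Spelled out explicitly, using only the figures cited above:

```python
# Linear projection of additional deaths per year under the AHCA.
newly_uninsured = 24e6          # CBO: increase in uninsured under the AHCA
obamacare_newly_insured = 10e6  # roughly how many gained insurance under Obamacare
lives_saved_per_year = (5_000, 10_000)  # estimated annual lives saved by Obamacare

scale = newly_uninsured / obamacare_newly_insured  # = 2.4
low, high = (scale * x for x in lives_saved_per_year)
print(f"{low:,.0f} to {high:,.0f} additional deaths per year")  # 12,000 to 24,000
```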

Of course, it will not be prosecuted this way. And one can even make an ethical case for why it shouldn’t be, why it would be impossible to make policy if every lawmaker had to face the consequences of every policy choice. (Start a war? A hundred thousand deaths. Fail to start a war in response to a genocide? A different hundred thousand deaths.)

But for once, I might want to make an exception. Because these deaths will not be the result of a complex policy trade-off with merits and demerits on both sides. They will not be the result of honest mistakes or unforeseen disasters. These people will die out of pure depraved indifference.

We had a healthcare law that was working. Indeed, Obamacare was remarkably successful. It increased insurance rates and reduced mortality rates while still managing to slow the growth in healthcare expenditure.

The only real cost was an increase in taxes on the top 5% (and particularly the top 1%) of the income distribution. But the Republican Party—and make no mistake, the vote was on almost completely partisan lines, and not a single Democrat supported it—has now made it a matter of official policy that they care more about cutting taxes on millionaires than they do about poor people dying from lack of healthcare.

Yet there may be a silver lining in all of this: Once people saw that Obamacare could work, the idea of universal healthcare in the United States began to seem like a serious political position. The Overton Window has grown. Indeed, it may even have shifted to the left for once; the responses to the American Health Care Act have been almost uniformly shock and outrage, when really what the bill does is go back to the same awful system we had before. Going backward and letting thousands of people die for no reason should appall people—but I feared that it might not, because it would seem “normal”. We in America have grown very accustomed to letting poor people die in order to slightly increase the profits of billionaires, and I thought this time might be no different—but it was different. Once Obamacare actually passed and began to work, people really saw what was happening—that all this suffering and death wasn’t necessary, it wasn’t an inextricable part of having a functioning economy. And now that they see that, they aren’t willing to go back.

Argumentum ab scientia is not argumentum baculo: The difference between authority and expertise

May 7, JDN 2457881

Americans are, on the whole, suspicious of authority. This is a very good thing; it shields us against authoritarianism. But it comes with a major downside, which is a tendency to forget the distinction between authority and expertise.

Argument from authority is an informal fallacy, argumentum baculo. The fact that something was said by the Pope, or the President, or the General Secretary of the UN, doesn’t make it true. (Aside: You’re probably more familiar with the phrase argumentum ad baculum, which is terrible Latin. That would mean “argument toward a stick”, when clearly the intended meaning was “argument by means of a stick”, which is argumentum baculo.)

But argument from expertise, argumentum ab scientia, is something quite different. The world is much too complicated for any one person to know everything about everything, so we have no choice but to specialize our knowledge, each of us becoming an expert in only a few things. So if you are not an expert in a subject, when someone who is an expert in that subject tells you something about that subject, you should probably believe them.

You should especially be prepared to believe them when the entire community of experts is in consensus or near-consensus on a topic. The scientific consensus on climate change is absolutely overwhelming. Is this a reason to believe in climate change? You’re damn right it is. Unless you have years of education and experience in understanding climate models and atmospheric data, you have no basis for challenging the expert consensus on this issue.

This confusion has created a deep current of anti-intellectualism in our culture, as Isaac Asimov famously recognized:

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that “my ignorance is just as good as your knowledge.”

This is also important to understand if you have heterodox views on any scientific topic. The fact that the whole field disagrees with you does not prove that you are wrong—but it does make it quite likely that you are wrong. Cranks often want to compare themselves to Galileo or Einstein, but here’s the thing: Galileo and Einstein didn’t act like cranks. They didn’t expect the scientific community to respect their ideas before they had gathered compelling evidence in their favor.

When behavioral economists found that neoclassical models of human behavior didn’t stand up to scrutiny, did they shout from the rooftops that economics is all a lie? No, they published their research in peer-reviewed journals, and talked with economists about the implications of their results. There may have been times when they felt ignored or disrespected by the mainstream, but they pressed on, because the data was on their side. And ultimately, the mainstream gave in: Daniel Kahneman won the Nobel Prize in Economics.

Experts are not always right, that is true. But they are usually right, and if you think they are wrong you’d better have a good reason to think so. The best reasons are the sort that come about when you yourself have spent the time and effort to become an expert, able to challenge the consensus on its own terms.

Admittedly, that is a very difficult thing to do—and more difficult than it should be. I have seen firsthand how difficult and painful the slow grind toward a PhD can be, and how many obstacles will get thrown in your way, ranging from nepotism and interdepartmental politics, to discrimination against women and minorities, to mismatches of interest between students and faculty, all the way to illness, mental health problems, and the slings and arrows of outrageous fortune in general. If you have particularly heterodox ideas, you may face particularly harsh barriers, and sometimes it behooves you to hold your tongue and toe the line awhile.

But this is no excuse not to gain expertise. Even if academia itself is not available to you, we live in an age of unprecedented availability of information—it’s not called the Information Age for nothing. A sufficiently talented and dedicated autodidact can challenge the mainstream, if their ideas are truly good enough. (Perhaps the best example of this is the mathematician savant Srinivasa Ramanujan. But he’s… something else. I think he is about as far from the average genius as the average genius is from the average person.) No, that won’t be easy either. But if you are really serious about advancing human understanding rather than just rooting for your political team (read: tribe), you should be prepared to either take up the academic route or attack it as an autodidact from the outside.

In fact, most scientific fields are actually quite good about admitting what they don’t know. A total consensus that turns out to be wrong is actually a very rare phenomenon; much more common is a clash of multiple competing paradigms where one ultimately wins out, or they end up replaced by a totally new paradigm or some sort of synthesis. In almost all cases, the new paradigm wins not because it becomes fashionable or the ancien régime dies out (as Planck cynically claimed) but because overwhelming evidence is observed in its favor, often in the form of explaining some phenomenon that was previously impossible to understand. If your heterodox theory doesn’t do that, then it probably won’t win, because it doesn’t deserve to.

(Right now you might think of challenging me: Does my heterodox theory do that? Does the tribal paradigm explain things that either total selfishness or total altruism cannot? I think it’s pretty obvious that it does. I mean, you are familiar with a little thing called “racism”, aren’t you? There is no explanation for racism in neoclassical economics; to understand it at all you have to just impose it as an arbitrary term on the utility function. But at that point, why not throw in whatever you please? Maybe some people enjoy bashing their heads against walls, and other people take great pleasure in the taste of arsenic. Why would this particular self- (not to mention other-) destroying behavior be universal to all human societies?)

In practice, I think most people who challenge the mainstream consensus aren’t genuinely interested in finding out the truth—certainly not enough to actually go through the work of doing it. It’s a pattern you can see in a wide range of fringe views: Anti-vaxxers, 9/11 truthers, climate denialists, they all think the same way. The mainstream disagrees with my preconceived ideology, therefore the mainstream is some kind of global conspiracy to deceive us. The overwhelming evidence that vaccination is safe and (wildly) cost-effective, 9/11 was indeed perpetrated by Al Qaeda and neither planned nor anticipated by anyone in the US government, and the global climate is being changed by human greenhouse gas emissions—these things simply don’t matter to them, because it was never really about the truth. They knew the answer before they asked the question. Because their identity is wrapped up in that political ideology, they know it couldn’t possibly be otherwise, and no amount of evidence will change their mind.

How do we reach such people? That, I don’t know. I wish I did. But I can say this much: We can stop taking them seriously when they say that the overwhelming scientific consensus against them is just another “appeal to authority”. It’s not. It never was. It’s an argument from expertise—there are people who know this a lot better than you, and they think you’re wrong, so you’re probably wrong.