Will China’s growth continue forever?

July 23, JDN 2457958

It’s easy to make the figures sound alarming, especially if you are a xenophobic American:

Annual GDP growth in the US is currently 2.1%, while annual GDP growth in China is 6.9%. At market exchange rates, US GDP is currently $18.6 trillion, while China’s GDP is $11.2 trillion. If these growth rates continue, that means that China’s GDP will surpass ours in just 12 years.

Looking instead at per-capita GDP (and now using purchasing-power-parity, which is a much better measure for standard of living), the US is currently at $53,200 per person per year while China is at $14,400 per person per year. Since 2010 US per-capita GDP PPP has been growing at about 1.2%, while China’s has been growing at 7.1%. At that rate, China will surpass the US in standard of living in only 24 years.

And then if you really want to get scared, you start thinking about what happens if this growth continues for 20, or 30, or 50 years. At 50 years of these growth rates, US GDP will just about triple; but China’s GDP would increase by almost a factor of thirty. US per-capita GDP will increase to about $150,000, while China’s per-capita GDP will increase all the way to $444,000.
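These extrapolations are nothing but compound growth; a quick sketch in Python, using the figures above, makes the arithmetic explicit (this is exactly the naive projection the rest of the post argues against):

```python
# Compound-growth extrapolation using the article's figures.

def years_to_surpass(leader, chaser, leader_growth, chaser_growth):
    """Years until the chaser's GDP exceeds the leader's,
    assuming both growth rates stay constant forever."""
    years = 0
    while chaser < leader:
        leader *= leader_growth
        chaser *= chaser_growth
        years += 1
    return years

# Total GDP at market exchange rates (trillions of USD):
print(years_to_surpass(18.6, 11.2, 1.021, 1.069))      # 12 years

# Per-capita GDP at purchasing power parity (USD):
print(years_to_surpass(53_200, 14_400, 1.012, 1.071))  # 24 years

# Fifty-year extrapolation of China's per-capita GDP (PPP):
print(round(14_400 * 1.071 ** 50))                     # roughly $444,000
```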

But while China probably will surpass the US in total nominal GDP within say 15 years, the longer-horizon predictions are totally unfounded. In fact, there is reason to believe that China will never surpass the US in standard of living, at least within the foreseeable future. Sure, some sort of global catastrophe could realign the world’s fortunes (climate change being a plausible candidate) and over very long time horizons all sorts of things can happen; but barring catastrophe and looking within the next few generations, there’s little reason to think that the average person in China will actually be better off than the average person in the United States. Indeed, while that $150,000 figure is actually remarkably plausible, that $444,000 figure is totally nonsensical. I project that in 2065, per-capita GDP in the US will indeed be about $150,000, but per-capita GDP in China will be more like $100,000.

That’s still a dramatic improvement over today for both countries, and something worth celebrating; but the panic that the US must be doing something wrong and China must be doing something right, that China is “eating our lunch” in Trump’s terminology, is simply unfounded.

Why am I so confident of this? Because, for all the proud proclamations of Chinese officials and panicked reports of American pundits, China’s rapid growth rates are not unprecedented. We have seen this before.

Look at South Korea. As I like to say, the discipline of development economics is basically the attempt to determine what happened in South Korea 1950-2000 and how to make it happen everywhere.

In 1960, South Korea’s nominal per-capita GDP was only $944. In 2016, it was $25,500. That takes them from solidly Third World underdeveloped status into very nearly First World highly-developed status in just two generations. This was an average rate of growth of 6.0%. But South Korea didn’t grow steadily at 6.0% for that entire period. Their growth fluctuated wildly (small countries tend to do that; they are effectively undiversified assets), but also overall trended downward.

The highest annual growth rate in South Korea over that time period was an astonishing 20.8%. Over twenty percent per year. Now that is growth you would feel. Imagine going from an income of $10,000 to an income of $12,000, in just one year. Imagine your entire country doing this. In its best years, South Korea was achieving annual growth rates in income comparable to the astronomical investment returns of none other than Warren Buffett (For once, we definitely had r < g). Even if you smooth out over the boom-and-bust volatility South Korea went through during that period, they were still averaging growth rates over 7.5% in the 1970s.

I wasn’t alive then, but I wouldn’t be surprised if Americans back then were panicking about South Korea’s growth too. Maybe not, since South Korea was and remains a close US ally, and their success displayed the superiority of capitalism over Communism (boy did it ever: North Korea’s per capita GDP also started at about $900 in 1960, and is still today… only about $1000!); but you could have made the same pie-in-the-sky forecasts of Korea taking over the world if you’d extrapolated their growth rates forward.

South Korea’s current growth rate, on the other hand? 2.9%. Not so shocking now!

Moreover, this is a process we understand theoretically as well as empirically. The Solow model is now well-established as the mainstream neoclassical model of economic growth, and it directly and explicitly predicts this sort of growth pattern: a country that starts very poor initially grows extremely fast as it builds a capital base and reverse-engineers technology from more advanced countries, but then over a couple of generations its growth slows down and eventually levels off once it reaches a high level of economic development.

Indeed, the basic reason is quite simple: A given proportional growth is easier to do when you start small. (There’s more to it than that, involving capital degradation and diminishing marginal returns, but at its core, that’s the basic idea.)
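To see the mechanism at work, here is a minimal Solow-style simulation (the parameter values are my own illustrative choices, not from the post): a country that starts with little capital grows fast, then slows steadily as it approaches its steady state.

```python
# Minimal Solow-style convergence sketch. Output is y = k ** alpha,
# capital accumulates through saving and wears out through depreciation,
# and the growth rate falls as k approaches its steady state.
def solow_growth_path(k0, alpha=1/3, s=0.3, delta=0.05, years=100):
    """Return the list of annual output growth rates, starting from k0."""
    k = k0
    y_prev = k ** alpha
    rates = []
    for _ in range(years):
        k = k + s * y_prev - delta * k   # investment minus depreciation
        y = k ** alpha
        rates.append(y / y_prev - 1)
        y_prev = y
    return rates

rates = solow_growth_path(1.0)  # start well below steady state
print(f"{rates[0]:.1%}")        # fast early growth
print(f"{rates[-1]:.2%}")       # near zero a century later
```

The exact numbers depend on the parameters, but the shape does not: growth at a decreasing rate, just as in the South Korea data.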

I think I can best instill this realization in you by making another comparison between the US and China: How much income are we adding in absolute terms?

US per-capita GDP of $53,200 is growing at 1.2% per year; that means we’re adding $640 per person per year. China’s per-capita GDP of $14,400 is growing at 7.1% per year; that means they’re adding $1,020 per person per year. So while it sounds like they are growing almost six times faster, they’re actually only adding about 60% more real income per person each year than we are. It’s just a larger proportion to them.

Indeed, China is actually doing relatively well on this scale. Many developing countries that are growing “fast” are actually adding less income per person in absolute terms than many highly-developed countries. India’s per capita GDP is growing at 5.8% per year, but adding only $340 per person per year. Ethiopia’s income per person is growing by 4.9%—which is only $75 per person per year. Compare this to the “slow” growth of the UK, where 1.0% annual growth is still $392 per person per year, or France, where “stagnant” growth of 0.8% is still $293 per person per year.

Back when South Korea was growing at 20%, that was still on the order of $200 per person per year. Their current 2.9%, on the other hand, is actually $740 per person per year. We often forget just how poor many poor countries truly are; what sounds like a spectacular growth rate still may not be all that much in absolute terms.
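The absolute-gain arithmetic in the last few paragraphs is just per-capita GDP times the growth rate; a two-line sketch with the article’s figures:

```python
# Dollars of real income added per person per year:
# per-capita GDP times the annual growth rate. Figures are the article's.
def dollars_added_per_person(per_capita_gdp, growth_rate):
    return per_capita_gdp * growth_rate

print(round(dollars_added_per_person(53_200, 0.012)))  # US: ~$640
print(round(dollars_added_per_person(14_400, 0.071)))  # China: ~$1,020
print(round(dollars_added_per_person(25_500, 0.029)))  # South Korea: ~$740
```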

Here’s a graph (on a log scale) of GDP per capita in the US, Japan, China, and Korea, from World Bank data since 1960. I’d prefer to use GDP PPP, but the World Bank data doesn’t go back far enough.

As you can see, there is a general pattern of growth at a decreasing rate; it’s harder to see in China because they are earlier in the process; but there’s good reason to think that they will follow the same pattern.

If anything, I think the panic about Japan in the 1990s may have been more justifiable (not that it was terribly justified either). As you can see on the graph, in terms of nominal GDP per capita, Japan actually did briefly surpass the United States in the 1990s. Of course, the outcome of that was not a global war or Japan ruling the world or something; it was… the Nintendo Wii and the Toyota Prius.

Of course, that doesn’t stop people from writing news articles and even publishing economic papers about how this time is different, not like all the other times we saw the exact same pattern. Many Chinese officials appear to believe that China is special, that they can continue to grow at extremely high rates indefinitely without the constraints that other countries would face. But for once economic theory and economic data are actually in very good agreement: These high growth rates will not last forever. They will slow down, and that’s not such a bad thing. By the time they do, China will have greatly raised their standard of living to something very close to our own. Hundreds of millions of people have already been lifted out of abject poverty; continued growth could benefit hundreds of millions more.

The far bigger problem would be if the government refuses to accept that growth must slow down, and begins trying to force impossible levels of growth or altering the economic data to make it appear as though growth has occurred that hasn’t. We already know that the People’s Republic of China has a track record of doing this sort of thing: we know they have manipulated some data, though we think only in small ways, and the worst example of an attempt at forcing economic growth in human history was in China, the so-called “Great Leap Forward” that killed 20 million people. The danger is not that China will grow this fast forever, nor that they will slow down soon enough, but that they will slow down and their government will refuse to admit it.

Why “marginal productivity” is no excuse for inequality

May 28, JDN 2457902

In most neoclassical models, workers are paid according to their marginal productivity—the additional (market) value of goods that a firm is able to produce by hiring that worker. This is often used as an excuse for inequality: If someone can produce more, why shouldn’t they be paid more?

The most extreme example of this is people like Maura Pennington writing for Forbes about how poor people just need to get off their butts and “do something”; but there is a whole literature in mainstream economics, particularly “optimal tax theory”, arguing based on marginal productivity that we should tax the very richest people the least and never tax capital income. The Chamley-Judd Theorem famously “shows” (by making heroic assumptions) that taxing capital just makes everyone worse off because it reduces everyone’s productivity.

The biggest reason this is wrong is that there are many, many reasons why someone would have a higher income without being any more productive. They could inherit wealth from their ancestors and get a return on that wealth; they could have a monopoly or some other form of market power; they could use bribery and corruption to tilt government policy in their favor. Indeed, most of the top 0.01% do literally all of these things.

But even if you assume that pay is related to productivity in competitive markets, the argument is not nearly as strong as it may at first appear. Here I have a simple little model to illustrate this.

Suppose there are 10 firms and 10 workers. Suppose that firm 1 has 1 unit of effective capital (capital adjusted for productivity), firm 2 has 2 units, and so on up to firm 10 which has 10 units. And suppose that worker 1 has 1 unit of so-called “human capital”, representing their overall level of skills and education, worker 2 has 2 units, and so on up to worker 10 with 10 units. Suppose each firm only needs one worker, so this is a matching problem.

Furthermore, suppose that productivity is equal to capital times human capital: That is, if firm 2 hired worker 7, they would make 2*7 = $14 of output.

What will happen in this market if it converges to equilibrium?

Well, first of all, the most productive firm is going to hire the most productive worker—so firm 10 will hire worker 10 and produce $100 of output. What wage will they pay? Well, they need a wage that is high enough to keep worker 10 from trying to go elsewhere. They should therefore pay a wage of $90—the next-highest firm productivity times the worker’s productivity. That’s the highest wage any other firm could credibly offer; so if they pay this wage, worker 10 will not have any reason to leave.

Now the problem has been reduced to matching 9 firms to 9 workers. Firm 9 will hire worker 9, making $81 of output, and paying $72 in wages.

And so on, until worker 1 at firm 1 produces $1 and receives… $0. Because there is no way for worker 1 to threaten to leave, in this model they actually get nothing. If I assume there’s some sort of social welfare system providing say $0.50, then at least worker 1 can get that $0.50 by threatening to leave and go on welfare. (This, by the way, is probably the real reason firms hate social welfare spending; it gives their workers more bargaining power and raises wages.) Or maybe they have to pay that $0.50 just to keep the worker from starving to death.

What does inequality look like in this society?

Well, the most-productive firm only has 10 times as much capital as the least-productive firm, and the most-educated worker only has 10 times as much skill as the least-educated worker, so we might think that incomes would vary only by a factor of 10.

But in fact they vary by a factor of over 100.

The richest worker makes $90, while the poorest worker makes $0.50. That’s a ratio of 180. (Still lower than the pay ratio of the average US CEO to their average employee, by the way.) The richest worker is 10 times as productive as the poorest, but receives 180 times as much income.

The firm profits vary along a more reasonable scale in this case; firm 1 makes a profit of $0.50 while firm 10 makes a profit of $10. Indeed, except for firm 1, firm n always makes a profit of $n. So that’s very nearly a linear scaling in productivity.
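The toy model is easy to implement directly; this sketch reproduces the wages and profits just described (the $0.50 welfare floor is the one assumed in the text):

```python
# Toy matching model: firm n has n units of capital, worker n has
# n units of human capital, and output is capital * human capital.
# Assortative matching pairs firm n with worker n; each worker's wage
# is bid up to the most the next-best firm could offer, and the
# last worker falls back on the welfare floor.
def match_wages_and_profits(n=10, welfare_floor=0.50):
    wages, profits = {}, {}
    for i in range(1, n + 1):
        output = i * i
        # Worker i's outside option: firm i-1 would produce (i-1)*i
        # with them, so it could offer at most that much.
        wage = (i - 1) * i if i > 1 else welfare_floor
        wages[i] = wage
        profits[i] = output - wage
    return wages, profits

wages, profits = match_wages_and_profits()
print(wages[10], wages[1])    # 90 vs 0.5: a 180-fold wage gap
print(profits[10], profits[1])  # 10 vs 0.5: profits scale far more gently
```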

Where did this result come from? Why is it so different from the usual assumptions? All I did was change one thing: I allowed for increasing returns to scale.

If you make the usual assumption of constant returns to scale, this result can’t happen. Multiplying all the inputs by 10 should just multiply the output by 10, by assumption—since that is the definition of constant returns to scale.

But if you look at the structure of real-world incomes, it’s pretty obvious that we don’t have constant returns to scale.

If we had constant returns to scale, we should expect that wages for the same person should only vary slightly if that person were to work in different places. In particular, to have a 2-fold increase in wage for the same worker you’d need more than a 2-fold increase in capital.

This is a bit counter-intuitive, so let me explain a bit further. If a 2-fold increase in capital results in a 2-fold increase in wage for a given worker, that’s increasing returns to scale—indeed, it’s precisely the production function I assumed above.

If you had constant returns to scale, a 2-fold increase in wage would require something like an 8-fold increase in capital. This is because you should get a 2-fold increase in total production by doubling everything—capital, labor, human capital, whatever else. So doubling capital by itself should produce a much weaker effect. For technical reasons I’d rather not get into at the moment, usually it’s assumed that production is approximately proportional to capital to the one-third power—so to double production you need to multiply capital by 2^3 = 8.
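That scaling claim can be checked in one line, assuming (as the text does) that output is roughly proportional to capital to the one-third power:

```python
# If output ~ capital ** (1/3), then doubling output while holding
# other inputs fixed requires capital to rise by 2 ** 3 = 8-fold.
alpha = 1 / 3
capital_multiple_to_double_output = 2 ** (1 / alpha)
print(capital_multiple_to_double_output)  # 8 (up to floating-point noise)
```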

I wasn’t able to quickly find really good data on wages for the same workers across different countries, but this should at least give a rough idea. In Mumbai, the minimum monthly wage for a full-time worker is about $80. In Shanghai, it is about $250. If you multiply out the US federal minimum wage of $7.25 per hour by 40 hours by 4 weeks, that comes to $1160 per month.

Of course, these are not the same workers. Even an “unskilled” worker in the US has a lot more education and training than a minimum-wage worker in India or China. But it’s not that much more. Maybe if we normalize India to 1, China is 3 and the US is 10.

Likewise, these are not the same jobs. Even a minimum wage job in the US is much more capital-intensive and uses much higher technology than most jobs in India or China. But it’s not that much more. Again let’s say India is 1, China is 3 and the US is 10.

If we had constant returns to scale, what should the wages be? Well, for India at productivity 1, the wage is $80. So for China at productivity 3, the wage should be $240—it’s actually $250, close enough for this rough approximation. But the US wage should be $800—and it is in fact $1160, 45% larger than we would expect by constant returns to scale.
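That back-of-the-envelope check, using the article’s rough productivity normalization (India 1, China 3, US 10):

```python
# Constant-returns prediction: scale the Mumbai minimum wage by each
# country's assumed productivity multiple, then compare with what is
# actually observed. All figures and multiples are the article's.
mumbai_wage = 80                   # monthly minimum wage, USD
predicted_china = mumbai_wage * 3  # $240; observed is about $250
predicted_us = mumbai_wage * 10    # $800
observed_us = 7.25 * 40 * 4        # $1,160: US federal minimum, monthly
excess = observed_us / predicted_us - 1
print(f"{excess:.0%}")             # about 45% above the CRS prediction
```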

Let’s try comparing within a particular industry, where the differences in skill and technology should be far smaller. The median salary for a software engineer in India is about 430,000 INR, which comes to about $6,700. If that sounds rather low for a software engineer, you’re probably more accustomed to the figure for US software engineers, which is $74,000. That is a factor of 11 to 1. For the same job. Maybe US software engineers are better than Indian software engineers—but are they that much better? Yes, you can adjust for purchasing power and shrink the gap: Prices in the US are about 4 times as high as those in India, so the real gap might be 3 to 1. But these huge price differences themselves need to be explained somehow, and even 3 to 1 for the same job in the same industry is still probably too large to explain by differences in either capital or education, unless you allow for increasing returns to scale.

In most industries, we probably don’t have quite as much increasing returns to scale as I assumed in my simple model. Workers in the US don’t make 100 times as much as workers in India, despite plausibly having both 10 times as much physical capital and 10 times as much human capital.

But in some industries, this model might not even be enough! The most successful authors and filmmakers, for example, make literally thousands of times as much money as the average author or filmmaker in their own country. J.K. Rowling has almost $1 billion from writing the Harry Potter series; this is despite having literally the same amount of physical capital and probably not much more human capital than the average author in the UK who makes only about 11,000 GBP—which is about $14,000. Harry Potter and the Philosopher’s Stone is now almost exactly 20 years old, which means that Rowling made an average of $50 million per year, some 3500 times as much as the average British author. Is she better than the average British author? Sure. Is she three thousand times better? I don’t think so. And we can’t even make the argument that she has more capital and technology to work with, because she doesn’t! They’re typing on the same laptops and using the same printing presses. Either the return on human capital for British authors is astronomical, or something other than marginal productivity is at work here—and either way, we don’t have anything close to constant returns to scale.

What can we take away from this? Well, if we don’t have constant returns to scale, then even if wage rates are proportional to marginal productivity, they aren’t proportional to the component of marginal productivity that you yourself bring. The same software developer makes more at Microsoft than at some Indian software company, the same doctor makes more at a US hospital than a hospital in China, the same college professor makes more at Harvard than at a community college, and J.K. Rowling makes three thousand times as much as the average British author—therefore we can’t speak of marginal productivity as inhering in you as an individual. It is an emergent property of a production process that includes you as a part. So even if you’re entirely being paid according to “your” productivity, it’s not really your productivity—it’s the productivity of the production process you’re involved in. A myriad of other factors had to snap into place to make your productivity what it is, most of which you had no control over. So in what sense, then, can we say you earned your higher pay?

Moreover, this problem becomes most acute precisely when incomes diverge the most. The differential in wages between two welders at the same auto plant may well be largely due to their relative skill at welding. But there’s absolutely no way that the top athletes, authors, filmmakers, CEOs, or hedge fund managers could possibly make the incomes they do by being individually that much more productive.

Unpaid work and the double burden

Apr 16, JDN 2457860

When we say the word “work”, what leaps to mind is usually paid work in the formal sector—the work people do for employers. When you “go to work” each morning, you are going to do your paid work in the formal sector.

But a large quantity of the world’s labor does not take this form. First, there is the informal sector—work done for cash “under the table”, where there is no formal employment structure and often no reporting or payment of taxes. Many economists estimate that the majority of the world’s workers are employed in the informal sector. The ILO found that informal employment comprises as much as 70% of employment in some countries. However, it depends how you count: A lot of self-employment could be considered either formal or informal. If you base it on whether you do any work outside an employer-employee relationship, informal sector work is highly prevalent around the world. If you base it on not reporting to the government to avoid taxes, informal sector work is less common. If it must be your primary source of income, whether or not you pay taxes, informal sector work is uncommon. And if you only include informal sector work when it is your primary income source and not reported to the government, informal sector work is relatively rare and largely restricted to underdeveloped countries.

But that’s not really my focus for today, because you at least get paid in the informal sector. Nor am I talking about forced labor—that is, slavery, essentially—which is a serious human rights violation that sadly still goes on in many countries.

No, the unpaid work I want to talk about today is work that people willingly do for free.

I’m also excluding internships and student work, where (at least in theory) the idea is that instead of getting paid you are doing the work in order to acquire skills and experience that will be valuable to you later on. I’m talking about work that you do for its own sake.

Such work can be divided into three major categories.

First there is vocation—the artist who would paint even if she never sold a single canvas; the author who is compelled to write day and night and would give the books away for free. Vocation is work that you do for fun, or because it is fulfilling. It doesn’t even feel like “work” in quite the same sense. For me, writing and research are vocation, at least in part; even if I had $5 million in stocks I would still do at least some writing and research as part of what gives my life meaning.

Second there is volunteering—the soup kitchen, the animal shelter, the protest march. Volunteering is work done out of altruism, to help other people or work toward some greater public goal. You don’t do it for yourself, you do it for others.

Third, and really my main focus for this post, is domestic labor—vacuuming the rug, mopping the floor, washing the dishes, fixing the broken faucet, changing the baby’s diapers. This is generally not work that anyone finds particularly meaningful or fulfilling, nor is it done out of any great sense of altruism (perhaps toward your own family, but that’s about the extent of it). But you also don’t get paid to do it. You do it because it must be done.

There is also considerable overlap, of course: Many people find meaning in their activism or charitable work, and part of what motivates artists and authors is a desire to change the world.

Vocation is ultimately what I would like to see the world move towards. One of the great promises of a basic income is that it might finally free us from the grind of conventional employment that has gripped us ever since we first managed to escape the limitations of subsistence farming—which in turn gripped us ever since we escaped the desperation of hunter-gatherer survival. The fourth great stage in human prosperity might finally be a world where we can work not for food or for pay, but for meaning. A world of musicians and painters, of authors and playwrights, of sculptors and woodcutters, yes; but also a world of cinematographers and video remixers, of 3D modelers and holographers, of VR designers and video game modders. If you ever fret that no work would be done without the constant pressure of the wage incentive, spend some time on Stack Overflow or the Steam Workshop. People will spend hundreds of person-hours at extremely high-skill tasks—I’m talking AI programming and 3D modeling here—not for money but for fun.

Volunteering is frankly kind of overrated; as the Effective Altruism community will eagerly explain to you any chance they get, it’s usually more efficient for you to give money rather than time, because money is fungible while giving your time only makes sense if your skills are actually the ones that the project needs. If this criticism of so much well-intentioned work sounds petty, note that literally thousands of lives would be saved each year if instead of volunteering people donated an equivalent amount of money so that charities could hire qualified workers instead. Unskilled volunteers and donations of useless goods after a disaster typically cause what aid professionals call the “second disaster”. Still, people do find meaning in volunteering, and there is value in that; and also there are times when you really are the best one to do it, particularly when it comes to local politics.

But what should we do with domestic labor?

Some of it can and will be automated away—the Parable of the Dishwasher with literal dishwashers. But it will be a while before it all can, and right now it’s still a bit expensive. Maybe instead of vacuuming I should buy a Roomba—but $500 feels like a lot of money right now.

Much domestic labor we could hire out to someone else, but we simply choose not to. I could always hire someone to fix my computer, unclog my bathtub, or even mop my floors; I just don’t because it seems too expensive.

From the perspective of an economist, it’s actually a bit odd that it seems too expensive. I might have a comparative advantage in fixing my computer—it’s mine, after all, so I know its ins and outs, and while I’m no hotshot Google admin I am a reasonably competent programmer and debugger in my own right. And while for many people auto repair is a household chore, I do actually hire auto mechanics; I don’t even change my own oil, though partly that’s because my little Smart has an extremely compact design that makes it hard to work on. But I surely have no such comparative advantage in cleaning my floors or unclogging my pipes; so why doesn’t it seem worth it to hire someone else to do that?

Maybe I’m being irrational; hiring a cleaning service isn’t that expensive after all. I could hire a cleaning service to do my whole apartment for something like $80, and if I scheduled a regular maid it would probably be something like that per month. That’s what I would charge for two hours of tutoring, so maybe it would behoove me to hire a maid and spend that extra time tutoring or studying.

Or maybe it’s this grad student budget of mine; money is pretty tight at the moment, as I go through this strange societal ritual where young adults endure a period of near-poverty, overwhelming workload, and constant anxiety not in spite of being intelligent and hard-working, but because of it. Perhaps if and when I get that $70,000 job as a professional economist my marginal utility of wealth will decrease and I will feel more inclined to hire maid services.

There are also transaction costs I save on by doing the work myself. A maid would have to commute here, first of all, reducing the efficiency gains from their comparative advantage in the work; but more than that, there’s a lot of effort I’d have to put in just to prepare for the maid and deal with any problems that might arise. There are scheduling issues, and the work probably wouldn’t get done as quickly unless I were to spend enough to hire a maid on a regular basis. There’s also a psychological cost in comfort and privacy to dealing with a stranger in one’s home, and a small but nontrivial risk that the maid might damage or steal something important.

But honestly it might be as simple as social norms (remember: to a first approximation, all human behavior is social norms). Regardless of whether or not it is affordable, it feels strange to hire a maid. That’s the sort of thing only rich, decadent people do. A responsible middle-class adult is supposed to mop their own floors and do their own laundry. Indeed, while hiring a plumber or an auto mechanic feels like paying for a service, hiring a maid crosses a line and feels like hiring a servant. (I honestly always feel a little awkward around the gardeners hired by our housing development for that reason. I’m only paying them indirectly, but there’s still this vague sense that they are somehow subservient—and surely, we are of quite distinct socioeconomic classes. Maybe it would help if I brushed up on my Spanish and got to know them better?)

And then there’s the gender factor. Being in a same-sex couple household changes the domestic labor dynamic quite a bit relative to the conventional opposite-sex couple household. Even in ostensibly liberal, feminist, egalitarian households, and even when both partners are employed full-time, it usually ends up being the woman who does most of the housework. This is true in the US; it is true in the UK; it is true in Europe; indeed it’s true in most if not all countries around the world, and, unsurprisingly, it is worst in India, where women spend a whopping five hours per day more on housework than men. (I was not surprised by the fact that Japan and China also do poorly, given their overall gender norms; but I’m a bit shocked at how badly Ireland and Italy do on this front.) And yes, while #ScandinaviaIsBetter, still in Sweden and Norway women spend half an hour to an hour more on housework on an average day than men.

Which, of course, supports the social norm theory. Any time you see both an overwhelming global trend against women and considerable cross-country variation within that trend, your first hypothesis should be sexism. Without the cross-country variation, maybe it could be biology—the sex differences in height and upper-body strength, for example, are pretty constant across countries. But women doing half an hour more in Norway but five hours more in India looks an awful lot like sexism.

This is called the double burden: To meet the social norms of being responsible middle-class adults, men are merely expected to work full-time at a high-paying job, but women are expected to do both the full effort of maintaining a household and the full effort of working at a full-time job. This is surely an improvement over the time when women were excluded from the formal workforce, not least because of the financial freedom that full-time work affords many women; but it would be very nice if we could also find a way to share some of that domestic burden as well. There has been some trend toward a less unequal share of housework as more women enter the workforce, but it still has a long way to go, even in highly-developed countries.

So, we can start by trying to shift the social norm that housework is gendered: Women clean the floors and change the diapers, while men fix the car and paint the walls. Childcare in particular is something that should be done equally by all parents, and while it’s plausible that one person may be better or worse at mopping or painting, it strains credulity to think that it’s always the woman who is better at mopping and the man who is better at painting.

Yet perhaps this is a good reason to try to shift away from another social norm as well, the one where only rich people hire maids and maids are servants. Unfortunately, it’s likely that most maids will continue to be women for the foreseeable future—cleaning services are gendered in much the same way that nursing and childcare are gendered. But at least by getting paid to clean, one can fulfill the “job” norm and the “housekeeping” norm in one fell swoop; and then women who are in other professions can carry only one burden instead of two. And if we can begin to think of cleaning services as more like plumbing and auto repair—buying a service, not hiring a servant—this is likely to improve the condition and social status of a great many maids. I doubt we’d ever get to the point where mopping floors is as prestigious as performing neurosurgery, but maybe we can at least get to the point where being a maid is as respectable as being a plumber. Cleaning needs to be done; it shouldn’t be shameful to be someone who is very good at doing it and gets paid to do so. (That is perhaps the most pernicious aspect of socioeconomic class, this idea that some jobs are “shameful” because they are done by workers with less education or involve more physical labor.)

This also makes good sense in terms of economic efficiency: Your comparative advantage is probably not in cleaning services, or if it is then perhaps you should do that as a career. So by selling your labor doing whatever you are good at and then buying the services of someone who is especially good at cleaning, you should, at least in theory, be able to get the same cleaning done and maintain the same standard of living for yourself, while also accomplishing more in your own profession and providing income for whomever you hire to do the cleaning.
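To make the comparative-advantage arithmetic concrete, here is a minimal sketch; the wage, the cleaner’s rate, and the hours are all invented for illustration:

```python
# Hypothetical comparative-advantage arithmetic (all numbers invented).
# Suppose you earn $60/hour at your profession, while a professional
# cleaner charges $25/hour and cleans twice as fast as you would.

your_wage = 60.0           # $/hour at your own job
cleaner_rate = 25.0        # $/hour charged by the cleaning service
your_cleaning_hours = 4.0  # hours it would take you to clean
cleaner_hours = 2.0        # hours it takes the professional

# Cost of doing it yourself = earnings forgone during those hours
diy_cost = your_wage * your_cleaning_hours   # $240
# Cost of hiring it out
hire_cost = cleaner_rate * cleaner_hours     # $50

print(f"Do it yourself: ${diy_cost:.0f} in forgone wages")
print(f"Hire it out:    ${hire_cost:.0f} paid to the cleaner")
print(f"Gain from trade: ${diy_cost - hire_cost:.0f}")
```

Under these (made-up) numbers, both parties come out ahead: you keep $190 of earnings you would otherwise forgo, and the cleaner earns income doing what they do best.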

So, should I go hire a cleaning service after all? I don’t know, that still sounds pretty expensive.

Intellectual Property, revisited

Mar 12, JDN 2457825

A few weeks ago I wrote a post laying out the burden of proof for intellectual property, but didn’t have time to get into the empirical question of whether our existing intellectual property system can meet this burden of proof.

First of all, I want to make a very sharp distinction between three types of regulations that are all called “intellectual property”.

First there are trademarks, which I have absolutely no quarrel with. Avoiding fraud and ensuring transparency are fundamental functions without which markets would unravel, and without trademarks these things would be much harder to accomplish. Trademarks allow a company to establish a brand identity that others cannot usurp; they ensure that when you buy Coca-Cola (R) it is really in fact the beverage you expect and not some counterfeit knockoff. (And if counterfeit Coke sounds silly, note that counterfeit honey and maple syrup are actually a major problem.) Yes, there should be limits on how much you can trademark—no one wants to live in a world where you feel Love ™ and open Screen Doors ™—but in fact our courts are already fairly good about only allowing corporations to trademark newly-coined words and proper names for their products.

Next there are copyrights, which I believe are currently too strong and often abused, but I do think should exist in some form (or perhaps copylefts instead). Authors should have at least certain basic rights over how their work can be used and published. If nothing else, proper attribution should always be required, as without that plagiarism becomes intolerably easy. And steps should be taken to ensure that if any people profit from its sale, the author is among them. I publish this blog under a by-sa copyleft, which essentially means that you can share it with whomever you like and even adapt its content into your own work, so long as you properly attribute it to me and you do not attempt to claim ownership over it. For scientific content, I think only a copyleft of this sort makes sense—the era of for-profit journals with paywalls must end, as it is holding back our civilization. But for artistic content (and I mean art in the broadest sense, including books, music, movies, plays, and video games), stronger regulations might well make sense. The question is whether our current system is actually too strong, or is protecting the wrong people—often it seems to protect the corporations that sell the content rather than the artists who created it.

Finally there are patents. Unlike copyright which applies to a specific work of art, patent is meant to apply to the underlying concept of a technology. Copyright (or rather the by-sa copyleft) protects the text of this article; you can’t post it on your own blog and claim you wrote it. But if I were to patent it somehow (generally, verbal arguments cannot be patented, fortunately), you wouldn’t even be able to paraphrase it. The trademark on a Samsung ™ TV just means that if I make a TV I can’t say I am Samsung, because I’m not. You wouldn’t copyright a TV, but the analogous process would be if I were to copy every single detail of the television and try to sell that precise duplicate. But the patents on that TV mean that if I take it apart, study each component, find a way to build them all from my own raw materials, even make them better, and build a new TV out of them that looks different and performs better—I would still be infringing on intellectual property. Patents grant an extremely strong notion of property rights, one which actually undermines a lot of other, more basic concepts of property. It’s my TV, why can’t I take it apart and copy the components? Well, as long as the patent holds, it’s not entirely my TV. Property rights this strong—that allow a corporation to have its cake of selling the TV but eat it too by owning the rights to all its components—require a much stronger justification.

Trademark protects a name, which is unproblematic. Copyright protects a work, which carries risks but is still probably necessary in many cases. But patent protects an idea—and we should ask ourselves whether that is really something it makes sense to do.

In previous posts I’ve laid out some of the basic philosophical arguments for why patents do not seem to support innovation and may actually undermine it. But in this post I want to do something more direct and quantitative: Empirically, what is the actual effect of copyrights and patents on innovation? Can we find a way to quantify the costs and benefits to our society of different modes of intellectual property?

Economists quantify things all the time, so I briefly combed the literature to see what sort of empirical studies had been done on the economic impact of copyrights and patents.

Patents definitely create barriers to scientific collaboration: Scientific articles with ideas that don’t get patented are about 10-20% more likely to be cited than scientific articles with ideas that are patented. (I would have expected a larger effect, but that’s still not trivial.)

A 1995 study found that increased patent protections do seem to be positively associated with more trade.

A 2009 study of Great Britain published in AER found it “puzzling” that stronger patents actually seem to reduce the rate of innovation domestically, while having no effect on foreign innovation—yet this is exactly what I would have predicted. Foreign innovations should be largely unaffected by UK patents, but stricter patent laws in the UK make it harder for most actual innovators, only benefiting a handful of corporations that aren’t even particularly innovative.

This 1996 study did find a positive effect of stronger patent laws on economic growth, but it was quite small and only statistically significant when using instrumental variables that they couldn’t be bothered to define except in an appendix. When your result hinges on the use of instrumental variables that you haven’t even clearly defined in the paper, something is very fishy. My guess is that they p-hacked the instruments until they got the result they wanted.

This other 1996 study is a great example of why economists need to listen to psychologists. It found a negative correlation between foreign direct investment and—wait for it—the number of companies that answered “yes” to a survey question, “Does country X have intellectual property protection too weak to allow you to transfer your newest or most effective technology to a wholly-owned subsidiary there?” Oh, wow, you found a correlation between foreign direct investment and a question directly asking about foreign direct investment.

This 2004 study found a nonlinear relationship whereby increased economic development affects intellectual property rights, rather than the other way around. But I find their theoretical model quite odd, and the scatter plot that lies at the core of their empirical argument reminds me of Rexthor, the Dog-Bearer. “This relationship appears to be non-linear,” they say when pointing at a scatter plot that looks mostly like nothing and maybe like a monotonic increase.

This 1997 study found a positive correlation between intellectual property strength, R&D spending, and economic growth. The effect is weak, but the study looks basically sound. (Though I must say I’d never heard anyone use the words “significant at the 24% level” before. Normally one would say “nonsignificant” for that variable methinks. It’s okay for it not to be significant in some of your regressions, you know.)

This 1992 paper found that intellectual property harms poor countries and may or may not benefit rich countries, but it uses a really weird idiosyncratic theoretical model to get there. Frankly if I see the word “theorem” anywhere in your empirical paper, I get suspicious. No, it is not a theorem that “For economies in steady state the South loses from tighter intellectual property rights.” It may be true, but it does not follow from the fundamental axioms of mathematics.

This law paper is excellent; it focuses on the fact that intellectual property is a unique arrangement and a significant deviation from conventional property rights. It tracks the rise of legal arguments that erroneously equate intellectual property with real property, and makes the vital point that fully internalizing the positive externalities of technology was never the goal, and would in fact be horrible should it come to pass. We would all have to pay most of our income in royalties to the Newton and Faraday estates. So, I highly recommend reading it. But it doesn’t contain any empirical results on the economic effects of intellectual property.

This is the best paper I was able to find showing empirical effects of different intellectual property regimes; I really have no complaints about its econometrics. But it was limited to post-Soviet economies shortly after the fall of the USSR, which were rather unique circumstances. (Indeed, by studying only those countries, you’d probably conclude that free markets are harmful, because the shock of transition was so great.)

This 1999 paper is also quite good; using a natural experiment from a sudden shift in Japanese patent policy, they found almost no difference in actual R&D. The natural experiment design makes this particularly credible, but it’s difficult to generalize since it only covered Japan specifically.

This study focused in particular on copyrights and the film industry, and found a nonlinear effect: While having no copyright protection at all was harmful to the film industry, making the copyright protections too strong had a strangling effect on new filmmakers entering the industry. This would suggest that the optimal amount of copyright is moderate, which sounds reasonable to me.

This 2009 study did a much more detailed comparison of different copyright regimes, and was unable to find a meaningful pattern amidst the noise. Indeed, they found that the only variable that consistently predicted the number of new works of art was population—more people means more art, and nothing else seemed to matter. If this is correct, it’s quite damning to copyright; it would suggest that people make art for reasons fundamentally orthogonal to copyright, and copyright does almost nothing useful. (And I must say, if you talk to most artists, that tends to be their opinion on the matter!)

This 1996 paper found that stronger patents had no benefits for poor countries, but benefited rich countries quite a large amount: Increased patent protection was estimated to add as much as 0.7% annual GDP growth over the whole period. That’s a lot; if this is really true, stronger patents are almost certainly worth it. But then it becomes difficult to explain why more precise studies haven’t found effects anywhere near that large.

This paper was pretty interesting; they found a fat-tailed distribution of patents, where most firms have none, many have one or a few, and a handful of firms have a huge number of patents. This is also consistent with the distribution of firm revenue and profit—and I’d be surprised if I didn’t find a strong correlation between all three. But this really doesn’t tell us whether patents are contributing to innovation.

This paper found that the harmonization of global patents in the Uruguay Round did lead to gains from trade for most countries, but also transferred about $4.5 billion to the US from the rest of the world. Of course, that’s really not that large an amount when we’re talking about global policy over several years.

What does all that mean? I don’t know. It’s a mess. There just don’t seem to be any really compelling empirical studies on the economic impact of copyrights and patents. The preponderance of the evidence, such as it is, would seem to suggest that copyrights provide a benefit as long as they aren’t too strong, while patents provide a benefit but it is quite small and likely offset by the rent-seeking of the corporations that own them. The few studies that found really large effects (like 0.7% annual GDP growth) don’t seem very credible to me; if the effect were really that large, it shouldn’t be so ambiguous. 0.7% per year over 25 years is a GDP 20% larger. Over 50 years, GDP would be 42% larger. We would be able to see that.
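The compounding claim is straightforward to verify:

```python
# Compound effect of an extra 0.7% of annual GDP growth.
extra_growth = 0.007

gain_25 = (1 + extra_growth) ** 25 - 1  # fraction by which GDP ends up larger
gain_50 = (1 + extra_growth) ** 50 - 1

print(f"After 25 years: {gain_25:.1%} larger")  # about 19%
print(f"After 50 years: {gain_50:.1%} larger")  # about 42%
```

An effect that accumulates to a fifth or two-fifths of GDP would not hide in the noise of cross-country regressions.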

Does this ambiguity mean we should do nothing, and wait until the data is better? I don’t think so. Remember, the burden of proof for intellectual property should be high. It’s a fundamentally bizarre notion of property, one which runs against most of our standard concepts of real property; it restricts our rights in very basic ways, making literally the majority of our population into criminals. Such a draconian policy requires a very strong justification, but such a justification does not appear to be forthcoming. If it could be supported, that 0.7% GDP growth might be enough; but it doesn’t seem to be replicable. A free society does not criminalize activities just in case it might be beneficial to do so—it only criminalizes activities that have demonstrable harm. And the harm of copyright and patent infringement simply isn’t demonstrable enough to justify its criminalization.

We don’t have to remove them outright, but we should substantially weaken copyright and patent laws. They should be short-term, they should provide very basic protection, and they should never be owned by corporations, always by individuals (corporations should be able to license them—but not own them). If we then observe a substantial reduction in innovation and economic output, then we can put them back. But I think that what defenders of intellectual property fear most is that if we tried this, it wouldn’t be so bad—and then the “doom and gloom” justification they’ve been relying on all this time would fall apart.

Games as economic simulations—and education tools

Mar 5, JDN 2457818 [Sun]

Moore’s Law is a truly astonishing phenomenon. Now as we are well into the 21st century (I’ve lived more of my life in the 21st century than the 20th now!) it may finally be slowing down a little bit, but it has had quite a run, and even this could be a temporary slowdown due to economic conditions or the lull before a new paradigm (quantum computing?) matures. Since at least 1975, the computing power of an individual processor has doubled approximately every year and a half; that means it has doubled over 25 times—or in other words that it has increased by a factor of over 30 million. I now have in my pocket a smartphone with several thousand times the processing speed of the Apollo Guidance Computer that landed astronauts on the Moon.
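The arithmetic of repeated doubling is easy to check (taking 2017, the date of this post, as the present):

```python
# Doublings of processor power at one doubling per 1.5 years since 1975.
years = 2017 - 1975
doublings = years / 1.5  # about 28 doublings

# Even the conservative count of 25 doublings gives a factor over 30 million:
factor = 2 ** 25

print(f"{doublings:.0f} doublings since 1975; 2^25 = {factor:,}")
```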

This meteoric increase in computing power has had an enormous impact on the way science is done, including economics. Simple theoretical models that could be solved by hand are now being replaced by enormous simulation models that have to be processed by computers. It is now commonplace to devise models with systems of dozens of nonlinear equations that are literally impossible to solve analytically, and just solve them iteratively with computer software.

But one application of this technology that I believe is currently underutilized is video games.

As a culture, we still have the impression that video games are for children; even games like Dragon Age and Grand Theft Auto that are explicitly for adults (and really quite inappropriate for children!) are viewed as in some sense “childish”—that no serious adult would be involved with such frivolities. The same cultural critics who treat Shakespeare’s vagina jokes as the highest form of art are liable to dismiss the poignant critique of war in Call of Duty: Black Ops or the reflections on cultural diversity in Skyrim as mere puerility.

But video games are an art form with a fundamentally greater potential than any other. Now that graphics are almost photorealistic, there is really nothing you can do in a play or a film that you can’t do in a video game—and there is so, so much more that you can only do in a game.

In what other medium can we witness the spontaneous emergence and costly aftermath of a war? Yet EVE Online has this sort of event every year or so—just today there was a surprise attack involving hundreds of players that destroyed thousands of hours’—and dollars’—worth of starships, something that has more or less become an annual tradition. A few years ago there was a massive three-faction war that destroyed over $300,000 in ships and has now been commemorated as “the Bloodbath of B-R5RB”.

Indeed, the immersion and interactivity of games present an opportunity to do nothing less than experimental macroeconomics. For generations it has been impossible, or at least absurdly unethical, to ever experimentally manipulate an entire macroeconomy. But in a video game like EVE Online or Second Life, we can now do so easily, cheaply, and with little or no long-term harm to the participants—and we can literally control everything in the experiment. Forget the natural resource constraints and currency exchange rates—we can change the laws of physics if we want. (Indeed, EVE‘s whole trade network is built around FTL jump points, and in Second Life it’s a basic part of the interface that everyone can fly like Superman.)

This provides untold potential for economic research. With sufficient funding, we could build a game that would allow us to directly test hypotheses about the most fundamental questions of economics: How do governments emerge and maintain security? How is the rule of law sustained, and when can it be broken? What controls the value of money and the rate of inflation? What is the fundamental cause of unemployment, and how can it be corrected? What influences the rate of technological development? How can we maximize the rate of economic growth? What effect does redistribution of wealth have on employment and output? I envision a future where we can directly simulate these questions with thousands of eager participants, varying the subtlest of parameters and carrying out events over any timescale we like from seconds to centuries.

Nor is the potential of games in economics limited to research; it also has enormous untapped potential in education. I’ve already seen in my classes how tabletop-style games with poker chips can teach a concept better in a few minutes than hours of writing algebra derivations on the board; but custom-built video games could be made that would teach economics far better still, and to a much wider audience. In a well-designed game, people could really feel the effects of free trade or protectionism, not just on themselves as individuals but on entire nations that they control—watch their GDP numbers go down as they scramble to produce in autarky what they could have bought for half the price if not for the tariffs. They could see, in real time, how in the absence of environmental regulations and Pigovian taxes the actions of millions of individuals could despoil our planet for everyone.

Of course, games are fundamentally works of fiction, subject to the Fictional Evidence Fallacy and only as reliable as their authors make them. But so it is with all forms of art. I have no illusions about the fact that we will never get the majority of the population to regularly read peer-reviewed empirical papers. But perhaps if we are clever enough in the games we offer them to play, we can still convey some of the knowledge that those papers contain. We could also update and expand the games as new information comes in. Instead of complaining that our students are spending time playing games on their phones and tablets, we could actually make education into games that are as interesting and entertaining as the ones they would have been playing. We could work with the technology instead of against it. And in a world where more people have access to a smartphone than to a toilet, we could finally bring high-quality education to the underdeveloped world quickly and cheaply.

Rapid growth in computing power has given us a gift of great potential. But soon our capacity will widen even further. Even if Moore’s Law slows down, computing power will continue to increase for a while yet. Soon enough, virtual reality will finally take off and we’ll have even greater depth of immersion available. The future is bright—if we can avoid this corporatist cyberpunk dystopia we seem to be hurtling toward, of course.

Student debt crisis? What student debt crisis?

Dec 18, JDN 2457741

As of this writing, I have over $99,000 in student loans. This is a good thing. It means that I was able to pay for my four years of college, and two years of a master’s program, in order to be able to start this coming five years of a PhD. When I have concluded these eleven years of postgraduate education and incurred six times the world per-capita income in debt, what then will become of me? Will I be left to live on the streets, destitute and overwhelmed by debt?

No. I’ll have a PhD. The average lifetime income of individuals with PhDs in the United States is $3.4 million. Indeed, the median annual income for economists in the US is almost exactly what I currently owe in debt—so if I save well, I could very well pay it off in just a few years. With an advanced degree in economics like mine, or in similarly high-paying fields such as physics, medicine, and law, one can expect the higher end of that scale, $4 million or more; with a degree in a less-lucrative field such as art, literature, history, or philosophy, one would have to settle for “only” about $3 million. The average lifetime income in the US for someone without any college education is only $1.2 million. So even in literature or history, a PhD is worth about $2 million in future income.

On average, an additional year of college results in a gain in lifetime future earnings of about 15% to 20%. Even when you adjust for interest rates and temporal discounting, this is a rate of return that would make any stock trader envious.
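To see why that return is so enviable even after discounting, here is a rough net-present-value sketch; the $100,000 cost, $50,000 of extra annual income, 40-year career, and 5% discount rate are all illustrative assumptions, not figures from the studies cited here:

```python
# Rough NPV of a college education (all inputs invented for illustration).
cost = 100_000       # paid up front
extra_income = 50_000  # extra income per year, from higher lifetime earnings
discount = 0.05      # annual discount rate
career_years = 40

# Discount each year's extra income back to the present, subtract the cost.
npv = -cost + sum(
    extra_income / (1 + discount) ** t for t in range(1, career_years + 1)
)
print(f"NPV of the degree: ${npv:,.0f}")
```

Even at a fairly steep 5% discount rate, the investment comes out hundreds of thousands of dollars ahead under these assumptions.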

Fitting the law of diminishing returns, the rates of return on education in poor countries are even larger, often mind-bogglingly huge; the increase in lifetime income from a year of college education in Botswana was estimated at 38%. This implies that someone who graduates from college in Botswana earns four times as much money as someone who only finished high school.
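The “four times” figure follows from compounding the 38% estimate over a four-year degree; a quick check:

```python
# A 38% income gain per year of college, compounded over a four-year degree.
gain_per_year = 0.38
factor = (1 + gain_per_year) ** 4  # about 3.6, i.e. roughly four times

print(f"Graduates earn about {factor:.1f}x as much as non-graduates")
```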

We who pay $100,000 to receive an additional $2 to $3 million can hardly be called unfortunate.

Indeed, we are mind-bogglingly fortunate; we have been given an opportunity to better ourselves and the society we live in that is all but unprecedented in human history, and granted only to a privileged few even today. Right now, only about half of adults in the most educated countries in the world (Canada, Russia, Israel, Japan, Luxembourg, South Korea, and the United States) ever go to college. Only 30% of Americans ever earn a bachelor’s degree, and as recently as 1975 that figure was only 20%. Worldwide, the majority of people never graduate from high school. The average length of schooling in developing countries today is six years—that is, sixth grade—and this is an enormous improvement from the two years of average schooling found in developing countries in 1950.

If we look a bit further back in history, the improvements in education are even more staggering. In the United States in 1910, only 13.5% of people graduated high school, and only 2.7% completed a bachelor’s degree. There was no student debt crisis then, to be sure—because there were no college students.

Indeed, I have been underestimating the benefits of education thus far, because education is both a public and private good. The figures I’ve just given have been only the private financial return on education—the additional income received by an individual because they went to college. But there is also a non-financial return, such as the benefits of working in a more appealing or exciting career and the benefits of learning for its own sake. The reason so many people do go into history and literature instead of economics and physics very likely has to do with valuing these other aspects of education as highly as or even more highly than financial income, and it is entirely rational for people to do so. (An interesting survey question I’ve alas never seen asked: “How much money would we have to give you right now to convince you to quit working in philosophy for the rest of your life?”)

Yet even more important is the public return on education, the increased productivity and prosperity of our society as a result of greater education—and these returns are enormous. For every $1 spent on education in the US, the economy grows by an estimated $1.50. Public returns on college education worldwide are on the order of 10%-20% per year of education. This is over and above the 15-20% return already being made by the individuals going to school. This means that raising the average level of education in a country by just one year raises that country’s income by between 25% and 40%.

Indeed, perhaps the simplest way to understand the enormous social benefits of education is to note the strong correlation between education level and income level. This graph comes from the UN Human Development Report Data Explorer; it plots the HDI education index (which ranges from 0, least educated, to 1, most educated) and the per-capita GDP at purchasing power parity (on a log scale, so that each increment corresponds to a proportional increase in GDP); as you can see, educated countries tend to be rich countries, and vice-versa.


Of course, income drives education just as education drives income. But more detailed econometric studies generally (though not without some controversy) show the same basic result: The more educated a country’s people become, the richer that country becomes.

And indeed, the United States is a spectacularly rich country. The figure of “$1 trillion in college debt” sounds alarming (and has been used to such effect in many a news article, ranging from the New York Daily News, Slate, and POLITICO to USA Today and CNN all the way to Bloomberg, MarketWatch, and Business Insider, and even getting support from the Consumer Financial Protection Bureau and The Federal Reserve Bank of New York).

But the United States has a total GDP of over $18.6 trillion, and total net wealth somewhere around $84 trillion. Is it really so alarming that our nation’s most important investment would result in debt of less than two percent of our total nation’s wealth? Democracy Now asks who is getting rich off of $1.3 trillion in student debt? All of us—the students especially.

In fact, the probability of defaulting on student loans is inversely proportional to the amount of loans a student has. Students with over $100,000 in student debt default only 18% of the time, while students with less than $5,000 in student debt default 34% of the time. This should be shocking to those who think that we have a crisis of too much student debt; if student debt were an excess burden that is imposed upon us for little gain, default rates should rise as borrowing amounts increase, as we observe, for example, with credit cards: there is a positive correlation between carrying higher balances and being more likely to default. (This also raises doubts about the argument that higher debt loads should carry higher interest rates—why, if the default rate doesn’t go up?) But it makes perfect sense if you realize that college is an investment—indeed, almost certainly both the most profitable and the most socially responsible investment most people will ever have the opportunity to make. More debt means you had access to more credit to make a larger investment—and therefore your payoff was greater and you were more likely to be able to repay the debt.

Yes, job prospects were bad for college graduates right after the Great Recession—because it was right after the Great Recession, and job prospects were bad for everyone. Indeed, the unemployment rate for people with college degrees was substantially lower than for those without college degrees, all the way through the Second Depression. The New York Times has a nice little gadget where you can estimate the unemployment rate for college graduates; my hint for you is that I just said it’s lower, and I still guessed too high. There was variation across fields, of course; unsurprisingly computer science majors did extremely well and humanities majors did rather poorly. Underemployment was a big problem, but again, clearly because of the recession, not because going to college was a mistake. In fact, unemployment for college graduates has always been so much lower than unemployment for high school graduates that the maximum unemployment rate for young college graduates (about 9%) is less than the minimum unemployment rate for young high school graduates (10%) over the entire period since the year 2000. Young high school dropouts have fared even worse; their minimum unemployment rate since 2000 was 18%, while their maximum was a terrifying Great Depression-level of 32%. Education isn’t just a good investment—it’s an astonishingly good investment.

There are a lot of things worth panicking about, now that Trump has been elected President. But student debt isn’t one of them. This is a very smart investment, made with a reasonable portion of our nation’s wealth. If you have student debt like I do, make sure you have enough—or otherwise you might not be able to pay it back.

“The cake is a lie”: The fundamental distortions of inequality

July 13, JDN 2457583

Inequality of wealth and income, especially when it is very large, fundamentally and radically distorts outcomes in a capitalist market. I’ve already alluded to this matter in previous posts on externalities and marginal utility of wealth, but it is so important I think it deserves to have its own post. In many ways this marks a paradigm shift: You can’t think about economics the same way once you realize it is true.

To motivate what I’m getting at, I’ll expand upon an example from a previous post.

Suppose there are only two goods in the world; let’s call them “cake” (K) and “money” (M). Then suppose there are three people, Baker, who makes cakes, Richie, who is very rich, and Hungry, who is very poor. Furthermore, suppose that Baker, Richie and Hungry all have exactly the same utility function, which exhibits diminishing marginal utility in cake and money. To make it more concrete, let’s suppose that this utility function is logarithmic, specifically: U = 10*ln(K+1) + ln(M+1)

The only difference between them is in their initial endowments: Baker starts with 10 cakes, Richie starts with $100,000, and Hungry starts with $10.

Therefore their starting utilities are:

U(B) = 10*ln(10+1) + ln(0+1) = 23.98

U(R) = 10*ln(0+1) + ln(100,000+1) = 11.51

U(H) = 10*ln(0+1) + ln(10+1) = 2.40

Thus, the total happiness is the sum of these: U = 37.89

Now let’s ask two very simple questions:

1. What redistribution would maximize overall happiness?
2. What redistribution will actually occur if the three agents trade rationally?

If multiple agents have the same diminishing marginal utility function, it’s actually a simple and deep theorem that the total will be maximized if they split the wealth exactly evenly. In the following blockquote I’ll prove the simplest case, which is two agents and one good; it’s an incredibly elegant proof:

Given: for all x, f(x) > 0, f'(x) > 0, f''(x) < 0.

Maximize: f(x) + f(A - x) for fixed A

Setting the derivative with respect to x equal to zero:

f'(x) - f'(A - x) = 0

f'(x) = f'(A - x)

Since f''(x) < 0, the second derivative of the objective is f''(x) + f''(A - x) < 0, so this critical point is a maximum.

Since f''(x) < 0, f' is strictly decreasing; therefore f' is injective.

x = A - x

x = A/2


This can be generalized to any number of agents and to multiple goods. Thus, in this case overall happiness is maximized if the cakes and money are both evenly distributed, so that each person gets 3 1/3 cakes and $33,336.67.
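The two-agent theorem is also easy to verify numerically. The sketch below grid-searches f(x) + f(A - x) for an increasing, concave f; the particular f and A are just illustrative choices of mine:

```python
import math

# Numerical check of the equal-split theorem for two agents and one good:
# for an increasing, concave f, the sum f(x) + f(A - x) peaks at x = A/2.
f = lambda x: math.log(x + 1)   # an increasing, concave utility (illustrative)
A = 10.0                        # total amount of the good (illustrative)

grid = [i * A / 10_000 for i in range(10_001)]
best_x = max(grid, key=lambda x: f(x) + f(A - x))
print(best_x)  # 5.0, i.e. A/2
```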

The total utility in that case is:

3 * (10*ln(10/3+1) + ln(33,336.67+1)) = 3 * (14.66 + 10.414) = 3 * (25.074) = 75.22
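As a check, the same computation in Python (using the utility function from the setup; the tiny discrepancy from the figure above comes from rounding the intermediate values):

```python
import math

def utility(cakes, money):
    # Same utility function as in the setup: U = 10*ln(K+1) + ln(M+1)
    return 10 * math.log(cakes + 1) + math.log(money + 1)

# Equal split: each person gets 10/3 cakes and a third of the $100,010.
per_person = utility(10 / 3, 100_010 / 3)
print(round(per_person, 2))      # 25.08
print(round(3 * per_person, 2))  # 75.23, i.e. the ~75.22 figure up to rounding
```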

That’s considerably better than our initial distribution (almost twice as good). Now, how close do we get by rational trade?

Each person is willing to trade up until the point where their marginal utility of cake is equal to their marginal utility of money. The price of cake will be set by the respective marginal utilities.

In particular, let’s look at the trade that will occur between Baker and Richie. They will trade until their marginal rate of substitution is the same.

The actual algebra involved is obnoxious (if you're really curious, here are some solved exercises of similar trade problems), so let's just skip to the end. (I rushed through, so I'm not actually totally sure I got it right, but to make my point the precise numbers aren't important.)

Basically what happens is that Richie pays an exorbitant price of $10,000 per cake, buying half the cakes with half of his money.

Baker's new utility and Richie's new utility are thus the same:

U(R) = U(B) = 10*ln(5+1) + ln(50,000+1) = 17.92 + 10.82 = 28.74

What about Hungry? Yeah, well, he doesn't have $10,000. If cakes are infinitely divisible, he can buy up to 1/1000 of a cake. But it turns out that even that isn't worth doing (it would cost too much for what he gains from it), so he may as well buy nothing, and his utility remains 2.40.
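Again, easy to verify. A Python sketch of the post-trade utilities and of Hungry's choice (names are mine):

```python
import math

def utility(cakes, money):
    # Same utility function as in the setup: U = 10*ln(K+1) + ln(M+1)
    return 10 * math.log(cakes + 1) + math.log(money + 1)

# After the trade, Baker and Richie each hold 5 cakes and $50,000.
print(round(utility(5, 50_000), 2))  # 28.74

# Hungry's options at $10,000 per cake: spend his whole $10 on 1/1000
# of a cake, or keep the money and buy nothing.
buy  = utility(1 / 1000, 0)  # ≈ 0.01
hold = utility(0, 10)        # ≈ 2.4
print(buy < hold)  # True: buying isn't worth it
```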

Hungry wanted cake just as much as Richie did, and because Hungry has so much less, each new bite would have brought him more happiness than it brings Richie. Neoclassical economists promised him that markets were efficient and optimal, and so he thought he'd get the cake he needs—but the cake is a lie.

The total utility is therefore:

U = U(B) + U(R) + U(H)

U = 28.74 + 28.74 + 2.40

U = 59.88

Note three things about this result: First, it is more than where we started at 37.89—trade increases utility. Second, both Richie and Baker are better off than they were—trade is Pareto-improving. Third, the total is less than the optimal value of 75.22—trade is not utility-maximizing in the presence of inequality. This is a general theorem that I could prove formally, if I wanted to bore and confuse all my readers. (Perhaps someday I will try to publish a paper doing that.)

This result is incredibly radical—it basically goes against the core of neoclassical welfare theory, or at least of all its applications to real-world policy—so let me be absolutely clear about what I’m saying, and what assumptions I had to make to get there.

I am saying that if people start with different amounts of wealth, the trades they would willfully engage in, acting purely under their own self interest, would not maximize the total happiness of the population. Redistribution of wealth toward equality would increase total happiness.

First, I had to assume that we could simply redistribute goods however we like without affecting the total amount of goods. This is wildly unrealistic, which is why I’m not actually saying we should reduce inequality to zero (as would follow if you took this result completely literally). Ironically, this is an assumption that most neoclassical welfare theory agrees with—the Second Welfare Theorem only makes any sense in a world where wealth can be magically redistributed between people without any harmful economic effects. If you weaken this assumption, what you find is basically that we should redistribute wealth toward equality, but beware of the tradeoff between too much redistribution and too little.

Second, I had to assume that there’s such a thing as “utility”—specifically, interpersonally comparable cardinal utility. In other words, I had to assume that there’s some way of measuring how much happiness each person has, and meaningfully comparing them so that I can say whether taking something from one person and giving it to someone else is good or bad in any given circumstance.

This is the assumption neoclassical welfare theory generally does not accept; instead they use ordinal utility, on which we can only say whether things are better or worse, but never by how much. Thus, their only way of determining whether a situation is better or worse is Pareto efficiency, which I discussed in a post a couple years ago. The change from the situation where Baker and Richie trade and Hungry is left in the lurch to the situation where all share cake and money equally in socialist utopia is not a Pareto-improvement. Richie and Baker are slightly worse off with 25.07 utilons in the latter scenario, while they had 28.74 utilons in the former.

Third, I had to assume selfishness—which is again fairly unrealistic, but again not something neoclassical theory disagrees with. If you weaken this assumption and say that people are at least partially altruistic, you can get the result where instead of buying things for themselves, people donate money to help others out, and eventually the whole system achieves optimal utility by willful actions. (It depends just how altruistic people are, as well as how unequal the initial endowments are.) This actually is basically what I’m trying to make happen in the real world—I want to show people that markets won’t do it on their own, but we have the chance to do it ourselves. But even then, it would go a lot faster if we used the power of government instead of waiting on private donations.

Also, I'm ignoring externalities, which are a different type of market failure, one that in no way conflicts with this type of failure. Indeed, there are three basic functions of government in my view: One is to maintain security. The second is to cancel externalities. The third is to redistribute wealth. The DOD, the EPA, and the SSA, basically. One could also add macroeconomic stability as a fourth core function—the Fed.

One way to escape my theorem would be to deny interpersonally comparable utility, but this makes measuring welfare in any way (including the usual methods of consumer surplus and GDP) meaningless, and furthermore results in the ridiculous claim that we have no way of being sure whether Bill Gates is happier than a child starving and dying of malaria in Burkina Faso, because they are two different people and we can’t compare different people. Far more reasonable is not to believe in cardinal utility, meaning that we can say an extra dollar makes you better off, but we can’t put a number on how much.

And indeed, the difficulty of even finding a unit of measure for utility would seem to support this view: Should I use QALY? DALY? A Likert scale from 0 to 10? There is no known measure of utility that is without serious flaws and limitations.

But it’s important to understand just how strong your denial of cardinal utility needs to be in order for this theorem to fail. It’s not enough that we can’t measure precisely; it’s not even enough that we can’t measure with current knowledge and technology. It must be fundamentally impossible to measure. It must be literally meaningless to say that taking a dollar from Bill Gates and giving it to the starving Burkinabe would do more good than harm, as if you were asserting that triangles are greener than schadenfreude.

Indeed, the whole project of welfare theory doesn’t make a whole lot of sense if all you have to work with is ordinal utility. Yes, in principle there are policy changes that could make absolutely everyone better off, or make some better off while harming absolutely no one; and the Pareto criterion can indeed tell you that those would be good things to do.

But in reality, such policies almost never exist. In the real world, almost anything you do is going to harm someone. The Nuremberg trials harmed Nazi war criminals. The invention of the automobile harmed horse trainers. The discovery of scientific medicine took jobs away from witch doctors. Conversely, almost any policy is going to benefit someone. The Great Leap Forward was a pretty good deal for Mao. The purges advanced the self-interest of Stalin. Slavery was profitable for plantation owners. So if you can only evaluate policy outcomes based on the Pareto criterion, you are literally committed to saying that there is no difference in welfare between the Great Leap Forward and the invention of the polio vaccine.

One way around it (that might actually be a good kludge for now, until we get better at measuring utility) is to broaden the Pareto criterion: We could use a majoritarian criterion, where you care about the number of people benefited versus harmed, without worrying about magnitudes—but this can lead to Tyranny of the Majority. Or you could use the Difference Principle developed by Rawls: find an ordering where we can say that some people are better or worse off than others, and then make the system so that the worst-off people are benefited as much as possible. I can think of a few cases where I wouldn’t want to apply this criterion (essentially they are circumstances where autonomy and consent are vital), but in general it’s a very good approach.

Neither of these depends upon cardinal utility, so have you escaped my theorem? Well, no, actually. You’ve weakened it, to be sure—it is no longer a statement about the fundamental impossibility of welfare-maximizing markets. But applied to the real world, people in Third World poverty are obviously the worst off, and therefore worthy of our help by the Difference Principle; and there are an awful lot of them and very few billionaires, so majority rule says take from the billionaires. The basic conclusion that it is a moral imperative to dramatically reduce global inequality remains—as does the realization that the “efficiency” and “optimality” of unregulated capitalism is a chimera.