Markets value rich people more

Feb 26, JDN 2457811

Competitive markets are optimal at maximizing utility, as long as you value rich people more.

That is literally a theorem in neoclassical economics. I had previously thought that this was something most economists didn’t realize; I had delusions of grandeur that maybe I could finally convince them that this is the case. But no, it turns out this is actually a well-known finding; it’s just that somehow nobody seems to care. Or if they do care, they never talk about it. For all the thousands of papers and articles about the distortions created by minimum wage and capital gains tax, you’d think someone could spare the time to talk about the vastly larger fundamental distortions created by the structure of the market itself.

It’s not as if this is something completely hopeless we could never deal with. A basic income would go a long way toward correcting this distortion, especially if coupled with highly progressive taxes. By creating a hard floor and a soft ceiling on income, you can reduce the inequality that makes these distortions so large.

The basics of the theorem are quite straightforward, so I think it’s worth explaining them here. It’s extremely general; it applies anywhere that goods are allocated by market prices and different individuals have wildly different amounts of wealth.

Suppose that each person has a certain amount of wealth W to spend. Person 1 has W1, person 2 has W2, and so on. They all have some amount of happiness, defined by a utility function, which I’ll assume is only dependent on wealth; this is a massive oversimplification of course, but it wouldn’t substantially change my conclusions to include other factors—it would just make everything more complicated. (In fact, including altruistic motives would make the whole argument stronger, not weaker.) Thus I can write each person’s utility as a function U(W). The rate of change of this utility as wealth increases, the marginal utility of wealth, is denoted U'(W).

By the law of diminishing marginal utility, the marginal utility of wealth U'(W) is decreasing. That is, the more wealth you have, the less each new dollar is worth to you.

Now suppose people are buying goods. Each good C provides some amount of marginal utility U'(C) to the person who buys it. This can vary across individuals; some people like Pepsi, others Coke. This marginal utility is also decreasing; a house is worth a lot more to you if you are living in the street than if you already have a mansion. Ideally we would want the goods to go to the people who want them the most—but as you’ll see in a moment, markets systematically fail to do this.

If people are making their purchases rationally, each person’s willingness-to-pay P for a given good C will be equal to their marginal utility of that good, divided by their marginal utility of wealth:

P = U'(C)/U'(W)
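To make this concrete, here is a minimal sketch in Python, assuming logarithmic utility (so U'(W) = 1/W) and wealth figures invented purely for illustration:

```python
def marginal_utility_of_wealth(W):
    # Logarithmic utility: U(W) = ln(W), so U'(W) = 1/W
    return 1.0 / W

def willingness_to_pay(marginal_utility_of_good, W):
    # P = U'(C) / U'(W)
    return marginal_utility_of_good / marginal_utility_of_wealth(W)

# Two people who get exactly the same subjective benefit from the good:
u_good = 0.001

poor_wtp = willingness_to_pay(u_good, W=10_000)
rich_wtp = willingness_to_pay(u_good, W=1_000_000)

print(poor_wtp)  # 10.0
print(rich_wtp)  # 1000.0
```

With log utility, willingness-to-pay is simply proportional to wealth: the person with 100 times the wealth bids 100 times as much for the very same subjective benefit.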

Now consider this from the perspective of society as a whole. If you wanted to maximize total utility, you’d equalize marginal utility across individuals; this is the first-order condition for a maximum. The idea is that if marginal utility is higher for one person, you should give that person more, because the benefit of what you give them will be larger that way; and if marginal utility is lower for another person, you should give that person less, because the benefit of what you give them will be smaller. When everyone’s marginal utility is equal, you are at the maximum.

But market prices don’t actually do this. Instead they equalize over willingness-to-pay. So if you’ve got two individuals 1 and 2, instead of having this:

U'(C1) = U'(C2)

you have this:

P1 = P2

which translates to:

U'(C1)/U'(W1) = U'(C2)/U'(W2)

If the marginal utilities of wealth were the same, U'(W1) = U'(W2), we’d be fine; these would give the same results. But that would only happen if W1 = W2, that is, if the two individuals had the same amount of wealth.

Now suppose we were instead maximizing weighted utility, where each person gets a weighting factor A based on how “important” they are or something. If your A is higher, your utility matters more. If we maximized this new weighted utility, we would end up like this:

A1*U'(C1) = A2*U'(C2)

If person 1’s utility counts for more (A1 > A2), their marginal utility also counts for more. This seems very strange; why are we valuing some people more than others? On what grounds?

Yet this is effectively what we’ve already done by using market prices. Just set:

A = 1/U'(W)

Since marginal utility of wealth is decreasing, 1/U'(W) is higher precisely when W is higher.

How much higher? Well, that depends on the utility function. The two utility functions I find most plausible are logarithmic and harmonic. (Actually I think both apply, one to other-directed spending and the other to self-directed spending.)

If utility is logarithmic:

U = ln(W)

Then marginal utility is inversely proportional:

U'(W) = 1/W

In that case, your value as a human being, as spoken by the One True Market, is precisely equal to your wealth:

A = 1/U'(W) = W

If utility is harmonic, matters are even more severe.

U(W) = 1-1/W

Marginal utility goes as the inverse square of wealth:

U'(W) = 1/W^2

And thus your value, according to the market, is equal to the square of your wealth:

A = 1/U'(W) = W^2

What are we really saying here? Hopefully no one actually believes that Bill Gates is really morally worth 400 trillion times as much as a starving child in Malawi, as the calculation from harmonic utility would imply. (Bill Gates himself certainly doesn’t!) Even the logarithmic utility estimate saying that he’s worth 20 million times as much is pretty hard to believe.

But implicitly, the market “believes” that, because when it decides how to allocate resources, something that is worth 1 microQALY to Bill Gates (about the value of a nickel dropped on the floor to you or me) but worth 20 QALY (twenty years of life!) to the Malawian child, will in either case be priced at $8,000, and since the child doesn’t have $8,000, it will probably go to Mr. Gates. Perhaps a middle-class American could purchase it, provided it was worth some 0.3 QALY to them.

Now consider that this is happening in every transaction, for every good, in every market. Goods are not being sold to the people who get the most value out of them; they are being sold to the people who have the most money.

And suddenly, the entire edifice of “market efficiency” comes crashing down like a house of cards. A global market that quite efficiently maximizes willingness-to-pay is so thoroughly out of whack when it comes to actually maximizing utility that massive redistribution of wealth could enormously increase human welfare, even if it turned out to cut our total output in half—if utility is harmonic, even if it cut our total output to one-tenth its current value.

The only way to escape this is to argue that marginal utility of wealth is not decreasing, or at least decreasing very, very slowly. Suppose for instance that utility goes as the 0.9 power of wealth:

U(W) = W^0.9

Then marginal utility goes as the -0.1 power of wealth:

U'(W) = 0.9 W^(-0.1)

On this scale, Bill Gates is only worth about 5 times as much as the Malawian child, which in his particular case might actually be too small—if a trolley is about to kill either Bill Gates or 5 Malawian children, I think I save Bill Gates, because he’ll go on to save many more than 5 Malawian children. (Of course, substitute Donald Trump or Charles Koch and I’d let the trolley run over him without a second thought if even a single child is at stake, so it’s not actually a function of wealth.) In any case, a 5 to 1 range across the whole range of human wealth is really not that big a deal. It would introduce some distortions, but not enough to justify any redistribution that would meaningfully reduce overall output.
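These ratios are easy to check. Here is a quick sketch; the wealth figures (roughly $80 billion for Gates, $4,000 for the child) are ballpark assumptions for illustration, not precise data:

```python
gates = 80e9   # assumed net worth, ~$80 billion
child = 4e3    # assumed lifetime wealth, ~$4,000

# Implicit market weight A = 1/U'(W) under each utility function:
log_ratio      = gates / child           # U = ln(W):    A = W
harmonic_ratio = (gates / child) ** 2    # U = 1 - 1/W:  A = W^2
power_ratio    = (gates / child) ** 0.1  # U = W^0.9:    A = W^0.1 / 0.9

print(f"{log_ratio:.3g}")       # 2e+07  -- 20 million times
print(f"{harmonic_ratio:.3g}")  # 4e+14  -- 400 trillion times
print(f"{power_ratio:.3g}")     # 5.37   -- only about 5 times
```

Only the wealth ratio matters for each comparison, which is why the exact dollar figures assumed here barely affect the conclusion.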

Of course, that commits you to saying that $1 to a Malawian child is only worth about $1.50 to you or me and $5 to Bill Gates. If you can truly believe this, then perhaps you can sleep at night accepting the outcomes of neoclassical economics. But can you, really, believe that? If you had the choice between an intervention that would give $100 to each of 10,000 children in Malawi, and another that would give $50,000 to each of 100 billionaires, would you really choose the billionaires? Do you really think that the world would be better off if you did?

We don’t have precise measurements of marginal utility of wealth, unfortunately. At the moment, I think logarithmic utility is the safest assumption; it’s about the slowest decrease that is consistent with the data we have and it is very intuitive and mathematically tractable. Perhaps I’m wrong and the decrease is even slower than that, say W^(-0.5) (then the market only values billionaires as worth thousands of times as much as starving children). But there’s no way you can go as far as it would take to justify our current distribution of wealth. W^(-0.1) is simply not a plausible value.

And this means that free markets, left to their own devices, will systematically fail to maximize human welfare. We need redistribution—a lot of redistribution. Don’t take my word for it; the math says so.

What good are macroeconomic models? How could they be better?

Dec 11, JDN 2457734

One thing that I don’t think most people know, but which is immediately obvious to any student of economics at the college level or above, is that there is a veritable cornucopia of different macroeconomic models. There are growth models (the Solow model, the Harrod-Domar model, the Ramsey model), monetary policy models (IS-LM, aggregate demand-aggregate supply), trade models (the Mundell-Fleming model, the Heckscher-Ohlin model), large-scale computational models (dynamic stochastic general equilibrium, agent-based computational economics), and I could go on.

This immediately raises the question: What are all these models for? What good are they?

A cynical view might be that they aren’t useful at all, that this is all false mathematical precision which makes economics persuasive without making it accurate or useful. And with such a proliferation of models and contradictory conclusions, I can see why such a view would be tempting.

But many of these models are useful, at least in certain circumstances. They aren’t completely arbitrary. Indeed, one of the litmus tests of the last decade has been how well the models held up against the events of the Great Recession and following Second Depression. The Keynesian and cognitive/behavioral models did rather well, albeit with significant gaps and flaws. The Monetarist, Real Business Cycle, and most other neoclassical models failed miserably, as did Austrian and Marxist notions so fluid and ill-defined that I’m not sure they deserve to even be called “models”. So there is at least some empirical basis for deciding what assumptions we should be willing to use in our models. Yet even if we restrict ourselves to Keynesian and cognitive/behavioral models, there are still a great many to choose from, which often yield inconsistent results.

So let’s compare with a science that is uncontroversially successful: Physics. How do mathematical models in physics compare with mathematical models in economics?

Well, there are still a lot of models, first of all. There’s the Bohr model, the Schrödinger equation, the Dirac equation, Newtonian mechanics, Lagrangian mechanics, Bohmian mechanics, Maxwell’s equations, Faraday’s law, Coulomb’s law, the Einstein field equations, the Minkowski metric, the Schwarzschild metric, the Rindler metric, Feynman-Wheeler theory, the Navier-Stokes equations, and so on. So a cornucopia of models is not inherently a bad thing.

Yet, there is something about physics models that makes them more reliable than economics models.

Partly it is that the systems physicists study are literally two dozen orders of magnitude or more smaller and simpler than the systems economists study. Their task is inherently easier than ours.

But it’s not just that; their models aren’t just simpler—actually they often aren’t. The Navier-Stokes equations are a lot more complicated than the Solow model. They’re also clearly a lot more accurate.

The feature that models in physics seem to have that models in economics do not is something we might call nesting, or maybe consistency. Models in physics don’t come out of nowhere; you can’t just make up your own new model based on whatever assumptions you like and then start using it—which you very much can do in economics. Models in physics are required to fit consistently with one another, and usually inside one another, in the following sense:

The Dirac equation strictly generalizes the Schrödinger equation, which strictly generalizes the Bohr model. Bohmian mechanics is consistent with quantum mechanics, which strictly generalizes Lagrangian mechanics, which generalizes Newtonian mechanics. The Einstein field equations are consistent with Maxwell’s equations and strictly generalize the Minkowski, Schwarzschild, and Rindler metrics. Maxwell’s equations strictly generalize Faraday’s law and Coulomb’s law.

In other words, there are a small number of canonical models—the Dirac equation, Maxwell’s equations, and the Einstein field equations, essentially—inside which all other models are nested. The simpler models like Coulomb’s law and Newtonian mechanics are not contradictory with these canonical models; they are contained within them, subject to certain constraints (such as macroscopic systems far below the speed of light).

This is something I wish more people understood (I blame Kuhn for confusing everyone about what paradigm shifts really entail); Einstein did not overturn Newton’s laws, he extended them to domains where they previously had failed to apply.

This is why it is sensible to say that certain theories in physics are true; they are the canonical models that underlie all known phenomena. Other models can be useful, but not because we are relativists about truth or anything like that; Newtonian physics is a very good approximation of the Einstein field equations at the scale of many phenomena we care about, and is also much more mathematically tractable. If we ever find ourselves in situations where Newton’s equations no longer apply—near a black hole, traveling near the speed of light—then we know we can fall back on the more complex canonical model; but when the simpler model works, there’s no reason not to use it.

There are still very serious gaps in the knowledge of physics; in particular, there is a fundamental gulf between quantum mechanics and the Einstein field equations that has been unresolved for decades. A solution to this “quantum gravity problem” would be essentially a guaranteed Nobel Prize. So even a canonical model can be flawed, and can be extended or improved upon; the result is then a new canonical model which we now regard as our best approximation to truth.

Yet the contrast with economics is still quite clear. We don’t have one or two or even ten canonical models to refer back to. We can’t say that the Solow model is an approximation of some greater canonical model that works for these purposes—because we don’t have that greater canonical model. We can’t say that agent-based computational economics is approximately right, because we have nothing to approximate it to.

I went into economics thinking that neoclassical economics needed a new paradigm. I have now realized something much more alarming: Neoclassical economics doesn’t really have a paradigm. Or if it does, it’s a very informal paradigm, one that is expressed by the arbitrary judgments of journal editors, not one that can be written down as a series of equations. We assume perfect rationality, except when we don’t. We assume constant returns to scale, except when that doesn’t work. We assume perfect competition, except when that doesn’t get the results we wanted. The agents in our models are infinite identical psychopaths, and they are exactly as rational as needed for the conclusion I want.

This is quite likely why there is so much disagreement within economics. When you can permute the parameters however you like with no regard to a canonical model, you can more or less draw whatever conclusion you want, especially if you aren’t tightly bound to empirical evidence. I know a great many economists who are sure that raising minimum wage results in large disemployment effects, because the models they believe in say that it must, even though the empirical evidence has been quite clear that these effects are small if they are present at all. If we had a canonical model of employment that we could calibrate to the empirical evidence, that couldn’t happen anymore; there would be a coefficient I could point to that would refute their argument. But when every new paper comes with a new model, there’s no way to do that; one set of assumptions is as good as another.

Indeed, as I mentioned in an earlier post, a remarkable number of economists seem to embrace this relativism. “There is no true model,” they say; “we do what is useful.” Recently I encountered a book by the eminent economist Deirdre McCloskey which, though I confess I haven’t read it in its entirety, appears to be trying to argue that economics is just a meaningless language game that doesn’t have or need to have any connection with actual reality. (If any of you have read it and think I’m misunderstanding it, please explain. As it is I haven’t bought it, for a reason any economist should respect: I am disinclined to incentivize such writing.)

Creating such a canonical model would no doubt be extremely difficult. Indeed, it is a task that would require the combined efforts of hundreds of researchers and could take generations to achieve. The true equations that underlie the economy could be totally intractable even for our best computers. But quantum mechanics wasn’t built in a day, either. The key challenge here lies in convincing economists that this is something worth doing—that if we really want to be taken seriously as scientists we need to start acting like them. Scientists believe in truth, and they are trying to find it out. While not immune to tribalism or ideology or other human limitations, they resist them as fiercely as possible, always turning back to the evidence above all else. And in their combined strivings, they attempt to build a grand edifice, a universal theory to stand the test of time—a canonical model.

How do people think about probability?

Nov 27, JDN 2457690

(This topic was chosen by vote of my Patreons.)

In neoclassical theory, it is assumed (explicitly or implicitly) that human beings judge probability in something like the optimal Bayesian way: We assign prior probabilities to events, and then when confronted with evidence we use the observed data to update our prior probabilities to posterior probabilities. Then, when we have to make decisions, we maximize our expected utility subject to our posterior probabilities.

This, of course, is nothing like how human beings actually think. Even very intelligent, rational, numerate people only engage in a vague approximation of this behavior, and only when dealing with major decisions likely to affect the course of their lives. (Yes, I literally decide which universities to attend based upon formal expected utility models. Thus far, I’ve never been dissatisfied with a decision made that way.) No one decides what to eat for lunch or what to do this weekend based on formal expected utility models—or at least I hope they don’t, because at that point the computational cost far exceeds the expected benefit.

So how do human beings actually think about probability? Well, a good place to start is to look at ways in which we systematically deviate from expected utility theory.

A classic example is the Allais paradox. See if it applies to you.

In game A, you get $1 million, guaranteed.

In game B, you have a 10% chance of getting $5 million, an 89% chance of getting $1 million, but now a 1% chance of getting nothing.

Which do you prefer, game A or game B?

In game C, you have an 11% chance of getting $1 million, and an 89% chance of getting nothing.

In game D, you have a 10% chance of getting $5 million, and a 90% chance of getting nothing.

Which do you prefer, game C or game D?

I have to think about it for a little while and do some calculations, and it’s still very hard, because it depends crucially on my projected lifetime income (which could easily exceed $3 million with a PhD, especially in economics) and the precise form of my marginal utility (I think I have constant relative risk aversion, but I’m not sure what parameter to use). In general I think I want to choose game A and game C, but I actually feel really ambivalent, because it’s not hard to find plausible parameters for my utility under which I should go for the gamble.

But if you’re like most people, you choose game A and game D.

There is no coherent expected utility by which you would do this.

Why? Either a 10% chance of $5 million instead of $1 million is worth risking a 1% chance of nothing, or it isn’t. If it is, you should play B and D. If it’s not, you should play A and C. I can’t tell you for sure whether it is worth it—I can’t even fully decide for myself—but it either is or it isn’t.
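The “either it is or it isn’t” claim can be checked mechanically: for any assignment of utilities to the three outcomes whatsoever, preferring A over B is algebraically equivalent to preferring C over D. Here is a sketch that verifies this numerically:

```python
import random

# u0, u1, u5 are the utilities of $0, $1 million, and $5 million.

def prefers_A_over_B(u0, u1, u5):
    # A: U($1M) for sure.  B: 10% of $5M, 89% of $1M, 1% of nothing.
    return u1 > 0.10 * u5 + 0.89 * u1 + 0.01 * u0

def prefers_C_over_D(u0, u1, u5):
    # C: 11% of $1M, 89% of nothing.  D: 10% of $5M, 90% of nothing.
    return 0.11 * u1 + 0.89 * u0 > 0.10 * u5 + 0.90 * u0

random.seed(0)
for _ in range(100_000):
    u0, u1, u5 = sorted(random.random() for _ in range(3))  # any increasing utility
    assert prefers_A_over_B(u0, u1, u5) == prefers_C_over_D(u0, u1, u5)

print("Preferring A over B always coincides with preferring C over D.")
```

The algebra is two lines: subtract 0.89·U($1M) from both sides of the first comparison and 0.89·U($0) from both sides of the second, and both reduce to the same inequality, 0.11·U($1M) > 0.10·U($5M) + 0.01·U($0). So choosing A and D is inconsistent with every possible expected utility function.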

Yet most people have a strong intuition that they should take game A but game D. Why? What does this say about how we judge probability?

The leading theory in behavioral economics right now is cumulative prospect theory, developed by the great Kahneman and Tversky, who essentially founded the field of behavioral economics. It’s quite intimidating to try to go up against them—which is probably why we should force ourselves to do it. Fear of challenging the favorite theories of the great scientists before us is how science stagnates.

I wrote about it more in a previous post, but as a brief review, cumulative prospect theory says that instead of judging based on a well-defined utility function, we instead consider gains and losses as fundamentally different sorts of thing, and in three specific ways:

First, we are loss-averse; we feel a loss about twice as intensely as a gain of the same amount.

Second, we are risk-averse for gains, but risk-seeking for losses; we assume that gaining twice as much isn’t actually twice as good (which is almost certainly true), but we also assume that losing twice as much isn’t actually twice as bad (which is almost certainly false, and indeed contradicts the previous assumption).

Third, we judge probabilities as more important when they are close to certainty. We make a large distinction between a 0% probability and a 0.0000001% probability, but almost no distinction at all between a 41% probability and a 43% probability.

That last part is what I want to focus on for today. In Kahneman’s model, this is a continuous, monotonic function that maps 0 to 0 and 1 to 1, but systematically overestimates probabilities below 1/2 and systematically underestimates probabilities above 1/2, with the distortion changing fastest near 0 and 1.
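For concreteness, the standard functional form here is Tversky and Kahneman’s 1992 probability weighting function; take the code below as a sketch of the general shape, using their estimated parameter γ ≈ 0.61 for gains rather than as a definitive calibration:

```python
def weight(p, gamma=0.61):
    # Tversky & Kahneman (1992): w(p) = p^γ / (p^γ + (1-p)^γ)^(1/γ)
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

print(weight(0.0), weight(1.0))  # endpoints are preserved: 0.0 1.0
print(round(weight(0.01), 3))    # ~0.055: small chances loom large
print(round(weight(0.90), 3))    # ~0.712: near-certainty is discounted
```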

It looks something like this, where red is true probability and blue is subjective probability:

[Figure: cumulative prospect theory probability weighting curve]

I don’t believe this is actually how humans think, for two reasons:

  1. It’s too hard. Humans are astonishingly innumerate creatures, given the enormous processing power of our brains. It’s true that we have some intuitive capacity for “solving” very complex equations, but that’s almost all within our motor system—we can “solve a differential equation” when we catch a ball, but we have no idea how we’re doing it. But probability judgments are often made consciously, especially in experiments like the Allais paradox; and the conscious brain is terrible at math. It’s actually really amazing how bad we are at math. Any model of normal human judgment should assume from the start that we will not do complicated math at any point in the process. Maybe you can hypothesize that we do so subconsciously, but you’d better have a good reason for assuming that.
  2. There is no reason to do this. Why in the world would any kind of optimization system function this way? You start with perfectly good probabilities, and then instead of using them, you subject them to some bizarre, unmotivated transformation that makes them less accurate and costs computing power? You may as well hit yourself in the head with a brick.

So, why might it look like we are doing this? Well, my proposal, admittedly still rather half-baked, is that human beings don’t assign probabilities numerically at all; we assign them categorically.

You may call this, for lack of a better term, categorical prospect theory.

My theory is that people don’t actually have in their head “there is an 11% chance of rain today” (unless they specifically heard that from a weather report this morning); they have in their head “it’s fairly unlikely that it will rain today”.

That is, we assign some small number of discrete categories of probability, and fit things into them. I’m not sure what exactly the categories are, and part of what makes my job difficult here is that they may be fuzzy-edged and vary from person to person, but roughly speaking, I think they correspond to the sort of things psychologists usually put on Likert scales in surveys: Impossible, almost impossible, very unlikely, unlikely, fairly unlikely, roughly even odds, fairly likely, likely, very likely, almost certain, certain. If I’m putting numbers on these probability categories, they go something like this: 0, 0.001, 0.01, 0.10, 0.20, 0.50, 0.8, 0.9, 0.99, 0.999, 1.
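Here is a minimal sketch of what such a categorical judgment might look like in code; the labels and anchor values are the guesses from above, not established constants:

```python
CATEGORIES = [
    (0.0, "impossible"), (0.001, "almost impossible"), (0.01, "very unlikely"),
    (0.10, "unlikely"), (0.20, "fairly unlikely"), (0.50, "roughly even odds"),
    (0.80, "fairly likely"), (0.90, "likely"), (0.99, "very likely"),
    (0.999, "almost certain"), (1.0, "certain"),
]

def categorize(p):
    # Slot a numeric probability into the nearest category anchor.
    anchor, label = min(CATEGORIES, key=lambda c: abs(c[0] - p))
    return label

print(categorize(0.11))   # unlikely
print(categorize(0.40))   # roughly even odds
print(categorize(0.60))   # roughly even odds -- a 20-point shift that changes nothing
print(categorize(0.999))  # almost certain
```

Note how a move from 0.40 to 0.60 never crosses a boundary, while a move from 0.001 to 0.01 always does.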

Notice that this would preserve the same basic effect as cumulative prospect theory: You care a lot more about differences in probability when they are near 0 or 1, because those are much more likely to actually shift your category. Indeed, as written, you wouldn’t care about a shift from 0.4 to 0.6 at all, despite caring a great deal about a shift from 0.001 to 0.01.

How does this solve the above problems?

  1. It’s easy. Not only do you avoid computing a probability and then transforming it for no reason; you never even have to compute it precisely. Just get it within some vague error bounds and that will tell you which box it goes in. Instead of computing an approximation to a continuous function, you just slot things into a small number of discrete boxes, a dozen at the most.
  2. That explains why we would do it: It’s easy. Our brains need to conserve their capacity, and they did especially in our ancestral environment when we struggled to survive. Rather than having to iterate your approximation to arbitrary precision, you just get within 0.1 or so and call it a day. That saves time and computing power, which saves energy, which could save your life.

What new problems have I introduced?

  1. It’s very hard to know exactly where people’s categories are, if they vary between individuals or even between situations, and whether they are fuzzy-edged.
  2. If you take the model I just gave literally, even quite large probability changes will have absolutely no effect as long as they remain within a category such as “roughly even odds”.

With regard to 2, I think Kahneman may himself be able to save me, with his dual process theory concept of System 1 and System 2. What I’m really asserting is that System 1, the fast, intuitive judgment system, operates on these categories. System 2, on the other hand, the careful, rational thought system, can actually make use of proper numerical probabilities; it’s just very costly to boot up System 2 in the first place, much less ensure that it actually gets the right answer.

How might we test this? Well, I think that people are more likely to use System 1 when any of the following are true:

  1. They are under harsh time-pressure
  2. The decision isn’t very important
  3. The intuitive judgment is fast and obvious

And conversely they are likely to use System 2 when the following are true:

  1. They have plenty of time to think
  2. The decision is very important
  3. The intuitive judgment is difficult or unclear

So, it should be possible to arrange an experiment varying these parameters, such that in one treatment people almost always use System 1, and in another they almost always use System 2. And then, my prediction is that in the System 1 treatment, people will in fact not change their behavior at all when you change the probability from 15% to 25% (fairly unlikely) or 40% to 60% (roughly even odds).

To be clear, you can’t just present people with this choice between game E and game F:

Game E: You get a 60% chance of $50, and a 40% chance of nothing.

Game F: You get a 40% chance of $50, and a 60% chance of nothing.

People will obviously choose game E. If you can directly compare the numbers and one game is strictly better in every way, I think even without much effort people will be able to choose correctly.

Instead, what I’m saying is that if you make the following offers to two completely different sets of people, you will observe little difference in their choices, even though under expected utility theory you should.

Group I receives a choice between game E and game G:

Game E: You get a 60% chance of $50, and a 40% chance of nothing.

Game G: You get a 100% chance of $20.

Group II receives a choice between game F and game G:

Game F: You get a 40% chance of $50, and a 60% chance of nothing.

Game G: You get a 100% chance of $20.

Under two very plausible assumptions about marginal utility of wealth, I can fix what the rational judgment should be in each game.

The first assumption is that marginal utility of wealth is decreasing, so people are risk-averse (at least for gains, which these are). The second assumption is that most people’s lifetime income is at least two orders of magnitude higher than $50.

By the first assumption, group II should choose game G. The expected income is precisely the same, and being even ever so slightly risk-averse should make you go for the guaranteed $20.

By the second assumption, group I should choose game E. Yes, there is some risk, but because $50 should not be a huge sum to you, your risk aversion should be small and the higher expected income of $30 should sway you.

But I predict that most people will choose game G in both cases, and (within statistical error) the same proportion will choose F as chose E—thus showing that the difference between a 40% chance and a 60% chance was in fact negligible to their intuitive judgments.
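Those two rational benchmarks can be verified with a quick expected utility calculation. Here is a sketch assuming logarithmic utility and an illustrative lifetime wealth of $100,000 (any figure two or more orders of magnitude above $50 gives the same answer):

```python
import math

W = 100_000  # assumed lifetime wealth, well over two orders of magnitude above $50

def expected_log_utility(lottery):
    # lottery is a list of (probability, prize) pairs
    return sum(p * math.log(W + prize) for p, prize in lottery)

game_E = [(0.6, 50), (0.4, 0)]
game_F = [(0.4, 50), (0.6, 0)]
game_G = [(1.0, 20)]

print(expected_log_utility(game_E) > expected_log_utility(game_G))  # True: group I should gamble
print(expected_log_utility(game_G) > expected_log_utility(game_F))  # True: group II should take the sure $20
```

The second comparison is razor-thin, which is exactly the point: a mildly risk-averse agent barely prefers the sure $20 over game F, but clearly prefers game E over it.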

However, this doesn’t actually disprove Kahneman’s theory; perhaps that part of the subjective probability function is just that flat. For that, I need to set up an experiment where I show discontinuity. I need to find the edge of a category and get people to switch categories sharply. Next week I’ll talk about how we might pull that off.

Bigotry is more powerful than the market

Nov 20, JDN 2457683

If there’s one message we can take from the election of Donald Trump, it is that bigotry remains a powerful force in our society. A lot of autoflagellating liberals have been trying to explain how this election result really reflects our failure to help people displaced by technology and globalization (despite the fact that personal income and local unemployment had negligible correlation with voting for Trump), or Hillary Clinton’s “bad campaign” that nonetheless managed the same proportion of Democrat turnout that re-elected her husband in 1996.

No, overwhelmingly, the strongest predictor of voting for Trump was being White, and living in an area where most people are White. (Well, actually, that’s if you exclude authoritarianism as an explanatory variable—but really I think that’s part of what we’re trying to explain.) Trump voters were actually concentrated in areas less affected by immigration and globalization. Indeed, there is evidence that these people aren’t racist because they have anxiety about the economy—they are anxious about the economy because they are racist. How does that work? Obama. They can’t believe that the economy is doing well when a Black man is in charge. So all the statistics and even personal experiences mean nothing to them. They know in their hearts that unemployment is rising, even as the BLS data clearly shows it’s falling.

The wide prevalence and enormous power of bigotry should be obvious. But economists rarely talk about it, and I think I know why: Their models say it shouldn’t exist. The free market is supposed to automatically eliminate all forms of bigotry, because they are inefficient.

The argument for why this is supposed to happen actually makes a great deal of sense: If a company has the choice of hiring a White man or a Black woman to do the same job, but they know that the market wage for Black women is lower than the market wage for White men (which it most certainly is), and they will do the same quality and quantity of work, why wouldn’t they hire the Black woman? And indeed, if human beings were rational profit-maximizers, this is probably how they would think.

More recently some neoclassical models have been developed to try to “explain” this behavior, but always without daring to give up the precious assumption of perfect rationality. So instead we get the two leading neoclassical theories of discrimination, which are statistical discrimination and taste-based discrimination.

Statistical discrimination is the idea that under asymmetric information (and we surely have that), features such as race and gender can act as signals of quality because they are correlated with actual quality for various reasons (usually left unspecified), so it is not irrational after all to choose based upon them, since they’re the best you have.

Taste-based discrimination is the idea that people are rationally maximizing preferences that simply aren’t oriented toward maximizing profit or well-being. Instead, they have this extra term in their utility function that says they should also treat White men better than women or Black people. It’s just this extra thing they have.

A small number of studies have been done trying to discern which of these is at work.
The correct answer, of course, is neither.

Statistical discrimination, at least, could be part of what’s going on. Knowing that Black people are less likely to be highly educated than Asians (as they definitely are) might actually be useful information in some circumstances… then again, you list your degree on your resume, don’t you? Knowing that women are more likely to drop out of the workforce after having a child could rationally (if coldly) affect your assessment of future productivity. But shouldn’t the fact that women CEOs outperform men CEOs be incentivizing shareholders to elect women CEOs? Yet that doesn’t seem to happen. Also, in general, people seem to be pretty bad at statistics.
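To see the mechanics of statistical discrimination—and why an individual signal like a degree listed on a résumé should swamp the group base rate—here is a minimal Bayesian sketch. All the numbers are invented for illustration:

```python
# Statistical discrimination as Bayes' rule, with made-up base rates.
base_rate = {"group_A": 0.40, "group_B": 0.30}  # P(highly productive) by group

def posterior_productive(group, passed_test, sensitivity=0.9, false_positive=0.1):
    """P(productive | individual signal), starting from the group base rate."""
    prior = base_rate[group]
    if passed_test is None:      # no individual information at all:
        return prior             # the employer falls back on the base rate
    like = sensitivity if passed_test else 1 - sensitivity
    like_not = false_positive if passed_test else 1 - false_positive
    return like * prior / (like * prior + like_not * (1 - prior))

# With no signal, the 10-point group gap passes straight through...
gap_no_signal = (posterior_productive("group_A", None)
                 - posterior_productive("group_B", None))
# ...but a strong individual signal (like a listed degree) shrinks it.
gap_with_signal = (posterior_productive("group_A", True)
                   - posterior_productive("group_B", True))
```

The point of the sketch: even a “rational” statistical discriminator should mostly stop discriminating once individual information is available—so persistent discrimination against equally credentialed candidates is hard to square with this theory.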

The bigger problem with statistical discrimination as a theory is that it’s really only part of a theory. It explains why not all of the discrimination has to be irrational, but some of it still does. You need to explain why there are these huge disparities between groups in the first place, and statistical discrimination is unable to do that. In order for the statistics to differ this much, you need a past history of discrimination that wasn’t purely statistical.

Taste-based discrimination, on the other hand, is not a theory at all. It’s special pleading. Rather than admit that people are failing to rationally maximize their utility, we just redefine their utility so that whatever they happen to be doing now “maximizes” it.

This is really what makes the Axiom of Revealed Preference so insidious; if you really take it seriously, it says that whatever you do, must by definition be what you preferred. You can’t possibly be irrational, you can’t possibly be making mistakes of judgment, because by definition whatever you did must be what you wanted. Maybe you enjoy bashing your head into a wall, who am I to judge?

I mean, on some level taste-based discrimination is what’s happening; people think that the world is a better place if they put women and Black people in their place. So in that sense, they are trying to “maximize” some “utility function”. (By the way, most human beings behave in ways that are provably inconsistent with maximizing any well-defined utility function—the Allais Paradox is a classic example.) But the whole framework of calling it “taste-based” is a way of running away from the real explanation. If it’s just “taste”, well, it’s an unexplainable brute fact of the universe, and we just need to accept it. If people are happier being racist, what can you do, eh?
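Since the Allais Paradox carries the weight of that parenthetical, it’s worth making concrete. The gambles below are the classic Allais setup (payoffs in millions of dollars); a brute-force sweep over normalized utility functions confirms that no expected-utility maximizer can exhibit the modal choice pattern:

```python
# Allais paradox: no expected-utility function rationalizes the modal choices.
# Gambles as (probability, payoff in $M):
g1a = [(1.00, 1)]                                # $1M for sure
g1b = [(0.89, 1), (0.10, 5), (0.01, 0)]          # slightly risky, higher mean
g2a = [(0.11, 1), (0.89, 0)]
g2b = [(0.10, 5), (0.90, 0)]

def eu(gamble, u):
    """Expected utility of a gamble under utility assignment u."""
    return sum(p * u[x] for p, x in gamble)

# Normalize u(0)=0, u(1)=1; u(5) is then the only free parameter.
# Most people choose g1a over g1b AND g2b over g2a. Sweep u(5):
rationalizable = False
for i in range(1, 10000):
    u = {0: 0.0, 1: 1.0, 5: i / 1000.0}  # u(5) sweeps 0.001 .. 9.999
    if eu(g1a, u) > eu(g1b, u) and eu(g2b, u) > eu(g2a, u):
        rationalizable = True
print(rationalizable)  # → False
```

The algebra behind the sweep is simple: the first choice requires u(5) < 1.1 and the second requires u(5) > 1.1, so no utility function satisfies both.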

So I think it’s high time to start calling it what it is. This is not a question of taste. This is a question of tribal instinct. This is the product of millions of years of evolution optimizing the human brain to act in the perceived interest of whatever it defines as its “tribe”. It could be yourself, your family, your village, your town, your religion, your nation, your race, your gender, or even the whole of humanity or beyond into all sentient beings. But whatever it is, the fundamental tribe is the one thing you care most about. It is what you would sacrifice anything else for.

And what we learned on November 9 this year is that an awful lot of Americans define their tribe in very narrow terms. Nationalistic and xenophobic at best, racist and misogynistic at worst.

But I suppose this really isn’t so surprising, if you look at the history of our nation and the world. Segregation was not outlawed in US schools until 1954, and there are women who voted in this election who were born before American women got the right to vote in 1920. The nationalistic backlash against sending jobs to China (which was one of the chief ways that we reduced global poverty to its lowest level ever, by the way) really shouldn’t seem so strange when we remember that over 100,000 Japanese-Americans were literally forcibly relocated into camps as recently as 1942. The fact that so many White Americans seem all right with the biases against Black people in our justice system may not seem so strange when we recall that systemic lynching of Black people in the US didn’t end until the 1960s.

The wonder, in fact, is that we have made as much progress as we have. Tribal instinct is not a strange aberration of human behavior; it is our evolutionary default setting.

Indeed, perhaps it is unreasonable of me to ask humanity to change its ways so fast! We had millions of years to learn how to live the wrong way, and I’m giving you only a few centuries to learn the right way?

The problem, of course, is that the pace of technological change leaves us with no choice. It might be better if we could wait a thousand years for people to gradually adjust to globalization and become cosmopolitan; but climate change won’t wait a hundred, and nuclear weapons won’t wait at all. We are thrust into a world that is changing very fast indeed, and I understand that it is hard to keep up; but there is no way to turn back that tide of change.

Yet “turn back the tide” does seem to be part of the core message of the Trump voter, once you get past the racial slurs and sexist slogans. People are afraid of what the world is becoming. They feel that it is leaving them behind. Coal miners fret that we are leaving them behind by cutting coal consumption. Factory workers fear that we are leaving them behind by moving the factory to China or inventing robots to do the work in half the time for half the price.

And truth be told, they are not wrong about this. We are leaving them behind. Because we have to. Because coal is polluting our air and destroying our climate, we must stop using it. Moving the factories to China has raised them out of the most dire poverty, and given us a fighting chance toward ending world hunger. Inventing the robots is only the next logical step in the process that has carried humanity forward from the squalor and suffering of primitive life to the security and prosperity of modern society—and it is a step we must take, for the progress of civilization is not yet complete.

They wouldn’t have to let themselves be left behind, if they were willing to accept our help and learn to adapt. That carbon tax that closes your coal mine could also pay for your basic income and your job-matching program. The increased efficiency from the automated factories could provide an abundance of wealth that we could redistribute and share with you.

But this would require them to rethink their view of the world. They would have to accept that climate change is a real threat, and not a hoax created by… uh… never was clear on that point actually… the Chinese maybe? But 45% of Trump supporters don’t believe in climate change (and that’s actually not as bad as I’d have thought). They would have to accept that what they call “socialism” (which really is more precisely described as social democracy, or tax-and-transfer redistribution of wealth) is actually something they themselves need, and will need even more in the future. But despite rising inequality, redistribution of wealth remains fairly unpopular in the US, especially among Republicans.

Above all, it would require them to redefine their tribe, and start listening to—and valuing the lives of—people that they currently do not.

Perhaps we need to redefine our tribe as well; many liberals have argued that we mistakenly—and dangerously—did not include people like Trump voters in our tribe. But to be honest, that rings a little hollow to me: We aren’t the ones threatening to deport people or ban them from entering our borders. We aren’t the ones who want to build a wall (though some have in fact joked about building a wall to separate the West Coast from the rest of the country, I don’t think many people really want to do that). Perhaps we live in a bubble of liberal media? But I make a point of reading outlets like The American Conservative and The National Review for other perspectives (I usually disagree, but I do at least read them); how many Trump voters do you think have ever read the New York Times, let alone Huffington Post? Cosmopolitans almost by definition have the more inclusive tribe, the more open perspective on the world (in fact, do I even need the “almost”?).

Nor do I think we are actually ignoring their interests. We want to help them. We offer to help them. In fact, I want to give these people free money—that’s what a basic income would do, it would take money from people like me and give it to people like them—and they won’t let us, because that’s “socialism”! Rather, we are simply refusing to accept their offered solutions, because those so-called “solutions” are beyond unworkable; they are absurd, immoral and insane. We can’t bring back the coal mining jobs, unless we want Florida underwater in 50 years. We can’t reinstate the trade tariffs, unless we want millions of people in China to starve. We can’t tear down all the robots and force factories to use manual labor, unless we want to trigger a national—and then global—economic collapse. We can’t do it their way. So we’re trying to offer them another way, a better way, and they’re refusing to take it. So who here is ignoring the concerns of whom?

Of course, the fact that it’s really their fault doesn’t solve the problem. We do need to take it upon ourselves to do whatever we can, because, regardless of whose fault it is, the world will still suffer if we fail. And that presents us with our most difficult task of all, a task that I fully expect to spend a career trying to do and yet still probably failing: We must understand the human tribal instinct well enough that we can finally begin to change it. We must know enough about how human beings form their mental tribes that we can actually begin to shift those parameters. We must, in other words, cure bigotry—and we must do it now, for we are running out of time.

Wrong answers are better than no answer

Nov 6, JDN 2457699

I’ve been hearing some disturbing sentiments from some surprising places lately, things like “Economics is not a science, it’s just an extension of politics” and “There’s no such thing as a true model”. I’ve now met multiple economists who speak this way, who seem to be some sort of “subjectivists” or “anti-realists” (those links are to explanations of moral subjectivism and anti-realism, which are also mistaken, but in a much less obvious way, and are far more common views to express). It is possible to read most of the individual statements in a non-subjectivist way, but in the context of all of them together, it really gives me the general impression that many of these economists… don’t believe in economics. (Nor do they even believe in believing it, or they’d put up a better show.)

I think what has happened is that in the wake of the Second Depression, economists have had a sort of “crisis of faith”. The models we thought were right were wrong, so we may as well give up; there’s no such thing as a true model. The science of economics failed, so maybe economics was never a science at all.

I never really thought I’d be in this position, but in such circumstances I actually feel strongly inclined to defend neoclassical economics. Neoclassical economics is wrong; but subjectivism is not even wrong.

If a model is wrong, you can fix it. You can make it right, or at least less wrong. But if you give up on modeling altogether, your theory avoids being disproven only by making itself totally detached from reality. I can’t prove you wrong, but only because you’ve given up on the whole idea of being right or wrong.

As Isaac Asimov wrote, “when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

What we might call “folk economics”, what most people seem to believe about economics, is like thinking the Earth is flat—it’s fundamentally wrong, but not so obviously inaccurate on an individual scale that it can’t be a useful approximation for your daily life. Neoclassical economics is like thinking the Earth is spherical—it’s almost right, but still wrong in some subtle but important ways. Thinking that economics isn’t a science is wronger than both of them put together.

The sense in which “there’s no such thing as a true model” is true is a trivial one: There’s no such thing as a perfect model, because by the time you included everything you’d just get back the world itself. But there are better and worse models, and some of our very best models (quantum mechanics, Darwinian evolution) are really good enough that I think it’s quite perverse not to call them simply true. Economics doesn’t have such models yet for more than a handful of phenomena—but we’re working on it (at least, I thought that’s what we were doing!).

Indeed, a key point I like to make about realism—in science, morality, or whatever—is that if you think something can be wrong, you must be a realist. In order for an idea to be wrong, there must be some objective reality to compare it to that it can fail to match. If everything is just subjective beliefs and sociopolitical pressures, there is no such thing as “wrong”, only “unpopular”. I’ve heard many people say things like “Well, that’s just your opinion; you could be wrong.” No, if it’s just my opinion, then I cannot possibly be wrong. So choose a lane! Either you think I’m wrong, or you think it’s just my opinion—but you can’t have it both ways.

Now, it’s clearly true in the real world that there is a lot of very bad and unscientific economics going on. The worst is surely the stuff that comes out of right-wing think-tanks that are paid almost explicitly to come up with particular results that are convenient for their right-wing funders. (As Krugman puts it, “there are liberal professional economists, conservative professional economists, and professional conservative economists.”) But there’s also a lot of really unscientific economics done without such direct and obvious financial incentives. Economists get blinded by their own ideology, they choose what topics to work on based on what will garner the most prestige, they use fundamentally defective statistical techniques because journals won’t publish them if they don’t.

But of course, the same is true of many other fields, particularly in social science. Sociologists also get blinded by their pet theories; psychologists also abuse statistics because the journals make them do it; political scientists are influenced by their funding sources; anthropologists also choose what to work on based on what’s prestigious in the field.

Moreover, natural sciences do this too. String theorists are (almost by definition) blinded by their favorite theory. Biochemists are manipulated by the financial pressures of the pharmaceutical industry. Neuroscientists publish all sorts of statistically nonsensical research. I’d be very surprised if even geologists were immune to the social norms of academia telling them to work on the most prestigious problems. If this is enough reason to abandon a field as a science, it is a reason to abandon science, full stop. That is what you are arguing for here.

And really, this should be fairly obvious. Are workers and factories and televisions actual things that are actually here? Obviously they are. Therefore you can be right or wrong about how they interact. There is an obvious objective reality here that one can have more or less accurate beliefs about.

For socially-constructed phenomena like money, markets, and prices, this isn’t as obvious; if everyone stopped believing in the US Dollar, like Tinkerbell the US Dollar would cease to exist. But there does remain some objective reality (or if you like, intersubjective reality) here: I can be right or wrong about the price of a dishwasher or the exchange rate from dollars to pounds.

So, in order to abandon the possibility of scientifically accurate economics, you have to say that even though there is this obvious physical reality of workers and factories and televisions, we can’t actually study that scientifically, even when it sure looks like we’re studying it scientifically by performing careful observations, rigorous statistics, and even randomized controlled experiments. Even when I perform my detailed Bayesian analysis of my randomized controlled experiment, nope, that’s not science. It doesn’t count, for some reason.

The only remotely principled way I can see to justify such a thing is to say that once you start studying other humans you lose all possibility of scientific objectivity—but notice that by making such a claim you haven’t just thrown out psychology and economics, you’ve also thrown out anthropology and neuroscience. The statements “DNA evidence shows that all modern human beings descend from a common migration out of Africa” and “Human nerve conduction speed can reach approximately 120 meters per second” aren’t scientific? Then what in the world are they?

Or is it specifically behavioral sciences that bother you? Now perhaps you can leave out biological anthropology and basic neuroscience; there’s some cultural anthropology and behavioral neuroscience you have to still include, but maybe that’s a bullet you’re willing to bite. There is perhaps something intuitively appealing here: Since science is a human behavior, you can’t use science to study human behavior without an unresolvable infinite regress.

But there are still two very big problems with this idea.

First, you’ve got to explain how there can be this obvious objective reality of human behavior that is nonetheless somehow forever beyond our understanding. Even though people actually do things, and we can study those things using the usual tools of science, somehow we’re not really doing science, and we can never actually learn anything about how human beings behave.

Second, you’ve got to explain why we’ve done as well as we have. For some reason, people seem to have this impression that psychology and especially economics have been dismal failures, they’ve brought us nothing but nonsense and misery.

But where exactly do you think we got the lowest poverty rate in the history of the world? That just happened by magic, or by accident while we were doing other things? No, economists did that, on purpose—the UN Millennium Development Goals were designed, implemented, and evaluated by economists. Against staunch opposition from both ends of the political spectrum, we have managed to bring free trade to the world, and with it, some measure of prosperity.

The only other science I can think of that has been more successful at its core mission is biology; as XKCD pointed out, the biologists killed a Horseman of the Apocalypse while the physicists were busy making a new one. Congratulations on beating Pestilence, biologists; we economists think we finally have Famine on the ropes now. Hey political scientists, how is War going? Oh, not bad, actually? War deaths per capita are near their lowest levels in history? But clearly it would be foolhardy to think that economics and political science are actually sciences!

I can at least see why people might think psychology is a failure, because rates of diagnosis of mental illness keep rising higher and higher; but the key word there is diagnosis. People were already suffering from anxiety and depression across the globe; it’s just that nobody was giving them therapy or medication for it. Some people argue that all we’ve done is pathologize normal human experience—but this wildly underestimates the severity of many mental disorders. Wanting to end your own life for reasons you yourself cannot understand is not normal human experience being pathologized. (And the fact that 40,000 Americans commit suicide every year may make it common, but it does not make it normal. Is trying to keep people from dying of influenza “pathologizing normal human experience”? Well, suicide kills almost as many.) It’s possible there is some overdiagnosis; but there is also an awful lot of real mental illness that previously went untreated—and yes, meta-analysis shows that treatment can and does work.

Of course, we’ve made a lot of mistakes. We will continue to make mistakes. Many of our existing models are seriously flawed in very important ways, and many economists continue to use those models incautiously, blind to their defects. The Second Depression was largely the fault of economists, because it was economists who told everyone that markets are efficient, banks will regulate themselves, leave it alone, don’t worry about it.

But we can do better. We will do better. And we can only do that because economics is a science, it does reflect reality, and therefore we make ourselves less wrong.

Toward an economics of social norms

Sep 17, JDN 2457649

It is typical in economics to assume that prices are set by perfect competition in markets with perfect information. This is obviously ridiculous, so many economists do go further and start looking into possible distortions of the market, such as externalities and monopolies. But almost always the assumption is still that human beings are neoclassical rational agents, what I call “infinite identical psychopaths”, selfish profit-maximizers with endless intelligence and zero empathy.

What happens when we recognize that human beings are not like this, but in fact are empathetic, social creatures, who care about one another and work toward the interests of (what they perceive to be) their tribe? How are prices really set? What actually decides what is made and sold? What does economics become once you understand sociology? (The good news is that experiments are now being done to find out.)

Presumably some degree of market competition is involved, and no small amount of externalities and monopolies. But one of the very strongest forces involved in setting prices in the real world is almost completely ignored, and that is social norms.

Social norms are tremendously powerful. They will drive us to bear torture, fight and die on battlefields, even detonate ourselves as suicide bombs. When we talk about “religion” or “ideology” motivating people to do things, really what we are talking about is social norms. While some weaker norms can be overridden, no amount of economic incentive can ever override a social norm at its full power. Moreover, most of our behavior in daily life is driven by social norms: How to dress, what to eat, where to live. Even the fundamental structure of our lives is written by social norms: Go to school, get a job, get married, raise a family.

Even academic economists, who imagine themselves one part purveyor of ultimate wisdom and one part perfectly rational agent, are clearly strongly driven by social norms—what problems are “interesting”, which researchers are “renowned”, what approaches are “sensible”, what statistical methods are “appropriate”. If economists were perfectly rational, dynamic stochastic general equilibrium models would be in the dustbin of history (because, like string theory, they have yet to lead to a single useful empirical prediction), research journals would not be filled with endless streams of irrelevant but impressive equations (I recently read one that basically spent half a page of calculus re-deriving the concept of GDP—and computer-generated gibberish has been published, because its math looked so impressive), and instead of frequentist p-values (often misinterpreted at that), all the statistics would be written in the form of Bayesian log-odds.
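To make the contrast concrete, here is a minimal sketch of what reporting Bayesian log-odds looks like; the hypotheses, rates, and counts are invented for illustration:

```python
from math import log, comb

# Two hypotheses about a treatment's success rate:
# H0: 50% (no effect) vs H1: 65% (real effect).
def log_odds_update(successes, trials, p0=0.50, p1=0.65, prior_odds=1.0):
    """Posterior log-odds (base 10) favoring H1 over H0, given binomial data."""
    like0 = comb(trials, successes) * p0**successes * (1 - p0)**(trials - successes)
    like1 = comb(trials, successes) * p1**successes * (1 - p1)**(trials - successes)
    # Bayes: posterior odds = prior odds * likelihood ratio
    return log(prior_odds, 10) + log(like1 / like0, 10)

# Hypothetical data: 30 successes in 45 trials.
lod = log_odds_update(30, 45)
# lod ≈ 1.1 means roughly 12:1 posterior odds for H1 over H0—a direct
# statement of evidential strength, unlike a p-value.
```

Unlike a p-value, the log-odds scale composes: evidence from independent experiments simply adds, and the number answers the question people actually want answered—how much more likely one hypothesis is than the other.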

Indeed, in light of all this, I often like to say that to a first approximation, all human behavior is social norms.

How does this affect buying and selling? Well, first of all, there are some things we refuse to buy and sell, or at least that most of us refuse to buy and sell, and we use social pressure, public humiliation, or even the force of law to prevent such sales. You’re not supposed to sell children. You’re not supposed to sell your vote. You’re not even supposed to sell sexual favors (though every society has always had a large segment of people who do, and more recently people are becoming more open to the idea of at least decriminalizing it). If we were neoclassical rational agents, we would have no such qualms; if we want something and someone is willing to sell it to us, we’ll buy it. But as actual human beings with emotions and social norms, we recognize that there is something fundamentally different about selling your vote as opposed to selling a shirt or a television. It’s not always immediately obvious where to draw the line, which is why sex work can be such a complicated issue (You can’t get paid to have sex… unless someone is filming it?). Different societies may do it differently: Part of the challenge of fighting corruption in Third World countries is that much of what we call corruption—and which actually is harmful to long-run economic development—isn’t perceived as “corruption” by the people involved in it, but simply as social custom (“Of course I’d hire my cousin! What kind of cousin would I be if I didn’t?”). Yet despite all that, almost everyone agrees that there is a line to be drawn. So there are whole markets that theoretically could exist, but don’t, or only exist as tiny black markets most people never participate in, because we consider selling those things morally wrong. Recently a whole subfield of cognitive economics has emerged studying these repugnant markets.

Even if a transaction is not considered so repugnant as to be unacceptable, there are also other classes of goods that are in some sense unsavory; something you really shouldn’t buy, but you’re not a monster for doing so. These are often called sin goods, and they have always included drugs, alcohol, and gambling—and I do mean always, as every human civilization has had these things—they include prostitution where it is legal, and as social norms change they are now beginning to include oil and coal as well (which can only be good for the future of Earth’s climate). Sin goods are systematically more expensive than they should be for their marginal cost, because most people are unwilling to participate in selling them. As a result, the financial returns for producing sin goods are systematically higher. Actually, this could partially explain why Wall Street banks are so profitable; when the banking system is as corrupt as it is—and you’re not imagining that; banks have been caught laundering money for terrorists—then banking becomes a sin good, and good people don’t want to participate in it. Or perhaps the effect runs the other way around: Banking has been viewed as sinful for centuries (in Medieval times, usury was punished much the same way as witchcraft), and as a result only the sort of person who doesn’t care about social and moral norms becomes a banker—and so the banking system becomes horrifically corrupt. Is this a reason for good people to force ourselves to become bankers? Or is there another way—perhaps credit unions?

There are other ways that social norms drive prices as well. We have a concept of a “fair wage”, which is quite distinct from the economic concept of a “market-clearing wage”. When people ask whether someone’s wage is fair, they don’t look at supply and demand and try to determine whether there are too many or too few people offering that service. They ask themselves what the labor is worth—what value it has added—and how hard that person has worked to do it—what cost it bore. Now, these aren’t totally unrelated to supply and demand (people are less likely to supply harder work, people are more likely to demand higher value), so it’s conceivable that these heuristics could lead us to more or less achieve the market-clearing wage most of the time. But there are also some systematic distortions to consider.

Perhaps the most important way fairness matters in economics is necessities: Basic requirements for human life such as food, housing, and medicine. The structure of our society also makes transportation, education, and Internet access increasingly necessary for basic functioning. From the perspective of an economist, it is a bit paradoxical how angry people get when the price of something important (such as healthcare) is increased: If it’s extremely valuable, shouldn’t you be willing to pay more? Why does it bother you less when something like a Lamborghini or a Rolex rises in price, something that almost certainly wasn’t even worth its previous price? You’re going to buy the necessities anyway, right? Well, as far as most economists are concerned, that’s all that matters—what gets bought and sold. But of course as a human being I do understand why people get angry about these things, and it is because they have to buy them anyway. When someone like Martin Shkreli raises the prices on basic goods, we feel exploited. There’s even a way to make this economically formal: When demand is highly inelastic, we are rightly very sensitive to the possibility of a monopoly, because monopolies under inelastic demand can extract huge profits and cause similarly huge amounts of damage to the welfare of their customers. That isn’t quite how most people would put it, but I think that has something to do with the ultimate reason we evolved that heuristic: It’s dangerous to let someone else control your basic necessities, because that gives them enormous power to exploit you. If they control things that aren’t as important to you, that doesn’t matter so much, because you can always do without if you must. So a norm that keeps businesses from overcharging on necessities is very important—and probably not as strong anymore as it should be.
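The monopoly-under-inelastic-demand point can be made precise with the standard Lerner condition, (P − MC)/P = 1/|ε|, which pins down the profit-maximizing markup under constant-elasticity demand. A sketch with illustrative numbers:

```python
def monopoly_price(marginal_cost, elasticity):
    """Profit-maximizing price under constant-elasticity demand.
    The Lerner condition (P - MC)/P = 1/|e| rearranges to P = MC * e/(e - 1)."""
    e = abs(elasticity)
    if e <= 1:
        # With |e| <= 1 there is no finite optimum: raising price
        # always raises revenue, which is exactly why necessities
        # are so dangerous to monopolize.
        raise ValueError("no finite optimum when demand is this inelastic")
    return marginal_cost * e / (e - 1)

# A luxury good with elastic demand (|e| = 4) vs. a necessity (|e| = 1.1),
# both with a marginal cost of $100:
luxury = monopoly_price(100, 4.0)      # modest markup
necessity = monopoly_price(100, 1.1)   # enormous markup
```

With these numbers the luxury sells for about $133 while the necessity sells for $1,100—an eleven-fold markup over cost. That is the formal version of why a Shkreli-style price hike on medicine feels, and is, exploitative in a way a Lamborghini price hike is not.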

Another very important way that fairness and markets can be misaligned is talent: What if something is just easier for one person than another? If you achieve the same goal with half the work, should you be rewarded more for being more efficient, or less because you bore less cost? Neoclassical economics doesn’t concern itself with such questions, asking only if supply and demand reached equilibrium. But we as human beings do care about such things; we want to know what wage a person deserves, not just what wage they would receive in a competitive market.

Could we be wrong to do that? Might it be better if we just let the market do its work? In some cases I think that may actually be true. Part of why CEO pay is rising so fast despite being uncorrelated, or even negatively correlated, with corporate profitability is that CEOs have convinced us (or convinced their boards of directors) that this is fair, that they deserve more stock options. They even convince them that their pay is based on performance, by using highly distorted measures of performance. If boards thought more like economic rational agents, when a CEO asked for more pay they’d ask: “What other company gave you a higher offer?” and if the CEO didn’t have an answer, they’d laugh and refuse the raise. Because in purely economic terms, that is all a salary does: it keeps you from quitting to work somewhere else. The competitive mechanism of the market is supposed to ensure that your wage aligns with your marginal cost and marginal productivity purely through that channel.

On the other hand, there are many groups of people who simply aren’t doing very well in the market: Women, racial minorities, people with disabilities. There are a lot of reasons for this, some of which might go away if markets were made more competitive—the classic argument that competitive markets reward companies that don’t discriminate—but many clearly wouldn’t. Indeed, that argument was never as strong as it at first appears; in a society where social norms are strongly in favor of bigotry, it can be completely economically rational to participate in bigotry to avoid being penalized. When Chick-fil-A was revealed to have donated to anti-LGBT political groups, many people tried to boycott—but their sales actually increased from the publicity. Honestly it’s a bit baffling that they promised not to donate to such causes anymore; it was apparently a profitable business decision to be revealed as supporters of bigotry. And even when discrimination does hurt economic performance, companies are run by human beings, and they are still quite capable of discriminating regardless. Indeed, the best evidence we have that discrimination is inefficient comes from… businesses that persist in discriminating despite the fact that it is inefficient.

But okay, suppose we actually did manage to make everyone compensated according to their marginal productivity. (Or rather, what Rawls derided: “From each according to his marginal productivity, to each according to his threat advantage.”) The market would then clear and be highly efficient. Would that actually be a good thing? I’m not so sure.

A lot of people are highly unproductive through no fault of their own—particularly children and people with disabilities. Much of this is not discrimination; it’s just that they aren’t as good at providing services. Should we simply leave them to fend for themselves? Then there’s the key point about what marginal means in this case—it means “given what everyone else is doing”. But that means that you can be made obsolete by someone else’s actions, and in this era of rapid technological advancement, jobs become obsolete faster than ever. Unlike a lot of people, I recognize that it makes no sense to keep people working at jobs that can be automated—the machines are better. But still, what do we do with the people whose jobs have been eliminated? Do we treat them as worthless? When automated buses become affordable—and they will; I give it 20 years—do we throw the human bus drivers under them?

One way out is of course a basic income: Let the market wage be what it will, and then use the basic income to provide for what human beings deserve irrespective of their market productivity. I definitely support a basic income, of course, and this does solve the most serious problems like children and quadriplegics starving in the streets.

But as I read more of the arguments by people who favor a job guarantee instead of a basic income, I begin to understand better why they are uncomfortable with the idea: It doesn’t seem fair. A basic income breaks once and for all the link between “a fair day’s work” and “a fair day’s wage”. It runs counter to this very deep-seated intuition most people have that money is what you earn—and thereby deserve—by working, and only by working. That is an extremely powerful social norm, and breaking it will be very difficult; so it’s worth asking: Should we even try to break it? Is there a way to achieve a system where markets are both efficient and fair?

I’m honestly not sure; but I do know that we could make substantial progress from where we currently stand. Most billionaire wealth is pure rent in the economic sense: It’s received by corruption and market distortion, not by efficient market competition. Most poverty is due to failures of institutions, not lack of productivity of workers. As George Monbiot famously wrote, “If wealth was the inevitable result of hard work and enterprise, every woman in Africa would be a millionaire.” Most of the income disparity between White men and others is due to discrimination, not actual skill—and what skill differences there are are largely the result of differences in education and upbringing anyway. So if we do in fact correct these huge inefficiencies, we will also be moving toward fairness at the same time. But still that nagging thought remains: When all that is done, will there come a day where we must decide whether we would rather have an efficient economy or a just society? And if it does, will we decide the right way?

The high cost of frictional unemployment

Sep 3, JDN 2457635

I had wanted to open this post with an estimate of the number of people in the world, or at least in the US, who are currently between jobs. It turns out that such estimates are essentially nonexistent. The Bureau of Labor Statistics maintains a detailed database of US unemployment; they don’t estimate this number. We have this concept in macroeconomics of frictional unemployment, the unemployment that results from people switching jobs; but nobody seems to have any idea how common it is.

I often hear a ballpark figure of about 4-5%, which is related to a notion that “full employment” should really be about 4-5% unemployment because otherwise we’ll trigger horrible inflation or something. There is almost no evidence for this. In fact, the US unemployment rate has gotten as low as 2.5%, and before that was stable around 3%. This was during the 1950s, the era of the highest income tax rates ever imposed in the United States, a top marginal rate of 92%. Coincidence? Maybe. Obviously there were a lot of other things going on at the time. But it sure does hurt the argument that high income taxes “kill jobs”, don’t you think?

Indeed, it may well be that the rate of frictional unemployment varies all the time, depending on all sorts of different factors. But here’s what we do know: Frictional unemployment is a serious problem, and yet most macroeconomists basically ignore it.

Talk to most macroeconomists about “unemployment”, and they will assume you mean either cyclical unemployment (the unemployment that results from recessions and bad fiscal and monetary policy responses to them), or structural unemployment (the unemployment that results from systematic mismatches between worker skills and business needs). If you specifically mention frictional unemployment, the response is usually that it’s no big deal and there’s nothing we can do about it anyway.

Yet at least when we aren’t in a recession, frictional unemployment very likely accounts for the majority of unemployment, and thus probably the majority of misery created by unemployment. (Not necessarily, since it probably doesn’t account for much long-term unemployment, which is by far the worst.) And it is quite clear to me that there are things we can do about it—they just might be difficult and/or expensive.

Most of you have probably changed jobs at least once. Many of you have, like me, moved far away to a new place for school or work. Think about how difficult that was. There is the monetary cost, first of all; you need to pay for the travel of course, and then usually leases and paychecks don’t line up properly for a month or two (for some baffling and aggravating reason, UCI won’t actually pay me my paychecks until November, despite demanding rent starting the last week of July!). But even beyond that, you are torn from your social network and forced to build a new one. You have to adapt to living in a new place which may have differences in culture and climate. Bureaucracy often makes it difficult to update documentation such as your ID and your driver’s license.

And that’s assuming that you already found a job before you moved, which isn’t always an option. Many people move to new places and start searching for jobs when they arrive, which adds an extra layer of risk and difficulty above and beyond the transition itself.

With all this in mind, the wonder is that anyone is willing to move at all! And this is probably a large part of why people are so averse to losing their jobs even when it is clearly necessary; the frictional unemployment carries enormous real costs. (That and loss aversion, of course.)

What could we do, as a matter of policy, to make such transitions easier?

Well, one thing we could do is expand unemployment insurance, which reduces the cost of losing your job (which, despite the best efforts of Republicans in Congress, we ultimately did do in the Second Depression). We could expand unemployment insurance to cover voluntary quits. Right now, quitting voluntarily makes you forgo all unemployment benefits, which employers pay for in the form of insurance premiums; so an employer is much better off making your life miserable until you quit than they are laying you off. They could also fire you for cause, if they can find a cause (and usually there’s something they could trump up enough to get rid of you, especially if you’re not prepared for the protracted legal battle of a wrongful termination lawsuit). The reasoning of our current system appears to be something like this: Only lazy people ever quit jobs, and why should we protect lazy people? This is utter nonsense and it needs to go. Many states already have no-fault divorce and no-fault auto collision insurance; it’s time for no-fault employment termination.

We could establish a basic income of course; then when you lose your job your income would go down, but to a higher floor where you know you can meet certain basic needs. We could provide subsidized personal loans, similar to the current student loan system, that allow people to bear income gaps without losing their homes or paying exorbitant interest rates on credit cards.

We could use active labor market programs to match people with jobs, or train them with the skills needed for emerging job markets. Denmark has extensive active labor market programs (they call it “flexicurity”), and Denmark’s unemployment rate was 2.4% before the Great Recession, hit a peak of 6.2%, and has now recovered to 4.2%. What Denmark calls a bad year, the US calls a good year—and Greece can only fantasize about as something they hope one day to achieve. #ScandinaviaIsBetter once again, and Norway fits this pattern also, though to be fair Sweden’s unemployment rate is basically comparable to the US or even slightly worse (though it’s still nothing like Greece).

Maybe it’s actually all right that we don’t have estimates of the frictional unemployment rate, because the goal really isn’t to reduce the number of people who are unemployed; it’s to reduce the harm caused by unemployment. Most of these interventions would very likely increase the rate of frictional unemployment, as people who always wanted to try to find better jobs but could never afford to would now be able to—but they would dramatically reduce the harm caused by that unemployment.

This is a more general principle, actually; it’s why we should basically stop taking seriously this argument that social welfare benefits destroy work incentives. That may well be true; so what? Maximizing work incentives was never supposed to be a goal of public policy, as far as I can tell. Maximizing human welfare is the goal, and the only way a welfare program could reduce work incentives is by making life better for people who aren’t currently working, and thereby reducing the utility gap between working and not working. If your claim is that the social welfare program (and its associated funding mechanism, i.e. taxes, debt, or inflation) would make life sufficiently worse for everyone else that it’s not worth it, then say that (and for some programs that might actually be true). But in and of itself, making life better for people who don’t work is a benefit to society. Your supposed downside is in fact an upside. If there’s a downside, it must be found elsewhere.

Indeed, I think it’s worth pointing out that slavery maximizes work incentives. If you beat or kill people who don’t work, sure enough, everyone works! But that is not even an efficient economy, much less a just society. To be clear, I don’t think most people who say they want to maximize work incentives would actually support slavery, but that is the logical extent of the assertion. (Also, many Libertarians, often the first to make such arguments, do have a really bizarre attitude toward slavery; taxation is slavery, regulation is slavery, conscription is slavery—the last not quite as ridiculous—but actual forced labor… well, that really isn’t so bad, especially if the contract is “voluntary”. Fortunately some Libertarians are not so foolish.) If your primary goal is to make people work as much as possible, slavery would be a highly effective way to achieve that goal. And that really is the direction you’re heading when you say we shouldn’t do anything to help starving children lest their mothers have insufficient incentive to work.

More people not working could have a downside, if it resulted in less overall production of goods. But even in the US, one of the most efficient labor markets in the world, the system of job matching is still so ludicrously inefficient that people have to send out dozens if not hundreds of applications to jobs they barely even want, and there are still 1.4 times as many job seekers as there are openings (at the trough of the Great Recession, the ratio was 6.6 to 1). There’s clearly a lot of space here to improve the matching efficiency, and simply giving people more time to search could make a big difference there. Total output might decrease for a little while during the first set of transitions, but afterward people would be doing jobs they want, jobs they care about, jobs they’re good at—and people are vastly more productive under those circumstances. It’s quite likely that total employment would decrease, but productivity would increase so much that total output increased.

Above all, people would be happier, and that should have been our goal all along.

The only thing necessary for the triumph of evil is that good people refuse to do cost-benefit analysis

July 27, JDN 2457597

My title is based on a famous quote often attributed to Edmund Burke, but which we have no record of him actually saying:

The only thing necessary for the triumph of evil is that good men do nothing.

The closest he actually appears to have written is this:

When bad men combine, the good must associate; else they will fall one by one, an unpitied sacrifice in a contemptible struggle.

Burke’s intended message was about the need for cooperation and avoiding diffusion of responsibility; then his words were distorted into a duty to act against evil in general.

But my point today is going to be a little bit more specific: A great deal of real-world evils would be eliminated if good people were more willing to engage in cost-benefit analysis.

As discussed on Less Wrong awhile back, there is a common “moral” saying which comes from the Talmud (if not earlier; and of course it’s hardly unique to Judaism), which gives people a great warm and fuzzy glow whenever they say it:

Whoever saves a single life, it is as if he had saved the whole world.

Yet this is in fact the exact opposite of moral. It is a fundamental, insane perversion of morality. It amounts to saying that “saving a life” is just a binary activity, either done or not, and once you’ve done it once, congratulations, you’re off the hook for the other 7 billion. All those other lives mean literally nothing, once you’ve “done your duty”.

Indeed, it would seem to imply that you can be a mass murderer, as long as you save someone else somewhere along the line. If Mao Tse-tung at some point stopped someone from being run over by a car, it’s okay that his policies killed more people than the population of Greater Los Angeles.

Conversely, if anything you have ever done has resulted in someone’s death, you’re just as bad as Mao; in fact if you haven’t also saved someone somewhere along the line and he has, you’re worse.

Maybe this is how you get otherwise-intelligent people saying such insanely ridiculous things as “George W. Bush’s crimes are uncontroversially worse than Osama bin Laden’s.” (No, probably not, since Chomsky at least feigns something like cost-benefit analysis. I’m not sure what his failure mode is, but it’s probably not this one in particular. “Uncontroversially”… you keep using that word…)

Cost-benefit analysis is actually a very simple concept (though applying it in practice can be mind-bogglingly difficult): Try to maximize the good things minus the bad things. If an action would increase good things more than bad things, do it; if it would increase bad things more than good things, don’t do it.
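The core of that simple concept can be written down in a few lines. This is only a toy sketch with made-up utility numbers; in practice, the mind-bogglingly difficult part is estimating the benefits and costs themselves, not combining them:

```python
def net_benefit(benefits, costs):
    """Expected net benefit: the sum of good things minus the sum of bad things."""
    return sum(benefits) - sum(costs)

def should_act(benefits, costs):
    """Act if and only if the action increases good things more than bad things."""
    return net_benefit(benefits, costs) > 0

# A hypothetical action with a real downside that is still worth it:
print(should_act(benefits=[100.0], costs=[30.0]))   # True
# And one where the downside dominates:
print(should_act(benefits=[20.0], costs=[30.0]))    # False
```

Note that the rule happily accepts actions with nonzero costs—which is exactly what simplistic “X is always bad” reasoning cannot do.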

What it replaces is simplistic deontological reasoning about “X is always bad” or “Y is always good”; that’s almost never true. Even great evils can be justified by greater goods, and many goods are not worth having because of the evils they would require to achieve. We seem to want all our decisions to have no downside, perhaps because that would resolve our cognitive dissonance most easily; but in the real world, most decisions have an upside and a downside, and it’s a question of which is larger.

Why is it that so many people—especially good people—have such an aversion to cost-benefit analysis?

I gained some insight into this by watching a video discussion from an online Harvard course taught by Michael Sandel (which is free, by the way, if you’d like to try it out). He was leading the discussion Socratically, which is in general a good method of teaching—but like anything else can be used to teach things that are wrong, and is in some ways more effective at doing so because it has a way of making students think they came up with the answers on their own. He says something like, “Do we really want our moral judgments to be based on cost-benefit analysis?” and gives some examples where people made judgments using cost-benefit analysis to support his suggestion that this is something bad.

But of course his examples are very specific: They all involve corporations using cost-benefit analysis to maximize profits. One of them is the Ford Pinto case, where Ford estimated the cost to them of a successful lawsuit, multiplied by the probability of such lawsuits, and then compared that with the cost of a total recall. Finding that the lawsuits were projected to be cheaper, they opted for that result, and thereby allowed several people to be killed by their known defective product.

Now, it later emerged that Ford Pintos were not actually especially dangerous, and in fact Ford didn’t just include lawsuits but also a standard estimate of the “value of a statistical human life”, and as a result of that their refusal to do the recall was probably the completely correct decision—but why let facts get in the way of a good argument?

But let’s suppose that all the facts had been as people thought they were—the product was unsafe and the company was only interested in their own profits. We don’t need to imagine this hypothetically; this is clearly what actually happened with the tobacco industry, and indeed with the oil industry. Is that evil? Of course it is. But not because it’s cost-benefit analysis.

Indeed, the reason this is evil is the same reason most things are evil: They are psychopathically selfish. They advance the interests of those who do them, while causing egregious harms to others.

Exxon is apparently prepared to sacrifice millions of lives to further their own interests, which makes them literally no better than Mao, as opposed to this bizarre “no better than Mao” that we would all be if the number of lives saved versus killed didn’t matter. Let me be absolutely clear; I am not speaking in hyperbole when I say that the board of directors of Exxon is morally no better than Mao. No, I mean they literally are willing to murder 20 million people to serve their own interests—more precisely 10 to 100 million, by WHO estimates. Maybe it matters a little bit that these people will be killed by droughts and hurricanes rather than by knives and guns; but then, most of the people Mao killed died of starvation, and plenty of the people killed by Exxon will too. But this statement wouldn’t have the force it does if I could not speak in terms of quantitative cost-benefit analysis. Killing people is one thing, and most industries would have to own up to it; being literally willing to kill as many people as history’s greatest mass murderers is quite another—and yet it is true of Exxon.

But I can understand why people would tend to associate cost-benefit analysis with psychopaths maximizing their profits; there are two reasons for this.

First, most neoclassical economists appear to believe in both cost-benefit analysis and psychopathic profit maximization. They don’t even clearly distinguish their concept of “rational” from the concept of total psychopathic selfishness—hence why I originally titled this blog “infinite identical psychopaths”. The people arguing for cost-benefit analysis are usually economists, and economists are usually neoclassical, so most of the time you hear arguments for cost-benefit analysis they are also linked with arguments for horrifically extreme levels of selfishness.

Second, most people are uncomfortable with cost-benefit analysis, and as a result don’t use it. So, most of the cost-benefit analysis you’re likely to hear is done by terrible human beings, typically at the reins of multinational corporations. This becomes self-reinforcing, as all the good people don’t do cost-benefit analysis, so they don’t see good people doing it, so they don’t do it, and so on.

Therefore, let me present you with some clear-cut cases where cost-benefit analysis can save millions of lives, and perhaps even save the world.

Imagine if our terrorism policy used cost-benefit analysis; we wouldn’t kill 100,000 innocent people and sacrifice 4,400 soldiers fighting a war that didn’t have any appreciable benefit as a bizarre form of vengeance for 3,000 innocent people being killed. Moreover, we wouldn’t sacrifice core civil liberties to prevent a cause of death that’s 300 times rarer than car accidents.

Imagine if our healthcare policy used cost-benefit analysis; we would direct research funding to maximize our chances of saving lives, not toward the form of cancer that is quite literally the sexiest. We would go to a universal healthcare system like the rest of the First World, and thereby save thousands of additional lives while spending less on healthcare.

With cost-benefit analysis, we would reform our system of taxes and subsidies to internalize the cost of carbon emissions, most likely resulting in a precipitous decline of the oil and coal industries and the rapid rise of solar and nuclear power, and thereby save millions of lives. Without cost-benefit analysis, we instead get unemployed coal miners appearing on TV to grill politicians about how awful it is to lose your job even though that job is decades obsolete and poisoning our entire planet. Would eliminating coal hurt coal miners? Yes, it would, at least in the short run. It’s also completely, totally worth it, by at least a thousandfold.

We would invest heavily in improving our transit systems, with automated cars or expanded rail networks, thereby preventing thousands of deaths per year—instead of being shocked and outraged when an automated car finally kills one person, while manual vehicles in their place would have killed half a dozen by now.

We would disarm all of our nuclear weapons, because the risk of a total nuclear apocalypse is not worth it to provide some small increment in national security above our already overwhelming conventional military. While we’re at it, we would downsize that military in order to save enough money to end world hunger.

And oh by the way, we would end world hunger. The benefits of doing so are enormous; the costs are remarkably small. We’ve actually been making a great deal of progress lately—largely due to the work of development economists, and lots and lots of cost-benefit analysis. This process involves causing a lot of economic disruption, making people unemployed, taking riches away from some people and giving them to others; if we weren’t prepared to bear those costs, we would never get these benefits.

Could we do all these things without cost-benefit analysis? I suppose so, if we go through the usual process of covering our ears whenever a downside is presented and amplifying whenever an upside is presented, until we can more or less convince ourselves that there is no downside even though there always is. We can continue having arguments where one side presents only downsides, the other side presents only upsides, and then eventually one side prevails by sheer numbers, and it could turn out to be the upside team (or should I say “tribe”?).

But I think we’d progress a lot faster if we were honest about upsides and downsides, and had the courage to stand up and say, “Yes, that downside is real; but it’s worth it.” I realize it’s not easy to tell a coal miner to his face that his job is obsolete and killing people, and I don’t really blame Hillary Clinton for being wishy-washy about it; but the truth is, we need to start doing that. If we accept that costs are real, we may be able to mitigate them (as Hillary plans to do with a $30 billion investment in coal mining communities, by the way); if we pretend they don’t exist, people will still get hurt but we will be blind to their suffering. Or worse, we will do nothing—and evil will triumph.

Expensive cheap things, cheap expensive things

July 20, JDN 2457590

My posts recently have been fairly theoretical and mathematically intensive, so I thought I’d take a break from that today and offer you a much simpler, more practical post that you could use right away to improve your own finances.

Cognitive economists are so accustomed to using the word “heuristic” in contrast with words like “optimal” and “rational” that we tend to treat them as something bad. If only we didn’t have these darn heuristics, we could be those perfect rational agents the neoclassicists keep telling us about!

But in fact this is almost completely backwards: Heuristics are the reason human beings are capable of rational thought, unlike, well, anything else in the known universe. To be fair, many animals are capable of some limited rationality, often more than most people realize, but still far less than our own—and what rationality they have is born of the same evolutionary heuristics we use. Computers and robots are now approaching something that could be called rationality, but they still have a long way to go before they’ll really be acting rationally rather than perfectly following precise instructions—and of course we made them, modeled after our own thought processes. Current robots are logical, but not rational. The difference between logic and rationality is rather like that between intelligence and wisdom. Logic dictates that coffee is a berry; rationality says you may not enjoy it in your fruit salad. Robots are still at the point where they’d put coffee in our fruit salads if we told them to include a random mix of berries.

Heuristics are what allows us to make rational decisions 90% of the time. We might wish for something that would make us rational 100% of the time, but no known method exists; the best we can do is learn better heuristics to raise our percentage to perhaps 92% or 95%. With no heuristics at all, we would be 0% rational, not 100%.

So today I’m going to offer you a new heuristic, which I think might help you give your choices that little 2% boost. Expensive cheap things, cheap expensive things.

This is a little mantra to repeat to yourself whenever you have a purchasing decision to make—which, in a consumerist economy like ours, is surely several times a day. The precise definition of “cheap” and “expensive” will vary according to your income (to a billionaire, my lifetime income is a pittance; to someone at the UN poverty level, my annual income is an unimaginable bounty of riches). But for a typical middle-class American, “cheap” can be approximately defined by a Jackson heuristic—anything less than $20 is cheap—and “expensive” by a Benjamin heuristic—anything over $100 is expensive. The heuristic doesn’t need to be hard-edged either; you should apply it more thoroughly for purchases of $10,000 (e.g. cars) than for purchases of $1,000, and still more so for purchases of $100,000 (houses).

Expensive cheap things, cheap expensive things; what do I mean by that?

If you are going to buy something cheap, you can choose the expensive variety if you like. If you have the choice of a $1 toothbrush, a $5 toothbrush, and a $10 toothbrush, and you really do like the $10 toothbrush, don’t agonize over it—just buy the damn $10 toothbrush. Obviously there’s no reason to do that if the $1 toothbrush is really just as good for your needs; but if there’s any difference in quality you care about, it is almost certainly worth it to buy the better one.

If you are going to buy something expensive, you should choose the cheap variety if you can. If you have the choice of a $14,000 car, a $15,000 car, and a $16,000 car, you should buy the $14,000 car, unless the other cars are massively superior. You should basically be aiming for the cheapest bare-minimum choice that allows you to meet your needs. (I should be careful using cars as my example, because many old used cars that seem “cheap” are actually more expensive to fuel and maintain than it would cost to simply buy a newer model—but assume you’ve factored in a good estimate of the maintenance cost. You should almost never buy cars that aren’t at least a year old, however—first-year depreciation is huge. Let someone else lease it for a year before you buy it.)
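The two rules can be sketched as a single toy function. The $20 and $100 thresholds are just the Jackson and Benjamin cut-offs from earlier, which should scale with your income; this is a rough rule of thumb, not a precise decision procedure:

```python
def recommend(price, cheap_below=20.0, expensive_above=100.0):
    """'Expensive cheap things, cheap expensive things' as a rough rule.

    Thresholds are the Jackson ($20) and Benjamin ($100) cut-offs for a
    typical middle-class American; adjust them to your own income.
    """
    if price < cheap_below:
        return "cheap: buy the nicer variety if you actually prefer it"
    if price > expensive_above:
        return "expensive: buy the cheapest option that meets your needs"
    return "midrange: apply the rule loosely"

print(recommend(10))       # a toothbrush
print(recommend(15_000))   # a car
```

The asymmetry is deliberate: splurging on the toothbrush costs you a few dollars, while splurging on the car costs you thousands.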

Why do I say this? Many people find the result counter-intuitive: I just told you to spend 900% more on toothbrushes, but insisted that you scrounge to save 12.5% on a car. Even if we adjust for the asymmetry using log points, I told you to indulge in 230 log points of toothbrush for a tiny gain, while insisting that you bear the no-frills bare minimum to save 13 log points of car.

I have also saved you $1,991. That’s why.

Intuitively we tend to think in terms of proportional prices—this car is 12.5% cheaper than that car, this toothbrush is 900% more expensive than that toothbrush. But you don’t spend money in proportions. You spend it in absolute amounts. So when you decide to make a purchase, you need to train yourself to think in terms of the absolute difference in price—paying $9 more versus paying $2000 more.
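The arithmetic behind those figures can be checked directly; a log point here is 100 times the natural log of the price ratio:

```python
import math

def log_points(p_low, p_high):
    """Price gap in log points: 100 * ln(p_high / p_low)."""
    return 100 * math.log(p_high / p_low)

# Proportional view: the toothbrush upgrade looks enormous, the car upgrade small.
print(round(log_points(1, 10)))            # 230 log points of toothbrush
print(round(log_points(14_000, 16_000)))   # 13 log points of car

# Absolute view: what your bank account actually sees.
print((16_000 - 14_000) - (10 - 1))        # 1991 dollars saved overall
```

The proportional view and the absolute view rank the two decisions in opposite orders—which is exactly why the heuristic is worth internalizing.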

Businesses are counting on you not to think this way; that car dealer is surely going to point out that the $16,000 model has a sunroof and upgraded tire rims and whatever, and it’s only 14% more! But unless you would seriously be willing to pay $2,000 to get a sunroof and upgraded tire rims installed later, you should not upgrade to the $16,000 model. Don’t let them bamboozle you with “it’s a $5,000 value!”; it might well cost $5,000 to have it done elsewhere, but that’s not the same thing. Only you can decide whether it’s of sufficient value to you.

There’s another reason this heuristic can be useful, which is that it will tend to pressure you into buying experiences instead of objects—and it is a well-established pattern in cognitive economics that experiences are a more cost-effective source of happiness than objects. “Expensive cheap things, cheap expensive things” doesn’t necessarily pressure toward buying experiences, as one could certainly load up on useless $20 gadgets or spend $5,000 on a luxurious vacation to Paris. But as a general pattern (and heuristics are all about general patterns!) you’re more likely to spend $20 on a dinner or $5,000 on a car. Some of the cheapest things people buy, like dining out with friends, are some of the greatest sources of happiness—you are, in a real sense, buying friendship. Some of the most expensive things people buy, like real estate, are precisely the sort of thing you should be willing to skimp on, because they really won’t bring you happiness. Larger houses are not statistically associated with higher happiness.

Indeed, part of the great crisis of real estate prices (which is a phenomenon across all First World cities, and surprisingly worse in Canada than the US, though worse still in California in particular) probably comes from people not applying this sort of heuristic. “This house is $240,000, but that one is only 10% more and look how much nicer it is!” That’s $24,000. You can buy that nicer house, or you can buy a second car. Or you can have an extra year of your child’s college fund. That is what that 10% actually means. I’m sure this isn’t the primary reason why housing in the US is so ludicrously expensive, but it may be a contributing factor. (Krugman argued similarly during the housing crash.)

Like any heuristic, “Expensive cheap things, cheap expensive things” will sometimes fail you, and if you think carefully you can probably outperform it. But I’ve found it’s a good habit to get into; it has helped me save money more than just about anything else I’ve tried.

“The cake is a lie”: The fundamental distortions of inequality

July 13, JDN 2457583

Inequality of wealth and income, especially when it is very large, fundamentally and radically distorts outcomes in a capitalist market. I’ve already alluded to this matter in previous posts on externalities and marginal utility of wealth, but it is so important I think it deserves to have its own post. In many ways this marks a paradigm shift: You can’t think about economics the same way once you realize it is true.

To motivate what I’m getting at, I’ll expand upon an example from a previous post.

Suppose there are only two goods in the world; let’s call them “cake” (K) and “money” (M). Then suppose there are three people, Baker, who makes cakes, Richie, who is very rich, and Hungry, who is very poor. Furthermore, suppose that Baker, Richie and Hungry all have exactly the same utility function, which exhibits diminishing marginal utility in cake and money. To make it more concrete, let’s suppose that this utility function is logarithmic, specifically: U = 10*ln(K+1) + ln(M+1)

The only difference between them is in their initial endowments: Baker starts with 10 cakes, Richie starts with $100,000, and Hungry starts with $10.

Therefore their starting utilities are:

U(B) = 10*ln(10+1) + ln(0+1) = 23.98

U(R) = 10*ln(0+1) + ln(100,000+1) = 11.51

U(H) = 10*ln(0+1) + ln(10+1) = 2.40

Thus, the total happiness is the sum of these: U = 37.89
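These figures are easy to check. Here is a minimal sketch in Python, assuming (per the setup above) that Baker holds no money and Richie and Hungry hold no cake:

```python
import math

def utility(cake, money):
    """The shared utility function: U = 10*ln(K+1) + ln(M+1)."""
    return 10 * math.log(cake + 1) + math.log(money + 1)

u_baker = utility(10, 0)        # 10 cakes, $0      -> ~23.98
u_richie = utility(0, 100_000)  # 0 cakes, $100,000 -> ~11.51
u_hungry = utility(0, 10)       # 0 cakes, $10      -> ~2.40

print(round(u_baker, 2), round(u_richie, 2), round(u_hungry, 2))
print(round(u_baker + u_richie + u_hungry, 2))  # total: 37.89
```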

Now let’s ask two very simple questions:

1. What redistribution would maximize overall happiness?
2. What redistribution will actually occur if the three agents trade rationally?

If multiple agents have the same diminishing marginal utility function, it’s actually a simple and deep theorem that the total will be maximized if they split the wealth exactly evenly. In the following blockquote I’ll prove the simplest case, which is two agents and one good; it’s an incredibly elegant proof:

Given: for all x, f(x) > 0, f'(x) > 0, f''(x) < 0.

Maximize: f(x) + f(A-x) for fixed A

f'(x) - f'(A-x) = 0

f'(x) = f'(A-x)

Since f''(x) < 0, this critical point is a maximum.

Since f''(x) < 0, f' is strictly decreasing; therefore f' is injective.

x = A - x

x = A/2

QED
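For readers who prefer a numeric check to a proof, a brute-force sketch (using f(x) = ln(x+1), one function satisfying the given conditions) confirms that the even split wins:

```python
import math

def f(x):
    # Any increasing, strictly concave function works; ln(x+1) is one example.
    return math.log(x + 1)

A = 100  # total amount of the good to split between two agents

# Grid search in steps of 0.01 over all possible splits (x, A - x).
best_x = max((x / 100 for x in range(0, A * 100 + 1)),
             key=lambda x: f(x) + f(A - x))
print(best_x)  # the even split: A/2 = 50.0
```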

This can be generalized to any number of agents, and for multiple goods. Thus, in this case overall happiness is maximized if the cakes and money are both evenly distributed, so that each person gets 3 1/3 cakes and $33,336.66.

The total utility in that case is:

3 * (10*ln(10/3+1) + ln(33,336.66+1)) = 3 * (14.66 + 10.414) = 3 * (25.074) = 75.22

That’s considerably better than our initial distribution (almost twice as good). Now, how close do we get by rational trade?

Each person is willing to trade up until the point where their marginal utility of cake is equal to their marginal utility of money. The price of cake will be set by the respective marginal utilities.

In particular, let’s look at the trade that will occur between Baker and Richie. They will trade until their marginal rate of substitution is the same.

The actual algebra involved is obnoxious (if you’re really curious, here are some solved exercises of similar trade problems), so let’s just skip to the end. (I rushed through, so I’m not actually totally sure I got it right, but to make my point the precise numbers aren’t important.)

Basically what happens is that Richie pays an exorbitant price of $10,000 per cake, buying half the cakes with half of his money.

Baker’s new utility and Richie’s new utility are thus the same:

U(R) = U(B) = 10*ln(5+1) + ln(50,000+1) = 17.92 + 10.82 = 28.74

What about Hungry? Yeah, well, he doesn’t have $10,000. If cakes are infinitely divisible, he can buy up to 1/1000 of a cake. But it turns out that even that isn’t worth doing (it would cost too much for what he gains from it), so he may as well buy nothing, and his utility remains 2.40.

Hungry wanted cake just as much as Richie, and because he has so much less money, each dollar spent on cake would have bought him far more happiness. Neoclassical economists promised him that markets were efficient and optimal, and so he thought he’d get the cake he needs—but the cake is a lie.

The total utility is therefore:

U = U(B) + U(R) + U(H)

U = 28.74 + 28.74 + 2.40

U = 59.88
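These totals check out numerically (the exact sum is 59.87; the 59.88 above comes from rounding each term before adding). A sketch, taking the post-trade allocation as stated, which also verifies that Hungry is right not to buy his affordable sliver of cake:

```python
import math

def utility(cake, money):
    # Same shared utility function as before: U = 10*ln(K+1) + ln(M+1).
    return 10 * math.log(cake + 1) + math.log(money + 1)

u_traders = utility(5, 50_000)   # Baker and Richie each: ~28.74
u_hungry = utility(0, 10)        # Hungry, priced out:    ~2.40

total = 2 * u_traders + u_hungry
print(round(total, 2))  # 59.87: above the initial 37.89, below the optimal ~75.22

# Spending all $10 on the 1/1000 of a cake Hungry can afford would hurt him:
print(round(utility(1 / 1000, 0), 2))  # 0.01 -- far worse than keeping his $10
```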

Note three things about this result: First, it is more than where we started at 37.89—trade increases utility. Second, both Richie and Baker are better off than they were—trade is Pareto-improving. Third, the total is less than the optimal value of 75.22—trade is not utility-maximizing in the presence of inequality. This is a general theorem that I could prove formally, if I wanted to bore and confuse all my readers. (Perhaps someday I will try to publish a paper doing that.)

This result is incredibly radical—it basically goes against the core of neoclassical welfare theory, or at least of all its applications to real-world policy—so let me be absolutely clear about what I’m saying, and what assumptions I had to make to get there.

I am saying that if people start with different amounts of wealth, the trades they would willingly engage in, acting purely under their own self-interest, would not maximize the total happiness of the population. Redistribution of wealth toward equality would increase total happiness.

First, I had to assume that we could simply redistribute goods however we like without affecting the total amount of goods. This is wildly unrealistic, which is why I’m not actually saying we should reduce inequality to zero (as would follow if you took this result completely literally). Ironically, this is an assumption that most neoclassical welfare theory agrees with—the Second Welfare Theorem only makes any sense in a world where wealth can be magically redistributed between people without any harmful economic effects. If you weaken this assumption, what you find is basically that we should redistribute wealth toward equality, but beware of the tradeoff between too much redistribution and too little.

Second, I had to assume that there’s such a thing as “utility”—specifically, interpersonally comparable cardinal utility. In other words, I had to assume that there’s some way of measuring how much happiness each person has, and meaningfully comparing them so that I can say whether taking something from one person and giving it to someone else is good or bad in any given circumstance.

This is the assumption neoclassical welfare theory generally does not accept; instead they use ordinal utility, on which we can only say whether things are better or worse, but never by how much. Thus, their only way of determining whether a situation is better or worse is Pareto efficiency, which I discussed in a post a couple years ago. The change from the situation where Baker and Richie trade and Hungry is left in the lurch to the situation where all share cake and money equally in socialist utopia is not a Pareto-improvement. Richie and Baker are slightly worse off with 25.07 utilons in the latter scenario, while they had 28.74 utilons in the former.

Third, I had to assume selfishness—which is again fairly unrealistic, but again not something neoclassical theory disagrees with. If you weaken this assumption and say that people are at least partially altruistic, you can get the result where instead of buying things for themselves, people donate money to help others out, and eventually the whole system achieves optimal utility by willful actions. (It depends just how altruistic people are, as well as how unequal the initial endowments are.) This actually is basically what I’m trying to make happen in the real world—I want to show people that markets won’t do it on their own, but we have the chance to do it ourselves. But even then, it would go a lot faster if we used the power of government instead of waiting on private donations.

Also, I’m ignoring externalities, which are a different type of market failure which in no way conflicts with this type of failure. Indeed, there are three basic functions of government in my view: One is to maintain security. The second is to cancel externalities. The third is to redistribute wealth. The DOD, the EPA, and the SSA, basically. One could also add macroeconomic stability as a fourth core function—the Fed.

One way to escape my theorem would be to deny interpersonally comparable utility, but this makes measuring welfare in any way (including the usual methods of consumer surplus and GDP) meaningless, and furthermore results in the ridiculous claim that we have no way of being sure whether Bill Gates is happier than a child starving and dying of malaria in Burkina Faso, because they are two different people and we can’t compare different people. Far more reasonable is not to believe in cardinal utility, meaning that we can say an extra dollar makes you better off, but we can’t put a number on how much.

And indeed, the difficulty of even finding a unit of measure for utility would seem to support this view: Should I use QALY? DALY? A Likert scale from 0 to 10? There is no known measure of utility that is without serious flaws and limitations.

But it’s important to understand just how strong your denial of cardinal utility needs to be in order for this theorem to fail. It’s not enough that we can’t measure precisely; it’s not even enough that we can’t measure with current knowledge and technology. It must be fundamentally impossible to measure. It must be literally meaningless to say that taking a dollar from Bill Gates and giving it to the starving Burkinabe would do more good than harm, as if you were asserting that triangles are greener than schadenfreude.

Indeed, the whole project of welfare theory doesn’t make a whole lot of sense if all you have to work with is ordinal utility. Yes, in principle there are policy changes that could make absolutely everyone better off, or make some better off while harming absolutely no one; and the Pareto criterion can indeed tell you that those would be good things to do.

But in reality, such policies almost never exist. In the real world, almost anything you do is going to harm someone. The Nuremberg trials harmed Nazi war criminals. The invention of the automobile harmed horse trainers. The discovery of scientific medicine took jobs away from witch doctors. Conversely, almost any policy is going to benefit someone. The Great Leap Forward was a pretty good deal for Mao. The purges advanced the self-interest of Stalin. Slavery was profitable for plantation owners. So if you can only evaluate policy outcomes based on the Pareto criterion, you are literally committed to saying that there is no difference in welfare between the Great Leap Forward and the invention of the polio vaccine.

One way around it (that might actually be a good kludge for now, until we get better at measuring utility) is to broaden the Pareto criterion: We could use a majoritarian criterion, where you care about the number of people benefited versus harmed, without worrying about magnitudes—but this can lead to Tyranny of the Majority. Or you could use the Difference Principle developed by Rawls: find an ordering where we can say that some people are better or worse off than others, and then make the system so that the worst-off people are benefited as much as possible. I can think of a few cases where I wouldn’t want to apply this criterion (essentially they are circumstances where autonomy and consent are vital), but in general it’s a very good approach.

Neither of these depends upon cardinal utility, so have you escaped my theorem? Well, no, actually. You’ve weakened it, to be sure—it is no longer a statement about the fundamental impossibility of welfare-maximizing markets. But applied to the real world, people in Third World poverty are obviously the worst off, and therefore worthy of our help by the Difference Principle; and there are an awful lot of them and very few billionaires, so majority rule says take from the billionaires. The basic conclusion that it is a moral imperative to dramatically reduce global inequality remains—as does the realization that the “efficiency” and “optimality” of unregulated capitalism is a chimera.