Valuing harm without devaluing the harmed

June 9 JDN 2458644

In last week’s post I talked about the matter of “putting a value on a human life”. I explained how we don’t actually need to make a transparently absurd statement like “a human life is worth $5 million” to do cost-benefit analysis; we simply need to ask ourselves what else we could do with any given amount of money. We don’t actually need to put a dollar value on human lives; we need only value them in terms of other lives.

But there is a deeper problem to face here, which is how we ought to value not simply life, but quality of life. The notion is built into the concept of quality-adjusted life-years (QALY), but how exactly do we make such a quality adjustment?

Indeed, much like cost-benefit analysis in general or the value of a statistical life, the very concept of QALY can be repugnant to many people. The problem seems to be that it violates our deeply-held belief that all lives are of equal value: If I say that saving one person adds 2.5 QALY and saving another adds 68 QALY, I seem to be saying that the second person is worth more than the first.

But this is not really true. QALY aren’t associated with a particular individual. They are associated with the duration and quality of life.

It should be fairly easy to convince yourself that duration matters: Saving a newborn baby who will go on to live to be 84 years old adds an awful lot more in terms of human happiness than extending the life of a dying person by a single hour. To call each of these things “saving a life” is actually very unequal: It’s implying that 1 hour for the second person is worth 84 years for the first.

Quality, on the other hand, poses much thornier problems. Presumably, we’d like to be able to say that being wheelchair-bound is a bad thing, and if we can make people able to walk we should want to do that. But this means that we need to assign some sort of QALY cost to being in a wheelchair, which then seems to imply that people in wheelchairs are worth less than people who can walk.

And the same goes for any disability or disorder: Assigning a QALY cost to depression, or migraine, or cystic fibrosis, or diabetes, or blindness, or pneumonia, always seems to imply that people with the condition are worth less than people without. This is a deeply unsettling result.

Yet I think the mistake is in how we are using the concept of “worth”. We are not saying that the happiness of someone with depression is less important than the happiness of someone without; we are saying that the person with depression experiences less happiness—which, in the case of depression especially, is basically true by construction.

Does this imply, however, that if we are given the choice between saving two people, one of whom has a disability, we should save the one without?

Well, here’s an extreme example: Suppose there is a plague which kills 50% of its victims within one year. There are two people in a burning building. One of them has the plague, the other does not. You only have time to save one: Which do you save? I think it’s quite obvious you save the person who doesn’t have the plague.

But that only relies upon duration, which wasn’t so difficult. All right, fine; say the plague doesn’t kill you. Instead, it renders you paralyzed and in constant pain for the rest of your life. Is it really that far-fetched to say that we should save the person who won’t have that experience?

We really shouldn’t think of it as valuing people; we should think of it as valuing actions. QALY are a way of deciding which actions we should take, not which people are more important or more worthy. “Is a person who can walk worth more than a person who needs a wheelchair?” is a fundamentally bizarre and ultimately useless question. ‘Worth more’ in what sense? “Should we spend $100 million developing this technology that will allow people who use wheelchairs to walk?” is the question we should be asking. The QALY cost we assign to a condition isn’t about how much people with that condition are worth; it’s about what resources we should be willing to commit in order to treat that condition. If you have a given condition, you should want us to assign a high QALY cost to it, to motivate us to find better treatments.

I think it’s also important to consider which individuals are having QALY added or subtracted. In last week’s post I talked about how some people read “the value of a statistical life is $5 million” to mean “it’s okay to kill someone as long as you profit at least $5 million”; but this doesn’t follow at all. We don’t say that it’s all right to steal $1,000 from someone just because they lose $1,000 and you gain $1,000. We wouldn’t say it was all right if you had a better investment strategy and would end up with $1,100 afterward. We probably wouldn’t even say it was all right if you were much poorer and desperate for the money (though then we might at least be tempted). If a billionaire kills people to make $10 million each (sadly I’m quite sure that oil executives have killed for far less), that’s still killing people. And since he is a billionaire, his marginal utility of wealth is so low that his value of a statistical life isn’t $5 million; it’s got to be in the billions. So in fact the net happiness of the world has not increased at all.

Above all, it’s vital to appreciate the benefits of doing good cost-benefit analysis. Cost-benefit analysis tells us to stop fighting wars. It tells us to focus our spending on medical research and foreign aid instead of yet more corporate subsidies or aircraft carriers. It tells us how to allocate our public health resources so as to save the most lives. It emphasizes how vital our environmental regulations are in making our lives better and longer.

Could we do all these things without QALY? Maybe—but I suspect we would not do them as well, and when millions of lives are on the line, “not as well” is thousands of innocent people dead. Sometimes we really are faced with two choices for a public health intervention, and we need to decide which one will help the most people. Sometimes we really do have to set a pollution target, and decide just what amount of risk is worth accepting for the economic benefits of industry. These are very difficult questions, and without good cost-benefit analysis we could get the answers dangerously wrong.

Two terms in marginal utility of wealth

JDN 2457569

This post is going to be a little wonkier than most; I’m actually trying to sort out my thoughts and draw some public comment on a theory that has been dancing around my head for a while. The original idea of separating terms in marginal utility of wealth was actually suggested by my boyfriend, and from there I’ve been trying to give it some more mathematical precision to see if I can come up with a way to test it experimentally. My thinking is also influenced by a paper Miles Kimball wrote about the distinction between happiness and utility.

There are lots of ways one could conceivably spend money—everything from watching football games to buying refrigerators to building museums to inventing vaccines. But insofar as we are rational (and we are after all about 90% rational), we’re going to try to spend our money in such a way that its marginal utility is approximately equal across various activities. You’ll buy one refrigerator, maybe two, but not seven, because the marginal utility of refrigerators drops off pretty fast; instead you’ll spend that money elsewhere. You probably won’t buy a house that’s twice as large if it means you can’t afford groceries anymore. I don’t think our spending is truly optimal at maximizing utility, but I think it’s fairly good.

Therefore, it doesn’t make much sense to break down marginal utility of wealth into all these different categories—cars, refrigerators, football games, shoes, and so on—because we already do a fairly good job of equalizing marginal utility across all those different categories. I could see breaking it down into a few specific categories, such as food, housing, transportation, medicine, and entertainment (and this definitely seems useful for making your own household budget); but even then, I don’t get the impression that most people routinely spend too much on one of these categories and not enough on the others.

However, I can think of two quite different fundamental motives behind spending money, which I think are distinct enough to be worth separating.

One way to spend money is on yourself, raising your own standard of living, making yourself more comfortable. This would include both football games and refrigerators, really anything that makes your life better. We could call this the consumption motive, or maybe simply the self-directed motive.

The other way is to spend it on other people, which, depending on your personality, can take either the form of philanthropy to help others or of self-aggrandizement to raise your own relative status. It’s also possible to do both at the same time in various combinations; while the Gates Foundation is almost entirely philanthropic and Trump Tower is almost entirely self-aggrandizing, Carnegie Hall falls somewhere in between, being at once a significant contribution to our society and an obvious attempt to bring praise and adulation to Andrew Carnegie himself. I would also include in this category spending on Veblen goods that are mainly to show off your own wealth and status. We can call this spending the philanthropic/status motive, or simply the other-directed motive.

There is some spending which combines both motives: A car is surely useful, but a Ferrari is mainly for show—but then, a Lexus or a BMW could be either to show off or really because you like the car better. Some form of housing is a basic human need, and bigger, fancier houses are often better, but the main reason one builds mansions in Beverly Hills is to demonstrate to the world that one is fabulously rich. This complicates the theory somewhat, but basically I think the best approach is to try to separate a sort of “spending proportion” on such goods, so that say $20,000 of the Lexus is for usefulness and $15,000 is for show. Empirically this might be hard to do, but theoretically it makes sense.

One of the central mysteries in cognitive economics right now is this apparent paradox: self-reported happiness rises very little, if at all, as income increases (a finding recently replicated even in poor countries, where we might not have expected it to hold), yet self-reported satisfaction continues to rise indefinitely. A number of theories have been proposed to explain this.

This model might just be able to account for that, if by “happiness” we’re really talking about the self-directed motive, and by “satisfaction” we’re talking about the other-directed motive. Self-reported happiness seems to obey a rule that $100 is worth as much to someone with $10,000 as $25 is to someone with $5,000, or $400 to someone with $20,000.

Self-reported satisfaction seems to obey a different rule, such that each unit of additional satisfaction requires a roughly equal proportional increase in income.

By having a utility function with two terms, we can account for both of these effects. Total utility will be u(x), happiness h(x), and satisfaction s(x).

u(x) = h(x) + s(x)

To obey the above rule, happiness must obey harmonic utility, like this, for some constants h0 and r:

h(x) = h0 – r/x

Proof of this is straightforward, though to keep it simple I’ve hand-waved the step where I assume that marginal utility follows a power law:

Given

h'(2x) = 1/4 h'(x)

Let

h'(x) = r x^n

h'(2x) = r (2x)^n

r (2x)^n = 1/4 r x^n

n = -2

h'(x) = r/x^2

h(x) = – r x^(-1) + C

h(x) = h0 – r/x
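The doubling rule is easy to check numerically. Here is a minimal sketch (the constants h0 and r are arbitrary placeholders, chosen only for illustration):

```python
# Numerical check of the harmonic form h(x) = h0 - r/x.
h0, r = 0.0, 1.0  # arbitrary constants for illustration

def h(x):
    return h0 - r / x

def h_prime(x):
    return r / x**2  # derivative of h0 - r/x

# Doubling wealth cuts marginal happiness to a quarter:
assert abs(h_prime(20000) - h_prime(10000) / 4) < 1e-15

# $100 to someone with $10,000 is worth about as much as $25 to someone with $5,000:
gain_rich = h(10100) - h(10000)
gain_poor = h(5025) - h(5000)
print(gain_rich / gain_poor)  # close to 1
```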

Miles Kimball also has some more discussion on his blog about how a utility function of this form works. (His statement about redistribution at the end is kind of baffling though; sure, dollar for dollar, redistributing wealth from the middle class to the poor would produce a higher gain in utility than redistributing wealth from the rich to the middle class. But neither is as good as redistributing from the rich to the poor, and the rich have a lot more dollars to redistribute.)

Satisfaction, however, must obey logarithmic utility, like this, for some constants s0 and k.

(Below I will modify this by replacing x with x+1; that version takes slightly less than an exact proportional increase to have the same effect, but it allows the function to equal s0 at x=0 instead of going to negative infinity.)

s(x) = s0 + k ln(x)

Proof of this is very simple, almost trivial:

Given

s'(x) = k/x

s(x) = k ln(x) + s0
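The proportional rule can likewise be checked numerically: with a logarithm, every doubling of wealth buys the same amount of satisfaction, k ln(2), no matter where you start (constants again arbitrary):

```python
import math

# With s(x) = s0 + k*ln(x), each doubling adds the same satisfaction.
s0, k = 0.0, 1.0  # arbitrary constants for illustration

def s(x):
    return s0 + k * math.log(x)

print(s(2000) - s(1000))      # both differences equal k*ln(2) ≈ 0.6931
print(s(200000) - s(100000))
```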

Both of these functions have a serious problem: as x approaches zero, they go to negative infinity. For self-directed utility this almost makes sense (if your real consumption goes to zero, you die), but it makes no sense at all for other-directed utility; and since there are causes most of us would willingly die for, the disutility of dying should be large, but not infinite.

Therefore I think it’s probably better to use x +1 in place of x:

h(x) = h0 – r/(x+1)

s(x) = s0 + k ln(x+1)

This makes s0 the baseline satisfaction of having no other-directed spending, though the baseline happiness of zero self-directed spending is actually h0 – r rather than just h0. If we want it to be h0, we could use this form instead:

h(x) = h0 + r x/(x+1)

This looks quite different, but it actually only differs by a constant, since r x/(x+1) = r – r/(x+1).

Therefore, my final answer for the utility of wealth (or possibly income, or spending? I’m not sure which interpretation is best just yet) is actually this:

u(x) = h(x) + s(x)

h(x) = h0 + r x/(x+1)

s(x) = s0 + k ln(x+1)

Marginal utility is then the derivatives of these:

h'(x) = r/(x+1)^2

s'(x) = k/(x+1)
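These derivatives are easy to sanity-check with finite differences; here is a sketch taking h0 = s0 = 0 and arbitrary r and k:

```python
import math

r, k, eps = 1.0, 1.0, 1e-6  # arbitrary constants; eps is the step size

def h(x):
    return r * x / (x + 1)      # h0 = 0

def s(x):
    return k * math.log(x + 1)  # s0 = 0

for x in [0.0, 1.0, 10.0]:
    # Forward differences should match the closed-form derivatives.
    assert abs((h(x + eps) - h(x)) / eps - r / (x + 1)**2) < 1e-4
    assert abs((s(x + eps) - s(x)) / eps - k / (x + 1)) < 1e-4

print("derivative formulas check out")
```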

Let’s assign some values to the constants so that we can actually graph these.

Let h0 = s0 = 0, so our baseline is just zero.

Furthermore, let r = k = 1, which would mean that the value of $1 is the same whether spent either on yourself or on others, if $1 is all you have. (This is probably wrong, actually, but it’s the simplest to start with. Shortly I’ll discuss what happens as you vary the ratio k/r.)

Here is the result graphed on a linear scale:

[Figure: Utility_linear]

And now, graphed with wealth on a logarithmic scale:

[Figure: Utility_log]

As you can see, self-directed marginal utility drops off much faster than other-directed marginal utility, so the amount you spend on others relative to yourself rapidly increases as your wealth increases. If that doesn’t sound right, remember that I’m including Veblen goods as “other-directed”; when you buy a Ferrari, it’s not really for yourself. While proportional rates of charitable donation do not actually increase as wealth increases (the observed pattern is U-shaped, largely driven by poor people giving to religious institutions), they probably should. (People should really stop giving to religious institutions! Even the good ones aren’t cost-effective, and some are very, very bad.) Furthermore, if you include spending on relative power and status as the other-directed motive, that kind of spending clearly does increase proportionally as wealth increases—gotta keep up with those Joneses.

If r/k = 1, that basically means you value others exactly as much as yourself, which I think is implausible (maybe some extreme altruists do that, and Peter Singer seems to think this would be morally optimal). r/k < 1 would mean you should never spend anything on yourself, which not even Peter Singer believes. I think r/k = 10 is a more reasonable estimate.

For any given value of r/k, there is an optimal ratio of self-directed versus other-directed spending, which can vary based on your total wealth.

Actually deriving what the optimal proportion would be requires a whole lot of algebra in a post that probably already has too much algebra, but the point is, there is one, and it will depend strongly on the ratio r/k, that is, the overall relative importance of self-directed versus other-directed motivation.

Take a look at this graph, which uses r/k = 10.

[Figure: Utility_marginal]

If you only have 2 to spend, you should spend it entirely on yourself, because up to that point the marginal utility of self-directed spending is always higher. If you have 3 to spend, you should spend most of it on yourself, but a little bit on other people, because after you’ve spent about 2.2 on yourself there is more marginal utility for spending on others than on yourself.

If your available wealth is W, you would spend some amount x on yourself, and then W-x on others:

u(x) = h(x) + s(W-x)

u(x) = r x/(x+1) + k ln(W – x + 1)

Then you take the derivative and set it equal to zero to find the local maximum. I’ll spare you the algebra, but this is the result of that optimization:

x = – 1 – r/(2k) + sqrt(r/k) sqrt(2 + W + r/(4k))
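If you’d like to check that algebra without grinding through it, here is a sketch that compares the closed form against a brute-force search over x (using r/k = 10, as in the graphs; the wealth level W = 20 is arbitrary):

```python
import math

r, k = 10.0, 1.0  # r/k = 10, as in the graphs

def u(x, W):
    # Utility of spending x on yourself and W - x on others.
    return r * x / (x + 1) + k * math.log(W - x + 1)

def optimal_self_spending(W):
    # The closed-form solution derived in the text.
    a = r / k
    return -1 - a / 2 + math.sqrt(a) * math.sqrt(2 + W + a / 4)

W = 20.0
x_star = optimal_self_spending(W)
# Brute-force check: no grid point does better than the formula's answer.
x_grid = max((i * W / 100000 for i in range(100001)), key=lambda x: u(x, W))
print(x_star, x_grid)  # should agree to about three decimal places
```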

As long as k <= r (which more or less means that you care at least as much about yourself as about others—I think this is true of basically everyone) then as long as W > 0 (as long as you have some money to spend) we also have x > 0 (you will spend at least something on yourself).

Below a certain threshold (depending on r/k), the optimal value of x is greater than W, which means that, if possible, you should be receiving donations from other people and spending them on yourself. (Otherwise, just spend everything on yourself). After that, x < W, which means that you should be donating to others. The proportion that you should be donating smoothly increases as W increases, as you can see on this graph (which uses r/k = 10, a figure I find fairly plausible):
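Under the same assumptions, the threshold works out (if I’ve done the algebra right) to W = sqrt(r/k) – 1, which is about 2.16 when r/k = 10; past it, the donated share rises smoothly, as this sketch shows:

```python
import math

r, k = 10.0, 1.0  # r/k = 10 again

def optimal_self_spending(W):
    a = r / k
    return -1 - a / 2 + math.sqrt(a) * math.sqrt(2 + W + a / 4)

threshold = math.sqrt(r / k) - 1  # ≈ 2.16; below this, the formula gives x > W
for W in [1.0, 5.0, 20.0, 100.0, 1000.0]:
    x = min(optimal_self_spending(W), W)  # you can't spend more than you have
    print(W, round((W - x) / W, 3))       # donated share rises with W
```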

[Figure: Utility_donation]

While I’m sure no one literally does this calculation, most people do seem to have an intuitive sense that you should donate an increasing proportion of your income to others as your income increases, and similarly that you should pay a higher proportion in taxes. This utility function would justify that—which is something that most proposed utility functions cannot do. In most models there is a hard cutoff where you should donate nothing up to the point where your marginal utility is equal to the marginal utility of donating, and then from that point forward you should donate absolutely everything. Maybe a case can be made for that ethically, but psychologically I think it’s a non-starter.

I’m still not sure exactly how to test this empirically. It’s already quite difficult to get people to answer questions about marginal utility in a way that is meaningful and coherent (people just don’t think about questions like “Which is worth more? $4 to me now or $10 if I had twice as much wealth?” on a regular basis). I’m thinking maybe they could play some sort of game where they have the opportunity to make money at the game, but must perform tasks or bear risks to do so, and can then keep the money or donate it to charity. The biggest problem I see with that is that the amounts would probably be too small to really cover a significant part of anyone’s total wealth, and therefore couldn’t cover much of their marginal utility of wealth function either. (This is actually a big problem with a lot of experiments that use risk aversion to try to tease out marginal utility of wealth.) But maybe it could work with a variety of experimental participants, provided we also collect income figures on each of them?

How do we measure happiness?

JDN 2457028 EST 20:33.

No, really, I’m asking. I strongly encourage my readers to offer in the comments any ideas they have about the measurement of happiness in the real world; this has been a stumbling block in one of my ongoing research projects.

In one sense the measurement of happiness—or more formally utility—is absolutely fundamental to economics; in another it’s something most economists are astonishingly afraid of even trying to do.

The basic question of economics has nothing to do with money, and is really only incidentally related to “scarce resources” or “the production of goods” (though many textbooks will define economics in this way—apparently implying that a post-scarcity economy is not an economy). The basic question of economics is really this: How do we make people happy?

This must always be the goal in any economic decision, and if we lose sight of that fact we can make some truly awful decisions. Other goals may work sometimes, but they inevitably fail: If you conceive of the goal as “maximize GDP”, then you’ll try to do any policy that will increase the amount of production, even if that production comes at the expense of stress, injury, disease, or pollution. (And doesn’t that sound awfully familiar, particularly here in the US? 40% of Americans report their jobs as “very stressful” or “extremely stressful”.) If you were to conceive of the goal as “maximize the amount of money”, you’d print money as fast as possible and end up with hyperinflation and total economic collapse à la Zimbabwe. If you were to conceive of the goal as “maximize human life”, you’d support methods of increasing population to the point where we had a hundred billion people whose lives were barely worth living. Even if you were to conceive of the goal as “save as many lives as possible”, you’d find yourself investing in whatever would extend lifespan even if it meant enormous pain and suffering—which is a major problem in end-of-life care around the world. No, there is one goal and one goal only: Maximize happiness.

I suppose technically it should be “maximize utility”, but those are in fact basically the same thing as long as “happiness” is broadly conceived as eudaimonia (the joy of a life well-lived) and not a narrow concept of just adding up pleasure and subtracting out pain. The goal is not to maximize the quantity of dopamine and endorphins in your brain; the goal is to achieve a world where people are safe from danger, free to express themselves, with friends and family who love them, who participate in a world that is just and peaceful. We do not want merely the illusion of these things—we want to actually have them. So let me be clear that this is what I mean when I say “maximize happiness”.

The challenge, therefore, is how we figure out if we are doing that. Things like money and GDP are easy to measure; but how do you measure happiness?

Early economists like Adam Smith and John Stuart Mill tried to deal with this question, and while they were not very successful I think they deserve credit for recognizing its importance and trying to resolve it. But sometime around the rise of modern neoclassical economics, economists gave up on the project and instead sought a narrower task: to measure preferences.

This is often called technically ordinal utility, as opposed to cardinal utility; but this terminology obscures the fundamental distinction. Cardinal utility is actual utility; ordinal utility is just preferences.

(The notion that cardinal utility is defined “up to a linear transformation” is really an eminently trivial observation, and it shows just how little physics the physics-envious economists really understand. All we’re talking about here is units of measurement—the same distance is 10.0 inches or 25.4 centimeters, so is distance only defined “up to a linear transformation”? It’s sometimes argued that there is no clear zero—like Fahrenheit and Celsius—but actually it’s pretty clear to me that there is: Zero utility is not existing. So there you go, now you have Kelvin.)

Preferences are a bit easier to measure than happiness, but not by as much as most economists seem to think. If you imagine a small number of options, you can just put them in order from most to least preferred and there you go; and we could imagine asking someone to do that, or, using the technique of revealed preference, infer their preferences from the choices they make, assuming that when given the choice of X and Y, choosing X means you prefer X to Y.

Like much of neoclassical theory, this sounds good in principle and utterly collapses when applied to the real world. Above all: How many options do you have? It’s not easy to say, but the number is definitely huge—and both of those facts pose serious problems for a theory of preferences.

The fact that it’s not easy to say means that we don’t have a well-defined set of choices; even if Y is theoretically on the table, people might not realize it, or they might not see that it’s better even though it actually is. Much of our cognitive effort in any decision is actually spent narrowing the decision space—when deciding whom to date or where to go to college or even what groceries to buy, simply generating a list of viable options involves a great deal of effort and extremely complex computation. If you have a true utility function, you can satisfice (choose the first option that is above a certain threshold) or engage in constrained optimization (decide whether to keep searching or accept your current choice based on how good it is). Under preference theory, there is no such “how good it is” and no such thresholds. You either search forever or choose a cutoff arbitrarily.
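The computational advantage of a utility scale can be made concrete. Here is a toy sketch of satisficing, with made-up utility values and an arbitrary threshold:

```python
import random

random.seed(0)
options = [random.random() for _ in range(1000)]  # hypothetical utilities
threshold = 0.95  # arbitrary "good enough" cutoff

# Satisficing: stop at the first option that clears the threshold,
# instead of comparing every option against every other option.
for examined, utility in enumerate(options, start=1):
    if utility >= threshold:
        break

print(examined, "of", len(options), "options examined")
```

Without a scale there is no threshold to stop at, so the search has no natural stopping point.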

Even if we could decide how many options there are in any given choice, in order for this to form a complete guide for human behavior we would need an enormous amount of information. Suppose there are 10 different items I could have or not have; then there are 10! = 3.6 million possible preference orderings. If there were 100 items, there would be 100! = 9e157 possible orderings. It won’t do simply to decide on each item whether I’d like to have it or not. Some things are complements: I prefer to have shoes, but I probably prefer to have $100 and no shoes at all rather than $50 and just a left shoe. Other things are substitutes: I generally prefer eating either a bowl of spaghetti or a pizza, rather than both at the same time. No, the combinations matter, and that means that we have an exponentially increasing decision space every time we add a new option. If there really is no more structure to preferences than this, we have an absurd computational task to make even the most basic decisions.
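The counts quoted above are easy to verify; the last line shows why the combinations make things even worse:

```python
import math

print(math.factorial(10))            # 3628800, about 3.6 million orderings
print(f"{math.factorial(100):.0e}")  # about 9e+157 orderings
print(2**10)                         # 1024 distinct bundles of just 10 items
```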

This is in fact most likely why we have happiness in the first place. Happiness did not emerge from a vacuum; it evolved by natural selection. Why make an organism have feelings? Why make it care about things? Wouldn’t it be easier to just hard-code a list of decisions it should make? No, on the contrary, it would be exponentially more complex. Utility exists precisely because it is more efficient for an organism to like or dislike things by certain amounts rather than trying to define arbitrary preference orderings. Adding a new item means assigning it an emotional value and then slotting it in, instead of comparing it to every single other possibility.

To illustrate this: I like Coke more than I like Pepsi. (Let the flame wars begin?) I also like getting massages more than I like being stabbed. (I imagine less controversy on this point.) But the difference in my mind between massages and stabbings is an awful lot larger than the difference between Coke and Pepsi. Yet according to preference theory (“ordinal utility”), that difference is not meaningful; instead I have to say that I prefer the pair “drink Pepsi and get a massage” to the pair “drink Coke and get stabbed”. There’s no such thing as “a little better” or “a lot worse”; there is only what I prefer over what I do not prefer, and since these can be assigned arbitrarily there is an impossible computational task before me to make even the most basic decisions.

Real utility also allows you to make decisions under risk, to decide when it’s worth taking a chance. Is a 50% chance of $100 worth giving up a guaranteed $50? Probably. Is a 50% chance of $10 million worth giving up a guaranteed $5 million? Not for me. Maybe for Bill Gates. How do I make that decision? It’s not about what I prefer—I do in fact prefer $10 million to $5 million. It’s about how much difference there is in terms of my real happiness—$5 million is almost as good as $10 million, but $100 is a lot better than $50. My marginal utility of wealth—as I discussed in my post on progressive taxation—is a lot steeper at $50 than it is at $5 million. There’s actually a way to use revealed preferences under risk to estimate true (“cardinal”) utility, developed by von Neumann and Morgenstern. In fact they proved a remarkably strong theorem: If you don’t have a cardinal utility function that you’re maximizing, you can’t make rational decisions under risk. (In fact many of our risk decisions clearly aren’t rational, because we aren’t actually maximizing an expected utility; what we’re actually doing is something more like cumulative prospect theory, the leading cognitive economic theory of risk decisions. We overrespond to extreme but improbable events—like lightning strikes and terrorist attacks—and underrespond to moderate but probable events—like heart attacks and car crashes. We play the lottery but still buy health insurance. We fear Ebola—which has never killed a single American—but not influenza—which kills 10,000 Americans every year.)
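The asymmetry between the $50 case and the $5 million case falls out of any concave utility function. Here is a sketch using log utility over total wealth as a stand-in (the baseline wealth figure is invented for illustration):

```python
import math

base = 10_000.0  # hypothetical existing wealth

def eu_gamble(prize, p=0.5):
    # Expected log-utility of a p chance at winning `prize`.
    return p * math.log(base + prize) + (1 - p) * math.log(base)

# Small stakes: a 50% shot at $100 vs. a sure $50 is nearly a wash.
margin_small = eu_gamble(100) - math.log(base + 50)
# Large stakes: a 50% shot at $10M loses badly to a sure $5M.
margin_big = eu_gamble(10e6) - math.log(base + 5e6)
print(margin_small, margin_big)  # tiny vs. strongly negative
```

The small gamble is almost a matter of indifference, while the large one is decisively refused, which is exactly the pattern described above.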

A lot of economists would argue that it’s “unscientific”—Kenneth Arrow said “impossible”—to assign this sort of cardinal distance between our choices. But assigning distances between preferences is something we do all the time. Amazon.com lets us vote on a 5-star scale, and very few people send in error reports saying that cardinal utility is meaningless and only preference orderings exist. In 2000 I would have said “I like Gore best, Nader is almost as good, and Bush is pretty awful; but of course they’re all a lot better than the Fascist Party.” If we had simply been able to express those feelings on the 2000 ballot according to a range vote, either Nader would have won and the United States would now have a three-party system (and possibly a nationalized banking system!), or Gore would have won and we would be a decade ahead of where we currently are in preventing and mitigating global warming. Either one of these things would benefit millions of people.

This is extremely important because of another thing that Arrow said was “impossible”—namely, “Arrow’s Impossibility Theorem”. It should be called Arrow’s Range Voting Theorem, because simply by restricting preferences to a well-defined utility and allowing people to cast range votes according to that utility, we can fulfill all the requirements that are supposedly “impossible”. The theorem doesn’t say—as it is commonly paraphrased—that there is no fair voting system; it says that range voting is the only fair voting system. A better claim is that there is no perfect voting system: no system in which the strategically best vote always coincides with an honest expression of your true beliefs. The Myerson-Satterthwaite Theorem is then the proper theorem to use; if you could design a voting system that forced you to reveal your beliefs, you could design a market auction that forced you to reveal your optimal price. But the least expressive way to vote in a range vote is to pick your favorite and give them 100% while giving everyone else 0%—which is identical to our current plurality vote system. The worst-case scenario in range voting is our current system.
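For concreteness, here is a toy range-vote tally; the ballots and scores are invented, loosely echoing the 2000 example above:

```python
# Each ballot scores every candidate from 0 to 100; highest average wins.
# A plurality ballot is the degenerate case: 100 for one, 0 for the rest.
ballots = [
    {"Gore": 100, "Nader": 90, "Bush": 20},
    {"Gore": 60, "Nader": 100, "Bush": 0},
    {"Gore": 40, "Nader": 50, "Bush": 100},
]

averages = {c: sum(b[c] for b in ballots) / len(ballots) for c in ballots[0]}
winner = max(averages, key=averages.get)
print(averages, winner)  # a broadly-liked candidate can win
```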

But the fact that utility exists and matters unfortunately doesn’t tell us how to measure it. The current state of the art in economics is what’s called “willingness-to-pay”, where we arrange (or observe) decisions people make involving money and try to assign dollar values to each of their choices. This is how you get disturbing calculations like “the lives lost due to air pollution are worth $10.2 billion.”

Why are these calculations disturbing? Because they have the whole thing backwards—people aren’t valuable because they are worth money; money is valuable because it helps people. It’s also really bizarre because it has to be adjusted for inflation. Finally—and this is the point that far too few people appreciate—the value of a dollar is not constant across people. Because different people have different marginal utilities of wealth, something that I would only be willing to pay $1000 for, Bill Gates might be willing to pay $1 million for—and a child in Africa might only be willing to pay $10, because that is all he has to spend. This makes “willingness-to-pay” a basically meaningless concept unless we account for whose wealth is being spent.
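To illustrate the scale of the distortion: if marginal utility of wealth falls like 1/w² (the harmonic form from my earlier posts), then the dollar price of the same utility gain grows with the square of wealth. The wealth levels here are hypothetical, and the exact numbers in the paragraph above are looser, but the direction is the same:

```python
# Illustrative only: with marginal utility of wealth proportional to
# 1/w**2, willingness-to-pay for one fixed utility gain scales as w**2.
def dollars_per_util(wealth):
    return wealth**2  # inverse of marginal utility 1/w**2 (scale arbitrary)

child, adult = 100, 10_000  # hypothetical wealth levels
print(dollars_per_util(adult) / dollars_per_util(child))  # 10000.0
```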

Utility, on the other hand, might differ between people—but, at least in principle, it can still be added up between them on the same scale. The problem is that “in principle” part: How do we actually measure it?

So far, the best I’ve come up with is to borrow from public health policy and use the QALY, or quality-adjusted life year. By asking people macabre questions like “What is the maximum number of years of your life you would give up to not have a severe migraine every day?” (I’d say about 20—that’s where I feel ambivalent. At 10 I definitely would; at 30 I definitely wouldn’t.) or “What chance of total paralysis would you take in order to avoid being paralyzed from the waist down?” (I’d say about 20%.) we assign utility values: 80 years of migraines is worth giving up 20 years to avoid, so chronic migraine is a quality of life factor of 0.75. Total paralysis is 5 times as bad as paralysis from the waist down, so if waist-down paralysis is a quality of life factor of 0.90 then total paralysis is 0.50.
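These trade-off answers can be converted into quality factors mechanically. Here is a minimal sketch (purely illustrative; the function names are mine, and the inputs are the two examples above):

```python
# Sketch: turning trade-off answers into QALY quality-of-life factors.
# Illustrative only; numbers are the migraine and paralysis examples.

def quality_from_time_tradeoff(years_remaining, years_given_up):
    """If you'd give up `years_given_up` of `years_remaining` years to
    avoid a condition, living with it is worth the remaining fraction."""
    return (years_remaining - years_given_up) / years_remaining

def quality_from_risk_tradeoff(prob_worse, quality_worse):
    """If you'd accept probability `prob_worse` of a worse condition to
    avoid a milder one for certain, the milder condition's QALY loss is
    `prob_worse` times the worse condition's loss."""
    return 1 - prob_worse * (1 - quality_worse)

# 80 years of daily migraines vs. giving up 20 of those years:
migraine = quality_from_time_tradeoff(80, 20)
print(migraine)  # 0.75

# Accepting a 20% chance of total paralysis (factor 0.50) to avoid
# certain waist-down paralysis recovers the 0.90 factor:
waist_down = quality_from_risk_tradeoff(0.20, 0.50)
print(waist_down)  # 0.9
```

The two functions are just the two indifference conditions from the survey questions, solved for the quality factor.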

You can probably already see that there are lots of problems: What if people don’t agree? What if, due to framing effects, the same person gives different answers to slightly different phrasings? Some conditions will directly bias our judgments—depression being the obvious example. How many years of your life would you give up to not be depressed? For some people the answer, tragically, is all of them; that is what suicide means. And how well do we really know our preferences on these sorts of decisions, given that most of them are decisions we will never actually have to make? It’s difficult enough to make the actual decisions in our lives, let alone hypothetical decisions we’ve never encountered.

Another problem is often suggested as well: How do we apply this methodology outside questions of health? Does it really make sense to ask you how many years of your life drinking Coke or driving your car is worth?

Well, actually… it had better, because you make that sort of decision all the time. You drive instead of staying home, because you value where you’re going more than the risk of dying in a car accident. You drive instead of walking because getting there on time is worth that additional risk as well. You eat foods you know aren’t good for you because you think the taste is worth the cost. Indeed, most of us aren’t making most of these decisions very well—maybe you shouldn’t actually drive or drink that Coke. But in order to know that, we need to know how many years of your life a Coke is worth.

As a very rough estimate, I figure you can convert from willingness-to-pay to QALY by dividing by your annual consumption spending. Say you spend about $20,000 a year—pretty typical for a First World individual. Then $1 is worth about 50 microQALY, or about 26 quality-adjusted life-minutes. Now suppose you are in Third World poverty; your consumption might be only $200 a year, so $1 becomes worth 5 milliQALY, or 1.8 quality-adjusted life-days. The very richest individuals might spend as much as $10 million a year on consumption, so $1 to them is only worth 100 nanoQALY, or 3 quality-adjusted life-seconds.
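The conversion is just a division, but the unit changes are easy to fumble, so here is a quick sketch reproducing the three figures above (illustrative only):

```python
# Rough willingness-to-pay -> QALY conversion: divide dollars by
# annual consumption spending. Illustrative sketch only.

MINUTES_PER_YEAR = 365.25 * 24 * 60
SECONDS_PER_YEAR = 365.25 * 24 * 3600
DAYS_PER_YEAR = 365.25

def dollars_to_qaly(dollars, annual_consumption):
    return dollars / annual_consumption

# First World, $20,000/year: about 50 microQALY (~26 minutes) per $1
print(dollars_to_qaly(1, 20_000) * 1e6)               # ~50 microQALY
print(dollars_to_qaly(1, 20_000) * MINUTES_PER_YEAR)  # ~26.3 minutes

# Third World poverty, $200/year: about 5 milliQALY (~1.8 days) per $1
print(dollars_to_qaly(1, 200) * DAYS_PER_YEAR)        # ~1.8 days

# Ultra-rich, $10 million/year: about 100 nanoQALY (~3 seconds) per $1
print(dollars_to_qaly(1, 10_000_000) * SECONDS_PER_YEAR)  # ~3.2 seconds
```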

That’s an extremely rough estimate, of course; it assumes you are in perfect health, all your time is equally valuable, and all your purchases are optimized at the margin. Don’t take it too literally; based on the above estimate, an hour to you is worth about $2.30, so it would be worth your while to work for even $3 an hour. Here’s a simple correction we should probably make: if only a third of your time is really usable for work, you should expect at least $6.90 an hour—and hey, that’s a little less than the US minimum wage. So I think we’re in the right order of magnitude, but the details have a long way to go.

So let’s hear it, readers: How do you think we can best measure happiness?

The moral—and economic—case for progressive taxation

JDN 2456935 PDT 09:44.

Broadly speaking, there are three ways a tax system can be arranged: It can be flat, in which every person pays the same tax rate; it can be regressive, in which people with higher incomes pay lower rates; or it can be progressive, in which case people with higher incomes pay higher rates.

There are certain benefits to a flat tax: Above all, it’s extremely easy to calculate. It’s easy to determine how much revenue a given tax rate will raise; multiply the rate by your GDP. It’s also easy to determine how much a given person should owe; multiply the rate by their income. This also makes the tax withholding process much easier; a fixed proportion can be withheld from all income everyone makes, without worrying about how much they made before or are expected to make later. If your goal is minimal bureaucracy, a flat tax does have something to be said for it.

A regressive tax, on the other hand, is just as complicated as a progressive tax but has none of the benefits. It’s unfair because you’re taking a larger share from the people who can least afford it. (Note that this is true even if the rich actually pay a higher total; the key point, which I will explain in detail shortly, is that a dollar is worth more to you if you don’t have very many.) There is basically no reason you would ever want a regressive tax system—and yet, all US states have regressive tax systems. This is mainly because they rely upon sales taxes, which are regressive because rich people spend a smaller portion of what they have. If you make $10,000 per year, you probably spend $9,500 (you may even spend $15,000 and rack up the difference in debt!). If you make $50,000, you probably spend $40,000. But if you make $10 million, you probably only spend $4 million. Since sales taxes only tax what you spend, the rich effectively pay a lower rate. This could be corrected to some extent by raising the sales tax on luxury goods—say a 20% rate on wine and a 50% rate on yachts—but this is awkward and very few states even try. Not even my beloved California; they fear drawing the ire of wineries and Silicon Valley.
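The regressivity falls straight out of those spending figures. A quick sketch (the 6% sales tax rate is an assumption for illustration; the income and spending pairs are the ones above):

```python
# Why sales taxes are regressive: the effective rate on income falls as
# income rises, because the rich spend a smaller share of what they make.
# The 6% rate is an assumed illustrative sales tax.

SALES_TAX = 0.06

households = [
    (10_000, 9_500),          # (income, spending)
    (50_000, 40_000),
    (10_000_000, 4_000_000),
]

for income, spending in households:
    tax = spending * SALES_TAX
    print(f"income ${income:,}: effective rate {tax / income:.1%}")
```

The effective rates come out around 5.7%, 4.8%, and 2.4% respectively: the richer you are, the lower the share of your income the sales tax takes.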

The best option is to make the tax system progressive. Thomas Piketty has been called a “Communist” for favoring strongly progressive taxation, but in fact most Americans—including Republicans—agree that our tax system should be progressive. (Most Americans also favor cutting the Department of Defense rather than Medicare. This then raises the question: Why isn’t Congress doing that? Why aren’t people voting in representatives who will?) Most people judge whether taxes are fair based on what they themselves pay—which is why, in surveys, the marginal rate on the top 1% is basically unrelated to whether people think taxes are too high, even though that top bracket is the critical decision in designing any tax system: you can raise about 20% of your revenue by hurting about 1% of your people. In a typical sample of 1,000 respondents, only about 10 are in the top 1%. If you want to run for Congress, the implication is clear: Cut taxes on all but the top 1%, raise them enormously on the top 0.1%, 0.01%, and 0.001%, and leave the rest of the top 1% the same. People will feel that you’ve made the taxes more fair, and you’ve also raised more revenue. In other words, make the tax system more progressive.

The good news on this front is that the US federal tax system is progressive—barely. Actually the US tax system is especially progressive over the whole distribution—by some measures the most progressive in the world—but the problem is that it’s not nearly progressive enough at the very top, where the real money is. The usual measure based on our Gini coefficient ignores the fact that Warren Buffett pays a lower rate than his secretary. The Gini is based on population, and billionaires are a tiny portion of the population—but they are not a tiny portion of the money. Net wealth of the 400 richest people (the top 0.0001%) adds up to about $2 trillion (13% of our $15 trillion GDP, or about 4% of our $54 trillion net wealth). It also matters of course how you spend your tax revenue; even though Sweden’s tax system is no more progressive than ours and their pre-tax inequality is about the same, their spending is much more targeted at reducing inequality.

Progressive taxation is inherently more fair, because the value of a dollar decreases the more you have. We call this diminishing marginal utility of wealth. There is a debate within the cognitive economics literature about just how quickly the marginal utility of wealth decreases. On the low end, Easterlin argues that it drops off extremely fast, becoming almost negligible as low as $75,000 per year. This paper is on the high end, arguing that marginal utility decreases “only” as the logarithm of how much you have. That’s what I’ll use in this post, because it’s the most conservative reasonable estimate. I actually think the truth is somewhere in between, with marginal utility decreasing about exponentially.

Logarithms are also really easy to work with, once you get used to them. So let’s say that the amount of happiness (utility) U you get from an amount of income I is like this: U = ln(I)

Now let’s suppose the IRS comes along and taxes your money at a rate r. We must have r < 1, or otherwise they’re trying to take money you don’t have. We don’t need to have r > 0; r < 0 would just mean that you receive more in transfers than you lose in taxes. For the poor we should have r < 0.

Now your happiness is U = ln((1-r)I).

By the magic of logarithms, this is U = ln(I) + ln(1-r).

If r is between 0 and 1, ln(1-r) is negative and you’re losing happiness. (If r < 0, you’re gaining happiness.) The amount of happiness you lose, ln(1-r), is independent of your income. So if your goal is to take a fixed amount of happiness, you should tax at a fixed rate of income—a flat tax.
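This is easy to verify numerically; a minimal check of the identity above:

```python
import math

# With logarithmic utility, a flat tax at rate r costs everyone exactly
# ln(1 - r) in utility, regardless of income:
#     ln((1 - r) * I) = ln(I) + ln(1 - r)

r = 0.25
for income in (10_000, 100_000, 10_000_000):
    loss = math.log(income) - math.log((1 - r) * income)
    print(f"income ${income:,}: utility loss {loss:.4f}")
# Every line shows the same loss, -ln(0.75), about 0.2877.
```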

But that really isn’t fair, is it? If I’m getting 100 utilons of happiness from my money and you’re only getting 2 utilons from your money, then taking that 1 utilon, while it hurts the same—that’s the whole point of utility—leaves you an awful lot worse off than I. It actually makes the ratio between us worse, going from 50 to 1, all the way up to 99 to 1.

Notice how if we had a regressive tax, it would be obviously unfair—we’d actually take more utility from poor people than rich people. I have 100 utilons, you have 2 utilons; the taxes take 1.5 of yours but only 0.5 of mine. That seems frankly outrageous; but it’s what all US states have.

Most of the money you have is ultimately dependent on your society. Let’s say you own a business and made your wealth selling products; it seems like you deserve to have that wealth, doesn’t it? (Don’t get me started on people who inherited their wealth!) Well, in order to do that, you need to have strong institutions of civil government; you need security against invasion; you need protection of property rights and control of crime; you need a customer base who can afford your products (that’s our problem in the Second Depression); you need workers who are healthy and skilled; you need a financial system that provides reliable credit (also a problem). I’m having trouble finding any good research on exactly what proportion of individual wealth is dependent upon the surrounding society, but let’s just say Bill Gates wouldn’t be spending billions fighting malaria in villages in Ghana if he had been born in a village in Ghana. It doesn’t matter how brilliant or determined or hard-working you are, if you live in a society that can’t support economic activity.

In other words, society is giving you a lot of happiness you wouldn’t otherwise have. Because of this, it makes sense that in order to pay for all that stuff society is doing for you (and maintain a stable monetary system), they would tax you according to how much happiness they’re giving you. Hence we shouldn’t tax your money at a constant rate; we should tax your utility at a constant rate and then convert back to money. This defines a new sort of “tax rate” which I’ll call p. Like our tax rate r, p needs to be less than 1, but it doesn’t need to be greater than 0.

Of the U = ln(I) utility you get from your money, you will get to keep U = (1-p) ln(I). Say it’s 10%; then if I have 100 utilons, they take 10 utilons and leave me with 90. If you have 2 utilons, they take 0.2 and leave you with 1.8. The ratio between us remains the same: 50 to 1.
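The ratio-preserving property is trivial to check, using the 100-and-2-utilon example above:

```python
# Taxing utility at a constant rate p leaves everyone's utility ratios
# unchanged: the factor (1 - p) cancels out of any ratio.

p = 0.10
u_rich, u_poor = 100.0, 2.0      # utilons, as in the example above
after_rich = u_rich * (1 - p)    # 90.0
after_poor = u_poor * (1 - p)    # 1.8
print(after_rich / after_poor)   # still 50 to 1
```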

What does this mean for the actual tax rate? It has to be progressive. Very progressive, as a matter of fact. And in particular, progressive all the way up—there is no maximum tax bracket.

The amount of money you had before is just I.

The amount of money you have now can be found as the amount of money I’ that gives you the right amount of utility. U = ln(I’) = (1-p) ln(I). Take the exponential of both sides: I’ = I^(1-p).

The units on this are a bit weird, “dollars to the 0.8 power”? Oddly, this rarely seems to bother economists when they use Cobb-Douglas functions which are like K^(1/3) L^(2/3). It bothers me though; to really make this tax system in practice you’d need to fix the units of measurement, probably using some subsistence level. Say that’s set at $10,000; instead of saying you make $2 million, we’d say you make 200 subsistence levels.

The tax rate you pay is then r = 1 – I’/I, which is r = 1 – I^-p. As I increases, I^-p decreases, so r gets closer and closer to 1. It never actually hits 1 (that would be a 100% tax rate, which hardly anyone thinks is fair), but for very large incomes it does get quite close.

Here, let’s use some actual numbers. Suppose as I said we make the subsistence level $10,000. Let’s also set p = 0.1, meaning we tax 10% of your utility. Then, if you make the US median individual income, that’s about $30,000 which would be I = 3. US per-capita GDP of $55,000 would be I = 5.5, and so on. I’ll ignore incomes below the subsistence level for now—basically what you want to do there is establish a basic income so that nobody is below the subsistence level.

I made a table of tax rates and after-tax incomes that would result:

Pre-tax income Tax rate After-tax income
$10,000 0.0% $10,000
$20,000 6.7% $18,661
$30,000 10.4% $26,879
$40,000 12.9% $34,822
$50,000 14.9% $42,567
$60,000 16.4% $50,158
$70,000 17.7% $57,622
$80,000 18.8% $64,980
$90,000 19.7% $72,247
$100,000 20.6% $79,433
$1,000,000 36.9% $630,957
$10,000,000 49.9% $5,011,872
$100,000,000 60.2% $39,810,717
$1,000,000,000 68.4% $316,227,766
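The table above can be reproduced with a few lines of code, using r = 1 – I^-p with income I measured in subsistence levels of $10,000:

```python
# Reproduce the p = 0.1 tax table: r = 1 - I**(-p), with income I
# measured in subsistence levels of $10,000.

SUBSISTENCE = 10_000
p = 0.1

for pretax in [10_000, 20_000, 30_000, 40_000, 50_000, 60_000,
               70_000, 80_000, 90_000, 100_000,
               1_000_000, 10_000_000, 100_000_000, 1_000_000_000]:
    I = pretax / SUBSISTENCE
    r = 1 - I ** (-p)
    print(f"${pretax:,}  {r:.1%}  ${pretax * (1 - r):,.0f}")
```

Raising p to 0.2 or 0.3 regenerates the next two tables with the same loop.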

What if that’s not enough revenue? We could raise to p = 0.2:

Pre-tax income Tax rate After-tax income
$10,000 0.0% $10,000
$20,000 12.9% $17,411
$30,000 19.7% $24,082
$40,000 24.2% $30,314
$50,000 27.5% $36,239
$60,000 30.1% $41,930
$70,000 32.2% $47,433
$80,000 34.0% $52,780
$90,000 35.6% $57,995
$100,000 36.9% $63,096
$1,000,000 60.2% $398,107
$10,000,000 74.9% $2,511,886
$100,000,000 84.2% $15,848,932
$1,000,000,000 90.0% $100,000,000

The richest 400 people in the US have a combined net wealth of about $2.2 trillion. If we assume that billionaires make about a 10% return on their net wealth, this 90% rate would raise nearly $200 billion just from those 400 billionaires alone, enough to pay all interest on the national debt. Let me say that again: This tax system would raise enough money from a group of people who could fit in a large lecture hall to service the national debt. And it could do so indefinitely, because we are only taxing the interest, not the principal.
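As a sanity check on that figure, with all inputs as assumed in the paragraph above:

```python
# Back-of-the-envelope check on the billionaire revenue claim.
# All figures are the assumptions stated in the text.

net_wealth = 2.2e12    # combined wealth of the richest 400
return_rate = 0.10     # assumed 10% annual return on wealth
top_tax_rate = 0.90    # rate at the billion-dollar bracket (p = 0.2)

income = net_wealth * return_rate
revenue = income * top_tax_rate
print(f"${revenue / 1e9:.0f} billion per year")  # about $200 billion
```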

And what if that’s still not enough? We could raise it even further, to p = 0.3. Now the tax rates look a bit high for most people, but not absurdly so—and notice how the person at the poverty line is still paying nothing, as it should be. The millionaire is unhappy with 75%, but the billionaire is really unhappy with his 97% rate. But the government now has plenty of money.

Pre-tax income Tax rate After-tax income
$10,000 0.0% $10,000
$20,000 18.8% $16,245
$30,000 28.1% $21,577
$40,000 34.0% $26,390
$50,000 38.3% $30,852
$60,000 41.6% $35,051
$70,000 44.2% $39,045
$80,000 46.4% $42,871
$90,000 48.3% $46,555
$100,000 49.9% $50,119
$1,000,000 74.9% $251,189
$10,000,000 87.4% $1,258,925
$100,000,000 93.7% $6,309,573
$1,000,000,000 96.8% $31,622,777

Is it fair to tax the super-rich at such extreme rates? Well, why wouldn’t it be? They are living fabulously well, and most of their opportunity to do so is dependent upon living in our society. It’s actually not at all unreasonable to think that over 97% of the wealth a billionaire has is dependent upon society in this way—indeed, I think it’s unreasonable to imagine that it’s any less than 99.9%. If you say that the portion a billionaire receives from society is less than 99.9%, you are claiming that it is possible to become a millionaire while living on a desert island. (Remember, 0.1% of $1 billion is $1 million.) Forget the money system; do you really think that anything remotely like a millionaire standard of living is possible from catching your own fish and cutting down your own trees?

Another fun fact is that this tax system will not change the ordering of incomes at all. If you were the 37,824th richest person yesterday, you will be the 37,824th richest person today; you’ll just have a lot less money while you’re at it. And if you were the 300,120,916th richest person, you’ll still be the 300,120,916th richest person, and probably still have the same amount of money you did before (or even more, if the basic income is doled out on tax day).
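The ordering claim holds because after-tax income I^(1-p) is strictly increasing in I whenever p < 1. A one-line check:

```python
# After-tax income I**(1 - p) is strictly increasing in I (for p < 1),
# so the tax never changes who is richer than whom.
p = 0.3
incomes = [1.5, 2, 10, 100, 100_000]        # in subsistence levels
after_tax = [I ** (1 - p) for I in incomes]
print(after_tax == sorted(after_tax))       # True
```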

And these figures, remember, are based on a conservative estimate of how quickly the marginal utility of wealth decreases. I’m actually pretty well convinced that it’s much faster than that, in which case even these tax rates may not be progressive enough.

Many economists worry that taxes reduce the incentive to work. If you are taxed at 30%, that’s like having a wage that’s 30% lower. It’s not hard to imagine why someone might not work as much if they were being paid 30% less.

But there are actually two effects here. One is the substitution effect: a higher wage gives you more reason to work. The other is the income effect: having more money means that you can meet your needs without working as much.

For low incomes, the substitution effect dominates; if your pay rises from $12,000 a year to $15,000, you’re probably going to work more, because you get paid more to work and you’re still hardly wealthy enough to rest on your laurels.

For moderate incomes, the effects actually balance quite well; people who make $40,000 work about the same number of hours as people who make $50,000.

For high incomes, the income effect dominates; if your pay rises from $300,000 to $400,000, you’re probably going to work less, because you can pay all your bills while putting in less work.

So if you want to maximize work incentives, what should you do? You want to raise the wages of poor people and lower the wages of rich people. In other words, you want very low—or negative—taxes on the lower brackets, and very high taxes on the upper brackets. If you’re genuinely worried about taxes distorting incentives to work, you should be absolutely in favor of progressive taxation.

In conclusion: Because money is worth less to you the more of it you have, in order to take a fixed proportion of the happiness, we should be taking an increasing proportion of the money. In order to be fair in terms of real utility, taxes should be progressive. And this would actually increase work incentives.

Pareto Efficiency: Why we need it—and why it’s not enough

JDN 2456914 PDT 11:45.

I already briefly mentioned the concept in an earlier post, but Pareto-efficiency is so fundamental to both ethics and economics that I decided I would spend some more time explaining exactly what it’s about.

This is the core idea: A system is Pareto-efficient if you can’t make anyone better off without also making someone else worse off. It is Pareto-inefficient if the opposite is true, and you could improve someone’s situation without hurting anyone else.

Improving someone’s situation without harming anyone else is called a Pareto-improvement. A system is Pareto-efficient if and only if there are no possible Pareto-improvements.

Zero-sum games are always Pareto-efficient. If the game is about how we distribute the same $10 between two people, any dollar I get is a dollar you don’t get, so no matter what we do, we can’t make either of us better off without harming the other. You may have ideas about what the fair or right solution is—and I’ll get back to that shortly—but all possible distributions are Pareto-efficient.

Where Pareto-efficiency gets interesting is in nonzero-sum games. The most famous and most important such game is the so-called Prisoner’s Dilemma; I don’t like the standard story used to set up the game, so I’m going to give you my own. Two corporations, Alphacomp and Betatech, make PCs. The computers they make are of basically the same quality and neither is a big brand name, so very few customers are going to choose on anything except price. Combining labor, materials, equipment and so on, each PC costs $300 to manufacture, and most customers are willing to buy a PC as long as it’s no more than $1000. Suppose there are 1000 customers buying. Now the question is, what price do the companies set? They would both make the most profit if they set the price at $1000, because customers would still buy and they’d make $700 on each unit, each making $350,000. But now suppose Alphacomp sets a price at $1000; Betatech could undercut them by making the price $999 and sell twice as many PCs, making $699,000. And then Alphacomp could respond by setting the price at $998, and so on. The only stable end result if they are both selfish profit-maximizers—the Nash equilibrium—is when the price they both set is $301, meaning each company only profits $1 per PC; splitting the market, each makes a paltry $500. Indeed, this result is what we call in economics perfect competition. This is great for consumers, but not so great for the companies.

If you focus on the most important choice, $1000 versus $999—to collude or to compete—we can set up a table of how much each company would profit by making that choice (a payoff matrix or normal form game in game theory jargon).

              A: $999               A: $1000
B: $999       A: $349k, B: $349k    A: $0,    B: $699k
B: $1000      A: $699k, B: $0       A: $350k, B: $350k

Obviously the choice that makes both companies best-off is for both companies to make the price $1000; that is Pareto-efficient. But it’s also Pareto-efficient for Alphacomp to choose $999 and the other one to choose $1000, because then they sell twice as many computers. We have made someone worse off—Betatech—but it’s still Pareto-efficient because we couldn’t give Betatech back what they lost without taking some of what Alphacomp gained.

There’s only one option that’s not Pareto-efficient: If both companies charge $999, they could both have made more money if they’d charged $1000 instead. The problem is, that’s not the Nash equilibrium; the stable state is the one where they set the price lower.
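We can verify mechanically that $999/$999 is the only Nash equilibrium of this little game, by checking best responses cell by cell (a sketch, with payoffs in thousands of dollars as in the matrix above):

```python
# Best-response check on the 2x2 pricing game.
# payoffs[(a, b)] = (Alphacomp's profit, Betatech's profit) in $1000s.
payoffs = {
    (999, 999):   (349, 349),
    (999, 1000):  (699, 0),
    (1000, 999):  (0, 699),
    (1000, 1000): (350, 350),
}

prices = (999, 1000)

def is_nash(a, b):
    """Neither firm can gain by unilaterally changing its own price."""
    pa, pb = payoffs[(a, b)]
    best_a = all(payoffs[(a2, b)][0] <= pa for a2 in prices)
    best_b = all(payoffs[(a, b2)][1] <= pb for b2 in prices)
    return best_a and best_b

print([cell for cell in payoffs if is_nash(*cell)])  # [(999, 999)]
```

The mutual-$1000 cell fails the check because either firm gains by undercutting, even though that cell is the one both firms would prefer.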

This means that the only case that isn’t Pareto-efficient is precisely the one the system will naturally trend toward if both companies are selfish profit-maximizers. (And while most human beings are nothing like that, most corporations actually get pretty close. They aren’t infinite, but they’re huge; they aren’t identical, but they’re very similar; and they basically are psychopaths.)

In jargon, we say the Nash equilibrium of a Prisoner’s Dilemma is Pareto-inefficient. That one sentence is basically why John Nash was such a big deal; up until that point, everyone had assumed that if everyone acted in their own self-interest, the end result would have to be Pareto-efficient; Nash proved that this isn’t true at all. Everyone acting in their own self-interest can doom us all.

It’s not hard to see why Pareto-efficiency would be a good thing: if we can make someone better off without hurting anyone else, why wouldn’t we? What’s harder for most people—and even most economists—to understand is that just because an outcome is Pareto-efficient, that doesn’t mean it’s good.

I think this is easiest to see in zero-sum games, so let’s go back to my little game of distributing the same $10. Let’s say it’s all within my power to choose—this is called the ultimatum game. If I take $9 for myself and only give you $1, is that Pareto-efficient? It sure is; for me to give you any more, I’d have to lose some for myself. But is it fair? Obviously not! The fair option is for me to go fifty-fifty, $5 and $5; and maybe you’d forgive me if I went sixty-forty, $6 and $4. But if I take $9 and only offer you $1, you know you’re getting a raw deal.

Actually, as the game is often played, you have the choice to say, “Forget it; if that’s your offer, we both get nothing.” In that case the game is nonzero-sum, and the choice you’ve just taken is not Pareto-efficient! Neoclassicists are typically baffled at the fact that you would turn down that free $1, paltry as it may be; but I’m not baffled at all, and I’d probably do the same thing in your place. You’re willing to pay that $1 to punish me for being so stingy. And indeed, if you allow this punishment option, guess what? People aren’t as stingy! If you play the game without the rejection option, people typically take about $7 and give about $3 (still fairer than the $9/$1 split, you may notice; most people aren’t psychopaths), but if you allow it, people typically take about $6 and give about $4. Now, these are pretty small sums of money, so it’s a fair question what people might do if $100,000 were on the table and they were offered $10,000. But that doesn’t mean people aren’t willing to stand up for fairness; it just means that they’re only willing to go so far. They’ll take a $1 hit to punish someone for being unfair, but that $10,000 hit is just too much. I suppose this means most of us do what the Guess Who told us: “You can sell your soul, but don’t you sell it too cheap!”

Now, let’s move on to the more complicated—and more realistic—scenario of a nonzero-sum game. In fact, let’s make the “game” a real-world situation. Suppose Congress is debating a bill that would introduce a 70% marginal income tax on the top 1% to fund a basic income. (Please, can we debate that, instead of proposing a balanced-budget amendment that would cripple US fiscal policy indefinitely and lead to a permanent depression?)

This tax would raise about 14% of GDP in revenue, or about $2.4 trillion a year (yes, really). It would then provide, for every man, woman and child in America, a $7000 per year income, no questions asked. For a family of four, that would be $28,000, which is bound to make their lives better.

But of course it would also take a lot of money from the top 1%; Mitt Romney would only make $6 million a year instead of $20 million, and Bill Gates would have to settle for $2.4 billion a year instead of $8 billion. Since it’s the whole top 1%, it would also hurt a lot of people with more moderate high incomes, like your average neurosurgeon or Paul Krugman, who each make about $500,000 a year. About $100,000 of that is above the cutoff for the top 1%, so they’d each have to pay about $70,000 more than they currently do in taxes; if they were paying $175,000 before, they’re now paying $245,000. Where they once took home $325,000, they now take home $255,000. (Probably not as big a difference as you thought, right? Most people do not seem to understand how marginal tax rates work, as evinced by “Joe the Plumber”, who thought that if he made $250,001 he would be taxed at the top rate on the whole amount—no, just that last $1.)
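Marginal rates are easy to get wrong, so here is a minimal sketch of how they actually apply. The two-bracket schedule, with a hypothetical 35% rate below the $400,000 cutoff, is my own illustration, not the actual US tax code:

```python
def tax_owed(income, brackets):
    """brackets: list of (cutoff, marginal_rate) in ascending order.
    Each rate applies only to the income above its own cutoff."""
    owed = 0.0
    for i, (cutoff, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > cutoff:
            owed += (min(income, upper) - cutoff) * rate
    return owed

# Hypothetical two-bracket schedule: 35% up to $400,000, 70% above.
brackets = [(0, 0.35), (400_000, 0.70)]

print(round(tax_owed(500_000, brackets)))  # 210000
# Crossing the cutoff by $1 only taxes that last $1 at the higher rate:
print(round(tax_owed(400_001, brackets) - tax_owed(400_000, brackets), 2))
```

Earning $1 above the cutoff costs 70 cents more in tax, not a higher rate on the whole income, which is exactly the point Joe the Plumber missed.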

You can even suppose that it would hurt the economy as a whole, though in fact there’s no evidence of that—we had tax rates like this in the 1960s and our economy did just fine. The basic income itself would inject so much spending into the economy that we might actually see more growth. But okay, for the sake of argument let’s suppose it also drops our per-capita GDP by 5%, from $53,000 to $50,300; that really doesn’t sound so bad, and any bigger drop than that is a totally unreasonable estimate based on prejudice rather than data. For the same tax rate, we might then have to drop the basic income a bit too, to say $6,600 instead of $7,000.

So, this is not a Pareto-improvement; we’re making some people better off, but others worse off. In fact, the way economists usually estimate Pareto-efficiency based on so-called “economic welfare”, they really just count up the total number of dollars and divide by the number of people and call it a day; so if we lose 5% in GDP they would register this as a Pareto-loss. (Yes, that’s a ridiculous way to do it for obvious reasons—$1 to Mitt Romney isn’t worth as much as it is to you and me—but it’s still how it’s usually done.)

But does that mean that it’s a bad idea? Not at all. In fact, if you assume that the real value—the utility—of a dollar decreases exponentially with each dollar you have, this policy could almost double the total happiness in US society. If you use a logarithm instead, it’s not quite as impressive; it’s only about a 20% improvement in total happiness—in other words, “only” making as much difference to the happiness of Americans from 2014 to 2015 as the entire period of economic growth from 1900 to 2000.
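To see how redistribution can raise total log-utility even while total dollars fall, here is a deliberately toy two-group calculation. The incomes and group sizes are assumptions for illustration, not real US data:

```python
import math

# Toy check: redistribution can raise total log-utility.
# A simplified economy of 1 rich person and 99 others; all figures
# are illustrative assumptions, not the actual income distribution.

n_rich, n_rest = 1, 99
rich_income, rest_income = 2_000_000, 40_000

def total_utility(rich, rest):
    return n_rich * math.log(rich) + n_rest * math.log(rest)

before = total_utility(rich_income, rest_income)

# Tax the rich person's income above $400k at 70% and split the
# revenue equally among the other 99.
revenue = (rich_income - 400_000) * 0.70
after = total_utility(rich_income - revenue,
                      rest_income + revenue / n_rest)

print(after > before)  # True: total log-utility rises
```

The rich person loses a little over one log-unit of utility while each of the 99 others gains about a quarter of one, and there are 99 of them; that asymmetry is the whole argument.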

If right now you’re thinking, “Wow! Why aren’t we doing that?” that’s good, because I’ve been thinking the same thing for years. And maybe if we keep talking about it enough we can get people to start voting on it and actually make it happen.

But in order to make things like that happen, we must first get past the idea that Pareto-efficiency is the only thing that matters in moral decisions. And once again, that means overcoming the standard modes of thinking in neoclassical economics.

Something strange happened to economics in about 1950. Before that, economists from Marx to Smith to Keynes were always talking about differences in utility, marginal utility of wealth, how to maximize utility. But then economists stopped being comfortable talking about happiness, deciding (for reasons I still do not quite grasp) that it was “unscientific”, so they eschewed all discussion of the subject. Since we still needed to know why people choose what they do, a new framework was created revolving around “preferences”, which are a simple binary relation—you either prefer it or you don’t, you can’t like it “a lot more” or “a little more”—that is supposedly more measurable and therefore more “scientific”. But under this framework, there’s no way to say that giving a dollar to a homeless person makes a bigger difference to them than giving the same dollar to Mitt Romney, because a “bigger difference” is something you’ve defined out of existence. All you can say is that each would prefer to receive the dollar, and that both Mitt Romney and the homeless person would, given the choice, prefer to be Mitt Romney. While both of these things are true, it does seem to be kind of missing the point, doesn’t it?

There are stirrings of returning to actual talk about measuring actual (“cardinal”) utility, but still preferences (so-called “ordinal utility”) are the dominant framework. And in this framework, there’s really only one way to evaluate a situation as good or bad, and that’s Pareto-efficiency.

Actually, that’s not quite right; John Rawls cleverly came up with a way around this problem, by using the idea of “maximin”—maximize the minimum. Since each would prefer to be Romney, given the chance, we can say that the homeless person is worse off than Mitt Romney, and therefore say that it’s better to make the homeless person better off. We can’t say how much better, but at least we can say that it’s better, because we’re raising the floor instead of the ceiling. This is certainly a dramatic improvement, and on these grounds alone you can argue for the basic income—your floor is now explicitly set at the $6600 per year of the basic income.

But is that really all we can say? Think about how you make your own decisions; do you only speak in terms of strict preferences? I like Coke more than Pepsi; I like massages better than being stabbed. If preference theory is right, then there is no greater distance in the latter case than the former, because this whole notion of “distance” is unscientific. I guess we could expand the preference over groups of goods (baskets, as they are generally called), and say that I prefer the set “drink Pepsi and get a massage” to the set “drink Coke and get stabbed”, which is certainly true. But do we really want to have to define that for every single possible combination of things that might happen to me? Suppose there are 1000 things that could happen to me at any given time, which is surely conservative. In that case there are 2^1000, or about 10^301, possible combinations. If I were really just reading off a table of unrelated preference relations, there wouldn’t be room in my brain—or my planet—to store it, nor enough time in the history of the universe to read it. Even imposing rational constraints like transitivity doesn’t shrink the set anywhere near small enough—at best maybe now it’s 10^20; well done, now I could theoretically make one decision every billion years or so. At some point doesn’t it become a lot more parsimonious—dare I say, more scientific—to think that I am using some more organized measure than that? It certainly feels like I am; even if I couldn’t exactly quantify it, I can definitely say that some differences in my happiness are large and others are small. The mild annoyance of drinking Pepsi instead of Coke will melt away in the massage, but no amount of Coke deliciousness is going to overcome the agony of being stabbed.
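For the combinatorial point, 2^1000 really is astronomically large; Python’s arbitrary-precision integers make the check trivial:

```python
# Number of possible combinations of 1000 independent binary events:
combos = 2 ** 1000
print(len(str(combos)))  # 302 digits, i.e. about 10^301
```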

And indeed if you give people surveys and ask them how much they like things or how strongly they feel about things, they have no problem giving you answers out of 5 stars or on a scale from 1 to 10. Very few survey participants ever write in the comments box: “I was unable to take this survey because cardinal utility does not exist and I can only express binary preferences.” A few do write 1s and 10s on everything, but even those are fairly rare. This “cardinal utility” that supposedly doesn’t exist is the entire basis of the scoring system on Netflix and Amazon. In fact, if you use cardinal utility in voting (this is called score voting, or range voting), it is mathematically provable that you have the best possible voting system, which may have something to do with why Netflix and Amazon like it. (That’s another big “Why aren’t we doing this already?”)
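A sketch of what cardinal-utility (score) voting looks like in practice; the candidates, voters, and scores here are all invented:

```python
# Score voting: each voter rates every candidate on a 0-10 scale,
# and the candidate with the highest total score wins.
ballots = [
    {"A": 9, "B": 4, "C": 0},   # hypothetical ballots
    {"A": 2, "B": 8, "C": 6},
    {"A": 10, "B": 5, "C": 1},
]

totals = {}
for ballot in ballots:
    for candidate, score in ballot.items():
        totals[candidate] = totals.get(candidate, 0) + score

winner = max(totals, key=totals.get)
print(winner, totals[winner])  # -> A 21
```

The tally is just a sum, exactly like averaging star ratings on a product page; the voter expresses not only *which* candidate they prefer but *how much*.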

If you can actually measure utility in this way, then there’s really not much reason to worry about Pareto-efficiency. If you just maximize utility, you’ll automatically get a Pareto-efficient result; but the converse is not true, because there are plenty of Pareto-efficient scenarios that don’t maximize utility. Thinking back to our ultimatum game, all options are Pareto-efficient, but you can actually prove that the $5/$5 choice is the utility-maximizing one, if the two players have the same amount of wealth to start with: since the marginal utility of money is diminishing, a dollar means more to whoever has less, so any uneven split loses more utility on one side than it gains on the other. (Admittedly for those small amounts there isn’t much difference; but that’s also not too surprising, since $5 isn’t going to change anybody’s life.) And if they don’t—suppose I’m rich and you’re poor and we play the game—well, maybe I should give you more, precisely because we both know you need it more.
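A minimal sketch of that claim, assuming (as a standard stand-in for diminishing marginal utility) that each player’s utility is logarithmic in total wealth; the $1,000 starting wealth is an invented figure:

```python
import math

def total_utility(offer, pot=10.0, wealth=1_000.0):
    """Sum of both players' log-utilities when the responder receives `offer`
    and the proposer keeps `pot - offer`, each on top of equal starting wealth."""
    return math.log(wealth + pot - offer) + math.log(wealth + offer)

# Search offers in 50-cent steps from $0 to $10.
offers = [o / 2 for o in range(0, 21)]
best = max(offers, key=total_utility)
print(best)  # -> 5.0: the even split maximizes total utility
```

Because log is concave and the players start symmetric, the sum is maximized exactly at the even split; and if the starting wealths differ, the same calculation tilts the optimal split toward the poorer player, just as the text suggests.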

Perhaps even more significantly, you can move from a Pareto-inefficient scenario to a Pareto-efficient one and make things worse in terms of utility. The scenario in which the top 1% are as wealthy as they can possibly be and the rest of us live on scraps may in fact be Pareto-efficient; but that doesn’t mean any of us should be interested in moving toward it (though sadly, we kind of are). If you’re only measuring in terms of Pareto-efficiency, your attempts at improvement can actually make things worse. It’s not that the concept is totally wrong: Pareto-efficiency is, other things equal, good; but other things are never equal.

So that’s Pareto-efficiency—and why you really shouldn’t care about it that much.