Prospect Theory: Why we buy insurance and lottery tickets

JDN 2457061 PST 14:18.

Today’s topic is called prospect theory. Prospect theory is basically what put cognitive economics on the map; it was the knock-down argument that Kahneman and Tversky used to show that human beings are not completely rational in their economic decisions. It all goes back to their 1979 paper, which now has 34,000 citations (yes, we’ve been having this argument for a rather long time now). In the 1990s it was refined into cumulative prospect theory, which is more mathematically precise but basically the same idea.

What was that argument? People buy both insurance and lottery tickets.

The “both” is very important. Buying insurance can definitely be rational—indeed, typically is. Buying lottery tickets could theoretically be rational, under very particular circumstances. But they cannot both be rational at the same time.

To see why, let’s talk some more about marginal utility of wealth. Recall that a dollar is not worth the same to everyone; to a billionaire a dollar is a rounding error, to most of us it is a bottle of Coke, but to a starving child in Ghana it could be life itself. We typically observe diminishing marginal utility of wealth—the more money you have, the less another dollar is worth to you.

If we sketch a graph of your utility versus wealth it would look something like this:

[Figure: utility as a function of wealth, showing diminishing marginal utility]

Notice how it increases as your wealth increases, but at a rapidly diminishing rate.

If you have diminishing marginal utility of wealth, you are what we call risk-averse. If you are risk-averse, you’ll (sometimes) want to buy insurance. Let’s suppose the units on that graph are tens of thousands of dollars. Suppose you currently have an income of $50,000. You are offered the chance to pay $10,000 a year to buy unemployment insurance, so that if you lose your job, instead of making $10,000 on welfare you’ll make $30,000 on unemployment. You think you have about a 20% chance of losing your job.

If you had constant marginal utility of wealth, this would not be a good deal for you. Your expected value of money would be reduced if you buy the insurance: Before you had an 80% chance of $50,000 and a 20% chance of $10,000 so your expected amount of money is $42,000. With the insurance you have an 80% chance of $40,000 and a 20% chance of $30,000 so your expected amount of money is $38,000. Why would you take such a deal? That’s like giving up $4,000 isn’t it?

Well, let’s look back at that utility graph. At $50,000 your utility is 1.80, uh… units, er… let’s say QALY. 1.80 QALY per year, meaning you live 80% better than the average human. Maybe, I guess? Doesn’t seem too far off. In any case, the units of measurement aren’t that important.

[Figure: the insurance options plotted on the utility-of-wealth curve]

By buying insurance your effective income goes down to $40,000 per year, which lowers your utility to 1.70 QALY. That’s a fairly significant hit, but it’s not unbearable. If you lose your job (20% chance), you’ll fall down to $30,000 and have a utility of 1.55 QALY. Again, noticeable, but bearable. Your overall expected utility with insurance is therefore 1.67 QALY.

But what if you don’t buy insurance? Well then you have a 20% chance of taking a big hit and falling all the way down to $10,000 where your utility is only 1.00 QALY. Your expected utility is therefore only 1.64 QALY. You’re better off going with the insurance.

And this is how insurance companies make a profit (well, the legitimate way, anyway; they also like to gouge people and deny coverage to cancer patients, of course): on average, they make more from each customer than they pay out, but customers are still better off because they are protected against big losses. In this case, the insurance company profits $4,000 per customer per year, customers each gain 30 milliQALY per year (about the same utility as an extra $2,000, more or less), and everyone is happy.
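
To make the arithmetic concrete, here is a minimal sketch in Python. The utility curve is an assumption on my part: a logarithmic function of the form 1 + 0.5*ln(income/$10,000) happens to reproduce the rounded figures quoted above (1.80 at $50,000, 1.00 at $10,000, and so on), but nothing in the argument hinges on that exact formula.

```python
import math

def utility(income):
    # Assumed illustrative curve with diminishing marginal utility of wealth.
    # 1 + 0.5*ln(income / $10,000) reproduces the rounded figures in the text,
    # but it is not meant to be an exact measurement of anyone's utility.
    return 1 + 0.5 * math.log(income / 10_000)

p_lose_job = 0.20

# (probability, income) outcomes for each choice
no_insurance = [(1 - p_lose_job, 50_000), (p_lose_job, 10_000)]
with_insurance = [(1 - p_lose_job, 40_000), (p_lose_job, 30_000)]

def expected(outcomes, f=lambda x: x):
    return sum(p * f(x) for p, x in outcomes)

print(expected(no_insurance))            # expected money without insurance: $42,000
print(expected(with_insurance))          # expected money with insurance: $38,000 (the "$4,000 given up")
print(expected(no_insurance, utility))   # expected utility without insurance: ~1.64 QALY
print(expected(with_insurance, utility)) # expected utility with insurance: ~1.67 QALY, so insurance wins
```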

But if this is your marginal utility of wealth—and it most likely is, approximately—then you would never want to buy a lottery ticket. Let’s suppose you actually have pretty good odds; it’s a 1 in 1 million chance of $1 million for a ticket that costs $2. This means that the state is going to take in about $2 million for every $1 million they pay out to a winner.

That’s about as good as your odds for a lottery are ever going to get; usually it’s more like a 1 in 400 million chance of $150 million for $1, which is an even bigger difference than it sounds, because $150 million is nowhere near 150 times as good as $1 million. It’s a bit better from the state’s perspective though, because they get to receive $400 million for every $150 million they pay out.

For your convenience I have zoomed out the graph so that you can see 100, which is an income of $1 million (which you’ll have this year if you win; to get it next year, you’ll have to play again). You’ll notice I did not have to zoom out the vertical axis, because 20 times as much money only ends up being about 2 times as much utility. I’ve marked with lines the utility of $50,000 (1.80, as we said before) versus $1 million (3.30).

[Figure: the utility curve zoomed out, marking the utility of $50,000 versus $1 million]

What about the utility of $49,998 which is what you’ll have if you buy the ticket and lose? At this number of decimal places you can’t see the difference, so I’ll need to go out a few more. At $50,000 you have 1.80472 QALY. At $49,998 you have 1.80470 QALY. That $2 only costs you 0.00002 QALY, 20 microQALY. Not much, really; but of course not, it’s only $2.

How much does the 1 in 1 million chance of $1 million give you? Even less than that. Remember, the utility gain for going from $50,000 to $1 million is only 1.50 QALY. So you’re adding one one-millionth of that in expected utility, which is of course 1.5 microQALY, or 0.0000015 QALY.

That $2 may not seem like it’s worth much, but that 1 in 1 million chance of $1 million is worth less than one tenth as much. Again, I’ve tried to make these figures fairly realistic; they are by no means exact (I don’t actually think $49,998 corresponds to exactly 1.804699 QALY), but the order of magnitude difference is right. You gain about ten times as much utility from spending that $2 on something you want as you do from taking the chance at $1 million.
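
Running the same assumed utility curve over the lottery numbers gives the comparison directly; again, the curve is only an illustration that matches the rounded figures in the text.

```python
import math

def utility(income):
    # Same illustrative curve as in the insurance example above.
    return 1 + 0.5 * math.log(income / 10_000)

ticket_cost = utility(50_000) - utility(49_998)                     # ~0.00002 QALY (20 microQALY)
expected_gain = (utility(1_000_000) - utility(50_000)) / 1_000_000  # ~0.0000015 QALY (1.5 microQALY)

print(ticket_cost, expected_gain)
print(ticket_cost / expected_gain)  # the $2 costs roughly ten times what the chance at $1 million is worth
```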

I said before that it is theoretically possible for you to have a utility function for which the lottery would be rational. For that you’d need to have increasing marginal utility of wealth, so that you could be what we call risk-seeking. Your utility function would have to look like this:

[Figure: a utility function with increasing marginal utility of wealth]

There’s no way marginal utility of wealth looks like that. This would be saying that it would hurt Bill Gates more to lose $1 than it would hurt a starving child in Ghana, which makes no sense at all. (It certainly makes you wonder why he’s so willing to give it to them.) So frankly, even if we didn’t buy insurance, the fact that we buy lottery tickets would already look pretty irrational.

But in order for it to be rational to buy both lottery tickets and insurance, our utility function would have to be totally nonsensical. Maybe it could look like this or something; marginal utility decreases normally for a while, and then suddenly starts going upward again for no apparent reason:

[Figure: a utility function whose marginal utility decreases, then increases again]

Clearly it does not actually look like that. Not only would this mean that Bill Gates is hurt more by losing $1 than the child in Ghana; we would also have this bizarre situation where the middle class are the people with the lowest marginal utility of wealth in the world. Both the rich and the poor would need to have higher marginal utility of wealth than we do. This would mean that apparently yachts are just amazing and we have no idea. Riding a yacht is the pinnacle of human experience, a transcendence beyond our wildest imaginings; and riding a slightly bigger yacht is even more amazing and transcendent. Love and the joy of a life well-lived pale in comparison to the ecstasy of adding just one more layer of gold plate to your Ferrari collection.

Whereas increasing marginal utility is merely ridiculous, this is outright special pleading. You’re just making up bizarre utility functions that perfectly line up with whatever behavior people happen to have so that you can still call it rational. It’s like saying, “It could be perfectly rational! Maybe he enjoys banging his head against the wall!”

Kahneman and Tversky had a better idea. They realized that human beings aren’t so great at assessing probability, and furthermore tend not to think in terms of total amounts of wealth or annual income at all, but in terms of losses and gains. Through a series of clever experiments they showed that we are not so much risk-averse as we are loss-averse; we are actually willing to take more risk if it means that we will be able to avoid a loss.

In effect, we seem to be acting as if our utility function looks like this, where the zero no longer means “zero income”, it means “whatever we have right now”:

[Figure: the prospect theory value function, with gains and losses measured from current wealth]

We tend to weight losses about twice as much as gains, and we tend to assume that losses also diminish in their marginal effect the same way that gains do. That is, we would only take a 50% chance to lose $1000 if it meant a 50% chance to gain $2000; but we’d take a 10% chance at losing $10,000 to save ourselves from a guaranteed loss of $1000.
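
Here is a minimal sketch of that value function. The parameters are purely illustrative (loss aversion of about 2, and the same diminishing sensitivity for losses as for gains, roughly in line with the estimates Kahneman and Tversky published), and it leaves out the probability weighting discussed further down.

```python
LOSS_AVERSION = 2.0   # losses hurt about twice as much as equivalent gains (illustrative)
SENSITIVITY = 0.88    # diminishing marginal effect of both gains and losses (illustrative)

def value(change):
    # Prospect-theory-style value of a gain or loss relative to the status quo.
    if change >= 0:
        return change ** SENSITIVITY
    return -LOSS_AVERSION * (-change) ** SENSITIVITY

def prospect(outcomes):
    return sum(p * value(x) for p, x in outcomes)

# 50% chance to gain $2000 vs. 50% chance to lose $1000: roughly break-even.
print(prospect([(0.5, 2000), (0.5, -1000)]))

# A sure loss of $1000 vs. a 10% chance of losing $10,000:
print(value(-1000))                          # the sure loss
print(prospect([(0.1, -10_000), (0.9, 0)]))  # the gamble looks less bad, so we gamble to avoid the loss
```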

This can explain why we buy insurance, provided that you frame it correctly. One of the things about prospect theory—and about human behavior in general—is that it exhibits framing effects: The answer we give depends upon the way you ask the question. That’s so totally obviously irrational it’s honestly hard to believe that we do it; but we do, and sometimes in really important situations. Doctors—doctors—will decide a moral dilemma differently based on whether you describe it as “saving 400 out of 600 patients” or “letting 200 out of 600 patients die”.

In this case, you need to frame insurance as the default option, and not buying insurance as an extra risk you are taking. Then saving money by not buying insurance is a gain, and therefore less important, while a higher risk of a bad outcome is a loss, and therefore important.

If you frame it the other way, with not buying insurance as the default option, then buying insurance is taking a loss by making insurance payments, only to get a gain if the insurance pays out. Suddenly the exact same insurance policy looks less attractive. This is a big part of why Obamacare has been effective but unpopular. It was set up as a fine—a loss—if you don’t buy insurance, rather than as a bonus—a gain—if you do buy insurance. The latter would be more expensive, but we could just make it up by taxing something else; and it might have made Obamacare more popular, because people would see the government as giving them something instead of taking something away. But the fine does a better job of framing insurance as the default option, so it motivates more people to actually buy insurance.

But even that would still not be enough to explain how it is rational to buy lottery tickets (Have I mentioned how it’s really not a good idea to buy lottery tickets?), because buying a ticket is a loss and winning the lottery is a gain. You actually have to get people to somehow frame not winning the lottery as a loss, making winning the default option despite the fact that it is absurdly unlikely. But I have definitely heard people say things like this: “Well if my numbers come up and I didn’t play that week, how would I feel then?” Pretty bad, I’ll grant you. But how much you wanna bet that never happens? (They’ll bet… the price of the ticket, apparently.)

In order for that to work, people either need to dramatically overestimate the probability of winning, or else ignore it entirely. Both of those things totally happen.

First, we overestimate the probability of rare events and underestimate the probability of common events—prospect theory models this with a probability weighting function, and applying that weighting to cumulative probabilities is the refinement that makes it cumulative prospect theory instead of just regular prospect theory. If you make a graph of perceived probability versus actual probability, it looks like this:

[Figure: perceived probability versus actual probability under cumulative prospect theory]

We don’t make much distinction between 40% and 60%, even though that’s actually pretty big; but we make a huge distinction between 0% and 0.00001% even though that’s actually really tiny. I think we basically have categories in our heads: “Never, almost never, rarely, sometimes, often, usually, almost always, always.” Moving from 0% to 0.00001% is going from “never” to “almost never”, but going from 40% to 60% is still in “often”. (And that for some reason reminded me of “Well, hardly ever!”)
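
For the curious, the standard cumulative-prospect-theory weighting function reproduces this pattern. The curvature parameter below is the estimate Tversky and Kahneman reported in 1992 for gains, used here purely as an illustration:

```python
GAMMA = 0.61  # curvature for gains from Tversky & Kahneman (1992); illustrative

def perceived(p):
    # Probability weighting: small probabilities get inflated, mid-range ones get compressed.
    return p**GAMMA / (p**GAMMA + (1 - p)**GAMMA) ** (1 / GAMMA)

for p in (0.00001, 0.01, 0.4, 0.5, 0.6, 0.99):
    print(f"actual {p}: perceived {perceived(p):.4f}")

# A 1-in-100,000 chance gets weighted as if it were roughly 1-in-1,000,
# while the 20-point gap between 40% and 60% shrinks to about 10 perceived points.
```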

But that’s not even the worst of it. After all that work to explain how we can make sense of people’s behavior in terms of something like a utility function (albeit a distorted one), I think there’s often a simpler explanation still: Regret aversion under total neglect of probability.

Neglect of probability is self-explanatory: You totally ignore the probability. But what’s regret aversion, exactly? Unfortunately I’ve had trouble finding any good popular sources on the topic; it’s all scholarly stuff. (Maybe I’m more cutting-edge than I thought!)

The basic idea is that you minimize regret, where regret can be formalized as the difference in utility between the outcome you got and the best outcome you could have gotten. In effect, it doesn’t matter whether something is likely or unlikely; you only care how bad it is.

This explains insurance and lottery tickets in one fell swoop: With insurance, you have the choice of risking a big loss (big regret), which you can avoid by paying a small amount (small regret). You take the small regret, and buy insurance. With lottery tickets, you have the chance at a large gain (big regret if you miss it), which you can take by paying only a small amount (small regret). You take the small regret, and buy the ticket.
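
Here is a minimal sketch of that decision rule, reusing the numbers from the earlier examples. Regret is measured in dollars here for simplicity (the definition above uses utility), and probabilities never enter into it:

```python
def minimax_regret(actions):
    # actions: dict mapping action name -> list of payoffs, one per possible state of the world.
    # Pick the action with the smallest worst-case regret; probabilities are ignored entirely.
    n_states = len(next(iter(actions.values())))
    best = [max(payoffs[s] for payoffs in actions.values()) for s in range(n_states)]
    worst_regret = {name: max(best[s] - payoffs[s] for s in range(n_states))
                    for name, payoffs in actions.items()}
    return min(worst_regret, key=worst_regret.get), worst_regret

# Insurance: states are (keep job, lose job); payoffs are the incomes from the earlier example.
print(minimax_regret({"buy insurance": [40_000, 30_000],
                      "no insurance": [50_000, 10_000]}))
# -> buy insurance (worst regret $10,000 vs. $20,000)

# Lottery: states are (your numbers come up, they don't); payoffs are net winnings.
print(minimax_regret({"play": [999_998, -2],
                      "don't play": [0, 0]}))
# -> play (worst regret $2 vs. $999,998), no matter how tiny the chance of winning
```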

This can also explain why a typical American’s fears go in the order terrorists > Ebola > sharks > > cars > cheeseburgers, while the actual risk of dying goes in almost the opposite order, cheeseburgers > cars > > terrorists > sharks > Ebola. (Terrorists are scarier than sharks and Ebola and actually do kill more Americans! Yay, we got something right! Other than that it is literally reversed.)

Dying from a terrorist attack would be horrible; in addition to your own death you have all the other likely deaths and injuries, and the sheer horror and evil of the terrorist attack itself. Dying from Ebola would be almost as bad, with gruesome and agonizing symptoms. Dying of a shark attack would be still pretty awful, as you get dismembered alive. But dying in a car accident isn’t so bad; it’s usually over pretty quick and the event seems tragic but ordinary. And dying of heart disease and diabetes from your cheeseburger overdose will happen slowly over many years; you’ll barely even notice it coming, and will probably die rapidly from a heart attack or comfortably in your sleep. (Wasn’t that a pleasant paragraph? But there’s really no other way to make the point.)

If we try to estimate the probability at all—and I don’t think most people even bother—it isn’t by rigorous scientific research; it’s usually by availability heuristic: How many examples can you think of in which that event happened? If you can think of a lot, you assume that it happens a lot.

And that might even be reasonable, if we still lived in hunter-gatherer tribes or small farming villages and the 150 or so people you knew were the only people you ever heard about. But now that we have live TV and the Internet, news can get to us from all around the world, and the news isn’t trying to give us an accurate assessment of risk, it’s trying to get our attention by talking about the biggest, scariest, most exciting things that are happening around the world. The amount of news attention an item receives is in fact in inverse proportion to the probability of its occurrence, because things are more exciting if they are rare and unusual. Which means that if we are estimating how likely something is based on how many times we heard about it on the news, our estimates are going to be almost exactly reversed from reality. Ironically it is the very fact that we have more information that makes our estimates less accurate, because of the way that information is presented.

It would be a pretty boring news channel that spent all day saying things like this: “82 people died in car accidents today, and 1657 people had fatal heart attacks, 11.8 million had migraines, and 127 million played the lottery and lost; in world news, 214 countries did not go to war, and 6,147 children starved to death in Africa…” This would, however, be vastly more informative.

In the meantime, here are a few counter-heuristics I recommend to you: Don’t think about losses and gains; think about where you are and where you might be. Don’t say, “I’ll gain $1,000”; say “I’ll raise my income this year to $41,000.”

Definitely do not think in terms of the percentage price of things; think in terms of absolute amounts of money. Cheap expensive things, expensive cheap things is a motto of mine; go ahead and buy the $5 toothbrush instead of the $1, because that’s only $4. But be very hesitant to buy the $22,000 car instead of the $21,000, because that’s $1,000.

If you need to estimate the probability of something, actually look it up; don’t try to guess based on what it feels like the probability should be. Make this unprecedented access to information work for you instead of against you. If you want to know how many people die in car accidents each year, you can literally ask Google and it will tell you (I tried it—it’s 1.3 million worldwide). The fatality rate of a given disease versus the risk of its vaccine, the safety rating of a particular brand of car, the number of airplane crash deaths last month, the total number of terrorist attacks, the probability of becoming a university professor, the average functional lifespan of a new television—all these things and more await you at the click of a button. Even if you think you’re pretty sure, why not look it up anyway?

Perhaps then we can make prospect theory wrong by making ourselves more rational.

How do we measure happiness?

JDN 2457028 EST 20:33.

No, really, I’m asking. I strongly encourage my readers to offer in the comments any ideas they have about the measurement of happiness in the real world; this has been a stumbling block in one of my ongoing research projects.

In one sense the measurement of happiness—or more formally utility—is absolutely fundamental to economics; in another it’s something most economists are astonishingly afraid of even trying to do.

The basic question of economics has nothing to do with money, and is really only incidentally related to “scarce resources” or “the production of goods” (though many textbooks will define economics in this way—apparently implying that a post-scarcity economy is not an economy). The basic question of economics is really this: How do we make people happy?

This must always be the goal in any economic decision, and if we lose sight of that fact we can make some truly awful decisions. Other goals may work sometimes, but they inevitably fail: If you conceive of the goal as “maximize GDP”, then you’ll try to do any policy that will increase the amount of production, even if that production comes at the expense of stress, injury, disease, or pollution. (And doesn’t that sound awfully familiar, particularly here in the US? 40% of Americans report their jobs as “very stressful” or “extremely stressful”.) If you were to conceive of the goal as “maximize the amount of money”, you’d print money as fast as possible and end up with hyperinflation and total economic collapse à la Zimbabwe. If you were to conceive of the goal as “maximize human life”, you’d support methods of increasing population to the point where we had a hundred billion people whose lives were barely worth living. Even if you were to conceive of the goal as “save as many lives as possible”, you’d find yourself investing in whatever would extend lifespan even if it meant enormous pain and suffering—which is a major problem in end-of-life care around the world. No, there is one goal and one goal only: Maximize happiness.

I suppose technically it should be “maximize utility”, but those are in fact basically the same thing as long as “happiness” is broadly conceived as eudaimonia, the joy of a life well-lived, and not a narrow concept of just adding up pleasure and subtracting out pain. The goal is not to maximize the quantity of dopamine and endorphins in your brain; the goal is to achieve a world where people are safe from danger, free to express themselves, with friends and family who love them, who participate in a world that is just and peaceful. We do not want merely the illusion of these things—we want to actually have them. So let me be clear that this is what I mean when I say “maximize happiness”.

The challenge, therefore, is how we figure out if we are doing that. Things like money and GDP are easy to measure; but how do you measure happiness?

Early economists like Adam Smith and John Stuart Mill tried to deal with this question, and while they were not very successful I think they deserve credit for recognizing its importance and trying to resolve it. But sometime around the rise of modern neoclassical economics, economists gave up on the project and instead sought a narrower task, to measure preferences.

Technically this is called ordinal utility, as opposed to cardinal utility; but this terminology obscures the fundamental distinction. Cardinal utility is actual utility; ordinal utility is just preferences.

(The notion that cardinal utility is defined “up to a linear transformation” is really an eminently trivial observation, and it shows just how little physics the physics-envious economists really understand. All we’re talking about here is units of measurement—the same distance is 10.0 inches or 25.4 centimeters, so is distance only defined “up to a linear transformation”? It’s sometimes argued that there is no clear zero—like Fahrenheit and Celsius—but actually it’s pretty clear to me that there is: Zero utility is not existing. So there you go, now you have Kelvin.)

Preferences are a bit easier to measure than happiness, but not by as much as most economists seem to think. If you imagine a small number of options, you can just put them in order from most to least preferred and there you go; and we could imagine asking someone to do that, or—the technique of revealed preference—use the choices they make to infer their preferences, by assuming that when given the choice of X and Y, choosing X means you prefer X to Y.

Like much of neoclassical theory, this sounds good in principle and utterly collapses when applied to the real world. Above all: How many options do you have? It’s not easy to say, but the number is definitely huge—and both of those facts pose serious problems for a theory of preferences.

The fact that it’s not easy to say means that we don’t have a well-defined set of choices; even if Y is theoretically on the table, people might not realize it, or they might not see that it’s better even though it actually is. Much of our cognitive effort in any decision is actually spent narrowing the decision space—when deciding who to date or where to go to college or even what groceries to buy, simply generating a list of viable options involves a great deal of effort and extremely complex computation. If you have a true utility function, you can satisfice—choosing the first option that is above a certain threshold—or engage in constrained optimization—choosing whether to continue searching or accept your current choice based on how good it is. Under preference theory, there is no such “how good it is” and no such thresholds. You either search forever or choose a cutoff arbitrarily.
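
As a sketch of the difference (the cereal names and scores below are made up for illustration): with a numeric utility you can satisfice by scanning until something clears a threshold, which an ordering alone cannot give you.

```python
def satisfice(options, utility, threshold):
    # Take the first option whose utility clears the threshold; None if nothing does.
    # This only works because utility answers "how good is it?", not just "which is better?"
    for option in options:
        if utility(option) >= threshold:
            return option
    return None

# Hypothetical grocery run: grab the first cereal that is good enough,
# instead of ranking every cereal in the store against every other.
tastiness = {"store-brand bran": 0.3, "frosted sugar bombs": 0.5,
             "decent granola": 0.8, "artisanal muesli": 0.9}
print(satisfice(tastiness, tastiness.get, threshold=0.7))  # "decent granola"; the search stops there
```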

Even if we could decide how many options there are in any given choice, in order for this to form a complete guide for human behavior we would need an enormous amount of information. Suppose there are 10 different items I could have or not have; then there are 10! = 3.6 million possible preference orderings. If there were 100 items, there would be 100! = 9e157 possible orderings. It won’t do simply to decide on each item whether I’d like to have it or not. Some things are complements: I prefer to have shoes, but I probably prefer to have $100 and no shoes at all rather than $50 and just a left shoe. Other things are substitutes: I generally prefer eating either a bowl of spaghetti or a pizza, rather than both at the same time. No, the combinations matter, and that means that we have an exponentially increasing decision space every time we add a new option. If there really is no more structure to preferences than this, we have an absurd computational task to make even the most basic decisions.
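
The raw counts, for anyone who wants to check them:

```python
import math

print(math.factorial(10))   # 3,628,800 possible orderings of 10 items ("3.6 million")
print(math.factorial(100))  # ~9.3e157 possible orderings of 100 items
print(2 ** 100)             # ~1.3e30 possible bundles of 100 items, before you even start ranking them
```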

This is in fact most likely why we have happiness in the first place. Happiness did not emerge from a vacuum; it evolved by natural selection. Why make an organism have feelings? Why make it care about things? Wouldn’t it be easier to just hard-code a list of decisions it should make? No, on the contrary, it would be exponentially more complex. Utility exists precisely because it is more efficient for an organism to like or dislike things by certain amounts rather than trying to define arbitrary preference orderings. Adding a new item means assigning it an emotional value and then slotting it in, instead of comparing it to every single other possibility.

To illustrate this: I like Coke more than I like Pepsi. (Let the flame wars begin?) I also like getting massages more than I like being stabbed. (I imagine less controversy on this point.) But the difference in my mind between massages and stabbings is an awful lot larger than the difference between Coke and Pepsi. Yet according to preference theory (“ordinal utility”), that difference is not meaningful; instead I have to say that I prefer the pair “drink Pepsi and get a massage” to the pair “drink Coke and get stabbed”. There’s no such thing as “a little better” or “a lot worse”; there is only what I prefer over what I do not prefer, and since these can be assigned arbitrarily there is an impossible computational task before me to make even the most basic decisions.

Real utility also allows you to make decisions under risk, to decide when it’s worth taking a chance. Is a 50% chance of $100 worth giving up a guaranteed $50? Probably. Is a 50% chance of $10 million worth giving up a guaranteed $5 million? Not for me. Maybe for Bill Gates. How do I make that decision? It’s not about what I prefer—I do in fact prefer $10 million to $5 million. It’s about how much difference there is in terms of my real happiness—$5 million is almost as good as $10 million, but $100 is a lot better than $50. My marginal utility of wealth—as I discussed in my post on progressive taxation—is a lot steeper at $50 than it is at $5 million. There’s actually a way to use revealed preferences under risk to estimate true (“cardinal”) utility, developed by Von Neumann and Morgenstern. In fact they proved a remarkably strong theorem: If you don’t have a cardinal utility function that you’re maximizing, you can’t make rational decisions under risk. (In fact many of our risk decisions clearly aren’t rational, because we aren’t actually maximizing an expected utility; what we’re actually doing is something more like cumulative prospect theory, the leading cognitive economic theory of risk decisions. We overrespond to extreme but improbable events—like lightning strikes and terrorist attacks—and underrespond to moderate but probable events—like heart attacks and car crashes. We play the lottery but still buy health insurance. We fear Ebola—which has never killed a single American—but not influenza—which kills 10,000 Americans every year.)
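
To illustrate how a cardinal utility function turns these questions into calculations, here is a toy certainty-equivalent computation under simple logarithmic utility of wealth, starting from an assumed baseline of $20,000; both the functional form and the baseline are illustrative only:

```python
import math

def certainty_equivalent(wealth, outcomes):
    # Sure gain that gives the same expected log-utility as the gamble.
    expected_u = sum(p * math.log(wealth + x) for p, x in outcomes)
    return math.exp(expected_u) - wealth

wealth = 20_000  # assumed baseline wealth, purely illustrative

# A 50% chance of $100 is worth about $49.94 for sure: essentially its expected value.
print(certainty_equivalent(wealth, [(0.5, 100), (0.5, 0)]))

# A 50% chance of $10 million is worth only about $430,000 for sure,
# so a guaranteed $5 million easily beats it.
print(certainty_equivalent(wealth, [(0.5, 10_000_000), (0.5, 0)]))
```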

A lot of economists would argue that it’s “unscientific”—Kenneth Arrow said “impossible”—to assign this sort of cardinal distance between our choices. But assigning distances between preferences is something we do all the time. Amazon.com lets us vote on a 5-star scale, and very few people send in error reports saying that cardinal utility is meaningless and only preference orderings exist. In 2000 I would have said “I like Gore best, Nader is almost as good, and Bush is pretty awful; but of course they’re all a lot better than the Fascist Party.” If we had simply been able to express those feelings on the 2000 ballot according to a range vote, either Nader would have won and the United States would now have a three-party system (and possibly a nationalized banking system!), or Gore would have won and we would be a decade ahead of where we currently are in preventing and mitigating global warming. Either one of these things would benefit millions of people.

This is extremely important because of another thing that Arrow said was “impossible”—namely, “Arrow’s Impossibility Theorem”. It should be called Arrow’s Range Voting Theorem, because simply by restricting preferences to a well-defined utility and allowing people to make range votes according to that utility, we can fulfill all the requirements that are supposedly “impossible”. The theorem doesn’t say—as it is commonly paraphrased—that there is no fair voting system; it says that range voting is the only fair voting system. A better claim is that there is no perfect voting system, which is true if you mean that there is no voting system in which the strategically optimal vote always reflects your true beliefs. The Myerson-Satterthwaite Theorem is then the proper theorem to use; if you could design a voting system that would force you to reveal your beliefs, you could design a market auction that would force you to reveal your optimal price. But the least expressive way to vote in a range vote is to pick your favorite and give them 100% while giving everyone else 0%—which is identical to our current plurality vote system. The worst-case scenario in range voting is our current system.
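
Tallying a range vote is also trivial, which is part of its appeal. Here is a toy version, with scores made up to roughly match the feelings described above:

```python
def range_vote(ballots):
    # Each ballot scores every candidate from 0 to 100; the highest total score wins.
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get), totals

ballots = [
    {"Gore": 100, "Nader": 90, "Bush": 20, "Fascist Party": 0},
    {"Gore": 60, "Nader": 100, "Bush": 40, "Fascist Party": 0},
    {"Bush": 100, "Gore": 0, "Nader": 0, "Fascist Party": 0},  # a plurality-style "bullet vote"
]
print(range_vote(ballots))
```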

But the fact that utility exists and matters unfortunately doesn’t tell us how to measure it. The current state-of-the-art in economics is what’s called “willingness-to-pay”, where we arrange (or observe) decisions people make involving money and try to assign dollar values to each of their choices. This is how you get disturbing calculations like “the lives lost due to air pollution are worth $10.2 billion.”

Why are these calculations disturbing? Because they have the whole thing backwards—people aren’t valuable because they are worth money; money is valuable because it helps people. It’s also really bizarre because it has to be adjusted for inflation. Finally—and this is the point that far too few people appreciate—the value of a dollar is not constant across people. Because different people have different marginal utilities of wealth, something that I would only be willing to pay $1000 for, Bill Gates might be willing to pay $1 million for—and a child in Africa might only be willing to pay $10, because that is all he has to spend. This makes “willingness-to-pay” a basically meaningless concept unless we keep track of whose wealth we are spending.

Utility, on the other hand, might differ between people—but, at least in principle, it can still be added up between them on the same scale. The problem is that “in principle” part: How do we actually measure it?

So far, the best I’ve come up with is to borrow from public health policy and use the QALY, or quality-adjusted life year. By asking people macabre questions like “What is the maximum number of years of your life you would give up to not have a severe migraine every day?” (I’d say about 20—that’s where I feel ambivalent. At 10 I definitely would; at 30 I definitely wouldn’t.) or “What chance of total paralysis would you take in order to avoid being paralyzed from the waist down?” (I’d say about 20%.) we assign utility values: 80 years of migraines is worth giving up 20 years to avoid (that is, 60 healthy years are as good as 80 migraine years), so chronic migraine is a quality of life factor of 0.75. Total paralysis is 5 times as bad as paralysis from the waist down, so if waist-down paralysis is a quality of life factor of 0.90 then total paralysis is 0.50.
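
Those two questions correspond to the standard time trade-off and standard gamble methods for eliciting QALY weights; here is the arithmetic, assuming a roughly 80-year horizon for the migraine question:

```python
def time_tradeoff(total_years, years_given_up):
    # Quality weight implied by "I'd give up this many of my remaining years to avoid the condition."
    return (total_years - years_given_up) / total_years

def standard_gamble_worse(quality_current, p_accepted):
    # You'd accept a p_accepted chance of the worse outcome (vs. full health) to escape the current one,
    # so quality_current = (1 - p_accepted) * 1.0 + p_accepted * quality_worse; solve for quality_worse.
    return (quality_current - (1 - p_accepted)) / p_accepted

print(time_tradeoff(80, 20))              # chronic migraine: 0.75
print(standard_gamble_worse(0.90, 0.20))  # total paralysis: 0.50, given waist-down paralysis is 0.90
```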

You can probably already see that there are lots of problems: What if people don’t agree? What if due to framing effects the same person gives different answers to slightly different phrasing? Some conditions will directly bias our judgments—depression being the obvious example. How many years of your life would you give up to not be depressed? Suicide means some people say all of them. How well do we really know our preferences on these sorts of decisions, given that most of them are decisions we will never have to make? It’s difficult enough to make the actual decisions in our lives, let alone hypothetical decisions we’ve never encountered.

Another problem is often suggested as well: How do we apply this methodology outside questions of health? Does it really make sense to ask you how many years of your life drinking Coke or driving your car is worth?

Well, actually… it better, because you make that sort of decision all the time. You drive instead of staying home, because you value where you’re going more than the risk of dying in a car accident. You drive instead of walking because getting there on time is worth that additional risk as well. You eat foods you know aren’t good for you because you think the taste is worth the cost. Indeed, most of us aren’t making most of these decisions very well—maybe you shouldn’t actually drive or drink that Coke. But in order to know that, we need to know how many years of your life a Coke is worth.

As a very rough estimate, I figure you can convert from willingness-to-pay to QALY by dividing by your annual consumption spending. Say you spend annually about $20,000—pretty typical for a First World individual. Then $1 is worth about 50 microQALY, or about 26 quality-adjusted life-minutes. Now suppose you are in Third World poverty; your consumption might be only $200 a year, so $1 becomes worth 5 milliQALY, or 1.8 quality-adjusted life-days. The very richest individuals might spend as much as $10 million on consumption, so $1 to them is only worth 100 nanoQALY, or 3 quality-adjusted life-seconds.

That’s an extremely rough estimate, of course; it assumes you are in perfect health, all your time is equally valuable, and all your purchasing decisions are optimized at the margin. Don’t take it too literally; based on the above estimate, an hour to you is worth about $2.30, so it would be worth your while to work for even $3 an hour. Here’s a simple correction we should probably make: if only a third of your time is really usable for work, you should expect at least $6.90 an hour—and hey, that’s a little less than the US minimum wage. So I think we’re in the right order of magnitude, but the details have a long way to go.
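
Here is that rough conversion as a sketch, treating one year of your ordinary consumption spending as one QALY, which is exactly the simplification described above:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60

def dollars_to_qaly(dollars, annual_consumption):
    # Very rough: treat one year of your consumption spending as one QALY.
    return dollars / annual_consumption

for spending in (200, 20_000, 10_000_000):
    qaly = dollars_to_qaly(1, spending)
    print(f"${spending:,}/yr of consumption: $1 ~ {qaly:.1e} QALY ~ {qaly * MINUTES_PER_YEAR:,.1f} quality-adjusted minutes")

# Implied value of an hour of life at $20,000/yr of consumption:
print(20_000 / (365.25 * 24))  # ~$2.28/hour, or ~$6.90 if only a third of your time is usable for work
```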

So let’s hear it, readers: How do you think we can best measure happiness?