Experimentally testing categorical prospect theory

Dec 4, JDN 2457727

In last week’s post I presented a new theory of probability judgments, which doesn’t rely upon people performing complicated math even subconsciously. Instead, I hypothesize that people try to assign categories to their subjective probabilities, and throw away all the information that wasn’t used to assign that category.

The way to most clearly distinguish this from cumulative prospect theory is to show discontinuity. Kahneman’s smooth, continuous function places fairly strong bounds on just how much a shift from 0% to 0.000001% can really affect your behavior. In particular, if you want to explain the fact that people do seem to behave differently around 10% compared to 1% probabilities, you can’t allow the slope of the smooth function to get much higher than 10 at any point, even near 0 and 1. (It does depend on the precise form of the function, but the more complicated you make it, the more free parameters you add to the model. In the most parsimonious form, which is a cubic polynomial, the maximum slope is actually much smaller than this—only 2.)

If that’s the case, then switching from 0% to 0.0001% should have no more effect in reality than a switch from 0% to 0.001% would have on a rational expected utility optimizer. But in fact I think I can set up scenarios where it would have a larger effect than a switch from 0.001% to 0.01%.

Indeed, these games are already quite profitable for the majority of US states, and they are called lotteries.

Rationally, it should make very little difference to you whether your odds of winning the Powerball are 0 (you bought no ticket) or 10^-9 (you bought a ticket), even when the prize is $100 million. This is because your utility of $100 million is nowhere near 100 million times as large as your marginal utility of $1. A good guess would be that your lifetime income is about $2 million, your utility is logarithmic, the units of utility are hectoQALY, and the baseline level is about 100,000.

I apologize for the extremely large number of decimals, but I had to do that in order to show any difference at all. I have bolded where the decimals first deviate from the baseline.

Your utility if you don’t have a ticket is ln(20) = 2.9957322736 hQALY.

Your utility if you have a ticket is (1-10^-9) ln(20) + 10^-9 ln(1020) = 2.99573227**75** hQALY.

You gain a whopping 0.4 microQALY over your whole lifetime. I highly doubt you could even perceive such a difference.

And yet, people are willing to pay nontrivial sums for the chance to play such lotteries. Powerball tickets sell for about $2 each, and some people buy tickets every week. If you do that and live to be 80, you will spend some $8,000 on lottery tickets during your lifetime, which results in this expected utility: (1-4*10^-6) ln(20-0.08) + 4*10^-6 ln(1020) = 2.99**17399955** hQALY.
You have now sacrificed 0.004 hectoQALY, which is to say 0.4 QALY—that’s months of happiness you’ve given up to play this stupid pointless game.

Which shouldn’t be surprising, as (with 99.9996% probability) you have given up four months of your lifetime income with nothing to show for it. Lifetime income of $2 million / lifespan of 80 years = $25,000 per year; $8,000 / $25,000 = 0.32. You’ve actually sacrificed slightly more than this, which comes from your risk aversion.
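The arithmetic above is easy to check directly. Here is a short Python sketch reproducing the post's numbers under the same assumptions (log utility measured in hectoQALY, a $100,000 baseline, $2 million lifetime income, a $100 million jackpot); the function and variable names are mine, for illustration:

```python
import math

# Log utility measured in hectoQALY, as assumed in the text:
# baseline $100,000, lifetime income $2 million, jackpot $100 million.
def utility(wealth_dollars):
    return math.log(wealth_dollars / 100_000)  # hQALY

u_baseline = utility(2_000_000)  # ln(20)

# One $2 ticket: a 10^-9 chance at the jackpot.
u_one_ticket = (1 - 1e-9) * u_baseline + 1e-9 * utility(102_000_000)

# A lifetime habit: ~4,000 weekly tickets, $8,000 spent, win chance ~4e-6.
u_habit = (1 - 4e-6) * utility(2_000_000 - 8_000) + 4e-6 * utility(102_000_000)

# u_baseline ≈ 2.9957322736; u_one_ticket adds only ~4e-9 hQALY;
# u_habit ≈ 2.9917399955, i.e. roughly 0.4 QALY sacrificed.
```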

Why would anyone do such a thing? Because while the difference between 0 and 10^-9 may be trivial, the difference between “impossible” and “almost impossible” feels enormous. “You can’t win if you don’t play!” they say, but they might as well say “You can’t win if you do play either.” Indeed, the probability of winning without playing isn’t zero; you could find a winning ticket lying on the ground, or win due to an error that is then upheld in court, or be given the winnings bequeathed by a dying family member or gifted by an anonymous donor. These are of course vanishingly unlikely—but so was winning in the first place. You’re talking about the difference between 10^-9 and 10^-12, which in proportional terms sounds like a lot—but in absolute terms is nothing. If you drive to a drug store every week to buy a ticket, you are more likely to die in a car accident on the way to the drug store than you are to win the lottery.

Of course, these are not experimental conditions. So I need to devise a similar game, with smaller stakes but still large enough for people’s brains to care about the “almost impossible” category; maybe thousands of dollars? It’s not uncommon for an economics experiment to cost thousands, it’s just usually paid out to many people instead of randomly to one person or nobody. Conducting the experiment in an underdeveloped country like India would also effectively amplify the amounts paid, but at the fixed cost of transporting the research team to India.

But I think in general terms the experiment could look something like this. You are given $20 for participating in the experiment (we treat it as already given to you, to maximize your loss aversion and endowment effect and thereby give us more bang for our buck). You then have a chance to play a game, where you pay $X to get a P probability of $Y*X, and we vary these numbers.

The actual participants wouldn’t see the variables, just the numbers and possibly the rules: “You can pay $2 for a 1% chance of winning $200. You can also play multiple times if you wish.” “You can pay $10 for a 5% chance of winning $250. You can only play once or not at all.”
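One round of this proposed game is trivial to state in code. The $20 endowment and the two example gambles come from the text; `play_round` and `expected_value` are hypothetical names I'm using as a sketch:

```python
import random

# One round of the proposed game: pay `price` for probability `p`
# of winning `payout`; the $20 endowment is treated as already yours.
def play_round(endowment, price, p, payout, rng=random.random):
    if rng() < p:
        return endowment - price + payout
    return endowment - price

def expected_value(endowment, price, p, payout):
    return endowment - price + p * payout

# "Pay $2 for a 1% chance of winning $200" is actuarially fair,
# while "pay $10 for a 5% chance of winning $250" is better than fair.
```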

So I think the first step is to find some dilemmas: cases where people feel ambivalent, and where different people make different choices. That’s a good role for a pilot study.

Then we take these dilemmas and start varying their probabilities slightly.

In particular, we try to vary them at the edge of where people have mental categories. If subjective probability is continuous, a slight change in actual probability should never result in a large change in behavior, and furthermore the effect of a change shouldn’t vary too much depending on where the change starts.

But if subjective probability is categorical, these categories should have edges. Then, when I present you with two dilemmas that are on opposite sides of one of the edges, your behavior should radically shift; while if I change it in a different way, I can make a large change without changing the result.

Based solely on my own intuition, I guessed that the categories roughly follow this pattern:

Impossible: 0%

Almost impossible: 0.1%

Very unlikely: 1%

Unlikely: 10%

Fairly unlikely: 20%

Roughly even odds: 50%

Fairly likely: 80%

Likely: 90%

Very likely: 99%

Almost certain: 99.9%

Certain: 100%
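As a sketch (not a claim about the final theory), this guessed scheme can be written as a lookup that snaps any probability to the nearest category anchor. The boundary rule here, nearest anchor with "impossible" and "certain" treated as perfectly sharp, is my assumption for illustration:

```python
# The guessed category anchors from the list above.
CATEGORY_ANCHORS = [
    (0.0, "impossible"),
    (0.001, "almost impossible"),
    (0.01, "very unlikely"),
    (0.1, "unlikely"),
    (0.2, "fairly unlikely"),
    (0.5, "roughly even odds"),
    (0.8, "fairly likely"),
    (0.9, "likely"),
    (0.99, "very likely"),
    (0.999, "almost certain"),
    (1.0, "certain"),
]

def categorize(p):
    """Map a probability to the nearest anchor's category label."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    # "Impossible" and "certain" are treated as perfectly sharp:
    # anything strictly between 0 and 1 is excluded from them.
    if p == 0.0:
        return "impossible"
    if p == 1.0:
        return "certain"
    interior = CATEGORY_ANCHORS[1:-1]
    return min(interior, key=lambda a: abs(a[0] - p))[1]
```

Under this rule even a 10^-9 probability lands in "almost impossible" rather than "impossible", while 1% and 2% land in the same category.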

So for example, if I switch from 0% to 0.01%, it should have a very large effect, because I’ve moved you out of your “impossible” category (indeed, I think the “impossible” category is almost completely sharp; literally anything above zero seems to be enough for most people, even 10^-9 or 10^-10). But if I move from 1% to 2%, it should have a small effect, because I’m still well within the “very unlikely” category. Yet the latter change is literally one hundred times larger than the former. It is possible to define continuous functions that would behave this way to an arbitrary level of approximation—but they get a lot less parsimonious very fast.

Now, immediately I run into a problem, because I’m not even sure those are my categories, much less that they are everyone else’s. If I knew precisely which categories to look for, I could tell whether or not I had found them. But the process of both finding the categories and determining if their edges are truly sharp is much more complicated, and requires a lot more statistical degrees of freedom to get beyond the noise.

One thing I’m considering is assigning these values as a prior, and then conducting a series of experiments which would adjust that prior. In effect I would be using optimal Bayesian probability reasoning to show that human beings do not use optimal Bayesian probability reasoning. Still, I think that actually pinning down the categories would require a large number of participants or a long series of experiments (in frequentist statistics this distinction is vital; in Bayesian statistics it is basically irrelevant—one of the simplest reasons to be Bayesian is that it no longer bothers you whether someone did 2 experiments of 100 people or 1 experiment of 200 people, provided they were the same experiment of course). And of course there’s always the possibility that my theory is totally off-base, and I find nothing; a dissertation replicating cumulative prospect theory is a lot less exciting (and, sadly, less publishable) than one refuting it.
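A minimal sketch of that Bayesian approach, under invented assumptions: a small grid of candidate locations for one category edge, and a noisy threshold response model in which a participant accepts a gamble only when its probability exceeds their personal edge:

```python
# Toy Bayesian update over where one category edge sits.
# The grid, the flat prior, and the 5% response noise are all assumptions.
def update(prior, boundary_grid, offered_p, chose_gamble, noise=0.05):
    posterior = []
    for b, pr in zip(boundary_grid, prior):
        p_accept = (1 - noise) if offered_p > b else noise
        likelihood = p_accept if chose_gamble else 1 - p_accept
        posterior.append(pr * likelihood)
    total = sum(posterior)
    return [x / total for x in posterior]

grid = [0.0005, 0.001, 0.002, 0.005]  # candidate edges near "almost impossible"
belief = [0.25] * 4                   # flat prior
# A participant accepts a gamble at p = 0.003: edges above 0.003 lose mass.
belief = update(belief, grid, offered_p=0.003, chose_gamble=True)
```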

Still, I think something like this is worth exploring. I highly doubt that people are doing very much math when they make most probabilistic judgments, and using categories would provide a very good way for people to make judgments usefully with no math at all.

The difference between price, cost, and value

JDN 2457559

This topic has been on the voting list for my Patreons for several months, but it never quite seems to win the vote. Well, this time it did. I’m glad, because I was tempted to do it anyway.

“Price”, “cost”, and “value”; the words are often used more or less interchangeably, not only by regular people but even by economists. I’ve read papers that talked about “rising labor costs” when what they clearly meant was rising wages—rising labor prices. I’ve read papers that tried to assess the projected “cost” of climate change by using the prices of different commodity futures. And hardly a day goes by that I don’t see a TV commercial listing one (purely theoretical) price, cutting it in half (to the actual price), and saying they’re now giving you “more value”.

As I’ll get to, there are reasons to think they would be approximately the same for some purposes. Indeed, they would be equal, at the margin, in a perfectly efficient market—that may be why so many economists use them this way, because they implicitly or explicitly assume efficient markets. But they are fundamentally different concepts, and it’s dangerous to equate them casually.

Price

Price is exactly what you think it is: The number of dollars you must pay to purchase something. Most of the time when we talk about “cost” or “value” and then give a dollar figure, we’re actually talking about some notion of price.

Generally we speak in terms of nominal prices, which are the usual concept of prices in actual dollars paid, but sometimes we do also speak in terms of real prices, which are relative prices of different things once you’ve adjusted for overall inflation. “Inflation-adjusted price” can be a somewhat counter-intuitive concept; if a good’s (nominal) price rises, but by less than most other prices have risen, its real price has actually fallen.
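A one-line illustration of the adjustment, with made-up numbers: a good whose nominal price rose 10% while the overall price level rose 20% has actually gotten cheaper in real terms.

```python
# Express a past nominal price in today's dollars using a price index.
# The CPI values here are invented for illustration.
def real_price(nominal_price, cpi_then, cpi_now):
    return nominal_price * cpi_now / cpi_then

old_in_todays_dollars = real_price(1.00, cpi_then=100, cpi_now=120)  # $1.20
new_nominal = 1.10
# new_nominal < old_in_todays_dollars: the real price fell,
# even though the nominal price rose.
```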

You also need to be careful about just what price you’re looking at. When we look at labor prices, for example, we need to consider not only cash wages, but also fringe benefits and other compensation such as stock options. But other than that, prices are fairly straightforward.

Cost

Cost is probably not at all what you think it is. The real cost of something has nothing to do with money; saying that a candy bar “costs $2” or a computer “costs $2,000” is at best a somewhat sloppy shorthand and at worst a fundamental distortion of what cost is and why it matters. No, those are prices. The cost of a candy bar is the toil of children in cocoa farms in Cote d’Ivoire. The cost of a computer is the ecological damage and displaced indigenous people caused by coltan mining in Congo.

The cost of something is the harm that it does to human well-being (or for that matter to the well-being of any sentient being). It is not measured in money but in “the sweat of our laborers, the genius of our scientists, the hopes of our children” (to quote Eisenhower, who understood real cost better than most economists). There is also opportunity cost, the real cost we pay not by what we did, but by what we didn’t do—what we could have done instead.

This is important precisely because while costs should always be reduced when possible, prices can in fact be too low—and indeed, artificially low prices of goods due to externalities are probably the leading reason why humanity bears so many excess real costs. If the price of that chocolate bar accurately reflected the suffering of those African children (perhaps by—Gasp! Paying them a fair wage?), and the price of that computer accurately reflected the ecological damage of those coltan mines (a carbon tax, at least?), you might not want to buy them anymore; in which case, you should not have bought them. In fact, as I’ll get to once I discuss value, there is reason to think that even if you would buy them at a price that accurately reflected the dollar value of the real cost to their producers, we would still buy more than we should.

There is a point at which we should still buy things even though people get hurt making them; if you deny this, stop buying literally anything ever again. We don’t like to think about it, but any product we buy did cause some person, in some place, some degree of discomfort or unpleasantness in production. And many quite useful products will in fact cause death to a nonzero number of human beings.

For some products this is only barely true—it’s hard to feel bad for bestselling authors and artists who sell their work for millions: whatever toil they may put into their work, and whatever their elevated suicide rate (which is clearly endogenous; people aren’t randomly assigned to be writers), they also surely enjoy it a good deal of the time, and even if they didn’t, their work sells for millions. But for many products it is quite obviously true: A certain proportion of roofers, steelworkers, and truck drivers will die doing their jobs. We can either accept that, recognizing that it’s worth it to have roofs, steel, and trucking—and by extension, industrial capitalism, and its whole babies not dying thing—or we can give up on the entire project of human civilization, and go back to hunting and gathering; even if we somehow managed to avoid the direct homicide most hunter-gatherers engage in, far more people would simply die of disease or get eaten by predators.

Of course, we should have safety standards; but the benefits of higher safety must be carefully weighed against the potential costs of inefficiency, unemployment, and poverty. Safety regulations can reduce some real costs and increase others, even if they almost always increase prices. A good balance is struck when real cost is minimized, where any additional regulation would increase inefficiency more than it improves safety.

Actually, OSHA is an unsung hero for its excellent performance at striking this balance, just as the EPA is for its balance in environmental regulations (and that whole cutting crime in half business). If activists are mad at you for not banning everything bad and business owners are mad at you for not letting them do whatever they want, you’re probably doing it right. Would you rather have people saved from fires, or fires prevented by good safety procedures? Would you rather have murderers imprisoned, or boys who grow up healthy and never become murderers? If an ounce of prevention is worth a pound of cure, why does everyone love firefighters and hate safety regulators? So let me take this opportunity to say thank you, OSHA and EPA, for doing the jobs of firefighters and police way better than they do, and unlike them, never expecting to be lauded for it.

And now back to our regularly scheduled programming. Markets are supposed to reflect costs in prices, which is why it’s not totally nonsensical to say “cost” when you mean “price”; but in fact they aren’t very good at that, for reasons I’ll get to in a moment.

Value

Value is how much something is worth—not to sell it (that’s the price again), but to use it. One of the core principles of economics is that trade is nonzero-sum, because people can exchange goods that they value differently and thereby make everyone better off. They can’t price them differently—the buyer and the seller must agree upon a price to make the trade. But they can value them differently.

To see how this works, let’s look at a very simple toy model, the simplest essence of trade: Alice likes chocolate ice cream, but all she has is a gallon of vanilla ice cream. Bob likes vanilla ice cream, but all he has is a gallon of chocolate ice cream. So Alice and Bob agree to trade their ice cream, and both of them are happier.

We can measure value in “willingness-to-pay” (WTP), the highest price you’d willingly pay for something. That makes value look more like a price; but there are several reasons we must be careful when we do that. The obvious reason is that WTP varies with overall inflation; since $5 isn’t worth as much in 2016 as it was in 1956, something with a WTP of $5 in 1956 would have a much higher WTP in 2016. The not-so-obvious reason is that money is worth less to you the more you have, so we also need to take into account the effect of wealth, and the marginal utility of wealth. The more money you have, the more money you’ll be willing to pay in order to get the same amount of real benefit. (This actually creates some very serious market distortions in the presence of high income inequality, which I may make the subject of a post or even a paper at some point.) Similarly there is “willingness-to-accept” (WTA), the lowest price you’d willingly accept for it. In theory these should be equal; in practice, WTA is usually slightly higher than WTP, in what’s called the endowment effect.

So to make our model a bit more quantitative, we could suppose that Alice values vanilla at $5 per gallon and chocolate at $10 per gallon, while Bob also values vanilla at $5 per gallon but only values chocolate at $4 per gallon. (I’m using these numbers to point out that not all the valuations have to be different for trade to be beneficial, as long as some are.) Therefore, if Alice sells her vanilla ice cream to Bob for $5, both will (just barely) accept that deal; and then Alice can buy chocolate ice cream from Bob for anywhere between $4 and $10 and still make both people better off. Let’s say they agree to also sell for $5, so that no net money is exchanged and it is effectively the same as just trading ice cream for ice cream. In that case, Alice has gained $5 in consumer surplus (her WTP of $10 minus the $5 she paid) while Bob has gained $1 in producer surplus (the $5 he received minus his $4 WTP). The total surplus will be $6 no matter what price they choose, which we can compute directly from Alice’s WTP of $10 minus Bob’s WTA of $4. The price ultimately decides how that total surplus is distributed between the two parties, and in the real world it would very likely be the result of which one is the better negotiator.
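The example's surplus arithmetic can be written out directly. The numbers are the ones from the text; the function name is just for illustration:

```python
# Consumer surplus, producer surplus, and total surplus for one trade.
def surpluses(buyer_wtp, seller_wta, price):
    consumer = buyer_wtp - price
    producer = price - seller_wta
    return consumer, producer, consumer + producer

# Alice buys Bob's chocolate at $5: she gains $5, he gains $1.
at_5 = surpluses(buyer_wtp=10, seller_wta=4, price=5)

# At any price between $4 and $10 the total is the same $6;
# the price only redistributes it between the two parties.
at_9 = surpluses(buyer_wtp=10, seller_wta=4, price=9)
```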

The enormous cost of our distorted understanding

(See what I did there?) If markets were perfectly efficient, prices would automatically adjust so that, at the margin, value is equal to price is equal to cost. What I mean by “at the margin” might be clearer with an example: Suppose we’re selling apples. How many apples do you decide to buy? Well, the value of each successive apple to you is lower, the more apples you have (the law of diminishing marginal utility, which unlike most “laws” in economics is actually almost always true). At some point, the value of the next apple will be just barely above what you have to pay for it, so you’ll stop there. By a similar argument, the cost of producing apples increases the more apples you produce (the law of diminishing returns, which is a lot less reliable, more like the Pirate Code), and the producers of apples will keep selling them until the price they can get is only just barely larger than the cost of production. Thus, in the theoretical limit of infinitely-divisible apples and perfect rationality, marginal value = price = marginal cost. In such a world, markets are perfectly efficient and they maximize surplus, which is the difference between value and cost.
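A discrete toy version of the apples argument, with invented value and cost schedules: trade continues until the marginal apple's value to the buyer and cost to the seller meet at the price.

```python
# Diminishing marginal value for the buyer, rising marginal cost for
# the seller; both schedules are made up for illustration.
marginal_value = [10, 8, 6, 4, 2]  # buyer's value of each successive apple
marginal_cost = [1, 2, 3, 4, 5]    # seller's cost of each successive apple

def quantity_traded(price):
    demand = sum(1 for v in marginal_value if v >= price)  # buyer stops below price
    supply = sum(1 for c in marginal_cost if c <= price)   # seller stops above price
    return min(demand, supply)

# At a price of $4 the buyer wants four apples and the seller offers four:
# for the marginal apple, value = price = cost = 4.
```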

But in the real world of course, none of those assumptions are true. No product is infinitely divisible (though the gasoline in a car is obviously a lot more divisible than the car itself). No one is perfectly rational. And worst of all, we’re not measuring value in the same units. As a result, there is basically no reason to think that markets are optimizing anything; their optimization mechanism is setting two things equal that aren’t measured the same way, like trying to achieve thermal equilibrium by matching the temperature of one thing in Celsius to the temperature of other things in Fahrenheit.

An implicit assumption of the above argument that didn’t even seem worth mentioning was that when I set value equal to price and set price equal to cost, I’m setting value equal to cost; transitive property of equality, right? Wrong. The value is equal to the price, as measured by the buyer. The cost is equal to the price, as measured by the seller.

If the buyer and seller have the same marginal utility of wealth, no problem; they are measuring in the same units. But if not, we convert from utility to money and then back to utility, using a different function to convert each time. In the real world, wealth inequality is massive, so it’s wildly implausible that we all have anything close to the same marginal utility of wealth. Maybe that’s close enough if you restrict yourself to middle-class people in the First World; so when a tutoring client pays me, we might really be getting close to setting marginal value equal to marginal cost. But once you include corporations that are owned by billionaires and people who live on $2 per day, there’s simply no way that those price-to-utility conversions are the same at each end. For Bill Gates, a million dollars is a rounding error. For me, it would buy a house, give me more flexible work options, and keep me out of debt, but not radically change the course of my life. For a child on a cocoa farm in Cote d’Ivoire, it could change her life in ways she can probably not even comprehend.
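The same log-utility assumption as in the lottery example makes the point quantitative; the wealth figures below are round numbers for illustration, not estimates of anyone's actual finances.

```python
import math

# Utility gained from a $1 million windfall, assuming log utility of wealth.
def utility_gain(wealth, windfall=1_000_000):
    return math.log(wealth + windfall) - math.log(wealth)

gain_billionaire = utility_gain(100_000_000_000)  # ~1e-5: a rounding error
gain_middle_class = utility_gain(1_000_000)       # ln 2 ≈ 0.69: life-changing
gain_poor = utility_gain(1_000)                   # ~6.9: utterly transformative
```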

The market distortions created by this are huge; indeed, most of the fundamental flaws in capitalism as we know it are ultimately traceable to this. Why do Americans throw away enough food to feed all the starving children in Africa? Marginal utility of wealth. Why are Silicon Valley programmers driving the prices for homes in San Francisco higher than most Americans will make in their lifetimes? Marginal utility of wealth. Why are the Koch brothers spending more on this year’s elections than the nominal GDP of the Gambia? Marginal utility of wealth. It’s the sort of pattern that once you see it suddenly seems obvious and undeniable, a paradigm shift a bit like the heliocentric model of the solar system. Forget trade barriers, immigration laws, and taxes; the most important market distortions around the world are all created by wealth inequality. Indeed, the wonder is that markets work as well as they do.

The real challenge is what to do about it, how to reduce this huge inequality of wealth and therefore marginal utility of wealth, without giving up entirely on the undeniable successes of free market capitalism. My hope is that once more people fully appreciate the difference between price, cost, and value, this paradigm shift will be much easier to make; and then perhaps we can all work together to find a solution.