On the Overton Window

Jul 24 JDN 2459786

As you are no doubt aware, a lot of people on the Internet like to loudly proclaim support for really crazy, extreme ideas. Some of these people actually believe in those ideas, and if you challenge them, will do their best to defend them. Those people are wrong at the level of substantive policy, but there’s nothing wrong with their general approach: If you really think that anarchism or communism is a good thing, it only makes sense that you’d try to convince other people. You might have a hard time of it (in part because you are clearly wrong), but it makes sense that you’d try.

But there is another class of people who argue for crazy, extreme ideas. When pressed, they will admit they don’t really believe in abolishing the police or collectivizing all wealth, but they believe in something else that’s sort of vaguely in that direction, and they think that advocating for the extreme idea will make people more likely to accept what they actually want.

They often refer to this as “shifting the Overton Window”. As Matt Yglesias explained quite well a year ago, this is not actually what Overton was talking about.

But, in principle, it could still be a thing that works. There is a cognitive bias known as anchoring which is often used in marketing: If I only offered a $5 bottle of wine and a $20 bottle of wine, you might think the $20 bottle is too expensive. But if I also include a $50 bottle, that makes you adjust your perceptions of what constitutes a “reasonable” price for wine, and may make you more likely to buy the $20 bottle after all.

It could be, therefore, that an extreme policy demand makes people more willing to accept moderate views, as a sort of compromise. Maybe demanding the abolition of police is a way of making other kinds of police reform seem more reasonable. Maybe showing pictures of Marx and chanting “eat the rich” could make people more willing to accept higher capital gains taxes. Maybe declaring that we are on the verge of apocalyptic climate disaster will make people more willing to accept tighter regulations on carbon emissions and subsidies for solar energy.

Then again—does it actually seem to do that? I see very little evidence that it does. All those demands for police abolition haven’t changed the fact that defunding the police is unpopular. Raising taxes on the rich is popular, but it has been for a while now (and never was with, well, the rich). And decades of constantly shouting about imminent climate catastrophe are really starting to look like crying wolf.

To see why this strategy seems to be failing, I think it’s helpful to consider how it feels from the other side. Take a look at some issues where someone else is trying to get you to accept a particular view, and consider whether someone advocating a more extreme view would make you more likely to compromise.

Your particular opinions may vary, but here are some examples that would apply to me, and, I suspect, many of you.

If someone says they want tighter border security, I’m skeptical—it’s pretty tight already. But in and of itself, this would not be such a crazy idea. Certainly I agree that it is possible to have too little border security, and so maybe that turns out to be the state we’re in.

But then, suppose that same person, or someone closely allied to them, starts demanding the immediate deportation of everyone who was not born in the United States, even those who immigrated legally and are naturalized or here on green cards. This is a crazy, extreme idea that’s further in the same direction, so on this anchoring theory, it should make me more willing to accept the idea of tighter border security. And yet, I can say with some confidence that it has no such effect.

Indeed, if anything I think it would make me less likely to accept tighter border security, in proportion to how closely aligned those two arguments are. If they are coming from the same person, or the same political party, it would cause me to suspect that the crazy, extreme policy is the true objective, and the milder, compromise policy is just a means toward that end. It also suggests certain beliefs and attitudes about immigration in general—xenophobia, racism, ultranationalism—that I oppose even more strongly. If you’re talking about deporting all immigrants, you make me suspect that your reasons for wanting tighter border security are not good ones.

Let’s try another example. Suppose someone wants to cut taxes on upper income brackets. In our current state, I think that would be a bad idea. But there was a time not so long ago when I would have agreed with it: Even I have to admit that a top bracket of 94% (as we had in 1944) sounds a little ridiculous, and is surely on the wrong side of the Laffer curve. So the basic idea of cutting top tax rates is not inherently crazy or ridiculous.

Now, suppose that same idea came from the same person, or the same party, or the same political movement, as one that was arguing for the total abolition of all taxation. This is a crazy, extreme idea; it would amount to either total anarcho-capitalism with no government at all, or some sort of bizarre system where the government is funded entirely through voluntary contributions. I think it’s pretty obvious that such a system would be terrible, if not outright impossible; and anyone whose understanding of political economy is sufficiently poor that they would fail to see this is someone whose overall judgment on questions of policy I must consider dubious. Once again, the presence of the extreme view does nothing to make me want to consider the moderate view, and may even make me less willing to do so.

Perhaps I am an unusually rational person, not so greatly affected by anchoring biases? Perhaps. But whereas I do feel briefly tempted to buy the $20 wine bottle by the presence of the $50 bottle, and must correct myself with what I know about anchoring bias, the presentation of an extreme political view never even makes me feel any temptation to accept some kind of compromise with it. Learning that someone supports something crazy or ridiculous—or is willing to say they do, even if deep down they don’t—makes me automatically lower my assessment of their overall credibility. If anything, I think I am tempted to overreact in that direction, and have to remind myself of the Stopped Clock Principle: reversed stupidity is not intelligence, and someone can have both bad ideas and good ones.

Moreover, the empirical data, while sketchy, doesn’t seem to support this either; where the Overton Window (in the originally intended sense) has shifted, as on LGBT rights, it was because people convincingly argued that the “extreme” position was in fact an entirely reasonable and correct view. There was a time not so long ago that same-sex marriage was deemed unthinkable, and the “moderate” view was merely decriminalizing sodomy; but we demanded, and got, same-sex marriage, not as a strategy to compromise on decriminalizing sodomy, but because we actually wanted same-sex marriage and had good arguments for it. I highly doubt we would have been any more successful if we had demanded something ridiculous and extreme, like banning opposite-sex marriage.

The resulting conclusion seems obvious and banal: Only argue for things you actually believe in.

Yet, somehow, that seems to be a controversial view these days.

Experimentally testing categorical prospect theory

Dec 4, JDN 2457727

In last week’s post I presented a new theory of probability judgments, which doesn’t rely upon people performing complicated math even subconsciously. Instead, I hypothesize that people try to assign categories to their subjective probabilities, and throw away all the information that wasn’t used to assign that category.

The way to most clearly distinguish this from cumulative prospect theory is to show discontinuity. Kahneman’s smooth, continuous function places fairly strong bounds on just how much a shift from 0% to 0.000001% can really affect your behavior. In particular, if you want to explain the fact that people do seem to behave differently around 10% compared to 1% probabilities, you can’t allow the slope of the smooth function to get much higher than 10 at any point, even near 0 and 1. (It does depend on the precise form of the function, but the more complicated you make it, the more free parameters you add to the model. In the most parsimonious form, which is a cubic polynomial, the maximum slope is actually much smaller than this—only 2.)
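To make that bound concrete, here is a minimal Python sketch using one illustrative smooth cubic weighting function of my own choosing—w(p) = 3p^2 − 2p^3, whose slope never exceeds 1.5—rather than any specific parameterization from Kahneman’s work. Under any smooth weighting function with slope at most S, a shift of Δp in objective probability can change the decision weight by at most S·Δp, so a shift from 0 to 10^-6 simply cannot matter much.

```python
def w(p):
    """An illustrative smooth cubic weighting function with w(0)=0, w(1)=1.
    Chosen only for this sketch; its slope w'(p) = 6p(1-p) never exceeds 1.5."""
    return 3 * p**2 - 2 * p**3

# Decision-weight change produced by various shifts in objective probability
for p_lo, p_hi in [(0.0, 1e-6), (0.01, 0.02), (0.10, 0.20)]:
    print(f"shift {p_lo:g} -> {p_hi:g}: weight change = {w(p_hi) - w(p_lo):.3e}")

# Any smooth function with slope <= S changes the weight by at most S*(p_hi - p_lo),
# so the 0 -> 1e-6 shift is bounded by 1.5e-6 here -- far too small to flip a decision.
```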

If that’s the case, then switching from 0% to 0.00001% should have no more effect in reality than a switch from 0% to 0.0001% would have for a rational expected utility optimizer. But in fact I think I can set up scenarios where it would have a larger effect than a switch from 0.001% to 0.01%.

Indeed, such games already exist and are quite profitable for the majority of US states; they are called lotteries.

Rationally, it should make very little difference to you whether your odds of winning the Powerball are 0 (you bought no ticket) or about 10^-9 (you bought a ticket), even when the prize is $100 million. This is because the utility you would gain from $100 million is nowhere near 100 million times the utility you would gain from $1. A good guess would be that your lifetime income is about $2 million, your utility is logarithmic in income, the units of utility are hectoQALY, and the baseline income level is about $100,000.

I apologize for the extremely large number of decimals, but I had to include them in order to show any difference at all; the two values don’t deviate from each other until the ninth decimal place.

Your utility if you don’t have a ticket is ln(20) = 2.9957322736 hQALY.

Your utility if you have a ticket is (1-10^-9) ln(20) + 10^-9 ln(1020) = 2.9957322775 hQALY.

You gain a whopping 0.4 microQALY over your whole lifetime. I highly doubt you could even perceive such a difference.

And yet, people are willing to pay nontrivial sums for the chance to play such lotteries. Powerball tickets sell for about $2 each, and some people buy tickets every week. If you do that and live to be 80, you will spend some $8,000 on lottery tickets during your lifetime, which results in this expected utility: (1-4*10^-6) ln(20-0.08) + 4*10^-6 ln(1020) = 2.9917399955 hQALY.
You have now sacrificed 0.004 hectoQALY, which is to say 0.4 QALY—that’s months of happiness you’ve given up to play this stupid pointless game.

Which shouldn’t be surprising, as (with 99.9996% probability) you have given up four months of your lifetime income with nothing to show for it. Lifetime income of $2 million / lifespan of 80 years = $25,000 per year; $8,000 / $25,000 = 0.32. You’ve actually sacrificed slightly more than this, which comes from your risk aversion.
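For anyone who wants to check the arithmetic, here is a short Python sketch reproducing the three utility figures above under the stated assumptions (log utility, hectoQALY units, a $100,000 baseline, and the same rounded $8,000 and 4*10^-6 figures used in the text):

```python
import math

BASELINE = 100_000           # utility = ln(income / $100,000), measured in hQALY
LIFETIME_INCOME = 2_000_000  # assumed lifetime income
PRIZE = 100_000_000          # Powerball jackpot
P_WIN = 1e-9                 # assumed per-ticket probability of winning

u_none = math.log(LIFETIME_INCOME / BASELINE)  # no ticket

# One ticket (its $2 cost neglected, as in the text)
u_one = (1 - P_WIN) * math.log(LIFETIME_INCOME / BASELINE) \
      + P_WIN * math.log((LIFETIME_INCOME + PRIZE) / BASELINE)

# A ticket every week for 80 years: ~$8,000 spent, ~4e-6 chance of ever winning
spent, p_ever = 8_000, 4e-6
u_weekly = (1 - p_ever) * math.log((LIFETIME_INCOME - spent) / BASELINE) \
         + p_ever * math.log((LIFETIME_INCOME + PRIZE) / BASELINE)

print(f"no ticket    : {u_none:.10f} hQALY")
print(f"one ticket   : {u_one:.10f} hQALY (gain {(u_one - u_none) * 1e8:.2f} microQALY)")
print(f"weekly player: {u_weekly:.10f} hQALY (loss {(u_none - u_weekly) * 100:.2f} QALY)")
```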

Why would anyone do such a thing? Because while the difference between 0 and 10^-9 may be trivial, the difference between “impossible” and “almost impossible” feels enormous. “You can’t win if you don’t play!” they say, but they might as well say “You can’t win if you do play either.” Indeed, the probability of winning without playing isn’t zero; you could find a winning ticket lying on the ground, or win due to an error that is then upheld in court, or be given the winnings bequeathed by a dying family member or gifted by an anonymous donor. These are of course vanishingly unlikely—but so was winning in the first place. You’re talking about the difference between 10^-9 and 10^-12, which in proportional terms sounds like a lot—but in absolute terms is nothing. If you drive to a drug store every week to buy a ticket, you are more likely to die in a car accident on the way to the drug store than you are to win the lottery.

Of course, these are not experimental conditions. So I need to devise a similar game, with smaller stakes but still large enough for people’s brains to care about the “almost impossible” category; maybe a few thousand dollars? It’s not uncommon for an economics experiment to cost thousands of dollars—it’s just usually paid out to many people instead of randomly to one person or nobody. Conducting the experiment in an underdeveloped country like India would also effectively amplify the amounts paid, but at the fixed cost of transporting the research team to India.

But I think in general terms the experiment could look something like this. You are given $20 for participating in the experiment (we treat it as already given to you, to maximize your loss aversion and endowment effect and thereby give us more bang for our buck). You then have a chance to play a game, where you pay $X to get a probability P of winning $Y*X, and we vary these numbers.

The actual participants wouldn’t see the variables, just the numbers and possibly the rules: “You can pay $2 for a 1% chance of winning $200. You can also play multiple times if you wish.” “You can pay $10 for a 5% chance of winning $250. You can only play once or not at all.”
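As a rough sketch of how such trials might be generated and shown to participants (the parameter grids here are placeholders I made up, not a finished design):

```python
import itertools
import random

# Hypothetical parameter grids -- placeholders, not a finished design
COSTS = [1, 2, 5, 10]                      # $X, the price of playing
PROBS = [0.0001, 0.001, 0.01, 0.05, 0.10]  # P, the chance of winning
MULTIPLIERS = [20, 50, 100]                # Y, so the prize is $Y * X

def make_trials(seed=0):
    """Build every (X, P, Y) combination and shuffle them for one session."""
    trials = list(itertools.product(COSTS, PROBS, MULTIPLIERS))
    random.Random(seed).shuffle(trials)
    return trials

def describe(trial):
    """Render a trial the way a participant would see it -- numbers, not variables."""
    x, p, y = trial
    return f"You can pay ${x} for a {p:.2%} chance of winning ${y * x}."

for trial in make_trials()[:3]:
    print(describe(trial))
```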

So I think the first step is to find some dilemmas, cases where people feel ambivalent, and different people differ in their choices. That’s a good role for a pilot study.

Then we take these dilemmas and start varying their probabilities slightly.

In particular, we try to vary them at the edge of where people have mental categories. If subjective probability is continuous, a slight change in actual probability should never result in a large change in behavior, and furthermore the effect of a change shouldn’t vary too much depending on where the change starts.

But if subjective probability is categorical, these categories should have edges. Then, when I present you with two dilemmas that are on opposite sides of one of the edges, your behavior should radically shift; while if I change it in a different way, I can make a large change without changing the result.

Based solely on my own intuition, I guessed that the categories roughly follow this pattern:

Impossible: 0%

Almost impossible: 0.1%

Very unlikely: 1%

Unlikely: 10%

Fairly unlikely: 20%

Roughly even odds: 50%

Fairly likely: 80%

Likely: 90%

Very likely: 99%

Almost certain: 99.9%

Certain: 100%

So for example, if I switch from 0% to 0.01%, it should have a very large effect, because I’ve moved you out of your “impossible” category (indeed, I think the “impossible” category is almost completely sharp; literally anything above zero seems to be enough for most people, even 10^-9 or 10^-10). But if I move from 1% to 2%, it should have a small effect, because I’m still well within the “very unlikely” category. Yet the latter change is literally one hundred times larger than the former. It is possible to define continuous functions that would behave this way to an arbitrary level of approximation—but they get a lot less parsimonious very fast.
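As a concrete statement of that hypothesis, here is a small Python sketch mapping a probability to the nearest of my guessed categories above (the edge locations, and the assumption that only the category matters for behavior, are exactly the things the experiment would need to test):

```python
# My guessed anchor points from the list above -- pure assumption, to be tested
CATEGORIES = [
    (0.0,   "impossible"),
    (0.001, "almost impossible"),
    (0.01,  "very unlikely"),
    (0.10,  "unlikely"),
    (0.20,  "fairly unlikely"),
    (0.50,  "roughly even odds"),
    (0.80,  "fairly likely"),
    (0.90,  "likely"),
    (0.99,  "very likely"),
    (0.999, "almost certain"),
    (1.0,   "certain"),
]

def category(p):
    """Map an objective probability to a category label.
    'Impossible' and 'certain' are treated as exact, per the sharp-edge claim;
    everything else goes to the nearest anchor (one arbitrary way to place edges)."""
    if p == 0.0:
        return "impossible"
    if p == 1.0:
        return "certain"
    interior = [(q, label) for q, label in CATEGORIES if 0.0 < q < 1.0]
    return min(interior, key=lambda pair: abs(pair[0] - p))[1]

# A 0.01-percentage-point shift out of zero crosses an edge...
print(category(0.0), "->", category(0.0001))  # impossible -> almost impossible
# ...while a shift one hundred times larger does not.
print(category(0.01), "->", category(0.02))   # very unlikely -> very unlikely
```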

Now, immediately I run into a problem, because I’m not even sure those are my categories, much less that they are everyone else’s. If I knew precisely which categories to look for, I could tell whether or not I had found them. But the process of both finding the categories and determining whether their edges are truly sharp is much more complicated, and requires a lot more statistical degrees of freedom to get beyond the noise.

One thing I’m considering is assigning these values as a prior, and then conducting a series of experiments which would adjust that prior. In effect I would be using optimal Bayesian probability reasoning to show that human beings do not use optimal Bayesian probability reasoning. Still, I think that actually pinning down the categories would require a large number of participants or a long series of experiments (in frequentist statistics this distinction is vital; in Bayesian statistics it is basically irrelevant—one of the simplest reasons to be Bayesian is that it no longer bothers you whether someone did 2 experiments of 100 people or 1 experiment of 200 people, provided they were the same experiment of course). And of course there’s always the possibility that my theory is totally off-base, and I find nothing; a dissertation replicating cumulative prospect theory is a lot less exciting (and, sadly, less publishable) than one refuting it.
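As a sketch of what that Bayesian updating might look like for a single category edge (the prior, the noise level, the toy choice model, and the observations here are all placeholders of my own, not results):

```python
import numpy as np

# Candidate locations for the edge between "almost impossible" and "very unlikely"
edges = np.linspace(0.001, 0.02, 200)

# Prior centered on a guess of about 0.5% -- an assumption, not data
prior = np.exp(-0.5 * ((edges - 0.005) / 0.003) ** 2)
prior /= prior.sum()

def likelihood(accepted, p_offered, edge, noise=0.05):
    """Toy choice model: gambles offering a probability above the edge are
    accepted with probability 1 - noise, those below it with probability noise."""
    p_accept = (1 - noise) if p_offered >= edge else noise
    return p_accept if accepted else 1 - p_accept

# Hypothetical observations: (probability offered, did the participant accept?)
observations = [(0.002, False), (0.004, False), (0.008, True), (0.012, True)]

posterior = prior.copy()
for p_offered, accepted in observations:
    posterior *= np.array([likelihood(accepted, p_offered, e) for e in edges])
    posterior /= posterior.sum()

print(f"MAP estimate of the edge:   {edges[np.argmax(posterior)]:.4f}")
print(f"Posterior mean of the edge: {(edges * posterior).sum():.4f}")
```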

Still, I think something like this is worth exploring. I highly doubt that people are doing very much math when they make most probabilistic judgments, and using categories would provide a very good way for people to make judgments usefully with no math at all.