The sunk-cost fallacy

JDN 2457075 EST 14:46.

I am back on Eastern Time once again, because we just finished our 3600-km road trek from Long Beach to Ann Arbor. I seem to move an awful lot; this makes me a bit like Schumpeter, who moved on average every two years for his whole adult life. Schumpeter and I have much in common, in fact, though I have no particular interest in horses.

Today’s topic is the sunk-cost fallacy, which was particularly salient as I had to box up all my things for the move. There were many items that I ended up having to throw away because it wasn’t worth moving them—but this was always painful, because I couldn’t help but think of all the work or money I had put into them. I threw away craft projects I had spent hours working on and collections of bottlecaps I had gathered over years—because I couldn’t think of when I’d use them, and ultimately the question isn’t how hard they were to make in the past, it’s what they’ll be useful for in the future. But each time it hurt, like I was giving up a little part of myself.

That’s the sunk-cost fallacy in a nutshell: Instead of considering whether something will be useful to us later and thus worth having around, we naturally tend to consider the effort that went into getting it. Instead of making our decisions based on the future, we make them based on the past.

Come to think of it, the entire Marxist labor theory of value is basically one gigantic sunk-cost fallacy: Instead of caring about the usefulness of a product—the mainstream utility theory of value—we are supposed to care about the labor that went into making it. To see why this is wrong, imagine someone spends 10,000 hours carving meaningless symbols into a rock, and someone else spends 10 minutes working with chemicals but somehow figures out how to cure pancreatic cancer. Which one would you pay more for—particularly if you had pancreatic cancer?

This is one of the most common irrational behaviors humans engage in, and it’s worth considering why that might be. Most people commit the sunk-cost fallacy on a daily basis, and even those of us who are aware of it will still fall into it if we aren’t careful.

This often seems to come from a fear of being wasteful; I don’t know of any data on this, but my hunch is that the more environmentalist you are, the more often you tend to run into the sunk-cost fallacy. You feel particularly bad wasting things when you are conscious of the damage that waste does to our planetary ecosystem. (Which is not to say that you should not be environmentalist; on the contrary, most of us should be a great deal more environmentalist than we are. The negative externalities of environmental degradation are almost unimaginably enormous—climate change already kills 150,000 people every year and is projected to kill tens if not hundreds of millions of people over the 21st century.)

I think sunk-cost fallacy is involved in a lot of labor regulations as well. Most countries have employment protection legislation that makes it difficult to fire people for various reasons, ranging from the basically reasonable (discrimination against women and racial minorities) to the totally absurd (in some countries you can’t even fire people for being incompetent). These sorts of regulations are often quite popular, because people really don’t like the idea of losing their jobs. When faced with the possibility of losing your job, you should be thinking about what your future options are; but many people spend a lot of time thinking about the past effort they put into this one. I think there is some endowment effect and loss aversion at work as well: You value your job more simply because you already have it, so you don’t want to lose it even for something better.

Yet these regulations are widely regarded by economists as inefficient; and for once I am inclined to agree. While I certainly don’t want people being fired frivolously or for discriminatory reasons, sometimes companies really do need to lay off workers because there simply isn’t enough demand for their products. When a factory closes down, we think about the jobs that are lost—but we don’t think about the better jobs they can now do instead.

I favor a system like what they have in Denmark (I’m popularizing a hashtag about this sort of thing: #Scandinaviaisbetter): We don’t try to protect your job, we try to protect you. Instead of regulations that make it hard to fire people, Denmark has a generous unemployment insurance system, strong social welfare policies, and active labor market policies that help people retrain and find new and better jobs. One thing I think Denmark might want to consider is restrictions on cyclical layoffs—in a recession there is pressure to lay off workers, but that can create a vicious cycle that makes recessions worse. Denmark was hit considerably harder by the Great Recession than France, for example; where France’s unemployment rose from 7.5% to 9.6%, Denmark’s rose from an astonishing 3.1% all the way up to 7.6%.

Then again, sometimes what looks like a sunk-cost fallacy actually isn’t—and I think this gives us insight into how we might have evolved such an apparently silly heuristic in the first place.

Why would you care about what you did in the past when deciding what to do in the future? Well there’s one reason in particular: Credible commitment. There are many cases in life where you’d like to be able to plan to do something in the future, but when the time comes to actually do it you’ll be tempted not to follow through.

This sort of thing happens all the time: When you take out a loan, you plan to pay it back—but when you need to actually make payments it sure would be nice if you didn’t have to. If you’re trying to slim down, you go on a diet—but doesn’t that cookie look delicious? You know you should quit smoking for your health—but what’s one more cigarette, really? When you get married, you promise to be faithful—but then sometimes someone else comes along who seems so enticing! Your term paper is due in two weeks, so you really should get working on it—but your friends are going out for drinks tonight, why not start the paper tomorrow?

Our true long-term interests are often misaligned with our short-term temptations. This often happens because of hyperbolic discounting, which is a bit technical; but the basic idea is that you tend to rate the importance of an event in inverse proportion to its distance in time. That turns out to be irrational, because as you get closer to the event, your valuations will change disproportionately. The optimal rational choice would be exponential discounting, where you value each successive moment a fixed percentage less than the last—since that percentage doesn’t change, your valuations will always stay in line with one another. But basically nobody really uses exponential discounting in real life.

We can see this vividly in experiments: If we ask people whether they would rather receive $100 today or $110 a week from now, they often go with $100 today. But if you ask them whether they would rather receive $100 in 52 weeks or $110 in 53 weeks, almost everyone chooses the $110. The value of a week apparently depends on how far away it is! (The $110 is clearly the rational choice, by the way. Discounting 10% per week makes no sense at all—unless you literally believe that $1,000 today is as good as $140,000 a year from now.)
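If you want to see the reversal concretely, here is a minimal Python sketch; the hyperbolic form V = A/(1 + kD) is the standard one, but the parameter values are made up purely for illustration:

```python
# Hyperbolic vs. exponential discounting -- illustrative parameters only.

def hyperbolic(amount, weeks, k=0.15):
    """Value falls roughly in inverse proportion to delay."""
    return amount / (1 + k * weeks)

def exponential(amount, weeks, rate=0.002):
    """Value falls by a fixed percentage each week."""
    return amount * (1 - rate) ** weeks

# Today: $100 now beats $110 next week under hyperbolic discounting...
print(hyperbolic(100, 0), hyperbolic(110, 1))    # 100.0 vs ~95.7
# ...but at a year's distance, the same one-week gap flips:
print(hyperbolic(100, 52), hyperbolic(110, 53))  # ~11.4 vs ~12.3

# Exponential discounting never reverses itself:
print(exponential(110, 1) > exponential(100, 0))    # True
print(exponential(110, 53) > exponential(100, 52))  # True
```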

To solve this problem, it can be advantageous to make commitments—either enforced by direct measures such as legal penalties, or even simply by making promises that we feel guilty breaking. That’s why cold turkey is often the most effective way to quit a drug. Physiologically that makes no sense, because gradual cessation clearly does reduce withdrawal symptoms. But psychologically it does, because cold turkey allows you to make a hardline commitment to never again touch the stuff. The majority of smokers who successfully quit report doing it cold turkey, though there is still ongoing research on whether properly-orchestrated gradual reduction can be more effective. Likewise, vague notions like “I’ll eat better and exercise more” are virtually useless, while specific prescriptions like “I will do 20 minutes of exercise every day and stop eating red meat” are much more effective—the latter allows you to make a promise to yourself that can be broken, and since you feel bad breaking it you are motivated to keep it.

In the presence of such commitments, the past does matter, at least insofar as you made commitments to yourself or others in the past. If you promised never to smoke another cigarette, or never to cheat on your wife, or never to eat meat again, you actually have a good reason—and a good chance—to never do those things. This is easy to confuse with a sunk cost; when you think about the 20 years you’ve been married or the 10 years you’ve been vegetarian, you might be thinking of the sunk cost you’ve incurred over that time, or you might be thinking of the promises you’ve made and kept to yourself and others. In the former case you are irrationally committing a sunk-cost fallacy; in the latter you are rationally upholding a credible commitment.

This is most likely why we evolved in such a way as to commit sunk-cost fallacies. The ability to enforce commitments on ourselves and others was so important that it was worth it to overcompensate and sometimes let us care about sunk costs. Because commitments and sunk costs are often difficult to distinguish, it would have been more costly to evolve better ways of distinguishing them than it was to simply make the mistake.

Perhaps people who are outraged by being laid off aren’t actually committing a sunk-cost fallacy at all; perhaps they are instead assuming the existence of a commitment where none exists. “I gave this company 20 good years, and now they’re getting rid of me?” But the truth is, you gave the company nothing. They never committed to keeping you (unless they signed a contract, but that’s different; if they are violating a contract, of course they should be penalized for that). They made you a trade, and when that trade ceases to be advantageous they will stop making it. Corporations don’t think of themselves as having any moral obligations whatsoever; they exist only to make profit. It is certainly debatable whether it was a good idea to set up corporations in this way; but unless and until we change that system it is important to keep it in mind. You will almost never see a corporation do something out of kindness or moral obligation; that’s simply not how corporations work. At best, they do nice things to enhance their brand reputation (Starbucks, Whole Foods, Microsoft, Disney, Costco). Some don’t even bother doing that, letting people hate as long as they continue to buy (Walmart, BP, DeBeers). Actually the former model seems to be more successful lately, which bodes well for the future; but be careful to recognize that few if any of these corporations are genuinely doing it out of the goodness of their hearts. Human beings are often altruistic; corporations are specifically designed not to be.

And there were some things I did promise myself I would keep—like old photos and notebooks that I want to keep as memories—so those went in boxes. Other things were obviously still useful—clothes, furniture, books. But for the rest? It was painful, but I thought about what I could realistically use them for, and if I couldn’t think of anything, they went into the trash.

Love is rational

JDN 2457066 PST 15:29.

Since I am writing this the weekend of Valentine’s Day (actually by the time it is published it will be Valentine’s Day) and sitting across from my boyfriend, it seems particularly appropriate that today’s topic should be love. As I am writing it is in fact Darwin Day, so it is fitting that evolution will be a major topic as well.

Usually we cognitive economists are the ones reminding neoclassical economists that human beings are not always rational. Today however I must correct a misconception in the opposite direction: Love is rational, or at least it can be, should be, and typically is.

Lately I’ve been reading Tim Harford’s The Logic of Life, which actually makes much the same point, about love and many other things. I had expected it to be a dogmatic defense of economic rationality—published in 2008 no less, which would make it the scream of a dying paradigm as it carries us all down with it—but I was in fact quite pleasantly surprised. The book takes a nuanced position on rationality very similar to my own, and actually incorporates many of the insights from neuroeconomics and cognitive economics. I think Harford would basically agree with me that human beings are 90% rational (but woe betide the other 10%).

We have this romantic (Romantic?) notion in our society that love is not rational, it is “beyond” rationality somehow. “Love is blind”, they say; and this is often used as a smug reply to the notion that rationality is the proper guide to live our lives.

The argument would seem to follow: “Love is not rational, love is good, therefore rationality is not always good.”

But then… the argument would follow? What do you mean, follow? Follow logically? Follow rationally? Something is clearly wrong if we’ve constructed a rational argument intended to show that we should not live our lives by rational arguments.

And the problem of course is the premise that love is not rational. Whatever made you say that?

It’s true that love is not directly volitional, not in the way that it is volitional to move your arm upward or close your eyes or type the sentence “Jackdaws love my big sphinx of quartz.” You don’t exactly choose to love someone, weighing the pros and cons and making a decision the way you might choose which job offer to take or which university to attend.

But then, you don’t really choose which university you like either, now do you? You choose which to attend. But your enjoyment of that university is not a voluntary act. And similarly you do in fact choose whom to date, whom to marry. And you might well consider the pros and cons of such decisions. So the difference is not as large as it might at first seem.

More importantly, to say that our lives should be rational is not the same as saying they should be volitional. You simply can’t live your life in a completely volitional way, no matter how hard you try. You simply don’t have the cognitive resources to maintain constant awareness of every breath, every heartbeat. Yet there is nothing irrational about breathing or heartbeats—indeed they are necessary for survival and thus a precondition of anything rational you might ever do.

Indeed, in many ways it is our subconscious that is the most intelligent part of us. It is not as flexible as our conscious mind—that is why our conscious mind is there—but the human subconscious is unmatched in its efficiency and reliability among all known computational systems. Walk across a room and it will solve inverse kinematics in real time. Throw a ball and it will solve three-dimensional nonlinear differential equations as well. Look at a familiar face and it will immediately identify it among a set of hundreds of faces with near-perfect accuracy regardless of the angle, lighting conditions, or even hairstyle. To see that I am not exaggerating the immense difficulty of these tasks, look at how difficult it is to make robots that can walk on two legs or throw balls. Face recognition is so difficult that it is still an unsolved problem with an extensive body of ongoing research.

And love, of course, is the subconscious system that has been most directly optimized by natural selection. Our very survival has depended upon it for millions of years. Indeed, it’s amazing how often it does seem to fail given those tight optimization constraints; I think this is for two reasons. First, natural selection optimizes for inclusive fitness, which is not the same thing as optimizing for happiness—what’s good for your genes may not be good for you per se. Many of the ways that love hurts us seem to be based around behaviors that probably did on average spread more genes on the African savannah. Second, the task of selecting an optimal partner is so mind-bogglingly complex that even the most powerful computational system in the known universe still can only do it so well. Imagine trying to construct a formal decision model that would tell you whom you should marry—all the variables you’d need to consider, the cost of sampling each of those variables sufficiently, the proper weightings on all the different terms in the utility function. Perhaps the wonder is that love is as rational as it is.

Indeed, love is evidence-based—and when it isn’t, this is cause for concern. The evidence is most often presented in small ways over long periods of time—a glance, a kiss, a gift, a meeting canceled to stay home and comfort you. Some ways are larger—a career move postponed to keep the family together, a beautiful wedding, a new house. We aren’t formally calculating the Bayesian probability at each new piece of evidence—though our subconscious brains might be, and whatever they’re doing the results aren’t far off from that mathematical optimum.
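We aren’t calculating it consciously, but the arithmetic is simple enough. Here is a toy sketch in Python using the odds form of Bayes’ rule, with likelihood ratios I have invented purely for illustration:

```python
# Toy Bayesian updating on evidence of love -- all numbers invented.
# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio,
# where each ratio is P(observation | loves you) / P(observation | doesn't).

odds = 1.0  # start agnostic: 1:1 odds, i.e. probability 0.5

evidence = {  # hypothetical likelihood ratios for the small signs above
    "a glance": 1.2,
    "a kiss": 3.0,
    "a gift": 2.0,
    "a meeting canceled to comfort you": 5.0,
}

for observation, likelihood_ratio in evidence.items():
    odds *= likelihood_ratio
    print(f"after {observation}: P(loves you) = {odds / (1 + odds):.3f}")
# Each small sign nudges the posterior upward; years of them make it near-certain.
```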

The notion that you will never “truly know” if others love you is no more epistemically valid or interesting than the notion that you will never “truly know” if your shirt is grue instead of green or if you are a brain in a vat. Perhaps we’ve been wrong about gravity all these years, and on April 27, 2016 it will suddenly reverse direction! No, it won’t, and I’m prepared to literally bet the whole world on that (frankly I’m not sure I have a choice). To be fair, the proposition that your spouse of twenty years or your mother loves you is perhaps not that certain—but it’s pretty darn certain. Perhaps the proper comparison is the level of certainty that climate change is caused by human beings, or even less, the level of certainty that your car will not suddenly veer off the road and kill you. The latter is something that actually happens—but we all drive every day assuming it won’t. By the time you marry someone, you can and should be that certain that they love you.

Love without evidence is bad love. The sort of unrequited love that builds in secret based upon fleeting glimpses, hours of obsessive fantasy, and little or no interaction with its subject isn’t romantic—it’s creepy and psychologically unhealthy. The extreme of that sort of love is what drove John Hinckley Jr. to shoot Ronald Reagan in order to impress Jodie Foster.

I don’t mean to make you feel guilty if you have experienced such a love—most of us have at one point or another—but it disgusts me how much our society tries to elevate that sort of love as the “true love” to which we should all aspire. We encourage people—particularly teenagers—to conceal their feelings for a long time and then release them in one grand surprise gesture of affection, which is just about the opposite of what you should actually be doing. (Look at Love Actually, which is just about the opposite of what its title says.) I think a great deal of strife in our society would be eliminated if we taught our children how to build relationships gradually over time instead of constantly presenting them with absurd caricatures of love that no one can—or should—follow.

I am pleased to see that our cultural norms on that point seem to be changing. A corporation as absurdly powerful as Disney is both an influence upon and a barometer of our social norms, and the trope in the most recent Disney films (like Frozen and Maleficent) is that true love is not the fiery passion of love at first sight, but the deep bond between family members that builds over time. This is a much healthier concept of love, though I wouldn’t exclude romantic love entirely. Romantic love can be true love, but only by building over time through a similar process.

Perhaps there is another reason people are uncomfortable with the idea that love is rational; by definition, rational behaviors respond to incentives. And since we tend to conceive of incentives as a purely selfish endeavor, this would seem to imply that love is selfish, which seems somewhere between painfully cynical and outright oxymoronic.

But while love certainly does carry many benefits for its users—being in love will literally make you live longer, by quite a lot, an effect size comparable to quitting smoking or exercising twice a week—it also carries many benefits for its recipients as well. Love is in fact the primary means by which evolution has shaped us toward altruism; it is the love for our family and our tribe that makes us willing to sacrifice so much for them. Not all incentives are selfish; indeed, an incentive is really just something that motivates you to action. If you could truly convince me that a given action I took would have even a reasonable chance of ending world hunger, I would do almost anything to achieve it; I can scarcely imagine a greater incentive, even though I would be harmed and the benefits would accrue to people I have never met.

Love evolved because it advanced the fitness of our genes, of course. And this bothers many people; it seems to make our altruism ultimately just a different form of selfishness, I guess—selfishness for our genes instead of ourselves. But this is a genetic fallacy, isn’t it? Yes, evolution by natural selection is a violent process, full of death and cruelty and suffering (as Tennyson wrote, “red in tooth and claw”); but that doesn’t mean that its outcome—namely ourselves—is so irredeemable. We are, in fact, altruistic, regardless of where that altruism came from. The fact that it advanced our genes can actually be comforting in a way, because it reminds us that the universe is nonzero-sum and benefiting others does not have to mean harming ourselves.

One question I like to ask when people suggest that some scientific fact undermines our moral status in this way is: “Well, what would you prefer?” If the causal determinism of neural synapses undermines our free will, then what should we have been made of? Magical fairy dust? If we were, fairy dust would be a real phenomenon, and it would obey laws of nature, and you’d just say that the causal determinism of magical fairy dust undermines free will all over again. If the fact that our altruistic emotions evolved by natural selection to advance our inclusive fitness makes us not truly altruistic, then where should altruism have come from? A divine creator who made us to love one another? But then we’re just following our programming! You can always make this sort of argument, which either means that life is necessarily empty of meaning, that no possible universe could ever assuage our ennui—or, what I believe, that life’s meaning does not come from such ultimate causes. It is not what you are made of or where you come from that defines what you are. We are best defined by what we do.

It seems to depend how you look at it: Romantics are made of stardust and the fabric of the cosmos, while cynics are made of the nuclear waste expelled in the planet-destroying explosions of dying balls of fire. Romantics are the cousins of all living things in one grand family, while cynics are apex predators evolved from millions of years of rape and murder. Both of these views are in some sense correct—but I think the real mistake is in thinking that they are incompatible. Human beings are both those things, and more; we are capable of both great compassion and great cruelty—and also great indifference. It is a mistake to think that only the dark sides—or for that matter only the light sides—of us are truly real.

Love is rational; love responds to incentives; love is an evolutionary adaptation. Love binds us together; love makes us better; love leads us to sacrifice for one another.

Love is, above all, what makes us not infinite identical psychopaths.

Prospect Theory: Why we buy insurance and lottery tickets

JDN 2457061 PST 14:18.

Today’s topic is called prospect theory. Prospect theory is basically what put cognitive economics on the map; it was the knock-down argument that Kahneman used to show that human beings are not completely rational in their economic decisions. It all goes back to a 1979 paper by Kahneman and Tversky that now has 34,000 citations (yes, we’ve been having this argument for a rather long time now). In the 1990s it was refined into cumulative prospect theory, which is more mathematically precise but basically the same idea.

What was that argument? People buy both insurance and lottery tickets.

The “both” is very important. Buying insurance can definitely be rational—indeed, typically is. Buying lottery tickets could theoretically be rational, under very particular circumstances. But they cannot both be rational at the same time.

To see why, let’s talk some more about marginal utility of wealth. Recall that a dollar is not worth the same to everyone; to a billionaire a dollar is a rounding error, to most of us it is a bottle of Coke, but to a starving child in Ghana it could be life itself. We typically observe diminishing marginal utility of wealth—the more money you have, the less another dollar is worth to you.

If we sketch a graph of your utility versus wealth it would look something like this:

[Figure: utility as a function of wealth, increasing at a diminishing rate]

Notice how it increases as your wealth increases, but at a rapidly diminishing rate.

If you have diminishing marginal utility of wealth, you are what we call risk-averse. If you are risk-averse, you’ll (sometimes) want to buy insurance. Let’s suppose the units on that graph are tens of thousands of dollars. Suppose you currently have an income of $50,000. You are offered the chance to pay $10,000 a year to buy unemployment insurance, so that if you lose your job, instead of making $10,000 on welfare you’ll make $30,000 on unemployment. You think you have about a 20% chance of losing your job.

If you had constant marginal utility of wealth, this would not be a good deal for you. Your expected value of money would be reduced if you buy the insurance: Before you had an 80% chance of $50,000 and a 20% chance of $10,000 so your expected amount of money is $42,000. With the insurance you have an 80% chance of $40,000 and a 20% chance of $30,000 so your expected amount of money is $38,000. Why would you take such a deal? That’s like giving up $4,000 isn’t it?

Well, let’s look back at that utility graph. At $50,000 your utility is 1.80, uh… units, er… let’s say QALY. 1.80 QALY per year, meaning you live 80% better than the average human. Maybe, I guess? Doesn’t seem too far off. In any case, the units of measurement aren’t that important.

[Figure: the insurance options plotted on the utility curve]

By buying insurance your effective income goes down to $40,000 per year, which lowers your utility to 1.70 QALY. That’s a fairly significant hit, but it’s not unbearable. If you lose your job (20% chance), you’ll fall down to $30,000 and have a utility of 1.55 QALY. Again, noticeable, but bearable. Your overall expected utility with insurance is therefore 1.67 QALY.

But what if you don’t buy insurance? Well then you have a 20% chance of taking a big hit and falling all the way down to $10,000 where your utility is only 1.00 QALY. Your expected utility is therefore only 1.64 QALY. You’re better off going with the insurance.
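For the curious, the utility curve in these graphs is (as best I can reconstruct it) u = 1 + ln(x)/2, where x is income in tens of thousands of dollars; it reproduces every figure quoted above. Here’s a quick Python check of the insurance comparison:

```python
from math import log

def utility(income):
    """QALY per year; u = 1 + ln(x)/2 where x is income in tens of
    thousands of dollars. Inferred: it reproduces the figures in the text."""
    return 1 + log(income / 10_000) / 2

p_fired = 0.20

eu_without = (1 - p_fired) * utility(50_000) + p_fired * utility(10_000)
eu_with = (1 - p_fired) * utility(40_000) + p_fired * utility(30_000)

print(f"no insurance: {eu_without:.2f} QALY")  # ~1.64
print(f"insurance:    {eu_with:.2f} QALY")     # ~1.66 (1.67 from the rounded figures)
# You give up $4,000 in expected money, yet come out ahead in expected utility.
```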

And this is how insurance companies make a profit (well, the legitimate way anyway; they also like to gouge people and deny cancer patients coverage, of course); on average, they make more from each customer than they pay out, but customers are still better off because they are protected against big losses. In this case, the insurance company profits $4,000 per customer per year, customers each get 30 milliQALY per year (about the same utility as an extra $2,000, more or less), and everyone is happy.

But if this is your marginal utility of wealth—and it most likely is, approximately—then you would never want to buy a lottery ticket. Let’s suppose you actually have pretty good odds; it’s a 1 in 1 million chance of $1 million for a ticket that costs $2. This means that the state is going to take in about $2 million for every $1 million they pay out to a winner.

That’s about as good as your odds for a lottery are ever going to get; usually it’s more like a 1 in 400 million chance of $150 million for $1, which is an even bigger difference than it sounds, because $150 million is nowhere near 150 times as good as $1 million. It’s a bit better from the state’s perspective though, because they get to receive $400 million for every $150 million they pay out.

For your convenience I have zoomed out the graph so that you can see 100, which is an income of $1 million (which you’ll have this year if you win; to get it next year, you’ll have to play again). You’ll notice I did not have to zoom out the vertical axis, because 20 times as much money only ends up being about 2 times as much utility. I’ve marked with lines the utility of $50,000 (1.80, as we said before) versus $1 million (3.30).

[Figure: the utility curve zoomed out to $1 million, with the utilities of $50,000 and $1 million marked]

What about the utility of $49,998 which is what you’ll have if you buy the ticket and lose? At this number of decimal places you can’t see the difference, so I’ll need to go out a few more. At $50,000 you have 1.80472 QALY. At $49,998 you have 1.80470 QALY. That $2 only costs you 0.00002 QALY, 20 microQALY. Not much, really; but of course not, it’s only $2.

How much does the 1 in 1 million chance of $1 million give you? Even less than that. Remember, the utility gain for going from $50,000 to $1 million is only 1.50 QALY. So you’re adding one one-millionth of that in expected utility, which is of course 1.5 microQALY, or 0.0000015 QALY.

That $2 may not seem like it’s worth much, but that 1 in 1 million chance of $1 million is worth less than one tenth as much. Again, I’ve tried to make these figures fairly realistic; they are by no means exact (I don’t actually think $49,998 corresponds to exactly 1.804699 QALY), but the order of magnitude difference is right. You gain about ten times as much utility from spending that $2 on something you want than you do on taking the chance at $1 million.
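Using that same reconstructed curve, the lottery comparison takes only a few lines:

```python
from math import log

def utility(income):  # the same reconstructed curve: 1 + ln(income/$10k)/2
    return 1 + log(income / 10_000) / 2

ticket_cost = utility(50_000) - utility(49_998)
expected_win = (utility(1_000_000) - utility(50_000)) / 1_000_000

print(f"cost of the $2 ticket: {ticket_cost * 1e6:.0f} microQALY")   # ~20
print(f"expected gain:         {expected_win * 1e6:.1f} microQALY")  # ~1.5
# The ticket costs over ten times what the chance at the jackpot is worth.
```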

I said before that it is theoretically possible for you to have a utility function for which the lottery would be rational. For that you’d need to have increasing marginal utility of wealth, so that you could be what we call risk-seeking. Your utility function would have to look like this:

[Figure: an increasing-marginal-utility curve, bending upward]

There’s no way marginal utility of wealth looks like that. This would be saying that it would hurt Bill Gates more to lose $1 than it would hurt a starving child in Ghana, which makes no sense at all. (It certainly would make you wonder why he’s so willing to give it to them.) So frankly, even if we didn’t buy insurance, the fact that we buy lottery tickets would already look pretty irrational.

But in order for it to be rational to buy both lottery tickets and insurance, our utility function would have to be totally nonsensical. Maybe it could look like this or something; marginal utility decreases normally for a while, and then suddenly starts going upward again for no apparent reason:

[Figure: a utility curve whose marginal utility first diminishes, then inexplicably rises]

Clearly it does not actually look like that. Not only would this mean that Bill Gates is hurt more by losing $1 than the child in Ghana, we have this bizarre situation where the middle class are the people who have the lowest marginal utility of wealth in the world. Both the rich and the poor would need to have higher marginal utility of wealth than we do. This would mean that apparently yachts are just amazing and we have no idea. Riding a yacht is the pinnacle of human experience, a transcendence beyond our wildest imaginings; and riding a slightly bigger yacht is even more amazing and transcendent. Love and the joy of a life well-lived pale in comparison to the ecstasy of adding just one more layer of gold plate to your Ferrari collection.

Whereas increasing marginal utility was merely ridiculous, this is outright special pleading. You’re just making up bizarre utility functions that perfectly line up with whatever behavior people happen to have so that you can still call it rational. It’s like saying, “It could be perfectly rational! Maybe he enjoys banging his head against the wall!”

Kahneman and Tversky had a better idea. They realized that human beings aren’t so great at assessing probability, and furthermore tend not to think in terms of total amounts of wealth or annual income at all, but in terms of losses and gains. Through a series of clever experiments they showed that we are not so much risk-averse as we are loss-averse; we are actually willing to take more risk if it means that we will be able to avoid a loss.

In effect, we seem to be acting as if our utility function looks like this, where the zero no longer means “zero income”, it means “whatever we have right now”:

[Figure: the prospect-theory value function, kinked at the reference point and steeper for losses]

We tend to weight losses about twice as much as gains, and we tend to assume that losses also diminish in their marginal effect the same way that gains do. That is, we would only take a 50% chance to lose $1000 if it meant a 50% chance to gain $2000; but we’d take a 10% chance at losing $10,000 to save ourselves from a guaranteed loss of $1000.
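Here’s a little Python sketch of such a value function. I’m using square-root curvature and a loss weight of exactly 2 to keep the numbers round; Kahneman and Tversky’s fitted parameters are closer to 0.88 exponents with a 2.25 loss weight:

```python
from math import sqrt

def value(change):
    """Illustrative prospect-theory value function: diminishing sensitivity
    in both directions, with losses weighted twice as heavily as gains."""
    return sqrt(change) if change >= 0 else -2 * sqrt(-change)

# Losses loom about twice as large as equal gains:
print(value(1000), value(-1000))    # 31.6 vs -63.2

# Diminishing sensitivity makes us risk-seeking over losses:
# a sure loss of $1,000 feels worse than a 10% chance of losing $10,000.
print(value(-1000))          # -63.2
print(0.1 * value(-10_000))  # -20.0, so we take the gamble
```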

This can explain why we buy insurance, provided that you frame it correctly. One of the things about prospect theory—and about human behavior in general—is that it exhibits framing effects: The answer we give depends upon the way you ask the question. That’s so totally obviously irrational it’s honestly hard to believe that we do it; but we do, and sometimes in really important situations. Doctors—doctors—will decide a moral dilemma differently based on whether you describe it as “saving 400 out of 600 patients” or “letting 200 out of 600 patients die”.

In this case, you need to frame insurance as the default option, and not buying insurance as an extra risk you are taking. Then saving money by not buying insurance is a gain, and therefore less important, while a higher risk of a bad outcome is a loss, and therefore important.

If you frame it the other way, with not buying insurance as the default option, then buying insurance is taking a loss by making insurance payments, only to get a gain if the insurance pays out. Suddenly the exact same insurance policy looks less attractive. This is a big part of why Obamacare has been effective but unpopular. It was set up as a fine—a loss—if you don’t buy insurance, rather than as a bonus—a gain—if you do buy insurance. The latter would be more expensive, but we could just make it up by taxing something else; and it might have made Obamacare more popular, because people would see the government as giving them something instead of taking something away. But the fine does a better job of framing insurance as the default option, so it motivates more people to actually buy insurance.

But even that would still not be enough to explain how it is rational to buy lottery tickets (Have I mentioned how it’s really not a good idea to buy lottery tickets?), because buying a ticket is a loss and winning the lottery is a gain. You actually have to get people to somehow frame not winning the lottery as a loss, making winning the default option despite the fact that it is absurdly unlikely. But I have definitely heard people say things like this: “Well if my numbers come up and I didn’t play that week, how would I feel then?” Pretty bad, I’ll grant you. But how much you wanna bet that never happens? (They’ll bet… the price of the ticket, apparently.)

In order for that to work, people either need to dramatically overestimate the probability of winning, or else ignore it entirely. Both of those things totally happen.

First, we overestimate the probability of rare events and underestimate the probability of common events—this is actually the part that makes it cumulative prospect theory instead of just regular prospect theory. If you make a graph of perceived probability versus actual probability, it looks like this:

[Figure: perceived probability vs. actual probability under cumulative prospect theory]

We don’t make much distinction between 40% and 60%, even though that’s actually pretty big; but we make a huge distinction between 0% and 0.00001% even though that’s actually really tiny. I think we basically have categories in our heads: “Never, almost never, rarely, sometimes, often, usually, almost always, always.” Moving from 0% to 0.00001% is going from “never” to “almost never”, but going from 40% to 60% is still in “often”. (And that for some reason reminded me of “Well, hardly ever!”)
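The standard functional form for that curve is Tversky and Kahneman’s 1992 weighting function, which looks like this in Python (their fitted value of gamma for gains is about 0.61):

```python
def weight(p, gamma=0.61):
    """Tversky & Kahneman (1992) probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.00001, 0.4, 0.6):
    print(f"actual {p:.5f} -> perceived {weight(p):.5f}")
# A 1-in-100,000 chance is inflated nearly a hundredfold (to ~0.0009),
# while the gap between 0.4 and 0.6 shrinks to about half its true size.
```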

But that’s not even the worst of it. After all that work to explain how we can make sense of people’s behavior in terms of something like a utility function (albeit a distorted one), I think there’s often a simpler explanation still: Regret aversion under total neglect of probability.

Neglect of probability is self-explanatory: You totally ignore the probability. But what’s regret aversion, exactly? Unfortunately I’ve had trouble finding any good popular sources on the topic; it’s all scholarly stuff. (Maybe I’m more cutting-edge than I thought!)

The basic idea is that you minimize regret, where regret can be formalized as the difference in utility between the outcome you got and the best outcome you could have gotten. In effect, it doesn’t matter whether something is likely or unlikely; you only care how bad it is.

This explains insurance and lottery tickets in one fell swoop: With insurance, you risk a big loss (big regret) which you can avoid by paying a small amount (small regret). You take the small regret, and buy insurance. With lottery tickets, you have the chance of a large gain (big regret if you miss it), which you can buy into for a small amount (small regret). You take the small regret, and buy the ticket.
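Minimizing worst-case regret, with probability ignored entirely, picks both the insurance and the ticket. Here’s a sketch reusing the dollar figures from the earlier examples:

```python
# Minimax regret with probability ignored entirely -- a sketch using the
# dollar figures from the earlier examples.

def max_regret(outcomes, table):
    """Worst-case regret: in each state, how far short of the best action?"""
    return max(max(a[state] for a in table.values()) - payoff
               for state, payoff in outcomes.items())

insurance = {
    "buy":       {"keep job": 40_000, "lose job": 30_000},
    "don't buy": {"keep job": 50_000, "lose job": 10_000},
}
for action, outs in insurance.items():
    print(action, max_regret(outs, insurance))
# buy: 10,000 (kept your job, paid for nothing); don't buy: 20,000 -> buy

lottery = {
    "buy ticket": {"numbers hit": 999_998, "numbers miss": -2},
    "skip":       {"numbers hit": 0,       "numbers miss": 0},
}
for action, outs in lottery.items():
    print(action, max_regret(outs, lottery))
# buy ticket: 2; skip: 999,998 -> buy the ticket, probability be damned
```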

This can also explain why a typical American’s fears go in the order terrorists > Ebola > sharks >> cars > cheeseburgers, while the actual risk of dying goes in almost the opposite order, cheeseburgers > cars >> terrorists > sharks > Ebola. (Terrorists are scarier than sharks and Ebola and actually do kill more Americans! Yay, we got something right! Other than that it is literally reversed.)

Dying from a terrorist attack would be horrible; in addition to your own death you have all the other likely deaths and injuries, and the sheer horror and evil of the terrorist attack itself. Dying from Ebola would be almost as bad, with gruesome and agonizing symptoms. Dying of a shark attack would be still pretty awful, as you get dismembered alive. But dying in a car accident isn’t so bad; it’s usually over pretty quick and the event seems tragic but ordinary. And dying of heart disease and diabetes from your cheeseburger overdose will happen slowly over many years, you’ll barely even notice it coming and probably die rapidly from a heart attack or comfortably in your sleep. (Wasn’t that a pleasant paragraph? But there’s really no other way to make the point.)

If we try to estimate the probability at all—and I don’t think most people even bother—it isn’t by rigorous scientific research; it’s usually by availability heuristic: How many examples can you think of in which that event happened? If you can think of a lot, you assume that it happens a lot.

And that might even be reasonable, if we still lived in hunter-gatherer tribes or small farming villages and the 150 or so people you knew were the only people you ever heard about. But now that we have live TV and the Internet, news can get to us from all around the world, and the news isn’t trying to give us an accurate assessment of risk, it’s trying to get our attention by talking about the biggest, scariest, most exciting things that are happening around the world. The amount of news attention an item receives is in fact in inverse proportion to the probability of its occurrence, because things are more exciting if they are rare and unusual. Which means that if we are estimating how likely something is based on how many times we heard about it on the news, our estimates are going to be almost exactly reversed from reality. Ironically it is the very fact that we have more information that makes our estimates less accurate, because of the way that information is presented.

It would be a pretty boring news channel that spent all day saying things like this: “82 people died in car accidents today, and 1657 people had fatal heart attacks, 11.8 million had migraines, and 127 million played the lottery and lost; in world news, 214 countries did not go to war, and 6,147 children starved to death in Africa…” This would, however, be vastly more informative.

In the meantime, here are a couple of counter-heuristics I recommend to you: Don’t think about losses and gains, think about where you are and where you might be. Don’t say, “I’ll gain $1,000”; say “I’ll raise my income this year to $41,000.” Definitely do not think in terms of the percentage price of things; think in terms of absolute amounts of money. Cheap expensive things, expensive cheap things is a motto of mine; go ahead and buy the $5 toothbrush instead of the $1, because that’s only $4. But be very hesitant to buy the $22,000 car instead of the $21,000, because that’s $1,000. If you need to estimate the probability of something, actually look it up; don’t try to guess based on what it feels like the probability should be. Make this unprecedented access to information work for you instead of against you. If you want to know how many people die in car accidents each year, you can literally ask Google and it will tell you that (I tried it—it’s 1.3 million worldwide). The fatality rate of a given disease versus the risk of its vaccine, the safety rating of a particular brand of car, the number of airplane crash deaths last month, the total number of terrorist attacks, the probability of becoming a university professor, the average functional lifespan of a new television—all these things and more await you at the click of a button. Even if you think you’re pretty sure, why not look it up anyway?

Perhaps then we can make prospect theory wrong by making ourselves more rational.

The winner-takes-all effect

JDN 2457054 PST 14:06.

As I write there is some sort of mariachi band playing on my front lawn. It is actually rather odd that I have a front lawn, since my apartment is set back from the road; yet there is the patch of grass, and there is the band playing upon it. This sort of thing is part of the excitement of living in a large city (and Long Beach would seem like a large city were it not right next to the sprawling immensity that is Los Angeles—there are more people in Long Beach than in Cleveland, but there are more people in greater Los Angeles than in Sweden); with a certain critical mass of human beings comes unexpected pieces of culture.

The fact that people agglomerate in this way is actually relevant to today’s topic, which is what I will call the winner-takes-all effect. I actually just finished reading a book called The Winner-Take-All Society, which is particularly horrifying to read because it came out in 1996. That’s almost twenty years ago, and things were already bad; and since then everything it describes has only gotten worse.

What is the winner-takes-all effect? It is the simple fact that in competitive capitalist markets, a small difference in quality can yield an enormous difference in return. The third most popular soda drink company probably still makes drinks that are pretty good, but do you have any idea what it is? There’s Coke, there’s Pepsi, and then there’s… uh… Dr. Pepper, apparently! But I didn’t know that before today and I bet you didn’t either. Now think about what it must be like to be the 15th most popular soda drink company, or the 37th. That’s the winner-takes-all effect.

I don’t generally follow football, but since tomorrow is the Super Bowl I feel some obligation to use that example as well. The highest-paid quarterback is Russell Wilson of the Seattle Seahawks, who is signing onto a five-year contract worth $110 million ($22 million a year). In annual income that will put him past Jay Cutler of the Chicago Bears, who has a seven-year contract worth $127 million ($18.5 million a year). This shift may have something to do with the fact that the Seahawks are in the Super Bowl this year and the Bears are not (they haven’t been since 2007). Now consider what life is like for most football players; the median income of football players is most likely zero (at least as far as football-related income), and the median income of NFL players—the cream of the crop already—is $770,000; that’s still very good money of course (more than Krugman makes, actually! But he could make more, if he were willing to sell out to Wall Street), but it’s barely 1/30 of what Wilson is going to be making. To make that million-dollar salary, you need to be the best, of the best, of the best (sir!). That’s the winner-takes-all effect.

To go back to the example of cities, it is for similar reasons that the largest cities (New York, Los Angeles, London, Tokyo, Shanghai, Hong Kong, Delhi) become packed with tens of millions of people while others (Long Beach, Ann Arbor, Cleveland) get hundreds of thousands and most (Petoskey, Ketchikan, Heber City, and hundreds of others you’ve never heard of) get only a few thousand. Beyond that there are thousands of tiny little hamlets that many don’t even consider cities. The median city probably has about 10,000 people in it, and that only because we’d stop calling it a city if it fell below 1,000. If we include every tiny little village, the median town size is probably about 20 people. Meanwhile the largest city in the world is Tokyo, with a greater metropolitan area that holds almost 38 million people—or to put it another way almost exactly as many people as California. Huh, LA doesn’t seem so big now does it? How big is a typical town? Well, that’s the thing about this sort of power-law distribution; the concept of “typical” or “average” doesn’t really apply anymore. Each little piece of the distribution has basically the same shape as the whole distribution, so there isn’t a “typical” size or scale. That’s the winner-takes-all effect.
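You can see why “typical” breaks down by sampling from a power law yourself. Here’s a Python sketch; the tail exponent and minimum size are made up, chosen only to mimic the pattern:

```python
import random

random.seed(2457054)
alpha, x_min = 1.1, 20   # assumed tail exponent and smallest "village"

# Inverse-transform sampling from a Pareto (power-law) distribution
sizes = sorted((x_min * (1 - random.random()) ** (-1 / alpha)
                for _ in range(100_000)), reverse=True)

print(f"largest: {sizes[0]:,.0f}")                 # a freakish outlier
print(f"mean:    {sum(sizes) / len(sizes):,.0f}")  # dragged up by the giants
print(f"median:  {sizes[len(sizes) // 2]:,.0f}")   # a village of a few dozen
# The mean vastly exceeds the median, and any slice of the distribution
# looks like a rescaled copy of the whole -- there is no "typical" size.
```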

As they freely admit in the book, it isn’t literally that a single winner takes everything. That is the theoretical maximum level of wealth inequality, and fortunately no society has ever quite reached it. The closest we get in today’s society is probably Saudi Arabia, which recently lost its king—and yes I do mean king in the fullest sense of the word, a man of virtually unlimited riches and near-absolute power. His net wealth was estimated at $18 billion, which frankly sounds low; still, even if that’s half the true amount, it’s oddly comforting to know that he was still not quite as rich as Bill Gates ($78 billion), who earned his wealth at least semi-legitimately in a basically free society. Say what you will about intellectual property rents and market manipulation—and you know I do—but they are worlds away from what Abdullah’s family did, which was literally and directly to rob millions of people by the power of the sword. Mostly he just inherited all that, and he did implement some minor reforms, but make no mistake: He was ruthless and by no means willing to give up his absolute power—he beheaded dozens of political dissidents, for example. Saudi Arabia does spread its wealth around a little, such that basically no one is below the UN poverty lines of $1.25 and $2 per day, but about a fourth of the population is below the national poverty line—which is just about the same distribution of wealth as what we have in the US, which actually makes me wonder just how free and legitimate our markets really are.

The winner-takes-all effect would really be more accurately described as the “top small fraction takes the vast majority” effect, but that isn’t nearly as catchy, now is it?

There are several different causes that can all lead to this same result. In the book, Robert Frank and Philip Cook argue that we should not attribute the cause to market manipulation, but in fact to the natural functioning of competitive markets. There’s something to be said for this—I used to buy the whole idea that competitive markets are the best, but increasingly I’ve been seeing ways that less competitive markets can make better overall outcomes.

Where they lose me is in arguing that the skyrocketing compensation packages for CEOs are due to their superior performance, and corporations are just being rational in competing for the best CEOs. If that were true, we wouldn’t find that the rank correlation between the CEO’s pay and the company’s stock performance is statistically indistinguishable from zero. Actually even a small positive correlation wouldn’t prove that the CEOs are actually performing well; it could just be that companies that perform well are willing to pay their CEOs more—and stock option compensation will do this automatically. But in fact the correlation is so tiny as to be negligible; corporations would be better off hiring a random person off the street and paying them $50,000 for all the CEO does for their stock performance. If you adjust for the size of the company, you find that having a higher-paid CEO is positively related to performance for small startups, but negatively correlated for large well-established corporations. No, clearly there’s something going on here besides competitive pay for high performance—corruption comes to mind, which you’ll remember was the subject of my master’s thesis.
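If you want to see what “statistically indistinguishable from zero” looks like, here’s a sketch of the test itself; the data are synthetic and independent by construction, standing in for what the studies find with real pay and performance figures:

```python
import random
from scipy.stats import spearmanr

random.seed(0)
n = 500
ceo_pay = [random.lognormvariate(16, 1) for _ in range(n)]    # made-up pay
stock_return = [random.gauss(0.07, 0.20) for _ in range(n)]   # made-up returns

rho, p_value = spearmanr(ceo_pay, stock_return)
print(f"rank correlation: {rho:.3f} (p = {p_value:.2f})")
# Independent by construction, so rho hovers near zero -- the same verdict
# the cited research reaches for actual CEO pay and stock performance.
```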

But in some cases there isn’t any apparent corruption, and yet we still see these enormously unequal distributions of income. Another good example of this is the publishing industry, in which J.K. Rowling can make over $1 billion (she donated enough to charity to officially lose her billionaire status) but most authors make little or nothing, particularly those who can’t get published in the first place. I have no reason to believe that J.K. Rowling acquired this massive wealth by corruption; she just sold an awful lot of books—over 100 million of the first Harry Potter book alone.

But why would she be able to sell 100 million copies while thousands of authors who write books that are probably just as good, or nearly so, make nothing? Am I just bitter and envious, as Mitt Romney would say? Is J.K. Rowling actually a million times as good an author as I am?

Obviously not, right? She may be better, but she’s not that much better. So how is it that she ends up making a million times as much as I do from writing? It feels like squaring the circle: How can markets be efficient and competitive, yet some people are being paid millions of times as much as others despite being only slightly more productive?

The answer is simple but enormously powerful: positive feedback. Once you start doing well, it’s easier to do better. You have what economists call an economy of scale. The first 10,000 books sold are the hardest; then the next 10,000 are a little easier; the next 10,000 a little easier still. In fact I suspect that in many cases the first 10% of growth is harder than the second 10% and so on—which is actually a much stronger claim. For my sales to grow 10% I’d need to add about 20 people. For J.K. Rowling’s sales to grow 10% she’d need to add 10 million. Yet it might actually be easier for J.K. Rowling to add 10 million than for me to add 20. If not, it isn’t much harder. Suppose we tried by just sending out enticing tweets. I have about 100 Twitter followers, so I’d need 0.2 sales per follower; she has about 4 million, so she’d need an average of 2.5 sales per follower. That’s an advantage for me, percentage-wise—but if we have the same uptake rate, I sell 20 books and she sells 800,000.

If you have only a handful of book sales like I do, those sales are static; but once you cross that line into millions of sales, it’s easy for that to spread into tens or even hundreds of millions. In the particular case of books, this is because it spreads by word-of-mouth; say each person who reads a book recommends it to 10 friends, and you only read a book if at least 2 of your friends recommended it. In a city of 100,000 people, if you start with 50 people reading it, odds are that most of those people don’t have friends that overlap, and so you stop at 50. But if you start at 50,000, there is bound to be a great deal of overlap; so that 50,000 recruits another 10,000, then another 10,000, and pretty soon the whole 100,000 have read it. In this case we have what are called network externalities—you’re more likely to read a book if your friends have read it, so the more people there are who have read it, the more people there are who want to read it. There’s a very similar effect at work in social networks; why does everyone still use Facebook, even though it’s actually pretty awful? Because everyone uses Facebook. Less important than the quality of the software platform (Google Plus is better, and there are some third-party networks that are likely better still) is the fact that all your friends and family are on it. We all use Facebook because we all use Facebook? We all read Harry Potter books because we all read Harry Potter books? The first rule of tautology club is…
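That word-of-mouth threshold model is easy to simulate. Here’s a sketch in Python, with random mixing standing in for a real friendship network:

```python
import random

def spread(pop=100_000, seeds=50, fanout=10, threshold=2, rounds=20):
    """Each reader recommends the book to `fanout` random people; a person
    reads it once `threshold` friends have recommended it."""
    random.seed(42)
    recs = [0] * pop
    readers = set(random.sample(range(pop), seeds))
    new_readers = set(readers)
    for _ in range(rounds):
        for reader in new_readers:
            for friend in random.choices(range(pop), k=fanout):
                recs[friend] += 1
        new_readers = {p for p in range(pop)
                       if recs[p] >= threshold and p not in readers}
        if not new_readers:
            break
        readers |= new_readers
    return len(readers)

print(spread(seeds=50))      # stalls near 50: recommendations rarely overlap
print(spread(seeds=50_000))  # cascades to (nearly) the whole 100,000
```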

Languages are also like this, which is why I can write this post in English and yet people can still read it around the world. English is the winner of the language competition (we call it the lingua franca, as weird as that is—French is not the lingua franca anymore). The losers are those hundreds of New Guinean languages you’ve never heard of, many of which are dying. And their distribution obeys, once again, a power law. (The frequencies of individual words obey a power law as well, which makes this whole fractal business all the more delightful.)

Network externalities are not the only way that the winner-takes-all effect can occur, though I think it is the most common. You can also have economies of scale from the supply side, particularly in the case of information: Recording a song takes a lot of time and effort, but once you record a song, it’s trivial to make more copies of it. So that first recording costs a great deal, while every subsequent recording costs next to nothing. This is probably also at work in the case of J.K. Rowling and the NFL; the two phenomena are by no means mutually exclusive. But clearly the sizes of cities are due to network externalities: It’s quite expensive to live in a big city—no supply-side economy of scale—but you want to live in a city where other people live because that’s where friends and family and opportunities are.

The most worrisome kind of winner-takes-all effect is what Frank and Cook call deep pockets: Once you have concentration of wealth in a few hands, those few individuals can now choose their own winners in a much more literal sense: the rich can commission works of art from their favorite artists, exacerbating the inequality among artists; worse yet they can use their money to influence politicians (as the Kochs are planning on spending $900 million—$3 for every person in America—to do in 2016) and exacerbate the inequality in the whole system. That gives us even more positive feedback on top of all the other positive feedbacks.

Sure enough, if you run the standard neoclassical economic models of competition and just insert the assumption of economies of scale, the result is concentration of wealth—in fact, if nothing about the rules prevents it, the result is a complete monopoly. Nor is this result in any sense economically efficient; it’s just what naturally happens in the presence of economies of scale.
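You can watch that happen in a toy model: give each new customer a pull toward firms in proportion to slightly more than their current sales (increasing returns), and concentration follows. All parameters here are illustrative:

```python
import random

random.seed(1996)
firms = 10
sales = [1] * firms   # ten identical firms, equal starting sales

for _ in range(100_000):  # each new customer is drawn toward bigger firms
    weights = [s ** 1.5 for s in sales]   # superlinear pull = increasing returns
    winner = random.choices(range(firms), weights=weights)[0]
    sales[winner] += 1

total = sum(sales)
shares = sorted((s / total for s in sales), reverse=True)
print([f"{s:.1%}" for s in shares])
# One firm ends up with nearly the whole market, despite all ten being
# identical -- nothing about quality, just positive feedback compounding.
```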

Frank and Cook seem most concerned about the fact that these winner-take-all incomes will tend to push too many people to seek those careers, leaving millions of would-be artists, musicians and quarterbacks with dashed dreams when they might have been perfectly happy as electrical engineers or high school teachers. While this may be true—next week I’ll go into detail about prospect theory and why human beings are terrible at making judgments based on probability—it isn’t really what I’m most concerned about. For all the cost of frustrated ambition there is also a good deal of benefit; striving for greatness does not just make the world better if we succeed, it can make ourselves better even if we fail. I’d strongly encourage people to have backup plans; but I’m not going to tell people to stop painting, singing, writing, or playing football just because they’re unlikely to make a living at it. The one concern I do have is that the competition is so fierce that we are pressured to go all in, to not have backup plans, to use performance-enhancing drugs—they may carry awful risks, but they also work. And it’s probably true, actually, that you’re a bit more likely to make it all the way to the top if you don’t have a backup plan. You’re also vastly more likely to end up at the bottom. Is raising your probability of being a bestselling author from 0.00011% to 0.00012% worth giving up all other career options? Skipping chemistry class to practice football may improve your chances of being an NFL quarterback from 0.000013% to 0.000014%, but it will also drop your chances of being a chemical engineer from 95% (a degree in chemical engineering almost guarantees you a job eventually) to more like 5% (it’s hard to get a degree when you flunk all your classes).

Frank and Cook offer a solution that I think is basically right; they call it positional arms control agreements. By analogy with arms control agreements between nations—and what is war, if not the ultimate winner-takes-all contest?—they propose that we use taxation and regulation policy to provide incentives to make people compete less fiercely for the top positions. Some of these we already do: Performance-enhancing drugs are banned in professional sports, for instance. Even where there are no regulations, we can use social norms: That’s why it’s actually a good thing that your parents rarely support your decision to drop out of school and become a movie star.

That’s yet another reason why progressive taxation is a good idea, as if we needed another; by paring down those top incomes it makes the prospect of winning big less enticing. If NFL quarterbacks only made 10 times what chemical engineers make instead of 300 times, people would be a lot more hesitant to give up on chemical engineering to become a quarterback. If top Wall Street executives only made 50 times what normal people make instead of 5000, people with physics degrees might go back to actually being physicists instead of speculating on stock markets.

There is one case where we might not want fewer people to try, and that is entrepreneurship. Most startups fail, and only a handful go on to make mind-bogglingly huge amounts of money (often for no apparent reason, like the Snuggie and Flappy Bird), yet entrepreneurship is what drives the dynamism of a capitalist economy. We need people to start new businesses, and right now they do that mainly because of a tiny chance of a huge benefit. Yet we don’t want them to be too unrealistic in their expectations: Entrepreneurs are much more optimistic than the general population, but the most successful entrepreneurs are a bit less optimistic than other entrepreneurs. The most successful strategy is to be optimistic but realistic; this outperforms both unrealistic optimism and pessimism. That seems pretty intuitive; you have to be confident you’ll succeed, but you can’t be totally delusional. Yet it’s precisely the realistic optimists who are most likely to be disincentivized by a reduction in the top prizes.

Here’s my solution: Let’s change it from a tiny chance of a huge benefit to a large chance of a moderately large benefit. Let’s reward entrepreneurs for trying—with standards for what constitutes a really serious, good attempt rather than something frivolous that was guaranteed to fail. Use part of the funds from the progressive tax as a fund for angel grants, provided to a large number of the most promising entrepreneurs. It can’t be a million-dollar prize for the top 100. It needs to be more like a $50,000 prize for the top 100,000 (which would cost $5 billion a year, affordable for the US government). It should be paid at the proposal phase; the top 100,000 business plans receive the funding and are under no obligation to repay it. It has to be enough money that someone can rationally commit themselves to years of dedicated work without throwing themselves into poverty, and it has to be confirmed money so that they don’t have to worry about throwing themselves into debt. As for the upper limit, it only needs to be small enough that there is still an incentive for the business to succeed; but even with a 99% tax Mark Zuckerberg would still be a millionaire, so the rewards for success are high indeed.
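
The arithmetic is worth spelling out, along with the reason a restructured prize is more attractive even at the same expected dollar value: utility is concave in money. The log utility function and the baseline income in this sketch are standard illustrative assumptions, not estimates.

    from math import log

    # First, the cost check: 100,000 grants of $50,000 each.
    grant, winners = 50_000, 100_000
    print(f"${grant * winners:,} per year")   # $5,000,000,000: the $5 billion above

    # Second, the incentive comparison. Both prospects below are worth $25,000
    # in expected dollars, but with diminishing marginal utility (log utility
    # here, and a hypothetical $20,000 baseline income) the large chance of a
    # moderate prize is worth far more in expected utility.
    base = 20_000

    def expected_utility_gain(p, prize):
        return p * (log(base + prize) - log(base))

    print(expected_utility_gain(0.025, 1_000_000))   # ~0.10: lottery-style prize
    print(expected_utility_gain(0.50, 50_000))       # ~0.63: grant-style prize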

The good news is that we actually have such a system to some extent. For research scientists rather than entrepreneurs, NSF grants are pretty close to what I have in mind, but at present they are a bit too competitive: 8,000 research grants with a median of $130,000 each and a 20% acceptance rate doesn’t reach quite enough people—the acceptance rate should be higher, since most of these proposals are quite worthy. Still, it’s close, and definitely a much better incentive system than what we have for entrepreneurs; there are almost 12 million entrepreneurs in the United States, starting 6 million businesses a year, 75% of which fail before they can return their venture capital. Those that succeed have incomes higher than the general population, with a median income of around $70,000 per year, but most of this is accounted for by the fact that entrepreneurs are more educated and talented than the general population. Once you factor that in, successful entrepreneurs have about 50% more income on average, but their standard deviation of income is also 60% higher—so some are getting a lot and some are getting very little. Since 75% fail, we’re talking about a 25% chance of entering an income distribution that’s higher on average but much more variable, and a 75% chance of going through a period with little or no income at all—is it worth it? Maybe, maybe not. But if you could get a guaranteed $50,000 for having a good idea—and let me be clear, only serious proposals that have a good chance of success should qualify—that deal sounds an awful lot better.

Beware the false balance

JDN 2457046 PST 13:47.

I am now back in Long Beach, hence the return to Pacific Time. Today’s post is a little less economic than most, though it’s certainly still within the purview of social science and public policy. It concerns a question that many academic researchers and in general reasonable, thoughtful people have to deal with: How do we remain unbiased and nonpartisan?

This would not be so difficult if the world were as the most devoted “centrists” would have you believe, and it were actually the case that both sides have their good points and bad points, and both sides have their scandals, and both sides make mistakes or even lie, so you should never take the side of the Democrats or the Republicans but always present both views equally.

Sadly, this is not at all the world in which we live. While Democrats are far from perfect—they are human beings after all, not to mention politicians—Republicans have become completely detached from reality. As Stephen Colbert has said, “Reality has a liberal bias.” You know it’s bad when our detractors call us the reality-based community. Treating both sides as equal isn’t being unbiased—it’s committing a balance fallacy.

Don’t believe me? Here is a list of objective, scientific facts that the Republican Party (and particularly its craziest subset, the Tea Party) has officially taken political stances against:

  1. Global warming is a real problem, and largely caused by human activity. (The Republican majority in the Senate voted down a resolution acknowledging this.)
  2. Human beings share a common ancestor with chimpanzees. (48% of Republicans think that we were created in our present form.)
  3. Animals evolve over time due to natural selection. (Only 43% of Republicans believe this.)
  4. The Earth is approximately 4.5 billion years old. (Marco Rubio said he thinks maybe the Earth was made in seven days a few thousand years ago.)
  5. Hydraulic fracturing can trigger earthquakes. (Republicans in Congress are trying to nullify local regulations on fracking because they insist it is so safe we don’t even need to keep track.)
  6. Income inequality in the United States is the worst it has been in decades and continues to rise. (Mitt Romney said that the concern about income inequality is just “envy”.)
  7. Progressive taxation reduces inequality without adversely affecting economic growth. (Here’s a Republican former New York Senator saying that the President “should be ashamed” for raising taxes on—you guessed it—“job creators”.)
  8. Moderate increases in the minimum wage do not yield significant losses in employment. (Republicans consistently vote against even small increases in the minimum wage, and Democrats consistently vote in favor.)
  9. The United States government has no reason to ever default on its debt. (John Boehner, now Speaker of the House, once said that “America is broke” and if we don’t stop spending we’ll never be able to pay the national debt.)
  10. Human embryos are not in any way sentient, and fetuses are not sentient until at least 17 weeks of gestation, probably more like 30 weeks. (Yet if I am to read it in a way that would make moral sense, “Life begins at conception”—which several Republicans explicitly endorsed at the National Right to Life Convention—would have to imply that even zygotes are sentient beings. If you really just meant “alive”, then that would equally well apply to plants or even bacteria. Sentience is the morally relevant category.)

And that’s not even counting the Republican Party’s association with Christianity and all of the objectively wrong scientific claims that necessarily entails—like the existence of an afterlife and the intervention of supernatural forces. Most Democrats also self-identify as Christian, though rarely with quite the same fervor (the last major Democrat I can think of who was a devout Christian was Jimmy Carter), probably because most Americans self-identify as Christian and are hesitant to elect an atheist President (despite the fact that 93% of the National Academy of Sciences is composed of atheists and the higher your IQ the more likely you are to be an atheist; we wouldn’t want to elect someone who agrees with smart people, now would we?).

It’s true, there are some other crazy ideas out there with a left-wing slant, like the anti-vaccination movement that has wrought epidemic measles upon us, the anti-GMO crowd that rejects basic scientific facts about genetics, and the 9/11 “truth” movement that refuses to believe that Al Qaeda actually caused the attacks. There are in fact far-left Marxists out there who want to tear down the whole capitalist system by glorious revolution and replace it with… er… something (they’re never quite clear on that last point). But none of these things are the official positions of standing members of Congress.

The craziest belief by a standing Democrat I can think of is Dennis Kucinich’s belief that he saw an alien spacecraft. And to be perfectly honest, alien spacecraft are about a thousand times more plausible than Christianity in general, let alone Creationism. There almost certainly are alien spacecraft somewhere in the universe—just most likely so far away we’ll need FTL to encounter them. Moreover, this is not Kucinich’s official position as a member of Congress and it’s not something he has ever made policy based upon.

Indeed, if you’re willing to include the craziest individuals with no real political power who identify with a particular side of the political spectrum, then we should include on the right-wing side people like the Bundy militia in Nevada, neo-Nazis in Detroit, and the dozens of KKK chapters across the US. Not to mention this pastor who wants to murder all gay people in the world (because he truly believes what Leviticus 20:13 actually and clearly says).

If you get to include Marxists on the left, then we get to include Nazis on the right. Or, we could be reasonable and say that only the official positions of elected officials or mainstream pundits actually count, in which case Democrats have views that are basically accurate and reasonable while the majority of Republicans have views that are still completely objectively wrong.

There’s no balance here. For every Democrat who is wrong, there is a Republican who is totally delusional. For every Democrat who distorts the truth, there is a Republican who blatantly lies about basic facts. Not to mention that for every Democrat who has had an ill-advised illicit affair there is a Republican who has committed war crimes.

Actually war crimes are something a fair number of Democrats have done as well, but the difference still stands out in high relief: Barack Obama has ordered double-tap drone strikes that are in violation of the Geneva Convention, but George W. Bush orchestrated a worldwide mass torture campaign and launched pointless wars that slaughtered hundreds of thousands of people. Bill Clinton ordered some questionable CIA operations, but George H.W. Bush was the director of the CIA.

I wish we had two parties that were equally reasonable. I wish there were two—or three, or four—proposals on the table in each discussion, all of which had merits and flaws worth considering. Maybe if we somehow manage to get the Green Party a significant seat in power, or the Social Democrat party, we can actually achieve that goal. But that is not where we are right now. Right now, we have the Democrats, who have some good ideas and some bad ideas; and then we have the Republicans, who are completely out of their minds.

There is an important concept in political science called the Overton window; it is the range of political ideas that are considered “reasonable” or “mainstream” within a society. Things near the middle of the Overton window are considered sensible, even “nonpartisan” ideas, while things near the edges are “partisan” or “political”, and things near but outside the window are seen as “extreme” and “radical”. Things far outside the window are seen as “absurd” or even “unthinkable”.

Right now, our Overton window is in the wrong place. Things like Paul Ryan’s plan to privatize Social Security and Medicare are seen as reasonable when they should be considered extreme. Progressive income taxes of the kind we had in the 1960s are seen as extreme when they should be considered reasonable. Cutting WIC and SNAP with nothing to replace them and letting people literally starve to death are considered at most partisan, when they should be outright unthinkable. Opposition to basic scientific facts like climate change and evolution is considered a mainstream political position—when in terms of empirical evidence Creationism should be more intellectually embarrassing than being a 9/11 truther or thinking you saw an alien spacecraft. And perhaps worst of all, military tactics like double-tap strikes that are literally war crimes are considered “liberal”, while the “conservative” position involves torture, worldwide surveillance and carpet bombing—if not outright full-scale nuclear devastation.

I want to restore reasonable conversation to our political system, I really do. But that really isn’t possible when half the politicians are totally delusional. We have but one choice: We must vote them out.

I say this particularly to people who say “Why bother? Both parties are the same.” No, they are not the same. They are deeply, deeply different, for all the reasons I just outlined above. And if you can’t bring yourself to vote for a Democrat, at least vote for someone! A Green, or a Social Democrat, or even a Libertarian or a Socialist if you must. It is only by the apathy of reasonable people that this insanity can propagate in the first place.

The irrationality of racism

JDN 2457039 EST 12:07.

I thought about making today’s post about the crazy currency crisis in Switzerland, but currency exchange rates aren’t really my area of expertise; this is much more in Krugman’s bailiwick, so you should probably read what Krugman says about the situation. There is one thing I’d like to say, however: I think there is a really easy way to create credible inflation and boost aggregate demand, but for some reason nobody is ever willing to do it: Give people money. Emphasis here on the people—not banks. Don’t adjust interest rates or currency pegs, don’t engage in quantitative easing. Give people money. Actually write a bunch of checks, presumably in the form of refundable tax rebates.

The only reason I can think of that economists don’t do this is they are afraid of helping poor people. They wouldn’t put it that way; maybe they’d say they want to avoid “moral hazard” or “perverse incentives”. But those fears didn’t stop them from loaning $2 trillion to banks or adding $4 trillion to the monetary base; they didn’t stop them from fighting for continued financial deregulation when what the world economy most desperately needs is stronger financial regulation. Our whole derivatives market practically oozes moral hazard and perverse incentives, but they aren’t willing to shut down that quadrillion-dollar con game. So that can’t be the actual fear. No, it has to be a fear of helping poor people instead of rich people, as though “capitalism” meant a system in which we squeeze the poor as tight as we can and heap all possible advantages upon those who are already wealthy. No, that’s called feudalism. Capitalism is supposed to be a system where markets are structured to provide free and fair competition, with everyone on a level playing field.

A basic income is a fundamentally capitalist policy, which maintains equal opportunity with a minimum of government intervention and allows the market to flourish. I suppose if you want to say that all taxation and government spending is “socialist”, fine; then every nation that has ever maintained stability for more than a decade has been in this sense “socialist”. Every soldier, firefighter and police officer paid by a government payroll is now part of a “socialist” system. Okay, as long as we’re consistent about that; but now you really can’t say that socialism is harmful; on the contrary, on this definition socialism is necessary for capitalism. In order to maintain security of property, enforcement of contracts, and equality of opportunity, you need government. Maybe we should just give up on the words entirely, and speak more clearly about what specific policies we want. If I don’t get to say that a basic income is “capitalist”, you don’t get to say financial deregulation is “capitalist”. Better yet, how about you can’t even call it “deregulation”? You have to actually argue in front of a crowd of people that it should be legal for banks to lie to them, and there should be no serious repercussions for any bank that cheats, steals, colludes, or even launders money for terrorists. That is, after all, what financial deregulation actually does in the real world.

Okay, that’s enough about that.

My birthday is coming up this Monday; thus completes my 27th revolution around the Sun. With birthdays come thoughts of ancestry: Though I appear White, I am legally one-quarter Native American, and my total ethnic mix includes English, German, Irish, Mohawk, and Chippewa.

Biologically, what exactly does that mean? Next to nothing.

Human genetic diversity is a real thing, and there are genetic links to not only dozens of genetic diseases and propensity toward certain types of cancer, but also personality and intelligence. There are also of course genes for skin pigmentation.

The human population does exhibit some genetic clustering, but the categories are not what you’re probably used to: Good examples of relatively well-defined genetic clusters include Ashkenazi, Papuan, and Mbuti. There are also many different haplogroups, such as mitochondrial haplogroups L3 and CZ.

Maybe you could even make a case for the “races” East Asian, South Asian, Pacific Islander, and Native American, since the indigenous populations of these geographic areas largely do come from the same genetic clusters. Or you could make a bigger category and call them all “Asian”—but if you include Papuan and Aborigine in “Asian” you’d pretty much have to include Chippewa and Navajo as well.

But I think it tells you a lot about what “race” really means when you realize that the two “race” categories which are most salient to Americans are in fact the categories that are genetically most meaningless. “White” and “Black” are totally nonsensical genetic categorizations.

Let’s start with “Black”; defining a “Black” race is like defining a category of animals by the fact that they are all tinted red—foxes yes, dogs no; robins yes, swallows no; ladybirds yes, cockroaches no. There is more genetic diversity within Africa than there is outside of it. There are African populations that are more closely related to European populations than they are to other African populations. The only thing “Black” people have in common is that their skin is dark, which is due to convergent evolution: It’s not due to common ancestry, but a common environment. Dark skin has a direct survival benefit in climates with intense sunlight. The similarity is literally skin deep.

What about “White”? Well, there are some fairly well-defined European genetic populations, so if we clustered those together we might be able to get something worth calling “White”. The problem is, that’s not how it happened. “White” is a club. The definition of who gets to be “White” has expanded over time, and even occasionally contracted. Originally Hebrew, Celtic, Hispanic, and Italian were not included (and Hebrew, for once, is actually a fairly sensible genetic category, as long as you restrict it to Ashkenazi), but then later they were. But now that we’ve got a lot of poor people coming in from Mexico, we don’t quite think of Hispanics as “White” anymore. We actually watched Arabs lose their “White” card in real-time in 2001; before 9/11, they were “White”; now, “Arab” is a separate thing. And “Muslim” is even treated like a race now, which is like making a racial category of “Keynesians”—never forget that Islam is above all a belief system.

Actually, “White privilege” is almost a tautology—the privilege isn’t given to people who were already defined as “White”, the privilege is to be called “White”. The privilege is to have your ancestors counted in the “White” category so that they can be given rights, while people who are not in the category are denied those rights. There does seem to be a certain degree of restriction by appearance—to my knowledge, no population with skin as dark as Kenyans has ever been considered “White”, and Anglo-Saxons and Nordics have always been included—but the category is flexible to political and social changes.

But really I hate that word “privilege”, because it gets the whole situation backwards. When you talk about “White privilege”, you make it sound as though the problem with racism is that it gives unfair advantages to White people (or to people arbitrarily defined as “White”). No, the problem is that people who are not White are denied rights. It isn’t what White people have that’s wrong; it’s what Black people don’t have. Equating those two things creates a vision of the world as zero-sum, in which each gain for me is a loss for you.

Here’s the thing about zero-sum games: All outcomes are Pareto-efficient. Remember when I talked about Pareto-efficiency? As a quick refresher, an outcome is Pareto-efficient if there is no way for one person to be made better off without making someone else worse off. In general, it’s pretty hard to disagree that, other things equal, Pareto-efficiency is a good thing, and Pareto-inefficiency is a bad thing. But if racism were about “White privilege” and the game were zero-sum, racism would have to be Pareto-efficient.
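
Here’s a small sketch of that logic, with made-up payoffs: in a zero-sum game no outcome is dominated by any other, while an outcome that hurts some and helps none fails the test immediately.

    # In a zero-sum game, every outcome is Pareto-efficient: any change that
    # helps one player must hurt another. The payoff numbers are made up.

    def pareto_efficient(outcome, outcomes):
        """True if no other outcome helps someone without hurting anyone."""
        return not any(
            all(b >= a for a, b in zip(outcome, other)) and
            any(b > a for a, b in zip(outcome, other))
            for other in outcomes if other != outcome
        )

    zero_sum = [(5, -5), (0, 0), (-5, 5)]   # payoffs always sum to zero
    print([pareto_efficient(o, zero_sum) for o in zero_sum])   # [True, True, True]

    # Racism-style outcome: (3, 3) is available, but we're stuck at (1, 2).
    society = [(3, 3), (1, 2)]
    print(pareto_efficient((1, 2), society))   # False: Pareto-inefficient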

In fact, racism is Pareto-inefficient, and that is part of why it is so obviously bad. It harms literally billions of people, and benefits basically no one. Maybe there are a few individuals who are actually, all things considered, better off than they would have been if racism had not existed. But there are certainly not very many such people, and in fact I’m not sure there are any at all. If there are any, it would mean that technically racism is not Pareto-inefficient—but it is definitely very close. At the very least, the damage caused by racism is several orders of magnitude larger than any benefits incurred.

That’s why the “privilege” language, while well-intentioned, is so insidious; it tells White people that racism means taking things away from them. Many of these people are already in dire straits—broke, unemployed, or even homeless—so taking away what they have sounds particularly awful. Of course they’d be hostile to or at least dubious of attempts to reduce racism. You just told them that racism is the only thing keeping them afloat! In fact, quite the opposite is the case: Poor White people are, second only to poor Black people, those who stand the most to gain from a more just society. David Koch and Donald Trump should be worried; we will probably have to take most of their money away in order to achieve social justice. (Bill Gates knows we’ll have to take most of his money away, but he’s okay with that; in fact he may end up giving it away before we get around to taking it.) But the average White person will almost certainly be better off than they were.

Why does it seem like there are benefits to racism? Again, because people are accustomed to thinking of the world as zero-sum. One person is denied a benefit, so that benefit must go somewhere else right? Nope—it can just disappear entirely, and in this case typically does.

When a Black person is denied a job in favor of a White person who is less qualified, doesn’t that White person benefit? Uh, no, actually, not really. They have been hired for a job that isn’t an optimal fit for them; they aren’t working to their comparative advantage, and that Black person isn’t either and may not be working at all. The total output of the economy will be thereby reduced slightly. When this happens millions of times, the total reduction in output can be quite substantial, and as a result that White person was hired at $30,000 for an unsuitable job when in a racism-free world they’d have been hired at $40,000 for a suitable one. A similar argument holds for sexism; men don’t benefit from getting jobs women are denied if one of those women would have invented a cure for prostate cancer.
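
A toy example with invented productivity numbers makes the accounting explicit: the output lost to a discriminatory assignment doesn’t go to anyone.

    # Hypothetical annual output for two applicants in two jobs; the numbers
    # are invented purely to illustrate the accounting.
    output = {
        ("applicant A", "engineer"): 60_000,   # A is the better engineer
        ("applicant A", "clerk"):    25_000,
        ("applicant B", "engineer"): 35_000,
        ("applicant B", "clerk"):    30_000,
    }

    fair   = output[("applicant A", "engineer")] + output[("applicant B", "clerk")]
    racist = output[("applicant B", "engineer")] + output[("applicant A", "clerk")]

    print(fair, racist, fair - racist)   # 90000 60000 30000: that $30,000 of
                                         # output goes to no one; it just vanishes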

Indeed, the empowerment of women and minorities is kind of the secret cheat code for creating a First World economy. The great successes of economic development—Korea, Japan, China, the US in WW2—came precisely at times when those countries suddenly started including women in manufacturing, effectively doubling their total labor capacity. Moreover, it’s pretty clear that the causation ran in this direction. Periods of economic growth are associated with increases in solidarity with other groups—and downturns with decreased solidarity—but the increase in women in the workforce was sudden and early while the increase in growth and total output was prolonged.

Racism is irrational. Indeed it is so obviously irrational that for decades now neoclassical economists have been insisting that there is no need for civil rights policy, affirmative action, etc. because the market will automatically eliminate racism by the rational profit motive. A more recent literature has attempted to show that, contrary to all appearances, racism actually is rational in some cases. Inevitably it relies upon either the background of a racist society (maybe Black people are, on average, genuinely less qualified, but it would only be because they’ve been given poorer opportunities), or an assumption of “discriminatory tastes”, which is basically giving up and redefining the utility function so that people simply get direct pleasure from being racists. Of course, on that sort of definition, you can basically justify any behavior as “rational”: Maybe he just enjoys banging his head against the wall! (A similar slipperiness is used by egoists to argue that caring for your children is actually “selfish”; well, it makes you happy, doesn’t it? Yes, but that’s not why we do it.)

There’s a much simpler way to understand this situation: Racism is irrational, and so is human behavior.

That isn’t a complete explanation, of course; and I think one major misunderstanding neoclassical economists have of cognitive economists is that they think this is what we do—we point out that something is irrational, and then high-five and go home. No, that’s not what we do. Finding the irrationality is just the start; next comes explaining the irrationality, understanding the irrationality, and finally—we haven’t reached this point in most cases—fixing the irrationality.

So what explains racism? In short, the tribal paradigm. Human beings evolved in an environment in which the most important factor in our survival and that of our offspring was not food supply or temperature or predators, it was tribal cohesion. With a cohesive tribe, we could find food, make clothes, fight off lions. Without one, we were helpless. Millions of years in this condition shaped our brains, programming them to treat threats to tribal cohesion as the greatest possible concern. We even reached the point where solidarity for the tribe actually began to dominate basic survival instincts: For a suicide bomber the unity of the tribe—be it Marxism for the Tamil Tigers or Islam for Al-Qaeda—is more important than his own life. We will do literally anything if we believe it is necessary to defend the identities we believe in.

And no, we rationalists are no exception here. We are indeed different from other groups; the beliefs that define us, unlike the beliefs of literally every other group that has ever existed, are actually rationally founded. The scientific method really isn’t just another religion, for unlike religion it actually works. But still, if push came to shove and we were forced to kill and die in order to defend rationality, we would; and maybe we’d even be right to do so. Maybe the French Revolution was, all things considered, a good thing—but it sure as hell wasn’t nonviolent.

This is the background we need to understand racism. It actually isn’t enough to show people that racism is harmful and irrational, because they are programmed not to care. As long as racial identification is the salient identity, the tribe by which we define ourselves, we will do anything to defend the cohesion of that tribe. It is not enough to show that racism is bad; we must in fact show that race doesn’t matter. Fortunately, this is easy, for as I explained above, race does not actually exist.

That makes racism in some sense easier to deal with than sexism, because the very categories of races upon which it is based are fundamentally faulty. Sexes, on the other hand, are definitely a real thing. Males and females actually are genetically different in important ways. Exactly how different in what ways is an open question, and what we do know is that for most of the really important traits like intelligence and personality the overlap outstrips the difference. (The really big, categorical differences all appear to be physical: Anatomy, size, testosterone.) But conquering sexism may always be a difficult balance, for there are certain differences we won’t be able to eliminate without altering DNA. That no more justifies sexism than the fact that height is partly genetic would justify denying rights to short people (which, actually, is something we do); but it does make matters complicated, because it’s difficult to know whether an observed difference (for instance, most pediatricians are female, while most neurosurgeons are male) is due to discrimination or innate differences.

Racism, on the other hand, is actually quite simple: Almost any statistically significant difference in behavior or outcome between races must be due to some form of discrimination somewhere down the line. Maybe it’s not discrimination right here, right now; maybe it’s discrimination years ago that denied opportunities, or discrimination against their ancestors that led them to inherit less generations later; but it almost has to be discrimination against someone somewhere, because it is only by social construction that races exist in the first place. I do say “almost” because I can think of a few exceptions: Black people are genuinely less likely to use tanning salons and genuinely more likely to need vitamin D supplements, but both of those things are directly due to skin pigmentation. They are also more likely to suffer from sickle-cell anemia, which is another convergent trait that evolved in tropical climates as a response to malaria. But unless you can think of a reason why employment outcomes would depend upon vitamin D, the huge difference in employment between Whites and Blacks really can’t be due to anything but discrimination.

I imagine most of my readers are more sophisticated than this, but just in case you’re wondering about the difference in IQ scores between Whites and Blacks, that is indeed a real observation, but IQ isn’t entirely genetic. The reason IQ scores are rising worldwide (the Flynn Effect) is due to improvements in environmental conditions: fewer environmental pollutants (particularly lead and mercury, the removal of which is responsible for most of the reduction in crime in America over the last 20 years), better nutrition, better education, less stress. Being stupid does not make you poor (or how would we explain Donald Trump?), but being poor absolutely does make you stupid. Combine that with the challenges and inconsistencies in cross-national IQ comparisons, and it’s pretty clear that the higher IQ scores in rich nations are an effect, not a cause, of their affluence. Likewise, the lower IQ scores of Black people in the US are entirely explained by their poorer living conditions, with no need for any genetic hypothesis—which would also be very difficult in the first place precisely because “Black” is such a weird genetic category.

Unfortunately, I don’t yet know exactly what it takes to change people’s concept of group identification. Obviously it can be done, for group identities change all the time, sometimes quite rapidly; but we simply don’t have good research on what causes those changes or how they might be affected by policy. That’s actually a major part of the experiment I’ve been trying to get funding to run since 2009, which I hope can now become my PhD thesis. All I can say is this: I’m working on it.

How is the economy doing?

JDN 2457033 EST 12:22.

Whenever you introduce yourself to someone as an economist, you will typically be asked a single question: “How is the economy doing?” I’ve already experienced this myself, and I don’t have very many dinner parties under my belt.

It’s an odd question, for a couple of reasons: First, I didn’t say I was a macroeconomic forecaster. That’s a very small branch of economics—even a small branch of macroeconomics. Second, it is widely recognized among economists that our forecasters just aren’t very good at what they do. But it is the sort of thing that pops into people’s minds when they hear the word “economist”, so we get asked it a lot.

Why are our forecasts so bad? Some argue that the task is just inherently too difficult due to the chaotic system involved; but they used to say that about weather forecasts, and yet with satellites and computer models our forecasts are now far more accurate than they were 20 years ago. Others have argued that “politics always dominates over economics”, as though politics were somehow a fundamentally separate thing, forever exogenous, a parameter in our models that cannot be predicted. I have a number of economic aphorisms I’m trying to popularize; the one for this occasion is: “Nothing is exogenous.” (Maybe fundamental constants of physics? But actually many physicists think that those constants can be derived from even more fundamental laws.) My most common is “It’s the externalities, stupid.”; next is “It’s not the incentives, it’s the opportunities.”; and the last is “Human beings are 90% rational. But woe betide that other 10%.” In fact, it’s not quite true that all our macroeconomic forecasters are bad; a few, such as Krugman, are actually quite good. The Klein Award is given each year to the best macroeconomic forecasters, and the same names pop up too often for it to be completely random. (Sadly, one of the most common is Citigroup, meaning that our banksters know perfectly well what they’re doing when they destroy our economy—they just don’t care.) So in fact I think our failures of forecasting are not inevitable or permanent.

And of course that’s not what I do at all. I am a cognitive economist; I study how economic systems behave when they are run by actual human beings, rather than by infinite identical psychopaths. I’m particularly interested in what I call the tribal paradigm, the way that people identify with groups and act in the interests of those groups, how much solidarity people feel for each other and why, and what role ideology plays in that identification. I’m hoping to one day formally model solidarity and make directly testable predictions about things like charitable donations, immigration policies and disaster responses.

I do have a more macroeconomic bent than most other cognitive economists; I’m not just interested in how human irrationality affects individuals or corporations, I’m also interested in how it affects society as a whole. But unlike most macroeconomists I care more about inequality than unemployment, and hardly at all about inflation. Unless you start getting 40% inflation per year, inflation really isn’t that harmful—and can you imagine what 40% unemployment would be like? (Also, while 100% inflation is awful, 100% unemployment would be no economy at all.) If we’re going to have a “misery index”, it should weight unemployment at least 10 times as much as inflation—and it should also include terms for poverty and inequality. Frankly maybe we should just use poverty, since I’d be prepared to accept just about any level of inflation, unemployment, or even inequality if it meant eliminating poverty. This is of course yet another reason why a basic income is so great! An anti-poverty measure can really only be called a failure if it doesn’t actually reduce poverty; the only way that could happen with a basic income is if it somehow completely destabilized the economy, which is extremely unlikely as long as the basic income isn’t something ridiculous like $100,000 per year.
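
For concreteness, here’s the sort of formula I have in mind. The 10-to-1 weighting is the proposal above; the poverty and inequality weights are placeholders I made up.

    # A sketch of a better misery index. The 10x weight on unemployment is the
    # proposal above; the poverty and inequality weights are made-up placeholders.

    def classic_misery(unemployment, inflation):
        return unemployment + inflation   # the traditional index

    def better_misery(unemployment, inflation, poverty, gini):
        return 10 * unemployment + inflation + 10 * poverty + 5 * gini

    # Illustrative (not actual) figures, all in percentage points:
    # 6% unemployment, 1.5% inflation, 15% poverty, Gini of 45.
    print(classic_misery(6.0, 1.5))               # 7.5
    print(better_misery(6.0, 1.5, 15.0, 45.0))    # 436.5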

I could probably talk about my master’s thesis; the econometric models are relatively arcane, but the basic idea of correlating the income concentration of the top 1% of 1% and the level of corruption is something most people can grasp easily enough.

Of course, that wouldn’t be much of an answer to “How is the economy doing?”; usually my answer is to repeat what I’ve last read from mainstream macroeconomic forecasts, which is usually rather banal—but maybe that’s the idea? Most small talk is pretty banal I suppose (I never was very good at that sort of thing). It sounds a bit like this: No, we’re not on the verge of horrible inflation—actually inflation is currently too low. (At this point someone will probably bring up the gold standard, and I’ll have to explain that the gold standard is an unequivocally terrible idea on so, so many levels. The gold standard caused the Great Depression.) Unemployment is gradually improving, and actually job growth is looking pretty good right now; but wages are still stagnant, which is probably what’s holding down inflation. We could have prevented the Second Depression entirely, but we didn’t because Republicans are terrible at managing the economy—all of the 10 most recent recessions and almost 80% of the recessions in the last century were under Republican presidents. Instead the Democrats did their best to implement basic principles of Keynesian macroeconomics despite Republican intransigence, and we muddled through. In another year or two we will actually be back at an unemployment rate of 5%, which the Federal Reserve considers “full employment”. That’s already problematic—what about that other 5%?—but there’s another problem as well: Much of our reduction in unemployment has come not from more people being employed but instead by more people dropping out of the labor force. Our labor force participation rate is the lowest it’s been since 1978, and is still trending downward. Most of these people aren’t getting jobs; they’re giving up. At best we may hope that they are people like me, who gave up on finding work in order to invest in their own education, and will return to the labor force more knowledgeable and productive one day—and indeed, college participation rates are also rising rapidly. And no, that doesn’t mean we’re becoming “overeducated”; investment in education, so-called “human capital”, is literally the single most important factor in long-term economic output, by far. Education is why we’re not still in the Stone Age. Physical capital can be replaced, and educated people will do so efficiently. But all the physical capital in the world will do you no good if nobody knows how to use it. When everyone in the world is a millionaire with two PhDs and all our work is done by robots, maybe then you can say we’re “overeducated”—and maybe then you’d still be wrong. Being “too educated” is like being “too rich” or “too happy”.

That’s usually enough to placate my interlocutor. I should probably count my blessings, for I imagine that the first confrontation you get at a dinner party if you say you are a biologist involves a Creationist demanding that you “prove evolution”. I like to think that some mathematical biologists—yes, that’s a thing—take their request literally and set out to mathematically prove that if allele distributions in a population change according to a stochastic trend then the alleles with highest expected fitness have, on average, the highest fitness—which is what we really mean by “survival of the fittest”. The more formal, the better; the goal is to glaze some Creationist eyes. Of course that’s a tautology—but so is literally anything that you can actually prove. Cosmologists probably get similar demands to “prove the Big Bang”, which sounds about as annoying. I may have to deal with gold bugs, but I’ll take them over Creationists any day.

What do other scientists get? When I tell people I am a cognitive scientist (as a cognitive economist I am sort of both an economist and a cognitive scientist after all), they usually just respond with something like “Wow, you must be really smart”, which I suppose is true enough, but always strikes me as an odd response. I think they just didn’t know enough about the field to even generate a reasonable-sounding question, whereas with economists they always have “How is the economy doing?” handy. Political scientists probably get “Who is going to win the election?” for the same reason. People have opinions about economics, but they don’t have opinions about cognitive science—or rather, they don’t think they do. Actually most people have an opinion about cognitive science that is totally and utterly ridiculous, more on a par with Creationists than gold bugs: That is, most people believe in a soul that survives after death. This is rather like believing that after your computer has been smashed to pieces and ground back into the sand from whence it came, all the files you had on it are still out there somewhere, waiting to be retrieved. No, they’re long gone—and likewise your memories and your personality will be long gone once your brain has rotted away. Yes, we have a soul, but it’s made of lots of tiny robots; when the tiny robots stop working the soul is no more. Everything you are is a result of the functioning of your brain. This does not mean that your feelings are not real or do not matter; they are just as real and important as you thought they were. What it means is that when a person’s brain is destroyed, that person is destroyed, permanently and irrevocably. This is terrifying and difficult to accept; but it is also most definitely true. It is as solid a fact as any in modern science. Many people see a conflict between evolution and religion; but the Pope has long since rendered that one inert. No, the real conflict, the basic fact that undermines everything religion is based upon, is not in biology but in cognitive science. It is indeed the Basic Fact of Cognitive Science: We are our brains, no more and no less. (But I suppose it wouldn’t be polite to bring that up at dinner parties.)

The “You must be really smart.” response is probably what happens to physicists and mathematicians. Quantum mechanics confuses basically everyone, so few dare go near it. The truly bold might try to bring up Schrödinger’s Cat, but are unlikely to understand the explanation of why it doesn’t work. General relativity requires thinking in tensors and four-dimensional spaces—perhaps they’ll be asked the question “What’s inside a black hole?”, which of course no physicist can really answer; the best answer may actually be, “What do you mean, inside?” And if a mathematician tries to explain their work in lay terms, it usually comes off as either incomprehensible or ridiculous: Stokes’ Theorem would be either “the integral of a differential form over the boundary of some orientable manifold is equal to the integral of its exterior derivative over the whole manifold” or else something like “The swirliness added up inside an object is equal to the swirliness added up around the edges.”

Economists, however, always seem to get this one: “How is the economy doing?”

Right now, the answer is this: “It’s still pretty bad, but it’s getting a lot better. Hopefully the new Congress won’t screw that up.”

How do we measure happiness?

JDN 2457028 EST 20:33.

No, really, I’m asking. I strongly encourage my readers to offer in the comments any ideas they have about the measurement of happiness in the real world; this has been a stumbling block in one of my ongoing research projects.

In one sense the measurement of happiness—or more formally utility—is absolutely fundamental to economics; in another it’s something most economists are astonishingly afraid of even trying to do.

The basic question of economics has nothing to do with money, and is really only incidentally related to “scarce resources” or “the production of goods” (though many textbooks will define economics in this way—apparently implying that a post-scarcity economy is not an economy). The basic question of economics is really this: How do we make people happy?

This must always be the goal in any economic decision, and if we lose sight of that fact we can make some truly awful decisions. Other goals may work sometimes, but they inevitably fail: If you conceive of the goal as “maximize GDP”, then you’ll try any policy that will increase the amount of production, even if that production comes at the expense of stress, injury, disease, or pollution. (And doesn’t that sound awfully familiar, particularly here in the US? 40% of Americans report their jobs as “very stressful” or “extremely stressful”.) If you were to conceive of the goal as “maximize the amount of money”, you’d print money as fast as possible and end up with hyperinflation and total economic collapse à la Zimbabwe. If you were to conceive of the goal as “maximize human life”, you’d support methods of increasing population to the point where we had a hundred billion people whose lives were barely worth living. Even if you were to conceive of the goal as “save as many lives as possible”, you’d find yourself investing in whatever would extend lifespan even if it meant enormous pain and suffering—which is a major problem in end-of-life care around the world. No, there is one goal and one goal only: Maximize happiness.

I suppose technically it should be “maximize utility”, but those are in fact basically the same thing as long as “happiness” is broadly conceived as eudaimonia, the joy of a life well-lived, and not a narrow concept of just adding up pleasure and subtracting out pain. The goal is not to maximize the quantity of dopamine and endorphins in your brain; the goal is to achieve a world where people are safe from danger, free to express themselves, with friends and family who love them, who participate in a world that is just and peaceful. We do not want merely the illusion of these things—we want to actually have them. So let me be clear that this is what I mean when I say “maximize happiness”.

The challenge, therefore, is how we figure out if we are doing that. Things like money and GDP are easy to measure; but how do you measure happiness?

Early economists like Adam Smith and John Stuart Mill tried to deal with this question, and while they were not very successful I think they deserve credit for recognizing its importance and trying to resolve it. But sometime around the rise of modern neoclassical economics, economists gave up on the project and instead sought a narrower task: to measure preferences.

This is often technically called ordinal utility, as opposed to cardinal utility; but this terminology obscures the fundamental distinction. Cardinal utility is actual utility; ordinal utility is just preferences.

(The notion that cardinal utility is defined “up to a linear transformation” is really an eminently trivial observation, and it shows just how little physics the physics-envious economists really understand. All we’re talking about here is units of measurement—the same distance is 10.0 inches or 25.4 centimeters, so is distance only defined “up to a linear transformation”? It’s sometimes argued that there is no clear zero—like Fahrenheit and Celsius—but actually it’s pretty clear to me that there is: Zero utility is not existing. So there you go, now you have Kelvin.)
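
The point is easy to check for yourself: a change of units is a positive linear rescaling, and nothing we care about changes.

    # A change of units is a positive linear rescaling: orderings survive it,
    # and with a true zero point, so do ratios.

    def to_centimeters(inches):
        return 2.54 * inches

    a, b = 10.0, 7.0
    print(a > b, to_centimeters(a) > to_centimeters(b))   # True True: same ordering
    print(a / b, to_centimeters(a) / to_centimeters(b))   # same ratio (up to
                                                          # float rounding)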

Preferences are a bit easier to measure than happiness, but not by as much as most economists seem to think. If you imagine a small number of options, you can just put them in order from most to least preferred and there you go; and we could imagine asking someone to do that, or, using the technique of revealed preference, infer their preferences from the choices they make, assuming that when given the choice of X and Y, choosing X means you prefer X to Y.

Like much of neoclassical theory, this sounds good in principle and utterly collapses when applied to the real world. Above all: How many options do you have? It’s not easy to say, but the number is definitely huge—and both of those facts pose serious problems for a theory of preferences.

The fact that it’s not easy to say means that we don’t have a well-defined set of choices; even if Y is theoretically on the table, people might not realize it, or they might not see that it’s better even though it actually is. Much of our cognitive effort in any decision is actually spent narrowing the decision space—when deciding who to date or where to go to college or even what groceries to buy, simply generating a list of viable options involves a great deal of effort and extremely complex computation. If you have a true utility function, you can satisfice, choosing the first option that is above a certain threshold; or you can engage in constrained optimization, deciding whether to continue searching or accept your current choice based on how good it is. Under preference theory, there is no such “how good it is” and no such thresholds. You either search forever or choose a cutoff arbitrarily.
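
Here’s a sketch of why that threshold matters computationally. The apartment data and the utility function are made up; the structure is the point: with a cardinal utility you can stop at the first good-enough option, even in an endless stream of options.

    # With cardinal utility, a threshold makes search tractable: stop at the
    # first option that's good enough. With only a preference ordering, "good
    # enough" is undefined; you can only compare options already found.

    def satisfice(options, utility, threshold):
        """Return the first option whose utility clears the threshold."""
        for option in options:   # options can be a lazy, endless stream
            if utility(option) >= threshold:
                return option

    # Toy search over apartments described as (rent, commute minutes):
    apartments = iter([(1500, 60), (1200, 45), (900, 20), (1100, 10)])
    utility = lambda apt: -apt[0] - 20 * apt[1]   # made-up utility function
    print(satisfice(apartments, utility, threshold=-1400))   # (900, 20)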

Even if we could decide how many options there are in any given choice, in order for this to form a complete guide for human behavior we would need an enormous amount of information. Suppose there are 10 different items I could have or not have; then there are 10! ≈ 3.6 million possible preference orderings. If there were 100 items, there would be 100! ≈ 9e157 possible orderings. It won’t do simply to decide on each item whether I’d like to have it or not. Some things are complements: I prefer to have shoes, but I probably prefer to have $100 and no shoes at all rather than $50 and just a left shoe. Other things are substitutes: I generally prefer eating either a bowl of spaghetti or a pizza, rather than both at the same time. No, the combinations matter, and that means that we have an exponentially increasing decision space every time we add a new option. If there really is no more structure to preferences than this, we have an absurd computational task to make even the most basic decisions.
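
The combinatorics are easy to verify:

    from math import factorial

    print(factorial(10))    # 3628800: about 3.6 million orderings of 10 items
    print(factorial(100))   # about 9.3e157, vastly more than the roughly 1e80
                            # atoms in the observable universe

    # And if bundles matter, it's orderings of all 2**10 = 1024 possible
    # bundles: factorial(1024), far too large to be worth printing.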

This is in fact most likely why we have happiness in the first place. Happiness did not emerge from a vacuum; it evolved by natural selection. Why make an organism have feelings? Why make it care about things? Wouldn’t it be easier to just hard-code a list of decisions it should make? No, on the contrary, it would be exponentially more complex. Utility exists precisely because it is more efficient for an organism to like or dislike things by certain amounts rather than trying to define arbitrary preference orderings. Adding a new item means assigning it an emotional value and then slotting it in, instead of comparing it to every single other possibility.

To illustrate this: I like Coke more than I like Pepsi. (Let the flame wars begin?) I also like getting massages more than I like being stabbed. (I imagine less controversy on this point.) But the difference in my mind between massages and stabbings is an awful lot larger than the difference between Coke and Pepsi. Yet according to preference theory (“ordinal utility”), that difference is not meaningful; instead I have to say that I prefer the pair “drink Pepsi and get a massage” to the pair “drink Coke and get stabbed”. There’s no such thing as “a little better” or “a lot worse”; there is only what I prefer over what I do not prefer, and since these can be assigned arbitrarily there is an impossible computational task before me to make even the most basic decisions.

Real utility also allows you to make decisions under risk, to decide when it’s worth taking a chance. Is a 50% chance of $100 worth giving up a guaranteed $50? Probably. Is a 50% chance of $10 million worth giving up a guaranteed $5 million? Not for me. Maybe for Bill Gates. How do I make that decision? It’s not about what I prefer—I do in fact prefer $10 million to $5 million. It’s about how much difference there is in terms of my real happiness—$5 million is almost as good as $10 million, but $100 is a lot better than $50. My marginal utility of wealth—as I discussed in my post on progressive taxation—is a lot steeper at $50 than it is at $5 million. There’s actually a way to use revealed preferences under risk to estimate true (“cardinal”) utility, developed by Von Neumann and Morgenstern. In fact they proved a remarkably strong theorem: If you don’t have a cardinal utility function that you’re maximizing, you can’t make rational decisions under risk. (In fact many of our risk decisions clearly aren’t rational, because we aren’t actually maximizing an expected utility; what we’re actually doing is something more like cumulative prospect theory, the leading cognitive economic theory of risk decisions. We overrespond to extreme but improbable events—like lightning strikes and terrorist attacks—and underrespond to moderate but probable events—like heart attacks and car crashes. We play the lottery but still buy health insurance. We fear Ebola—which has never killed a single American—but not influenza—which kills 10,000 Americans every year.)
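
Here’s that logic in a sketch, using log utility as a standard textbook stand-in for diminishing marginal utility (the wealth figures are hypothetical). The question is what sure amount a gamble is worth to you: its certainty equivalent.

    from math import exp, log

    def certainty_equivalent(wealth, prize, p=0.5):
        """The sure gain worth the same expected log utility as a p chance of prize."""
        eu = p * log(wealth + prize) + (1 - p) * log(wealth)
        return exp(eu) - wealth

    print(certainty_equivalent(20_000, 100))           # ~49.9: nearly the full $50
    print(certainty_equivalent(20_000, 10_000_000))    # ~428,000: nowhere near
                                                       # the guaranteed $5 million
    print(certainty_equivalent(80e9, 10_000_000))      # ~5.0 million: at Gates's
                                                       # wealth, nearly risk-neutral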

A lot of economists would argue that it’s “unscientific”—Kenneth Arrow said “impossible”—to assign this sort of cardinal distance between our choices. But assigning distances between preferences is something we do all the time. Amazon.com lets us vote on a 5-star scale, and very few people send in error reports saying that cardinal utility is meaningless and only preference orderings exist. In 2000 I would have said “I like Gore best, Nader is almost as good, and Bush is pretty awful; but of course they’re all a lot better than the Fascist Party.” If we had simply been able to express those feelings on the 2000 ballot according to a range vote, either Nader would have won and the United States would now have a three-party system (and possibly a nationalized banking system!), or Gore would have won and we would be a decade ahead of where we currently are in preventing and mitigating global warming. Either one of these things would benefit millions of people.

This is extremely important because of another thing that Arrow said was “impossible”—namely, “Arrow’s Impossibility Theorem”. It should be called Arrow’s Range Voting Theorem, because simply by restricting preferences to a well-defined utility and allowing people to make range votes according to that utility, we can fulfill all the requirements that are supposedly “impossible”. The theorem doesn’t say—as it is commonly paraphrased—that there is no fair voting system; it says that range voting is the only fair voting system. A better claim is that there is no perfect voting system, which is true if you mean that there is no voting system in which the strategically optimal vote always accurately reflects your true beliefs. The Myerson-Satterthwaite Theorem is then the proper theorem to use; if you could design a voting system that would force you to reveal your beliefs, you could design a market auction that would force you to reveal your optimal price. But the least expressive way to vote in a range vote is to pick your favorite and give them 100% while giving everyone else 0%—which is identical to our current plurality vote system. The worst-case scenario in range voting is our current system.
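
For the mechanics, here’s a sketch of a range-vote tally. The ballots are invented; note how a plurality ballot is just the least expressive special case.

    # A range vote: each voter scores every candidate from 0 to 100; the
    # highest total wins. The ballots below are invented for illustration.

    ballots = [
        {"Gore": 100, "Nader": 95,  "Bush": 20},
        {"Gore": 90,  "Nader": 100, "Bush": 10},
        {"Gore": 30,  "Nader": 25,  "Bush": 100},
        {"Gore": 0,   "Nader": 100, "Bush": 0},   # a plurality-style ballot
    ]

    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score

    print(max(totals, key=totals.get), totals)   # Nader wins, 320 to Gore's 220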

But the fact that utility exists and matters unfortunately doesn’t tell us how to measure it. The current state of the art in economics is what’s called “willingness-to-pay”, where we arrange (or observe) decisions people make involving money and try to assign dollar values to each of their choices. This is how you get disturbing calculations like “the lives lost due to air pollution are worth $10.2 billion.”

Why are these calculations disturbing? Because they have the whole thing backwards—people aren’t valuable because they are worth money; money is valuable because it helps people. It’s also rather bizarre that the dollar value of a human life has to be adjusted for inflation. Finally—and this is the point that far too few people appreciate—the value of a dollar is not constant across people. Because different people have different marginal utilities of wealth, something that I would only be willing to pay $1,000 for, Bill Gates might be willing to pay $1 million for—and a child in Africa might only be willing to pay $10, because that is all he has to spend. This makes “willingness-to-pay” a basically meaningless concept unless we specify whose wealth we are spending.
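
To see how mechanically this follows from diminishing marginal utility, here is a sketch reusing the logarithmic utility assumption from before: it asks what payment would impose the same subjective sacrifice at three hypothetical wealth levels.

```python
import math

# With diminishing marginal utility (log utility, purely illustrative),
# the payment that costs you a fixed amount of utility is proportional
# to your wealth.
def willingness_to_pay(wealth, utility_cost):
    """Largest payment x with log(wealth) - log(wealth - x) equal to
    utility_cost, i.e. the same subjective sacrifice for everyone."""
    return wealth * (1 - math.exp(-utility_cost))

# Hypothetical wealth levels: a poor child, a middle-class person,
# a billionaire.
for wealth in [200, 50_000, 50_000_000]:
    print(f"wealth ${wealth:>10,}: same sacrifice = "
          f"${willingness_to_pay(wealth, 0.02):>12,.2f}")
```

The same utility cost comes out to about $4 for the poorest, about $1,000 in the middle, and nearly $1 million at the top: roughly the pattern described above.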

Utility, on the other hand, might differ between people—but, at least in principle, it can still be added up between them on the same scale. The problem is that “in principle” part: How do we actually measure it?

So far, the best I’ve come up with is to borrow from public health policy and use the QALY, or quality-adjusted life year. We ask people macabre questions like “What is the maximum number of years of your life you would give up to not have a severe migraine every day?” (I’d say about 20—that’s where I feel ambivalent; at 10 I definitely would, at 30 I definitely wouldn’t) or “What chance of total paralysis would you take in order to avoid being paralyzed from the waist down?” (I’d say about 20%), and we turn the answers into utility values. If living 80 years with daily migraines is only as good as living 60 years in full health, then chronic migraine has a quality-of-life factor of 60/80 = 0.75. Accepting a 20% risk of total paralysis to escape certain waist-down paralysis means total paralysis is 1/0.2 = 5 times as bad; so if waist-down paralysis has a quality-of-life factor of 0.90, total paralysis is 0.50.
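
The arithmetic behind those two weights is simple enough to write out; here is a sketch (the function names are my own, and the inputs are just my answers from above):

```python
# Converting the two macabre survey answers into quality-of-life weights.

def time_tradeoff(lifespan, years_given_up):
    """If living `lifespan` years with a condition is only as good as
    living `lifespan - years_given_up` years healthy, the condition's
    quality weight is the ratio of the two."""
    return (lifespan - years_given_up) / lifespan

def standard_gamble(better_quality, risk_accepted):
    """If you'd accept `risk_accepted` chance of a worse state to escape
    a better-known state for certain, the worse state is 1/risk_accepted
    times as bad; return the worse state's quality weight."""
    return 1 - (1 - better_quality) / risk_accepted

print(f"{time_tradeoff(80, 20):.2f}")        # 0.75: chronic daily migraine
print(f"{standard_gamble(0.90, 0.20):.2f}")  # 0.50: total paralysis
```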

You can probably already see that there are lots of problems: What if people don’t agree? What if, due to framing effects, the same person gives different answers to slightly different phrasings? Some conditions will directly bias our judgments—depression being the obvious example. How many years of your life would you give up to not be depressed? Judging by suicide, some people’s answer is all of them. How well do we really know our preferences on these sorts of decisions, given that most of them are decisions we will never actually have to make? It’s difficult enough to make the real decisions in our lives, let alone hypothetical decisions we’ve never encountered.

Another problem is often suggested as well: How do we apply this methodology outside questions of health? Does it really make sense to ask how many years of your life drinking Coke or driving your car is worth?

Well, actually… it had better, because you make that sort of decision all the time. You drive instead of staying home because you value where you’re going more than the risk of dying in a car accident. You drive instead of walking because getting there on time is worth that additional risk as well. You eat foods you know aren’t good for you because you think the taste is worth the cost. Indeed, most of us aren’t making most of these decisions very well—maybe you shouldn’t actually drive or drink that Coke. But in order to know that, we need to know how many years of your life a Coke is worth.

As a very rough estimate, I figure you can convert from willingness-to-pay to QALY by dividing by your annual consumption spending. Say you spend about $20,000 a year—pretty typical for a First World individual. Then $1 is worth about 50 microQALY, or about 26 quality-adjusted life-minutes. Now suppose you are in Third World poverty; your consumption might be only $200 a year, so $1 becomes worth 5 milliQALY, or 1.8 quality-adjusted life-days. The very richest individuals might spend as much as $10 million a year on consumption, so $1 to them is worth only 100 nanoQALY, or 3 quality-adjusted life-seconds.
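
In code the conversion is a one-liner; here it is with the same three hypothetical consumption levels:

```python
# A very rough willingness-to-pay to QALY conversion: one dollar is
# worth 1/(annual consumption) of a quality-adjusted life year.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def dollar_in_qaly(annual_consumption):
    return 1 / annual_consumption

# Hypothetical consumption levels: Third World poverty, a typical
# First World individual, and the very richest.
for spending in [200, 20_000, 10_000_000]:
    q = dollar_in_qaly(spending)
    print(f"${spending:,}/yr: $1 = {q:.0e} QALY = "
          f"{q * MINUTES_PER_YEAR:,.1f} quality-adjusted minutes")
# $200/yr:        $1 = 5e-03 QALY = 2,629.8 minutes (about 1.8 days)
# $20,000/yr:     $1 = 5e-05 QALY = 26.3 minutes
# $10,000,000/yr: $1 = 1e-07 QALY = 0.1 minutes (about 3 seconds)
```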

That’s an extremely rough estimate, of course; it assumes you are in perfect health, all your time is equally valuable and all your purchasing decisions are optimized by purchasing at marginal utility. Don’t take it too literally; based on the above estimate, an hour to you is worth about $2.30, so it would be worth your while to work for even $3 an hour. Here’s a simple correction we should probably make: if only a third of your time is really usable for work, you should expect at least $6.90 an hour—and hey, that’s a little less than the US minimum wage. So I think we’re in the right order of magnitude, but the details have a long way to go.

So let’s hear it, readers: How do you think we can best measure happiness?