Why New Year’s resolutions fail

Jan 1, JDN 2457755

Last week’s post was on Christmas, so by construction this week’s post will be on New Year’s Day.

It is a tradition in many cultures, especially in the US and Europe, to start every new year with a New Year’s resolution, a promise to ourselves to change our behavior in some positive way.

Yet, over 80% of these resolutions fail. Why is this?

If we are honest, most of us would agree that there is something about our own behavior that could stand to be improved. So why do we so rarely succeed in actually making such improvements?

One possibility, which I’m guessing most neoclassical economists would favor, is to say that we don’t actually want to. We may pretend that we do in order to appease others, but ultimately our rational optimization has already chosen that we won’t actually bear the cost to make the improvement.

I think this is actually quite rare. I’ve seen too many people with resolutions they didn’t share with anyone, for example, to think that it’s all about social pressure. And I’ve seen far too many people try very hard to achieve their resolutions, day after day, and yet still fail.

Sometimes we make resolutions that are not entirely within our control, such as “get a better job” or “find a girlfriend” (last year I made a resolution to publish a work of commercial fiction or a peer-reviewed article—and alas, failed at that task, unless I somehow manage it in the next few days). Such resolutions may actually be unwise to make in the first place, as it can feel like breaking a promise to yourself when you’ve actually done all you possibly could.

So let’s set those aside and talk only about things we should be in control over, like “lose weight” or “save more money”. Even these kinds of resolutions typically fail; why? What is this “weakness of will”? How is it possible to really want something that you are in full control over, and yet still fail to accomplish it?

Well, first of all, I should be clear what I mean by “in full control over”. In some sense you’re not in full control, which is exactly the problem. Your conscious mind is not actually an absolute tyrant over your entire body; you’re more like an elected president who has to deal with a legislature in order to enact policy.

You do have a great deal of power over your own behavior, and you can learn to improve this control (much as real executive power in presidential democracies has expanded over the last century!); but there are fundamental limits to just how well you can actually consciously will your body to do anything, limits imposed by billions of years of evolution that established most of the traits of your body and nervous system millions of generations before there even was such a thing as rational conscious reasoning.

One thing that makes a surprisingly large difference is whether your goals are reduced to specific, actionable objectives. “Lose weight” is almost guaranteed to fail. “Lose 30 pounds” is still unlikely to succeed. “Work out for 2 hours per week,” on the other hand, might have a chance. “Save money” is never going to make it, but “move to a smaller apartment and set aside $200 per month” just might.

I think the government metaphor is helpful here; if you are President of the United States and you want something done, do you state some vague, broad goal like “Improve the economy”? No, you make a specific, actionable demand that allows you to enforce compliance, like “increase infrastructure spending by 24% over the next 5 years”. Even then it is possible to fail if you can’t push it through the legislature (in the metaphor, the “legislature” is your habits, instincts and other subconscious processes), but you’re much more likely to succeed if you have a detailed plan.

Another technique that helps is to visualize the benefits of succeeding and the costs of failing, and keep these in your mind. This counteracts the tendency for the costs of succeeding and the benefits of giving up to be more salient—losing 30 pounds sounds nice in theory, but that treadmill is so much work right now!

This salience effect has a lot to do with the fact that human beings are terrible at dealing with the future.

Rationally, we are supposed to use exponential discounting; each successive moment is supposed to be worth less to us than the previous by a fixed proportion, say 5% per year. This is actually a mathematical theorem; if you don’t discount this way, your decisions will be systematically irrational.

And yet… we don’t discount that way. Some behavioral economists argue that we use hyperbolic discounting, in which instead of discounting time by a fixed proportion, we use a different formula that drops off too quickly early on and not quickly enough later on.

But I am increasingly convinced that human beings don’t actually use discounting at all. We have a series of rough-and-ready heuristics for making future judgments, which can sort of act like discounting, but require far less computation than actually calculating a proper discount rate. (Recent empirical evidence seems to be tilting this direction.)

In any case, whatever we do is clearly not a proper rational discount rate. And this means that our behavior can be time-inconsistent; a choice that seems rational at one time may no longer seem rational at a later time. When we’re planning out our year and saying we will hit the treadmill more, it seems like a good idea; but when we actually get to the gym and feel our legs ache as we start running, we begin to regret our decision.
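To make the time-inconsistency concrete, here is a minimal numerical sketch of how a hyperbolic discount function produces exactly this kind of preference reversal; the rewards and the discount parameter are made up purely for illustration:

```python
# Hyperbolic discount factor 1/(1 + k*t); k = 1 per day is an arbitrary choice.
def hyperbolic(t, k=1.0):
    return 1 / (1 + k * t)

small_soon, large_late = 100, 120   # hypothetical rewards, one day apart

# Planning a year ahead (365 vs. 366 days away), waiting the extra day looks better:
print(small_soon * hyperbolic(365) < large_late * hyperbolic(366))   # True

# In the moment (0 vs. 1 day away), the preference flips to the immediate reward:
print(small_soon * hyperbolic(0) > large_late * hyperbolic(1))       # True
```

With an exponential discount function the ranking would be the same at both points in time, which is precisely what dynamic consistency means.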

The challenge, really, is determining which “version” of us is correct! A priori, we don’t actually know whether the view of our distant self contemplating the future or the view of our current self making the choice in the moment is the right one. Actually, when I frame it this way, it almost seems like the self that’s closer to the choice should have better information—and yet typically we think the exact opposite, that it is our past self making plans that really knows what’s best for us.

So where does that come from? Why do we think, at least in most cases, that the “me” which makes a plan a year in advance is the smart one, and the “me” that actually decides in the moment is untrustworthy?

Kahneman has a good explanation for this, in his model of System 1 and System 2. System 1 is simple and fast, but often gets the wrong answer. System 2 usually gets the right answer, but it is complex and slow. When we are making plans, we have a lot of time to think, and we can afford to expend the extra effort to engage the full power of System 2. But when we are living in the moment, choosing what to do right now, we don’t have that luxury of time, and we are forced to fall back on System 1. System 1 is easier—but it’s also much more likely to be wrong.

How, then, do we resolve this conflict? Commitment. (Perhaps that’s why it’s called a New Year’s resolution!)

We make promises to ourselves, commitments that we will feel bad about not following through on.

If we rationally discounted, this would be a baffling thing to do; we’re just imposing costs on ourselves for no reason. But because we don’t discount rationally, commitments allow us to change the calculation for our future selves.

This brings me to one last strategy to use when making your resolutions: Include punishment.

“I will work out at least 2 hours per week, and if I don’t, I’m not allowed to watch TV all weekend.” Now that is a resolution you are actually likely to keep.

To see why, consider the decision problem for your System 2 self today versus your System 1 self throughout the year.

Your System 2 self has done the cost-benefit analysis and ruled that working out 2 hours per week is worthwhile for its health benefits.

If you left it at that, your System 1 self would each day find an excuse to procrastinate the workouts, because at least from where they’re sitting, working out for 2 hours looks a lot more painful than the marginal loss in health from missing just this one week. And of course this will keep happening, week after week—and then 52 weeks go by and you’ve had few if any workouts.

But by adding the punishment of “no TV”, you have imposed an additional cost on your System 1 self, something that they care about. Suddenly the calculation changes; it’s not just 2 hours of workout weighed against vague long-run health benefits, but 2 hours of workout weighed against no TV all weekend. That punishment is surely too much to bear; so you’d best do the workout after all.
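Here is a toy version of that System 1 calculation, with utility numbers I have simply made up to illustrate the logic:

```python
# All values are hypothetical "utility points" as seen by System 1, right now.
workout_cost          = 10   # how unpleasant the 2-hour workout feels in the moment
perceived_health_gain = 2    # how much System 1 values this one week's health benefit
no_tv_penalty         = 15   # how much System 1 dreads a weekend without TV

# Without the commitment: skipping costs nothing, so the workout loses.
workout_value = perceived_health_gain - workout_cost   # -8
print(workout_value > 0)                               # False -> skip the gym

# With the commitment: skipping now means losing TV for the weekend.
print(workout_value > -no_tv_penalty)                  # True -> do the workout
```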

Do it right, and you will rarely if ever have to impose the punishment. But don’t make it too large, or then it will seem unreasonable and you won’t want to enforce it if you ever actually need to. Your System 1 self will then know this, and treat the punishment as nonexistent. (Formally the equilibrium is not subgame perfect; I am gravely concerned that our nuclear deterrence policy suffers from precisely this flaw.) “If I don’t work out, I’ll kill myself” is a recipe for depression, not healthy exercise habits.

But if you set clear, actionable objectives and sufficient but reasonable punishments, there’s at least a good chance you will be in the minority of people who actually succeed in keeping their New Year’s resolution.

And if not, there’s always next year.

Toward an economics of social norms

Sep 17, JDN 2457649

It is typical in economics to assume that prices are set by perfect competition in markets with perfect information. This is obviously ridiculous, so many economists do go further and start looking into possible distortions of the market, such as externalities and monopolies. But almost always the assumption is still that human beings are neoclassical rational agents, what I call “infinite identical psychopaths”, selfish profit-maximizers with endless intelligence and zero empathy.

What happens when we recognize that human beings are not like this, but in fact are empathetic, social creatures, who care about one another and work toward the interests of (what they perceive to be) their tribe? How are prices really set? What actually decides what is made and sold? What does economics become once you understand sociology? (The good news is that experiments are now being done to find out.)

Presumably some degree of market competition is involved, and no small amount of externalities and monopolies. But one of the very strongest forces involved in setting prices in the real world is almost completely ignored, and that is social norms.

Social norms are tremendously powerful. They will drive us to bear torture, fight and die on battlefields, even detonate ourselves as suicide bombs. When we talk about “religion” or “ideology” motivating people to do things, really what we are talking about is social norms. While some weaker norms can be overridden, no amount of economic incentive can ever override a social norm at its full power. Moreover, most of our behavior in daily life is driven by social norms: How to dress, what to eat, where to live. Even the fundamental structure of our lives is written by social norms: Go to school, get a job, get married, raise a family.

Even academic economists, who imagine themselves one part purveyor of ultimate wisdom and one part perfectly rational agent, are clearly strongly driven by social norms—what problems are “interesting”, which researchers are “renowned”, what approaches are “sensible”, what statistical methods are “appropriate”. If economists were perfectly rational, dynamic stochastic general equilibrium models would be in the dustbin of history (because, like string theory, they have yet to lead to a single useful empirical prediction), research journals would not be filled with endless streams of irrelevant but impressive equations (I recently read one that basically spent half a page of calculus re-deriving the concept of GDP—and computer-generated gibberish has been published, because its math looked so impressive), and instead of frequentist p-values (which are often misinterpreted at that), all the statistics would be written in the form of Bayesian log-odds.

Indeed, in light of all this, I often like to say that to a first approximation, all human behavior is social norms.

How does this affect buying and selling? Well, first of all, there are some things we refuse to buy and sell, or at least that most of us refuse to buy and sell, and that we use social pressure, public humiliation, or even the force of law to prevent. You’re not supposed to sell children. You’re not supposed to sell your vote. You’re not even supposed to sell sexual favors (though every society has always had a large segment of people who do, and more recently people are becoming more open to the idea of at least decriminalizing it). If we were neoclassical rational agents, we would have no such qualms; if we want something and someone is willing to sell it to us, we’ll buy it. But as actual human beings with emotions and social norms, we recognize that there is something fundamentally different about selling your vote as opposed to selling a shirt or a television. It’s not always immediately obvious where to draw the line, which is why sex work can be such a complicated issue (You can’t get paid to have sex… unless someone is filming it?). Different societies may do it differently: Part of the challenge of fighting corruption in Third World countries is that much of what we call corruption—and which actually is harmful to long-run economic development—isn’t perceived as “corruption” by the people involved in it, just as social custom (“Of course I’d hire my cousin! What kind of cousin would I be if I didn’t?”). Yet despite all that, almost everyone agrees that there is a line to be drawn. So there are whole markets that theoretically could exist, but don’t, or only exist as tiny black markets most people never participate in, because we consider selling those things morally wrong. Recently a whole subfield of cognitive economics has emerged studying these repugnant markets.

Even if a transaction is not considered so repugnant as to be unacceptable, there are also other classes of goods that are in some sense unsavory; something you really shouldn’t buy, but you’re not a monster for doing so. These are often called sin goods, and they have always included drugs, alcohol, and gambling—and I do mean always, as every human civilization has had these things—they include prostitution where it is legal, and as social norms change they are now beginning to include oil and coal as well (which can only be good for the future of Earth’s climate). Sin goods are systematically more expensive than they should be for their marginal cost, because most people are unwilling to participate in selling them. As a result, the financial returns for producing sin goods are systematically higher. Actually, this could partially explain why Wall Street banks are so profitable; when the banking system is as corrupt as it is—and you’re not imagining that; laundering money for terrorists—then banking becomes a sin good, and good people don’t want to participate in it. Or perhaps the effect runs the other way around: Banking has been viewed as sinful for centuries (in Medieval times, usury was punished much the same way as witchcraft), and as a result only the sort of person who doesn’t care about social and moral norms becomes a banker—and so the banking system becomes horrifically corrupt. Is this a reason for good people to force ourselves to become bankers? Or is there another way—perhaps credit unions?

There are other ways that social norms drive prices as well. We have a concept of a “fair wage”, which is quite distinct from the economic concept of a “market-clearing wage”. When people ask whether someone’s wage is fair, they don’t look at supply and demand and try to determine whether there are too many or too few people offering that service. They ask themselves what the labor is worth—what value it has added—and how hard that person has worked to do it—what cost it bore. Now, these aren’t totally unrelated to supply and demand (people are less likely to supply harder work, people are more likely to demand higher value), so it’s conceivable that these heuristics could lead us to more or less achieve the market-clearing wage most of the time. But there are also some systematic distortions to consider.

Perhaps the most important way fairness matters in economics is necessities: Basic requirements for human life such as food, housing, and medicine. The structure of our society also makes transportation, education, and Internet access increasingly necessary for basic functioning. From the perspective of an economist, it is a bit paradoxical how angry people get when the price of something important (such as healthcare) is increased: If it’s extremely valuable, shouldn’t you be willing to pay more? Why does it bother you less when something like a Lamborghini or a Rolex rises in price, something that almost certainly wasn’t even worth its previous price? You’re going to buy the necessities anyway, right? Well, as far as most economists are concerned, that’s all that matters—what gets bought and sold. But of course as a human being I do understand why people get angry about these things, and it is because they have to buy them anyway. When someone like Martin Shkreli raises the prices on basic goods, we feel exploited. There’s even a way to make this economically formal: When demand is highly inelastic, we are rightly very sensitive to the possibility of a monopoly, because monopolies under inelastic demand can extract huge profits and cause similarly huge amounts of damage to the welfare of their customers. That isn’t quite how most people would put it, but I think that has something to do with the ultimate reason we evolved that heuristic: It’s dangerous to let someone else control your basic necessities, because that gives them enormous power to exploit you. If they control things that aren’t as important to you, that doesn’t matter so much, because you can always do without if you must. So a norm that keeps businesses from overcharging on necessities is very important—and probably not as strong anymore as it should be.
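This intuition matches the standard textbook markup rule for a monopolist, which says the profit-maximizing markup is the inverse of the price elasticity of demand; the numbers below are purely illustrative, not from anything above:

```python
# Lerner rule: (P - MC)/P = 1/|elasticity|, i.e. P = MC / (1 - 1/|e|), valid for |e| > 1.
def monopoly_price(marginal_cost, elasticity):
    return marginal_cost / (1 - 1 / abs(elasticity))

for e in (5, 2, 1.25, 1.05):
    print(e, round(monopoly_price(10, e), 2))
# As demand gets less elastic (closer to "I have to buy it no matter what"),
# the markup over a $10 marginal cost explodes: 12.5, 20, 50, 210.
```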

Another very important way that fairness and markets can be misaligned is talent: What if something is just easier for one person than another? If you achieve the same goal with half the work, should you be rewarded more for being more efficient, or less because you bore less cost? Neoclassical economics doesn’t concern itself with such questions, asking only if supply and demand reached equilibrium. But we as human beings do care about such things; we want to know what wage a person deserves, not just what wage they would receive in a competitive market.

Could we be wrong to do that? Might it be better if we just let the market do its work? In some cases I think that may actually be true. Part of why CEO pay is rising so fast despite being uncorrelated with corporate profitability or even negatively correlated is that CEOs have convinced us (or convinced their boards of directors) that this is fair, that they deserve more stock options. They even convince them that their pay is based on performance, by using highly distorted measures of performance. If boards thought more like economic rational agents, when a CEO asked for more pay they’d ask: “What other company gave you a higher offer?” and if the CEO didn’t have an answer, they’d laugh and refuse the raise. Because in purely economic terms, that is all a salary does: it keeps you from quitting to work somewhere else. The competitive mechanism of the market is supposed to then ensure that your wage aligns with your marginal cost and marginal productivity purely due to that.

On the other hand, there are many groups of people who simply aren’t doing very well in the market: Women, racial minorities, people with disabilities. There are a lot of reasons for this, some of which might go away if markets were made more competitive—the classic argument that competitive markets reward companies that don’t discriminate—but many clearly wouldn’t. Indeed, that argument was never as strong as it at first appears; in a society where social norms are strongly in favor of bigotry, it can be completely economically rational to participate in bigotry to avoid being penalized. When Chick-Fil-A was revealed to have donated to anti-LGBT political groups, many people tried to boycott—but their sales actually increased from the publicity. Honestly it’s a bit baffling that they promised not to donate to such causes anymore; it was apparently a profitable business decision to be revealed as supporters of bigotry. And even when discrimination does hurt economic performance, companies are run by human beings, and they are still quite capable of discriminating regardless. Indeed, the best evidence we have that discrimination is inefficient comes from… businesses that persist in discriminating despite the fact that it is inefficient.

But okay, suppose we actually did manage to make everyone compensated according to their marginal productivity. (Or rather, what Rawls derided: “From each according to his marginal productivity, to each according to his threat advantage.”) The market would then clear and be highly efficient. Would that actually be a good thing? I’m not so sure.

A lot of people are highly unproductive through no fault of their own—particularly children and people with disabilities. Much of this is not discrimination; it’s just that they aren’t as good at providing services. Should we simply leave them to fend for themselves? Then there’s the key point about what marginal means in this case—it means “given what everyone else is doing”. But that means that you can be made obsolete by someone else’s actions, and in this era of rapid technological advancement, jobs become obsolete faster than ever. Unlike a lot of people, I recognize that it makes no sense to keep people working at jobs that can be automated—the machines are better. But still, what do we do with the people whose jobs have been eliminated? Do we treat them as worthless? When automated buses become affordable—and they will; I give it 20 years—do we throw the human bus drivers under them?

One way out is of course a basic income: Let the market wage be what it will, and then use the basic income to provide for what human beings deserve irrespective of their market productivity. I definitely support a basic income, of course, and this does solve the most serious problems like children and quadriplegics starving in the streets.

But as I read more of the arguments by people who favor a job guarantee instead of a basic income, I begin to understand better why they are uncomfortable with the idea: It doesn’t seem fair. A basic income breaks once and for all the link between “a fair day’s work” and “a fair day’s wage”. It runs counter to this very deep-seated intuition most people have that money is what you earn—and thereby deserve—by working, and only by working. That is an extremely powerful social norm, and breaking it will be very difficult; so it’s worth asking: Should we even try to break it? Is there a way to achieve a system where markets are both efficient and fair?

I’m honestly not sure; but I do know that we could make substantial progress from where we currently stand. Most billionaire wealth is pure rent in the economic sense: It’s received by corruption and market distortion, not by efficient market competition. Most poverty is due to failures of institutions, not lack of productivity of workers. As George Monbiot famously wrote, “If wealth was the inevitable result of hard work and enterprise, every woman in Africa would be a millionaire.” Most of the income disparity between White men and others is due to discrimination, not actual skill—and what skill differences there are are largely the result of differences in education and upbringing anyway. So if we do in fact correct these huge inefficiencies, we will also be moving toward fairness at the same time. But still that nagging thought remains: When all that is done, will there come a day where we must decide whether we would rather have an efficient economy or a just society? And if it does, will we decide the right way?

Expensive cheap things, cheap expensive things

July 20, JDN 2457590

My posts recently have been fairly theoretical and mathematically intensive, so I thought I’d take a break from that today and offer you a much simpler, more practical post that you could use right away to improve your own finances.

Cognitive economists are so accustomed to using the word “heuristic” in contrast with words like “optimal” and “rational” that we tend to treat them as something bad. If only we didn’t have these darn heuristics, we could be those perfect rational agents the neoclassicists keep telling us about!

But in fact this is almost completely backwards: Heuristics are the reason human beings are capable of rational thought, unlike, well, anything else in the known universe. To be fair, many animals are capable of some limited rationality, often more than most people realize, but still far less than our own—and what rationality they have is born of the same evolutionary heuristics we use. Computers and robots are now approaching something that could be called rationality, but they still have a long way to go before they’ll really be acting rationally rather than perfectly following precise instructions—and of course we made them, modeled after our own thought processes. Current robots are logical, but not rational. The difference between logic and rationality is rather like that between intelligence and wisdom. Logic dictates that coffee is a berry; rationality says you may not enjoy it in your fruit salad. Robots are still at the point where they’d put coffee in our fruit salads if we told them to include a random mix of berries.

Heuristics are what allows us to make rational decisions 90% of the time. We might wish for something that would make us rational 100% of the time, but no known method exists; the best we can do is learn better heuristics to raise our percentage to perhaps 92% or 95%. With no heuristics at all, we would be 0% rational, not 100%.

So today I’m going to offer you a new heuristic, which I think might help you give your choices that little 2% boost. Expensive cheap things, cheap expensive things.

This is a little mantra to repeat to yourself whenever you have a purchasing decision to make—which, in a consumerist economy like ours, is surely several times a day. The precise definition of “cheap” and “expensive” will vary according to your income (to a billionaire, my lifetime income is a pittance; to someone at the UN poverty level, my annual income is an unimaginable bounty of riches). But for a typical middle-class American, “cheap” can be approximately defined by a Jackson heuristic—anything less than $20 is cheap—and “expensive” by a Benjamin heuristic—anything over $100 is expensive. It doesn’t need to be hard-edged either; you should apply this heuristic more thoroughly for purchases of $10,000 (e.g., cars) than you do for purchases of $1,000, and still more so for purchases of $100,000 (houses).

Expensive cheap things, cheap expensive things; what do I mean by that?

If you are going to buy something cheap, you can choose the expensive variety if you like. If you have the choice of a $1 toothbrush, a $5 toothbrush, and a $10 toothbrush, and you really do like the $10 toothbrush, don’t agonize over it—just buy the damn $10 toothbrush. Obviously there’s no reason to do that if the $1 toothbrush is really just as good for your needs; but if there’s any difference in quality you care about, it is almost certainly worth it to buy the better one.

If you are going to buy something expensive, you should choose the cheap variety if you can. If you have the choice of a $14,000 car, a $15,000 car, and a $16,000 car, you should buy the $14,000 car, unless the other cars are massively superior. You should basically be aiming for the cheapest bare-minimum choice that allows you to meet your needs. (I should be careful using cars as my example, because many old used cars that seem “cheap” are actually more expensive to fuel and maintain than it would cost to simply buy a newer model—but assume you’ve factored in a good estimate of the maintenance cost. You should almost never buy cars that aren’t at least a year old, however—first-year depreciation is huge. Let someone else lease it for a year before you buy it.)

Why do I say this? Many people find the result counter-intuitive: I just told you to spend 900% more on toothbrushes, but insisted that you scrounge to save 12.5% on a car. Even if we adjust for the asymmetry using log points, I told you to indulge 230 log points of toothbrush for a tiny gain, while insisting you bear the no-frills bare minimum to save 13 log points of car.

I have also saved you $1,991. That’s why.

Intuitively we tend to think in terms of proportional prices—this car is 12.5% cheaper than that car, this toothbrush is 900% more expensive than that toothbrush. But you don’t spend money in proportions. You spend it in absolute amounts. So when you decide to make a purchase, you need to train yourself to think in terms of the absolute difference in price—paying $9 more versus paying $2000 more.

Businesses are counting on you not to think this way; that car dealer is surely going to point out that the $16,000 model has a sunroof and upgraded tire rims and whatever, and it’s only 14% more! But unless you would seriously be willing to pay $2,000 to get a sunroof and upgraded tire rims installed later, you should not upgrade to the $16,000 model. Don’t let them bamboozle you with “it’s a $5,000 value!”; it might well cost $5,000 to have it done elsewhere, but that’s not the same thing. Only you can decide whether it’s of sufficient value to you.
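That test is simple enough to write down; here is a minimal sketch, with dollar figures that are just examples:

```python
# Upgrade only if you would pay the absolute price difference to add the extras separately.
def worth_upgrading(base_price, upgrade_price, value_of_extras_to_you):
    return value_of_extras_to_you >= (upgrade_price - base_price)

print(worth_upgrading(14_000, 16_000, 800))   # sunroof and rims worth ~$800 to you: skip it
print(worth_upgrading(1, 10, 15))             # the better toothbrush is worth $15 to you: buy it
```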

There’s another reason this heuristic can be useful, which is that it will tend to pressure you into buying experiences instead of objects—and it is a well-established pattern in cognitive economics that experiences are a more cost-effective source of happiness than objects. “Expensive cheap things, cheap expensive things” doesn’t necessarily pressure toward buying experiences, as one could certainly load up on useless $20 gadgets or spend $5,000 on a luxurious vacation to Paris. But as a general pattern (and heuristics are all about general patterns!) you’re more likely to spend $20 on a dinner or $5,000 on a car. Some of the cheapest things people buy, like dining out with friends, are some of the greatest sources of happiness—you are, in a real sense, buying friendship. Some of the most expensive things people buy, like real estate, are precisely the sort of thing you should be willing to skimp on, because they really won’t bring you happiness. Larger houses are not statistically associated with higher happiness.

Indeed, part of the great crisis of real estate prices (which is a phenomenon across all First World cities, and surprisingly worse in Canada than the US, though worse still in California in particular) probably comes from people not applying this sort of heuristic. “This house is $240,000, but that one is only 10% more and look how much nicer it is!” That’s $24,000. You can buy that nicer house, or you can buy a second car. Or you can have an extra year of your child’s college fund. That is what that 10% actually means. I’m sure this isn’t the primary reason why housing in the US is so ludicrously expensive, but it may be a contributing factor. (Krugman argued similarly during the housing crash.)

Like any heuristic, “Expensive cheap things, cheap expensive things” will sometimes fail you, and if you think carefully you can probably outperform it. But I’ve found it’s a good habit to get into; it has helped me save money more than just about anything else I’ve tried.

What is the price of time?

JDN 2457562

If they were asked outright, “What is the price of time?” most people would find that it sounds nonsensical, like I’ve asked you “What is the diameter of calculus?” or “What is the electric charge of justice?” (It’s interesting that we generally try to assign meaning to such nonsensical questions, and they often seem strangely profound when we do; a good deal of what passes for “profound wisdom” is really better explained as this sort of reaction to nonsense. Deepak Chopra, for instance.)

But there is actually a quite sensible economic meaning of this question, and answering it turns out to have many important implications for how we should run our countries and how we should live our lives.

What we are really asking for is temporal discounting; we want to know how much more money today is worth compared to tomorrow, and how much more money tomorrow is worth compared to two days from now.

If you say that they are exactly the same, your discount rate (your “price of time”) is zero; if that is indeed how you feel, may I please borrow your entire net wealth at 0% interest for the next thirty years? If you like we can even inflation-index the interest rate so it always produces a real interest rate of zero, thus protecting you from potential inflation risk.

What? You don’t like my deal? You say you need that money sooner? Then your discount rate is not zero. Similarly, it can’t be negative; if you actually valued money tomorrow more than money today, you’d gladly give me my loan.

Money today is worth more to you than money tomorrow—the only question is how much more.

There’s a very simple theorem which says that as long as your temporal discounting doesn’t change over time (that is, as long as it is dynamically consistent), it must have a very specific form. I don’t normally use math this advanced in my blog, but this one is so elegant I couldn’t resist. I’ll encase it in blockquotes so you can skim over it if you must.

The value of $1 today relative to… today is of course 1; f(0) = 1.

If you are dynamically consistent, at any time t you should discount tomorrow relative to today the same as you discounted today relative to yesterday, so for all t, f(t+1)/f(t) = f(t)/f(t-1)
Thus, f(t+1)/f(t) is independent of t, and therefore equal to some constant, which we can call r:

f(t+1)/f(t) = r, which implies f(t+1) = r f(t).

Starting at f(0) = 1, we have:

f(0) = 1, f(1) = r, f(2) = r^2

We can prove that this pattern continues to hold by mathematical induction.

Suppose the following is true for some integer k; we already know it works for k = 0, 1, and 2:

f(k) = r^k

Let t = k:

f(k+1) = r f(k)

Therefore:

f(k+1) = r^(k+1)

Which by induction proves that for all integers n:

f(n) = r^n

The name of the variable doesn’t matter. Therefore:

f(t) = r^t

Whether you agree with me that this is beautiful, or you have no idea what I just said, the take-away is the same: If your discount rate is consistent over time, it must be exponential. There must be some constant number 0 < r < 1 such that each successive time period is worth r times as much as the previous. (You can also generalize this to the case of continuous time, where instead of r^t you get e^(-r t). This requires even more advanced math, so I’ll spare you.)

Most neoclassical economists would stop right there. But there are two very big problems with this argument:

(1) It doesn’t tell us the value r should actually be, only that it should be a constant.

(2) No actual human being thinks of time this way.

There is still ongoing research as to exactly how real human beings discount time, but one thing is quite clear from the experiments: It certainly isn’t exponential.

From about 2000 to 2010, the consensus among cognitive economists was that humans discount time hyperbolically; that is, our discount function looks like this:

f(t) = 1/(1 + r t)

In the 1990s there were a couple of experiments supporting hyperbolic discounting. There is even some theoretical work trying to show that this is actually optimal, given a certain kind of uncertainty about the future, and the argument for exponential discounting relies upon certainty we don’t actually have. Hyperbolic discounting could also result if we were reasoning as though we are given a simple interest rate, rather than a compound interest rate.

But even that doesn’t really seem like how humans think, now does it? It’s already weird enough for someone to say “Should I take out this loan at 5%? Well, my discount rate is 7%, so yes.” But I can at least imagine that happening when people are comparing two different interest rates (“Should I pay down my student loans, or my credit cards?”). But I can’t imagine anyone thinking, “Should I take out this loan at 5% APR which I’d need to repay after 5 years? Well, let’s check my discount function, 1/(1+0.05 (5)) = 0.8, multiplied by 1.05^5 = 1.28, the product of which is 1.02, greater than 1, so no, I shouldn’t.” That isn’t how human brains function.

Moreover, recent experiments have shown that people often don’t seem to behave according to what hyperbolic discounting would predict.

Therefore I am very much in the other camp of cognitive economists, who say that we don’t have a well-defined discount function. It’s not exponential, it’s not hyperbolic, it’s not “quasi-hyperbolic” (yes that is a thing); we just don’t have one. We reason about time by simple heuristics. You can’t make a coherent function out of it because human beings… don’t always reason coherently.

Some economists seem to have an incredible amount of trouble accepting that; here we have one from the University of Chicago arguing that hyperbolic discounting can’t possibly exist, because then people could be Dutch-booked out of all their money; but this amounts to saying that human behavior cannot ever be irrational, lest all our money magically disappear. Yes, we know hyperbolic discounting (and heuristics) allow for Dutch-booking; that’s why they’re irrational. If you really want to know the formal assumption this paper makes that is wrong, it assumes that we have complete markets—and yes, complete markets essentially force you to be perfectly rational or die, because the slightest inconsistency in your reasoning results in someone convincing you to bet all your money on a sure loss. Why was it that we wanted complete markets, again? (Oh, yes, the fanciful Arrow-Debreu model, the magical fairy land where everyone is perfectly rational and all markets are complete and we all have perfect information and the same amount of wealth and skills and the same preferences, where everything automatically achieves a perfect equilibrium.)

There was a very good experiment on this, showing that rather than discount hyperbolically, behavior is better explained by a heuristic in which people judge which of two options is better by a weighted sum of the absolute distance in time plus the relative distance in time. Now that sounds like something human beings might actually do. “$100 today or $110 tomorrow? That’s only 1 day away, but it’s also twice as long. I’m not waiting.” “$100 next year, or $110 in a year and a day? It’s only 1 day apart, and it’s only slightly longer, so I’ll wait.”

That might not actually be the precise heuristic we use, but it at least seems like one that people could use.
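For concreteness, here is one way such a heuristic might be written down; the functional form and the weights are my own guesses for illustration, not the ones estimated in the experiment:

```python
# Weigh the relative money gain against a weighted sum of the absolute delay
# and the delay relative to how far away both options are.
def prefers_waiting(x_now, x_later, t_now, t_later, w_abs=0.02, w_rel=0.25):
    money_gain = (x_later - x_now) / x_now           # e.g. 0.10 for $100 vs. $110
    dt = t_later - t_now                             # absolute delay, in days
    mean_t = (t_now + t_later) / 2 or 1              # avoid dividing by zero
    delay_cost = w_abs * dt + w_rel * (dt / mean_t)
    return money_gain > delay_cost

print(prefers_waiting(100, 110, 0, 1))       # False: tomorrow is "twice as long", don't wait
print(prefers_waiting(100, 110, 365, 366))   # True: a year and a day is only slightly longer, wait
```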

John Duffy, whom I hope to work with at UCI starting this fall, has been working on another experiment to test a different heuristic, based on the work of Daniel Kahneman, saying essentially that we have a fast, impulsive, System 1 reasoning layer and a slow, deliberative, System 2 reasoning layer; the result is that our judgments combine both “hand to mouth” where our System 1 essentially tries to get everything immediately and spend whatever we can get our hands on, and a more rational assessment by System 2 that might actually resemble an exponential discount rate. When the judgment is about something five minutes away, System 1’s voice is overwhelming; but if we’re already planning a year out, System 1 doesn’t even care anymore and System 2 can take over. This model also has the nice feature of explaining why people with better self-control seem to behave more like they use exponential discounting, and why people do on occasion reason more or less exponentially, while I have literally never heard anyone try to reason hyperbolically, only economic theorists trying to use hyperbolic models to explain behavior.

Another theory is that discounting is “subadditive”, that is, if you break up a long time interval into many short intervals, people will discount it more, because it feels longer that way. Imagine a century. Now imagine a year, another year, another year, all the way up to 100 years. Now imagine a day, another day, another day, all the way up to 365 days for the first year, and then 365 days for the second year, and that on and on up to 100 years. It feels longer, doesn’t it? It is of course exactly the same. This can account for some weird anomalies in choice behavior, but I’m not convinced it’s as good as the two-system model.

Another theory is that we simply have a “present bias”, which we treat as a sort of fixed cost that we incur regardless of what the payments are. I like this because it is so supremely simple, but there’s something very fishy about it, because in this experiment it was just fixed at $4, and that can’t be right. It must be fixed at some proportion of the rewards, or something like that; or else we would always exhibit near-perfect exponential discounting for large amounts of money, which is more expensive to test (quite directly), but still seems rather unlikely.

Why is this important? This post is getting long, so I’ll save it for future posts, but in short, the ways that we value future costs and benefits, both as we actually do, and as we ought to, have far-reaching implications for everything from inflation to saving to environmental sustainability.

Prospect Theory: Why we buy insurance and lottery tickets

JDN 2457061 PST 14:18.

Today’s topic is called prospect theory. Prospect theory is basically what put cognitive economics on the map; it was the knock-down argument that Kahneman used to show that human beings are not completely rational in their economic decisions. It all goes back to a 1979 paper by Kahneman and Tversky that now has 34000 citations (yes, we’ve been having this argument for a rather long time now). In the 1990s it was refined into cumulative prospect theory, which is more mathematically precise but basically the same idea.

What was that argument? People buy both insurance and lottery tickets.

The “both” is very important. Buying insurance can definitely be rational—indeed, typically is. Buying lottery tickets could theoretically be rational, under very particular circumstances. But they cannot both be rational at the same time.

To see why, let’s talk some more about marginal utility of wealth. Recall that a dollar is not worth the same to everyone; to a billionaire a dollar is a rounding error, to most of us it is a bottle of Coke, but to a starving child in Ghana it could be life itself. We typically observe diminishing marginal utility of wealth—the more money you have, the less another dollar is worth to you.

If we sketch a graph of your utility versus wealth it would look something like this:

[Figure: Marginal_utility_wealth]

Notice how it increases as your wealth increases, but at a rapidly diminishing rate.

If you have diminishing marginal utility of wealth, you are what we call risk-averse. If you are risk-averse, you’ll (sometimes) want to buy insurance. Let’s suppose the units on that graph are tens of thousands of dollars. Suppose you currently have an income of $50,000. You are offered the chance to pay $10,000 a year to buy unemployment insurance, so that if you lose your job, instead of making $10,000 on welfare you’ll make $30,000 on unemployment. You think you have about a 20% chance of losing your job.

If you had constant marginal utility of wealth, this would not be a good deal for you. Your expected value of money would be reduced if you buy the insurance: Before you had an 80% chance of $50,000 and a 20% chance of $10,000 so your expected amount of money is $42,000. With the insurance you have an 80% chance of $40,000 and a 20% chance of $30,000 so your expected amount of money is $38,000. Why would you take such a deal? That’s like giving up $4,000 isn’t it?

Well, let’s look back at that utility graph. At $50,000 your utility is 1.80, uh… units, er… let’s say QALY. 1.80 QALY per year, meaning you live 80% better than the average human. Maybe, I guess? Doesn’t seem too far off. In any case, the units of measurement aren’t that important.

[Figure: Insurance_options]

By buying insurance your effective income goes down to $40,000 per year, which lowers your utility to 1.70 QALY. That’s a fairly significant hit, but it’s not unbearable. If you lose your job (20% chance), you’ll fall down to $30,000 and have a utility of 1.55 QALY. Again, noticeable, but bearable. Your overall expected utility with insurance is therefore 1.67 QALY.

But what if you don’t buy insurance? Well then you have a 20% chance of taking a big hit and falling all the way down to $10,000 where your utility is only 1.00 QALY. Your expected utility is therefore only 1.64 QALY. You’re better off going with the insurance.

And this is how insurance companies make a profit (well; the legitimate way anyway; they also like to gouge people and deny coverage to cancer patients, of course); on average, they make more from each customer than they pay out, but customers are still better off because they are protected against big losses. In this case, the insurance company profits $4,000 per customer per year, customers each get 30 milliQALY per year (about the same utility as an extra $2,000, more or less), everyone is happy.
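The whole calculation fits in a few lines; here it is as a sketch, using the same approximate utility values read off the graph above:

```python
u = {50_000: 1.80, 40_000: 1.70, 30_000: 1.55, 10_000: 1.00}  # income -> QALY per year

# Without insurance: 80% chance of keeping $50k, 20% chance of falling to $10k welfare.
eu_without = 0.8 * u[50_000] + 0.2 * u[10_000]   # = 1.64

# With insurance: pay the $10k premium, so $40k if employed, $30k if not.
eu_with = 0.8 * u[40_000] + 0.2 * u[30_000]      # = 1.67

print(eu_without, eu_with)   # insurance wins in utility despite losing $4,000 in expectation
```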

But if this is your marginal utility of wealth—and it most likely is, approximately—then you would never want to buy a lottery ticket. Let’s suppose you actually have pretty good odds; it’s a 1 in 1 million chance of $1 million for a ticket that costs $2. This means that the state is going to take in about $2 million for every $1 million they pay out to a winner.

That’s about as good as your odds for a lottery are ever going to get; usually it’s more like a 1 in 400 million chance of $150 million for $1, which is an even bigger difference than it sounds, because $150 million is nowhere near 150 times as good as $1 million. It’s a bit better from the state’s perspective though, because they get to receive $400 million for every $150 million they pay out.

For your convenience I have zoomed out the graph so that you can see 100, which is an income of $1 million (which you’ll have this year if you win; to get it next year, you’ll have to play again). You’ll notice I did not have to zoom out the vertical axis, because 20 times as much money only ends up being about 2 times as much utility. I’ve marked with lines the utility of $50,000 (1.80, as we said before) versus $1 million (3.30).

[Figure: Lottery_utility]

What about the utility of $49,998 which is what you’ll have if you buy the ticket and lose? At this number of decimal places you can’t see the difference, so I’ll need to go out a few more. At $50,000 you have 1.80472 QALY. At $49,998 you have 1.80470 QALY. That $2 only costs you 0.00002 QALY, 20 microQALY. Not much, really; but of course not, it’s only $2.

How much does the 1 in 1 million chance of $1 million give you? Even less than that. Remember, the utility gain for going from $50,000 to $1 million is only 1.50 QALY. So you’re adding one one-millionth of that in expected utility, which is of course 1.5 microQALY, or 0.0000015 QALY.

That $2 may not seem like it’s worth much, but that 1 in 1 million chance of $1 million is worth less than one tenth as much. Again, I’ve tried to make these figures fairly realistic; they are by no means exact (I don’t actually think $49,998 corresponds to exactly 1.804699 QALY), but the order of magnitude difference is right. You gain about ten times as much utility from spending that $2 on something you want than you do on taking the chance at $1 million.
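And the same arithmetic for the lottery ticket, using the utility figures quoted above:

```python
u_50000   = 1.80472   # QALY at $50,000
u_49998   = 1.80470   # QALY at $49,998 (i.e. after buying the $2 ticket)
u_million = 3.30      # QALY at $1,000,000

utility_cost  = u_50000 - u_49998               # about 20 microQALY
expected_gain = (u_million - u_50000) / 1e6     # about 1.5 microQALY
print(utility_cost, expected_gain)              # the ticket costs ~10x what the chance is worth
```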

I said before that it is theoretically possible for you to have a utility function for which the lottery would be rational. For that you’d need to have increasing marginal utility of wealth, so that you could be what we call risk-seeking. Your utility function would have to look like this:

[Figure: Weird_utility]

There’s no way marginal utility of wealth looks like that. This would be saying that it would hurt Bill Gates more to lose $1 than it would hurt a starving child in Ghana, which makes no sense at all. (It certainly would make you wonder why he’s so willing to give it to them.) So frankly even if we didn’t buy insurance the fact that we buy lottery tickets would already look pretty irrational.

But in order for it to be rational to buy both lottery tickets and insurance, our utility function would have to be totally nonsensical. Maybe it could look like this or something; marginal utility decreases normally for awhile, and then suddenly starts going upward again for no apparent reason:

[Figure: Weirder_utility]

Clearly it does not actually look like that. Not only would this mean that Bill Gates is hurt more by losing $1 than the child in Ghana, we have this bizarre situation where the middle class are the people who have the lowest marginal utility of wealth in the world. Both the rich and the poor would need to have higher marginal utility of wealth than we do. This would mean that apparently yachts are just amazing and we have no idea. Riding a yacht is the pinnacle of human experience, a transcendence beyond our wildest imaginings; and riding a slightly bigger yacht is even more amazing and transcendent. Love and the joy of a life well-lived pale in comparison to the ecstasy of adding just one more layer of gold plate to your Ferrari collection.

Whereas increasing marginal utility is merely ridiculous, this is outright special pleading. You’re just making up bizarre utility functions that perfectly line up with whatever behavior people happen to have so that you can still call it rational. It’s like saying, “It could be perfectly rational! Maybe he enjoys banging his head against the wall!”

Kahneman and Tversky had a better idea. They realized that human beings aren’t so great at assessing probability, and furthermore tend not to think in terms of total amounts of wealth or annual income at all, but in terms of losses and gains. Through a series of clever experiments they showed that we are not so much risk-averse as we are loss-averse; we are actually willing to take more risk if it means that we will be able to avoid a loss.

In effect, we seem to be acting as if our utility function looks like this, where the zero no longer means “zero income”, it means “whatever we have right now”:

[Figure: Prospect_theory]

We tend to weight losses about twice as much as gains, and we tend to assume that losses also diminish in their marginal effect the same way that gains do. That is, we would only take a 50% chance to lose $1000 if it meant a 50% chance to gain $2000; but we’d take a 10% chance at losing $10,000 to save ourselves from a guaranteed loss of $1000.
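A minimal sketch of such a value function, using ballpark parameters (a curvature of about 0.88 and a loss-aversion factor of about 2; these are typical estimates, not precise ones):

```python
def v(x, alpha=0.88, loss_aversion=2.0):
    """Value of a gain or loss x relative to the reference point (what we have right now)."""
    if x >= 0:
        return x ** alpha
    return -loss_aversion * ((-x) ** alpha)

# 50% chance to gain $2000 vs. 50% chance to lose $1000: roughly a wash.
print(0.5 * v(2000) + 0.5 * v(-1000))

# A sure loss of $1000 vs. a 10% chance of losing $10,000: the gamble looks less bad.
print(v(-1000), 0.1 * v(-10_000))
```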

This can explain why we buy insurance, provided that you frame it correctly. One of the things about prospect theory—and about human behavior in general—is that it exhibits framing effects: The answer we give depends upon the way you ask the question. That’s so totally obviously irrational it’s honestly hard to believe that we do it; but we do, and sometimes in really important situations. Doctors—doctors—will decide a moral dilemma differently based on whether you describe it as “saving 400 out of 600 patients” or “letting 200 out of 600 patients die”.

In this case, you need to frame insurance as the default option, and not buying insurance as an extra risk you are taking. Then saving money by not buying insurance is a gain, and therefore less important, while a higher risk of a bad outcome is a loss, and therefore important.

If you frame it the other way, with not buying insurance as the default option, then buying insurance is taking a loss by making insurance payments, only to get a gain if the insurance pays out. Suddenly the exact same insurance policy looks less attractive. This is a big part of why Obamacare has been effective but unpopular. It was set up as a fine—a loss—if you don’t buy insurance, rather than as a bonus—a gain—if you do buy insurance. The latter would be more expensive, but we could just make it up by taxing something else; and it might have made Obamacare more popular, because people would see the government as giving them something instead of taking something away. But the fine does a better job of framing insurance as the default option, so it motivates more people to actually buy insurance.

But even that would still not be enough to explain how it is rational to buy lottery tickets (Have I mentioned how it’s really not a good idea to buy lottery tickets?), because buying a ticket is a loss and winning the lottery is a gain. You actually have to get people to somehow frame not winning the lottery as a loss, making winning the default option despite the fact that it is absurdly unlikely. But I have definitely heard people say things like this: “Well if my numbers come up and I didn’t play that week, how would I feel then?” Pretty bad, I’ll grant you. But how much you wanna bet that never happens? (They’ll bet… the price of the ticket, apparently.)

In order for that to work, people either need to dramatically overestimate the probability of winning, or else ignore it entirely. Both of those things totally happen.

First, we overestimate the probability of rare events and underestimate the probability of common events—this is actually the part that makes it cumulative prospect theory instead of just regular prospect theory. If you make a graph of perceived probability versus actual probability, it looks like this:

[Figure: cumulative_prospect]

We don’t make much distinction between 40% and 60%, even though that’s actually pretty big; but we make a huge distinction between 0% and 0.00001% even though that’s actually really tiny. I think we basically have categories in our heads: “Never, almost never, rarely, sometimes, often, usually, almost always, always.” Moving from 0% to 0.00001% is going from “never” to “almost never”, but going from 40% to 60% is still in “often”. (And that for some reason reminded me of “Well, hardly ever!”)
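One commonly used functional form for that perceived-probability curve, with a parameter value (about 0.61) that is a typical estimate; take all of it as illustrative:

```python
def w(p, gamma=0.61):
    # Inverse-S-shaped probability weighting from cumulative prospect theory.
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

for p in (0.00001, 0.01, 0.4, 0.6, 0.99):
    print(p, round(w(p), 4))
# Tiny probabilities get blown up enormously (0.00001 -> ~0.0009),
# while the gap between 40% and 60% gets compressed (~0.37 vs. ~0.47).
```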

But that’s not even the worst of it. After all that work to explain how we can make sense of people’s behavior in terms of something like a utility function (albeit a distorted one), I think there’s often a simpler explanation still: Regret aversion under total neglect of probability.

Neglect of probability is self-explanatory: You totally ignore the probability. But what’s regret aversion, exactly? Unfortunately I’ve had trouble finding any good popular sources on the topic; it’s all scholarly stuff. (Maybe I’m more cutting-edge than I thought!)

The basic idea is that you minimize regret, where regret can be formalized as the difference in utility between the outcome you got and the best outcome you could have gotten. In effect, it doesn’t matter whether something is likely or unlikely; you only care how bad it is.

This explains insurance and lottery tickets in one fell swoop: With insurance, you have the choice of risking a big loss (big regret) which you can avoid by paying a small amount (small regret). You take the small regret, and buy insurance. With lottery tickets, you have the chance of getting a large gain (big regret if you miss out on it) which you can take a shot at by paying a small amount (small regret). You take the small regret, and buy the ticket.
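Here is what that looks like as a literal decision rule (minimax regret, with probabilities ignored entirely); the payoffs are round hypothetical numbers:

```python
# Rows: actions; columns: states of the world; entries: money outcomes.
lottery = {
    "buy ticket":  {"numbers hit": 1_000_000 - 2, "numbers miss": -2},
    "skip ticket": {"numbers hit": 0,             "numbers miss": 0},
}

def max_regret(choices, action):
    states = next(iter(choices.values()))
    return max(
        max(choices[a][s] for a in choices) - choices[action][s]
        for s in states
    )

for action in lottery:
    print(action, max_regret(lottery, action))
# Skipping exposes you to a possible regret of $999,998 (the missed jackpot);
# buying exposes you to a regret of only $2, so the regret-minimizer buys the ticket.
```

Run the same rule on the insurance table and it picks the insurance: a small certain regret beats the possibility of a catastrophic one.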

This can also explain why a typical American’s fears go in the order terrorists > Ebola > sharks > > cars > cheeseburgers, while the actual risk of dying goes in almost the opposite order, cheeseburgers > cars > > terrorists > sharks > Ebola. (Terrorists are scarier than sharks and Ebola and actually do kill more Americans! Yay, we got something right! Other than that it is literally reversed.)

Dying from a terrorist attack would be horrible; in addition to your own death you have all the other likely deaths and injuries, and the sheer horror and evil of the terrorist attack itself. Dying from Ebola would be almost as bad, with gruesome and agonizing symptoms. Dying of a shark attack would be still pretty awful, as you get dismembered alive. But dying in a car accident isn’t so bad; it’s usually over pretty quick and the event seems tragic but ordinary. And dying of heart disease and diabetes from your cheeseburger overdose will happen slowly over many years, you’ll barely even notice it coming and probably die rapidly from a heart attack or comfortably in your sleep. (Wasn’t that a pleasant paragraph? But there’s really no other way to make the point.)

If we try to estimate the probability at all—and I don’t think most people even bother—it isn’t by rigorous scientific research; it’s usually by the availability heuristic: How many examples can you think of in which that event happened? If you can think of a lot, you assume that it happens a lot.

And that might even be reasonable, if we still lived in hunter-gatherer tribes or small farming villages and the 150 or so people you knew were the only people you ever heard about. But now that we have live TV and the Internet, news can get to us from all around the world, and the news isn’t trying to give us an accurate assessment of risk, it’s trying to get our attention by talking about the biggest, scariest, most exciting things that are happening around the world. The amount of news attention an item receives is in fact in inverse proportion to the probability of its occurrence, because things are more exciting if they are rare and unusual. Which means that if we are estimating how likely something is based on how many times we heard about it on the news, our estimates are going to be almost exactly reversed from reality. Ironically it is the very fact that we have more information that makes our estimates less accurate, because of the way that information is presented.

It would be a pretty boring news channel that spent all day saying things like this: “82 people died in car accidents today, and 1657 people had fatal heart attacks, 11.8 million had migraines, and 127 million played the lottery and lost; in world news, 214 countries did not go to war, and 6,147 children starved to death in Africa…” This would, however, be vastly more informative.

In the meantime, here are a couple of counter-heuristics I recommend to you: Don’t think about losses and gains; think about where you are and where you might be. Don’t say, “I’ll gain $1,000”; say “I’ll raise my income this year to $41,000.” Definitely do not think in terms of the percentage price of things; think in terms of absolute amounts of money. “Cheap expensive things, expensive cheap things” is a motto of mine; go ahead and buy the $5 toothbrush instead of the $1 one, because that’s only $4 more. But be very hesitant to buy the $22,000 car instead of the $21,000 one, because that’s $1,000 more. If you need to estimate the probability of something, actually look it up; don’t try to guess based on what it feels like the probability should be. Make this unprecedented access to information work for you instead of against you. If you want to know how many people die in car accidents each year, you can literally ask Google and it will tell you (I tried it—it’s 1.3 million worldwide). The fatality rate of a given disease versus the risk of its vaccine, the safety rating of a particular brand of car, the number of airplane crash deaths last month, the total number of terrorist attacks, the probability of becoming a university professor, the average functional lifespan of a new television—all these things and more await you at the click of a button. Even if you think you’re pretty sure, why not look it up anyway?
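To make the “absolute dollars, not percentages” rule concrete, here is a trivial Python comparison using the toothbrush and car numbers above:

    # The percentage framing makes the toothbrush upgrade look huge and the car
    # upgrade look trivial; the absolute framing gets it the right way around.

    def absolute_extra(cheap, pricey):
        return pricey - cheap

    def percent_extra(cheap, pricey):
        return 100 * (pricey - cheap) / cheap

    print(absolute_extra(1, 5), percent_extra(1, 5))                      # $4 more, 400% more
    print(absolute_extra(21_000, 22_000), percent_extra(21_000, 22_000))  # $1,000 more, ~4.8% more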

Perhaps then we can make prospect theory wrong by making ourselves more rational.

The World Development Report is on cognitive economics this year!

JDN 2457013 EST 21:01.

On a personal note, I can now proudly report that I have successfully defended my thesis, “Corruption, ‘the Inequality Trap’, and ‘the 1% of the 1%’”, and have now completed a master’s degree in economics. I’m back home in Michigan for the holidays (hence my use of Eastern Standard Time), and then, well… I’m not entirely sure. I have a gap of about six months before PhD programs start. I have a number of job applications out, but unless I get a really good offer (such as the position at the International Food Policy Research Institute in DC) I think I may just stay in Michigan for a while and work on my own projects, particularly publishing two of my books (my nonfiction magnum opus, The Mathematics of Tears and Joy, and my first novel, First Contact) and making some progress on a couple of research papers—ideally publishing one of them as well. But the future for me right now is quite uncertain, and that is now my major source of stress. Ironically, I’d probably be less stressed if I were working full-time, because I would have a clear direction and sense of purpose. If I could have any job in the world, it would be a hard choice between a professorship at UC Berkeley and a research position at the World Bank.

Which brings me to the topic of today’s post: The people who do my dream job have just released a report showing that they basically agree with me on how it should be done.

If you have some extra time, please take a look at the World Bank World Development Report. They put one out each year, and it provides a rigorous and thorough (236 pages) but quite readable summary of the most important issues in the world economy today. It’s not exactly light summer reading, but nor is it the usual morass of arcane jargon. If you like my blog, you can probably follow most of the World Development Report. If you don’t have time to read the whole thing, you can at least skim through all the sidebars and figures to get a general sense of what it’s all about. Much of the report is written in the form of personal vignettes that make the general principles more vivid; but these are not mere anecdotes, for the report rigorously cites an enormous volume of empirical research.

The title of the 2015 report? “Mind, Society, and Behavior”. In other words, cognitive economics. The world’s foremost international economic institution has just endorsed cognitive economics and rejected neoclassical economics, and their report on the subject provides a brilliant introduction to the subject replete with direct applications to international development.

For someone like me who lives and breathes cognitive economics, the report is pure joy. It’s all there, from the anchoring heuristic to social proof, from corruption to discrimination. The report is broadly divided into three parts.

Part 1 explains the theory and evidence of cognitive economics, subdivided into “thinking automatically” (heuristics), “thinking socially” (social cognition), and “thinking with mental models” (bounded rationality). (If I wrote it I’d also include sections on the tribal paradigm and narrative, but of course I’ll have to publish that stuff in the actual research literature first.) Anyway, the report is so amazing as it is that I really can’t complain. It includes some truly brilliant deorbits of neoclassical economics, such as this one from page 47: “In other words, the canonical model of human behavior is not supported in any society that has been studied.”

Part 2 uses cognitive economic theory to analyze and improve policy. This is the core of the report, with chapters on poverty, childhood, finance, productivity, ethnography, health, and climate change. So many different policies are analyzed I’m not sure I can summarize them with any justice, but a few particularly stuck out: First, the high cognitive demands of poverty can basically explain the whole observed difference in IQ between rich and poor people—so contrary to the right-wing belief that people are poor because they are stupid, in fact people seem stupid because they are poor. Simplifying the procedures for participation in social welfare programs (which is desperately needed, I say with a stack of incomplete Medicaid paperwork on my table—even I find these packets confusing, and I have a master’s degree in economics) not only increases their uptake but also makes people more satisfied with them—and of course a basic income could simplify social welfare programs enormously. “Are you a US citizen? Is it the first of the month? Congratulations, here’s $670.” Another finding that I found particularly noteworthy is that productivity is in many cases enhanced by unconditional gifts more than it is by incentives that are conditional on behavior—which goes against the very core of neoclassical economic theory. (It also gives us yet another item on the enormous list of benefits of a basic income: Far from reducing work incentives by the income effect, an unconditional basic income, as a shared gift from your society, may well motivate you even more than the same payment as a wage.)

Part 3 is a particularly bold addition: It turns the tables and applies cognitive economics to economists themselves, showing that human irrationality is by no means limited to idiots or even to poor people (as the report discusses in chapter 4, there are certain biases that poor people exhibit more—but there are also some they exhibit less); all human beings are limited by the same basic constraints, and economists are human beings. We like to think of ourselves as infallibly rational, but we are nothing of the sort. Even after years of studying cognitive economics I still sometimes catch myself making mistakes based on heuristics, particularly when I’m stressed or tired. As a long-term example, I have a number of vague notions of entrepreneurial projects I’d like to do, but none for which I have been able to muster the effort and confidence to actually seek loans or investors. Rationally, I should either commit to them or abandon them, yet I cannot quite bring myself to do either. And then of course I’ve never met anyone who didn’t procrastinate to some extent, and actually those of us who are especially smart often seem especially prone—though we often adopt the strategy of “active procrastination”, in which we end up doing something else useful while procrastinating (my apartment becomes cleanest when I have an important project to work on), or purposely choose to work under pressure because we are more effective that way.

And the World Bank pulled no punches here, showing experiments on World Bank economists clearly demonstrating confirmation bias, sunk-cost fallacy, and what the report calls “home team advantage”, more commonly called ingroup-outgroup bias—which is basically a form of the much more general principle that I call the tribal paradigm.

If there is one flaw in the report, it’s that it’s quite long and fairly exhausting to read, which means that many people won’t even try and many who do won’t make it all the way through. (The fact that it doesn’t seem to be available in hard copy makes it worse; it’s exhausting to read lengthy texts online.) We only have so much attention and processing power to devote to a task, after all—which is kind of the whole point, really.

Are humans rational?

JDN 2456928 PDT 11:21.

The central point of contention between cognitive economists and neoclassical economists hinges upon the word “rational”: Are humans rational? What do we mean by “rational”?

Neoclassicists are very keen to insist that they think humans are rational, and often characterize the cognitivist view as saying that humans are irrational. (Dan Ariely has a habit of feeding this view, titling books things like Predictably Irrational and The Upside of Irrationality.) But I really don’t think this is the right way to characterize the difference.

Daniel Kahneman has a somewhat better formulation (from Thinking, Fast and Slow): “I often cringe when my work is credited as demonstrating that human choices are irrational, when in fact our research only shows that Humans are not well described by the rational-agent model.” (Yes, he capitalizes the word “Humans” throughout, which is annoying; but in general it is a great book.)

The problem is that saying “humans are irrational” has the connotation of a universal statement; it seems to be saying that everything we do, all the time, is always and everywhere utterly irrational. And this of course could hardly be further from the truth; we would not have even survived in the savannah, let alone invented the Internet, if we were that irrational. If we simply lurched about randomly without any concept of goals or response to information in the environment, we would have starved to death millions of years ago.

But at the same time, the neoclassical definition of “rational” obviously does not describe human beings. We aren’t infinite identical psychopaths. Particularly bizarre (and frustrating) is the continued insistence that rationality entails selfishness; apparently economists are getting all their philosophy from Ayn Rand (who barely even qualifies as such), rather than the greats such as Immanuel Kant and John Stuart Mill or even the best contemporary philosophers such as Thomas Pogge and John Rawls. All of these latter would be baffled by the notion that selfless compassion is irrational.

Indeed, Kant argued that rationality implies altruism, that a truly coherent worldview requires assent to universal principles that are morally binding on yourself and every other rational being in the universe. (I am not entirely sure he is correct on this point, and in any case it is clear to me that neither you nor I are anywhere near advanced enough beings to seriously attempt such a worldview. Where neoclassicists envision infinite identical psychopaths, Kant envisions infinite identical altruists. In reality we are finite diverse tribalists.)

But even if you drop selfishness, the requirements of perfect information and expected utility maximization are still far too strong to apply to real human beings. If that’s your standard for rationality, then indeed humans—like all beings in the real world—are irrational.

The confusion, I think, comes from the huge gap between ideal rationality and total irrationality. Our behavior is neither perfectly optimal nor hopelessly random, but somewhere in between.

In fact, we are much closer to the side of perfect rationality! Our brains are limited, so they operate according to heuristics: simplified, approximate rules that are correct most of the time. Clever experiments—or complex environments very different from how we evolved—can cause those heuristics to fail, but we must not forget that the reason we have them is that they work extremely well in most cases in the environment in which we evolved. We are about 90% rational—but woe betide that other 10%.

The most obvious example is phobias: Why are people all over the world afraid of snakes, spiders, falling, and drowning? Because those used to be leading causes of death. In the African savannah 200,000 years ago, you weren’t going to be hit by a car, shot with a rifle bullet, or poisoned by carbon monoxide. (You’d probably die of malaria, actually; for that one, instead of evolving to be afraid of mosquitoes we evolved a biological defense mechanism—the sickle-cell trait.) Death in general was actually much more likely then, particularly for children.

A similar case can be made for other heuristics we use: We are tribal because the proper functioning of our 100-person tribe used to be the most important factor in our survival. We are racist because people physically different from us were usually part of rival tribes and hence potential enemies. We hoard resources even when our technology allows abundance, because a million years ago no such abundance was possible and every meal might be our last.

When asked how common something is, we don’t calculate a posterior probability based upon Bayesian inference—that’s hard. Instead we try to think of examples—that’s easy. That’s the availability heuristic. And if we didn’t have mass media constantly giving us examples of rare events we wouldn’t otherwise have known about, the availability heuristic would actually be quite accurate. Right now, people think of terrorism as common (even though it’s astoundingly rare) because it’s always all over the news; but if you imagine living in an ancient tribe—or even a medieval village!—anything you heard about that often would almost certainly be something actually worth worrying about. Our level of panic over Ebola is totally disproportionate; but in the 14th century that same level of panic about the Black Death would have been entirely justified.

When we want to know whether something is a member of a category, again we don’t try to calculate the actual probability; instead we think about how well it seems to fit a model we have of the paradigmatic example of that category—the representativeness heuristic. You see a Black man on a street corner in New York City at night; how likely is it that he will mug you? Pretty small, actually, because there were fewer than 200,000 crimes in all of New York City last year in a city of 8,000,000 people—meaning the probability that any given person committed a crime in the previous year was only about 2.5%; the probability on any given day would then be less than 0.01%. Maybe having those attributes raises the probability somewhat, but you can still be about 99% sure that this guy isn’t going to mug you tonight. But since he seemed representative of the category “criminals” in your mind, your mind didn’t bother asking how many criminals there are in the first place—an effect called base rate neglect. Even 200 years ago—let alone 1 million—you didn’t have these sorts of reliable statistics, so what else would you use? You basically had no choice but to assess based upon representative traits.
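For the curious, here is that base-rate arithmetic spelled out in a few lines of Python (using the rough figures from the paragraph above, not official crime statistics):

    # Base rate first, representativeness second. These are the post's rough numbers.
    crimes_per_year = 200_000
    population = 8_000_000

    p_year = crimes_per_year / population   # ~0.025, i.e. about 2.5%
    p_day = p_year / 365                    # ~0.00007, i.e. well under 0.01%

    print(f"P(committed a crime this year)    ~ {p_year:.1%}")
    print(f"P(commits a crime on a given day) ~ {p_day:.4%}")
    # Base rate neglect means skipping this step entirely and judging only by how
    # "representative" the person seems of the category "criminal".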

As you probably know, people have trouble dealing with big numbers, and this is a problem in our modern economy, where we actually need to keep track of millions or billions or even trillions of dollars moving around. And really I shouldn’t say it that way, because $1 million ($1,000,000) is an amount of money an upper-middle-class person could have in a retirement fund, while $1 billion ($1,000,000,000) would put you among the 1,000 or so richest people in the world, and $1 trillion ($1,000,000,000,000) is enough to end world hunger for at least the next 15 years (it would only take about $1.5 trillion to do it forever, by paying only the interest on the endowment). It’s important to keep this in mind, because otherwise the natural tendency of the human mind is to say “big number” and ignore these enormous differences—it’s called scope neglect. But how often do you really deal with numbers that big? In ancient times, never. Even in the 21st century, not very often. You’ll probably never have $1 billion, and even $1 million is a stretch—so it seems a bit odd to say that you’re irrational if you can’t tell the difference. I guess technically you are, but it’s an error that is unlikely to come up in your daily life.
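As a sanity check on those hunger figures, here is the back-of-the-envelope arithmetic in Python; the rate of return is an assumed illustrative number, not a sourced one.

    # $1 trillion over 15 years versus the interest on a $1.5 trillion endowment.
    # The 4.5% return below is a hypothetical illustrative rate.
    cost_over_15_years = 1_000_000_000_000
    annual_cost = cost_over_15_years / 15            # ~ $67 billion per year

    endowment = 1_500_000_000_000
    assumed_return = 0.045
    annual_interest = endowment * assumed_return     # ~ $68 billion per year

    print(f"annual cost:           ${annual_cost / 1e9:.0f} billion")
    print(f"interest on endowment: ${annual_interest / 1e9:.0f} billion")
    # Scope neglect is what lets $1 million, $1 billion, and $1 trillion all register
    # as just "a big number", even though each is a thousand times the one before.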

Where it does come up, of course, is when we’re talking about national or global economic policy. Voters in the United States today have a level of power that for 99.99% of human existence no ordinary person has had. 2 million years ago you may have had a vote in your tribe, but your tribe was only 100 people. 2,000 years ago you may have had a vote in your village, but your village was only 1,000 people. Now you have a vote on the policies of a nation of 300 million people, and more than that really: As goes America, so goes the world. Our economic, cultural, and military hegemony is so total that decisions made by the United States reverberate through the entire human population. We have choices to make about war, trade, and ecology on a far larger scale than our ancestors could have imagined. As a result, the heuristics that served us well millennia ago are now beginning to cause serious problems.

[As an aside: This is why the “Downs Paradox” is so silly. If you’re calculating the marginal utility of your vote purely in terms of its effect on you—you are a psychopath—then yes, it would be irrational for you to vote. And really, by all means: psychopaths, feel free not to vote. But the effect of your vote is much larger than that; in a nation of N people, the decision will potentially affect N people. Your vote contributes 1/N to a decision that affects N people, making the marginal utility of your vote equal to N*1/N = 1. It’s constant. It doesn’t matter how big the nation is, the value of your vote will be exactly the same. The fact that your vote has a small impact on the decision is exactly balanced by the fact that the decision, once made, will have such a large effect on the world. Indeed, since larger nations also influence other nations, the marginal effect of your vote is probably larger in large elections, which means that people are being entirely rational when they go to greater lengths to elect the President of the United States (58% turnout) rather than the Wayne County Commission (18% turnout).]
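Here is a toy Python version of that argument; the 1/N chance of being decisive is a deliberate simplification (real models of pivotality are messier), but it shows why the altruistic value of a vote does not shrink as the electorate grows.

    # Stylized expected impact of voting: a 1/N chance of being decisive times the
    # number of people the decision affects. A purely selfish voter counts only
    # themselves; an altruistic voter counts everyone.

    def expected_impact(n_voters, selfish=False, utility_per_person=1.0):
        p_decisive = 1 / n_voters
        people_affected = 1 if selfish else n_voters
        return p_decisive * people_affected * utility_per_person

    for n in [100, 1_000, 300_000_000]:
        print(n, expected_impact(n), expected_impact(n, selfish=True))
    # The altruistic value stays at 1.0 no matter how large N gets; the selfish value
    # shrinks toward zero, which is the usual "paradox of voting" result.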

So that’s the problem. That’s why we have economic crises, why climate change is getting so bad, why we haven’t ended world hunger. It’s not that we’re complete idiots bumbling around with no idea what we’re doing. We simply aren’t optimized for the new environment that has recently been thrust upon us. We are forced to deal with complex problems unlike anything our brains evolved to handle. The truly amazing part is actually that we can solve these problems at all; most lifeforms on Earth simply aren’t mentally flexible enough to do that. Humans found a really neat trick (actually, in a formal evolutionary sense, a good trick, which we know because it also evolved in cephalopods): Our brains have high plasticity, meaning they are capable of adapting themselves to their environment in real time. Unfortunately this process is difficult and costly; it’s much easier to fall back on our old heuristics. We ask ourselves: Why spend 10 times the effort to make it work 99% of the time when you can make it work 90% of the time with so much less effort?

Why? Because it’s so incredibly important that we get these things right.