Why is it so hard to get a job?

JDN 2457411

The United States is slowly dragging itself out of the Second Depression.

Unemployment fell from almost 10% to about 5%.

Core inflation has been kept between 0% and 2% most of the time.

Overall inflation has been within a reasonable range:

[Figure: US inflation]

Real GDP has returned to its normal growth trend, though with a permanent loss of output relative to what would have happened without the Great Recession.

[Figure: US real GDP growth]

Consumption spending is also back on trend, tracking GDP quite precisely.

The Federal Reserve even raised the federal funds interest rate above the zero lower bound, signaling a return to normal monetary policy. (As I argued previously, I’m pretty sure that was their main goal actually.)

Employment remains well below the pre-recession peak, but is now beginning to trend upward once more.

The only thing that hasn’t recovered is labor force participation, which continues to decline. This is how we can have unemployment go back to normal while employment remains depressed; people leave the labor force by retiring, going back to school, or simply giving up looking for work. By the formal definition, someone is only unemployed if they are actively seeking work. No, this is not new, and it is certainly not Obama rigging the numbers. This is how we have measured unemployment for decades.

Actually, it’s kind of the opposite: Since the Clinton administration we’ve also kept track of “broad unemployment”, which includes people who’ve given up looking for work as well as people who have some work but are trying to find more. But we can’t directly compare it to anything that happened before 1994, because the BLS didn’t keep track of it before then. All we can do is estimate based on what we did measure. Based on such estimation, broad unemployment in the Great Depression likely got as high as 50%. (I’ve found that one of the best-fitting models is actually one of the simplest: assume that broad unemployment is 1.8 times narrow unemployment. This fits much better than you might think.)
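Here’s a minimal sketch of that rule of thumb in Python; the narrow unemployment rates below are illustrative stand-ins (roughly normal times, the Great Recession peak, and the Great Depression peak), not actual BLS figures:

```python
# Rule-of-thumb estimate of "broad" unemployment from the narrow rate,
# using the simple 1.8x model described above.
# The narrow rates here are illustrative, not actual BLS data.

BROAD_TO_NARROW_RATIO = 1.8  # the assumed constant of the simple model

def estimate_broad_unemployment(narrow_rate: float) -> float:
    """Estimate broad unemployment as a fixed multiple of narrow unemployment."""
    return BROAD_TO_NARROW_RATIO * narrow_rate

for narrow in [0.05, 0.10, 0.25]:
    broad = estimate_broad_unemployment(narrow)
    print(f"narrow {narrow:.0%} -> estimated broad {broad:.0%}")
```

With a Depression-era narrow rate around 25%, the rule of thumb lands in the neighborhood of 45%, which is how you get estimates approaching 50%.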

So, yes, we muddle our way through, and the economy eventually heals itself. We could have brought the economy back much sooner if we had better fiscal policy, but at least our monetary policy was good enough that we were spared the worst.

But I think most of us—especially in my generation—recognize that it is still really hard to get a job. Overall GDP is back to normal, and even unemployment looks all right; but why are so many people still out of work?

I have a hypothesis about this: I think a major part of why it is so hard to recover from recessions is that our system of hiring is terrible.

Contrary to popular belief, layoffs do not actually substantially increase during recessions. Quits are substantially reduced, because people are afraid to leave current jobs when they aren’t sure of getting new ones. As a result, rates of job separation actually go down in a recession. Job separation does predict recessions, but not in the way most people think. One of the things that made the Great Recession different from other recessions is that most layoffs were permanent, instead of temporary—but we’re still not sure exactly why.

Here, let me show you some graphs from the BLS.

This graph shows job openings from 2005 to 2015:

[Figure: US job openings, 2005–2015]

This graph shows hires from 2005 to 2015:

[Figure: US hires, 2005–2015]

Both of those show the pattern you’d expect, with openings and hires plummeting in the Great Recession.

But check out this graph, of job separations from 2005 to 2015:

[Figure: US job separations, 2005–2015]

Same pattern!

Unemployment in the Second Depression wasn’t caused by a lot of people losing jobs. It was caused by a lot of people not getting jobs—either after losing previous ones, or after graduating from school. There weren’t enough openings, and even when there were openings there weren’t enough hires.

Part of the problem is obviously just the business cycle itself. Spending drops because of a financial crisis, then businesses stop hiring people because they don’t project enough sales to justify it; then spending drops even further because people don’t have jobs, and we get caught in a vicious cycle.

But we are now recovering from the cyclical downturn; spending and GDP are back to their normal trend. Yet the jobs never came back. Something is wrong with our hiring system.

So what’s wrong with our hiring system? Probably a lot of things, but here’s one that’s been particularly bothering me for a long time.

As any job search advisor will tell you, networking is essential for career success.

There are so many different places you can hear this advice, it honestly gets tiring.

But stop and think for a moment about what that means. One of the most important determinants of what job you will get is… what people you know?

It’s not what you are best at doing, as it would be if the economy were optimally efficient.
It’s not even what you have credentials for, as we might expect as a second-best solution.

It’s not even how much money you already have, though that certainly is a major factor as well.

It’s what people you know.

Now, I realize, this is not entirely beyond your control. If you actively participate in your community, attend conferences in your field, and so on, you can establish new contacts and expand your network. A major part of the benefit of going to a good college is actually the people you meet there.

But a good portion of your social network is more or less beyond your control, and above all, says almost nothing about your actual qualifications for any particular job.

There are certain jobs, such as marketing, that actually directly relate to your ability to establish rapport and build weak relationships rapidly. These are a tiny minority. (Actually, most of them are the sort of job that I’m not even sure needs to exist.)

For the vast majority of jobs, your social skills are a tiny, almost irrelevant part of the actual skill set needed to do the job well. This is true of jobs from writing science fiction to teaching calculus, from diagnosing cancer to flying airliners, from cleaning up garbage to designing spacecraft. Social skills are rarely harmful, and even often provide some benefit, but if you need a quantum physicist, you should choose the recluse who can write down the Dirac equation by heart over the well-connected community leader who doesn’t know what an integral is.

At the very least, it strains credibility to suggest that social skills are so important for every job in the world that they should be one of the defining factors in who gets hired. And make no mistake: Networking is as beneficial for landing a job at a local bowling alley as it is for becoming Chair of the Federal Reserve. Indeed, for many entry-level positions networking is literally all that matters, while advanced positions at least exclude candidates who don’t have certain necessary credentials, and then make the decision based upon who knows whom.

Yet, if networking is so inefficient, why do we keep using it?

I can think of a couple reasons.

The first reason is that this is how we’ve always done it. Indeed, networking strongly pre-dates capitalism or even money; in ancient tribal societies there were certainly jobs to assign people to: who will gather berries, who will build the huts, who will lead the hunt. But there were no colleges, no certifications, no resumes—there was only your position in the social structure of the tribe. I think most people simply automatically default to a networking-based system without even thinking about it; it’s just the instinctual System 1 heuristic.

One of the few things I really liked about Debt: The First 5000 Years was the discussion of how similar the behavior of modern CEOs is to that of ancient tribal chieftains, for reasons that make absolutely no sense in terms of neoclassical economic efficiency—but perfect sense in light of human evolution. I wish Graeber had spent more time on that, instead of on the long digressions about international debt policy that he clearly does not understand.

But there is a second reason as well, a better reason, a reason that we can’t simply give up on networking entirely.

The problem is that many important skills are very difficult to measure.

College degrees do a decent job of assessing our raw IQ, our willingness to persevere on difficult tasks, and our knowledge of the basic facts of a discipline (as well as a fantastic job of assessing our ability to pass standardized tests!). But when you think about the skills that really make a good physicist, a good economist, a good anthropologist, a good lawyer, or a good doctor—they really aren’t captured by any of the quantitative metrics that a college degree provides. Your capacity for creative problem-solving, your willingness to treat others with respect and dignity; these things don’t appear in a GPA.

This is especially true in research: The degree tells how good you are at doing the parts of the discipline that have already been done—but what we really want to know is how good you’ll be at doing the parts that haven’t been done yet.

Nor are skills precisely aligned with the content of a resume; the best predictor of doing something well may in fact be whether you have done so in the past—but how can you get experience if you can’t get a job without experience?

These so-called “soft skills” are difficult to measure—but not impossible. Basically the only reliable measurement mechanisms we have require knowing and working with someone for a long span of time. You can’t read it off a resume, and you can’t see it in an interview (interviews are actually a horribly biased hiring mechanism, particularly biased against women). In effect, the only way to really know if someone will be good at a job is to work with them at that job for a while.

There’s a fundamental information problem here I’ve never quite been able to resolve. It pops up in a few other contexts as well: How do you know whether a novel is worth reading without reading the novel? How do you know whether a film is worth watching without watching the film? When the quality of something can only be learned by paying the cost of purchasing it, there is basically no way to assess that quality before we buy.

Networking is an attempt to get around this problem. To decide whether to read a novel, ask someone who has read it. To decide whether to watch a film, ask someone who has watched it. To decide whether to hire someone, ask someone who has worked with them.

The problem is that this is such a weak measure that it’s not much better than no measure at all. I often wonder what would happen if businesses were required to hire people based entirely on resumes, with no interviews, no recommendation letters, and any personal contacts treated as conflicts of interest rather than useful networking opportunities—a world where the only thing we use to decide whether to hire someone is their documented qualifications. Could it herald a golden age of new economic efficiency and job fulfillment? Or would it result in widespread incompetence and catastrophic collapse? I honestly cannot say.

No, advertising is not signaling

JDN 2457373

A while ago, I wrote a post arguing that advertising is irrational: at least with advertising as we know it, no real information is conveyed, and thus either consumers are being irrational in their purchasing decisions, or advertisers are irrational for buying ads that don’t work.

One of the standard arguments neoclassical economists make to defend the rationality of advertising is that advertising is signaling—that even though the content of the ads conveys no useful information, the fact that there are ads is a useful signal of the real quality of goods being sold.

The idea is that by spending on advertising, a company shows that it has a lot of money to throw around, and is therefore a stable and solvent company that probably makes good products and is going to stick around for a while.

Here are a number of different papers all making this same basic argument, often with sophisticated mathematical modeling. This paper takes an even bolder approach, arguing that people benefit from ads and would therefore pay to get them if they had to. Does that sound even remotely plausible to you? It sure doesn’t to me. Some ads are fairly entertaining, but generally if someone is willing to pay money for a piece of content, they charge money for that content.

Could spending on advertising offer a signal of the quality of a product or the company that makes it? Yes. That is something that actually could happen. The reason this argument is ridiculous is not that advertising signaling couldn’t happen—it’s that advertising is clearly nowhere near the best way to do that. The content of ads is clearly nothing remotely like what it would be if advertising were meant to be a costly signal of quality.

Look at this ad for Orangina. Look at it. Look at it.

Now, did that ad tell you anything about Orangina? Anything at all?

As far as I can tell, the thing it actually tells you isn’t even true—it strongly implies that Orangina is a form of aftershave when in fact it is an orange-flavored beverage. It’d be kind of like having an ad for the iPad that involves scantily-clad dog-people riding the iPad like it’s a hoverboard. (Now that I’ve said it, Apple is probably totally working on that ad.)

This isn’t an isolated incident for Orangina, who have a tendency to run bizarre and somewhat suggestive (let’s say PG-13) TV spots involving anthropomorphic animals.

But more than that, it’s endemic to the whole advertising industry.

Look at GEICO, for instance; without them specifically mentioning that this is car insurance, you’d never know what they were selling from all the geckos,

and Neanderthals,

and… golf Krakens?

Progressive does slightly better, talking about some of their actual services while also including an adorably-annoying spokesperson (she’s like Jar Jar, but done better):

State Farm also includes at least a few tidbits about their insurance amidst the teleportation insanity:

But honestly the only car insurance commercials I can think of that are actually about car insurance are Allstate’s, and even then they’re mostly about Dennis Haysbert’s superhuman charisma. I would buy bacon cheeseburgers from this man, and I’m vegetarian.

Esurance is also relatively informative (and owned by Allstate, by the way); they talk about their customer service and low prices (in other words, the only things you actually care about with car insurance). But even so, what reason do we have to believe their bald assertions of good customer service? And what’s the deal with the whole money-printing thing?

And of course I could deluge you with examples from other companies, from Coca-Cola’s polar bears and Santa Claus to this commercial, which is literally the most American thing I have ever seen:

If you’re from some other country and are going, “What!?” right now, that’s totally healthy. Honestly I think we would too if constant immersion in this sort of thing hadn’t deadened our souls.

Do these ads signal that their companies have a lot of extra money to burn? Sure. But there are plenty of other ways to do that which would also serve other valuable functions. I honestly can’t imagine any scenario in which the best way to tell me the quality of an auto insurance company is to show me 30-second spots about geckos and Neanderthals.

If a company wants to signal that they have a lot of money, they could simply report their financial statement. That’s even regulated so that we know it has to be accurate (and this is one of the few financial regulations we actually enforce). The amount you spent on an ad is not obvious from the result of the ad, and doesn’t actually prove that you’re solvent, only that you have enough access to credit. (Pets.com famously collapsed the same year they ran a multi-million-dollar Super Bowl ad.)

If a company wants to signal that they make a good product, they could pay independent rating agencies to rate products on their quality (you know, like credit rating agencies and reviewers of movies and video games). Paying an independent agency is far more reliable than the signaling provided by advertising. Consumers could also pay their own agencies, which would be even more reliable; credit rating agencies and movie reviewers do sometimes have a conflict of interest, which could be resolved by making them report to consumers instead of producers.

If a company wants to establish that they are both financially stable and socially responsible, they could make large public donations to important charities. (This is also something that corporations do on occasion, such as Subaru’s recent campaign.) Or they could publicly announce a raise for all their employees. This would not only provide us with the information that they have this much money to spend—it would actually have a direct positive social effect, thus putting their money where their mouth is.

Signaling theory in advertising is based upon the success of signaling theory in evolutionary biology, which is beyond dispute; but evolution is tightly constrained in what it can do, so wasteful costly signals make sense. Human beings are smarter than that; we can find ways to convey information that don’t involve ludicrous amounts of waste.

If we were anywhere near as rational as these neoclassical models assume us to be, we would take the constant bombardment of meaningless ads not as a signal of a company’s quality but as a personal assault—they are needlessly attacking our time and attention when all the genuinely-valuable information they convey could have been conveyed much more easily and reliably. We would not buy more from them; we would refuse to buy from them. And indeed, I’ve learned to do just that; the more a company bombards me with annoying or meaningless advertisements, the more I make a point of not buying their product if I have a viable substitute. (For similar reasons, I make a point of never donating to any charity that uses hard-sell tactics to solicit donations.)

But of course the human mind is limited. We only have so much attention, and by bombarding us frequently and intensely enough they can overcome our mental defenses and get us to make decisions we wouldn’t if we were optimally rational. I can feel this happening when I am hungry and a food ad appears on TV; my autonomic hunger response combined with their expert presentation of food in the perfect lighting makes me want that food, if only for the few seconds it takes my higher cognitive functions to kick in and make me realize that I don’t eat meat and I don’t like mayonnaise.

Car commercials have always been particularly baffling to me. Who buys a car based on a commercial? A decision to spend $20,000 should not be made based upon 30 seconds of obviously biased information. But either people do buy cars based on commercials or they don’t; if they do, consumers are irrational, and if they don’t, car companies are irrational.

Advertising isn’t the source of human irrationality, but it feeds upon human irrationality, and is specifically designed to exploit our own stupidity to make us spend money in ways we wouldn’t otherwise. This means that markets will not be efficient, and huge amounts of productivity can be wasted because we spent it on what they convinced us to buy instead of what would truly have made our lives better. Those companies then profit more, which encourages them to make even more stuff nobody actually wants and sell it that much harder… and basically we all end up buying lots of worthless stuff and putting it in our garages and wondering what happened to our money and the meaning in our lives. Neoclassical economists really need to stop making ridiculous excuses for this damaging and irrational behavior–and maybe then we could actually find a way to make it stop.

Why building more roads doesn’t stop rush hour

JDN 2457362

The topic of this post was selected based on the very first Patreon vote (which was admittedly limited, because I only had three patrons eligible to vote and only one of them actually voted; but these things always start small, right?). It is what you (well, one of you) wanted to see. In future months there will be more such posts, and hopefully more people will vote.

Most Americans face an economic paradox every morning and every evening. Our road network is by far the largest in the world (for three reasons: We’re a huge country geographically, we have more money than anyone else, and we love our cars), and we continue to expand it; yet every morning around 8:00-9:00 and every evening around 17:00-18:00 we face rush hour, in which our roads become completely clogged by commuters and it takes two or three times as long to get anywhere.

Indeed, rush hour is experienced around the world, though it often takes the slightly different form of clogged public transit instead of clogged roads. In most countries, there are two specific one-hour periods in the morning and the evening in which all transportation is clogged to a standstill.

This is probably such a familiar part of your existence you never stopped to question it. But in fact it is quite bizarre; the natural processes of economic supply and demand should have solved this problem decades ago, so why haven’t they?

There are a number of important forces at work here, all of which conspire to doom our transit systems.

The first is the Tragedy of the Commons, which I’ll likely write about in the future (but since it didn’t win the vote, not just yet). The basic idea of the Tragedy of the Commons is similar to the Prisoner’s Dilemma, but expanded to a large number of people. A Tragedy of the Commons is a situation in which there are many people, each of whom has the opportunity to either cooperate with the group and help everyone a small amount, or defect from the group and help themselves a larger amount. If everyone cooperates, everyone is better off; but holding everyone else’s actions fixed, it is in each person’s self-interest to defect.
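To make the structure concrete, here is a minimal public-goods sketch of the Tragedy of the Commons in Python; the group size and the multiplier are made-up illustrative numbers, not anything from the post:

```python
# A minimal public-goods version of the Tragedy of the Commons.
# Each of N players either contributes 1 unit (cooperate) or keeps it (defect).
# Contributions are multiplied by m (1 < m < N) and shared equally, so each
# unit you contribute returns only m/N to you: defecting is individually
# better, but universal cooperation is better for everyone.
# The numbers N = 100 and m = 3 are illustrative, not from the post.

N = 100   # number of players
m = 3.0   # multiplier on the common pool (1 < m < N)

def payoff(my_contribution: int, total_contributions: int) -> float:
    """Payoff = what you kept plus your equal share of the multiplied pool."""
    return (1 - my_contribution) + m * total_contributions / N

everyone_cooperates = payoff(1, N)       # = m = 3.0
everyone_defects    = payoff(0, 0)       # = 1.0
lone_defector       = payoff(0, N - 1)   # ~= 3.97

print(everyone_cooperates, everyone_defects, lone_defector)
```

Defecting while everyone else cooperates pays the most (about 3.97 here), universal cooperation pays 3.0, and universal defection pays only 1.0, so defection dominates individually even though cooperation is better for the group.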

As it turns out, people do act closer to the neoclassical prediction in the Tragedy of the Commons—which is something I’d definitely like to get into at some point. Two different psychological mechanisms counter one another, and result in something fairly close to the prediction of neoclassical rational self-interest, at least when the number of people involved is very large. It’s actually a good example of how real human beings can deviate from neoclassical rationality both in a good way (we are altruistic) and in a bad way (we are irrational).

The large-scale way roads are a Tragedy of the Commons is that they are a public good, something that we share as a society. Except for toll roads (which I’ll get to in a moment), roads are set up so that once they are built, anyone can use them; so the best option for any individual person is to get everyone else to pay to build them and then quite literally free-ride on the roads everyone else built. But if everyone tries to do that, nobody is going to pay for the roads at all.

And indeed, our roads are massively underfunded. Simply to maintain currently-existing roads we need to spend about an additional $100 billion per year over what we’re already spending. Yet once you factor in all the extra costs of damaged vehicles, increased accidents, time wasted, and the fact that fixing things is cheaper than replacing them, in fact the cost to not maintain our roads is about 3 times as large as that. This is exactly what you expect to see in a Tragedy of the Commons; there’s a huge benefit for everyone just sitting there, not getting done, because nobody wants to pay for it themselves. Michigan saw this quite dramatically when we voted down increased road funding because it would have slightly increased sales taxes. (Granted, we should be funding roads with fuel taxes, not general sales taxes—but those are hardly any more popular.)

Toll roads can help with this, because they internalize the externality: When you have to pay for the roads that you use, you either use them less (creating less wear and tear) or pay more; either way, the gap between what is paid and what is needed is closed. And indeed, toll roads are better maintained than other roads. There are downsides, however; the additional effort to administrate the tolls is expensive, and traffic can be slowed down by toll booths (though modern transponder systems mitigate this effect substantially). Also, it’s difficult to fully privatize roads, because there is a large up-front cost and it takes a long time for a toll road to become profitable; most corporations don’t want to wait that long.

But we do build a lot of roads, and yet still we have rush hour. So that isn’t the full explanation.

The small-scale way that roads are a Tragedy of the Commons is that when you decide to drive during rush hour, you are in a sense defecting in a Tragedy of the Commons. You will get to your destination sooner than if you had waited until traffic clears; but by adding one more car to the congestion you have slowed everyone else down just a little bit. When we sum up all these little delays, we get the total gridlock that is rush hour. If you had instead waited to drive on clear roads, you would get to your destination without inconveniencing anyone else—but you’d get there a lot later.

The second major reason why we have rush hour is what is called induced demand. When you widen a road or add a parallel route, you generally fail to reduce traffic congestion on that route in the long run. What happens instead is that driving during rush hour becomes more convenient for a little while, which makes more people start driving during rush hour—they buy a car when they used to take the bus, or they don’t leave as early to go to work. Eventually enough people shift over that the equilibrium is restored—and the equilibrium is gridlock.

But if you think carefully, that can’t be the whole explanation. There are only so many people who could start driving during rush hour, so what if we simply built enough roads to accommodate them all? And if our public transit systems were better, people would feel no need to switch to driving, even if driving had in fact been made more convenient. And indeed, transportation economists have found that adding more capacity does reduce congestion—it just isn’t enough unless you also improve public transit. So why aren’t we improving public transit? See above, Tragedy of the Commons.

Yet we still don’t have a complete explanation, because of something that’s quite obvious in hindsight: Why do we all work 9:00 to 17:00!? There’s no reason for that. There’s nothing inherent about the angle of sunlight or something which requires us to work these hours—indeed, if there were, Daylight Saving Time wouldn’t work (which is not to say that it works well—Daylight Saving Time kills).

There should be a competitive market pressure to work different hours, which should ultimately lead to an equilibrium where traffic is roughly constant throughout the day, at least during the time when a large swath of the population is awake and outside. Congestion should spread itself out over time, because it is to the advantage of all involved if each driver tries to drive at a time when other drivers aren’t. Driving outside of rush hour gives us an opportunity for something like “temporal arbitrage”, where you can pay a small amount of time here to get a larger amount of time there. And if there’s one thing a competitive economy is supposed to get rid of, it’s arbitrage.

But no, we keep almost all our working hours aligned at 09:00-17:00, and thus we get rush hour.

In fact, a lot of jobs would function better if they weren’t aligned in this way—retail sales, for example, is most successful during the “off hours”, because people only shop when they aren’t working. (Well, except for online shopping, and even then they’re not supposed to.) Banks continually insist on making their hours 9:00 to 17:00 when they know that on most days they’d actually get more business from 17:00 to 19:00 than they do from 9:00 to 17:00. Some banks are at least figuring that out enough to be open from 17:00 to 19:00—but they still don’t seem to grasp that retail banking services have no reason to be open during normal business hours. Commercial banking services do; but that’s a small portion of their overall customers (albeit not of their overall revenue). There’s no reason to have so many full branches open so many hours with most of the tellers doing nothing most of the time.

Education would be better off being later in the day, when students—particularly teenagers—have a chance to sleep in the way their brains are evolved to. The benefits of later school days in terms of academic performance and public health are actually astonishingly large. When you move the start of high school from 07:00 to 09:00, auto collisions involving teenagers drop 70%. Perhaps that should be the new slogan: “Early classes cause car crashes.” Since 25% of auto collisions occur during rush hour, here’s another: “Always working nine to five? Vehicular homicide.”

Other jobs could have whatever hours they please. There’s no reason for most forms of manufacturing to be done at any particular hour of the day. Most clerical and office work could be done at any time (and thanks to the Internet, any place; though there are real benefits to working in an office). Writing can be done whenever it is convenient for the author—and when you think about it, an awful lot of jobs basically amount to writing.

Finance is only handled 09:00-17:00 because we force it to be. The idea of “opening” and “closing” the stock market each day is profoundly anachronistic, and actually amounts to granting special arbitrage privileges to the small number of financial institutions that are allowed to do so-called “after hours” trading.

And then there’s the fact that different people have different circadian rhythms, require different amounts of sleep and prefer to sleep at different times—it’s genetic. (My boyfriend and I are roughly three hours phase-shifted relative to one another, which made it surprisingly convenient to stay in touch when I lived in California and he lived in Michigan.)

Why do we continue to accept such absurdity?

Whenever you find yourself asking that question, try this answer first, for it is by far the most likely:

Social norms.

Social norms will make human beings do just about anything, from eating cockroaches to murdering elephants, from kilts to burqas, from waving giant foam hands to throwing octopus onto ice rinks, from landing on the moon to crashing into the World Trade Center, from bombing Afghanistan to marching on Washington, from eating only raw foods to using dead pigs as sex toys. Our basic mental architecture is structured around tribal identity, and to preserve that identity we will follow almost any rule imaginable. To a first approximation, all human behavior is social norms.

And indeed I can find no other explanation for why we continue to work on a “nine-to-five” 09:00-17:00 schedule (or for that matter why it probably feels weird to you that I say “17:00” instead of the far less efficient and more confusion-prone “5:00 PM”). Our productivity has skyrocketed, increasing by a factor of 4 just since 1950 (and these figures dramatically underestimate the gains in productivity from computer technology, because so much is in the form of free content, which isn’t counted in GDP). We could do the same work in a quarter the time, or twice as much in half the time. Yet still we continue to work the same old 40-hour work week, nine-to-five work day. We each do the work of a dozen previous workers, yet we still find a way to fill the same old work week, and the rich who grow ever richer still pay us more or less the same real wages. It’s all basically social norms at this point; this is how things have always been done, and we can’t imagine any other way. When you get right down to it, capitalism is fundamentally a system of social norms—a very successful one, but far from the only possibility and perhaps not the best.

Thus, why does building more roads not solve the problem of rush hour? Because we have a social norm that says we are all supposed to start work at 09:00 and end work at 17:00.

And that, dear readers, is what we must endeavor to change. Change our thinking, and we will change the norms. Change the norms, and we will change the world.

How following the crowd can doom us all

JDN 2457110 EDT 21:30

Humans are nothing if not social animals. We like to follow the crowd, do what everyone else is doing—and many of us will continue to do so even if our own behavior doesn’t make sense to us. There is a very famous experiment in cognitive science that demonstrates this vividly.

People are given a very simple task to perform several times: We show you line X and lines A, B, and C. Now tell us which of A, B or C is the same length as X. Couldn’t be easier, right? But there’s a trick: seven other people are in the same room performing the same experiment, and they all say that B is the same length as X, even though you can clearly see that A is the correct answer. Do you stick with what you know, or say what everyone else is saying? Typically, you say what everyone else is saying. Over 18 trials, 75% of people followed the crowd at least once, and some people followed the crowd every single time. Some people even began to doubt their own perception, wondering if B really was the right answer—there are four lights, anyone?

Given that our behavior can be distorted by others in such simple and obvious tasks, it should be no surprise that it can be distorted even more in complex and ambiguous tasks—like those involved in finance. If everyone is buying up Beanie Babies or Tweeter stock, maybe you should too, right? Can all those people be wrong?

In fact, matters are even worse with the stock market, because it is in a sense rational to buy into a bubble if you know that other people will as well. As long as you aren’t the last to buy in, you can make a lot of money that way. In speculation, you try to predict the way that other people will cause prices to move and base your decisions around that—but then everyone else is doing the same thing. Keynes called it a “beauty contest”; apparently in his day it was common to have contests for picking the most beautiful photo—but how is beauty assessed? By how many people pick it! So you actually don’t want to choose the one you think is most beautiful, you want to choose the one you think most people will think is the most beautiful—or the one you think most people will think most people will think….
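Economists usually illustrate this with a numerical stand-in, the “p-beauty contest”: everyone guesses a number from 0 to 100, and whoever is closest to p times the average wins. Here is a minimal sketch of how the guess-what-others-guess spiral plays out; the p = 2/3 and the level-0 guess of 50 are conventional illustrative choices, not anything from the post:

```python
# Level-k reasoning in the p-beauty contest: each level best-responds to the
# level below it, so iterated prediction of others' predictions drives the
# guess steadily toward zero.  The parameters are illustrative.

p = 2.0 / 3.0   # the target is p times the average guess
guess = 50.0    # a "level-0" player just guesses the middle of the range

for level in range(1, 8):
    guess = p * guess  # best response if everyone else reasons one level below you
    print(f"level-{level} guess: {guess:.2f}")
```

If everyone could iterate forever, the only equilibrium guess is zero; real players stop after a level or two, which is exactly the kind of guessing-about-guessing that drives speculative prices.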

Our herd behavior probably made a lot more sense when we evolved it millennia ago; when most threats are external and human beings don’t have much influence over their environment, the majority opinion is quite likely to be right, and can often give you an answer much faster than you could figure it out on your own. (If everyone else thinks a lion is hiding in the bushes, there’s probably a lion hiding in the bushes—and if there is, the last thing you want is to be the only one who didn’t run.) The problem arises when this tendency to follow the crowd feeds back on itself, and our behavior becomes driven not by the external reality but by an attempt to predict each other’s predictions of each other’s predictions. Yet this is exactly how financial markets are structured.

With this in mind, the surprise is not why markets are unstable—the surprise is why markets are ever stable. I think the main reason markets ever manage price stability is actually something most economists think of as a failure of markets: Price rigidity and so-called “menu costs”. If it’s costly to change your price, you won’t be constantly trying to adjust it to the mood of the hour—or the minute, or the microsecond—but instead trying to tie it to the fundamental value of what you’re selling, so that the price will remain close to that value for a long time ahead. You may get shortages in times of high demand and gluts in times of low demand, but as long as those two things roughly balance out you’ll leave the price where it is. But if you can instantly and costlessly change the price however you want, you can raise it when people seem particularly interested in buying and lower it when they don’t, and then people can start trying to buy when your price is low and sell when it is high. If people were completely rational and had perfect information, this arbitrage would stabilize prices—but since they’re not, arbitrage attempts can over- or under-compensate, and thus result in cyclical or even chaotic changes in prices.
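Here is a toy simulation of that contrast: a seller tracking a noisy demand signal around a fixed fundamental value, once with instant, over-compensating price changes and once with a menu-cost threshold. All the numbers (the fundamental value, the noise, the adjustment gain, the threshold) are invented for illustration:

```python
# Toy comparison of flexible vs. rigid pricing around a fixed fundamental value.
# All parameters are invented for illustration.

import random

random.seed(0)
FUNDAMENTAL = 10.0
signal = [FUNDAMENTAL + random.gauss(0, 0.5) for _ in range(20)]  # noisy demand signal

# Flexible pricing that over-compensates (adjustment gain > 1): chases every wiggle.
flexible = [FUNDAMENTAL]
for s in signal:
    flexible.append(flexible[-1] + 1.8 * (s - flexible[-1]))

# Rigid pricing: only re-price when the gap exceeds the menu-cost threshold.
THRESHOLD = 1.5
rigid = [FUNDAMENTAL]
for s in signal:
    rigid.append(s if abs(s - rigid[-1]) > THRESHOLD else rigid[-1])

print("flexible price swing:", round(max(flexible) - min(flexible), 2))
print("rigid price swing:   ", round(max(rigid) - min(rigid), 2))
```

The flexible price chases and amplifies every wiggle in the signal, while the rigid price barely moves, because no single deviation is large enough to justify paying the menu cost.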

Our herd behavior then makes this worse, as more people buying leads to, well, more people buying, and more people selling leads to more people selling. If there were no other causes of behavior, the result would be prices that explode outward exponentially; but even with other forces trying to counteract them, prices can move suddenly and unpredictably.

If most traders are irrational or under-informed while a handful are rational and well-informed, the latter can exploit the former for enormous amounts of money; this fact is often used to argue that irrational or under-informed traders will simply drop out, but it should only take you a few moments of thought to see why that isn’t necessarily true. The incentive isn’t just to be well-informed but also to keep others from being well-informed. If everyone were rational and had perfect information, stock trading would be the most boring job in the world, because the prices would never change except perhaps to grow with the growth rate of the overall economy. Wall Street therefore has every incentive in the world not to let that happen. And now perhaps you can see why they are so opposed to regulations that would require them to improve transparency or slow down market changes. Without the ability to deceive people about the real value of assets or trigger irrational bouts of mass buying or selling, Wall Street would make little or no money at all. Not only are markets inherently unstable by themselves; we also have extremely powerful individuals and institutions who are driven to ensure that this instability is never corrected.

This is why as our markets have become ever more streamlined and interconnected, instead of becoming more efficient as expected, they have actually become more unstable. They were never stable—and the gold standard made that instability worse—but despite monetary policy that has provided us with very stable inflation in the prices of real goods, the prices of assets such as stocks and real estate have continued to fluctuate wildly. Real estate isn’t as bad as stocks, again because of price rigidity—houses rarely have their values re-assessed multiple times per year, let alone multiple times per second. But real estate markets are still unstable, because of so many people trying to speculate on them. We think of real estate as a good way to make money fast—and if you’re lucky, it can be. But in a rational and efficient market, real estate would be almost as boring as stock trading; your profits would be driven entirely by population growth (increasing the demand for land without changing the supply) and the value added in construction of buildings. In fact, the population growth effect should be sapped by a land tax, and then you should only make a profit if you actually build things. Simply owning land shouldn’t be a way of making money—and the reason for this should be obvious: You’re not actually doing anything. I don’t like patent rents very much, but at least inventing new technologies is actually beneficial for society. Owning land contributes absolutely nothing, and yet it has been one of the primary means of amassing wealth for centuries and continues to be today.

But (so-called) investors and the banks and hedge funds they control have little reason to change their ways, as long as the system is set up so that they can keep profiting from the instability that they foster. Particularly when we let them keep the profits when things go well, but immediately rush to bail them out when things go badly, they have basically no incentive at all not to take maximum risk and seek maximum instability. We need a fundamentally different outlook on the proper role and structure of finance in our economy.

Fortunately one is emerging, summarized in a slogan among economically-savvy liberals: Banking should be boring. (Elizabeth Warren has said this, as have Joseph Stiglitz and Paul Krugman.) And indeed it should, for all that banks are supposed to do is lend money from people who have it and don’t need it to people who need it but don’t have it. They aren’t supposed to be making large profits of their own, because they aren’t the ones actually adding value to the economy. Indeed it was never quite clear to me why banks should be privatized in the first place, though I guess it makes more sense than, oh, say, prisons.

Unfortunately, the majority opinion right now, at least among those who make policy, seems to be that banks don’t need to be restructured or even placed on a tighter leash; no, they need to be set free so they can work their magic again. Even otherwise reasonable, intelligent people quickly become unshakeable ideologues when it comes to the idea of raising taxes or tightening regulations. And as much as I’d like to think that it’s just a small but powerful minority of people who thinks this way, I know full well that a large proportion of Americans believe in these views and intentionally elect politicians who will act upon them.

All the more reason to break from the crowd, don’t you think?

Scope neglect and the question of optimal altruism

JDN 2457090 EDT 16:15.

We’re now on Eastern Daylight Time because of this bizarre tradition of shifting our time zone forward for half of the year. It’s supposed to save energy, but a natural experiment in India suggests it actually increases energy demand. So why do we do it? Like every ridiculous tradition (have you ever tried to explain Groundhog Day to someone from another country?), we do it because we’ve always done it.

This week’s topic is scope neglect, one of the most pervasive—and pernicious—cognitive heuristics human beings face. Scope neglect poses a great many challenges, not only practical but also theoretical—it raises what I call the question of optimal altruism.

The question is simple to ask yet remarkably challenging to answer: How much should we be willing to sacrifice in order to benefit others? If we think of this as a number, your solidarity coefficient (s), it is the largest ratio of cost to benefit you will accept: you are willing to pay a cost C to give someone else a benefit B whenever s B > C.

This is analogous to the biological concept of relatedness (r), to which Hamilton’s Rule applies: r B > C. Solidarity is the psychological analogue; instead of valuing people based on their genetic similarity to you, you value them based on… well, that’s the problem.
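Read literally, the rule is easy to apply once you pick a value of s; here is a minimal sketch, with invented example numbers (the $1000 cost and the benefit figures are purely for illustration):

```python
# Take an altruistic action whenever the benefit to the other person, weighted
# by your solidarity coefficient s, exceeds the cost to you: s * B > C.
# The example numbers below are invented for illustration.

def worth_doing(s: float, benefit_to_other: float, cost_to_me: float) -> bool:
    """Apply the solidarity rule s * B > C."""
    return s * benefit_to_other > cost_to_me

# Valuing a stranger at 1% of yourself (s = 0.01):
print(worth_doing(0.01, 200_000, 1_000))  # True: the benefit is 200x the cost
print(worth_doing(0.01, 50_000, 1_000))   # False: the benefit is only 50x the cost
```

The whole difficulty, of course, is that nothing tells us what s ought to be.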

I can easily place upper and lower bounds: The lower bound is zero: You should definitely be willing to sacrifice something to help other people—otherwise you are a psychopath. The upper bound is one: There’s no point in paying more cost than you produce in benefit, and in fact even paying the same cost to yourself as you yield in benefits for other people doesn’t make a lot of sense, because it means that your own self-interest is meaningless and the fact that you understand your own needs better than the needs of others is also irrelevant.

But beyond that, it gets a lot harder. Should it be 90%? 50%? 10%? 1%? How should it vary between friends versus family versus strangers? It’s really hard to say. And this inability to decide precisely how much other people should be worth to us may be part of why we suffer scope neglect in the first place.

Scope neglect is the fact that we are not willing to expend effort or money in direct proportion to the benefit it would have. When different groups were asked how much they would be willing to donate in order to save the lives of 2,000 birds, 20,000 birds, or 200,000 birds, the answers they gave were statistically indistinguishable—always about $80. But however much a bird’s life is worth to you, shouldn’t 200,000 birds be worth, well, 200,000 times as much? In fact, more than that, because the marginal utility of wealth is decreasing, but I see no reason to think that the marginal utility of birds decreases nearly as fast.

But therein lies the problem: Usually we can’t pay 200,000 times as much. I’d feel like a horrible person if I weren’t willing to expend at least $10 or an equivalent amount of effort in order to save a bird. To save 200,000 birds that means I’d owe $2 million—and I simply don’t have $2 million.

You can get similar results to the bird experiment if you use children—though, as one might hope, the absolute numbers are a bit bigger, usually more like $500 to $1000. (And this, it turns out, is about how much it actually costs to save a child’s life by a particularly efficient means, such as anti-malaria nets, de-worming, or direct cash transfer. So please, by all means, give $1000 to UNICEF or the Against Malaria Foundation. If you can’t give $1000, give $100; if you can’t give $100, give $10.) It doesn’t much matter whether you say that the project will save 500 children, 5,000 children, or 50,000 children—people will still give about $500 to $1000. But once again, if I’m willing to spend $1000 to save a child—and I definitely am—how much should I be willing to spend to end malaria, which kills 500,000 children a year? Apparently $500 million, which I not only don’t have, but almost certainly will never make cumulatively over my entire life. ($2 million, on the other hand, I almost certainly will make cumulatively—the median income of an economist is $90,000 per year, so if I work for at least 22 years with that as my average income I’ll have cumulatively made $2 million. My net wealth may never be that high—though if I get better positions, or I’m lucky enough or clever enough with the stock market, it might—but my cumulative income almost certainly will. Indeed, the average gain in cumulative income from a college degree is about $1 million. Because it takes time—time is money—and loans carry interest, this gives it a net present value of about $300,000.)
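Here is the back-of-the-envelope arithmetic in Python; the 40-year career, the 4-year delay for the degree, and the 6% discount rate are my own illustrative assumptions, chosen only to show how a $1 million cumulative gain shrinks to something like $300,000 in present value:

```python
# Back-of-the-envelope versions of the calculations above.  The discount rate,
# career length, and 4-year delay for the degree are illustrative assumptions.

median_income = 90_000
years_worked = 22
print("cumulative income:", median_income * years_worked)  # ~ $2 million

# Net present value of a ~$1 million cumulative earnings gain from a degree,
# spread evenly over a 40-year career that starts after 4 years of college.
annual_gain = 1_000_000 / 40
r = 0.06  # assumed discount rate
npv = sum(annual_gain / (1 + r) ** (t + 4) for t in range(1, 41))
print("approximate NPV of the degree:", round(npv))  # on the order of $300,000
```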

But maybe scope neglect isn’t such a bad thing after all. There is a very serious problem with this sort of moral dilemma: The question didn’t say I would single-handedly save 200,000 birds—and indeed, that notion seems quite ridiculous. If I knew that I could actually save 200,000 birds and I were the only one who could do it, dammit, I would try to come up with that $2 million. I might not succeed, but I really would try as hard as I could.

And if I could single-handedly end malaria, I hereby vow that I would do anything it took to achieve that. Short of mass murder, anything I could do couldn’t be a higher cost to the world than malaria itself. I have no idea how I’d come up with $500 million, but I’d certainly try. Bill Gates could easily come up with that $500 million—so he did. In fact he endowed the Gates Foundation with $28 billion, and they’ve spent $1.3 billion of that on fighting malaria, saving hundreds of thousands of lives.

With this in mind, what is scope neglect really about? I think it’s about coordination. It’s not that people don’t care more about 200,000 birds than they do about 2,000; and it’s certainly not that they don’t care more about 50,000 children than they do about 500. Rather, the problem is that people don’t know how many other people are likely to donate, or how expensive the total project is likely to be; and we don’t know how much we should be willing to pay to save the life of a bird or a child.

Hence, what we basically do is give up; since we can’t actually assess the marginal utility of our donation dollars, we fall back on our automatic emotional response. Our mind focuses itself on visualizing that single bird covered in oil, or that single child suffering from malaria. We then hope that the representativeness heuristic will guide us in how much to give. Or we follow social norms, and give as much as we think others would expect us to give.

While many in the effective altruism community take this to be a failing, they never actually say what we should do—they never give us a figure for how much money we should be willing to donate to save the life of a child. Instead they retreat to abstraction, saying that whatever it is we’re willing to give to save a child, we should be willing to give 50,000 times as much to save 50,000 children.

But it’s not that simple. A bigger project may attract more supporters; if the two occur in direct proportion, then constant donation is the optimal response. Since it’s probably not actually proportional, you likely should give somewhat more to causes that affect more people; but exactly how much more is an astonishingly difficult question. I really don’t blame people—or myself—for only giving a little bit more to causes with larger impact, because actually getting the right answer is so incredibly hard. This is why it’s so important that we have institutions like GiveWell and Charity Navigator which do the hard work to research the effectiveness of charities and tell us which ones we should give to.

Yet even if we can properly prioritize which charities to give to first, that still leaves the question of how much each of us should give. 1% of our income? 5%? 10%? 20%? 50%? Should we give so much that we throw ourselves into the same poverty we are trying to save others from?

In his earlier work Peter Singer seemed to think we should give so much that it throws us into poverty ourselves; he asked us to literally compare every single purchase and ask ourselves whether a year of lattes or a nicer car is worth a child’s life. Of course even he doesn’t live that way, and in his later books Singer seems to have realized this, and now recommends the far more modest standard that everyone give at least 1% of their income. (He himself gives about 33%, but he’s also very rich so he doesn’t feel it nearly as much.) I think he may have overcompensated; while if literally everyone gave at least 1% that would be more than enough to end world hunger and solve many other problems—world nominal GDP is over $70 trillion, so 1% of that is $700 billion a year—we know that this won’t happen. Some will give more, others less; most will give nothing at all. Hence I think those of us who give should give more than our share; hence I lean toward figures more like 5% or 10%.

But then, why not 50% or 90%? It is very difficult for me to argue on principle why we shouldn’t be expected to give that much. Because my income is such a small proportion of the total donations, the marginal utility of each dollar I give is basically constant—and quite high; if it takes about $1000 to save a child’s life on average, and each of these children will then live about 60 more years at about half the world average happiness, that’s about 30 QALY per $1000, or about 30 milliQALY per dollar. Even at my current level of income (incidentally about as much as I think the US basic income should be), I’m benefiting myself only about 150 microQALY per dollar—so my money is worth about 200 times as much to those children as it is to me.
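The arithmetic behind those figures, as a quick sketch (treating “half the world average happiness” as a quality weight of 0.5, which is my simplifying reading of that phrase):

```python
# QALYs per dollar for an effective donation versus spending on myself,
# using the figures quoted above.  The 0.5 quality weight is a simplifying
# reading of "about half the world average happiness".

cost_per_life = 1_000    # dollars to save one child's life
years_gained = 60        # additional years of life
quality_weight = 0.5     # quality adjustment for those years

qaly_per_dollar_donated = years_gained * quality_weight / cost_per_life
print(qaly_per_dollar_donated)        # 0.03 QALY = 30 milliQALY per dollar

qaly_per_dollar_on_myself = 150e-6    # 150 microQALY per dollar, as quoted above
print(qaly_per_dollar_donated / qaly_per_dollar_on_myself)  # = 200.0
```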

So now we have to ask ourselves the really uncomfortable question: How much do I value those children, relative to myself? If I am at all honest, the value is not 1; I’m not prepared to die for someone I’ve never met 10,000 kilometers away in a nation I’ve never even visited, nor am I prepared to give away all my possessions and throw myself into the same starvation I am hoping to save them from. I value my closest friends and family approximately the same as myself, but I have to admit that I value random strangers considerably less.

Do I really value them at less than 1%, as these figures would seem to imply? I feel like a monster saying that, but maybe it really isn’t so terrible—after all, most economists seem to think that the optimal solidarity coefficient is in fact zero. Maybe we need to become more comfortable admitting that random strangers aren’t worth that much to us, simply so that we can coherently acknowledge that they aren’t worth nothing. Very few of us actually give away all our possessions, after all.

Then again, what do we mean by worth? I can say from direct experience that a single migraine causes me vastly more pain than learning about the death of 200,000 people in an earthquake in Southeast Asia. And while I gave about $100 to the relief efforts involved in that earthquake, I’ve spent considerably more on migraine treatments—thousands, once you include health insurance. But given the chance, would I be willing to suffer a migraine to prevent such an earthquake? Without hesitation. So the amount of pain we feel is not the same as the amount of money we pay, which is not the same as what we would be willing to sacrifice. I think the latter is more indicative of how much people’s lives are really worth to us—but then, what we pay is what has the most direct effect on the world.

It’s actually possible to justify not dying or selling all my possessions even if my solidarity coefficient is much higher—it just leads to some really questionable conclusions. Essentially the argument is this: I am an asset. I have what economists call “human capital”—my health, my intelligence, my education—that gives me the opportunity to affect the world in ways those children cannot. In my ideal imagined future (albeit improbable) in which I actually become President of the World Bank and have the authority to set global development policy, I myself could actually have a marginal impact of megaQALY—millions of person-years of better life. In the far more likely scenario in which I attain some mid-level research or advisory position, I could be one of thousands of people who together have that sort of impact—which still means my own marginal effect is on the order of kiloQALY. And clearly it’s true that if I died, or even if I sold all my possessions, these events would no longer be possible.

The problem with that reasoning is that it’s wildly implausible to say that everyone in the First World is in this same sort of position—Peter Singer can say that, and maybe I can say that, and indeed hundreds of development economists can say that—but at least 99.9% of the First World population are not development economists, nor are they physicists likely to invent cold fusion, nor biomedical engineers likely to cure HIV, nor aid workers who distribute anti-malaria nets and polio vaccines, nor politicians who set national policy, nor diplomats who influence international relations, nor authors whose bestselling books raise worldwide consciousness. Yet I am not comfortable saying that all the world’s teachers, secretaries, airline pilots and truck drivers should give away their possessions either. (Maybe all the world’s bankers and CEOs should—or at least most of them.)

Is it enough that our economy would collapse without teachers, secretaries, airline pilots and truck drivers? But this seems rather like the fact that if everyone in the world visited the same restaurant there wouldn’t be enough room. Surely we could do without any individual teacher, any individual truck driver? If everyone gave the same proportion of their income, 1% would be more than enough to end malaria and world hunger. But we know that everyone won’t give, and the job won’t get done if those of us who do give contribute only 1%.

Moreover, it’s also clearly not the case that everything I spend money on makes me more likely to become a successful and influential development economist. Buying a suit and a car actually clearly does—it’s much easier to get good jobs that way. Even leisure can be justified to some extent, since human beings need leisure and there’s no sense burning myself out before I get anything done. But do I need both of my video game systems? Couldn’t I buy a bit less Coke Zero? What if I watched a 20-inch TV instead of a 40-inch one? I still have free time; could I get another job and donate that money? This is the sort of question Peter Singer tells us to ask ourselves, and it quickly leads to a painfully spartan existence in which most of our time is spent thinking about whether what we’re doing is advancing or damaging the cause of ending world hunger. But then the cost of that stress and cognitive effort must be included as well; and how do you optimize your own cognitive effort? You need to think about the cost of thinking about the cost of thinking… and on and on. This is why bounded rationality modeling is hard, even though it’s plainly essential to both cognitive science and computer science. (John Stuart Mill wrote an essay that resonates deeply with me about how the pressure to change the world drove him into depression, and how he learned to accept that he could still change the world even if he weren’t constantly pressuring himself to do so—and indeed he did. James Mill set out to create in his son, John Stuart Mill, the greatest philosopher in the history of the world—and I believe that he succeeded.)

Perhaps we should figure out what proportion of the world’s people are likely to give, and how much we need altogether, and then assign the amount we expect from each of them based on that? The more money you ask from each, the fewer people are likely to give. This creates an optimization problem akin to setting the price of a product under monopoly—monopolies maximize profits by carefully balancing the quantity sold with the price at which they sell, and perhaps a similar balance would allow us to maximize development aid. But wouldn’t it be better if we could simply increase the number of people who give, so that we don’t have to ask so much of those who are generous? That means tax-funded foreign aid is the way to go, because it ensures coordination. And indeed I do favor increasing foreign aid to about 1% of GDP—in the US it is currently about $50 billion, 0.3% of GDP, a little more than 1% of the Federal budget. (Most people who say we should “cut” foreign aid don’t realize how small it already is.) But foreign aid is coercive; wouldn’t it be better if people would give voluntarily?
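
If you want to see what that monopoly-style optimization looks like, here is a minimal toy sketch in Python. The participation function is entirely made up for illustration; in reality you would have to estimate how giving actually responds to the size of the ask.

```python
# Toy version of the "monopoly pricing" analogy for voluntary giving.
# The participation function is invented purely for illustration; real
# giving behavior would have to be estimated empirically.

def fraction_who_give(ask):
    """Assumed fraction of people willing to give when asked for `ask`
    (expressed as a share of income). Falls as the ask gets larger."""
    return max(0.0, 0.5 - 10 * ask)  # 50% give at a 0% ask, nobody at 5%

def total_raised(ask):
    """Total donations as a share of total income."""
    return ask * fraction_who_give(ask)

# Scan possible asks and pick the one that maximizes total giving, just as
# a monopolist scans prices to maximize revenue.
candidate_asks = [a / 1000 for a in range(0, 51)]  # 0% to 5% of income
best_ask = max(candidate_asks, key=total_raised)
print(f"Revenue-maximizing ask: {best_ask:.1%} of income")
print(f"Total raised: {total_raised(best_ask):.2%} of total income")
```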

I don’t have a simple answer. I don’t know how much other people’s lives ought to be worth to us, or what it means for our decisions once we assign that value. But I hope I’ve convinced you that this problem is an important one—and made you think a little more about scope neglect and why we have it.

The sunk-cost fallacy

JDN 2457075 EST 14:46.

I am back on Eastern Time once again, because we just finished our 3600-km road trek from Long Beach to Ann Arbor. I seem to move an awful lot; this makes me a bit like Schumpeter, who moved on average every two years for his whole adult life. Schumpeter and I have much in common, in fact, though I have no particular interest in horses.

Today’s topic is the sunk-cost fallacy, which was particularly salient as I had to box up all my things for the move. There were many items that I ended up having to throw away because it wasn’t worth moving them—but this was always painful, because I couldn’t help but think of all the work or money I had put into them. I threw away craft projects I had spent hours working on and collections of bottlecaps I had gathered over years—because I couldn’t think of when I’d use them, and ultimately the question isn’t how hard they were to make in the past, it’s what they’ll be useful for in the future. But each time it hurt, like I was giving up a little part of myself.

That’s the sunk-cost fallacy in a nutshell: Instead of considering whether something will be useful to us later and thus worth having around, we naturally tend to consider the effort that went into getting it. Instead of making our decisions based on the future, we make them based on the past.

Come to think of it, the entire Marxist labor theory of value is basically one gigantic sunk-cost fallacy: Instead of caring about the usefulness of a product—the mainstream utility theory of value—we are supposed to care about the labor that went into making it. To see why this is wrong, imagine someone spends 10,000 hours carving meaningless symbols into a rock, and someone else spends 10 minutes working with chemicals but somehow figures out how to cure pancreatic cancer. Which one would you pay more for—particularly if you had pancreatic cancer?

This is one of the most common irrational behaviors humans engage in, and it’s worth considering why that might be. Most people commit the sunk-cost fallacy on a daily basis, and even those of us who are aware of it will still fall into it if we aren’t careful.

This often seems to come from a fear of being wasteful; I don’t know of any data on this, but my hunch is that the more environmentalist you are, the more often you tend to run into the sunk-cost fallacy. You feel particularly bad wasting things when you are conscious of the damage that waste does to our planetary ecosystem. (Which is not to say that you should not be environmentalist; on the contrary, most of us should be a great deal more environmentalist than we are. The negative externalities of environmental degradation are almost unimaginably enormous—climate change already kills 150,000 people every year and is projected to kill tens if not hundreds of millions of people over the 21st century.)

I think the sunk-cost fallacy is involved in a lot of labor regulations as well. Most countries have employment protection legislation that makes it difficult to fire people for various reasons, ranging from the basically reasonable (discrimination against women and racial minorities) to the totally absurd (in some countries you can’t even fire people for being incompetent). These sorts of regulations are often quite popular, because people really don’t like the idea of losing their jobs. When faced with the possibility of losing your job, you should be thinking about what your future options are; but many people spend a lot of time thinking about the past effort they put into this one. I think there is some endowment effect and loss aversion at work as well: You value your job more simply because you already have it, so you don’t want to lose it even for something better.

Yet these regulations are widely regarded by economists as inefficient; and for once I am inclined to agree. While I certainly don’t want people being fired frivolously or for discriminatory reasons, sometimes companies really do need to lay off workers because there simply isn’t enough demand for their products. When a factory closes down, we think about the jobs that are lost—but we don’t think about the better jobs they can now do instead.

I favor a system like what they have in Denmark (I’m popularizing a hashtag about this sort of thing: #Scandinaviaisbetter): We don’t try to protect your job, we try to protect you. Instead of regulations that make it hard to fire people, Denmark has a generous unemployment insurance system, strong social welfare policies, and active labor market policies that help people retrain and find new and better jobs. One thing I think Denmark might want to consider is restrictions on cyclical layoffs—in a recession there is pressure to lay off workers, but that can create a vicious cycle that makes recessions worse. Denmark was hit considerably harder by the Great Recession than France, for example; where France’s unemployment rose from 7.5% to 9.6%, Denmark’s rose from an astonishing 3.1% all the way up to 7.6%.

Then again, sometimes what looks like a sunk-cost fallacy actually isn’t—and I think this gives us insight into how we might have evolved such an apparently silly heuristic in the first place.

Why would you care about what you did in the past when deciding what to do in the future? Well, there’s one reason in particular: Credible commitment. There are many cases in life where you’d like to be able to plan to do something in the future, but when the time comes to actually do it you’ll be tempted not to follow through.

This sort of thing happens all the time: When you take out a loan, you plan to pay it back—but when you need to actually make payments it sure would be nice if you didn’t have to. If you’re trying to slim down, you go on a diet—but doesn’t that cookie look delicious? You know you should quit smoking for your health—but what’s one more cigarette, really? When you get married, you promise to be faithful—but then sometimes someone else comes along who seems so enticing! Your term paper is due in two weeks, so you really should get working on it—but your friends are going out for drinks tonight, why not start the paper tomorrow?

Our true long-term interests are often misaligned with our short-term temptations. This often happens because of hyperbolic discounting, which is a bit technical; but the basic idea is that you tend to rate the importance of an event in inverse proportion to its distance in time. That turns out to be irrational, because as you get closer to the event, your valuations will change disproportionately. The optimal rational choice would be exponential discounting, where you value each successive moment a fixed percentage less than the last—since that percentage doesn’t change, your valuations will always stay in line with one another. But basically nobody really uses exponential discounting in real life.

We can see this vividly in experiments: If we ask people whether they would rather receive $100 today, or $110 a week from now, they often go with $100 today. But if you ask them whether they would rather receive $100 in 52 weeks or $110 in 53 weeks, almost everyone chooses the $110. The value of a week apparently depends on how far away it is! (The $110 is clearly the rational choice by the way. Discounting 10% per week makes no sense at all—unless you literally believe that $1,000 today is as good as $140,000 a year from now.)
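
Here is a minimal sketch of that preference reversal. The discount parameters are purely illustrative; the point is that the exponential discounter makes the same choice at both horizons, while the hyperbolic discounter flips exactly the way the survey respondents do.

```python
# Preference reversal under hyperbolic but not exponential discounting.
# The parameters here are purely illustrative.

def exponential(value, weeks, rate=0.12):
    """Exponential discounting: each week is worth a fixed fraction less."""
    return value / (1 + rate) ** weeks

def hyperbolic(value, weeks, k=0.15):
    """Hyperbolic discounting: value falls in inverse proportion to delay."""
    return value / (1 + k * weeks)

for name, discount in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    def choice(delay):
        # $100 after `delay` weeks versus $110 one week later than that
        return "$100" if discount(100, delay) > discount(110, delay + 1) else "$110"
    # The exponential discounter is consistent; the hyperbolic one reverses.
    print(f"{name:12s} chooses {choice(0)} now, and {choice(52)} a year out")
```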

To solve this problem, it can be advantageous to make commitments—either enforced by direct measures such as legal penalties, or even simply by making promises that we feel guilty breaking. That’s why cold turkey is often the most effective way to quit a drug. Physiologically that makes no sense, because gradual cessation clearly does reduce withdrawal symptoms. But psychologically it makes sense, because cold turkey allows you to make a hardline commitment to never again touch the stuff. The majority of smokers who quit successfully report doing it cold turkey, though there is still ongoing research on whether properly-orchestrated gradual reduction can be more effective. Likewise, vague notions like “I’ll eat better and exercise more” are virtually useless, while specific prescriptions like “I will do 20 minutes of exercise every day and stop eating red meat” are much more effective—the latter allows you to make a promise to yourself that can be broken, and since you feel bad breaking it you are motivated to keep it.

In the presence of such commitments, the past does matter, at least insofar as you made commitments to yourself or others in the past. If you promised never to smoke another cigarette, or never to cheat on your wife, or never to eat meat again, you actually have a good reason—and a good chance—to never do those things. This is easy to confuse with a sunk cost; when you think about the 20 years you’ve been married or the 10 years you’ve been vegetarian, you might be thinking of the sunk cost you’ve incurred over that time, or you might be thinking of the promises you’ve made and kept to yourself and others. In the former case you are irrationally committing a sunk-cost fallacy; in the latter you are rationally upholding a credible commitment.

This is most likely why we evolved in such a way as to commit sunk-cost fallacies. The ability to enforce commitments on ourselves and others was so important that it was worth it to overcompensate and sometimes let us care about sunk costs. Because commitments and sunk costs are often difficult to distinguish, it would have been more costly to evolve better ways of distinguishing them than it was to simply make the mistake.

Perhaps people who are outraged by being laid off aren’t actually committing a sunk-cost fallacy at all; perhaps they are instead assuming the existence of a commitment where none exists. “I gave this company 20 good years, and now they’re getting rid of me?” But the truth is, you gave the company nothing. They never committed to keeping you (unless they signed a contract, but that’s different; if they are violating a contract, of course they should be penalized for that). They made you a trade, and when that trade ceases to be advantageous they will stop making it. Corporations don’t think of themselves as having any moral obligations whatsoever; they exist only to make profit. It is certainly debatable whether it was a good idea to set up corporations in this way; but unless and until we change that system it is important to keep it in mind. You will almost never see a corporation do something out of kindness or moral obligation; that’s simply not how corporations work. At best, they do nice things to enhance their brand reputation (Starbucks, Whole Foods, Microsoft, Disney, Costco). Some don’t even bother doing that, letting people hate as long as they continue to buy (Walmart, BP, DeBeers). Actually the former model seems to be more successful lately, which bodes well for the future; but be careful to recognize that few if any of these corporations are genuinely doing it out of the goodness of their hearts. Human beings are often altruistic; corporations are specifically designed not to be.

And there were some things I did promise myself I would keep—like old photos and notebooks that I want to keep as memories—so those went in boxes. Other things were obviously still useful—clothes, furniture, books. But for the rest? It was painful, but I thought about what I could realistically use them for, and if I couldn’t think of anything, they went into the trash.

Prospect Theory: Why we buy insurance and lottery tickets

JDN 2457061 PST 14:18.

Today’s topic is called prospect theory. Prospect theory is basically what put cognitive economics on the map; it was the knock-down argument that Kahneman used to show that human beings are not completely rational in their economic decisions. It all goes back to a 1979 paper by Kahneman and Tversky that now has 34,000 citations (yes, we’ve been having this argument for a rather long time now). In the 1990s it was refined into cumulative prospect theory, which is more mathematically precise but basically the same idea.

What was that argument? People buy both insurance and lottery tickets.

The “both” is very important. Buying insurance can definitely be rational—indeed, typically is. Buying lottery tickets could theoretically be rational, under very particular circumstances. But they cannot both be rational at the same time.

To see why, let’s talk some more about marginal utility of wealth. Recall that a dollar is not worth the same to everyone; to a billionaire a dollar is a rounding error, to most of us it is a bottle of Coke, but to a starving child in Ghana it could be life itself. We typically observe diminishing marginal utility of wealth—the more money you have, the less another dollar is worth to you.

If we sketch a graph of your utility versus wealth it would look something like this:

Marginal_utility_wealth

Notice how it increases as your wealth increases, but at a rapidly diminishing rate.

If you have diminishing marginal utility of wealth, you are what we call risk-averse. If you are risk-averse, you’ll (sometimes) want to buy insurance. Let’s suppose the units on that graph are tens of thousands of dollars. Suppose you currently have an income of $50,000. You are offered the chance to pay $10,000 a year to buy unemployment insurance, so that if you lose your job, instead of making $10,000 on welfare you’ll make $30,000 on unemployment. You think you have about a 20% chance of losing your job.

If you had constant marginal utility of wealth, this would not be a good deal for you. Your expected value of money would be reduced if you buy the insurance: Before you had an 80% chance of $50,000 and a 20% chance of $10,000 so your expected amount of money is $42,000. With the insurance you have an 80% chance of $40,000 and a 20% chance of $30,000 so your expected amount of money is $38,000. Why would you take such a deal? That’s like giving up $4,000 isn’t it?

Well, let’s look back at that utility graph. At $50,000 your utility is 1.80, uh… units, er… let’s say QALY. 1.80 QALY per year, meaning you live 80% better than the average human. Maybe, I guess? Doesn’t seem too far off. In any case, the units of measurement aren’t that important.

Insurance_options

By buying insurance your effective income goes down to $40,000 per year, which lowers your utility to 1.70 QALY. That’s a fairly significant hit, but it’s not unbearable. If you lose your job (20% chance), you’ll fall down to $30,000 and have a utility of 1.55 QALY. Again, noticeable, but bearable. Your overall expected utility with insurance is therefore 1.67 QALY.

But what if you don’t buy insurance? Well then you have a 20% chance of taking a big hit and falling all the way down to $10,000 where your utility is only 1.00 QALY. Your expected utility is therefore only 1.64 QALY. You’re better off going with the insurance.
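
If you want to check that arithmetic yourself, here is the same calculation in Python, using the utility values read off the graph rather than any explicit formula:

```python
# The insurance arithmetic from above, using the utility values read off
# the graph: 1.80, 1.70, 1.55, and 1.00 QALY.

p_lose_job = 0.20
utility = {50_000: 1.80, 40_000: 1.70, 30_000: 1.55, 10_000: 1.00}

def expected(insured):
    """Return (expected income, expected utility) with or without insurance."""
    income_ok, income_bad = (40_000, 30_000) if insured else (50_000, 10_000)
    ev = (1 - p_lose_job) * income_ok + p_lose_job * income_bad
    eu = (1 - p_lose_job) * utility[income_ok] + p_lose_job * utility[income_bad]
    return ev, eu

# Expected income is higher without insurance ($42,000 vs $38,000), but
# expected utility is higher with it (1.67 vs 1.64 QALY).
for insured in (False, True):
    ev, eu = expected(insured)
    print(f"insured={insured}: expected income ${ev:,.0f}, expected utility {eu:.2f} QALY")
```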

And this is how insurance companies make a profit (well, the legitimate way, anyway; they also like to gouge people and deny coverage to cancer patients, of course); on average, they make more from each customer than they pay out, but customers are still better off because they are protected against big losses. In this case, the insurance company profits $4,000 per customer per year, customers each gain 30 milliQALY per year (about the same utility as an extra $2,000, more or less), and everyone is happy.

But if this is your marginal utility of wealth—and it most likely is, approximately—then you would never want to buy a lottery ticket. Let’s suppose you actually have pretty good odds; it’s a 1 in 1 million chance of $1 million for a ticket that costs $2. This means that the state is going to take in about $2 million for every $1 million they pay out to a winner.

That’s about as good as your odds for a lottery are ever going to get; usually it’s more like a 1 in 400 million chance of $150 million for $1, which is an even bigger difference than it sounds, because $150 million is nowhere near 150 times as good as $1 million. It’s a bit better from the state’s perspective though, because they get to receive $400 million for every $150 million they pay out.

For your convenience I have zoomed out the graph so that you can see 100, which is an income of $1 million (which you’ll have this year if you win; to get it next year, you’ll have to play again). You’ll notice I did not have to zoom out the vertical axis, because 20 times as much money only ends up being about 2 times as much utility. I’ve marked with lines the utility of $50,000 (1.80, as we said before) versus $1 million (3.30).

Lottery_utility

What about the utility of $49,998 which is what you’ll have if you buy the ticket and lose? At this number of decimal places you can’t see the difference, so I’ll need to go out a few more. At $50,000 you have 1.80472 QALY. At $49,998 you have 1.80470 QALY. That $2 only costs you 0.00002 QALY, 20 microQALY. Not much, really; but of course not, it’s only $2.

How much does the 1 in 1 million chance of $1 million give you? Even less than that. Remember, the utility gain for going from $50,000 to $1 million is only 1.50 QALY. So you’re adding one one-millionth of that in expected utility, which is of course 1.5 microQALY, or 0.0000015 QALY.

That $2 may not seem like it’s worth much, but that 1 in 1 million chance of $1 million is worth less than one tenth as much. Again, I’ve tried to make these figures fairly realistic; they are by no means exact (I don’t actually think $49,998 corresponds to exactly 1.804699 QALY), but the order of magnitude difference is right. You gain about ten times as much utility from spending that $2 on something you want as you do from taking the chance at $1 million.
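
Here is a quick sanity check of those orders of magnitude, using a logarithmic utility curve calibrated to roughly match the figures above (about 1.80 QALY at $50,000 and 3.30 QALY at $1 million). The specific curve is just an illustrative stand-in; any function with diminishing marginal utility gives the same qualitative answer.

```python
import math

# A logarithmic utility curve calibrated to roughly match the figures above:
# about 1.80 QALY at $50,000 and 3.30 QALY at $1 million.
def utility(income):
    return 0.5 * math.log(income) - 3.61

income, cost, prize, p_win = 50_000, 2, 1_000_000, 1e-6

ticket_cost = utility(income) - utility(income - cost)      # roughly 20 microQALY
expected_gain = p_win * (utility(prize) - utility(income))  # roughly 1.5 microQALY

print(f"Utility cost of the $2 ticket:   {ticket_cost * 1e6:.1f} microQALY")
print(f"Expected utility from the prize: {expected_gain * 1e6:.1f} microQALY")
```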

I said before that it is theoretically possible for you to have a utility function for which the lottery would be rational. For that you’d need to have increasing marginal utility of wealth, so that you could be what we call risk-seeking. Your utility function would have to look like this:

Weird_utility

There’s no way marginal utility of wealth looks like that. This would be saying that it would hurt Bill Gates more to lose $1 than it would hurt a starving child in Ghana, which makes no sense at all. (It certainly would make you wonder why he’s so willing to give it to them.) So frankly even if we didn’t buy insurance the fact that we buy lottery tickets would already look pretty irrational.

But in order for it to be rational to buy both lottery tickets and insurance, our utility function would have to be totally nonsensical. Maybe it could look like this or something; marginal utility decreases normally for a while, and then suddenly starts going upward again for no apparent reason:

Weirder_utility

Clearly it does not actually look like that. Not only would this mean that Bill Gates is hurt more by losing $1 than the child in Ghana, we have this bizarre situation where the middle class are the people who have the lowest marginal utility of wealth in the world. Both the rich and the poor would need to have higher marginal utility of wealth than we do. This would mean that apparently yachts are just amazing and we have no idea. Riding a yacht is the pinnacle of human experience, a transcendence beyond our wildest imaginings; and riding a slightly bigger yacht is even more amazing and transcendent. Love and the joy of a life well-lived pale in comparison to the ecstasy of adding just one more layer of gold plate to your Ferrari collection.

Whereas increasing marginal utility was merely ridiculous, this is outright special pleading. You’re just making up bizarre utility functions that perfectly line up with whatever behavior people happen to have so that you can still call it rational. It’s like saying, “It could be perfectly rational! Maybe he enjoys banging his head against the wall!”

Kahneman and Tversky had a better idea. They realized that human beings aren’t so great at assessing probability, and furthermore tend not to think in terms of total amounts of wealth or annual income at all, but in terms of losses and gains. Through a series of clever experiments they showed that we are not so much risk-averse as we are loss-averse; we are actually willing to take more risk if it means that we will be able to avoid a loss.

In effect, we seem to be acting as if our utility function looks like this, where the zero no longer means “zero income”, it means “whatever we have right now”:

Prospect_theory

We tend to weight losses about twice as much as gains, and we tend to assume that losses also diminish in their marginal effect the same way that gains do. That is, we would only take a 50% chance to lose $1,000 if it meant a 50% chance to gain $2,000; but we’d take a 10% chance at losing $10,000 to save ourselves from a guaranteed loss of $1,000.
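
Here is a minimal sketch of such a value function, with zero as the reference point. The exponent and the loss-aversion coefficient are roughly in line with Kahneman and Tversky’s published estimates, but the exact numbers aren’t important; only the shape is.

```python
# A prospect-theory-style value function: zero is the reference point
# ("whatever we have right now"), losses loom roughly twice as large as
# gains, and both show diminishing sensitivity. Parameters are roughly in
# line with Kahneman and Tversky's estimates; exact values don't matter.

ALPHA = 0.88   # diminishing sensitivity
LAMBDA = 2.0   # loss aversion: losses weigh about twice as much as gains

def value(change):
    if change >= 0:
        return change ** ALPHA
    return -LAMBDA * (-change) ** ALPHA

# 50/50 gamble: gain $2,000 or lose $1,000. With these parameters the gamble
# comes out close to break-even, i.e. near the threshold of acceptance.
print(0.5 * value(2_000) + 0.5 * value(-1_000))

# A sure loss of $1,000 vs a 10% chance of losing $10,000: the gamble hurts
# less, so people gamble to avoid the sure loss.
print(value(-1_000), 0.1 * value(-10_000))
```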

This can explain why we buy insurance, provided that you frame it correctly. One of the things about prospect theory—and about human behavior in general—is that it exhibits framing effects: The answer we give depends upon the way you ask the question. That’s so totally obviously irrational it’s honestly hard to believe that we do it; but we do, and sometimes in really important situations. Doctors—doctors—will decide a moral dilemma differently based on whether you describe it as “saving 400 out of 600 patients” or “letting 200 out of 600 patients die”.

In this case, you need to frame insurance as the default option, and not buying insurance as an extra risk you are taking. Then saving money by not buying insurance is a gain, and therefore less important, while a higher risk of a bad outcome is a loss, and therefore important.

If you frame it the other way, with not buying insurance as the default option, then buying insurance is taking a loss by making insurance payments, only to get a gain if the insurance pays out. Suddenly the exact same insurance policy looks less attractive. This is a big part of why Obamacare has been effective but unpopular. It was set up as a fine—a loss—if you don’t buy insurance, rather than as a bonus—a gain—if you do buy insurance. The latter would be more expensive, but we could just make it up by taxing something else; and it might have made Obamacare more popular, because people would see the government as giving them something instead of taking something away. But the fine does a better job of framing insurance as the default option, so it motivates more people to actually buy insurance.

But even that would still not be enough to explain how it is rational to buy lottery tickets (Have I mentioned how it’s really not a good idea to buy lottery tickets?), because buying a ticket is a loss and winning the lottery is a gain. You actually have to get people to somehow frame not winning the lottery as a loss, making winning the default option despite the fact that it is absurdly unlikely. But I have definitely heard people say things like this: “Well if my numbers come up and I didn’t play that week, how would I feel then?” Pretty bad, I’ll grant you. But how much you wanna bet that never happens? (They’ll bet… the price of the ticket, apparently.)

In order for that to work, people either need to dramatically overestimate the probability of winning, or else ignore it entirely. Both of those things totally happen.

First, we overestimate the probability of rare events and underestimate the probability of common events—this is actually the part that makes it cumulative prospect theory instead of just regular prospect theory. If you make a graph of perceived probability versus actual probability, it looks like this:

cumulative_prospect

We don’t make much distinction between 40% and 60%, even though that’s actually pretty big; but we make a huge distinction between 0% and 0.00001% even though that’s actually really tiny. I think we basically have categories in our heads: “Never, almost never, rarely, sometimes, often, usually, almost always, always.” Moving from 0% to 0.00001% is going from “never” to “almost never”, but going from 40% to 60% is still in “often”. (And that for some reason reminded me of “Well, hardly ever!”)
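
The standard way to formalize that curve is with a probability weighting function. The functional form below is the one from Tversky and Kahneman’s 1992 cumulative prospect theory paper, with a commonly cited parameter value; treat it as a sketch of the shape rather than a precise fit.

```python
# The Tversky-Kahneman (1992) probability weighting function: small
# probabilities are overweighted, moderate-to-large ones underweighted.
# gamma ~ 0.61 is a commonly cited estimate; the shape is what matters.

GAMMA = 0.61

def perceived(p):
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

# Tiny probabilities get inflated enormously; 40% and 60% end up much
# closer together than they really are.
for p in (0.00001, 0.01, 0.40, 0.60, 0.99):
    print(f"actual {p:>7.5f} -> perceived {perceived(p):.4f}")
```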

But that’s not even the worst of it. After all that work to explain how we can make sense of people’s behavior in terms of something like a utility function (albeit a distorted one), I think there’s often a simpler explanation still: Regret aversion under total neglect of probability.

Neglect of probability is self-explanatory: You totally ignore the probability. But what’s regret aversion, exactly? Unfortunately I’ve had trouble finding any good popular sources on the topic; it’s all scholarly stuff. (Maybe I’m more cutting-edge than I thought!)

The basic idea is that you minimize regret, where regret can be formalized as the difference in utility between the outcome you got and the best outcome you could have gotten. In effect, it doesn’t matter whether something is likely or unlikely; you only care how bad it is.

This explains insurance and lottery tickets in one fell swoop: With insurance, you have the choice of risking a big loss (big regret) which you can avoid by paying a small amount (small regret). You take the small regret, and buy insurance. With lottery tickets, you have the chance of a large gain (big regret if you skip it and your numbers come up), which you can secure by paying a small amount (small regret). You take the small regret, and buy the ticket.
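
Here is a minimal sketch of that decision rule, using the dollar figures from earlier in the post. The minimax-regret formalization (pick the action whose worst-case regret is smallest, ignoring probabilities entirely) is my own rendering of the idea, not a standard result I’m quoting.

```python
# Minimax regret with probabilities ignored: for each action, find the
# worst-case regret (shortfall vs. the best action in that same state of
# the world), then pick the action whose worst case is smallest.

def minimax_regret(payoffs):
    """payoffs[action][state] -> payoff. Returns the action minimizing
    worst-case regret, with no regard for how likely each state is."""
    states = next(iter(payoffs.values())).keys()
    best_in_state = {s: max(p[s] for p in payoffs.values()) for s in states}
    worst_regret = {a: max(best_in_state[s] - p[s] for s in states)
                    for a, p in payoffs.items()}
    return min(worst_regret, key=worst_regret.get)

insurance = {
    "buy insurance": {"keep job": 40_000, "lose job": 30_000},
    "no insurance":  {"keep job": 50_000, "lose job": 10_000},
}
# Note that treating "my numbers came up even though I didn't play" as a
# live possibility is itself part of the irrational framing.
lottery = {
    "buy ticket":  {"numbers hit": 1_000_000, "numbers miss": 49_998},
    "skip ticket": {"numbers hit": 50_000,    "numbers miss": 50_000},
}

print(minimax_regret(insurance))  # -> "buy insurance"
print(minimax_regret(lottery))    # -> "buy ticket"
```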

This can also explain why a typical American’s fears go in the order terrorists > Ebola > sharks > > cars > cheeseburgers, while the actual risk of dying goes in almost the opposite order, cheeseburgers > cars > > terrorists > sharks > Ebola. (Terrorists are scarier than sharks and Ebola and actually do kill more Americans! Yay, we got something right! Other than that it is literally reversed.)

Dying from a terrorist attack would be horrible; in addition to your own death you have all the other likely deaths and injuries, and the sheer horror and evil of the terrorist attack itself. Dying from Ebola would be almost as bad, with gruesome and agonizing symptoms. Dying of a shark attack would still be pretty awful, as you get dismembered alive. But dying in a car accident isn’t so bad; it’s usually over pretty quickly and the event seems tragic but ordinary. And dying of heart disease and diabetes from your cheeseburger overdose will happen slowly over many years; you’ll barely even notice it coming and probably die rapidly from a heart attack or comfortably in your sleep. (Wasn’t that a pleasant paragraph? But there’s really no other way to make the point.)

If we try to estimate the probability at all—and I don’t think most people even bother—it isn’t by rigorous scientific research; it’s usually by the availability heuristic: How many examples can you think of in which that event happened? If you can think of a lot, you assume that it happens a lot.

And that might even be reasonable, if we still lived in hunter-gatherer tribes or small farming villages and the 150 or so people you knew were the only people you ever heard about. But now that we have live TV and the Internet, news can get to us from all around the world, and the news isn’t trying to give us an accurate assessment of risk; it’s trying to get our attention by talking about the biggest, scariest, most exciting things that are happening around the world. The amount of news attention an item receives is roughly in inverse proportion to the probability of its occurrence, because things are more exciting if they are rare and unusual. Which means that if we are estimating how likely something is based on how many times we heard about it on the news, our estimates are going to be almost exactly reversed from reality. Ironically it is the very fact that we have more information that makes our estimates less accurate, because of the way that information is presented.

It would be a pretty boring news channel that spent all day saying things like this: “82 people died in car accidents today, and 1657 people had fatal heart attacks, 11.8 million had migraines, and 127 million played the lottery and lost; in world news, 214 countries did not go to war, and 6,147 children starved to death in Africa…” This would, however, be vastly more informative.

In the meantime, here are a couple of counter-heuristics I recommend to you: Don’t think about losses and gains, think about where you are and where you might be. Don’t say, “I’ll gain $1,000”; say “I’ll raise my income this year to $41,000.” Definitely do not think in terms of the percentage price of things; think in terms of absolute amounts of money. Cheap expensive things, expensive cheap things is a motto of mine; go ahead and buy the $5 toothbrush instead of the $1, because that’s only $4. But be very hesitant to buy the $22,000 car instead of the $21,000, because that’s $1,000. If you need to estimate the probability of something, actually look it up; don’t try to guess based on what it feels like the probability should be. Make this unprecedented access to information work for you instead of against you. If you want to know how many people die in car accidents each year, you can literally ask Google and it will tell you that (I tried it—it’s 1.3 million worldwide). The fatality rate of a given disease versus the risk of its vaccine, the safety rating of a particular brand of car, the number of airplane crash deaths last month, the total number of terrorist attacks, the probability of becoming a university professor, the average functional lifespan of a new television—all these things and more await you at the click of a button. Even if you think you’re pretty sure, why not look it up anyway?

Perhaps then we can make prospect theory wrong by making ourselves more rational.

The irrationality of racism

JDN 2457039 EST 12:07.

I thought about making today’s post about the crazy currency crisis in Switzerland, but currency exchange rates aren’t really my area of expertise; this is much more in Krugman’s bailiwick, so you should probably read what Krugman says about the situation. There is one thing I’d like to say, however: I think there is a really easy way to create credible inflation and boost aggregate demand, but for some reason nobody is ever willing to do it: Give people money. Emphasis here on the people—not banks. Don’t adjust interest rates or currency pegs, don’t engage in quantitative easing. Give people money. Actually write a bunch of checks, presumably in the form of refundable tax rebates.

The only reason I can think of that economists don’t do this is they are afraid of helping poor people. They wouldn’t put it that way; maybe they’d say they want to avoid “moral hazard” or “perverse incentives”. But those fears didn’t stop them from loaning $2 trillion to banks or adding $4 trillion to the monetary base; they didn’t stop them from fighting for continued financial deregulation when what the world economy most desperately needs is stronger financial regulation. Our whole derivatives market practically oozes moral hazard and perverse incentives, but they aren’t willing to shut down that quadrillion-dollar con game. So that can’t be the actual fear. No, it has to be a fear of helping poor people instead of rich people, as though “capitalism” meant a system in which we squeeze the poor as tight as we can and heap all possible advantages upon those who are already wealthy. No, that’s called feudalism. Capitalism is supposed to be a system where markets are structured to provide free and fair competition, with everyone on a level playing field.

A basic income is a fundamentally capitalist policy, which maintains equal opportunity with a minimum of government intervention and allows the market to flourish. I suppose if you want to say that all taxation and government spending is “socialist”, fine; then every nation that has ever maintained stability for more than a decade has been in this sense “socialist”. Every soldier, firefighter and police officer paid by a government payroll is now part of a “socialist” system. Okay, as long as we’re consistent about that; but now you really can’t say that socialism is harmful; on the contrary, on this definition socialism is necessary for capitalism. In order to maintain security of property, enforcement of contracts, and equality of opportunity, you need government. Maybe we should just give up on the words entirely, and speak more clearly about what specific policies we want. If I don’t get to say that a basic income is “capitalist”, you don’t get to say financial deregulation is “capitalist”. Better yet, how about you can’t even call it “deregulation”? You have to actually argue in front of a crowd of people that it should be legal for banks to lie to them, and there should be no serious repercussions for any bank that cheats, steals, colludes, or even launders money for terrorists. That is, after all, what financial deregulation actually does in the real world.

Okay, that’s enough about that.

My birthday is coming up this Monday; thus completes my 27th revolution around the Sun. With birthdays come thoughts of ancestry: Though I appear White, I am legally one-quarter Native American, and my total ethnic mix includes English, German, Irish, Mohawk, and Chippewa.

Biologically, what exactly does that mean? Next to nothing.

Human genetic diversity is a real thing, and there are genetic links to not only dozens of genetic diseases and propensity toward certain types of cancer, but also personality and intelligence. There are also of course genes for skin pigmentation.

The human population does exhibit some genetic clustering, but the categories are not what you’re probably used to: Good examples of relatively well-defined genetic clusters include Ashkenazi, Papuan, and Mbuti. There are also many different haplogroups, such as mitochondrial haplogroups L3 and CZ.

Maybe you could even make a case for the “races” East Asian, South Asian, Pacific Islander, and Native American, since the indigenous populations of these geographic areas largely do come from the same genetic clusters. Or you could make a bigger category and call them all “Asian”—but if you include Papuan and Aborigine in “Asian” you’d pretty much have to include Chippewa and Navajo as well.

But I think it tells you a lot about what “race” really means when you realize that the two “race” categories which are most salient to Americans are in fact the categories that are genetically most meaningless. “White” and “Black” are totally nonsensical genetic categorizations.

Let’s start with “Black”; defining a “Black” race is like defining a category of animals by the fact that they are all tinted red—foxes yes, dogs no; robins yes, swallows no; ladybirds yes, cockroaches no. There is more genetic diversity within Africa than there is outside of it. There are African populations that are more closely related to European populations than they are to other African populations. The only thing “Black” people have in common is that their skin is dark, which is due to convergent evolution: It’s not due to common ancestry, but a common environment. Dark skin has a direct survival benefit in climates with intense sunlight.  The similarity is literally skin deep.

What about “White”? Well, there are some fairly well-defined European genetic populations, so if we clustered those together we might be able to get something worth calling “White”. The problem is, that’s not how it happened. “White” is a club. The definition of who gets to be “White” has expanded over time, and even occasionally contracted. Originally Hebrew, Celtic, Hispanic, and Italian were not included (and Hebrew, for once, is actually a fairly sensible genetic category, as long as you restrict it to Ashkenazi), but then later they were. But now that we’ve got a lot of poor people coming in from Mexico, we don’t quite think of Hispanics as “White” anymore. We actually watched Arabs lose their “White” card in real-time in 2001; before 9/11, they were “White”; now, “Arab” is a separate thing. And “Muslim” is even treated like a race now, which is like making a racial category of “Keynesians”—never forget that Islam is above all a belief system.

Actually, “White privilege” is almost a tautology—the privilege isn’t given to people who were already defined as “White”, the privilege is to be called “White”. The privilege is to have your ancestors counted in the “White” category so that they can be given rights, while people who are not in the category are denied those rights. There does seem to be a certain degree of restriction by appearance—to my knowledge, no population with skin as dark as Kenyans has ever been considered “White”, and Anglo-Saxons and Nordics have always been included—but the category is flexible to political and social changes.

But really I hate that word “privilege”, because it gets the whole situation backwards. When you talk about “White privilege”, you make it sound as though the problem with racism is that it gives unfair advantages to White people (or to people arbitrarily defined as “White”). No, the problem is that people who are not White are denied rights. It isn’t what White people have that’s wrong; it’s what Black people don’t have. Equating those two things creates a vision of the world as zero-sum, in which each gain for me is a loss for you.

Here’s the thing about zero-sum games: All outcomes are Pareto-efficient. Remember when I talked about Pareto-efficiency? As a quick refresher, an outcome is Pareto-efficient if there is no way for one person to be made better off without making someone else worse off. In general, it’s pretty hard to disagree that, other things equal, Pareto-efficiency is a good thing, and Pareto-inefficiency is a bad thing. But if racism were about “White privilege” and the game were zero-sum, racism would have to be Pareto-efficient.

In fact, racism is Pareto-inefficient, and that is part of why it is so obviously bad. It harms literally billions of people, and benefits basically no one. Maybe there are a few individuals who are actually, all things considered, better off than they would have been if racism had not existed. But there are certainly not very many such people, and in fact I’m not sure there are any at all. If there are any, it would mean that technically racism is not Pareto-inefficient—but it is definitely very close. At the very least, the damage caused by racism is several orders of magnitude larger than any benefits incurred.

That’s why the “privilege” language, while well-intentioned, is so insidious; it tells White people that racism means taking things away from them. Many of these people are already in dire straits—broke, unemployed, or even homeless—so taking away what they have sounds particularly awful. Of course they’d be hostile to or at least dubious of attempts to reduce racism. You just told them that racism is the only thing keeping them afloat! In fact, quite the opposite is the case: Poor White people are, second only to poor Black people, those who stand the most to gain from a more just society. David Koch and Donald Trump should be worried; we will probably have to take most of their money away in order to achieve social justice. (Bill Gates knows we’ll have to take most of his money away, but he’s okay with that; in fact he may end up giving it away before we get around to taking it.) But the average White person will almost certainly be better off than they were.

Why does it seem like there are benefits to racism? Again, because people are accustomed to thinking of the world as zero-sum. One person is denied a benefit, so that benefit must go somewhere else right? Nope—it can just disappear entirely, and in this case typically does.

When a Black person is denied a job in favor of a White person who is less qualified, doesn’t that White person benefit? Uh, no, actually, not really. They have been hired for a job that isn’t an optimal fit for them; they aren’t working to their comparative advantage, and that Black person isn’t either and may not be working at all. The total output of the economy will be thereby reduced slightly. When this happens millions of times, the total reduction in output can be quite substantial, and as a result that White person was hired at $30,000 for an unsuitable job when in a racism-free world they’d have been hired at $40,000 for a suitable one. A similar argument holds for sexism; men don’t benefit from getting jobs women are denied if one of those women would have invented a cure for prostate cancer.

Indeed, the empowerment of women and minorities is kind of the secret cheat code for creating a First World economy. The great successes of economic development—Korea, Japan, China, the US in WW2—had their breakthroughs precisely when they suddenly started including women in manufacturing, effectively doubling their total labor capacity. Moreover, it’s pretty clear that the causation ran in this direction. Periods of economic growth are associated with increases in solidarity with other groups—and downturns with decreased solidarity—but the increase in women in the workforce was sudden and early while the increase in growth and total output was prolonged.

Racism is irrational. Indeed it is so obviously irrational that for decades now neoclassical economists have been insisting that there is no need for civil rights policy, affirmative action, etc. because the market will automatically eliminate racism by the rational profit motive. A more recent literature has attempted to show that, contrary to all appearances, racism actually is rational in some cases. Inevitably it relies upon either the background of a racist society (maybe Black people are, on average, genuinely less qualified, but it would only be because they’ve been given poorer opportunities), or an assumption of “discriminatory tastes”, which is basically giving up and redefining the utility function so that people simply get direct pleasure from being racists. Of course, on that sort of definition, you can basically justify any behavior as “rational”: Maybe he just enjoys banging his head against the wall! (A similar slipperiness is used by egoists to argue that caring for your children is actually “selfish”; well, it makes you happy, doesn’t it? Yes, but that’s not why we do it.)

There’s a much simpler way to understand this situation: Racism is irrational, and so is human behavior.

That isn’t a complete explanation, of course; and I think one major misunderstanding neoclassical economists have of cognitive economists is that they think this is what we do—we point out that something is irrational, and then high-five and go home. No, that’s not what we do. Finding the irrationality is just the start; next comes explaining the irrationality, understanding the irrationality, and finally—we haven’t reached this point in most cases—fixing the irrationality.

So what explains racism? In short, the tribal paradigm. Human beings evolved in an environment in which the most important factor in our survival and that of our offspring was not food supply or temperature or predators; it was tribal cohesion. With a cohesive tribe, we could find food, make clothes, fight off lions. Without one, we were helpless. Millions of years in this condition shaped our brains, programming them to treat threats to tribal cohesion as the greatest possible concern. We even reached the point where solidarity for the tribe actually began to dominate basic survival instincts: For a suicide bomber the unity of the tribe—be it Marxism for the Tamil Tigers or Islam for Al-Qaeda—is more important than his own life. We will do literally anything if we believe it is necessary to defend the identities we believe in.

And no, we rationalists are no exception here. We are indeed different from other groups; the beliefs that define us, unlike the beliefs of literally every other group that has ever existed, are actually rationally founded. The scientific method really isn’t just another religion, for unlike religion it actually works. But still, if push came to shove and we were forced to kill and die in order to defend rationality, we would; and maybe we’d even be right to do so. Maybe the French Revolution was, all things considered, a good thing—but it sure as hell wasn’t nonviolent.

This is the background we need to understand racism. It actually isn’t enough to show people that racism is harmful and irrational, because they are programmed not to care. As long as racial identification is the salient identity, the tribe by which we define ourselves, we will do anything to defend the cohesion of that tribe. It is not enough to show that racism is bad; we must in fact show that race doesn’t matter. Fortunately, this is easy, for as I explained above, race does not actually exist.

That makes racism in some sense easier to deal with than sexism, because the very categories of races upon which it is based are fundamentally faulty. Sexes, on the other hand, are definitely a real thing. Males and females actually are genetically different in important ways. Exactly how different in what ways is an open question, and what we do know is that for most of the really important traits like intelligence and personality the overlap outstrips the difference. (The really big, categorical differences all appear to be physical: Anatomy, size, testosterone.) But conquering sexism may always be a difficult balance, for there are certain differences we won’t be able to eliminate without altering DNA. That no more justifies sexism than the fact that height is partly genetic would justify denying rights to short people (which, actually, is something we do); but it does make matters complicated, because it’s difficult to know whether an observed difference (for instance, most pediatricians are female, while most neurosurgeons are male) is due to discrimination or innate differences.

Racism, on the other hand, is actually quite simple: Almost any statistically significant difference in behavior or outcome between races must be due to some form of discrimination somewhere down the line. Maybe it’s not discrimination right here, right now; maybe it’s discrimination years ago that denied opportunities, or discrimination against their ancestors that left them with less to inherit generations later; but it almost has to be discrimination against someone somewhere, because it is only by social construction that races exist in the first place. I do say “almost” because I can think of a few exceptions: Black people are genuinely less likely to use tanning salons and genuinely more likely to need vitamin D supplements, but both of those things are directly due to skin pigmentation. They are also more likely to suffer from sickle-cell anemia, which is another convergent trait that evolved in tropical climates as a response to malaria. But unless you can think of a reason why employment outcomes would depend upon vitamin D, the huge difference in employment between Whites and Blacks really can’t be due to anything but discrimination.

I imagine most of my readers are more sophisticated than this, but just in case you’re wondering about the difference in IQ scores between Whites and Blacks, that is indeed a real observation, but IQ isn’t entirely genetic. The reason IQ scores are rising worldwide (the Flynn Effect) is improvements in environmental conditions: fewer environmental pollutants (particularly lead and mercury, the removal of which is responsible for most of the reduction in crime in America over the last 20 years), better nutrition, better education, less stress. Being stupid does not make you poor (or how would we explain Donald Trump?), but being poor absolutely does make you stupid. Combine that with the challenges and inconsistencies in cross-national IQ comparisons, and it’s pretty clear that the higher IQ scores in rich nations are an effect, not a cause, of their affluence. Likewise, the lower IQ scores of Black people in the US are entirely explained by their poorer living conditions, with no need for any genetic hypothesis—which would also be very difficult in the first place precisely because “Black” is such a weird genetic category.

Unfortunately, I don’t yet know exactly what it takes to change people’s concept of group identification. Obviously it can be done, for group identities change all the time, sometimes quite rapidly; but we simply don’t have good research on what causes those changes or how they might be affected by policy. That’s actually a major part of the experiment I’ve been trying to get funding to run since 2009, which I hope can now become my PhD thesis. All I can say is this: I’m working on it.

The World Development Report is on cognitive economics this year!

JDN 2457013 EST 21:01.

On a personal note, I can now proudly report that I have successfully defended my thesis “Corruption, ‘the Inequality Trap’, and ‘the 1% of the 1%’”, and I have now completed a master’s degree in economics. I’m back home in Michigan for the holidays (hence my use of Eastern Standard Time), and then, well… I’m not entirely sure. I have a gap of about six months before PhD programs start. I have a number of job applications out, but unless I get a really good offer (such as the position at the International Food Policy Research Institute in DC) I think I may just stay in Michigan for a while and work on my own projects, particularly publishing two of my books (my nonfiction magnum opus, The Mathematics of Tears and Joy, and my first novel, First Contact) and making some progress on a couple of research papers—ideally publishing one of them as well. But the future for me is quite uncertain right now, and that is my major source of stress. Ironically I’d probably be less stressed if I were working full-time, because I would have a clear direction and sense of purpose. If I could have any job in the world, it would be a hard choice between a professorship at UC Berkeley and a research position at the World Bank.

Which brings me to the topic of today’s post: The people who do my dream job have just released a report showing that they basically agree with me on how it should be done.

If you have some extra time, please take a look at the World Bank World Development Report. They put one out each year, and it provides a rigorous and thorough (236 pages) but quite readable summary of the most important issues in the world economy today. It’s not exactly light summer reading, but neither is it the usual morass of arcane jargon. If you like my blog, you can probably follow most of the World Development Report. If you don’t have time to read the whole thing, you can at least skim through all the sidebars and figures to get a general sense of what it’s all about. Much of the report is written in the form of personal vignettes that make the general principles more vivid; but these are not mere anecdotes, for the report rigorously cites an enormous volume of empirical research.

The title of the 2015 report? “Mind, Society, and Behavior”. In other words, cognitive economics. The world’s foremost international economic institution has just endorsed cognitive economics and rejected neoclassical economics, and their report on the subject provides a brilliant introduction to the subject replete with direct applications to international development.

For someone like me who lives and breathes cognitive economics, the report is pure joy. It’s all there, from the anchoring heuristic to social proof, from corruption to discrimination. The report is broadly divided into three parts.

Part 1 explains the theory and evidence of cognitive economics, subdivided into “thinking automatically” (heuristics), “thinking socially” (social cognition), and “thinking with mental models” (bounded rationality). (If I wrote it I’d also include sections on the tribal paradigm and narrative, but of course I’ll have to publish that stuff in the actual research literature first.) Anyway the report is so amazing as it is I really can’t complain. It includes some truly brilliant deorbits on neoclassical economics, such as this one from page 47: “In other words, the canonical model of human behavior is not supported in any society that has been studied.”

Part 2 uses cognitive economic theory to analyze and improve policy. This is the core of the report, with chapters on poverty, childhood, finance, productivity, ethnography, health, and climate change. So many different policies are analyzed I’m not sure I can summarize them with any justice, but a few particularly stuck out: First, the high cognitive demands of poverty can basically explain the whole observed difference in IQ between rich and poor people—so contrary to the right-wing belief that people are poor because they are stupid, in fact people seem stupid because they are poor. Simplifying the procedures for participation in social welfare programs (which is desperately needed, I say with a stack of incomplete Medicaid paperwork on my table—even I find these packets confusing, and I have a master’s degree in economics) not only increases their uptake but also makes people more satisfied with them—and of course a basic income could simplify social welfare programs enormously. “Are you a US citizen? Is it the first of the month? Congratulations, here’s $670.” Another finding I thought particularly noteworthy is that productivity is in many cases enhanced by unconditional gifts more than it is by incentives that are conditional on behavior—which goes against the very core of neoclassical economic theory. (It also gives us yet another item on the enormous list of benefits of a basic income: Far from reducing work incentives by the income effect, an unconditional basic income, as a shared gift from your society, may well motivate you even more than the same payment as a wage.)

Part 3 is a particularly bold addition: It turns the tables and applies cognitive economics to economists themselves, showing that human irrationality is by no means limited to idiots or even to poor people (as the report discusses in chapter 4, there are certain biases that poor people exhibit more—but there are also some they exhibit less); all human beings are limited by the same basic constraints, and economists are human beings. We like to think of ourselves as infallibly rational, but we are nothing of the sort. Even after years of studying cognitive economics I still sometimes catch myself making mistakes based on heuristics, particularly when I’m stressed or tired. As a long-term example, I have a number of vague notions of entrepreneurial projects I’d like to do, but none for which I have been able to muster the effort and confidence to actually seek loans or investors. Rationally, I should either commit or abandon them, yet I cannot quite bring myself to do either. And then of course I’ve never met anyone who didn’t procrastinate to some extent, and actually those of us who are especially smart often seem especially prone—though we often adopt the strategy of “active procrastination”, in which you end up doing something else useful when procrastinating (my apartment becomes cleanest when I have an important project to work on), or purposefully choose to work under pressure because we are more effective that way.

And the World Bank pulled no punches here, showing experiments on World Bank economists clearly demonstrating confirmation bias, sunk-cost fallacy, and what the report calls “home team advantage”, more commonly called ingroup-outgroup bias—which is basically a form of the much more general principle that I call the tribal paradigm.

If there is one flaw in the report, it’s that it’s quite long and fairly exhausting to read, which means that many people won’t even try and many who do won’t make it all the way through. (The fact that it doesn’t seem to be available in hard copy makes it worse; it’s exhausting to read lengthy texts online.) We only have so much attention and processing power to devote to a task, after all—which is kind of the whole point, really.