Oppression is quantitative.

JDN 2457082 EDT 11:15.

Economists are often accused of assigning dollar values to everything, of embodying Oscar Wilde’s definition of a cynic: someone who knows the price of everything and the value of nothing. And there is more than a little truth to this, particularly among neoclassical economists; I was alarmed a few days ago to receive an email response from an economist that included the word ‘altruism’ in scare quotes, as though this were somehow a problematic or unrealistic concept. (Actually, altruism is already formally modeled by biologists, and my claim that human beings are altruistic would be so uncontroversial among evolutionary biologists as to be considered trivial.)

But sometimes this accusation is aimed at something economists do that is actually tremendously useful, even necessary to good policymaking: We make everything quantitative. Nothing is ever “yes” or “no” to an economist (sometimes even when it probably should be; the debate among economists in the 1960s over whether slavery is economically efficient does seem rather beside the point), but always more or less; never good or bad but always better or worse. For example, as I discussed in my post on minimum wage, the mainstream position among economists is not that minimum wage is always harmful nor that it is always beneficial, but that it is a policy with costs and benefits, one that on average neither increases nor decreases unemployment. The mainstream position among economists about climate policy is that we should institute either a high carbon tax or a system of cap-and-trade permits; no economist I know wants us to either do nothing and let the market decide (a position most Republicans currently seem to take) or suddenly ban coal and oil (the latter is a strawman position I’ve heard environmentalists accused of, but I’ve never actually heard it advocated; even Greenpeace wants to ban offshore drilling, not oil in general).

This makes people uncomfortable, I think, because they want moral issues to be simple. They want “good guys” who are always right and “bad guys” who are always wrong. (Speaking of strawman environmentalism, a good example of this is Captain Planet, in which no one ever seems to pollute the environment in order to help people or even in order to make money; no, they simply do it because they hate clean water and baby animals.) They don’t want to talk about options that are more good or less bad; they want one option that is good and all other options that are bad.

This attitude tends to become infused with righteousness, such that anyone who disagrees is an agent of the enemy. Politics is the mind-killer, after all. If you acknowledge that there might be some downside to a policy you agree with, that’s like betraying your team.

But in reality, the failure to acknowledge downsides can lead to disaster. Problems that could have been prevented are instead ignored and denied. Getting the other side to recognize the downsides of their own policies might actually help you persuade them to your way of thinking. And appreciating that there is a continuum of possibilities that are better and worse in various ways to various degrees is what allows us to make the world a better place even as we know that it will never be perfect.

There is a common refrain you’ll hear from a lot of social justice activists which sounds really nice and egalitarian, but actually has the potential to completely undermine the entire project of social justice.

This is the idea that oppression can’t be measured quantitatively, and that we shouldn’t try to compare different levels of oppression. The notion that some people are more oppressed than others is often derided as the Oppression Olympics. (Some use the term more narrowly, for cases where a discussion is derailed by debate over who has it worse—but then the problem is really discussions being derailed, isn’t it?)

This sounds nice, because it means we don’t have to ask hard questions like, “Which is worse, sexism or racism?” or “Who is worse off, people with cancer or people with diabetes?” These are very difficult questions, and maybe they aren’t the right ones to ask—after all, there’s no reason to think that fighting racism and fighting sexism are mutually exclusive; they can in fact be complementary. Research into cancer only prevents us from doing research into diabetes if our total research budget is fixed—this is more than anything else an argument for increasing research budgets.

But we must not throw out the baby with the bathwater. Oppression is quantitative. Some kinds of oppression are clearly worse than others.

Why is this important? Because otherwise you can’t measure progress. If you have a strictly qualitative notion of oppression where it’s black-and-white, on-or-off, oppressed-or-not, then we haven’t made progress against any kind of oppression at all. There is still racism, there is still sexism, there is still homophobia, there is still religious discrimination. Maybe these things will always exist to some extent. This makes the fight for social justice a hopeless Sisyphean task.

But in fact, that’s not true at all. We’ve made enormous progress. Unbelievably fast progress. Mind-boggling progress. For hundreds of millennia humanity made almost no progress at all, and then in the last few centuries we have suddenly leapt toward justice.

Sexism used to mean that women couldn’t own property, couldn’t vote, and could be abused and raped with impunity—or even beaten or killed for being raped (which Saudi Arabia still does, by the way). Now sexism just means that women aren’t paid as well, are underrepresented in positions of power like Congress and Fortune 500 boardrooms, and are still sometimes sexually harassed or raped—but when men are caught doing this they go to prison for years. This change happened in only about 100 years. That’s fantastic.

Racism used to mean that Black people were literally property to be bought and sold. They were slaves. They had no rights at all, they were treated like animals. They were frequently beaten to death. Now they can vote, hold office—one is President!—and racism means that our culture systematically discriminates against them, particularly in the legal system. Racism used to mean you could be lynched; now it just means that it’s a bit harder to get a job and the cops will sometimes harass you. This took only about 200 years. That’s amazing.

Homophobia used to mean that gay people were criminals. We could be sent to prison or even executed for the crime of making love in the wrong way. If we were beaten or murdered, it was our fault for being faggots. Now, homophobia means that we can’t get married in some states (and fewer all the time!), we’re depicted on TV in embarrassing stereotypes, and a lot of people say bigoted things about us. This has only taken about 50 years! That’s astonishing.

And above all, the most extreme example: Religious discrimination used to mean you could be burned at the stake for not being Catholic. It used to mean—and in some countries still does mean—that it’s illegal to believe in certain religions. Now, it means that Muslims are stereotyped because, well, to be frank, there are some really scary things about Muslim culture and some really scary people who are Muslim leaders. (Personally, I think Muslims should be more upset about Ahmadinejad and Al Qaeda than they are about being profiled in airports.) It means that we atheists are annoyed by “In God We Trust”, but we’re no longer burned at the stake. This has taken longer, more like 500 years. But even though it took a long time, I’m going to go out on a limb and say that this progress is wonderful.

Obviously, there’s a lot more progress remaining to be made on all these issues, and others—like economic inequality, ableism, nationalism, and animal rights—but the point is that we have made a lot of progress already. Things are better than they used to be—a lot better—and keeping this in mind will help us preserve the hope and dedication necessary to make things even better still.

If you think that oppression is either-or, on-or-off, you can’t celebrate this progress, and as a result the whole fight seems hopeless. Why bother, when it’s always been on, and will probably never be off? But we started with oppression that was absolutely horrific, and now it’s considerably milder. That’s real progress. At least within the First World we have gone from 90% oppressed to 25% oppressed, and we can bring it down to 10% or 1% or 0.1% or even 0.01%. Those aren’t just numbers, those are the lives of millions of people. As democracy spreads worldwide and poverty is eradicated, oppression declines. Step by step, social changes are made, whether by protest marches or forward-thinking politicians or even by lawyers and lobbyists (they aren’t all corrupt).

And indeed, a four-year-old Black girl with a mental disability living in Ghana whose entire family’s income is $3 a day is more oppressed than I am, and not only do I have no qualms about saying that, it would feel deeply unseemly to deny it. I am not totally unoppressed—I am a bisexual atheist with chronic migraines and depression in a country that is suspicious of atheists, systematically discriminates against LGBT people, and does not make proper accommodations for chronic disorders, particularly mental ones. But I am far less oppressed, and that little girl (she does exist, though I know not her name) could be made much less oppressed than she is even by relatively simple interventions (like a basic income). In order to make her fully and totally unoppressed, we would need such a radical restructuring of human society that I honestly can’t really imagine what it would look like. Maybe something like The Culture? Even then as Iain Banks imagines it, there is inequality between those within The Culture and those outside it, and there have been wars like the Idiran-Culture War which killed billions, and among those trillions of people on thousands of vast orbital habitats someone, somewhere is probably making a speciesist remark. Yet I can state unequivocally that life in The Culture would be better than my life here now, which is better than the life of that poor disabled girl in Ghana.

To be fair, we can’t actually put a precise number on it—though many economists try, and one of my goals is to convince them to improve their methods so that they stop using willingness-to-pay and instead try to actually measure utility by something like QALY. A precise number would help, actually—it would allow us to do cost-benefit analyses to decide where to focus our efforts. But while we don’t need a precise number to tell when we are making progress, we do need to acknowledge that there are degrees of oppression, some worse than others.

Oppression is quantitative. And our goal should be minimizing that quantity.

The sunk-cost fallacy

JDN 2457075 EST 14:46.

I am back on Eastern Time once again, because we just finished our 3600-km road trek from Long Beach to Ann Arbor. I seem to move an awful lot; this makes me a bit like Schumpeter, who moved on average once every two years for his whole adult life. Schumpeter and I have much in common, in fact, though I have no particular interest in horses.

Today’s topic is the sunk-cost fallacy, which was particularly salient as I had to box up all my things for the move. There were many items that I ended up having to throw away because it wasn’t worth moving them—but this was always painful, because I couldn’t help but think of all the work or money I had put into them. I threw away craft projects I had spent hours working on and collections of bottlecaps I had gathered over years—because I couldn’t think of when I’d use them, and ultimately the question isn’t how hard they were to make in the past, it’s what they’ll be useful for in the future. But each time it hurt, like I was giving up a little part of myself.

That’s the sunk-cost fallacy in a nutshell: Instead of considering whether it will be useful to us later and thus worth having around, we naturally tend to consider the effort that went into getting it. Instead of making our decisions based on the future, we make them based on the past.

Come to think of it, the entire Marxist labor theory of value is basically one gigantic sunk-cost fallacy: Instead of caring about the usefulness of a product—the mainstream utility theory of value—we are supposed to care about the labor that went into making it. To see why this is wrong, imagine someone spends 10,000 hours carving meaningless symbols into a rock, and someone else spends 10 minutes working with chemicals but somehow figures out how to cure pancreatic cancer. Which one would you pay more for—particularly if you had pancreatic cancer?

This is one of the most common irrational behaviors humans engage in, and it’s worth considering why that might be. Most people commit the sunk-cost fallacy on a daily basis, and even those of us who are aware of it will still fall into it if we aren’t careful.

This often seems to come from a fear of being wasteful; I don’t know of any data on this, but my hunch is that the more environmentalist you are, the more often you tend to run into the sunk-cost fallacy. You feel particularly bad wasting things when you are conscious of the damage that waste does to our planetary ecosystem. (Which is not to say that you should not be environmentalist; on the contrary, most of us should be a great deal more environmentalist than we are. The negative externalities of environmental degradation are almost unimaginably enormous—climate change already kills 150,000 people every year and is projected to kill tens if not hundreds of millions of people over the 21st century.)

I think the sunk-cost fallacy is involved in a lot of labor regulations as well. Most countries have employment protection legislation that makes it difficult to fire people for various reasons, ranging from the basically reasonable (you can’t fire people for being women or racial minorities) to the totally absurd (in some countries you can’t even fire people for being incompetent). These sorts of regulations are often quite popular, because people really don’t like the idea of losing their jobs. When faced with the possibility of losing your job, you should be thinking about what your future options are; but many people spend a lot of time thinking about the past effort they put into this one. I think there is some endowment effect and loss aversion at work as well: You value your job more simply because you already have it, so you don’t want to lose it even for something better.

Yet these regulations are widely regarded by economists as inefficient; and for once I am inclined to agree. While I certainly don’t want people being fired frivolously or for discriminatory reasons, sometimes companies really do need to lay off workers because there simply isn’t enough demand for their products. When a factory closes down, we think about the jobs that are lost—but we don’t think about the better jobs they can now do instead.

I favor a system like what they have in Denmark (I’m popularizing a hashtag about this sort of thing: #Scandinaviaisbetter): We don’t try to protect your job, we try to protect you. Instead of regulations that make it hard to fire people, Denmark has a generous unemployment insurance system, strong social welfare policies, and active labor market policies that help people retrain and find new and better jobs. One thing I think Denmark might want to consider is restrictions on cyclical layoffs—in a recession there is pressure to lay off workers, but that can create a vicious cycle that makes recessions worse. Denmark was hit considerably harder by the Great Recession than France, for example; where France’s unemployment rose from 7.5% to 9.6%, Denmark’s rose from an astonishing 3.1% all the way up to 7.6%.

Then again, sometimes what looks like a sunk-cost fallacy actually isn’t—and I think this gives us insight into how we might have evolved such an apparently silly heuristic in the first place.

Why would you care about what you did in the past when deciding what to do in the future? Well there’s one reason in particular: Credible commitment. There are many cases in life where you’d like to be able to plan to do something in the future, but when the time comes to actually do it you’ll be tempted not to follow through.

This sort of thing happens all the time: When you take out a loan, you plan to pay it back—but when you need to actually make payments it sure would be nice if you didn’t have to. If you’re trying to slim down, you go on a diet—but doesn’t that cookie look delicious? You know you should quit smoking for your health—but what’s one more cigarette, really? When you get married, you promise to be faithful—but then sometimes someone else comes along who seems so enticing! Your term paper is due in two weeks, so you really should get working on it—but your friends are going out for drinks tonight, why not start the paper tomorrow?

Our true long-term interests are often misaligned with our short-term temptations. This often happens because of hyperbolic discounting, which is a bit technical; but the basic idea is that you tend to rate the importance of an event in inverse proportion to its distance in time. That turns out to be irrational, because as you get closer to the event, your valuations will change disproportionately. The optimal rational choice would be exponential discounting, where you value each successive moment a fixed percentage less than the last—since that percentage doesn’t change, your valuations will always stay in line with one another. But basically nobody really uses exponential discounting in real life.

We can see this vividly in experiments: If we ask people whether they would rather receive $100 today or $110 a week from now, they often go with the $100 today. But if you ask them whether they would rather receive $100 in 52 weeks or $110 in 53 weeks, almost everyone chooses the $110. The value of a week apparently depends on how far away it is! (The $110 is clearly the rational choice, by the way. Discounting 10% per week makes no sense at all—unless you literally believe that $1,000 today is as good as $140,000 a year from now.)
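To see the difference concretely, here is a minimal sketch, using a standard hyperbolic discount function V = A/(1 + kD) and an exponential one V = A·δ^D; the particular parameter values (k = 0.2 per week, δ = 0.998 per week) are arbitrary, chosen purely for illustration:

```python
# Sketch: hyperbolic vs. exponential discounting. The functional forms are
# standard; the parameters k and delta are arbitrary illustrative choices.

def hyperbolic(amount, delay_weeks, k=0.2):
    return amount / (1 + k * delay_weeks)

def exponential(amount, delay_weeks, delta=0.998):
    return amount * delta ** delay_weeks

for name, discount in [("hyperbolic", hyperbolic), ("exponential", exponential)]:
    prefers_100_now = discount(100, 0) > discount(110, 1)      # $100 today vs. $110 next week
    prefers_100_later = discount(100, 52) > discount(110, 53)  # same choice, a year out
    print(f"{name}: {prefers_100_now}, {prefers_100_later}")

# hyperbolic: True, False   -- the preference reverses as the dates approach
# exponential: False, False -- the preference stays consistent at any distance
```

Under the hyperbolic function the choice flips as the payoff dates draw nearer, just as in the experiments; under any exponential function it never does.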

To solve this problem, it can be advantageous to make commitments—either enforced by direct measures such as legal penalties, or even simply by making promises that we feel guilty breaking. That’s why cold turkey is often the most effective way to quit a drug. Physiologically that makes no sense, because gradual cessation clearly does reduce withdrawal symptoms. But psychologically it does, because cold turkey allows you to make a hardline commitment to never again touch the stuff. The majority of smokers who successfully quit report quitting cold turkey, though there is still ongoing research on whether properly-orchestrated gradual reduction can be more effective. Likewise, vague notions like “I’ll eat better and exercise more” are virtually useless, while specific prescriptions like “I will do 20 minutes of exercise every day and stop eating red meat” are much more effective—the latter allows you to make a promise to yourself that can be broken, and since you feel bad breaking it you are motivated to keep it.

In the presence of such commitments, the past does matter, at least insofar as you made commitments to yourself or others in the past. If you promised never to smoke another cigarette, or never to cheat on your wife, or never to eat meat again, you actually have a good reason—and a good chance—to never do those things. This is easy to confuse with a sunk cost; when you think about the 20 years you’ve been married or the 10 years you’ve been vegetarian, you might be thinking of the sunk cost you’ve incurred over that time, or you might be thinking of the promises you’ve made and kept to yourself and others. In the former case you are irrationally committing a sunk-cost fallacy; in the latter you are rationally upholding a credible commitment.

This is most likely why we evolved in such a way as to commit sunk-cost fallacies. The ability to enforce commitments on ourselves and others was so important that it was worth it to overcompensate and sometimes let us care about sunk costs. Because commitments and sunk costs are often difficult to distinguish, it would have been more costly to evolve better ways of distinguishing them than it was to simply make the mistake.

Perhaps people who are outraged by being laid off aren’t actually committing a sunk-cost fallacy at all; perhaps they are instead assuming the existence of a commitment where none exists. “I gave this company 20 good years, and now they’re getting rid of me?” But the truth is, you gave the company nothing. They never committed to keeping you (unless they signed a contract, but that’s different; if they are violating a contract, of course they should be penalized for that). They made you a trade, and when that trade ceases to be advantageous they will stop making it. Corporations don’t think of themselves as having any moral obligations whatsoever; they exist only to make profit. It is certainly debatable whether it was a good idea to set up corporations in this way; but unless and until we change that system it is important to keep it in mind. You will almost never see a corporation do something out of kindness or moral obligation; that’s simply not how corporations work. At best, they do nice things to enhance their brand reputation (Starbucks, Whole Foods, Microsoft, Disney, Costco). Some don’t even bother doing that, letting people hate as long as they continue to buy (Walmart, BP, DeBeers). Actually the former model seems to be more successful lately, which bodes well for the future; but be careful to recognize that few if any of these corporations are genuinely doing it out of the goodness of their hearts. Human beings are often altruistic; corporations are specifically designed not to be.

And there were some things I did promise myself I would keep—like old photos and notebooks that I want to keep as memories—so those went in boxes. Other things were obviously still useful—clothes, furniture, books. But for the rest? It was painful, but I thought about what I could realistically use them for, and if I couldn’t think of anything, they went into the trash.

Love is rational

JDN 2457066 PST 15:29.

Since I am writing this the weekend of Valentine’s Day (actually by the time it is published it will be Valentine’s Day) and sitting across from my boyfriend, it seems particularly appropriate that today’s topic should be love. As I am writing it is in fact Darwin Day, so it is fitting that evolution will be a major topic as well.

Usually we cognitive economists are the ones reminding neoclassical economists that human beings are not always rational. Today however I must correct a misconception in the opposite direction: Love is rational, or at least it can be, should be, and typically is.

Lately I’ve been reading Tim Harford’s The Logic of Life, which actually makes much the same point, about love and many other things. I had expected it to be a dogmatic defense of economic rationality—published in 2008 no less, which would make it the scream of a dying paradigm as it carries us all down with it—but I was in fact quite pleasantly surprised. The book takes a nuanced position on rationality very similar to my own, and actually incorporates many of the insights from neuroeconomics and cognitive economics. I think Harford would basically agree with me that human beings are 90% rational (but woe betide the other 10%).

We have this romantic (Romantic?) notion in our society that love is not rational, it is “beyond” rationality somehow. “Love is blind”, they say; and this is often used as a smug reply to the notion that rationality is the proper guide to live our lives.

The argument would seem to follow: “Love is not rational, love is good, therefore rationality is not always good.”

But then… the argument would follow? What do you mean, follow? Follow logically? Follow rationally? Something is clearly wrong if we’ve constructed a rational argument intended to show that we should not live our lives by rational arguments.

And the problem of course is the premise that love is not rational. Whatever made you say that?

It’s true that love is not directly volitional, not in the way that it is volitional to move your arm upward or close your eyes or type the sentence “Jackdaws love my big sphinx of quartz.” You don’t exactly choose to love someone, weighing the pros and cons and making a decision the way you might choose which job offer to take or which university to attend.

But then, you don’t really choose which university you like either, now do you? You choose which to attend. But your enjoyment of that university is not a voluntary act. And similarly you do in fact choose whom to date, whom to marry. And you might well consider the pros and cons of such decisions. So the difference is not as large as it might at first seem.

More importantly, to say that our lives should be rational is not the same as saying they should be volitional. You simply can’t make your life completely volitional, no matter how hard you try. You simply don’t have the cognitive resources to maintain constant awareness of every breath, every heartbeat. Yet there is nothing irrational about breathing or heartbeats—indeed they are necessary for survival and thus a precondition of anything rational you might ever do.

Indeed, in many ways it is our subconscious that is the most intelligent part of us. It is not as flexible as our conscious mind—that is why our conscious mind is there—but the human subconscious is unmatched in its efficiency and reliability among all known computational systems. Walk across a room and it will solve inverse kinematics in real time. Throw a ball and it will solve three-dimensional nonlinear differential equations as well. Look at a familiar face and it will immediately identify it among a set of hundreds of faces with near-perfect accuracy regardless of the angle, lighting conditions, or even hairstyle. To see that I am not exaggerating the immense difficulty of these tasks, look at how difficult it is to make robots that can walk on two legs or throw balls. Face recognition is so difficult that it is still an unsolved problem with an extensive body of ongoing research.

And love, of course, is the subconscious system that has been most directly optimized by natural selection. Our very survival has depended upon it for millions of years. Indeed, it’s amazing how often it does seem to fail given those tight optimization constraints; I think this is for two reasons. First, natural selection optimizes for inclusive fitness, which is not the same thing as optimizing for happiness—what’s good for your genes may not be good for you per se. Many of the ways that love hurts us seem to be based around behaviors that probably did on average spread more genes on the African savannah. Second, the task of selecting an optimal partner is so mind-bogglingly complex that even the most powerful computational system in the known universe still can only do it so well. Imagine trying to construct a formal decision model that would tell you whom you should marry—all the variables you’d need to consider, the cost of sampling each of those variables sufficiently, the proper weightings on all the different terms in the utility function. Perhaps the wonder is that love is as rational as it is.

Indeed, love is evidence-based—and when it isn’t, this is cause for concern. The evidence is most often presented in small ways over long periods of time—a glance, a kiss, a gift, a meeting canceled to stay home and comfort you. Some ways are larger—a career move postponed to keep the family together, a beautiful wedding, a new house. We aren’t formally calculating the Bayesian probability at each new piece of evidence—though our subconscious brains might be, and whatever they’re doing the results aren’t far off from that mathematical optimum.

The notion that you will never “truly know” if others love you is no more epistemically valid or interesting than the notion that you will never “truly know” if your shirt is grue instead of green or if you are a brain in a vat. Perhaps we’ve been wrong about gravity all these years, and on April 27, 2016 it will suddenly reverse direction! No, it won’t, and I’m prepared to literally bet the whole world on that (frankly I’m not sure I have a choice). To be fair, the proposition that your spouse of twenty years or your mother loves you is perhaps not that certain—but it’s pretty darn certain. Perhaps the proper comparison is the level of certainty that climate change is caused by human beings, or even less, the level of certainty that your car will not suddenly veer off the road and kill you. The latter is something that actually happens—but we all drive every day assuming it won’t. By the time you marry someone, you can and should be that certain that they love you.

Love without evidence is bad love. The sort of unrequited love that builds in secret based upon fleeting glimpses, hours of obsessive fantasy, and little or no interaction with its subject isn’t romantic—it’s creepy and psychologically unhealthy. The extreme of that sort of love is what drove John Hinckley Jr. to shoot Ronald Reagan in order to impress Jodie Foster.

I don’t mean to make you feel guilty if you have experienced such a love—most of us have at one point or another—but it disgusts me how much our society tries to elevate that sort of love as the “true love” to which we should all aspire. We encourage people—particularly teenagers—to conceal their feelings for a long time and then release them in one grand surprise gesture of affection, which is just about the opposite of what you should actually be doing. (Look at Love Actually, which is just about the opposite of what its title says.) I think a great deal of strife in our society would be eliminated if we taught our children how to build relationships gradually over time instead of constantly presenting them with absurd caricatures of love that no one can—or should—follow.

I am pleased to see that our cultural norms on that point seem to be changing. A corporation as absurdly powerful as Disney is both an influence upon and a barometer of our social norms, and the trope in the most recent Disney films (like Frozen and Maleficent) is that true love is not the fiery passion of love at first sight, but the deep bond between family members that builds over time. This is a much healthier concept of love, though I wouldn’t exclude romantic love entirely. Romantic love can be true love, but only by building over time through a similar process.

Perhaps there is another reason people are uncomfortable with the idea that love is rational; by definition, rational behaviors respond to incentives. And since we tend to conceive of incentives as a purely selfish endeavor, this would seem to imply that love is selfish, which seems somewhere between painfully cynical and outright oxymoronic.

But while love certainly does carry many benefits for its users—being in love will literally make you live longer, by quite a lot, an effect size comparable to quitting smoking or exercising twice a week—it also carries many benefits for its recipients as well. Love is in fact the primary means by which evolution has shaped us toward altruism; it is the love for our family and our tribe that makes us willing to sacrifice so much for them. Not all incentives are selfish; indeed, an incentive is really just something that motivates you to action. If you could truly convince me that a given action I took would have even a reasonable chance of ending world hunger, I would do almost anything to achieve it; I can scarcely imagine a greater incentive, even though I would be harmed and the benefits would accrue to people I have never met.

Love evolved because it advanced the fitness of our genes, of course. And this bothers many people; it seems to make our altruism ultimately just a different form of selfishness, I guess: selfishness for our genes instead of ourselves. But this is a genetic fallacy, isn’t it? Yes, evolution by natural selection is a violent process, full of death and cruelty and suffering (as Tennyson put it, red in tooth and claw); but that doesn’t mean that its outcome—namely ourselves—is so irredeemable. We are, in fact, altruistic, regardless of where that altruism came from. The fact that it advanced our genes can actually be comforting in a way, because it reminds us that the universe is nonzero-sum and benefiting others does not have to mean harming ourselves.

One question I like to ask when people suggest that some scientific fact undermines our moral status in this way is: “Well, what would you prefer?” If the causal determinism of neural synapses undermines our free will, then what should we have been made of? Magical fairy dust? If we were, fairy dust would be a real phenomenon, and it would obey laws of nature, and you’d just say that the causal determinism of magical fairy dust undermines free will all over again. If the fact that our altruistic emotions evolved by natural selection to advance our inclusive fitness makes us not truly altruistic, then where should altruism have come from? A divine creator who made us to love one another? But then we’re just following our programming! You can always make this sort of argument, which either means that life is necessarily empty of meaning, that no possible universe could ever assuage our ennui—or, what I believe, that life’s meaning does not come from such ultimate causes. It is not what you are made of or where you come from that defines what you are. We are best defined by what we do.

It seems to depend on how you look at it: Romantics are made of stardust and the fabric of the cosmos, while cynics are made of the nuclear waste expelled in the planet-destroying explosions of dying balls of fire. Romantics are the cousins of all living things in one grand family, while cynics are apex predators evolved from millions of years of rape and murder. Both of these views are in some sense correct—but I think the real mistake is in thinking that they are incompatible. Human beings are both those things, and more; we are capable of both great compassion and great cruelty—and also great indifference. It is a mistake to think that only the dark sides—or for that matter only the light sides—of us are truly real.

Love is rational; love responds to incentives; love is an evolutionary adaptation. Love binds us together; love makes us better; love leads us to sacrifice for one another.

Love is, above all, what makes us not infinite identical psychopaths.

Prospect Theory: Why we buy insurance and lottery tickets

JDN 2457061 PST 14:18.

Today’s topic is called prospect theory. Prospect theory is basically what put cognitive economics on the map; it was the knock-down argument that Kahneman used to show that human beings are not completely rational in their economic decisions. It all goes back to a 1979 paper by Kahneman and Tversky that now has 34,000 citations (yes, we’ve been having this argument for a rather long time now). In the 1990s it was refined into cumulative prospect theory, which is more mathematically precise but basically the same idea.

What was that argument? People buy both insurance and lottery tickets.

The “both” is very important. Buying insurance can definitely be rational—indeed, typically is. Buying lottery tickets could theoretically be rational, under very particular circumstances. But they cannot both be rational at the same time.

To see why, let’s talk some more about marginal utility of wealth. Recall that a dollar is not worth the same to everyone; to a billionaire a dollar is a rounding error, to most of us it is a bottle of Coke, but to a starving child in Ghana it could be life itself. We typically observe diminishing marginal utility of wealth—the more money you have, the less another dollar is worth to you.

If we sketch a graph of your utility versus wealth it would look something like this:

[Figure Marginal_utility_wealth: utility rising with wealth at a rapidly diminishing rate]

Notice how it increases as your wealth increases, but at a rapidly diminishing rate.

If you have diminishing marginal utility of wealth, you are what we call risk-averse. If you are risk-averse, you’ll (sometimes) want to buy insurance. Let’s suppose the units on that graph are tens of thousands of dollars. Suppose you currently have an income of $50,000. You are offered the chance to pay $10,000 a year to buy unemployment insurance, so that if you lose your job, instead of making $10,000 on welfare you’ll make $30,000 on unemployment. You think you have about a 20% chance of losing your job.

If you had constant marginal utility of wealth, this would not be a good deal for you. Your expected amount of money would be reduced if you buy the insurance: Before, you had an 80% chance of $50,000 and a 20% chance of $10,000, so your expected amount of money is $42,000. With the insurance you have an 80% chance of $40,000 and a 20% chance of $30,000, so your expected amount of money is $38,000. Why would you take such a deal? That’s like giving up $4,000, isn’t it?

Well, let’s look back at that utility graph. At $50,000 your utility is 1.80, uh… units, er… let’s say QALY. 1.80 QALY per year, meaning you live 80% better than the average human. Maybe, I guess? Doesn’t seem too far off. In any case, the units of measurement aren’t that important.

[Figure Insurance_options: the utility curve with the insurance scenario’s income levels marked]

By buying insurance your effective income goes down to $40,000 per year, which lowers your utility to 1.70 QALY. That’s a fairly significant hit, but it’s not unbearable. If you lose your job (20% chance), you’ll fall down to $30,000 and have a utility of 1.55 QALY. Again, noticeable, but bearable. Your overall expected utility with insurance is therefore 1.67 QALY.

But what if you don’t buy insurance? Well then you have a 20% chance of taking a big hit and falling all the way down to $10,000 where your utility is only 1.00 QALY. Your expected utility is therefore only 1.64 QALY. You’re better off going with the insurance.
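If you want to check this arithmetic yourself, the utility figures above happen to be consistent with the curve u(x) = 1 + ln(x)/2, where x is income in units of $10,000 (that particular functional form is just a convenient reconstruction, not something the theory dictates). A minimal sketch:

```python
import math

# Reconstructed utility curve u(x) = 1 + ln(x)/2, with x in units of $10,000.
# (An assumption, chosen only to match the figures quoted in the text.)
def utility(income_dollars):
    return 1 + math.log(income_dollars / 10_000) / 2

p_loss = 0.20  # chance of losing your job

# Without insurance: $50,000 if employed, $10,000 on welfare if not.
eu_without = (1 - p_loss) * utility(50_000) + p_loss * utility(10_000)

# With insurance: pay $10,000 a year, leaving $40,000 employed, $30,000 not.
eu_with = (1 - p_loss) * utility(40_000) + p_loss * utility(30_000)

print(f"without insurance: {eu_without:.2f} QALY")  # ~1.64
print(f"with insurance:    {eu_with:.2f} QALY")     # ~1.66 (the 1.67 above comes
                                                    # from rounding intermediates)
```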

And this is how insurance companies make a profit (well, the legitimate way anyway; they also like to gouge people and deny claims from cancer patients, of course); on average, they make more from each customer than they pay out, but customers are still better off because they are protected against big losses. In this case, the insurance company profits $4,000 per customer per year, customers each get 30 milliQALY per year (about the same utility as an extra $2,000, more or less), and everyone is happy.

But if this is your marginal utility of wealth—and it most likely is, approximately—then you would never want to buy a lottery ticket. Let’s suppose you actually have pretty good odds; it’s a 1 in 1 million chance of $1 million for a ticket that costs $2. This means that the state is going to take in about $2 million for every $1 million they pay out to a winner.

That’s about as good as your odds for a lottery are ever going to get; usually it’s more like a 1 in 400 million chance of $150 million for $1, which is an even bigger difference than it sounds, because $150 million is nowhere near 150 times as good as $1 million. It’s a bit better from the state’s perspective though, because they get to receive $400 million for every $150 million they pay out.

For your convenience I have zoomed out the graph so that you can see 100, which is an income of $1 million (which you’ll have this year if you win; to get it next year, you’ll have to play again). You’ll notice I did not have to zoom out the vertical axis, because 20 times as much money only ends up being about 2 times as much utility. I’ve marked with lines the utility of $50,000 (1.80, as we said before) versus $1 million (3.30).

[Figure Lottery_utility: the utility curve zoomed out to $1 million, with lines marking utility 1.80 at $50,000 and 3.30 at $1 million]

What about the utility of $49,998 which is what you’ll have if you buy the ticket and lose? At this number of decimal places you can’t see the difference, so I’ll need to go out a few more. At $50,000 you have 1.80472 QALY. At $49,998 you have 1.80470 QALY. That $2 only costs you 0.00002 QALY, 20 microQALY. Not much, really; but of course not, it’s only $2.

How much does the 1 in 1 million chance of $1 million give you? Even less than that. Remember, the utility gain for going from $50,000 to $1 million is only 1.50 QALY. So you’re adding one one-millionth of that in expected utility, which is of course 1.5 microQALY, or 0.0000015 QALY.

That $2 may not seem like it’s worth much, but that 1 in 1 million chance of $1 million is worth less than one tenth as much. Again, I’ve tried to make these figures fairly realistic; they are by no means exact (I don’t actually think $49,998 corresponds to exactly 1.804699 QALY), but the order of magnitude difference is right. You gain about ten times as much utility from spending that $2 on something you want than you do on taking the chance at $1 million.
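In code, using that same reconstructed curve (again, u(x) = 1 + ln(x)/2 is just a convenient fit to the figures above):

```python
import math

def utility(income_dollars):  # the same reconstructed curve as before
    return 1 + math.log(income_dollars / 10_000) / 2

# Cost of the ticket: the utility drop from $50,000 to $49,998.
ticket_cost = utility(50_000) - utility(49_998)

# Expected benefit: one one-millionth of the jump from $50,000 to $1 million.
jackpot_value = (utility(1_000_000) - utility(50_000)) / 1_000_000

print(f"cost:    {ticket_cost:.7f} QALY")   # ~0.0000200 (20 microQALY)
print(f"benefit: {jackpot_value:.7f} QALY") # ~0.0000015 (1.5 microQALY)
# The chance at the jackpot is worth about a tenth (1/13, on this curve)
# of what the ticket costs.
```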

I said before that it is theoretically possible for you to have a utility function for which the lottery would be rational. For that you’d need to have increasing marginal utility of wealth, so that you could be what we call risk-seeking. Your utility function would have to look like this:

[Figure Weird_utility: a convex utility curve, with increasing marginal utility of wealth]

There’s no way marginal utility of wealth looks like that. This would be saying that it would hurt Bill Gates more to lose $1 than it would hurt a starving child in Ghana, which makes no sense at all. (It certainly makes you wonder why he’s so willing to give it to them.) So frankly even if we didn’t buy insurance, the fact that we buy lottery tickets would already look pretty irrational.

But in order for it to be rational to buy both lottery tickets and insurance, our utility function would have to be totally nonsensical. Maybe it could look like this or something; marginal utility decreases normally for a while, and then suddenly starts going upward again for no apparent reason:

[Figure Weirder_utility: marginal utility decreasing at first, then increasing again]

Clearly it does not actually look like that. Not only would this mean that Bill Gates is hurt more by losing $1 than the child in Ghana; it would also mean that the middle class are the people with the lowest marginal utility of wealth in the world. Both the rich and the poor would need to have higher marginal utility of wealth than we do. This would mean that apparently yachts are just amazing and we have no idea. Riding a yacht is the pinnacle of human experience, a transcendence beyond our wildest imaginings; and riding a slightly bigger yacht is even more amazing and transcendent. Love and the joy of a life well-lived pale in comparison to the ecstasy of adding just one more layer of gold plate to your Ferrari collection.

Where increasing marginal utility was merely ridiculous, this is outright special pleading. You’re just making up bizarre utility functions that perfectly line up with whatever behavior people happen to have so that you can still call it rational. It’s like saying, “It could be perfectly rational! Maybe he enjoys banging his head against the wall!”

Kahneman and Tversky had a better idea. They realized that human beings aren’t so great at assessing probability, and furthermore tend not to think in terms of total amounts of wealth or annual income at all, but in terms of losses and gains. Through a series of clever experiments they showed that we are not so much risk-averse as we are loss-averse; we are actually willing to take more risk if it means that we will be able to avoid a loss.

In effect, we seem to be acting as if our utility function looks like this, where the zero no longer means “zero income”, it means “whatever we have right now”:

[Figure Prospect_theory: the prospect-theory value function, centered on the reference point and steeper for losses than for gains]

We tend to weight losses about twice as much as gains, and we tend to assume that losses also diminish in their marginal effect the same way that gains do. That is, we would only take a 50% chance to lose $1000 if it meant a 50% chance to gain $2000; but we’d take a 10% chance at losing $10,000 to save ourselves from a guaranteed loss of $1000.
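Formally, prospect theory replaces the utility-of-wealth function with a value function over gains and losses relative to the reference point; here is a sketch using Tversky and Kahneman's commonly cited 1992 estimates (an exponent of about 0.88 for diminishing sensitivity, and a loss-aversion coefficient of about 2.25):

```python
# Prospect-theory value function: v(x) = x^a for gains, -lam * (-x)^a for
# losses, using Tversky & Kahneman's 1992 estimates a ~ 0.88, lam ~ 2.25.
def value(x, a=0.88, lam=2.25):
    return x ** a if x >= 0 else -lam * (-x) ** a

# A 50-50 gamble between gaining $2000 and losing $1000 is roughly break-even:
print(0.5 * value(2000) + 0.5 * value(-1000))  # ~ -89, near zero on this scale

# But a guaranteed loss of $1000 feels worse than a 10% chance of losing $10,000:
print(value(-1000))           # ~ -982
print(0.10 * value(-10_000))  # ~ -745: the gamble hurts less, so we gamble
```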

This can explain why we buy insurance, provided that you frame it correctly. One of the things about prospect theory—and about human behavior in general—is that it exhibits framing effects: The answer we give depends upon the way you ask the question. That’s so totally obviously irrational it’s honestly hard to believe that we do it; but we do, and sometimes in really important situations. Doctors—doctors—will decide a moral dilemma differently based on whether you describe it as “saving 400 out of 600 patients” or “letting 200 out of 600 patients die”.

In this case, you need to frame insurance as the default option, and not buying insurance as an extra risk you are taking. Then saving money by not buying insurance is a gain, and therefore less important, while a higher risk of a bad outcome is a loss, and therefore important.

If you frame it the other way, with not buying insurance as the default option, then buying insurance is taking a loss by making insurance payments, only to get a gain if the insurance pays out. Suddenly the exact same insurance policy looks less attractive. This is a big part of why Obamacare has been effective but unpopular. It was set up as a fine—a loss—if you don’t buy insurance, rather than as a bonus—a gain—if you do buy insurance. The latter would be more expensive, but we could just make it up by taxing something else; and it might have made Obamacare more popular, because people would see the government as giving them something instead of taking something away. But the fine does a better job of framing insurance as the default option, so it motivates more people to actually buy insurance.

But even that would still not be enough to explain how it is rational to buy lottery tickets (Have I mentioned how it’s really not a good idea to buy lottery tickets?), because buying a ticket is a loss and winning the lottery is a gain. You actually have to get people to somehow frame not winning the lottery as a loss, making winning the default option despite the fact that it is absurdly unlikely. But I have definitely heard people say things like this: “Well if my numbers come up and I didn’t play that week, how would I feel then?” Pretty bad, I’ll grant you. But how much you wanna bet that never happens? (They’ll bet… the price of the ticket, apparently.)

In order for that to work, people either need to dramatically overestimate the probability of winning, or else ignore it entirely. Both of those things totally happen.

First, we overestimate the probability of rare events and underestimate the probability of common events—this is actually the part that makes it cumulative prospect theory instead of just regular prospect theory. If you make a graph of perceived probability versus actual probability, it looks like this:

[Figure cumulative_prospect: perceived probability versus actual probability]

We don’t make much distinction between 40% and 60%, even though that’s actually pretty big; but we make a huge distinction between 0% and 0.00001% even though that’s actually really tiny. I think we basically have categories in our heads: “Never, almost never, rarely, sometimes, often, usually, almost always, always.” Moving from 0% to 0.00001% is going from “never” to “almost never”, but going from 40% to 60% is still in “often”. (And that for some reason reminded me of “Well, hardly ever!”)
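The curve in that graph is the probability weighting function from cumulative prospect theory, usually written w(p) = p^γ/(p^γ + (1−p)^γ)^(1/γ); a sketch with Tversky and Kahneman's estimate of γ ≈ 0.61 for gains:

```python
# Probability weighting function from cumulative prospect theory
# (Tversky & Kahneman 1992), with gamma ~ 0.61 for gains.
def w(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.00001, 0.01, 0.40, 0.60, 0.99):
    print(f"actual {p:.5f} -> perceived {w(p):.5f}")

# A 0.001% chance is weighted roughly 90 times too heavily, while the gap
# between 40% and 60% shrinks from 20 percentage points to about 10.
```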

But that’s not even the worst of it. After all that work to explain how we can make sense of people’s behavior in terms of something like a utility function (albeit a distorted one), I think there’s often a simpler explanation still: Regret aversion under total neglect of probability.

Neglect of probability is self-explanatory: You totally ignore the probability. But what’s regret aversion, exactly? Unfortunately I’ve had trouble finding any good popular sources on the topic; it’s all scholarly stuff. (Maybe I’m more cutting-edge than I thought!)

The basic idea is that you minimize regret, where regret can be formalized as the difference in utility between the outcome you got and the best outcome you could have gotten. In effect, it doesn’t matter whether something is likely or unlikely; you only care how bad it is.

This explains insurance and lottery tickets in one fell swoop: With insurance, you risk a big loss (big regret), which you can avoid by paying a small amount (small regret); you take the small regret, and buy insurance. With lottery tickets, you have a chance at a large gain (big regret if you miss it), which you can seize by paying a small amount (small regret); you take the small regret, and buy the ticket.
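Here's a minimal sketch of that decision rule (the standard name is minimax regret: pick the action whose worst-case regret is smallest), using the same numbers as the insurance and lottery examples above:

```python
# Minimax regret, ignoring probabilities entirely. Regret = the best payoff
# attainable in a given state minus what your chosen action got you.
def max_regret(payoffs):
    # payoffs: {action: [payoff in state 0, payoff in state 1, ...]}
    n_states = len(next(iter(payoffs.values())))
    best = [max(row[s] for row in payoffs.values()) for s in range(n_states)]
    return {action: max(best[s] - row[s] for s in range(n_states))
            for action, row in payoffs.items()}

# Insurance; the states are (keep your job, lose your job):
print(max_regret({"buy insurance": [40_000, 30_000],
                  "go uninsured":  [50_000, 10_000]}))
# Buying risks at most $10,000 of regret; going uninsured, $20,000. Buy.

# Lottery; the states are (your numbers lose, your numbers win):
print(max_regret({"buy ticket": [49_998, 1_000_000],
                  "abstain":    [50_000, 50_000]}))
# Buying risks at most $2 of regret; abstaining, $950,000. Buy the ticket.
```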

This can also explain why a typical American’s fears go in the order terrorists > Ebola > sharks > > cars > cheeseburgers, while the actual risk of dying goes in almost the opposite order, cheeseburgers > cars > > terrorists > sharks > Ebola. (Terrorists are scarier than sharks and Ebola and actually do kill more Americans! Yay, we got something right! Other than that it is literally reversed.)

Dying from a terrorist attack would be horrible; in addition to your own death you have all the other likely deaths and injuries, and the sheer horror and evil of the terrorist attack itself. Dying from Ebola would be almost as bad, with gruesome and agonizing symptoms. Dying of a shark attack would be still pretty awful, as you get dismembered alive. But dying in a car accident isn’t so bad; it’s usually over pretty quick and the event seems tragic but ordinary. And dying of heart disease and diabetes from your cheeseburger overdose will happen slowly over many years, you’ll barely even notice it coming and probably die rapidly from a heart attack or comfortably in your sleep. (Wasn’t that a pleasant paragraph? But there’s really no other way to make the point.)

If we try to estimate the probability at all—and I don’t think most people even bother—it isn’t by rigorous scientific research; it’s usually by availability heuristic: How many examples can you think of in which that event happened? If you can think of a lot, you assume that it happens a lot.

And that might even be reasonable, if we still lived in hunter-gatherer tribes or small farming villages and the 150 or so people you knew were the only people you ever heard about. But now that we have live TV and the Internet, news can get to us from all around the world, and the news isn’t trying to give us an accurate assessment of risk, it’s trying to get our attention by talking about the biggest, scariest, most exciting things that are happening around the world. The amount of news attention an item receives is in fact in inverse proportion to the probability of its occurrence, because things are more exciting if they are rare and unusual. Which means that if we are estimating how likely something is based on how many times we heard about it on the news, our estimates are going to be almost exactly reversed from reality. Ironically it is the very fact that we have more information that makes our estimates less accurate, because of the way that information is presented.

It would be a pretty boring news channel that spent all day saying things like this: “82 people died in car accidents today, and 1,657 people had fatal heart attacks, 11.8 million had migraines, and 127 million played the lottery and lost; in world news, 214 countries did not go to war, and 6,147 children starved to death in Africa…” This would, however, be vastly more informative.

In the meantime, here are a couple of counter-heuristics I recommend to you: Don’t think about losses and gains, think about where you are and where you might be. Don’t say, “I’ll gain $1,000”; say “I’ll raise my income this year to $41,000.” Definitely do not think in terms of the percentage price of things; think in terms of absolute amounts of money. Cheap expensive things, expensive cheap things is a motto of mine; go ahead and buy the $5 toothbrush instead of the $1, because that’s only $4. But be very hesitant to buy the $22,000 car instead of the $21,000, because that’s $1,000. If you need to estimate the probability of something, actually look it up; don’t try to guess based on what it feels like the probability should be. Make this unprecedented access to information work for you instead of against you. If you want to know how many people die in car accidents each year, you can literally ask Google and it will tell you that (I tried it—it’s 1.3 million worldwide). The fatality rate of a given disease versus the risk of its vaccine, the safety rating of a particular brand of car, the number of airplane crash deaths last month, the total number of terrorist attacks, the probability of becoming a university professor, the average functional lifespan of a new television—all these things and more await you at the click of a button. Even if you think you’re pretty sure, why not look it up anyway?

Perhaps then we can make prospect theory wrong by making ourselves more rational.

The winner-takes-all effect

JDN 2457054 PST 14:06.

As I write there is some sort of mariachi band playing on my front lawn. It is actually rather odd that I have a front lawn, since my apartment is set back from the road; yet there is the patch of grass, and there is the band playing upon it. This sort of thing is part of the excitement of living in a large city (and Long Beach would seem like a large city were it not right next to the sprawling immensity that is Los Angeles—there are more people in Long Beach than in Cleveland, but there are more people in greater Los Angeles than in Sweden); with a certain critical mass of human beings comes unexpected pieces of culture.

The fact that people agglomerate in this way is actually relevant to today’s topic, which is what I will call the winner-takes-all effect. I actually just finished reading a book called The Winner-Take-All Society, which is particularly horrifying to read because it came out in 1996. That’s almost twenty years ago, and things were already bad; and since then everything it describes has only gotten worse.

What is the winner-takes-all effect? It is the simple fact that in competitive capitalist markets, a small difference in quality can yield an enormous difference in return. The third most popular soda drink company probably still makes drinks that are pretty good, but do you have any idea what it is? There’s Coke, there’s Pepsi, and then there’s… uh… Dr. Pepper, apparently! But I didn’t know that before today and I bet you didn’t either. Now think about what it must be like to be the 15th most popular soda drink company, or the 37th. That’s the winner-takes-all effect.

I don’t generally follow football, but since tomorrow is the Super Bowl I feel some obligation to use that example as well. The highest-paid quarterback is Russell Wilson of the Seattle Seahawks, who is signing onto a five-year contract worth $110 million ($22 million a year). In annual income that will make him pass Jay Cutler of the Chicago Bears, who has a seven-year contract worth $127 million (about $18 million a year). This shift may have something to do with the fact that the Seahawks are in the Super Bowl this year and the Bears are not (they haven’t been since 2007). Now consider what life is like for most football players; the median income of football players is most likely zero (at least as far as football-related income goes), and the median income of NFL players—the cream of the crop already—is $770,000; that’s still very good money of course (more than Krugman makes, actually! But he could make more, if he were willing to sell out to Wall Street), but it’s barely 1/30 of what Wilson is going to be making. To make that kind of money, you need to be the best, of the best, of the best (sir!). That’s the winner-takes-all effect.

To go back to the example of cities, it is for similar reasons that the largest cities (New York, Los Angeles, London, Tokyo, Shanghai, Hong Kong, Delhi) become packed with tens of millions of people while others (Long Beach, Ann Arbor, Cleveland) get hundreds of thousands and most (Petoskey, Ketchikan, Heber City, and hundreds of others you’ve never heard of) get only a few thousand. Beyond that there are thousands of tiny little hamlets that many don’t even consider cities. The median city probably has about 10,000 people in it, and that only because we’d stop calling it a city if it fell below 1,000. If we include every tiny little village, the median town size is probably about 20 people. Meanwhile the largest city in the world is Tokyo, with a greater metropolitan area that holds almost 38 million people—or to put it another way almost exactly as many people as California. Huh, LA doesn’t seem so big now does it? How big is a typical town? Well, that’s the thing about this sort of power-law distribution; the concept of “typical” or “average” doesn’t really apply anymore. Each little piece of the distribution has basically the same shape as the whole distribution, so there isn’t a “typical” size or scale. That’s the winner-takes-all effect.
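To get a feel for why “typical” breaks down, here is a toy simulation; city sizes being roughly Pareto (power-law) distributed is a standard stylized fact, but the tail exponent of 1.1 and the scale are arbitrary choices for illustration:

```python
import random

# Toy power-law "city sizes": heavy-tailed enough that the mean is dragged
# far above the median by a handful of enormous draws, and any slice of the
# distribution looks like a miniature of the whole.
random.seed(0)
sizes = sorted(random.paretovariate(1.1) * 1_000 for _ in range(10_000))

median = sizes[len(sizes) // 2]
mean = sum(sizes) / len(sizes)
print(f"median: {median:,.0f}  mean: {mean:,.0f}  largest: {max(sizes):,.0f}")
# The largest draw can outweigh thousands of the smallest combined; there is
# no meaningful "average city".
```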

As they freely admit in the book, it isn’t literally true that a single winner takes everything. That is the theoretical maximum level of wealth inequality, and fortunately no society has ever quite reached it. The closest we get in today’s society is probably Saudi Arabia, which recently lost its king—and yes I do mean king in the fullest sense of the word, a man of virtually unlimited riches and near-absolute power. His net wealth was estimated at $18 billion, which frankly sounds low; still, even if that’s half the true amount, it’s oddly comforting to know that he was still not quite as rich as Bill Gates ($78 billion), who earned his wealth at least semi-legitimately in a basically free society. Say what you will about intellectual property rents and market manipulation—and you know I do—but they are worlds away from what Abdullah’s family did, which was to literally and directly rob millions of people by the power of the sword. Mostly he just inherited all that, and he did implement some minor reforms, but make no mistake: He was ruthless and by no means willing to give up his absolute power—he beheaded dozens of political dissidents, for example. Saudi Arabia does spread its wealth around a little, such that basically no one is below the UN poverty lines of $1.25 and $2 per day, but about a fourth of the population is below the national poverty line—which is just about the same distribution of wealth as we have in the US, and that actually makes me wonder just how free and legitimate our markets really are.

The winner-takes-all effect would really be more accurately described as the “top small fraction takes the vast majority” effect, but that isn’t nearly as catchy, now is it?

There are several different causes that can all lead to this same result. In the book, Robert Frank and Philip Cook argue that we should not attribute the cause to market manipulation, but to the natural functioning of competitive markets. There’s something to be said for this—I used to buy the whole idea that competitive markets are always best, but increasingly I’ve been seeing ways that less competitive markets can produce better overall outcomes.

Where they lose me is in arguing that the skyrocketing compensation packages for CEOs are due to their superior performance, and that corporations are just being rational in competing for the best CEOs. If that were true, we wouldn’t find that the rank correlation between a CEO’s pay and the company’s stock performance is statistically indistinguishable from zero. Actually, even a small positive correlation wouldn’t prove that the CEOs are performing well; it could just be that companies that perform well are willing to pay their CEOs more—and stock option compensation will do this automatically. But in fact the correlation is so tiny as to be negligible; corporations would be better off hiring a random person off the street and paying them $50,000, for all the good the CEO does for their stock performance. If you adjust for the size of the company, you find that having a higher-paid CEO is positively correlated with performance for small startups, but negatively correlated for large, well-established corporations. No, clearly there’s something going on here besides competitive pay for high performance—corruption comes to mind, which you’ll remember was the subject of my master’s thesis.
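
For the statistically inclined, the test behind that claim is just a rank correlation. Here is a sketch in Python; the data is synthetic noise standing in for a real compensation dataset, which you would have to supply yourself:

```python
import numpy as np
from scipy.stats import spearmanr

# Rank-correlate CEO pay with stock performance. Synthetic data only:
# pay and returns are drawn independently here, mimicking the null
# result the studies report.
rng = np.random.default_rng(0)
n = 500                                                 # hypothetical sample of firms
ceo_pay = rng.lognormal(mean=15, sigma=1, size=n)       # pay in dollars
stock_return = rng.normal(loc=0.07, scale=0.2, size=n)  # annual stock return

rho, p_value = spearmanr(ceo_pay, stock_return)
print(f"Spearman rank correlation: {rho:+.3f} (p = {p_value:.2f})")
# When pay is unrelated to returns, rho hovers near zero: the same
# pattern found for real CEOs of large corporations.
```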

But in some cases there isn’t any apparent corruption, and yet we still see these enormously unequal distributions of income. Another good example is the publishing industry, in which J.K. Rowling can make over $1 billion (she donated enough to charity to officially lose her billionaire status) but most authors make little or nothing, particularly those who can’t get published in the first place. I have no reason to believe that J.K. Rowling acquired this massive wealth by corruption; she just sold an awful lot of books: over 100 million copies of the first Harry Potter book alone.

But why is she able to sell 100 million copies while thousands of authors who write books that are probably just as good, or nearly so, make nothing? Am I just bitter and envious, as Mitt Romney would say? Is J.K. Rowling actually a million times as good an author as I am?

Obviously not, right? She may be better, but she’s not that much better. So how is it that she ends up making a million times as much as I do from writing? It feels like squaring the circle: How can markets be efficient and competitive, yet pay some people millions of times as much as others despite their being only slightly more productive?

The answer is simple but enormously powerful: positive feedback. Once you start doing well, it’s easier to do better. You have what economists call an economy of scale. The first 10,000 books sold are the hardest; the next 10,000 are a little easier; the next 10,000 easier still. In fact I suspect that in many cases the first 10% of growth is harder than the second 10% and so on—which is actually a much stronger claim. For my sales to grow 10% I’d need to add about 20 people. For J.K. Rowling’s sales to grow 10% she’d need to add 10 million. Yet it might actually be easier for J.K. Rowling to add 10 million than for me to add 20; if not, it isn’t much harder. Suppose we each tried by just sending out enticing tweets. I have about 100 Twitter followers, so I’d need 0.2 sales per follower; she has about 4 million, so she’d need an average of 2.5 sales per follower. That’s an advantage for me, percentage-wise—but if we have the same uptake rate, I sell 20 books and she sells 800,000.
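
In case that arithmetic went by too fast, here it is spelled out:

```python
# The tweet arithmetic from the paragraph above. The uptake rate is
# whatever I'd need to hit my 20-sale target; the question is what
# the same rate does for an author with 4 million followers.
my_followers, her_followers = 100, 4_000_000
uptake = 20 / my_followers            # 0.2 sales per follower

print(my_followers * uptake)          # 20.0 books for me
print(her_followers * uptake)         # 800000.0 books for Rowling
```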

If you have only a handful of book sales, as I do, those sales stay static; but once you cross the line into millions of sales, it’s easy for that to spread into tens or even hundreds of millions. In the particular case of books, this is because they spread by word of mouth: say each person who reads a book recommends it to 10 friends, and you only read a book once at least 2 of your friends have recommended it. In a city of 100,000 people, if you start with 50 readers, odds are that their friends barely overlap, and so you stop at 50. But if you start at 50,000, there is bound to be a great deal of overlap; that 50,000 recruits another 10,000, then another 10,000, and pretty soon the whole 100,000 have read it. In this case we have what are called network externalities: you’re more likely to read a book if your friends have read it, so the more people have read it, the more people want to read it. There’s a very similar effect at work in social networks; why does everyone still use Facebook, even though it’s actually pretty awful? Because everyone uses Facebook. Less important than the quality of the software platform (Google Plus is better, and there are some third-party networks that are likely better still) is the fact that all your friends and family are on it. We all use Facebook because we all use Facebook? We all read Harry Potter books because we all read Harry Potter books? The first rule of tautology club is…
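
You can watch that threshold effect in a toy simulation. I’m assuming random mixing rather than a real friendship network, so treat this as an illustration of the mechanism, not a model of actual book sales:

```python
import random

def word_of_mouth(city_size, initial_readers, fanout=10, threshold=2, seed=1):
    """Toy cascade: each new reader recommends the book to `fanout` random
    residents, and a resident reads it once `threshold` readers have
    recommended it. Random mixing stands in for a real social network."""
    rng = random.Random(seed)
    recs = [0] * city_size              # recommendations each person has received
    read = [False] * city_size
    frontier = rng.sample(range(city_size), initial_readers)
    for person in frontier:
        read[person] = True
    while frontier:
        new_readers = []
        for reader in frontier:
            for friend in rng.sample(range(city_size), fanout):
                recs[friend] += 1
                if not read[friend] and recs[friend] >= threshold:
                    read[friend] = True
                    new_readers.append(friend)
        frontier = new_readers
    return sum(read)

print(word_of_mouth(100_000, 50))       # fizzles: stays around 50
print(word_of_mouth(100_000, 50_000))   # cascades: nearly all 100,000
```

Fifty seed readers almost never push anyone over the two-recommendation threshold; fifty thousand push nearly everyone over it. Same book, same rules, wildly different outcomes.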

Languages are also like this, which is why I can write this post in English and yet people can still read it around the world. English is the winner of the language competition (we call it the lingua franca, as weird as that is—French is not the lingua franca anymore). The losers are those hundreds of New Guinean languages you’ve never heard of, many of which are dying. And their distribution obeys, once again, a power law. (Individual word frequencies obey a power law too, Zipf’s law, which makes this whole fractal business all the more delightful.)

Network externalities are not the only way that the winner-takes-all effect can occur, though I think they are the most common. You can also have economies of scale on the supply side, particularly in the case of information: Recording a song takes a lot of time and effort, but once you’ve recorded it, making more copies is trivial. That first copy costs a great deal, while every subsequent copy costs next to nothing. This is probably also at work in the case of J.K. Rowling and the NFL; the two phenomena are by no means mutually exclusive. But the sizes of cities are clearly due to network externalities: it’s quite expensive to live in a big city—no supply-side economy of scale there—but you want to live where other people live, because that’s where friends and family and opportunities are.
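
To see that supply-side arithmetic in numbers, take the recording example with some made-up figures (say $50,000 to record the song and a nickel to deliver each additional copy):

```python
# Supply-side economy of scale: a large one-time cost spread over an
# ever-larger number of copies. Both figures are assumptions for
# illustration, not industry data.
fixed_cost = 50_000.0      # one-time cost of recording the song
marginal_cost = 0.05       # cost of delivering each additional copy

for copies in (1_000, 100_000, 10_000_000):
    average_cost = fixed_cost / copies + marginal_cost
    print(f"{copies:>10,} copies -> ${average_cost:,.2f} per copy")
```

The average cost per copy collapses from $50.05 to about six cents as sales grow, which is exactly the advantage the biggest sellers enjoy.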

The most worrisome kind of winner-takes-all effect is what Frank and Cook call deep pockets: Once wealth is concentrated in a few hands, those few individuals can choose their own winners in a much more literal sense. The rich can commission works of art from their favorite artists, exacerbating the inequality among artists; worse yet, they can use their money to influence politicians (as the Kochs are planning to spend $900 million—almost $3 for every person in America—doing in 2016) and exacerbate the inequality of the whole system. That gives us even more positive feedback on top of all the other positive feedbacks.

Sure enough, if you run the standard neoclassical economic models of competition and just insert the assumption of economies of scale, the result is concentration of wealth—in fact, if nothing about the rules prevents it, the result is a complete monopoly. Nor is this result in any sense economically efficient; it’s just what naturally happens in the presence of economies of scale.
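
You don’t need a fancy model to see this happen. Here is a deliberately stylized one, with every parameter invented for illustration: two nearly identical firms, unit costs that fall with market share, and customers who drift toward whoever is cheaper.

```python
# A stylized market with economies of scale: unit cost falls with a
# firm's market share, and customers flow toward the cheaper firm.
# All parameters are invented for illustration.
shares = [0.51, 0.49]                  # an almost even start
for period in range(30):
    costs = [1.0 / (0.1 + s) for s in shares]    # bigger means cheaper
    attractiveness = [1.0 / c for c in costs]    # cheaper means more customers
    total = sum(a * s for a, s in zip(attractiveness, shares))
    shares = [a * s / total for a, s in zip(attractiveness, shares)]

print([round(s, 3) for s in shares])   # ends near [1.0, 0.0]: monopoly
```

A two-point head start in market share, compounded through cheaper production, ends in complete monopoly within a few dozen periods.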

Frank and Cook seem most concerned about the fact that these winner-take-all incomes will tend to push too many people to seek those careers, leaving millions of would-be artists, musicians and quarterbacks with dashed dreams when they might have been perfectly happy as electrical engineers or high school teachers. While this may be true—next week I’ll go into detail about prospect theory and why human beings are terrible at making judgments based on probability—it isn’t really what I’m most concerned about. For all the cost of frustrated ambition there is also a good deal of benefit; striving for greatness does not just make the world better if we succeed, it can make us better even if we fail. I’d strongly encourage people to have backup plans; but I’m not going to tell people to stop painting, singing, writing, or playing football just because they’re unlikely to make a living at it. The one concern I do have is that the competition is so fierce that we are pressured to go all in, to not have backup plans, to use performance-enhancing drugs—which may carry awful risks, but they also work. And it’s probably true, actually, that you’re a bit more likely to make it all the way to the top if you don’t have a backup plan. You’re also vastly more likely to end up at the bottom. Is raising your probability of being a bestselling author from 0.00011% to 0.00012% worth giving up all other career options? Skipping chemistry class to practice football may improve your chances of being an NFL quarterback from 0.000013% to 0.000014%, but it will also drop your chances of being a chemical engineer from 95% (a degree in chemical engineering almost guarantees you a job eventually) to more like 5% (it’s hard to get a degree when you flunk all your classes).
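
If you want the expected-value version of that chemistry-class gamble, here it is. The NFL median and the probabilities are from above; the engineer and fallback salaries are numbers I’m assuming for illustration:

```python
# Expected annual income under the probabilities above. The engineer
# and fallback salaries are assumptions for illustration, not data.
nfl_salary = 770_000          # median NFL salary, from above
engineer_salary = 90_000      # assumed chemical-engineer salary
fallback_salary = 30_000      # assumed income if both plans fail

def expected_income(p_nfl, p_engineer):
    p_fallback = 1 - p_nfl - p_engineer
    return (p_nfl * nfl_salary
            + p_engineer * engineer_salary
            + p_fallback * fallback_salary)

study = expected_income(0.00000013, 0.95)    # go to chemistry class
all_in = expected_income(0.00000014, 0.05)   # skip it to practice football
print(f"study chemistry: ${study:,.0f}")     # about $87,000
print(f"skip class:      ${all_in:,.0f}")    # about $33,000
```

The extra sliver of quarterback probability is worth less than a penny a year in expectation; the lost engineering career costs you around $54,000 a year.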

Frank and Cook offer a solution that I think is basically right; they call it positional arms control agreements. By analogy with arms control agreements between nations—and what is war, if not the ultimate winner-takes-all contest?—they propose that we use taxation and regulation policy to provide incentives to make people compete less fiercely for the top positions. Some of these we already do: Performance-enhancing drugs are banned in professional sports, for instance. Even where there are no regulations, we can use social norms: That’s why it’s actually a good thing that your parents rarely support your decision to drop out of school and become a movie star.

That’s yet another reason why progressive taxation is a good idea, as if we needed another; by paring down those top incomes it makes the prospect of winning big less enticing. If NFL quarterbacks only made 10 times what chemical engineers make instead of 300 times, people would be a lot more hesitant to give up on chemical engineering to become a quarterback. If top Wall Street executives only made 50 times what normal people make instead of 5000, people with physics degrees might go back to actually being physicists instead of speculating on stock markets.

There is one case where we might not want fewer people to try, and that is entrepreneurship. Most startups fail, and only a handful go on to make mind-bogglingly huge amounts of money (often for no apparent reason, like the Snuggie and Flappy Bird), yet entrepreneurship is what drives the dynamism of a capitalist economy. We need people to start new businesses, and right now they do that mainly because of a tiny chance of a huge benefit. Yet we don’t want them to be too unrealistic in their expectations: Entrepreneurs are much more optimistic than the general population, but the most successful entrepreneurs are a bit less optimistic than other entrepreneurs. The most successful strategy is to be optimistic but realistic; this outperforms both unrealistic optimism and pessimism. That seems pretty intuitive; you have to be confident you’ll succeed, but you can’t be totally delusional. Yet it’s precisely the realistic optimists who are most likely to be disincentivized by a reduction in the top prizes.

Here’s my solution: Let’s change it from a tiny chance of a huge benefit to a large chance of a moderately large benefit. Let’s reward entrepreneurs for trying—with standards for what constitutes a really serious, good attempt rather than something frivolous that was guaranteed to fail. Use part of the funds from the progressive tax as a fund for angel grants, provided to a large number of the most promising entrepreneurs. It can’t be a million-dollar prize for the top 100. It needs to be more like a $50,000 prize for the top 100,000 (which would cost $5 billion a year, easily affordable for the US government). It should be paid at the proposal phase: the top 100,000 business plans receive the funding and are under no obligation to repay it. It has to be enough money that someone can rationally commit themselves to years of dedicated work without throwing themselves into poverty, and it has to be confirmed money, so that they don’t have to worry about throwing themselves into debt. As for the upper limit on rewards, it need only be large enough that there is still an incentive for the business to succeed; even with a 99% tax, Mark Zuckerberg would still be a millionaire, so the rewards for success are high indeed.

The good news is that we actually have such a system, to some extent. For research scientists rather than entrepreneurs, NSF grants are pretty close to what I have in mind, but at present they are a bit too competitive: 8,000 research grants with a median of $130,000 each and a 20% acceptance rate isn’t quite enough—the acceptance rate should be higher, since most of the rejected proposals are quite worthy. Still, it’s close, and definitely a much better incentive system than what we have for entrepreneurs: there are almost 12 million entrepreneurs in the United States, starting 6 million businesses a year, 75% of which fail before they can return their venture capital. Those that succeed have incomes higher than the general population, with a median around $70,000 per year, but most of this is accounted for by the fact that entrepreneurs are more educated and talented than the general population. Once you factor that in, successful entrepreneurs have about 50% more income on average, but their standard deviation of income is also 60% higher—so some are getting a lot and some are getting very little. Since 75% fail, we’re talking about a 25% chance of entering an income distribution that’s higher on average but much more variable, and a 75% chance of going through a period with little or no income at all. Is it worth it? Maybe, maybe not. But if you could get a guaranteed $50,000 for having a good idea—and let me be clear, only serious proposals with a good chance of success should qualify—that deal sounds an awful lot better.
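
Put those figures together and the entrepreneur’s gamble looks something like this. I’m treating the $70,000 median as a stand-in for the mean and assuming the failure years pay nothing, both crude simplifications:

```python
# The entrepreneur's gamble in expected-value terms, using the figures
# above. The $70,000 median stands in for the mean, and failure years
# are assumed to pay nothing (both crude simplifications).
success_income = 70_000
comparable_salary = success_income / 1.5   # implied salaried peer: ~$46,700
p_success = 0.25                           # 75% of startups fail

ev_founder = p_success * success_income + (1 - p_success) * 0
ev_with_grant = ev_founder + 50_000        # one-time grant, folded into year one

print(f"salaried peer:        ${comparable_salary:,.0f}")
print(f"founder, no grant:    ${ev_founder:,.0f}")
print(f"founder, with grant:  ${ev_with_grant:,.0f}")
```

Even with crude assumptions, the guaranteed grant moves the founder’s first-year expected income from far below a comparable salary to comfortably above it, without touching the upside of success.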