What is the price of time?

JDN 2457562

If you asked people outright, “What is the price of time?” most would find the question nonsensical, as if you had asked “What is the diameter of calculus?” or “What is the electric charge of justice?” (It’s interesting that we generally try to assign meaning to such nonsensical questions, and they often seem strangely profound when we do; a good deal of what passes for “profound wisdom” is really better explained as this sort of reaction to nonsense. Deepak Chopra, for instance.)

But there is actually a quite sensible economic meaning of this question, and answering it turns out to have many important implications for how we should run our countries and how we should live our lives.

What we are really asking for is temporal discounting; we want to know how much more money today is worth compared to tomorrow, and how much more money tomorrow is worth compared to two days from now.

If you say that they are exactly the same, your discount rate (your “price of time”) is zero; if that is indeed how you feel, may I please borrow your entire net wealth at 0% interest for the next thirty years? If you like we can even inflation-index the interest rate so it always produces a real interest rate of zero, thus protecting you from potential inflation risk.
What? You don’t like my deal? You say you need that money sooner? Then your discount rate is not zero. Similarly, it can’t be negative; if you actually valued money tomorrow more than money today, you’d gladly give me my loan.

Money today is worth more to you than money tomorrow—the only question is how much more.

There’s a very simple theorem which says that as long as your temporal discounting doesn’t change over time (that is, as long as it is dynamically consistent), it must take a very specific form. I don’t normally use math this advanced in my blog, but this one is so elegant I couldn’t resist. I’ll set the math off on its own lines so you can skim over it if you must.

The value of $1 today relative to… today is of course 1; f(0) = 1.

If you are dynamically consistent, at any time t you should discount tomorrow relative to today the same as you discounted today relative to yesterday, so for all t, f(t+1)/f(t) = f(t)/f(t-1).
Thus, f(t+1)/f(t) is independent of t, and therefore equal to some constant, which we can call r:

f(t+1)/f(t) = r, which implies f(t+1) = r f(t).

Starting at f(0) = 1, we have:

f(0) = 1, f(1) = r, f(2) = r^2

We can prove that this pattern continues to hold by mathematical induction.

Suppose the following is true for some integer k; we already know it works for k = 1 and k = 2:

f(k) = r^k

Let t = k:

f(k+1) = r f(k)

Therefore:

f(k+1) = r^(k+1)

Which by induction proves that for all non-negative integers n:

f(n) = r^n

The name of the variable doesn’t matter. Therefore:

f(t) = r^t

Whether you agree with me that this is beautiful, or you have no idea what I just said, the take-away is the same: If your discount rate is consistent over time, it must be exponential. There must be some constant number 0 < r < 1 such that each successive time period is worth r times as much as the previous. (You can also generalize this to the case of continuous time, where instead of r^t you get e^(-ρ t), with ρ = -ln r. This requires even more advanced math, so I’ll spare you.)
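To make the take-away concrete, here is a minimal sketch in Python (the rate is illustrative): the discrete form r^t and the continuous form e^(-ρt) describe exactly the same discounting whenever ρ = -ln r.

```python
import math

def discount_discrete(r, t):
    """Present value of $1 received t periods from now, with constant factor r."""
    return r ** t

def discount_continuous(rho, t):
    """Continuous-time version: e^(-rho * t)."""
    return math.exp(-rho * t)

# A discrete factor r corresponds to a continuous rate rho = -ln(r),
# so the two formulas agree at every horizon:
r = 0.95
rho = -math.log(r)
for t in range(0, 50, 10):
    assert abs(discount_discrete(r, t) - discount_continuous(rho, t)) < 1e-12
```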

Most neoclassical economists would stop right there. But there are two very big problems with this argument:

(1) It doesn’t tell us the value r should actually be, only that it should be a constant.

(2) No actual human being thinks of time this way.

There is still ongoing research as to exactly how real human beings discount time, but one thing is quite clear from the experiments: It certainly isn’t exponential.

From about 2000 to 2010, the consensus among cognitive economists was that humans discount time hyperbolically; that is, our discount function looks like this:

f(t) = 1/(1 + r t)

In the 1990s there were a couple of experiments supporting hyperbolic discounting. There is even some theoretical work trying to show that this is actually optimal, given a certain kind of uncertainty about the future, and the argument for exponential discounting relies upon certainty we don’t actually have. Hyperbolic discounting could also result if we were reasoning as though we are given a simple interest rate, rather than a compound interest rate.
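The signature of hyperbolic discounting (and the reason it is dynamically inconsistent) is preference reversal: the same one-day tradeoff flips as both options move further into the future, something exponential discounting can never do. A small Python sketch, with an arbitrary illustrative rate:

```python
def hyperbolic(r, t):
    """Hyperbolic discount function f(t) = 1 / (1 + r*t)."""
    return 1.0 / (1.0 + r * t)

def exponential(d, t):
    """Exponential discount function f(t) = d^t."""
    return d ** t

r = 0.2  # hypothetical per-day hyperbolic rate, chosen for illustration
# Today, $100 now beats $110 tomorrow...
assert 100 * hyperbolic(r, 0) > 110 * hyperbolic(r, 1)
# ...but the same one-day tradeoff a year out goes the other way:
assert 110 * hyperbolic(r, 366) > 100 * hyperbolic(r, 365)
# An exponential discounter can never reverse like this, because
# f(t+1)/f(t) = d is the same at every horizon:
d = 0.99
assert (110 * exponential(d, 1) > 100 * exponential(d, 0)) == \
       (110 * exponential(d, 366) > 100 * exponential(d, 365))
```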

But even that doesn’t really seem like how humans think, now does it? It’s already weird enough for someone to say “Should I take out this loan at 5%? Well, my discount rate is 7%, so yes.” I can at least imagine that happening when people are comparing two different interest rates (“Should I pay down my student loans, or my credit cards?”). But I can’t imagine anyone thinking, “Should I take out this loan at 5% APR which I’d need to repay after 5 years? Well, let’s check my discount function: 1/(1 + 0.05(5)) = 0.8, multiplied by 1.05^5 = 1.28, gives a product of 1.02, greater than 1, so no, I shouldn’t.” That isn’t how human brains function.

Moreover, recent experiments have shown that people often don’t seem to behave according to what hyperbolic discounting would predict.

Therefore I am very much in the other camp of cognitive economists, who say that we don’t have a well-defined discount function. It’s not exponential, it’s not hyperbolic, it’s not “quasi-hyperbolic” (yes that is a thing); we just don’t have one. We reason about time by simple heuristics. You can’t make a coherent function out of it because human beings… don’t always reason coherently.

Some economists seem to have an incredible amount of trouble accepting that; here we have one from the University of Chicago arguing that hyperbolic discounting can’t possibly exist, because then people could be Dutch-booked out of all their money; but this amounts to saying that human behavior cannot ever be irrational, lest all our money magically disappear. Yes, we know hyperbolic discounting (and heuristics) allow for Dutch-booking; that’s why they’re irrational. If you really want to know the formal assumption this paper makes that is wrong, it assumes that we have complete markets—and yes, complete markets essentially force you to be perfectly rational or die, because the slightest inconsistency in your reasoning results in someone convincing you to bet all your money on a sure loss. Why was it that we wanted complete markets, again? (Oh, yes, the fanciful Arrow-Debreu model, the magical fairy land where everyone is perfectly rational and all markets are complete and we all have perfect information and the same amount of wealth and skills and the same preferences, where everything automatically achieves a perfect equilibrium.)

There was a very good experiment on this, showing that rather than discount hyperbolically, behavior is better explained by a heuristic that people judge which of two options is better by a weighted sum of the absolute distance in time plus the relative distance in time. Now that sounds like something human beings might actually do. “$100 today or $110 tomorrow? That’s only 1 day away, but it’s also twice as long. I’m not waiting.” “$100 next year, or $110 in a year and a day? It’s only 1 day apart, and it’s only slightly longer, so I’ll wait.”
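Here is one crude way to operationalize that kind of heuristic in Python. Everything here is my own invention for illustration: the weights, and the choice to measure the relative delay against the wait already required, are not the experiment’s fitted values.

```python
def prefer_to_wait(m_soon, t_soon, m_late, t_late, w_abs=0.01, w_rel=0.5):
    """Hypothetical heuristic: wait for the larger amount only if the
    proportional money gain beats a weighted sum of the absolute delay
    (in days) and the relative delay (extra wait as a fraction of the
    wait already required)."""
    abs_delay = t_late - t_soon
    rel_delay = abs_delay / max(t_soon, 1)
    perceived_cost = w_abs * abs_delay + w_rel * rel_delay
    perceived_gain = (m_late - m_soon) / m_soon
    return perceived_gain > perceived_cost

# "$100 today or $110 tomorrow? That's twice as long. I'm not waiting."
assert not prefer_to_wait(100, 0, 110, 1)
# "$100 in a year, or $110 in a year and a day? Only slightly longer. I'll wait."
assert prefer_to_wait(100, 365, 110, 366)
```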

That might not actually be the precise heuristic we use, but it at least seems like one that people could use.

John Duffy, whom I hope to work with at UCI starting this fall, has been working on another experiment to test a different heuristic, based on the work of Daniel Kahneman: essentially, we have a fast, impulsive System 1 reasoning layer and a slow, deliberative System 2 reasoning layer, and our judgments combine both. System 1 lives “hand to mouth,” trying to get everything immediately and spend whatever it can get its hands on, while System 2 makes a more rational assessment that might actually resemble an exponential discount rate. In a 5-minute judgment, System 1’s voice is overwhelming; but if we’re already planning a year out, System 1 doesn’t even care anymore and System 2 can take over. This model also has the nice feature of explaining why people with better self-control seem to behave more like exponential discounters, and why people do on occasion reason more or less exponentially, while I have literally never heard anyone try to reason hyperbolically, only economic theorists trying to use hyperbolic models to explain behavior.
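The two-system story is close in spirit to the “beta-delta” (quasi-hyperbolic) model: System 1 applies a one-time penalty β < 1 to anything that is not immediate, and System 2 discounts exponentially with δ thereafter. A sketch with parameters invented for illustration:

```python
def beta_delta(beta, delta, t):
    """Quasi-hyperbolic discount factor: f(0) = 1, f(t) = beta * delta^t for t > 0."""
    return 1.0 if t == 0 else beta * delta ** t

beta, delta = 0.7, 0.999  # illustrative: strong present bias, patient System 2
# Short horizon: System 1's penalty dominates, so $100 now beats $110 tomorrow...
assert 100 * beta_delta(beta, delta, 0) > 110 * beta_delta(beta, delta, 1)
# ...but between two dates that are both in the future, beta cancels and the
# comparison is purely exponential, so $110 in 366 days beats $100 in 365:
assert 110 * beta_delta(beta, delta, 366) > 100 * beta_delta(beta, delta, 365)
```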

Another theory is that discounting is “subadditive”: if you break up a long time interval into many short intervals, people will discount it more, because it feels longer that way. Imagine a century. Now imagine a year, another year, another year, all the way up to 100 years. Now imagine a day, another day, another day, all the way up to 365 days for the first year, then 365 days for the second year, and so on up to 100 years. It feels longer, doesn’t it? It is of course exactly the same. This can account for some weird anomalies in choice behavior, but I’m not convinced it’s as good as the two-system model.

Another theory is that we simply have a “present bias,” which we treat as a sort of fixed cost that we incur regardless of what the payments are. I like this because it is so supremely simple, but there’s something very fishy about it: in this experiment the cost was just fixed at $4, and that can’t be right. It must scale with the size of the rewards, or something like that; or else we would always exhibit near-perfect exponential discounting for large amounts of money, which is (quite literally) more expensive to test, but still seems rather unlikely.
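To see why a flat $4 seems fishy, here is what it would predict, with an exponential component added on and every number invented for illustration: the fixed cost dominates small stakes but becomes negligible for large ones.

```python
def present_value(amount, t, delta=0.9995, fixed_cost=4.0):
    """'Present bias as a fixed cost': any delayed payment is docked a flat
    $4 on top of mild exponential discounting (illustrative parameters)."""
    return amount if t == 0 else amount * delta ** t - fixed_cost

# For small stakes the $4 is decisive: $10 now beats $11 tomorrow.
assert present_value(10, 0) > present_value(11, 1)
# For large stakes it barely registers, predicting near-perfect
# exponential behavior with big money:
assert present_value(10_000, 1) / present_value(10_000, 0) > 0.999
```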

Why is this important? This post is getting long, so I’ll save it for future posts, but in short, the ways that we value future costs and benefits, both as we actually do, and as we ought to, have far-reaching implications for everything from inflation to saving to environmental sustainability.

Externalities

JDN 2457202 EDT 17:52.

The 1992 Bill Clinton campaign had a slogan: “It’s the economy, stupid.” A snowclone I’ve used on occasion is “It’s the externalities, stupid.” (Though I’m actually not all that fond of calling people ‘stupid’; though occasionally true, it is never polite and rarely useful.) Externalities are one of the most important concepts in economics, and yet one that all too many economists frequently neglect.

Fortunately for this one, I really don’t need much math; the concept isn’t even that complicated, which makes it all the more mysterious how frequently it is ignored. An externality is simply an effect that an action has upon those who were not involved in choosing to perform that action.

All sorts of actions have externalities; indeed, much rarer are actions that don’t. An obvious example is that punching someone in the face has the externality of injuring that person. Pollution is an important externality of many forms of production, because the people harmed by pollution are typically not the same people who were responsible for creating it. Traffic jams are created because every car on the road causes a congestion externality on all the other cars.

All the aforementioned are negative externalities, but there are also positive externalities. When one individual becomes educated, they tend to improve the overall economic viability of the place in which they live. Building infrastructure benefits whole communities. New scientific discoveries enhance the well-being of all humanity.

Externalities are a fundamental problem for the functioning of markets. In the absence of externalities—if each person’s actions only affected that one person and nobody else—then rational self-interest would be optimal and anything else would make no sense. In arguing that rationality is equivalent to self-interest, generations of economists have been, tacitly or explicitly, assuming that there are no such things as externalities.

This is a necessary assumption to show that self-interest would lead to something I discussed in an earlier post: Pareto-efficiency, in which the only way to make one person better off is to make someone else worse off. As I already talked about in that other post, Pareto-efficiency is wildly overrated; a wide variety of Pareto-efficient systems would be intolerable to actually live in. But in the presence of externalities, markets can’t even guarantee Pareto-efficiency, because it’s possible to have everyone acting in their rational self-interest cause harm to everyone at once.

This is called a tragedy of the commons; the basic idea is really quite simple. Suppose that when I burn a gallon of gasoline, driving my car gains me 5 milliQALY, but the increased pollution costs everyone (including me) 1 milliQALY. On net I gain 4 milliQALY, so if I am rational and self-interested I will do it. But now suppose that there are 10 people all given the same choice. If we all make that same choice, each of us gains 5 milliQALY from driving—and loses 10 milliQALY from everyone’s pollution, for a net loss of 5 milliQALY. We would all have been better off if none of us had done it, even though it made sense to each of us at the time. Burning a gallon of gasoline to drive my car is beneficial to me, more so than the release of carbon dioxide into the atmosphere is harmful; but as a result of millions of people burning gasoline, the carbon dioxide in the atmosphere is destabilizing our planet’s climate. We’d all be better off if we could find some way to burn less gasoline.
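The arithmetic of the example, spelled out as a quick check:

```python
N = 10     # drivers facing the same choice
GAIN = 5   # milliQALY I gain from burning a gallon and driving
HARM = 1   # milliQALY everyone (myself included) loses per gallon burned

# My private calculation: the harm I do to myself is only 1 of the 5 I gain.
net_to_me = GAIN - HARM
assert net_to_me == 4  # positive, so self-interest says burn

# But if all N of us reason the same way, each person absorbs the harm
# from all N gallons:
net_to_each = GAIN - N * HARM
assert net_to_each == -5  # everyone ends up worse off than if nobody burned
```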

In order for rational self-interest to be optimal, externalities have to somehow be removed from the system. Otherwise, there are actions we can take that benefit ourselves but harm other people—and thus, we would all be better off if we acted to some degree altruistically. (When I say things like this, most non-economists think I am saying something trivial and obvious, while most economists insist that I am making an assertion that is radical if not outright absurd.)

But of course a world without externalities is a world of complete isolation; it’s a world where everyone lives on their own deserted island and there is no way of communicating or interacting with any other human being in the world. The only reasonable question about this world is whether we would die first or go completely insane first; clearly those are the two things that would happen. Human beings are fundamentally social animals—I would argue that we are in fact more social even than eusocial animals like ants and bees. (Ants and bees are only altruistic toward their own kin; humans are altruistic to groups of millions of people we’ve never even met.) Humans without social interaction are like flowers without sunlight.

Indeed, externalities are so common that if markets only worked in their absence, markets would make no sense at all. Fortunately this isn’t true; there are some ways that markets can be adjusted to deal with at least some kinds of externalities.

One of the most well-known is the Coase theorem; this is odd because it is by far the worst solution. The Coase theorem basically says that if you can assign and enforce well-defined property rights and there is absolutely no cost in making any transaction, markets will automatically work out all externalities. The basic idea is that if someone is about to perform an action that would harm you, you can instead pay them not to do it. Then, the harm to you will be prevented and they will incur an additional benefit.

In the above example, we could all agree to pay $30 (which let’s say is worth 1 milliQALY) to each person who doesn’t burn a gallon of gasoline that would pollute our air. Then, if I were thinking about burning some gasoline, I wouldn’t want to do it, because I’d forgo the $270 in payments from the other nine people, which costs me 9 milliQALY, while the benefits of burning the gasoline are only 5 milliQALY. We all reason the same way, and the result is that nobody burns gasoline; the money exchanged all balances out, so financially we end up where we were before—and we are all better off.
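A quick check of the side-payment arithmetic, assuming each of the ten drivers pays $30 to every other driver who abstains:

```python
N = 10                    # drivers
PAYMENT = 30              # dollars paid to each abstainer
MQALY_PER_DOLLAR = 1 / 30 # $30 is worth 1 milliQALY in this example
GAIN_FROM_BURNING = 5     # milliQALY from burning the gallon

# If I burn, the other nine drivers withhold their payments to me:
forgone_mqaly = (N - 1) * PAYMENT * MQALY_PER_DOLLAR
assert forgone_mqaly == 9
# That exceeds what burning is worth, so nobody burns:
assert forgone_mqaly > GAIN_FROM_BURNING
```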

The first thought you probably have is: How do I pay everyone who doesn’t hurt me? How do I even find all those people? How do I ensure that they follow through and actually don’t hurt me? These are the problems of transaction costs and contract enforcement that are usually presented as the problem with the Coase theorem, and they certainly are very serious problems. You end up needing some sort of government simply to enforce all those contracts, and even then there’s the question of how we can possibly locate everyone who has ever polluted our air or our water.

But in fact there’s an even more fundamental problem: This is extortion. We are almost always in the condition of being able to harm other people, and a system in which the reason people don’t hurt each other is because they’re constantly paying each other not to is a system in which the most intimidating psychopath is the wealthiest person in the world. That system is in fact Pareto-efficient (the psychopath does quite well for himself indeed); but it’s exactly the sort of Pareto-efficient system that isn’t worth pursuing.

Another response to externalities is simply to accept them, which isn’t as awful as it sounds. There are many kinds of externalities that really aren’t that bad, and anything we might do to prevent them is likely to be a cure worse than the disease. Think about the externality of people standing in front of you in line, or the externality of people buying the last cereal box off the shelf before you can get there. The externality of taking the job you applied for may hurt at the time, but in the long run that’s how we maintain a thriving and competitive labor market. In fact, even the externality of ‘gentrifying’ your neighborhood so you can no longer afford it is not nearly as bad as most people seem to think—indeed, the much larger problem seems to be the poor neighborhoods that don’t have rising incomes, remaining poor for generations. (It also makes no sense to call this “gentrifying”; the only landed gentry we have in America is the landowners who claim a ludicrous proportion of our wealth, not the middle-class people who buy cheap homes and move in. If you really want to talk about a gentry, you should be thinking Waltons and Kochs—or Bushes and Clintons.)

These sorts of minor externalities that are better left alone are sometimes characterized as pecuniary externalities because they usually are linked to prices, but I think that really misses the point; it’s quite possible for an externality to be entirely price-related and do enormous damage (read: the entire financial system), or to have little or nothing to do with prices and still be not that bad (like standing in line, as I mentioned above).

But obviously we can’t leave all externalities alone in this way. We can’t just let people rob and murder one another arbitrarily, or ignore the destruction of the world’s climate that threatens hundreds of millions of lives. We can’t stand back and let forests burn and rivers run dry when we could easily have saved them.

The much more reasonable and realistic response to externalities is what we call government—there are rules you have to follow in society and punishments you face if you don’t. We can avoid most of the transaction problems involved in figuring out who polluted our water by simply making strict rules about polluting water in general. We can prevent people from stealing each other’s things or murdering each other by police who will investigate and punish such crimes.

This is why regulation—and a government strong enough to enforce that regulation—is necessary for the functioning of a society. This dichotomy we have been sold about “regulations versus the market” is totally nonsensical; the market depends upon regulations. This doesn’t justify any particular regulation—and indeed, an awful lot of regulations are astonishingly bad. But some sort of regulatory system is necessary for a market to function at all, and the question has never been whether we will have regulations but which regulations we will have. People who argue that all regulations must go and the market would somehow work on its own are either deeply ignorant of economics or operating from an ulterior motive; some truly horrendous policies have been made by arguing that “less government is always better” when the truth is nothing of the sort.

In fact, there is one real-world method I can think of that actually comes reasonably close to eliminating all externalities—and it is called social democracy. By involving everyone—democracy—in a system that regulates the economy—socialism—we can, in a sense, involve everyone in every transaction, and thus make it impossible to have externalities. In practice it’s never that simple, of course; but the basic concept of involving our whole society in making the rules that our society will follow is sound—and in fact I can think of no reasonable alternative.

We have to institute some sort of regulatory system, but then we need to decide what the regulations will be and who will control them. If we want instead to vest power in a technocratic elite, how do we decide whom to include in that elite? How do we ensure that the technocrats are actually better for the general population if that population has no say in choosing them? By involving as many people as we can in the decision-making process, we make it much less likely that one person’s selfish action will harm many others. Indeed, this is probably why democracy prevents famine and genocide—which are, after all, rather extreme examples of negative externalities.

Love is rational

JDN 2457066 PST 15:29.

Since I am writing this the weekend of Valentine’s Day (actually by the time it is published it will be Valentine’s Day) and sitting across from my boyfriend, it seems particularly appropriate that today’s topic should be love. As I am writing it is in fact Darwin Day, so it is fitting that evolution will be a major topic as well.

Usually we cognitive economists are the ones reminding neoclassical economists that human beings are not always rational. Today however I must correct a misconception in the opposite direction: Love is rational, or at least it can be, should be, and typically is.

Lately I’ve been reading Tim Harford’s The Logic of Life, which actually makes much the same point, about love and many other things. I had expected it to be a dogmatic defense of economic rationality—published in 2008, no less, which would make it the scream of a dying paradigm as it carries us all down with it—but I was in fact quite pleasantly surprised. The book takes a nuanced position on rationality very similar to my own, and actually incorporates many of the insights from neuroeconomics and cognitive economics. I think Harford would basically agree with me that human beings are 90% rational (but woe betide the other 10%).

We have this romantic (Romantic?) notion in our society that love is not rational, it is “beyond” rationality somehow. “Love is blind”, they say; and this is often used as a smug reply to the notion that rationality is the proper guide to live our lives.

The argument would seem to follow: “Love is not rational, love is good, therefore rationality is not always good.”

But then… the argument would follow? What do you mean, follow? Follow logically? Follow rationally? Something is clearly wrong if we’ve constructed a rational argument intended to show that we should not live our lives by rational arguments.

And the problem of course is the premise that love is not rational. Whatever made you say that?

It’s true that love is not directly volitional, not in the way that it is volitional to move your arm upward or close your eyes or type the sentence “Jackdaws love my big sphinx of quartz.” You don’t exactly choose to love someone, weighing the pros and cons and making a decision the way you might choose which job offer to take or which university to attend.

But then, you don’t really choose which university you like either, now do you? You choose which to attend. But your enjoyment of that university is not a voluntary act. And similarly you do in fact choose whom to date, whom to marry. And you might well consider the pros and cons of such decisions. So the difference is not as large as it might at first seem.

More importantly, to say that our lives should be rational is not the same as saying they should be volitional. You simply can’t live your life completely volitionally, no matter how hard you try; you don’t have the cognitive resources to maintain constant awareness of every breath, every heartbeat. Yet there is nothing irrational about breathing or heartbeats—indeed they are necessary for survival and thus a precondition of anything rational you might ever do.

Indeed, in many ways it is our subconscious that is the most intelligent part of us. It is not as flexible as our conscious mind—that is why our conscious mind is there—but the human subconscious is unmatched in efficiency and reliability among all known computational systems. Walk across a room and it will solve inverse kinematics in real time. Throw a ball and it will solve three-dimensional nonlinear differential equations as well. Look at a familiar face and it will immediately identify it among a set of hundreds of faces with near-perfect accuracy regardless of the angle, lighting conditions, or even hairstyle. To see that I am not exaggerating the immense difficulty of these tasks, look at how difficult it is to make robots that can walk on two legs or throw balls. Face recognition is so difficult that it is still an unsolved problem with an extensive body of ongoing research.

And love, of course, is the subconscious system that has been most directly optimized by natural selection. Our very survival has depended upon it for millions of years. Indeed, it’s amazing how often it does seem to fail given those tight optimization constraints; I think this is for two reasons. First, natural selection optimizes for inclusive fitness, which is not the same thing as optimizing for happiness—what’s good for your genes may not be good for you per se. Many of the ways that love hurts us seem to be based around behaviors that probably did on average spread more genes on the African savannah. Second, the task of selecting an optimal partner is so mind-bogglingly complex that even the most powerful computational system in the known universe still can only do it so well. Imagine trying to construct a formal decision model that would tell you whom you should marry—all the variables you’d need to consider, the cost of sampling each of those variables sufficiently, the proper weightings on all the different terms in the utility function. Perhaps the wonder is that love is as rational as it is.

Indeed, love is evidence-based—and when it isn’t, this is cause for concern. The evidence is most often presented in small ways over long periods of time—a glance, a kiss, a gift, a meeting canceled to stay home and comfort you. Some ways are larger—a career move postponed to keep the family together, a beautiful wedding, a new house. We aren’t formally calculating the Bayesian probability at each new piece of evidence—though our subconscious brains might be, and whatever they’re doing the results aren’t far off from that mathematical optimum.
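For what it’s worth, the accumulation I have in mind is easy to sketch: treat each small gesture as weak Bayesian evidence and update on the odds scale. The 2:1 likelihood ratio is of course invented for illustration.

```python
def bayes_update(prior, likelihood_ratio):
    """One Bayesian update on the odds scale:
    posterior odds = prior odds * likelihood ratio."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

p = 0.5                # start agnostic about whether you are loved
for _ in range(20):    # twenty small gestures, each weak 2:1 evidence
    p = bayes_update(p, 2.0)
assert p > 0.999       # weak evidence, accumulated, yields near-certainty
```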

The notion that you will never “truly know” if others love you is no more epistemically valid or interesting than the notion that you will never “truly know” if your shirt is grue instead of green or if you are a brain in a vat. Perhaps we’ve been wrong about gravity all these years, and on April 27, 2016 it will suddenly reverse direction! No, it won’t, and I’m prepared to literally bet the whole world on that (frankly I’m not sure I have a choice). To be fair, the proposition that your spouse of twenty years or your mother loves you is perhaps not that certain—but it’s pretty darn certain. Perhaps the proper comparison is the level of certainty that climate change is caused by human beings, or even less, the level of certainty that your car will not suddenly veer off the road and kill you. The latter is something that actually happens—but we all drive every day assuming it won’t. By the time you marry someone, you can and should be that certain that they love you.

Love without evidence is bad love. The sort of unrequited love that builds in secret based upon fleeting glimpses, hours of obsessive fantasy, and little or no interaction with its subject isn’t romantic—it’s creepy and psychologically unhealthy. The extreme of that sort of love is what drove John Hinckley Jr. to shoot Ronald Reagan in order to impress Jodie Foster.

I don’t mean to make you feel guilty if you have experienced such a love—most of us have at one point or another—but it disgusts me how much our society tries to elevate that sort of love as the “true love” to which we should all aspire. We encourage people—particularly teenagers—to conceal their feelings for a long time and then release them in one grand surprise gesture of affection, which is just about the opposite of what you should actually be doing. (Look at Love Actually, which is just about the opposite of what its title says.) I think a great deal of strife in our society would be eliminated if we taught our children how to build relationships gradually over time instead of constantly presenting them with absurd caricatures of love that no one can—or should—follow.

I am pleased to see that our cultural norms on that point seem to be changing. A corporation as absurdly powerful as Disney is both an influence upon and a barometer of our social norms, and the trope in the most recent Disney films (like Frozen and Maleficent) is that true love is not the fiery passion of love at first sight, but the deep bond between family members that builds over time. This is a much healthier concept of love, though I wouldn’t exclude romantic love entirely. Romantic love can be true love, but only by building over time through a similar process.

Perhaps there is another reason people are uncomfortable with the idea that love is rational; by definition, rational behaviors respond to incentives. And since we tend to conceive of incentives as a purely selfish endeavor, this would seem to imply that love is selfish, which seems somewhere between painfully cynical and outright oxymoronic.

But while love certainly does carry many benefits for the one who loves—being in love will literally make you live longer, by quite a lot, an effect size comparable to quitting smoking or exercising twice a week—it also carries many benefits for its recipients. Love is in fact the primary means by which evolution has shaped us toward altruism; it is the love for our family and our tribe that makes us willing to sacrifice so much for them. Not all incentives are selfish; indeed, an incentive is really just something that motivates you to action. If you could truly convince me that a given action I took would have even a reasonable chance of ending world hunger, I would do almost anything to achieve it; I can scarcely imagine a greater incentive, even though I would be harmed and the benefits would accrue to people I have never met.

Love evolved because it advanced the fitness of our genes, of course. And this bothers many people; it seems to make our altruism ultimately just a different form of selfishness: selfishness for our genes instead of ourselves. But this is a genetic fallacy, isn’t it? Yes, evolution by natural selection is a violent process, full of death and cruelty and suffering (“red in tooth and claw,” as Tennyson put it); but that doesn’t mean that its outcome—namely ourselves—is so irredeemable. We are, in fact, altruistic, regardless of where that altruism came from. The fact that it advanced our genes can actually be comforting in a way, because it reminds us that the universe is nonzero-sum and benefiting others does not have to mean harming ourselves.

One question I like to ask when people suggest that some scientific fact undermines our moral status in this way is: “Well, what would you prefer?” If the causal determinism of neural synapses undermines our free will, then what should we have been made of? Magical fairy dust? If we were, fairy dust would be a real phenomenon, and it would obey laws of nature, and you’d just say that the causal determinism of magical fairy dust undermines free will all over again. If the fact that our altruistic emotions evolved by natural selection to advance our inclusive fitness makes us not truly altruistic, then where should altruism have come from? A divine creator who made us to love one another? But then we’re just following our programming! You can always make this sort of argument, which either means that life is necessarily empty of meaning, that no possible universe could ever assuage our ennui—or, what I believe, that life’s meaning does not come from such ultimate causes. It is not what you are made of or where you come from that defines what you are. We are best defined by what we do.

It seems to depend how you look at it: Romantics are made of stardust and the fabric of the cosmos, while cynics are made of the nuclear waste expelled in the planet-destroying explosions of dying balls of fire. Romantics are the cousins of all living things in one grand family, while cynics are apex predators evolved from millions of years of rape and murder. Both of these views are in some sense correct—but I think the real mistake is in thinking that they are incompatible. Human beings are both those things, and more; we are capable of both great compassion and great cruelty—and also great indifference. It is a mistake to think that only the dark sides—or for that matter only the light sides—of us are truly real.

Love is rational; love responds to incentives; love is an evolutionary adaptation. Love binds us together; love makes us better; love leads us to sacrifice for one another.

Love is, above all, what makes us not infinite identical psychopaths.

The irrationality of racism

JDN 2457039 EST 12:07.

I thought about making today’s post about the crazy currency crisis in Switzerland, but currency exchange rates aren’t really my area of expertise; this is much more in Krugman’s bailiwick, so you should probably read what Krugman says about the situation. There is one thing I’d like to say, however: I think there is a really easy way to create credible inflation and boost aggregate demand, but for some reason nobody is ever willing to do it: Give people money. Emphasis here on the people—not banks. Don’t adjust interest rates or currency pegs, don’t engage in quantitative easing. Give people money. Actually write a bunch of checks, presumably in the form of refundable tax rebates.

The only reason I can think of that economists don’t do this is they are afraid of helping poor people. They wouldn’t put it that way; maybe they’d say they want to avoid “moral hazard” or “perverse incentives”. But those fears didn’t stop them from loaning $2 trillion to banks or adding $4 trillion to the monetary base; they didn’t stop them from fighting for continued financial deregulation when what the world economy most desperately needs is stronger financial regulation. Our whole derivatives market practically oozes moral hazard and perverse incentives, but they aren’t willing to shut down that quadrillion-dollar con game. So that can’t be the actual fear. No, it has to be a fear of helping poor people instead of rich people, as though “capitalism” meant a system in which we squeeze the poor as tight as we can and heap all possible advantages upon those who are already wealthy. No, that’s called feudalism. Capitalism is supposed to be a system where markets are structured to provide free and fair competition, with everyone on a level playing field.

A basic income is a fundamentally capitalist policy, which maintains equal opportunity with a minimum of government intervention and allows the market to flourish. I suppose if you want to say that all taxation and government spending is “socialist”, fine; then every nation that has ever maintained stability for more than a decade has been in this sense “socialist”. Every soldier, firefighter and police officer paid by a government payroll is now part of a “socialist” system. Okay, as long as we’re consistent about that; but now you really can’t say that socialism is harmful; on the contrary, on this definition socialism is necessary for capitalism. In order to maintain security of property, enforcement of contracts, and equality of opportunity, you need government. Maybe we should just give up on the words entirely, and speak more clearly about what specific policies we want. If I don’t get to say that a basic income is “capitalist”, you don’t get to say financial deregulation is “capitalist”. Better yet, how about you can’t even call it “deregulation”? You have to actually argue in front of a crowd of people that it should be legal for banks to lie to them, and there should be no serious repercussions for any bank that cheats, steals, colludes, or even launders money for terrorists. That is, after all, what financial deregulation actually does in the real world.

Okay, that’s enough about that.

My birthday is coming up this Monday; thus completes my 27th revolution around the Sun. With birthdays come thoughts of ancestry: Though I appear White, I am legally one-quarter Native American, and my total ethnic mix includes English, German, Irish, Mohawk, and Chippewa.

Biologically, what exactly does that mean? Next to nothing.

Human genetic diversity is a real thing, and there are genetic links to not only dozens of genetic diseases and propensity toward certain types of cancer, but also personality and intelligence. There are also of course genes for skin pigmentation.

The human population does exhibit some genetic clustering, but the categories are not what you’re probably used to: Good examples of relatively well-defined genetic clusters include Ashkenazi, Papuan, and Mbuti. There are also many different haplogroups, such as mitochondrial haplogroups L3 and CZ.

Maybe you could even make a case for the “races” East Asian, South Asian, Pacific Islander, and Native American, since the indigenous populations of these geographic areas largely do come from the same genetic clusters. Or you could make a bigger category and call them all “Asian”—but if you include Papuan and Aborigine in “Asian” you’d pretty much have to include Chippewa and Navajo as well.

But I think it tells you a lot about what “race” really means when you realize that the two “race” categories which are most salient to Americans are in fact the categories that are genetically most meaningless. “White” and “Black” are totally nonsensical genetic categorizations.

Let’s start with “Black”; defining a “Black” race is like defining a category of animals by the fact that they are all tinted red—foxes yes, dogs no; robins yes, swallows no; ladybirds yes, cockroaches no. There is more genetic diversity within Africa than there is outside of it. There are African populations that are more closely related to European populations than they are to other African populations. The only thing “Black” people have in common is that their skin is dark, which is due to convergent evolution: It’s not due to common ancestry, but a common environment. Dark skin has a direct survival benefit in climates with intense sunlight.  The similarity is literally skin deep.

What about “White”? Well, there are some fairly well-defined European genetic populations, so if we clustered those together we might be able to get something worth calling “White”. The problem is, that’s not how it happened. “White” is a club. The definition of who gets to be “White” has expanded over time, and even occasionally contracted. Originally Hebrew, Celtic, Hispanic, and Italian were not included (and Hebrew, for once, is actually a fairly sensible genetic category, as long as you restrict it to Ashkenazi), but then later they were. But now that we’ve got a lot of poor people coming in from Mexico, we don’t quite think of Hispanics as “White” anymore. We actually watched Arabs lose their “White” card in real-time in 2001; before 9/11, they were “White”; now, “Arab” is a separate thing. And “Muslim” is even treated like a race now, which is like making a racial category of “Keynesians”—never forget that Islam is above all a belief system.

Actually, “White privilege” is almost a tautology—the privilege isn’t given to people who were already defined as “White”, the privilege is to be called “White”. The privilege is to have your ancestors counted in the “White” category so that they can be given rights, while people who are not in the category are denied those rights. There does seem to be a certain degree of restriction by appearance—to my knowledge, no population with skin as dark as Kenyans has ever been considered “White”, and Anglo-Saxons and Nordics have always been included—but the category is flexible to political and social changes.

But really I hate that word “privilege”, because it gets the whole situation backwards. When you talk about “White privilege”, you make it sound as though the problem with racism is that it gives unfair advantages to White people (or to people arbitrarily defined as “White”). No, the problem is that people who are not White are denied rights. It isn’t what White people have that’s wrong; it’s what Black people don’t have. Equating those two things creates a vision of the world as zero-sum, in which each gain for me is a loss for you.

Here’s the thing about zero-sum games: All outcomes are Pareto-efficient. Remember when I talked about Pareto-efficiency? As a quick refresher, an outcome is Pareto-efficient if there is no way for one person to be made better off without making someone else worse off. In general, it’s pretty hard to disagree that, other things equal, Pareto-efficiency is a good thing, and Pareto-inefficiency is a bad thing. But if racism were about “White privilege” and the game were zero-sum, racism would have to be Pareto-efficient.
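The claim that every outcome of a zero-sum game is Pareto-efficient can be checked mechanically. Here is a minimal sketch in Python (the payoff numbers are invented purely for illustration): because every outcome pays out the same fixed total, any change that raises one player’s payoff must lower another’s, so no outcome Pareto-dominates any other.

```python
# Outcomes of a zero-sum game: (payoff to A, payoff to B) pairs
# that always sum to the same total (10, in this toy example).
outcomes = [(0, 10), (3, 7), (5, 5), (8, 2), (10, 0)]

def pareto_efficient(outcome, all_outcomes):
    """An outcome is Pareto-efficient if no alternative makes
    someone better off without making anyone else worse off."""
    return not any(
        all(alt_pay >= pay for pay, alt_pay in zip(outcome, alt))
        and any(alt_pay > pay for pay, alt_pay in zip(outcome, alt))
        for alt in all_outcomes
    )

# Every zero-sum outcome passes the test:
print(all(pareto_efficient(o, outcomes) for o in outcomes))  # True
```

The same function immediately shows why racism fails the test: if some outcome wastes payoff (the totals are unequal), a dominated outcome like `(1, 1)` alongside `(3, 7)` is flagged as Pareto-inefficient.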

In fact, racism is Pareto-inefficient, and that is part of why it is so obviously bad. It harms literally billions of people, and benefits basically no one. Maybe there are a few individuals who are actually, all things considered, better off than they would have been if racism had not existed. But there are certainly not very many such people, and in fact I’m not sure there are any at all. If there are any, it would mean that technically racism is not Pareto-inefficient—but it is definitely very close. At the very least, the damage caused by racism is several orders of magnitude larger than any benefits conferred.

That’s why the “privilege” language, while well-intentioned, is so insidious; it tells White people that racism means taking things away from them. Many of these people are already in dire straits—broke, unemployed, or even homeless—so taking away what they have sounds particularly awful. Of course they’d be hostile to or at least dubious of attempts to reduce racism. You just told them that racism is the only thing keeping them afloat! In fact, quite the opposite is the case: Poor White people are, second only to poor Black people, those who stand the most to gain from a more just society. David Koch and Donald Trump should be worried; we will probably have to take most of their money away in order to achieve social justice. (Bill Gates knows we’ll have to take most of his money away, but he’s okay with that; in fact he may end up giving it away before we get around to taking it.) But the average White person will almost certainly be better off than they were.

Why does it seem like there are benefits to racism? Again, because people are accustomed to thinking of the world as zero-sum. One person is denied a benefit, so that benefit must go somewhere else right? Nope—it can just disappear entirely, and in this case typically does.

When a Black person is denied a job in favor of a White person who is less qualified, doesn’t that White person benefit? Uh, no, actually, not really. They have been hired for a job that isn’t an optimal fit for them; they aren’t working to their comparative advantage, and that Black person isn’t either and may not be working at all. The total output of the economy will be thereby reduced slightly. When this happens millions of times, the total reduction in output can be quite substantial, and as a result that White person was hired at $30,000 for an unsuitable job when in a racism-free world they’d have been hired at $40,000 for a suitable one. A similar argument holds for sexism; men don’t benefit from getting jobs women are denied if one of those women would have invented a cure for prostate cancer.
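The output loss from mismatched hiring can be made concrete with a toy assignment problem. A sketch in Python, where the workers, jobs, and productivity figures are all hypothetical numbers I have made up for illustration:

```python
from itertools import permutations

# Annual output (in $1000s) if each worker holds each job.
# All names and numbers are hypothetical.
productivity = {
    ("Alice", "surgeon"): 90, ("Alice", "clerk"): 30,
    ("Bob",   "surgeon"): 50, ("Bob",   "clerk"): 40,
}
workers, jobs = ["Alice", "Bob"], ["surgeon", "clerk"]

def total_output(assignment):
    """Sum the output of a list of (worker, job) pairs."""
    return sum(productivity[(w, j)] for w, j in assignment)

# The efficient assignment exploits comparative advantage:
best = max(
    (list(zip(workers, p)) for p in permutations(jobs)),
    key=total_output,
)
print(total_output(best))  # 130: Alice -> surgeon, Bob -> clerk

# Discrimination that bars Alice from surgery shrinks the whole pie:
print(total_output([("Alice", "clerk"), ("Bob", "surgeon")]))  # 80
```

The point of the sketch is that the $50,000 gap is not transferred to Bob; it simply vanishes from total output, which is why nobody—not even the favored worker—ends up better off than in the non-discriminatory world.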

Indeed, the empowerment of women and minorities is kind of the secret cheat code for creating a First World economy. The great successes of economic development—Korea, Japan, China, the US in WW2—had their successes precisely at a time when they suddenly started including women in manufacturing, effectively doubling their total labor capacity. Moreover, it’s pretty clear that the causation ran in this direction. Periods of economic growth are associated with increases in solidarity with other groups—and downturns with decreased solidarity—but the increase in women in the workforce was sudden and early while the increase in growth and total output was prolonged.

Racism is irrational. Indeed it is so obviously irrational that for decades now neoclassical economists have been insisting that there is no need for civil rights policy, affirmative action, etc. because the market will automatically eliminate racism by the rational profit motive. A more recent literature has attempted to show that, contrary to all appearances, racism actually is rational in some cases. Inevitably it relies upon either the background of a racist society (maybe Black people are, on average, genuinely less qualified, but it would only be because they’ve been given poorer opportunities), or an assumption of “discriminatory tastes”, which is basically giving up and redefining the utility function so that people simply get direct pleasure from being racists. Of course, on that sort of definition, you can basically justify any behavior as “rational”: Maybe he just enjoys banging his head against the wall! (A similar slipperiness is used by egoists to argue that caring for your children is actually “selfish”; well, it makes you happy, doesn’t it? Yes, but that’s not why we do it.)

There’s a much simpler way to understand this situation: Racism is irrational, and so is human behavior.

That isn’t a complete explanation, of course; and I think one major misunderstanding neoclassical economists have of cognitive economists is that they think this is what we do—we point out that something is irrational, and then high-five and go home. No, that’s not what we do. Finding the irrationality is just the start; next comes explaining the irrationality, understanding the irrationality, and finally—we haven’t reached this point in most cases—fixing the irrationality.

So what explains racism? In short, the tribal paradigm. Human beings evolved in an environment in which the most important factor in our survival and that of our offspring was not food supply or temperature or predators, it was tribal cohesion. With a cohesive tribe, we could find food, make clothes, fight off lions. Without one, we were helpless. Millions of years in this condition shaped our brains, programming them to treat threats to tribal cohesion as the greatest possible concern. We even reached the point where solidarity for the tribe actually began to dominate basic survival instincts: For a suicide bomber the unity of the tribe—be it Marxism for the Tamil Tigers or Islam for Al-Qaeda—is more important than his own life. We will do literally anything if we believe it is necessary to defend the identities we believe in.

And no, we rationalists are no exception here. We are indeed different from other groups; the beliefs that define us, unlike the beliefs of literally every other group that has ever existed, are actually rationally founded. The scientific method really isn’t just another religion, for unlike religion it actually works. But still, if push came to shove and we were forced to kill and die in order to defend rationality, we would; and maybe we’d even be right to do so. Maybe the French Revolution was, all things considered, a good thing—but it sure as hell wasn’t nonviolent.

This is the background we need to understand racism. It actually isn’t enough to show people that racism is harmful and irrational, because they are programmed not to care. As long as racial identification is the salient identity, the tribe by which we define ourselves, we will do anything to defend the cohesion of that tribe. It is not enough to show that racism is bad; we must in fact show that race doesn’t matter. Fortunately, this is easy, for as I explained above, race does not actually exist.

That makes racism in some sense easier to deal with than sexism, because the very categories of races upon which it is based are fundamentally faulty. Sexes, on the other hand, are definitely a real thing. Males and females actually are genetically different in important ways. Exactly how different in what ways is an open question, and what we do know is that for most of the really important traits like intelligence and personality the overlap outstrips the difference. (The really big, categorical differences all appear to be physical: Anatomy, size, testosterone.) But conquering sexism may always be a difficult balance, for there are certain differences we won’t be able to eliminate without altering DNA. That no more justifies sexism than the fact that height is partly genetic would justify denying rights to short people (which, actually, is something we do); but it does make matters complicated, because it’s difficult to know whether an observed difference (for instance, most pediatricians are female, while most neurosurgeons are male) is due to discrimination or innate differences.

Racism, on the other hand, is actually quite simple: Almost any statistically significant difference in behavior or outcome between races must be due to some form of discrimination somewhere down the line. Maybe it’s not discrimination right here, right now; maybe it’s discrimination years ago that denied opportunities, or discrimination against their ancestors that left them with less to inherit generations later; but it almost has to be discrimination against someone somewhere, because it is only by social construction that races exist in the first place. I do say “almost” because I can think of a few exceptions: Black people are genuinely less likely to use tanning salons and genuinely more likely to need vitamin D supplements, but both of those things are directly due to skin pigmentation. They are also more likely to suffer from sickle-cell anemia, which is another convergent trait that evolved in tropical climates as a response to malaria. But unless you can think of a reason why employment outcomes would depend upon vitamin D, the huge difference in employment between Whites and Blacks really can’t be due to anything but discrimination.

I imagine most of my readers are more sophisticated than this, but just in case you’re wondering about the difference in IQ scores between Whites and Blacks, that is indeed a real observation, but IQ isn’t entirely genetic. The reason IQ scores are rising worldwide (the Flynn Effect) is due to improvements in environmental conditions: Fewer environmental pollutants (particularly lead and mercury, the removal of which is responsible for most of the reduction in crime in America over the last 20 years), better nutrition, better education, less stress. Being stupid does not make you poor (or how would we explain Donald Trump?), but being poor absolutely does make you stupid. Combine that with the challenges and inconsistencies in cross-national IQ comparisons, and it’s pretty clear that the higher IQ scores in rich nations are an effect, not a cause, of their affluence. Likewise, the lower IQ scores of Black people in the US are entirely explained by their poorer living conditions, with no need for any genetic hypothesis—which would also be very difficult in the first place precisely because “Black” is such a weird genetic category.

Unfortunately, I don’t yet know exactly what it takes to change people’s concept of group identification. Obviously it can be done, for group identities change all the time, sometimes quite rapidly; but we simply don’t have good research on what causes those changes or how they might be affected by policy. That’s actually a major part of the experiment I’ve been trying to get funding to run since 2009, which I hope can now become my PhD thesis. All I can say is this: I’m working on it.

Are humans rational?

JDN 2456928 PDT 11:21.

The central point of contention between cognitive economists and neoclassical economists hinges upon the word “rational”: Are humans rational? What do we mean by “rational”?

Neoclassicists are very keen to insist that they think humans are rational, and often characterize the cognitivist view as saying that humans are irrational. (Dan Ariely has a habit of feeding this view, titling books things like Predictably Irrational and The Upside of Irrationality.) But I really don’t think this is the right way to characterize the difference.

Daniel Kahneman has a somewhat better formulation (from Thinking, Fast and Slow): “I often cringe when my work is credited as demonstrating that human choices are irrational, when in fact our research only shows that Humans are not well described by the rational-agent model.” (Yes, he capitalizes the word “Humans” throughout, which is annoying; but in general it is a great book.)

The problem is that saying “humans are irrational” has the connotation of a universal statement; it seems to be saying that everything we do, all the time, is always and everywhere utterly irrational. And this of course could hardly be further from the truth; we would not have even survived in the savannah, let alone invented the Internet, if we were that irrational. If we simply lurched about randomly without any concept of goals or response to information in the environment, we would have starved to death millions of years ago.

But at the same time, the neoclassical definition of “rational” obviously does not describe human beings. We aren’t infinite identical psychopaths. Particularly bizarre (and frustrating) is the continued insistence that rationality entails selfishness; apparently economists are getting all their philosophy from Ayn Rand (who barely even qualifies as such), rather than the greats such as Immanuel Kant and John Stuart Mill or even the best contemporary philosophers such as Thomas Pogge and John Rawls. All of these latter would be baffled by the notion that selfless compassion is irrational.

Indeed, Kant argued that rationality implies altruism, that a truly coherent worldview requires assent to universal principles that are morally binding on yourself and every other rational being in the universe. (I am not entirely sure he is correct on this point, and in any case it is clear to me that neither you nor I are anywhere near advanced enough beings to seriously attempt such a worldview. Where neoclassicists envision infinite identical psychopaths, Kant envisions infinite identical altruists. In reality we are finite diverse tribalists.)

But even if you drop selfishness, the requirements of perfect information and expected utility maximization are still far too strong to apply to real human beings. If that’s your standard for rationality, then indeed humans—like all beings in the real world—are irrational.

The confusion, I think, comes from the huge gap between ideal rationality and total irrationality. Our behavior is neither perfectly optimal nor hopelessly random, but somewhere in between.

In fact, we are much closer to the side of perfect rationality! Our brains are limited, so they operate according to heuristics: simplified, approximate rules that are correct most of the time. Clever experiments—or complex environments very different from how we evolved—can cause those heuristics to fail, but we must not forget that the reason we have them is that they work extremely well in most cases in the environment in which we evolved. We are about 90% rational—but woe betide that other 10%.

The most obvious example is phobias: Why are people all over the world afraid of snakes, spiders, falling, and drowning? Because those used to be leading causes of death. In the African savannah 200,000 years ago, you weren’t going to be hit by a car, shot with a rifle bullet or poisoned by carbon monoxide. (You’d probably die of malaria, actually; for that one, instead of evolving to be afraid of mosquitoes we evolved a biological defense mechanism—sickle-cell red blood cells.) Death in general was actually much more likely then, particularly for children.

A similar case can be made for other heuristics we use: We are tribal because the proper functioning of our 100-person tribe used to be the most important factor in our survival. We are racist because people physically different from us were usually part of rival tribes and hence potential enemies. We hoard resources even when our technology allows abundance, because a million years ago no such abundance was possible and every meal might be our last.

When asked how common something is, we don’t calculate a posterior probability based upon Bayesian inference—that’s hard. Instead we try to think of examples—that’s easy. That’s the availability heuristic. And if we didn’t have mass media constantly giving us examples of rare events we wouldn’t otherwise have known about, the availability heuristic would actually be quite accurate. Right now, people think of terrorism as common (even though it’s astoundingly rare) because it’s always all over the news; but if you imagine living in an ancient tribe—or even a medieval village!—anything you heard about that often would almost certainly be something actually worth worrying about. Our level of panic over Ebola is totally disproportionate; but in the 14th century that same level of panic about the Black Death would be entirely justified.

When we want to know whether something is a member of a category, again we don’t try to calculate the actual probability; instead we think about how well it seems to fit a model we have of the paradigmatic example of that category—the representativeness heuristic. You see a Black man on a street corner in New York City at night; how likely is it that he will mug you? Pretty small actually, because there were fewer than 200,000 crimes in all of New York City last year in a city of 8,000,000 people—meaning the probability any given person committed a crime in the previous year was only 2.5%; the probability on any given day would then be less than 0.01%. Maybe having those attributes raises the probability somewhat, but you can still be about 99% sure that this guy isn’t going to mug you tonight. But since he seemed representative of the category in your mind “criminals”, your mind didn’t bother asking how many criminals there are in the first place—an effect called base rate neglect. Even 200 years ago—let alone 1 million—you didn’t have these sorts of reliable statistics, so what else would you use? You basically had no choice but to assess based upon representative traits.
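The base-rate arithmetic behind that example is worth spelling out, using the same round numbers from the paragraph above:

```python
# Base-rate arithmetic for the street-corner example.
crimes_per_year = 200_000   # roughly, reported crimes in NYC that year
population = 8_000_000      # NYC residents

p_criminal_year = crimes_per_year / population
p_any_given_day = p_criminal_year / 365

print(f"{p_criminal_year:.1%}")   # 2.5% chance of any crime that year
print(f"{p_any_given_day:.4%}")   # ~0.0068% on any given day
```

Base rate neglect is skipping the first two lines of this calculation entirely and reasoning only from how well the person matches your mental prototype of “criminal”.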

As you probably know, people have trouble dealing with big numbers, and this is a problem in our modern economy where we actually need to keep track of millions or billions or even trillions of dollars moving around. And really I shouldn’t say it that way, because $1 million ($1,000,000) is an amount of money an upper-middle class person could have in a retirement fund, while $1 billion ($1,000,000,000) would put you among the 1,000 richest people in the world, and $1 trillion ($1,000,000,000,000) is enough to end world hunger for at least the next 15 years (it would only take about $1.5 trillion to do it forever, by paying only the interest on the endowment). It’s important to keep this in mind, because otherwise the natural tendency of the human mind is to say “big number” and ignore these enormous differences—it’s called scope neglect. But how often do you really deal with numbers that big? In ancient times, never. Even in the 21st century, not very often. You’ll probably never have $1 billion, and even $1 million is a stretch—so it seems a bit odd to say that you’re irrational if you can’t tell the difference. I guess technically you are, but it’s an error that is unlikely to come up in your daily life.
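The endowment arithmetic is a nice antidote to scope neglect, so here it is worked out. A sketch in Python; note that the 4.5% real rate of return is my own assumption for illustration, not a figure from the text:

```python
# Why $1.5 trillion could fund hunger relief "forever":
# "$1 trillion for 15 years" implies an annual cost of ~$67 billion.
endowment   = 1.5e12          # $1.5 trillion
annual_cost = 1e12 / 15       # ~$66.7 billion per year

assumed_return = 0.045        # hypothetical real rate of return (my assumption)
annual_interest = endowment * assumed_return   # $67.5 billion per year

print(annual_interest >= annual_cost)  # True: interest alone covers the cost
```

Scope neglect is precisely the failure to notice that these three dollar amounts, which all register mentally as “big number”, differ by factors of a thousand.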

Where it does come up, of course, is when we’re talking about national or global economic policy. Voters in the United States today have a level of power that for 99.99% of human existence no ordinary person has had. 2 million years ago you may have had a vote in your tribe, but your tribe was only 100 people. 2,000 years ago you may have had a vote in your village, but your village was only 1,000 people. Now you have a vote on the policies of a nation of 300 million people, and more than that really: As goes America, so goes the world. Our economic, cultural, and military hegemony is so total that decisions made by the United States reverberate through the entire human population. We have choices to make about war, trade, and ecology on a far larger scale than our ancestors could have imagined. As a result, the heuristics that served us well millennia ago are now beginning to cause serious problems.

[As an aside: This is why the “Downs Paradox” is so silly. If you’re calculating the marginal utility of your vote purely in terms of its effect on you—you are a psychopath—then yes, it would be irrational for you to vote. And really, by all means: psychopaths, feel free not to vote. But the effect of your vote is much larger than that; in a nation of N people, the decision will potentially affect N people. Your vote contributes 1/N to a decision that affects N people, making the marginal utility of your vote equal to N*1/N = 1. It’s constant. It doesn’t matter how big the nation is, the value of your vote will be exactly the same. The fact that your vote has a small impact on the decision is exactly balanced by the fact that the decision, once made, will have such a large effect on the world. Indeed, since larger nations also influence other nations, the marginal effect of your vote is probably larger in large elections, which means that people are being entirely rational when they go to greater lengths to elect the President of the United States (58% turnout) rather than the Wayne County Commission (18% turnout).]
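The Downs-paradox arithmetic from that aside is short enough to verify directly:

```python
# Marginal utility of one vote: a 1/N weight in the decision,
# times the N people the decision affects.
def vote_value(n_people):
    influence = 1 / n_people   # your vote's weight in the outcome
    scope = n_people           # people affected by the outcome
    return influence * scope

for n in (100, 1_000, 300_000_000):
    print(n, vote_value(n))    # ~1.0 for every n
```

The product stays constant no matter how large the electorate grows, which is the whole point: the shrinking influence and the growing scope exactly cancel.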

So that’s the problem. That’s why we have economic crises, why climate change is getting so bad, why we haven’t ended world hunger. It’s not that we’re complete idiots bumbling around with no idea what we’re doing. We simply aren’t optimized for the new environment that has been recently thrust upon us. We are forced to deal with complex problems unlike anything our brains evolved to handle. The truly amazing part is actually that we can solve these problems at all; most lifeforms on Earth simply aren’t mentally flexible enough to do that. Humans found a really neat trick (actually in a formal evolutionary sense a good trick, which we know because it also evolved in cephalopods): Our brains have high plasticity, meaning they are capable of adapting themselves to their environment in real-time. Unfortunately this process is difficult and costly; it’s much easier to fall back on our old heuristics. We ask ourselves: Why spend 10 times the effort to make it work 99% of the time when you can make it work 90% of the time so much easier?

Why? Because it’s so incredibly important that we get these things right.