# What is the price of time?

JDN 2457562

Asked outright, “What is the price of time?” sounds nonsensical to most people, like asking “What is the diameter of calculus?” or “What is the electric charge of justice?” (It’s interesting that we generally try to assign meaning to such nonsensical questions, and they often seem strangely profound when we do; a good deal of what passes for “profound wisdom” is really better explained as this sort of reaction to nonsense. Deepak Chopra, for instance.)

But there is actually a quite sensible economic meaning of this question, and answering it turns out to have many important implications for how we should run our countries and how we should live our lives.

What we are really asking for is temporal discounting; we want to know how much more money today is worth compared to tomorrow, and how much more money tomorrow is worth compared to two days from now.

If you say that they are exactly the same, your discount rate (your “price of time”) is zero; if that is indeed how you feel, may I please borrow your entire net wealth at 0% interest for the next thirty years? If you like we can even inflation-index the interest rate so it always produces a real interest rate of zero, thus protecting you from potential inflation risk.
What? You don’t like my deal? You say you need that money sooner? Then your discount rate is not zero. Similarly, it can’t be negative; if you actually valued money tomorrow more than money today, you’d gladly give me my loan.

Money today is worth more to you than money tomorrow—the only question is how much more.

There’s a very simple theorem which says that as long as your temporal discounting doesn’t change over time, so it is dynamically consistent, it must have a very specific form. I don’t normally use math this advanced in my blog, but this one is so elegant I couldn’t resist. I’ll encase it in blockquotes so you can skim over it if you must.

The value of \$1 today relative to… today is of course 1; f(0) = 1.

If you are dynamically consistent, at any time t you should discount tomorrow relative to today the same as you discounted today relative to yesterday, so for all t, f(t+1)/f(t) = f(t)/f(t-1).
Thus, f(t+1)/f(t) is independent of t, and therefore equal to some constant, which we can call r:

f(t+1)/f(t) = r, which implies f(t+1) = r f(t).

Starting at f(0) = 1, we have:

f(0) = 1, f(1) = r, f(2) = r^2

We can prove that this pattern continues to hold by mathematical induction.

Suppose the following is true for some integer k; we already know it holds for k = 0, 1, and 2:

f(k) = r^k

Let t = k:

f(k+1) = r f(k)

Therefore:

f(k+1) = r^(k+1)

Which by induction proves that for all non-negative integers n:

f(n) = r^n

The name of the variable doesn’t matter. Therefore:

f(t) = r^t

Whether you agree with me that this is beautiful, or you have no idea what I just said, the take-away is the same: If your discount rate is consistent over time, it must be exponential. There must be some constant number 0 < r < 1 such that each successive time period is worth r times as much as the previous. (You can also generalize this to the case of continuous time, where instead of r^t you get e^(-r t). This requires even more advanced math, so I’ll spare you.)
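If the algebra didn’t do it for you, here is the same idea as a quick code sketch. This is my own illustration, not part of the original argument, and the rate r = 0.95 is arbitrary:

```python
# Sketch: a dynamically consistent discount function is exponential.
# r = 0.95 per period is an illustrative choice, nothing more.

def exponential_discount(r, t):
    """Value today of $1 received t periods from now: f(t) = r^t."""
    return r ** t

r = 0.95
f = [exponential_discount(r, t) for t in range(10)]

# Dynamic consistency: the ratio f(t+1)/f(t) is the same constant r
# at every horizon, so "tomorrow vs. today" always looks the same.
ratios = [f[t + 1] / f[t] for t in range(9)]
assert all(abs(ratio - r) < 1e-12 for ratio in ratios)
```

Each period is worth r times the previous one, exactly as the theorem requires.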

Most neoclassical economists would stop right there. But there are two very big problems with this argument:

(1) It doesn’t tell us the value r should actually be, only that it should be a constant.

(2) No actual human being thinks of time this way.

There is still ongoing research as to exactly how real human beings discount time, but one thing is quite clear from the experiments: It certainly isn’t exponential.

From about 2000 to 2010, the consensus among cognitive economists was that humans discount time hyperbolically; that is, our discount function looks like this:

f(t) = 1/(1 + r t)

In the 1990s there were a couple of experiments supporting hyperbolic discounting. There is even some theoretical work trying to show that this is actually optimal, given a certain kind of uncertainty about the future, and the argument for exponential discounting relies upon certainty we don’t actually have. Hyperbolic discounting could also result if we were reasoning as though we are given a simple interest rate, rather than a compound interest rate.
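To make the contrast concrete, here is a small sketch (with rates I picked purely for illustration) of the classic result: hyperbolic discounting produces preference reversals over time, while exponential discounting never does:

```python
# Sketch: hyperbolic vs. exponential discounting. The rates r_h and
# r_e are illustrative only, not fitted to any experiment.

def hyperbolic(r, t):
    return 1 / (1 + r * t)

def exponential(r, t):
    return r ** t

r_h, r_e = 1.0, 0.9  # per-day rates, chosen for illustration

# $100 now vs. $110 tomorrow:
near_small = 100 * hyperbolic(r_h, 0)   # 100.0
near_large = 110 * hyperbolic(r_h, 1)   # 55.0 -> take the $100 now
assert near_small > near_large

# The same choice pushed a year out ($100 in 365 days vs. $110 in 366):
far_small = 100 * hyperbolic(r_h, 365)
far_large = 110 * hyperbolic(r_h, 366)
assert far_large > far_small            # now you'd rather wait

# Exponential discounting never reverses: the comparison depends only
# on the one-day gap, not on how far away that gap is.
near = 110 * exponential(r_e, 1) > 100
far = 110 * exponential(r_e, 366) > 100 * exponential(r_e, 365)
assert near == far
```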

But even that doesn’t really seem like how humans think, now does it? It’s already weird enough for someone to say “Should I take out this loan at 5%? Well, my discount rate is 7%, so yes.” But I can at least imagine that happening when people are comparing two different interest rates (“Should I pay down my student loans, or my credit cards?”). I can’t imagine anyone thinking, “Should I take out this loan at 5% APR which I’d need to repay after 5 years? Well, let’s check my discount function, 1/(1+0.05 (5)) = 0.8, multiplied by 1.05^5 = 1.28, the product of which is 1.02, greater than 1, so no, I shouldn’t.” That isn’t how human brains function.
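For what it’s worth, the arithmetic in that imaginary inner monologue does check out:

```python
# Checking the arithmetic from the (deliberately absurd) inner
# monologue above: hyperbolically discount the year-5 repayment.

r = 0.05          # both the APR and the hyperbolic rate, as in the text
years = 5

discount = 1 / (1 + r * years)       # 1/(1 + 0.05*5) = 0.8
repayment_growth = (1 + r) ** years  # 1.05^5 ~ 1.276

cost_of_loan = discount * repayment_growth  # ~ 1.02
# Greater than 1: the discounted repayment exceeds the loan, so "no".
assert cost_of_loan > 1
```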

Moreover, recent experiments have shown that people often don’t seem to behave according to what hyperbolic discounting would predict.

Therefore I am very much in the other camp of cognitive economists, who say that we don’t have a well-defined discount function. It’s not exponential, it’s not hyperbolic, it’s not “quasi-hyperbolic” (yes that is a thing); we just don’t have one. We reason about time by simple heuristics. You can’t make a coherent function out of it because human beings… don’t always reason coherently.

Some economists seem to have an incredible amount of trouble accepting that; here we have one from the University of Chicago arguing that hyperbolic discounting can’t possibly exist, because then people could be Dutch-booked out of all their money; but this amounts to saying that human behavior cannot ever be irrational, lest all our money magically disappear. Yes, we know hyperbolic discounting (and heuristics) allow for Dutch-booking; that’s why they’re irrational. If you really want to know the formal assumption this paper makes that is wrong, it assumes that we have complete markets—and yes, complete markets essentially force you to be perfectly rational or die, because the slightest inconsistency in your reasoning results in someone convincing you to bet all your money on a sure loss. Why was it that we wanted complete markets, again? (Oh, yes, the fanciful Arrow-Debreu model, the magical fairy land where everyone is perfectly rational and all markets are complete and we all have perfect information and the same amount of wealth and skills and the same preferences, where everything automatically achieves a perfect equilibrium.)

There was a very good experiment on this, showing that rather than discount hyperbolically, behavior is better explained by a heuristic that people judge which of two options is better by a weighted sum of the absolute distance in time plus the relative distance in time. Now that sounds like something human beings might actually do. “\$100 today or \$110 tomorrow? That’s only 1 day away, but it’s also twice as long. I’m not waiting.” “\$100 next year, or \$110 in a year and a day? It’s only 1 day apart, and it’s only slightly longer, so I’ll wait.”
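Here is one hypothetical way such a heuristic might be coded up. To be clear, the weights and the 0.1 threshold are entirely my own invention, not the experiment’s fitted values:

```python
# Hypothetical sketch of the weighted-sum heuristic: judge the "cost"
# of waiting as a weighted sum of the absolute delay and the relative
# delay. All parameters here are made up for illustration.

def prefers_waiting(t_soon, t_late, w_abs=0.02, w_rel=1.0):
    """Decide whether to wait from day t_soon to day t_late."""
    absolute = t_late - t_soon                     # extra days of waiting
    relative = (t_late - t_soon) / max(t_soon, 1)  # relative to the wait so far
    cost = w_abs * absolute + w_rel * relative
    return cost < 0.1  # wait only if the perceived delay feels small

# $100 today or $110 tomorrow? Only 1 day away, but twice as long: don't wait.
assert not prefers_waiting(0, 1)

# $100 next year, or $110 in a year and a day? Tiny on both measures: wait.
assert prefers_waiting(365, 366)
```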

That might not actually be the precise heuristic we use, but it at least seems like one that people could use.

John Duffy, whom I hope to work with at UCI starting this fall, has been working on an experiment to test a different heuristic, based on the work of Daniel Kahneman. The idea is that we have a fast, impulsive System 1 reasoning layer and a slow, deliberative System 2 reasoning layer, and our judgments combine both: a “hand to mouth” impulse from System 1, which essentially tries to get everything immediately and spend whatever we can get our hands on, and a more rational assessment from System 2 that might actually resemble an exponential discount rate. In a 5-minute judgment, System 1’s voice is overwhelming; but if we’re already planning a year out, System 1 doesn’t even care anymore and System 2 can take over. This model also has the nice feature of explaining why people with better self-control seem to behave more like they use exponential discounting,[PDF link] and why people do on occasion reason more or less exponentially, while I have literally never heard anyone try to reason hyperbolically—only economic theorists trying to use hyperbolic models to explain behavior.
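A minimal sketch of how such a two-system valuation might look. This is my illustration, not Duffy’s actual model, and every parameter in it is made up:

```python
# Hypothetical two-system valuation: an impulsive System 1 that only
# values immediate rewards, blended with a System 2 that discounts
# exponentially. Weights and rates are invented for illustration.

def two_system_value(t_days, w1=0.4, delta=0.9997):
    system1 = 1.0 if t_days == 0 else 0.0  # "hand to mouth": now or nothing
    system2 = delta ** t_days              # patient exponential discounting
    return w1 * system1 + (1 - w1) * system2

# In a same-day choice, System 1's voice dominates: immediate rewards
# are worth far more than even a one-day delay.
assert two_system_value(0) > 1.5 * two_system_value(1)

# A year out, System 1 contributes nothing either way, so the comparison
# collapses to pure exponential discounting.
ratio = two_system_value(366) / two_system_value(365)
assert abs(ratio - 0.9997) < 1e-9
```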

Another theory is that discounting is “subadditive”, that is, if you break up a long time interval into many short intervals, people will discount it more, because it feels longer that way. Imagine a century. Now imagine a year, another year, another year, all the way up to 100 years. Now imagine a day, another day, another day, all the way up to 365 days for the first year, and then 365 days for the second year, and that on and on up to 100 years. It feels longer, doesn’t it? It is of course exactly the same. This can account for some weird anomalies in choice behavior, but I’m not convinced it’s as good as the two-system model.

Another theory is that we simply have a “present bias”, which we treat as a sort of fixed cost that we incur regardless of what the payments are. I like this because it is so supremely simple, but there’s something very fishy about it, because in this experiment it was just fixed at \$4, and that can’t be right. It must be fixed at some proportion of the rewards, or something like that; or else we would always exhibit near-perfect exponential discounting for large amounts of money, which is more expensive to test (quite directly), but still seems rather unlikely.
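A quick sketch of why a flat \$4 bias seems fishy. The discount rate here is arbitrary, but the scaling problem is generic:

```python
# Sketch: treat choosing the delayed option as incurring a flat $4
# cost (the figure from the experiment) on top of exponential
# discounting. The rate r = 0.97/year is an arbitrary illustration.

def delayed_value(amount, t_years, r=0.97, present_bias=4.0):
    return amount * (r ** t_years) - present_bias

# For small stakes, the $4 bias dominates: $10 in a year loses almost
# half its value, far more than discounting alone would predict.
assert delayed_value(10, 1) < 6.0

# For large stakes, the same $4 is a rounding error, so choices would
# look almost perfectly exponential -- which seems unlikely.
big = delayed_value(10_000, 1)
assert abs(big - 10_000 * 0.97) / (10_000 * 0.97) < 0.001
```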

Why is this important? This post is getting long, so I’ll save it for future posts, but in short, the ways that we value future costs and benefits, both as we actually do, and as we ought to, have far-reaching implications for everything from inflation to saving to environmental sustainability.

# The Tragedy of the Commons

JDN 2457387

In a previous post I talked about one of the most fundamental—perhaps the most fundamental—problem in game theory, the Prisoner’s Dilemma, and how neoclassical economic theory totally fails to explain actual human behavior when faced with this problem in both experiments and the real world.

As a brief review, the essence of the game is that both players can either cooperate or defect; if they both cooperate, the outcome is best overall; but it is always in each player’s interest to defect. So a neoclassically “rational” player would always defect—resulting in a bad outcome for everyone. But real human beings typically cooperate, and thus do better. The “paradox” of the Prisoner’s Dilemma is that being “rational” results in making less money at the end.

Obviously, this is not actually a good definition of rational behavior. Being short-sighted and ignoring the impact of your behavior on others doesn’t actually produce good outcomes for anybody, including yourself.

But the Prisoner’s Dilemma only has two players. If we expand to a larger number of players, the expanded game is called a Tragedy of the Commons.

When we do this, something quite surprising happens: As you add more people, their behavior starts converging toward the neoclassical solution, in which everyone defects and we get a bad outcome for everyone.

Indeed, people in general become less cooperative, less courageous, and more apathetic the more of them you put together. Agent K of Men in Black was quite apt when he said, “A person is smart; people are dumb, panicky, dangerous animals and you know it.” There are ways to counteract this effect, as I’ll get to in a moment—but there is a strong effect that needs to be counteracted.

We see this most vividly in the bystander effect. If someone is walking down the street and sees someone fall and injure themselves, there is about a 70% chance that they will go try to help the person who fell—humans are altruistic. But if there are a dozen people walking down the street who all witness the same event, there is only a 40% chance that any of them will help—humans are irrational.

The primary reason appears to be diffusion of responsibility. When we are alone, we are the only one who could help, so we feel responsible for helping. But when there are others around, we assume that someone else could take care of it for us, so if it isn’t done that’s not our fault.

There also appears to be a conformity effect: We want to conform our behavior to social norms (as I said, to a first approximation, all human behavior is social norms). The mere fact that there are other people who could have helped but didn’t suggests the presence of an implicit social norm that we aren’t supposed to help this person for some reason. It never occurs to most people to ask why such a norm would exist or whether it’s a good one—it simply never occurs to most people to ask those questions about any social norms. In this case, by hesitating to act, people actually end up creating the very norm they think they are obeying.

This can lead to what’s called an Abilene Paradox, in which people simultaneously try to follow what they think everyone else wants and also try to second-guess what everyone else wants based on what they do, and therefore end up doing something that none of them actually wanted. I think a lot of the weird things humans do can actually be attributed to some form of the Abilene Paradox. (“Why are we sacrificing this goat?” “I don’t know, I thought you wanted to!”)

Autistic people are not as good at following social norms (though some psychologists believe this is simply because our social norms are optimized for the neurotypical population). My suspicion is that autistic people are therefore less likely to suffer from the bystander effect, and more likely to intervene to help someone even if they are surrounded by passive onlookers. (Unfortunately I wasn’t able to find any good empirical data on that—it appears no one has ever thought to check before.) I’m quite certain that autistic people are less likely to suffer from the Abilene Paradox—if they don’t want to do something, they’ll tell you so (which sometimes gets them in trouble).

Because of these psychological effects that blunt our rationality, in large groups human beings often do end up behaving in a way that appears selfish and short-sighted.

Nowhere is this more apparent than in ecology. Recycling, becoming vegetarian, driving less, buying more energy-efficient appliances, insulating buildings better, installing solar panels—none of these things are particularly difficult or expensive to do, especially when weighed against the tens of millions of people who will die if climate change continues unabated. Every recyclable can we throw in the trash is a silent vote for a global holocaust.

But, as no doubt immediately occurred to you: No single one of us is responsible for all that. There’s no way I myself could possibly save enough carbon emissions to significantly reduce climate change—indeed, probably not even enough to save a single human life (though maybe). This is certainly true; the error lies in thinking that this somehow absolves us of the responsibility to do our share.

I think part of what makes the Tragedy of the Commons so different from the Prisoner’s Dilemma, at least psychologically, is that the latter has an identifiable victim—we know we are specifically hurting that person more than we are helping ourselves. We may even know their name (and if we don’t, we’re more likely to defect—simply being on the Internet makes people more aggressive because they don’t interact face-to-face). In the Tragedy of the Commons, it is often the case that we don’t know who any of our victims are; moreover, it’s quite likely that we harm each one less than we benefit ourselves—even though we harm everyone overall more.

Suppose that driving a gas-guzzling car gives me 1 milliQALY of happiness, but takes away an average of 1 nanoQALY from everyone else in the world. A nanoQALY is tiny! Negligible, even, right? One billionth of a year, a mere 30 milliseconds! Literally less than the blink of an eye. But take away 30 milliseconds from everyone on Earth and you have taken away 7 years of human life overall. Do that 10 times, and statistically one more person is dead because of you. And you have gained only 10 milliQALY, roughly the value of \$300 to a typical American. Would you kill someone for \$300?
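Since that paragraph is doing real arithmetic, here is the calculation spelled out (using an illustrative round figure of 7 billion people):

```python
# The nanoQALY arithmetic from the paragraph above, with round numbers.

world_population = 7_000_000_000
nano = 1e-9   # one billionth of a QALY: ~30 milliseconds of life
milli = 1e-3

# One gas-guzzling habit: everyone else loses 1 nanoQALY.
total_loss_years = world_population * nano
assert abs(total_loss_years - 7.0) < 1e-6   # 7 person-years lost overall

# Do that 10 times: ~70 person-years, roughly one statistical life.
assert abs(10 * total_loss_years - 70.0) < 1e-5

# Your gain: 10 milliQALY, valued in the text at roughly $300.
your_gain_qaly = 10 * milli
assert abs(your_gain_qaly - 0.01) < 1e-12
```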

Peter Singer has argued that we should in fact think of it this way—when we cause a statistical death by our inaction, we should call it murder, just as if we had left a child to drown to keep our clothes from getting wet. I can’t agree with that. When you think seriously about the scale and uncertainty involved, it would be impossible to live at all if we were constantly trying to assess whether every action would lead to statistically more or less happiness to the aggregate of all human beings through all time. We would agonize over every cup of coffee, every new video game. In fact, the global economy would probably collapse because none of us would be able to work or willing to buy anything for fear of the consequences—and then whom would we be helping?

That uncertainty matters. Even the fact that there are other people who could do the job matters. If a child is drowning and there is a trained lifeguard right next to you, the lifeguard should go save the child, and if they don’t it’s their responsibility, not yours. Maybe if they don’t you should try; but really they should have been the one to do it.
But we must also not allow ourselves to simply fall into apathy, to do nothing simply because we cannot do everything. We cannot assess the consequences of every specific action into the indefinite future, but we can find general rules and patterns that govern the consequences of actions we might take. (This is the difference between act utilitarianism, which is unrealistic, and rule utilitarianism, which I believe is the proper foundation for moral understanding.)

Thus, I believe the solution to the Tragedy of the Commons is policy. It is to coordinate our actions together, and create enforcement mechanisms to ensure compliance with that coordinated effort. We don’t look at acts in isolation, but at policy systems holistically. The proper question is not “What should I do?” but “How should we live?”

In the short run, this can lead to results that seem deeply suboptimal—but in the long run, policy answers lead to sustainable solutions rather than quick-fixes.

People are starving! Why don’t we just steal money from the rich and use it to feed people? Well, think about what would happen if we said that the property system can simply be unilaterally undermined if someone believes they are achieving good by doing so. The property system would essentially collapse, along with the economy as we know it. A policy answer to that same question might involve progressive taxation enacted by a democratic legislature—we agree, as a society, that it is justified to redistribute wealth from those who have much more than they need to those who have much less.

Our government is corrupt! We should launch a revolution! Think about how many people die when you launch a revolution. Think about past revolutions. While some did succeed in bringing about more just governments (e.g. the French Revolution, the American Revolution), they did so only after a long period of strife; and other revolutions (e.g. the Russian Revolution, the Iranian Revolution) have made things even worse. Revolution is extremely costly and highly unpredictable; we must use it only as a last resort against truly intractable tyranny. The policy answer is of course democracy; we establish a system of government that elects leaders based on votes, and then if they become corrupt we vote to remove them. (Sadly, we don’t seem so good at that second part—the US Congress has a 14% approval rating but a 95% re-election rate.)

And in terms of ecology, this means that berating ourselves for our sinfulness in forgetting to recycle or not buying a hybrid car does not solve the problem. (Not that it’s bad to recycle, drive a hybrid car, and eat vegetarian—by all means, do these things. But it’s not enough.) We need a policy solution, something like a carbon tax or cap-and-trade that will enforce incentives against excessive carbon emissions.

In case you don’t think politics makes a difference, all of the Democratic candidates for President have proposed such plans—Bernie Sanders favors a carbon tax, Martin O’Malley supports an aggressive cap-and-trade plan, and Hillary Clinton favors heavily subsidizing wind and solar power. The Republican candidates on the other hand? Most of them don’t even believe in climate change. Chris Christie and Carly Fiorina at least accept the basic scientific facts, but (1) they are very unlikely to win at this point and (2) even they haven’t announced any specific policy proposals for dealing with it.

This is why voting is so important. We can’t do enough on our own; the coordination problem is too large. We need to elect politicians who will make policy. We need to use the systems of coordination enforcement that we have built over generations—and that is fundamentally what a government is, a system of coordination enforcement. Only then can we overcome the tendency among human beings to become apathetic and short-sighted when faced with a Tragedy of the Commons.

# No, advertising is not signaling

JDN 2457373

A while ago, I wrote a post arguing that advertising is irrational: at least with advertising as we know it, no real information is conveyed, and thus either consumers are being irrational in their purchasing decisions, or advertisers are irrational for buying ads that don’t work.

One of the standard arguments neoclassical economists make to defend the rationality of advertising is that advertising is signaling—that even though the content of the ads conveys no useful information, the fact that there are ads is a useful signal of the real quality of goods being sold.

The idea is that by spending on advertising, a company shows that they have a lot of money to throw around, and are therefore a stable and solvent company that probably makes good products and is going to stick around for awhile.

Here are a number of different papers all making this same basic argument, often with sophisticated mathematical modeling. This paper takes an even bolder approach, arguing that people benefit from ads and would therefore pay to get them if they had to. Does that sound even remotely plausible to you? It sure doesn’t to me. Some ads are fairly entertaining, but generally if someone is willing to pay money for a piece of content, they charge money for that content.

Could spending on advertising offer a signal of the quality of a product or the company that makes it? Yes. That is something that actually could happen. The reason this argument is ridiculous is not that advertising signaling couldn’t happen—it’s that advertising is clearly nowhere near the best way to do that. The content of ads is clearly nothing remotely like what it would be if advertising were meant to be a costly signal of quality.

Look at this ad for Orangina. Look at it. Look at it.

Now, did that ad tell you anything about Orangina? Anything at all?

As far as I can tell, the thing it actually tells you isn’t even true—it strongly implies that Orangina is a form of aftershave when in fact it is an orange-flavored beverage. It’d be kind of like having an ad for the iPad that involves scantily-clad dog-people riding the iPad like it’s a hoverboard. (Now that I’ve said it, Apple is probably totally working on that ad.)

This isn’t an isolated incident for Orangina, who have a tendency to run bizarre and somewhat suggestive (let’s say PG-13) TV spots involving anthropomorphic animals.

But more than that, it’s endemic to the whole advertising industry.

Look at GEICO, for instance; without them specifically mentioning that this is car insurance, you’d never know what they were selling from all the geckos,

and Neanderthals,

and… golf Krakens?

Progressive does slightly better, talking about some of their actual services while also including an adorably-annoying spokesperson (she’s like Jar Jar, but done better):

State Farm also includes at least a few tidbits about their insurance amidst the teleportation insanity:

But honestly the only car insurance commercials I can think of that are actually about car insurance are Allstate’s, and even then they’re mostly about Dennis Haysbert’s superhuman charisma. I would buy bacon cheeseburgers from this man, and I’m vegetarian.

Esurance is also relatively informative (and owned by Allstate, by the way); they talk about their customer service and low prices (in other words, the only things you actually care about with car insurance). But even so, what reason do we have to believe their bald assertions of good customer service? And what’s the deal with the whole money-printing thing?

And of course I could deluge you with examples from other companies, from Coca-Cola’s polar bears and Santa Claus to this commercial, which is literally the most American thing I have ever seen:

If you’re from some other country and are going, “What!?” right now, that’s totally healthy. Honestly I think we would too if constant immersion in this sort of thing hadn’t deadened our souls.

Do these ads signal that their companies have a lot of extra money to burn? Sure. But there are plenty of other ways to do that which would also serve other valuable functions. I honestly can’t imagine any scenario in which the best way to tell me the quality of an auto insurance company is to show me 30-second spots about geckos and Neanderthals.

If a company wants to signal that they have a lot of money, they could simply report their financial statement. That’s even regulated so that we know it has to be accurate (and this is one of the few financial regulations we actually enforce). The amount you spent on an ad is not obvious from the result of the ad, and doesn’t actually prove that you’re solvent, only that you have enough access to credit. (Pets.com famously collapsed the same year they ran a multi-million-dollar Super Bowl ad.)

If a company wants to signal that they make a good product, they could pay independent rating agencies to rate products on their quality (you know, like credit rating agencies and reviewers of movies and video games). Paying an independent agency is far more reliable than the signaling provided by advertising. Consumers could also pay their own agencies, which would be even more reliable; credit rating agencies and movie reviewers do sometimes have a conflict of interest, which could be resolved by making them report to consumers instead of producers.

If a company wants to establish that they are both financially stable and socially responsible, they could make large public donations to important charities. (This is also something that corporations do on occasion, such as Subaru’s recent campaign.) Or they could publicly announce a raise for all their employees. This would not only provide us with the information that they have this much money to spend—it would actually have a direct positive social effect, thus putting their money where their mouth is.

Signaling theory in advertising is based upon the success of signaling theory in evolutionary biology, which is beyond dispute; but evolution is tightly constrained in what it can do, so wasteful costly signals make sense. Human beings are smarter than that; we can find ways to convey information that don’t involve ludicrous amounts of waste.

If we were anywhere near as rational as these neoclassical models assume us to be, we would take the constant bombardment of meaningless ads not as a signal of a company’s quality but as a personal assault—they are needlessly attacking our time and attention when all the genuinely-valuable information they convey could have been conveyed much more easily and reliably. We would not buy more from them; we would refuse to buy from them. And indeed, I’ve learned to do just that; the more a company bombards me with annoying or meaningless advertisements, the more I make a point of not buying their product if I have a viable substitute. (For similar reasons, I make a point of never donating to any charity that uses hard-sell tactics to solicit donations.)

But of course the human mind is limited. We only have so much attention, and by bombarding us frequently and intensely enough they can overcome our mental defenses and get us to make decisions we wouldn’t if we were optimally rational. I can feel this happening when I am hungry and a food ad appears on TV; my autonomic hunger response combined with their expert presentation of food in the perfect lighting makes me want that food, if only for the few seconds it takes my higher cognitive functions to kick in and make me realize that I don’t eat meat and I don’t like mayonnaise.

Car commercials have always been particularly baffling to me. Who buys a car based on a commercial? A decision to spend \$20,000 should not be made based upon 30 seconds of obviously biased information. But either people do buy cars based on commercials or they don’t; if they do, consumers are irrational, and if they don’t, car companies are irrational.

Advertising isn’t the source of human irrationality, but it feeds upon human irrationality, and is specifically designed to exploit our own stupidity to make us spend money in ways we wouldn’t otherwise. This means that markets will not be efficient, and huge amounts of productivity can be wasted because we spent it on what they convinced us to buy instead of what would truly have made our lives better. Those companies then profit more, which encourages them to make even more stuff nobody actually wants and sell it that much harder… and basically we all end up buying lots of worthless stuff and putting it in our garages and wondering what happened to our money and the meaning in our lives. Neoclassical economists really need to stop making ridiculous excuses for this damaging and irrational behavior—and maybe then we could actually find a way to make it stop.

# Advertising: Someone is being irrational

JDN 2457285 EDT 12:52

I’m working on moving toward a slightly different approach to posting; instead of one long 3000-word post once a week, I’m going to try to do two more bite-sized posts of about 1500 words or less spread throughout the week. I’m actually hoping to work toward setting up a Patreon and making blogging into a source of income.

Today’s bite-sized post is about advertising, and a rather simple, basic argument that shows that irrational economic behavior is widespread.

First, there are advertisements that don’t make sense. They don’t tell you anything about the product, they are often completely absurd, and while sometimes entertaining they are rarely so entertaining that people would pay to see them in theaters or buy them on DVD—which means that any entertainment value they had is outweighed by the opportunity cost of seeing them instead of the actual TV show, movie, or whatever else it was you wanted to see.

If you doubt that there are advertisements that don’t make sense, I have one example in particular for you which I think will settle this matter:

If you didn’t actually watch it, you must. It is too absurd to be explained.

And of course there are many other examples, from Coca-Cola’s weird associations with polar bears to the series of GEICO TV spots about Neanderthals that they thought were so entertaining as to deserve a TV show (the world proved them wrong), to M&M commercials that present a terrifying world in which humans regularly consume the chocolatey flesh of other sapient citizens (and I thought beef was bad!).

Or here’s another good one:

In the above commercial, Walmart attempts to advertise themselves by showing a heartwarming story of a child who works hard to make money by doing odd jobs, including using the model of door-to-door individual sales that Walmart exists to make obsolete. The only contribution Walmart makes to the story is apparently “we have affordable bicycles for children”. Coca-Cola is also thrown in for some reason.

Certain products seem to attract nonsensical advertising more than others, with car insurance the prime culprit of totally nonsensical and irrelevant commercials; GEICO in particular does not actually seem to be any good at providing car insurance, but instead seems to spend all of its resources making commercials.

Commercials for cars themselves are an interesting case, as certain ads actually appeal in at least a general way to the quality of the vehicle itself:

Then there are those that vaguely allude to qualities of their vehicles, but mostly immerse us in optimistic cyberpunk:

Others, however, make no attempt to say anything about the vehicle, instead spinning us exciting tales of giant hamsters who use the car and the power of dance to somehow form a truce between warring robot factions in a dystopian future (if you haven’t seen this commercial, none of that is a joke; see for yourself below):

So, I hope that I have satisfied you that there are in fact advertisements which don’t make sense, which could not possibly give anyone a rational reason to purchase the product they advertise.

Therefore, at least one of the following statements must be true:

1. Consumers behave irrationally by buying products for irrational reasons

2. Corporations behave irrationally by paying for ads that don’t work

Both could be true (in fact I think both are true), but at least one must be, on pain of contradiction, as long as you accept that there are advertisements which don’t provide rational reasons to buy products. There’s no wiggling out of this one, neoclassicists.

Advertising forms a large part of our economy—Americans spend \$171 billion per year on ads, more than the federal government spends on education, and also more than the nominal GDP of Hungary or Vietnam. This figure is growing thanks to the Internet and its proliferation of “free” ad-supported content. Insofar as advertising is irrational, this money is being thrown down the drain.

The waste from spending on ads that don’t work is limited; you can’t waste more than you actually spent. But the waste from buying things you don’t actually need is not limited in the same way; an ad that cost \$1 million to air (cheaper than a typical Super Bowl ad) could lead to \$10 million in worthless purchases.

I wouldn’t say that all advertising is irrational; some ads do actually provide enough meaningful information about a product that they could reasonably motivate you to buy it (or at least look into buying it), and it is in both your best interest and the company’s best interest for you to have such information.

But I think it’s not unreasonable to estimate that about half of our advertising spending is irrational, either by making people buy things for bad reasons or by making corporations waste time and money buying ads that don’t work. This amounts to some \$85 billion per year, or enough to pay the undergraduate tuition of every student at every public university in the United States.

This state of affairs is not inevitable.

Most meaningless ads could be undermined by regulation; instead of the current “blacklist” model where an ad is legal as long as it doesn’t explicitly state anything that is verifiably false, we could move to a “whitelist” model where an ad is illegal if it states anything that isn’t verifiably true. Red Bull cannot give you wings, Maxwell House isn’t good to the last drop, and Volkswagen needs to be more specific than “round for a reason”. We may never be able to completely eliminate irrelevant emotionally-salient allusions (pictures of families, children, puppies, etc.), but as long as the actual content of the words is regulated it would be much harder to deluge people with advertisements that provide no actual information.

We have a choice, as a civilization: Do we want to continue to let meaningless ads invade our brains and waste the resources of our society?

# How following the crowd can doom us all

JDN 2457110 EDT 21:30

Humans are nothing if not social animals. We like to follow the crowd, do what everyone else is doing—and many of us will continue to do so even if our own behavior doesn’t make sense to us. There is a very famous experiment in cognitive science that demonstrates this vividly.

People are given a very simple task to perform several times: We show you line X and lines A, B, and C. Now tell us which of A, B or C is the same length as X. Couldn’t be easier, right? But there’s a trick: seven other people are in the same room performing the same experiment, and they all say that B is the same length as X, even though you can clearly see that A is the correct answer. Do you stick with what you know, or say what everyone else is saying? Typically, you say what everyone else is saying. Over 18 trials, 75% of people followed the crowd at least once, and some people followed the crowd every single time. Some people even began to doubt their own perception, wondering if B really was the right answer—there are four lights, anyone?

Given that our behavior can be distorted by others in such simple and obvious tasks, it should be no surprise that it can be distorted even more in complex and ambiguous tasks—like those involved in finance. If everyone is buying up Beanie Babies or Tweeter stock, maybe you should too, right? Can all those people be wrong?

In fact, matters are even worse with the stock market, because it is in a sense rational to buy into a bubble if you know that other people will as well. As long as you aren’t the last to buy in, you can make a lot of money that way. In speculation, you try to predict the way that other people will cause prices to move and base your decisions around that—but then everyone else is doing the same thing. Keynes called it a “beauty contest”; apparently in his day it was common to have contests for picking the most beautiful photo—but how is beauty assessed? By how many people pick it! So you actually don’t want to choose the one you think is most beautiful, you want to choose the one you think most people will think is the most beautiful—or the one you think most people will think most people will think….
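
For the mathematically inclined, this kind of recursive second-guessing is often formalized as the “guess 2/3 of the average” game: everyone picks a number, and whoever comes closest to 2/3 of the group average wins. Here is a minimal sketch in Python; the 2/3 multiplier and the naive starting guess of 50 are standard illustrative choices, not anything measured:

```python
# The Keynesian beauty contest, formalized as the "guess 2/3 of the
# average" game. A level-0 player guesses naively; a level-k player
# best-responds to a crowd of level-(k-1) players.

def level_k_guess(k, naive_guess=50.0, multiplier=2/3):
    """Guess of a level-k reasoner (naive_guess and multiplier are
    illustrative assumptions)."""
    guess = naive_guess
    for _ in range(k):
        guess *= multiplier  # best-respond to the previous level of reasoning
    return guess

for k in range(6):
    print(f"level {k}: guess {level_k_guess(k):.1f}")
```

Each additional level of “predicting the predictors” pushes the guess further from the naive answer; in real experiments, most people seem to reason only one or two levels deep.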

Our herd behavior probably made a lot more sense when we evolved it millennia ago; when most threats are external and human beings don’t have much influence over their environment, the majority opinion is quite likely to be right, and can often give you an answer much faster than you could figure it out on your own. (If everyone else thinks a lion is hiding in the bushes, there’s probably a lion hiding in the bushes—and if there is, the last thing you want is to be the only one who didn’t run.) The problem arises when this tendency to follow the crowd feeds back on itself, and our behavior becomes driven not by the external reality but by an attempt to predict each other’s predictions of each other’s predictions. Yet this is exactly how financial markets are structured.

With this in mind, the surprise is not why markets are unstable—the surprise is why markets are ever stable. I think the main reason markets ever manage price stability is actually something most economists think of as a failure of markets: Price rigidity and so-called “menu costs”. If it’s costly to change your price, you won’t be constantly trying to adjust it to the mood of the hour—or the minute, or the microsecond—but instead will try to tie it to the fundamental value of what you’re selling so that the price will remain close to correct for a long time ahead. You may get shortages in times of high demand and gluts in times of low demand, but as long as those two things roughly balance out you’ll leave the price where it is. But if you can instantly and costlessly change the price however you want, you can raise it when people seem particularly interested in buying and lower it when they don’t, and then people can start trying to buy when your price is low and sell when it is high. If people were completely rational and had perfect information, this arbitrage would stabilize prices—but since they’re not, arbitrage attempts can over- or under-compensate, and thus result in cyclical or even chaotic changes in prices.

Our herd behavior then makes this worse, as more people buying leads to, well, more people buying, and more people selling leads to more people selling. If there were no other causes of behavior, the result would be prices that explode outward exponentially; but even with other forces trying to counteract them, prices can move suddenly and unpredictably.
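
If you like, you can see the explosive feedback in a toy simulation. This is purely illustrative (the feedback gain k is an arbitrary assumption), not a model of any real market:

```python
# Pure herd feedback: buyers chase the most recent price change, and
# their buying pushes the price further in the same direction. With no
# anchoring force, any initial nudge compounds exponentially.
# (k is an illustrative feedback gain, chosen arbitrarily.)

def herd_prices(p0=100.0, nudge=1.0, k=1.5, steps=10):
    """Price path when each price change provokes k times as large a change."""
    prices = [p0, p0 + nudge]
    for _ in range(steps):
        change = prices[-1] - prices[-2]
        prices.append(prices[-1] + k * change)  # the crowd chases the trend
    return prices

print(herd_prices())
```

With a gain above 1, every price change provokes a larger one; any stabilizing force has to fight this compounding.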

If most traders are irrational or under-informed while a handful are rational and well-informed, the latter can exploit the former for enormous amounts of money; this fact is often used to argue that irrational or under-informed traders will simply drop out, but it should only take you a few moments of thought to see why that isn’t necessarily true. The incentive isn’t just to be well-informed but also to keep others from being well-informed. If everyone were rational and had perfect information, stock trading would be the most boring job in the world, because the prices would never change except perhaps to grow with the growth rate of the overall economy. Wall Street therefore has every incentive in the world not to let that happen. And now perhaps you can see why they are so opposed to regulations that would require them to improve transparency or slow down market changes. Without the ability to deceive people about the real value of assets or trigger irrational bouts of mass buying or selling, Wall Street would make little or no money at all. Not only are markets inherently unstable by themselves, in addition we have extremely powerful individuals and institutions who are driven to ensure that this instability is never corrected.

This is why as our markets have become ever more streamlined and interconnected, instead of becoming more efficient as expected, they have actually become more unstable. They were never stable—and the gold standard made that instability worse—but despite monetary policy that has provided us with very stable inflation in the prices of real goods, the prices of assets such as stocks and real estate have continued to fluctuate wildly. Real estate isn’t as bad as stocks, again because of price rigidity—houses rarely have their values re-assessed multiple times per year, let alone multiple times per second. But real estate markets are still unstable, because of so many people trying to speculate on them. We think of real estate as a good way to make money fast—and if you’re lucky, it can be. But in a rational and efficient market, real estate would be almost as boring as stock trading; your profits would be driven entirely by population growth (increasing the demand for land without changing the supply) and the value added in construction of buildings. In fact, the population growth effect should be sapped by a land tax, and then you should only make a profit if you actually build things. Simply owning land shouldn’t be a way of making money—and the reason for this should be obvious: You’re not actually doing anything. I don’t like patent rents very much, but at least inventing new technologies is actually beneficial for society. Owning land contributes absolutely nothing, and yet it has been one of the primary means of amassing wealth for centuries and continues to be today.

But (so-called) investors and the banks and hedge funds they control have little reason to change their ways, as long as the system is set up so that they can keep profiting from the instability that they foster. Particularly when we let them keep the profits when things go well, but immediately rush to bail them out when things go badly, they have basically no incentive at all not to take maximum risk and seek maximum instability. We need a fundamentally different outlook on the proper role and structure of finance in our economy.

Fortunately one is emerging, summarized in a slogan among economically-savvy liberals: Banking should be boring. (Elizabeth Warren has said this, as have Joseph Stiglitz and Paul Krugman.) And indeed it should, for all banks are supposed to do is channel money from people who have it and don’t need it to people who need it but don’t have it. They aren’t supposed to be making large profits of their own, because they aren’t the ones actually adding value to the economy. Indeed it was never quite clear to me why banks should be privatized in the first place, though I guess it makes more sense than, oh, say, prisons.

Unfortunately, the majority opinion right now, at least among those who make policy, seems to be that banks don’t need to be restructured or even placed on a tighter leash; no, they need to be set free so they can work their magic again. Even otherwise reasonable, intelligent people quickly become unshakeable ideologues when it comes to the idea of raising taxes or tightening regulations. And as much as I’d like to think that it’s just a small but powerful minority of people who thinks this way, I know full well that a large proportion of Americans believe in these views and intentionally elect politicians who will act upon them.

All the more reason to break from the crowd, don’t you think?

# Scope neglect and the question of optimal altruism

JDN 2457090 EDT 16:15.

We’re now on Eastern Daylight Time because of this bizarre tradition of shifting our time zone forward for half of the year. It’s supposed to save energy, but a natural experiment in Indiana suggests it actually increases energy demand. So why do we do it? Like every ridiculous tradition (have you ever tried to explain Groundhog Day to someone from another country?), we do it because we’ve always done it.

This week’s topic is scope neglect, one of the most pervasive—and pernicious—cognitive heuristics human beings face. Scope neglect raises a great many challenges not only practically but also theoretically—it raises what I call the question of optimal altruism.

The question is simple to ask yet remarkably challenging to answer: How much should we be willing to sacrifice in order to benefit others? We can think of this as a number, your solidarity coefficient (s): the highest cost you are willing to pay per unit of benefit your action produces for someone else. You help whenever s B > C, where B is the benefit to them and C is the cost to you.

This is analogous to the biological concept of relatedness (r), which governs Hamilton’s Rule: r B > C. Solidarity is the psychological analogue; instead of valuing people based on their genetic similarity to you, you value them based on… well, that’s the problem.
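
The shared structure of the two rules is easy to see in a few lines of Python; the numbers plugged in here are made up purely for illustration:

```python
# Decision rule for costly helping: act whenever s * B > C, where s is
# the solidarity coefficient (how much you weigh the other person's
# welfare relative to your own), B the benefit to them, and C the cost
# to you. Hamilton's Rule is the same with s replaced by relatedness r.

def should_help(s, benefit, cost):
    """True if weighting the other's welfare by s justifies the cost."""
    return s * benefit > cost

# Illustrative numbers (assumptions, not measurements):
print(should_help(s=0.5, benefit=100, cost=40))   # sibling-level weighting
print(should_help(s=0.01, benefit=100, cost=40))  # distant-stranger weighting
```

The same sacrifice that is an easy call at a sibling-level weighting fails the test at a stranger-level weighting, which is exactly the problem the rest of this post wrestles with.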

I can easily place upper and lower bounds. The lower bound is zero: you should definitely be willing to sacrifice something to help other people—otherwise you are a psychopath. The upper bound is one: there’s no point in paying more in cost than you produce in benefit, and in fact even paying the same cost to yourself as you yield in benefit for other people doesn’t make a lot of sense, because it implies that your own self-interest is meaningless and that the fact that you understand your own needs better than the needs of others is also irrelevant.

But beyond that, it gets a lot harder. Should it be 90%? 50%? 10%? 1%? How should it vary between friends versus family versus strangers? It’s really hard to say. And this inability to precisely decide how much other people should be worth to us may be part of why we suffer scope neglect in the first place.

Scope neglect is the fact that we are not willing to expend effort or money in direct proportion to the benefit it would have. When different groups were asked how much they would be willing to donate in order to save the lives of 2,000 birds, 20,000 birds, or 200,000 birds, the answers they gave were statistically indistinguishable—always about \$80. But however much a bird’s life is worth to you, shouldn’t 200,000 birds be worth, well, 200,000 times as much? In fact, more than that, because the marginal utility of wealth is decreasing, but I see no reason to think that the marginal utility of birds decreases nearly as fast.

But therein lies the problem: Usually we can’t pay 200,000 times as much. I’d feel like a horrible person if I weren’t willing to expend at least \$10 or an equivalent amount of effort in order to save a bird. To save 200,000 birds that means I’d owe \$2 million—and I simply don’t have \$2 million.

You can get similar results to the bird experiment if you use children—though, as one might hope, the absolute numbers are a bit bigger, usually more like \$500 to \$1000. (And this, it turns out, is actually about how much it actually costs to save a child’s life by a particularly efficient means, such as anti-malaria nets, de-worming, or direct cash transfer. So please, by all means, give \$1000 to UNICEF or the Against Malaria Foundation. If you can’t give \$1000, give \$100; if you can’t give \$100, give \$10.) It doesn’t much matter whether you say that the project will save 500 children, 5,000 children, or 50,000 children—people still will give about \$500 to \$1000. But once again, if I’m willing to spend \$1000 to save a child—and I definitely am—how much should I be willing to spend to end malaria, which kills 500,000 children a year? Apparently \$500 million, which not only do I not have, I almost certainly will not make that much money cumulatively through my entire life. (\$2 million, on the other hand, I almost certainly will make cumulatively—the median income of an economist is \$90,000 per year, so if I work for at least 22 years with that as my average income I’ll have cumulatively made \$2 million. My net wealth may never be that high—though if I get better positions, or I’m lucky enough or clever enough with the stock market it might—but my cumulative income almost certainly will. Indeed, the average gain in cumulative income from a college degree is about \$1 million. Because it takes time—time is money—and loans carry interest, this gives it a net present value of about \$300,000.)

But maybe scope neglect isn’t such a bad thing after all. There is a very serious problem with this sort of moral dilemma: The question didn’t say I would single-handedly save 200,000 birds—and indeed, that notion seems quite ridiculous. If I knew that I could actually save 200,000 birds and I were the only one who could do it, dammit, I would try to come up with that \$2 million. I might not succeed, but I really would try as hard as I could.

And if I could single-handedly end malaria, I hereby vow that I would do anything it took to achieve that. Short of mass murder, anything I could do couldn’t be a higher cost to the world than malaria itself. I have no idea how I’d come up with \$500 million, but I’d certainly try. Bill Gates could easily come up with that \$500 million—so he did. In fact he endowed the Gates Foundation with \$28 billion, and they’ve spent \$1.3 billion of that on fighting malaria, saving hundreds of thousands of lives.

With this in mind, what is scope neglect really about? I think it’s about coordination. It’s not that people don’t care more about 200,000 birds than they do about 2,000; and it’s certainly not that they don’t care more about 50,000 children than they do about 500. Rather, the problem is that people don’t know how many other people are likely to donate, or how expensive the total project is likely to be; and we don’t know how much we should be willing to pay to save the life of a bird or a child.

Hence, what we basically do is give up; since we can’t actually assess the marginal utility of our donation dollars, we fall back on our automatic emotional response. Our mind focuses itself on visualizing that single bird covered in oil, or that single child suffering from malaria. We then hope that the representativeness heuristic will guide us in how much to give. Or we follow social norms, and give as much as we think others would expect us to give.

While many in the effective altruism community take this to be a failing, they never actually say what we should do—they never give us a figure for how much money we should be willing to donate to save the life of a child. Instead they retreat to abstraction, saying that whatever it is we’re willing to give to save a child, we should be willing to give 50,000 times as much to save 50,000 children.

But it’s not that simple. A bigger project may attract more supporters; if the two occur in direct proportion, then constant donation is the optimal response. Since it’s probably not actually proportional, you likely should give somewhat more to causes that affect more people; but exactly how much more is an astonishingly difficult question. I really don’t blame people—or myself—for only giving a little bit more to causes with larger impact, because actually getting the right answer is so incredibly hard. This is why it’s so important that we have institutions like GiveWell and Charity Navigator which do the hard work to research the effectiveness of charities and tell us which ones we should give to.

Yet even if we can properly prioritize which charities to give to first, that still leaves the question of how much each of us should give. 1% of our income? 5%? 10%? 20%? 50%? Should we give so much that we throw ourselves into the same poverty we are trying to save others from?

In his earlier work Peter Singer seemed to think we should give so much that it throws us into poverty ourselves; he asked us to literally compare every single purchase and ask ourselves whether a year of lattes or a nicer car is worth a child’s life. Of course even he doesn’t live that way, and in his later books Singer seems to have realized this, and now recommends the far more modest standard that everyone give at least 1% of their income. (He himself gives about 33%, but he’s also very rich so he doesn’t feel it nearly as much.) I think he may have overcompensated; while if literally everyone gave at least 1% that would be more than enough to end world hunger and solve many other problems—world nominal GDP is over \$70 trillion, so 1% of that is \$700 billion a year—we know that this won’t happen. Some will give more, others less; most will give nothing at all. Hence I think those of us who give should give more than our share; hence I lean toward figures more like 5% or 10%.

But then, why not 50% or 90%? It is very difficult for me to argue on principle why we shouldn’t be expected to give that much. Because my income is such a small proportion of the total donations, the marginal utility of each dollar I give is basically constant—and quite high; if it takes about \$1000 to save a child’s life on average, and each of these children will then live about 60 more years at about half the world average happiness, that’s about 30 QALY per \$1000, or about 30 milliQALY per dollar. Even at my current level of income (incidentally about as much as I think the US basic income should be), I’m benefiting myself only about 150 microQALY per dollar—so my money is worth about 200 times as much to those children as it is to me.
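
If you want to check that arithmetic yourself, here it is spelled out; all the inputs are the rough estimates from the text, not precise measurements:

```python
# Rough marginal-utility comparison: ~$1,000 saves a child who then
# lives ~60 more years at ~half the world-average level of happiness.
# All figures are the text's rough estimates, not precise data.

cost_per_life = 1000      # dollars to save one child's life
years_gained = 60         # additional life-years per child saved
happiness_weight = 0.5    # half of world-average happiness

qaly_per_dollar_donated = years_gained * happiness_weight / cost_per_life
print(qaly_per_dollar_donated)  # 0.03 QALY, i.e. 30 milliQALY per dollar

qaly_per_dollar_self = 150e-6   # ~150 microQALY per dollar spent on myself
print(qaly_per_dollar_donated / qaly_per_dollar_self)  # ratio of about 200
```
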

So now we have to ask ourselves the really uncomfortable question: How much do I value those children, relative to myself? If I am at all honest, the value is not 1; I’m not prepared to die for someone I’ve never met 10,000 kilometers away in a nation I’ve never even visited, nor am I prepared to give away all my possessions and throw myself into the same starvation I am hoping to save them from. I value my closest friends and family approximately the same as myself, but I have to admit that I value random strangers considerably less.

Do I really value them at less than 1%, as these figures would seem to imply? I feel like a monster saying that, but maybe it really isn’t so terrible—after all, most economists seem to think that the optimal solidarity coefficient is in fact zero. Maybe we need to become more comfortable admitting that random strangers aren’t worth that much to us, simply so that we can coherently acknowledge that they aren’t worth nothing. Very few of us actually give away all our possessions, after all.

Then again, what do we mean by worth? I can say from direct experience that a single migraine causes me vastly more pain than learning about the death of 200,000 people in an earthquake in Southeast Asia. And while I gave about \$100 to the relief efforts involved in that earthquake, I’ve spent considerably more on migraine treatments—thousands, once you include health insurance. But given the chance, would I be willing to suffer a migraine to prevent such an earthquake? Without hesitation. So the amount of pain we feel is not the same as the amount of money we pay, which is not the same as what we would be willing to sacrifice. I think the latter is more indicative of how much people’s lives are really worth to us—but then, what we pay is what has the most direct effect on the world.

It’s actually possible to justify not dying or selling all my possessions even if my solidarity coefficient is much higher—it just leads to some really questionable conclusions. Essentially the argument is this: I am an asset. I have what economists call “human capital”—my health, my intelligence, my education—that gives me the opportunity to affect the world in ways those children cannot. In my ideal imagined future (albeit improbable) in which I actually become President of the World Bank and have the authority to set global development policy, I myself could actually have a marginal impact of megaQALY—millions of person-years of better life. In the far more likely scenario in which I attain some mid-level research or advisory position, I could be one of thousands of people who together have that sort of impact—which still means my own marginal effect is on the order of kiloQALY. And clearly it’s true that if I died, or even if I sold all my possessions, these events would no longer be possible.

The problem with that reasoning is that it’s wildly implausible to say that everyone in the First World is in this same sort of position—Peter Singer can say that, and maybe I can say that, and indeed hundreds of development economists can say that—but at least 99.9% of the First World population are not development economists, nor are they physicists likely to invent cold fusion, nor biomedical engineers likely to cure HIV, nor aid workers who distribute anti-malaria nets and polio vaccines, nor politicians who set national policy, nor diplomats who influence international relations, nor authors whose bestselling books raise worldwide consciousness. Yet I am not comfortable saying that all the world’s teachers, secretaries, airline pilots and truck drivers should give away their possessions either. (Maybe all the world’s bankers and CEOs should—or at least most of them.)

Is it enough that our economy would collapse without teachers, secretaries, airline pilots and truck drivers? But this seems rather like the fact that if everyone in the world visited the same restaurant there wouldn’t be enough room. Surely we could do without any individual teacher, any individual truck driver? If everyone gave the same proportion of their income, 1% would be more than enough to end malaria and world hunger. But we know that everyone won’t give, and the job won’t get done if those of us who do give contribute only 1%.

Perhaps we should figure out what proportion of the world’s people are likely to give, and how much we need altogether, and then assign the amount we expect from each of them based on that? The more money you ask from each, the fewer people are likely to give. This creates an optimization problem akin to setting the price of a product under monopoly—monopolies maximize profits by carefully balancing the quantity sold with the price at which they sell, and perhaps a similar balance would allow us to maximize development aid. But wouldn’t it be better if we could simply increase the number of people who give, so that we don’t have to ask so much of those who are generous? That means tax-funded foreign aid is the way to go, because it ensures coordination. And indeed I do favor increasing foreign aid to about 1% of GDP—in the US it is currently about \$50 billion, 0.3% of GDP, a little more than 1% of the Federal budget. (Most people who say we should “cut” foreign aid don’t realize how small it already is.) But foreign aid is coercive; wouldn’t it be better if people would give voluntarily?
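
To see the analogy with monopoly pricing concretely, here is a toy version; the linear “willingness to give” curve is an assumption made purely for illustration:

```python
# "How much should we ask each donor for?" framed as a monopoly-pricing
# problem: the higher the ask, the smaller the fraction of people
# willing to give. The linear demand curve below is purely an
# illustrative assumption, not an empirical estimate.

def total_raised(ask, population=1_000_000, max_ask=1000.0):
    """Total donations if everyone faces the same suggested ask."""
    fraction_giving = max(0.0, 1.0 - ask / max_ask)
    return ask * fraction_giving * population

# Scan candidate asks in $10 steps; with linear "demand", the
# revenue-maximizing ask lands at the midpoint, just as the monopoly
# price does on a linear demand curve.
best_ask = max(range(0, 1001, 10), key=total_raised)
print(best_ask)  # 500
```

Asking for more than this drives away more donors than the larger ask is worth, and asking for less leaves money on the table; the real problem is harder because neither the demand curve nor the total need is known.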

I don’t have a simple answer. I don’t know how much other people’s lives ought to be worth to us, or what it means for our decisions once we assign that value. But I hope I’ve convinced you that this problem is an important one—and made you think a little more about scope neglect and why we have it.

# Love is rational

JDN 2457066 PST 15:29.

Since I am writing this the weekend of Valentine’s Day (actually by the time it is published it will be Valentine’s Day) and sitting across from my boyfriend, it seems particularly appropriate that today’s topic should be love. As I am writing it is in fact Darwin Day, so it is fitting that evolution will be a major topic as well.

Usually we cognitive economists are the ones reminding neoclassical economists that human beings are not always rational. Today however I must correct a misconception in the opposite direction: Love is rational, or at least it can be, should be, and typically is.

Lately I’ve been reading The Logic of Life which actually makes much the same point, about love and many other things. I had expected it to be a dogmatic defense of economic rationality—published in 2008 no less, which would make it the scream of a dying paradigm as it carries us all down with it—but I was in fact quite pleasantly surprised. The book takes a nuanced position on rationality very similar to my own, and actually incorporates many of the insights from neuroeconomics and cognitive economics. I think Harford would basically agree with me that human beings are 90% rational (but woe betide the other 10%).

We have this romantic (Romantic?) notion in our society that love is not rational, that it is somehow “beyond” rationality. “Love is blind”, they say; and this is often used as a smug reply to the notion that rationality is the proper guide for living our lives.

The argument would seem to follow: “Love is not rational, love is good, therefore rationality is not always good.”

But then… the argument would follow? What do you mean, follow? Follow logically? Follow rationally? Something is clearly wrong if we’ve constructed a rational argument intended to show that we should not live our lives by rational arguments.

And the problem of course is the premise that love is not rational. Whatever made you say that?

It’s true that love is not directly volitional, not in the way that it is volitional to move your arm upward or close your eyes or type the sentence “Jackdaws love my big sphinx of quartz.” You don’t exactly choose to love someone, weighing the pros and cons and making a decision the way you might choose which job offer to take or which university to attend.

But then, you don’t really choose which university you like either, now do you? You choose which to attend. But your enjoyment of that university is not a voluntary act. And similarly you do in fact choose whom to date, whom to marry. And you might well consider the pros and cons of such decisions. So the difference is not as large as it might at first seem.

More importantly, to say that our lives should be rational is not the same as saying they should be volitional. You simply can’t make your life completely volitional, no matter how hard you try. You simply don’t have the cognitive resources to maintain constant awareness of every breath, every heartbeat. Yet there is nothing irrational about breathing or heartbeats—indeed they are necessary for survival and thus a precondition of anything rational you might ever do.

Indeed, in many ways it is our subconscious that is the most intelligent part of us. It is not as flexible as our conscious mind—that is why our conscious mind is there—but the human subconscious is unmatched in its efficiency and reliability among all known computational systems. Walk across a room and it will solve inverse kinematics in real time. Throw a ball and it will solve three-dimensional nonlinear differential equations as well. Look at a familiar face and it will immediately identify it among a set of hundreds of faces with near-perfect accuracy regardless of the angle, lighting conditions, or even hairstyle. To see that I am not exaggerating the immense difficulty of these tasks, look at how difficult it is to make robots that can walk on two legs or throw balls. Face recognition is so difficult that it is still an unsolved problem with an extensive body of ongoing research.

And love, of course, is the subconscious system that has been most directly optimized by natural selection. Our very survival has depended upon it for millions of years. Indeed, it’s amazing how often it does seem to fail given those tight optimization constraints; I think this is for two reasons. First, natural selection optimizes for inclusive fitness, which is not the same thing as optimizing for happiness—what’s good for your genes may not be good for you per se. Many of the ways that love hurts us seem to be based around behaviors that probably did on average spread more genes on the African savannah. Second, the task of selecting an optimal partner is so mind-bogglingly complex that even the most powerful computational system in the known universe still can only do it so well. Imagine trying to construct a formal decision model that would tell you whom you should marry—all the variables you’d need to consider, the cost of sampling each of those variables sufficiently, the proper weightings on all the different terms in the utility function. Perhaps the wonder is that love is as rational as it is.
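To make that intractability concrete, here is a deliberately toy sketch of such a decision model. Every attribute and weight below is invented for illustration; the real problem involves thousands of noisy, partially observable variables whose proper weights nobody knows.

```python
# A crude "whom to marry" utility model. The point is not that this works,
# but that even deciding what belongs in this dictionary is the hard part.
weights = {"kindness": 0.4, "shared_values": 0.3, "humor": 0.2, "stability": 0.1}

def utility(partner):
    """Weighted sum over a handful of attributes, each scored from 0 to 1."""
    return sum(weights[k] * partner.get(k, 0.0) for k in weights)

candidate = {"kindness": 0.9, "shared_values": 0.8, "humor": 0.7, "stability": 0.6}
print(round(utility(candidate), 2))  # -> 0.8
```

Even this caricature leaves out the cost of sampling each attribute (you only learn someone’s stability over years) and the fact that the weights themselves differ from person to person—which is exactly why a closed-form model is hopeless and a highly optimized subconscious does the job instead.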

Indeed, love is evidence-based—and when it isn’t, this is cause for concern. The evidence is most often presented in small ways over long periods of time—a glance, a kiss, a gift, a meeting canceled to stay home and comfort you. Some ways are larger—a career move postponed to keep the family together, a beautiful wedding, a new house. We aren’t formally calculating the Bayesian probability at each new piece of evidence—though our subconscious brains might be, and whatever they’re doing the results aren’t far off from that mathematical optimum.
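The “small pieces of evidence over long periods” picture can be made precise with a toy Bayesian update. The numbers here are invented: suppose each small act (a glance, a kiss, a meeting canceled to comfort you) is merely twice as likely if the person loves you as if they don’t.

```python
# Toy illustration of how weak evidence compounds under Bayes' rule.
def update(prior, likelihood_ratio):
    """Posterior probability after one piece of evidence, via the odds form of Bayes' rule."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.5  # start completely agnostic
for _ in range(10):  # ten small gestures, each with a modest 2:1 likelihood ratio
    p = update(p, 2.0)

print(round(p, 4))  # -> 0.999
```

Ten individually unremarkable observations take you from a coin flip to 99.9% confidence—which is roughly what years of glances, gifts, and kept promises do.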

The notion that you will never “truly know” if others love you is no more epistemically valid or interesting than the notion that you will never “truly know” if your shirt is grue instead of green or if you are a brain in a vat. Perhaps we’ve been wrong about gravity all these years, and on April 27, 2016 it will suddenly reverse direction! No, it won’t, and I’m prepared to literally bet the whole world on that (frankly I’m not sure I have a choice). To be fair, the proposition that your spouse of twenty years or your mother loves you is perhaps not that certain—but it’s pretty darn certain. Perhaps the proper comparison is the level of certainty that climate change is caused by human beings, or even less, the level of certainty that your car will not suddenly veer off the road and kill you. The latter is something that actually happens—but we all drive every day assuming it won’t. By the time you marry someone, you can and should be that certain that they love you.

Love without evidence is bad love. The sort of unrequited love that builds in secret based upon fleeting glimpses, hours of obsessive fantasy, and little or no interaction with its subject isn’t romantic—it’s creepy and psychologically unhealthy. The extreme of that sort of love is what drove John Hinckley Jr. to shoot Ronald Reagan in order to impress Jodie Foster.

I don’t mean to make you feel guilty if you have experienced such a love—most of us have at one point or another—but it disgusts me how much our society tries to elevate that sort of love as the “true love” to which we should all aspire. We encourage people—particularly teenagers—to conceal their feelings for a long time and then release them in one grand surprise gesture of affection, which is just about the opposite of what you should actually be doing. (Look at Love Actually, whose content is nearly the reverse of its title.) I think a great deal of strife in our society would be eliminated if we taught our children how to build relationships gradually over time instead of constantly presenting them with absurd caricatures of love that no one can—or should—follow.

I am pleased to see that our cultural norms on that point seem to be changing. A corporation as absurdly powerful as Disney is both an influence upon and a barometer of our social norms, and the trope in the most recent Disney films (like Frozen and Maleficent) is that true love is not the fiery passion of love at first sight, but the deep bond between family members that builds over time. This is a much healthier concept of love, though I wouldn’t exclude romantic love entirely. Romantic love can be true love, but only by building over time through a similar process.

Perhaps there is another reason people are uncomfortable with the idea that love is rational: by definition, rational behaviors respond to incentives. And since we tend to conceive of incentives as purely selfish, this would seem to imply that love is selfish, which seems somewhere between painfully cynical and outright oxymoronic.

But while love certainly does carry many benefits for its users—being in love will literally make you live longer, by quite a lot, an effect size comparable to quitting smoking or exercising twice a week—it also carries many benefits for its recipients as well. Love is in fact the primary means by which evolution has shaped us toward altruism; it is the love for our family and our tribe that makes us willing to sacrifice so much for them. Not all incentives are selfish; indeed, an incentive is really just something that motivates you to action. If you could truly convince me that a given action I took would have even a reasonable chance of ending world hunger, I would do almost anything to achieve it; I can scarcely imagine a greater incentive, even though I would be harmed and the benefits would accrue to people I have never met.

Love evolved because it advanced the fitness of our genes, of course. And this bothers many people; it seems to make our altruism ultimately just a different form of selfishness—selfishness for our genes instead of ourselves. But this is a genetic fallacy, isn’t it? Yes, evolution by natural selection is a violent process, full of death and cruelty and suffering (as Tennyson put it, “red in tooth and claw”); but that doesn’t mean that its outcome—namely ourselves—is so irredeemable. We are, in fact, altruistic, regardless of where that altruism came from. The fact that it advanced our genes can actually be comforting in a way, because it reminds us that the universe is nonzero-sum and benefiting others does not have to mean harming ourselves.

One question I like to ask when people suggest that some scientific fact undermines our moral status in this way is: “Well, what would you prefer?” If the causal determinism of neural synapses undermines our free will, then what should we have been made of? Magical fairy dust? If we were, fairy dust would be a real phenomenon, and it would obey laws of nature, and you’d just say that the causal determinism of magical fairy dust undermines free will all over again. If the fact that our altruistic emotions evolved by natural selection to advance our inclusive fitness makes us not truly altruistic, then where should altruism have come from? A divine creator who made us to love one another? But then we’re just following our programming! You can always make this sort of argument, which either means that life is necessarily empty of meaning, that no possible universe could ever assuage our ennui—or, what I believe, that life’s meaning does not come from such ultimate causes. It is not what you are made of or where you come from that defines what you are. We are best defined by what we do.

It seems to depend how you look at it: Romantics are made of stardust and the fabric of the cosmos, while cynics are made of the nuclear waste expelled in the planet-destroying explosions of dying balls of fire. Romantics are the cousins of all living things in one grand family, while cynics are apex predators evolved from millions of years of rape and murder. Both of these views are in some sense correct—but I think the real mistake is in thinking that they are incompatible. Human beings are both those things, and more; we are capable of both great compassion and great cruelty—and also great indifference. It is a mistake to think that only the dark sides—or for that matter only the light sides—of us are truly real.

Love is rational; love responds to incentives; love is an evolutionary adaptation. Love binds us together; love makes us better; love leads us to sacrifice for one another.

Love is, above all, what makes us not infinite identical psychopaths.