I dislike overstatement

Jan 10 JDN 2459225

I was originally planning on titling this post “I hate overstatement”, but I thought that might be itself an overstatement; then I considered leaning into the irony with something like “Overstatement is the worst thing ever”. But no, I think my point best comes across if I exemplify it, rather than present it ironically.

It’s a familiar formula: “[Widespread belief] is wrong! [Extreme alternative view] is true! [Obvious exception]. [Further qualifications]. [Revised, nuanced view that is only slightly different from the widespread belief].”

Here are some examples of the formula (these are not direct quotes but paraphrases of their general views). Note that these are all people I basically agree with, and yet I still find their overstatement annoying:

Bernie Sanders: “Capitalism is wrong! Socialism is better! Well, not authoritarian socialism like the Soviet Union. And some industries clearly function better when privatized. Scandinavian social democracy seems to be the best system.”

Richard Dawkins: “Religion is a delusion! Only atheists are rational! Well, some atheists are also pretty irrational. And most religious people are rational about most things most of the time, and don’t let their religious beliefs interfere too greatly with their overall behavior. Really, what I mean to say is that God doesn’t exist and organized religion is often harmful.”

Black Lives Matter: “Abolish the police! All cops are bastards! Well, we obviously still need some kind of law enforcement system for dealing with major crimes; we can’t just let serial killers go free. In fact, while there are deep-seated flaws in police culture, we could solve a lot of the most serious problems with a few simple reforms like changing the rules of engagement.”

Sam Harris is particularly fond of this formula, so here is a direct quote that follows the pattern precisely:

“The link between belief and behavior raises the stakes considerably. Some propositions are so dangerous that it may even be ethical to kill people for believing them. This may seem an extraordinary claim, but it merely enunciates an ordinary fact about the world in which we live. Certain beliefs place their adherents beyond the reach of every peaceful means of persuasion, while inspiring them to commit acts of extraordinary violence against others. There is, in fact, no talking to some people. If they cannot be captured, and they often cannot, otherwise tolerant people may be justified in killing them in self-defense. This is what the United States attempted in Afghanistan, and it is what we and other Western powers are bound to attempt, at an even greater cost to ourselves and to innocents abroad, elsewhere in the Muslim world. We will continue to spill blood in what is, at bottom, a war of ideas.”

Somehow in a single paragraph he started with the assertion “It is permissible to punish thoughtcrime with death” and managed to qualify it down to “The Afghanistan War was largely justified”. This is literally the difference between a proposition fundamentally antithetical to everything America stands for, and an utterly uncontroversial statement most Americans agree with. Harris often complains that people misrepresent his views, and to some extent this is true, but honestly I think he does this on purpose because he knows that controversy sells. There’s taking things out of context—and then there’s intentionally writing in a style that will maximize opportunities to take you out of context.

I think the idea behind overstating your case is that you can then “compromise” toward your actual view, and thereby seem more reasonable.

If there is some variable X that we want to know the true value of, and I currently believe that it is some value x1 while you believe that it is some larger value x2, and I ask you what you think, you may not want to tell me x2. Instead you might want to report some number even larger than x2, chosen so that I adjust my belief all the way to your actual view.

For instance, suppose I think the probability of your view being right is p and the probability of my view being right is 1-p. But you think that the probability of your view being right is q > p and the probability of my view being right is 1-q < 1-p.

I tell you that my view is x1. Then I ask you what your view is. What answer should you give?


Well, you can expect that I’ll revise my belief to a new value px + (1-p)x1, where x is whatever answer you give me. The belief you want me to hold is qx2 + (1-q)x1. So your optimal choice is as follows:

qx2 + (1-q)x1 = px + (1-p)x1

x = x1 + (q/p)(x2-x1)

Since q > p, q/p > 1 and the x you report to me will be larger than your true value x2. You will overstate your case to try to get me to adjust my beliefs more. (Interestingly, if you were less confident in your own beliefs, you’d report a smaller difference. But this seems like a rare case.)
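
To make this concrete, here is a minimal sketch of the calculation in Python; the particular values of x1, x2, p, and q are illustrative assumptions of mine, not anything from the examples above:

```python
# A minimal sketch of the belief-updating model above. The numbers for
# x1, x2, p, and q are purely illustrative.

def optimal_report(x1, x2, p, q):
    """The report that moves the listener's updated belief, p*x + (1-p)*x1,
    exactly to the belief the speaker wants them to hold, q*x2 + (1-q)*x1."""
    return x1 + (q / p) * (x2 - x1)

x1, x2 = 10.0, 20.0    # listener's current belief, speaker's true belief
p, q = 0.3, 0.6        # listener's vs. speaker's confidence that the speaker is right

report = optimal_report(x1, x2, p, q)
print(report)                          # 30.0 -- overstated beyond the true belief of 20
print(p * report + (1 - p) * x1)       # 16.0 -- exactly q*x2 + (1-q)*x1
```

The further your confidence q exceeds the listener’s confidence p, the more you overstate.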

In a simple negotiation over dividing some resource (e.g. over a raise or a price), this is quite reasonable. When you’re a buyer and I’m a seller, our intentions are obvious enough: I want to sell high and you want to buy low. Indeed, the Nash Equilibrium of this game seems to be that we both make extreme offers then compromise on a reasonable offer, all the while knowing that this is exactly what we’re doing.

But when it comes to beliefs about the world, things aren’t quite so simple.

In particular, we have reasons for our beliefs. (Or at least, we’re supposed to!) And evidence isn’t linear. Even when propositions can be placed on a one-dimensional continuum in this way (and quite frankly we shoehorn far too many complex issues onto a simple “left/right” continuum!), evidence that X = x isn’t partial evidence that X = 2x. A strong argument that the speed of light is 3*10^8 m/s isn’t a weak argument that the speed of light is 3*10^9 m/s. A compelling reason to think that taxes should be over 30% isn’t even a slight reason to think that taxes should be over 90%.

To return to my specific examples: Seeing that Norway is a very prosperous country doesn’t give us reasons to like the Soviet Union. Recognizing that religion is empirically false doesn’t justify calling all religious people delusional. Reforming the police is obviously necessary, and diverting funds to other social services is surely a worthwhile goal; but law enforcement is necessary and cannot simply be abolished. And defending against the real threat of Islamist terrorism in no way requires us to institute the death penalty for thoughtcrime.

I don’t know how most people respond to overstatement. Maybe it really does cause them to over-adjust their beliefs. Hyperbole is a very common rhetorical tactic, and for all I know perhaps it is effective on many people.

But personally, here is my reaction: At the very start, you stated something implausible. That has reduced your overall credibility.

If I continue reading and you then deal with various exceptions and qualifications, resulting in a more reasonable view, I do give you some credit for that; but now I am faced with a dilemma: Either (1) you were misrepresenting your view initially, or (2) you are engaging in a motte-and-bailey doctrine, trying to get me to believe the strong statement while you can only defend the weak statement. Either way I feel like you are being dishonest and manipulative. I trust you less. I am less interested in hearing whatever else you have to say. I am in fact less likely to adopt your nuanced view than I would have been if you’d simply presented it in the first place.

And that’s assuming I have the opportunity to hear your full nuanced version. If all I hear is the sound-bite overstatement, I will come away with an inaccurate assessment of your beliefs. I will have been presented with an implausible claim and evidence that doesn’t support that claim. I will reject your view out of hand, without ever actually knowing what your view truly was.

Furthermore, I know that many others who are listening are not as thoughtful as I am about seeking out detailed context, so even if I know the nuanced version I know—and I think you know—that some people are going to only hear the extreme version.

Maybe what it really comes down to is a moral question: Is this a good-faith discussion where we are trying to reach the truth together? Or is this a psychological manipulation to try to get me to believe what you believe? Am I a fellow rational agent seeking knowledge with you? Or am I a behavior machine that you want to control by pushing the right buttons?

I won’t say that overstatement is always wrong—because that would be an overstatement. But please, make an effort to avoid it whenever you can.

MSRP is tacit collusion

Oct 7 JDN 2458399

It’s been a little while since I’ve done a really straightforward economic post. It feels good to get back to that.

You are no doubt familiar with the “Manufacturer’s Suggested Retail Price” or MSRP. It can be found on everything from books to dishwashers to video games.

The MSRP is a very simple concept: The manufacturer suggests that all retailers sell it (at least the initial run) at precisely this price.

Why would they want to do that? There is basically only one possible reason: They are trying to sustain tacit collusion.

The game theory of this is rather subtle: It requires that both manufacturers and retailers engage in long-term relationships with one another, and can pick and choose who to work with based on the history of past behavior. Both of these conditions hold in most real-world situations—indeed, the fact that they don’t hold very well in the agriculture industry is probably why we don’t see MSRP on produce.

If pricing were decided by random matching with no long-term relationships or past history, MSRP would be useless. Each firm would have little choice but to set their own optimal price, probably just slightly over their own marginal cost. Even if the manufacturer suggested an MSRP, retailers would promptly and thoroughly ignore it.

This is because the one-shot Bertrand pricing game has a unique Nash equilibrium, at pricing just above marginal cost. The basic argument is as follows: If I price cheaper than you, I can claim the whole market. As long as it’s profitable for me to do that, I will. The only time it’s not profitable for me to undercut you in this way is if we are both charging just slightly above marginal cost—so that is what we shall do, in Nash equilibrium. Human beings don’t always play according to the Nash equilibrium, but for-profit corporations do so quite consistently. Humans have limited attention and moral values; corporations have accounting departments and a fanatical devotion to the One True Profit.

But the iterated Bertrand pricing game is quite different. If instead of making only one pricing decision, we make many pricing decisions over time, always with a high probability of encountering the same buyers and sellers again in the future, then I may not want to undercut your price, for fear of triggering a price war that will hurt both of our firms.

Much like how the Iterated Prisoner’s Dilemma can sustain cooperation in Nash equilibrium while the one-shot Prisoner’s Dilemma cannot, the iterated Bertrand game can sustain collusion as a Nash equilibrium.
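
To make the contrast concrete, here is a small sketch of the standard grim-trigger logic in Python. The setup (symmetric duopoly, collusion splits a monopoly profit, a deviation grabs roughly the whole monopoly profit once and then triggers a permanent price war with zero profit) is the textbook illustration, with numbers I have made up rather than anything derived in this post:

```python
# A sketch of why repetition matters, under illustrative assumptions:
# grim-trigger punishment, monopoly profit normalized to 1, and a
# per-period discount factor delta.

def collusion_sustainable(delta, pi_monopoly=1.0):
    """Compare colluding forever (half the monopoly profit each period)
    with deviating once (grab it all, then earn zero in the price war)."""
    collude_forever = (pi_monopoly / 2) / (1 - delta)
    deviate_once = pi_monopoly
    return collude_forever >= deviate_once

for delta in (0.0, 0.2, 0.5, 0.8, 0.95):
    print(delta, collusion_sustainable(delta))
# In the one-shot game (delta = 0) collusion is never sustainable;
# once firms are patient enough (delta >= 0.5 here), it is.
```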

There is in fact a vast number of possible equilibria in the iterated Bertrand game. If prices were infinitely divisible, there would be an infinite number of equilibria. In reality, there are hundreds or thousands of equilibria, depending on how finely divisible the price may be.

This makes the iterated Bertrand game a coordination game: there are many possible equilibria, and our task is to figure out which one to coordinate on.

If we had perfect information, we could deduce what the monopoly price would be, and then all choose the monopoly price; this would be what we call “payoff dominant”, and it’s often what people actually try to choose in real-world coordination games.

But in reality, the monopoly price is a subtle and complicated thing, and might not even be the same between different retailers. So if we each try to compute a monopoly price, we may end up with different results, and then we could trigger a price war and end up driving all of our profits down. If only there were some way to communicate with one another, and say what price we all want to set?

Ah, but there is: The MSRP. Most other forms of price communication are illegal: We certainly couldn’t send each other emails and say “Let’s all charge $59.99, okay?” (When banks tried to do that with the LIBOR, it was the largest white-collar crime in history.) But for some reason economists (particularly, I note, the supposed “free market” believers of the University of Chicago) have convinced antitrust courts that MSRP is somehow different. Yet it’s obviously hardly different at all: You’ve just made the communication one-way from manufacturers to retailers, which makes it a little less reliable, but otherwise exactly the same thing.

There are all sorts of subtler arguments about how MSRP is justifiable, but as far as I can tell they all fall flat. If you’re worried about retailers not promoting your product enough, enter into a contract requiring them to promote. Proposing a suggested price is clearly nothing but an attempt to coordinate tacit—frankly not even that tacit—collusion.

MSRP also probably serves another, equally suspect, function, which is to manipulate consumers using the anchoring heuristic: If the MSRP is $59.99, then when it does go on sale for $49.99 you feel like you are getting a good deal; whereas, if it had just been priced at $49.99 to begin with, you might still have felt that it was too expensive. I see no reason why this sort of crass manipulation of consumers should be protected under the law either, especially when it would be so easy to avoid.

There are all sorts of ways for firms to tacitly collude with one another, and we may not be able to regulate them all. But the MSRP is literally printed on the box. It’s so utterly blatant that we could very easily make it illegal with hardly any effort at all. The fact that we allow such overt price communication makes a mockery of our antitrust law.

Self-fulfilling norms

Post 242: Jun 10 JDN 2458280

Imagine what it would be like to live in a country with an oppressive totalitarian dictator. For millions of people around the world, this is already reality. For us in the United States, it’s becoming more terrifyingly plausible all the time.

You would probably want to get rid of this dictator. And even if you aren’t in the government yourself, there are certainly things you could do to help with that: Join protests, hide political dissenters in your basement, publish refutations of the official propaganda on the Internet. But all of these things carry great risks. How do you know whether it’s worth the risk?

Well, a very important consideration in that reasoning is how many other people agree with you. In the extreme case where everyone but the dictator agrees with you, overthrowing him should be no problem. In the other extreme case where nobody agrees with you, attempting to overthrow him will inevitably result in being imprisoned and tortured as a political prisoner. Everywhere in between, your probability of success increases as the number of people who agree with you increases.

But how do you know how many people agree with you? You can’t just ask them—simply asking someone “Do you support the dictator?” is a dangerous thing to do in a totalitarian society. Simply by asking around, you could get yourself into a lot of trouble. And if people think you might be asking on behalf of the government, they’re always going to say they support the dictator whether or not they do.

If you believe that enough people would support you, you will take action against the dictator. But if you don’t believe that, you won’t take the chance. Now, consider the fact that many other people are in the same position: They too would only take action if they believed others would.

You are now in what’s called a coordination game. The best decision for you depends upon what everyone else decides. There are two equilibrium outcomes of this game: In one, you all keep your mouths shut and the dictator continues to oppress you. In the other, you all rise up together and overthrow the dictator. But if you take an action out of equilibrium, that could be very bad for you: If you rise up against the dictator without support, you’ll be imprisoned and tortured. If you support the dictator while others try to overthrow him, you might be held responsible for some of his crimes once the coup d’etat is complete.

And what about people who do support the dictator? They might still be willing to go along with overthrowing him, if they saw the writing on the wall. But if they think the dictator can still win, they will stand with him. So their beliefs, also, are vital in deciding whether to try to overthrow the dictator.

This results in a self-fulfilling norm. The dictator can be overthrown, if and only if enough people believe that the dictator can be overthrown.
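
Here is a toy sketch of that dynamic, in the spirit of a threshold model. Everything in it (the ten citizens, their thresholds, the initial shared beliefs) is an illustrative assumption of mine, not something from the post:

```python
# Each citizen joins the uprising only if they expect at least a certain
# share of the population to join. The thresholds and beliefs are made up.

def uprising_outcome(thresholds, initial_belief):
    """Iterate the expected share of participants until it stops changing."""
    n = len(thresholds)
    share = initial_belief
    for _ in range(1000):
        joining = sum(1 for t in thresholds if share >= t) / n
        if joining == share:
            break
        share = joining
    return share

thresholds = [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]

print(uprising_outcome(thresholds, initial_belief=0.0))    # 0.0: the norm holds, nobody moves
print(uprising_outcome(thresholds, initial_belief=0.05))   # 1.0: beliefs cascade, the dictator falls
```

The population is identical in both runs; only the shared belief differs, and that alone selects which equilibrium everyone ends up in.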

There are much more mundane examples of self-fulfilling norms. Most of our traffic laws are actually self-fulfilling norms as much as they are real laws; enforcement is remarkably weak, particularly when you compare it to the rate of compliance. Most of us have driven faster than the speed limit or run a red light on occasion; but how often do you drive on the wrong side of the road, or stop on green and go on red? It is best to drive on the right side of the road if, and only if, everyone believes it is best to drive on the right side of the road. That’s a self-fulfilling norm.

Self-fulfilling norms are a greatly underappreciated force in global history. We often speak as though historical changes are made by “great men”—powerful individuals who effect change through their charisma or sheer force of will. But that power didn’t exist in a vacuum. For good (Martin Luther King) or for ill (Adolf Hitler), “great men” only have their power because they can amass followers. The reason they can amass followers is that a large number of people already agree with them—but are too afraid to speak up, because they are trapped in a self-fulfilling norm. The primary function of a great leader is to announce—at great personal risk—views that they believe others already hold. If indeed they are correct, then they can amass followers by winning the coordination game. If they are wrong, they may suffer terribly at the hands of a populace that hates them.

There is persuasion involved, but typically it’s not actually persuading people to believe that something is right; it’s persuading people to actually take action, convincing them that there is really enough chance of succeeding that it is worth the risk. Because of the self-fulfilling norm, this is a very all-or-nothing affair; do it right and you win, but do it wrong and your whole movement collapses. You essentially need to know exactly what battles you can win, so that you only fight those battles.

The good news is that information technology may actually make this easier. Honest assessment of people’s anonymous opinions is now easier than ever. Large-scale coordination of activity with relative security is now extremely easy, as we saw in the Arab Spring. This means that we are entering an era of rapid social change, where self-fulfilling norms will rise and fall at a rate never before seen.

In the best-case scenario, this means we get rid of all the bad norms and society becomes much better.

In the worst-case scenario, we may find out that most people actually believe in the bad norms, and this makes those norms all the more entrenched.

Only time will tell.

Reasonableness and public goods games

Apr 1 JDN 2458210

There’s a very common economics experiment called a public goods game, often used to study cooperation and altruistic behavior. I’m actually planning on running a variant of such an experiment for my second-year paper.

The game is quite simple, which is part of why it is used so frequently: You are placed into a group of people (usually about four), and given a little bit of money (say $10). Then you are offered a choice: You can keep the money, or you can donate some of it to a group fund. Money in the group fund will be multiplied by some factor (usually about two) and then redistributed evenly to everyone in the group. So for example if you donate $5, that will become $10, split four ways, so you’ll get back $2.50.

Donating more to the group will benefit everyone else, but at a cost to yourself. The game is usually set up so that the best outcome for everyone is if everyone donates the maximum amount, but the best outcome for you, holding everyone else’s choices constant, is to donate nothing and keep it all.
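
Here is a minimal sketch of that payoff structure (four players, a $10 endowment, a multiplier of 2); the particular donation profiles are just illustrations:

```python
# Public goods game payoffs as described above: keep what you don't donate,
# plus an equal share of the multiplied group fund.

def payoffs(donations, endowment=10.0, multiplier=2.0):
    n = len(donations)
    share = multiplier * sum(donations) / n
    return [endowment - d + share for d in donations]

print(payoffs([5, 5, 5, 5]))       # [15.0, 15.0, 15.0, 15.0]
print(payoffs([0, 5, 5, 5]))       # [17.5, 12.5, 12.5, 12.5] -- free-riding pays
print(payoffs([10, 10, 10, 10]))   # [20.0, 20.0, 20.0, 20.0] -- best for the group as a whole
```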

Yet it is a very robust finding that most people do neither of those things. There’s still a good deal of uncertainty surrounding what motivates people to donate what they do, but certain patterns have emerged:

  1. Most people donate something, but hardly anyone donates everything.
  2. Increasing the multiplier tends to smoothly increase how much people donate.
  3. The number of people in the group isn’t very important, though very small groups (e.g. 2) behave differently from very large groups (e.g. 50).
  4. Letting people talk to each other tends to increase the rate of donations.
  5. Repetition of the game, or experience from previous games, tends to result in decreasing donation over time.
  6. Economists donate less than other people.

Number 6 is unfortunate, but easy to explain: Indoctrination into game theory and neoclassical economics has taught economists that selfish behavior is efficient and optimal, so they behave selfishly.

Number 3 is also fairly easy to explain: Very small groups allow opportunities for punishment and coordination that don’t exist in large groups. Think about how you would respond when faced with 2 defectors in a group of 4 as opposed to 10 defectors in a group of 50. You could punish the 2 by giving less next round; but punishing the 10 would end up punishing 40 others who had contributed like they were supposed to.

Number 4 is a very interesting finding. Game theory says that communication shouldn’t matter, because there is a unique Nash equilibrium: Donate nothing. All the promises in the world can’t change what is the optimal response in the game. But in fact, human beings don’t like to break their promises, and so when you get a bunch of people together and they all agree to donate, most of them will carry through on that agreement most of the time.

Number 5 is on the frontier of research right now. There are various theoretical accounts for why it might occur, but none of the models proposed so far have much predictive power.

But my focus today will be on findings 1 and 2.

If you’re not familiar with the underlying game theory, finding 2 may seem obvious to you: Well, of course if you increase the payoff for donating, people will donate more! It’s precisely that sense of obviousness which I am going to appeal to in a moment.

In fact, the game theory makes a very sharp prediction: For N players, if the multiplier is less than N, you should always contribute nothing. Only if the multiplier becomes larger than N should you donate—and at that point you should donate everything. The game theory prediction is not a smooth increase; it’s all-or-nothing. The only time game theory predicts intermediate amounts is on the knife-edge where the multiplier exactly equals N, at which point each player would be indifferent between donating and not donating.

But it feels reasonable that increasing the multiplier should increase donation, doesn’t it? It’s a “safer bet” in some sense to donate $1 if the payoff to everyone is $3 and the payoff to yourself is $0.75 than if the payoff to everyone is $1.04 and the payoff to yourself is $0.26. The cost-benefit analysis comes out better: In the former case, you can gain up to $2 if everyone donates, but would only lose $0.25 if you donate alone; but in the latter case, you would only gain $0.04 if everyone donates, and would lose $0.74 if you donate alone.
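
Here is a quick check of the arithmetic in that comparison (four players, a $1 donation, multipliers of 3 and 1.04):

```python
# Checking the cost-benefit numbers above for a $1 donation in a group of 4.

def check(multiplier, n=4, donation=1.0):
    back = multiplier * donation / n               # what comes back to you
    alone_loss = donation - back                   # loss if you donate alone
    all_gain = multiplier * donation - donation    # net gain each if all n donate
    return round(back, 2), round(alone_loss, 2), round(all_gain, 2)

print(check(3.0))    # (0.75, 0.25, 2.0)
print(check(1.04))   # (0.26, 0.74, 0.04)
```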

I think this notion of “reasonableness” is a deep principle that underlies a great deal of human thought. This is something that is sorely lacking from artificial intelligence: The same AI that tells you the precise width of the English Channel to the nearest foot may also tell you that the Earth is 14 feet in diameter, because the former was in its database and the latter wasn’t. Yes, WATSON may have won on Jeopardy, but it (he?) also made a nonsensical response to the Final Jeopardy question.

Human beings like to “sanity-check” our results against prior knowledge, making sure that everything fits together. And, of particular note for public goods games, human beings like to “hedge our bets”; we don’t like to over-commit to a single belief in the face of uncertainty.

I think this is what best explains findings 1 and 2. We don’t donate everything, because that requires committing totally to the belief that contributing is always better. We also don’t donate nothing, because that requires committing totally to the belief that contributing is always worse.

And of course we donate more as the payoffs to donating more increase; that also just seems reasonable. If something is better, you do more of it!

These choices could be modeled formally by assigning some sort of probability distribution over other’s choices, but in a rather unconventional way. We can’t simply assume that other people will randomly choose some decision and then optimize accordingly—that just gives you back the game theory prediction. We have to assume that our behavior and the behavior of others is in some sense correlated; if we decide to donate, we reason that others are more likely to donate as well.

Stated like that, this sounds irrational; some economists have taken to calling it “magical thinking”. Yet, as I always like to point out to such economists: On average, people who do that make more money in the games. Economists playing other economists always make very little money in these games, because they turn on each other immediately. So who is “irrational” now?

Indeed, if you ask people to predict how others will behave in these games, they generally do better than the game theory prediction: They say, correctly, that some people will give nothing, most will give something, and hardly any will give everything. The same “reasonableness” that they use to motivate their own decisions, they also accurately apply to forecasting the decisions of others.

Of course, to say that something is “reasonable” may be ultimately to say that it conforms to our heuristics well. To really have a theory, I need to specify exactly what those heuristics are.

“Don’t put all your eggs in one basket” seems to be one, but it’s probably not the only one that matters; my guess is that there are circumstances in which people would actually choose all-or-nothing, like if we said that the multiplier was 0.5 (so everyone giving to the group would make everyone worse off) or 10 (so that giving to the group makes you and everyone else way better off).

“Higher payoffs are better” is probably one as well, but precisely formulating that is actually surprisingly difficult. Higher payoffs for you? For the group? Conditional on what? Do you hold others’ behavior constant, or assume it is somehow affected by your own choices?

And of course, the theory wouldn’t be much good if it only worked on public goods games (though even that would be a substantial advance at this point). We want a theory that explains a broad class of human behavior; we can start with simple economics experiments, but ultimately we want to extend it to real-world choices.

The potential of an advertising tax

Jan 7, JDN 2458126

Advertising is everywhere in our society. You may see some on this very page (though if I hit my next Patreon target I’m going to pay to get rid of those). Ad-blockers can help when you’re on the Web, and premium channels like HBO will save you from ads when watching TV, but what are you supposed to do about ads on billboards as you drive down the highway, ads on buses as you walk down the street, ads on the walls of the subway train?

Banksy has famously skewered advertising in his work, and he isn’t entirely wrong; this stuff can be quite damaging. Based on decades of research, the American Psychological Association has issued official statements condemning the use of advertising to children for its harmful psychological effects. Medical research has shown that advertisements for food can cause overeating—and thus, the correlated rise of advertising and obesity may be no coincidence.

Worst of all, political advertising distorts our view of the world, though we may not be able to blame advertising per se for Trump; most of his publicity was gained for free by irresponsible media coverage.

And yet, advertising is almost pure rent-seeking. It costs resources, but it doesn’t produce anything. In most cases it doesn’t even raise awareness about something or find new customers. The primary goal of most advertising is to get you to choose that brand instead of a different brand. A secondary goal (especially for food ads) is to increase your overall consumption of that good, but since the means employed typically involve psychological manipulation, this increase in consumption is probably harmful to social welfare.

A general principle of economics that has almost universal consensus is the Pigou Principle: If you want less of something, you should put a tax on it. So, what would happen if we put a tax on advertising?

The amazing thing is that in this case, we would probably not actually reduce advertising spending, but we would reduce advertising, which is what we actually care about. Moreover, we would be able to raise an enormous amount of revenue with zero social cost. Like the other big Pigovian tax (the carbon tax), this is a rare example of a tax that will give you a huge amount of revenue while actually yielding a benefit to society.

This is far from obvious, so I think it is worth explaining where it comes from.

The key point is that advertising doesn’t typically increase the overall size of the market (though in some cases it does; I’ll get back to that in a moment). Rather than a conventional production function like we would have for most types of expenditure, advertising is better modeled by what is called a contest function (something that our own Stergios Skaperdas at UCI is actually a world-class expert in). In a production function, inputs increase the total amount of output. But in a contest function, inputs only redistribute output from one place to another. Contest functions thus provide a good model of rent-seeking, which is what most advertising is.

Suppose there’s a total market M for some good, where M is the total profits that can be gained from capturing that entire market.

Then, to keep it simple, let’s suppose there are only two major firms in the market, a duopoly like Coke and Pepsi or Boeing and Airbus.

Let’s say Coke decides to spend an amount x on advertising, and Pepsi decides to spend an amount y.

For now, let’s assume that total beverage consumption won’t change; so the total profits to be had from the market are always M.

What advertising does is it changes the share of that market which each firm will get. Specifically, let’s use the simplest model, where the share of the market is equal to the share of advertising spending.

Then the net profit for Coke is the following:

The share they get, x/(x+y), times the size of the whole market, M, minus the advertising spending x.

max M*x/(x+y) – x

We can maximize this with the usual first-order condition:

M*y/(x+y)^2 – 1 = 0

(x+y)^2 = My

Since the game is symmetric, in a Nash equilibrium, Pepsi will use the same reasoning:

(x+y)^2 = Mx

Thus we have:

x = y

(2x)^2 = Mx

x = M/4

In this very simple model, each firm will spend one-fourth of the market’s value, and the total advertising spending will be equal to half the size of the market. Then, each company’s net income will be equal to its advertising spending. This is a pretty good estimate for Coca-Cola in real life, which spends about $3.3 billion on advertising and receives about $2.8 billion in net income each year.
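
For anyone who wants to check this numerically, here is a small sketch that iterates best responses for the contest function above; the market size M is an arbitrary illustrative number, not Coca-Cola’s actual figures:

```python
# Best-response iteration for the symmetric contest game above.
# M is an illustrative market size, not real data.

M = 4.0e9

def best_response(y):
    """Maximize M*x/(x+y) - x: the FOC gives (x+y)^2 = M*y, so x = sqrt(M*y) - y."""
    return max((M * y) ** 0.5 - y, 0.0)

x = y = 1.0
for _ in range(200):
    x = best_response(y)
    y = best_response(x)

print(x / M, y / M)                      # both converge to 0.25, i.e. M/4
print((M * x / (x + y) - x) / M)         # each firm's net income: also M/4
```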

What would happen if we introduce a tax? Let’s say we introduce a proportional tax r on all advertising spending. That is, for every dollar you spend on advertising, you must pay the government $r in tax. The really remarkable thing is that companies who advertise shouldn’t care how high we make the tax; the only ones who will care are the advertising companies themselves.

If Coke pays x, the actual amount of advertising they receive is x – r x = x(1-r).

Likewise, Pepsi’s actual advertising received is y(1-r).

But notice that the share of total advertising spending is completely unchanged!

(x(1-r))/(x(1-r) + y(1-r)) = x/(x+y)

Since the payoff for Coke only depends on how much Coke spends and what market share they get, it is also unchanged. Since the same is true for Pepsi, nothing will change in how the two companies behave. They will spend the same amount on advertising, and they will receive the same amount of net income when all is said and done.
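
Here is a quick numerical illustration of that invariance: Coke’s optimal outlay against a fixed level of Pepsi spending is identical with and without the tax, because the (1-r) factor cancels out of the market share. The specific values of M, y, and r are illustrative assumptions:

```python
# Coke's payoff depends only on its share of total spending, which the
# tax does not change. Numbers are illustrative.

M, y, r = 4.0e9, 1.0e9, 0.5      # market value, Pepsi's spending, tax rate

def payoff(x, tax):
    share = (x * (1 - tax)) / (x * (1 - tax) + y * (1 - tax))   # = x/(x+y)
    return M * share - x

grid = [i * 1.0e7 for i in range(1, 400)]   # candidate spending levels
best_untaxed = max(grid, key=lambda x: payoff(x, 0.0))
best_taxed = max(grid, key=lambda x: payoff(x, r))

print(best_untaxed, best_taxed)   # identical: the tax doesn't change the firm's choice
```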

The total quantity of advertising will be reduced, from x+y to (x+y)(1-r). That means fewer billboards, fewer posters in subway stations, fewer TV commercials. That will hurt advertising companies, but benefit everyone else.

How much revenue will we get for the government? r x + r y = r(x+y).

Since the goal is to substantially reduce advertising output, and it won’t distort other industries in any way, we should set this tax quite high. A reasonable value for r would be 50%. We might even want to consider something as high as 90%; but for now let’s look at what 50% would do.

Total advertising spending in the US is over $200 billion per year. Since an advertising tax would not change total advertising spending, we can expect that a tax rate of 50% would simply capture 50% of this spending as revenue, which is to say $100 billion per year. That would be enough to pay for the entire Federal education budget, or the foreign aid and environment budgets combined.

Another way in which an advertising tax is actually better than a carbon tax is that countries will want to compete to have the highest advertising taxes. If say Canada imposes a carbon tax but the US doesn’t, industries will move production to the US where it is cheaper, which hurts Canada. Yet the total amount of pollution will remain about the same, and Canada will be just as affected by climate change as they would have been anyway. So we need to coordinate across countries so that the carbon taxes are all the same (or at least close), to prevent industries from moving around; and each country has an incentive to cheat by imposing a lower carbon tax.

But advertising taxes aren’t like that. If Canada imposes an advertising tax and the US doesn’t, companies won’t shift production to the US; they will shift advertising to the US. And having your country suddenly flooded with advertisements is bad. That provides a strong incentive for you to impose your own equal or even higher advertising tax to stem the tide. And pretty soon, everyone will have imposed an advertising tax at the same rate.

Of course, in all the above I’ve assumed a pure contest function, meaning that advertisements are completely unproductive. What if they are at least a little bit productive? Then we wouldn’t want to set the tax too high, but the basic conclusions would be unchanged.

Suppose, for instance, that the advertising spending adds half its value to the value of the market. This is a pretty high estimate of the benefits of advertising.

Under this assumption, in place of M we have M+(x+y)/2. Everything else is unchanged.

We can maximize as before:

max (M+(x+y)/2)*x/(x+y) – x

The math is a bit trickier, but we can still solve by a first-order condition, which simplifies to:

(x+y)^2 = 2My

By the same symmetry reasoning as before:

(2x)^2 = 2Mx

x = M/2

Now, total advertising spending would equal the size of the market without advertising, and net income for each firm after advertising would be:

(M + M/2)(1/2) – M/2 = M/4

That is, each firm’s net income would now be only half of its advertising spending; the neat equality from the pure-contest case no longer holds.

What if we imposed a tax? Now the algebra gets even nastier:

max (M+(x+y)(1-r)/2)*x/(x+y) – x

But the ultimate outcome is still quite similar:

(1+r)(x+y)^2 = 2My

(1+r)(2x)^2 = 2Mx

x = (M/2) * 1/(1+r)

Advertising spending will be reduced to 1/(1+r) of its former level. Even if r is 50%, that still means we’ll have 2/3 of the advertising spending we had before.

Total tax revenue will then be M*r/(1+r), which for r of 50% would be M/3.

Total advertising will be M(1-r)/(1+r), which would be M/3. So we managed to reduce advertising by 2/3, while reducing advertising spending by only 1/3. Then we would receive half of that spending as revenue. Thus, instead of getting $100 billion per year, we would get $67 billion, which is still just about enough to pay for food stamps.
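
And here is a numerical check of this partly-productive case, again with an arbitrary illustrative M and with r = 50% as in the text:

```python
# Best-response iteration when advertising adds half its (after-tax) value
# to the market: the firm maximizes (M + (x+y)*(1-r)/2)*x/(x+y) - x.
# M is illustrative.

M, r = 4.0e9, 0.5

def best_response(y):
    """FOC from above: (1+r)*(x+y)^2 = 2*M*y."""
    total = (2 * M * y / (1 + r)) ** 0.5
    return max(total - y, 0.0)

x = y = 1.0
for _ in range(200):
    x = best_response(y)
    y = best_response(x)

spending = x + y
print(spending / M)              # 2/3: spending falls by a third
print(spending * (1 - r) / M)    # 1/3: advertising received falls by two-thirds
print(r * spending / M)          # 1/3: revenue, half of what is spent
```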

What’s the downside of this tax? Unlike most taxes, there really isn’t one. Yes, it would hurt advertising companies, which I suppose counts as a downside. But that was mostly waste anyway; anyone employed in advertising would be better employed almost anywhere else. Millions of minds are being wasted coming up with better ways to sell Viagra instead of better treatments for cancer. Any unemployment introduced by an advertising tax would be temporary and easily rectified by monetary policy, and most of it would hit highly educated white-collar professionals who have high incomes to begin with and can more easily find jobs when displaced.

The real question is why we aren’t doing this already. And that, I suppose, has to come down to politics.

“DSGE or GTFO”: Macroeconomics took a wrong turn somewhere

Dec 31, JDN 2458119

“The state of macro is good,” wrote Olivier Blanchard—in August 2008. This is rather like the turkey who is so pleased with how the farmer has been feeding him lately, the day before Thanksgiving.

It’s not easy to say exactly where macroeconomics went wrong, but I think Paul Romer is right when he makes the analogy between DSGE (dynamic stochastic general equilibrium) models and string theory. They are mathematically complex and difficult to understand, and people can make their careers by being the only ones who grasp them; therefore they must be right! Never mind if they have no empirical support whatsoever.

To be fair, DSGE models are at least a little better than string theory; they can at least be fit to real-world data, which is more than string theory can say. But being fit to data and actually predicting data are fundamentally different things, and DSGE models typically forecast no better than far simpler models without their bold assumptions. You don’t need to assume all this stuff about a “representative agent” maximizing a well-defined utility function, or an Euler equation (that doesn’t even fit the data), or this ever-proliferating list of “random shocks” that end up taking up all the degrees of freedom your model was supposed to explain. Just regressing the variables on a few years of previous values of each other (a “vector autoregression” or VAR) generally gives you an equally-good forecast. The fact that these models can be made to fit the data well if you add enough degrees of freedom doesn’t actually make them good models. As von Neumann warned us, with enough free parameters, you can fit an elephant.

But really what bothers me is not the DSGE but the GTFO (“get the [expletive] out”); it’s not that DSGE models are used, but that it’s almost impossible to get published as a macroeconomic theorist using anything else. Defenders of DSGE typically don’t even argue anymore that it is good; they argue that there are no credible alternatives. They characterize their opponents as “dilettantes” who aren’t opposing DSGE because we disagree with it; no, it must be because we don’t understand it. (Also, regarding that post, I’d just like to note that I now officially satisfy the Athreya Axiom of Absolute Arrogance: I have passed my qualifying exams in a top-50 economics PhD program. Yet my enmity toward DSGE has, if anything, only intensified.)

Of course, that argument only makes sense if you haven’t been actively suppressing all attempts to formulate an alternative, which is precisely what DSGE macroeconomists have been doing for the last two or three decades. And yet despite this suppression, there are alternatives emerging, particularly from the empirical side. There are now empirical approaches to macroeconomics that don’t use DSGE models. Regression discontinuity methods and other “natural experiment” designs—not to mention actual experiments—are quickly rising in popularity as economists realize that these methods allow us to actually empirically test our models instead of just adding more and more mathematical complexity to them.

But there still seems to be a lingering attitude that there is no other way to do macro theory. This is very frustrating for me personally, because deep down I think what I would like to do as a career is macro theory: By temperament I have always viewed the world through a very abstract, theoretical lens, and the issues I care most about—particularly inequality, development, and unemployment—are all fundamentally “macro” issues. I left physics when I realized I would be expected to do string theory. I don’t want to leave economics now that I’m expected to do DSGE. But I also definitely don’t want to do DSGE.

Fortunately with economics I have a backup plan: I can always be an “applied microeconomist” (rather the opposite of a theoretical macroeconomist I suppose), directly attached to the data in the form of empirical analyses or even direct, randomized controlled experiments. And there certainly is plenty of work to be done along the lines of Akerlof and Roth and Shiller and Kahneman and Thaler in cognitive and behavioral economics, which is also generally considered applied micro. I was never going to be an experimental physicist, but I can be an experimental economist. And I do get to use at least some theory: In particular, there’s an awful lot of game theory in experimental economics these days. Some of the most exciting stuff is actually in showing how human beings don’t behave the way classical game theory predicts (particularly in the Ultimatum Game and the Prisoner’s Dilemma), and trying to extend game theory into something that would fit our actual behavior. Cognitive science suggests that the result is going to end up looking quite different from game theory as we know it, and with my cognitive science background I may be particularly well-positioned to lead that charge.

Still, I don’t think I’ll be entirely satisfied if I can’t somehow bring my career back around to macroeconomic issues, and particularly the great elephant in the room of all economics, which is inequality. Underlying everything from Marxism to Trumpism, from the surging rents in Silicon Valley and the crushing poverty of Burkina Faso, to the Great Recession itself, is inequality. It is, in my view, the central question of economics: Who gets what, and why?

That is a fundamentally macro question, but you can’t even talk about that issue in DSGE as we know it; a “representative agent” inherently smooths over all inequality in the economy as though total GDP were all that mattered. A fundamentally new approach to macroeconomics is needed. Hopefully I can be part of that, but from my current position I don’t feel much empowered to fight this status quo. Maybe I need to spend at least a few more years doing something else, making a name for myself, and then I’ll be able to come back to this fight with a stronger position.

In the meantime, I guess there’s plenty of work to be done on cognitive biases and deviations from game theory.

Why New Year’s resolutions fail

Jan 1, JDN 2457755

Last week’s post was on Christmas, so by construction this week’s post will be on New Year’s Day.

It is a tradition in many cultures, especially in the US and Europe, to start every new year with a New Year’s resolution, a promise to ourselves to change our behavior in some positive way.

Yet, over 80% of these resolutions fail. Why is this?

If we are honest, most of us would agree that there is something about our own behavior that could stand to be improved. So why do we so rarely succeed in actually making such improvements?

One possibility, which I’m guessing most neoclassical economists would favor, is to say that we don’t actually want to. We may pretend that we do in order to appease others, but ultimately our rational optimization has already chosen that we won’t actually bear the cost to make the improvement.

I think this is actually quite rare. I’ve seen too many people with resolutions they didn’t share with anyone, for example, to think that it’s all about social pressure. And I’ve seen far too many people try very hard to achieve their resolutions, day after day, and yet still fail.

Sometimes we make resolutions that are not entirely within our control, such as “get a better job” or “find a girlfriend” (last year I made a resolution to publish a work of commercial fiction or a peer-reviewed article—and alas, failed at that task, unless I somehow manage it in the next few days). Such resolutions may actually be unwise to make in the first place, as it can feel like breaking a promise to yourself when you’ve actually done all you possibly could.

So let’s set those aside and talk only about things we should be in control over, like “lose weight” or “save more money”. Even these kinds of resolutions typically fail; why? What is this “weakness of will”? How is it possible to really want something that you are in full control over, and yet still fail to accomplish it?

Well, first of all, I should be clear what I mean by “in full control over”. In some sense you’re not in full control, which is exactly the problem. Your conscious mind is not actually an absolute tyrant over your entire body; you’re more like an elected president who has to deal with a legislature in order to enact policy.

You do have a great deal of power over your own behavior, and you can learn to improve this control (much as real executive power in presidential democracies has expanded over the last century!); but there are fundamental limits to just how well you can actually consciously will your body to do anything, limits imposed by billions of years of evolution that established most of the traits of your body and nervous system millions of generations before there even was such a thing as rational conscious reasoning.

One thing that makes a surprisingly large difference lies in whether your goals are reduced to specific, actionable objectives. “Lose weight” is almost guaranteed to fail. “Lose 30 pounds” is still unlikely to succeed. “Work out for 2 hours per week,” on the other hand, might have a chance. “Save money” is never going to make it, but “move to a smaller apartment and set aside $200 per month” just might.

I think the government metaphor is helpful here; if you are President of the United States and you want something done, do you state some vague, broad goal like “Improve the economy”? No, you make a specific, actionable demand that allows you to enforce compliance, like “increase infrastructure spending by 24% over the next 5 years”. Even then it is possible to fail if you can’t push it through the legislature (in the metaphor, the “legislature” is your habits, instincts and other subconscious processes), but you’re much more likely to succeed if you have a detailed plan.

Another technique that helps is to visualize the benefits of succeeding and the costs of failing, and keep these in your mind. This counteracts the tendency for the costs of succeeding and the benefits of giving up to be more salient—losing 30 pounds sounds nice in theory, but that treadmill is so much work right now!

This salience effect has a lot to do with the fact that human beings are terrible at dealing with the future.

Rationally, we are supposed to use exponential discounting; each successive moment is supposed to be worth less to us than the previous by a fixed proportion, say 5% per year. This is actually a mathematical theorem; exponential discounting is the only way of discounting that is consistent over time, so if you don’t discount this way, your decisions will be systematically irrational.

And yet… we don’t discount that way. Some behavioral economists argue that we use hyperbolic discounting, in which instead of discounting time by a fixed proportion, we use a different formula that drops off too quickly early on and not quickly enough later on.

But I am increasingly convinced that human beings don’t actually use discounting at all. We have a series of rough-and-ready heuristics for making future judgments, which can sort of act like discounting, but require far less computation than actually calculating a proper discount rate. (Recent empirical evidence seems to be tilting this direction.)

In any case, whatever we do is clearly not a proper rational discount rate. And this means that our behavior can be time-inconsistent; a choice that seems rational at one time may no longer seem rational at a later time. When we’re planning out our year and saying we will hit the treadmill more, it seems like a good idea; but when we actually get to the gym and feel our legs ache as we start running, we begin to regret our decision.
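
Here is a small sketch of that reversal; the discount rates, reward sizes, and the 30-day gap are all illustrative assumptions, not estimates from the behavioral literature:

```python
# An exponential discounter ranks "$50 sooner vs. $80 thirty days later"
# the same way no matter how far off the choice is; a hyperbolic
# discounter flips as the choice gets close. All parameters are made up.

def exponential(days, annual_rate=0.05):
    return 1.0 / (1.0 + annual_rate) ** (days / 365.0)

def hyperbolic(days, k=0.05):
    return 1.0 / (1.0 + k * days)

small_soon, large_late = 50.0, 80.0

for shift in (365, 0):   # the same choice seen a year ahead, then seen today
    for name, discount in (("exponential", exponential), ("hyperbolic", hyperbolic)):
        v_soon = small_soon * discount(shift)
        v_late = large_late * discount(shift + 30)
        print(shift, name, "prefers", "soon" if v_soon > v_late else "late")
# Only the hyperbolic discounter switches from "late" to "soon" -- the
# time inconsistency behind abandoned treadmills.
```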

The challenge, really, is determining which “version” of us is correct! A priori, we don’t actually know whether the view of our distant self contemplating the future or the view of our current self making the choice in the moment is the right one. Actually, when I frame it this way, it almost seems like the self that’s closer to the choice should have better information—and yet typically we think the exact opposite, that it is our past self making plans that really knows what’s best for us.

So where does that come from? Why do we think, at least in most cases, that the “me” which makes a plan a year in advance is the smart one, and the “me” that actually decides in the moment is untrustworthy?

Kahneman has a good explanation for this, in his model of System 1 and System 2. System 1 is simple and fast, but often gets the wrong answer. System 2 usually gets the right answer, but it is complex and slow. When we are making plans, we have a lot of time to think, and we can afford to expend the extra effort to engage the full power of System 2. But when we are living in the moment, choosing what to do right now, we don’t have that luxury of time, and we are forced to fall back on System 1. System 1 is easier—but it’s also much more likely to be wrong.

How, then, do we resolve this conflict? Commitment. (Perhaps that’s why it’s called a New Year’s resolution!)

We make promises to ourselves, commitments that we will feel bad about not following through.

If we rationally discounted, this would be a baffling thing to do; we’re just imposing costs on ourselves for no reason. But because we don’t discount rationally, commitments allow us to change the calculation for our future selves.

This brings me to one last strategy to use when making your resolutions: Include punishment.

“I will work out at least 2 hours per week, and if I don’t, I’m not allowed to watch TV all weekend.” Now that is a resolution you are actually likely to keep.

To see why, consider the decision problem for your System 2 self today versus your System 1 self throughout the year.

Your System 2 self has done the cost-benefit analysis and ruled that working out 2 hours per week is worthwhile for its health benefits.

If you left it at that, your System 1 self would each day find an excuse to procrastinate the workouts, because at least from where they’re sitting, working out for 2 hours looks a lot more painful than the marginal loss in health from missing just this one week. And of course this will keep happening, week after week—and then 52 weeks go by and you’ve had few if any workouts.

But by adding the punishment of “no TV”, you have imposed an additional cost on your System 1 self, something that they care about. Suddenly the calculation changes; it’s not just 2 hours of workout weighed against vague long-run health benefits, but 2 hours of workout weighed against no TV all weekend. That punishment is surely too much to bear; so you’d best do the workout after all.
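
Here is a toy sketch of that weekly calculation; the “utility” numbers are entirely made up for illustration:

```python
# System 1's week-by-week decision, with made-up utility numbers.

workout_pain = 5.0         # how bad 2 hours on the treadmill feels right now
health_benefit_now = 1.0   # the salient health cost of skipping just this one week
tv_punishment = 10.0       # how bad a weekend with no TV feels

def system1_works_out(punishment_active):
    cost_of_skipping = health_benefit_now + (tv_punishment if punishment_active else 0.0)
    return cost_of_skipping > workout_pain

print(system1_works_out(False))   # False: skip the gym, again
print(system1_works_out(True))    # True: the punishment tips the balance
```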

Do it right, and you will rarely if ever have to impose the punishment. But don’t make it too large, or then it will seem unreasonable and you won’t want to enforce it if you ever actually need to. Your System 1 self will then know this, and treat the punishment as nonexistent. (Formally the equilibrium is not subgame perfect; I am gravely concerned that our nuclear deterrence policy suffers from precisely this flaw.) “If I don’t work out, I’ll kill myself” is a recipe for depression, not healthy exercise habits.

But if you set clear, actionable objectives and sufficient but reasonable punishments, there’s at least a good chance you will be in the minority of people who actually succeed in keeping their New Year’s resolution.

And if not, there’s always next year.

The game theory of holidays

Dec 25, JDN 2457748

When this post goes live, it will be Christmas; so I felt I should make the topic somehow involve the subject of Christmas, or holidays in general.

I decided I would pull back for as much perspective as possible, and ask this question: Why do we have holidays in the first place?

All human cultures have holidays, but not the same ones. Cultures with a lot of mutual contact will tend to synchronize their holidays temporally, but still often preserve wildly different rituals on those same holidays. Yes, we celebrate “Christmas” in both the US and in Austria; but I think they are baffled by the Elf on the Shelf and I know that I find the Krampus bizarre and terrifying.

Most cultures from temperate climates have some sort of celebration around the winter solstice, probably because this is an ecologically important time for us. Our food production is about to get much, much lower, so we’d better make sure we have sufficient quantities stored. (In an era of globalization and processed food that lasts for months, this is less important, of course.) But they aren’t the same celebration, and they generally aren’t exactly on the solstice.

What is a holiday, anyway? We all get off work, we visit our families, and we go through a series of ritualized actions with some sort of symbolic cultural meaning. Why do we do this?

First, why not work all year round? Wouldn’t that be more efficient? Well, no, because human beings are subject to exhaustion. We need to rest at least sometimes.

Well, why not simply have each person rest whenever they need to? Well, how do we know they need to? Do we just take their word for it? People might exaggerate their need for rest in order to shirk their duties and free-ride on the work of others.

It would help if we could have pre-scheduled rest times, to remove individual discretion.

Should we have these at the same time for everyone, or at different times for each person?

Well, from the perspective of efficiency, different times for each person would probably make the most sense. We could trade off work in shifts that way, and ensure production keeps moving. So why don’t we do that?

Well, now we get to the game theory part. Do you want to be the only one who gets today off? Or do you want other people to get today off as well?

You probably want other people to be off work today as well, at least your family and friends so that you can spend time with them. In fact, this is probably more important to you than having any particular day off.

We can write this as a normal-form game. Suppose we have four days to choose from, 1 through 4, and two people, who can each decide which day to take off, or they can not take a day off at all. They each get a payoff of 1 if they take the same day off, 0 if they take different days off, and -1 if they don’t take a day off at all. This is our resulting payoff matrix:

        1       2       3       4       None
1       1/1     0/0     0/0     0/0     0/-1
2       0/0     1/1     0/0     0/0     0/-1
3       0/0     0/0     1/1     0/0     0/-1
4       0/0     0/0     0/0     1/1     0/-1
None    -1/0    -1/0    -1/0    -1/0    -1/-1
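
If you want to verify that the matched days really are the only equilibria, here is a small sketch that enumerates the pure-strategy Nash equilibria of this matrix:

```python
# Enumerate the pure-strategy Nash equilibria of the holiday game above.

strategies = ["1", "2", "3", "4", "None"]

def payoff(mine, theirs):
    if mine == "None":
        return -1
    return 1 if mine == theirs else 0

def is_nash(a, b):
    # Neither player can do better by unilaterally switching.
    return (payoff(a, b) == max(payoff(s, b) for s in strategies)
            and payoff(b, a) == max(payoff(s, a) for s in strategies))

print([(a, b) for a in strategies for b in strategies if is_nash(a, b)])
# [('1', '1'), ('2', '2'), ('3', '3'), ('4', '4')]
```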


It’s pretty obvious that each person will take some day off. But which day? How do they decide that?

This is what we call a coordination game; there are many possible equilibria to choose from, and the payoffs are highest if people can somehow coordinate their behavior.

If they can actually coordinate directly, it’s simple; one person should just suggest a day, and since the other one is indifferent, they have no reason not to agree to that day. From that point forward, they have coordinated on an equilibrium (a Nash equilibrium, in point of fact).

But suppose they can’t talk to each other, or suppose there aren’t two people to coordinate but dozens, or hundreds—or even thousands, once you include all the interlocking social networks. How could they find a way to coordinate on the same day?

They need something more intuitive, some “obvious” choice that they can call upon and hope everyone else will call upon as well. Even if they can’t communicate, as long as they can observe whether their coordination has succeeded or failed, they can settle on these “obvious” choices by successive trial and error.

The result is what we call a Schelling point; players converge on this equilibrium not because there’s actually anything better about it, but because it seems obvious and they expect everyone else to think it will also seem obvious.
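
Here is a small simulation sketch of that trial-and-error process (again my own toy model, with made-up parameters, not anything from the post): whenever a mismatched pair of players meets, one of them may copy the other's day, and the whole population eventually snowballs onto one arbitrary choice.

```python
import random
from collections import Counter

def simulate(num_players=100, days=(1, 2, 3, 4), copy_prob=0.5, seed=0):
    """Convention formation by trial and error: mismatched pairs imitate."""
    rng = random.Random(seed)
    choices = [rng.choice(days) for _ in range(num_players)]
    rounds = 0
    while len(set(choices)) > 1:
        rounds += 1
        order = list(range(num_players))
        rng.shuffle(order)
        for i, j in zip(order[::2], order[1::2]):   # random pairings this round
            if choices[i] != choices[j]:
                if rng.random() < copy_prob:
                    choices[i] = choices[j]         # i gives up and copies j...
                elif rng.random() < copy_prob:
                    choices[j] = choices[i]         # ...or j copies i
    return rounds, Counter(choices).most_common(1)[0][0]

rounds, day = simulate()
print(f"Everyone converged on day {day} after {rounds} rounds")
```

Which day wins depends entirely on the random seed; nothing about the winning day is actually better, which is exactly the point.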

This is what I think is happening with holidays. Yes, we make up stories to justify them, or sometimes even have genuine reasons for them (Independence Day actually makes sense being on July 4, for instance), but the ultimate reason why we have a holiday on one day rather than another is that we had to have it some time, and this was a way of breaking the deadlock and finally setting a date.

In fact, weekends are probably a better solution to this coordination problem than holidays, because human beings need rest on a fairly regular basis, not just every few months. Holiday seasons now serve more as an opportunity to have long vacations that allow travel, rather than as a rest between work days. But even weekends we originally had to justify as a matter of religion: Jews would not work on Saturday, Christians would not work on Sunday, so together we will not work on Saturday or Sunday. The logic here is hardly impeccable (why not make it religion-specific, for example?), but it was enough to give us a Schelling point.

This makes me wonder about what it would take to create a new holiday. How could we actually get people to celebrate Darwin Day or Sagan Day on a large scale, for example? Darwin and Sagan are both a lot more worth celebrating than most of the people who get holidays—Columbus especially leaps to mind. But even among those of us who really love Darwin and Sagan, these are sort of half-hearted celebrations that never attain the same status as Easter, much less Thanksgiving or Christmas.

I’d also like to secularize—or at least ecumenicalize—the winter solstice celebration. Christianity shouldn’t have a monopoly on what is really something like a human universal, or at least a “humans who live in temperate climates” universal. It really isn’t Christmas anyway; most of what we do is celebrating Yule, compounded by a modern expression in mass consumption that is thoroughly born of modern capitalism. We have no reason to think Jesus was actually born in December, much less on the 25th. But that’s around the time when lots of other celebrations were going on anyway, and it’s much easier to convince people that they should change the name of their holiday than that they should stop celebrating it and start celebrating something else—I think precisely because that still preserves the Schelling point.

Creating holidays has obviously been done before—indeed it is literally the only way holidays ever come into existence. But part of their structure seems to be that the more transparent the reasons for choosing that date and those rituals, the more empty and insincere the holiday seems. Once you admit that this is an arbitrary choice meant to converge on an equilibrium, it stops seeming like a good choice.

Now, if we could find dates and rituals that really had good reasons behind them, we could probably escape that; but I’m not entirely sure we can. We can use Darwin’s birthday—but why not the publication date of the first edition of On the Origin of Species? And even granting that Darwin himself really is that important, why Sagan Day and not Einstein Day or Niels Bohr Day… and so on? The winter solstice itself is a very powerful choice; its deep astronomical and ecological significance might actually make it a strong enough attractor to defeat all contenders. But what do we do on the winter solstice celebration? What rituals best capture the feelings we are trying to express, and how do we defend those rituals against criticism and competition?

In the long run, I think what usually happens is that people just sort of start doing something, and eventually enough people are doing it that it becomes a tradition. Maybe it always feels awkward and insincere at first. Maybe you have to be prepared for it to change into something radically different as the decades roll on.

This year the winter solstice is on December 21st. I think I’ll be lighting a candle and gazing into the night sky, reflecting on our place in the universe. Unless you’re reading this on Patreon, by the time this goes live, you’ll have missed it; but you can try later, or maybe next year.

In fifty years all the cool kids will be doing it, I’m sure.

The Tragedy of the Commons

JDN 2457387

In a previous post I talked about one of the most fundamental problems in game theory—perhaps the most fundamental—the Prisoner’s Dilemma, and how neoclassical economic theory totally fails to explain actual human behavior when faced with this problem in both experiments and the real world.

As a brief review, the essence of the game is that both players can either cooperate or defect; if they both cooperate, the outcome is best overall; but it is always in each player’s interest to defect. So a neoclassically “rational” player would always defect—resulting in a bad outcome for everyone. But real human beings typically cooperate, and thus do better. The “paradox” of the Prisoner’s Dilemma is that being “rational” results in making less money at the end.

Obviously, this is not actually a good definition of rational behavior. Being short-sighted and ignoring the impact of your behavior on others doesn’t actually produce good outcomes for anybody, including yourself.

But the Prisoner’s Dilemma only has two players. If we expand to a larger number of players, the expanded game is called a Tragedy of the Commons.

When we do this, something quite surprising happens: As you add more people, their behavior starts converging toward the neoclassical solution, in which everyone defects and we get a bad outcome for everyone.

Indeed, people in general become less cooperative, less courageous, and more apathetic the more of them you put together. Agent K in Men in Black was quite apt when he said, “A person is smart; people are dumb, panicky, dangerous animals and you know it.” There are ways to counteract this effect, as I’ll get to in a moment—but there is a strong effect that needs to be counteracted.

We see this most vividly in the bystander effect. If someone is walking down the street and sees someone fall and injure themselves, there is about a 70% chance that they will go try to help the person who fell—humans are altruistic. But if there are a dozen people walking down the street who all witness the same event, there is only a 40% chance that any of them will help—humans are irrational.
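
A rough back-of-the-envelope calculation (mine, and it leans on an independence assumption that real bystanders certainly violate) shows just how large that drop is: if each of twelve onlookers helped independently at the solo 70% rate, someone would help essentially every time, so the observed 40% implies each person's effective helping rate has collapsed to a few percent.

```python
# Hypothetical independence model, using the 70% and 40% figures quoted above.
p_alone = 0.70
n = 12

p_any_if_independent = 1 - (1 - p_alone) ** n    # chance at least one of n helps
p_implied = 1 - (1 - 0.40) ** (1 / n)            # per-person rate that yields 40% overall

print(f"If independent at the solo rate: {p_any_if_independent:.4%} chance someone helps")
print(f"Per-person rate implied by the observed 40%: {p_implied:.1%}")
```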

The primary reason appears to be diffusion of responsibility. When we are alone, we are the only one who could help, so we feel responsible for helping. But when there are others around, we assume that someone else could take care of it for us, so if it isn’t done, that’s not our fault.

There also appears to be a conformity effect: We want to conform our behavior to social norms (as I said, to a first approximation, all human behavior is social norms). The mere fact that there are other people who could have helped but didn’t suggests the presence of an implicit social norm that we aren’t supposed to help this person for some reason. It never occurs to most people to ask why such a norm would exist or whether it’s a good one—it simply never occurs to most people to ask those questions about any social norms. In this case, by hesitating to act, people actually end up creating the very norm they think they are obeying.

This can lead to what’s called an Abilene Paradox, in which people simultaneously try to follow what they think everyone else wants and also try to second-guess what everyone else wants based on what they do, and therefore end up doing something that none of them actually wanted. I think a lot of the weird things humans do can actually be attributed to some form of the Abilene Paradox. (“Why are we sacrificing this goat?” “I don’t know, I thought you wanted to!”)

Autistic people are not as good at following social norms (though some psychologists believe this is simply because our social norms are optimized for the neurotypical population). My suspicion is that autistic people are therefore less likely to suffer from the bystander effect, and more likely to intervene to help someone even if they are surrounded by passive onlookers. (Unfortunately I wasn’t able to find any good empirical data on that—it appears no one has ever thought to check before.) I’m quite certain that autistic people are less likely to suffer from the Abilene Paradox—if they don’t want to do something, they’ll tell you so (which sometimes gets them in trouble).

Because of these psychological effects that blunt our rationality, in large groups human beings often do end up behaving in a way that appears selfish and short-sighted.

Nowhere is this more apparent than in ecology. Recycling, becoming vegetarian, driving less, buying more energy-efficient appliances, insulating buildings better, installing solar panels—none of these things are particularly difficult or expensive to do, especially when weighed against the tens of millions of people who will die if climate change continues unabated. Every recyclable can we throw in the trash is a silent vote for a global holocaust.

But no doubt it immediately occurred to you to respond: No single one of us is responsible for all that. There’s no way I myself could possibly cut enough carbon emissions to significantly reduce climate change—indeed, probably not even enough to save a single human life (though maybe). This is certainly true; the error lies in thinking that this somehow absolves us of the responsibility to do our share.

I think part of what makes the Tragedy of the Commons so different from the Prisoner’s Dilemma, at least psychologically, is that the latter has an identifiable victim: we know we are specifically hurting that person more than we are helping ourselves. We may even know their name (and if we don’t, we’re more likely to defect—simply being on the Internet makes people more aggressive because they don’t interact face-to-face). In the Tragedy of the Commons, it is often the case that we don’t know who any of our victims are; moreover, it’s quite likely that we harm each one less than we benefit ourselves—even though the total harm we do outweighs the benefit we gain.

Suppose that driving a gas-guzzling car gives me 1 milliQALY of happiness, but takes away an average of 1 nanoQALY from everyone else in the world. A nanoQALY is tiny! Negligible, even, right? One billionth of a year, a mere 30 milliseconds! Literally less than the blink of an eye. But take away 30 milliseconds from everyone on Earth and you have taken away 7 years of human life overall. Do that 10 times, and statistically one more person is dead because of you. And you have gained only 10 milliQALY, roughly the value of $300 to a typical American. Would you kill someone for $300?
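
Here is that arithmetic spelled out (a quick sketch of my own; the 7-billion population figure and the $30,000-per-QALY dollar conversion are assumptions chosen to match the numbers in the paragraph above):

```python
# Back-of-the-envelope check of the gas-guzzler example.
population = 7_000_000_000       # assumed world population
gain_per_trip = 1e-3             # 1 milliQALY of happiness for me
loss_per_person = 1e-9           # 1 nanoQALY taken from everyone else
trips = 10
dollars_per_qaly = 30_000        # assumed conversion, consistent with the $300 figure

total_loss = trips * loss_per_person * population    # QALYs destroyed across everyone
total_gain = trips * gain_per_trip                   # QALYs I gain

print(f"Lost:   {total_loss:.0f} QALYs (roughly one statistical life)")
print(f"Gained: {total_gain:.2f} QALYs (about ${total_gain * dollars_per_qaly:,.0f})")
```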

Peter Singer has argued that we should in fact think of it this way—when we cause a statistical death by our inaction, we should call it murder, just as if we had left a child to drown to keep our clothes from getting wet. I can’t agree with that. When you think seriously about the scale and uncertainty involved, it would be impossible to live at all if we were constantly trying to assess whether every action would lead to statistically more or less happiness to the aggregate of all human beings through all time. We would agonize over every cup of coffee, every new video game. In fact, the global economy would probably collapse because none of us would be able to work or willing to buy anything for fear of the consequences—and then whom would we be helping?

That uncertainty matters. Even the fact that there are other people who could do the job matters. If a child is drowning and there is a trained lifeguard right next to you, the lifeguard should go save the child, and if they don’t it’s their responsibility, not yours. Maybe if they don’t you should try; but really they should have been the one to do it.

But we must also not allow ourselves to simply fall into apathy, to do nothing simply because we cannot do everything. We cannot assess the consequences of every specific action into the indefinite future, but we can find general rules and patterns that govern the consequences of actions we might take. (This is the difference between act utilitarianism, which is unrealistic, and rule utilitarianism, which I believe is the proper foundation for moral understanding.)

Thus, I believe the solution to the Tragedy of the Commons is policy. It is to coordinate our actions together, and create enforcement mechanisms to ensure compliance with that coordinated effort. We don’t look at acts in isolation, but at policy systems holistically. The proper question is not “What should I do?” but “How should we live?”

In the short run, this can lead to results that seem deeply suboptimal—but in the long run, policy answers lead to sustainable solutions rather than quick fixes.

People are starving! Why don’t we just steal money from the rich and use it to feed people? Well, think about what would happen if we said that the property system can simply be unilaterally undermined if someone believes they are achieving good by doing so. The property system would essentially collapse, along with the economy as we know it. A policy answer to that same question might involve progressive taxation enacted by a democratic legislature—we agree, as a society, that it is justified to redistribute wealth from those who have much more than they need to those who have much less.

Our government is corrupt! We should launch a revolution! Think about how many people die when you launch a revolution. Think about past revolutions. While some did succeed in bringing about more just governments (e.g. the French Revolution, the American Revolution), they did so only after a long period of strife; and other revolutions (e.g. the Russian Revolution, the Iranian Revolution) have made things even worse. Revolution is extremely costly and highly unpredictable; we must use it only as a last resort against truly intractable tyranny. The policy answer is of course democracy; we establish a system of government that elects leaders based on votes, and then if they become corrupt we vote to remove them. (Sadly, we don’t seem so good about that second part—the US Congress has a 14% approval rating but a 95% re-election rate.)

And in terms of ecology, this means that berating ourselves for our sinfulness in forgetting to recycle or not buying a hybrid car does not solve the problem. (Not that it’s bad to recycle, drive a hybrid car, and eat vegetarian—by all means, do these things. But it’s not enough.) We need a policy solution, something like a carbon tax or cap-and-trade that will enforce incentives against excessive carbon emissions.

In case you don’t think politics makes a difference, all of the Democratic candidates for President have proposed such plans—Bernie Sanders favors a carbon tax, Martin O’Malley supports an aggressive cap-and-trade plan, and Hillary Clinton favors heavily subsidizing wind and solar power. The Republican candidates, on the other hand? Most of them don’t even believe in climate change. Chris Christie and Carly Fiorina at least accept the basic scientific facts, but (1) they are very unlikely to win at this point and (2) even they haven’t announced any specific policy proposals for dealing with it.

This is why voting is so important. We can’t do enough on our own; the coordination problem is too large. We need to elect politicians who will make policy. We need to use the systems of coordination enforcement that we have built over generations—and that is fundamentally what a government is, a system of coordination enforcement. Only then can we overcome the tendency among human beings to become apathetic and short-sighted when faced with a Tragedy of the Commons.

The Prisoner’s Dilemma

JDN 2457348

When this post officially goes live, it will have been one full week since I launched my Patreon, on which I’ve already received enough support to be more than halfway to my first funding goal. After this post, I will be far enough ahead in posting that I can release every post one full week ahead of time for my Patreon patrons (can I just call them Patreons?).

It’s actually fitting that today’s topic is the Prisoner’s Dilemma, for Patreon is a great example of how real human beings can find solutions to this problem even if infinite identical psychopaths could not.

The Prisoner’s Dilemma is the most fundamental problem in game theory—arguably the reason game theory is worth bothering with in the first place. There is a standard story that people generally tell to set up the dilemma, but honestly I find that it obscures more than it illuminates. You can find it in the Wikipedia article if you’re interested.

The basic idea of the Prisoner’s Dilemma is that there are many times in life when you have a choice: You can do the nice thing and cooperate, which costs you something, but benefits the other person more; or you can do the selfish thing and defect, which benefits you but harms the other person more.

The game can basically be defined as four possibilities: If you both cooperate, you each get 1 point. If you both defect, you each get 0 points. If you cooperate when the other player defects, you lose 1 point while the other player gets 2 points. If you defect when the other player cooperates, you get 2 points while the other player loses 1 point.

             P2 Cooperate    P2 Defect
P1 Cooperate    +1, +1        -1, +2
P1 Defect       +2, -1         0,  0

These games are nonzero-sum, meaning that the total amount of benefit or harm incurred is not constant; it depends upon what players choose to do. In my example, the total benefit varies from +2 (both cooperate) to +1 (one cooperates, one defects) to 0 (both defect).

The answer which is “neat, plausible, and wrong” (to use Mencken’s oft-misquoted turn of phrase) is to reason this way: If the other player cooperates, I can get +1 if I cooperate, or +2 if I defect. So I should defect. If the other player defects, I can get -1 if I cooperate, or 0 if I defect. So I should defect. In either case I defect, therefore I should always defect.
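
You can check both halves of that reasoning mechanically. This little sketch (my own, using the payoffs from the table above) confirms that Defect is a strictly dominant strategy, and also that mutual defection leaves both players worse off than mutual cooperation:

```python
C, D = "cooperate", "defect"
# (my move, their move) -> my payoff, taken from the payoff table above
payoffs = {(C, C): 1, (C, D): -1, (D, C): 2, (D, D): 0}

for theirs in (C, D):
    assert payoffs[(D, theirs)] > payoffs[(C, theirs)]   # defecting always pays more...
assert payoffs[(C, C)] > payoffs[(D, D)]                 # ...yet both prefer (C, C) to (D, D)

print("Defect dominates, but mutual cooperation beats mutual defection.")
```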

The problem with this argument is that your behavior affects the other player. You can’t simply hold their behavior fixed when making your choice. If you always defect, the other player has no incentive to cooperate, so you both always defect and get 0. But if you credibly promise to cooperate every time they also cooperate, you create an incentive to cooperate that can get you both +1 instead.

If there were a fixed amount of benefit, the game would be zero-sum, and cooperation would always be damaging yourself. In zero-sum games, the assumption that acting selfishly maximizes your payoffs is correct; we could still debate whether it’s necessarily more rational (I don’t think it’s always irrational to harm yourself to benefit someone else an equal amount), but it definitely is what maximizes your money.

But in nonzero-sum games, that assumption no longer holds; we can both end up better off by cooperating than we would have been if we had both defected.

Below is a very simple zero-sum game (notice how indeed in each outcome, the payoffs sum to zero; any zero-sum game can be written so that this is so, hence the name):

                      Player 2 cooperates    Player 2 defects
Player 1 cooperates          0,  0                -1, +1
Player 1 defects            +1, -1                 0,  0

In that game, there really is no reason for you to cooperate; you make yourself no better off if they cooperate, and you give them a strong incentive to defect and make you worse off. But that game is not a Prisoner’s Dilemma, even though it may look superficially similar.

The real world, however, is full of variations on the Prisoner’s Dilemma. This sort of situation is fundamental to our experience; it probably happens to you multiple times every single day.

When you finish eating at a restaurant, you could pay the bill (cooperate) or you could dine and dash (defect). When you are waiting in line, you could quietly take your place in the queue (cooperate) or you could cut ahead of people (defect). If you’re married, you could stay faithful to your spouse (cooperate) or you could cheat on them (defect). You could pay more for the shoes made in the USA (cooperate), or buy the cheap shoes that were made in a sweatshop (defect). You could pay more to buy a more fuel-efficient car (cooperate), or buy that cheap gas-guzzler even though you know how much it pollutes (defect). Most of us cooperate most of the time, but occasionally are tempted into defecting.

The “Prisoner’s Dilemma” is honestly not much of a dilemma. A lot of neoclassical economists really struggle with it; their model of rational behavior is so narrow that it keeps putting out the result that they are supposed to always defect, but they know that this results in a bad outcome. More recently we’ve done experiments and we find that very few people actually behave that way (though typically neoclassical economists do), and also that people end up making more money in these experimental games than they would if they behaved as neoclassical economics says would be optimal.

Let me repeat that: People make more money than they would if they acted according to what neoclassical economists say is optimal. I think that’s why it feels like such a paradox to them; their twin ideals of infinite identical psychopaths and maximizing the money you make have shown themselves to be at odds with one another.

But in fact, it’s really not that paradoxical: Rationality doesn’t mean being maximally selfish at every opportunity. It also doesn’t mean maximizing the money you make, but even if it did, it still wouldn’t mean being maximally selfish.

We have tested experimentally what sort of strategy is most effective at making the most money in the Prisoner’s Dilemma; basically we make a bunch of competing computer programs to play the game against one another for points, and tally up the points. When we do that, the winner is almost always a remarkably simple strategy, called “Tit for Tat”. If your opponent cooperated last time, cooperate. If your opponent defected last time, defect. Reward cooperation, punish defection.

In more complex cases (such as allowing for random errors in behavior), some subtle variations on that strategy turn out to be better, but are still basically focused around rewarding cooperation and punishing defection.
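
Here is a toy version of such a tournament (my own sketch with a deliberately tiny, hand-picked pool of strategies and a 200-round match length; Axelrod's real tournaments had dozens of entries). In this small pool Tit for Tat ties with the grudge-holding strategy for first place; in richer pools that include strategies which defect occasionally, permanent grudges become costly and Tit for Tat pulls ahead. Notice also that Always Defect beats every opponent head-to-head and still finishes behind the conditional cooperators overall.

```python
C, D = "C", "D"
PAYOFF = {(C, C): (1, 1), (C, D): (-1, 2), (D, C): (2, -1), (D, D): (0, 0)}

def tit_for_tat(own, other):
    return C if not other else other[-1]        # copy the opponent's last move

def grudger(own, other):
    return D if D in other else C               # cooperate until betrayed, then never again

def always_cooperate(own, other):
    return C

def always_defect(own, other):
    return D

def play(s1, s2, rounds=200):
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        total1 += p1; total2 += p2
    return total1, total2

pool = {"Tit for Tat": tit_for_tat, "Grudger": grudger,
        "Always Cooperate": always_cooperate, "Always Defect": always_defect}
names = list(pool)
totals = dict.fromkeys(names, 0)
for i, a in enumerate(names):
    for b in names[i:]:                         # round-robin, including self-play
        score_a, score_b = play(pool[a], pool[b])
        totals[a] += score_a
        if b != a:
            totals[b] += score_b

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:>16}: {score}")
```
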
This probably seems quite intuitive, yes? It may even be the strategy that occurred to you to try when you first learned about the game. This strategy comes naturally to humans, not because it is actually obvious as a mathematical result (the obvious mathematical result is the neoclassical one that turns out to be wrong), but because it is effective—human beings evolved to think this way because it gave us the ability to form stable cooperative coalitions.

This is what gives us our enormous evolutionary advantage over just about everything else; we have transcended the limitations of a single individual and now work together in much larger groups. E.O. Wilson likes to call us “eusocial”, a term formally applied only to a very narrow range of species such as ants and bees (and, for some reason, naked mole rats); but I don’t think this is actually strong enough, because human beings are social in a way that even ants are not. We cooperate on the scale of millions of individuals who are basically unrelated genetically (or very distantly related). That is what makes us the species that eradicates viruses and lands robots on other planets. Much more so than intelligence per se, the human superpower is cooperation.

Indeed, it is not a great exaggeration to say that morality exists as a concept in the human mind because cooperation is optimal in many nonzero-sum games such as these. If the world were zero-sum, morality wouldn’t work; the immoral action would always make you better off, and the bad guys would always win. We probably would never even have evolved to think in moral terms, because any individual or species that started to go that direction would be rapidly outcompeted by those that remained steadfastly selfish.