The Prisoner’s Dilemma

JDN 2457348
When this post officially goes live, it will have been one full week since I launched my Patreon, on which I’ve already received enough support to be more than halfway to my first funding goal. After this post, I will be far enough ahead in posting that I can release every post one full week ahead of time for my Patreon patrons (can I just call them Patreons?).

It’s actually fitting that today’s topic is the Prisoner’s Dilemma, for Patreon is a great example of how real human beings can find solutions to this problem even if infinite identical psychopaths could not.

The Prisoner’s Dilemma is the most fundamental problem in game theory—arguably the reason game theory is worth bothering with in the first place. There is a standard story that people generally tell to set up the dilemma, but honestly I find that it obscures more than it illuminates. You can find it in the Wikipedia article if you’re interested.

The basic idea of the Prisoner’s Dilemma is that there are many times in life when you have a choice: You can do the nice thing and cooperate, which costs you something, but benefits the other person more; or you can do the selfish thing and defect, which benefits you but harms the other person more.

The game can basically be defined by its four possible outcomes: If you both cooperate, you each get 1 point. If you both defect, you each get 0 points. If you cooperate when the other player defects, you lose 1 point while the other player gets 2 points. If you defect when the other player cooperates, you get 2 points while the other player loses 1 point.

               P2 Cooperate   P2 Defect
P1 Cooperate     +1, +1        -1, +2
P1 Defect        +2, -1         0, 0
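
Since we’ll keep coming back to these numbers, here is a minimal sketch of that payoff table in Python (the dictionary name and layout are just my own choices, nothing standard); it lists the payoff pair for each combination of moves and prints the total payoff of each outcome:

```python
# The payoff table above, written as a lookup from (P1 move, P2 move)
# to (P1 payoff, P2 payoff). "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (1, 1),
    ("C", "D"): (-1, 2),
    ("D", "C"): (2, -1),
    ("D", "D"): (0, 0),
}

for (p1_move, p2_move), (p1_pay, p2_pay) in PAYOFFS.items():
    total = p1_pay + p2_pay
    print(f"P1 plays {p1_move}, P2 plays {p2_move}: "
          f"payoffs {p1_pay:+d}, {p2_pay:+d} (total {total:+d})")
```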

These games are nonzero-sum, meaning that the total amount of benefit or harm incurred is not constant; it depends upon what players choose to do. In my example, the total benefit varies from +2 (both cooperate) to +1 (one cooperates, one defects) to 0 (both defect).

The answer which is “neat, plausible, and wrong” (to use Mencken’s oft-misquoted turn of phrase) is to reason this way: If the other player cooperates, I can get +1 if I cooperate, or +2 if I defect. So I should defect. If the other player defects, I can get -1 if I cooperate, or 0 if I defect. So I should defect. In either case I defect, therefore I should always defect.

The problem with this argument is that your behavior affects the other player. You can’t simply hold their behavior fixed when making your choice. If you always defect, the other player has no incentive to cooperate, so you both always defect and get 0. But if you credibly promise to cooperate every time they also cooperate, you create an incentive to cooperate that can get you both +1 instead.
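 
To put both halves of that argument in code, here is a short sketch (reusing my hypothetical PAYOFFS table from above): it first checks the one-shot reasoning, where defecting really does pay more whatever the other player does, and then compares what a pair of players accumulates over repeated rounds under mutual cooperation versus mutual defection.

```python
PAYOFFS = {("C", "C"): (1, 1), ("C", "D"): (-1, 2),
           ("D", "C"): (2, -1), ("D", "D"): (0, 0)}

# One-shot reasoning: holding the other player's move fixed, defecting pays more.
for their_move in ("C", "D"):
    if_i_cooperate = PAYOFFS[("C", their_move)][0]
    if_i_defect = PAYOFFS[("D", their_move)][0]
    print(f"They play {their_move}: I get {if_i_cooperate:+d} by cooperating, "
          f"{if_i_defect:+d} by defecting")

# But over repeated rounds, a pair that sustains cooperation beats a pair that
# always defects: +1 per round each versus 0 per round each.
rounds = 10
print(f"Over {rounds} rounds: mutual cooperation earns "
      f"{rounds * PAYOFFS[('C', 'C')][0]:+d} each, "
      f"mutual defection earns {rounds * PAYOFFS[('D', 'D')][0]:+d} each")
```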

If there were a fixed amount of benefit, the game would be zero-sum, and cooperating would always mean damaging yourself. In zero-sum games, the assumption that acting selfishly maximizes your payoff is correct; we could still debate whether it’s necessarily more rational (I don’t think it’s always irrational to harm yourself to benefit someone else an equal amount), but it definitely is what maximizes your own payoff.

But in nonzero-sum games, that assumption no longer holds; we can both end up better off by cooperating than we would have been if we had both defected.
Below is a very simple zero-sum game (notice how indeed in each outcome, the payoffs sum to zero; any zero-sum game can be written so that this is so, hence the name):

                      Player 2 cooperates   Player 2 defects
Player 1 cooperates          0, 0                -1, +1
Player 1 defects            +1, -1                0, 0
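
If you want to check the “sums to zero” claim mechanically, here is a tiny sketch using the same dictionary layout as before (again, just my own encoding of the table):

```python
# The zero-sum table above: in every outcome, the two payoffs cancel out exactly.
ZERO_SUM = {("C", "C"): (0, 0), ("C", "D"): (-1, 1),
            ("D", "C"): (1, -1), ("D", "D"): (0, 0)}

assert all(p1 + p2 == 0 for p1, p2 in ZERO_SUM.values())
print("In every outcome, one player's gain is exactly the other's loss.")
```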

In that game, there really is no reason for you to cooperate: cooperating gains you nothing even if they cooperate, and if they defect it simply hands them a gain at your expense. But that game is not a Prisoner’s Dilemma, even though it may look superficially similar.

The real world, however, is full of variations on the Prisoner’s Dilemma. This sort of situation is fundamental to our experience; it probably happens to you multiple times every single day.
When you finish eating at a restaurant, you could pay the bill (cooperate) or you could dine and dash (defect). When you are waiting in line, you could quietly take your place in the queue (cooperate) or you could cut ahead of people (defect). If you’re married, you could stay faithful to your spouse (cooperate) or you could cheat on them (defect). You could pay more for the shoes made in the USA (cooperate), or buy the cheap shoes that were made in a sweatshop (defect). You could pay more to buy a more fuel-efficient car (cooperate), or buy that cheap gas-guzzler even though you know how much it pollutes (defect). Most of us cooperate most of the time, but occasionally are tempted into defecting.

The “Prisoner’s Dilemma” is honestly not much of a dilemma. A lot of neoclassical economists really struggle with it; their model of rational behavior is so narrow that it keeps spitting out the result that you are supposed to always defect, even though they know this leads to a bad outcome. More recently we’ve run experiments, and we find that very few people actually behave that way (though neoclassical economists themselves typically do), and also that people end up making more money in these experimental games than they would if they behaved the way neoclassical economics says is optimal.

Let me repeat that: People make more money than they would if they acted according to what neoclassical economists say is optimal. I think that’s why it feels like such a paradox to them; their twin ideals of infinite identical psychopaths and maximizing the money you make have turned out to be at odds with one another.

But in fact, it’s really not that paradoxical: Rationality doesn’t mean being maximally selfish at every opportunity. It also doesn’t mean maximizing the money you make, but even if it did, it still wouldn’t mean being maximally selfish.

We have tested experimentally what sort of strategy is most effective at making the most money in the repeated Prisoner’s Dilemma: basically, we set a bunch of competing computer programs to play the game against one another over and over, and tally up the points. When we do that, the winner is almost always a remarkably simple strategy, called “Tit for Tat”: If your opponent cooperated last time, cooperate. If your opponent defected last time, defect. Reward cooperation, punish defection.
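
Here is a toy version of that kind of round-robin tournament—a sketch only, not the actual historical setup or entrants. Three strategies of my own choosing (Tit for Tat, an unforgiving “grudger,” and unconditional defection) play each other repeatedly using the payoff numbers from the table above. With this particular lineup the two reciprocating strategies come out far ahead of the unconditional defector; the real tournaments had dozens of entrants, so don’t read too much into the exact tallies.

```python
# A toy round-robin tournament. Each strategy is a function that maps
# (my_history, their_history) to a move, "C" (cooperate) or "D" (defect).
PAYOFFS = {("C", "C"): (1, 1), ("C", "D"): (-1, 2),
           ("D", "C"): (2, -1), ("D", "D"): (0, 0)}
ROUNDS = 100

def tit_for_tat(mine, theirs):
    # Cooperate first; afterwards, copy whatever the opponent did last round.
    return "C" if not theirs else theirs[-1]

def grudger(mine, theirs):
    # Cooperate until the opponent defects even once, then defect forever.
    return "D" if "D" in theirs else "C"

def always_defect(mine, theirs):
    return "D"

def play_match(strat_a, strat_b, rounds=ROUNDS):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"Tit for Tat": tit_for_tat,
              "Grudger": grudger,
              "Always Defect": always_defect}
totals = {name: 0 for name in strategies}
names = list(strategies)
for i, name_a in enumerate(names):
    for name_b in names[i + 1:]:
        score_a, score_b = play_match(strategies[name_a], strategies[name_b])
        totals[name_a] += score_a
        totals[name_b] += score_b

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total}")
```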

In more complex cases (such as allowing for random errors in behavior), some subtle variations on that strategy turn out to be better, but are still basically focused around rewarding cooperation and punishing defection.
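
One well-known variation along these lines is a more forgiving Tit for Tat that occasionally cooperates even after being defected against, so that a single accidental defection doesn’t echo back and forth forever. Here is a rough sketch of how you might add random errors and a forgiveness parameter to a match like the one above; the specific numbers are arbitrary, and since the errors are random the scores will vary from run to run.

```python
import random

PAYOFFS = {("C", "C"): (1, 1), ("C", "D"): (-1, 2),
           ("D", "C"): (2, -1), ("D", "D"): (0, 0)}

def tit_for_tat(theirs):
    return "C" if not theirs else theirs[-1]

def generous_tit_for_tat(theirs, forgiveness=0.1):
    # Like Tit for Tat, but after a defection it sometimes forgives anyway,
    # which keeps one accidental defection from triggering endless retaliation.
    if not theirs or theirs[-1] == "C":
        return "C"
    return "C" if random.random() < forgiveness else "D"

def noisy_match(strat_a, strat_b, rounds=1000, error_rate=0.05):
    # Each intended move is flipped with probability error_rate, so even two
    # would-be cooperators occasionally "defect" by accident.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)
        move_b = strat_b(hist_a)
        if random.random() < error_rate:
            move_a = "D" if move_a == "C" else "C"
        if random.random() < error_rate:
            move_b = "D" if move_b == "C" else "C"
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Run this a few times: with noise, a pair of plain Tit for Tats tends to get
# dragged into retaliation spirals, while the forgiving pair usually recovers.
print("Tit for Tat vs itself (with noise):", noisy_match(tit_for_tat, tit_for_tat))
print("Generous TFT vs itself (with noise):",
      noisy_match(generous_tit_for_tat, generous_tit_for_tat))
```
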
This strategy probably seems quite intuitive, yes? It may even be the strategy that occurred to you to try when you first learned about the game. It comes naturally to humans, not because it is actually obvious as a mathematical result (the obvious mathematical result is the neoclassical one that turns out to be wrong), but because it is effective: human beings evolved to think this way because it gave us the ability to form stable cooperative coalitions.

This is what gives us our enormous evolutionary advantage over just about everything else; we have transcended the limitations of a single individual and now work together in much larger groups. E.O. Wilson likes to call us “eusocial”, a term formally applied only to a very narrow range of species such as ants and bees (and, for some reason, naked mole rats); but I don’t think this is actually strong enough, because human beings are social in a way that even ants are not. We cooperate on the scale of millions of individuals who are basically unrelated genetically (or very distantly related). That is what makes us the species that eradicates viruses and lands robots on other planets. Much more so than intelligence per se, the human superpower is cooperation.

Indeed, it is not a great exaggeration to say that morality exists as a concept in the human mind because cooperation is optimal in many nonzero-sum games such as these. If the world were zero-sum, morality wouldn’t work; the immoral action would always make you better off, and the bad guys would always win. We probably would never even have evolved to think in moral terms, because any individual or species that started to go that direction would be rapidly outcompeted by those that remained steadfastly selfish.
