Jun 19 JDN 2459780
There’s something odd about the debate in evolutionary theory about multilevel selection (sometimes called “group selection”). On one side are the mainstream theorists who insist that selection only happens at the individual level (or is it the gene level?); on the other are devout group-selectionists who insist that group selection is everywhere and the only possible explanation of altruism.
Both of these sides are wrong. Selection does happen at multiple levels, but it’s entirely possible for altruism to emerge without it.
The usual argument by the mainstream is that group selection would require the implausible assumption that groups live and die on the same timescale as individuals. The usual argument by group-selectionists is that there’s no other explanation for why humans are so altruistic. But neither of these things is true.
There is plenty of discussion out there about why group selection isn’t necessary for altruism: Kin selection is probably the clearest example. So I’m going to focus on showing that group selection can work even when groups live and die much slower than individuals.
To do this, I would like to present a model. It’s very pared-down and simplified, but it is nevertheless a valid evolutionary game theory model.
Consider a world where the only kind of interaction is Iterated Prisoner’s Dilemmas. For the uninitiated, an Iterated Prisoner’s Dilemma is as follows.
Time goes on forever. At each point in time, some people are born, and some people die; people have a limited lifespan and some general idea of how long it is, but nobody can predict for sure when they will die. (So far, this isn’t even a model; all of this is literally true.)
In this world, people are randomly matched with others one on one, and they play a game together, where each person can choose either “Cooperate” or “Defect”. They choose in secret and reveal simultaneously. If both choose “Cooperate”, each gets 3 points. If both choose “Defect”, each gets 2 points. If one chooses “Cooperate” and the other chooses “Defect”, the “Cooperate” person gets only 1 point while the “Defect” person gets 4 points.
What are these points? Since this is evolution, let’s call them offspring. An average lifetime score of 4 points means 4 offspring per couple per generation—you get rapid population growth. 1 point means 1 offspring per couple per generation—your genes will gradually die out.
That makes the payoffs follow this table:
|   | C | D |
|---|---|---|
| **C** | 3, 3 | 1, 4 |
| **D** | 4, 1 | 2, 2 |
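The payoff rules above can be encoded directly. Here’s a minimal sketch (the dictionary encoding and function names are my own, not from the post):

```python
# Payoff lookup: (my_move, partner_move) -> my points, per the table above.
PAYOFFS = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 1,  # I cooperate, partner defects: the "sucker's payoff"
    ("D", "C"): 4,  # I defect, partner cooperates: the "temptation" payoff
    ("D", "D"): 2,  # mutual defection
}

def play_round(move_a, move_b):
    """Return (points_a, points_b) for one simultaneous round."""
    return PAYOFFS[(move_a, move_b)], PAYOFFS[(move_b, move_a)]

print(play_round("C", "D"))  # -> (1, 4)
```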
There are two very notable properties of this game; together they seem paradoxical, which is probably why the game has such broad applicability and such enduring popularity.
- Everyone, as a group, is always better off if more people choose “Cooperate”.
- Each person, as an individual, regardless of what the others do, is always better off choosing “Defect”.
Thus, Iterated Prisoner’s Dilemmas are ideal for understanding altruism, as they directly model a conflict between individual self-interest and group welfare. (They didn’t do a good job of explaining it in A Beautiful Mind, but that one line in particular was correct: the Prisoner’s Dilemma is precisely what proves “Adam Smith was wrong.”)
Each person is matched with someone else at random for a few rounds, and then re-matched with someone else; and nobody knows how long they will be with any particular person. (For technical reasons, with these particular payoffs, the chance of going to another round needs to be at least 50%; but that’s not too important for what I have to say here.)
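That 50% threshold can be checked directly. Writing the continuation probability as δ (my notation, not the post’s): always cooperating with a Tit-for-Tat partner earns 3 points per round, for an expected total of 3/(1 − δ), while always defecting earns 4 in the first round and 2 per round thereafter, for 4 + 2δ/(1 − δ). Cooperation is at least as good exactly when δ ≥ 1/2. A quick numeric sketch:

```python
def expected_cooperate(delta):
    # Always cooperating with a Tit-for-Tat partner: 3 points per round,
    # continuation probability delta -> geometric series 3 / (1 - delta).
    return 3 / (1 - delta)

def expected_defect(delta):
    # Always defecting against Tit-for-Tat: 4 points in round one,
    # then mutual defection at 2 points per round thereafter.
    return 4 + 2 * delta / (1 - delta)

for delta in (0.4, 0.5, 0.6):
    print(delta, expected_cooperate(delta) >= expected_defect(delta))
# Cooperation catches up exactly at delta = 0.5.
```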
Now, suppose there are three tribes of people, who are related by family ties but also still occasionally intermingle with one another.
In the Hobbes tribe, people always play “Defect”.
In the Rousseau tribe, people always play “Cooperate”.

In the Axelrod tribe, people play “Tit for Tat”: they start by cooperating, and from then on they do whatever their partner did in the previous round.
How will these tribes evolve? In the long run, will all tribes survive, or will some prevail over others?
The Rousseau tribe seems quite nice; everyone always gets along! Unfortunately, the Rousseau tribe will inevitably and catastrophically collapse. As soon as a single Hobbes gets in, or a mutation arises to make someone behave like a Hobbes, that individual will become far more successful than everyone else, have vastly more offspring, and ultimately take over the entire population.
The Hobbes tribe seems pretty bad, but it’ll be stable. If a Rousseau should come visit, they’ll just be ruthlessly exploited, which makes the Hobbeses better off. If an Axelrod arrives, they’ll learn not to be exploited (after the first encounter), but they won’t do any better than the Hobbeses do.
What about the Axelrod tribe? They seem similar to the Rousseau tribe, because everyone is choosing “Cooperate” all the time—will they suffer the same fate? No, they won’t! They’ll do just fine, it turns out. Should a Rousseau come to visit, nobody will even notice; they’ll just keep on choosing “Cooperate” and everything will be fine. And what if a Hobbes comes? They’ll try to exploit the Axelrods, and succeed at first—but soon enough they will be punished for their sins, and in the long run they’ll be worse off (this is why the probability of continuing needs to be sufficiently high).
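These matchups can be simulated directly. Here’s a sketch of a fixed-length match (the 10-round length is my own arbitrary choice; in the model above, match length is random):

```python
def match(strategy_a, strategy_b, rounds):
    """Play a match; each strategy sees its partner's previous move."""
    payoffs = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
               ("D", "C"): (4, 1), ("D", "D"): (2, 2)}
    score_a = score_b = 0
    last_a = last_b = None  # no history before the first round
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        pa, pb = payoffs[(move_a, move_b)]
        score_a += pa
        score_b += pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

hobbes = lambda last: "D"  # always defect
axelrod = lambda last: "C" if last in (None, "C") else "D"  # Tit for Tat

print(match(hobbes, axelrod, 10))   # -> (22, 19)
print(match(axelrod, axelrod, 10))  # -> (30, 30)
```

The Hobbes exploits the Axelrod exactly once, then faces mutual defection: 2.2 points per round versus the 3.0 per round that two Axelrods earn together.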
The net result, then, will be that the Rousseau tribe dies out and only the Hobbes and Axelrod tribes remain. But that’s not the end of the story.
Look back at that payoff table. Both tribes are stable, but the Hobbeses are getting 2 each round, while the Axelrods are getting 3. Remember that these are offspring per couple per generation. This means that the Hobbes tribe will have a roughly constant population, while the Axelrods will have an increasing population.
If the two tribes then come into conflict, perhaps competing over resources, the larger population will most likely prevail. This means that, in the long run, the Axelrod tribe will come to dominate. In the end, all the world will be ruled by Axelrods.
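The growth-rate gap compounds quickly. Treating each couple’s per-round score as its offspring per generation, as above, here’s a sketch (the starting populations of 100 and the 10-generation horizon are my own arbitrary choices):

```python
hobbes_pop = axelrod_pop = 100.0
for generation in range(10):
    # 2 offspring per couple per generation -> the population replaces itself.
    hobbes_pop *= 2 / 2
    # 3 offspring per couple per generation -> 1.5x growth per generation.
    axelrod_pop *= 3 / 2

print(hobbes_pop, axelrod_pop)  # Hobbeses stay at 100; Axelrods pass 5,000
```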
And indeed, most human beings behave like Axelrods: We’re nice to most people most of the time, but we’re no chumps. Betray our trust, and you will be punished severely. (It seems we also have a small incursion of Hobbeses: We call them psychopaths. Perhaps there are a few Rousseaus among us as well, whom the Hobbeses exploit.)
What is this? It’s multilevel selection. It’s group selection, if you like that term. There’s clearly no better way to describe it.
Moreover, we can’t simply stop at reciprocal altruism, as most mainstream theorists do; yes, Axelrods exhibit reciprocal altruism. But that’s not the only equilibrium! Why is reciprocal altruism so common? Why in the real world are there fifty Axelrods for every Hobbes? Multilevel selection.
And at no point did I assume either (1) that individual selection wasn’t operating, or (2) that the timescales of groups and individuals were the same. Indeed, I’m explicitly assuming the opposite: Individual selection continues to work at every generation, and groups only live or die over many generations.
The key insight that makes this possible is that the game is iterated—it happens over many rounds, and nobody knows exactly how many. This results in multiple Nash equilibria for individual selection, and then group selection can occur over equilibria.
This is by no means restricted to the Prisoner’s Dilemma. In fact, any nontrivial game will result in multiple equilibria when it is iterated, and group selection should always favor the groups that choose a relatively cooperative, efficient outcome. As long as such a strategy emerges by mutation, and gets some chance to get a foothold, it will be successful in the long run.
Indeed, since these conditions don’t seem all that difficult to meet, we would expect that group selection should actually occur quite frequently, and should be a major explanation for a lot of important forms of altruism.
And in fact this seems to be the case. Humans look awfully group-selected. (Like I said, we behave very much like Axelrods.) Many other social species, such as apes, dolphins, and wolves, do as well. There is altruism in nature that doesn’t look group-selected, for instance among eusocial insects; but much of the really impressive altruism seems more like equilibrium selection at the group level than it does like direct selection at the individual level.
Even multicellular life can be considered group selection: A bunch of cells “agree” to set aside some of their own interest in self-replication in favor of supporting a common, unified whole. (And should any mutated cells try to defect and multiply out of control, what happens? We call that cancer.) This can only work when there are multiple equilibria to select from at the individual level—but there nearly always are.