The evolution of human cooperation

Jun 17 JDN 2458287

If alien lifeforms were observing humans (assuming they didn’t turn out the same way—which they actually might, for reasons I’ll get to shortly), the thing that would probably baffle them the most about us is how we organize ourselves into groups. Each individual may be part of several groups at once, and some groups are closer-knit than others; but the most tightly-knit groups exhibit extremely high levels of cooperation, coordination, and self-sacrifice.

They might think at first that we are eusocial, like ants or bees; but upon closer study they would see that our groups are not very strongly correlated with genetic relatedness. We are somewhat more closely related to those in our groups than to those outside, usually; but it’s a remarkably weak effect, especially compared to the extremely high relatedness of worker bees in a hive. No, to a first approximation, these groups are made up of unrelated humans; yet their level of cooperation is equal to if not greater than that exhibited by the worker bees.

However, the alien anthropologists would find that it is not that humans are simply predisposed toward extremely high altruism and cooperation in general; when two human groups come into conflict, they are capable of the most extreme forms of violence imaginable. Human history is full of atrocities that combine the indifferent brutality of nature red in tooth and claw with the boundless ingenuity of a technologically advanced species. Yet except for a small proportion perpetrated by individual humans with some sort of mental pathology, these atrocities are invariably committed by one unified group against another. Even in genocide there is cooperation.

Humans are not entirely selfish. But nor are they paragons of universal altruism (though some of them aspire to be). Humans engage in a highly selective form of altruism—virtually boundless for the in-group, almost negligible for the out-group. Humans are tribal.

Being a human yourself, this probably doesn’t strike you as particularly strange. Indeed, I’ve mentioned it many times previously on this blog. But it is actually quite strange, from an evolutionary perspective; most organisms are not like this.

As I said earlier, there is actually reason to think that our alien anthropologist would come from a species with similar traits, simply because such cooperation may be necessary to achieve a full-scale technological civilization, let alone the capacity for interstellar travel. But there might be other possibilities; perhaps they come from a eusocial species, and their large-scale cooperation is within an extremely large hive.

It’s true that most organisms are not entirely selfish. There are various forms of cooperation within and even across species. But these usually involve only close kin, and otherwise involve highly stable arrangements of mutual benefit. There is nothing like the large-scale cooperation between anonymous unrelated individuals that is exhibited by all human societies.

How would such an unusual trait evolve? It must require a very particular set of circumstances, since it only seems to have evolved in a single species (or at most a handful of species, since other primates and cetaceans display some of the same characteristics).

Once evolved, this trait is clearly advantageous; indeed it turned a local apex predator into a species so successful that it can actually intentionally control the evolution of other species. Humans have become a hegemon over the entire global ecology, for better or for worse. Cooperation gave us a level of efficiency in producing the necessities of survival so great that at this point most of us spend our time working on completely different tasks. If you are not a farmer or a hunter or a carpenter (and frankly, even if you are a farmer with a tractor, a hunter with a rifle, or a carpenter with a table saw), you are doing work that would simply not have been possible without very large-scale human cooperation.

This extremely high fitness benefit only makes the matter more puzzling, however: If the benefits are so great, why don’t more species do this? There must be some other requirements that other species were unable to meet.

One clear requirement is high intelligence. As frustrating as it may be to be a human and watch other humans kill each other over foolish grievances, this is actually evidence of how smart humans are, biologically speaking. We might wish we were even smarter still—but most species don’t have the intelligence to make it even as far as we have.

But high intelligence is likely not sufficient. We can’t be sure of that, since we haven’t encountered any other species with equal intelligence; but what we do know is that even Homo sapiens didn’t coordinate on anything like our current scale for tens of thousands of years. We may have had tribal instincts, but if so they were largely confined to a very small scale. Something happened, about 50,000 years ago or so—not very long ago in evolutionary time—that allowed us to increase that scale dramatically.

Was this a genetic change? It’s difficult to say. There could have been some subtle genetic mutation, something that wouldn’t show up in the fossil record. But more recent expansions in human cooperation to the level of the nation-state and beyond clearly can’t be genetic; they were much too fast for that. They must be a form of cultural evolution: The replicators being spread are ideas and norms—memes—rather than genes.

So perhaps the very early shift toward tribal cooperation was also a cultural one. Perhaps it began not as a genetic mutation but as an idea—perhaps a metaphor of “universal brotherhood” as we often still hear today. The tribes that believed this idea prospered; the tribes that didn’t were outcompeted or even directly destroyed.

This would explain why it had to be an intelligent species. We needed brains big enough to comprehend metaphors and generalize concepts. We needed enough social cognition to keep track of who was in the in-group and who was in the out-group.

If it was indeed a cultural shift, this should encourage us. (And since the most recent changes definitely were cultural, that is already quite encouraging.) We are not limited by our DNA to only care about a small group of close kin; we are capable of expanding our scale of unity and cooperation far beyond.

The real question is whether we can expand it to everyone. Unfortunately, there is some reason to think that this may not be possible. If our concept of tribal identity inherently requires both an in-group and an out-group, then we may never be able to include everyone. If we are only unified against an enemy, never simply for our own prosperity, world peace may forever remain a dream.

But I do have a work-around that I think is worth considering. Can we expand our concept of the out-group to include abstract concepts? With phrases like “The War on Poverty” and “The War on Terror”, it would seem in fact that we can. It feels awkward; it is somewhat imprecise—but then, so was the original metaphor of “universal brotherhood”. Our brains are flexible enough that they don’t actually seem to need the enemy to be a person; it can also be an idea. If this is right, then we can actually include everyone in our in-group, as long as we define the right abstract out-group. We can choose enemies like poverty, violence, cruelty, and despair instead of other nations or ethnic groups. If we must continue to fight a battle, let it be a battle against the pitiless indifference of the universe, rather than our fellow human beings.

Of course, the real challenge will be getting people to change their existing tribal identities. In the moment, these identities seem fundamentally intractable. But that can’t really be the case—for these identities have changed over historical time. Once-important categories have disappeared; new ones have arisen in their place. Someone in 4th-century Constantinople would find the conflict between Democrats and Republicans as baffling as we would find the conflict between Trinitarians and Arians. The ongoing oppression of Native American people by White people would be unfathomable to an 11th-century Onondaga, who could scarcely imagine a more foreign enemy than the Seneca just west of them. Even the conflict between Russia and NATO would probably seem strange to someone living in France in 1943, for whom Germany was the enemy and Russia was at least the enemy of the enemy—and many of those people are still alive.

I don’t know exactly how these tribal identities change (I’m working on it). It clearly isn’t as simple as convincing people with rational arguments. In fact, part of how it seems to work is that someone will shift their identity slowly enough that they can’t perceive the shift themselves. People rarely seem to appreciate, much less admit, how much their own minds have changed over time. So don’t ever expect to change someone’s identity in one sitting. Don’t even expect to do it in one year. But never forget that identities do change, even within an individual’s lifetime.

Is grade inflation a real problem?

Mar 4 JDN 2458182

You can’t spend much time teaching at the university level and not hear someone complain about “grade inflation”. Almost every professor seems to believe in it, and yet they must all be participating in it, if it’s really such a widespread problem.

This could be explained as a collective action problem, a Tragedy of the Commons: If the incentives are always to have the students with the highest grades—perhaps because of administrative pressure, or in order to get better reviews from students—then even if all professors would prefer a harsher grading scheme, no individual professor can afford to deviate from the prevailing norms.

But in fact I think there is a much simpler explanation: Grade inflation doesn’t exist.

In economic growth theory, economists make a sharp distinction between inflation—increase in prices without change in underlying fundamentals—and growth—increase in the real value of output. I contend that there is no such thing as grade inflation—what we are in fact observing is grade growth.

Am I saying that students are actually smarter now than they were 30 years ago?

Yes. That’s exactly what I’m saying.

But don’t take it from me. Take it from the decades of research on the Flynn Effect: IQ scores have been rising worldwide at a rate of about 0.3 IQ points per year for as long as we’ve been keeping good records. Students today are about 10 IQ points smarter than students 30 years ago—a 2018 IQ score of 95 is equivalent to a 1988 score of 105, which is equivalent to a 1958 score of 115. There is reason to think this trend won’t continue indefinitely, since the effect is mainly concentrated at the bottom end of the distribution; but it has continued for quite some time already.
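
For concreteness, here is a minimal sketch (in Python; nothing more than the arithmetic above, taking the 0.3-points-per-year figure as given) of how a score normed in one year translates onto an earlier year’s norms:

```python
# Minimal sketch of Flynn-effect renorming, assuming a constant gain of
# about 0.3 IQ points per year (the approximate figure cited above).

FLYNN_RATE = 0.3  # IQ points gained per year (an approximation)

def renorm(score, from_year, to_year):
    """Convert a score normed in from_year onto the norms of to_year.

    Later cohorts score higher on older norms, so moving a score back to
    an earlier norming year adds points.
    """
    return score + FLYNN_RATE * (from_year - to_year)

print(renorm(95, 2018, 1988))  # 104.0 -- roughly the 105 quoted above
print(renorm(95, 2018, 1958))  # 113.0 -- roughly the 115 quoted above
```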

This by itself would probably be enough to explain the observed increase in grades, but there’s more: College students are also a self-selected sample, admitted precisely because they were believed to be the smartest individuals in the application pool. Rising grades at top institutions are easily explained by rising selectivity at top schools: Harvard now accepts 5.6% of applicants. In 1942, Harvard accepted 92% of applicants. The odds of getting in have fallen from better than 11:1 in favor to about 17:1 against. Today, you need a 4.0 GPA, a 36 ACT in every category, glowing letters of recommendation, and hundreds of hours of extracurricular activities (or a family member who donated millions of dollars, of course) to get into Harvard. In the 1940s, you needed a high school diploma and a B average.
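
To make the odds arithmetic explicit, here is a trivial sketch using only the acceptance rates just quoted:

```python
# Convert an acceptance rate into odds of admission (admitted : rejected).
def odds_in_favor(acceptance_rate):
    return acceptance_rate / (1 - acceptance_rate)

# Harvard 1942: 92% accepted -> better than 11 to 1 in favor of getting in.
print(round(odds_in_favor(0.92), 1))       # ~11.5
# Harvard today: 5.6% accepted -> about 17 to 1 against.
print(round(1 / odds_in_favor(0.056), 1))  # ~16.9
```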

In fact, when educational researchers have tried to quantitatively study the phenomenon of “grade inflation”, they usually come back with the result that they simply can’t find it. The US Department of Education conducted a study in 1995 showing that average university grades had declined since 1965. Given that the Flynn effect raised IQ by almost 10 points during that time, maybe we should be panicking about grade deflation.

It really wouldn’t be hard to make that case: “Back in my day, you could get an A just by knowing basic algebra! Now they want these kids to take partial derivatives?” “We used to just memorize facts to ace the exam; but now teachers keep asking for reasoning and critical thinking?”

More recently, a study in 2013 found that grades rose at the high school level, but fell at the college level, and showed no evidence of losing any informativeness as a signaling mechanism. The only recent study I could find showing genuinely compelling evidence for grade inflation was a 2017 study of UK students estimating that grades are growing about twice as fast as the Flynn effect alone would predict. Most studies don’t even consider the possibility that students are smarter than they used to be—they just take it for granted that any increase in average grades constitutes grade inflation. Many of them don’t even control for the increase in selectivity—here’s one using the fact that Harvard’s average rose from 2.7 to 3.4 from 1960 to 2000 as evidence of “grade inflation” when Harvard’s acceptance rate fell from almost 30% to only 10% during that period.

Indeed, the real mystery is why so many professors believe in grade inflation, when the evidence for it is so astonishingly weak.

I think it’s the availability heuristic. Who are professors? They are the cream of the crop. They aced their way through high school, college, and graduate school, then got hired and earned tenure—they were one of a handful of individuals who won a fierce competition with hundreds of competitors at each stage. There are over 320 million people in the US, and only 1.3 million college faculty. This means that college professors represent about the top 0.4% of high-scoring students.

Combine that with the fact that human beings assort positively (we like to spend time with people who are similar to us) and use the availability heuristic (we judge how likely something is by how easily we can recall instances of it).

Thus, when a professor thinks back to her own experience of college, she is remembering her fellow top-scoring students at elite educational institutions. She is recalling the extreme intellectual demands she had to meet to get where she is today, and erroneously assuming that these are representative of most of the population of her generation. She probably went to school at one of a handful of elite institutions, even if she now teaches at a mid-level community college: three quarters of college faculty come from the top one quarter of graduate schools.

And now she compares that experience to the students she has to teach, most of whom would not be able to meet such demands—but of course most people in her generation couldn’t either. She frets for the future of humanity only because not everyone is a genius like her.

Throw in the Curse of Knowledge: The professor doesn’t remember how hard it was to learn what she has learned so far, and so the fact that it seems easy now makes her think it was easy all along. “How can they not know how to take partial derivatives!?” Well, let’s see… were you born knowing how to take partial derivatives?

Giving a student an A for work far inferior to what you’d have done in their place isn’t unfair. Indeed, it would clearly be unfair to do anything less. You have years if not decades of additional education ahead of them, and you are from a self-selected, elite sample of highly intelligent individuals. Expecting everyone to perform as well as you would is simply setting up most of the population for failure.

There are potential incentives for grade inflation that do concern me: In particular, a lot of international student visas and scholarship programs insist upon maintaining a B or even A- average to continue. Professors are understandably loath to condemn a student to having to drop out or return to their home country just because they scored 81% instead of 84% on the final exam. If we really intend to make C the average score, then students shouldn’t lose funding or visas just for scoring a B-. Indeed, I have trouble defending any threshold above outright failing—which is to say, a minimum score of D-. If you pass your classes, that should be good enough to keep your funding.

Yet apparently even this isn’t creating too much upward bias, as students who are 10 IQ points smarter are still getting about the same scores as their forebears. We should be celebrating that our population is getting smarter, but instead we’re panicking over “easy grading”.

But kids these days, am I right?

Nature via Nurture

JDN 2457222 EDT 16:33.

One of the most common “deep questions” human beings have asked ourselves over the centuries is also one of the most misguided, the question of “nature versus nurture”: Is it genetics or environment that makes us what we are?

Humans are probably the one entity in the universe for which this question makes the least sense. Artificial constructs have no prior existence, so they are “all nurture”, made what we choose to make them. Most other organisms on Earth behave according to fixed instinctual programming, acting out a specific series of responses that have been honed over millions of years, doing only one thing, but doing it exceedingly well. They are in this sense “all nature”. As the saying goes, the fox knows many things, but the hedgehog knows one very big thing. Most organisms on Earth are in this sense hedgehogs, but we Homo sapiens are the ultimate foxes. (Ironically, hedgehogs are not actually “hedgehogs” in this sense: Being mammals, they have an advanced brain capable of flexibly responding to environmental circumstances. Foxes are a good deal more intelligent still, however.)

But human beings are by far the most flexible, adaptable organism on Earth. We live on literally every continent; despite being savannah apes we even live deep underwater and in outer space. Unlike most other species, we do not fit into a well-defined ecological niche; instead, we carve our own. This certainly has downsides; human beings are ourselves a mass extinction event.

Does this mean, therefore, that we are tabula rasa, blank slates upon which anything can be written?

Hardly. We’re more like word processors. Staring (as I of course presently am) at the blinking cursor of a word processor on a computer screen, seeing that wide, open space where a virtual infinity of possible texts could be written, depending entirely upon a sequence of minuscule key vibrations, you could be forgiven for thinking that you are looking at a blank slate. But in fact you are looking at the pinnacle of thousands of years of technological advancement, a machine so advanced, so precisely engineered, that its individual components are one ten-thousandth the width of a human hair (Intel just announced that we can now do even better than that). At peak performance, it is capable of over 100 billion calculations per second. Its random-access memory stores as much information as all the books on a stacks floor of the Hatcher Graduate Library, and its hard drive stores as much as all the books in the US Library of Congress. (Of course, both libraries contain digital media as well, exceeding anything my humble hard drive could hold by a factor of a thousand.)

All of this, simply to process text? Of course not; word processing is an afterthought for a processor that is specifically designed for dealing with high-resolution 3D images. (Of course, nowadays even a low-end netbook that is designed only for word processing and web browsing can typically handle a billion calculations per second.) But there the analogy with humans is quite accurate as well: Written language is about 10,000 years old, while the human visual mind is at least 100,000. We were 3D image analyzers long before we were word processors. This may be why we say “a picture is worth a thousand words”; we process each with about as much effort, even though the image necessarily contains thousands of times as many bits.

Why is the computer capable of so many different things? Why is the human mind capable of so many more? Not because they are simple and impinged upon by their environments, but because they are complex and precision-engineered to nonlinearly amplify tiny inputs into vast outputs—but only certain tiny inputs.

That is, it is because of our nature that we are capable of being nurtured. It is precisely the millions of years of genetic programming that have optimized the human brain that allow us to learn and adapt so flexibly to new environments and form a vast multitude of languages and cultures. It is precisely the genetically-programmed humanity we all share that makes our environmentally-acquired diversity possible.

In fact, causality also runs the other direction. Indeed, when I said other organisms were “all nature” that wasn’t right either; for even tightly-programmed instincts are evolved through millions of years of environmental pressure. Human beings have even been involved in cultural interactions long enough that it has begun to affect our genetic evolution; the reason I can digest lactose is that my ancestors about 10,000 years ago raised goats. We have our nature because of our ancestors’ nurture.

And then of course there’s the fact that we need a certain minimum level of environmental enrichment even to develop normally; a genetically-normal human raised into a deficient environment will suffer a kind of mental atrophy, as when children raised feral lose their ability to speak.

Thus, the question “nature or nurture?” seems a bit beside the point: We are extremely flexible and responsive to our environment, because of innate genetic hardware and software, which requires a certain environment to express itself, and which arose because of thousands of years of culture and millions of years of the struggle for survival—we are nurture because nature because nurture.

But perhaps we didn’t actually mean to ask about human traits in general; perhaps we meant to ask about some specific trait, like spatial intelligence, or eye color, or gender identity. This at least can be structured as a coherent question: How heritable is the trait? What proportion of the variance in this population is caused by genetic variation? Heritability analysis is a well-established methodology in behavioral genetics.
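
In the usual variance-decomposition notation, that question is just a ratio of variances (a sketch, assuming the simplest additive model and ignoring the gene-environment covariance and interaction that the rest of this post is largely about):

```latex
% Broad-sense heritability, assuming the simple additive decomposition P = G + E:
H^2 = \frac{\operatorname{Var}(G)}{\operatorname{Var}(P)}
    = \frac{\operatorname{Var}(G)}{\operatorname{Var}(G) + \operatorname{Var}(E)}
```
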
Yet, that isn’t the same question at all. For while height is extremely heritable within a given population (usually about 80%), human height worldwide has been increasing dramatically over time due to environmental influences and can actually be used as a measure of a nation’s economic development. (Look at what happened to the height of men in Japan.) How heritable is height? You have to be very careful what you mean.

Meanwhile, the heritability of neurofibromatosis is actually quite low—as many people acquire the disease by new mutations as inherit it from their parents—but we know for a fact it is a genetic disorder, because we can point to the specific genes that mutate to cause the disease.

Heritability also depends on the population under consideration; speaking English is more heritable within the United States than it is across the world as a whole, because there are a larger proportion of non-native English speakers in other countries. In general, a more diverse environment will lead to lower heritability, because there are simply more environmental influences that could affect the trait.

As children get older, their behavior gets more heritable, a result which probably seems completely baffling, until you understand what heritability really means. Your genes become a more important factor in your behavior as you grow up, because you become separated from the environment of your birth and immersed into the general environment of your whole society. Lower environmental diversity means higher heritability, by definition. There’s also an effect of choosing your own environment; people who are intelligent and conscientious are likely to choose to go to college, where they will be further trained in knowledge and self-control. This latter effect is called niche-picking.
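
To see why this follows almost mechanically from the definition, here is a small illustrative simulation (invented numbers; the same genetic variance, two different amounts of environmental diversity):

```python
# Illustrative sketch: heritability rises when environmental variance falls,
# even though the genetic contribution is identical. The numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
genetic = rng.normal(0.0, 1.0, n)         # genetic contribution, variance 1

for env_sd in (2.0, 0.5):                 # diverse vs. homogeneous environment
    environment = rng.normal(0.0, env_sd, n)
    phenotype = genetic + environment
    h2 = genetic.var() / phenotype.var()  # Var(G) / Var(P)
    print(f"environmental SD = {env_sd}: heritability ~ {h2:.2f}")

# Prints roughly 0.20 for the diverse environment and 0.80 for the
# homogeneous one -- same genes, very different heritability.
```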

This is why saying something like “intelligence is 80% genetic” is basically meaningless, and “intelligence is 80% heritable” isn’t much better until you specify the reference population. The heritability of intelligence depends very much on what you mean by “intelligence” and what population you’re looking at for heritability. But even if you do find a high heritability (as we do for, say, Spearman’s g within the United States), this doesn’t mean that intelligence is fixed at birth; it simply means that parents with high intelligence are likely to have children with high intelligence. In evolutionary terms that’s all that matters—natural selection doesn’t care where you got your traits, only that you have them and pass them to your offspring—but many people do care, and IQ being heritable because rich, educated parents raise rich, educated children is very different from IQ being heritable because innately intelligent parents give birth to innately intelligent children. If genetic variation is systematically related to environmental variation, you can measure a high heritability even though the genes are not directly causing the outcome.
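
As a sketch of that last point (again with invented numbers): if the environment a child receives is itself allocated in a genetically-biased fashion, a naive variance ratio attributes the environment’s effect to the genes.

```python
# Illustrative sketch of gene-environment correlation: the gene has no direct
# effect on the outcome, but because it predicts the environment a child gets
# (say, educational opportunity), the naive "heritability" comes out high.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
gene = rng.normal(0.0, 1.0, n)

# The environment tracks the gene closely, plus some independent noise.
environment = 0.9 * gene + rng.normal(0.0, 0.3, n)

# The outcome depends only on the environment, not on the gene directly.
outcome = environment + rng.normal(0.0, 0.3, n)

# Share of outcome variance statistically "explained" by the gene (squared correlation).
r_squared = np.corrcoef(gene, outcome)[0, 1] ** 2
print(f"variance 'explained' by the gene: {r_squared:.2f}")
# Prints roughly 0.8, despite zero direct genetic effect on the outcome.
```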

We do use twin studies to try to sort this out, but because identical twins raised apart are exceedingly rare, two very serious problems emerge: One, there usually isn’t a large enough sample size to say anything useful; and two, more importantly, this is actually an inaccurate measure in terms of natural selection. The evolutionary pressure is based on the correlation with the genes—it actually doesn’t matter whether the genes are directly causal. All that matters is that organisms with allele X survive and organisms with allele Y do not. Usually that’s because allele X does something useful, but even if it’s simply because people with allele X happen to mostly come from a culture that makes better guns, that will work just as well.

We can see this quite directly: White skin spread across the world not because it was useful (it’s actually terrible in any latitude other than subarctic), but because the cultures that conquered the world happened to be comprised mostly of people with White skin. In the 15th century you’d find a very high heritability of “using gunpowder weapons”, and there was definitely a selection pressure in favor of that trait—but it obviously doesn’t take special genes to use a gun.

The kind of heritability you get from twin studies is answering a totally different, nonsensical question, something like: “If we reassigned all offspring to parents randomly, how much of the variation in this trait in the new population would be correlated with genetic variation?” And honestly, I think the only reason people think that this is the question to ask is precisely because even biologists don’t fully grasp the way that nature and nurture are fundamentally entwined. They are trying to answer the intuitive question, “How much of this trait is genetic?” rather than the biologically meaningful “How strongly could a selection pressure for this trait evolve this gene?”

And if right now you’re thinking, “I don’t care how strongly a selection pressure for the trait could evolve some particular gene”, that’s fine; there are plenty of meaningful scientific questions that I don’t find particularly interesting and are probably not particularly important. (I hesitate to provide a rigid ranking, but I think it’s safe to say that “How does consciousness arise?” is a more important question than “Why are male platypuses venomous?” and “How can poverty be eradicated?” is a more important question than “How did the aircraft manufacturing duopoly emerge?”) But that’s really the most meaningful question we can construct from the ill-formed question “How much of this trait is genetic?” The next step is to think about why you thought that you were asking something important.

What did you really mean to ask?

For a bald question like, “Is being gay genetic?” there is no meaningful answer. We could try to reformulate it as a meaningful biological question, like “What is the heritability of homosexual behavior among males in the United States?” or “Can we find genetic markers strongly linked to self-identification as ‘gay’?” but I don’t think those are the questions we really meant to ask. I think actually the question we meant to ask was more fundamental than that: Is it legitimate to discriminate against gay people? And here the answer is unequivocal: No, it isn’t. It is a grave mistake to think that this moral question has anything to do with genetics; discrimination is wrong even against traits that are totally environmental (like religion, for example), and there are morally legitimate actions to take based entirely on a person’s genes (the obvious examples all coming from medicine—you don’t treat someone for cystic fibrosis if they don’t actually have it).

Similarly, when we ask the question “Is intelligence genetic?” I don’t think most people are actually interested in the heritability of spatial working memory among young American males. I think the real question they want to ask is about equality of opportunity, and what it would look like if we had it. If success were entirely determined by intelligence and intelligence were entirely determined by genetics, then even a society with equality of opportunity would show significant inequality inherited across generations. Thus, inherited inequality is not necessarily evidence against equality of opportunity. But this is in fact a deeply disingenuous argument, used by people like Charles Murray to excuse systemic racism, sexism, and concentration of wealth.

We didn’t have to say that inherited inequality is necessarily or undeniably evidence against equality of opportunity—merely that it is, in fact, evidence of inequality of opportunity. Moreover, it is far from the only evidence against equality of opportunity; we also can observe the fact that college-educated Black people are no more likely to be employed than White people who didn’t even finish high school, for example, or the fact that otherwise identical resumes with predominantly Black names (like “Jamal”) are less likely to receive callbacks compared to predominantly White names (like “Greg”). We can observe that the same is true for resumes with obviously female names (like “Sarah”) versus obviously male names (like “David”), even when the hiring is done by social scientists. We can directly observe that one-third of the 400 richest Americans inherited their wealth (and if you look closer into the other two-thirds, all of them had some very unusual opportunities, usually due to their family connections—“self-made” is invariably a great exaggeration). The evidence for inequality of opportunity in our society is legion, regardless of how genetics and intelligence are related. In fact, I think that the high observed heritability of intelligence is largely due to the fact that educational opportunities are distributed in a genetically-biased fashion, but I could be wrong about that; maybe there really is a large genetic influence on human intelligence. Even so, that does not justify widespread and directly-measured discrimination. It does not justify a handful of billionaires luxuriating in almost unimaginable wealth as millions of people languish in poverty. Intelligence can be as heritable as you like and it is still wrong for Donald Trump to have billions of dollars while millions of children starve.

This is what I think we need to do when people try to bring up a “nature versus nurture” question. We can certainly talk about the real complexity of the relationship between genetics and environment, which I think are best summarized as “nature via nurture”; but in fact usually we should think about why we are asking that question, and try to find the real question we actually meant to ask.