The mental health crisis in academia

Apr 30 JDN 2460065

Why are so many academics anxious and depressed?

Depression and anxiety are much more prevalent among both students and faculty than they are in the general population. Unsurprisingly, women seem to have it a bit worse than men, and trans people have it worst of all.

Is this the result of systemic failings of the academic system? Before deciding that, one thing we should consider is that very smart people do seem to have a higher risk of depression.

There is a complex relationship between genes linked to depression and genes linked to intelligence, and some evidence that people of especially high IQ are more prone to depression; nearly 27% of Mensa members report mood disorders, compared to 10% of the general population.

(Incidentally, the stereotype of the weird, sickly nerd has a kernel of truth: the correlations between intelligence and autism, ADHD, allergies, and autoimmune disorders are absolutely real—and not at all well understood. It may be a general pattern of neural hyper-activation, not unlike what I posit in my stochastic overload model. The stereotypical nerd wears glasses, and, yes, indeed, myopia is also correlated with intelligence—and this seems to be mostly driven by genetics.)

Most of these figures are at least a few years old. If anything, things are only worse now, as COVID triggered a surge in depression for just about everyone, academics included. It remains to be seen how much of this large increase will abate as things gradually return to normal, and how much will continue to have long-term effects—this may depend in part on how well we manage to genuinely restore a normal way of life and how well we can deal with long COVID.

If we assume that academics are a similar population to Mensa members (admittedly a strong assumption), then this could potentially explain why 26% of academic faculty are depressed—but not why nearly 40% of junior faculty are. At the very least, we junior faculty are about 50% more likely to be depressed than would be explained by our intelligence alone. And grad students have it even worse: Nearly 40% of graduate students report anxiety or depression, and nearly 50% of PhD students meet the criteria for depression. This sounds like a dual effect of being both high in intelligence and low in status—it’s those of us who have very little power or job security in academia who are the most depressed.
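
To spell out the arithmetic behind that “about 50%” (a rough back-of-the-envelope ratio using the figures quoted above, not a formal estimate):

$$\frac{P(\text{depressed} \mid \text{junior faculty})}{P(\text{depressed} \mid \text{high-IQ baseline})} \approx \frac{0.40}{0.27} \approx 1.5$$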

This suggests that, yes, there really is something wrong with academia. It may not be entirely the fault of the system—perhaps even a well-designed academic system would result in more depression than the general population because we are genetically predisposed. But it really does seem like there is a substantial environmental contribution that academic institutions bear some responsibility for.

I think the most obvious explanation is constant evaluation: From the time we are students at least up until we (maybe, hopefully, someday) get tenure, academics are constantly being evaluated on our performance. We know that this sort of evaluation contributes to anxiety and depression.

Don’t other jobs evaluate performance? Sure. But not constantly the way that academia does. This is especially obvious as a student, where everything you do is graded; but it largely continues once you are faculty as well.

For most jobs, you are concerned about doing well enough to keep your job or maybe get a raise. But academia has this continuous forward pressure: if you are a grad student or junior faculty, you can’t possibly keep your job; you must either move upward to the next stage or drop out. And academia has become so hyper-competitive that if you want to continue moving upward—and someday get that tenure—you must publish in top-ranked journals, which have utterly opaque criteria and ever-declining acceptance rates. And since there are so few jobs available compared to the number of applicants, good enough is never good enough; you must be exceptional, or you will fail. Two-thirds of PhD graduates seek a career in academia—but only 30% are actually in one three years later. (And honestly, three years is pretty short; there are plenty of cracks left to fall through between that and a genuinely stable tenured faculty position.)

Moreover, our skills are so hyper-specialized that it’s very hard to imagine finding work anywhere else. This grants academic institutions tremendous monopsony power over us, letting them get away with lower pay and worse working conditions. Even with an economics PhD—relatively transferable, all things considered—I find myself wondering who would actually want to hire me outside this ivory tower, and my feeble attempts at actually seeking out such employment have thus far met with no success.

I also find academia painfully isolating. I’m not an especially extraverted person; I tend to score somewhere near the middle range of extraversion (sometimes called an “ambivert”). But I still find myself craving more meaningful contact with my colleagues. We all seem to work in complete isolation from one another, even when sharing the same office (which is awkward for other reasons). There are very few consistent gatherings or good common spaces. And whenever faculty do try to arrange some sort of purely social event, it always seems to involve drinking at a pub and nobody is interested in providing any serious emotional or professional support.

Some of this may be particular to this university, or to the UK; or perhaps it has more to do with being at a certain stage of my career. In any case I didn’t feel nearly so isolated in graduate school; I had other students in my cohort and adjacent cohorts who were going through the same things. But I’ve been here two years now and so far have been unable to establish any similarly supportive relationships with colleagues.

There may be some opportunities I’m not taking advantage of: I’ve skipped a lot of research seminars, and I stopped going to those pub gatherings. But it wasn’t that I didn’t try them at all; it was that I tried them a few times and quickly found that they were not filling that need. At seminars, people only talked about the particular research project being presented. At the pub, people talked about almost nothing of serious significance—and certainly nothing requiring emotional vulnerability. The closest I think I got to this kind of support from colleagues was a series of lunch meetings designed to improve instruction in “tutorials” (what here in the UK we call discussion sections); there, at least, we could commiserate about feeling overworked and dealing with administrative bureaucracy.

There seem to be deep, structural problems with how academia is run. This whole process of universities outsourcing their hiring decisions to the capricious whims of high-ranked journals basically decides the entire course of our careers. And once you reach the point I have, so disheartened with the process of publishing research that I can’t even engage with it, it’s not at all clear how to recover. I see no way forward, no one to turn to. No one seems to care how well I teach if I’m not publishing research.

And I’m clearly not the only one who feels this way.

Pinker Propositions

May 19 JDN 2458623

What do the following statements have in common?

1. “Capitalist countries have less poverty than Communist countries.”

2. “Black men in the US commit homicide at a higher rate than White men.”

3. “On average, in the US, Asian people score highest on IQ tests, White and Hispanic people score near the middle, and Black people score the lowest.”

4. “Men on average perform better at visual tasks, and women on average perform better on verbal tasks.”

5. “In the United States, White men are no more likely to be mass shooters than other men.”

6. “The genetic heritability of intelligence is about 60%.”

7. “The plurality of recent terrorist attacks in the US have been committed by Muslims.”

8. “The period of US military hegemony since 1945 has been the most peaceful period in human history.”

These statements have two things in common:

1. All of these statements are objectively true facts that can be verified by rich and reliable empirical data which is publicly available and uncontroversially accepted by social scientists.

2. If spoken publicly among left-wing social justice activists, all of these statements will draw resistance, defensiveness, and often outright hostility. Anyone making these statements is likely to be accused of racism, sexism, imperialism, and so on.

I call such propositions Pinker Propositions, after an excellent talk by Steven Pinker illustrating several of the above statements (which was then taken wildly out of context by social justice activists on social media).

The usual reaction to these statements suggests that people think they imply harmful far-right policy conclusions. This inference is utterly wrong: A nuanced understanding of each of these propositions does not in any way lead to far-right policy conclusions—in fact, some rather strongly support left-wing policy conclusions.

1. Capitalist countries have less poverty than Communist countries, because Communist countries are nearly always corrupt and authoritarian. Social democratic countries have the lowest poverty and the highest overall happiness (#ScandinaviaIsBetter).

2. Black men commit more homicide than White men because of poverty, discrimination, mass incarceration, and gang violence. Black men are also greatly overrepresented among victims of homicide, as most homicide is intra-racial. Homicide rates often vary across ethnic and socioeconomic groups, and these rates vary over time as a result of cultural and political changes.

3. IQ tests are a highly imperfect measure of intelligence, and the genetics of intelligence cut across our socially-constructed concept of race. There is far more within-group variation in IQ than between-group variation. Intelligence is not fixed at birth but is affected by nutrition, upbringing, exposure to toxins, and education—all of which statistically put Black people at a disadvantage. Nor does intelligence remain constant within populations: The Flynn Effect is the well-documented increase in intelligence which has occurred in almost every country over the past century. Far from justifying discrimination, these provide very strong reasons to improve opportunities for Black children. The lead and mercury in Flint’s water suppressed the brain development of thousands of Black children—that’s going to lower average IQ scores. But that says nothing about supposed “inherent racial differences” and everything about the catastrophic damage of environmental racism.

4. To be quite honest, I never even understood why this one shocks—or even surprises—people. It’s not even saying that men are “smarter” than women—overall IQ is almost identical. It’s just saying that men are more visual and women are more verbal. And this, I think, is actually quite obvious. I think the clearest evidence of this—the “interocular trauma” that will convince you the effect is real and worth talking about—is pornography. Visual porn is overwhelmingly consumed by men, even when it was designed for women (e.g. Playgirl: a majority of its readers are gay men, even though there are ten times as many straight women in the world as there are gay men). Conversely, erotic novels are overwhelmingly consumed by women. I think a lot of anti-porn feminism can actually be explained by this effect: Feminists (who are usually women, for obvious reasons) can say they are against “porn” when what they are really against is visual porn, because visual porn is consumed by men; then the kind of porn that they like (erotic literature) doesn’t count as “real porn”. And honestly they’re mostly against the current structure of the live-action visual porn industry, which is totally reasonable—but it’s a far cry from being against porn in general. I have some serious issues with how our farming system is currently set up, but I’m not against farming.

5. This one is interesting, because it’s a lack of a race difference, which is normally what the left wing wants to hear. The difference, of course, is that this alleged disparity would make White men look bad, and that’s apparently seen as a desirable goal for social justice. But the data just doesn’t bear it out: While indeed most mass shooters are White men, that’s because most Americans are White, which is a totally uninteresting reason. There’s no clear evidence of any racial disparity in mass shootings—though the gender disparity is absolutely overwhelming: It’s almost always men.

6. Heritability is a subtle concept; it doesn’t mean what most people seem to think it means. It doesn’t mean that 60% of your intelligence is due to your genes. Indeed, I’m not even sure what that sentence would actually mean; it’s like saying that 60% of the flavor of a cake is due to the eggs. What this heritability figure actually means is that when you compare across individuals in a population, and carefully control for environmental influences, you find that about 60% of the variance in IQ scores is explained by genetic factors. But this is within a particular population—here, US adults—and is absolutely dependent on all sorts of other variables. The more flexible one’s environment becomes, the more people self-select into their preferred environment, and the more heritable traits become. As a result, IQ actually becomes more heritable as children become adults, a phenomenon known as the Wilson Effect.
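
In symbols, heritability is a ratio of variances (this is the simplified textbook decomposition, which ignores gene-environment interaction and covariance, so treat it as a sketch rather than the full behavioral-genetics model):

$$h^2 = \frac{\operatorname{Var}(G)}{\operatorname{Var}(P)} = \frac{\operatorname{Var}(G)}{\operatorname{Var}(G) + \operatorname{Var}(E)} \approx 0.6$$

It describes how differences across a population line up with genetic differences; it is not a statement about how fixed any one person’s intelligence is.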

7. This one might actually have some contradiction with left-wing policy. The disproportionate participation of Muslims in terrorism—controlling for just about anything you like (income, education, age, etc.)—really does suggest that, at least at this point in history, there is some real ideological link between Islam and terrorism. But the fact remains that the vast majority of Muslims are not terrorists and do not support terrorism, and antagonizing all the people of an entire religion is fundamentally unjust as well as likely to backfire in various ways. We should instead be trying to encourage the spread of more tolerant forms of Islam, and maintaining the strict boundaries of secularism to prevent the encroachment of any religion on our system of government.

8. The fact that US military hegemony does seem to be a cause of global peace doesn’t imply that every single military intervention by the US is justified. In fact, it doesn’t even necessarily imply that any such interventions are justified—though I think one would be hard-pressed to say that the NATO intervention in the Kosovo War or the defense of Kuwait in the Gulf War was unjustified. It merely points out that having a hegemon is clearly preferable to having a multipolar world where many countries jockey for military supremacy. The Pax Romana was a time of peace but also authoritarianism; the Pax Americana is better, but that doesn’t prevent us from criticizing the real harms—including major war crimes—committed by the United States.

So it is entirely possible to know and understand these facts without adopting far-right political views.

Yet Pinker’s point—and mine—is that by suppressing these true facts, by responding with hostility or even ostracism to anyone who states them, we are actually adding fuel to the far-right fire. Instead of presenting the nuanced truth and explaining why it doesn’t imply such radical policies, we attack the messenger; and this leads people to conclude three things:

1. The left wing is willing to lie and suppress the truth in order to achieve political goals (they’re doing it right now).

2. These statements actually do imply right-wing conclusions (else why suppress them?).

3. Since these statements are true, that must mean the right-wing conclusions are actually correct.

Now (especially if you are someone who identifies unironically as “woke”), you might be thinking something like this: “Anyone who can be turned away from social justice so easily was never a real ally in the first place!”

This is a fundamentally and dangerously wrongheaded view. No one—not me, not you, not anyone—was born believing in social justice. You did not emerge from your mother’s womb ranting against colonialist imperialism. You had to learn what you now know. You came to believe what you now believe, after once believing something else that you now think is wrong. This is true of absolutely everyone everywhere. Indeed, the better you are, the more true it is; good people learn from their mistakes and grow in their knowledge.

This means that anyone who is now an ally of social justice once was not. And that, in turn, suggests that many people who are currently not allies could become so, under the right circumstances. They would probably not shift all at once—as I didn’t, and I doubt you did either—but if we are welcoming and open and honest with them, we can gradually tilt them toward greater and greater levels of support.

But if we reject them immediately for being impure, they never get the chance to learn, and we never get the chance to sway them. People who are currently uncertain of their political beliefs will become our enemies because we made them our enemies. We declared that if they would not immediately commit to everything we believe, then they may as well oppose us. They, quite reasonably unwilling to commit to a detailed political agenda they didn’t understand, decided that it would be easiest to simply oppose us.

And we don’t have to win over every person on every single issue. We merely need to win over a large enough critical mass on each issue to shift policies and cultural norms. Building a wider tent is not compromising on your principles; on the contrary, it’s how you actually win and make those principles a reality.

There will always be those we cannot convince, of course. And I admit, there is something deeply irrational about going from “those leftists attacked Charles Murray” to “I think I’ll start waving a swastika”. But humans aren’t always rational; we know this. You can lament this, complain about it, yell at people for being so irrational all you like—it won’t actually make people any more rational. Humans are tribal; we think in terms of teams. We need to make our team as large and welcoming as possible, and suppressing Pinker Propositions is not the way to do that.

How (not) to destroy an immoral market

Jul 29 JDN 2458329

In this world there are people of primitive cultures, with a population that is slowly declining, trying to survive a constant threat of violence in the aftermath of colonialism. But you already knew that, of course.

What you may not have realized is that some of these people are actively hunted by other people, slaughtered so that their remains can be sold on the black market.

I am referring of course to elephants. Maybe those weren’t the people you first had in mind?

Elephants are not human in the sense of being Homo sapiens; but as far as I am concerned, they are people in a moral sense.

Elephants take as long to mature as humans, and spend most of their childhood learning. They are born with brains only 35% of the size of their adult brains, much as we are born with brains 28% the size of our adult brains. Their encephalization quotients range from about 1.5 to 2.4, comparable to chimpanzees.

Elephants have problem-solving intelligence comparable to chimpanzees, cetaceans, and corvids. Elephants can pass the “mirror test” of self-identification and self-awareness. Individual elephants exhibit clearly distinguishable personalities. They exhibit empathy toward humans and other elephants. They can think creatively and develop new tools.

Elephants distinguish individual humans or elephants by sight or by voice, comfort each other when distressed, and above all mourn their dead. The kind of mourning behaviors elephants exhibit toward the remains of their dead family members have only been observed in humans and chimpanzees.

On a darker note, elephants also seek revenge. In response to losing loved ones to poaching or collisions with trains, elephants have orchestrated organized counter-attacks against human towns. This is not a single animal defending itself, as almost any will do; this is a coordinated act of vengeance after the fact. Once again, we have only observed similar behaviors in humans, great apes, and cetaceans.

Huffington Post backed off and said “just kidding” after asserting that elephants are people—but I won’t. Elephants are people. They do not have an advanced civilization, to be sure. But as far as I am concerned they display all the necessary minimal conditions to be granted the fundamental rights of personhood. Killing an elephant is murder.

And yet, the ivory trade continues to be profitable. Most of this is black-market activity, though it was legal in some places until very recently; China only restored their ivory trade ban this year, and Hong Kong’s ban will not take full effect until 2021. Some places are backsliding: A proposal (currently on hold) by the US Fish and Wildlife Service under the Trump administration would also legalize some limited forms of ivory trade.

With this in mind, I can understand why people would support the practice of ivory-burning, symbolically and publicly destroying ivory by fire so that no one can buy it. Two years ago, Kenya organized a particularly large ivory-burning that set ablaze 105 tons of elephant tusk and 1.35 tons of rhino horn.

But as an economist, when I first learned about ivory-burning, it struck me as a really, really bad idea.

Why? Supply and demand. By destroying supply, you have just raised the market price of ivory. You have therefore increased the market incentives for poaching elephants and rhinos.
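
For readers who want the mechanics of that argument spelled out, here is a minimal sketch using made-up linear supply and demand curves; the function name and every number are purely illustrative, not estimates of the actual ivory market:

```python
# Naive partial-equilibrium sketch: destroying a stockpile shifts supply inward,
# which raises the market-clearing price. (Illustrative numbers only.)

def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    """Solve Qd = a - b*P and Qs = c + d*P for the price where Qd = Qs."""
    price = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    quantity = demand_intercept - demand_slope * price
    return price, quantity

# Before the burn: demand Qd = 100 - 2P, supply Qs = 10 + P
print(equilibrium(100, 2, 10, 1))       # (30.0, 40.0)

# After the burn: quantity supplied falls by 15 units at every price
print(equilibrium(100, 2, 10 - 15, 1))  # (35.0, 30.0): the price rises, rewarding poachers
```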

Yet it turns out I was wrong about this, as were many other economists. I looked at the empirical research, and changed my mind substantially. Ivory-burning is not such a bad idea after all.

Here was my reasoning before: If I want to reduce the incentives to produce something, what do I need to do? Lower the price. How do I do that? I need to increase the supply. Economists have made several proposals for how to do that, and until I looked at the data I would have expected them to work; but they haven’t.

The best way to increase supply is to create synthetic ivory that is cheap and very difficult to tell apart from the real thing. This has been done, but it didn’t work. For some reason, sellers try to hide the expensive real ivory in with the cheap synthetic ivory. I admit I actually have trouble understanding this; if you can’t sell it at full price, why even bother with the illegal real ivory? Maybe their customers have methods of distinguishing the two that the regulators don’t? If so, why aren’t the regulators using those methods? Another concern with increasing the supply of ivory is that it might reduce the stigma of consuming ivory, thereby also increasing the demand.

A similar problem has arisen with so-called “ghost ivory”; for obvious reasons, existing ivory products were excluded from the ban imposed in 1947, lest the government be forced to confiscate millions of billiard balls and thousands of pianos. Yet poachers have learned ways to hide new, illegal ivory and sell it as old, legal ivory.

Another proposal was to organize “sustainable ivory harvesting”, which based on past experience with similar regulations is unlikely to be enforceable. Moreover, this is not like sustainable wood harvesting, where our only concern is environmental. I for one care about the welfare of individual elephants, and I don’t think they would want to be “harvested”, sustainably or otherwise.

There is one way of doing “sustainable harvesting” that might not be so bad for the elephants, which would be to set up a protected colony of elephants, help them to increase their population, and then when elephants die of natural causes, take only the tusks and sell those as ivory, stamped with an official seal as “humanely and sustainably produced”. Even then, elephants are among a handful of species that would be offended by us taking their ancestors’ remains. But if it worked, it could save many elephant lives. The bigger problem is how expensive such a project would be, and how long it would take to show any benefit; elephant lifespans are about half as long as ours (except in zoos, where their mortality rate is much higher!), so a policy that might conceivably solve a problem in 30 to 40 years doesn’t really sound so great. More detailed theoretical and empirical analysis has made this clear: you just can’t get ivory fast enough to meet existing demand this way.

In any case, China’s ban on all ivory trade had an immediate effect in dropping the price of ivory, which synthetic ivory did not. Before that, strengthened regulations in the US (particularly in New York and California) had been effective at reducing ivory sales. The CITES treaty in 1989 that banned most international ivory trade was followed by an immediate increase in elephant populations.

The most effective response to ivory trade is an absolutely categorical ban with no loopholes. To fight “ghost ivory”, we should remove exceptions for old ivory, offering buybacks for any antiques with a verifiable pedigree and a brief period of no-penalty surrender for anything with no such records. The only legal ivory must be for medical and scientific purposes, and its sourcing records must be absolutely impeccable—just as we do with human remains.

Even synthetic ivory must also be banned, at least if it’s convincing enough that real ivory could be hidden in it. You can make something you call “synthetic ivory” that serves a similar consumer function, but it must be different enough that it can be easily verified at customs inspections.

We must give no quarter to poachers; Kenya was right to impose a life sentence for aggravated poaching. The Tanzanian proposal to “shoot to kill” was too extreme; summary execution is never acceptable. But if indeed someone currently has a weapon pointed at an elephant and refuses to drop it, I consider it justifiable to shoot them, just as I would if that weapon were aimed at a human.

The need for a categorical ban is what makes the current US proposal dangerous. The particular exceptions it carves out are not all that large, but the fact that it carves out exceptions at all makes enforcement much more difficult. To his credit, Trump himself doesn’t seem very keen on the proposal, which may mean that it is dead in the water. I don’t get to say this often, but so far Trump seems to be making the right choice on this one.

Though the economic theory predicted otherwise, the empirical data is actually quite clear: The most effective way to save elephants from poaching is an absolutely categorical ban on ivory.

Ivory-burning is a signal of commitment to such a ban. Any ivory we find being sold, we will burn. Whoever was trying to sell it will lose their entire investment. Find more, and we will burn that too.

The evolution of human cooperation

Jun 17 JDN 2458287

If alien lifeforms were observing humans (assuming they didn’t turn out the same way—which they actually might, for reasons I’ll get to shortly), the thing that would probably baffle them the most about us is how we organize ourselves into groups. Each individual may be part of several groups at once, and some groups are closer-knit than others; but the most tightly-knit groups exhibit extremely high levels of cooperation, coordination, and self-sacrifice.

They might think at first that we are eusocial, like ants or bees; but upon closer study they would see that our groups are not very strongly correlated with genetic relatedness. We are somewhat more closely related to those in our groups than to those outside them, usually; but it’s a remarkably weak effect, especially compared to the extremely high relatedness of worker bees in a hive. No, to a first approximation, these groups are of unrelated humans; yet their level of cooperation is equal to if not greater than that exhibited by the worker bees.

However, the alien anthropologists would find that it is not that humans are simply predisposed toward extremely high altruism and cooperation in general; when two human groups come into conflict, they are capable of the most extreme forms of violence imaginable. Human history is full of atrocities that combine the indifferent brutality of nature red in tooth and claw with the boundless ingenuity of a technologically advanced species. Yet except for a small proportion perpetrated by individual humans with some sort of mental pathology, these atrocities are invariably committed by one unified group against another. Even in genocide there is cooperation.

Humans are not entirely selfish. But nor are they paragons of universal altruism (though some of them aspire to be). Humans engage in a highly selective form of altruism—virtually boundless for the in-group, almost negligible for the out-group. Humans are tribal.

Being a human yourself, this probably doesn’t strike you as particularly strange. Indeed, I’ve mentioned it many times previously on this blog. But it is actually quite strange, from an evolutionary perspective; most organisms are not like this.

As I said earlier, there is actually reason to think that our alien anthropologist would come from a species with similar traits, simply because such cooperation may be necessary to achieve a full-scale technological civilization, let alone the capacity for interstellar travel. But there might be other possibilities; perhaps they come from a eusocial species, and their large-scale cooperation is within an extremely large hive.

It’s true that most organisms are not entirely selfish. There are various forms of cooperation within and even across species. But these usually involve only close kin, and otherwise involve highly stable arrangements of mutual benefit. There is nothing like the large-scale cooperation between anonymous unrelated individuals that is exhibited by all human societies.

How would such an unusual trait evolve? It must require a very particular set of circumstances, since it only seems to have evolved in a single species (or at most a handful of species, since other primates and cetaceans display some of the same characteristics).

Once evolved, this trait is clearly advantageous; indeed it turned a local apex predator into a species so successful that it can actually intentionally control the evolution of other species. Humans have become a hegemon over the entire global ecology, for better or for worse. Cooperation gave us a level of efficiency in producing the necessities of survival so great that at this point most of us spend our time working on completely different tasks. If you are not a farmer or a hunter or a carpenter (and frankly, even if you are a farmer with a tractor, a hunter with a rifle, or a carpenter with a table saw), you are doing work that would simply not have been possible without very large-scale human cooperation.

This extremely high fitness benefit only makes the matter more puzzling, however: If the benefits are so great, why don’t more species do this? There must be some other requirements that other species were unable to meet.

One clear requirement is high intelligence. As frustrating as it may be to be a human and watch other humans kill each other over foolish grievances, this is actually evidence of how smart humans are, biologically speaking. We might wish we were even smarter still—but most species don’t have the intelligence to make it even as far as we have.

But high intelligence is likely not sufficient. We can’t be sure of that, since we haven’t encountered any other species with equal intelligence; but what we do know is that even Homo sapiens didn’t coordinate on anything like our current scale for tens of thousands of years. We may have had tribal instincts, but if so they were largely confined to a very small scale. Something happened, about 50,000 years ago or so—not very long ago in evolutionary time—that allowed us to increase that scale dramatically.

Was this a genetic change? It’s difficult to say. There could have been some subtle genetic mutation, something that wouldn’t show up in the fossil record. But more recent expansions in human cooperation to the level of the nation-state and beyond clearly can’t be genetic; they were much too fast for that. They must be a form of cultural evolution: The replicators being spread are ideas and norms—memes—rather than genes.

So perhaps the very early shift toward tribal cooperation was also a cultural one. Perhaps it began not as a genetic mutation but as an idea—perhaps a metaphor of “universal brotherhood” as we often still hear today. The tribes that believed this idea prospered; the tribes that didn’t were outcompeted or even directly destroyed.

This would explain why it had to be an intelligent species. We needed brains big enough to comprehend metaphors and generalize concepts. We needed enough social cognition to keep track of who was in the in-group and who was in the out-group.

If it was indeed a cultural shift, this should encourage us. (And since the most recent changes definitely were cultural, that is already quite encouraging.) We are not limited by our DNA to only care about a small group of close kin; we are capable of expanding our scale of unity and cooperation far beyond.

The real question is whether we can expand it to everyone. Unfortunately, there is some reason to think that this may not be possible. If our concept of tribal identity inherently requires both an in-group and an out-group, then we may never be able to include everyone. If we are only unified against an enemy, never simply for our own prosperity, world peace may forever remain a dream.

But I do have a work-around that I think is worth considering. Can we expand our concept of the out-group to include abstract concepts? With phrases like “The War on Poverty” and “The War on Terror”, it would seem in fact that we can. It feels awkward; it is somewhat imprecise—but then, so was the original metaphor of “universal brotherhood”. Our brains are flexible enough that they don’t actually seem to need the enemy to be a person; it can also be an idea. If this is right, then we can actually include everyone in our in-group, as long as we define the right abstract out-group. We can choose enemies like poverty, violence, cruelty, and despair instead of other nations or ethnic groups. If we must continue to fight a battle, let it be a battle against the pitiless indifference of the universe, rather than our fellow human beings.

Of course, the real challenge will be getting people to change their existing tribal identities. In the moment, these identities seem fundamentally intractable. But that can’t really be the case—for these identities have changed over historical time. Once-important categories have disappeared; new ones have arisen in their place. Someone in 4th-century Constantinople would find the conflict between Democrats and Republicans as baffling as we would find the conflict between Trinitarians and Arians. The ongoing oppression of Native American people by White people would be unfathomable to someone of the 11th-century Onondaga, who could scarcely imagine an enemy more foreign than the Seneca to the west of them. Even the conflict between Russia and NATO would probably seem strange to someone living in France in 1943, for whom Germany was the enemy and Russia was at least the enemy of the enemy—and many of those people are still alive.

I don’t know exactly how these tribal identities change (I’m working on it). It clearly isn’t as simple as convincing people with rational arguments. In fact, part of how it seems to work is that someone will shift their identity slowly enough that they can’t perceive the shift themselves. People rarely seem to appreciate, much less admit, how much their own minds have changed over time. So don’t ever expect to change someone’s identity in one sitting. Don’t even expect to do it in one year. But never forget that identities do change, even within an individual’s lifetime.

Is grade inflation a real problem?

Mar 4 JDN 2458182

You can’t spend much time teaching at the university level and not hear someone complain about “grade inflation”. Almost every professor seems to believe in it, and yet they must all be participating in it, if it’s really such a widespread problem.

This could be explained as a collective action problem, a Tragedy of the Commons: If the incentives are always to have the students with the highest grades—perhaps because of administrative pressure, or in order to get better reviews from students—then even if all professors would prefer a harsher grading scheme, no individual professor can afford to deviate from the prevailing norms.

But in fact I think there is a much simpler explanation: Grade inflation doesn’t exist.

In economic growth theory, economists make a sharp distinction between inflation—increase in prices without change in underlying fundamentals—and growth—increase in the real value of output. I contend that there is no such thing as grade inflation—what we are in fact observing is grade growth.

Am I saying that students are actually smarter now than they were 30 years ago?

Yes. That’s exactly what I’m saying.

But don’t take it from me. Take it from the decades of research on the Flynn Effect: IQ scores have been rising worldwide at a rate of about 0.3 IQ points per year for as long as we’ve been keeping good records. Students today are about 10 IQ points smarter than students 30 years ago—a 2018 IQ score of 95 is equivalent to a 1988 score of 105, which is equivalent to a 1958 score of 115. There is reason to think this trend won’t continue indefinitely, since the effect is mainly concentrated at the bottom end of the distribution; but it has continued for quite some time already.

This by itself would probably be enough to explain the observed increase in grades, but there’s more: College students are also a self-selected sample, admitted precisely because they were believed to be the smartest individuals in the application pool. Rising grades at top institutions are easily explained by rising selectivity at top schools: Harvard now accepts 5.6% of applicants. In 1942, Harvard accepted 92% of applicants. The odds of getting in have fallen from roughly 11:1 in favor to about 17:1 against. Today, you need a 4.0 GPA, a 36 ACT in every category, glowing letters of recommendation, and hundreds of hours of extracurricular activities (or a family member who donated millions of dollars, of course) to get into Harvard. In the 1940s, you needed a high school diploma and a B average.
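
As a quick sanity check on the arithmetic in the last two paragraphs, here is a minimal sketch; the 0.3-point-per-year rate and the acceptance percentages are the figures quoted above, and the function names are simply mine:

```python
# Back-of-the-envelope checks for Flynn-effect score conversion and admission odds.

FLYNN_RATE = 0.3  # IQ points per year, as quoted above

def equivalent_score(score, scored_year, norm_year):
    """Convert a score earned in scored_year onto norm_year's norms.
    Norms are re-centered to 100 over time, so the same raw performance
    earns a lower score on more recent norms."""
    return score + FLYNN_RATE * (scored_year - norm_year)

def odds_in_favor(p):
    """Convert an acceptance probability into odds in favor (p : 1 - p)."""
    return p / (1 - p)

print(equivalent_score(95, 2018, 1988))  # ~104: a 2018 score of 95 on 1988 norms
print(equivalent_score(95, 2018, 1958))  # ~113: the same performance on 1958 norms
print(odds_in_favor(0.92))               # ~11.5: 1942 odds in favor of admission
print(1 / odds_in_favor(0.056))          # ~16.9: today's odds against admission
```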

In fact, when educational researchers have tried to quantitatively study the phenomenon of “grade inflation”, they usually come back with the result that they simply can’t find it. The US Department of Education conducted a study in 1995 showing that average university grades had declined since 1965. Given that the Flynn effect raised IQ by almost 10 points during that time, maybe we should be panicking about grade deflation.

It really wouldn’t be hard to make that case: “Back in my day, you could get an A just by knowing basic algebra! Now they want these kids to take partial derivatives?” “We used to just memorize facts to ace the exam; but now teachers keep asking for reasoning and critical thinking?”

More recently, a study in 2013 found that grades rose at the high school level, but fell at the college level, and showed no evidence of losing any informativeness as a signaling mechanism. The only recent study I could find showing genuinely compelling evidence for grade inflation was a 2017 study of UK students estimating that grades are growing about twice as fast as the Flynn effect alone would predict. Most studies don’t even consider the possibility that students are smarter than they used to be—they just take it for granted that any increase in average grades constitutes grade inflation. Many of them don’t even control for the increase in selectivity—here’s one using the fact that Harvard’s average rose from 2.7 to 3.4 from 1960 to 2000 as evidence of “grade inflation” when Harvard’s acceptance rate fell from almost 30% to only 10% during that period.

Indeed, the real mystery is why so many professors believe in grade inflation, when the evidence for it is so astonishingly weak.

I think it’s the availability heuristic. Who are professors? They are the cream of the crop. They aced their way through high school, college, and graduate school, then got hired and earned tenure—they were one of a handful of individuals who won a fierce competition with hundreds of competitors at each stage. There are over 320 million people in the US, and only 1.3 million college faculty. This means that college professors represent about the top 0.4% of high-scoring students.

Combine that with the fact that human beings assort positively (we like to spend time with people who are similar to us) and use the availability heuristic (we judge how likely something is based on how many times we have seen it).

Thus, when a professor thinks back to her own experience of college, she is remembering her fellow top-scoring students at elite educational institutions. She is recalling the extreme intellectual demands she had to meet to get where she is today, and erroneously assuming that these are representative of most of the population of her generation. She probably went to school at one of a handful of elite institutions, even if she now teaches at a mid-level community college: three quarters of college faculty come from the top one quarter of graduate schools.

And now she compares that to the students she has to teach, most of whom would not be able to meet such demands—but of course most people in her generation couldn’t either. She frets for the future of humanity only because not everyone is a genius like her.

Throw in the Curse of Knowledge: The professor doesn’t remember how hard it was to learn what she has learned so far, and so the fact that it seems easy now makes her think it was easy all along. “How can they not know how to take partial derivatives!?” Well, let’s see… were you born knowing how to take partial derivatives?

Giving a student an A for work far inferior to what you’d have done in their place isn’t unfair. Indeed, it would clearly be unfair to do anything less. You have years if not decades of additional education ahead of them, and you are from a self-selected elite sample of highly intelligent individuals. Expecting everyone to perform as well as you would is simply setting up most of the population for failure.

There are potential incentives for grade inflation that do concern me: In particular, a lot of international student visas and scholarship programs insist upon maintaining a B or even A- average to continue. Professors are understandably loath to condemn a student to having to drop out or return to their home country just because they scored 81% instead of 84% on the final exam. If we really intend to make C the average score, then students shouldn’t lose funding or visas just for scoring a B-. Indeed, I have trouble defending any threshold above outright failing—which is to say, a minimum score of D-. If you pass your classes, that should be good enough to keep your funding.

Yet apparently even this isn’t creating too much upward bias, as students who are 10 IQ points smarter are still getting about the same scores as their forebears. We should be celebrating that our population is getting smarter, but instead we’re panicking over “easy grading”.

But kids these days, am I right?

Nature via Nurture

JDN 2457222 EDT 16:33.

One of the most common “deep questions” human beings have asked ourselves over the centuries is also one of the most misguided, the question of “nature versus nurture”: Is it genetics or environment that makes us what we are?

Humans are probably the single entity in the universe for which this question makes the least sense. Artificial constructs have no prior existence, so they are “all nurture”, made what we choose to make them. Most other organisms on Earth behave according to fixed instinctual programming, acting out a specific series of responses that have been honed over millions of years, doing only one thing, but doing it exceedingly well. They are in this sense “all nature”. As the saying goes, the fox knows many things, but the hedgehog knows one very big thing. Most organisms on Earth are in this sense hedgehogs, but we Homo sapiens are the ultimate foxes. (Ironically, hedgehogs are not actually “hedgehogs” in this sense: Being mammals, they have an advanced brain capable of flexibly responding to environmental circumstances. Foxes are a good deal more intelligent still, however.)

But human beings are by far the most flexible, adaptable organism on Earth. We live on literally every continent; despite being savannah apes we even live deep underwater and in outer space. Unlike most other species, we do not fit into a well-defined ecological niche; instead, we carve our own. This certainly has downsides; human beings are ourselves a mass extinction event.

Does this mean, therefore, that we are tabula rasa, blank slates upon which anything can be written?

Hardly. We’re more like word processors. Staring (as I of course presently am) at the blinking cursor of a word processor on a computer screen, seeing that wide, open space where a virtual infinity of possible texts could be written, depending entirely upon a sequence of minuscule key vibrations, you could be forgiven for thinking that you are looking at a blank slate. But in fact you are looking at the pinnacle of thousands of years of technological advancement, a machine so advanced, so precisely engineered, that its individual components are one ten-thousandth the width of a human hair (Intel just announced that we can now do even better than that). At peak performance, it is capable of over 100 billion calculations per second. Its random-access memory stores as much information as all the books on a stacks floor of the Hatcher Graduate Library, and its hard drive stores as much as all the books in the US Library of Congress. (Of course, both libraries contain digital media as well, exceeding anything my humble hard drive could hold by a factor of a thousand.)

All of this, simply to process text? Of course not; word processing is an afterthought for a processor that is specifically designed for dealing with high-resolution 3D images. (Of course, nowadays even a low-end netbook that is designed only for word processing and web browsing can typically handle a billion calculations per second.) But there the analogy with humans is quite accurate as well: Written language is about 10,000 years old, while the human visual mind is at least 100,000. We were 3D image analyzers long before we were word processors. This may be why we say “a picture is worth a thousand words”; we process each with about as much effort, even though the image necessarily contains thousands of times as many bits.

Why is the computer capable of so many different things? Why is the human mind capable of so many more? Not because they are simple and impinged upon by their environments, but because they are complex and precision-engineered to nonlinearly amplify tiny inputs into vast outputs—but only certain tiny inputs.

That is, it is because of our nature that we are capable of being nurtured. It is precisely the millions of years of genetic programming that have optimized the human brain that allow us to learn and adapt so flexibly to new environments and form a vast multitude of languages and cultures. It is precisely the genetically-programmed humanity we all share that makes our environmentally-acquired diversity possible.

In fact, causality also runs in the other direction. Indeed, when I said other organisms were “all nature” that wasn’t right either; for even tightly-programmed instincts are evolved through millions of years of environmental pressure. Human beings have even been engaged in cultural interactions long enough that culture has begun to affect our genetic evolution; the reason I can digest lactose is that my ancestors about 10,000 years ago raised goats. We have our nature because of our ancestors’ nurture.

And then of course there’s the fact that we need a certain minimum level of environmental enrichment even to develop normally; a genetically-normal human raised into a deficient environment will suffer a kind of mental atrophy, as when children raised feral lose their ability to speak.

Thus, the question “nature or nurture?” seems a bit beside the point: We are extremely flexible and responsive to our environment, because of innate genetic hardware and software, which requires a certain environment to express itself, and which arose because of thousands of years of culture and millions of years of the struggle for survival—we are nurture because nature because nurture.

But perhaps we didn’t actually mean to ask about human traits in general; perhaps we meant to ask about some specific trait, like spatial intelligence, or eye color, or gender identity. This at least can be structured as a coherent question: How heritable is the trait? What proportion of the variance in this population is caused by genetic variation? Heritability analysis is a well-established methodology in behavioral genetics.

Yet that isn’t the same question at all. For while height is extremely heritable within a given population (usually about 80%), human height worldwide has been increasing dramatically over time due to environmental influences and can actually be used as a measure of a nation’s economic development. (Look at what happened to the height of men in Japan.) How heritable is height? You have to be very careful what you mean.

Meanwhile, the heritability of neurofibromatosis is actually quite low—as many people acquire the disease by new mutations as inherit it from their parents—but we know for a fact it is a genetic disorder, because we can point to the specific genes that mutate to cause the disease.

Heritability also depends on the population under consideration; speaking English is more heritable within the United States than it is across the world as a whole, because there are a larger proportion of non-native English speakers in other countries. In general, a more diverse environment will lead to lower heritability, because there are simply more environmental influences that could affect the trait.
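
Here is a toy simulation of that point (a sketch with made-up numbers and a helper function of my own naming, not a model of any real trait): the spread of genetic values is held fixed, and only the spread of environments changes.

```python
import random
from statistics import pvariance

def simulated_heritability(env_sd, n=100_000, seed=0):
    """Toy model: phenotype = genetic value + environmental value, drawn independently.
    Returns Var(G) / Var(P), the share of phenotypic variance that lines up with genes."""
    rng = random.Random(seed)
    genes = [rng.gauss(0, 1.0) for _ in range(n)]    # genetic spread held fixed
    envs = [rng.gauss(0, env_sd) for _ in range(n)]  # environmental spread varies
    phenotypes = [g + e for g, e in zip(genes, envs)]
    return pvariance(genes) / pvariance(phenotypes)

print(simulated_heritability(env_sd=0.5))  # ~0.8: homogeneous environment, high heritability
print(simulated_heritability(env_sd=2.0))  # ~0.2: diverse environment, low heritability
```

The genes are doing exactly the same thing in both runs; only the diversity of environments differs, and the measured heritability moves accordingly.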

As children get older, their behavior gets more heritable, a result which probably seems completely baffling until you understand what heritability really means. Your genes become a more important factor in your behavior as you grow up, because you become separated from the environment of your birth and immersed in the general environment of your whole society. Lower environmental diversity means higher heritability, by definition. There’s also an effect of choosing your own environment; people who are intelligent and conscientious are likely to choose to go to college, where they will be further trained in knowledge and self-control. This latter effect is called niche-picking.

This is why saying something like “intelligence is 80% genetic” is basically meaningless, and “intelligence is 80% heritable” isn’t much better until you specify the reference population. The heritability of intelligence depends very much on what you mean by “intelligence” and what population you’re looking at for heritability. But even if you do find a high heritability (as we do for, say, Spearman’s g within the United States), this doesn’t mean that intelligence is fixed at birth; it simply means that parents with high intelligence are likely to have children with high intelligence. In evolutionary terms that’s all that matters—natural selection doesn’t care where you got your traits, only that you have them and pass them to your offspring—but many people do care, and IQ being heritable because rich, educated parents raise rich, educated children is very different from IQ being heritable because innately intelligent parents give birth to innately intelligent children. If genetic variation is systematically related to environmental variation, you can measure a high heritability even though the genes are not directly causing the outcome.
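
A small extension of the same kind of toy model illustrates this point (again with made-up numbers; the “naive estimate” here is just the squared gene-phenotype correlation, a deliberately crude stand-in for a real heritability estimator):

```python
import random
from statistics import pvariance, correlation  # correlation requires Python 3.10+

# Environments now track genes: children who draw "favorable" genetic values also
# tend to get better schools, nutrition, and so on. The direct genetic effect is unchanged.
rng = random.Random(1)
genes = [rng.gauss(0, 1.0) for _ in range(100_000)]
envs = [0.5 * g + rng.gauss(0, 1.0) for g in genes]  # gene-environment correlation
phenotypes = [g + e for g, e in zip(genes, envs)]

print(pvariance(genes) / pvariance(phenotypes))  # ~0.31: the directly causal genetic share
print(correlation(genes, phenotypes) ** 2)       # ~0.69: what a naive estimate attributes to genes
```

The genes do the same direct causal work in both numbers; the gap between them is the environment riding along with the genes.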

We do use twin studies to try to sort this out, but because identical twins raised apart are exceedingly rare, two very serious problems emerge: One, there usually isn’t a large enough sample size to say anything useful; and two, more importantly, this is actually an inaccurate measure in terms of natural selection. The evolutionary pressure is based on the correlation with the genes—it actually doesn’t matter whether the genes are directly causal. All that matters is that organisms with allele X survive and organisms with allele Y do not. Usually that’s because allele X does something useful, but even if it’s simply because people with allele X happen to mostly come from a culture that makes better guns, that will work just as well.

We can see this quite directly: White skin spread across the world not because it was useful (it’s actually terrible in any latitude other than subarctic), but because the cultures that conquered the world happened to be composed mostly of people with White skin. In the 15th century you’d find a very high heritability of “using gunpowder weapons”, and there was definitely a selection pressure in favor of that trait—but it obviously doesn’t take special genes to use a gun.

The kind of heritability you get from twin studies is answering a totally different, nonsensical question, something like: “If we reassigned all offspring to parents randomly, how much of the variation in this trait in the new population would be correlated with genetic variation?” And honestly, I think the only reason people think that this is the question to ask is precisely because even biologists don’t fully grasp the way that nature and nurture are fundamentally entwined. They are trying to answer the intuitive question, “How much of this trait is genetic?” rather than the biologically meaningful “How strongly could a selection pressure for this trait evolve this gene?”

And if right now you’re thinking, “I don’t care how strongly a selection pressure for the trait could evolve some particular gene”, that’s fine; there are plenty of meaningful scientific questions that I don’t find particularly interesting and are probably not particularly important. (I hesitate to provide a rigid ranking, but I think it’s safe to say that “How does consciousness arise?” is a more important question than “Why are male platypuses venomous?” and “How can poverty be eradicated?” is a more important question than “How did the aircraft manufacturing duopoly emerge?”) But that’s really the most meaningful question we can construct from the ill-formed question “How much of this trait is genetic?” The next step is to think about why you thought that you were asking something important.

What did you really mean to ask?

For a bald question like, “Is being gay genetic?” there is no meaningful answer. We could try to reformulate it as a meaningful biological question, like “What is the heritability of homosexual behavior among males in the United States?” or “Can we find genetic markers strongly linked to self-identification as ‘gay’?” but I don’t think those are the questions we really meant to ask. I think actually the question we meant to ask was more fundamental than that: Is it legitimate to discriminate against gay people? And here the answer is unequivocal: No, it isn’t. It is a grave mistake to think that this moral question has anything to do with genetics; discrimination is wrong even against traits that are totally environmental (like religion, for example), and there are morally legitimate actions to take based entirely on a person’s genes (the obvious examples all coming from medicine—you don’t treat someone for cystic fibrosis if they don’t actually have it).

Similarly, when we ask the question “Is intelligence genetic?” I don’t think most people are actually interested in the heritability of spatial working memory among young American males. I think the real question they want to ask is about equality of opportunity, and what it would look like if we had it. If success were entirely determined by intelligence and intelligence were entirely determined by genetics, then even a society with equality of opportunity would show significant inequality inherited across generations. Thus, inherited inequality is not necessarily evidence against equality of opportunity. But this is in fact a deeply disingenuous argument, used by people like Charles Murray to excuse systemic racism, sexism, and concentration of wealth.

We didn’t have to say that inherited inequality is necessarily or undeniably evidence against equality of opportunity—merely that it is, in fact, evidence of inequality of opportunity. Moreover, it is far from the only evidence against equality of opportunity; we also can observe the fact that college-educated Black people are no more likely to be employed than White people who didn’t even finish high school, for example, or the fact that otherwise identical resumes with predominantly Black names (like “Jamal”) are less likely to receive callbacks compared to predominantly White names (like “Greg”). We can observe that the same is true for resumes with obviously female names (like “Sarah”) versus obviously male names (like “David”), even when the hiring is done by social scientists. We can directly observe that one-third of the 400 richest Americans inherited their wealth (and if you look closer into the other two-thirds, all of them had some very unusual opportunities, usually due to their family connections—“self-made” is invariably a great exaggeration). The evidence for inequality of opportunity in our society is legion, regardless of how genetics and intelligence are related. In fact, I think that the high observed heritability of intelligence is largely due to the fact that educational opportunities are distributed in a genetically-biased fashion, but I could be wrong about that; maybe there really is a large genetic influence on human intelligence. Even so, that does not justify widespread and directly-measured discrimination. It does not justify a handful of billionaires luxuriating in almost unimaginable wealth as millions of people languish in poverty. Intelligence can be as heritable as you like and it is still wrong for Donald Trump to have billions of dollars while millions of children starve.

This is what I think we need to do when people try to bring up a “nature versus nurture” question. We can certainly talk about the real complexity of the relationship between genetics and environment, which I think are best summarized as “nature via nurture”; but in fact usually we should think about why we are asking that question, and try to find the real question we actually meant to ask.