Naive moral Darwinism

Feb 23 JDN 2460730

Impressed by the incredible usefulness of evolutionary theory in explaining the natural world, many people have tried to apply it to ethical claims as well. The basic idea is that morality evolves; morality is an adaptation just like any other, a trait which has evolved by mutation and natural selection.

Unfortunately the statement “morality evolves” is ambiguous; it could mean a number of different things. This ambiguity has allowed abuses of evolutionary thinking in morality.

Two that are particularly harmful are evolutionary eugenics and laissez-faire Darwinism, both of which fall under an umbrella I’ll call ‘naive moral Darwinism’.

They are both terrible; it saddens me that many people propound them. Creationists will often try to defend their doubts about evolution on empirical grounds, but they really can’t, and I think even they realize this. Their real objection to evolution is not that it is unscientific, but that it is immoral; the concern is that studying evolution will make us callous and selfish. And unfortunately, there is a grain of truth here: A shallow understanding of evolution can indeed lead to a callous and selfish mindset, as people try to shoehorn evolutionary theory onto moral and political systems without a deep understanding of either.

The first option is usually known as “Social Darwinism”, but I think a better term is “evolutionary eugenics”. (“Social Darwinism” is a pejorative, not a self-description.) This philosophy, if we even credit it with the term, is especially ridiculous; indeed, it is evil. It doesn’t make any sense, either as ethics or as evolution, and it has led to some of the most terrible atrocities in history, from forced sterilization to mass murder. Darwin adamantly disagreed with it, and it rests upon a variety of deep confusions about evolutionary science.

First, in practice at least, eugenicists presumed that traits like intelligence, health, and even wealth are almost entirely genetic—when it’s obvious that they are very heavily affected by the environment. There certainly are genetic factors involved, but the presumption that these traits are entirely genetic is absurd. Indeed, the fact that the wealth of parents is strongly correlated with that of their children has an obvious explanation completely unrelated to genetics: Inheritance. Wealthy parents can also give their children many advantages in life that lead to higher earnings later. Controlling for inherited environment, there is still some heritability of wealth, but it’s quite weak; it’s probably due to personality traits like conscientiousness, ambition, and in fact narcissism, which are beneficial in a capitalist economy. Hence breeding the wealthy may make more people who are similar to the wealthy; but there’s no reason to think it will actually make the world wealthier.

Moreover, eugenics rests upon a confusion between fitness in the evolutionary sense of expected number of allele copies, and the notion of being “fit” in some other sense, like physical health (as in “fitness club”), social conformity (as in “misfits”) or mental sanity (as in “unfit to stand trial”). Strong people are not necessarily higher in genetic fitness, nor are smart people, nor are people of any particular race or ethnicity. Fitness describes the probability of one’s genes being passed on in a given environment—without reference to a specific environment, it says basically nothing. Given the reference environment “majority of the Earth’s land surface”, humans are very fit organisms, but so are rats and cockroaches. Given the reference environment “deep ocean”, sharks fare far better than we ever will, and better even than our cousins the cetaceans who live there. Moreover, there is no reason to think that intelligence in the sense of Einstein or Darwin is particularly fit. The intelligence of an ordinary person is definitely fit—that’s why we have it—but beyond that point, it may in fact be counterproductive. (Consider Isaac Newton and Alan Turing, both of whom were geniuses and neither of whom ever married or had children.)

There is a milder form of this that is still quite harmful; I’ll call it “laissez-faire Darwinism”. It says that because natural selection automatically perpetuates the fit at the expense of the unfit, it ultimately leads to the best overall outcome. Under laissez-faire Darwinism, we should simply let evolution happen as it is going to happen. This theory is not as crazy as evolutionary eugenics—nor would its consequences be as dire—but it’s still quite confused. Natural selection is a law of nature, not a moral principle. It says what will happen, not what should happen. Indeed, like any law of nature, natural selection is inevitable. No matter what you do, natural selection will act upon you. The genes that work will survive, the genes that fail will die. The specifics of the environmental circumstances will decide which genes are the ones that survive, and there are random deviations due to genetic drift; but natural selection always applies.

Typically laissez-faire Darwinists argue that we should eliminate all government welfare, health care, and famine relief, because they oppose natural selection; but this would be like tearing down all skyscrapers because they oppose gravity, or, as Benjamin Franklin was once asked to do, ceasing to install lightning rods because they oppose God’s holy smiting. Natural selection is a law of nature, a fundamental truth; but through wise engineering we can work with it instead of against it, just as we do with gravity and electricity. We would ignore laws of nature at our own peril—an engineer who failed to take gravity into account would not make very good buildings!—but we can work with them and around them to achieve our goals. This is no less true with natural selection than with any law of nature, whether gravity, electricity, quantum mechanics, or anything else. As a laser uses quantum mechanics and a light bulb uses electricity, so wise social policy can use natural selection to serve human ends. Indeed, welfare, health care, and famine relief are precisely the sort of things that can modulate the fitness of our entire species to make us all better off.

There are however important ways in which evolution can influence our ethical reasoning, which I’ll talk about in later posts.

Defining evolution

Feb 2 JDN 2460709

In the last post I said I’d explain the basics of evolution, then went into a bunch of detail about genetics. Why all this stuff about DNA? Weren’t we supposed to be talking about evolution? Yes—but it’s impossible to truly understand evolution without understanding DNA. This unity between genetics and evolution is called the Modern Synthesis, and it is the unified field theory of the life sciences. It’s quite different from the theory Darwin published in 1859, but the fundamental insights were his; the Modern Synthesis is a body of flesh over the skeleton of Darwinian evolution. Now that I have explained the basics of DNA, it is time to discuss evolution itself.

The fundamental unit of evolution is the gene. (Darwin, among others, insisted that the fundamental unit of evolution is the organism, because it is organisms that are born and die. There is some truth to this, but given the presence of phenomena like kin selection and genetic drift, we also need to consider genes themselves. Richard Dawkins makes a distinction between “replicators” (genes) and “vehicles” (organisms) that makes a great deal of sense to me—both are necessary parts of the same system, and it’s a little silly to ask which is “more fundamental”.) The fundamental unit of evolution is not the population or the species; it is populations that evolve, but they evolve by natural selection acting upon individuals and genes. Natural selection is not sensitive to “the good of the species”; it is only sensitive to the good of the organism and the good of the gene.

A gene is a section of DNA that, when processed by the appropriate proteins, produces a particular protein. Most DNA is not in the form of genes. The majority of DNA has no effect—you can change it without affecting the organism—and most of the rest is involved in regulating the genes, not in producing proteins. Yet, genes are the recipes by which we are made. Human beings have genes for hemoglobin that oxygenates our blood, genes for melanin that pigments our skin, genes for the enzymes that synthesize serotonin, which transmits signals in our brains, genes for keratin that makes up our hair, and roughly 20,000 other genes that produce other proteins (early estimates ran much higher; the exact count is still being refined). An allele is a particular variant of a gene which produces a particular variant of the resulting protein. Alleles in melanin genes give different people different colors of skin; a particular allele in a hemoglobin gene gives some people sickle-cell anaemia.

When the distribution of alleles in a population changes, that is evolution. Yes, that’s all “evolution” means: changes in the distribution of alleles in a population. When a baby is born, that’s evolution. When a person dies, that’s evolution. This is what we mean when we say that evolution is a fact; it is a fact that alleles do change distribution in populations. Individuals do not evolve; populations evolve. You will never see a dog turn into a cat, nor an ape into a human. You could see, if you were watching for millions of years, a population of animals that started very dog-like and got increasingly cat-like with each generation, or a population of animals that started very ape-like and got increasingly human-like with each generation. Even these are not necessary occurrences; under different environmental circumstances, the same genes can evolve in completely different directions.
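
To make the definition concrete, here is a minimal sketch in Python (the population and allele names are invented for illustration): a population is just a collection of alleles, and every birth and death shifts the distribution.

```python
# A minimal sketch of "evolution is a change in allele frequencies".
from collections import Counter

def allele_frequencies(population):
    """Fraction of each allele in the population."""
    counts = Counter(population)
    total = len(population)
    return {allele: n / total for allele, n in counts.items()}

population = ["A"] * 70 + ["a"] * 30
print(allele_frequencies(population))   # {'A': 0.7, 'a': 0.3}

population.append("a")                  # a birth adds an allele copy
population.remove("A")                  # a death removes one
print(allele_frequencies(population))   # the distribution has changed: evolution
```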

Fitness is the expected number of copies that an allele produces in the next generation. (There are a few subtly different ways of defining fitness; the one I prefer is the expected value of the number of copies of a given allele in the next generation. The fitness f of an allele a at generation t is given by the expectation of the number n of copies of that allele in the population at generation t+1:

f(a, t) = E[n(a, t+1)]

This is an inclusive fitness measure, which accounts for kin selection better than exclusive fitness measures like “predicted grandchildren” or “expected number of reproductively-viable offspring”. In practical terms these generally give the same results; but when they don’t, the inclusive measure is to be preferred.)

Fitness is a probabilistic notion—alleles with high fitness are likely to be passed on, but this is not guaranteed. “Survival of the fittest” ultimately just means that genes that are likely to make many copies are likely to have many copies. It has been said that this is a tautology, and indeed it is; but so is the Pythagorean Theorem. Some tautologies are useful, and all tautologies are undeniably true.

What causes evolution? Organisms are born, reproduce, and die. Any time this happens, it changes the distribution of alleles in the population—it is evolution. If there was a reason why the ones who lived lived and the ones who died died, then the actual number of copies of each allele in the population will reflect the fitness of those alleles; this is called natural selection. On the other hand, if it just happened by chance, then the distribution of alleles won’t match the fitness; this is called genetic drift. Examples of each: Trees are tall, giraffes eat leaves, so giraffes with longer necks get more food and live longer—that’s natural selection. A flood rips through the savannah and kills half of the giraffes, and it just happens that more long-necked than short-necked giraffes die—that’s genetic drift. The difference can be subtle, since sometimes we don’t know what the reasons are; if it turned out that there was some reason why floods are more likely to kill long-necked giraffes (they can’t swim as well?), then in fact what we thought was genetic drift was really natural selection. But notice: Natural selection is not chance. Natural selection is the opposite of chance. If evolution happens by chance, that’s genetic drift. Natural selection is evolution that happens for a reason.
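
Here is a hedged sketch of that distinction, using a simple Wright-Fisher-style model (a standard textbook idealization, not anything specific to giraffes or floods). With the selective advantage s set to zero, every change in frequency is pure chance, i.e. genetic drift; with s > 0, allele A spreads for a reason.

```python
# Contrast natural selection with genetic drift in a toy population.
import random

def next_generation(freq_A, pop_size, s=0.0):
    """Sample one Wright-Fisher generation; s is allele A's advantage."""
    w_A, w_a = 1.0 + s, 1.0                      # relative fitnesses
    p = freq_A * w_A / (freq_A * w_A + (1.0 - freq_A) * w_a)
    count_A = sum(random.random() < p for _ in range(pop_size))
    return count_A / pop_size

def simulate(s, pop_size=500, generations=100, freq_A=0.5):
    for _ in range(generations):
        freq_A = next_generation(freq_A, pop_size, s)
    return freq_A

random.seed(1)
print("drift only (s = 0):  ", simulate(s=0.0))    # wanders by chance
print("selection (s = 0.05):", simulate(s=0.05))   # A rises for a reason
```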

Natural selection changes populations, but what causes them to separate into distinct species? Well, a species is really a breeding population—it is a group of organisms that regularly interbreeds within the group and does not regularly interbreed outside the group. In most cases, breeding between species is actually impossible; but in many cases it is simply rare. Indeed, there is a particularly interesting case called a ring species, in which interbreeding possibilities rest on a continuum rather than being sharply delineated. In a ring species, there are several distinct populations for which some can interbreed easily, others can interbreed with difficulty, and others can’t interbreed at all. A classic case is the Ensatina salamanders who live around the Central Valley in California. There are nineteen populations, and each can interbreed with its adjacent populations—but the two populations at the far ends cannot interbreed. Ensatina eschscholtzii eschscholtzii can interbreed with E. e. croceater, which can interbreed with E. e. oregonensis, and so on all the way to E. e. klauberi—but E. e. eschscholtzii on one end can’t interbreed with E. e. klauberi on the other end. Are they different “species”? It’s difficult to say. If all the intermediates died out, we would call them different species, Ensatina eschscholtzii and Ensatina klauberi; but in fact genes do sometimes pass between them, because they can both interbreed with the intermediates. Really, the concept “species” fails to capture the true complexity of the situation.

This is not a problem for evolutionary theory—it is a prediction of evolutionary theory. We should expect to see new species occasionally forming, and while they are in the process of forming there should be many intermediates that aren’t yet distinct species. Evolution predicts gradual divergence, and sometimes we are lucky enough to see that divergence in process.

Natural selection can only act upon alleles that already exist; it chooses the best out of what’s available, not the best that could possibly exist. This is why dolphins breathe air instead of water; breathing water would be much better for their lifestyle, but no dolphin has yet been born who can breathe water. The alleles aren’t there, so natural selection cannot act upon them. If a mutant dolphin is someday born who can breathe water, as long as they don’t suffer from other problems as a result of their mutation, they are likely to live a long time and have lots of offspring; in a hundred generations perhaps water-breathing dolphins would form a new species, or even replace air-breathing dolphins. And notice how short a time that is: 100 generations of dolphins is only about 1000 years. We could watch this happening in historical time. If it had happened a million years ago, the fossil record would probably never show the intermediate forms. This is why we don’t see transitional forms between closely-related species; because the differences are so subtle, the necessary changes can occur very rapidly, in too few generations to ensure fossilization.

Indeed, monogenic traits—those that can be changed by a single mutation—never produce transitional forms. There is a single gene for sickle-cell anaemia in humans; we should not expect to see people with “30% sickle-cell anaemia”, because there are only three options: you either have no copies of the sickle-cell allele (normal), you have one copy (sickle-cell trait), or you have two copies (sickle-cell anaemia). In fact, in this particular case, the one-copy variant isn’t even mild anaemia; it is a generally healthy non-anaemic state that offers protection against malaria. There is likewise a single gene for six fingers in humans; the six-finger allele is dominant, so even one copy gives you six fingers, and no number of copies gives you five and a half. Even if we had access to every individual organism that ever lived, we still wouldn’t see transitional forms for monogenic traits. Given that we actually have fossils of less than one in ten billion organisms that ever lived, it’s not surprising that most evolutionary changes leave no mark in the fossil record.
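
As a worked example, under the textbook Hardy-Weinberg rule a single locus with allele frequencies p and q yields exactly three genotype frequencies—p^2, 2pq, and q^2—with nothing in between. The allele frequency below is an arbitrary illustrative value, not real epidemiological data.

```python
# Why a monogenic trait has only three forms, never "30%" of one.
q = 0.1              # hypothetical frequency of the sickle-cell allele
p = 1 - q            # frequency of the normal allele

genotypes = {
    "no copies (normal)":               p * p,
    "one copy (sickle-cell trait)":     2 * p * q,
    "two copies (sickle-cell anaemia)": q * q,
}
for genotype, frequency in genotypes.items():
    print(f"{genotype}: {frequency:.1%}")
# Three discrete outcomes; no genotype gives "30% sickle-cell anaemia".
```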

Furthermore, it’s important to understand that natural selection, even when there is plenty of variation to act on, does not produce perfectly-adapted organisms. It only produces organisms that are good enough to survive and pass on their alleles. In fact, there can be multiple fit alleles of the same gene in a population—all different, perhaps even some better than others, but each good enough to keep on surviving.

Indeed, the fitness of one allele can increase the fitness of another allele, in a number of different ways. The most morally-relevant ones only make sense in terms of game theory, so I will wait until later posts to get into them, but there are a few worth mentioning here. The first is co-evolution. Organisms evolve to suit their environments—but part of an organism’s environment consists of other organisms. Bees would not function if there were no flowers—but nor would flowers function without bees. So which came first, the bee or the flower? Neither. Ancient ancestors of each evolved together, co-evolved, the proto-flowers growing more flower-like as the proto-bees grew more bee-like, until finally an equilibrium was reached at the bees and flowers we see today.

Another way that organisms can affect the evolution of other organisms is through frequency-dependent selection, in which the fitness of a given allele depends upon the distribution of other alleles of the same gene. The most important case of frequency-dependent selection is the sex ratio, the balance of males and females within a species. If there are more males than females, the fitness of females goes up—it pays to be female; you’ll get your choice of males. Conversely, if there are more females than males, it pays to be male. Hence, over time, sex ratios reach an equilibrium at 50% male and 50% female, which has happened in almost every species (eusocial insects are the only major exception, and it’s due to their weird haplodiploid genetics). There are other cases of frequency-dependent selection as well; for instance, in stag beetles (Lucanidae), there are three kinds of males, called “alpha”, “beta”, and “gamma”. Alpha males have large horns and fight fiercely with other alpha males; they risk being killed in the process, but if they win the fight, they get all the best females. Beta males have short horns and only fight other beta males; this limits their mating pool, but prevents them from being killed by alpha males. Finally, gamma males look just like females and will occasionally sneak past an alpha male and mate with his females. This is frequency-dependent selection because the success of each strategy depends on the other strategies in a fashion similar to rock-paper-scissors. If gamma males become very common, beta males will become more successful, because they won’t get cheated the way alpha males do. If beta males become common, alpha males will become more successful, because they can beat beta males in fights. If alpha males become common, gamma males will become more successful, because they can cheat alpha males. In the long run, the system settles into an equilibrium with a certain fraction of all three types.
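
For readers who like to see the mechanics, here is a rough sketch of that rock-paper-scissors dynamic as discrete replicator dynamics. The payoff numbers are invented purely to encode “each type beats one other type”; they are not measurements from any real population.

```python
# Frequency-dependent selection as a rock-paper-scissors game.
def step(freqs, payoff, rate=0.1):
    """One round of discrete replicator dynamics."""
    fitness = {t: sum(payoff[t][u] * freqs[u] for u in freqs) for t in freqs}
    mean = sum(freqs[t] * fitness[t] for t in freqs)
    return {t: freqs[t] * (1 + rate * (fitness[t] - mean)) for t in freqs}

# Invented payoffs: alpha beats beta, beta beats gamma, gamma beats alpha.
payoff = {
    "alpha": {"alpha": 0, "beta": 1, "gamma": -1},
    "beta":  {"alpha": -1, "beta": 0, "gamma": 1},
    "gamma": {"alpha": 1, "beta": -1, "gamma": 0},
}
freqs = {"alpha": 0.8, "beta": 0.1, "gamma": 0.1}   # alphas start common
for _ in range(200):
    freqs = step(freqs, payoff)
print({t: round(f, 3) for t, f in freqs.items()})
# The three types chase each other in cycles around a mixed equilibrium.
```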

A third way alleles affect other alleles is sexual selection, in which the alleles of one sex affect the fitness of alleles in the other sex, because sexual compatibility has obvious advantages. For instance, when there are lots of alleles in peahens that make them attracted to big, colorful tails, there is a fitness advantage to being a peacock with a big, colorful tail. Hence, alleles for big, colorful tails in peacocks will be selected for. But then, if all the males have big, colorful tails, there is a fitness advantage to being a female who prefers big, colorful tails, and so a positive feedback loop forms; the end result is peacocks with ridiculously huge, ridiculously colorful tails and peahens who love them for it.

Everything above is very technical and scientific, and I imagine it is not very controversial or offensive to anyone. In future posts, I’ll get into the stuff that really upsets people, the true source of controversy on evolution.

Evolution: Foundations of Genetics


Jan 26 JDN 2460702

It frustrates me that in American society, evolutionary biology is considered a controversial topic. When I use knowledge from quantum physics or from organic chemistry, all I need to do is cite a credible source; I don’t need to preface it with a defense of the entire scientific field. Yet in the United States today, even basic statements of facts observed in evolutionary biology are met with incredulity. The consensus in the scientific community about evolution is greater than the consensus about quantum physics, and comparable to the consensus about organic chemistry. 95% of scientists agree that evolution happens, that Darwinian natural selection is the primary cause, and that human beings share a common ancestor with every other life form on Earth. Polls of scientists have consistently made this clear, and the wild success of Project Steve continues to vividly demonstrate it.

But I would rather defend evolution than have to tiptoe around it, or worse have my conclusions ignored because I use it. So, here goes.

You may think you understand evolution, but especially if you doubt that evolution is true, odds are good that you really don’t. Even most people who have taken college courses in evolutionary biology have difficulty understanding evolution.

Evolution is a very rich and complicated science, and I don’t have room to do it justice here. I merely hope that I can give you enough background to make sense of the core concepts, and convince you that evolution is real and important.

Foundations of genetics

So let us start at the beginning. DNA—deoxyribonucleic acid—is a macromolecular (very big and complicated) organic (carbon-based) acid (chemical that can give up hydrogen ions in solution) that is produced by all living cells. More properly, it is a class of macromolecular organic acids, because differences between DNA strands are actually chemical differences in the molecule. The structure of DNA consists of two long chains of constituent molecules called nucleotides; for chemical reasons nucleotides usually bond in pairs, adenine (A) with thymine (T), guanine (G) with cytosine (C). Pairs of nucleotides are called base pairs. We call it a “double-helix” because the two chains are normally wrapped around each other in a helix shape.

Because of this base-pair correspondence, the two strands of a DNA molecule are complementary; if one half is GATTACA, the other half will be CTAATGT. This correspondence runs both ways: either strand can be reproduced from the other, and this is how DNA replicates. A DNA strand GATTACA/CTAATGT can split into its GATTACA half and its CTAATGT half, and then the original GATTACA half will acquire new nucleotides and make a new CTAATGT for itself; similarly, the original CTAATGT half will make a new GATTACA. At the end of this process, there are two precise copies of the original GATTACA/CTAATGT strand. This process can be repeated as necessary.
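
The pairing rule is simple enough to express in a few lines of code; this minimal sketch just applies the A-T and G-C correspondence base by base.

```python
# Base pairing: each strand determines its complement.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the complementary strand, base by base."""
    return "".join(PAIR[base] for base in strand)

print(complement("GATTACA"))              # CTAATGT
print(complement(complement("GATTACA")))  # GATTACA: reversible, as described
```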

DNA molecules can vary in size from a few base-pairs (like the sequence GATTACA), to the roughly 160,000 base-pairs of Carsonella bacteria, up to the 3 billion base-pairs of humans and beyond. While complexity of DNA and complexity of organism are surely related (it’s impossible to make a really complicated organism with very simple DNA), more base pairs does not necessarily imply a more complex organism. The single-celled amoeboid Polychaos dubium has 670 billion base-pairs. Amoeboids are relatively complex, all things considered; but they’re hardly 200 times more complex than we are!

The copying of DNA is exceedingly precise, but like anything in real life, not perfect. Cells have many physical and chemical mechanisms to correct bad copying, but sometimes—on the order of one error per billion base-pairs copied—something goes wrong. Sometimes, one nucleotide gets switched for another; perhaps what should have been a T becomes an A, or what should have been an A becomes a G. Other times, a whole sequence of DNA gets duplicated and inserted in a new place; still other times entire pieces of DNA are lost, never to be copied again. In some cases a sequence is flipped around backwards. All of these things (a single-nucleotide substitution, an insertion, a deletion, and an inversion, respectively) are forms of mutation. Mutation is always happening, but it can be increased by the presence of radiation, toxins, and other stresses. Cells with badly mutated DNA usually self-destruct or are destroyed by the immune system; if not, mutant body cells can cause cancer or other health problems. Usually it’s only mutations in gametes—the sperm and egg cells that carry DNA to the next generation—that actually have a long-term effect on future generations. Most mutations do not have any significant effect, and most of those that do have bad effects. It is only the rare minority of mutations that actually produces something useful to an organism’s survival.
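
As a toy illustration, the four kinds of mutation can be mimicked as edits on a DNA string. Real mutation is a chemical process, not string editing, so treat this purely as a mnemonic.

```python
# Toy versions of the four mutation types named above.
def substitute(dna, i, base):   # single-nucleotide substitution
    return dna[:i] + base + dna[i + 1:]

def insertion(dna, i, fragment):  # new sequence inserted at position i
    return dna[:i] + fragment + dna[i:]

def deletion(dna, i, n):          # n bases lost, never to be copied again
    return dna[:i] + dna[i + n:]

def inversion(dna, i, n):         # a stretch flipped around backwards
    return dna[:i] + dna[i:i + n][::-1] + dna[i + n:]

dna = "GATTACA"
print(substitute(dna, 1, "C"))  # GCTTACA
print(insertion(dna, 3, "GG"))  # GATGGTACA
print(deletion(dna, 2, 2))      # GAACA
print(inversion(dna, 2, 3))     # GAATTCA
```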

What does DNA do? It makes proteins. Technically, proteins and the cell’s other molecular machinery make proteins (polymerases, ribosomes, and so on), but which protein is produced by such a process depends upon the order of base pairs in a DNA strand. DNA has been likened to a “code” or a “message”, but this is a little misleading. It’s definitely a sequence that contains information, but the “code” is less like a cryptographer’s cipher and more like a computer’s machine code; it interacts directly with the hardware to produce an output. And it’s important to understand that when DNA is “read” and “decoded”, it’s all happening purely by chemical reactions, and there is no conscious being doing the reading. While metaphorically we might say that DNA is a “code” or a “language”, we must not take these metaphors too literally; DNA is not a language in the same sense as English, nor is it a code in the same sense as the Enigma cipher.
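
To see what “machine code” means here, consider this minimal sketch of translation: successive three-base codons select amino acids. Only a tiny hand-picked subset of the real codon table is included, and real cells translate via RNA and ribosomes, not a lookup dictionary.

```python
# A tiny, hand-picked subset of the real codon table.
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GAA": "Glu", "AAA": "Lys",
    "TGG": "Trp", "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    """Read a coding strand codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "???")
        if amino == "STOP":
            break
        protein.append(amino)
    return "-".join(protein)

print(translate("ATGTTTGAATGA"))  # Met-Phe-Glu (then a stop codon)
```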

Genotype and phenotype

DNA is also not a “blueprint”, as it is sometimes described. There is a one-to-one correspondence between a house and its blueprint: given a house, it would be easy to draw a blueprint much like the original blueprint; given a blueprint, one can construct basically the same house. DNA is not like this. There is no one-to-one correspondence between DNA and a living organism’s structure. Given the traits of an organism, it is impossible to reconstruct its DNA—and purely from the DNA, it is impossible to reconstruct the organism. A better analogy is to a recipe, which offers a general guide as to what to make and how to make it, but depending on the cook and the ingredients, may give quite different results. The ingredients in this case are nutrients, and the “cook” is the whole of our experience and interaction with the environment. No experience or environment can act upon us unless we have the right genes and nutrients to make it effective. No matter how long you let it sit, bread with no yeast will never rise—and no matter how hard you try to teach him, your dog will never be able to speak in fluent sentences.

Furthermore, genes rarely do only one thing in an organism; much as drugs have side effects, so do genes, a phenomenon called pleiotropy. Some genes are more pleiotropic than others, but really, all genes are pleiotropic. In any complex organism, genes will have complex effects. The genes of an organism are its genotype; the actual traits that it has are its phenotype. We have these two different words precisely because they are different things; genotype influences phenotype, but many other things influence phenotype besides genotype. The answer to the question “Nature or Nurture?” is always—always—“Both”. There are much more useful questions to ask, like “How much of the variation of this trait within this population is attributable to genetic differences?”, “How do environmental conditions trigger this phenotype in the presence of this genotype?”, and “Under what ecological circumstances would this genotype evolve?”

This is why it’s a bit misleading to talk about the “the gene for homosexuality” or “the gene for religiosity”; taken literally this would be like saying “the ingredient for chocolate cake” or “the beam for the Empire State Building”. At best we can distinguish certain genes that might, in the context of many other genes and environmental contributions, make a difference between particular states—much as removing the cocoa from chocolate cake makes some other kind of cake, it could be that removing a particular gene from someone strongly homosexual might make them nearer to heterosexual. It’s not that genes can be mapped one-to-one to traits of an organism; but rather that in many cases a genetic difference corresponds to a difference in traits that is ecologically significant. This is what geneticists mean when they say “the gene for X”; it’s a very useful concept in evolutionary theory, but I don’t think it’s one most laypeople understand. As usual, Richard Dawkins explains this matter brilliantly:

Probably the first point to make is that whenever a geneticist speaks of a gene ‘for’ such and such a characteristic, say brown eyes, he never means that this gene affects nothing else, nor that it is the only gene contributing to the brown pigmentation. Most genes have many distantly ramified and apparently unconnected effects. A vast number of genes are necessary for the development of eyes and their pigment. When a geneticist talks about a single gene effect, he is always talking about a difference between individuals. A gene ‘for brown eyes’ is not a gene that, alone and unaided, manufactures brown pigment. It is a gene that, when compared with its alleles (alternatives at the same chromosomal locus), in a normal environment, is responsible for the difference in eye colour between individuals possessing the gene and individuals not possessing the gene. The statement ‘G1 is a gene for phenotypic characteristic P1’ is always a shorthand. It always implies the existence, or potential existence, of at least one alternative gene G2, and at least one alternative characteristic P2. It also implies a normal developmental environment, including the presence of the other genes which are common in the gene pool as a whole, and therefore likely to be in the same body. If all individuals had two copies of the gene ‘for’ brown eyes and if no other eye colour ever occurred, the ‘gene for brown eyes’ would strictly be a meaningless concept. It can only be defined by reference to at least one potential alternative. Of course any gene exists physically in the sense of being a length of DNA; but it is only properly called a gene ‘for X’ if there is at least one alternative gene at the same chromosomal locus, which leads to not X.

It follows that there is no clear limit to the complexity of the ‘X’ which we may substitute in the phrase ‘a gene for X’. Reading, for example, is a learned skill of immense and subtle complexity. A gene for reading would, to naive common sense, be an absurd notion. Yet, if we follow genetic terminological convention to its logical conclusion, all that would be necessary in order to establish the existence of a gene for reading is the existence of a gene for not reading. If a gene G2 could be found which infallibly caused in its possessors the particular brain lesion necessary to induce specific dyslexia, it would follow that G1, the gene which all the rest of us have in double dose at that chromosomal locus, would by definition have to be called a gene for reading.

It’s important to keep this in mind when interpreting any new ideas or evidence from biology. Just as cocoa by itself is not chocolate cake because one also needs all the other ingredients that make it cake in the first place, “the gay gene” cannot exist in isolation because in order to be gay one needs all the other biological and neurological structures that make one a human being in the first place. Moreover, just as cocoa changes the consistency of a cake so that other ingredients may need to be changed to compensate, so a hypothetical “gay gene” might have other biological or neurological effects that would be inseparable from its contribution to sexual orientation.

It’s also important to point out that hereditary is not the same thing as genetic. By comparing pedigrees, it is relatively straightforward to determine the heritability of a trait within a population—but this is not the same as determining whether the trait is genetic. A great many traits that have nothing to do with DNA are systematically inherited from parents—like language, culture, and wealth. (These too can evolve, but it’s a different kind of evolution.) In the United States, IQ is about 80% heritable; but so is height, and yet nutrition has large, well-documented effects on height (the simplest case: malnourished people never grow very tall). If, as is almost certainly the case, there are many environmental influences such as culture and education that can affect IQ scores, then the heritability of IQ tells us very little.
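
A small simulation makes the point vividly: if children acquire a trait from their parents entirely through the environment (as with inherited wealth), a pedigree-style comparison will still find a strong parent-offspring correlation. All the numbers here are invented.

```python
# A trait transmitted with no genes at all still looks "heritable".
import random

random.seed(0)
# Parents have some trait value; children copy 80% of it through the
# environment alone (money, schooling, habits), plus unrelated noise.
parents = [random.gauss(100, 15) for _ in range(10_000)]
children = [0.8 * p + random.gauss(20, 10) for p in parents]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(correlation(parents, children))  # high, despite zero genetic cause
```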

In fact, some traits are genetic but not hereditary! Certain rare genetic diseases can appear by what is called de novo mutation; the genes that cause them can randomly appear in an individual without having been present in their parents. Neurofibromatosis occurs in as many people with no family history as it does in people with family history; and yet, neurofibromatosis is definitely a genetic disorder, for it can be traced to particular sections of defective DNA.

Honestly, most of the debate about nature versus nurture in human behavior is really quite pointless. Even if you ignore the general facts that phenotype is always an interaction between genes and environment, and feedback occurs between genes and environment over evolutionary time, human beings are the species for which the “Nature or nurture?” question reaches its most meaningless. It is human nature to be nurtured; it is written within our genes that we should be flexible, intelligent beings capable of learning and training far beyond our congenital capacities. An ant’s genes are not written that way; ants play out essentially the same program in every place and time, because that program is hard-wired within them. Humans have an enormous variety of behaviors—far outstripping the variety in any other species—despite having genetic variation of only about 0.1%; clearly most of the differences between humans are environmental. Yet, it is precisely the genes that code for being Homo sapiens that make this possible; if we’d had the genes of an ant or an earthworm, we wouldn’t have this enormous behavioral plasticity. So each person is who they are largely because of their environment—but that itself would not be true without the genes we all share.

On land acknowledgments

Dec 29 JDN 2460674

Noah Smith and Brad DeLong, both of whom I admire, have recently written about the practice of land acknowledgments. Smith is wholeheartedly against them. DeLong has a more nuanced view. Smith in fact goes so far as to argue that there is no moral basis for considering these lands to be ‘Native lands’ at all, which DeLong rightly takes issue with.

I feel like this might be an issue where it would be better to focus on Native American perspectives. (Not that White people aren’t allowed to talk about it; just that we tend to hear from them on everything, and this is something where maybe they’re less likely to know what they’re talking about.)

It turns out that Native views on land acknowledgments are also quite mixed; some see them as a pointless, empty gesture; others see them as a stepping-stone to more serious policy changes that are necessary. There is general agreement that more concrete actions, such as upholding treaties and maintaining tribal sovereignty, are more important.

I have to admit I’m much more in the ‘empty gesture’ camp. I’m only one-fourth Native (so I’m Whiter than I am not), but my own view on this is that land acknowledgments aren’t really accomplishing very much, and in fact aren’t even particularly morally defensible.

Now, I know that it’s not realistic to actually “give back” all the land in the United States (or Australia, or anywhere where indigenous people were forced out by colonialism). Many of the tribes that originally lived on the land are gone, scattered to the winds, or now living somewhere else they were forced to go (predominantly Oklahoma). Moreover, there are now more non-Native people living on that land than there ever were Native people living on it, and forcing them all out would be just as violent and horrific as forcing out the Native people was in the first place.

I even appreciate Smith’s point that there is something problematic about assigning ownership of land to bloodlines of people just because they happened to be the first ones living there. Indeed, as he correctly points out, they often weren’t the first ones living there; different tribes have been feuding and warring with each other since time immemorial, and it’s likely that any given plot of land was held by multiple different tribes at different times even before colonization.

Let’s make this a little more concrete.

Consider the Beaver Wars.


The Beaver Wars were a series of conflicts between the Haudenosaunee (that’s what they call themselves; to a non-Native audience they are better known by what the French called them, Iroquois) and several other tribes. Now, that was after colonization, and the French were involved, and part of what they were fighting over was the European fur trade—so the story is a bit complicated by that. But it’s a conflict we have good historical records of, and it’s pretty clear that many of these rivalries long pre-dated the arrival of the French.

The Haudenosaunee were brutal in the Beaver Wars. They slaughtered thousands, including many helpless civilians, and effectively wiped out several entire tribes, including the Erie and Susquehannock, and devastated several others, including the Mohicans and the Wyandot. Many historians consider these to be acts of genocide. Surely any land that the Haudenosaunee claimed as a result of the Beaver Wars is as illegitimate as land claimed by colonial imperialism? Indeed, isn’t it colonial imperialism?

Yet we have no reason to believe that these brutal wars were unique to the Haudenosaunee, or that they only occurred after colonization. Our historical records aren’t as clear going that far back, because many Native tribes didn’t keep written records—in fact, many didn’t even have a written language. But what we do know suggests that a great many tribes warred with a great many other tribes, and land was gained and lost in warfare, going back thousands of years.

Indeed, it seems to be a sad fact of human history that virtually all land, indigenous or colonized, is actually owned by a group that conquered another group (that conquered another group, that conquered another group…). European colonialism was simply the most recent conquest.

But this doesn’t make European colonialism any more justifiable. Rather, it raises a deeper question:

How should we decide who owns what land?

The simplest way, and the way that we actually seem to use most of the time, is to simply take whoever currently owns the land as its legitimate ownership. “Possession is nine-tenths of the law” was always nonsense when it comes to private property (that’s literally what larceny means!), but when it comes to national sovereignty, it is basically correct. Once a group manages to organize itself well enough to enforce control over a territory, we pretty much say that it’s their territory now and they’re allowed to keep it.

Does that mean that anyone is just allowed to take whatever land they can successfully conquer and defend? That the world must simply accept that chaos and warfare are inevitable? Fortunately, there is a solution to this problem.

The Westphalian solution.

The current solution to this problem is what’s called Westphalian sovereignty, after the Peace of Westphalia, two closely-related treaties that were signed in Westphalia (a region of Germany) in 1648. Those treaties established a precedent in international law that nations are entitled to sovereignty over their own territory; other nations are not allowed to invade and conquer them, and if anyone tries, the whole international community should fight to resist any such attempt.

Effectively, what Westphalia did was establish that whoever controlled a given territory right now (where “right now” means 1648) now gets the right to hold it forever—and everyone else not only has to accept that, they are expected to defend it. Now, clearly this has not been followed precisely; new nations have gained independence from their empires (like the United States), nations have separated into pieces (like India and Pakistan, the Balkans, and most recently South Sudan), and sometimes even nations have successfully conquered each other and retained control—but the latter has been considerably rarer than it was before the establishment of Westphalian sovereignty. (Indeed, part of what makes the Ukraine War such an aberration is that it is a brazen violation of Westphalian sovereignty the likes of which we haven’t seen since the Second World War.)

This was, as far as I can tell, a completely pragmatic solution, with absolutely no moral basis whatsoever. We knew in 1648, and we know today, that virtually every nation on Earth was founded in bloodshed, its land taken from others (who took it from others, who took it from others…). And it was timed in such a way that European colonialism became etched in stone—no European power was allowed to take over another European power’s colonies anymore, but they were all allowed to keep all the colonies they already had, and the people living in those colonies didn’t get any say in the matter.

Since then, most (but by no means all) of those colonies have revolted and gained their own independence. But by the time it happened, there were large populations of former colonists, and the indigenous populations were often driven out, dramatically reduced, or even outright exterminated. There is something unsettling about founding a new democracy like the United States or Australia after centuries of injustice and oppression have allowed a White population to establish a majority over the indigenous population; had indigenous people been democratically represented all along, things would probably have gone a lot differently.

What do land acknowledgments accomplish?

I think that the intent behind land acknowledgments is to recognize and commemorate this history of injustice, in the hopes of somehow gaining some kind of at least partial restitution. The intentions here are good, and the injustices are real.

But there is something fundamentally wrong with the way most land acknowledgments are done, because they basically just push the sovereignty back one step: They assert that whoever held the land before Europeans came along is the land’s legitimate owner. But what about the people before them (and the people before them, and the people before them)? How far back in the chain of violence are we supposed to go before we declare a given group’s conquests legitimate?

How far back can we go?

Most of these events happened many centuries ago and were never written down, and all we have now is vague oral histories that may or may not even be accurate. Particularly when one tribe forces out another, it rather behooves the conquering tribe to tell the story in their own favor, as one of “reclaiming” land that was rightfully theirs all along, whether or not that was actually true—as they say, history is written by the victors. (I think it’s actually more true when the history is never actually written.) And in some cases it’s probably even true! In others, that land may have been contested between the two tribes for so long that nobody honestly knows who owned it first.

It feels wrong to legitimate the conquests of colonial imperialism, but it feels just as wrong to simply push it back one step—or three steps, or seven steps.

I think that ultimately what we must do is acknowledge this entire history.

We must acknowledge that this land was stolen by force from Native Americans, and also that most of those Native Americans acquired their land by stealing it by force from other Native Americans, and the chain goes back farther than we have records. We must acknowledge that this is by no means unique to the United States but in fact a universal feature of almost all land held by anyone anywhere in the world. We must acknowledge that this chain of violence and conquest has been a part of human existence since time immemorial—and affirm our commitment to end it, once and for all.

That doesn’t simply mean accepting the current allocation of land; land, like many other resources, is clearly distributed unequally and unfairly. But it does mean that however we choose to allocate land, we must do so by a fair and peaceful process, not by force and conquest. The chain of violence that has driven human history for thousands of years must finally be brought to an end.

Moral progress and moral authority

Dec 8 JDN 2460653

In previous posts I’ve written about why religion is a poor source of morality. But it’s worse than that. Religion actually holds us back morally. It is because of religion that our society grants the greatest moral authority to precisely the people and ideas which have most resisted moral progress. Most religious people are good, well-intentioned people—but religious authorities are typically selfish, manipulative, Machiavellian leaders who will say or do just about anything to maintain power. They have trained us to respect and obey them without question; they even call themselves “shepherds” and us the “flock”, as if we were not autonomous humans but obedient ungulates.

I’m sure that most of my readers are shocked that I would assert such a thing; surely priests and imams are great, holy men who deserve our honor and respect? The evidence against such claims is obvious. We only believe such things because the psychopaths have told us to believe them.

I am not saying that these evil practices are inherent to religion—they aren’t. Other zealous, authoritarian ideologies, like Communism and fascism, have been just as harmful for many of the same reasons. Rather, I am saying that religion gives authority and respect to people who would otherwise not have it, people who have long histories of evil, selfish, and exploitative behavior. For a particularly striking example, Catholicism as an idea is false and harmful, but not nearly as harmful as the Catholic Church as an institution, which has harbored some of the worst criminals in history.

The Catholic Church hierarchy is quite literally composed of a cadre of men who use tradition and rhetoric to extort billions of dollars from the poor and who have gone to great lengths to defend men who rape children—a category of human being that normally is so morally reviled that even thieves and murderers consider them beyond the pale of human society. Pope Ratzinger himself, formerly the most powerful religious leader in the world, has been connected with the coverup based on a letter he wrote in 1985. The Catholic Church was also closely tied to Nazi Germany and publicly celebrated Hitler’s birthday for many years; there is evidence that the Vatican actively assisted in the exodus of Nazi leaders along “ratlines” to South America. More recently the Church once again abetted genocide, when in Rwanda it turned away refugees and refused to allow prosecution against any of the perpetrators who were affiliated with the Catholic Church. Yes, that’s right; the Vatican has quite literally been complicit in the worst moral crimes human beings have ever committed. Embezzlement of donations and banning of life-saving condoms seem rather beside the point once we realize that these men and their institutions have harbored genocidaires and child rapists. I can scarcely imagine a more terrible source of moral authority.

Most people respect evangelical preachers, like Jerry Falwell, who blamed 9/11 and Hurricane Katrina on feminists, gays, and secularists, then retracted the statement about 9/11 when he realized how much it had offended people. These people have concepts of morality that were antiquated in the 19th century; they base their ethical norms on books that were written by ignorant and cultish nomads thousands of years ago. Leviticus 18:22 and 20:13 indeed condemn homosexuality, but Leviticus 19:27 condemns shaving and Leviticus 11:9-12 says that eating fish is fine but eating shrimp is evil. By the way, Leviticus 11:21-22 seems to say that locusts have only four legs, when they very definitely have six and you can see this by looking at one. (I cannot emphasize this enough: Don’t listen to what people say about the book, read the book.)

But we plainly don’t trust scientists or philosophers to make moral and political decisions. If we did, we would have enacted equal rights for LGBT people sometime around 1898 when the Scientific-Humanitarian Committee was founded or at least by 1948 when Alfred Kinsey showed how common, normal, and healthy homosexuality is. Democracy and universal suffrage (for men at least) would have been the norm shortly after 1689 when Locke wrote his Two Treatises of Government. Women would have been granted the right to vote in 1792 upon the publication of Mary Wollstonecraft’s A Vindication of the Rights of Woman, instead of in 1920 after a long and painful political battle. Animal rights would have become law in 1789 with the publication of Bentham’s Introduction to the Principles of Morals and Legislation. We should have been suspicious of slavery since at least Kant if not Socrates, but instead it took until the 19th century for slavery to finally be banned. We owe the free world to moral science; but nonetheless we rarely listen to the arguments of moral scientists. As a species we fight for our old traditions even in the face of obvious and compelling evidence to the contrary, and this holds us back—far back. If they haven’t sunk in yet, read these dates again: Society is literally about 200 years behind the cutting edge of moral science. Imagine being 200 years behind in technology; you would be riding horses instead of flying in jet airliners and writing letters with quills instead of texting on your iPhone. Imagine being 200 years behind in ecology; you would be considering the environmental impact not of photovoltaic panels or ethanol but of whale oil. This is how far behind we are in moral science.

One subfield of moral science has done somewhat better: In economics, theory and practice differ by only about 100 years. Capitalism really was instituted on a large scale only a few decades after Adam Smith argued for it, and socialism (while horrifyingly abused in the Communism of Lenin and Stalin) was nonetheless implemented on a wide scale only a century after Marx. Keynesian stimulus was international policy (despite its numerous detractors) in 2008 and 2020, and Keynes himself died only in 1946. This process is still slower than it probably should be, but at least we aren’t completely ignoring new advances the way we do in ethics. Were we 100 years behind in technology, we would at least have cars and electricity.

Except perhaps in economics, in general we entrust our moral claims to the authority of men in tall hats and ornate robes who merely assert their superiority and ties to higher knowledge, while ignoring the thousands of others who actually apply their reason and demonstrate knowledge and expertise. A criminal in pretty robes who calls himself a moral leader might as well be a moral leader, as far as we’re concerned; a genuinely wise teacher of morality who isn’t arrogant enough to assert special revelation from the divine is instead ignored. Why do we do this? Religion. Religion is holding us back.

We need to move beyond religion in order to make real and lasting moral progress.

More on religion

Dec 8 JDN 2460653

Reward and punishment

In previous posts I’ve argued that religion can make people do evil and that religious beliefs simply aren’t true.

But there is another reason to doubt religion as a source of morality: There is no reason to think that obeying God is a particularly good way of behaving, even if God is in fact good. If you are obeying God because he will reward you, you aren’t really being moral at all; you are being selfish, and just by accident doing good things. If everyone acted that way, good things would get done; but it clearly misses what we mean when we talk about morality. To be moral is to do good because it is good, not because you will be rewarded for doing it. This becomes even clearer when we consider the following question: If you weren’t rewarded, would you still do good? If not, then you aren’t really a good person.

In fact, it’s ironic that proponents of naturalistic and evolutionary accounts of morality are often accused of cheapening morality because we explain it using selfish genes and memes; traditional religious accounts of morality are directly based on selfishness, not for my genes or my memes, but for me myself! It’s legitimate to question whether someone who acts out of a sense of empathy that ultimately evolved to benefit their ancestors’ genes is really being moral (why I think so requires essentially the rest of this book to argue); but clearly someone who acts out of the desire to be rewarded later isn’t! Selfish genes may or may not make good people; but selfish people clearly aren’t good people.

Even if religion makes people act more morally (and the evidence on that is quite mixed), that doesn’t make it true. If I could convince everyone that John Stuart Mill was a prophet of God, this world would be a paradise; but that would be a lie, because John Stuart Mill was a brilliant man and nothing more. The belief that Santa Claus is watching no doubt makes some children behave better around Christmas, but this is not evidence for flying reindeer. In fact, the children who behave just fine without the threat of coal in their stockings are better children, aren’t they? For the same reason, people who do good for the sake of goodness are better people than those who do it out of hope for Heaven and fear of Hell.

There are cases in which false beliefs might make people do more good, because the false beliefs provide a more obvious, but wrong reason for doing something that is actually good for less obvious, but actually correct reasons. Believing that God requires you to give to charity might motivate you to give more to charity; but charity is good not because God demands it, but because there are billions of innocent people suffering around the world. Maybe we should for this reason be careful about changing people’s beliefs; someone who believes a lie but does the right thing is still better than someone who believes the truth but acts wrongly. If people think that without God there is no morality, then telling them that there is no God may make them abandon morality. This is precisely why I’m not simply telling readers that there is no God: I am also spending this entire chapter explaining why we don’t need God for morality. I’d much rather you be a moral theist than an immoral atheist; but I’m trying to make you a moral atheist.

The problem with holy texts

Even if God actually existed, and were actually good, and commanded us to do things, we do not have direct access to God’s commandments. If you are not outright psychotic, you must acknowledge this; God does not speak to us directly. If anything, he has written or inspired particular books, which have then been translated and interpreted over centuries by many different people and institutions. There is a fundamental problem in deciding which books have been written or inspired by God; not only does the Bible differ from the Qur’an, which differs from the Bhagavad-Gita, which differs from other holy texts; worse, particular chapters and passages within each book differ from one another on significant moral questions, sometimes on the foundational principles of morality itself.

For instance, let’s consider the Bible, because this is the holy book in greatest favor in modern Western culture. Should we use a law of retribution, a lex talionis, as in Exodus 21? Or should we instead forgive our enemies, as in Matthew 5? Perhaps we should treat others as we would like to be treated, as in Luke 6? Are rape and genocide commanded by God, as in 1 Samuel 15, Numbers 31, and Deuteronomy 20-21, or is murder always a grave crime, as in Exodus 20? Is even anger a grave sin, as in Matthew 5? Is it a crime to engage in male-male sex, as in Leviticus 18? Is it then also a crime to shave beards and wear mixed-fiber clothing, as in Leviticus 19? Is it just to punish descendants for the crimes of their ancestors, as in Genesis 9, or is it only fair to punish the specific perpetrators, as in Deuteronomy 24? Is adultery always immoral, as in Exodus 20, or does God sometimes command it, as in Hosea 1? Must homosexual men be killed, as in Leviticus 20, or is it enough to exile them, as in 1 Kings 15? A thorough reading of the Bible shows hundreds of moral contradictions and thousands of moral absurdities. (This is not even to mention the factual contradictions and absurdities.)

Similar contradictions and absurdities can be found in the Qur’an and other texts. Since most of my readers will come from Christian cultures, for my purposes I think brief examples will suffice. The Qur’an at times says that Christians are deserving of the same rights as Muslims, and at other times declares Christians so evil that they ought to be put to the sword. (Most of the time it says something in between, that “People of the Book”, ahl al-Kitab, as Jews and Christians are known, are inferior to Muslims but nonetheless deserving of rights.) The Bhagavad-Gita at times argues for absolute nonviolence, and at times declares an obligation to fight in war. The Dharmas and the Dao De Jing are full of contradictions, about everything from meaning to justice to reincarnation (in fact, many Buddhists and Taoists freely admit this, and try to claim that non-contradiction is overrated—which is literally talking nonsense). The Book of Mormon claims the canonicity of texts that it explicitly contradicts.

And above all, we have no theological basis for deciding which parts of which holy books we should follow, and which we should reject—for they all have many sects with many followers, and they all declare with the same intensity of clamor and absence of credibility that they are the absolute truth of a perfect God. To decide which books to trust and which to ignore, we have only a rational basis, founded upon reason and science—but then, we can’t help but take a rational approach to morality in general. If it were glaringly obvious which holy text was written by God, and its message were clear and coherent, perhaps we could follow such a book—but given the multitude of religions and sects and denominations in the world, all mutually-contradictory and most even self-contradictory, each believed with just as much fervor as the last, how obvious can the answer truly be?

One option would be to look for the things that are not contradicted, the things that are universal across religions and texts. In truth these things are few and far between; one sect’s monstrous genocide is another’s holy duty. But it is true that certain principles appear in numerous places and times, a signal of universality amidst the noise of cultural difference: Fairness and reciprocity, as in the Golden Rule; honesty and fidelity; forbiddance of theft and murder. There are examples of religious beliefs and holy texts that violate these rules—including the Bible and the Qur’an—but the vast majority of people hold to these propositions, suggesting that there is some universal truth that has been recognized here. In fact, the consensus in favor of these values is far stronger than the consensus in favor of recognized scientific facts like the shape of the Earth and the force of gravity. While for most of history most people had no idea how old the Earth was and many people still seem to think it is a mere 6,000 years old, there has never been a human culture on record that thought it acceptable to murder people arbitrarily.

But notice how these propositions are not tied to any particular religion or belief; indeed, nearly all atheists, including me, also accept these ideas. Moreover, it is possible to find these principles contradicted in the very books that religious people claim as the foundation of their beliefs. This is strong evidence that religion has nothing to do with it—these principles are part of a universal human nature, or better yet, they may even be necessary truths that would hold for any rational beings in any possible universe. If Christians, Muslims, Buddhists, Hindus and atheists all agree that murder is wrong, then it must not be necessary to hold any specific religion—or any at all—in order to agree that murder is wrong.

Indeed, holy texts are so full of absurdities and atrocities that the right thing to do is to completely and utterly repudiate holy texts—especially the Bible and the Qur’an.

If you say you believe in one of these holy texts, you’re either a good person but a hypocrite because you aren’t following the book; or you can be consistent in following the book, but you’ll end up being a despicable human being. Obviously I much prefer the former—but why not just give up the damn book!? Why is it so important to you to say that you believe in this particular book? You can still believe in God if you want! If God truly exists and is benevolent, it should be patently obvious that he couldn’t possibly have written a book as terrible as the Bible or the Qur’an. Obviously those were written by madmen who had no idea what God is truly like.

Trump Won. Now what?

Nov 10 JDN 2460625

How did Trump win?

After the election results were announced, one of the first things I saw on social media, aside from the shock and panic among most of my friends and acquaintances, was various people trying to explain what happened this election by some flaw in Kamala Harris or her campaign.

They said it was the economy—even though the economy was actually very good, with the lowest unemployment we’ve had in decades and inflation coming back to normal. Real wages have been rising quickly, especially at the bottom! Most economists agree that inflation will be worse under Trump than it would have been under Harris.

They said it was too much identity politics, or else that Black and Latino men felt their interests were being ignored—somehow it was both of those things.

They said it was her support of Israel in its war crimes in Gaza—even though Trump supports them even more.

They said she was too radical on trans issues—even though most Americans favor anti-discrimination laws protecting trans people.

They said Harris didn’t campaign well—even though her campaign was obviously better organized than Trump’s (or Hillary Clinton’s).

They said it was too much talk about abortion, alienating pro-lifers—even though the majority of Americans want abortion to be legal in all or most cases.

They said that Biden stepped down too late, and she didn’t have enough time—even though he stepped down as soon as he showed signs of cognitive decline, and her poll numbers were actually better early on in the campaign.

They said that Harris was wrong to court endorsements by Republicans—even though endorsements from the other side are exactly the sort of thing that usually convinces undecided voters.

None of these explanations actually hold much water.

BUT EVEN IF THEY DID, IT WOULDN’T MATTER.

I could stipulate that Harris and her campaign had all of these failures and more. I could agree that she’s the worst candidate the Democrats have fielded in decades. (She wasn’t.)

THE ALTERNATIVE WAS DONALD TRUMP.

Trump is so terrible that he utterly eclipses any failings that could reasonably be attributed to Harris. He is racist, fascist, authoritarian, bigoted, incompetent, narcissistic, egomaniacal, corrupt, a liar, a cheat, an insurrectionist, a sexual predator, and a convicted criminal. He shows just as much cognitive decline as Biden did, but no one on his side asked him to step down because of it. His proposed tariffs would cause massive economic harm for virtually no benefit, and his planned mass deportations are a human rights violation (and also likely an economic disaster). He will most likely implement some variant of Project 2025, which is absolutely full of terrible, dangerous policies. Historians agree he was one of the worst Presidents we’ve ever had.

Indeed, Trump is so terrible that there really can’t be any good reasons to re-elect him. We are left only with bad reasons.

I know of two, and both of them are horrifying.


The first is that Kamala Harris is a woman of color, and a lot of Americans just weren’t willing to put a woman of color in charge. Indeed, sexism seems to be a stronger effect here than racism, because Barack Obama made it but Hillary Clinton didn’t.

The second is that Trump and other Republicans successfully created a whole propaganda system that allows them to indoctrinate millions of people with disinformation. Part of their strategy involves systematically discrediting all mainstream sources, from journalists to scientists, so that they can replace the truth with whatever lies they want.

It was this disinformation that convinced millions of Americans that the economy was in shambles when it was doing remarkably well, convinced them that crime is rising when it is actually falling, convinced them that illegal immigrants were eating people’s pets. Once Republicans had successfully made people doubt all mainstream sources, they could simply substitute whatever beliefs were most convenient for their goals.

Democrats and Republicans are no longer operating with the same set of facts. I’m not claiming that Democrats are completely without bias, but there is a very clear difference: When scientists and journalists report that a widely-held belief by Democrats is false, most Democrats change their beliefs. When the same happens to Republicans, they just become further convinced that scientists and journalists are liars.

What happens now?

In the worst-case scenario, Trump will successfully surround himself with enough sycophants to undermine the checks and balances in our government and actually become an authoritarian dictator. I still believe that this is unlikely, but I can’t rule it out. I am certain that he would want to do this if he thought he could pull it off. (His own chief of staff has said so!)

Even if that worst-case doesn’t come to pass, things will still be very bad for millions of people. Immigrants will be forcibly removed from their homes. Trans people will face even more discrimination. Abortion may be banned nationwide. We may withdraw our support from Ukraine, and that may allow Russia to win the war. Environmental regulations will be repealed. Much or all of our recent progress at fighting climate change could be reversed. Voter suppression efforts will intensify. Yet more far-right judges will be appointed, and they will make far-right rulings. And tax cuts on the rich will make our already staggering, unsustainable inequality even worse.

Indeed, it’s not clear that this will be good even for the people who voted for Trump. (Of course it will be good for Trump himself and his closest lackeys.) The people who voted based on a conviction that the economy was bad won’t see the economy improve. The people who felt ignored by the Democrats will be ignored even more by the Republicans. The people who were tired of identity politics aren’t going to make us care any less about racism and sexism by electing a racist misogynist. The working-class people who were voting against “liberal elites” will see their taxes raised, their groceries get more expensive, and their wages fall.

I guess if people really hate immigrants and want them gone, they may get their wish when millions of immigrants are taken from their homes. And the rich will be largely insulated from the harms, while getting those tax cuts they love so much. So that’s some kind of benefit at least.

But mostly, this was an awful outcome, and the next four years will be progressively more and more awful, until hopefully—hopefully—Trump leaves office and we get another chance at something better. That is, if he hasn’t taken over and become a dictator by then.

What can we do to make things less bad?

I’m seeing a lot of people talking about grassroots organizing and mutual aid. I think these are good things, but honestly I fear they just aren’t going to be enough. The United States government is the most powerful institution in the world, and we have just handed control of it over to a madman.

Maybe we will need to organize mass protests. Maybe we will need to take some kind of radical direct action. I don’t know what to do. This all just feels so overwhelming.

I don’t want to give in to despair. I want to believe that we can still make things better. But right now, things feel awfully bleak.

What is Religion?

Nov 3 JDN 2460618

In this and following posts I will be extensively criticizing religion and religious accounts of morality. Religious authorities have asserted a monopoly for themselves on moral knowledge; as a result most people seem to agree with statements like Dostoyevsky’s “If God does not exist, then everything is permitted.” The majority of people around the world—including the United States, but not including most other First World countries—believe that it is necessary to believe in God in order to be a moral person. Yet nothing could be further from the truth.

First, I must deal with the fact that in American culture, it is widely considered taboo to criticize religion. A level of criticism which would be unremarkable in other fields of discourse is viewed as “shrill”, “arrogant”, “strident”, “harsh”, and “offensive”.

For instance, I believe the following:

The Republican Party is overall harmful.

Most of Ayn Rand’s Capitalism: The Unknown Ideal is clearly false.

Did you find that offensive? I presume not! I’m sure many people would disagree with me on these things, but hardly anyone would seriously argue that I am being aggressive or intentionally provocative.

Indeed, if I chose less controversial examples, people would find my words positively charitable:

The Nazi Party is overall harmful.

Most of Mao Tse Tung’s The Little Red Book is clearly false.

Now, compare some other beliefs I have, also about ideologies and books:

Islam is overall harmful.

Most of the Bible is clearly false.

Suddenly, I’m being “strident”; I’m being an “angry atheist”, “intolerant” of religious believers—yet I’m using the same words! I must conclude that the objection of atheist “intolerance” comes not because my criticisms are genuinely harsh, but simply because they are criticisms of religion. We have been taught that criticizing religion is evil, regardless of whether the criticisms are valid. Once beliefs are wrapped in the shield of “religion”, they become invulnerable.

If I’d said that Muslim people are inherently evil, or that people who believe in the Bible are mentally defective, I can see why people would be offended. But I’m not saying that. On the contrary, I think the vast majority of religious people are good, reasonable, well-intentioned people who are honestly mistaken. There are some extremely intelligent theists in the world, and I do not dismiss their intelligence; I merely contend that they are mistaken about this issue. I don’t think religious people are evil or stupid or crazy; I just think they are wrong. I respect religious people as intelligent beings; that’s why I am trying to use reason to persuade them. I wouldn’t try to reason with a rock or even a tiger.

I will in future posts show that religion is false and morally harmful. But of course in order to do that, I must first explain what I mean by religion; while we use the word every day, we are far from consistent about what we mean.

There’s one meaning of “religion” that often is put forth by its defenders, on which “religion” seems to mean only “moral values”, or else “a sense of mystery and awe before the universe”. Einstein often spoke this way, which is why people who quote him out of context often get the impression that he is defending Judaism or Christianity:

I cannot conceive of a genuine scientist without that profound faith. The situation may be expressed by an image: science without religion is lame, religion without science is blind.

But in the original context, a very different picture emerges:

Even though the realms of religion and science in themselves are clearly marked off from each other, nevertheless there exist between the two strong reciprocal relationships and dependencies. Though religion may be that which determines the goal, it has, nevertheless, learned from science, in the broadest sense, what means will contribute to the attainment of the goals it has set up. But science can only be created by those who are thoroughly imbued with the aspiration toward truth and understanding. This source of feeling, however, springs from the sphere of religion. To this there also belongs the faith in the possibility that the regulations valid for the world of existence are rational, that is, comprehensible to reason. I cannot conceive of a genuine scientist without that profound faith. The situation may be expressed by an image: science without religion is lame, religion without science is blind.

Here, “religion” comes to mean little more than “moral values” or “aspiration toward truth”. In my own lexicon Einstein’s words would become “Fact without value is lame; value without fact is blind.” (I would add: both are the domain of science.)

Einstein did not believe in a personal deity of any kind. He was moved to awe by the mystery and grandeur of the universe, and motivated by moral duties to do good and seek truth. If that’s what you mean by “religion”, then of course I am entirely in favor of it. But that is not what most people mean by “religion”.

A much better meaning of the word “religion” is something like “cultural community of believers”; this is what we mean when we say that Catholicism is a religion or that Shi’a Islam is a religion. This is essentially the definition I will be using. But there is a problem with this meaning, because it doesn’t specify what constitutes a believer.

May any shared belief suffice? Then the Democratic Party is a “religion”, because it is a community of people with shared beliefs. Indeed, the scientific community is a “religion”. This sort of definition is so broad that it loses all usefulness.

So in order for “religion” to be a really meaningful concept, we must specify just what sort of beliefs qualify as religious rather than secular. Here I offer my definition; I have tried to be as charitable to religion as possible while remaining accurate in what I am criticizing.

Religion is a system of beliefs and practices that is based upon one or more of the following concepts:

  • Super-human beings: sentient beings that are much more powerful and long-lived than humans are.
  • Afterlife: a continued existence for human conscious experience that persists after death.
  • Prayer: a system of ritual behaviors that are expected to influence the outcome of phenomena through the mediation of something other than human action or laws of nature.

Note that I have specifically excluded from the definition claims that the super-human beings are “supernatural” or “magical”. Though many people, even religious people, would include these concepts, I do not, because I don’t think that the words supernatural and magical carry any well-defined meaning. Is “supernatural” what doesn’t follow the laws of nature? Well, do we mean the laws as we know them, or the laws as they are? It makes a big difference: The laws of nature as we know them have changed as science advances. 100 years ago, atoms were beyond our understanding; 200 years ago, electricity was beyond our understanding; 500 years ago, ballistics was beyond our understanding as well. The laws of nature as they are, on the other hand, are by definition the laws that everything in the universe must follow—hence, “supernatural” would be a funny way of saying “non-existent”.

I think ultimately “supernatural” and “magical” are just weird ways of saying “what I don’t understand”; but if that’s all they are, they clearly aren’t helpful. Today’s magic is tomorrow’s science. If Clarke’s Third Law is right that any sufficiently-advanced technology is indistinguishable from magic, then what’s the point of being magic? It’s just technology we don’t understand! In fact I prefer the reformulation of Clarke’s Law by Mark Stanley: Any technology, no matter how primitive, is magic to those who don’t understand it. To an ape, a spear is magical; to a hunter-gatherer, a rifle is magical; and to us today, creating planets from dust and living a million years are magical. But that could very well change someday.

Similarly, I have excluded the hyperboles “omnipotent” and “omniscient”, because they are widely considered by philosophers to be outright incoherent, and in practice no one actually believes them. If you believed that God knows everything, then you would have to believe that God knows how to prove the statement “This statement is unprovable” (Gödel’s incompleteness theorems), and that God knows everything he doesn’t know. If you believed that God could do anything, you would have to believe that God can put four sides on a triangle, that God can heal the sick while leaving them sick, and that God can make a rock so big he can’t lift it. Even if you restrict God’s powers to what is logically coherent, you are still left trying to explain why he didn’t create a world of perfect happiness and peace to begin with, or how he can know the future if there is any randomness in the world at all.

Furthermore, my definition is meant to include beings like Zeus and Thor, which were sincerely believed to be divine by millions of people for hundreds of years. Zeus is clearly neither omnipotent nor omniscient, but he is a lot more powerful and long-lived than we are; he’s not very benevolent, but nonetheless people called him God. (In fact, the Latin word for God, deus, and the proper name Zeus are linguistically cognate. Zeus was thought to define or epitomize what it means to be God.) My definition is also meant to include non-divine super-humans like spirits and leprechauns, in which many people have similarly believed for many centuries.

The definition I have used is about as broad as I could make it without including things that obviously and uncontroversially exist, like “sentient beings other than humans” (animals?) or “forces beyond human power and comprehension” (gravity?) or “energy that animates life and permeates all things” (electricity?).

I have also excluded from my definition of “religion” anything that is obviously false or bad, like “believing things with no evidence”, “denying scientific facts”, “assenting to logical contradictions”, “hating those who disagree with them”, or “blaming natural disasters on people’s moral failings”. In fact, these are characteristic features of nearly all religions, and most religious people do them often; recall that 40% of Americans think that human beings were created by God less than 10,000 years ago, and note also that while the number has fallen over the decades, still 40% would not elect an atheist President, despite the fact that 93% of the National Academy of Sciences is atheist or agnostic. In the US, 32% of people believe in ghosts and 21% believe in witches. Views like “When people die they become ghosts”, “evolution is a lie”, and “Earthquakes are caused by sexual immorality” are really quite mainstream in modern society. But criticism of religion is always countered by claims that we “New Atheists” (we are certainly not new, for Seneca and Epicurus would have qualified) lack philosophical sophistication, or focus too much on the obviously bad or ridiculous ideas.

Furthermore, note that I have formulated the definition of religion as a disjunction, not a conjunction; you must have at least one of these features, but need not have all of them. This is so that I can include in my criticism beliefs like Buddhism, which often does not involve prayer or super-human beings, but except in its most rarefied forms (which really aren’t recognizably religious!) invariably involves concepts of afterlife, and also New Age beliefs, which often do not involve afterlife or super-human beings but fit my definition of prayer—wearing a rabbit’s foot is a prayer, as is using a Ouija board. It is incumbent upon me to show that all three are false, not merely that one of them is false. Of course, if you believe all three, then if I succeed in discrediting even one of them, that is enough to show you are mistaken.

Finally, note that what I have just defined is a philosophy that, at least in principle, could be true. We can imagine a world in which there are super-human beings who control our fates; we can imagine a world in which consciousness persists after death; we can imagine a world where entreating to such super-human beings is a good way to get things done. On this definition, religion isn’t incoherent, it’s just incorrect. My point is not that these things are impossible—it is that they are not true.

And that is precisely what I intend to show in upcoming posts.

Against Moral Relativism

Moral relativism is surprisingly common, especially among undergraduate students. There are also some university professors who espouse it, typically but not always from sociology, gender studies or anthropology departments (examples include Marshall Sahlins, Stanley Fish, Susan Harding, Richard Rorty, Michael Fischer, and Alison Renteln). There is a fairly long tradition of moral relativism, from Edvard Westermarck in the 1930s to Melville Herskovits, to more recently Francis Snare and David Wong in the 1980s. In 1947, the American Anthropological Association released a formal statement declaring that moral relativism was the official position of the anthropology community, though this has since been retracted.

All of this is very, very bad, because moral relativism is an incredibly naive moral philosophy and a dangerous one at that. Vitally important efforts to advance universal human rights are conceptually and sometimes even practically undermined by moral relativists. Indeed, look at that date again: 1947, two years after the end of World War II. The world’s civilized cultures had just finished the bloodiest conflict in history, including some ten million people murdered in cold blood for their religion and ethnicity, and the very survival of the human species hung in the balance with the advent of nuclear weapons—and the American Anthropological Association was insisting that morality is meaningless independent of cultural standards? Were they trying to offer an apologia for genocide?

What is relativism trying to say, anyway? Often the arguments get tied up in knots. Consider a particular example, infanticide. Moral relativists will sometimes argue that infanticide is wrong in the modern United States but permissible in ancient Inuit society. But is this itself an objectively true normative claim? If it is, then we are moral realists. Indeed, the dire circumstances of ancient Inuit society would surely justify certain life-and-death decisions we wouldn’t otherwise accept. (Compare “If we don’t strangle this baby, we may all starve to death” and “If we don’t strangle this baby, we will have to pay for diapers and baby food”.) Circumstances can change what is moral, and this includes the circumstances of our cultural and ecological surroundings. So there could well be an objective normative fact that infanticide is justified by the circumstances of ancient Inuit life. But if there are objective normative facts, this is moral realism. And if there are no objective normative facts, then all moral claims are basically meaningless. Someone could just as well claim that infanticide is good for modern Americans and bad for ancient Inuits, or that larceny is good for liberal-arts students but bad for engineering students.

If instead all we mean is that particular acts are perceived as wrong in some societies but not in others, this is a factual claim, and on certain issues the evidence bears it out. But without some additional normative claim about whose beliefs are right, it is morally meaningless. Indeed, the idea that whatever society believes is right is a particularly foolish form of moral realism, as it would justify any behavior—torture, genocide, slavery, rape—so long as society happens to practice it, and it would never justify any kind of change in any society, because the status quo is by definition right. Indeed, it’s not even clear that this is logically coherent, because different cultures disagree, and within each culture, individuals disagree. To say that an action is “right for some, wrong for others” doesn’t solve the problem—because either it is objectively normatively right or it isn’t. If it is, then it’s right, and it can’t be wrong; and if it isn’t—if nothing is objectively normatively right—then relativism itself collapses as no more sound than any other belief.

In fact, the most difficult part of defending common-sense moral realism is explaining why it isn’t universally accepted. Why are there so many relativists? Why do so many anthropologists and even some philosophers scoff at the most fundamental beliefs that virtually everyone in the world has?

I should point out that it is indeed relativists, and not realists, who scoff at the most fundamental beliefs of other people. Relativists are fond of taking a stance of indignant superiority in which moral realism is just another form of “ethnocentrism” or “imperialism”. The most common battleground recently is the issue of female circumcision, which is considered completely normal or even good in some African societies but is viewed with disgust and horror by most Western people. Other common examples include abortion, clothing (especially the Islamic burqa and hijab), male circumcision, and marriage; given the incredible diversity in human food, clothing, language, religion, behavior, and technology, there are surprisingly few moral issues on which different cultures disagree—but relativists like to milk them for all they’re worth!

But I dare you, anthropologists: Take a poll. Ask people which is more important to them, their belief that, say, female circumcision is immoral, or their belief that moral right and wrong are objective truths? Virtually anyone in any culture anywhere in the world would sooner admit they are wrong about some particular moral issue than they would assent to the claim that there is no such thing as a wrong moral belief. I for one would be more willing to abandon just about any belief I hold before I would abandon the belief that there are objective normative truths. I would sooner agree that the Earth is flat and 6,000 years old, that the sky is green, that I am a brain in a vat, that homosexuality is a crime, that women are inferior to men, or that the Holocaust was a good thing—than I would ever agree that there is no such thing as right or wrong. This is of course because once I agreed that there is no objective normative truth, I would be forced to abandon everything else as well—since without objective normativity there is no epistemic normativity, and hence no justice, no truth, no knowledge, no science. If there is nothing objective to say about how we ought to think and act, then we might as well say the Earth is flat and the sky is green.

So yes, when I encounter other cultures with other values and ideas, I am forced to deal with the fact that they and I disagree about many things, important things that people really should agree upon. We disagree about God, about the afterlife, about the nature of the soul; we disagree about many specific ethical norms, like those regarding racial equality, feminism, sexuality and vegetarianism. We may disagree about economics, politics, social justice, even family values. But as long as we are all humans, we probably agree about a lot of other important things, like “murder is wrong”, “stealing is bad”, and “the sky is blue”. And one thing we definitely do not disagree about—the one cornerstone upon which all future communication can rest—is that these things matter, that they really do describe actual features of an actual world that are worth knowing. If it turns out that I am wrong about these things, I would want to know! I’d much rather find out I’d been living the wrong way than continue living the same way, pretending that it doesn’t matter. I don’t think I am alone in this; indeed, I suspect that the reason people get so angry when I tell them that religion is untrue is precisely because they realize how important it is. One thing religious people never say is “Well, God is imaginary to you, perhaps; but to me God is real. Truth is relative.” I’ve heard atheists defend other people’s beliefs in such terms—but no one ever defends their own beliefs that way. No Evangelical Baptist thinks that Christianity is an arbitrary social construction. No Muslim thinks that Islam is just one equally-valid perspective among many. It is you, relativists, who deny people’s fundamental beliefs.

Yet the fact that relativists accuse realists of being chauvinistic hints at the deeper motivations of moral relativism. In a word: Guilt. Moral relativism is an outgrowth of the baggage of moral guilt and self-loathing that Western societies have built up over the centuries. Don’t get me wrong: Western cultures have done terrible things, many terrible things, all too recently. We needn’t go so far back as the Crusades or the ethnocidal “colonization” of the Americas; we need only look to the carpet-bombing of Dresden in 1945 or the defoliation of Vietnam in the 1960s, or even the torture program as recently as 2009. There is much evil that even the greatest nations of the world have to answer for. For all our high ideals, even America, the nation of “life, liberty, and the pursuit of happiness”, the culture of “liberty and justice for all”, has murdered thousands of innocent people—and by “murder” I mean murder, killing not merely by accident in the collateral damage of necessary war, but indeed in acts of intentional and selfish cruelty. Not all war is evil—but many wars are, and America has fought in some of them. No Communist radical could ever burn so much of the flag as the Pentagon itself has burned in acts of brutality.

Yet it is an absurd overreaction to suggest that there is nothing good about Western culture, nothing valuable about secularism, liberal democracy, market economics, or technological development. It is even more absurd to carry the suggestion further, to the idea that civilization was a mistake and we should all go back to our “natural” state as hunter-gatherers. Yet there are anthropologists working today who actually say such things. And then, as if we had not already traversed so far beyond the shores of rationality that we can no longer see the light of home, then relativists take it one step further and assert that any culture is as good as any other.

Think about what this would mean, if it were true. To say that all cultures are equal is to say that science, education, wealth, technology, medicine—all of these are worthless. It is to say that democracy is no better than tyranny, security is no better than civil war, secularism is no better than theocracy. It is to say that racism is as good as equality, sexism is as good as feminism, feudalism is as good as capitalism.

Many relativists seem worried that moral realism can be used by the powerful and privileged to oppress others—the cishet White males who rule the world (and let’s face it, cishet White males do, pretty much, rule the world!) can use the persuasive force of claiming objective moral truth in order to oppress women and minorities. Yet what is wrong with oppressing women and minorities, if there is no such thing as objective moral truth? Only under moral realism is oppression truly wrong.

How to detect discrimination, empirically

Aug 25 JDN 2460548

For concreteness, I’ll use men and women as my example, though the same principles would apply for race, sexual orientation, and so on. Suppose we find that there are more men than women in a given profession; does this mean that women are being discriminated against?

Not necessarily. Maybe women are less interested in that kind of work, or innately less qualified. Is there a way we can determine empirically that it really is discrimination?

It turns out that there is. All we need is a reliable measure of performance in that profession. Then we compare performance between the men and women who actually hold the job. The key insight is that workers in a job are not a random sample; they are a selected sample, and the results of that selection can tell us whether discrimination is happening.

Here’s a simple model to show how this works.

Suppose there are five different skill levels in the job, from 1 to 5, where 5 is the highest. And suppose there are 5 women and 5 men in the population.

1. Baseline

The baseline case to consider is when innate talents are equal and there is no discrimination. In that case, we should expect men and women to be equally represented in the profession.

For the simplest case, let’s say that there is one person at each skill level:

Men | Women
 1  |  1
 2  |  2
 3  |  3
 4  |  4
 5  |  5

Now suppose that everyone at or above a certain skill threshold gets hired. Since we’re assuming no discrimination, the threshold should be the same for men and women. Let’s say it’s 3; then these are the people who get hired:

Hired Men | Hired Women
    3     |      3
    4     |      4
    5     |      5

The result is that not only are there the same number of men and women in the job, their skill levels are also the same. There are just as many highly-competent men as highly-competent women.
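
For readers who prefer code to tables, here is a minimal Python sketch of this baseline case. The hire function and the particular lists are my own illustration of the model, mirroring the tables above.

    # Threshold-hiring model: everyone at or above the cutoff gets hired.
    def hire(population, threshold):
        return [skill for skill in population if skill >= threshold]

    men = [1, 2, 3, 4, 5]    # one person at each skill level
    women = [1, 2, 3, 4, 5]  # same innate distribution

    # No discrimination: both groups face the same cutoff of 3.
    hired_men = hire(men, 3)
    hired_women = hire(women, 3)

    print(hired_men, hired_women)              # [3, 4, 5] [3, 4, 5]
    print(len(hired_men) == len(hired_women))  # True: equal representation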

2. Innate Differences

Now, suppose there is some innate difference in talent between men and women for this job. For most jobs this hypothesis seems dubious, but consider pro sports: Men really are better at basketball, in general, than women, and this is pretty clearly genetic. So it’s not absurd to suppose that for at least some jobs, there might be some innate differences. What would that look like?


Again suppose a population of 5 men and 5 women, but now the women are a bit less qualified: There are two 1s and no 5s among the women.

Men | Women
 1  |  1
 2  |  1
 3  |  2
 4  |  3
 5  |  4

Then, this is the group that will get hired:

Hired Men | Hired Women
    3     |      3
    4     |      4
    5     |

The result will be fewer women who are on average less qualified. The most highly-qualified individuals at that job will be almost entirely men. (In this simple model, entirely men; but you can easily extend it so that there are a few top-qualified women.)
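
In code, this case is just the same toy hire function applied to an unequal population (again, my own illustration, not real data):

    def hire(population, threshold):
        return [skill for skill in population if skill >= threshold]

    men = [1, 2, 3, 4, 5]
    women = [1, 1, 2, 3, 4]  # two 1s and no 5s among the women

    # Same cutoff of 3 for everyone, so no discrimination:
    print(hire(men, 3))    # [3, 4, 5]
    print(hire(women, 3))  # [3, 4]: fewer women, and a lower average skill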

This is in fact what we see for a lot of pro sports; in a head-to-head match, even the best WNBA teams would generally lose against most NBA teams. That’s what it looks like when there are real innate differences.

But it’s hard to find clear examples outside of sports. The genuine, large differences in size and physical strength between the sexes just don’t seem to be associated with similar differences in mental capabilities or even personality. You can find some subtler effects, but nothing very large—and certainly nothing large enough to explain the huge gender gaps in various industries.

3. Discrimination

What does it look like when there is discrimination?

Now assume that men and women are equally qualified, but it’s harder for women to get hired, because of discrimination. The key insight here is that this amounts to women facing a higher threshold. Where men only need to have level 3 competence to get hired, women need level 4.

So if the population looks like this:

Men | Women
 1  |  1
 2  |  2
 3  |  3
 4  |  4
 5  |  5

The hired employees will look like this:

Hired Men | Hired Women
    3     |
    4     |      4
    5     |      5

Once again we’ll have fewer women in the profession, but they will be on average more qualified. The top-performing individuals will be as likely to be women as they are to be men, while the lowest-performing individuals will be almost entirely men.
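
Here is the discrimination case in the same toy sketch: equal populations, but women face a cutoff of 4 where men face 3. The numbers are the model’s, not empirical estimates.

    def hire(population, threshold):
        return [skill for skill in population if skill >= threshold]

    def mean(xs):
        return sum(xs) / len(xs)

    men = [1, 2, 3, 4, 5]
    women = [1, 2, 3, 4, 5]  # equally qualified populations

    hired_men = hire(men, 3)      # cutoff 3: [3, 4, 5]
    hired_women = hire(women, 4)  # discriminatory cutoff 4: [4, 5]

    print(len(hired_women) < len(hired_men))    # True: hired women are rarer
    print(mean(hired_women) > mean(hired_men))  # True: hired women are more competent on average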

This is the kind of pattern we observe when there is discrimination. Do we see it in real life?

Yes, we see it all the time.

Corporations with women CEOs are more profitable.

Women doctors have better patient outcomes.

Startups led by women are more likely to succeed.

This shows that there is some discrimination happening, somewhere in the process. Does it mean that individual firms are actively discriminating in their hiring process? No, it doesn’t. The discrimination could be happening somewhere else; maybe it happens during education, or once women get hired. Maybe it’s a product of sexism in society as a whole, that isn’t directly under the control of employers. But it must be in there somewhere. If women are both rarer and more competent, there must be some discrimination going on.

What if there is also innate difference? We can detect that too!

4. Both

Suppose now that men are on average more talented, but there is also discrimination against women. Then the population might look like this:

Men | Women
 1  |  1
 2  |  1
 3  |  2
 4  |  3
 5  |  4

And the hired employees might look like this:

Hired Men | Hired Women
    3     |
    4     |
    5     |      4

In such a scenario, you’ll see a large gender imbalance, but there may not be a clear difference in competence. The tiny fraction of women who get hired will perform about as well as the men, on average.

Of course, this assumes that the two effects are of equal strength. In reality, we might see a whole spectrum of possibilities, from very strong discrimination with no innate differences, all the way to very large innate differences with no discrimination. The outcomes will then lie along a similar spectrum: When discrimination is much larger than innate difference, women will be rare but more competent. When innate difference is much larger than discrimination, women will be rare and less competent. And when there is a mix of both, women will be rare but won’t show a clear difference in competence either way.

Moreover, if you look closer at the distribution of performance, you can still detect the two effects independently. If the lowest-performing workers are almost all men, that’s evidence of discrimination against women; while if the highest-performing workers are almost all men, that’s evidence of innate difference. And if you look at the table above, that’s exactly what we see: Both the 3 and the 5 are men, indicating the presence of both effects.
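
Putting the two signatures together, one could write a rough diagnostic over the toy model’s outputs. The rule of thumb below is my own summary of the argument, a sketch rather than a validated statistical test.

    def diagnose(hired_men, hired_women):
        # Crude reading of the hired-skill lists in the toy model
        # (assumes both groups have at least one hire).
        signals = []
        if min(hired_men) < min(hired_women):
            signals.append("lowest performers are men: evidence of discrimination against women")
        if max(hired_men) > max(hired_women):
            signals.append("highest performers are men: evidence of innate difference")
        return signals or ["no clear signal"]

    # Case 4 above: men hired at skills 3, 4, 5; the one woman hired at skill 4.
    print(diagnose([3, 4, 5], [4]))  # both signals fire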

What does affirmative action do?

Effectively, affirmative action lowers the threshold for hiring women (or minorities) in order to equalize representation in the workplace. In the presence of discrimination raising that threshold, this is exactly what we need! It can take us from case 3 (discrimination) to case 1 (equality), or from case 4 (both discrimination and innate difference) to case 2 (innate difference only).

Of course, it’s possible for us to overshoot, using more affirmative action than the discrimination warrants. If we achieve better representation of women, but the lowest performers at the job are women, then we have overshot, effectively now discriminating against men. Fortunately, there is very little evidence of this in practice. In general, even with affirmative action programs in place, we tend to find that the lowest performers are still men—so there is still discrimination against women that we’ve failed to compensate for.
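
In the toy model, affirmative action is literally just moving the women’s cutoff back down. A quick illustration under the same assumptions:

    def hire(population, threshold):
        return [skill for skill in population if skill >= threshold]

    women = [1, 2, 3, 4, 5]

    print(hire(women, 4))  # [4, 5]: the discriminatory cutoff
    print(hire(women, 3))  # [3, 4, 5]: affirmative action restores the men's cutoff
    print(hire(women, 2))  # [2, 3, 4, 5]: overshoot; now the lowest performer is a woman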

What if we can’t measure competence?

Of course, it’s possible that we don’t have good measures of competence in a given industry. (One must wonder how firms decide who to hire, but frankly I’m prepared to believe they’re just really bad at it.) Then we can’t observe discrimination statistically in this way. What do we do then?

Well, there is at least one avenue left for us to detect discrimination: We can do direct experiments comparing resumes with male names versus female names. These sorts of experiments typically don’t find very much, though—at least for women. For different races, they absolutely do find strong results. They also find evidence of discrimination against people with disabilities, older people, and people who are physically unattractive. There’s also evidence of intersectional effects, where women of particular ethnic groups get discriminated against even when women in general don’t.

But this will only pick up discrimination if it occurs during the hiring process. The advantage of having a competence measure is that it can detect discrimination that occurs anywhere—even outside employer control. Of course, if we don’t know where the discrimination is happening, that makes it very hard to fix; so the two approaches are complementary.

And there is room for new methods too; right now we don’t have a good way to detect discrimination in promotion decisions, for example. Many of us suspect that it occurs, but unless you have a good measure of competence, you can’t really distinguish promotion discrimination from innate differences in talent. We don’t have a good method for testing that in a direct experiment, either, because unlike hiring, we can’t just use fake resumes with masculine or feminine names on them.