Other approaches to evolutionary ethics

Mar 2 JDN 2460737

In my previous post, I talked about some ways that evolutionary theory can be abused in ethics, leading to abhorrent conclusions. This is all too common; but it doesn’t mean that evolutionary theory has nothing useful to say about ethics.

There are other approaches to evolutionary ethics that do not lead to such horrific conclusions. One such approach is evolutionary anthropocentrism, a position held by respected thinkers such as Frans de Waal, but one that is still flawed. The claim is that certain behaviors are moral because we have evolved to do them—that behaviors like friendship, marriage, and nationalism are good precisely because they are part of human nature. On this theory, we can discern what is right and wrong for human beings simply by empirically studying what behaviors are universal or adaptive among human beings.

While I applaud the attempt to understand morality scientifically, I must ultimately conclude that the peculiar history of human evolution is far too parochial a basis for any deep moral truths. Another species—from the millions of other life forms with which we share the Earth to the millions of extraterrestrial civilizations that must in all probability exist somewhere in the vastness of the universe—could have a completely different set of adaptations, and hence a completely incompatible moral system.

Is a trait good because it evolved, or did it evolve because it is good? If the former, then “good” just means “fit”, and human beings are no more moral than rats or cockroaches. Indeed, the most fit human being of all time was the Moroccan tyrant Moulay Ismail, who reputedly fathered over 800 children; the least fit include Isaac Newton and Alan Turing, who had no children at all. To say that evolution gets it right—as, with qualifications, I will—is to say that there is a right, independent of what did or did not evolve; if evolution can get it right, then it could also, under other circumstances, get it wrong.

For illustration, imagine a truly alien form of life, one with which we share no common ancestor and only the most basic similarities. Such creatures likely exist in the vastness of the universe, though of course we’ve never encountered any. Perhaps somewhere in one of the nearby arms of our galaxy there is an unassuming planet inhabited by a race of ammonia-based organisms, let’s call them the Extrans, whose “eyes” see in the radio spectrum, whose “ears” are attuned to frequencies lower than we can hear, whose “nerves” transmit signals by fiber optics instead of electricity, whose “legs” are twenty frond-structured fins that propel them through the ammonia sea, whose “hands” are three long prehensile tentacles extending from their heads, whose “language” is a pattern of radio transmissions produced by their four dorsal antennae. Now, imagine that this alien species has managed to develop sufficient technology so that over millions of years they have colonized all the nearby planets with sufficient ammonia to support them. Yet, their population continues to grow—now in the hundreds of trillions—and they cannot find enough living space to support it. One of their scientists has discovered a way to “ammoniform” certain planets—planets with a great deal of water and nitrogen can be converted into ammonia-supporting planets. There’s only one problem: The nearest water-nitrogen planet is called Earth, and there are already eight billion humans (not to mention billions of other lifeforms) living on it who would surely die if the ammoniforming were performed. The ammoniformer ship has just entered our solar system; we have managed to establish radio contact and achieve some rudimentary level of translation between our radically different languages. What do we say to the Extrans?

If morality is to have a truly objective meaning, we ought to be able to explain in terms the Extrans could accept and understand why it would be wrong for them to ammoniform our planet while we are still living on it. We ought to be able to justify to these other intelligent beings, however different they are from us chemically, biologically, psychologically, and technologically, why we are creatures of dignity who deserve not to be killed. Otherwise, the species with superior weapons will win; and if they can get here, that will probably be them, not us.

Sam Harris has said several times, “morality could be like food”; by this he seems to mean that there is objective evaluation that can be made about the nutrition versus toxicity of a given food, even if there is no one best food, and similarly that objective evaluation can be made about the goodness or badness of a moral system even if there is no one best moral system. This makes a great deal of sense to me, but the analogy can also be turned against him, for if morality is just as contingent upon our biology as diet, then who are we to question these Extrans in their quest for more lebensraum?

Or, if you’d prefer to keep the matter closer to home: Who are we to question sharks or cougars, for whom we are food? In practice it’s difficult to negotiate with sharks and cougars, of course. But if even this is to have real moral significance (say, that creatures more capable of rational thought and mutual communication are morally better), we still need an objective inter-species account of morality. And suppose we found a particularly intelligent cougar, and managed some sort of communication; what would we be able to say? What reasons could we offer in defense of our claim that they ought not to eat us? Or is our moral authority in these conflicts ultimately no deeper than our superior weapons technology? If so, it’s hard to see why the superior weapons technology of the Nazi military wouldn’t justify their genocide of the Jews; and thus we run afoul of the Hitler Principle.

While specific moral precepts can and will depend upon the particular features of a given situation, and evolution surely affects and informs these circumstances, the fundamental principles of morality must be deeper than this—they must at least have the objectivity of scientific facts; in fact I think we can go further than this and say that the core principles of morality are in fact logical truths, the sort of undeniable facts that any intelligent being must accept on pain of contradiction or incoherence. Even if not trivially obvious (like “2+2=4” or “a triangle has three sides”), logical and mathematical truths are still logically undeniable (like “the Fourier transform of a Gaussian function is a Gaussian function”, or “the Galois group of some fifth-degree polynomials has a non-cyclic simple normal subgroup”, or “the existence of a strong Lyapunov function proves that a system of nonlinear differential equations has an asymptotically stable zero solution”. Don’t worry if you have no idea what those sentences mean; that’s kind of the point. They are tautologies, yes, but very sophisticated tautologies). The fundamental norms must be derivable by logic, and their applications to the real world must depend only upon empirical facts.

The standard that moral principles should be scientific or logical truths is a high bar indeed; and one may think it is unreachable. But if this is so, then I do not see how we can coherently discuss ethics as something which makes true claims against us; I can see only prudence, instinct, survival or custom. If morality is an adaptation like any other, then the claim “genocide is wrong” has no more meaning than “five fingers are better than six”—each applies to our particular evolutionary niche, but no other. Certainly the Extrans will not be bound by such rules, and it is hard to see why cougars should be either. There may still be objectively valid claims that can be made against our behavior, but they will have no more force than “Don’t do that; it’s bad for your genes”. Indeed, I already know that plenty of things people do are (at least potentially) bad for their genes, and yet I think they have a right to do them; not only the usual suspects of contraception, masturbation and homosexuality, but indeed reading books, attending school, drinking alcohol, watching television, skiing, playing baseball, and all sorts of other things human beings do, are wastes of energy in purely Darwinian terms. Most of what makes life worth living has little, if any, effect at spreading our genes.

Naive moral Darwinism

Feb 23 JDN 2460730

Impressed by the incredible usefulness of evolutionary theory in explaining the natural world, many people have tried to apply it to ethical claims as well. The basic idea is that morality evolves; morality is an adaptation just like any other, a trait which has evolved by mutation and natural selection.

Unfortunately the statement “morality evolves” is ambiguous; it could mean a number of different things. This ambiguity has allowed abuses of evolutionary thinking in morality.

Two that are particularly harmful are evolutionary eugenics and laissez-faire Darwinism, both of which fall under an umbrella I’ll call ‘naive moral Darwinism’.

They are both terrible; it saddens me that many people propound them. Creationists will often try to defend their doubts about evolution on empirical grounds, but they really can’t, and I think even they realize this. Their real objection to evolution is not that it is unscientific, but that it is immoral; the concern is that studying evolution will make us callous and selfish. And unfortunately, there is a grain of truth here: A shallow understanding of evolution can indeed lead to a callous and selfish mindset, as people try to shoehorn evolutionary theory onto moral and political systems without a deep understanding of either.

The first option is usually known as “Social Darwinism”, but I think a better term is “evolutionary eugenics”. (“Social Darwinism” is a pejorative, not a self-description.) This philosophy, if we even credit it with the term, is especially ridiculous; indeed, it is evil. It doesn’t make any sense, either as ethics or as evolution, and it has led to some of the most terrible atrocities in history, from forced sterilization to mass murder. Darwin adamantly disagreed with it, and it rests upon a variety of deep confusions about evolutionary science.

First, in practice at least, eugenicists presumed that traits like intelligence, health, and even wealth are almost entirely genetic—when in fact they are very heavily affected by the environment. There certainly are genetic factors involved, but the presumption that these traits are entirely genetic is absurd. Indeed, the fact that the wealth of parents is strongly correlated with that of their children has an obvious explanation completely unrelated to genetics: Inheritance. Wealthy parents can also give their children many advantages in life that lead to higher earnings later. Controlling for inherited environment, there is still some heritability of wealth, but it’s quite weak; it’s probably due to personality traits like conscientiousness, ambition, and, in fact, narcissism, which are beneficial in a capitalist economy. Hence breeding the wealthy may make more people who are similar to the wealthy; but there’s no reason to think it will actually make the world wealthier.

Moreover, eugenics rests upon a confusion between fitness in the evolutionary sense of expected number of allele copies, and the notion of being “fit” in some other sense, like physical health (as in “fitness club”), social conformity (as in “misfits”), or mental sanity (as in “unfit to stand trial”). Strong people are not necessarily higher in genetic fitness, nor are smart people, nor are people of any particular race or ethnicity. Fitness measures the probability of one’s genes being passed on in a given environment—without reference to a specific environment, it says basically nothing. Given the reference environment “majority of the Earth’s land surface”, humans are very fit organisms, but so are rats and cockroaches. Given the reference environment “deep ocean”, sharks fare far better than we ever will, and better even than our cousins the cetaceans who live there. And there is no reason to think that intelligence in the sense of Einstein or Darwin is particularly fit. The intelligence of an ordinary person is definitely fit—that’s why we have it—but beyond that point, it may in fact be counterproductive. (Consider Isaac Newton and Alan Turing, both of whom were geniuses and neither of whom ever married or had children.)

There is a milder form of this that is still quite harmful; I’ll call it “laissez-faire Darwinism”. It says that because natural selection automatically perpetuates the fit at the expense of the unfit, it ultimately leads to the best overall outcome. Under laissez-faire Darwinism, we should simply let evolution happen as it is going to happen. This theory is not as crazy as evolutionary eugenics—nor would its consequences be as dire—but it’s still quite confused. Natural selection is a law of nature, not a moral principle. It says what will happen, not what should happen. Indeed, like any law of nature, natural selection is inevitable. No matter what you do, natural selection will act upon you. The genes that work will survive, the genes that fail will die. The specifics of the environmental circumstances will decide which genes survive, and there are random deviations due to genetic drift; but natural selection always applies.

Typically laissez-faire Darwinists argue that we should eliminate all government welfare, health care, and famine relief, because they oppose natural selection; but this would be like tearing down all skyscrapers because they oppose gravity, or, as Benjamin Franklin was once asked to do, refusing to install lightning rods because they oppose God’s holy smiting. Natural selection is a law of nature, a fundamental truth; but through wise engineering we can work with it instead of against it, just as we do with gravity and electricity. We would ignore laws of nature at our own peril—an engineer who failed to take gravity into account would not make very good buildings!—but we can work with them and around them to achieve our goals. This is no less true with natural selection than with any law of nature, whether gravity, electricity, quantum mechanics, or anything else. As a laser uses quantum mechanics and a light bulb uses electricity, so wise social policy can use natural selection to serve human ends. Indeed, welfare, health care, and famine relief are precisely the sort of things that can modulate the fitness of our entire species to make us all better off.

There are however important ways in which evolution can influence our ethical reasoning, which I’ll talk about in later posts.

The real source of the evolution debate, part 2

As I discussed in my last post, what people really object to is not evolution per se, but a set of distinct yet conceptually related ideas: adaptationism, common descent, animalism, abiogenesis, and atheism.

In my last post I dealt with adaptationism and common descent; now it’s time for animalism, abiogenesis, and atheism.

Animalism

Next we must consider animalism, the proposition that humans are not “special”, that we are animals like any other. I’d like to distinguish two forms of animalism which are quite different but often confused; I will call them weak animalism and strong animalism. The former is definitely true, but the latter doesn’t make any sense. Weak animalism is the observation that human beings have the same biological structure as other animals, and share a common ancestry and many common traits—in short, that humans are in fact animals. We are all born, we all die; we all breathe, we all eat, we all sleep; we all love, we all suffer. This seems to me a completely unassailable observation; of course these things are true, they are essential to human nature, and they are a direct consequence of our kinship with the rest of the animal kingdom. Humans are not rocks or plants or empty space; humans are animals.

On the other hand, strong animalism is the claim that because humans are animals, we may (or should) “act like animals”, stealing, raping, murdering, and so on. It is true that all these behaviors, or very close analogues, can be observed in the animal kingdom; but at the same time, so can friendship (e.g. in chimpanzees), affection (e.g. in penguins), monogamy (e.g. in gerbils), and many other behaviors. The diversity of behaviors in the animal kingdom is mind-bogglingly huge. There are animals that can sever and regrow limbs and animals that can infest and control other animals’ minds.

In the only sense in which we are “just animals”, the fact justifies no moral claims about our behavior. This matter is not a trivial quibble, but a major factor in the evolution debate: Intelligent Design proponents raised essentially this complaint when they objected to Bloodhound Gang’s song “The Bad Touch”, which includes the line, “You and me baby we ain’t nothin’ but mammals // So let’s do it like they do on the Discovery Channel”. This may make for entertaining music (and I’ve no objection to sex or even promiscuity and seduction per se), but it is highly fallacious reasoning, and it’s clearly hurting the public understanding of science.

If you insist on saying that humans are “just animals”, you should be very clear about what this means; I much prefer to remove the condescending “just” and say “humans are animals”. For to say humans are just animals would be like saying the Earth is just a planet, or love is just a chemical reaction. If all you mean is that the example is an instance of a category, you don’t need the “just”; by saying “just”, you clearly are trying to assert some sort of equivalence between members of the category, one that would deflate the status of the particular example. Yet if you have to say it, it probably isn’t true; no one would point at a random rock and say “this is just a rock”—instead you point to the Earth and say “this is just a rock”, when in fact it is a very special rock. Humans are very special animals, the Earth is a very special planet, and love is a very special chemical reaction (closely tied to that most mysterious of chemical reactions, consciousness). We are members of one vast animal family—indeed, one vast family of life—but we are most definitely its wisest and most powerful members.

I’m honestly not sure what I would do if I tried to “act like an animal”; I suppose I would be born, breathe, eat, sleep, love, suffer and die—but I was going to do these things anyway, whether I wanted to or not. Indeed, by weak animalism, humans are animals, and so by acting like human beings we are in fact acting like animals—the animal Homo sapiens.

Abiogenesis

Next comes abiogenesis, the proposition that living things came from nonliving things. Well, where else would they come from? The only way to deny this proposition is to say that living things always existed. (If God made life, he would have done so by being a living thing that always existed.) The problem with this idea is that it doesn’t really explain where life comes from; it only pushes its origin back into the infinite past. Scientists are making progress in using nonliving chemicals to produce replicating entities that are very similar to life, and in 2010 scientists created the first bacterium with a fully synthetic genome—but to do it, they had to use pre-existing bacteria to set up the reactions. This lends credibility to the idea that life came from nonlife, but in fact even this wouldn’t conclusively demonstrate abiogenesis; it would prove that life can arise from nonlife, but that doesn’t mean it did originally. The truth is, we really don’t understand much about the origin of life, and even less about the origin of the universe; but this does nothing to undermine evolution or even common descent. No one doubts the existence of gravity simply because we don’t know what caused the Big Bang!

Atheism

Finally, and most controversially, there is atheism. Theism is belief in a superhuman being that responds to prayers and performs miracles; atheism is the negation of theism. This is all atheism means; if you think it means something more than this—absolute knowledge that there cannot be a creator being, or no ultimate foundation for morality, or no meaning to existence, or whatever else—that isn’t atheism. An atheist is someone who doesn’t believe in a personal divinity, someone who says that there are no superhuman beings that intervene in our lives. This is a fairly strong claim in itself, since if correct, atheism implies that religion as we know it—prayer, rituals, miracles, holy books—is utterly false. Deep philosophical religion, like that practiced by Einstein or Kant, remains intact; but the religion of churches, mosques and temples is completely undermined.

Evolution doesn’t imply atheism, but it does support it, in the following sense: Evolution answers the question of “Where did we come from?” without requiring God. Even before we knew about evolution, religion wasn’t a very convincing answer to that question; but we didn’t really have a better one—and now we do.

Yet atheism is clearly correct. This is something we can infer directly from a large body of scientific evidence. I’ve already addressed this topic in previous posts, so I’ll be brief this time around.

Maybe there is a kind of religion that could be reconciled with science; but it’s not a theistic religion. Perhaps there is a God who made the whole of the universe, set it running in perfect harmony to achieve some divine plan. This is called deism, and it’s a scientifically respectable position. But then, it is senseless to pray, since God isn’t going to change the divine plan on behalf of tiny creatures on a backwater planet of a backwater star in a backwater galaxy. It is plainly wrong to call such a being “he” or even “He”, since no being so vast and powerful could ever be properly described in the petty terms of a biological male—it would be like saying that gravity has testicles, energy conservation has a beard, or causality has a Y chromosome. I’m not sure we can even fairly say that God is a conscious being, for consciousness as we know it seems too vulgar a trait to assign to an entity of such vastness. In fact, the theologian Paul Tillich thought even existence a concept insufficient to describe the divine. It is foolish to look to ancient books to understand God, for its work is written from horizon to horizon in the fabric of the universe, and these ancient books are but pale shadows of its grandeur. It is naive to suppose that we are special beings created in God’s image, for God has made many millions of species on this planet, and probably countless more on other distant planets; furthermore, God’s process of production favors insects and bacteria and requires massive systematic death and suffering.

And even once we have removed everything we knew of religion, even this truncated theology suffers from an egregious flaw: Such a creator offers us no evidence of its existence. A deistic God is indistinguishable from the universe itself, definitely in practice and perhaps even in principle. I don’t really see the point in using the word “God” when the word “nature” captures what we mean much better. Saying “God is vaster than we can imagine, and of course by ‘God’ I mean the universe” strikes me as like saying “The Sun is powered by magical unicorn love, and of course by ‘magical unicorn love’ I mean nuclear fusion.”

And theism, religion as we know it, is philosophically and scientifically bankrupt. Imagine an airline pilot who lets go of the controls and prays to God to fly the plane; imagine a surgeon who puts down the scalpel and prays to God for the patients to be healed. That’s the sort of thing we would do if theism were true. It would make sense to do these things—it would be rational to do these things—under the presumption that there is a God who answers our prayers. You can’t escape this; if it makes sense to pray for your sick grandmother, then it doesn’t make sense for her to take medicine—because if God is in control, then chemistry isn’t. The fact that hardly anyone really would resort to prayer when an obvious and effective scientific alternative is available (and the fact that people who do are considered fanatical or even insane) clearly shows that theism is bankrupt, and that hardly anyone believes it confidently enough to actually live by it. No one except the craziest fanatics believes in God the way they believe in gravity.

I’m sure this book will be perceived as yet another “angry atheist” “attacking” “religious people”; on the contrary, I am a respectful and reflective atheist criticizing theistic religion. I respect religious people; I do not respect theistic religion. Indeed, I respect religious people too much to let them go on believing such ridiculous things. What glorious powers of human reason are wasted on such nonsense! If you believe in the subtle, abstract, inscrutable God of Einstein or Spinoza, very well. We disagree only about the most abstract matters, almost at the level of semantics (what you call “God” I prefer to call “nature”). Our beliefs and values are not only reconcilable but nearly identical.

On the other hand if you believe in a magical personal God, a God who writes books and answers prayers, then my criticism is indeed directed at your beliefs; I think you are mistaken, gravely, dangerously mistaken.

Atheism is a scientific fact.

Conclusion

Evolution is a fact. The Modern Synthesis of genetics and natural selection is among the most certain scientific theories ever devised; it is the unified field theory of life on Earth. The following claims may be controversial in our society, but they are also scientific facts: Living things are adapted to their environment by natural selection; all life on Earth is descended from a common ancestor; humans are animals; life arose by natural processes; and theistic religion is false. You can accept these facts, or else you can live in denial.

Yes, in principle evolution is a theory that can be doubted, but in principle everything in science is a theory that can be doubted. If you want certain, undeniable truths, you will need to stay with logic and mathematics—and even then, you’ll need to be careful about your axioms. Otherwise, you must always be open to a thin sliver of uncertainty, a sliver that should be no larger for evolution than for gravity or photosynthesis. (Of the three, gravity is by far the least-understood.)

The convergence of scientific evidence in favor of evolution, a 4.5-billion-year-old Earth, genetics, natural selection, common descent, adaptationism, weak animalism, and yes, even atheism, is so incredibly massive that we’d have to give up half of science to abandon these things. Any revisions we do make in the future will necessarily be minor, leaving the core of truth intact.

To doubt that rubidium decays into strontium at the same rate now it did a million years ago, you must explain how the fundamental laws of nuclear physics that we have verified to twelve decimal places are incorrect.

To doubt that cetaceans evolved from land mammals, you must explain why they breathe air instead of water and why their tails flex up and down rather than side to side, unlike nearly everything else in the sea.

To believe in microevolution but not macroevolution, you must think that there is some mysterious force that prevents what has happened 100 times from happening an additional 100,000 times for the same reasons. A sustained systematic change of 0.01% per century, one darwin of evolution (lowercase for a unit of measure, like the newton of force or the weber of magnetic flux), is more than enough to account for the transition from archaea to eukaryotes over 3 billion years, and vastly more than is needed to account for the transition from apes to humans over 5 million years. In fact, observed rates of evolution in the short term have reached the level of kilodarwins—thousands of darwins.
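To make the arithmetic concrete, here is a quick back-of-envelope sketch in Python. The darwin formula is the standard one (change by a factor of e per million years); the brain-volume figures are round, illustrative numbers rather than precise measurements.

```python
import math

def rate_in_darwins(initial, final, million_years):
    """Average rate of change of a trait, in darwins.

    One darwin = change by a factor of e per million years,
    so the rate is ln(final/initial) divided by the elapsed
    time in millions of years.
    """
    return math.log(final / initial) / million_years

# One darwin sustained for a single century multiplies a trait
# by e**0.0001, i.e. about a 0.01% change -- the figure above.
print(math.e ** 0.0001)  # ~1.0001

# Illustrative round numbers: hominid brain volume going from
# roughly 450 cc to roughly 1350 cc over about 3 million years.
print(rate_in_darwins(450, 1350, 3))  # ~0.37 darwins
```

At a fraction of a darwin, the ape-to-human transition is leisurely by the standards of observed short-term evolution.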

To doubt that life on Earth has changed and diverged over time you must ignore the most obvious facts about a remarkably rich and well-organized fossil record. There are no rabbits in Precambrian layers. There are no trilobites in Mesozoic layers. There are no primates in the Jurassic, and no sauropods in the Tertiary. A human fossil and a dinosaur fossil have never been found in the same rock. Creationists like to claim that the fossil record sorted itself by size and lifestyle, but in fact there are large and small, land and sea, in pretty much every layer of the fossil record—just not the same ones, because the organisms in lower layers died off and were replaced by the organisms in higher layers. Pterodactyls look a lot like birds, come in roughly the same size ranges as birds, and seem to have lived similar lifestyles, but you’ll never find the two buried together. Looking at the fossils, you can’t help but infer evolution; if God made the fossils, he must have wanted us to believe in evolution.

The real source of the evolution debate, part 1

Feb 9 JDN 2460716

The last few posts have been about evolution; but everything I’ve said in them has been very technical and scientific, and I imagine it is not very controversial or offensive to anyone. In fact, I would guess that anyone who believes in Creationism, upon reading my definition of evolution as “change in allele distribution in a population”, was thinking, “Of course we believe in that. But that’s not evolution.” Actually it is; evolution is change in allele distribution in a population. What people are objecting to isn’t really evolution.

There are however several propositions that people do object to, which are conceptually related to evolution but not strictly implied by it: adaptationism, common descent, animalism, abiogenesis, and atheism. They are all true—and in what follows I will offer a defense of each—but they are not necessarily entailed by evolution or the Modern Synthesis, and so they should be considered separately on their own merits. This post will deal with adaptationism and common descent, and I’ll save the others for a later post.

Adaptationism

Adaptationism is the principle that living organisms have the traits they do because these traits are adaptive, that is, that they are beneficial to fitness. It’s obvious that this isn’t completely true in every case; whales have hipbones despite having no apparent use for them, and the human appendix seems mostly useful for collecting toxins and occasionally exploding. There are also limits to how much an organism can change given its current structure; the emerging field of evolutionary developmental biology, or evo-devo, seeks to characterize these limits more precisely.

But in general, adaptationism is an incredibly powerful principle, one which makes sense of the diversity and complexity of life on Earth in a way no other theory can. Natural selection predicts that organisms will become more and more adapted over time; adaptationism rests on the fact that life on Earth has had plenty of time to adapt really, really well. In fact, it can be argued that adaptationism is really what evolutionary theory is about, that all this business about changes in allele distributions is useful but not really the point of the enterprise.

When we look at the world, we see that living things are extremely complex and well-suited to their environments; theologians used to say (in fact some still do) that this was evidence that living things were designed by a perfect God.

The problem with this argument was exposed almost immediately by David Hume: If complex things need designers, aren’t designers even more complex than what they design? But then, the designer needs a designer-designer, and the designer-designer needs a designer-designer-designer, and so on into an infinite regress! Another problem with this sort of Intelligent Design thinking is that it cannot explain the cases when adaptationism fails—in particular, why do so many species go extinct? Recently a theory of “Intelligent Recall” was proposed for this purpose; but this forces us to think of our designer as no more intelligent than a financial analyst or an automobile engineer! What kind of God would make mistakes in design?

And now we know better: The remarkable complexity and fitness of living organisms can be entirely explained by adaptationism. When we ask why dolphins have fins, why birds have wings, why centipedes have so many legs, why snakes are so long, or why humans have such enormous brains, adaptationism gives us the answer: organisms have these traits because having these traits benefited their ancestors. In some cases it’s pretty obvious how this would work (having fins lets dolphins swim faster, swimming faster has obvious benefits in catching fish and escaping sharks, so dolphin ancestors with more fin-like limbs survived better); in others we’re still working on the specifics (there is as yet no consensus on how humans got so incredibly smart compared to other animals); but in general adaptationism has explained a huge body of data that we couldn’t account for any other way.

Common descent

Common descent is the proposition that all living organisms on Earth are descended from a common ancestor. This implies, in particular, that human beings share a common ancestor with other animals. The former is strictly stronger, and not quite as certain; at least in principle it could be that some broad classes of organism do not share a common ancestor, but nonetheless it would still be quite clear that humans share a common ancestor with chimpanzees. In practice nearly all biologists agree with the strongest form of common descent, that all living organisms on Earth share a common ancestor. Recently the biochemist Douglas Theobald mathematically compared this strongest form of common descent (universal common descent) with several other models of phylogenetic history, finding that universal common descent was the most probable result by a factor of at least 10^2000—a 2001-digit number. That is, scientists are 99.999999999999999999… (and so on, with 1,980 more nines!) percent sure that universal common descent is right. This is not hyperbole; it is mathematically precise. At this point any sliver of uncertainty left in universal common descent needs to apply to all of our fundamental knowledge of physics and chemistry; in order to be wrong about this, we’d need to be wrong about everything.

How are we so sure? Nature presents us with a very consistent pattern of observations that simply make no sense any other way. Traits in living things (and, we are increasingly finding, genes) have distinct patterns, structural similarities that exist between species irrespective of their lifestyle; we call these similarities homologues. (Similarities that are due to lifestyle—e.g., both dolphins and fish have fins—are called analogues.) Dolphin skeletons are more like dog skeletons than they are like fish skeletons, even though dolphins live more like fish. Bat skin is more like human skin than like bird skin, even though bats live more like birds. The most parsimonious explanation is that these traits were passed on from some common ancestor—that dolphins and dogs have similar skeletons because dolphins and dogs are actually genetically related somehow, and they differ from fish because they are more distantly related.

Once we began to understand DNA, we became able to detect even more compelling homologues. Many kinds of mutation are completely ineffectual; some involve a change to DNA that doesn’t do anything, others swap out two amino acids that are essentially the same; in fact because of the way genes code for amino acids, it’s possible to have a change in a gene that isn’t reflected in the resulting protein at all. All of these changes have no effect on the organism, but they are still passed on to offspring. When you find two organisms that have the same trait (e.g. bats and birds both have wings), if that trait does something important (lets you fly), then maybe it’s just a similarity in lifestyle; if that happens we call it convergent evolution. But when we’re looking at a DNA sequence that doesn’t do anything, lifestyle can’t be the reason—it must be either common ancestry or pure coincidence. Statistical analysis can rule out pure coincidence, and then we are left with only one possibility: common descent. A third option often proposed by Creationists simply doesn’t work: A common designer of sharks and dolphins would not give one a cartilaginous skeleton and gills and the other a bony mammalian skeleton and lungs. There is no reason for dolphin skeletons to be more like dog skeletons than shark skeletons—except that dogs and dolphins share closer common ancestry to each other than they do to sharks.
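To see how a mutation can vanish at the protein level, here is a minimal Python sketch. The codon assignments below are a small slice of the real standard genetic code, but the short sequences being translated are made up for illustration.

```python
# A few entries from the standard genetic code (DNA coding strand):
# note that all four GC_ codons specify the same amino acid.
CODONS = {
    "GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "GAT": "Asp", "GAC": "Asp",
    "AAA": "Lys", "AAG": "Lys",
}

def translate(dna):
    """Translate a coding-strand DNA sequence, three bases at a time."""
    return [CODONS[dna[i:i + 3]] for i in range(0, len(dna), 3)]

original = "GCTGATAAA"  # Ala-Asp-Lys
mutated  = "GCCGATAAG"  # two point mutations, zero effect on the protein
print(translate(original))  # ['Ala', 'Asp', 'Lys']
print(translate(mutated))   # ['Ala', 'Asp', 'Lys'] -- silent mutations
```

When two species share long runs of such functionally invisible spelling choices, lifestyle cannot be the reason; common ancestry can.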

There are thousands of traits and genes that we can use to assess these relationships. When we do this, we find a remarkably consistent organizational structure, a pattern of a few common ancestors diversifying into a wide variety of descendants—it looks a bit like a tree, so we call it a phylogenetic tree. In some cases there is ambiguity about which species are more closely related, and we need to gather more evidence. This is a normal part of evolutionary biology research.

One thing is not disputed: Humans share a common ancestor with apes. This is simply too obvious from the morphological and genetic homologues. Human and chimp DNA is 95-98% identical, depending on how you count insertions and deletions.

A standard measure of genetic distance is the Nei distance; a larger Nei distance implies more genetic differences, which in turn suggests that the common ancestor was further in the past. (Exactly how it’s calculated is a bit too technical for this post.)

Humans and chimps have a Nei distance of 0.45. That is a closer genetic relationship than the one between dogs and foxes, which differ by a Nei distance of 1.1. Almost anyone can see that dogs and foxes are related animals; so why is it so hard to believe that humans and chimps are related too?

Creationists often claim that we never find the transitional forms predicted by evolutionary theory, but this is simply not true. We do in fact see many transitional forms; feathered dinosaurs mark the transition from reptiles to birds, ambulocetids mark the transition from land mammals to cetaceans, therapsids mark the transition from reptiles to mammals, and a huge variety of hominids marks the transition from apes to humans. It’s important to understand what this means: transitional forms are not bizarre combinations of their descendant organisms, but fully-functional lifeforms in their own right that have descendants very different from one another. Just as your grandparents are not a combination of half of you and half of your first cousin, common ancestors are not simply half-and-half combinations of their descendant organisms. Ambulocetids are not half-deer/half-dolphin, they are somewhat deer-like yet somewhat dolphin-like mammals whose ancestors were on average slightly more deer-like and whose descendants were on average slightly more dolphin-like. Different traits changed at different times, generations apart: Ambulocetids began to swim before they lost their legs, and even modern dolphins haven’t lost their lungs or hipbones.


This is such a deep, marvelous truth that Creationists are missing out on: All life on Earth is part of one family. We are kin with the dogs and the cats and the elephants, with the snakes and the lizards and the birds, with the beetles and the flies and the bees, even with the flowers and the bushes and the trees.

Defining evolution

Feb 2 JDN 2460709

In the last post I said I’d explain the basics of evolution, then went into a bunch of detail about genetics. Why all this stuff about DNA? Weren’t we supposed to be talking about evolution? Yes—but it’s impossible to truly understand evolution without understanding DNA. This unity between genetics and evolution is called the Modern Synthesis, and it is the unified field theory of the life sciences. It’s quite different from what Darwin proposed in 1859, but the fundamental insights were his; the Modern Synthesis is a body of flesh over the skeleton of Darwinian evolution. Now that I have explained the basics of DNA, it is time to discuss evolution itself.

The fundamental unit of evolution is the gene. (Darwin, among others, insisted that the fundamental unit of evolution is the organism, because it is organisms that are born and die. There is some truth to this, but given the presence of phenomena like kin selection and genetic drift, we also need to consider genes themselves. Richard Dawkins makes a distinction between “replicators” (genes) and “vehicles” (organisms) that makes a great deal of sense to me—both are necessary parts of the same system, and it’s a little silly to ask which is “more fundamental”.) The fundamental unit of evolution is not the population or the species; it is populations that evolve, but they evolve by natural selection acting upon individuals and genes. Natural selection is not sensitive to “the good of the species”; it is only sensitive to the good of the organism and the good of the gene.

A gene is a section of DNA that, when processed by the appropriate proteins, produces a particular protein. Most DNA is not in the form of genes. The majority of DNA has no effect—you can change it without affecting the organism—and most of the rest is involved in regulating the genes, not in producing proteins. Yet, genes are the recipes by which we are made. Human beings have genes for the hemoglobin that oxygenates our blood, genes for the keratin that makes up our hair, genes for the enzymes that produce the melanin pigmenting our skin and the serotonin transmitting signals in our brains, and about 20,000 protein-coding genes in all (the exact count is still being refined). An allele is a particular variant of a gene which produces a particular variant of the resulting protein. Alleles in melanin-related genes give different people different colors of skin; a particular allele in a hemoglobin gene gives some people sickle-cell anaemia.

When the distribution of alleles in a population changes, that is evolution. Yes, that’s all “evolution” means: changes in the distribution of alleles in a population. When a baby is born, that’s evolution. When a person dies, that’s evolution. This is what we mean when we say that evolution is a fact; it is a fact that alleles do change distribution in populations. Individuals do not evolve; populations evolve. You will never see a dog turn into a cat, nor an ape into a human. You could see, if you were watching for millions of years, a population of animals that started very dog-like and got increasingly cat-like with each generation, or a population of animals that started very ape-like and got increasingly human-like with each generation. Even these are not necessary occurrences; under different environmental circumstances, the same genes can evolve in completely different directions.

Fitness is the expected number of copies that an allele is likely to produce in the next generation. (There are a few subtly different ways of defining fitness; the one I prefer is the expected value of the number of copies of a given allele in the next generation. The fitness f of an allele a at generation t is given by the expectation of the number n of copies of that allele in the population at generation t+1: f(a, t) = E[n(a, t+1)]. This is an inclusive fitness measure, which accounts for kin selection better than exclusive fitness measures like “predicted grandchildren” or “expected number of reproductively-viable offspring”. In practical terms these generally give the same results; but when they don’t, the inclusive measure is to be preferred.)

Fitness is a probabilistic notion—alleles with high fitness are likely to be passed on, but this is not guaranteed. “Survival of the fittest” ultimately just means that genes that are likely to make many copies are likely to have many copies. It has been said that this is a tautology, and indeed it is; but so is the Pythagorean Theorem. Some tautologies are useful, and all tautologies are undeniably true.

What causes evolution? Organisms are born, reproduce, and die. Any time this happens, it changes the distribution of alleles in the population—it is evolution. If there was a reason why the ones who lived lived and the ones who died died, then the actual number of copies of each allele in the population will reflect the fitness of those alleles; this is called natural selection. On the other hand, if it just happened by chance, then the distribution of alleles won’t match the fitness; this is called genetic drift. Examples of each: Trees are tall, giraffes eat leaves, so giraffes with longer necks get more food and live longer—that’s natural selection. A flood rips through the savannah and kills half of the giraffes, and it just happens that more long-necked than short-necked giraffes die—that’s genetic drift. The difference can be subtle, since sometimes we don’t know what the reasons are; if it turned out that there was some reason why floods are more likely to kill long-necked giraffes (they can’t swim as well?), then in fact what we thought was genetic drift was really natural selection. But notice: Natural selection is not chance. Natural selection is the opposite of chance. If evolution happens by chance, that’s genetic drift. Natural selection is evolution that happens for a reason.
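The difference is easy to see in a toy simulation. Here is a minimal sketch of the textbook Wright-Fisher model (the population size, advantage, and other parameters are arbitrary choices for illustration): with no fitness difference the allele’s frequency wanders by pure sampling noise, which is drift; with even a small advantage, selection pushes it systematically toward fixation.

```python
import random

def wright_fisher(pop_size=1000, freq=0.5, advantage=0.0,
                  generations=200, seed=42):
    """Track one allele's frequency in a fixed-size population.

    Each generation, every individual independently inherits the
    focal allele with a probability weighted by its fitness. With
    advantage == 0 the only force is sampling noise (genetic drift);
    with advantage > 0, natural selection biases the outcome.
    """
    rng = random.Random(seed)
    for _ in range(generations):
        w = freq * (1 + advantage)    # weighted share of the focal allele
        p = w / (w + (1 - freq))      # chance each child inherits it
        count = sum(rng.random() < p for _ in range(pop_size))
        freq = count / pop_size
        if freq in (0.0, 1.0):        # allele lost or fixed
            break
    return freq

print(wright_fisher(advantage=0.0))   # drift: wanders unpredictably
print(wright_fisher(advantage=0.05))  # selection: driven toward 1.0
```

Run the drift case with different seeds and it comes out different every time; the selection case almost always ends near fixation, which is just evolution that happens for a reason.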

Natural selection changes populations, but what causes them to separate into distinct species? Well, a species is really a breeding population—it is a group of organisms that regularly interbreeds within the group and does not regularly interbreed outside the group. In most cases, breeding between species is actually impossible; but in many cases it is simply rare. Indeed, there is a particularly interesting case called a ring species, in which interbreeding possibilities rest on a continuum rather than being sharply delineated. In a ring species, there are several distinct populations, of which some can interbreed easily, others can interbreed with difficulty, and others can’t interbreed at all. A classic case is the Ensatina salamanders that live around the Central Valley of California. There are nineteen populations, and each can interbreed with its adjacent populations—but the two populations at the far ends cannot interbreed. Ensatina eschscholtzii eschscholtzii at one end can interbreed with its neighbors, which can interbreed with their neighbors, and so on all the way around the valley to E. e. klauberi at the other end—but eschscholtzii and klauberi themselves cannot interbreed. Are they different “species”? It’s difficult to say. If all the intermediates died out, we would call them different species, Ensatina eschscholtzii and Ensatina klauberi; but in fact genes do sometimes pass between them, because they can both interbreed with the intermediates. Really, the concept “species” fails to capture the true complexity of the situation.

This is not a problem for evolutionary theory—it is a prediction of evolutionary theory. We should expect to see new species occasionally forming, and while they are in the process of forming there should be many intermediates that aren’t yet distinct species. Evolution predicts gradual divergence, and sometimes we are lucky enough to see that divergence in process.

Natural selection can only act upon alleles that already exist; it chooses the best out of what’s available, not the best that could possibly exist. This is why dolphins breathe air instead of water; breathing water would be much better for their lifestyle, but no dolphin has yet been born who can breathe water. The alleles aren’t there, so natural selection cannot act upon them. If a mutant dolphin is someday born who can breathe water, as long as they don’t suffer from other problems as a result of their mutation, they are likely to live a long time and have lots of offspring; in a hundred generations perhaps water-breathing dolphins would form a new species, or even replace air-breathing dolphins. And notice how short a time that is: 100 generations of dolphins is only about 1000 years. We could watch this happening in historical time. If it had happened a million years ago, the fossil record would probably never show the intermediate forms. This is why we don’t see transitional forms between closely-related species; because the differences are so subtle, the necessary changes can occur very rapidly, in too few generations to ensure fossilization.

Indeed, monogenic traits—those that can be changed by a single mutation—never produce transitional forms. There is a single gene for sickle-cell anaemia in humans; we should not expect to see people with “30% sickle-cell anaemia”, because there are only three options: you either have no copies of the sickle-cell allele (normal), you have one copy (sickle-cell trait), or you have two copies (sickle-cell anaemia). In fact, in this particular case, the one-copy variant isn’t even mild anaemia; it is a generally healthy non-anaemic state that offers protection against malaria. Likewise there is a single gene variant for six fingers in humans—a dominant one, in fact, so a single copy is enough—and you get five fingers or six, never five and a half. Even if we had access to every individual organism that ever lived, we still wouldn’t see transitional forms for monogenic traits. Given that we actually have fossils of less than one in ten billion organisms that ever lived, it’s not surprising that most evolutionary changes leave no mark in the fossil record.

Furthermore, it’s important to understand that natural selection, even when there is plenty of variation to act on, does not produce perfectly-adapted organisms. It only produces organisms that are good enough to survive and pass on their alleles. In fact, there can be multiple fit alleles of the same gene in a population—all different, perhaps even some better than others, but each good enough to keep on surviving.

Indeed, the fitness of one allele can increase the fitness of another allele, in a number of different ways. The most morally-relevant ones only make sense in terms of game theory, so I will wait until later posts to get into them, but there are a few worth mentioning here. The first is co-evolution. Organisms evolve to suit their environments—but part of an organism’s environment consists of other organisms. Bees would not function if there were no flowers—but nor would flowers function without bees. So which came first, the bee or the flower? Neither. Ancient ancestors of each evolved together, co-evolved, the proto-flowers growing more flower-like as the proto-bees grew more bee-like, until finally an equilibrium was reached at the bees and flowers we see today.

Another way that organisms can affect the evolution of other organisms is through frequency-dependent selection, in which the fitness of a given allele depends upon the distribution of other alleles of the same gene. The most important case of frequency-dependent selection is the sex ratio, the balance of males and females within a species. If there are more males than females, the fitness of females goes up—it pays to be female; you’ll get your choice of males. Conversely, if there are more females than males, it pays to be male. Hence, over time, sex distributions reach an equilibrium at 50% male and 50% female, which has happened in almost every species (eusocial insects are the only major exception, and it’s due to their weird genetics). There are other cases of frequency-dependent selection as well; for instance, in stag beetles (Lucanidae), there are three kinds of males, called “alpha”, “beta”, and “gamma”. Alpha males have large horns and fight heavily with other alpha males; they risk being killed in the process, but if they win the fight, they get all the best females. Beta males have short horns and only fight other beta males; this limits their mating pool, but prevents them from being killed by alpha males. Finally, gamma males look just like females and will occasionally sneak past an alpha male and mate with his females. This is frequency-dependent selection because the success of each strategy depends on the other strategies in a fashion similar to rock-paper-scissors. If gamma males become very common, beta males will become more successful, because they won’t get cheated the way alpha males do. If beta males become common, alpha males will become more successful, because they can beat beta males in fights. If alpha males become common, gamma males will become more successful, because they can cheat alpha males. In the long run, the system settles into an equilibrium with a certain fraction of all three types.
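This rock-paper-scissors dynamic can be captured in a small replicator-dynamics sketch in Python. The payoff numbers here are invented for illustration; only the cyclic beats-relation comes from the description above. I have also assumed that losing a contest costs more than winning one gains, which is what lets the cycle settle down instead of swinging ever wider.

```python
WIN, LOSS = 1.0, -2.0  # assumed payoffs: losing costs more than winning pays
BEATS = {"alpha": "beta", "beta": "gamma", "gamma": "alpha"}
BEATEN_BY = {loser: winner for winner, loser in BEATS.items()}

def step(freqs, rate=0.1):
    """One round of replicator dynamics: strategies scoring above the
    population average grow in frequency, the rest shrink."""
    score = {s: WIN * freqs[BEATS[s]] + LOSS * freqs[BEATEN_BY[s]]
             for s in freqs}
    average = sum(freqs[s] * score[s] for s in freqs)
    return {s: freqs[s] * (1 + rate * (score[s] - average)) for s in freqs}

freqs = {"alpha": 0.8, "beta": 0.1, "gamma": 0.1}  # alphas start common
for _ in range(1000):
    freqs = step(freqs)
print(freqs)  # settles toward a stable mix of all three strategies
```

If you instead make winning pay more than losing costs, the same code shows the frequencies swinging in ever-larger cycles; the details of the payoffs decide what equilibrium, if any, a real population reaches.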

A third way alleles affect other alleles is in sexual selection; in sexual selection, the alleles of one sex affect the alleles of the other sex, because sexual compatibility has obvious advantages. For instance, when there are lots of alleles in peahens that make them attracted to big, colorful tails, there is a fitness advantage to being a peacock with a big, colorful tail. Hence, alleles for big, colorful tails in peacocks will be selected. But then, if all the males have big, colorful tails, there is a fitness advantage to being a female who prefers big, colorful tails, and so a positive feedback loop forms; the end result is peacocks with ridiculously huge, ridiculously colorful tails and peahens who love them for it.

Everything above is very technical and scientific, and I imagine it is not very controversial or offensive to anyone. In future posts, I’ll get into the stuff that really upsets people, the true source of controversy on evolution.

Evolution: Foundations of Genetics


Jan 26 JDN 2460702

It frustrates me that in American society, evolutionary biology is considered a controversial topic. When I use knowledge from quantum physics or from organic chemistry, all I need to do is cite a credible source; I don’t need to preface it with a defense of the entire scientific field. Yet in the United States today, even basic statements of facts observed in evolutionary biology are met with incredulity. The consensus in the scientific community about evolution is greater than the consensus about quantum physics, and comparable to the consensus about organic chemistry. 95% of scientists agree that evolution happens, that Darwinian natural selection is the primary cause, and that human beings share a common ancestor with every other life form on Earth. Polls of scientists have consistently made this clear, and the wild success of Project Steve continues to vividly demonstrate it.

But I would rather defend evolution than have to tiptoe around it, or worse have my conclusions ignored because I use it. So, here goes.

You may think you understand evolution, but especially if you doubt that evolution is true, odds are good that you really don’t. Even most people who have taken college courses in evolutionary biology have difficulty understanding evolution.

Evolution is a very rich and complicated science, and I don’t have room to do it justice here. I merely hope that I can give you enough background to make sense of the core concepts, and convince you that evolution is real and important.

Foundations of genetics

So let us start at the beginning. DNA—deoxyribonucleic acid—is a macromolecular (very big and complicated) organic (carbon-based) acid (chemical that can give up hydrogen ions in solution) that is produced by all living cells. More properly, it is a class of macromolecular organic acids, because differences between DNA strands are actually chemical differences in the molecule. The structure of DNA consists of two long chains of constituent molecules called nucleotides; for chemical reasons nucleotides usually bond in pairs, adenine (A) with thymine (T), guanine (G) with cytosine (C). Pairs of nucleotides are called base pairs. We call it a “double-helix” because the two chains are normally wrapped around each other in a helix shape.

Because of this base-pair correspondence, the two strands of a DNA molecule are complementary; if one strand is GATTACA, the other will be CTAATGT. Either strand can be reproduced from the other; this is how DNA replicates. A DNA strand GATTACA/CTAATGT can split into its GATTACA half and its CTAATGT half, and then the original GATTACA half will acquire new nucleotides and make a new CTAATGT for itself; similarly the original CTAATGT half will make a new GATTACA. At the end of this process, two precise copies of the original GATTACA/CTAATGT strand will result. This process can be repeated as necessary.
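Here is a minimal Python sketch of that complementation rule. (Real strands are antiparallel, so a biologist would normally write the partner strand reversed; this follows the simpler base-by-base pairing used in the example above.)

```python
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the partner strand, pairing A with T and G with C."""
    return "".join(PAIR[base] for base in strand)

print(complement("GATTACA"))              # CTAATGT
print(complement(complement("GATTACA")))  # GATTACA: copying recovers it
```

Complementing twice returns the original sequence, which is exactly why each separated half can rebuild the whole molecule.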

DNA molecules can vary in size from a few base-pairs (like the sequence GATTACA), to the roughly 160,000 base-pairs of the bacterium Carsonella ruddii, up to the 3 billion base-pairs of humans and beyond. While complexity of DNA and complexity of organism are surely related (it’s impossible to make a really complicated organism with very simple DNA), more base pairs do not necessarily mean a more complex organism. The single-celled amoeboid Polychaos dubium reportedly has 670 billion base-pairs. Amoeboids are relatively complex, all things considered; but they’re hardly 200 times more complex than we are!

The copying of DNA is exceedingly precise, but like anything in real life, not perfect. Cells have many physical and chemical mechanisms to correct bad copying, but sometimes—on the order of 1 in 1 million base-pairs copied—something goes wrong. Sometimes one nucleotide gets switched for another; perhaps what should have been a T becomes an A, or what should have been an A becomes a G. Other times, a whole sequence of DNA gets duplicated and inserted in a new place; still other times, entire pieces of DNA are lost, never to be copied again. In some cases a sequence is flipped around backwards. All of these (a single-nucleotide substitution, an insertion, a deletion, and an inversion, respectively) are forms of mutation. Mutation is always happening, but it can be increased by the presence of radiation, toxins, and other stresses. Cells with badly mutated DNA are usually destroyed, whether by their own internal failsafes or by the immune system; if not, mutant body cells can cause cancer or other health problems. Usually it’s only mutations in gametes—the sperm and egg cells that carry DNA to the next generation—that have a long-term effect on future generations. Most mutations do not have any significant effect, and most of those that do have an effect are harmful. It is only a rare minority of mutations that actually produces something useful to an organism’s survival.
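Here is a toy sketch of those four mutation types as string edits (the function names and positions are my own, purely for illustration):

```python
# Four kinds of copying error, modeled as edits to a sequence string.
def substitute(seq, i, base):
    return seq[:i] + base + seq[i + 1:]                 # one nucleotide swapped

def insert(seq, i, piece):
    return seq[:i] + piece + seq[i:]                    # a sequence inserted

def delete(seq, i, n):
    return seq[:i] + seq[i + n:]                        # a piece lost entirely

def invert(seq, i, n):
    return seq[:i] + seq[i:i + n][::-1] + seq[i + n:]   # a piece flipped backwards

seq = "GATTACA"
print(substitute(seq, 1, "G"))  # GGTTACA
print(insert(seq, 3, "CCC"))    # GATCCCTACA
print(delete(seq, 2, 2))        # GAACA
print(invert(seq, 2, 4))        # GACATTA
```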

What does DNA do? It makes proteins. Technically, cellular machinery built of protein and RNA (polymerases, ribosomes, and the rest) makes proteins, but which protein is produced by such a process depends upon the order of base pairs in a DNA strand. DNA has been likened to a “code” or a “message”, but this is a little misleading. It’s definitely a sequence that contains information, but the “code” is less like a cryptographer’s cipher and more like a computer’s machine code; it interacts directly with the hardware to produce an output. And it’s important to understand that when DNA is “read” and “decoded”, it’s all happening purely by chemical reactions; there is no conscious being doing the reading. While metaphorically we might say that DNA is a “code” or a “language”, we must not take these metaphors too literally; DNA is not a language in the same sense as English, nor is it a code in the same sense as the Enigma cipher.
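To make the machine-code analogy concrete, here is a tiny sketch of translation using a few entries from the standard genetic code (the real table has 64 codons; this subset is chosen just for the example):

```python
# A few real codons from the standard genetic code (DNA coding strand).
CODON_TABLE = {
    "ATG": "Met",   # methionine; also the "start" signal
    "GAT": "Asp",   # aspartate
    "TAC": "Tyr",   # tyrosine
    "TAA": "STOP",  # one of the three stop codons
}

def translate(dna: str) -> list[str]:
    """Read the sequence three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "???")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGGATTACTAA"))  # ['Met', 'Asp', 'Tyr']
```

Note that nothing in this lookup “understands” anything; it is mechanical all the way down, which is the point of the analogy.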

Genotype and phenotype

DNA is also not a “blueprint”, as it is sometimes described. There is a one-to-one correspondence between a house and its blueprint: given a house, it would be easy to draw a blueprint much like the original blueprint; given a blueprint, one can construct basically the same house. DNA is not like this. There is no one-to-one correspondence between DNA and a living organism’s structure. Given the traits of an organism, it is impossible to reconstruct its DNA—and purely from the DNA, it is impossible to reconstruct the organism. A better analogy is to a recipe, which offers a general guide as to what to make and how to make it, but depending on the cook and the ingredients, may give quite different results. The ingredients in this case are nutrients, and the “cook” is the whole of our experience and interaction with the environment. No experience or environment can act upon us unless we have the right genes and nutrients to make it effective. No matter how long you let it sit, bread with no yeast will never rise—and no matter how hard you try to teach him, your dog will never be able to speak in fluent sentences.

Furthermore, genes rarely do only one thing in an organism; much as drugs have side effects, so do genes, a phenomenon called pleiotropy. Some genes are more pleiotropic than others, but really, all genes are pleiotropic. In any complex organism, genes will have complex effects. The genes of an organism are its genotype; the actual traits that it has are its phenotype. We have these two different words precisely because they are different things; genotype influences phenotype, but many other things influence phenotype besides genotype. The answer to the question “Nature or Nurture?” is always—always—“Both”. There are much more useful questions to ask, like “How much of the variation of this trait within this population is attributable to genetic differences?”, “How do environmental conditions trigger this phenotype in the presence of this genotype?”, and “Under what ecological circumstances would this genotype evolve?”

This is why it’s a bit misleading to talk about “the gene for homosexuality” or “the gene for religiosity”; taken literally, this would be like saying “the ingredient for chocolate cake” or “the beam for the Empire State Building”. At best we can distinguish certain genes that might, in the context of many other genes and environmental contributions, make a difference between particular states—much as removing the cocoa from chocolate cake makes some other kind of cake, it could be that removing a particular gene from someone strongly homosexual might make them nearer to heterosexual. It’s not that genes can be mapped one-to-one to traits of an organism; rather, in many cases a genetic difference corresponds to a difference in traits that is ecologically significant. This is what geneticists mean when they say “the gene for X”; it’s a very useful concept in evolutionary theory, but I don’t think it’s one most laypeople understand. As usual, Richard Dawkins explains this matter brilliantly:

Probably the first point to make is that whenever a geneticist speaks of a gene ‘for’ such and such a characteristic, say brown eyes, he never means that this gene affects nothing else, nor that it is the only gene contributing to the brown pigmentation. Most genes have many distantly ramified and apparently unconnected effects. A vast number of genes are necessary for the development of eyes and their pigment. When a geneticist talks about a single gene effect, he is always talking about a difference between individuals. A gene ‘for brown eyes’ is not a gene that, alone and unaided, manufactures brown pigment. It is a gene that, when compared with its alleles (alternatives at the same chromosomal locus), in a normal environment, is responsible for the difference in eye colour between individuals possessing the gene and individuals not possessing the gene. The statement ‘G1 is a gene for phenotypic characteristic P1’ is always a shorthand. It always implies the existence, or potential existence, of at least one alternative gene G2, and at least one alternative characteristic P2. It also implies a normal developmental environment, including the presence of the other genes which are common in the gene pool as a whole, and therefore likely to be in the same body. If all individuals had two copies of the gene ‘for’ brown eyes and if no other eye colour ever occurred, the ‘gene for brown eyes’ would strictly be a meaningless concept. It can only be defined by reference to at least one potential alternative. Of course any gene exists physically in the sense of being a length of DNA; but it is only properly called a gene ‘for X’ if there is at least one alternative gene at the same chromosomal locus, which leads to not X.

It follows that there is no clear limit to the complexity of the ‘X’ which we may substitute in the phrase ‘a gene for X’. Reading, for example, is a learned skill of immense and subtle complexity. A gene for reading would, to naive common sense, be an absurd notion. Yet, if we follow genetic terminological convention to its logical conclusion, all that would be necessary in order to establish the existence of a gene for reading is the existence of a gene for not reading. If a gene G2 could be found which infallibly caused in its possessors the particular brain lesion necessary to induce specific dyslexia, it would follow that G1, the gene which all the rest of us have in double dose at that chromosomal locus, would by definition have to be called a gene for reading.

It’s important to keep this in mind when interpreting any new ideas or evidence from biology. Just as cocoa by itself is not chocolate cake, because one also needs all the other ingredients that make it cake in the first place, “the gay gene” cannot exist in isolation, because in order to be gay one needs all the other biological and neurological structures that make one a human being in the first place. Moreover, just as cocoa changes the consistency of a cake so that other ingredients may need to be changed to compensate, so a hypothetical “gay gene” might have other biological or neurological effects that would be inseparable from its contribution to sexual orientation.

It’s also important to point out that hereditary is not the same thing as genetic. By comparing pedigrees, it is relatively straightforward to determine the heritability of a trait within a population—but this is not the same as determining whether the trait is genetic. A great many traits that have nothing to do with DNA are systematically inherited from parents—like language, culture, and wealth. (These too can evolve, but it’s a different kind of evolution.) In the United States, IQ is estimated to be about 80% heritable; but so is height, and yet nutrition has large, well-documented effects on height. (The simplest case: malnourished people never grow very tall.) If, as is almost certainly the case, there are many environmental influences such as culture and education that can affect IQ scores, then the heritability of IQ tells us very little.
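For reference, the standard quantitative-genetics definition (a textbook formula, not anything specific to this post) makes the population-dependence explicit; narrow-sense heritability is a ratio of variances within a particular population in a particular environment:

\[ h^2 = \frac{V_A}{V_P}, \qquad V_P = V_G + V_E \quad \text{(ignoring gene-environment interaction and covariance),} \]

where \(V_A\) is additive genetic variance, \(V_G\) total genetic variance, \(V_E\) environmental variance, and \(V_P\) phenotypic variance. Change the environment and \(V_E\) changes, so \(h^2\) changes with it; this is exactly why a high heritability figure for IQ or height tells us nothing about what better nutrition or education could accomplish.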

In fact, some traits are genetic but not hereditary! Certain rare genetic diseases can appear by what is called de novo mutation; the genes that cause them can randomly appear in an individual without having been present in either parent. Neurofibromatosis occurs about as often in people with no family history as it does in people with a family history; and yet neurofibromatosis is definitely a genetic disorder, for it can be traced to particular sections of defective DNA.

Honestly, most of the debate about nature versus nurture in human behavior is really quite pointless. Even if you ignore the general facts that phenotype is always an interaction between genes and environment, and that feedback occurs between genes and environment over evolutionary time, human beings are the species for which the “Nature or nurture?” question is at its most meaningless. It is human nature to be nurtured; it is written within our genes that we should be flexible, intelligent beings capable of learning and training far beyond our congenital capacities. An ant’s genes are not written that way; ants play out essentially the same program in every place and time, because that program is hard-wired within them. Humans have an enormous variety of behaviors—far outstripping the variety in any other species—despite having genetic variation of only about 0.1%; clearly most of the differences between humans are environmental. Yet it is precisely the genes that code for being Homo sapiens that make this possible; if we’d had the genes of an ant or an earthworm, we wouldn’t have this enormous behavioral plasticity. So each person is who they are largely because of their environment—but that itself would not be true without the genes we all share.

On this, my 37th birthday

Jan 19 JDN 2460695

This post will go live on my 37th birthday. I’m now at an age where birthdays don’t really feel like a good thing.

This past year has been one of my worst ever.

It started with returning home from the UK, burnt out, depressed, suffering from frequent debilitating migraines. I had no job prospects, and I was too depressed to search for any. I moved in with my mother, who lately has been suffering health problems of her own.

Gradually, far too gradually, some aspects of my situation improved; my migraines are now better controlled, my depression has been reduced. I am now able to search for jobs at least—but I still haven’t found one. I would say that my mother’s health is better than it was—but several of her conditions are chronic, and much of this struggle will continue indefinitely.

I look back on this year feeling shame, despair, failure and defeat. I haven’t published anything—fiction, nonfiction, or scientific work—in years, and after months of searching I still haven’t found a job that would let me and my husband move to a home of our own. My six figures of student debt are now in forbearance, because the SAVE plan was struck down in court. (At least they’re not accruing interest.) I can’t think of anything I’ve done this year that I would count as a meaningful accomplishment. I feel like I’m just treading water, trying not to drown.

I see others my age finding careers, buying homes, starting families. Honestly they’re a little old to be doing these things now—we Millennials have drawn the short straw on homeownership for sure. (The median age of first-time homebuyers is now 38 years old—the highest ever recorded. In 1981, it was only 29.) I don’t see that happening for me any time soon, and I feel a deep grief over that.

I have not had a year go this badly since high school, when I was struggling even more with migraines and depression. Back then I had debilitating migraines multiple times per week, and my depression sometimes kept me from getting out of bed. I even had suicidal thoughts for a time, though I never made any plans or attempts.

Somehow, despite all that, I still managed to maintain straight As in high school and became a kind of de facto valedictorian. (My school technically didn’t have a valedictorian, but I had the best grades, and I successfully petitioned for special dispensation to deliver a much longer graduation speech than any other student.) Some would say this was because I was so brilliant, but I say it was because high school was too easy—and that this set me up for unrealistic expectations later in life. I am a poster child for Gifted Kid Syndrome and Impostor Syndrome. Honestly, maybe I would have gotten better help for my conditions sooner if my grades had slipped.

Will the coming year be better?

In some ways, probably. Now that my migraines and depression are better controlled—but by no means gone—I have been able to actively search for jobs, and I should be able to find one that fits me eventually (or so I keep trying to convince myself, when it all feels hopeless and pointless). And once I do have a job, whenever that happens, I might be able to start saving up for a home and finally move forward into feeling like a proper adult in this society.

But I look to the coming year feeling fear and dread, as Trump will soon take office and already looks primed to be far worse the second time around. In all likelihood I personally won’t suffer very much from Trump’s incompetence and malfeasance—but millions of other people will, and I don’t know how I can help them, especially when I seem so ineffectual at helping myself.

Moore’s “naturalistic fallacy”

Jan 12 JDN 2460688

In last week’s post I talked about some of the arguments against ethical naturalism, which have sometimes been called “the naturalistic fallacy”.

The “naturalistic fallacy” that G.E. Moore actually wrote about is somewhat subtler; it says that there is something philosophically suspect about defining something non-natural in terms of natural things—and furthermore, that “good” is not a natural thing and so cannot be defined in terms of natural things. For Moore, “good” is not something that can be defined with recourse to facts about psychology, biology, or mathematics; “good” is simply an indefinable atomic concept that exists independent of all other concepts. As such, Moore was criticizing moral theories like utilitarianism and hedonism that seek to define “good” in terms of “pleasure” or “lack of pain”; for Moore, good cannot have a definition in terms of anything except itself.

My greatest problem with this position is less philosophical than linguistic; how does one go about learning a concept that is so atomic and indefinable? When I was a child, I acquired an understanding of the word “good” that has since expanded as I grew in knowledge and maturity. I need not have called it “good”: had I been raised in Madrid, I would have called it bueno; in Beijing, hao; in Kyoto, ii; in Cairo, jaiid; and so on.

I’m not even sure if all these words really mean exactly the same thing, since each word comes with its own cultural and linguistic connotations. A vast range of possible sounds could be used to express this concept and related concepts—and somehow I had to learn which sounds were meant to symbolize which concepts, and what relations were meant to hold between them. This learning process was highly automatic, and occurred when I was very young, so I do not have great insight into its specifics; but nonetheless it seems clear to me that in some sense I learned to define “good” in terms of things that I could perceive. No doubt this definition was tentative, and changed with time and experience; indeed, I think all definitions are like this.

Perhaps my knowledge of other concepts, like “pleasure”, “happiness”, “hope” and “justice”, is interconnected with “good” in such a way that none can be defined separately from the others—indeed, perhaps language itself is best considered a network of mutually-reinforcing concepts, each with some independent justification and some connection to other concepts, not a straightforward derivation from more basic atomic notions. If you wish, call me a “foundherentist” in the tradition of Susan Haack; I certainly do think that all beliefs have some degree of independent justification by direct evidence and some degree of mutual justification by coherence. Haack uses the metaphor of a crossword puzzle, but I prefer Alison Gopnik’s mathematical model of a Bayes net.

In any case, I had to learn about “good” somehow. Even if I had some innate atomic concept of good, we are left to explain two things: first, how I managed to associate that innate atomic concept with my sense experiences; and second, how that innate atomic concept got into my brain in the first place. If it was genetic, it must have evolved; but it could only have evolved by phenotypic interaction with the external environment—that is, with natural things. We are natural beings, made of natural material, evolved by natural selection. If there is a concept of “good” encoded in my brain, whether by learning or instinct or some combination, it had to get there by some natural mechanism.

The classic argument Moore used to support this position is now called the Open Question Argument; it says, essentially, that we could take any natural property that would be proposed as the definition of “good” and call it X, and we could ask: “Sure, that’s X, but is it good?” The idea is that since we can ask this question and it seems to make sense, then X cannot be the definition of “good”. If someone asked, “I know he is an unmarried man, but is he a bachelor?” or “I know that has three sides, but is it a triangle?” we would think that they didn’t understand what they were talking about; but Moore argues that for any natural property, “I know that is X, but is it good?” is still a meaningful question. Moore uses two particular examples, X = “pleasant” and X = “what we desire to desire”; and indeed those fit what he is saying. But are these really very good examples?

One subtle point that many philosophers make about this argument is that science can discover identities between things and properties that are not immediately apparent. We now know that water is H2O, but until the 19th century we did not know this. So we could perfectly well imagine someone asking, “I know that’s H2O, but is it water?” even though in fact water is H2O and we know this. I think this sort of argument would work for some very complicated moral claims, like the claim that constitutional democracy is good; I can imagine someone quite ignorant of international affairs asking, “I know that it’s constitutional democracy, but is that good?” and making sense. This is because the goodness of constitutional democracy isn’t something conceptually necessary; it is an empirical result based on the fact that constitutional democracies are more peaceful, fair, egalitarian, and prosperous than other governmental systems. In fact, it may even be true only relative to other systems we know of; perhaps there is an as-yet-unimagined governmental system that is better still. No one thinks that constitutional democracy is a definition of moral goodness. And indeed, I think few would argue that H2O is the definition of water; instead the definition of water is something like “that wet stuff we need to drink to survive”, and it just so happens that this turns out to be H2O. If someone asked “Is that wet stuff we need to drink to survive really water?” they would rightly be thought to be talking nonsense; that’s just what water means.

But if instead of the silly examples Moore uses, we take a serious proposal that real moral philosophers have suggested, it’s not nearly so obvious that the question is open. From Kant: “Yes, that is our duty as rational beings, but is it good?” From Mill: “Yes, that increases the amount of happiness and decreases the amount of suffering in the world, but is it good?” From Aristotle: “Yes, that is kind, just, and fair, but is it good?” These do sound dangerously close to talking nonsense! If someone asked these questions, I would immediately expect an explanation of what they were getting at. And if no such explanation was forthcoming, I would, in fact, be led to conclude that they literally don’t understand what they’re talking about.

I can imagine making sense of “I know that has three sides, but is it a triangle?” in some bizarre curved multi-dimensional geometry. Even “I know he is an unmarried man, but is he a bachelor?” makes sense if you are talking about a celibate priest. Very rarely do perfect synonyms exist in natural languages, and even when they do they are often unstable due to the effects of connotations. None of this changes the fact that bachelors are unmarried men, triangles have three sides, and yes, goodness involves fulfilling rational duties, alleviating suffering, and being kind and just. (Deontology, consequentialism, and virtue theory are often thought to be distinct and incompatible; I’m convinced they amount to the same thing, which I’ll say more about in later posts.)

This line of reasoning has led some philosophers (notably Willard Quine) to deny the existence of analytic truths altogether; on Quine’s view even “2+2=4” isn’t something we can deduce directly from the meaning of the symbols. This is clearly much too strong; no empirical observation could ever lead us to deny 2+2=4. In fact, I am convinced that all mathematical truths are ultimately reducible to tautologies; even “the Fourier transform of a Gaussian is Gaussian” is ultimately a way of saying in compact jargon some very complicated statement that amounts to A=A. This is not to deny that mathematics is useful; of course mathematics is tremendously useful, because this sort of compact symbolic jargon allows us to make innumerable inferences about the world and at the same time guarantee that these inferences are correct. Whenever you see a Gaussian and you need its Fourier transform (I know, it happens a lot, right?), you can immediately know that the result will be a Gaussian; you don’t have to go through the whole derivation yourself. We are wrong to think that “ultimately reducible to a tautology” is the same as “worthless and trivial”; on the contrary, to realize that mathematics is reducible to tautology is to say that mathematics is undeniable, literally impossible to coherently deny. At least the way I use the words, the statement “Happiness is good and suffering is bad” is pretty close to that same sort of claim; if you don’t agree with it, I sense that you honestly don’t understand what I mean.
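For the curious, the Gaussian claim can be written out explicitly (using the convention \(\hat{f}(k) = \int f(x)\,e^{-ikx}\,dx\)):

\[ \int_{-\infty}^{\infty} e^{-a x^{2}}\, e^{-i k x}\, dx \;=\; \sqrt{\frac{\pi}{a}}\; e^{-k^{2}/4a}, \qquad a > 0. \]

A Gaussian in \(x\) maps to a Gaussian in \(k\), with widths inversely related; the compact jargon stands in for exactly this sort of derivation.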

In any case, I see no more fundamental difficulty in defining “good” than I do in defining any concept, like “man”, “tree”, “multiplication”, “green” or “refrigerator”; and nor do I see any point in arguing about the semantics of definition as an approach to understanding moral truth. It seems to me that Moore has confused the map with the territory, and later authors have confused him with Hume, to all of our detriment.

What’s fallacious about naturalism?

Jan 5 JDN 2460681

There is another line of attack against a scientific approach to morality, one which threatens all the more because it comes from fellow scientists. Even though they generally agree that morality is real and important, many scientists have suggested that morality is completely inaccessible to science. There are a few different ways that this claim can be articulated; the most common are Stephen Jay Gould’s concept of “non-overlapping magisteria” (NOMA), David Hume’s “is-ought problem”, and G.E. Moore’s “naturalistic fallacy”. As I will show, none of these pose serious threats to a scientific understanding of morality.

NOMA

Stephen Jay Gould, though a scientist, an agnostic, and a morally upright person, did not think that morality could be justified in scientific or naturalistic terms. He seemed convinced that moral truth could only be understood through religion, and indeed seemed to use the words “religion” and “morality” almost interchangeably:

The magisterium of science covers the empirical realm: what the Universe is made of (fact) and why does it work in this way (theory). The magisterium of religion extends over questions of ultimate meaning and moral value. These two magisteria do not overlap, nor do they encompass all inquiry (consider, for example, the magisterium of art and the meaning of beauty).

If we take Gould to be using a very circumscribed definition of “science” to just mean the so-called “natural sciences” like physics and chemistry, then the claim is trivial. Of course we cannot resolve moral questions about stem cell research entirely in terms of quantum physics or even entirely in terms of cellular biology; no one ever supposed that we could. Yes, it’s obvious that we need to understand the way people think and the way they interact in social structures. But that’s precisely what the fields of psychology, sociology, economics, and political science are designed to do. It would be like saying that quantum physics cannot by itself explain the evolution of life on Earth. This is surely true, but it’s hardly relevant.

Conversely, if we define science broadly to include all rational and empirical methods: physics, chemistry, geology, biology, psychology, sociology, astronomy, logic, mathematics, philosophy, history, archaeology, anthropology, economics, political science, and so on, then Gould’s claim would mean that there is no rational reason for thinking that rape and genocide are immoral.

And even if we suppose there is something wrong with using science to study morality, the alternative Gould offers us—religion—is far worse. As I’ve already shown in previous posts, religion is a very poor source of moral understanding. If morality is defined by religious tradition, then it is arbitrary and capricious, and real moral truth disintegrates.

Fortunately, we have no reason to think so. The entire history of ethical philosophy speaks against such notions; had Immanuel Kant and John Stuart Mill been alive to read Gould’s claims, they would have scoffed at them. I suspect Peter Singer and Thomas Pogge would scoff similarly today. Religion doesn’t offer any deep insights into morality, and reason often does; NOMA is simply wrong.

What’s the problem with “ought” and “is”?

The next common objection to a scientific approach to morality is the remark, after David Hume, that “one cannot derive an ought from an is”; due to a conflation with a loosely-related argument that G.E. Moore made later, the attempt to derive moral statements from empirical facts has come to be called the “naturalistic fallacy” (this is clearly not what Moore intended; I will address Moore’s actual point in a later post). But in truth, I do not really see where the fallacy is meant to lie; there is little difference in principle between deriving “ought” from “is” and deriving anything else from anything else.

First, let’s put aside direct inferences from “X is true” to “X ought to be true”; these are obviously fallacious. If that’s all Hume was saying, then he is of course correct; but this does little to undermine any serious scientific theory of morality. You can’t infer from “there are genocides” to “there ought to be genocides”; nor can you infer from “there ought to be happy people” to “there are happy people”; but nor would I or any other scientist seek to do so. This is a strawman of naturalistic morality.

It’s true that some people do attempt to draw similar inferences, usually stated in a slightly different form—but these are not moral scientists, they are invariably laypeople with little understanding of the subject. Arguments based on the claim that “homosexuality is unnatural” (therefore wrong) or “violence is natural” (therefore right) are guilty of this sort of fallacy, but I’ve never heard any credible philosopher or scientist support such arguments. (And by the way, homosexuality is nearly as common among animals as violence.)

A subtler way of reasoning from “is” to “ought” that is still problematic is the common practice of surveying people about their moral attitudes and experimentally testing their moral behaviors, sometimes called experimental philosophy. I do think this kind of research is useful and relevant, but it doesn’t get us as far as some people seem to think. Even if we were to prove that 100% of humans who have ever lived believe that cannibalism is wrong, it does not follow that cannibalism is in fact wrong. It is indeed evidence that there is something wrong with cannibalism—perhaps it is maladaptive to the point of being evolutionarily unstable, or it is so obviously wrong that even the most morally-blind individuals can detect its wrongness. But this extra step of explanation is necessary; it simply doesn’t follow from the fact that “everyone believes X is wrong” that in fact “X is wrong”. (Before 1900 just about everyone quite reasonably believed that the passage of time is the same everywhere regardless of location, speed or gravity; Einstein proved everyone wrong.) Moral realism demands that we admit people can be mistaken about their moral beliefs, just as they can be mistaken about other beliefs.

But these are not the only ways to infer from “is” to “ought”, and there are many ways to make such inferences that are in fact perfectly valid. For instance, I know at least two ways to validly prove moral claims from nonmoral claims. The first is by disjunctive addition: “2+2=4, therefore 2+2=4 or genocide is wrong”. The second is by contradictory explosion: “2+2=5, therefore genocide is wrong”. Both of these arguments are logically valid. Obviously they are also quite trivial; “genocide is wrong” could be replaced by any other conceivable proposition (even a contradiction!), leaving an equally valid argument. Still, we have validly derived a moral statement from nonmoral statements, while obeying the laws of logic.
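For anyone who wants to see these checked mechanically, both argument forms typecheck in a proof assistant. Here is a sketch in Lean 4; the proposition name is mine, standing in for the English sentence:

```lean
variable (GenocideIsWrong : Prop)

-- Disjunctive addition: a true premise proves "premise or anything".
example (h : 2 + 2 = 4) : 2 + 2 = 4 ∨ GenocideIsWrong :=
  Or.inl h

-- Contradictory explosion: a false premise proves anything at all.
example (h : 2 + 2 = 5) : GenocideIsWrong :=
  absurd h (by decide)
```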

Moreover, it is clearly rational to infer a certain kind of “ought” from statements that entirely involve facts. For instance, it is rational to reason, “If you are cold, you ought to close the window”. This is an instrumental “ought” (it says what it is useful to do, given the goals that you have), not a moral “ought” (which would say what goals you should have in the first place). Hence, this is not really inferring moral claims from non-moral claims, since the “ought” isn’t really a moral “ought” at all; if the ends are immoral the means will be immoral too. (It would be equally rational in this instrumental sense to say, “If you want to destroy the world, you ought to get control of the nuclear launch codes”.) In fact this kind of instrumental rationality—doing what accomplishes our goals—actually gets us quite far in defining moral norms for real human beings; but clearly it does not get us far enough.

Finally, and most importantly, epistemic normativity, which any rational being must accept, is itself an inference from “is” to “ought”; it involves inferences like “It is raining, therefore you ought to believe it is raining.”

With these considerations in mind, we must carefully rephrase Hume’s remark, to something like this:

One cannot nontrivially with logical certainty derive moral statements from entirely nonmoral statements.

This is indeed correct; but here the word “moral” carries no weight and could be replaced by almost anything. One cannot nontrivially with logical certainty derive physical statements from entirely nonphysical statements, nor nontrivially with logical certainty derive statements about fish from statements that are entirely not about fish. For all X, one cannot nontrivially with logical certainty derive statements about X from statements entirely unrelated to X. This is an extremely general truth. We could very well make it a logical axiom. In fact, if we do so, we pretty much get relevance logic, which takes the idea of “nontrivial” proofs to the extreme of actually considering trivial proofs invalid. Most logicians don’t go so far—they say that “2+2=5, therefore genocide is wrong” is technically a valid argument—but everyone agrees that such arguments are pointless and silly. In any case the word “moral” carries no weight here; it is no harder to derive an “ought” from an “is” than it is to derive a “fish” from a “molecule”.

Moreover, the claim that nonmoral propositions can never validly influence moral propositions is clearly false; the argument “Killing is wrong, shooting someone will kill them, therefore shooting someone is wrong” is entirely valid, and the moral proposition “shooting someone is wrong” is derived in large part from the nonmoral proposition “shooting someone will kill them”. In fact, the entire Frege-Geach argument against expressivism hinges upon the fact that we all realize that moral propositions function logically the same way as nonmoral propositions, and can interact with nonmoral propositions in all the usual ways. Even expressivists usually do not deny this; they simply try to come up with ways of rescuing expressivism despite this observation.

There are also ways of validly deriving moral propositions from entirely nonmoral propositions, in an approximate or probabilistic fashion. “Genocide causes a great deal of suffering and death, and almost everyone who has ever lived has agreed that suffering and death are bad and that genocide is wrong, therefore genocide is probably wrong” is a reasonably sound probabilistic argument that infers a moral conclusion based on entirely nonmoral premises, though it lacks the certainty of a logical proof.

We could furthermore take as axiom some definition of moral concepts in terms of nonmoral concepts, and then derive consequences of this definition with logical certainty. “A morally right action maximizes pleasure and minimizes pain. Genocide fails to maximize pleasure or minimize pain. Therefore genocide is not morally right.” Obviously one is free to challenge the definition, but that’s true of many different types of philosophical arguments, not a specific problem in arguments about morality.
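In the same Lean 4 style, a hedged sketch of that derivation (all names are mine; the definition itself is the contestable philosophical step, not the logic):

```lean
axiom Action : Type
axiom MaximizesNetPleasure : Action → Prop
axiom genocide : Action

-- The definitional axiom: morally right = maximizes pleasure, minimizes pain.
def MorallyRight (a : Action) : Prop := MaximizesNetPleasure a

-- Premise: genocide fails to maximize pleasure or minimize pain.
axiom genocideFails : ¬ MaximizesNetPleasure genocide

-- The conclusion follows with logical certainty by unfolding the definition.
example : ¬ MorallyRight genocide := genocideFails
```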

So what exactly was Hume trying to say? I’m really not sure. Maybe he has in mind the sort of naive arguments that infer from “unnatural” to “wrong”; if so, he’s surely correct, but the argument does little to undermine any serious naturalistic theories of morality.

On land acknowledgments

Dec 29 JDN 2460674

Noah Smith and Brad DeLong, both of whom I admire, have recently written about the practice of land acknowledgments. Smith is wholeheartedly against them. DeLong has a more nuanced view. Smith in fact goes so far as to argue that there is no moral basis for considering these lands to be ‘Native lands’ at all, which DeLong rightly takes issue with.

I feel like this might be an issue where it would be better to focus on Native American perspectives. (Not that White people aren’t allowed to talk about it; just that we tend to hear from them on everything, and this is something where maybe they’re less likely to know what they’re talking about.)

It turns out that Native views on land acknowledgments are also quite mixed; some see them as a pointless, empty gesture; others see them as a stepping-stone to more serious policy changes that are necessary. There is general agreement that more concrete actions, such as upholding treaties and maintaining tribal sovereignty, are more important.

I have to admit I’m much more in the “empty gesture” camp. I’m only one-fourth Native (so I’m Whiter than I am not), but my own view on this is that land acknowledgments aren’t really accomplishing very much, and in fact aren’t even particularly morally defensible.

Now, I know that it’s not realistic to actually “give back” all the land in the United States (or Australia, or anywhere else where indigenous people were forced out by colonialism). Many of the tribes that originally lived on the land are gone, scattered to the winds, or now living somewhere else they were forced to go (predominantly Oklahoma). Moreover, there are now more non-Native people living on that land than there ever were Native people living on it, and forcing them all out would be just as violent and horrific as forcing out the Native people was in the first place.

I even appreciate Smith’s point that there is something problematic about assigning ownership of land to bloodlines of people just because they happened to be the first ones living there. Indeed, as he correctly points out, they often weren’t the first ones living there; different tribes have been feuding and warring with each other since time immemorial, and it’s likely that any given plot of land was held by multiple different tribes at different times even before colonization.

Let’s make this a little more concrete.

Consider the Beaver Wars.

The Beaver Wars were a series of conflicts between the Haudenosaunee (that’s what they call themselves; to a non-Native audience they are better known by what the French called them, Iroquois) and several other tribes. Now, that was after colonization, and the French were involved, and part of what they were fighting over was the European fur trade—so the story is a bit complicated by that. But it’s a conflict we have good historical records of, and it’s pretty clear that many of these rivalries long pre-dated the arrival of the French.

The Haudenosaunee were brutal in the Beaver Wars. They slaughtered thousands, including many helpless civilians; they effectively wiped out several entire tribes, including the Erie and Susquehannock, and devastated several others, including the Mohicans and the Wyandot. Many historians consider these to be acts of genocide. Surely any land that the Haudenosaunee claimed as a result of the Beaver Wars is as illegitimate as land claimed by colonial imperialism? Indeed, isn’t it colonial imperialism?

Yet we have no reason to believe that these brutal wars were unique to the Haudenosaunee, or that they only occurred after colonization. Our historical records aren’t as clear going that far back, because many Native tribes didn’t keep written records—in fact, many didn’t even have a written language. But what we do know suggests that a great many tribes warred with a great many other tribes, and land was gained and lost in warfare, going back thousands of years.

Indeed, it seems to be a sad fact of human history that virtually all land, indigenous or colonized, is actually owned by a group that conquered another group (that conquered another group, that conquered another group…). European colonialism was simply the most recent conquest.

But this doesn’t make European colonialism any more justifiable. Rather, it raises a deeper question:

How should we decide who owns what land?

The simplest way, and the way that we actually seem to use most of the time, is to simply take whoever currently owns the land as its legitimate ownership. “Possession is nine-tenths of the law” was always nonsense when it comes to private property (that’s literally what larceny means!), but when it comes to national sovereignty, it is basically correct. Once a group manages to organize itself well enough to enforce control over a territory, we pretty much say that it’s their territory now and they’re allowed to keep it.

Does that mean that anyone is just allowed to take whatever land they can successfully conquer and defend? That the world must simply accept that chaos and warfare are inevitable? Fortunately, there is a solution to this problem.

The Westphalian solution.

The current solution to this problem is what’s called Westphalian sovereignty, after the Peace of Westphalia, two closely-related treaties that were signed in Westphalia (a region of Germany) in 1648. Those treaties established a precedent in international law that nations are entitled to sovereignty over their own territory; other nations are not allowed to invade and conquer them, and if anyone tries, the whole international community should fight to resist any such attempt.

Effectively, what Westphalia did was establish that whoever controlled a given territory right now (where “right now” means 1648) now gets the right to hold it forever—and everyone else not only has to accept that, they are expected to defend it. Now, clearly this has not been followed precisely; new nations have gained independence from their empires (like the United States), nations have separated into pieces (like India and Pakistan, the Balkans, and most recently South Sudan), and sometimes even nations have successfully conquered each other and retained control—but the latter has been considerably rarer than it was before the establishment of Westphalian sovereignty. (Indeed, part of what makes the Ukraine War such an aberration is that it is a brazen violation of Westphalian sovereignty the likes of which we haven’t seen since the Second World War.)

This was, as far as I can tell, a completely pragmatic solution, with absolutely no moral basis whatsoever. We knew in 1648, and we know today, that virtually every nation on Earth was founded in bloodshed, its land taken from others (who took it from others, who took it from others…). And it was timed in such a way that European colonialism became etched in stone—no European power was allowed to take over another European power’s colonies anymore, but they were all allowed to keep all the colonies they already had, and the people living in those colonies didn’t get any say in the matter.

Since then, most (but by no means all) of those colonies have revolted and gained their own independence. But by the time it happened, there were large populations of former colonists, and the indigenous populations were often driven out, dramatically reduced, or even outright exterminated. There is something unsettling about founding a new democracy like the United States or Australia after centuries of injustice and oppression have allowed a White population to establish a majority over the indigenous population; had indigenous people been democratically represented all along, things would probably have gone a lot differently.

What do land acknowledgments accomplish?

I think that the intent behind land acknowledgments is to recognize and commemorate this history of injustice, in the hopes of somehow gaining some kind of at least partial restitution. The intentions here are good, and the injustices are real.

But there is something fundamentally wrong with the way most land acknowledgments are done, because they basically just push the sovereignty back one step: They assert that whoever held the land before Europeans came along is the land’s legitimate owner. But what about the people before them (and the people before them, and the people before them)? How far back in the chain of violence are we supposed to go before we declare a given group’s conquests legitimate?

How far back can we go?

Most of these events happened many centuries ago and were never written down, and all we have now is vague oral histories that may or may not even be accurate. Particularly when one tribe forces out another, it rather behooves the conquering tribe to tell the story in their own favor, as one of “reclaiming” land that was rightfully theirs all along, whether or not that was actually true—as they say, history is written by the victors. (I think it’s actually more true when the history is never actually written.) And in some cases it’s probably even true! In others, that land may have been contested between the two tribes for so long that nobody honestly knows who owned it first.

It feels wrong to legitimate the conquests of colonial imperialism, but it feels just as wrong to simply push it back one step—or three steps, or seven steps.

I think that ultimately what we must do is acknowledge this entire history.

We must acknowledge that this land was stolen by force from Native Americans, and also that most of those Native Americans acquired their land by stealing it by force from other Native Americans, and the chain goes back farther than we have records. We must acknowledge that this is by no means unique to the United States but in fact a universal feature of almost all land held by anyone anywhere in the world. We must acknowledge that this chain of violence and conquest has been a part of human existence since time immemorial—and affirm our commitment to end it, once and for all.

That doesn’t simply mean accepting the current allocation of land; land, like many other resources, is clearly distributed unequally and unfairly. But it does mean that however we choose to allocate land, we must do so by a fair and peaceful process, not by force and conquest. The chain of violence that has driven human history for thousands of years must finally be brought to an end.