Evolution: Foundations of Genetics


Jan 26 JDN 2460702

It frustrates me that in American society, evolutionary biology is considered a controversial topic. When I use knowledge from quantum physics or from organic chemistry, all I need to do is cite a credible source; I don’t need to preface it with a defense of the entire scientific field. Yet in the United States today, even basic statements of facts observed in evolutionary biology are met with incredulity. The consensus in the scientific community about evolution is greater than the consensus about quantum physics, and comparable to the consensus about organic chemistry. 95% of scientists agree that evolution happens, that Darwinian natural selection is the primary cause, and that human beings share a common ancestor with every other life form on Earth. Polls of scientists have consistently made this clear, and the wild success of Project Steve continues to vividly demonstrate it.

But I would rather defend evolution than have to tiptoe around it, or worse, have my conclusions ignored because I use it. So, here goes.

You may think you understand evolution, but especially if you doubt that evolution is true, odds are good that you really don’t. Even most people who have taken college courses in evolutionary biology have difficulty understanding evolution.

Evolution is a very rich and complicated science, and I don’t have room to do it justice here. I merely hope that I can give you enough background to make sense of the core concepts, and convince you that evolution is real and important.

Foundations of genetics

So let us start at the beginning. DNA—deoxyribonucleic acid—is a macromolecular (very big and complicated) organic (carbon-based) acid (chemical that can give up hydrogen ions in solution) that is produced by all living cells. More properly, it is a class of macromolecular organic acids, because differences between DNA strands are actually chemical differences in the molecule. The structure of DNA consists of two long chains of constituent molecules called nucleotides, each carrying one of four bases; for chemical reasons these bases bond in pairs, adenine (A) with thymine (T) and guanine (G) with cytosine (C). Pairs of bonded nucleotides are called base pairs. We call it a “double helix” because the two chains are normally wrapped around each other in a helix shape.

Because of this base-pair correspondence, the two strands of a DNA molecule are complementary; if one half is GATTACA, the other half will be CTAATGT. The correspondence works in both directions: either strand can be reconstructed from the other, and this is how DNA replicates. A DNA molecule GATTACA/CTAATGT can split into its GATTACA half and its CTAATGT half; the original GATTACA half will then acquire new nucleotides and make a new CTAATGT for itself, while the original CTAATGT half makes a new GATTACA. At the end of this process, two precise copies of the original GATTACA/CTAATGT molecule will result. This process can be repeated as necessary.
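
If it helps to see the bookkeeping spelled out, here is a minimal sketch in Python of the complementarity rule (just string manipulation, of course; the cell does all of this with chemistry, not with code):

```python
# Base-pairing rules: A with T, G with C.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the strand that would pair with the given one, base by base."""
    return "".join(PAIRS[base] for base in strand)

print(complement("GATTACA"))              # CTAATGT
print(complement(complement("GATTACA")))  # GATTACA: either strand recovers the other
```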

DNA molecules can vary in size from a few base-pairs (like the sequence GATTACA), to the roughly 160,000 base-pairs of Carsonella bacteria (among the smallest bacterial genomes known), up to the 3 billion base-pairs of humans and beyond. While complexity of DNA and complexity of organism are surely related (it’s impossible to make a really complicated organism with very simple DNA), more base pairs does not necessarily imply a more complex organism. The single-celled amoeboid Polychaos dubium has 670 billion base-pairs. Amoeboids are relatively complex, all things considered; but they’re hardly 200 times more complex than we are!

The copying of DNA is exceedingly precise, but like anything in real life, not perfect. Cells have many physical and chemical mechanisms to correct bad copying, but sometimes—about 1 in 1 million base-pairs copied—something goes wrong. Sometimes, one nucleotide gets switched for another; perhaps what should have been a T becomes an A, or what should have been an A becomes a G. Other times, a whole sequence of DNA gets duplicated and inserted in a new place; still other times entire pieces of DNA are lost, never to be copied again. In some cases a sequence is flipped around backwards. All of these things (a single-nucleotide substitution, an insertion, a deletion, and an inversion, respectively) are forms of mutation. Mutation is always happening, but its rate can be increased by radiation, toxins, and other stresses. Usually cells with badly mutated DNA are destroyed, either by their own self-destruct mechanisms or by the immune system; if not, mutant body cells can cause cancer or other health problems. Usually it’s only mutations in gametes—the sperm and egg cells that carry DNA to the next generation—that actually have a long-term effect on future generations. Most mutations do not have any significant effect, and of those that do, most are harmful. It is only the rare minority of mutations that actually produces something useful to an organism’s survival.
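
To make those four kinds of error concrete, here is a toy sketch in Python; it is purely illustrative (real mutation is chemistry acting on a molecule, not string editing), but it shows what each term means:

```python
def substitute(seq, i, new_base):   # single-nucleotide substitution
    return seq[:i] + new_base + seq[i + 1:]

def insert(seq, i, fragment):       # insertion (e.g. a duplicated stretch put in a new place)
    return seq[:i] + fragment + seq[i:]

def delete(seq, i, length):         # deletion: a stretch lost, never to be copied again
    return seq[:i] + seq[i + length:]

def invert(seq, i, length):         # inversion: a stretch flipped around backwards
    return seq[:i] + seq[i:i + length][::-1] + seq[i + length:]

original = "GATTACA"
print(substitute(original, 1, "G"))  # GGTTACA
print(insert(original, 3, "TAC"))    # GATTACTACA
print(delete(original, 2, 2))        # GAACA
print(invert(original, 2, 3))        # GAATTCA
```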

What does DNA do? It makes proteins. Technically, proteins make other proteins (enzymes called transcriptases and polymerases and so on), but which protein is produced by such a process is dependent upon the order of base pairs in a DNA strand. DNA has been likened to a “code” or a “message”, but this is a little misleading. It’s definitely a sequence that contains information, but the “code” is less like a cryptographer’s cipher and more like a computer’s machine code; it interacts directly with the hardware to produce an output. And it’s important to understand that when DNA is “read” and “decoded”, it’s all happening purely by chemical reactions, and there is no conscious being doing the reading. While metaphorically we might say that DNA is a “code” or a “language”, we must not take these metaphors too literally; DNA is not a language in the same sense as English, nor is it a code in the same sense as the Enigma cipher.
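
As a loose illustration of that machine-code point, here is a minimal sketch in Python: a tiny fragment of the standard genetic code (written in DNA letters, skipping over the RNA intermediate for simplicity), read three bases at a time by mechanical lookup. There is no reader and no interpretation, just a fixed mapping:

```python
# A deliberately tiny piece of the standard genetic code (the full table has 64 codons).
CODON_TABLE = {
    "ATG": "Met",  # methionine; also the usual "start" signal
    "TTT": "Phe", "AAA": "Lys", "GGC": "Gly", "TGG": "Trp",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    """Read the sequence three bases at a time until a stop codon (or the end)."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")  # "?" = codon not in our toy table
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTAAATGGTAA"))  # ['Met', 'Phe', 'Lys', 'Trp']
```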

Genotype and phenotype

DNA is also not a “blueprint”, as it is sometimes described. There is a one-to-one correspondence between a house and its blueprint: given a house, it would be easy to draw a blueprint much like the original blueprint; given a blueprint, one can construct basically the same house. DNA is not like this. There is no one-to-one correspondence between DNA and a living organism’s structure. Given the traits of an organism, it is impossible to reconstruct its DNA—and purely from the DNA, it is impossible to reconstruct the organism. A better analogy is to a recipe, which offers a general guide as to what to make and how to make it, but depending on the cook and the ingredients, may give quite different results. The ingredients in this case are nutrients, and the “cook” is the whole of our experience and interaction with the environment. No experience or environment can act upon us unless we have the right genes and nutrients to make it effective. No matter how long you let it sit, bread with no yeast will never rise—and no matter how hard you try to teach him, your dog will never be able to speak in fluent sentences.

Furthermore, genes rarely do only one thing in an organism; much as drugs have side effects, so do genes, a phenomenon called pleiotropy. Some genes are more pleiotropic than others, but really, all genes are pleiotropic. In any complex organism, genes will have complex effects. The genes of an organism are its genotype; the actual traits that it has are its phenotype. We have these two different words precisely because they are different things; genotype influences phenotype, but many other things influence phenotype besides genotype. The answer to the question “Nature or Nurture?” is always—always—“Both”. There are much more useful questions to ask, like “How much of the variation of this trait within this population is attributable to genetic differences?”, “How do environmental conditions trigger this phenotype in the presence of this genotype?”, and “Under what ecological circumstances would this genotype evolve?”

This is why it’s a bit misleading to talk about “the gene for homosexuality” or “the gene for religiosity”; taken literally this would be like saying “the ingredient for chocolate cake” or “the beam for the Empire State Building”. At best we can distinguish certain genes that might, in the context of many other genes and environmental contributions, make a difference between particular states—much as removing the cocoa from chocolate cake makes some other kind of cake, removing a particular gene from someone strongly homosexual might make them nearer to heterosexual. It’s not that genes can be mapped one-to-one to traits of an organism; but rather that in many cases a genetic difference corresponds to a difference in traits that is ecologically significant. This is what geneticists mean when they say “the gene for X”; it’s a very useful concept in evolutionary theory, but I don’t think it’s one most laypeople understand. As usual, Richard Dawkins explains this matter brilliantly:

Probably the first point to make is that whenever a geneticist speaks of a gene ‘for’ such and such a characteristic, say brown eyes, he never means that this gene affects nothing else, nor that it is the only gene contributing to the brown pigmentation. Most genes have many distantly ramified and apparently unconnected effects. A vast number of genes are necessary for the development of eyes and their pigment. When a geneticist talks about a single gene effect, he is always talking about a difference between individuals. A gene ‘for brown eyes’ is not a gene that, alone and unaided, manufactures brown pigment. It is a gene that, when compared with its alleles (alternatives at the same chromosomal locus), in a normal environment, is responsible for the difference in eye colour between individuals possessing the gene and individuals not possessing the gene. The statement ‘G1 is a gene for phenotypic characteristic P1’ is always a shorthand. It always implies the existence, or potential existence, of at least one alternative gene G2, and at least one alternative characteristic P2. It also implies a normal developmental environment, including the presence of the other genes which are common in the gene pool as a whole, and therefore likely to be in the same body. If all individuals had two copies of the gene ‘for’ brown eyes and if no other eye colour ever occurred, the ‘gene for brown eyes’ would strictly be a meaningless concept. It can only be defined by reference to at least one potential alternative. Of course any gene exists physically in the sense of being a length of DNA; but it is only properly called a gene ‘for X’ if there is at least one alternative gene at the same chromosomal locus, which leads to not X.

It follows that there is no clear limit to the complexity of the ‘X’ which we may substitute in the phrase ‘a gene for X’. Reading, for example, is a learned skill of immense and subtle complexity. A gene for reading would, to naive common sense, be an absurd notion. Yet, if we follow genetic terminological convention to its logical conclusion, all that would be necessary in order to establish the existence of a gene for reading is the existence of a gene for not reading. If a gene G2 could be found which infallibly caused in its possessors the particular brain lesion necessary to induce specific dyslexia, it would follow that G1, the gene which all the rest of us have in double dose at that chromosomal locus, would by definition have to be called a gene for reading.

It’s important to keep this in mind when interpreting any new ideas or evidence from biology. Just as cocoa by itself is not chocolate cake because one also needs all the other ingredients that make it cake in the first place, “the gay gene” cannot exist in isolation because in order to be gay one needs all the other biological and neurological structures that make one a human being in the first place. Moreover, just as cocoa changes the consistency of a cake so that other ingredients may need to be changed to compensate, so a hypothetical “gay gene” might have other biological or neurological effects that would be inseparable from its contribution to sexual orientation.

It’s also important to point out that hereditary is not the same thing as genetic. By comparing pedigrees, it is relatively straightforward to determine the heritability of a trait within a population—but this is not the same as determining whether the trait is genetic. A great many traits that have nothing to do with DNA are systematically inherited from parents—language, culture, and wealth, for instance. (These too can evolve, but it’s a different kind of evolution.) In the United States, IQ is about 80% heritable; but so is height, and yet nutrition has large, well-documented effects on height (the simplest case: malnourished people never grow very tall). If, as is almost certainly the case, there are many environmental influences such as culture and education that can affect IQ scores, then the heritability of IQ tells us very little.
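
To make vivid why heritability is a statement about variation within a population rather than about genes acting alone, here is a toy simulation in Python (every number in it is made up; it is only a sketch of the concept). The very same genetic contribution looks highly heritable in a uniform environment and much less heritable in a varied one:

```python
import random
from statistics import pvariance

random.seed(0)

def apparent_heritability(env_sd, n=10_000):
    """Toy model: phenotype = genetic contribution + environmental contribution."""
    genes = [random.gauss(0, 1.0) for _ in range(n)]
    environment = [random.gauss(0, env_sd) for _ in range(n)]
    phenotype = [g + e for g, e in zip(genes, environment)]
    # Share of phenotypic variance attributable to genetic variance:
    return pvariance(genes) / pvariance(phenotype)

print(apparent_heritability(env_sd=0.5))  # ~0.8: uniform environment, trait looks "very genetic"
print(apparent_heritability(env_sd=2.0))  # ~0.2: same genes, varied environment, low heritability
```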

In fact, some traits are genetic but not hereditary! Certain rare genetic diseases can appear by what is called de novo mutation; the genes that cause them can randomly appear in an individual without having been present in their parents. Neurofibromatosis occurs in as many people with no family history as it does in people with family history; and yet, neurofibromatosis is definitely a genetic disorder, for it can be traced to particular sections of defective DNA.

Honestly, most of the debate about nature versus nurture in human behavior is really quite pointless. Even if you ignore the general facts that phenotype is always an interaction between genes and environment, and that feedback occurs between genes and environment over evolutionary time, human beings are the species for which the “Nature or nurture?” question is at its most meaningless. It is human nature to be nurtured; it is written within our genes that we should be flexible, intelligent beings capable of learning and training far beyond our congenital capacities. An ant’s genes are not written that way; ants play out essentially the same program in every place and time, because that program is hard-wired within them. Humans have an enormous variety of behaviors—far outstripping the variety in any other species—despite having genetic variation of only about 0.1%; clearly most of the differences between humans are environmental. Yet, it is precisely the genes that code for being Homo sapiens that make this possible; if we’d had the genes of an ant or an earthworm, we wouldn’t have this enormous behavioral plasticity. So each person is who they are largely because of their environment—but that itself would not be true without the genes we all share.

On this, my 37th birthday

Jan 19 JDN 2460695

This post will go live on my 37th birthday. I’m now at an age where birthdays don’t really feel like a good thing.

This past year has been one of my worst ever.

It started with returning home from the UK, burnt out, depressed, suffering from frequent debilitating migraines. I had no job prospects, and I was too depressed to search for any. I moved in with my mother, who lately has been suffering health problems of her own.

Gradually, far too gradually, some aspects of my situation improved; my migraines are now better controlled, my depression has been reduced. I am now able to search for jobs at least—but I still haven’t found one. I would say that my mother’s health is better than it was—but several of her conditions are chronic, and much of this struggle will continue indefinitely.

I look back on this year feeling shame, despair, failure, and defeat. I haven’t published anything—fiction, nonfiction, or scientific work—in years, and after months of searching I still haven’t found a job that would let me and my husband move to a home of our own. My six figures of student debt are now in forbearance, because the SAVE plan was struck down in court. (At least they’re not accruing interest….) I can’t think of anything I’ve done this year that I would count as a meaningful accomplishment. I feel like I’m just treading water, trying not to drown.

I see others my age finding careers, buying homes, starting families. Honestly they’re a little old to be doing these things now—we Millennials have drawn the short straw on homeownership for sure. (The median age of first-time homebuyers is now 38 years old—the highest ever recorded. In 1981, it was only 29.) I don’t see that happening for me any time soon, and I feel a deep grief over that.

I have not had a year go this badly since high school, when I was struggling even more with migraines and depression. Back then I had debilitating migraines multiple times per week, and my depression sometimes kept me from getting out of bed. I even had suicidal thoughts for a time, though I never made any plans or attempts.

Somehow, despite all that, I still managed to maintain straight As in high school and became a kind of de facto valedictorian. (My school technically didn’t have a valedictorian, but I had the best grades, and I successfully petitioned for special dispensation to deliver a much longer graduation speech than any other student.) Some would say this was because I was so brilliant, but I say it was because high school was too easy—and that this set me up for unrealistic expectations later in life. I am a poster child for Gifted Kid Syndrome and Impostor Syndrome. Honestly, maybe I would have gotten better help for my conditions sooner if my grades had slipped.

Will the coming year be better?

In some ways, probably. Now that my migraines and depression are better controlled—but by no means gone—I have been able to actively search for jobs, and I should be able to find one that fits me eventually (or so I keep trying to convince myself, when it all feels hopeless and pointless). And once I do have a job, whenever that happens, I might be able to start saving up for a home and finally move forward into feeling like a proper adult in this society.

But I look to the coming year feeling fear and dread, as Trump will soon take office and already looks primed to be far worse the second time around. In all likelihood I personally won’t suffer very much from Trump’s incompetence and malfeasance—but millions of other people will, and I don’t know how I can help them, especially when I seem so ineffectual at helping myself.

Moore’s “naturalistic fallacy”

Jan 12 JDN 2460688

In last week’s post I talked about some of the arguments against ethical naturalism, which have sometimes been called “the naturalistic fallacy”.

The “naturalistic fallacy” that G.E. Moore actually wrote about is somewhat subtler; it says that there is something philosophically suspect about defining something non-natural in terms of natural things—and furthermore, it says that “good” is not a natural thing and so cannot be defined in terms of natural things. For Moore, “good” is not something that can be defined with recourse to facts about psychology, biology or mathematics; “good” is simply an indefinable atomic concept that exists independent of all other concepts. As such Moore was criticizing moral theories like utilitarianism and hedonism that seek to define “good” in terms of “pleasure” or “lack of pain”; for Moore, good cannot have a definition in terms of anything except itself.

My greatest problem with this position is less philosophical than linguistic; how does one go about learning a concept that is so atomic and indefinable? When I was a child, I acquired an understanding of the word “good” that has since expanded as I grew in knowledge and maturity. I need not have called it “good”: had I been raised in Madrid, I would have called it bueno; in Beijing, hao; in Kyoto, ii; in Cairo, jaiid; and so on.

I’m not even sure if all these words really mean exactly the same thing, since each word comes with its own cultural and linguistic connotations. A vast range of possible sounds could be used to express this concept and related concepts—and somehow I had to learn which sounds were meant to symbolize which concepts, and what relations were meant to hold between them. This learning process was highly automatic, and occurred when I was very young, so I do not have great insight into its specifics; but nonetheless it seems clear to me that in some sense I learned to define “good” in terms of things that I could perceive. No doubt this definition was tentative, and changed with time and experience; indeed, I think all definitions are like this. Perhaps my knowledge of other concepts, like “pleasure”, “happiness”, “hope” and “justice”, is interconnected with “good” in such a way that none can be defined separately from the others—indeed perhaps language itself is best considered a network of mutually-reinforcing concepts, each with some independent justification and some connection to other concepts, not a straightforward derivation from more basic atomic notions. If you wish, call me a “foundherentist” in the tradition of Susan Haack; I certainly do think that all beliefs have some degree of independent justification by direct evidence and some degree of mutual justification by coherence. Haack uses the metaphor of a crossword puzzle, but I prefer Alison Gopnik’s mathematical model of a Bayes net. In any case, I had to learn about “good” somehow. Even if I had some innate atomic concept of good, we are left to explain two things: First, how I managed to associate that innate atomic concept with my sense experiences, and second, how that innate atomic concept got in my brain in the first place. If it was genetic, it must have evolved; but it could only have evolved by phenotypic interaction with the external environment—that is, with natural things. We are natural beings, made of natural material, evolved by natural selection. If there is a concept of “good” encoded into my brain either by learning or instinct or whatever combination, it had to get there by some natural mechanism.

The classic argument Moore used to support this position is now called the Open Question Argument; it says, essentially, that we could take any natural property that would be proposed as the definition of “good” and call it X, and we could ask: “Sure, that’s X, but is it good?” The idea is that since we can ask this question and it seems to make sense, then X cannot be the definition of “good”. If someone asked, “I know he is an unmarried man, but is he a bachelor?” or “I know that has three sides, but is it a triangle?” we would think that they didn’t understand what they were talking about; but Moore argues that for any natural property, “I know that is X, but is it good?” is still a meaningful question. Moore uses two particular examples, X = “pleasant” and X = “what we desire to desire”; and indeed those fit what he is saying. But are these really very good examples?

One subtle point that many philosophers make about this argument is that science can discover identities between things and properties that are not immediately apparent. We now know that water is H2O, but until the 19th century we did not know this. So we could perfectly well imagine someone asking, “I know that’s H2O, but is it water?” even though in fact water is H2O and we know this. I think this sort of argument would work for some very complicated moral claims, like the claim that constitutional democracy is good; I can imagine someone who was quite ignorant of international affairs asking: “I know that it’s constitutional democracy, but is that good?” and still making sense. This is because the goodness of constitutional democracy isn’t conceptually necessary; it is an empirical result based on the fact that constitutional democracies are more peaceful, fair, egalitarian, and prosperous than other governmental systems. In fact, it may even be only true relative to other systems we know of; perhaps there is an as-yet-unimagined governmental system that is better still. No one thinks that constitutional democracy is a definition of moral goodness. And indeed, I think few would argue that H2O is the definition of water; instead the definition of water is something like “that wet stuff we need to drink to survive” and it just so happens that this turns out to be H2O. If someone asked “is that wet stuff we need to drink to survive really water?” he would rightly be thought to be talking nonsense; that’s just what water means.

But if instead of the silly examples Moore uses, we take a serious proposal that real moral philosophers have suggested, it’s not nearly so obvious that the question is open. From Kant: “Yes, that is our duty as rational beings, but is it good?” From Mill: “Yes, that increases the amount of happiness and decreases the amount of suffering in the world, but is it good?” From Aristotle: “Yes, that is kind, just, and fair, but is it good?” These do sound dangerously close to talking nonsense! If someone asked these questions, I would immediately expect an explanation of what they were getting at. And if no such explanation was forthcoming, I would, in fact, be led to conclude that they literally don’t understand what they’re talking about.

I can imagine making sense of “I know that has three sides, but is it a triangle?” in some bizarre curved multi-dimensional geometry. Even “I know he is an unmarried man, but is he a bachelor?” makes sense if you are talking about a celibate priest. Very rarely do perfect synonyms exist in natural languages, and even when they do they are often unstable due to the effects of connotations. None of this changes the fact that bachelors are unmarried men, triangles have three sides, and yes, goodness involves fulfilling rational duties, alleviating suffering, and being kind and just. (Deontology, consequentialism, and virtue theory are often thought to be distinct and incompatible; I’m convinced they amount to the same thing, which I’ll say more about in later posts.)

This line of reasoning has led some philosophers (notably Willard Quine) to deny the existence of analytic truths altogether; on Quine’s view even “2+2=4” isn’t something we can deduce directly from the meaning of the symbols. This is clearly much too strong; no empirical observation could ever lead us to deny 2+2=4. In fact, I am convinced that all mathematical truths are ultimately reducible to tautologies; even “the Fourier transform of a Gaussian is Gaussian” is ultimately a way of saying in compact jargon some very complicated statement that amounts to A=A. This is not to deny that mathematics is useful; of course mathematics is tremendously useful, because this sort of compact symbolic jargon allows us to make innumerable inferences about the world and at the same time guarantee that these inferences are correct. Whenever you see a Gaussian and you need its Fourier transform (I know, it happens a lot, right?), you can immediately know that the result will be a Gaussian; you don’t have to go through the whole derivation yourself. We are wrong to think that “ultimately reducible to a tautology” is the same as “worthless and trivial”; on the contrary, to realize that mathematics is reducible to tautology is to say that mathematics is undeniable, literally impossible to coherently deny. At least the way I use the words, the statement “Happiness is good and suffering is bad” is pretty close to that same sort of claim; if you don’t agree with it, I sense that you honestly don’t understand what I mean.
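
For the curious, the claim being compressed in that bit of jargon is the following identity (under one common convention for the transform; treat this as a sketch written from memory rather than a careful derivation):

```latex
\mathcal{F}\{e^{-a x^{2}}\}(\omega)
  = \int_{-\infty}^{\infty} e^{-a x^{2}}\, e^{-i \omega x}\, dx
  = \sqrt{\frac{\pi}{a}}\; e^{-\omega^{2}/(4a)}, \qquad a > 0.
```

A Gaussian in x maps to another Gaussian in ω, with a width inversely related to the original; unpacking why that is so is exactly the sort of long chain of definitions the compact statement spares us.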

In any case, I see no more fundamental difficulty in defining “good” than I do in defining any concept, like “man”, “tree”, “multiplication”, “green” or “refrigerator”; and nor do I see any point in arguing about the semantics of definition as an approach to understanding moral truth. It seems to me that Moore has confused the map with the territory, and later authors have confused him with Hume, to all of our detriment.

What’s fallacious about naturalism?

Jan 5 JDN 2460681

There is another line of attack against a scientific approach to morality, one which threatens all the more because it comes from fellow scientists. Even though they generally agree that morality is real and important, many scientists have suggested that morality is completely inaccessible to science. There are a few different ways that this claim can be articulated; the most common are Stephen Jay Gould’s concept of “non-overlapping magisteria” (NOMA), David Hume’s “is-ought problem”, and G.E. Moore’s “naturalistic fallacy”. As I will show, none of these pose serious threats to a scientific understanding of morality.

NOMA

Stephen Jay Gould, though a scientist, an agnostic, and a morally upright person, did not think that morality could be justified in scientific or naturalistic terms. He seemed convinced that moral truth could only be understood through religion, and indeed seemed to use the words “religion” and “morality” almost interchangeably:

The magisterium of science covers the empirical realm: what the Universe is made of (fact) and why does it work in this way (theory). The magisterium of religion extends over questions of ultimate meaning and moral value. These two magisteria do not overlap, nor do they encompass all inquiry (consider, for example, the magisterium of art and the meaning of beauty).

If we take Gould to be using a very circumscribed definition of “science” to just mean the so-called “natural sciences” like physics and chemistry, then the claim is trivial. Of course we cannot resolve moral questions about stem cell research entirely in terms of quantum physics or even entirely in terms of cellular biology; no one ever supposed that we could. Yes, it’s obvious that we need to understand the way people think and the way they interact in social structures. But that’s precisely what the fields of psychology, sociology, economics, and political science are designed to do. It would be like saying that quantum physics cannot by itself explain the evolution of life on Earth. This is surely true, but it’s hardly relevant.

Conversely, if we define science broadly to include all rational and empirical methods: physics, chemistry, geology, biology, psychology, sociology, astronomy, logic, mathematics, philosophy, history, archaeology, anthropology, economics, political science, and so on, then Gould’s claim would mean that there is no rational reason for thinking that rape and genocide are immoral.

And even if we suppose there is something wrong with using science to study morality, the alternative Gould offers us—religion—is far worse. As I’ve already shown in previous posts, religion is a very poor source of moral understanding. If morality is defined by religious tradition, then it is arbitrary and capricious, and real moral truth disintegrates.

Fortunately, we have no reason to think so. The entire history of ethical philosophy speaks against such notions, and had Immanuel Kant and John Stuart Mill been alive to read Gould’s claims, they would have scoffed at them. I suspect Peter Singer and Thomas Pogge would scoff similarly today. Religion doesn’t offer any deep insights into morality, and reason often does; NOMA is simply wrong.

What’s the problem with “ought” and “is”?

The next common objection to a scientific approach to morality is the remark, after David Hume, that “one cannot derive an ought from an is”; due to a conflation with a loosely-related argument that G.E. Moore made later, the attempt to derive moral statements from empirical facts has come to be called the “naturalistic fallacy” (this is clearly not what Moore intended; I will address Moore’s actual point in a later post). But in truth, I do not really see where the fallacy is meant to lie; there is little difference in principle between deriving “ought” from “is” and deriving anything else from anything else.

First, let’s put aside direct inferences from “X is true” to “X ought to be true”; these are obviously fallacious. If that’s all Hume was saying, then he is of course correct; but this does little to undermine any serious scientific theory of morality. You can’t infer from “there are genocides” to “there ought to be genocides”; nor can you infer from “there ought to be happy people” to “there are happy people”; but nor would I or any other scientist seek to do so. This is a strawman of naturalistic morality.

It’s true that some people do attempt to draw similar inferences, usually stated in a slightly different form—but these are not moral scientists, they are invariably laypeople with little understanding of the subject. Arguments based on the claim that “homosexuality is unnatural” (therefore wrong) or “violence is natural” (therefore right) are guilty of this sort of fallacy, but I’ve never heard any credible philosopher or scientist support such arguments. (And by the way, homosexuality is nearly as common among animals as violence.)

A subtler way of reasoning from “is” to “ought” that is still problematic is the common practice of surveying people about their moral attitudes and experimentally testing their moral behaviors, sometimes called experimental philosophy. I do think this kind of research is useful and relevant, but it doesn’t get us as far as some people seem to think. Even if we were to prove that 100% of humans who have ever lived believe that cannibalism is wrong, it does not follow that cannibalism is in fact wrong. It is indeed evidence that there is something wrong with cannibalism—perhaps it is maladaptive to the point of being evolutionarily unstable, or it is so obviously wrong that even the most morally-blind individuals can detect its wrongness. But this extra step of explanation is necessary; it simply doesn’t follow from the fact that “everyone believes X is wrong” that in fact “X is wrong”. (Before 1900 just about everyone quite reasonably believed that the passage of time is the same everywhere regardless of location, speed or gravity; Einstein proved everyone wrong.) Moral realism demands that we admit people can be mistaken about their moral beliefs, just as they can be mistaken about other beliefs.

But these are not the only ways to infer from “is” to “ought”, and there are many ways to make such inferences that are in fact perfectly valid. For instance, I know at least two ways to validly prove moral claims from nonmoral claims. The first is by disjunction introduction (what logicians often just call “addition”): “2+2=4, therefore 2+2=4 or genocide is wrong”. The second is by contradictory explosion: “2+2=5, therefore genocide is wrong”. Both of these arguments are logically valid. Obviously they are also quite trivial; “genocide is wrong” could be replaced by any other conceivable proposition (even a contradiction!), leaving an equally valid argument. Still, we have validly derived a moral statement from nonmoral statements, while obeying the laws of logic.
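
Both derivations are easy to check mechanically. Here is a minimal sketch in Lean 4 (a proof assistant), with an arbitrary proposition G standing in for “genocide is wrong”; this is just an illustration of the two rules, nothing specific to morality:

```lean
variable (G : Prop)  -- any proposition at all; "genocide is wrong" is just one instance

-- Disjunction introduction: a true arithmetic premise yields the disjunction.
example : (2 + 2 = 4) ∨ G :=
  Or.inl (by decide)

-- Explosion: from a contradictory premise, anything whatsoever follows.
example (h : 2 + 2 = 5) : G :=
  absurd h (by decide)
```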

Moreover, it is clearly rational to infer a certain kind of “ought” from statements that entirely involve facts. For instance, it is rational to reason, “If you are cold, you ought to close the window”. This is an instrumental “ought” (it says what it is useful to do, given the goals that you have), not a moral “ought” (which would say what goals you should have in the first place). Hence, this is not really inferring moral claims from non-moral claims, since the “ought” isn’t really a moral “ought” at all; if the ends are immoral the means will be immoral too. (It would be equally rational in this instrumental sense to say, “If you want to destroy the world, you ought to get control of the nuclear launch codes”.) In fact this kind of instrumental rationality—doing what accomplishes our goals—actually gets us quite far in defining moral norms for real human beings; but clearly it does not get us far enough.

Finally, and most importantly, epistemic normativity, which any rational being must accept, is itself an inference from “is” to “ought”; it involves inferences like “It is raining, therefore you ought to believe it is raining.”

With these considerations in mind, we must carefully rephrase Hume’s remark, to something like this:

One cannot nontrivially with logical certainty derive moral statements from entirely nonmoral statements.

This is indeed correct; but here the word “moral” carries no weight and could be replaced by almost anything. One cannot nontrivially with logical certainty derive physical statements from entirely nonphysical statements, nor nontrivially with logical certainty derive statements about fish from statements that are entirely not about fish. For all X, one cannot nontrivially with logical certainty derive statements about X from statements entirely unrelated to X. This is an extremely general truth. We could very well make it a logical axiom. In fact, if we do so, we pretty much get relevance logic, which takes the idea of “nontrivial” proofs to the extreme of actually considering trivial proofs invalid. Most logicians don’t go so far—they say that “2+2=5, therefore genocide is wrong” is technically a valid argument—but everyone agrees that such arguments are pointless and silly. In any case the word “moral” carries no weight here; it is no harder to derive an “ought” from an “is” than it is to derive a “fish” from a “molecule”.

Moreover, the claim that nonmoral propositions can never validly influence moral propositions is clearly false; the argument “Killing is wrong, shooting someone will kill them, therefore shooting someone is wrong” is entirely valid, and the moral proposition “shooting someone is wrong” is derived in large part from the nonmoral proposition “shooting someone will kill them”. In fact, the entire Frege-Geach argument against expressivism hinges upon the fact that we all realize that moral propositions function logically the same way as nonmoral propositions, and can interact with nonmoral propositions in all the usual ways. Even expressivists usually do not deny this; they simply try to come up with ways of rescuing expressivism despite this observation.

There are also ways of validly deriving moral propositions from entirely nonmoral propositions, in an approximate or probabilistic fashion. “Genocide causes a great deal of suffering and death, and almost everyone who has ever lived has agreed that suffering and death are bad and that genocide is wrong, therefore genocide is probably wrong” is a reasonably sound probabilistic argument that infers a moral conclusion based on entirely nonmoral premises, though it lacks the certainty of a logical proof.

We could furthermore take as axiom some definition of moral concepts in terms of nonmoral concepts, and then derive consequences of this definition with logical certainty. “A morally right action maximizes pleasure and minimizes pain. Genocide fails to maximize pleasure or minimize pain. Therefore genocide is not morally right.” Obviously one is free to challenge the definition, but that’s true of many different types of philosophical arguments, not a specific problem in arguments about morality.

So what exactly was Hume trying to say? I’m really not sure. Maybe he has in mind the sort of naive arguments that infer from “unnatural” to “wrong”; if so, he’s surely correct, but the argument does little to undermine any serious naturalistic theories of morality.

On land acknowledgments

Dec 29 JDN 2460674

Noah Smith and Brad DeLong, both of whom I admire, have recently written about the practice of land acknowledgments. Smith is wholeheartedly against them. DeLong has a more nuanced view. Smith in fact goes so far as to argue that there is no moral basis for considering these lands to be ‘Native lands’ at all, which DeLong rightly takes issue with.

I feel like this might be an issue where it would be better to focus on Native American perspectives. (Not that White people aren’t allowed to talk about it; just that we tend to hear from them on everything, and this is something where maybe they’re less likely to know what they’re talking about.)

It turns out that Native views on land acknowledgments are also quite mixed; some see them as a pointless, empty gesture; others see them as a stepping-stone to more serious policy changes that are necessary. There is general agreement that more concrete actions, such as upholding treaties and maintaining tribal sovereignty, are more important.

I have to admit I’m much more in the ‘empty gesture’ camp. I’m only one-fourth Native (so I’m Whiter than I am not), but my own view on this is that land acknowledgments aren’t really accomplishing very much, and in fact aren’t even particularly morally defensible.

Now, I know that it’s not realistic to actually “give back” all the land in the United States (or Australia, or anywhere where indigenous people were forced out by colonialism). Many of the tribes that originally lived on the land are gone, scattered to the winds, or now living somewhere else that they were forced to (predominantly Oklahoma). Moreover, there are now more non-Native people living on that land than there ever were Native people living on it, and forcing them all out would be just as violent and horrific as forcing out the Native people was in the first place.

I even appreciate Smith’s point that there is something problematic about assigning ownership of land to bloodlines of people just because they happened to be the first ones living there. Indeed, as he correctly points out, they often weren’t the first ones living there; different tribes have been feuding and warring with each other since time immemorial, and it’s likely that any given plot of land was held by multiple different tribes at different times even before colonization.

Let’s make this a little more concrete.

Consider the Beaver Wars.


The Beaver Wars were a series of conflicts between the Haudenosaunee (that’s what they call themselves; to a non-Native audience they are better known by what the French called them, Iroquois) and several other tribes. Now, that was after colonization, and the French were involved, and part of what they were fighting over was the European fur trade—so the story is a bit complicated by that. But it’s a conflict we have good historical records of, and it’s pretty clear that many of these rivalries long pre-dated the arrival of the French.

The Haudenosaunee were brutal in the Beaver Wars. They slaughtered thousands, including many helpless civilians, and effectively wiped out several entire tribes, including the Erie and Susquehannock, and devastated several others, including the Mohicans and the Wyandot. Many historians consider these to be acts of genocide. Surely any land that the Haudenosaunee claimed as a result of the Beaver Wars is as illegitimate as land claimed by colonial imperialism? Indeed, isn’t it colonial imperialism?

Yet we have no reason to believe that these brutal wars were unique to the Haudenosaunee, or that they only occurred after colonization. Our historical records aren’t as clear going that far back, because many Native tribes didn’t keep written records—in fact, many didn’t even have a written language. But what we do know suggests that a great many tribes warred with a great many other tribes, and land was gained and lost in warfare, going back thousands of years.

Indeed, it seems to be a sad fact of human history that virtually all land, indigenous or colonized, is actually owned by a group that conquered another group (that conquered another group, that conquered another group…). European colonialism was simply the most recent conquest.

But this doesn’t make European colonialism any more justifiable. Rather, it raises a deeper question:

How should we decide who owns what land?

The simplest way, and the way that we actually seem to use most of the time, is to simply take whoever currently owns the land as its legitimate ownership. “Possession is nine-tenths of the law” was always nonsense when it comes to private property (that’s literally what larceny means!), but when it comes to national sovereignty, it is basically correct. Once a group manages to organize itself well enough to enforce control over a territory, we pretty much say that it’s their territory now and they’re allowed to keep it.

Does that mean that anyone is just allowed to take whatever land they can successfully conquer and defend? That the world must simply accept that chaos and warfare are inevitable? Fortunately, there is a solution to this problem.

The Westphalian solution.

The current solution to this problem is what’s called Westphalian sovereignty, after the Peace of Westphalia, two closely-related treaties that were signed in Westphalia (a region of Germany) in 1648. Those treaties established a precedent in international law that nations are entitled to sovereignty over their own territory; other nations are not allowed to invade and conquer them, and if anyone tries, the whole international community should fight to resist any such attempt.

Effectively, what Westphalia did was establish that whoever controlled a given territory right now (where “right now” means 1648) now gets the right to hold it forever—and everyone else not only has to accept that, they are expected to defend it. Now, clearly this has not been followed precisely; new nations have gained independence from their empires (like the United States), nations have separated into pieces (like India and Pakistan, the Balkans, and most recently South Sudan), and sometimes even nations have successfully conquered each other and retained control—but the latter has been considerably rarer than it was before the establishment of Westphalian sovereignty. (Indeed, part of what makes the Ukraine War such an aberration is that it is a brazen violation of Westphalian sovereignty the likes of which we haven’t seen since the Second World War.)

This was, as far as I can tell, a completely pragmatic solution, with absolutely no moral basis whatsoever. We knew in 1648, and we know today, that virtually every nation on Earth was founded in bloodshed, its land taken from others (who took it from others, who took it from others…). And it was timed in such a way that European colonialism became etched in stone—no European power was allowed to take over another European power’s colonies anymore, but they were all allowed to keep all the colonies they already had, and the people living in those colonies didn’t get any say in the matter.

Since then, most (but by no means all) of those colonies have revolted and gained their own independence. But by the time it happened, there were large populations of former colonists, and the indigenous populations were often driven out, dramatically reduced, or even outright exterminated. There is something unsettling about founding a new democracy like the United States or Australia after centuries of injustice and oppression have allowed a White population to establish a majority over the indigenous population; had indigenous people been democratically represented all along, things would probably have gone a lot differently.

What do land acknowledgments accomplish?

I think that the intent behind land acknowledgments is to recognize and commemorate this history of injustice, in the hopes of somehow gaining some kind of at least partial restitution. The intentions here are good, and the injustices are real.

But there is something fundamentally wrong with the way most land acknowledgments are done, because they basically just push the sovereignty back one step: They assert that whoever held the land before Europeans came along is the land’s legitimate owner. But what about the people before them (and the people before them, and the people before them)? How far back in the chain of violence are we supposed to go before we declare a given group’s conquests legitimate?

How far back can we go?

Most of these events happened many centuries ago and were never written down, and all we have now is vague oral histories that may or may not even be accurate. Particularly when one tribe forces out another, it rather behooves the conquering tribe to tell the story in their own favor, as one of “reclaiming” land that was rightfully theirs all along, whether or not that was actually true—as they say, history is written by the victors. (I think it’s actually more true when the history is never actually written.) And in some cases it’s probably even true! In others, that land may have been contested between the two tribes for so long that nobody honestly knows who owned it first.

It feels wrong to legitimate the conquests of colonial imperialism, but it feels just as wrong to simply push it back one step—or three steps, or seven steps.

I think that ultimately what we must do is acknowledge this entire history.

We must acknowledge that this land was stolen by force from Native Americans, and also that most of those Native Americans acquired their land by stealing it by force from other Native Americans, and the chain goes back farther than we have records. We must acknowledge that this is by no means unique to the United States but in fact a universal feature of almost all land held by anyone anywhere in the world. We must acknowledge that this chain of violence and conquest has been a part of human existence since time immemorial—and affirm our commitment to end it, once and for all.

That doesn’t simply mean accepting the current allocation of land; land, like many other resources, is clearly distributed unequally and unfairly. But it does mean that however we choose to allocate land, we must do so by a fair and peaceful process, not by force and conquest. The chain of violence that has driven human history for thousands of years must finally be brought to an end.

Why I celebrate Christmas

Dec 22 JDN 2460667

In my last several posts I’ve been taking down religion and religious morality. So it might seem strange, or even hypocritical, that I would celebrate Christmas, which is widely regarded as a Christian religious holiday. Allow me to explain.

First of all, Christmas is much older than Christianity.

It had other names before: Solstice celebrations, Saturnalia, Yuletide. But human beings of a wide variety of cultures around the world have been celebrating some kind of winter festival around the solstice since time immemorial.

Indeed, many of the traditions we associate with Christmas, such as decorating trees and having an—ahem—Yule log, are in fact derived from pre-Christian traditions that Christians simply adopted.

The reason different regions have their own unique Christmas traditions, such as Krampus, is most likely that these regions already had such traditions surrounding their winter festivals which likewise got absorbed into Christmas once Christianity took over. (Though oddly enough, Mari Lwyd seems to be much more recent, created in the 1800s.)

In fact, Christmas really has nothing to do with the birth of Jesus.

It’s wildly improbable that Jesus was born in December. Indeed, we have very little historical or even Biblical evidence of his birth date. (What little we do have strongly suggests it wasn’t in winter.)

The date of December 25 was almost certainly chosen in order to coincide—and therefore compete—with the existing Roman holiday of Dies Natalis Solis Invicti (literally, “the birthday of the invincible sun”), an ancient solstice celebration. Today the Winter Solstice is slightly earlier, but in the Julian calendar it was December 25.

In the past, Christians have sometimes suppressed Christmas celebration.

Particularly during the 17th century, most Protestant sects, especially the Puritans, regarded Christmas as a Catholic thing, and therefore strongly discouraged their own adherents from celebrating it.

Besides, Christmas is very secularized at this point.

Many have bemoaned its materialistic nature—and even economists have claimed it is “inefficient”—but gift-giving has become a central part of the celebration of Christmas, despite it being a relatively recent addition. Santa Claus has a whole fantasy magic narrative woven around him that is the source of countless movies and has absolutely nothing to do with Christianity.

I celebrate because we celebrate.

When I celebrate Christmas, I’m also celebrating Saturnalia, and Yuletide, and many of the hundreds of other solstice celebrations and winter festivals that human cultures around the world have held for thousands of years. I’m placing myself within a grander context, a unified human behavior that crosses lines of race, religion, and nationality.

Not all cultures celebrate the Winter Solstice, but a huge number do—and those that don’t have their own celebrations which often involve music and feasting and gift-giving too.

So Merry Christmas, and Happy Yuletide, and Io Saturnalia to you all.

Moral progress and moral authority

Dec 8 JDN 2460653

In previous posts I’ve written about why religion is a poor source of morality. But it’s worse than that. Religion actually holds us back morally. It is because of religion that our society grants the greatest moral authority to precisely the people and ideas which have most resisted moral progress. Most religious people are good, well-intentioned people—but religious authorities are typically selfish, manipulative, Machiavellian leaders who will say or do just about anything to maintain power. They have trained us to respect and obey them without question; they even call themselves “shepherds” and us the “flock”, as if we were not autonomous humans but obedient ungulates.

I’m sure that most of my readers are shocked that I would assert such a thing; surely priests and imams are great, holy men who deserve our honor and respect? The evidence against such claims is obvious. We only believe such things because the psychopaths have told us to believe them.

I am not saying that these evil practices are inherent to religion—they aren’t. Other zealous, authoritarian ideologies, like Communism and fascism, have been just as harmful for many of the same reasons. Rather, I am saying that religion gives authority and respect to people who would otherwise not have it, people who have long histories of evil, selfish, and exploitative behavior. For a particularly striking example, Catholicism as an idea is false and harmful, but not nearly as harmful as the Catholic Church as an institution, which has harbored some of the worst criminals in history.

The Catholic Church hierarchy is quite literally composed of a cadre of men who use tradition and rhetoric to extort billions of dollars from the poor and who have gone to great lengths to defend men who rape children—a category of human being that normally is so morally reviled that even thieves and murderers consider them beyond the pale of human society. Pope Ratzinger himself, formerly the most powerful religious leader in the world, has been connected with the coverup based on a letter he wrote in 1985. The Catholic Church was also closely tied to Nazi Germany and publicly celebrated Hitler’s birthday for many years; there is evidence that the Vatican actively assisted in the exodus of Nazi leaders along “ratlines” to South America. More recently the Church once again abetted genocide, when in Rwanda it turned away refugees and refused to allow prosecution against any of the perpetrators who were affiliated with the Catholic Church. Yes, that’s right; the Vatican has quite literally been complicit in the worst moral crimes human beings have ever committed. Embezzlement of donations and banning of life-saving condoms seem rather beside the point once we realize that these men and their institutions have harbored genocidaires and child rapists. I can scarcely imagine a more terrible source of moral authority.

Most people respect evangelical preachers, like Jerry Falwell, who blamed 9/11 and Hurricane Katrina on feminists, gays, and secularists, then retracted the statement about 9/11 when he realized how much it had offended people. These people have concepts of morality that were antiquated in the 19th century; they base their ethical norms on books that were written by ignorant and cultish nomads thousands of years ago. Leviticus 18:22 and 20:13 indeed condemn homosexuality, but Leviticus 19:27 condemns shaving and Leviticus 11:9-12 says that eating fish is fine but eating shrimp is evil. By the way, Leviticus 11:21-22 seems to say that locusts have only four legs, when they very definitely have six and you can see this by looking at one. (I cannot emphasize this enough: Don’t listen to what people say about the book, read the book.)

But we plainly don’t trust scientists or philosophers to make moral and political decisions. If we did, we would have enacted equal rights for LGBT people sometime around 1898, when the Scientific-Humanitarian Committee was founded, or at least by 1948, when Alfred Kinsey showed how common, normal, and healthy homosexuality is. Democracy and universal suffrage (for men at least) would have been the norm shortly after 1689, when Locke wrote his Two Treatises of Government. Women would have been granted the right to vote in 1792 upon the publication of Mary Wollstonecraft’s A Vindication of the Rights of Woman, instead of in 1920 after a long and painful political battle. Animal rights would have become law in 1789 with the publication of Bentham’s Introduction to the Principles of Morals and Legislation. We should have been suspicious of slavery since at least Kant if not Socrates, but instead it took until the 19th century for slavery to finally be banned. We owe the free world to moral science; but nonetheless we rarely listen to the arguments of moral scientists. As a species we fight for our old traditions even in the face of obvious and compelling evidence to the contrary, and this holds us back—far back. If they haven’t sunk in yet, read these dates again: Society is literally about 200 years behind the cutting edge of moral science. Imagine being 200 years behind in technology; you would be riding horses instead of flying in jet airliners and writing letters with quills instead of texting on your iPhone. Imagine being 200 years behind in ecology; you would be considering the environmental impact of not photovoltaic panels or ethanol but whale oil. This is how far behind we are in moral science.

One subfield of moral science has done somewhat better: In economics, theory and practice differ by only about 100 years. Capitalism really was instituted on a large scale only a few decades after Adam Smith argued for it, and socialism (while horrifyingly abused in the Communism of Lenin and Stalin) has nonetheless been implemented on a wide scale only a century after Marx. Keynesian stimulus was international policy (despite its numerous detractors) in 2008 and 2020, and Keynes himself died as recently as 1946. This process is still slower than it probably should be, but at least we aren’t completely ignoring new advances the way we do in ethics. If we were only 100 years behind in technology, we would at least have cars and electricity.

Except perhaps in economics, in general we entrust our moral claims to the authority of men in tall hats and ornate robes who merely assert their superiority and ties to higher knowledge, while ignoring the thousands of others who actually apply their reason and demonstrate knowledge and expertise. A criminal in pretty robes who calls himself a moral leader might as well be a moral leader, as far as we’re concerned; a genuinely wise teacher of morality who isn’t arrogant enough to assert special revelation from the divine is instead ignored. Why do we do this? Religion. Religion is holding us back.

We need to move beyond religion in order to make real and lasting moral progress.

More on religion

Dec 8 JDN 2460653

Reward and punishment

In previous posts I’ve argued that religion can make people do evil and that religious beliefs simply aren’t true.

But there is another reason to doubt religion as a source of morality: There is no reason to think that obeying God is a particularly good way of behaving, even if God is in fact good. If you are obeying God because he will reward you, you aren’t really being moral at all; you are being selfish, and just by accident doing good things. If everyone acted that way, good things would get done; but it clearly misses what we mean when we talk about morality. To be moral is to do good because it is good, not because you will be rewarded for doing it. This becomes even clearer when we consider the following question: If you weren’t rewarded, would you still do good? If not, then you aren’t really a good person.

In fact, it’s ironic that proponents of naturalistic and evolutionary accounts of morality are often accused of cheapening morality because we explain it using selfish genes and memes; traditional religious accounts of morality are directly based on selfishness, not for my genes or my memes, but for me myself! It’s legitimate to question whether someone who acts out of a sense of empathy that ultimately evolved to benefit their ancestors’ genes is really being moral (why I think so requires essentially the rest of this book to argue); but clearly someone who acts out of the desire to be rewarded later isn’t! Selfish genes may or may not make good people; but selfish people clearly aren’t good people.

Even if religion makes people act more morally (and the evidence on that is quite mixed), that doesn’t make it true. If I could convince everyone that John Stuart Mill was a prophet of God, this world would be a paradise; but that would be a lie, because John Stuart Mill was a brilliant man and nothing more. The belief that Santa Claus is watching no doubt makes some children behave better around Christmas, but this is not evidence for flying reindeer. In fact, the children who behave just fine without the threat of coal in their stockings are better children, aren’t they? For the same reason, people who do good for the sake of goodness are better people than those who do it out of hope for Heaven and fear of Hell.

There are cases in which false beliefs might make people do more good, because the false beliefs provide an obvious (but wrong) reason for doing something that is actually good for less obvious (but correct) reasons. Believing that God requires you to give to charity might motivate you to give more to charity; but charity is good not because God demands it, but because there are billions of innocent people suffering around the world. Maybe for this reason we should be careful about changing people’s beliefs; someone who believes a lie but does the right thing is still better than someone who believes the truth but acts wrongly. If people think that without God there is no morality, then telling them that there is no God may make them abandon morality. This is precisely why I’m not simply telling readers that there is no God: I am also spending this entire chapter explaining why we don’t need God for morality. I’d much rather you be a moral theist than an immoral atheist; but I’m trying to make you a moral atheist.

The problem with holy texts

Even if God actually existed, and were actually good, and commanded us to do things, we do not have direct access to God’s commandments. If you are not outright psychotic, you must acknowledge this; God does not speak to us directly. If anything, he has written or inspired particular books, which have then been translated and interpreted over centuries by many different people and institutions. There is a fundamental problem in deciding which books have been written or inspired by God; not only does the Bible differ from the Qur’an, which differs from the Bhagavad-Gita, which differs from other holy texts; worse, particular chapters and passages within each book differ from one another on significant moral questions, sometimes on the foundational principles of morality itself.

For instance, let’s consider the Bible, because this is the holy book in greatest favor in modern Western culture. Should we use a law of retribution, a lex talionis, as in Exodus 21? Or should we instead forgive our enemies, as in Matthew 5? Perhaps we should treat others as we would like to be treated, as in Luke 6? Are rape and genocide commanded by God, as in 1 Samuel 15, Numbers 31, and Deuteronomy 20-21, or is murder always a grave crime, as in Exodus 20? Is even anger a grave sin, as in Matthew 5? Is it a crime to engage in male-male sex, as in Leviticus 18? If so, is it also a crime to shave beards and wear mixed-fiber clothing, as in Leviticus 19? Is it just to punish descendants for the crimes of their ancestors, as in Genesis 9, or is it only fair to punish the specific perpetrators, as in Deuteronomy 24? Is adultery always immoral, as in Exodus 20, or does God sometimes command it, as in Hosea 1? Must homosexual men be killed, as in Leviticus 20, or is it enough to exile them, as in 1 Kings 15? A thorough reading of the Bible shows hundreds of moral contradictions and thousands of moral absurdities. (This is not even to mention the factual contradictions and absurdities.)

Similar contradictions and absurdities can be found in the Qur’an and other texts. Since most of my readers will come from Christian cultures, for my purposes I think brief examples will suffice. The Qur’an at times says that Christians are deserving of the same rights as Muslims, and at other times declares Christians so evil that they ought to be put to the sword. (Most of the time it says something in between, that “People of the Book”, ahl al-Kitab, as Jews and Christians are known, are inferior to Muslims but nonetheless deserving of rights.) The Bhagavad-Gita at times argues for absolute nonviolence, and at times declares an obligation to fight in war. The Dharmas and the Dao De Jing are full of contradictions, about everything from meaning to justice to reincarnation (in fact, many Buddhists and Taoists freely admit this, and try to claim that non-contradiction is overrated—which is literally talking nonsense). The Book of Mormon claims the canonicity of texts that it explicitly contradicts.

And above all, we have no theological basis for deciding which parts of which holy books we should follow, and which we should reject—for they all have many sects with many followers, and they all declare with the same intensity of clamor and absence of credibility that they are the absolute truth of a perfect God. To decide which books to trust and which to ignore, we have only a rational basis, founded upon reason and science—but then, we can’t help but take a rational approach to morality in general. If it were glaringly obvious which holy text was written by God, and its message were clear and coherent, perhaps we could follow such a book—but given the multitude of religions and sects and denominations in the world, all mutually-contradictory and most even self-contradictory, each believed with just as much fervor as the last, how obvious can the answer truly be?

One option would be to look for the things that are not contradicted, the things that are universal across religions and texts. In truth these things are few and far between; one sect’s monstrous genocide is another’s holy duty. But it is true that certain principles appear in numerous places and times, a signal of universality amidst the noise of cultural difference: Fairness and reciprocity, as in the Golden Rule; honesty and fidelity; forbiddance of theft and murder. There are examples of religious beliefs and holy texts that violate these rules—including the Bible and the Qur’an—but the vast majority of people hold to these propositions, suggesting that there is some universal truth that has been recognized here. In fact, the consensus in favor of these values is far stronger than the consensus in favor of recognized scientific facts like the shape of the Earth and the force of gravity. While for most of history most people had no idea how old the Earth was and many people still seem to think it is a mere 6,000 years old, there has never been a human culture on record that thought it acceptable to murder people arbitrarily.

But notice how these propositions are not tied to any particular religion or belief; indeed, nearly all atheists, including me, also accept these ideas. Moreover, it is possible to find these principles contradicted in the very books that religious people claim as the foundation of their beliefs. This is strong evidence that religion has nothing to do with it—these principles are part of a universal human nature, or better yet, they may even be necessary truths that would hold for any rational beings in any possible universe. If Christians, Muslims, Buddhists, Hindus and atheists all agree that murder is wrong, then it must not be necessary to hold any specific religion—or any at all—in order to agree that murder is wrong.

Indeed, holy texts are so full of absurdities and atrocities that the right thing to do is to completely and utterly repudiate holy texts—especially the Bible and the Qur’an.

If you say you believe in one of these holy texts, you’re either a good person but a hypocrite because you aren’t following the book; or you can be consistent in following the book, but you’ll end up being a despicable human being. Obviously I much prefer the former—but why not just give up the damn book!? Why is it so important to you to say that you believe in this particular book? You can still believe in God if you want! If God truly exists and is benevolent, it should be patently obvious that he couldn’t possibly have written a book as terrible as the Bible or the Qur’an. Obviously those were written by madmen who had no idea what God is truly like.

The afterlife

Dec 1 JDN 2460646

Super-human beings aren’t that strange a thing to posit, but they are the sort of thing we’d expect to see clear evidence of if they existed. Without them, prayer is a muddled concept that is difficult to distinguish from simply “things that don’t work”. That leaves the afterlife. Could there be an existence for human consciousness after death?

No. There isn’t. Once you’re dead, you’re dead. It’s really that unequivocal. It is customary in most discussions of this matter to hedge and fret and be “agnostic” about what might lie beyond the grave—but in fact the evidence is absolutely overwhelming.

Everything we know about neuroscience—literally everything—would have to be abandoned in order for an afterlife to make sense. The core of neuroscience, the foundation on which the entire field is built, is what I call the Basic Fact of Cognitive Science: you are your brain. It is your brain that feels, your brain that thinks, your brain that dreams, your brain that remembers. We do not yet understand most of these processes in detail—though some we actually do, such as the processing of visual images. But it doesn’t take an expert mechanic to know that removing the engine makes the car stop running. It doesn’t take a brilliant electrical engineer to know that smashing the CPU makes the computer stop working. Saying that your mind continues to work without your brain is like saying that you can continue to digest without having a stomach or intestines.

This fundamental truth underlies everything we know about the science of consciousness. It can even be directly verified in a piecemeal form: There are specific areas of your brain that, when damaged, will cause you to become blind, or unable to understand language, or unable to speak grammatically (those are two distinct areas), or destroy your ability to form new memories or recall old ones, or even eliminate your ability to recognize faces. Most terrifying of all—yet by no means surprising to anyone who really appreciates the Basic Fact—is the fact that damage to certain parts of your brain will even change your personality, often making you impulsive, paranoid, or cruel, literally making you a worse person. More surprising and baffling is the fact that cutting your brain down the middle into left and right halves can split you into two people, each of whom operates half of your body (the opposite half, oddly enough), who mostly agree on things and work together but occasionally don’t. All of these are people we can actually interact with in laboratories and (except in cases of language deficits, of course) talk to about their experiences. It’s true that we can’t ask people what it’s like when their whole brain is dead, but of course not; there’s nobody left to ask.

This means that if you take away all the functions that experiments have shown require certain brain parts to function, whatever “soul” is left that survives brain death cannot do any of the following: See, hear, speak, understand, remember, recognize faces, or make moral decisions. In what sense is that worth calling a “soul”? In what sense is that you? Those are just the ones we know for sure; as our repertoire expands, more and more cognitive functions will be mapped to specific brain regions. And of course there’s no evidence that anything survives whatsoever.

Nor are near-death experiences any kind of evidence of an afterlife. Yes, some people who were close to dying or briefly technically dead (“He’s only mostly dead!”) have had very strange experiences during that time. Of course they did! Of course you’d have weird experiences as your brain is shutting down or struggling to keep itself online. Think about a computer that has had a magnet run over its hard drive; all sorts of weird glitches and errors are going to occur. (In fact, powerful magnets can have an effect on humans not all that dissimilar from what weaker magnets can do to computers! Certain sections of the brain can be disrupted or triggered in this way; it’s called transcranial magnetic stimulation and it’s actually a promising therapy for some neurological and psychological disorders.) People also have a tendency to over-interpret these experiences as supporting their particular religion, when in fact it’s usually something no more complicated than “a bright light” or “a long tunnel” (another popular item is “positive feelings”). If you stop and think about all the different ways you might come to see “a bright light” and have “positive feelings”, it should be pretty obvious that this isn’t evidence of St. Paul and the Pearly Gates.

The evidence against an afterlife is totally overwhelming. The fact that when we die, we are gone, is among the most certain facts in science. So why do people cling to this belief? Probably because it’s comforting—or rather because the truth that death is permanent and irrevocable is terrifying. You’re damn right it is; it’s basically the source of all other terror, in fact. But guess what? “Terrifying” does not mean “false”. The idea of an afterlife may be comforting, but it’s still obviously not true.

While I was in the process of writing this book, my father died of a ruptured intracranial aneurysm. The event was sudden and unexpected, and by the time I was able to fly from California to Michigan to see him, he had already lost consciousness—for what would turn out to be forever. This event caused me enormous grief, grief from which I may never fully recover. Nothing would make me happier than knowing that he was not truly gone, that he lives on somewhere watching over me. But alas, I know it is not true. He is gone. Forever.

However, I do have a couple of things to say that might offer some degree of consolation:

First, because human minds are software, pieces of our loved ones do go on—in us. Our memories of those we have lost are tiny shards of their souls. When we tell stories about them to others, we make copies of those shards; or to use a more modern metaphor, we back up their data in the cloud. Were we to somehow reassemble all these shards together, we could not rebuild the whole person—there are always missing pieces. But it is also not true that nothing remains. What we have left is how they touched our lives. And when we die, we will remain in how we touch the lives of others. And so on, and so on, as the ramifications of our deeds in life and the generations after us ripple out through the universe at the speed of light, until the end of time.

Moreover, if there’s no afterlife there can be no Hell, and Hell is literally the worst thing imaginable. To subject even a single person—even the most horrible person who ever lived, Hitler, Stalin, Mao, whomever—to the experience of maximum possible suffering forever is an atrocity of incomparable magnitude. Hitler may have deserved a million years of suffering for what he did—but I’m not so sure about maximum suffering, and forever is an awful lot longer than a million years. Indeed, forever is so much longer than a million years that if your sentence is forever, then after serving a million years you still have as much left to go as when you began. But the Bible doesn’t even just say that the most horrible mass murderers will go to Hell; no, it says everyone will go to Hell by default, and deserve it, and can only be forgiven if we believe. No amount of good works will save us from this fate, only God’s grace.

If you believe this—or even suspect it—religion has caused you deep psychological damage. This is the theology of an abusive father—“You must do exactly as I say, or you are worthless and undeserving of love and I will hurt you and it will be all your fault.” No human being, no matter what they have done or failed to do, could ever possibly deserve a punishment as terrible as maximum possible suffering forever. Even if you’re a serial rapist and murderer—and odds are, you’re not—you still don’t deserve to suffer forever. You have lived upon this planet for only a finite time; you can therefore only have committed finitely many crimes and you can only deserve at most finite suffering. In fact, the vast majority of the world’s population is composed of good, decent people who deserve joy, not suffering.

Indeed, many ethicists would say that nobody deserves suffering, it is simply a necessary evil that we use as a deterrent from greater harms. I’m actually not sure I buy this—if you say that punishment is all about deterrence and not about desert, then you end up with the result that anything which deters someone could count as a fair punishment, even if it’s inflicted upon someone else who did nothing wrong. But no ethicist worthy of the name believes that anybody deserves eternal punishment—yet this is what Jesus says we all deserve in the Bible. And Muhammad says similar things in the Qur’an, about lakes of eternal burning (4:56) and eternal boiling water to drink (47:15) and so on. It’s entirely understandable that such things would motivate you—indeed, they should motivate you completely to do just about anything—if you believed they were true. What I don’t get is why anybody would believe they are true. And I certainly don’t get why anyone would be willing to traumatize their children with these horrific lies.

Then there is Pascal’s Wager: An infinite punishment can motivate you if it has any finite probability, right? Theoretically, yes… but here’s the problem with that line of reasoning: Anybody can just threaten you with infinite punishment to make you do anything. Clearly something is wrong with your decision theory if any psychopath can just make you do whatever he wants because you’re afraid of what might happen just in case what he says might possibly be true. Beware of plausible-seeming theories that lead to such absurd conclusions; it may not be obvious what’s wrong with the argument, but it should be obvious that something is.

Religion is false

Nov 24 JDN 2460639

In my previous post I wrote about some of the ways that religion can make people do terrible things. However, to be clear, as evil as actions like wiping out cities, torturing nonbelievers, and killing gays appear on their face—as transparently as they violate even the Hitler Principle—they might in fact be justified were religion actually true. So that requires us to ask the question: Is religion true?

Recall that I said that religion consists in three propositions: Super-human beings, afterlife, and prayer.

Super-human beings

There is basically no evidence at all of super-human beings—no booming voices in the sky, no beings who come down from heaven in beams of light. To be sure, there are reports of such things, but none of them can be in any way substantiated. Moreover, they only seem to have happened back in a time when there was no such thing as science as we know it, to people who were totally uneducated, with no physical evidence whatsoever. As soon as we invented technologies to record such events, they apparently stopped occurring? As soon as it might have been possible to prove they weren’t made up, they stopped? Clearly, they were made up all along, and once we were able to prove this, people stopped trying to lie to us.

Actually it’s worse than that—even before we had such technology, merely the fact that people were educated was sufficient to make them believe none of it. Quoth Lucretius in De Rerum Natura, circa 50 BC (my own translation):

Humana ante oculos foede cum vita iaceret

in terris oppressa gravi sub religione,

quae caput a caeli regionibus ostendebat

horribili super aspectu mortalibus instans,

[…]

quare religio pedibus subiecta vicissim

opteritur, nos exaequat victoria caelo.

[…]

sed casta inceste nubendi tempore in ipso

hostia concideret mactatu maesta parentis,

exitus ut classi felix faustusque daretur.

tantum religio potuit suadere malorum.

Before, humanity would cast down their eyes to the ground,

with a foul life oppressed beneath the burden of Religion,

who would show her head along the regional skies,

pressing upon mortals with a horrible view.

[…]

Therefore religion is now pressed under our feet,

and this victory equalizes us with heaven.

[…]

But at the very time of her wedding, a sinless woman

sinfully slain, an offering in sacrifice to omens,

gone in order to give happy and auspicious travels to ships.

So much evil could religion induce.

Yes, before Jesus there were already scientists writing about how religion is false and immoral. I suppose you could argue that religion has gotten better since then… but I don’t think it’s gotten any more true.

Nor did Jesus provide some kind of compelling evidence that won the Romans over; indeed, other than the works of his followers (such as the Bible itself) there are hardly any records showing he even existed; he probably did, but we know very little about him. Modern scholars can still read classical Latin; we have extensive records of history and literature from that period. One of the reasons the Dark Ages were originally called that was because the historical record suddenly became much more scant after the fall of Rome—not so much dark as in “bad” as dark as in “you can’t see”. Yet despite this extensive historical record, we have only a handful of references to someone named Yeshua, probably Jewish, who may have been crucified (which was a standard method of punishment in Rome). By this line of reasoning you can prove Thor exists by finding an epitaph of some Viking blacksmith whose name was Thad. If Jesus had been going around performing astounding miracles for all the world to see—rather than, you know, playing parlor tricks to fool his gullible cult—don’t you think someone credible would have written that down?

If there were a theistic God (at least one who is morally good), we would expect that the world would be without suffering, without hunger, without harm to innocents—it is not. We would expect that good things never happen to bad people and bad things never happen to good people—but clearly they often do. Free will might—might—excuse God for allowing the Holocaust, but what about earthquakes? What about viruses? What about cancer? What about famine? In fact, why do we need to eat at all? Without digestive tracts (with some sort of internal power source run on fusion or antimatter reactions, perhaps?) we would never be hungry, never be tired, never starve in famine or grow sick from obesity. We limited humans are forced to deal with our own ecological needs, but why did God make us this way in the first place?

If a few eyewitness accounts of someone apparently performing miracles are sufficient to define an entire belief system, then we must all worship Apollonius of Tyana, L. Ron Hubbard, and José Luis de Jesús Miranda, and perhaps even Criss Angel and Uri Geller, as well as of course Jesus, Muhammad, Buddha, Krishna, Herakles, Augustus Caesar, Joseph Smith, and so on. The way you explain “miracles” in every case other than your own religion—illusion, hallucination, deceit, exaggeration—is the way that I explain the “miracles” in your religion as well. Why can people all around the world with totally different ideas of which super-human beings they’re working for nonetheless perform all the same miracles? Because it’s all fake.

Prayer

Which brings me to the subject of prayer. The basic idea is that ritualized actions are meant to somehow influence the behavior of the universe by means other than natural law or human action. Performing a certain series of behaviors in a specific sequence will “bring you luck” or “appease the gods” or “share in the Eucharist”.

The problem here is basically that once you try to explain how this could possibly work, you just end up postulating different natural laws. The super-human being theory was a way out of that; if Yahweh is somehow looking down upon us and will do what you ask if you go through a certain sequence (a password, I guess?), then you have a reason why prayer would work, because you have a sensible category of causes that is neither natural law nor human action. But if that’s not what’s happening—if there’s no someone doing these things, then there has to be a something—and now you need to explain how that’s different from the laws of nature.

Actually, the clearest distinction I can find is that prayer is the sort of action that doesn’t actually work. If something actually works, we don’t call it prayer or think of it as a ritual. Brushing your teeth is a sequence of actions that will actually make you healthier, because the fluoride remineralizes your teeth and kills bacteria that live in your mouth. Inserting and turning the ignition key will start a car, because that is how cars are designed to work. When you remove certain pieces of paper from your wallet and hand them over to a specific person, that person will give you goods in return, because that’s how our monetary system works. When we perform a specific sequence of actions toward achieving a goal that actually makes rational sense, nobody calls it a ritual anymore. But once again we’re back to the fact that “supernatural” is just a weird way of saying “non-existent”.

And indeed prayer does not work, at all, ever, period. There have been empirical studies on the subject, and all of the remotely credible ones have shown effects indistinguishable from chance (including a 2006 randomized controlled medical trial). In fact, telling sick people they’re being prayed for may make them sicker, so please stop telling people you’re praying for them! Instead, pray with your wallet—donate to medical research. Put your money where your mouth is.

There’s some evidence that prayer has psychological benefits, and that having a more positive attitude can be good for your health in some circumstances; but this is not evidence that prayer actually affects the world. It’s just a placebo effect, and you can get the same effect from lots of other things, like meditation, relaxation exercises, or just taking a sugar pill. Indeed, the fact that prayer works just as well regardless of your religion really proves that prayer is doing nothing but making people feel better.

Occasionally an experiment will seem to show a positive effect of some prayer or superstition, but these are clearly statistical flukes. If you keep testing things at random, eventually by pure coincidence some of them are going to appear related, even though they actually aren’t. If you run dozens and dozens of studies trying to correlate things, of course some of them would show up correlated—indeed, the really shocking thing, the evidence of miracles, would be if they didn’t. At the standard 95% confidence level, about 1 in 20 completely unrelated things will be statistically correlated just by chance. Even at 99.9% confidence, 1 in 1000 will be.
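To make the “1 in 20” figure concrete, here is a minimal simulation sketch in Python (my own illustration, not part of the original argument; the sample size of 100 points, the critical correlation of roughly 0.197, and the random seed are assumptions chosen purely for the example). It generates pairs of completely unrelated random variables and counts how many of them a standard 95%-confidence correlation test flags as “significant” anyway.

import random

random.seed(42)  # fixed seed so the illustration is reproducible

def pearson_r(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n_tests, n_points = 1000, 100
critical_r = 0.197  # approximate two-tailed 95% threshold for 100 points

false_positives = sum(
    abs(pearson_r([random.gauss(0, 1) for _ in range(n_points)],
                  [random.gauss(0, 1) for _ in range(n_points)])) > critical_r
    for _ in range(n_tests)
)

print(f"{false_positives} of {n_tests} unrelated pairs look 'correlated'")
# Expect roughly 50 out of 1000, i.e. about 1 in 20, purely by chance.

Run as written, about 5% of the completely unrelated pairs come out “statistically significant”: exactly the baseline false-positive rate the paragraph above describes, with no miracles required.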

This same effect applies even if you aren’t formally testing, but are simply noticing coincidences in your daily life. You are visiting Disneyland and happen to meet someone from your alma mater; you’re thinking about Grandma just as she happens to call. What a coincidence! If you add up all the different possible events that might feel like a coincidence if they occurred, and then determine the probability that at least one of them will occur at some point in your life—or at least ten, or even a hundred—you’d find that the probability is, far from being tiny, virtually 100%.
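To put a rough number on that intuition (the figures here are assumptions chosen purely for illustration, not measurements): if there are n independent candidate events, each with a small probability p of happening over the period in question, the chance that at least one of them occurs is

\[ \Pr(\text{at least one coincidence}) = 1 - (1 - p)^n . \]

With, say, p = 1/1000 and n = 10,000 candidate coincidences over a lifetime, that is 1 − 0.999^10000 ≈ 1 − e^−10 ≈ 0.99995: effectively certain, even though each individual coincidence remains a thousand-to-one long shot.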

And then even truly rare coincidences—one in a million, one in a billion—will still happen somewhere, for there are over 8 billion people in the world. A one-in-a-million chance happens 300 times a day in America alone. Combine this with a news media that loves to focus upon rare events, and it’s a virtual certainty that you will have heard of someone who survived a plane crash, or won $100 million in the lottery; and they will no doubt have a story to tell about the prayer they made as the plane was falling (never mind the equally sincere prayers of many of the hundred other passengers who died) or the lucky numbers they got off a fortune cookie (never mind the millions of fortune cookies with numbers that haven’t won the lottery). The human mind craves explanation, and in general this is a good thing; but sometimes there is no rational explanation, because the event was just random.
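The arithmetic behind that figure is worth spelling out (the rate of one candidate event per person per day is an assumption for illustration, not a measured quantity): if roughly 300 million Americans each encounter at least one opportunity per day for some one-in-a-million event to occur, the expected number of such events is

\[ 3 \times 10^{8} \times 10^{-6} = 300 \text{ per day}, \]

and none of them requires any explanation beyond the sheer number of trials.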

I actually find it deeply disturbing when people say “Thank God” after surviving some horrible event that killed many other people. I understand why you are glad to be alive; but please, have enough respect for the people who didn’t survive that you don’t casually imply that the creator of the universe thinks they deserved to die. Oh, you didn’t realize that’s what you’re doing? Well, it is. If God saved you, that means he didn’t save everyone else. And God is supposed to be ultimately powerful, so if he is real, he could have saved everyone, he just chose not to. You’re saying he chose to let those other people die.

It’s quite different if you say “Thank you” to the individual person who helped you—the donor of your new kidney, the firefighter who pulled you from the wreckage. Those are human beings with human limitations, and they are doing their best—even going above and beyond the moral standards we normally expect, an act we rightly call heroism. It’s even different to say “Thank goodness”. This need not be a euphemism for “Thank God”; you can actually thank goodness—express gratitude for the moral capacities that have built human civilization and hold it together. Daniel Dennett wrote a very powerful piece, which I highly recommend reading, about thanking goodness after he suffered a heart problem and was saved by the intervention of expert medical staff and advanced medical technology.