Evolutionary skepticism

Post 572 Mar 9 JDN 2460744

In the last two posts I talked about ways that evolutionary theory could influence our understanding of morality, including the dangerous views of naive moral Darwinism as well as some more reasonable approaches; yet there are other senses of the phrase “morality evolves” that we haven’t considered. One of these is actually quite troubling; were it true, the entire project of morality would be in jeopardy. I’ll call it “evolutionary skepticism”; it says that yes, morality has evolved—and this is reason to doubt that morality is true. Richard Joyce, author of The Evolution of Morality, is of such a persuasion, and he makes a quite compelling case. Joyce’s central point is that evolution selects for fitness, not accuracy; we had reason to evolve in ways that would maximize the survival of our genes, not reasons to evolve in ways that would maximize the accuracy of our moral claims.

This is of course absolutely correct, and it is troubling precisely because we can all see that the two are not necessarily the same thing. It’s easy to imagine many ways that beliefs could evolve that had nothing to do with the accuracy of those beliefs.

But note that word: necessarily. Accuracy and fitness aren’t necessarily aligned—but it could still be that they are, in fact, aligned rather well. Yes, we can imagine ways a brain could evolve that would benefit its fitness without improving its accuracy; but is that actually what happened to our ancestors? Do we live on instinct, merely playing out by rote the lifestyles of our forebears, thinking and living the same way we have for hundreds of millennia?

Clearly not! Behold, you are reading a blog post! It was written on a laptop computer! While these facts may seem perfectly banal to you, they represent an unprecedented level of behavioral novelty, one achieved only by one animal species among millions, and even then only very recently. Human beings are incredibly flexible, incredibly creative, and incredibly intelligent. Yes, we evolved to be this way, of course we did; but so what? We are this way. We are capable of learning new things about the world, gaining in a few short centuries knowledge our forebears could never have imagined. Evolution does not always make animals into powerful epistemic engines—indeed, 99.99999% of the time it does not—but once in a while it does, and we are the result.

Natural selection is quite frugal; it tends to evolve things the easiest way. The way the world is laid out, it seems to be that the easiest way to evolve a brain that survives really well in a wide variety of ecological and social environments is to evolve a brain that is capable of learning to expand its own knowledge and understanding. After all, no other organism has ever been or is ever likely to be as evolutionarily fit as we are; we span the globe, cover a wide variety of ecological niches, and number in the billions and counting. We’ve even expanded beyond the planet Earth, something no other organism could even contemplate. We are successful because we are smart; is it really so hard to believe that we are smart because it made our ancestors successful?

Indeed, it must be this way, or we wouldn’t be able to make sense of the fact that our human brains, evolved for the African savannah a million years ago with minor tweaks since then, are capable of figuring out chess, calculus, writing, quantum mechanics, special relativity, television broadcasting, space travel, and for that matter Darwinian evolution and meta-ethics. None of these things could possibly have been adaptive in our ancestral ecology. They must be spandrels, fitness-neutral side-effects of evolved traits. And just like the original pendentives of San Marco that motivated Gould’s metaphor, what glorious spandrels they are!

Our genes made us better at gathering information and processing that information into correct beliefs, and calculus and quantum mechanics came along for the ride. Our greatest adaptation is to be adaptable; our niche is to need no niche, for we can carve our own.

This is not to abandon evolutionary psychology, for evolution does have a great deal to tell us about psychology. We do have instincts: preprocessing systems built into our sensory organs, innate emotions that motivate us to action, and evolved heuristics that we use to respond quickly under pressure. Steven Pinker argues convincingly that language is an evolved instinct—and where would we be without language? Our instincts are essential not only for our survival, but indeed for our rationality.

Staring at a blinking cursor on the blank white page of a word processor, imagining the infinity of texts that could be written upon that page, you could be forgiven for thinking that you were looking at a blank slate. Yet in fact you are staring at the pinnacle of high technology, an extremely complex interlocking system of hardware and software with dozens of components and billions of subcomponents, all precision-engineered for maximum efficiency. The possibilities are endless not because the system is simple and impinged upon by its environment, but because it is complex, and capable of engaging with that environment in order to convert subtle differences in input into vast differences in output. If this is true of a word processor, how much more true it must be of an organism capable of designing and using word processors! It is the very instincts that seem to limit our rationality which have made that rationality possible in the first place. Witness the eternal wisdom of Immanuel Kant:

Misled by such a proof of the power of reason, the demand for the extension of knowledge recognises no limits. The light dove, cleaving the air in her free flight, and feeling its resistance, might imagine that its flight would be still easier in empty space.

The analogy is even stronger than he knew—for brains, like wings, are an evolutionary adaptation! (What would Kant have made of Darwin?) But because our instincts are so powerful, they are self-correcting; they allow us to do science.

Richard Joyce agrees that we are right to think our evolved brains are reasonably reliable when it comes to scientific facts. He has to; otherwise his whole argument would be incoherent. Joyce agrees that we evolved to think 2+2=4 precisely because 2+2=4, and we evolved to think space is 3-dimensional precisely because space is 3-dimensional. Indeed, he must agree that we evolved to think that we evolved because we evolved! Yet, for some reason Joyce thinks that this same line of reasoning doesn’t apply to ethics.

But why wouldn’t it? In fact, I think we have more reason to trust our evolved capacities in ethics than we do in other domains of science, because the subject matter of morality—human behavior and social dynamics—is something we have been familiar with all the way back to the savannah. If we evolved to think that theft and murder are bad, why would that happen? I submit it would happen precisely because theft and murder are Pareto-suboptimal unsustainable strategies—that is, precisely because theft and murder are bad. (Don’t worry if you don’t know what I mean by “Pareto-suboptimal” and “unsustainable strategy”; I’ll get to those in later posts.) Once you realize that “bad” is a concept that can ultimately be unpacked to naturalistic facts, all reason to think it is inaccessible to natural selection drops away; natural selection could well have chosen brains that didn’t like murder precisely because murder is bad. Indeed, because morality is ultimately scientific, part of how natural selection could evolve us to be more moral is by evolving us to be more scientific. We are more scientific than apes, and vastly more scientific than cockroaches; we are, indeed, the most scientific animal that has ever lived on Earth.
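If you like, the Pareto-suboptimality claim can be sketched in a few lines of code. This is purely illustrative: the payoff numbers below are invented for the example (the post itself defers the formal treatment to later), and the game is just the familiar prisoner’s-dilemma shape.

```python
# Illustrative only: hypothetical payoffs for a two-player "respect property
# or steal" game. All numbers are invented for the sketch.
payoffs = {
    ("respect", "respect"): (3, 3),  # both produce and trade
    ("respect", "steal"):   (0, 4),  # victim loses, thief gains a bit
    ("steal",   "respect"): (4, 0),
    ("steal",   "steal"):   (1, 1),  # both waste effort raiding and guarding
}

def pareto_dominates(outcome_x, outcome_y):
    """True if outcome_x is at least as good for everyone and strictly
    better for at least one player than outcome_y."""
    x, y = payoffs[outcome_x], payoffs[outcome_y]
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

# Mutual theft is Pareto-suboptimal: mutual respect makes both better off.
print(pareto_dominates(("respect", "respect"), ("steal", "steal")))  # True
```

Under these (made-up) payoffs, universal theft is dominated for everyone at once, which is the sense in which a strategy can be “bad” as a plain naturalistic fact.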

I do think that our evolved moral instincts are to some degree mistaken or incomplete; but I can make sense of this, in the same way I make sense of the fact that other evolved instincts don’t quite fit what we have discovered in other sciences. For instance, humans have an innate concept of linear momentum that doesn’t quite fit with what we’ve discovered in physics. We tend to presume that objects have an inherent tendency toward rest, though in fact they do not—this is because in our natural environment, friction makes most objects act as if they had such a tendency. Roll a rock along the ground, and it will eventually stop. Run a few miles, and eventually you’ll have to stop too. Most things in our everyday life really do behave as if they had an inherent tendency toward rest. It’s only once we realized that friction is itself a force, not present everywhere, that we came to see that linear momentum is conserved in the absence of external forces. (Throw a rock in space, and it will not ever stop. Nor will you, by Newton’s Third Law.) This casts no doubt upon our intuitions about rocks rolled along the ground, which do indeed behave exactly as our intuition predicts.
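The friction point above can be made concrete with a toy simulation. The parameters here (initial speed, friction coefficient, time step) are invented for illustration; the only claim is the qualitative one from the paragraph: with friction the object stops, and with no external force its velocity never changes.

```python
# Toy sketch with invented parameters: a "rolled rock" under kinetic
# friction versus the same rock thrown in empty space.
def simulate(v0, mu, g=9.8, dt=0.01, steps=1000):
    """Euler-integrate velocity under a constant friction deceleration mu*g.
    With mu=0 there is no external force, so velocity is conserved."""
    v = v0
    for _ in range(steps):
        decel = mu * g if v > 0 else 0.0
        v = max(0.0, v - decel * dt)  # friction can't reverse the motion
    return v

on_ground = simulate(v0=5.0, mu=0.3)  # friction brings it to rest
in_space = simulate(v0=5.0, mu=0.0)   # momentum conserved forever
print(on_ground, in_space)  # 0.0 5.0
```

Our intuition is tuned to the first case; physics had to teach us that the second one is the default and friction is the added force.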

Similarly, our intuition that animals don’t deserve rights could well be an evolutionary consequence of the fact that we sometimes had to eat animals in order to survive, and so would do better not thinking about it too much; but now that we don’t need to do this anymore, we can reflect upon the deeper issues involved in eating meat. This is no reason to doubt our intuitions that parents should care for their children and murder is bad.

How I feel is how things are

Mar 17 JDN 2460388

One of the most difficult things in life to learn is how to treat your own feelings and perceptions as feelings and perceptions—rather than simply as the way the world is.

A great many errors people make can be traced to this.

When we disagree with someone (whether it is as trivial as pineapple on pizza or as important as international law), we feel like they must be speaking in bad faith, they must be lying—because, to us, they are denying the way the world is. If the subject is important enough, we may become convinced that they are evil—for only someone truly evil could deny such important truths. (Ultimately, even holy wars may come from this perception.)


When we are overconfident, we not only can’t see that we are; we can scarcely even consider that it could be true. Because we don’t simply feel confident; we are sure we will succeed. And thus if we do fail, as we often do, the result is devastating; it feels as if the world itself has changed in order to make our wishes not come true.

Conversely, when we succumb to Impostor Syndrome, we feel inadequate, and so become convinced that we are inadequate, and thus that anyone who says they believe we are competent must either be lying or else somehow deceived. And then we fear to tell anyone, because we know that our jobs and our status depend upon other people seeing us as competent—and we are sure that if they knew the truth, they’d no longer see us that way.

When people see their beliefs as reality, they don’t even bother to check whether their beliefs are accurate.

Why would you need to check whether the way things are is the way things are?

This is how common misconceptions persist—the information needed to refute them is widely available, but people simply don’t realize they need to look for it.

For lots of things, misconceptions aren’t very consequential. But some common misconceptions do have large consequences.

For instance, most Americans think that crime is increasing and that it is worse now than it was 30 or 50 years ago. (I tested this on my mother this morning; she thought so too.) It is in fact much, much better—violent crimes are about half as common in the US today as they were in the 1970s. Republicans are more likely to get this wrong than Democrats—but an awful lot of Democrats still get it wrong.

It’s not hard to see how that kind of misconception could drive voters into supporting “tough on crime” candidates who will enact needlessly harsh punishments and waste money on excessive police and incarceration. Indeed, when you look at our world-leading spending on police and incarceration (highest in absolute terms, third-highest as a portion of GDP), it’s pretty clear this is exactly what’s happening.

And it would be so easy to correct that misconception: just look it up. But people don’t even think to bother; they just know that their perception must be the truth. It never even occurs to them that they could be wrong, and so they don’t even bother to look.

This is not because people are stupid or lazy. (I mean, compared to what?) It’s because perceptions feel like the truth, and it’s shockingly difficult to see them as anything other than the truth.

It takes a very dedicated effort, and no small amount of training, to learn to see your own perceptions as how you see things rather than simply how things are.

I think part of what makes this so difficult is the existential terror that results when you realize that anything you believe—even anything you perceive—could potentially be wrong. Basically the entire field of epistemology is dedicated to understanding what we can and can’t be certain of—and the “can’t” is a much, much bigger set than the “can”.

In a sense, you can be certain of what you feel and perceive—you can be certain that you feel and perceive them. But you can’t be certain whether those feelings and perceptions correspond to your external reality.

When you are sad, you know that you are sad. You can be certain of that. But you don’t know whether you should be sad—whether you have a reason to be sad. Often, perhaps even usually, you do. But sometimes, the sadness comes from within you, or from misperceiving the world.

Once you learn to recognize your perceptions as perceptions, you can question them, doubt them, challenge them. Training your mind to do this is an important part of mindfulness meditation, and also of cognitive behavioral therapy.

But even after years of training, it’s still shockingly hard to do this, especially in the throes of a strong emotion. Simply seeing that what you’re feeling—about yourself, or your situation, or the world—is not an entirely accurate perception can take an incredible mental effort.

We really seem to be wired to see our perceptions as reality.

This makes a certain amount of sense, in evolutionary terms. In an ancestral environment where death was around every corner, we really didn’t have time to stop and think carefully about whether our perceptions were accurate.

Two ancient hominids hear a sound that might be a tiger. One immediately perceives it as a tiger, and runs away. The other stops to think, and then begins carefully examining his surroundings, looking for more conclusive evidence to determine whether it is in fact a tiger.

The latter is going to have more accurate beliefs—right up until the point where it is a tiger and he gets eaten.
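The tiger story is really an expected-cost argument, and it can be written out in a few lines. Every number below is invented purely to show the shape of the asymmetry: false alarms are cheap, missed tigers are catastrophic.

```python
# Hedged illustration with made-up numbers: why "perceive tiger, flee now"
# can beat "stop and verify" even though it yields more false beliefs.
p_tiger = 0.05            # probability the sound really is a tiger
cost_flee = 1             # small cost: a wasted sprint
cost_eaten = 10_000       # catastrophic cost of being caught
p_caught_if_verify = 0.5  # chance verifying is too slow when it IS a tiger

expected_cost_flee = cost_flee  # always pay the sprint, never get eaten
expected_cost_verify = p_tiger * p_caught_if_verify * cost_eaten

print(expected_cost_flee, expected_cost_verify)  # 1 250.0
```

Even with tigers being rare, the verifier’s expected cost dwarfs the fleer’s, so selection plausibly favored treating the perception as the reality.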

But in our world today, it may be more dangerous to hold onto false beliefs than to analyze and challenge our beliefs. We may harm ourselves—and others—more by trusting our perceptions too much rather than by taking the time to analyze them.

Fake skepticism

Jun 3 JDN 2458273

“You trust the mainstream media?” “Wake up, sheeple!” “Don’t listen to what so-called scientists say; do your own research!”

These kinds of statements have become quite ubiquitous lately (though perhaps the attitudes were always there, and we only began to hear them because of the Internet and social media), and are often used to defend the most extreme and bizarre conspiracy theories, from moon-landing denial to flat Earth. The amazing thing about these kinds of statements is that they can be used to defend literally anything, as long as you can find some source with less than 100% credibility that disagrees with it. (And what source has 100% credibility?)

And that, I think, should tell you something. An argument that can prove anything is an argument that proves nothing.

Reversed stupidity is not intelligence. The fact that the mainstream media, or the government, or the pharmaceutical industry, or the oil industry, or even gangsters, fanatics, or terrorists believes something does not make it less likely to be true.

In fact, the vast majority of beliefs held by basically everyone—including the most fanatical extremists—are true. I could list such consensus true beliefs for hours: “The sky is blue.” “2+2=4.” “Ice is colder than fire.”

Even if a belief is characteristic of a specifically evil or corrupt organization, that does not necessarily make it false (though it usually is evidence of falsehood in a Bayesian sense). If only terrible people believe X, then maybe you shouldn’t believe X. But if both good and bad people believe X, the fact that bad people believe X really shouldn’t matter to you.
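That parenthetical about “evidence in a Bayesian sense” can be made precise with a one-line application of Bayes’ rule. All the probabilities here are invented for illustration; the point is only that endorsement by bad actors typically lowers the posterior a little, rather than refuting the claim.

```python
# Minimal Bayes sketch (all probabilities invented): how much should a
# corrupt group's endorsement of claim X lower your credence in X?
prior_true = 0.5
p_endorse_if_true = 0.6   # bad actors endorse plenty of true, mundane claims
p_endorse_if_false = 0.8  # ...and are somewhat more drawn to false ones

# Bayes' rule: P(true | endorsed) = P(endorsed | true) P(true) / P(endorsed)
numerator = p_endorse_if_true * prior_true
normalizer = numerator + p_endorse_if_false * (1 - prior_true)
posterior_true = numerator / normalizer

print(round(posterior_true, 3))  # 0.429: lowered, but far from refuted
```

With these made-up likelihoods the posterior drops from 0.5 to about 0.43: weak evidence of falsehood, nothing like proof.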

People who use this kind of argument often present themselves as being “skeptics”. They imagine that they have seen through the veil of deception that blinds others.

In fact, quite the opposite is the case: This is fake skepticism. These people are not uniquely skeptical; they are uniquely credulous. If you think the Earth is flat because you don’t trust the mainstream scientific community, that means you do trust someone far less credible than the mainstream scientific community.

Real skepticism is difficult. It requires concerted effort and investigation, and typically takes years. To really seriously challenge the expert consensus in a field, you need to become an expert in that field. Ideally, you should get a graduate degree in that field and actually start publishing your heterodox views. Failing that, you should at least be spending hundreds or thousands of hours doing independent research. If you are unwilling or unable to do that, you are not qualified to assess the validity of the expert consensus.

This does not mean the expert consensus is always right—remarkably often, it isn’t. But it means you aren’t allowed to say it’s wrong, because you don’t know enough to assess that.

This is not elitism. This is not an argument from authority. This is a basic respect for the effort and knowledge that experts spend their lives acquiring.

People don’t like being told that they are not as smart as other people—even though, with any variation at all, that’s got to be true for a certain proportion of people. But I’m not even saying experts are smarter than you. I’m saying they know more about their particular field of expertise.

Do you walk up to construction workers on the street and critique how they lay concrete? When you step on an airplane, do you explain to the captain how to read an altimeter? When you hire a plumber, do you insist on using the snake yourself?

Probably not. And why not? Because you know these people have training; they do this for a living. Yeah, well, scientists do this for a living too—and our training is much longer. To be a plumber, you need a high school diploma and an apprenticeship that usually lasts about four years. To be a scientist, you need a PhD, which means four years of college plus an additional five or six years of graduate school.

To be clear, I’m not saying you should listen to experts speaking outside their expertise. Some of the most idiotic, arrogant things ever said by human beings have been said by physicists opining on biology or economists ranting about politics. Even within a field, some people have such narrow expertise that you can’t really trust them even on things that seem related—like macroeconomists with idiotic views on trade, or ecologists who clearly don’t understand evolution.

This is also why one of the great challenges of being a good interdisciplinary scientist is actually obtaining enough expertise in both fields you’re working in; it isn’t literally twice the work (since there is overlap—or you wouldn’t be doing it—and you do specialize in particular interdisciplinary subfields), but it’s definitely more work, and there are definitely a lot of people on each side of the fence who may never take you seriously no matter what you do.

How do you tell who to trust? This is why I keep coming back to the matter of expert consensus. The world is much too complicated for anyone, much less everyone, to understand it all. We must be willing to trust the work of others. The best way we have found to decide which work is trustworthy is by the norms and institutions of the scientific community itself. Since 97% of climatologists say that climate change is caused by humans, they’re probably right. Since 99% of biologists believe humans evolved by natural selection, that’s probably what happened. Since 87% of economists oppose tariffs, tariffs probably aren’t a good idea.

Can we be certain that the consensus is right? No. There is precious little in this universe that we can be certain about. But as in any game of chance, you need to play the best odds, and my money will always be on the scientific consensus.