The mythology mindset

Feb 5 JDN 2459981

I recently finished reading Steven Pinker’s latest book Rationality. It’s refreshing, well-written, enjoyable, and basically correct with some small but notable errors that seem sloppy—but then you could have guessed all that from the fact that it was written by Steven Pinker.

What really makes the book interesting is an insight Pinker presents near the end, regarding the difference between the “reality mindset” and the “mythology mindset”.

It’s a pretty simple notion, but a surprisingly powerful one.

In the reality mindset, a belief is a model of how the world actually functions. It must be linked to the available evidence and integrated into a coherent framework of other beliefs. You can logically infer from how some parts work to how other parts must work. You can predict the outcomes of various actions. You live your daily life in the reality mindset; you couldn’t function otherwise.

In the mythology mindset, a belief is a narrative that fulfills some moral, emotional, or social function. It’s almost certainly untrue or even incoherent, but that doesn’t matter. The important thing is that it sends the right messages. It has the right moral overtones. It shows you’re a member of the right tribe.

The idea is similar to Dennett’s “belief in belief”, which I’ve written about before; but I think this characterization may actually be a better one, not least because people would be more willing to use it as a self-description. If you tell someone “You don’t really believe in God, you believe in believing in God”, they will object vociferously (which is, admittedly, what the theory would predict). But if you tell them, “Your belief in God is a form of the mythology mindset”, I think they are at least less likely to immediately reject your claim out of hand. “You believe in God a different way than you believe in cyanide” isn’t as obviously threatening to their identity.

A similar notion came up in a Psychology of Religion course I took, in which the professor discussed “anomalous beliefs” linked to various world religions. He picked on a bunch of obscure religions, often held by various small tribes. He asked for more examples from the class. Knowing he was nominally Catholic and not wanting to let mainstream religion off the hook, I presented my example: “This bread and wine are the body and blood of Christ.” To his credit, he immediately acknowledged it as a very good example.

It’s also not quite the same thing as saying that religion is a “metaphor”; that’s not a good answer for a lot of reasons, but perhaps chief among them is that people don’t say they believe metaphors. If I say something metaphorical and then you ask me, “Hang on; is that really true?” I will immediately acknowledge that it is not, in fact, literally true. Love is a rose with all its sweetness and all its thorns—but no, love isn’t really a rose. And when it comes to religious belief, saying that you think it’s a metaphor is basically a roundabout way of saying you’re an atheist.

From all these different directions, we seem to be converging on a single deeper insight: when people say they believe something, quite often, they clearly mean something very different by “believe” than what I would ordinarily mean.

I’m tempted even to say that they don’t really believe it—but in common usage, the word “belief” is used at least as often to refer to the mythology mindset as the reality mindset. (In fact, it sounds less weird to say “I believe in transubstantiation” than to say “I believe in gravity”.) So if they don’t really believe it, then they at least mythologically believe it.

Both mindsets seem to come very naturally to human beings, in particular contexts. And not just modern people, either. Humans have always been like this.

Ask that psychology professor about Jesus, and he’ll tell you a tall tale of life, death, and resurrection by a demigod. But ask him about the Stroop effect, and he’ll provide a detailed explanation of rigorous experimental protocol. He believes something about God; but he knows something about psychology.

Ask a hunter-gatherer how the world began, and he’ll surely spin you a similarly tall tale about some combination of gods and spirits and whatever else, and it will all be peculiarly particular to his own tribe and no other. But ask him how to gut a fish, and he’ll explain every detail with meticulous accuracy, with almost the same rigor as that scientific experiment. He believes something about the sky-god; but he knows something about fish.

To be a rationalist, then, is to aspire to live your whole life in the reality mindset. To seek to know rather than believe.

This isn’t about certainty. A rationalist can be uncertain about many things—in fact, it’s rationalists of all people who are most willing to admit and quantify their uncertainty.

This is about whether you allow your beliefs to float free as bare, almost meaningless assertions that you profess to show you are a member of the tribe, or you make them pay rent, directly linked to other beliefs and your own experience.

As long as I can remember, I have always aspired to do this. But not everyone does. In fact, I dare say most people don’t. And that raises a very important question: Should they? Is it better to live the rationalist way?

I believe that it is. I suppose I would, temperamentally. But say what you will about the Enlightenment and the scientific revolution, they have clearly revolutionized human civilization and made life much better today than it was for most of human existence. We are peaceful, safe, and well-fed in a way that our not-so-distant ancestors could only dream of, and it’s largely thanks to systems built under the principles of reason and rationality—that is, the reality mindset.

We would never have industrialized agriculture if we still thought in terms of plant spirits and sky gods. We would never have invented vaccines and antibiotics if we still believed disease was caused by curses and witchcraft. We would never have built power grids and the Internet if we still saw energy as a mysterious force permeating the world and not as a measurable, manipulable quantity.

This doesn’t mean that ancient people who saw the world in a mythological way were stupid. In fact, it doesn’t even mean that people today who still think this way are stupid. This is not about some innate, immutable mental capacity. It’s about a technology—or perhaps the technology, the meta-technology that makes all other technology possible. It’s about learning to think the same way about the mysterious and the familiar, using the same kind of reasoning about energy and death and sunlight as we already did about rocks and trees and fish. When encountering something new and mysterious, someone in the mythology mindset quickly concocts a fanciful tale about magical beings that inevitably serves to reinforce their existing beliefs and attitudes, without the slightest shred of evidence for any of it. Faced with the same mystery, someone in the reality mindset looks closer and tries to figure it out.

Still, this gives me some compassion for people with weird, crazy ideas. I can better make sense of how someone living in the modern world could believe that the Earth is 6,000 years old or that the world is ruled by lizard-people. Because they probably don’t really believe it, they just mythologically believe it—and they don’t understand the difference.

Moral disagreement is not bad faith

Jun 7 JDN 2459008

One of the most dangerous moves to make in an argument is to accuse your opponent of bad faith. It’s a powerful, and therefore tempting, maneuver: If they don’t even really believe what they are saying, then you can safely ignore basically whatever comes out of their mouth. And part of why this is so tempting is that it is in fact occasionally true—people do sometimes misrepresent their true beliefs in various ways for various reasons. On the Internet especially, sometimes people are just trolling.

But unless you have really compelling evidence that someone is arguing in bad faith, you should assume good faith. You should assume that whatever they are asserting is what they actually believe. For if you assume bad faith and are wrong, you have just cut off any hope of civil discourse between the two of you. You have made it utterly impossible for either side to learn anything or change their mind in any way. If you assume good faith and are wrong, you may have been overly charitable; but in the end you are the one that is more likely to persuade any bystanders, not the one who was arguing in bad faith.

Furthermore, it is important to really make an effort to understand your opponent’s position as they understand it before attempting to respond to it. Far too many times, I have seen someone accused of bad faith by an opponent who simply did not understand their worldview—and did not even seem willing to try to understand their worldview.

In this post, I’m going to point out some particularly egregious examples of this phenomenon that I’ve found, all statements made by left-wing people in response to right-wing people. Why am I focusing on these? Well, for one thing, it’s as important to challenge bad arguments on your own side as it is to do so on the other side. I also think I’m more likely to be persuasive to a left-wing audience. I could find right-wing examples easily enough, but I think it would be less useful: It would be too tempting to think that this is something only the other side does.

Example 1: “Republicans Have Stopped Pretending to Care About Life”

The phrase “pro-life” means thinking that abortion is wrong. That’s all it means. It’s jargon at this point. The phrase has taken on this meaning independent of its constituent parts, just as a red herring need not be either red or a fish.

Stop accusing people of not being “truly pro-life” because they don’t adopt some other beliefs that are not related to abortion. Even if those would be advancing life in some sense (most people probably think that most things they think are good advance life in some sense!), they aren’t relevant to the concept of being “pro-life”. Moreover, being “pro-life” in the traditional conservative sense isn’t even about minimizing the harm of abortion or the abortion rate. It’s about emphasizing the moral wrongness of abortion itself, and often even criminalizing it.

I don’t think this is really so hard to understand. If someone truly, genuinely believes that abortion is murdering a child, it’s quite clear why they won’t be convinced by attempts at minimizing harm or trying to reduce the abortion rate via contraception or other social policy. Many policies are aimed at “reducing the demand for abortion”; would you want to “reduce the demand for murder”? No, you’d want murderers to be locked up. You wouldn’t care what their reasons were, and you wouldn’t be interested in using social policies to address those reasons. It’s not even hard to understand why this would be such an important issue to them, overriding almost anything else: If you thought that millions of people were murdering children you would consider that an extremely important issue too.

If you want to convince people to support Roe v. Wade, you’re going to have to change their actual belief that abortion is murder. You may even be able to convince them that they don’t really think abortion is murder—many conservatives support the death penalty for murder, but very few do so for abortion. But they clearly do think that abortion is a grave moral wrong, and you can’t simply end-run around that by calling them hypocrites because they don’t care about whatever other issue you think they should care about.

Example 2: “Stop pretending to care about human life if you support wars in the Middle East”

I had some trouble finding the exact wording of the meme I originally saw with this sentiment, but the gist of it was basically that if you support bombing Afghanistan, Libya, Iraq, and/or Syria, you have lost all legitimacy to claiming that you care about human life.

Say what you will about these wars (though to be honest I think what the US has done in Libya and Syria has done more good than harm), but simply supporting a war does not automatically undermine all your moral legitimacy. The kind of radical pacifism that requires us to never kill anyone ever is utterly unrealistic; the question is and has always been “Which people is it okay to kill, when and how and why?” Some wars are justified; we have to accept that.

It would be different if these were wars of genocidal extermination; I can see a case for saying that anyone who supported the Holocaust or the Rwandan Genocide has lost all moral legitimacy. But even then it isn’t really accurate to say that those people don’t care about human life; it’s much more accurate to say that they have assigned the group of people they want to kill to a subhuman status. Maybe you would actually get more traction by saying “They are human beings too!” rather than by accusing people of not believing in the value of human life.

And clearly these are not wars of extermination—if the US military wanted to exterminate an entire nation of people, they could do so much more efficiently than by using targeted airstrikes and conventional warfare. Remember: They have nuclear weapons. Even if you think that they wouldn’t use nukes because of fear of retaliation (Would Russia or China really retaliate using their own nukes if the US nuked Afghanistan or Iran?), it’s clear that they could have done a lot more to kill a lot more innocent people if that were actually their goal. It’s one thing to say they don’t take enough care not to kill innocent civilians—I agree with that. It’s quite another to say that they actively try to kill innocent civilians—that’s clearly not what is happening.

Example 3: “Stop pretending to be Christian if you won’t help the poor.”

This one I find a good deal more tempting: In the Bible, Jesus does spend an awful lot more words on helping the poor than he does on, well, almost anything else; and he doesn’t even once mention abortion or homosexuality. (The rest of the Bible does at least mention homosexuality, but it really doesn’t have any clear mentions of abortion.) So it really is tempting to say that anyone who doesn’t make helping the poor their number one priority can’t really be a Christian.

But the world is more complicated than that. People can truly and deeply believe some aspects of a religion while utterly rejecting others. They can do this more or less arbitrarily, in a way that may not even be logically coherent. They may even honestly believe every single word of the Bible to be the absolute perfect truth of an absolute perfect God, and yet there are still passages you could point them to that they would have to admit they don’t believe in. (There are literally hundreds of explicit contradictions in the Bible. Many are minor—though they still undermine any claim to absolute perfect truth—but some are really quite substantial. Does God forgive and forget, or does he visit revenge upon generations to come? That’s kind of a big deal! And should we be answering fools or not?) In some sense they don’t really believe that every word is true, then; but they do seem to believe in believing it.

Yes, it’s true; people can worship a penniless son of a carpenter who preached peace and charity and at the same time support cutting social welfare programs and bombing the Middle East. Such a worldview may not be entirely self-consistent; it’s certainly not the worldview that Jesus himself espoused. But it nevertheless is quite sincerely believed by many millions of people.

It may still be useful to understand the Bible in order to persuade Christians to help the poor more. There are certainly plenty of passages you can point them to where Jesus talks about how important it is to help the poor. Likewise, Jesus doesn’t seem to much like the rich, so it is fair to ask: How Christian is it for Republicans to keep cutting taxes on the rich? (I literally laughed out loud when I first saw this meme: “Celebrate Holy Week By Flogging a Banker: It’s What Jesus Would Have Done!“) But you should not accuse people of “pretending to be Christian”. They really do strongly identify themselves as Christian, and would sooner give up almost anything else about their identity. If you accuse them of pretending, all that will do is shut down the conversation.

Now, after all that, let me give one last example that doesn’t fit the trend, one example where I really do think the other side is acting in bad faith.

Example 4: “#AllLivesMatter is a lie. You don’t actually think all lives matter.”

I think this one is actually true. If you truly believed that all lives matter, you wouldn’t post the hashtag #AllLivesMatter in response to #BlackLivesMatter protests against police brutality.

First of all, you’d probably be supporting those protests. But even if you didn’t for some reason, that isn’t how you would use the hashtag. As a genuine expression of caring, the hashtag #AllLivesMatter would only really make sense for something like Oxfam or UNICEF: Here are these human lives that are in danger and we haven’t been paying enough attention to them, and here, you can follow my hashtag and give some money to help them because all lives matter. If it were really about all lives mattering, then you’d see the hashtag pop up after a tsunami in Southeast Asia or a famine in central Africa. (For a while I tried actually using it that way; I quickly found that it was overwhelmed by the bad faith usage and decided to give up.)

No, this hashtag really seems to be trying to use a genuinely reasonable moral norm—all lives matter—as a weapon against a political movement. We don’t see #AllLivesMatter popping up asking people to help save some lives—it’s always as a way of shouting down other people who want to save some lives. It’s a glib response that lets you turn away and ignore their pleas, without ever actually addressing the substance of what they are saying. If you really believed that all lives matter, you would not be so glib; you would want to understand how so many people are suffering and want to do something to help them. Even if you ultimately disagreed with what they were saying, you would respect them enough to listen.

The counterpart #BlueLivesMatter isn’t in bad faith, but it is disturbing in a different way: What are ‘blue lives’? People aren’t born police officers. They volunteer for that job. They can quit if they want. No one can quit being Black. Working as a police officer isn’t even especially dangerous! But it’s not a bad faith argument: These people really do believe that the lives of police officers are worth more—apparently much more—than the lives of Black civilians.

I do admit, the phrasing “#BlackLivesMatter” is a bit awkward, and could be read to suggest that other lives don’t matter, but it takes about 2 minutes of talking to someone (or reading a blog by someone) who supports those protests to gather that this is not their actual view. Perhaps they should have used #BlackLivesMatterToo, but when your misconception is that easily rectified, the responsibility to avoid it falls on you. (Then again, some people do seem to stoke this misconception: I was quite annoyed when a question was asked at a Democratic debate: “Do Black Lives Matter, or Do All Lives Matter?” The correct answer of course is “All lives matter, which is why I support the Black Lives Matter movement.”)

So, yes, bad faith arguments do exist, and sometimes we need to point them out. But I implore you, consider that a last resort, a nuclear option you’ll only deploy when all other avenues have been exhausted. Once you accuse someone of bad faith, you have shut down the conversation completely—preventing you, them, and anyone else who was listening from having any chance of learning or changing their mind.

The backfire effect has been greatly exaggerated

Sep 8 JDN 2458736

Do a search for “backfire effect” and you’re likely to get a large number of results, many of them from quite credible sources. The Oatmeal did an excellent comic on it. The basic notion is simple: “[…]some individuals when confronted with evidence that conflicts with their beliefs come to hold their original position even more strongly.”

The implications of this effect are terrifying: There’s no point in arguing with anyone about anything controversial, because once someone strongly holds a belief there is nothing you can do to ever change it. Beliefs are fixed and unchanging, stalwart cliffs against the petty tides of evidence and logic.

Fortunately, the backfire effect is not actually real—or if it is, it’s quite rare. Over many years those seemingly-ineffectual tides can erode those cliffs down and turn them into sandy beaches.

The most recent studies with larger samples and better statistical analysis suggest that the typical response to receiving evidence contradicting our beliefs is—lo and behold—to change our beliefs toward that evidence.

To be clear, very few people completely revise their worldview in response to a single argument. Instead, they try to make a few small changes and fit them in as best they can.

But would we really expect otherwise? Worldviews are holistic, interconnected systems. You’ve built up your worldview over many years of education, experience, and acculturation. Even when someone presents you with extremely compelling evidence that your view is wrong, you have to weigh that against everything else you have experienced prior to that point. It’s entirely reasonable—rational, even—for you to try to fit the new evidence in with a minimal overall change to your worldview. If it’s possible to make sense of the available evidence with only a small change in your beliefs, it makes perfect sense for you to do that.

What if your whole worldview is wrong? You might have based your view of the world on a religion that turns out not to be true. You might have been raised into a culture with a fundamentally incorrect concept of morality. What if you really do need a radical revision—what then?

Well, that can happen too. People change religions. They abandon their old cultures and adopt new ones. This is not a frequent occurrence, to be sure—but it does happen. It happens, I would posit, when someone has been bombarded with contrary evidence not once, not a few times, but hundreds or thousands of times, until they can no longer sustain the crumbling fortress of their beliefs against the overwhelming onslaught of argument.

I think the reason that the backfire effect feels true to us is that our life experience is largely that “argument doesn’t work”; we think back to all the times that we have tried to convince someone to change a belief that was important to them, and we can find so few examples of when it actually worked. But this is setting the bar much too high. You shouldn’t expect to change an entire worldview in a single conversation. Even if your worldview is correct and theirs is not, that one conversation can’t have provided sufficient evidence for them to rationally conclude that. One person could always be mistaken. One piece of evidence could always be misleading. Even a direct experience could be a delusion or a foggy memory.

You shouldn’t be trying to turn a Young-Earth Creationist into an evolutionary biologist, or a climate change denier into a Greenpeace member. You should be trying to make that Creationist question whether the Ussher chronology is really so reliable, or if perhaps the Earth might be a bit older than a 17th century theologian interpreted it to be. You should be getting the climate change denier to question whether scientists really have such a greater vested interest in this than oil company lobbyists. You can’t expect to make them tear down the entire wall—just get them to take out one brick today, and then another brick tomorrow, and perhaps another the day after that.

The proverb is of uncertain provenance, variously attributed, rarely verified, but it is still my favorite: No single raindrop feels responsible for the flood.

Do not seek to be a flood. Seek only to be a raindrop—for if we all do, the flood will happen sure enough. (There’s a version more specific to our times: So maybe we’re snowflakes. I believe there is a word for a lot of snowflakes together: Avalanche.)

And remember this also: When you argue in public (which includes social media), you aren’t just arguing for the person you’re directly engaged with; you are also arguing for everyone who is there to listen. Even if you can’t get the person you’re arguing with to concede even a single point, maybe there is someone else reading your post who now thinks a little differently because of something you said. In fact, maybe there are many people who think a little differently—the marginal impact of slacktivism can actually be staggeringly large if the audience is big enough.

This can be frustrating, thankless work, for few people will ever thank you for changing their mind, and many will condemn you even for trying. Finding out you were wrong about a deeply-held belief can be painful and humiliating, and most people will attribute that pain and humiliation to the person who called them out for being wrong—rather than placing the blame where it belongs, which is on whatever source or method made them wrong in the first place. Being wrong feels just like being right.

But this is important work, among the most important work that anyone can do. Philosophy, mathematics, science, technology—all of these things depend upon it. Changing people’s minds by evidence and rational argument is literally the foundation of civilization itself. Every real, enduring increment of progress humanity has ever made depends upon this basic process. Perhaps occasionally we have gotten lucky and made the right choice for the wrong reasons; but without the guiding light of reason, there is nothing to stop us from switching back and making the wrong choice again soon enough.

So I guess what I’m saying is: Don’t give up. Keep arguing. Keep presenting evidence. Don’t be afraid that your arguments will backfire—because in fact they probably won’t.

The vector geometry of value change

Post 239: May 20 JDN 2458259

This post is one of those where I’m trying to sort out my own thoughts on an ongoing research project, so it’s going to be a bit more theoretical than most, but I’ll try to spare you the mathematical details.

People often change their minds about things; that should be obvious enough. (Maybe it’s not as obvious as it might be, as the brain tends to erase its prior beliefs as wastes of data storage space.)

Most of the ways we change our minds are fairly minor: We get corrected about Napoleon’s birthdate, or learn that George Washington never actually chopped down any cherry trees, or look up the actual weight of an average African elephant and are surprised.

Sometimes we change our minds in larger ways: We realize that global poverty and violence are actually declining, when we thought they were getting worse; or we learn that climate change is actually even more dangerous than we thought.

But occasionally, we change our minds in an even more fundamental way: We actually change what we care about. We convert to a new religion, or change political parties, or go to college, or just read some very compelling philosophy books, and come out of it with a whole new value system.

Often we don’t anticipate that our values are going to change. That is important and interesting in its own right, but I’m going to set it aside for now, and look at a different question: What about the cases where we know our values are going to change? Can it ever be rational for someone to choose to adopt a new value system?

Yes, it can—and I can put quite tight constraints on precisely when.

Here’s the part where I hand-wave the math, but imagine for a moment there are only two goods in the world that anyone would care about. (This is obviously vastly oversimplified, but it’s easier to think in two dimensions to make the argument, and it generalizes to n dimensions easily from there.) Maybe you choose a job caring only about money and integrity, or design policy caring only about security and prosperity, or choose your diet caring only about health and deliciousness.

I can then represent your current state as a vector, a two-dimensional object with a length and a direction. The length describes how happy you are with your current arrangement. The direction describes your values—the direction of the vector characterizes the trade-off in your mind of how much you care about each of the two goods. If your vector is pointed almost entirely parallel with health, you don’t much care about deliciousness. If it’s pointed mostly at integrity, money isn’t that important to you.

This diagram shows your current state as a green vector.

[Figure: vector1]

Now suppose you have the option of taking some action that will change your value system. If that’s all it would do and you know that, you wouldn’t accept it. You will be no better off, and your value system will be different, which is bad from your current perspective. So here, you would not choose to move to the red vector:

[Figure: vector2]

But suppose that the action would change your value system, and make you better off. Now the red vector is longer than the green vector. Should you choose the action?

[Figure: vector3]

It’s not obvious, right? From the perspective of your new self, you’ll definitely be better off, and that seems good. But your values will change, and maybe you’ll start caring about the wrong things.

I realized that the right question to ask is whether you’ll be better off from your current perspective. If you and your future self both agree that this is the best course of action, then you should take it.

The really cool part is that (hand-waving the math again) it’s possible to work this out as a projection of the new vector onto the old vector. A large change in values will be reflected as a large angle between the two vectors; to compensate for that you need a large change in length, reflecting a greater improvement in well-being.

If the projection of the new vector onto the old vector is longer than the old vector itself, you should accept the value change.

[Figure: vector4]

If the projection of the new vector onto the old vector is shorter than the old vector, you should not accept the value change.

[Figure: vector5]

This captures the trade-off between increased well-being and changing values in a single number. It fits the simple intuitions that being better off is good, and changing values more is bad—but more importantly, it gives us a way of directly comparing the two on the same scale.
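Since I’ve been hand-waving the math, here is a minimal sketch of the decision rule in Python (my own illustration, with made-up numbers): the projection of the new vector onto the old one exceeds the old vector’s length exactly when the dot product of the two vectors exceeds the squared length of the old one.

```python
import numpy as np

def accept_value_change(v_old: np.ndarray, v_new: np.ndarray) -> bool:
    """Accept a value change iff the projection of the new state vector
    onto the old one is longer than the old vector itself.

    Direction encodes values (the trade-off between the two goods);
    length encodes well-being under those values.
    """
    # Scalar projection of v_new onto v_old is (v_new . v_old) / |v_old|;
    # comparing it to |v_old| reduces to: v_new . v_old > |v_old|^2.
    return float(np.dot(v_new, v_old)) > float(np.dot(v_old, v_old))

# Goods: (money, integrity). Hypothetical numbers for illustration only.
v_current = np.array([3.0, 0.0])    # cares only about money
v_offer   = np.array([3.5, 1.0])    # better off, values slightly rotated
v_orthog  = np.array([0.0, 100.0])  # pure integrity, however well off

print(accept_value_change(v_current, v_offer))   # True
print(accept_value_change(v_current, v_orthog))  # False, at any length
```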

This is a very simple model with some very profound implications. One is that certain value changes are impossible in a single step: If a value change would require you to take on values that are completely orthogonal or diametrically opposed to your own, no increase in well-being will be sufficient.

It doesn’t matter how long I make this red vector, the projection onto the green vector will always be zero. If all you care about is money, no amount of integrity will entice you to change.

[Figure: vector6]

But a value change that was impossible in a single step can be feasible, even easy, if conducted over a series of smaller steps. Here I’ve taken that same impossible transition, and broken it into five steps that now make it feasible. By offering a bit more money for more integrity, I’ve gradually weaned you into valuing integrity above all else:

[Figure: vector7]
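Continuing the same toy sketch (again with numbers I made up for illustration): the direct jump from all-money to all-integrity fails the projection test no matter how long the new vector is, but five 18-degree rotations, each paired with a 10% gain in well-being, all pass it.

```python
import numpy as np

def accept(v_old, v_new):
    # Same criterion as before: projection must exceed the old length.
    return float(np.dot(v_new, v_old)) > float(np.dot(v_old, v_old))

def rotate(v, theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

v = np.array([1.0, 0.0])                      # start: money only
assert not accept(v, np.array([0.0, 100.0]))  # one giant leap: rejected

step = np.radians(90 / 5)  # five 18-degree steps toward integrity
for _ in range(5):
    # Each step rotates values slightly and grows well-being by 10%;
    # it is accepted because 1.10 * cos(18 deg) > 1.
    v_next = 1.10 * rotate(v, step)
    assert accept(v, v_next)
    v = v_next

print(np.round(v, 3))  # ~[0, 1.611]: now valuing integrity above all
```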

This provides a formal justification for the intuitive sense many people have of a “moral slippery slope” (commonly regarded as a fallacy). If you make small concessions to an argument that end up changing your value system slightly, and continue to do so many times, you could end up with radically different beliefs at the end, even diametrically opposed to your original beliefs. Each step was rational at the time you took it, but because you changed yourself in the process, you ended up somewhere you would not have wanted to go.

This is not necessarily a bad thing, however. If the reason you made each of those changes was actually a good one—you were provided with compelling evidence and arguments to justify the new beliefs—then the whole transition does turn out to be a good thing, even though you wouldn’t have thought so at the time.

This also allows us to formalize the notion of “inferential distance”: the inferential distance is the number of steps of value change required to make someone understand your point of view. It’s a function of both the difference in values and the difference in well-being between their point of view and yours.

Another key insight is that if you want to persuade someone to change their mind, you need to do it slowly, with small changes repeated many times, and you need to benefit them at each step. You can only persuade someone to change their mind if they will end up better off than they were at each step.

Is this an endorsement of wishful thinking? Not if we define “well-being” in the proper way. It can make me better off in a deep sense to realize that my wishful thinking was incorrect, so that I realize what must be done to actually get the good things I thought I already had.  It’s not necessary to appeal to material benefits; it’s necessary to appeal to current values.

But it does support the notion that you can’t persuade someone by belittling them. You won’t convince people to join your side by telling them that they are defective and bad and should feel guilty for being who they are.

If that seems obvious, well, maybe you should talk to some of the people who are constantly pushing “White privilege”. If you focused on how reducing racism would make people—even White people—better off, you’d probably be more effective. In some cases there would be direct material benefits: Racism creates inefficiency in markets that reduces overall output. But in other cases, sure, maybe there’s no direct benefit for the person you’re talking to; but you can talk about other sorts of benefits, like what sort of world they want to live in, or how proud they would feel to be part of the fight for justice. You can say all you want that they shouldn’t need this kind of persuasion, they should already believe and do the right thing—and you might even be right about that, in some ultimate sense—but do you want to change their minds or not? If you actually want to change their minds, you need to meet them where they are, make small changes, and offer benefits at each step.

If you don’t, you’ll just keep on projecting a vector orthogonally, and you’ll keep ending up with zero.

Are some ideas too ridiculous to bother with?

Apr 22 JDN 2458231

Flat Earth. Young-Earth Creationism. Reptilians. 9/11 “Truth”. Rothschild conspiracies.

There are an astonishing number of ideas that satisfy two apparently-contrary conditions:

  1. They are so obviously ridiculous that even a few minutes of honest, rational consideration of evidence that is almost universally available will immediately refute them;
  2. They are believed by tens or hundreds of millions of otherwise-intelligent people.

Young-Earth Creationism is probably the most alarming, seeing as it grips the minds of some 38% of Americans.

What should we do when faced with such ideas? This is something I’ve struggled with before.

I’ve spent a lot of time and effort trying to actively address and refute them—but I don’t think I’ve even once actually persuaded someone who believes these ideas to change their mind. This doesn’t mean my time and effort were entirely wasted; it’s possible that I managed to convince bystanders, or gained some useful understanding, or simply improved my argumentation skills. But it does seem likely that my time and effort were mostly wasted.

It’s tempting, therefore, to give up entirely, and just let people go on believing whatever nonsense they want to believe. But there’s a rather serious downside to that as well: Thirty-eight percent of Americans.

These people vote. They participate in community decisions. They make choices that affect the rest of our lives. Nearly all of those Creationists are Evangelical Christians—and White Evangelical Christians voted overwhelmingly in favor of Donald Trump. I can’t be sure that changing their minds about the age of the Earth would also change their minds about voting for Trump, but I can say this: If all the Creationists in the US had simply not voted, Hillary Clinton would have won the election.

And let’s not leave the left wing off the hook either. Jill Stein is a 9/11 “Truther”, and pulled a lot of fellow “Truthers” to her cause in the election as well. Had all of Jill Stein’s votes gone to Hillary Clinton instead, again Hillary would have won, even if all the votes for Trump had remained the same. (That said, there is reason to think that if Stein had dropped out, most of those folks wouldn’t have voted at all.)

Therefore, I don’t think it is safe to simply ignore these ridiculous beliefs. We need to do something; the question is what.

We could try to censor them, but first of all that violates basic human rights—which should be a sufficient reason not to do it—and second, it probably wouldn’t even work. Censorship typically leads to radicalization, not assimilation.

We could try to argue against them. Ideally this would be the best option, but it has not shown much effect so far. The kind of person who sincerely believes that the Earth is 6,000 years old (let alone that governments are secretly ruled by reptilian alien invaders) isn’t the kind of person who is highly responsive to evidence and rational argument.

In fact, there is reason to think that these people don’t actually believe what they say the same way that you and I believe things. I’m not saying they’re lying, exactly. They think they believe it; they want to believe it. They believe in believing it. But they don’t actually believe it—not the way that I believe that cyanide is poisonous or the way I believe the sun will rise tomorrow. It isn’t fully integrated into the way that they anticipate outcomes and choose behaviors. It’s more of a free-floating sort of belief, where professing a particular belief allows them to feel good about themselves, or represent their status in a community.

To be clear, it isn’t that these beliefs are unimportant to them; on the contrary, they are in some sense more important. Creationism isn’t really about the age of the Earth; it’s about who you are and where you belong. A conventional belief can be changed by evidence about the world because it is about the world; a belief-in-belief can’t be changed by evidence because it was never really about that.

But if someone’s ridiculous belief is really about their identity, how do we deal with that? I can’t refute an identity. If your identity is tied to a particular social group, maybe they could ostracize you and cause you to lose the identity; but an outsider has no power to do that. (Even then, I strongly suspect that, for instance, most excommunicated Catholics still see themselves as Catholic.) And if it’s a personal identity not tied to a particular group, even that option is unavailable.

Where, then, does that leave us? It would seem that we can’t change their minds—but we also can’t afford not to change their minds. We are caught in a terrible dilemma.

I think there might be a way out. It’s a bit counter-intuitive, but I think what we need to do is stop taking them seriously as beliefs, and start treating them purely as announcements of identity.

So when someone says something like, “The Rothschilds run everything!”, instead of responding as though this were a coherent proposition being asserted, treat it as if someone had announced, “Boo! I hate the Red Sox!” Belief in the Rothschild conspiracies isn’t a well-defined set of propositions about the world; it’s an assertion of membership in a particular sort of political sect that is vaguely left-wing and anarchist. You don’t really think the Rothschilds rule everything. You just want to express your (quite justifiable) anger at how our current political system privileges the rich.

Likewise, when someone says they think the Earth is 6,000 years old, you could try to present the overwhelming scientific evidence that they are wrong—but it might be more productive, and it is certainly easier, to just think of this as a funny way of saying “I’m an Evangelical Christian”.

Will this eliminate the ridiculous beliefs? Not immediately. But it might ultimately do so, in the following way: By openly acknowledging the belief-in-belief as a signaling mechanism, we can open opportunities for people to develop new, less pathological methods of signaling. (Instead of saying you think the Earth is 6,000 years old, maybe you could wear a funny hat, like Orthodox Jews do. Funny hats don’t hurt anybody. Everyone loves funny hats.) People will always want to signal their identity, and there are fundamental reasons why such signals will typically be costly for those who use them; but we can try to make them not so costly for everyone else.

This also makes arguments a lot less frustrating, at least at your end. It might make them more frustrating at the other end, because people want their belief-in-belief to be treated like proper belief, and you’ll be refusing them that opportunity. But this is not such a bad thing; if we make it more frustrating to express ridiculous beliefs in public, we might manage to reduce the frequency of such expression.

Why do so many Americans think that crime is increasing?

Jan 29, JDN 2457783

Since the 1990s, crime in the United States has been decreasing, and yet in every poll since then most Americans report that they believe that crime is increasing.

It’s not a small decrease either. The US murder rate is down to the lowest it has been in a century. The number of violent crimes per year in the US is now smaller (by 34 log points) than it was 20 years ago, despite a significant increase in total population (19 log points—and the magic of log points is that, yes, the rate has decreased by precisely 53 log points).
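(If log points are unfamiliar: a change of x log points means the natural logarithm changed by x/100, and unlike percentage changes they add cleanly across ratios. A quick Python sketch with round numbers close to the figures above, not the actual FBI counts:)

```python
import math

def log_points(old, new):
    """100 times the change in natural log; these add across ratios."""
    return 100 * math.log(new / old)

# Round illustrative numbers, not the actual FBI counts:
crimes     = log_points(1.00, 0.71)  # about -34 log points
population = log_points(1.00, 1.21)  # about +19 log points

# The crime rate is crimes divided by population, so log points just add:
rate = crimes - population
print(round(crimes), round(population), round(rate))  # -34 19 -53
```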

It isn’t geographically uniform, of course; some states have improved much more than others, and a few states (such as New Mexico) have actually gotten worse.

The 1990s were a peak of violent crime, so one might say that we are just regressing to the mean. (Even that would be enough to make it baffling that people think crime is increasing.) But in fact overall crime in the US is now the lowest it has been since the 1970s, and still decreasing.

Indeed, this decrease has been underestimated, because we are now much better about reporting and investigating crimes than we used to be (which may also be part of why they are decreasing, come to think of it). If you compare against surveys of people who say they have been personally victimized, we’re looking at a decline in violent crime rates of two-thirds—109 log points.

Just since 2008 violent crime has decreased by 26% (30 log points)—but of course we all know that Obama is “soft on crime” because he thinks cops shouldn’t be allowed to just shoot Black kids for no reason.

And yet, over 60% of Americans believe that overall crime in the US has increased in the last 10 years (though only 38% think it has increased in their own community!). These figures are actually down from 2010, when 66% thought crime was increasing nationally and 49% thought it was increasing in their local area.

The proportion of people who think crime is increasing does seem to decrease as crime rates decrease—but it still remains alarmingly high. If people were half as rational as most economists seem to believe, the proportion of people who think crime is increasing should drop to basically zero whenever crime rates decrease, since that’s a really basic fact about the world that you can just go look up on the Web in a couple of minutes. There’s no deep ambiguity, not even much “rational ignorance” given the low cost of getting correct answers. People just don’t bother to check, or don’t feel they need to.

What’s going on? How can crime fall to half what it was 20 years ago and yet almost two-thirds of people think it’s actually increasing?

Well, one hint is that news coverage of crime doesn’t follow the same pattern as actual crime.

News coverage in general is a terrible source of information, not simply because news organizations can be biased, make glaring mistakes, and sometimes outright lie—but actually for a much more fundamental reason: Even a perfect news channel, qua news channel, would report what is surprising—and what is surprising is, by definition, improbable. (Indeed, there is a formal mathematical concept in probability theory called surprisal that is simply the logarithm of 1 over the probability.) Even assuming that news coverage reports only the truth, the probability of seeing something on the news isn’t proportional to the probability of the event occurring—it’s more likely proportional to the entropy, which is probability times surprisal.
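To make that concrete, here is a toy calculation (my own, not drawn from any actual media data): an event with probability p has surprisal log(1/p), its entropy weight is p times that, and so rare events get covered far out of proportion to how often they occur.

```python
import math

def surprisal(p):
    """Information content of an event with probability p: log(1/p)."""
    return math.log(1 / p)

def entropy_weight(p):
    """The event's contribution to entropy: p * log(1/p)."""
    return p * surprisal(p)

# Hypothetical probabilities for a common and a rare event:
common, rare = 0.1, 0.0001

# The common event occurs 1000x as often, but entropy-proportional
# coverage would show it only 250x as often, overexposing the rare one:
print(common / rare)                                  # 1000.0
print(entropy_weight(common) / entropy_weight(rare))  # 250.0
```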

Now, if humans were optimal information processing engines, that would be just fine; reporting events proportional to their entropy is actually a very efficient mechanism for delivering information (optimal, under certain types of constraints), provided that you can then process the information back into probabilities afterward.

But of course, humans aren’t optimal information processing engines. We don’t recompute the probabilities from the given entropy; instead we use the availability heuristic, by which we simply use the number of times we can think of something happening as our estimate of the probability of that event occurring. If you see more murders on TV news than you used to, you assume that murders must be more common than they used to be. (And when I put it like that, it really doesn’t sound so unreasonable, does it? Intuitively the availability heuristic seems to make sense—which is part of why it’s so insidious.)
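Continuing the same toy model (still with made-up numbers): if coverage is proportional to entropy weight, and a viewer estimates frequencies by share of coverage, as the availability heuristic suggests, then the rare dangers get systematically inflated.

```python
import math

# Toy event probabilities, illustrative rather than real crime statistics:
true_p = {"car crash": 0.1, "murder": 0.001, "shark attack": 0.00001}

# Coverage proportional to entropy weight p * log(1/p):
coverage = {k: p * math.log(1 / p) for k, p in true_p.items()}
total_cov = sum(coverage.values())
total_p = sum(true_p.values())

for event, p in true_p.items():
    perceived = coverage[event] / total_cov  # availability-based estimate
    actual = p / total_p                     # true share of these events
    print(f"{event}: actual share {actual:.5f}, perceived {perceived:.5f}")
# Murder comes out ~3x too frequent, shark attacks ~5x; car crashes shrink.
```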

Another likely reason for the discrepancy between perception and reality is nostalgia. People almost always have a more positive view of the past than it deserves, particularly when referring to their own childhoods. Indeed, I’m quite certain that a major reason why people think the world was much better when they were kids was that their parents didn’t tell them what was going on. And of course I’m fine with that; you don’t need to burden 4-year-olds with stories of war and poverty and terrorism. I just wish people would realize that they were being protected from the harsh reality of the world, instead of thinking that their little bubble of childhood innocence was a genuinely much safer world than the one we live in today.

Then take that nostalgia and combine it with the availability heuristic and the wall-to-wall TV news coverage of anything bad that happens—and almost nothing good that happens, certainly not if it’s actually important. I’ve seen bizarre fluff pieces about puppies, but never anything about how world hunger is plummeting or air quality is dramatically improved or cars are much safer. That’s the one thing I will say about financial news; at least they report it when unemployment is down and the stock market is up. (Though most Americans, especially most Republicans, still seem really confused on those points as well….) They will attribute it to anything from sunspots to the will of Neptune, but at least they do report good news when it happens. It’s no wonder that people are always convinced that the world is getting more dangerous even as it gets safer and safer.

The real question is what we do about it—how do we get people to understand even these basic facts about the world? I still believe in democracy, but when I see just how painfully ignorant so many people are of such basic facts, I understand why some people don’t. The point of democracy is to represent everyone’s interests—but we also end up representing everyone’s beliefs, and sometimes people’s beliefs just don’t line up with reality. The only way forward I can see is to find a way to make people’s beliefs better align with reality… but even that isn’t so much a strategy as an objective. What do I say to someone who thinks that crime is increasing, beyond showing them the FBI data that clearly indicates otherwise? When someone is willing to override all evidence with what they feel in their heart to be true, what are the rest of us supposed to do?

Belief in belief, and why it’s important

Oct 30, JDN 2457692

In my previous post on ridiculous beliefs, I passed briefly over this sentence:

“People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.”

Today I’d like to talk about the fact that “to profess” is a very important phrase in that sentence. Part of understanding ridiculous beliefs, I think, is understanding that many, if not most, of them are not actually proper beliefs. They are what Daniel Dennett calls “belief in belief”, and what has elsewhere been referred to as “anomalous belief”. They are not beliefs in the ordinary sense that we would line up with the other beliefs in our worldview and use them to anticipate experiences and motivate actions. They are something else, lone islands of belief that are not woven into our worldview. But all the same they are invested with importance, often moral or even ultimate importance; this one belief may not make any sense with everything else, but you must believe it, because it is a vital part of your identity and your tribe. To abandon it would not simply be mistaken; it would be heresy, it would be treason.

How do I know this? Mainly because nobody has tried to stone me to death lately.

The Bible is quite explicit about at least a dozen reasons I am supposed to be executed forthwith; you likely share many of them: Heresy, apostasy, blasphemy, nonbelief, sodomy, fornication, covetousness, taking God’s name in vain, eating shellfish (though I don’t anymore!), wearing mixed fiber, shaving, working on the Sabbath, making images of things, and my personal favorite, not stoning other people for committing such crimes (as we call it in game theory, a second-order punishment).

Yet I have met many people who profess to be “Bible-believing Christians”, and even may oppose some of these activities (chiefly sodomy, blasphemy, and nonbelief) on the grounds that they are against what the Bible says—and yet not one has tried to arrange my execution, nor have I ever seriously feared that they might.

Is this because we live in a secular society? Well, yes—but not simply that. It isn’t just that these people are afraid of being punished by our secular government should they murder me for my sins; they believe that it is morally wrong to murder me, and would rarely even consider the option. Someone could point them to the passage in Leviticus (20:16, as it turns out) that explicitly says I should be executed, and it would not change their behavior toward me.

On first glance this is quite baffling. If I thought you were about to drink a glass of water that contained cyanide, I would stop you, by force if necessary. So if they truly believe that I am going to be sent to Hell—infinitely worse than cyanide—then shouldn’t they be willing to use any means necessary to stop that from happening? And wouldn’t this be all the more true if they believe that they themselves will go to Hell should they fail to punish me?

If these “Bible-believing Christians” truly believed in Hell the way that I believe in cyanide—that is, as proper beliefs which anticipate experience and motivate action—then they would in fact try to force my conversion or execute me, and in doing so would believe that they are doing right. This used to be quite common in many Christian societies (most infamously in the Salem Witch Trials), and still is disturbingly common in many Muslim societies—ISIS doesn’t just throw gay men off rooftops and stone them as a weird idiosyncrasy; it is written in the Hadith that they’re supposed to. Nor is this sort of thing confined to terrorist groups; the “legitimate” government of Saudi Arabia routinely beheads atheists or imprisons homosexuals (though it has a very capricious enforcement system, likely so that the monarchy can trump up charges to justify executing whomever they choose). Beheading people because the book said so is what your behavior would look like if you honestly believed, as a proper belief, that the Qur’an or the Bible or whatever holy book actually contained the ultimate truth of the universe. The great irony of calling religion people’s “deeply-held belief” is that it is in almost all circumstances the exact opposite—it is their most weakly held belief, the one that they could most easily sacrifice without changing their behavior.

Yet perhaps we can’t even say that to people, because they will get equally defensive and insist that they really do hold this very important anomalous belief, and how dare you accuse them otherwise. Because one of the beliefs they really do hold, as a proper belief, and a rather deeply-held one, is that you must always profess to believe your religion and defend your belief in it, and if anyone catches you not believing it that’s a horrible, horrible thing. So even though it’s obvious to everyone—probably even to you—that your behavior looks nothing like what it would if you actually believed in this book, you must say that you do, scream that you do if necessary, for no one must ever, ever find out that it is not a proper belief.

Another common trick is to try to convince people that their beliefs do affect their behavior, even when they plainly don’t. We typically use the words “religious” and “moral” almost interchangeably, when they are at best orthogonal and arguably even opposed. Part of why so many people seem to hold so rigidly to their belief-in-belief is that they think that morality cannot be justified without recourse to religion; so even though on some level they know religion doesn’t make sense, they are afraid to admit it, because they think that means admitting that morality doesn’t make sense. If you are even tempted by this inference, I present to you the entire history of ethical philosophy. Divine Command theory has been a minority view among philosophers for centuries.

Indeed, it is precisely because your moral beliefs are not based on your religion that you feel a need to resort to that defense of your religion. If you simply believed religion as a proper belief, you would base your moral beliefs on your religion, sure enough; but you’d also defend your religion in a fundamentally different way, not as something you’re supposed to believe, not as a belief that makes you a good person, but as something that is just actually true. (And indeed, many fanatics actually do defend their beliefs in those terms.) No one ever uses the argument that if we stop believing in chairs we’ll all become murderers, because chairs are actually there. We don’t believe in belief in chairs; we believe in chairs.

And really, if such a belief were completely isolated, it would not be a problem; it would just be this weird thing you say you believe that everyone really knows you don’t and it doesn’t affect how you behave, but okay, whatever. The problem is that it’s never quite isolated from your proper beliefs; it does affect some things—and in particular it can offer a kind of “support” for other real, proper beliefs that you do have, support which is now immune to rational criticism.

For example, as I already mentioned: Most of these “Bible-believing Christians” do, in fact, morally oppose homosexuality, and say that their reason for doing so is based on the Bible. This cannot literally be true, because if they actually believed the Bible they wouldn’t want gay marriage taken off the books, they’d want a mass pogrom of 4-10% of the population (depending on how you count), on a par with the Holocaust. Fortunately their proper belief that genocide is wrong is overriding. But they have no such overriding belief supporting the moral permissibility of homosexuality or the personal liberty of marriage rights, so the very tenuous link to their belief-in-belief in the Bible is sufficient to tilt their actual behavior.

Similarly, if the people I meet who say they think maybe 9/11 was an inside job by our government really believed that, they would most likely be trying to organize a violent revolution; any government willing to murder 3,000 of its own citizens in a false flag operation is one that must be overturned and can probably only be overturned by force. At the very least, they would flee the country. If they lived in a country where the government is actually like that, like Zimbabwe or North Korea, they wouldn’t fear being dismissed as conspiracy theorists, they’d fear being captured and executed. The very fact that you live within the United States and exercise your free speech rights here says pretty strongly that you don’t actually believe our government is that evil. But they wouldn’t be so outspoken about their conspiracy theories if they didn’t at least believe in believing them.

I also have to wonder how many of our politicians who lean on the Constitution as their source of authority have actually read the Constitution, as it says a number of rather explicit things against, oh, say, the establishment of religion (First Amendment) or searches and arrests without warrants (Fourth Amendment) that they don’t much seem to care about. Some are better about this than others; Rand Paul, for instance, actually takes the Constitution pretty seriously (and is frequently found arguing against things like warrantless searches as a result!), but Ted Cruz, for example, says he has spent decades “defending the Constitution”, despite saying things like “America is a Christian nation” that directly contradict the First Amendment. Cruz doesn’t really seem to believe in the Constitution; but maybe he believes in believing the Constitution. (It’s also quite possible he’s just lying to manipulate voters.)

 

How do we reach people with ridiculous beliefs?

Oct 16 JDN 2457678

One of the most unfortunate facts in the world—indeed, perhaps the most unfortunate fact, from which most other unfortunate facts follow—is that it is quite possible for a human brain to sincerely and deeply hold a belief that is, by any objective measure, totally and utterly ridiculous.

And to be clear, I don’t just mean false; I mean ridiculous. People having false beliefs is an inherent part of being finite beings in a vast and incomprehensible universe. Monetarists are wrong, but they are not ludicrous. String theorists are wrong, but they are not absurd. Multiregionalism is wrong, but it is not nonsensical. Indeed, I, like anyone else, am probably wrong about a great many things, though of course if I knew which ones I’d change my mind. (Indeed, I admit a small but nontrivial probability of being wrong about the three things I just listed.)

I mean ridiculous beliefs. I mean beliefs for which any rational, objective assessment would put the probability of being true at something vanishingly small, 1 in 1 million at best. I’m talking about totally nonsensical beliefs, beliefs that go against overwhelming evidence; some of them are outright incoherent. Yet millions of people go on believing them.

For example, over 40% of Americans believe that human beings were created by God in their present form less than 10,000 years ago, and typically offer no evidence for this besides “The Bible says so.” (Strictly speaking, even that isn’t true—standard interpretations of the Bible say so. The Bible itself contains no clearly stated date for creation.) This despite the absolutely overwhelming body of evidence supporting the theory of evolution by Darwinian natural selection.

Over a third of Americans don’t believe in global warming, the reality of which is not only a complete consensus among all credible climate scientists based on overwhelming evidence, but one of the central threats facing human civilization over the 21st century. On a global scale this is rather like standing on a train track and saying you don’t believe in trains. (Or like the time my mother told me about, when an alert went out to her office that there was a sniper in the area, indiscriminately shooting at civilians, and one of her co-workers refused to join the security protocol and declared smugly, “I don’t believe in snipers.” Fortunately, he was unharmed in the incident. This time.)

1/4 of Americans believe in astrology, and 1/4 of Americans believe that aliens have visited the Earth. (Not sure if it’s the same 1/4. Probably considerable but not total overlap.) The existence of extraterrestrial civilizations somewhere in this mind-bogglingly (perhaps infinitely) vast universe has probability 1. But visiting us is quite another matter, and there is absolutely no credible evidence of it. As for astrology? I shouldn’t have to explain why the position of Jupiter, much less Sirius, on your birthday is not a major influence on your behavior or life outcomes. Your obstetrician exerted more gravitational force on you than Mars did at the moment you were born.
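If you don’t believe me, the arithmetic is easy to check with Newton’s law of gravitation, F = G·m1·m2/r². Here is a quick back-of-envelope sketch in Python; the specific numbers (a 3.5 kg newborn, a 70 kg obstetrician at arm’s reach, Mars at its closest approach to Earth) are my own round assumptions, chosen to be if anything generous to Mars:

```python
# Back-of-envelope comparison using Newton's law: F = G * m1 * m2 / r^2.
# All inputs are rough, assumed round numbers for illustration only.

G = 6.674e-11        # gravitational constant, N*m^2/kg^2

baby_mass = 3.5      # kg, typical newborn (assumed)
doctor_mass = 70.0   # kg, assumed obstetrician
doctor_dist = 0.3    # m, roughly arm's reach during delivery (assumed)

mars_mass = 6.4e23   # kg
mars_dist = 5.6e10   # m, Mars at its closest approach to Earth

def gravity(m1, m2, r):
    """Newtonian gravitational force between two point masses, in newtons."""
    return G * m1 * m2 / r**2

f_doctor = gravity(baby_mass, doctor_mass, doctor_dist)
f_mars = gravity(baby_mass, mars_mass, mars_dist)

print(f"Obstetrician: {f_doctor:.1e} N")          # ~1.8e-07 N
print(f"Mars:         {f_mars:.1e} N")            # ~4.8e-08 N
print(f"Ratio:        {f_doctor / f_mars:.1f}x")  # ~3.8x
```

Even granting Mars its closest approach, the person in the delivery room wins by a factor of about four; and tidal effects, which fall off with the cube of distance rather than the square, favor the nearby obstetrician even more lopsidedly.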

The majority of Americans believe in telepathy or extrasensory perception. I confess that I actually did when I was very young, though I think I disabused myself of this around the time I stopped believing in Santa Claus.

I love the term “extrasensory perception” because it is such an oxymoron; if you’re perceiving, it is via senses. “Sixth sense” is better, except that we actually already have at least nine senses: The ones you probably know, vision (sight), audition (hearing), olfaction (smell), gustation (taste), and tactition (touch)—and the ones you may not know, thermoception (heat), proprioception (body position), vestibulation (balance), and nociception (pain). These can probably be subdivided further—vision and spatial reasoning are dissociated in blind people, heat and cold are separate nerve pathways, pain and itching are distinct systems, and there are a variety of different sensors used for proprioception. So we really could have as many as twenty senses, depending on how you’re counting.

What about telepathy? Well, that is not actually impossible in principle; it’s just that there’s no evidence that humans actually do it. Smartphones do it almost literally constantly, transmitting data via high-frequency radio waves back and forth to one another. We could have evolved some sort of radio transceiver organ (perhaps an offshoot of an electric defense organ such as that of electric eels), but as it turns out we didn’t. Actually in some sense—which some might say is trivial, but I think it’s actually quite deep—we do have telepathy; it’s just that we transmit our thoughts not via radio waves or anything more exotic, but via sound waves (speech) and marks on paper (writing) and electronic images (what you’re reading right now). Human beings really do transmit our thoughts to one another, and this truly is a marvelous thing we should not simply take for granted (it is one of our most impressive feats of Mundane Magic); but somehow I don’t think that’s what people mean when they say they believe in psychic telepathy.

And lest you think this is a uniquely American phenomenon: The particular beliefs vary from place to place, but bizarre beliefs abound worldwide, from conspiracy theories in the UK to 9/11 “truthers” in Canada to HIV denialism in South Africa (fortunately on the wane). The American examples are more familiar to me and most of my readers are Americans, but wherever you are reading from, there are probably ridiculous beliefs common there.

I could go on, listing more objectively ridiculous beliefs that are surprisingly common; but the more I do that, the more I risk alienating you, in case you should happen to believe one of them. When you add up the dizzying array of ridiculous beliefs one could hold, odds are that most people you’d ever meet will have at least one of them. (“Not me!” you’re thinking; and perhaps you’re right. Then again, I’m pretty sure that the 4% or so of people who believe in the Reptilians think the same thing.)

Which brings me to my real focus: How do we reach these people?

One possible approach would be to just ignore them, leave them alone, or go about our business with them as though they did not have ridiculous beliefs. This is in fact the right thing to do under most circumstances, I think; when a stranger on the bus starts blathering about how the lizard people are going to soon reveal themselves and establish the new world order, I don’t think it’s really your responsibility to persuade that person to realign their beliefs with reality. Nodding along quietly would be acceptable, and it would be above and beyond the call of duty to simply say, “Um, no… I’m fairly sure that isn’t true.”

But this cannot always be the answer, if for no other reason than the fact that we live in a democracy, and people with ridiculous beliefs frequently vote according to them. Then people with ridiculous beliefs can take office, and make laws that affect our lives. Actually this would be true even if we had some other system of government; there’s nothing in particular to stop monarchs, hereditary senates, or dictators from believing ridiculous things. If anything, the opposite; dictators are known for their eccentricity precisely because there are no checks on their behavior.

At some point, we’re going to need to confront the fact that over half of the Republicans in the US Congress do not believe in climate change, and are making policy accordingly, rolling drunk on petroleum and treating the hangover with the hair of the dog.

We’re going to have to confront the fact that school boards in Southern states, particularly Texas, continually vote to purge biology textbooks of their dreaded Darwinian evolution.

So we really do need to find a way to talk to people who have ridiculous beliefs, and engage with them, understand why they think the way they do, and then—hopefully at least—tilt them a little bit back toward rational reality. You will not be able to change their mind completely right away, but if each of us can at least chip away at their edifice of absurdity, then all together perhaps we can eventually bring them to enlightenment.

Of course, a good start is probably not to say you think that their beliefs are ridiculous, because people get very defensive when you do that, even—perhaps especially—when it’s true. People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.

This is the link that we must somehow break. We must show people that they are not defined by their beliefs, that it is okay to change your mind. We must be patient and compassionate—sometimes heroically so, as people spout offensive nonsense in our faces, sometimes offensive nonsense that directly attacks us personally. (“Atheists deserve Hell”, taken literally, would constitute something like a death threat, except infinitely worse. While to them it is very likely just a slogan to recite, to the atheist listening it says that you believe they are so evil, so horrible, that they deserve eternal torture for believing what they do. And you get mad when we say your beliefs are ridiculous?)

We must also remind people that even very smart people can believe very dumb things—indeed, I’d venture a guess that most dumb things are in fact believed by smart people. Even the most intelligent human beings can only glimpse a tiny fraction of the universe, and all human brains are subject to the same fundamental limitations, the same core heuristics and biases. Make it clear that you’re saying you think their beliefs are false, not that they are stupid or crazy. And make it clear to yourself that this is indeed what you believe, because it ought to be. It can be tempting to think that only an idiot would believe something so ridiculous—and you are safe, for you are no idiot!—but the truth is far more humbling: Human brains are subject to many flaws, and guarding the fortress of the mind against error and deceit is a 24-7 occupation. Indeed, I hope that you will ask yourself: “What beliefs do I hold that other people might find ridiculous? Are they, in fact, ridiculous?”

Even then, it won’t be easy. Most people are strongly resistant to any change in belief, however small, and it is in the nature of ridiculous beliefs that they require radical changes in order to restore correspondence with reality. So we must try in smaller steps.

Maybe don’t try to convince them that 9/11 was actually the work of Osama bin Laden; start by pointing out that yes, steel does bend much more easily at the temperature at which jet fuel burns. Maybe don’t try to persuade them that astrology is meaningless; start by pointing out the ways that their horoscope doesn’t actually seem to fit them, or could be made to fit anybody. Maybe don’t try to get across the real urgency of climate change just yet, and instead point out that the “study” they read showing it was a hoax was clearly funded by oil companies, who would perhaps have a vested interest here. And as for ESP? I think it’s a good start just to point out that we have more than five senses already, and there are many wonders of the human brain that actual scientists know about well worth exploring—so who needs to speculate about things that have no scientific evidence?