Bayesian updating with irrational belief change

Jul 27 JDN 2460884

For the last few weeks I’ve been working at a golf course. (It’s a bit of an odd situation: I’m not actually employed by the golf course; I’m contracted by a nonprofit to be a “job coach” for a group of youths who are part of a work program that involves them working at the golf course.)

I hate golf. I have always hated golf. I find it boring and pointless—which, to be fair, is my reaction to most sports—and also an enormous waste of land and water. A golf course is also a great place for oligarchs to arrange collusion.

But I noticed something about being on the golf course every day, seeing people playing and working there: I feel like I hate it a bit less now.

This is almost certainly a mere-exposure effect: Simply being exposed to something many times makes it feel familiar, and that tends to make you like it more, or at least dislike it less. (There are some exceptions: repeated exposure to trauma can actually make you more sensitive to it, hating it even more.)

I kinda thought this would happen. I didn’t really want it to happen, but I thought it would.

This is very interesting from the perspective of Bayesian reasoning, because it is a theorem of Bayesian logic (though I cannot seem to find anyone naming the theorem; it’s like a folk theorem, I guess?) that the following is true:

The prior expectation of the posterior is the expectation of the prior.

The prior is what you believe before observing the evidence; the posterior is what you believe afterward. This theorem describes a relationship that holds between them.
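In symbols, the theorem is just the law of total probability applied to the posterior: averaging the posterior P(H|e) over every possible evidence outcome e, weighted by how likely I currently think each outcome is, recovers the prior.

```latex
\mathbb{E}\big[P(H \mid E)\big] \;=\; \sum_{e} P(e)\,P(H \mid e) \;=\; \sum_{e} P(H \cap e) \;=\; P(H)
```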

This theorem means that, if I am being optimally rational, I should take into account all expected future evidence, not just evidence I have already seen. I should not expect to encounter evidence that will change my beliefs—if I did expect to see such evidence, I should change my beliefs right now!

This might be easier to grasp with an example.

Suppose I am trying to predict whether it will rain at 5:00 pm tomorrow, and I currently estimate that the probability of rain is 30%. This is my prior probability.

What will actually happen tomorrow is that it will rain or it won’t; so my posterior probability will either be 100% (if it rains) or 0% (if it doesn’t). But I had better assign a 30% chance to the event that will make me 100% certain it rains (namely, I see rain), and a 70% chance to the event that will make me 100% certain it doesn’t rain (namely, I see no rain); if I were to assign any other probabilities, then I must not really think the probability of rain at 5:00 pm tomorrow is 30%.

(The keen Bayesian will notice that the expected variance of my posterior need not be the variance of my prior: My initial variance is relatively high (it’s actually 0.3*0.7 = 0.21, because this is a Bernoulli distribution), because I don’t know whether it will rain or not; but my posterior variance will be 0, because I’ll know the answer once it rains or doesn’t.)
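Both claims can be checked directly in a few lines of code; here is a minimal Python sketch of the rain example, using only the numbers above:

```python
# Rain example: prior P(rain at 5 pm tomorrow) = 0.3.
# By tomorrow evening my posterior will be either 1.0 (I see rain) or 0.0 (I don't).
prior = 0.3
outcomes = [(prior, 1.0),        # P = 0.3 that I see rain    -> posterior 1.0
            (1 - prior, 0.0)]    # P = 0.7 that I see no rain -> posterior 0.0

# The expectation of the posterior equals the prior.
expected_posterior = sum(p * post for p, post in outcomes)
assert abs(expected_posterior - prior) < 1e-12

# Variance is NOT conserved: I start out uncertain and end up certain.
prior_variance = prior * (1 - prior)  # Bernoulli variance: 0.3 * 0.7 = 0.21
expected_posterior_variance = sum(p * post * (1 - post) for p, post in outcomes)
assert expected_posterior_variance == 0.0
```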

It’s a bit trickier to analyze, but this also works even if the evidence won’t make me certain. Suppose I am trying to determine the probability that some hypothesis is true. If I expect to see any evidence that might change my beliefs at all, then the evidence I expect to raise my credence in the hypothesis must, on average, be exactly balanced by the evidence I expect to lower it. If that is not what I expect, I should really change how much I believe the hypothesis right now!
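The same check works with imperfect evidence. In this Python sketch the likelihoods (0.8 and 0.3) are arbitrary numbers chosen purely for illustration; any values you plug in give the same cancellation:

```python
# Imperfect evidence: E is a noisy indicator of hypothesis H.
prior = 0.3            # P(H): my current credence
p_e_given_h = 0.8      # P(E | H)      -- illustrative likelihood, chosen arbitrarily
p_e_given_not_h = 0.3  # P(E | not-H)  -- likewise arbitrary

# Probability of seeing the evidence at all (law of total probability).
p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h

# Bayes' rule for each possible observation.
post_if_e = prior * p_e_given_h / p_e                  # credence goes up (~0.53)
post_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)  # credence goes down (~0.11)

# The probability-weighted average of the two posteriors is the prior again.
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
assert abs(expected_posterior - prior) < 1e-12
```

The expected upward shift and the expected downward shift, each weighted by how likely I am to see it, cancel exactly; if they didn’t, the prior of 0.3 was wrong to begin with.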

So what does this mean for the golf example?

Was I wrong to hate golf quite so much before, because I knew that spending time on a golf course might make me hate it less?

I don’t think so.

See, the thing is: I know I’m not perfectly rational.

If I were indeed perfectly rational, then anything I expect to change my beliefs is a rational Bayesian update, and I should indeed factor it into my prior beliefs.

But if I know for a fact that I am not perfectly rational, that there are things which will change my beliefs in ways that make them deviate from rational Bayesian updating, then in fact I should not take those expected belief changes into account in my prior beliefs—since I expect to be wrong later, updating on that would just make me wrong now as well. I should only update on the expected belief changes that I believe will be rational.

This is something that a boundedly-rational person should do that neither a perfectly-rational nor perfectly-irrational person would ever do!

But maybe you don’t find the golf example convincing. Maybe you think I shouldn’t hate golf so much, and it’s not irrational for me to change my beliefs in that direction.


Very well. Let me give you a thought experiment which provides a very clear example of a time when you definitely would think your belief change was irrational.


To be clear, I’m not suggesting the two situations are in any way comparable; the golf thing is pretty minor, and for the thought experiment I’m intentionally choosing something quite extreme.

Here’s the thought experiment.

A mad scientist offers you a deal: Take this pill and you will receive $50 million. Naturally, you ask what the catch is. The catch, he explains, is that taking the pill will make you staunchly believe that the Holocaust didn’t happen. Take this pill, and you’ll be rich, but you’ll become a Holocaust denier. (I have no idea if making such a pill is even possible, but it’s a thought experiment, so bear with me. It’s certainly far less implausible than Swampman.)

I will assume that you are not, and do not want to become, a Holocaust denier. (If not, I really don’t know what else to say to you right now. It happened.) So if you take this pill, your beliefs will change in a clearly irrational way.

But I still think it’s probably justifiable to take the pill. This is absolutely life-changing money, for one thing, and being a random person who is a Holocaust denier isn’t that bad in the scheme of things. (Maybe it would be worse if you were in a position to have some kind of major impact on policy.) In fact, before taking the pill, you could write out a contract with a trusted friend that will force you to donate some of the $50 million to high-impact charities—and perhaps some of it to organizations that specifically fight Holocaust denial—thus ensuring that the net benefit to humanity is positive. Once you take the pill, you may be mad about the contract, but you’ll still have to follow it, and the net benefit to humanity will still be positive as reckoned by your prior, more correct, self.

It’s certainly not irrational to take the pill. There are perfectly-reasonable preferences you could have (indeed, likely do have) that would say that getting $50 million is more important than having incorrect beliefs about a major historical event.

And if it’s rational to take the pill, and you intend to take the pill, then of course it’s rational to believe that in the future, you will have taken the pill and you will become a Holocaust denier.

But it would be absolutely irrational for you to become a Holocaust denier right now because of that. The pill isn’t going to provide evidence that the Holocaust didn’t happen (for no such evidence exists); it’s just going to alter your brain chemistry in such a way as to make you believe that the Holocaust didn’t happen.

So here we have a clear example where you expect to be more wrong in the future.

Of course, if this really only happens in weird thought experiments about mad scientists, then it doesn’t really matter very much. But I contend it happens in reality all the time:

  • You know that by hanging around people with an extremist ideology, you’re likely to adopt some of that ideology, even if you really didn’t want to.
  • You know that if you experience a traumatic event, it is likely to make you anxious and fearful in the future, even when you have little reason to be.
  • You know that if you have a mental illness, you’re likely to form harmful, irrational beliefs about yourself and others whenever you have an episode of that mental illness.

Now, all of these belief changes are things you would likely try to guard against: If you are a researcher studying extremists, you might make a point of taking frequent vacations to talk with regular people and help yourself re-calibrate your beliefs back to normal. Nobody wants to experience trauma, and if you do, you’ll likely seek out therapy or other support to help heal yourself from that trauma. And one of the most important things they teach you in cognitive-behavioral therapy is how to challenge and modify harmful, irrational beliefs when they are triggered by your mental illness.

But these guarding actions only make sense precisely because the anticipated belief change is irrational. If you anticipate a rational change in your beliefs, you shouldn’t try to guard against it; you should factor it into what you already believe.

This also gives me a little more sympathy for Evangelical Christians who try to keep their children from being exposed to secular viewpoints. I think we both agree that having more contact with atheists will make their children more likely to become atheists—but we view this expected outcome differently.

From my perspective, this is a rational change, and it’s a good thing, and I wish they’d factor it into their current beliefs already. (Like hey, maybe if talking to a bunch of smart people and reading a bunch of books on science and philosophy makes you think there’s no God… that might be because… there’s no God?)

But I think, from their perspective, this is an irrational change, it’s a bad thing, the children have been “tempted by Satan” or something, and thus it is their duty to protect their children from this harmful change.

Of course, I am not a subjectivist. I believe there’s a right answer here, and in this case I’m pretty sure it’s mine. (Wouldn’t I always say that? No, not necessarily; there are lots of matters for which I believe that there are experts who know better than I do—that’s what experts are for, really—and thus if I find myself disagreeing with those experts, I try to educate myself more and update my beliefs toward theirs, rather than just assuming they’re wrong. I will admit, however, that a lot of people don’t seem to do this!)

But this does change how I might tend to approach the situation of exposing their children to secular viewpoints. I now understand better why they would see that exposure as a harmful thing, and thus be resistant to actions that otherwise seem obviously beneficial, like teaching kids science and encouraging them to read books. In order to get them to stop “protecting” their kids from the free exchange of ideas, I might first need to persuade them that introducing some doubt into their children’s minds about God isn’t such a terrible thing. That sounds really hard, but it at least clearly explains why they are willing to fight so hard against things that, from my perspective, seem good. (I could also try to convince them that exposure to secular viewpoints won’t make their kids doubt God, but the thing is… that isn’t true. I’d be lying.)

That is, Evangelical Christians are not simply incomprehensibly evil authoritarians who hate truth and knowledge; they quite reasonably want to protect their children from things that will harm them, and they firmly believe that being taught about evolution and the Big Bang will make their children more likely to suffer great harm—indeed, the greatest harm imaginable, the horror of an eternity in Hell. Convincing them that this is not the case—indeed, ideally, that there is no such place as Hell—sounds like a very tall order; but I can at least more keenly grasp the equilibrium they’ve found themselves in, where they believe that anything that challenges their current beliefs poses a literally existential threat. (Honestly, as a memetic adaptation, this is brilliant. Like a turtle, the meme has grown itself a nigh-impenetrable shell. No wonder it has managed to spread throughout the world.)

Against Moral Anti-Realism

Sep 22 JDN 2460576

Moral anti-realism is more philosophically sophisticated than relativism, but it is equally mistaken. It is what it sounds like: the negation of moral realism. Moral anti-realists hold that moral truths are meaningless because they rest upon presumptions about the world that fail to hold. To an anti-realist, “genocide is wrong” is meaningless because there is no such thing as “wrong”, much as to any sane person “unicorns have purple feathers” is meaningless because there are no such things as unicorns. They aren’t saying that genocide isn’t wrong—they’re saying that wrong itself is a defective concept.

The vast majority of people profess strong beliefs in moral truth, and indeed strong beliefs about particular moral issues, such as abortion, capital punishment, same-sex marriage, euthanasia, contraception, civil liberties, and war. There is at the very least a troubling tension here between academia and daily life.

This does not by itself prove that moral truths exist. Ordinary people could be simply wrong about these core beliefs. Indeed, I must acknowledge that most ordinary people clearly are deeply ignorant about certain things: only 55% of Americans believe that the theory of evolution is true, and only 66% of Americans agree that the majority of recent changes in Earth’s climate have been caused by human activity, when in reality these are scientific facts, empirically demonstrable through multiple lines of evidence, verified beyond all reasonable doubt, and universally accepted within the scientific community. In scientific terms there is no more doubt about evolution or climate change than there is about the shape of the Earth or the structure of the atom.

If there were similarly compelling reasons to be moral anti-realists, then the fact that most people believe in morality would be little different: Perhaps most ordinary people are simply wrong about these issues. But when asked to provide similarly compelling evidence for why they reject the moral views of ordinary people, moral anti-realists have little to offer.

Many anti-realists will note the diversity of moral opinions in the world, as John Burgess did, which would be rather like noting the diversity of beliefs about the soul as an argument against neuroscience, or noting the diversity of beliefs about the history of life as an argument against evolution. Many people are wrong about many things that science has shown to be the case; this is worrisome for various reasons, but it is not an argument against the validity of scientific knowledge. Similarly, a diversity of opinions about morality is worrisome, but hardly evidence against the validity of morality.

In fact, when they talk about such fundamental disagreements in morality, anti-realists don’t have very compelling examples. It’s easy to find fundamental disagreements about biology—ask an evolutionary biologist and a Creationist whether humans share an ancestor with chimpanzees. It’s easy to find fundamental disagreements about cosmology—ask a physicist and an evangelical Christian how the Earth began. It’s easy to find fundamental disagreements about climate—ask a climatologist and an oil company executive whether human beings are causing global warming. But where are these fundamental disagreements in morality? Sure, on specific matters there is some disagreement. There are differences between cultures regarding what animals it is acceptable to eat, and differences between cultures about what constitutes acceptable clothing, and differences on specific political issues. But in what society is it acceptable to kill people arbitrarily? Where is it all right to steal whatever you want? Where is lying viewed as a good thing? Where is it obligatory to eat only dirt? In what culture has wearing clothes been a crime? Moral realists are by no means committed to saying that everyone agrees about everything—but it does support our case to point out that most people agree on most things most of the time.

There are a few compelling cases of moral disagreement, but they hardly threaten moral realism. How might we show one culture’s norms to be better than another’s? Compare homicide rates. Compare levels of poverty. Compare overall happiness, perhaps using surveys—or even brain scans. This kind of data exists, and it has a fairly clear pattern: people living in social democratic societies (such as Sweden and Norway) are wealthier, safer, longer-lived, and overall happier than people in other societies. Moreover, using the same publicly-available data, democratic societies in general do much better than authoritarian societies, by almost any measure. This is an empirical fact. It doesn’t necessarily mean that such societies are doing everything right—but they are clearly doing something right. And it really isn’t so implausible to say that what they are doing right is enforcing a good system of moral, political, and cultural norms.

Then again, perhaps some people would accept these empirical facts but still insist that their culture is superior; suppose the disagreement really is radical and intractable. This still leaves two possibilities for moral realism.

The most obvious answer would be to say that one group is wrong—that, objectively, one culture is better than another.

But even if that doesn’t work, there is another way: Perhaps both are right, or more precisely, perhaps these two cultural systems are equally good but incompatible. Is this relativism? Some might call it that, but if it is, it’s relativism of a very narrow kind. I am emphatically not saying that all existing cultures are equal, much less that all possible cultures are equal. Instead, I am saying that it is entirely possible to have two independent moral systems which prescribe different behaviors yet nonetheless result in equally-good overall outcomes.

I could make a mathematical argument involving local maxima of nonlinear functions, but instead I think I’ll use an example: Traffic laws.

In the United States, we drive on the right side of the road. In the United Kingdom, they drive on the left side. Which way is correct? Both are—both systems work well, and neither is superior in any discernible way. In fact, there are other systems that would be just as effective, like the system of all one-way roads that prevails in Manhattan.

Yet does this mean that we should abandon reason in our traffic planning, throw up our hands and declare that any traffic system is as good as any other? On the contrary—there are plenty of possible traffic systems that clearly don’t work. Pointing several one-way roads into one another with no exit is clearly not going to result in good traffic flow. Having each driver flip a coin to decide whether to drive on the left or the right would result in endless collisions. Moreover, our own system clearly isn’t perfect. Nearly 40,000 Americans die in car collisions every year; perhaps we can find a better system that will prevent some or all of these deaths. The mere fact that two, or three, or even 400 different systems of laws or morals are equally good does not entail that all systems are equally good. Even if two cultures really are equal, that doesn’t mean we need to abandon moral realism; it merely means that some problems have multiple solutions. “X² = 4; what is X?” has two perfectly correct answers (2 and −2), but it also has an infinite variety of wrong answers.

In fact, moral disagreement may not be evidence of anti-realism at all. In order to disagree with someone, you must think that there is an objective fact to be decided. If moral statements were seen as arbitrary and subjective, then people wouldn’t argue about them very much. Imagine an argument, “Chocolate is the best flavor of ice cream!” “No, vanilla is the best!”. This sort of argument might happen on occasion between seven-year-olds, but it is definitely not the sort of thing we hear from mature adults. This is because as adults we realize that tastes in ice cream really are largely subjective. An anti-realist can, in theory, account for this, if they can explain why moral values are falsely perceived as objective while values in taste are not; but if all values really are arbitrary and subjective, why is it that this is obvious to everyone in the one case and not the other? In fact, there are compelling reasons to think that we couldn’t perceive moral values as arbitrary even if we tried. Some people say “abortion is a right”, others say “abortion is murder”. Even if we were to say that these are purely arbitrary, we would still be left with the task of deciding what laws to make on abortion. Regardless of where the goals come from, some goals are just objectively incompatible.

Another common anti-realist argument rests upon the way that arguments about morality often become emotional and irrational. Charles Stevenson has made this argument; apparently Stevenson has never witnessed an argument about religion, science, or policy, certainly not one outside academia. Many laypeople will insist passionately that the free market is perfect, global warming is a lie, or the Earth is only 6,000 years old. (Often the same people, come to think of it.) People will grow angry and offended if such beliefs are disputed. Yet these are objectively false claims. Unless we want to be anti-realists about GDP, temperature and radiometric dating, emotional and irrational arguments cannot compel us to abandon realism.

Another frequent claim, commonly known as the “argument from queerness”, says that moral facts would need to be something very strange, usually imagined as floating obligations existing somewhere in space; but this is rather like saying that mathematical facts cannot exist because we do not see floating theorems in space and we have never met a perfect triangle. In fact, there is no such thing as a floating speed of light or a floating Schrödinger’s equation either, but no one thinks this is an argument against physics.

A subtler version of this argument, the original “argument from queerness” put forth by J.L. Mackie, says that moral facts are strange because they are intrinsically motivating, something no other kind of facts would be. This is no doubt true; but it seems to me a fairly trivial observation, since part of the definition of “moral fact” is that anything which has this kind of motivational force is a moral (or at least normative) fact. Any well-defined natural kind is subject to the same sort of argument. Spheres are perfectly round three-dimensional objects, something no other object is. Eyes are organs that perceive light, something no other organ does. Moral facts are indeed facts that categorically motivate action, which no other thing does—but so what? All this means is that we have a well-defined notion of what it means to be a moral fact.

Finally, it is often said that moral claims are too often based on religion, and religion is epistemically unfounded, so morality must fall as well. Now, unlike most people, I completely agree that religion is epistemically unfounded. Instead, the premise I take issue with is the idea that moral claims have anything to do with religion. A lot of people seem to think so; but in fact our most important moral values transcend religion and in many cases actually contradict it.

Now, it may well be that the majority of claims people make about morality are to some extent based in their religious beliefs. The majority of governments in history have been tyrannical; does that mean that government is inherently tyrannical, there is no such thing as a just government? The vast majority of human beings have never traveled in outer space; does that mean space travel is impossible? Similarly, I see no reason to say that simply because the majority of moral claims (maybe) are religious, therefore moral claims are inherently religious.

Generally speaking, moral anti-realists make a harsh distinction between morality and other domains of knowledge. They agree that there are such things as trucks and comets and atoms, but do not agree that there are such things as obligations and rights. Indeed, a typical moral anti-realist speaks as if they are being very rigorous and scientific while we moral realists are being foolish, romantic, even superstitious. Moral anti-realism has an attitude of superciliousness not seen in a scientific faction since behaviorism.

But in fact, I think moral anti-realism is the result of a narrow understanding of fundamental physics and cognitive science. It is a failure to drink deep enough of the Pierian springs. This is not surprising, since fundamental physics and cognitive science are so mind-bogglingly difficult that even the geniuses of the world barely grasp them. Quoth Feynman: “I think I can safely say that nobody understands quantum mechanics.” This was of course a bit overstated—Feynman surely knew that there are things we do understand about quantum physics, for he was among those who best understood them. Still, even the brightest minds in the world face total bafflement before problems like dark energy, quantum gravity, the binding problem, and the Hard Problem. It is no moral failing to have a narrow understanding of fundamental physics and cognitive science, for the world’s greatest minds have a scarcely broader understanding.

The failing comes from trying to apply this narrow understanding of fundamental science to moral problems without the humility to admit that the answers are never so simple. “Neuroscience proves we have no free will.” No it doesn’t! It proves we don’t have the kind of free will you thought we did. “We are all made of atoms, therefore there can be no such thing as right and wrong.” And what do you suppose we would have been made of if there were such things as right and wrong? Magical fairy dust?

Here is what I think moral anti-realists get wrong: They hear only part of what scientists say. Neuroscientists explain to them that the mind is a function of matter, and they hear it as if we had said there is only mindless matter. Physicists explain to them that we have much more precise models of atomic phenomena than we do of human behavior, and they hear it as if we had said that scientific models of human behavior are fundamentally impossible. They trust that we know very well what atoms are made of and very poorly what is right and wrong—when quite the opposite is the case.

In fact, the more we learn about physics and cognitive science, the more similar the two fields seem. There was a time when Newtonian mechanics ruled, when everyone thought that physical objects are made of tiny billiard balls bouncing around according to precise laws, while consciousness was some magical, “higher” spiritual substance that defied explanation. But now we understand that quantum physics is all chaos and probability, while cognitive processes can be mathematically modeled and brain waves can be measured in the laboratory. Something as apparently simple as a proton—let alone an extended, complex object, like a table or a comet—is fundamentally a functional entity, a unit of structure rather than substance. To be a proton is to be organized the way protons are and to do what protons do; and so to be human is to be organized the way humans are and to do what humans do. The eternal search for “stuff” of which everything is made has come up largely empty; eventually we may find the ultimate “stuff”, but when we do, it will already have long been apparent that substance is nowhere near as important as structure. Reductionism isn’t so much wrong as beside the point—when we want to understand what makes a table a table or what makes a man a man, it simply doesn’t matter what stuff they are made of. The table could be wood, glass, plastic, or metal; the man could be carbon, nitrogen and water like us, or else silicon and tantalum like Lieutenant Commander Data on Star Trek. Yes, structure must be made of something, and the substance does affect the structures that can be made out of it, but the structure is what really matters, not the substance.

Hence, I think it is deeply misguided to suggest that because human beings are made of molecules, this means that we are just the same thing as our molecules. Love is indeed made of oxytocin (among other things), but only in the sense that a table is made of wood. To know that love is made of oxytocin really doesn’t tell us very much about love; we need also to understand how oxytocin interacts with the bafflingly complex system that is a human brain—and indeed how groups of brains get together in relationships and societies. This is because love, like so much else, is not substance but function—something you do, not something you are made of.

It is not hard, rigorous science that says love is just oxytocin and happiness is just dopamine; it is naive, simplistic science. It is the sort of “science” that comes from overlaying old prejudices (like “matter is solid, thoughts are ethereal”) with a thin veneer of knowledge. To be a realist about protons but not about obligations is to be a realist about some functional relations and not others. It is to hear “mind is matter”, and fail to understand the is—the identity between them—instead acting as if we had said “there is no mind; there is only matter”. You may find it hard to believe that mind can be made of matter, as do we all; yet the universe cares not about our incredulity. The perfect correlation between neurochemical activity and cognitive activity has been verified in far too many experiments to doubt. Somehow, that kilogram of wet, sparking gelatin in your head is actually thinking and feeling—it is actually you.

And once we realize this, I do not think it is a great leap to realize that the vast collection of complex, interacting bodies moving along particular trajectories through space that was the Holocaust was actually wrong, really, objectively wrong.

Against Moral Relativism

Moral relativism is surprisingly common, especially among undergraduate students. There are also some university professors who espouse it, typically but not always from sociology, gender studies or anthropology departments (examples include Marshall Sahlins, Stanley Fish, Susan Harding, Richard Rorty, Michael Fischer, and Alison Renteln). There is a fairly long tradition of moral relativism, from Edvard Westermarck in the 1930s through Melville Herskovits to, more recently, Francis Snare and David Wong in the 1980s. In 1947, the American Anthropological Association released a formal statement declaring that moral relativism was the official position of the anthropology community, though this has since been retracted.

All of this is very, very bad, because moral relativism is an incredibly naive moral philosophy and a dangerous one at that. Vitally important efforts to advance universal human rights are conceptually and sometimes even practically undermined by moral relativists. Indeed, look at that date again: 1947, two years after the end of World War II. The world’s civilized cultures had just finished the bloodiest conflict in history, including some ten million people murdered in cold blood for their religion and ethnicity, and the very survival of the human species hung in the balance with the advent of nuclear weapons—and the American Anthropological Association was insisting that morality is meaningless independent of cultural standards? Were they trying to offer an apologia for genocide?

What is relativism trying to say, anyway? Often the arguments get tied up in knots. Consider a particular example, infanticide. Moral relativists will sometimes argue, for example, that infanticide is wrong in the modern United States but permissible in ancient Inuit society. But is this itself an objectively true normative claim? If it is, then we are moral realists. Indeed, the dire circumstances of ancient Inuit society would surely justify certain life-and-death decisions we wouldn’t otherwise accept. (Compare “If we don’t strangle this baby, we may all starve to death” and “If we don’t strangle this baby, we will have to pay for diapers and baby food”.) Circumstances can change what is moral, and this includes the circumstances of our cultural and ecological surroundings. So there could well be an objective normative fact that infanticide is justified by the circumstances of ancient Inuit life. But if there are objective normative facts, this is moral realism. And if there are no objective normative facts, then all moral claims are basically meaningless. Someone could just as well claim that infanticide is good for modern Americans and bad for ancient Inuits, or that larceny is good for liberal-arts students but bad for engineering students.

If instead all we mean is that particular acts are perceived as wrong in some societies but not in others, this is a factual claim, and on certain issues the evidence bears it out. But without some additional normative claim about whose beliefs are right, it is morally meaningless. Indeed, the idea that whatever society believes is right is a particularly foolish form of moral realism, as it would justify any behavior—torture, genocide, slavery, rape—so long as society happens to practice it, and it would never justify any kind of change in any society, because the status quo is by definition right. Indeed, it’s not even clear that this is logically coherent, because different cultures disagree, and within each culture, individuals disagree. To say that an action is “right for some, wrong for others” doesn’t solve the problem—because either it is objectively normatively right or it isn’t. If it is, then it’s right, and it can’t be wrong; and if it isn’t—if nothing is objectively normatively right—then relativism itself collapses as no more sound than any other belief.

In fact, the most difficult part of defending common-sense moral realism is explaining why it isn’t universally accepted. Why are there so many relativists? Why do so many anthropologists and even some philosophers scoff at the most fundamental beliefs that virtually everyone in the world has?

I should point out that it is indeed relativists, and not realists, who scoff at the most fundamental beliefs of other people. Relativists are fond of taking a stance of indignant superiority in which moral realism is just another form of “ethnocentrism” or “imperialism”. The most common battleground recently is female circumcision, which is considered completely normal or even good in some African societies but is viewed with disgust and horror by most Western people. Other common choices include abortion, clothing (especially the Islamic burqa and hijab), male circumcision, and marriage; given the incredible diversity in human food, clothing, language, religion, behavior, and technology, there are surprisingly few moral issues on which different cultures disagree—but relativists like to milk them for all they’re worth!

But I dare you, anthropologists: Take a poll. Ask people which is more important to them, their belief that, say, female circumcision is immoral, or their belief that moral right and wrong are objective truths? Virtually anyone in any culture anywhere in the world would sooner admit they are wrong about some particular moral issue than they would assent to the claim that there is no such thing as a wrong moral belief. I for one would be more willing to abandon just about any belief I hold before I would abandon the belief that there are objective normative truths. I would sooner agree that the Earth is flat and 6,000 years old, that the sky is green, that I am a brain in a vat, that homosexuality is a crime, that women are inferior to men, or that the Holocaust was a good thing—than I would ever agree that there is no such thing as right or wrong. This is of course because once I agreed that there is no objective normative truth, I would be forced to abandon everything else as well—since without objective normativity there is no epistemic normativity, and hence no justice, no truth, no knowledge, no science. If there is nothing objective to say about how we ought to think and act, then we might as well say the Earth is flat and the sky is green.

So yes, when I encounter other cultures with other values and ideas, I am forced to deal with the fact that they and I disagree about many things, important things that people really should agree upon. We disagree about God, about the afterlife, about the nature of the soul; we disagree about many specific ethical norms, like those regarding racial equality, feminism, sexuality, and vegetarianism. We may disagree about economics, politics, social justice, even family values. But as long as we are all humans, we probably agree about a lot of other important things, like “murder is wrong”, “stealing is bad”, and “the sky is blue”. And one thing we definitely do not disagree about—the one cornerstone upon which all future communication can rest—is that these things matter, that they really do describe actual features of an actual world that are worth knowing. If it turns out that I am wrong about these things, I would want to know! I’d much rather find out I’d been living the wrong way than continue living the same way while pretending it doesn’t matter. I don’t think I am alone in this; indeed, I suspect that the reason people get so angry when I tell them that religion is untrue is precisely because they realize how important it is. One thing religious people never say is “Well, God is imaginary to you, perhaps; but to me God is real. Truth is relative.” I’ve heard atheists defend other people’s beliefs in such terms—but no one ever defends their own beliefs that way. No Evangelical Baptist thinks that Christianity is an arbitrary social construction. No Muslim thinks that Islam is just one equally-valid perspective among many. It is you, relativists, who deny people’s fundamental beliefs.

Yet the fact that relativists accuse realists of being chauvinistic hints at the deeper motivations of moral relativism. In a word: Guilt. Moral relativism is an outgrowth of the baggage of moral guilt and self-loathing that Western societies have built up over the centuries. Don’t get me wrong: Western cultures have done terrible things, many terrible things, all too recently. We needn’t go so far back as the Crusades or the ethnocidal “colonization” of the Americas; we need only look to the carpet-bombing of Dresden in 1945 or the defoliation of Vietnam in the 1960s, or even the torture program as recently as 2009. There is much evil that even the greatest nations of the world have to answer for. For all our high ideals, even America, the nation of “life, liberty, and the pursuit of happiness”, the culture of “liberty and justice for all”, has murdered thousands of innocent people—and by “murder” I mean murder, killing not merely by accident in the collateral damage of necessary war, but indeed in acts of intentional and selfish cruelty. Not all war is evil—but many wars are, and America has fought in some of them. No Communist radical could ever burn so much of the flag as the Pentagon itself has burned in acts of brutality.

Yet it is an absurd overreaction to suggest that there is nothing good about Western culture, nothing valuable about secularism, liberal democracy, market economics, or technological development. It is even more absurd to carry the suggestion further, to the idea that civilization was a mistake and we should all go back to our “natural” state as hunter-gatherers. Yet there are anthropologists working today who actually say such things. And then, as if we had not already traversed so far beyond the shores of rationality that we can no longer see the light of home, then relativists take it one step further and assert that any culture is as good as any other.

Think about what this would mean, if it were true. To say that all cultures are equal is to say that science, education, wealth, technology, medicine—all of these are worthless. It is to say that democracy is no better than tyranny, security is no better than civil war, secularism is no better than theocracy. It is to say that racism is as good as equality, sexism is as good as feminism, feudalism is as good as capitalism.

Many relativists seem worried that moral realism can be used by the powerful and privileged to oppress others—the cishet White males who rule the world (and let’s face it, cishet White males do, pretty much, rule the world!) can use the persuasive force of claiming objective moral truth in order to oppress women and minorities. Yet what is wrong with oppressing women and minorities, if there is no such thing as objective moral truth? Only under moral realism is oppression truly wrong.

How I feel is how things are

Mar 17 JDN 2460388

One of the most difficult things in life to learn is how to treat your own feelings and perceptions as feelings and perceptions—rather than simply as the way the world is.

A great many errors people make can be traced to this.

When we disagree with someone (whether it is as trivial as pineapple on pizza or as important as international law), we feel like they must be speaking in bad faith, they must be lying—because, to us, they are denying the way the world is. If the subject is important enough, we may become convinced that they are evil—for only someone truly evil could deny such important truths. (Ultimately, even holy wars may come from this perception.)

When we are overconfident, we not only can’t see that we might fail; we can scarcely even consider the possibility. Because we don’t simply feel confident; we are sure we will succeed. And thus when we do fail, as we often do, the result is devastating; it feels as if the world itself has changed in order to thwart our wishes.

Conversely, when we succumb to Impostor Syndrome, we feel inadequate, and so become convinced that we are inadequate, and thus that anyone who says they believe we are competent must either be lying or else somehow deceived. And then we fear to tell anyone, because we know that our jobs and our status depend upon other people seeing us as competent—and we are sure that if they knew the truth, they’d no longer see us that way.

When people see their beliefs as reality, they don’t even bother to check whether their beliefs are accurate.

Why would you need to check whether the way things are is the way things are?

This is how common misconceptions persist—the information needed to refute them is widely available, but people simply don’t realize they need to look for it.

For lots of things, misconceptions aren’t very consequential. But some common misconceptions do have large consequences.

For instance, most Americans think that crime is increasing and worse now than it was 30 or 50 years ago. (I tested this on my mother this morning; she thought so too.) It is in fact much, much better—violent crimes are about half as common in the US today as they were in the 1970s. Republicans are more likely to get this wrong than Democrats—but an awful lot of Democrats still get it wrong.

It’s not hard to see how that kind of misconception could drive voters into supporting “tough on crime” candidates who will enact needlessly harsh punishments and waste money on excessive police and incarceration. Indeed, when you look at our world-leading spending on police and incarceration (highest in absolute terms, third-highest as a portion of GDP), it’s pretty clear this is exactly what’s happening.

And it would be so easy—the statistics are a quick search away—to correct that misconception. But people don’t even think to bother; they just know that their perception must be the truth. It never even occurs to them that they could be wrong, and so they never look.

This is not because people are stupid or lazy. (I mean, compared to what?) It’s because perceptions feel like the truth, and it’s shockingly difficult to see them as anything other than the truth.

It takes a very dedicated effort, and no small amount of training, to learn to see your own perceptions as how you see things rather than simply how things are.

I think part of what makes this so difficult is the existential terror that results when you realize that anything you believe—even anything you perceive—could potentially be wrong. Basically the entire field of epistemology is dedicated to understanding what we can and can’t be certain of—and the “can’t” is a much, much bigger set than the “can”.

In a sense, you can be certain of what you feel and perceive—you can be certain that you feel and perceive them. But you can’t be certain whether those feelings and perceptions correspond to your external reality.

When you are sad, you know that you are sad. You can be certain of that. But you don’t know whether you should be sad—whether you have a reason to be sad. Often, perhaps even usually, you do. But sometimes, the sadness comes from within you, or from misperceiving the world.

Once you learn to recognize your perceptions as perceptions, you can question them, doubt them, challenge them. Training your mind to do this is an important part of mindfulness meditation, and also of cognitive behavioral therapy.

But even after years of training, it’s still shockingly hard to do this, especially in the throes of a strong emotion. Simply seeing that what you’re feeling—about yourself, or your situation, or the world—is not an entirely accurate perception can take an incredible mental effort.

We really seem to be wired to see our perceptions as reality.

This makes a certain amount of sense, in evolutionary terms. In an ancestral environment where death was around every corner, we really didn’t have time to stop and think carefully about whether our perceptions were accurate.

Two ancient hominids hear a sound that might be a tiger. One immediately perceives it as a tiger, and runs away. The other stops to think, and then begins carefully examining his surroundings, looking for more conclusive evidence to determine whether it is in fact a tiger.

The latter is going to have more accurate beliefs—right up until the point where it is a tiger and he gets eaten.
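The asymmetry in this story can be made concrete with a toy expected-cost calculation. Every number below is invented purely for illustration; the only thing that matters is how lopsided the costs are.

```python
# Toy model of the tiger dilemma. All probabilities and costs here are
# made-up illustrative values, not empirical estimates.

p_tiger = 0.05        # how often the suspicious sound really is a tiger
cost_flee = 1.0       # energy wasted fleeing from a false alarm
cost_eaten = 1000.0   # cost of being caught by a real tiger
p_caught = 0.5        # chance that pausing to investigate gets you
                      # eaten when a tiger is actually present

# Strategy 1: always assume tiger and run. You pay a small cost every time.
expected_cost_flee = cost_flee

# Strategy 2: stop and gather evidence first. You rarely pay,
# but the price when you do is catastrophic.
expected_cost_investigate = p_tiger * p_caught * cost_eaten

print(expected_cost_flee)         # 1.0
print(expected_cost_investigate)  # 25.0
```

On these numbers, the jumpy hominid does 25 times better on average, despite being “wrong” 95% of the time. A bias toward treating perceptions as reality can be adaptive even when it is usually inaccurate.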

But in our world today, it may be more dangerous to hold onto false beliefs than to pause and question them. We may harm ourselves—and others—more by trusting our perceptions too much than by taking the time to analyze them.

Against Self-Delusion

Mar 10 JDN 2460381

Is there a healthy amount of self-delusion? Would we be better off convincing ourselves that the world is better than it really is, in order to be happy?

A lot of people seem to think so.

I most recently encountered this attitude in Kathryn Schulz’s book Being Wrong (I liked the TED talk much better, in part because it didn’t have this), but there are plenty of other examples.

You’ll even find advocates for this attitude in the scientific literature, particularly when talking about the Lake Wobegon Effect, optimism bias, and depressive realism.

Fortunately, the psychology community seems to be turning away from this, perhaps because of mounting empirical evidence that “depressive realism” isn’t a robust effect. When I searched today, it was easier to find pop psych articles against self-delusion than in favor of it. (I strongly suspect that would not have been true about 10 years ago.)

I have come up with a very simple, powerful argument against self-delusion:

If you’re allowed to delude yourself, why not just believe everything is perfect?

If you can paint your targets after shooting, why not always paint a bullseye?

The notion seems to be that deluding yourself will help you achieve your goals. But if you’re going to delude yourself, why bother achieving goals? You could just pretend to achieve goals. You could just convince yourself that you have achieved goals. Wouldn’t that be so much easier?

The idea seems to be, for instance, to get an aspiring writer to actually finish the novel and submit it to the publisher. But why shouldn’t she simply imagine she has already done so? Why not simply believe she’s already a bestselling author?

If there’s something wrong with deluding yourself into thinking you’re a bestselling author, why isn’t that exact same thing wrong with deluding yourself into thinking you’re a better writer than you are?

Once you have opened this Pandora’s Box of lies, it’s not clear how you can ever close it again. Why shouldn’t you just stop working, stop eating, stop doing anything at all, but convince yourself that your life is wonderful and die in a state of bliss?

Granted, this is not generally what people who favor (so-called) “healthy self-delusion” advocate. But it’s difficult to see any principled reason why they should reject it. Once you give up on tying your beliefs to reality, it’s difficult to see why you shouldn’t just say that anything goes.

Why are some deviations from reality okay, but not others? Is it because they are small? Small changes in belief can still have big consequences: Believe a car is ten meters behind where it really is, and it may just run you over.

The general approach of “healthy self-delusion” seems to be that it’s all right to believe that you are smarter, prettier, healthier, wiser, and more competent than you actually are, because that will make you more confident and therefore more successful.

Well, first of all, it’s worth pointing out that some people obviously go way too far in that direction and become narcissists. But okay, let’s say we find a way to avoid that. (It’s unclear exactly how, since, again, by construction, we aren’t tying ourselves to reality.)

In practice, the people who most often get this sort of advice are people who currently lack self-confidence, who doubt their own abilities—people who suffer from Impostor Syndrome. And for people like that (and I count myself among them), a certain amount of greater self-confidence would surely be a good thing.

The idea seems to be that deluding yourself to increase your confidence will get you to face challenges and take risks you otherwise wouldn’t have, and that this will yield good outcomes.

But there’s a glaring hole in this argument:

If you have to delude yourself in order to take a risk, you shouldn’t take that risk.

Risk-taking is not an unalloyed good. Russian Roulette is certainly risky, but it’s not a good career path.

There are in fact a lot of risks you simply shouldn’t take, because they aren’t worth it.

The right risks to take are the ones for which the expected benefit outweighs the expected cost: The one with the highest expected utility. (That sounds simple, and in principle it is; but in practice, it can be extraordinarily difficult to determine.)

In other words, the right risks to take are the ones that are rational. The ones that a correct view of the world will instruct you to take.
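To make “highest expected utility” concrete, here is a minimal sketch of the novelist’s decision. All probabilities and utility values are made up for illustration.

```python
def expected_utility(outcomes):
    """Probability-weighted sum of utilities over possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical numbers: acceptance is worth +40, rejection stings at -4.
# A genuinely good manuscript might have a 25% chance of acceptance;
# a not-yet-ready one, maybe 2%.
submit_if_good = expected_utility([(0.25, 40.0), (0.75, -4.0)])
submit_if_bad = expected_utility([(0.02, 40.0), (0.98, -4.0)])

# Not submitting: nothing ventured, nothing gained.
dont_submit = expected_utility([(1.0, 0.0)])

print(round(submit_if_good, 2))  # 7.0
print(round(submit_if_bad, 2))   # -3.12
print(round(dont_submit, 2))     # 0.0
```

With an accurate 25% estimate, submitting is the rational risk; with an accurate 2% estimate, it isn’t. Inflating 2% into 25% through self-delusion doesn’t change the payoffs; it just makes you take a bet you should have declined.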

That aspiring novelist, then, should write the book and submit it to publishers—if she’s actually any good at writing. If she’s actually terrible, then never submitting the book is the correct decision; she should spend more time honing her craft before she tries to finish it—or maybe even give up on it and do something else with her life.

What she needs, therefore, is not a confident assessment of her abilities, but an accurate one. She needs to believe that she is competent if and only if she actually is competent.

But I can also see how self-delusion can seem like good advice—and even work for some people.

If you start from an excessively negative view of yourself or the world, then giving yourself a more positive view will likely cause you to accomplish more things. If you’re constantly telling yourself that you are worthless and hopeless, then convincing yourself that you’re better than you thought is absolutely what you need to do. (Because it’s true.)

I can even see how convincing yourself that you are the best is useful—even though, by construction, most people aren’t. When you live in a hyper-competitive society like ours, where we are constantly told that winning is everything, losers are worthless, and second place is as bad as losing, it may help you get by to tell yourself that you really are the best, that you really can win. (Even weirder: “Winning isn’t everything; it’s the only thing.” Uh, that’s just… obviously false? Like, what is this even intended to mean that “Winning is everything” didn’t already say better?)

But that’s clearly not the right answer. You’re solving one problem by adding another. You shouldn’t believe you are the best; you should recognize that you don’t have to be. Second place is not as bad as losing—and neither is fifth, or tenth, or fiftieth place. The 100th-most successful author in the world still makes millions writing. The 1,000th-best musician does regular concert tours. The 10,000th-best accountant has a steady job. Even the 100,000th-best trucker can make a decent living. (Well, at least until the robots replace him.)

Honestly, it’d be great if our whole society would please get this memo. It’s no problem that “only a minority of schools play sport to a high level”—indeed, that’s literally inevitable. It’s also not clear that “60% of students read below grade level” is a problem, when “grade level” seems to be largely defined by averages. (Literacy is great and all, but what’s your objective standard for “what a sixth grader should be able to read”?)

We can’t all be the best. We can’t all even be above-average.

That’s okay. Below-average does not mean inadequate.

That’s the message we need to be sending:

You don’t have to be the best in order to succeed.

You don’t have to be perfect in order to be good enough.

You don’t even have to be above-average.

This doesn’t require believing anything that isn’t true. It doesn’t require overestimating your abilities or your chances. In fact, it asks you to believe something that is more true than “You have to be the best” or “Winning is everything”.

If what you want to do is actually worth doing, an accurate assessment will tell you that. And if an accurate assessment tells you not to do it, then you shouldn’t do it. So you have no reason at all to strive for anything other than accurate beliefs.

With this in mind, the fact that the empirical evidence for “depressive realism” is shockingly weak is not only unsurprising; it’s almost irrelevant. You can’t have evidence against being rational. If deluded people succeed more, that means something is very, very wrong; and the solution is clearly not to make more people deluded.

Of course, it’s worth pointing out that the evidence is shockingly weak: Depressed people show different biases, not less bias. And in fact they seem to be more overconfident in the following sense: They are more certain that what they predict will happen is what will actually happen.

So while most people think they will succeed when they will probably fail, depressed people are certain they will fail when in fact they could succeed. Both beliefs are inaccurate, but the depressed one is in an important sense more inaccurate: It tells you to give up, which is the wrong thing to do.
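One standard way to make “more certain and more wrong” precise is the Brier score, which penalizes a probability forecast by its squared error (lower is better). The forecast probabilities below are invented for illustration:

```python
def expected_brier(p_pred, p_true):
    """Expected Brier score when you always forecast probability p_pred
    for an event that actually occurs with probability p_true."""
    return p_true * (1 - p_pred) ** 2 + (1 - p_true) * p_pred ** 2

P_SUCCESS = 0.5  # suppose the task actually succeeds half the time

calibrated = expected_brier(0.5, P_SUCCESS)     # honest uncertainty
overconfident = expected_brier(0.8, P_SUCCESS)  # typical optimist
certain_doom = expected_brier(0.05, P_SUCCESS)  # near-certain of failure

print(round(calibrated, 4))     # 0.25
print(round(overconfident, 4))  # 0.34
print(round(certain_doom, 4))   # 0.4525
```

On this measure the near-certain prediction of failure scores worst of all: the extra certainty buys extra error, which is exactly the sense in which the depressed belief is more inaccurate.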

“Healthy self-delusion” ultimately amounts to trying to get you to do the right thing for the wrong reasons. But why? Do the right thing for the right reasons! If it’s really the right thing, it should have the right reasons!

Administering medicine to the dead

Jan 28 JDN 2460339

Here are a couple of pithy quotes that go around rationalist circles from time to time:

“To argue with a man who has renounced the use and authority of reason, […] is like administering medicine to the dead[…].”

Thomas Paine, The American Crisis

“It is useless to attempt to reason a man out of a thing he was never reasoned into.”

Jonathan Swift

You usually hear that abridged version, but Thomas Paine’s full quotation is actually rather interesting:

“To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.”

― Thomas Paine, The American Crisis

It is indeed quite ineffective to convert an atheist by scripture (though that doesn’t seem to stop them from trying). Yet this quotation seems to claim that the opposite should be equally ineffective: It should be impossible to convert a theist by reason.

Well, then, how else are we supposed to do it!?

Indeed, how did we become atheists in the first place!?

You were born an atheist? No, you were born having absolutely no opinion about God whatsoever. (You were born not realizing that objects don’t fade from existence when you stop seeing them! In a sense, we were all born believing ourselves to be God.)

Maybe you were raised by atheists, and religion never tempted you at all. Lucky you. I guess you didn’t have to be reasoned into atheism.

Well, most of us weren’t. Most of us were raised into religion, and told that it held all the most important truths of morality and the universe, and that believing anything else was horrible and evil and would result in us being punished eternally.

And yet, somehow, somewhere along the way, we realized that wasn’t true. And we were able to realize that because people made rational arguments.

Maybe we heard those arguments in person. Maybe we read them online. Maybe we read them in books that were written by people who died long before we were born. But somehow, somewhere people actually presented the evidence for atheism, and convinced us.

That is, they reasoned us out of something that we were not reasoned into.

I know it can happen. I have seen it happen. It has happened to me.

And it was one of the most important events in my entire life. More than almost anything else, it made me who I am today.

I’m scared that if you keep saying it’s impossible, people will stop trying to do it—and then it will stop happening to people like me.

So please, please stop telling people it’s impossible!

Quotes like these encourage you to simply write off entire swaths of humanity—most of humanity, in fact—judging them as worthless, insane, impossible to reach. When you should be reaching out and trying to convince people of the truth, quotes like these instead tell you to give up and consider anyone who doesn’t already agree with you as your enemy.

Indeed, it seems to me that the only logical conclusion of quotes like these is violence. If it’s impossible to reason with people who oppose us, then what choice do we have, but to fight them?

Violence is a weapon anyone can use.

Reason is the one weapon in the universe that works better when you’re right.

Reason is the sword that only the righteous can wield. Reason is the shield that only protects the truth. Reason is the only way we can ever be sure that the right people win—instead of just whoever happens to be strongest.

Yes, it’s true: reason isn’t always effective, and probably isn’t as effective as it should be. Convincing people to change their minds through rational argument is difficult and frustrating and often painful for both you and them—but it absolutely does happen, and our civilization would have long ago collapsed if it didn’t.

Even people who claim to have renounced all reason really haven’t: they still know 2+2=4 and they still look both ways when they cross the street. Whatever they’ve renounced, it isn’t reason; and maybe, with enough effort, we can help them see that—by reason, of course.

In fact, maybe even literally administering medicine to the dead isn’t such a terrible idea.

There are degrees of death, after all: Someone whose heart has stopped is in a different state than someone whose cerebral activity has ceased, and both of them clearly stand a better chance of being resuscitated than someone who has been vaporized by an explosion.

As our technology improves, more and more states that were previously considered irretrievably dead will instead be considered severe states of illness or injury from which it is possible to recover. We can now restart many stopped hearts; we are working on restarting stopped brains. (Of course we’ll probably never be able to restore someone who got vaporized—unless we figure out how to make backup copies of people?)

Most of the people who now live in the world’s hundreds of thousands of ICU beds would have been considered dead even just 100 years ago. But many of them will recover, because we didn’t give up on them.

So don’t give up on people with crazy beliefs either.

They may seem like they are too far gone, like nothing in the world could ever bring them back to the light of reason. But you don’t actually know that for sure, and the only way to find out is to try.

Of course, you won’t convince everyone of everything immediately. No matter how good your evidence is, that’s just not how this works. But you probably will convince someone of something eventually, and that is still well worthwhile.

You may not even see the effects yourself—people are often loath to admit when they’ve been persuaded. But others will see them. And you will see the effects of other people’s persuasion.

And in the end, reason is really all we have. It’s the only way to know that what we’re trying to make people believe is the truth.

Don’t give up on reason.

And don’t give up on other people, whatever they might believe.

Why we need critical thinking

Jul 9 JDN 2460135

I can’t find it at the moment, but a while ago I read a surprisingly compelling post on social media (I think it was Facebook, but it could also have been Reddit) questioning the common notion that we should be teaching more critical thinking in school.

I strongly believe that we should in fact be teaching more critical thinking in school—actually I think we should replace large chunks of the current math curriculum with a combination of statistics, economics and critical thinking—but it made me realize that we haven’t done enough to defend why that is something worth doing. It’s just become a sort of automatic talking point, like, “obviously you would want more critical thinking, why are you even asking?”

So here’s a brief attempt to explain why critical thinking is something that every citizen ought to be good at, and hence why it’s worthwhile to teach it in primary and secondary school.

Critical thinking, above all, allows you to detect lies. It teaches you to look past the surface of what other people are saying and determine whether what they are saying is actually true.

And our world is absolutely full of lies.

We are constantly lied to by advertising. We are constantly lied to by spam emails and scam calls. Day in and day out, people with big smiles promise us the world, if only we will send them five easy payments of $19.99.

We are constantly lied to by politicians. We are constantly lied to by religious leaders (it’s pretty much their whole job actually).

We are often lied to by newspapers—sometimes directly and explicitly, as in fake news, but more often in subtler ways. Most news articles in the mainstream press are true in the explicit facts they state, but are missing important context; and nearly all of them focus on the wrong things—exciting, sensational, rare events rather than what’s actually important and likely to affect your life. If newspapers were an accurate reflection of genuine risk, they’d have more articles on suicide than homicide, and something like one million articles on climate change for every one on some freak accident (like that submarine full of billionaires).

We are even lied to by press releases on science, which likewise focus on new, exciting, sensational findings rather than supported, established, documented knowledge. And don’t tell me everyone already knows it; just stating basic facts about almost any scientific field will shock and impress most of the audience, because they clearly didn’t learn this stuff in school (or, what amounts to the same thing, don’t remember it). This isn’t just true of quantum physics; it’s even true of economics—which directly affects people’s lives.

Critical thinking is how you can tell when a politician has distorted the views of his opponent and you need to spend more time listening to that opponent speak. Critical thinking could probably have saved us from electing Donald Trump President.

Critical thinking is how you tell that a supplement which “has not been evaluated by the FDA” (which is to say, nearly all of them) probably contains something mostly harmless that maybe would benefit you if you were deficient in it, but for most people really won’t matter—and definitely isn’t something you can substitute for medical treatment.

Critical thinking is how you recognize that much of the history you were taught as a child was a sanitized, simplified, nationalist version of what actually happened. But it’s also how you recognize that simply inverting it all and becoming the sort of anti-nationalist who hates your own country is at least as ridiculous. Thomas Jefferson was both a pioneer of democracy and a slaveholder. He was both a hero and a villain. The world is complicated and messy—and nothing will let you see that faster than critical thinking.


Critical thinking tells you that whenever a new “financial innovation” appears—like mortgage-backed securities or cryptocurrency—it will probably make obscene amounts of money for a handful of insiders, but will otherwise be worthless if not disastrous to everyone else. (And maybe if enough people had good critical thinking skills, we could stop the next “innovation” from getting so far!)

More widespread critical thinking could even improve our job market, as interviewers would no longer be taken in by the candidates who are best at overselling themselves, and would instead pay more attention to the more-qualified candidates who are quiet and honest.

In short, critical thinking constitutes a large portion of what is ordinarily called common sense or wisdom; some of that simply comes from life experience, but a great deal of it is actually a learnable skill set.

Of course, even if it can be learned, that still raises the question of how it can be taught. I don’t think we have a sound curriculum for teaching critical thinking, and in my more cynical moments I wonder if many of the powers that be like it that way. Knowing that many—not all, but many—politicians make their careers primarily from deceiving the public, it’s not so hard to see why those same politicians wouldn’t want to support teaching critical thinking in public schools. And it’s almost funny to me watching evangelical Christians try to justify why critical thinking is dangerous—they come so close to admitting that their entire worldview is totally unfounded in logic or evidence.

But at least I hope I’ve convinced you that it is something worthwhile to know, and that the world would be better off if we could teach it to more people.

How to change minds

Aug 29 JDN 2459456

Think for a moment about the last time you changed your mind on something important. If you can’t think of any examples, that’s not a good sign. Think harder; look back further. If you still can’t find any examples, you need to take a deep, hard look at yourself and how you are forming your beliefs. The path to wisdom is not found by starting with the right beliefs, but by starting with the wrong ones and recognizing them as wrong. No one was born getting everything right.

If you remember changing your mind about something, but don’t remember exactly when, that’s not a problem. Indeed, this is the typical case, and I’ll get to why in a moment. Try to remember as much as you can about the whole process, however long it took.

If you still can’t specifically remember changing your mind, try to imagine a situation in which you would change your mind—and if you can’t do that, you should be deeply ashamed and I have nothing further to say to you.

Thinking back to that time: Why did you change your mind?

It’s possible that it was something you did entirely on your own, through diligent research of primary sources or even your own mathematical proofs or experimental studies. This does occasionally happen; it has certainly happened to me as an active researcher. But it’s clearly not the typical case of what changes people’s minds, and it’s quite likely that you have never experienced it yourself.

The far more common scenario—even for active researchers—is also the more mundane one: You changed your mind because someone convinced you. You encountered a persuasive argument, and it changed the way you think about things.

In fact, it probably wasn’t just one persuasive argument; it was probably many arguments, from multiple sources, over some span of time. It could be as little as minutes or hours; it could be as long as years.

Probably the first time someone tried to change your mind on that issue, they failed. The argument may even have degenerated into shouting and name-calling. You both went away thinking that the other side was composed of complete idiots or heartless monsters. And then, a little later, thinking back on the whole thing, you remembered one thing they said that was actually a pretty good point.

This happened again with someone else, and again with yet another person. And each time your mind changed just a little bit—you became less certain of some things, or incorporated some new information you didn’t know before. The towering edifice of your worldview would not be toppled by a single conversation—but a few bricks here and there did get taken out and replaced.

Or perhaps you weren’t even the target of the conversation; you simply overheard it. This seems especially common in the age of social media, where public and private spaces become blurred and two family members arguing about politics can blow up into a viral post that is viewed by millions. Perhaps you changed your mind not because of what was said to you, but because of what two other people said to one another; perhaps the one you thought was on your side just wasn’t making as many good arguments as the one on the other side.

Now, you may be thinking: Yes, people like me change our minds, because we are intelligent and reasonable. But those people, on the other side, aren’t like that. They are stubborn and foolish and dogmatic and stupid.

And you know what? You probably are an especially intelligent and reasonable person. If you’re reading this blog, there’s a good chance that you are at least above-average in your level of education, rationality, and open-mindedness.

But no matter what beliefs you hold, I guarantee you there is someone out there who shares many of them and is stubborn and foolish and dogmatic and stupid. And furthermore, there is probably someone out there who disagrees with many of your beliefs and is intelligent and open-minded and reasonable.

This is not to say that there’s no correlation between your level of reasonableness and what you actually believe. Obviously some beliefs are more rational than others, and rational people are more likely to hold those beliefs. (If this weren’t the case, we’d be doomed.) Other things equal, an atheist is more reasonable than a member of the Taliban; a social democrat is more reasonable than a neo-Nazi; a feminist is more reasonable than a misogynist; a member of the Human Rights Campaign is more reasonable than a member of the Westboro Baptist Church. But reasonable people can be wrong, and unreasonable people can be right.

You should be trying to seek out the most reasonable people who disagree with you. And you should be trying to present yourself as the most reasonable person who expresses your own beliefs.

This can be difficult—especially that first part, as the world (or at least the world spanned by Facebook and Twitter) seems to be filled with people who are astonishingly dogmatic and unreasonable. Often you won’t be able to find any reasonable disagreement. Often you will find yourself in threads full of rage, hatred and name-calling, and you will come away disheartened, frustrated, or even despairing for humanity. The whole process can feel utterly futile.

And yet, somehow, minds change.

Support for same-sex marriage in the US rose from 27% to 70% just since 1997.

Read that date again: 1997. Less than 25 years ago.

The proportion of new marriages which were interracial has risen from 3% in 1967 to 19% today. Given the racial demographics of the US, this is almost at the level of random assortment.

Ironically I think that the biggest reason people underestimate the effectiveness of rational argument is the availability heuristic: We can’t call to mind any cases where we changed someone’s mind completely. We’ve never observed a pi-radian turnaround in someone’s whole worldview, and thus, we conclude that nobody ever changes their mind about anything important.

But in fact most people change their minds slowly and gradually, and are embarrassed to admit they were wrong in public, so they change their minds in private. (One of the best single changes we could make toward improving human civilization would be to make it socially rewarded to publicly admit you were wrong. Even the scientific community doesn’t do this nearly as well as it should.) Often changing your mind doesn’t even really feel like changing your mind; you just experience a bit more doubt, learn a bit more, and repeat the process over and over again until, years later, you believe something different than you did before. You moved 0.1 or even 0.01 radians at a time, until at last you came all the way around.

It may be in fact that some people’s minds cannot be changed—either on particular issues, or even on any issue at all. But it is so very, very easy to jump to that conclusion after a few bad interactions, that I think we should intentionally overcompensate in the opposite direction: Only give up on someone after you have utterly overwhelming evidence that their mind cannot ever be changed in any way.

I can’t guarantee that this will work. Perhaps too many people are too far gone.

But I also don’t see any alternative. If the truth is to prevail, it will be by rational argument. This is the only method that systematically favors the truth. All other methods give equal or greater power to lies.

Fake skepticism

Jun 3 JDN 2458273

“You trust the mainstream media?” “Wake up, sheeple!” “Don’t listen to what so-called scientists say; do your own research!”

These kinds of statements have become quite ubiquitous lately (though perhaps the attitudes were always there, and we only began to hear them because of the Internet and social media), and are often used to defend the most extreme and bizarre conspiracy theories, from moon-landing denial to flat Earth. The amazing thing about these kinds of statements is that they can be used to defend literally anything, as long as you can find some source with less than 100% credibility that disagrees with it. (And what source has 100% credibility?)

And that, I think, should tell you something. An argument that can prove anything is an argument that proves nothing.

Reversed stupidity is not intelligence. The fact that the mainstream media, or the government, or the pharmaceutical industry, or the oil industry, or even gangsters, fanatics, or terrorists believes something does not make it less likely to be true.

In fact, the vast majority of beliefs held by basically everyone—including the most fanatical extremists—are true. I could list such consensus true beliefs for hours: “The sky is blue.” “2+2=4.” “Ice is colder than fire.”

Even if a belief is characteristic of a specifically evil or corrupt organization, that does not necessarily make it false (though it usually is evidence of falsehood in a Bayesian sense). If only terrible people believe X, then maybe you shouldn’t believe X. But if both good and bad people believe X, the fact that bad people believe X really shouldn’t matter to you.
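To make that Bayesian aside concrete, here is a toy calculation (all the numbers are invented purely for illustration): learning that an unreliable group endorses a claim should shift your credence in it only slightly.

```python
# Toy Bayesian update (illustrative numbers only): how much should the fact
# that an unreliable group believes X shift our credence that X is true?

def posterior(prior, p_believe_if_true, p_believe_if_false):
    """P(X is true | group believes X), by Bayes' rule."""
    numerator = p_believe_if_true * prior
    denominator = numerator + p_believe_if_false * (1 - prior)
    return numerator / denominator

prior = 0.50  # our credence in X before learning who believes it

# Suppose (hypothetically) the group endorses true claims 60% of the time
# and false ones 70% of the time--even bad actors believe mostly true,
# mundane things.
print(posterior(prior, 0.60, 0.70))  # ~0.46: only weak evidence against X
```

The endorsement matters only insofar as the group is *more* likely to believe falsehoods than truths; when those two likelihoods are equal, the update vanishes entirely.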

People who use this kind of argument often present themselves as being “skeptics”. They imagine that they have seen through the veil of deception that blinds others.

In fact, quite the opposite is the case: This is fake skepticism. These people are not uniquely skeptical; they are uniquely credulous. If you think the Earth is flat because you don’t trust the mainstream scientific community, that means you do trust someone far less credible than the mainstream scientific community.

Real skepticism is difficult. It requires concerted effort and investigation, and typically takes years. To really seriously challenge the expert consensus in a field, you need to become an expert in that field. Ideally, you should get a graduate degree in that field and actually start publishing your heterodox views. Failing that, you should at least be spending hundreds or thousands of hours doing independent research. If you are unwilling or unable to do that, you are not qualified to assess the validity of the expert consensus.

This does not mean the expert consensus is always right—remarkably often, it isn’t. But it means you aren’t allowed to say it’s wrong, because you don’t know enough to assess that.

This is not elitism. This is not an argument from authority. This is a basic respect for the effort and knowledge that experts spend their lives acquiring.

People don’t like being told that they are not as smart as other people—even though, with any variation at all, that’s got to be true for a certain proportion of people. But I’m not even saying experts are smarter than you. I’m saying they know more about their particular field of expertise.

Do you walk up to construction workers on the street and critique how they lay concrete? When you step on an airplane, do you explain to the captain how to read an altimeter? When you hire a plumber, do you insist on using the snake yourself?

Probably not. And why not? Because you know these people have training; they do this for a living. Yeah, well, scientists do this for a living too—and our training is much longer. To be a plumber, you need a high school diploma and an apprenticeship that usually lasts about four years. To be a scientist, you need a PhD, which means four years of college plus an additional five or six years of graduate school.

To be clear, I’m not saying you should listen to experts speaking outside their expertise. Some of the most idiotic, arrogant things ever said by human beings have been said by physicists opining on biology or economists ranting about politics. Even within a field, some people have such narrow expertise that you can’t really trust them even on things that seem related—like macroeconomists with idiotic views on trade, or ecologists who clearly don’t understand evolution.

This is also why one of the great challenges of being a good interdisciplinary scientist is actually obtaining enough expertise in both fields you’re working in; it isn’t literally twice the work (since there is overlap—or you wouldn’t be doing it—and you do specialize in particular interdisciplinary subfields), but it’s definitely more work, and there are definitely a lot of people on each side of the fence who may never take you seriously no matter what you do.

How do you tell who to trust? This is why I keep coming back to the matter of expert consensus. The world is much too complicated for anyone, much less everyone, to understand it all. We must be willing to trust the work of others. The best way we have found to decide which work is trustworthy is by the norms and institutions of the scientific community itself. Since 97% of climatologists say that climate change is caused by humans, they’re probably right. Since 99% of biologists believe humans evolved by natural selection, that’s probably what happened. Since 87% of economists oppose tariffs, tariffs probably aren’t a good idea.

Can we be certain that the consensus is right? No. There is precious little in this universe that we can be certain about. But as in any game of chance, you need to play the best odds, and my money will always be on the scientific consensus.

What good are macroeconomic models? How could they be better?

Dec 11 JDN 2457734

One thing that I don’t think most people know, but which is immediately obvious to any student of economics at the college level or above, is that there is a veritable cornucopia of different macroeconomic models. There are growth models (the Solow model, the Harrod-Domar model, the Ramsey model), monetary policy models (IS-LM, aggregate demand-aggregate supply), trade models (the Mundell-Fleming model, the Heckscher-Ohlin model), large-scale computational models (dynamic stochastic general equilibrium, agent-based computational economics), and I could go on.

This immediately raises the question: What are all these models for? What good are they?

A cynical view might be that they aren’t useful at all, that this is all false mathematical precision which makes economics persuasive without making it accurate or useful. And with such a proliferation of models and contradictory conclusions, I can see why such a view would be tempting.

But many of these models are useful, at least in certain circumstances. They aren’t completely arbitrary. Indeed, one of the litmus tests of the last decade has been how well the models held up against the events of the Great Recession and following Second Depression. The Keynesian and cognitive/behavioral models did rather well, albeit with significant gaps and flaws. The Monetarist, Real Business Cycle, and most other neoclassical models failed miserably, as did Austrian and Marxist notions so fluid and ill-defined that I’m not sure they deserve to even be called “models”. So there is at least some empirical basis for deciding what assumptions we should be willing to use in our models. Yet even if we restrict ourselves to Keynesian and cognitive/behavioral models, there are still a great many to choose from, which often yield inconsistent results.

So let’s compare with a science that is uncontroversially successful: Physics. How do mathematical models in physics compare with mathematical models in economics?

Well, there are still a lot of models, first of all. There’s the Bohr model, the Schrodinger equation, the Dirac equation, Newtonian mechanics, Lagrangian mechanics, Bohmian mechanics, Maxwell’s equations, Faraday’s law, Coulomb’s law, the Einstein field equations, the Minkowski metric, the Schwarzschild metric, the Rindler metric, Feynman-Wheeler theory, the Navier-Stokes equations, and so on. So a cornucopia of models is not inherently a bad thing.

Yet, there is something about physics models that makes them more reliable than economics models.

Partly it is that the systems physicists study are literally two dozen orders of magnitude or more smaller and simpler than the systems economists study. Their task is inherently easier than ours.

But it’s not just that; their models aren’t just simpler—actually they often aren’t. The Navier-Stokes equations are a lot more complicated than the Solow model. They’re also clearly a lot more accurate.
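For a sense of just how simple the Solow model is, its entire core dynamics fit in a few lines (a minimal sketch with a Cobb-Douglas production function and invented, uncalibrated parameter values):

```python
# Minimal Solow model sketch (illustrative, uncalibrated parameters).
# Capital per effective worker k evolves as
#   k_next = k + s*k**alpha - (n + g + delta)*k
# with savings rate s, capital share alpha, population growth n,
# technology growth g, and depreciation delta.

def solow_step(k, s=0.25, alpha=0.33, n=0.01, g=0.02, delta=0.05):
    """One period of capital accumulation per effective worker."""
    return k + s * k**alpha - (n + g + delta) * k

k = 1.0
for _ in range(1000):
    k = solow_step(k)

# Closed-form steady state, where s*k**alpha = (n + g + delta)*k:
k_star = (0.25 / (0.01 + 0.02 + 0.05)) ** (1 / (1 - 0.33))
print(round(k, 6), round(k_star, 6))  # the iteration converges to k_star
```

Whatever its empirical shortcomings, the model is a one-line difference equation; Navier-Stokes, by contrast, is a system of nonlinear partial differential equations whose general solutions remain an open problem.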

The feature that models in physics seem to have that models in economics do not is something we might call nesting, or maybe consistency. Models in physics don’t come out of nowhere; you can’t just make up your own new model based on whatever assumptions you like and then start using it—which you very much can do in economics. Models in physics are required to fit consistently with one another, and usually inside one another, in the following sense:

The Dirac equation strictly generalizes the Schrodinger equation, which strictly generalizes the Bohr model. Bohmian mechanics is consistent with quantum mechanics, which strictly generalizes Lagrangian mechanics, which generalizes Newtonian mechanics. The Einstein field equations are consistent with Maxwell’s equations and strictly generalize the Minkowski, Schwarzschild, and Rindler metrics. Maxwell’s equations strictly generalize Faraday’s law and Coulomb’s law.

In other words, there are a small number of canonical models—the Dirac equation, Maxwell’s equations, and the Einstein field equations, essentially—inside which all other models are nested. The simpler models like Coulomb’s law and Newtonian mechanics are not contradictory with these canonical models; they are contained within them, subject to certain constraints (such as macroscopic systems far below the speed of light).

This is something I wish more people understood (I blame Kuhn for confusing everyone about what paradigm shifts really entail); Einstein did not overturn Newton’s laws, he extended them to domains where they previously had failed to apply.

This is why it is sensible to say that certain theories in physics are true; they are the canonical models that underlie all known phenomena. Other models can be useful, but not because we are relativists about truth or anything like that; Newtonian physics is a very good approximation of the Einstein field equations at the scale of many phenomena we care about, and is also much more mathematically tractable. If we ever find ourselves in situations where Newton’s equations no longer apply—near a black hole, traveling near the speed of light—then we know we can fall back on the more complex canonical model; but when the simpler model works, there’s no reason not to use it.
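You can see this nesting numerically (a quick sketch; the masses and speeds are just for illustration): relativistic kinetic energy agrees with the Newtonian formula to extraordinary precision at everyday speeds, and departs from it only as v approaches c.

```python
import math

# Nesting in action: relativistic kinetic energy (gamma - 1)*m*c^2
# reduces to the Newtonian (1/2)*m*v^2 when v << c.

C = 299_792_458.0  # speed of light, m/s

def ke_newton(m, v):
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

m = 1000.0  # a 1000 kg car
v = 30.0    # roughly highway speed, in m/s
print(ke_newton(m, v), ke_relativistic(m, v))  # nearly identical here

# At 90% of the speed of light, the Newtonian formula fails badly:
print(ke_newton(1.0, 0.9 * C), ke_relativistic(1.0, 0.9 * C))
```

The fractional disagreement at speed v is of order (v/c)^2, which is about 10^-14 for a car; that is why Newtonian mechanics remains the right tool for nearly everything we do, even though it is strictly a limiting case.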

There are still very serious gaps in the knowledge of physics; in particular, there is a fundamental gulf between quantum mechanics and the Einstein field equations that has been unresolved for decades. A solution to this “quantum gravity problem” would be essentially a guaranteed Nobel Prize. So even a canonical model can be flawed, and can be extended or improved upon; the result is then a new canonical model which we now regard as our best approximation to truth.

Yet the contrast with economics is still quite clear. We don’t have one or two or even ten canonical models to refer back to. We can’t say that the Solow model is an approximation of some greater canonical model that works for these purposes—because we don’t have that greater canonical model. We can’t say that agent-based computational economics is approximately right, because we have nothing to approximate it to.

I went into economics thinking that neoclassical economics needed a new paradigm. I have now realized something much more alarming: Neoclassical economics doesn’t really have a paradigm. Or if it does, it’s a very informal paradigm, one that is expressed by the arbitrary judgments of journal editors, not one that can be written down as a series of equations. We assume perfect rationality, except when we don’t. We assume constant returns to scale, except when that doesn’t work. We assume perfect competition, except when that doesn’t get the results we wanted. The agents in our models are infinite identical psychopaths, and they are exactly as rational as needed for the conclusion I want.

This is quite likely why there is so much disagreement within economics. When you can permute the parameters however you like with no regard to a canonical model, you can more or less draw whatever conclusion you want, especially if you aren’t tightly bound to empirical evidence. I know a great many economists who are sure that raising minimum wage results in large disemployment effects, because the models they believe in say that it must, even though the empirical evidence has been quite clear that these effects are small if they are present at all. If we had a canonical model of employment that we could calibrate to the empirical evidence, that couldn’t happen anymore; there would be a coefficient I could point to that would refute their argument. But when every new paper comes with a new model, there’s no way to do that; one set of assumptions is as good as another.

Indeed, as I mentioned in an earlier post, a remarkable number of economists seem to embrace this relativism. “There is no true model,” they say; “we do what is useful.” Recently I encountered a book by the eminent economist Deirdre McCloskey which, though I confess I haven’t read it in its entirety, appears to be trying to argue that economics is just a meaningless language game that doesn’t have or need to have any connection with actual reality. (If any of you have read it and think I’m misunderstanding it, please explain. As it is I haven’t bought it for a reason any economist should respect: I am disinclined to incentivize such writing.)

Creating such a canonical model would no doubt be extremely difficult. Indeed, it is a task that would require the combined efforts of hundreds of researchers and could take generations to achieve. The true equations that underlie the economy could be totally intractable even for our best computers. But quantum mechanics wasn’t built in a day, either. The key challenge here lies in convincing economists that this is something worth doing—that if we really want to be taken seriously as scientists we need to start acting like them. Scientists believe in truth, and they are trying to find it out. While not immune to tribalism or ideology or other human limitations, they resist them as fiercely as possible, always turning back to the evidence above all else. And in their combined strivings, they attempt to build a grand edifice, a universal theory to stand the test of time—a canonical model.