Bayesian updating with irrational belief change

Jul 27 JDN 2460884

For the last few weeks I’ve been working at a golf course. (It’s a bit of an odd situation: I’m not actually employed by the golf course; I’m contracted by a nonprofit to be a “job coach” for a group of youths who are part of a work program that involves them working at the golf course.)

I hate golf. I have always hated golf. I find it boring and pointless—which, to be fair, is my reaction to most sports—and also an enormous waste of land and water. A golf course is also a great place for oligarchs to arrange collusion.

But I noticed something about being on the golf course every day, seeing people playing and working there: I feel like I hate it a bit less now.

This is almost certainly a mere-exposure effect: Simply being exposed to something many times makes it feel familiar, and that tends to make you like it more, or at least dislike it less. (There are some exceptions: repeated exposure to trauma can actually make you more sensitive to it, hating it even more.)

I kinda thought this would happen. I didn’t really want it to happen, but I thought it would.

This is very interesting from the perspective of Bayesian reasoning, because it is a theorem of Bayesian logic (though I cannot seem to find anyone naming it; it’s like a folk theorem, I guess) that the following is true:

The prior expectation of the posterior is the expectation of the prior.

The prior is what you believe before observing the evidence; the posterior is what you believe afterward. This theorem describes a relationship that holds between them.

This theorem means that, if I am being optimally rational, I should take into account all expected future evidence, not just evidence I have already seen. I should not expect to encounter evidence that will change my beliefs—if I did expect to see such evidence, I should change my beliefs right now!
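In symbols (my notation, not anything from the post; H is the hypothesis and E is the evidence you might observe), the theorem is just the law of total expectation applied to the posterior:

```latex
% The "folk theorem": the prior expectation of the posterior equals the prior.
% Averaging the posterior P(H | E = e) over every piece of evidence e you might
% see, weighted by how likely you currently think each one is, recovers P(H).
\mathrm{E}\big[\, P(H \mid E) \,\big]
  = \sum_{e} P(H \mid E = e)\, P(E = e)
  = \sum_{e} P(H,\, E = e)
  = P(H)
```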

This might be easier to grasp with an example.

Suppose I am trying to predict whether it will rain at 5:00 pm tomorrow, and I currently estimate that the probability of rain is 30%. This is my prior probability.

What will actually happen tomorrow is that it will rain or it won’t; so my posterior probability will either be 100% (if it rains) or 0% (if it doesn’t). But I had better assign a 30% chance to the event that will make me 100% certain it rains (namely, I see rain), and a 70% chance to the event that will make me 100% certain it doesn’t rain (namely, I see no rain); if I were to assign any other probabilities, then I must not really think the probability of rain at 5:00 pm tomorrow is 30%.

(The keen Bayesian will notice that the expected variance of my posterior need not be the variance of my prior: My initial variance is relatively high (it’s actually 0.3*0.7 = 0.21, because this is a Bernoulli distribution), because I don’t know whether it will rain or not; but my posterior variance will be 0, because I’ll know the answer once it rains or doesn’t.)
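Here is a minimal sketch of that rain example in code, just to make the bookkeeping explicit (the 30% prior is from the example above; everything else is my own illustration):

```python
# Rain example: prior P(rain at 5 pm) = 0.30.
# The posterior will be 1.0 if I see rain, 0.0 if I don't.
prior = 0.30

# Average the two possible posteriors, weighted by how likely I think each observation is.
expected_posterior = prior * 1.0 + (1 - prior) * 0.0
assert abs(expected_posterior - prior) < 1e-9   # expected posterior equals the prior

# Variance of the prior (Bernoulli): p * (1 - p) = 0.21.
prior_variance = prior * (1 - prior)

# Expected variance of the posterior: in either outcome I'll be certain, so 0.
expected_posterior_variance = prior * 0.0 + (1 - prior) * 0.0

print(expected_posterior, prior_variance, expected_posterior_variance)
```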

It’s a bit trickier to analyze, but this also works even if the evidence won’t make me certain. Suppose I am trying to determine the probability that some hypothesis is true. If I expect to see any evidence that might change my beliefs at all, then I should, on average, expect to see just as much evidence making me believe the hypothesis more as I see evidence that will make me believe the hypothesis less. If that is not what I expect, I should really change how much I believe the hypothesis right now!
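A quick sketch with made-up numbers may help here too: suppose the evidence is a noisy test that merely shifts my belief up or down rather than settling it. The probability-weighted average of the possible posteriors still comes out to exactly the prior. (The 0.8 and 0.4 likelihoods below are purely illustrative, not from the post.)

```python
# Hypothesis H has prior 0.30. A noisy test comes back "positive" with
# probability 0.8 if H is true and 0.4 if H is false, so it shifts belief
# without settling the question.
p_h = 0.30
p_pos_given_h, p_pos_given_not_h = 0.8, 0.4

# Total probability of seeing a positive result.
p_pos = p_h * p_pos_given_h + (1 - p_h) * p_pos_given_not_h          # 0.52

post_if_pos = p_h * p_pos_given_h / p_pos                            # ~0.46: belief goes up
post_if_neg = p_h * (1 - p_pos_given_h) / (1 - p_pos)                # 0.125: belief goes down

# Weighted by how likely each observation is, the ups and downs cancel exactly.
expected_posterior = p_pos * post_if_pos + (1 - p_pos) * post_if_neg
assert abs(expected_posterior - p_h) < 1e-9
```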

So what does this mean for the golf example?

Was I wrong to hate golf quite so much before, because I knew that spending time on a golf course might make me hate it less?

I don’t think so.

See, the thing is: I know I’m not perfectly rational.

If I were indeed perfectly rational, then anything I expect to change my beliefs is a rational Bayesian update, and I should indeed factor it into my prior beliefs.

But if I know for a fact that I am not perfectly rational, that there are things which will change my beliefs in ways that make them deviate from rational Bayesian updating, then in fact I should not take those expected belief changes into account in my prior beliefs—since I expect to be wrong later, updating on that would just make me wrong now as well. I should only update on the expected belief changes that I believe will be rational.

This is something that a boundedly-rational person should do that neither a perfectly-rational nor perfectly-irrational person would ever do!

But maybe you don’t find the golf example convincing. Maybe you think I shouldn’t hate golf so much, and it’s not irrational for me to change my beliefs in that direction.


Very well. Let me give you a thought experiment which provides a very clear example of a time when you definitely would think your belief change was irrational.


To be clear, I’m not suggesting the two situations are in any way comparable; the golf thing is pretty minor, and for the thought experiment I’m intentionally choosing something quite extreme.

Here’s the thought experiment.

A mad scientist offers you a deal: Take this pill and you will receive $50 million. Naturally, you ask what the catch is. The catch, he explains, is that taking the pill will make you staunchly believe that the Holocaust didn’t happen. Take this pill, and you’ll be rich, but you’ll become a Holocaust denier. (I have no idea if making such a pill is even possible, but it’s a thought experiment, so bear with me. It’s certainly far less implausible than Swampman.)

I will assume that you are not, and do not want to become, a Holocaust denier. (If not, I really don’t know what else to say to you right now. It happened.) So if you take this pill, your beliefs will change in a clearly irrational way.

But I still think it’s probably justifiable to take the pill. This is absolutely life-changing money, for one thing, and being a random person who is a Holocaust denier isn’t that bad in the scheme of things. (Maybe it would be worse if you were in a position to have some kind of major impact on policy.) In fact, before taking the pill, you could write out a contract with a trusted friend that will force you to donate some of the $50 million to high-impact charities—and perhaps some of it to organizations that specifically fight Holocaust denial—thus ensuring that the net benefit to humanity is positive. Once you take the pill, you may be mad about the contract, but you’ll still have to follow it, and the net benefit to humanity will still be positive as reckoned by your prior, more correct, self.

It’s certainly not irrational to take the pill. There are perfectly-reasonable preferences you could have (indeed, likely do have) that would say that getting $50 million is more important than having incorrect beliefs about a major historical event.

And if it’s rational to take the pill, and you intend to take the pill, then of course it’s rational to believe that in the future, you will have taken the pill and you will become a Holocaust denier.

But it would be absolutely irrational for you to become a Holocaust denier right now because of that. The pill isn’t going to provide evidence that the Holocaust didn’t happen (for no such evidence exists); it’s just going to alter your brain chemistry in such a way as to make you believe that the Holocaust didn’t happen.

So here we have a clear example where you expect to be more wrong in the future.

Of course, if this really only happens in weird thought experiments about mad scientists, then it doesn’t really matter very much. But I contend it happens in reality all the time:

  • You know that by hanging around people with an extremist ideology, you’re likely to adopt some of that ideology, even if you really didn’t want to.
  • You know that if you experience a traumatic event, it is likely to make you anxious and fearful in the future, even when you have little reason to be.
  • You know that if you have a mental illness, you’re likely to form harmful, irrational beliefs about yourself and others whenever you have an episode of that mental illness.

Now, all of these belief changes are things you would likely try to guard against: If you are a researcher studying extremists, you might make a point of taking frequent vacations to talk with regular people and help yourself re-calibrate your beliefs back to normal. Nobody wants to experience trauma, and if you do, you’ll likely seek out therapy or other support to help heal yourself from that trauma. And one of the most important things they teach you in cognitive-behavioral therapy is how to challenge and modify harmful, irrational beliefs when they are triggered by your mental illness.

But these guarding actions only make sense precisely because the anticipated belief change is irrational. If you anticipate a rational change in your beliefs, you shouldn’t try to guard against it; you should factor it into what you already believe.

This also gives me a little more sympathy for Evangelical Christians who try to keep their children from being exposed to secular viewpoints. I think we both agree that having more contact with atheists will make their children more likely to become atheists—but we view this expected outcome differently.

From my perspective, this is a rational change, and it’s a good thing, and I wish they’d factor it into their current beliefs already. (Like hey, maybe if talking to a bunch of smart people and reading a bunch of books on science and philosophy makes you think there’s no God… that might be because… there’s no God?)

But I think, from their perspective, this is an irrational change, it’s a bad thing, the children have been “tempted by Satan” or something, and thus it is their duty to protect their children from this harmful change.

Of course, I am not a subjectivist. I believe there’s a right answer here, and in this case I’m pretty sure it’s mine. (Wouldn’t I always say that? No, not necessarily; there are lots of matters for which I believe that there are experts who know better than I do—that’s what experts are for, really—and thus if I find myself disagreeing with those experts, I try to educate myself more and update my beliefs toward theirs, rather than just assuming they’re wrong. I will admit, however, that a lot of people don’t seem to do this!)

But this does change how I might tend to approach the situation of exposing their children to secular viewpoints. I now understand better why they would see that exposure as a harmful thing, and thus be resistant to actions that otherwise seem obviously beneficial, like teaching kids science and encouraging them to read books. In order to get them to stop “protecting” their kids from the free exchange of ideas, I might first need to persuade them that introducing some doubt into their children’s minds about God isn’t such a terrible thing. That sounds really hard, but it at least clearly explains why they are willing to fight so hard against things that, from my perspective, seem good. (I could also try to convince them that exposure to secular viewpoints won’t make their kids doubt God, but the thing is… that isn’t true. I’d be lying.)

That is, Evangelical Christians are not simply incomprehensibly evil authoritarians who hate truth and knowledge; they quite reasonably want to protect their children from things that will harm them, and they firmly believe that being taught about evolution and the Big Bang will make their children more likely to suffer great harm—indeed, the greatest harm imaginable, the horror of an eternity in Hell. Convincing them that this is not the case—indeed, ideally, that there is no such place as Hell—sounds like a very tall order; but I can at least more keenly grasp the equilibrium they’ve found themselves in, where they believe that anything that challenges their current beliefs poses a literally existential threat. (Honestly, as a memetic adaptation, this is brilliant. Like a turtle, the meme has grown itself a nigh-impenetrable shell. No wonder it has managed to spread throughout the world.)

Defending Moral Realism


Oct 6 JDN 2460590

In the last few posts I have only considered arguments against moral realism, and shown them to be lacking. Yet if you were already convinced of moral anti-realism, this probably didn’t change your mind—it’s entirely possible to have a bad argument for a good idea. (Consider the following argument: “Whales are fish, fish are mammals, therefore whales are mammals.”) What you need is arguments for moral realism.

Fortunately, such arguments are not hard to find. My personal favorite was offered by one of my professors in a philosophy course: “I fail all moral anti-realists. If you think that’s unfair, don’t worry: You’re not a moral anti-realist.” In other words, if you want to talk coherently at all about what actions are good or bad, fair or unfair, then you cannot espouse moral anti-realism; and if you do espouse moral anti-realism, there is no reason for us not to simply ignore you (or imprison you!) and go on living out our moral beliefs—especially if you are right that morality is a fiction. Indeed, the reason we don’t actually imprison all moral anti-realists is precisely because we are moral realists, and we think it is morally wrong to imprison someone for espousing unpopular or even ridiculous beliefs.

That of course is a pragmatic argument, not very compelling on epistemological grounds, but there are other arguments that cut deeper. Perhaps the most compelling is the realization that rationality itself is a moral principle—it says that we ought to believe what conforms to reason and ought not to believe what does not. We need at least some core notion of normativity even to value truth and honesty, to seek knowledge, to even care whether moral realism is correct or incorrect. In a total moral vacuum, we can fight over our values and beliefs, we can kill each other over them, but we cannot discuss them or debate them, for discussion and debate themselves presuppose certain moral principles.

Typically moral anti-realists expect us to accept epistemic normativity, but if they do this then they cannot deny the legitimacy of all normative claims. If their whole argument rests upon undermining normativity, then it is self-defeating. If it doesn’t, then anti-realists need to explain the difference between “moral” and “normative”, and explain why the former is so much more suspect than the latter—but even then we have objective obligations that bind our behavior. The difference, I suppose, would involve a tight restriction on the domains of discourse in which normativity applies. Scientific facts? Normative. Interpersonal relations? Subjective. I suppose it’s logically coherent to say that it is objectively wrong to be a Creationist but not objectively wrong to be a serial killer; but this is nothing if not counter-intuitive.

Moreover, it is unclear to me what a universe would be like if it had no moral facts. In what sort of universe would it not be best to believe what is true? In what sort of universe would it not be wrong to harm others for selfish gains? In what sort of world would it be wrong to keep a promise, or good to commit genocide? It seems to me that we are verging on nonsense, rather like what happens if we try to imagine a universe where 2+2=5.

Moreover, there is a particular moral principle, which depends upon moral realism, yet is almost universally agreed upon, even by people who otherwise profess to be moral relativists or anti-realists.

I call it the Hitler Principle, and it’s quite simple:

The Holocaust was bad.

In large part, ethical philosophy since 1945 has been the attempt to systematically justify the Hitler Principle. Only if moral realism is true can we say that the Holocaust was bad, morally bad, unequivocally, objectively, universally, regardless of the beliefs, feelings, desires, culture or upbringing of its perpetrators. And if we can’t even say that, can we say anything at all? If the Holocaust wasn’t wrong, nothing is. And if nothing is wrong, then does it even matter if we believe what is true?

But then, stop and think for a moment: If we know this—if it’s so obvious to just about everyone that the Holocaust was wrong, so obvious that anyone who denies it we immediately recognize as evil or insane (or lying or playing games)—then doesn’t that already offer us an objective moral standard?

I contend that it does—that the Hitler Principle is so self-evident that it can form an objective standard by which to measure all moral theory. I would sooner believe the Sun revolves around the Earth than deny the Holocaust was wrong. I would sooner consider myself a brain in a vat than suppose that systematic extermination of millions of innocent people could ever be morally justified. Richard Swinburne, a philosopher of religion at Oxford, put it well: “it is more obvious to almost all of us that the genocide conducted by Hitler was morally wrong than that we are not now dreaming, or that the Earth is many millions of years old.” Because at least this one moral fact is so obviously, incorrigibly true, we can use it to test our theories of morality. Just as we would immediately reject any theory of physics which denied that the sky is blue, we should also reject any theory of morality which denies that the Holocaust was wrong. This might seem obvious, but by itself it is sufficient to confirm moral realism.

Similar arguments can be made for other moral propositions that virtually everyone accepts, like the following:

  1. Theft is wrong.
  2. Homicide is wrong.
  3. Lying is wrong.
  4. Rape is wrong.
  5. Kindness is good.
  6. Keeping promises is good.
  7. Happiness is good.
  8. Suffering is bad.

With appropriate caveats (lying isn’t always wrong, if it is justified by some greater good; homicide is permissible in self-defense; promises made under duress do not oblige; et cetera), all of these propositions are accepted by almost everyone, and most people hold them with greater certainty than they would hold any belief about empirical science. “Science proves that time is relative” is surprising and counter-intuitive, but people can accept it; “Science proves that homicide is good” is not something anyone would believe for an instant. There is wider agreement and greater confidence about these basic moral truths than there is about any fact in science, even “the Earth is round” or “gravity pulls things toward each other”—for well before Newton or even Archimedes, people still knew that homicide was wrong.

Though there are surely psychopaths who disagree (basically because their brains are defective), the vast majority of people agree on these fundamental moral claims. At least 95% of humans who have ever lived share this universal moral framework, under which the wrongness of genocide is as directly apprehensible as the blueness of the sky and the painfulness of a burn. Moral realism is on as solid an epistemic footing as any fact in science.

Against deontology

Aug 6 JDN 2460163

In last week’s post I argued against average utilitarianism, basically on the grounds that it devalues the lives of anyone who isn’t of above average happiness. But you might be tempted to take these as arguments against utilitarianism in general, and that is not my intention.

In fact I believe that utilitarianism is basically correct, though it needs some particular nuances that are often lost in various presentations of it.

Its leading rival is deontology, which is really a broad class of moral theories, some a lot better than others.

What characterizes deontology as a class is that it uses rules, rather than consequences; an act is just right or wrong regardless of its consequences—or even its expected consequences.

There are certain aspects of this which are quite appealing: In fact, I do think that rules have an important role to play in ethics, and as such I am basically a rule utilitarian. Actually trying to foresee all possible consequences of every action we might take is an absurd demand far beyond the capacity of us mere mortals, and so in practice we have no choice but to develop heuristic rules that can guide us.

But deontology says that these are no mere heuristics: They are in fact the core of ethics itself. Under deontology, wrong actions are wrong even if you know for certain that their consequences will be good.

Kantian ethics is one of the most well-developed deontological theories, and I am quite sympathetic to it. In fact, I used to consider myself one of its adherents, but I now consider that view a mistaken one.

Let’s first dispense with the views of Kant himself, which are obviously wrong. Kant explicitly said that lying is always, always, always wrong, and even when presented with obvious examples where you could tell a small lie to someone obviously evil in order to save many innocent lives, he stuck to his guns and insisted that lying is always wrong.

This is a bit anachronistic, but I think this example will be more vivid for modern readers, and it absolutely is consistent with what Kant wrote about the actual scenarios he was presented with:

You are living in Germany in 1945. You have sheltered a family of Jews in your attic to keep them safe from the Holocaust. Nazi soldiers have arrived at your door, and ask you: “Are there any Jews in this house?” Do you tell the truth?

I think it’s utterly, agonizingly obvious that you should not tell the truth. Exactly what you should do is less obvious: Do you simply lie and hope they buy it? Do you devise a clever ruse? Do you try to distract them in some way? Do you send them on a wild goose chase elsewhere? If you could overpower them and kill them, should you? What if you aren’t sure you can; should you still try? But one thing is clear: You don’t hand over the Jewish family to the Nazis.

Yet when presented with similar examples, Kant insisted that lying is always wrong. He had a theory to back it up, his Categorical Imperative: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

And, so his argument goes: Since it would be obviously incoherent to say that everyone should always lie, lying is wrong, and you’re never allowed to do it. He actually bites that bullet, one the size of a Howitzer round.

Modern deontologists—even those who consider themselves Kantians—are more sophisticated than this. They realize that you could make a rule like “Never lie, except to save the life of an innocent person” or “Never lie, except to stop a great evil.” Either of these would be quite adequate to solve this particular dilemma. And it’s absolutely possible to will that these would be universal laws, in the sense that they would apply to anyone. ‘Universal’ doesn’t have to mean ‘applies equally to all possible circumstances’.

There are also a couple of things that deontology does very well, which are worth preserving. One of them is supererogation: The idea that some acts are above and beyond the call of duty, that something can be good without being obligatory.

This is something most forms of utilitarianism are notoriously bad at. They show us a spectrum of worlds from the best to the worst, and tell us to make things better. But there’s nowhere we are allowed to stop, unless we somehow manage to make it all the way to the best possible world.

I find this kind of moral demand very tempting, which often leads me to feel a tremendous burden of guilt. I always know that I could be doing more than I do. I’ve written several posts about this in the past, in the hopes of fighting off this temptation in myself and others. (I am not entirely sure how well I’ve succeeded.)

Deontology does much better in this regard: Here are some rules. Follow them.

Many of the rules are in fact very good rules that most people successfully follow their entire lives: Don’t murder. Don’t rape. Don’t commit robbery. Don’t rule a nation tyrannically. Don’t commit war crimes.

Others are oft more honored in the breach than the observance: Don’t lie. Don’t be rude. Don’t be selfish. Be brave. Be generous. But a well-developed deontology can even deal with this, by saying that some rules are more important than others, and thus some sins are more forgivable than others.

Whereas a utilitarian—at least, anything but a very sophisticated utilitarian—can only say who is better and who is worse, a deontologist can say who is good enough: who has successfully discharged their moral obligations and is otherwise free to live their life as they choose. Deontology absolves us of guilt in a way that utilitarianism is very bad at.

Another good deontological principle is double-effect: Basically this says that if you are doing something that will have bad outcomes as well as good ones, it matters whether you intend the bad one and what you do to try to mitigate it. There does seem to be a morally relevant difference between a bombing that kills civilians accidentally as part of an attack on a legitimate military target, and a so-called “strategic bombing” that directly targets civilians in order to maximize casualties—even if both occur as part of a justified war. (Both happen a lot—and it may even be the case that some of the latter were justified. The Tokyo firebombing and atomic bombs on Hiroshima and Nagasaki were very much in the latter category.)

There are ways to capture this principle (or something very much like it) in a utilitarian framework, but like supererogation, it requires a sophisticated, nuanced approach that most utilitarians don’t seem willing or able to take.

Now that I’ve said what’s good about it, let’s talk about what’s really wrong with deontology.

Above all: How do we choose the rules?

Kant seemed to think that mere logical coherence would yield a sufficiently detailed—perhaps even unique—set of rules for all rational beings in the universe to follow. This is obviously wrong, and seems to be simply a failure of his imagination. There is literally a countably infinite space of possible ethical rules that are logically consistent. (With probability 1 any given one is utter nonsense: “Never eat cheese on Thursdays”, “Armadillos should rule the world”, and so on—but these are still logically consistent.)

If you require the rules to be simple and general enough to always apply to everyone everywhere, you can narrow the space substantially; but this is also how you get obviously wrong rules like “Never lie.”

In practice, there are two ways we actually seem to do this: Tradition and consequences.

Let’s start with tradition. (It came first historically, after all.) You can absolutely make a set of rules based on whatever your culture has handed down to you since time immemorial. You can even write them down in a book that you declare to be the absolute infallible truth of the universe—and, amazingly enough, you can get millions of people to actually buy that.

The result, of course, is what we call religion. Some of its rules are good: Thou shalt not kill. Some are flawed but reasonable: Thou shalt not steal. Thou shalt not commit adultery. Some are nonsense: Thou shalt not covet thy neighbor’s goods.

And some, well… some rules of tradition are the source of many of the world’s most horrific human rights violations. Thou shalt not suffer a witch to live (Exodus 22:18). If a man also lie with mankind, as he lieth with a woman, both of them have committed an abomination: they shall surely be put to death; their blood shall be upon them (Leviticus 20:13).

Tradition-based deontology has in fact been the major obstacle to moral progress throughout history. It is not a coincidence that utilitarianism began to become popular right before the abolition of slavery, and there is an even more direct causal link between utilitarianism and the advancement of rights for women and LGBT people. When the sole argument you can make for moral rules is that they are ancient (or allegedly handed down by a perfect being), you can make rules that oppress anyone you want. But when rules have to be based on bringing happiness or preventing suffering, whole classes of oppression suddenly become untenable. “God said so” can justify anything—but “Who does it hurt?” can cut through.

It is an oversimplification, but not a terribly large one, to say that the arc of moral history has been drawn by utilitarians dragging deontologists kicking and screaming into a better future.

There is a better way to make rules, and that is based on consequences. And, in practice, most people who call themselves deontologists these days do this. They develop a system of moral rules based on what would be expected to lead to the overall best outcomes.

I like this approach. In fact, I agree with this approach. But it basically amounts to abandoning deontology and surrendering to utilitarianism.

Once you admit that the fundamental justification for all moral rules is the promotion of happiness and the prevention of suffering, you are basically a rule utilitarian. Rules then become heuristics for promoting happiness, not the fundamental source of morality itself.

I suppose it could be argued that this is not a surrender but a synthesis: We are looking for the best aspects of deontology and utilitarianism. That makes a lot of sense. But I keep coming back to the dark history of traditional rules, the fact that deontologists have basically been holding back human civilization since time immemorial. If deontology wants to be taken seriously now, it needs to prove that it has broken with that dark tradition. And frankly the easiest answer to me seems to be to just give up on deontology.

Debunking the Simulation Argument

Oct 23 JDN 2457685

Every subculture of humans has words, attitudes, and ideas that hold it together. The obvious example is religions, but the same is true of sports fandoms, towns, and even scientific disciplines. (I would estimate that 40-60% of scientific jargon, depending on discipline, is not actually useful, but simply a way of exhibiting membership in the tribe. Even physicists do this: “quantum entanglement” is useful jargon, but “p-brane” surely isn’t. Statisticians too: Why say the clear and understandable “unequal variance” when you could show off by saying “heteroskedasticity”? In certain disciplines of the humanities this figure can rise as high as 90%: “imaginary” as a noun leaps to mind.)

One particularly odd idea that seems to define certain subcultures of very intelligent and rational people is the Simulation Argument, originally (and probably best) propounded by Nick Bostrom:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.

In this original formulation by Bostrom, the argument actually makes some sense. It can be escaped, because it makes some subtle anthropic assumptions that need to be considered more carefully (in short, there could be ancestor-simulations but we could still know we aren’t in one); but it deserves to be taken seriously. Indeed, I think proposition (2) is almost certainly true, and proposition (1) might be as well; thus I have no problem accepting the disjunction.

Of course, the typical form of the argument isn’t nearly so cogent. In popular outlets as prestigious as the New York Times, Scientific American and the New Yorker, the idea is simply presented as “We are living in a simulation.” The only major outlet I could find that properly presented Bostrom’s disjunction was PBS. Indeed, there are now some Silicon Valley billionaires who believe the argument, or at least think it merits enough attention to be worth funding research into how we might escape the simulation we are in. (Frankly, even if we were inside a simulation, it’s not clear that “escaping” would be something worthwhile or even possible.)

Yet most people, when presented with this idea, think it is profoundly silly and a waste of time.

I believe this is the correct response. I am 99.9% sure we are not living in a simulation.

But it’s one thing to know that an argument is wrong, and quite another to actually show why; in that respect the Simulation Argument is a lot like the Ontological Argument for God:

However, as Bertrand Russell observed, it is much easier to be persuaded that ontological arguments are no good than it is to say exactly what is wrong with them.

To resolve this problem, I am writing this post (at the behest of my Patreons) to provide you now with a concise and persuasive argument directly against the Simulation Argument. No longer will you have to rely on your intuition that it can’t be right; you actually will have compelling logical reasons to reject it.

Note that I will not deny the core principle of cognitive science that minds are computational and therefore in principle could be simulated in such a way that the “simulations” would be actual minds. That’s usually what defenders of the Simulation Argument assume you’re denying, and perhaps in many cases it is; but that’s not what I’m denying. Yeah, sure, minds are computational (probably). There’s still no reason to think we’re living in a simulation.

To make this refutation, I should definitely address the strongest form of the argument, which is Nick Bostrom’s original disjunction. As I already noted, I believe that the disjunction is in fact true; at least one of those propositions is almost certainly correct, and perhaps two of them.

Indeed, I can tell you which one: Proposition (2). That is, I see no reason whatsoever why an advanced “posthuman” species would want to create simulated universes remotely resembling our own.


First of all, let’s assume that we do make it that far and posthumans do come into existence. I really don’t have sufficient evidence to say this is so, and the combination of millions of racists and thousands of nuclear weapons does not bode particularly well for that probability. But I think there is at least some good chance that this will happen—perhaps 10%?—so, let’s concede that point for now, and say that yes, posthumans will one day exist.

To be fair, I am not a posthuman, and cannot say for certain what beings of vastly greater intelligence and knowledge than I might choose to do. But since we are assuming that they exist as the result of our descendants more or less achieving everything we ever hoped for—peace, prosperity, immortality, vast knowledge—one thing I think I can safely extrapolate is that they will be moral. They will have a sense of ethics and morality not too dissimilar from our own. It will probably not agree in every detail—certainly not with what ordinary people believe, but very likely not with what even our greatest philosophers believe. It will most likely be better than our current best morality—closer to the objective moral truth that underlies reality.

I say this because this is the pattern that has emerged throughout the advancement of civilization thus far, and the whole reason we’re assuming posthumans might exist is that we are projecting this advancement further into the future. Humans have, on average, in the long run, become more intelligent, more rational, more compassionate. We have given up entirely on ancient moral concepts that we now recognize to be fundamentally defective, such as “witchcraft” and “heresy”; we are in the process of abandoning others for which some of us see the flaws but others don’t, such as “blasphemy” and “apostasy”. We have dramatically expanded the rights of women and various minority groups. Indeed, we have expanded our concept of which beings are morally relevant, our “circle of concern”, from only those in our tribe on outward to whole nations, whole races of people—and for some of us, as far as all humans or even all vertebrates. Therefore I expect us to continue to expand this moral circle, until it encompasses all sentient beings in the universe. Indeed, on some level I already believe that, though I know I don’t actually live in accordance with that theory—blame me if you will for my weakness of will, but can you really doubt the theory? Does it not seem likely that this is the theory to which our posthuman descendants will ultimately converge?

If that is the case, then posthumans would never make a simulation remotely resembling the universe I live in.

Maybe not me in particular, for I live relatively well—though I must ask why the migraines were really necessary. But among humans in general, there are many millions who live in conditions of such abject squalor and suffering that to create a universe containing them can only be counted as the gravest of crimes, morally akin to the Holocaust.

Indeed, creating this universe must, by construction, literally include the Holocaust. Because the Holocaust happened in this universe, you know.

So unless you think that our posthuman descendants are monsters—demons really, immortal beings of vast knowledge and power who thrive on the death and suffering of other sentient beings—you cannot think that they would create our universe. They might create a universe of some sort—but they would not create this one. You may consider this a corollary of the Problem of Evil, which has always been one of the (many) knockdown arguments against the existence of God as depicted in any major religion.

To deny this, you must twist the simulation argument quite substantially, and say that only some of us are actual people, sentient beings instantiated by the simulation, while the vast majority are, for lack of a better word, NPCs. The millions of children starving in southeast Asia and central Africa aren’t real, they’re just simulated, so that the handful of us who are real have a convincing environment for the purposes of this experiment. Even then, it seems monstrous to deceive us in this way, to make us think that millions of children are starving just to see if we’ll try to save them.

Bostrom presents it as obvious that any species of posthumans would want to create ancestor-simulations, and to make this seem plausible he compares to the many simulations we already create with our current technology, which we call “video games”. But this is such a severe equivocation on the word “simulation” that it frankly seems disingenuous (or for the pun perhaps I should say dissimulation).

This universe can’t possibly be a simulation in the sense that Halo 4 is a simulation. Indeed, this is something that I know with near-perfect certainty, for I am a sentient being (“Cogito ergo sum” and all that). There is at least one actual sentient person here—me—and based on my observations of your behavior, I know with quite high probability that there are many others as well—all of you.

Whereas, if I thought for even a moment there was even a slight probability that Halo 4 contains actual sentient beings that I am murdering, I would never play the game again; indeed I think I would smash the machine, and launch upon a global argumentative crusade to convince everyone to stop playing violent video games forevermore. If I thought that these video game characters that I explode with virtual plasma grenades were actual sentient people—or even had a non-negligible chance of being such—then what I am doing would be literally murder.

So whatever else the posthumans would be doing by creating our universe inside some vast computer, it is not “simulation” in the sense of a video game. If they are doing this for amusement, they are monsters. Even if they are doing it for some higher purpose such as scientific research, I strongly doubt that it can be justified; and I even more strongly doubt that it could be justified frequently. Perhaps once or twice in the whole history of the civilization, as a last resort to achieve some vital scientific objective when all other methods have been thoroughly exhausted. Furthermore it would have to be toward some truly cosmic objective, such as forestalling the heat death of the universe. Anything less would not justify literally replicating thousands of genocides.

But the way Bostrom generates a nontrivial probability of us living in a simulation is by assuming that each posthuman civilization will create many simulations similar to our own, so that the prior probability of being in a simulation is so high that it overwhelms the much higher likelihood that we are in the real universe. (This is a deeply Bayesian argument; of that part, I approve. In Bayesian reasoning, the likelihood is the probability that we would observe the evidence we do given that the theory is true, while the prior is the probability that the theory is true, before we’ve seen any evidence. The probability of the theory actually being true is proportional to the likelihood multiplied by the prior.) But if the Foundation IRB will only approve the construction of a Synthetic Universe in order to achieve some cosmic objective, then the prior probability of being in a simulation is something like 2/3, or 9/10; and thus it is no match whatsoever for the roughly 10^12-to-1 evidence in favor of this being actual reality.
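Written out (standard notation, mine rather than the post’s):

```latex
% Bayes' rule in the proportional form described above:
% posterior = likelihood * prior / P(evidence), i.e. the posterior is
% proportional to the likelihood times the prior.
P(\text{theory} \mid \text{evidence})
  = \frac{P(\text{evidence} \mid \text{theory})\, P(\text{theory})}{P(\text{evidence})}
  \propto P(\text{evidence} \mid \text{theory})\, P(\text{theory})
```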

Just what is this overwhelming likelihood? That brings me to my next point, which is a bit more technical, but important because it’s really where the Simulation Argument truly collapses.

How do I know we aren’t in a simulation?

The fundamental equations of the laws of nature do not have closed-form solutions.

Take a look at the Schrödinger equation, the Einstein field equations, the Navier-Stokes equations, even Maxwell’s equations (which are relatively well-behaved, all things considered). These are all partial differential equations, most of them second-order, and extremely complex to solve. They are all defined over continuous time and space, which has uncountably many points in every interval (though there are some physicists who believe that spacetime may be discrete on the order of 10^-44 seconds). Not one of them has a general closed-form solution, by which I mean a formula into which you could just plug numbers for the parameters on one side of the equation and get an answer out on the other. (x^3 + y^3 = 3 is not a closed-form solution, but y = (3 – x^3)^(1/3) is.) They have such exact solutions in certain special cases, but in general we can only solve them approximately, if at all.

This is not particularly surprising if you assume we’re in the actual universe. I have no particular reason to think that the fundamental laws underlying reality should be of a form that is exactly solvable to minds like my own, or even solvable at all in any but a trivial sense. (They must be “solvable” in the sense of actually resulting in something in particular happening at any given time, but that’s all.)

But it is extremely surprising if you assume we’re in a universe that is simulated by posthumans. If posthumans are similar to us, but… more so I guess, then when they set about to simulate a universe, they should do so in a fashion not too dissimilar from how we would do it. And how would we do it? We’d code in a bunch of laws into a computer in discrete time (and definitely not with time-steps of 10^-44 seconds either!), and those laws would have to be encoded as functions, not equations. There could be many inputs in many different forms, perhaps even involving mathematical operations we haven’t invented yet—but each configuration of inputs would have to yield precisely one output, if the computer program is to run at all.

Indeed, if they are really like us, then their computers will probably only be capable of one core operation—conditional bit flipping, 1 to 0 or 0 to 1 depending on some state—and the rest will be successive applications of that operation. Bit shifts are many bit flips, addition is many bit shifts, multiplication is many additions, exponentiation is many multiplications. We would therefore expect the fundamental equations of the simulated universe to have an extremely simple functional form, literally something that can be written out as many successive steps of “if A, flip X to 1” and “if B, flip Y to 0”. It could be a lot of such steps mind you—existing programs require billions or trillions of such operations—but one thing it could never be is a partial differential equation that cannot be solved exactly.
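As a toy illustration (entirely mine, not anything from the post): from the simulator’s side, a “law of nature” has to be a computable update function that maps each state to exactly one next state, in discrete steps, built out of nothing more exotic than conditional bit flips.

```python
# A toy "universe" whose law of nature is a pure function on a finite bit string.
from typing import Tuple

State = Tuple[int, ...]   # a finite configuration of bits

def step(state: State) -> State:
    # An arbitrary rule built only from conditional bit flips:
    # flip each bit whenever its left neighbor is 1.
    n = len(state)
    return tuple(state[i] ^ state[(i - 1) % n] for i in range(n))

# Deterministic evolution: each input configuration yields precisely one output.
s = (1, 0, 0, 1, 0, 1, 1, 0)
for _ in range(3):
    s = step(s)
    print(s)
```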

What fans of the Simulation Argument seem to forget is that while this simple set of operations is extremely general, capable of generating quite literally any possible computable function (Turing proved that), it is not capable of generating any function that isn’t computable, much less any equation that can’t be solved into a function. So unless the laws of the universe can actually be reduced to computable functions, it’s not even possible for us to be inside a computer simulation.

What is the probability that all the fundamental equations of the universe can be reduced to computable functions? Well, it’s difficult to assign a precise figure of course. I have no idea what new discoveries might be made in science or mathematics in the next thousand years (if I did, I would make a few and win the Nobel Prize). But given that we have been trying to get closed-form solutions for the fundamental equations of the universe and failing miserably since at least Isaac Newton, I think that probability is quite small.

Then there’s the fact that (again unless you believe some humans in our universe are NPCs) there are 7.3 billion minds (and counting) that you have to simulate at once, even assuming that the simulation only includes this planet and yet somehow perfectly generates an apparent cosmos that even behaves as we would expect under things like parallax and redshift. There’s the fact that whenever we try to study the fundamental laws of our universe, we are able to do so, and never run into any problems of insufficient resolution; so apparently at least this planet and its environs are being simulated at the scale of nanometers and femtoseconds. This is a ludicrously huge amount of data, and while I cannot rule out the possibility of some larger universe existing that would allow a computer large enough to contain it, you have a very steep uphill battle if you want to argue that this is somehow what our posthuman descendants will consider the best use of their time and resources. Bostrom uses the video game comparison to make it sound like they are just cranking out copies of Halo 917 (“Plasma rifles? How quaint!”) when in fact it amounts to assuming that our descendants will just casually create universes of 10^50 particles running over space intervals of 10^-9 meters and time-steps of 10^-15 seconds that contain billions of actual sentient beings and thousands of genocides, and furthermore do so in a way that somehow manages to make the apparent fundamental equations inside those universes unsolvable.

Indeed, I think it’s conservative to say that the likelihood ratio is 10^12—observing what we do is a trillion times more likely if this is the real universe than if it’s a simulation. Therefore, unless you believe that our posthuman descendants would have reason to create at least a billion simulations of universes like our own, you can assign a probability that we are in the actual universe of at least 99.9%.

As indeed I do.
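For what it’s worth, here is a sketch of the odds-form arithmetic behind that final figure, using the numbers given above (the 10^12 likelihood ratio and the billion-simulations threshold); the variable names and the odds-form presentation are mine.

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
likelihood_ratio = 1e12   # observations are ~a trillion times likelier in base reality
n_simulations    = 1e9    # simulations per real universe needed to offset that ratio

prior_odds_real     = 1 / n_simulations                     # counting prior: 1 real vs N sims
posterior_odds_real = prior_odds_real * likelihood_ratio    # = 1e3

p_real = posterior_odds_real / (1 + posterior_odds_real)
print(f"P(base reality) = {p_real:.4f}")                     # about 0.999
```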