There is no problem of free will, just a lot of really confused people

Jan 15, JDN 2457769

I was hoping for some sort of news item to use as a segue, but none in particular emerged, so I decided to go on with it anyway. I haven’t done any cognitive science posts in a while, and this is one I’ve been meaning to write for a long time—actually it’s the sort of thing that even a remarkable number of cognitive scientists frequently get wrong, perhaps because the structure of human personality makes cognitive science inherently difficult.

Do we have free will?

The question has been asked so many times by so many people it is now a whole topic in philosophy. The Stanford Encyclopedia of Philosophy has an entire article on free will. The Information Philosopher has a gateway page “The Problem of Free Will” linking to a variety of subpages. There are even YouTube videos about “the problem of free will”.

The constant arguing back and forth about this would be problematic enough, but what really grates on me are the many, many people who write “bold” articles and books about how “free will does not exist”. Examples include Sam Harris and Jerry Coyne, and such pieces have been published in everything from Psychology Today to the Chronicle of Higher Education. There’s even a TED talk.

The worst ones are those that follow with “but you should believe in it anyway”. In The Atlantic we have “Free will does not exist. But we’re better off believing in it anyway.” Scientific American offers a similar view, “Scientists say free will probably doesn’t exist, but urge: “Don’t stop believing!””

This is a mind-bogglingly stupid approach. First of all, if you want someone to believe in something, you don’t tell them it doesn’t exist. Second, if something doesn’t exist, that is generally considered a pretty compelling reason not to believe in it. You’d need a really compelling counter-argument, and frankly I’m not even sure the whole idea is logically coherent. How can I believe in something if I know it doesn’t exist? Am I supposed to delude myself somehow?

But the really sad part is that it’s totally unnecessary. There is no problem of free will. There are just an awful lot of really, really confused people. (Fortunately not everyone is confused; there are those, such as Daniel Dennett, who actually understand what’s going on.)

The most important confusion is over what you mean by the phrase “free will”. There are really two core meanings here, and the conflation of them is about 90% of the problem.

1. Moral responsibility: We have “free will” if and only if we are morally responsible for our actions.

2. Noncausality: We have “free will” if and only if our actions are not caused by the laws of nature.

Basically, every debate over “free will” boils down to someone pointing out that noncausality doesn’t exist, and then arguing that this means that moral responsibility doesn’t exist. Then someone comes back and says that moral responsibility does exist, and then infers that this means noncausality must exist. Or someone points out that noncausality doesn’t exist, and then they realize how horrible it would be if moral responsibility didn’t exist, and then tells people they should go on believing in noncausality so that they don’t have to give up moral responsibility.

Let me be absolutely clear here: Noncausality could not possibly exist.

Noncausality isn’t even a coherent concept. Actions, insofar as they are actions, must, necessarily, by definition, be caused by the laws of nature.

I can sort of imagine an event not being caused; perhaps virtual electron-positron pairs can really pop into existence without ever being caused. (Even then I’m not entirely convinced; I think quantum mechanics might actually be deterministic at the most fundamental level.)

But an action isn’t just a particle popping into existence. It requires the coordinated behavior of some 10^26 or more particles, all in a precisely organized, unified way, structured so as to move some other similarly large quantity of particles through space in a precise way so as to change the universe from one state to another state according to some system of objectives. Typically, it involves human muscles intervening on human beings or inanimate objects. (Recently it has come to mean specifically human fingers on computer keyboards a rather large segment of the time!) If what you do is an action—not a muscle spasm, not a seizure, not a slip or a trip, but something you did on purpose—then it must be caused. And if something is caused, it must be caused according to the laws of nature, because the laws of nature are the laws underlying all causality in the universe!

And once you realize that, the “problem of free will” should strike you as one of the stupidest “problems” ever proposed. Of course our actions are caused by the laws of nature! Why in the world would you think otherwise?

If you think that noncausality is necessary—or even useful—for free will, what kind of universe do you think you live in? What kind of universe could someone live in, that would fit your idea of what free will is supposed to be?

It’s like I said in that much earlier post about The Basic Fact of Cognitive Science (we are our brains): If you don’t think a mind can be made of matter, what do you think minds are made of? What sort of magical invisible fairy dust would satisfy you? If you can’t even imagine something that would satisfy the constraints you’ve imposed, did it maybe occur to you that your constraints are too strong?

Noncausality isn’t worth fretting over for the same reason that you shouldn’t fret over the fact that pi is irrational and you can’t make a square circle. There is no possible universe in which that isn’t true. So if it bothers you, it’s not that there’s something wrong with the universe—it’s clearly that there’s something wrong with you. Your thinking on the matter must be too confused, too dependent on unquestioned intuitions, if you think that murder can’t be wrong unless 2+2=5.

In philosophical jargon I am called a “compatibilist” because I maintain that free will and determinism are “compatible”. But this is much too weak a term. I much prefer Eliezer Yudkowsky’s “requiredism”, which he explains in one of the greatest blog posts of all time (seriously, read it immediately if you haven’t before—I’m okay with you cutting off my blog post here and reading his instead, because it truly is that brilliant), entitled simply “Thou Art Physics”. This quote sums it up briefly:

My position might perhaps be called “Requiredism.” When agency, choice, control, and moral responsibility are cashed out in a sensible way, they require determinism—at least some patches of determinism within the universe. If you choose, and plan, and act, and bring some future into being, in accordance with your desire, then all this requires a lawful sort of reality; you cannot do it amid utter chaos. There must be order over at least those parts of reality that are being controlled by you. You are within physics, and so you/physics have determined the future. If it were not determined by physics, it could not be determined by you.

Free will requires a certain minimum level of determinism in the universe, because the universe must be orderly enough that actions make sense and there isn’t simply an endless succession of random events. Call me a “requiredist” if you need to call me something. I’d prefer you just realize the whole debate is silly because moral responsibility exists and noncausality couldn’t possibly.

We could of course use different terms besides “free will”. “Moral responsibility” is certainly a good one, but it is missing one key piece, which is the issue of why we can assign moral responsibility to human beings and a few other entities (animals, perhaps robots) and not to the vast majority of entities (trees, rocks, planets, tables), and why we are sometimes willing to say that even a human being does not have moral responsibility (infancy, duress, impairment).

This is why my favored term is actually “rational volition”. The characteristic that human beings have (at least most of us, most of the time), which also many animals and possibly some robots share (if not now, then soon enough), which justifies our moral responsibility is precisely our capacity to reason. Things don’t just happen to us the way they do to some 99.999999999% of the universe; we do things. We experience the world through our senses, have goals we want to achieve, and act in ways that are planned to make the world move closer to achieving those goals. We have causes, sure enough; but not just any causes. We have a specific class of causes, which are related to our desires and intentions—we call these causes reasons.

So if you want to say that we don’t have “free will” because that implies some mysterious nonsensical noncausality, sure; that’s fine. But then don’t go telling us that this means we don’t have moral responsibility, or that we should somehow try to delude ourselves into believing otherwise in order to preserve moral responsibility. Just recognize that we do have rational volition.

How do I know we have rational volition? That’s the best part, really: Experiments. While you’re off in la-la land imagining fanciful universes where somehow causes aren’t really causes even though they are, I can point to not only centuries of human experience but decades of direct, controlled experiments in operant conditioning. Human beings and most other animals behave quite differently in behavioral experiments than, say, plants or coffee tables. Indeed, it is precisely because of this radical difference that it seems foolish to even speak of a “behavioral experiment” about coffee tables—because coffee tables don’t behave, they just are. Coffee tables don’t learn. They don’t decide. They don’t plan or consider or hope or seek.

Japanese, as it turns out, may be a uniquely good language for cognitive science, because it has two fundamentally different verbs for “to be” depending on whether an entity is sentient. Humans and animals imasu, while inanimate objects merely arimasu. We have free will because and insofar as we imasu.

Once you get past that most basic confusion of moral responsibility with noncausality, there are a few other confusions you might run into as well. Another one is two senses of “reductionism”, which Dennett refers to as “ordinary” and “greedy”:

1. Ordinary reductionism: All systems in the universe are ultimately made up of components that always and everywhere obey the laws of nature.

2. Greedy reductionism: All systems in the universe just are their components, and have no existence, structure, or meaning aside from those components.

I actually had trouble formulating greedy reductionism as a coherent statement, because it’s such a nonsensical notion. Does anyone really think that a pile of two-by-fours is the same thing as a house? But people do speak as though they think this about human brains, when they say that “love is just dopamine” or “happiness is just serotonin”. But dopamine in a petri dish isn’t love, any more than a pile of two-by-fours is a house; and what I really can’t quite grok is why anyone would think otherwise.

Maybe they’re simply too baffled by the fact that love is made of dopamine (among other things)? They can’t quite visualize how that would work (nor can I, nor, I think, can anyone in the world at this level of scientific knowledge). You can see how the two-by-fours get nailed together and assembled into the house, but you can’t see how dopamine and action potentials would somehow combine into love.

But isn’t that a reason to say that love isn’t the same thing as dopamine, rather than that it is? I can understand why some people are still dualists who think that consciousness is somehow separate from the functioning of the brain. That’s wrong—totally, utterly, ridiculously wrong—but I can at least appreciate the intuition that underlies it. What I can’t quite grasp is why someone would go so far the other way and say that the consciousness they are currently experiencing does not exist.

Another thing that might confuse people is the fact that minds, as far as we know, are platform independent—that is, your mind could most likely be created out of a variety of different materials, from the gelatinous brain it currently is to some sort of silicon supercomputer, to perhaps something even more exotic. This independence follows from the widely-believed Church-Turing thesis, which essentially says that all computation is computation, regardless of how it is done. This may not actually be right, but I see many reasons to think that it is, and if so, this means that minds aren’t really what they are made of at all—they could be made of lots of things. What makes a mind a mind is how it is structured and above all what it does.

If this is baffling to you, let me show you how platform-independence works on a much simpler concept: Tables. Tables are also in fact platform-independent. You can make a table out of wood, or steel, or plastic, or ice, or bone. You could take out literally every single atom of a table and replace it with a completely different atom of a completely different element—carbon for iron, for example—and still end up with a table. You could conceivably even do so without changing the table’s weight, strength, size, etc., though that would be considerably more difficult.

Does this mean that tables somehow exist “beyond” their constituent matter? In some very basic sense, I suppose so—they are, again, platform-independent. But not in any deep, mysterious sense. Start with a wooden table, take away all the wood, and you no longer have a table. Take apart the table and you have a bunch of wood, which you could use to build something else. There is no “essence” comprising the table. There is no “table soul” that would persist when the table is deconstructed.

And—now for the hard part—so it is with minds. Your mind is your brain. The constituent atoms of your brain are gradually being replaced, day by day, but your mind is the same, because it exists in the arrangement and behavior, not the atoms themselves. Yet there is nothing “extra” or “beyond” that makes up your mind. You have no “soul” that lies beyond your brain. If your brain is destroyed, your mind will also be destroyed. If your brain could be copied, your mind would also be copied. And one day it may even be possible to construct your mind in some other medium—some complex computer made of silicon and tantalum, most likely—and it would still be a mind, and in all its thoughts, feelings and behaviors your mind, if not numerically identical to you.

Thus, when we engage in rational volition—when we use our “free will” if you like that term—there is no special “extra” process beyond what’s going on in our brains, but there doesn’t have to be. Those particular configurations of action potentials and neurotransmitters are our thoughts, desires, plans, intentions, hopes, fears, goals, beliefs. These mental concepts are not in addition to the physical material; they are made of that physical material. Your soul is made of gelatin.

Again, this is not some deep mystery. There is no “paradox” here. We don’t actually know the details of how it works, but that makes this no different from a Homo erectus who doesn’t know how fire works. Maybe he thinks there needs to be some extra “fire soul” that makes it burn, but we know better; and in far fewer centuries than separate that Homo erectus from us, our descendants will know precisely how the brain creates the mind.

Until then, simply remember that any mystery here lies in us—in our ignorance—and not in the universe. And take heart that the kind of “free will” that matters—moral responsibility—has absolutely no need for the kind of “free will” that doesn’t exist—noncausality. They’re totally different things.

Why New Year’s resolutions fail

Jan 1, JDN 2457755

Last week’s post was on Christmas, so by construction this week’s post will be on New Year’s Day.

It is a tradition in many cultures, especially in the US and Europe, to start every new year with a New Year’s resolution, a promise to ourselves to change our behavior in some positive way.

Yet, over 80% of these resolutions fail. Why is this?

If we are honest, most of us would agree that there is something about our own behavior that could stand to be improved. So why do we so rarely succeed in actually making such improvements?

One possibility, which I’m guessing most neoclassical economists would favor, is to say that we don’t actually want to. We may pretend that we do in order to appease others, but ultimately our rational optimization has already chosen that we won’t actually bear the cost to make the improvement.

I think this is actually quite rare. I’ve seen too many people with resolutions they didn’t share with anyone, for example, to think that it’s all about social pressure. And I’ve seen far too many people try very hard to achieve their resolutions, day after day, and yet still fail.

Sometimes we make resolutions that are not entirely within our control, such as “get a better job” or “find a girlfriend” (last year I made a resolution to publish a work of commercial fiction or a peer-reviewed article—and alas, failed at that task, unless I somehow manage it in the next few days). Such resolutions may actually be unwise to make in the first place, as it can feel like breaking a promise to yourself when you’ve actually done all you possibly could.

So let’s set those aside and talk only about things we should be in control over, like “lose weight” or “save more money”. Even these kinds of resolutions typically fail; why? What is this “weakness of will”? How is it possible to really want something that you are in full control over, and yet still fail to accomplish it?

Well, first of all, I should be clear what I mean by “in full control over”. In some sense you’re not in full control, which is exactly the problem. Your conscious mind is not actually an absolute tyrant over your entire body; you’re more like an elected president who has to deal with a legislature in order to enact policy.

You do have a great deal of power over your own behavior, and you can learn to improve this control (much as real executive power in presidential democracies has expanded over the last century!); but there are fundamental limits to just how well you can actually consciously will your body to do anything, limits imposed by billions of years of evolution that established most of the traits of your body and nervous system millions of generations before there even was such a thing as rational conscious reasoning.

One thing that makes a surprisingly large difference lies in whether your goals are reduced to specific, actionable objectives. “Lose weight” is almost guaranteed to fail. “Lose 30 pounds” is still unlikely to succeed. “Work out for 2 hours per week,” on the other hand, might have a chance. “Save money” is never going to make it, but “move to a smaller apartment and set aside $200 per month” just might.

I think the government metaphor is helpful here; if you are President of the United States and you want something done, do you state some vague, broad goal like “Improve the economy”? No, you make a specific, actionable demand that allows you to enforce compliance, like “increase infrastructure spending by 24% over the next 5 years”. Even then it is possible to fail if you can’t push it through the legislature (in the metaphor, the “legislature” is your habits, instincts and other subconscious processes), but you’re much more likely to succeed if you have a detailed plan.

Another technique that helps is to visualize the benefits of succeeding and the costs of failing, and keep these in your mind. This counteracts the tendency for the costs of succeeding and the benefits of giving up to be more salient—losing 30 pounds sounds nice in theory, but that treadmill is so much work right now!

This salience effect has a lot to do with the fact that human beings are terrible at dealing with the future.

Rationally, we are supposed to use exponential discounting; each successive moment is supposed to be worth less to us than the previous by a fixed proportion, say 5% per year. This is actually a mathematical theorem; if you don’t discount this way, your decisions will be systematically irrational.

And yet… we don’t discount that way. Some behavioral economists argue that we use hyperbolic discounting, in which instead of discounting time by a fixed proportion, we use a different formula that drops off too quickly early on and not quickly enough later on.
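
To see the difference concretely, here is a quick sketch in Python. The 5% annual rate and the hyperbolic parameter k = 0.5 are arbitrary illustrative choices, not estimates of anyone’s actual discounting:

```python
# Exponential vs. hyperbolic discounting. The parameter values here
# (5% per year, k = 0.5) are arbitrary choices for illustration only.

def exponential_discount(t, annual_rate=0.05):
    """Weight placed on a reward t years away under exponential discounting."""
    return (1 - annual_rate) ** t

def hyperbolic_discount(t, k=0.5):
    """Weight placed on a reward t years away under hyperbolic discounting."""
    return 1 / (1 + k * t)

# With these parameters the hyperbolic curve drops off faster at first,
# but by t = 100 it is actually above the exponential curve.
for years in [0.1, 1, 5, 20, 100]:
    print(f"{years:>5} yr   exponential: {exponential_discount(years):.4f}   "
          f"hyperbolic: {hyperbolic_discount(years):.4f}")
```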

But I am increasingly convinced that human beings don’t actually use discounting at all. We have a series of rough-and-ready heuristics for making future judgments, which can sort of act like discounting, but require far less computation than actually calculating a proper discount rate. (Recent empirical evidence seems to be tilting this direction.)

In any case, whatever we do is clearly not a proper rational discount rate. And this means that our behavior can be time-inconsistent; a choice that seems rational at one time may no longer seem rational at a later time. When we’re planning out our year and saying we will hit the treadmill more, it seems like a good idea; but when we actually get to the gym and feel our legs ache as we start running, we begin to regret our decision.

The challenge, really, is determining which “version” of us is correct! A priori, we don’t actually know whether the view of our distant self contemplating the future or the view of our current self making the choice in the moment is the right one. Actually, when I frame it this way, it almost seems like the self that’s closer to the choice should have better information—and yet typically we think the exact opposite, that it is our past self making plans that really knows what’s best for us.

So where does that come from? Why do we think, at least in most cases, that the “me” which makes a plan a year in advance is the smart one, and the “me” that actually decides in the moment is untrustworthy?

Kahneman has a good explanation for this, in his model of System 1 and System 2. System 1 is simple and fast, but often gets the wrong answer. System 2 usually gets the right answer, but it is complex and slow. When we are making plans, we have a lot of time to think, and we can afford to expend the extra effort to engage the full power of System 2. But when we are living in the moment, choosing what to do right now, we don’t have that luxury of time, and we are forced to fall back on System 1. System 1 is easier—but it’s also much more likely to be wrong.

How, then, do we resolve this conflict? Commitment. (Perhaps that’s why it’s called a New Year’s resolution!)

We make promises to ourselves, commitments that we will feel bad about not following through.

If we rationally discounted, this would be a baffling thing to do; we’re just imposing costs on ourselves for no reason. But because we don’t discount rationally, commitments allow us to change the calculation for our future selves.

This brings me to one last strategy to use when making your resolutions: Include punishment.

“I will work out at least 2 hours per week, and if I don’t, I’m not allowed to watch TV all weekend.” Now that is a resolution you are actually likely to keep.

To see why, consider the decision problem for your System 2 self today versus your System 1 self throughout the year.

Your System 2 self has done the cost-benefit analysis and ruled that working out 2 hours per week is worthwhile for its health benefits.

If you left it at that, your System 1 self would each day find an excuse to procrastinate the workouts, because at least from where they’re sitting, working out for 2 hours looks a lot more painful than the marginal loss in health from missing just this one week. And of course this will keep happening, week after week—and then 52 weeks go by and you’ve had few if any workouts.

But by adding the punishment of “no TV”, you have imposed an additional cost on your System 1 self, something that they care about. Suddenly the calculation changes; it’s not just 2 hours of workout weighed against vague long-run health benefits, but 2 hours of workout weighed against no TV all weekend. That punishment is surely too much to bear; so you’d best do the workout after all.
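
Here is the same logic as a toy calculation. All the numbers are made up purely for illustration; the only thing that matters is that the punishment is salient enough to outweigh the immediate cost of the workout:

```python
# Toy illustration of how a self-imposed punishment changes the in-the-moment
# (System 1) calculation. All payoff numbers are arbitrary assumptions.

cost_of_workout_now    = 5.0   # how painful 2 hours on the treadmill feels today
health_benefit_as_felt = 1.0   # how much System 1 cares about one week's health gain
cost_of_no_tv_weekend  = 6.0   # losing TV privileges all weekend

def system1_skips_workout(punishment_for_skipping):
    """System 1 skips the workout if skipping looks cheaper than exercising."""
    cost_of_skipping = health_benefit_as_felt + punishment_for_skipping
    return cost_of_skipping < cost_of_workout_now

print(system1_skips_workout(0.0))                    # True: no commitment, the workout gets skipped
print(system1_skips_workout(cost_of_no_tv_weekend))  # False: the punishment tips the scale
```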

Do it right, and you will rarely if ever have to impose the punishment. But don’t make it too large, or then it will seem unreasonable and you won’t want to enforce it if you ever actually need to. Your System 1 self will then know this, and treat the punishment as nonexistent. (Formally the equilibrium is not subgame perfect; I am gravely concerned that our nuclear deterrence policy suffers from precisely this flaw.) “If I don’t work out, I’ll kill myself” is a recipe for depression, not healthy exercise habits.

But if you set clear, actionable objectives and sufficient but reasonable punishments, there’s at least a good chance you will actually be in the minority of people who actually succeed in keeping their New Year’s resolution.

And if not, there’s always next year.

The game theory of holidays

Dec 25, JDN 2457748

When this post goes live, it will be Christmas; so I felt I should make the topic somehow involve the subject of Christmas, or holidays in general.

I decided I would pull back for as much perspective as possible, and ask this question: Why do we have holidays in the first place?

All human cultures have holidays, but not the same ones. Cultures with a lot of mutual contact will tend to synchronize their holidays temporally, but still often preserve wildly different rituals on those same holidays. Yes, we celebrate “Christmas” in both the US and in Austria; but I think they are baffled by the Elf on the Shelf and I know that I find the Krampus bizarre and terrifying.

Most cultures from temperate climates have some sort of celebration around the winter solstice, probably because this is an ecologically important time for us. Our food production is about to get much, much lower, so we’d better make sure we have sufficient quantities stored. (In an era of globalization and processed food that lasts for months, this is less important, of course.) But they aren’t the same celebration, and they generally aren’t exactly on the solstice.

What is a holiday, anyway? We all get off work, we visit our families, and we go through a series of ritualized actions with some sort of symbolic cultural meaning. Why do we do this?

First, why not work all year round? Wouldn’t that be more efficient? Well, no, because human beings are subject to exhaustion. We need to rest at least sometimes.

Well, why not simply have each person rest whenever they need to? Well, how do we know they need to? Do we just take their word for it? People might exaggerate their need for rest in order to shirk their duties and free-ride on the work of others.

It would help if we could have pre-scheduled rest times, to remove individual discretion.

Should we have these at the same time for everyone, or at different times for each person?

Well, from the perspective of efficiency, different times for each person would probably make the most sense. We could trade off work in shifts that way, and ensure production keeps moving. So why don’t we do that?

Well, now we get to the game theory part. Do you want to be the only one who gets today off? Or do you want other people to get today off as well?

You probably want other people to be off work today as well, at least your family and friends so that you can spend time with them. In fact, this is probably more important to you than having any particular day off.

We can write this as a normal-form game. Suppose we have four days to choose from, 1 through 4, and two people, who can each decide which day to take off, or they can not take a day off at all. They each get a payoff of 1 if they take the same day off, 0 if they take different days off, and -1 if they don’t take a day off at all. This is our resulting payoff matrix:

        Day 1   Day 2   Day 3   Day 4   None
Day 1    1/1     0/0     0/0     0/0    0/-1
Day 2    0/0     1/1     0/0     0/0    0/-1
Day 3    0/0     0/0     1/1     0/0    0/-1
Day 4    0/0     0/0     0/0     1/1    0/-1
None    -1/0    -1/0    -1/0    -1/0   -1/-1

(Each cell shows the row player’s payoff / the column player’s payoff.)
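
If you want to see it computed rather than eyeballed, here is a short Python sketch that encodes the payoffs above and finds the pure-strategy Nash equilibria:

```python
from itertools import product

# Strategies: take day 1-4 off, or take no day off at all.
strategies = ["1", "2", "3", "4", "None"]

def payoff(mine, theirs):
    """1 if we coordinate on the same day off, -1 if I take no day off, 0 otherwise."""
    if mine == "None":
        return -1
    return 1 if mine == theirs else 0

def is_pure_nash(a, b):
    """Neither player can do better by unilaterally switching strategies."""
    return (payoff(a, b) == max(payoff(s, b) for s in strategies)
            and payoff(b, a) == max(payoff(s, a) for s in strategies))

equilibria = [(a, b) for a, b in product(strategies, repeat=2) if is_pure_nash(a, b)]
print(equilibria)   # [('1', '1'), ('2', '2'), ('3', '3'), ('4', '4')]: any shared day off is an equilibrium
```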


It’s pretty obvious that each person will take some day off. But which day? How do they decide that?

This is what we call a coordination game; there are many possible equilibria to choose from, and the payoffs are highest if people can somehow coordinate their behavior.

If they can actually coordinate directly, it’s simple; one person should just suggest a day, and since the other one is indifferent, they have no reason not to agree to that day. From that point forward, they have coordinated on an equilibrium (a Nash equilibrium, in point of fact).

But suppose they can’t talk to each other, or suppose there aren’t two people to coordinate but dozens, or hundreds—or even thousands, once you include all the interlocking social networks. How could they find a way to coordinate on the same day?

They need something more intuitive, some “obvious” choice that they can call upon that they hope everyone else will as well. Even if they can’t communicate, as long as they can observe whether their coordination has succeeded or failed they can try to set these “obvious” choices by successive trial and error.

The result is what we call a Schelling point; players converge on this equilibrium not because there’s actually anything better about it, but because it seems obvious and they expect everyone else to think it will also seem obvious.
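
Here is a toy simulation of that trial-and-error process. The adjustment rule (keep the day once you’ve matched, otherwise pick again at random) is just one simple assumption about how people might grope toward a convention:

```python
import random

# Two players repeatedly pick a day off. They cannot talk, but after each round
# they can see whether they coordinated. Assumed rule: once they match, they
# stick with that day; until then, each re-picks at random.

days = ["1", "2", "3", "4"]

def rounds_until_convention(max_rounds=1000, seed=None):
    rng = random.Random(seed)
    for round_number in range(1, max_rounds + 1):
        choice_a, choice_b = rng.choice(days), rng.choice(days)
        if choice_a == choice_b:
            return round_number, choice_a   # this day is now "the" holiday
    return None, None

print(rounds_until_convention())   # (rounds needed, agreed day); usually only a handful of rounds
```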

This is what I think is happening with holidays. Yes, we make up stories to justify them, or sometimes even have genuine reasons for them (Independence Day actually makes sense being on July 4, for instance), but the ultimate reason why we have a holiday on one day rather than another is that we had to have it some time, and this was a way of breaking the deadlock and finally setting a date.

In fact, weekends are probably a more optimal solution to this coordination problem than holidays, because human beings need rest on a fairly regular basis, not just every few months. Holiday seasons now serve more as an opportunity to have long vacations that allow travel, rather than as a rest between work days. But even those we had to originally justify as a matter of religion: Jews would not work on Saturday, Christians would not work on Sunday, so together we will not work on Saturday or Sunday. The logic here is hardly impeccable (why not make it religion-specific, for example?), but it was enough to give us a Schelling point.

This makes me wonder about what it would take to create a new holiday. How could we actually get people to celebrate Darwin Day or Sagan Day on a large scale, for example? Darwin and Sagan are both a lot more worth celebrating than most of the people who get holidays—Columbus especially leaps to mind. But even among those of us who really love Darwin and Sagan, these are sort of half-hearted celebrations that never attain the same status as Easter, much less Thanksgiving or Christmas.

I’d also like to secularize—or at least ecumenicalize—the winter solstice celebration. Christianity shouldn’t have a monopoly on what is really something like a human universal, or at least a “humans who live in temperate climates” universal. It really isn’t Christmas anyway; most of what we do is celebrating Yule, compounded by a modern expression in mass consumption that is thoroughly borne of modern capitalism. We have no reason to think Jesus was actually born in December, much less on the 25th. But that’s around the time when lots of other celebrations were going on anyway, and it’s much easier to convince people that they should change the name of their holiday than that they should stop celebrating it and start celebrating something else—I think precisely because that still preserves the Schelling point.

Creating holidays has obviously been done before—indeed it is literally the only way holidays ever come into existence. But part of their structure seems to be that the more transparent the reasons for choosing that date and those rituals, the more empty and insincere the holiday seems. Once you admit that this is an arbitrary choice meant to converge on an equilibrium, it stops seeming like a good choice anymore.

Now, if we could find dates and rituals that really had good reasons behind them, we could probably escape that; but I’m not entirely sure we can. We can use Darwin’s birthday—but why not the first edition publication of On the Origin of Species? And Darwin himself really is that important, but why Sagan Day and not Einstein Day or Niels Bohr Day… and so on? The winter solstice itself is a very powerful choice; its deep astronomical and ecological significance might actually make it a strong enough attractor to defeat all contenders. But what do we do on the winter solstice celebration? What rituals best capture the feelings we are trying to express, and how do we defend those rituals against criticism and competition?

In the long run, I think what usually happens is that people just sort of start doing something, and eventually enough people are doing it that it becomes a tradition. Maybe it always feels awkward and insincere at first. Maybe you have to be prepared for it to change into something radically different as the decades roll on.

This year the winter solstice is on December 21st. I think I’ll be lighting a candle and gazing into the night sky, reflecting on our place in the universe. Unless you’re reading this on Patreon, by the time this goes live, you’ll have missed it; but you can try later, or maybe next year.

In fifty years all the cool kids will be doing it, I’m sure.

Experimentally testing categorical prospect theory

Dec 4, JDN 2457727

In last week’s post I presented a new theory of probability judgments, which doesn’t rely upon people performing complicated math even subconsciously. Instead, I hypothesize that people try to assign categories to their subjective probabilities, and throw away all the information that wasn’t used to assign that category.

The way to most clearly distinguish this from cumulative prospect theory is to show discontinuity. Kahneman’s smooth, continuous function places fairly strong bounds on just how much a shift from 0% to 0.000001% can really affect your behavior. In particular, if you want to explain the fact that people do seem to behave differently around 10% compared to 1% probabilities, you can’t allow the slope of the smooth function to get much higher than 10 at any point, even near 0 and 1. (It does depend on the precise form of the function, but the more complicated you make it, the more free parameters you add to the model. In the most parsimonious form, which is a cubic polynomial, the maximum slope is actually much smaller than this—only 2.)

If that’s the case, then switching from 0% to 0.0001% should have no more effect in reality than a switch from 0% to 0.00001% would to a rational expected utility optimizer. But in fact I think I can set up scenarios where it would have a larger effect than a switch from 0.001% to 0.01%.

Indeed, these games are already quite profitable for the majority of US states, and they are called lotteries.

Rationally, it should make very little difference to you whether your odds of winning the Powerball are 0 (you bought no ticket) or 0.0000001% (you bought a ticket), even when the prize is $100 million. This is because your utility of $100 million is nowhere near 100 million times as large as your marginal utility of $1. A good guess would be that your lifetime income is about $2 million, your utility is logarithmic, the units of utility are hectoQALY, and the baseline level is about $100,000.

I apologize for the extremely large number of decimals, but I had to do that in order to show any difference at all. I have bolded where the decimals first deviate from the baseline.

Your utility if you don’t have a ticket is ln(20) = 2.9957322736 hQALY.

Your utility if you have a ticket is (1-10^-9) ln(20) + 10^-9 ln(1020) = 2.9957322775 hQALY.

You gain a whopping 0.4 microQALY over your whole lifetime. I highly doubt you could even perceive such a difference.

And yet, people are willing to pay nontrivial sums for the chance to play such lotteries. Powerball tickets sell for about $2 each, and some people buy tickets every week. If you do that and live to be 80, you will spend some $8,000 on lottery tickets during your lifetime, which results in this expected utility: (1-4*10^-6) ln(20-0.08) + 4*10^-6 ln(1020) = 2.9917399955 hQALY.

You have now sacrificed 0.004 hectoQALY, which is to say 0.4 QALY—that’s months of happiness you’ve given up to play this stupid pointless game.

Which shouldn’t be surprising, as (with 99.9996% probability) you have given up four months of your lifetime income with nothing to show for it. Lifetime income of $2 million / lifespan of 80 years = $25,000 per year; $8,000 / $25,000 = 0.32. You’ve actually sacrificed slightly more than this, which comes from your risk aversion.
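
If you want to check the arithmetic, here it is in Python, using the same assumptions as above (log utility measured in hectoQALY, a $100,000 baseline, $2 million lifetime income, a one-in-a-billion chance per ticket, about $8,000 of tickets over a lifetime):

```python
import math

# Reproduces the back-of-the-envelope numbers above. All parameters are the
# ones assumed in the text; none of them are empirical estimates.

BASELINE = 100_000             # dollars per unit inside the logarithm
LIFETIME_INCOME = 2_000_000    # dollars

def utility(lifetime_dollars):
    """Log utility, measured in hectoQALY (hQALY)."""
    return math.log(lifetime_dollars / BASELINE)

jackpot = 100_000_000
p_one_ticket = 1e-9            # chance of winning with a single ticket
p_lifetime = 4e-6              # roughly 4,000 tickets at 1e-9 each
ticket_spending = 8_000

no_ticket = utility(LIFETIME_INCOME)
one_ticket = (1 - p_one_ticket) * utility(LIFETIME_INCOME) + p_one_ticket * utility(LIFETIME_INCOME + jackpot)
weekly_habit = ((1 - p_lifetime) * utility(LIFETIME_INCOME - ticket_spending)
                + p_lifetime * utility(LIFETIME_INCOME + jackpot))

print(f"No ticket:     {no_ticket:.10f} hQALY")     # 2.9957322736
print(f"One ticket:    {one_ticket:.10f} hQALY")    # 2.9957322775
print(f"Weekly habit:  {weekly_habit:.10f} hQALY")  # 2.9917399955
```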

Why would anyone do such a thing? Because while the difference between 0 and 10^-9 may be trivial, the difference between “impossible” and “almost impossible” feels enormous. “You can’t win if you don’t play!” they say, but they might as well say “You can’t win if you do play either.” Indeed, the probability of winning without playing isn’t zero; you could find a winning ticket lying on the ground, or win due to an error that is then upheld in court, or be given the winnings bequeathed by a dying family member or gifted by an anonymous donor. These are of course vanishingly unlikely—but so was winning in the first place. You’re talking about the difference between 10^-9 and 10^-12, which in proportional terms sounds like a lot—but in absolute terms is nothing. If you drive to a drug store every week to buy a ticket, you are more likely to die in a car accident on the way to the drug store than you are to win the lottery.

Of course, these are not experimental conditions. So I need to devise a similar game, with smaller stakes but still large enough for people’s brains to care about the “almost impossible” category; maybe thousands? It’s not uncommon for an economics experiment to cost thousands, it’s just usually paid out to many people instead of randomly to one person or nobody. Conducting the experiment in an underdeveloped country like India would also effectively amplify the amounts paid, but at the fixed cost of transporting the research team to India.

But I think in general terms the experiment could look something like this. You are given $20 for participating in the experiment (we treat it as already given to you, to maximize your loss aversion and endowment effect and thereby give us more bang for our buck). You then have a chance to play a game, where you pay $X to get a P probability of $Y*X, and we vary these numbers.

The actual participants wouldn’t see the variables, just the numbers and possibly the rules: “You can pay $2 for a 1% chance of winning $200. You can also play multiple times if you wish.” “You can pay $10 for a 5% chance of winning $250. You can only play once or not at all.”
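
Generating the condition cells themselves is trivial; something like the sketch below, where the particular values of X, P, and Y are placeholders rather than a designed stimulus set:

```python
from itertools import product

# Placeholder parameter grid for the proposed design: pay $X for a P chance of $Y*X.
# These particular values are illustrative, not the actual stimuli.

prices        = [2, 10]                       # $X
probabilities = [0.0001, 0.001, 0.01, 0.05]   # P, chosen to straddle suspected category edges
multipliers   = [20, 100]                     # Y, so the prize is $Y*X

for x, p, y in product(prices, probabilities, multipliers):
    print(f"Pay ${x} for a {p:.4%} chance of winning ${x * y}.")
```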

So I think the first step is to find some dilemmas, cases where people feel ambivalent, and different people differ in their choices. That’s a good role for a pilot study.

Then we take these dilemmas and start varying their probabilities slightly.

In particular, we try to vary them at the edge of where people have mental categories. If subjective probability is continuous, a slight change in actual probability should never result in a large change in behavior, and furthermore the effect of a change shouldn’t vary too much depending on where the change starts.

But if subjective probability is categorical, these categories should have edges. Then, when I present you with two dilemmas that are on opposite sides of one of the edges, your behavior should radically shift; while if I change it in a different way, I can make a large change without changing the result.

Based solely on my own intuition, I guessed that the categories roughly follow this pattern:

Impossible: 0%

Almost impossible: 0.1%

Very unlikely: 1%

Unlikely: 10%

Fairly unlikely: 20%

Roughly even odds: 50%

Fairly likely: 80%

Likely: 90%

Very likely: 99%

Almost certain: 99.9%

Certain: 100%

So for example, if I switch from 0% to 0.01%, it should have a very large effect, because I’ve moved you out of your “impossible” category (indeed, I think the “impossible” category is almost completely sharp; literally anything above zero seems to be enough for most people, even 10^-9 or 10^-10). But if I move from 1% to 2%, it should have a small effect, because I’m still well within the “very unlikely” category. Yet the latter change is literally one hundred times larger than the former. It is possible to define continuous functions that would behave this way to an arbitrary level of approximation—but they get a lot less parsimonious very fast.
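
Here is that prediction in code, using my guessed anchors from the list above. How probabilities get snapped onto categories is itself an assumption (I’ve made “impossible” perfectly sharp and snapped everything else to the nearest anchor), which is exactly the sort of thing the experiment would have to pin down:

```python
# My guessed category anchors; the snapping rule below is an assumption.
anchors = [0.001, 0.01, 0.10, 0.20, 0.50, 0.80, 0.90, 0.99, 0.999]
labels  = ["almost impossible", "very unlikely", "unlikely", "fairly unlikely",
           "roughly even odds", "fairly likely", "likely", "very likely", "almost certain"]

def category(p):
    """Snap a probability onto a category label."""
    if p <= 0.0:
        return "impossible"   # treated as perfectly sharp: anything above zero escapes it
    if p >= 1.0:
        return "certain"
    nearest = min(range(len(anchors)), key=lambda i: abs(anchors[i] - p))
    return labels[nearest]

def predicts_behavior_shift(p_before, p_after):
    """The categorical theory predicts a behavior shift only when the category changes."""
    return category(p_before) != category(p_after)

print(predicts_behavior_shift(0.00, 0.0001))  # True:  "impossible" -> "almost impossible" (big effect)
print(predicts_behavior_shift(0.01, 0.02))    # False: both "very unlikely" (little effect)
```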

Now, immediately I run into a problem, because I’m not even sure those are my categories, much less that they are everyone else’s. If I knew precisely which categories to look for, I could tell whether or not I had found them. But the process of both finding the categories and determining if their edges are truly sharp is much more complicated, and requires a lot more statistical degrees of freedom to get beyond the noise.

One thing I’m considering is assigning these values as a prior, and then conducting a series of experiments which would adjust that prior. In effect I would be using optimal Bayesian probability reasoning to show that human beings do not use optimal Bayesian probability reasoning. Still, I think that actually pinning down the categories would require a large number of participants or a long series of experiments (in frequentist statistics this distinction is vital; in Bayesian statistics it is basically irrelevant—one of the simplest reasons to be Bayesian is that it no longer bothers you whether someone did 2 experiments of 100 people or 1 experiment of 200 people, provided they were the same experiment of course). And of course there’s always the possibility that my theory is totally off-base, and I find nothing; a dissertation replicating cumulative prospect theory is a lot less exciting (and, sadly, less publishable) than one refuting it.

Still, I think something like this is worth exploring. I highly doubt that people are doing very much math when they make most probabilistic judgments, and using categories would provide a very good way for people to make judgments usefully with no math at all.

How do people think about probability?

Nov 27, JDN 2457690

(This topic was chosen by vote of my Patreons.)

In neoclassical theory, it is assumed (explicitly or implicitly) that human beings judge probability in something like the optimal Bayesian way: We assign prior probabilities to events, and then, when confronted with evidence, we use the observed data to update our prior probabilities into posterior probabilities. Then, when we have to make decisions, we maximize our expected utility subject to our posterior probabilities.
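
For concreteness, here is what that benchmark looks like in the simplest possible case: a discrete prior over two hypotheses, updated on a single piece of evidence. The numbers are arbitrary:

```python
# Minimal "optimal Bayesian" benchmark: prior -> evidence -> posterior.
# The prior and likelihood numbers are arbitrary illustrations.

prior = {"rain": 0.3, "no rain": 0.7}
likelihood_dark_clouds = {"rain": 0.9, "no rain": 0.2}   # P(dark clouds | hypothesis)

unnormalized = {h: prior[h] * likelihood_dark_clouds[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: weight / total for h, weight in unnormalized.items()}

print(posterior)   # {'rain': ~0.66, 'no rain': ~0.34}
```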

This, of course, is nothing like how human beings actually think. Even very intelligent, rational, numerate people only engage in a vague approximation of this behavior, and only when dealing with major decisions likely to affect the course of their lives. (Yes, I literally decide which universities to attend based upon formal expected utility models. Thus far, I’ve never been dissatisfied with a decision made that way.) No one decides what to eat for lunch or what to do this weekend based on formal expected utility models—or at least I hope they don’t, because at that point the computational cost far exceeds the expected benefit.

So how do human beings actually think about probability? Well, a good place to start is to look at ways in which we systematically deviate from expected utility theory.

A classic example is the Allais paradox. See if it applies to you.

In game A, you get $1 million, guaranteed.

In game B, you have a 10% chance of getting $5 million, an 89% chance of getting $1 million, but now you have a 1% chance of getting nothing.

Which do you prefer, game A or game B?

In game C, you have an 11% chance of getting $1 million, and an 89% chance of getting nothing.

In game D, you have a 10% chance of getting $5 million, and a 90% chance of getting nothing.

Which do you prefer, game C or game D?

I have to think about it for a little bit and do some calculations, and it’s still very hard because it depends crucially on my projected lifetime income (which could easily exceed $3 million with a PhD, especially in economics) and the precise form of my marginal utility (I think I have constant relative risk aversion, but I’m not sure what parameter to use precisely), but in general I think I want to choose game A and game C, but I actually feel really ambivalent, because it’s not hard to find plausible parameters for my utility where I should go for the gamble.

But if you’re like most people, you choose game A and game D.

There is no coherent expected utility by which you would do this.

Why? Either a 10% chance of $5 million instead of $1 million is worth risking a 1% chance of nothing, or it isn’t. If it is, you should play B and D. If it’s not, you should play A and C. I can’t tell you for sure whether it is worth it—I can’t even fully decide for myself—but it either is or it isn’t.
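
In fact the two comparisons are algebraically identical: for any utilities you assign to $0, $1 million, and $5 million, EU(A) - EU(B) = EU(C) - EU(D) = 0.11 u($1M) - 0.10 u($5M) - 0.01 u($0). So no expected utility maximizer, whatever their risk aversion, can strictly prefer both A and D. A quick sketch with arbitrary random utilities, just to make the identity visible:

```python
import random

# For any utilities u0, u1, u5 assigned to $0, $1M, $5M:
#   EU(A) - EU(B) = 0.11*u1 - 0.10*u5 - 0.01*u0 = EU(C) - EU(D)
# so preferring A over B while preferring D over C cannot maximize expected utility.
# The random draws below are arbitrary; they just illustrate the identity.

def eu_gaps(u0, u1, u5):
    eu_a = u1
    eu_b = 0.10 * u5 + 0.89 * u1 + 0.01 * u0
    eu_c = 0.11 * u1 + 0.89 * u0
    eu_d = 0.10 * u5 + 0.90 * u0
    return eu_a - eu_b, eu_c - eu_d

rng = random.Random(42)
for _ in range(5):
    u0, u1, u5 = sorted(rng.uniform(0, 10) for _ in range(3))
    gap_ab, gap_cd = eu_gaps(u0, u1, u5)
    print(f"EU(A)-EU(B) = {gap_ab:+.6f}   EU(C)-EU(D) = {gap_cd:+.6f}")   # identical every time
```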

Yet most people have a strong intuition that they should take game A but game D. Why? What does this say about how we judge probability?

The leading theory in behavioral economics right now is cumulative prospect theory, developed by the great Kahneman and Tversky, who essentially founded the field of behavioral economics. It’s quite intimidating to try to go up against them—which is probably why we should force ourselves to do it. Fear of challenging the favorite theories of the great scientists before us is how science stagnates.

I wrote about it more in a previous post, but as a brief review, cumulative prospect theory says that instead of judging based on a well-defined utility function, we instead consider gains and losses as fundamentally different sorts of thing, and in three specific ways:

First, we are loss-averse; we feel a loss about twice as intensely as a gain of the same amount.

Second, we are risk-averse for gains, but risk-seeking for losses; we assume that gaining twice as much isn’t actually twice as good (which is almost certainly true), but we also assume that losing twice as much isn’t actually twice as bad (which is almost certainly false and indeed contradictory with the previous).

Third, we judge probabilities as more important when they are close to certainty. We make a large distinction between a 0% probability and a 0.0000001% probability, but almost no distinction at all between a 41% probability and a 43% probability.

That last part is what I want to focus on for today. In Kahneman’s model, this is a continuous, monotonic function that maps 0 to 0 and 1 to 1, but systematically overestimates probabilities below but near 1/2 and systematically underestimates probabilities above but near 1/2.
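
One common way to write such a weighting function down is the one-parameter form from Tversky and Kahneman’s 1992 paper; the curve in Kahneman’s model need not be exactly this, but it shows the general shape:

```python
# One standard parameterization of a probability-weighting function
# (Tversky & Kahneman, 1992). Shown only to illustrate the shape; the post
# does not commit to this exact formula or to gamma = 0.61.

def weight(p, gamma=0.61):
    """Subjective decision weight assigned to true probability p."""
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Tiny probabilities get weighted up enormously in relative terms, large ones
# get weighted down, and the difference between 41% and 43% barely registers.
for p in [0.0, 0.000000001, 0.01, 0.10, 0.41, 0.43, 0.90, 0.99, 1.0]:
    print(f"true p = {p:.9f}   subjective weight = {weight(p):.6f}")
```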

It looks something like this, where red is true probability and blue is subjective probability:

[Figure: cumulative_prospect (probability weighting curve)]

I don’t believe this is actually how humans think, for two reasons:

  1. It’s too hard. Humans are astonishingly innumerate creatures, given the enormous processing power of our brains. It’s true that we have some intuitive capacity for “solving” very complex equations, but that’s almost all within our motor system—we can “solve a differential equation” when we catch a ball, but we have no idea how we’re doing it. But probability judgments are often made consciously, especially in experiments like the Allais paradox; and the conscious brain is terrible at math. It’s actually really amazing how bad we are at math. Any model of normal human judgment should assume from the start that we will not do complicated math at any point in the process. Maybe you can hypothesize that we do so subconsciously, but you’d better have a good reason for assuming that.
  2. There is no reason to do this. Why in the world would any kind of optimization system function this way? You start with perfectly good probabilities, and then instead of using them, you subject them to some bizarre, unmotivated transformation that makes them less accurate and costs computing power? You may as well hit yourself in the head with a brick.

So, why might it look like we are doing this? Well, my proposal, admittedly still rather half-baked, is that human beings don’t assign probabilities numerically at all; we assign them categorically.

You may call this, for lack of a better term, categorical prospect theory.

My theory is that people don’t actually have in their head “there is an 11% chance of rain today” (unless they specifically heard that from a weather report this morning); they have in their head “it’s fairly unlikely that it will rain today”.

That is, we assign some small number of discrete categories of probability, and fit things into them. I’m not sure what exactly the categories are, and part of what makes my job difficult here is that they may be fuzzy-edged and vary from person to person, but roughly speaking, I think they correspond to the sort of things psychologists usually put on Likert scales in surveys: Impossible, almost impossible, very unlikely, unlikely, fairly unlikely, roughly even odds, fairly likely, likely, very likely, almost certain, certain. If I’m putting numbers on these probability categories, they go something like this: 0, 0.001, 0.01, 0.10, 0.20, 0.50, 0.8, 0.9, 0.99, 0.999, 1.

Notice that this would preserve the same basic effect as cumulative prospect theory: You care a lot more about differences in probability when they are near 0 or 1, because those are much more likely to actually shift your category. Indeed, as written, you wouldn’t care about a shift from 0.4 to 0.6 at all, despite caring a great deal about a shift from 0.001 to 0.01.

How does this solve the above problems?

  1. It’s easy. Not only do you not compute a probability and then recompute it for no reason; you never even have to compute it precisely. Just get it within some vague error bounds and that will tell you what box it goes in. Instead of computing an approximation to a continuous function, you just slot things into a small number of discrete boxes, a dozen at the most.
  2. That explains why we would do it: It’s easy. Our brains need to conserve their capacity, and they did especially in our ancestral environment when we struggled to survive. Rather than having to iterate your approximation to arbitrary precision, you just get within 0.1 or so and call it a day. That saves time and computing power, which saves energy, which could save your life.

What new problems have I introduced?

  1. It’s very hard to know exactly where people’s categories are, if they vary between individuals or even between situations, and whether they are fuzzy-edged.
  2. If you take the model I just gave literally, even quite large probability changes will have absolutely no effect as long as they remain within a category such as “roughly even odds”.

With regard to 2, I think Kahneman may himself be able to save me, with his dual process theory concept of System 1 and System 2. What I’m really asserting is that System 1, the fast, intuitive judgment system, operates on these categories. System 2, on the other hand, the careful, rational thought system, can actually make use of proper numerical probabilities; it’s just very costly to boot up System 2 in the first place, much less ensure that it actually gets the right answer.

How might we test this? Well, I think that people are more likely to use System 1 when any of the following are true:

  1. They are under harsh time-pressure
  2. The decision isn’t very important
  3. The intuitive judgment is fast and obvious

And conversely they are likely to use System 2 when the following are true:

  1. They have plenty of time to think
  2. The decision is very important
  3. The intuitive judgment is difficult or unclear

So, it should be possible to arrange an experiment varying these parameters, such that in one treatment people almost always use System 1, and in another they almost always use System 2. And then, my prediction is that in the System 1 treatment, people will in fact not change their behavior at all when you change the probability from 15% to 25% (fairly unlikely) or 40% to 60% (roughly even odds).

To be clear, you can’t just present people with this choice between game E and game F:

Game E: You get a 60% chance of $50, and a 40% chance of nothing.

Game F: You get a 40% chance of $50, and a 60% chance of nothing.

People will obviously choose game E. If you can directly compare the numbers and one game is strictly better in every way, I think even without much effort people will be able to choose correctly.

Instead, what I’m saying is that if you make the following offers to two completely different sets of people, you will observe little difference in their choices, even though under expected utility theory you should.

Group I receives a choice between game E and game G:

Game E: You get a 60% chance of $50, and a 40% chance of nothing.

Game G: You get a 100% chance of $20.

Group II receives a choice between game F and game G:

Game F: You get a 40% chance of $50, and a 60% chance of nothing.

Game G: You get a 100% chance of $20.

Under two very plausible assumptions about marginal utility of wealth, I can fix what the rational judgment should be in each game.

The first assumption is that marginal utility of wealth is decreasing, so people are risk-averse (at least for gains, which these are). The second assumption is that most people’s lifetime income is at least two orders of magnitude higher than $50.

By the first assumption, group II should choose game G. The expected income is precisely the same, and being even ever so slightly risk-averse should make you go for the guaranteed $20.

By the second assumption, group I should choose game E. Yes, there is some risk, but because $50 should not be a huge sum to you, your risk aversion should be small and the higher expected income of $30 should sway you.
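
You can check this directly. Here is a sketch using log utility; the $100,000 background wealth is an assumption, and the rankings come out the same for any smoothly concave utility once background wealth is large relative to $50:

```python
import math

# Rational benchmark for the two treatments, under the two assumptions above:
# concave utility (log, here) and background wealth much larger than $50.
# The $100,000 background-wealth figure is an assumption for illustration.

WEALTH = 100_000

def u(prize):
    return math.log(WEALTH + prize)

eu_E = 0.6 * u(50) + 0.4 * u(0)   # Game E: 60% chance of $50
eu_F = 0.4 * u(50) + 0.6 * u(0)   # Game F: 40% chance of $50
eu_G = u(20)                      # Game G: $20 for sure

print(f"Group I:  EU(E) = {eu_E:.9f} vs EU(G) = {eu_G:.9f} -> choose {'E' if eu_E > eu_G else 'G'}")
print(f"Group II: EU(F) = {eu_F:.9f} vs EU(G) = {eu_G:.9f} -> choose {'F' if eu_F > eu_G else 'G'}")
# Group I should take the gamble; Group II should take the sure $20 (by a razor-thin margin).
```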

But I predict that most people will choose game G in both cases, and (within statistical error) the same proportion will choose F as chose E—thus showing that the difference between a 40% chance and a 60% chance was in fact negligible to their intuitive judgments.

However, this doesn’t actually disprove Kahneman’s theory; perhaps that part of the subjective probability function is just that flat. For that, I need to set up an experiment where I show discontinuity. I need to find the edge of a category and get people to switch categories sharply. Next week I’ll talk about how we might pull that off.

Bigotry is more powerful than the market

Nov 20, JDN 2457683

If there’s one message we can take from the election of Donald Trump, it is that bigotry remains a powerful force in our society. A lot of autoflagellating liberals have been trying to explain how this election result really reflects our failure to help people displaced by technology and globalization (despite the fact that personal income and local unemployment had negligible correlation with voting for Trump), or Hillary Clinton’s “bad campaign” that nonetheless managed the same proportion of Democrat turnout that re-elected her husband in 1996.

No, overwhelmingly, the strongest predictor of voting for Trump was being White, and living in an area where most people are White. (Well, actually, that’s if you exclude authoritarianism as an explanatory variable—but really I think that’s part of what we’re trying to explain.) Trump voters were actually concentrated in areas less affected by immigration and globalization. Indeed, there is evidence that these people aren’t racist because they have anxiety about the economy—they are anxious about the economy because they are racist. How does that work? Obama. They can’t believe that the economy is doing well when a Black man is in charge. So all the statistics and even personal experiences mean nothing to them. They know in their hearts that unemployment is rising, even as the BLS data clearly shows it’s falling.

The wide prevalence and enormous power of bigotry should be obvious. But economists rarely talk about it, and I think I know why: Their models say it shouldn’t exist. The free market is supposed to automatically eliminate all forms of bigotry, because they are inefficient.

The argument for why this is supposed to happen actually makes a great deal of sense: If a company has the choice of hiring a White man or a Black woman to do the same job, and it knows that the market wage for Black women is lower than the market wage for White men (which it most certainly is) while both candidates will do the same quality and quantity of work, why wouldn’t it hire the Black woman? And indeed, if human beings were rational profit-maximizers, this is probably how they would think.

More recently some neoclassical models have been developed to try to “explain” this behavior, but always without daring to give up the precious assumption of perfect rationality. So instead we get the two leading neoclassical theories of discrimination, which are statistical discrimination and taste-based discrimination.

Statistical discrimination is the idea that under asymmetric information (and we surely have that), features such as race and gender can act as signals of quality because they are correlated with actual quality for various reasons (usually left unspecified), so it is not irrational after all to choose based upon them, since they’re the best you have.

Taste-based discrimination is the idea that people are rationally maximizing preferences that simply aren’t oriented toward maximizing profit or well-being. Instead, they have this extra term in their utility function that says they should also treat White men better than women or Black people. It’s just this extra thing they have.

A small number of studies have been done trying to discern which of these is at work.

The correct answer, of course, is neither.

Statistical discrimination, at least, could be part of what’s going on. Knowing that Black people are less likely to be highly educated than Asians (as they definitely are) might actually be useful information in some circumstances… then again, you list your degree on your resume, don’t you? Knowing that women are more likely to drop out of the workforce after having a child could rationally (if coldly) affect your assessment of future productivity. But shouldn’t the fact that women CEOs outperform men CEOs be incentivizing shareholders to elect women CEOs? Yet that doesn’t seem to happen. Also, in general, people seem to be pretty bad at statistics.

The bigger problem with statistical discrimination as a theory is that it’s really only part of a theory. It explains why not all of the discrimination has to be irrational, but some of it still does. You need to explain why there are these huge disparities between groups in the first place, and statistical discrimination is unable to do that. In order for the statistics to differ this much, you need a past history of discrimination that wasn’t purely statistical.

Taste-based discrimination, on the other hand, is not a theory at all. It’s special pleading. Rather than admit that people are failing to rationally maximize their utility, we just redefine their utility so that whatever they happen to be doing now “maximizes” it.

This is really what makes the Axiom of Revealed Preference so insidious; if you really take it seriously, it says that whatever you do, must by definition be what you preferred. You can’t possibly be irrational, you can’t possibly be making mistakes of judgment, because by definition whatever you did must be what you wanted. Maybe you enjoy bashing your head into a wall, who am I to judge?

I mean, on some level taste-based discrimination is what’s happening; people think that the world is a better place if they put women and Black people in their place. So in that sense, they are trying to “maximize” some “utility function”. (By the way, most human beings behave in ways that are provably inconsistent with maximizing any well-defined utility function—the Allais Paradox is a classic example.) But the whole framework of calling it “taste-based” is a way of running away from the real explanation. If it’s just “taste”, well, it’s an unexplainable brute fact of the universe, and we just need to accept it. If people are happier being racist, what can you do, eh?
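Since the Allais Paradox is doing real work in that parenthetical, here is a small sketch of why the usual choices are inconsistent with any expected-utility maximizer; the payoffs are the classic ones from Allais, and the brute-force search is just one simple way of showing that no utility assignment works.

```python
# A small sketch of the Allais Paradox with the classic payoffs (in millions):
#   Choice 1:  A = $1M for sure       vs  B = 89% $1M, 10% $5M, 1% $0
#   Choice 2:  C = 11% $1M, 89% $0    vs  D = 10% $5M, 90% $0
# Most people pick A and D. Normalizing u($0) = 0 and u($5M) = 1, the search below
# finds no value of u($1M) that makes both choices expected-utility maximizing,
# which is the sense in which the behavior is "provably inconsistent".

found = False
steps = 100_000
for i in range(1, steps):
    u1 = i / steps  # candidate utility of $1M, strictly between u($0)=0 and u($5M)=1
    prefers_A_over_B = u1 > 0.89 * u1 + 0.10 * 1.0 + 0.01 * 0.0
    prefers_D_over_C = 0.10 * 1.0 + 0.90 * 0.0 > 0.11 * u1 + 0.89 * 0.0
    if prefers_A_over_B and prefers_D_over_C:
        found = True
        break

print("Utility function consistent with the modal choices found:", found)  # False
```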

So I think it’s high time to start calling it what it is. This is not a question of taste. This is a question of tribal instinct. This is the product of millions of years of evolution optimizing the human brain to act in the perceived interest of whatever it defines as its “tribe”. It could be yourself, your family, your village, your town, your religion, your nation, your race, your gender, or even the whole of humanity or beyond into all sentient beings. But whatever it is, the fundamental tribe is the one thing you care most about. It is what you would sacrifice anything else for.

And what we learned on November 9 this year is that an awful lot of Americans define their tribe in very narrow terms. Nationalistic and xenophobic at best, racist and misogynistic at worst.

But I suppose this really isn’t so surprising, if you look at the history of our nation and the world. Segregation was not outlawed in US schools until 1954, and there are women who voted in this election who were born before American women got the right to vote in 1920. The nationalistic backlash against sending jobs to China (which was one of the chief ways that we reduced global poverty to its lowest level ever, by the way) really shouldn’t seem so strange when we remember that over 100,000 Japanese-Americans were literally forcibly relocated into camps as recently as 1942. The fact that so many White Americans seem all right with the biases against Black people in our justice system may not seem so strange when we recall that systemic lynching of Black people in the US didn’t end until the 1960s.

The wonder, in fact, is that we have made as much progress as we have. Tribal instinct is not a strange aberration of human behavior; it is our evolutionary default setting.

Indeed, perhaps it is unreasonable of me to ask humanity to change its ways so fast! We had millions of years to learn how to live the wrong way, and I’m giving you only a few centuries to learn the right way?

The problem, of course, is that the pace of technological change leaves us with no choice. It might be better if we could wait a thousand years for people to gradually adjust to globalization and become cosmopolitan; but climate change won’t wait a hundred, and nuclear weapons won’t wait at all. We are thrust into a world that is changing very fast indeed, and I understand that it is hard to keep up; but there is no way to turn back that tide of change.

Yet “turn back the tide” does seem to be part of the core message of the Trump voter, once you get past the racial slurs and sexist slogans. People are afraid of what the world is becoming. They feel that it is leaving them behind. Coal miners fret that we are leaving them behind by cutting coal consumption. Factory workers fear that we are leaving them behind by moving the factory to China or inventing robots to do the work in half the time for half the price.

And truth be told, they are not wrong about this. We are leaving them behind. Because we have to. Because coal is polluting our air and destroying our climate, we must stop using it. Moving the factories to China has raised them out of the most dire poverty, and given us a fighting chance toward ending world hunger. Inventing the robots is only the next logical step in the process that has carried humanity forward from the squalor and suffering of primitive life to the security and prosperity of modern society—and it is a step we must take, for the progress of civilization is not yet complete.

They wouldn’t have to let themselves be left behind, if they were willing to accept our help and learn to adapt. That carbon tax that closes your coal mine could also pay for your basic income and your job-matching program. The increased efficiency from the automated factories could provide an abundance of wealth that we could redistribute and share with you.

But this would require them to rethink their view of the world. They would have to accept that climate change is a real threat, and not a hoax created by… uh… never was clear on that point actually… the Chinese maybe? But 45% of Trump supporters don’t believe in climate change (and that’s actually not as bad as I’d have thought). They would have to accept that what they call “socialism” (which really is more precisely described as social democracy, or tax-and-transfer redistribution of wealth) is actually something they themselves need, and will need even more in the future. But despite rising inequality, redistribution of wealth remains fairly unpopular in the US, especially among Republicans.

Above all, it would require them to redefine their tribe, and start listening to—and valuing the lives of—people that they currently do not.

Perhaps we need to redefine our tribe as well; many liberals have argued that we mistakenly—and dangerously—did not include people like Trump voters in our tribe. But to be honest, that rings a little hollow to me: We aren’t the ones threatening to deport people or ban them from entering our borders. We aren’t the ones who want to build a wall (though some have in fact joked about building a wall to separate the West Coast from the rest of the country, I don’t think many people really want to do that). Perhaps we live in a bubble of liberal media? But I make a point of reading outlets like The American Conservative and The National Review for other perspectives (I usually disagree, but I do at least read them); how many Trump voters do you think have ever read the New York Times, let alone Huffington Post? Cosmopolitans almost by definition have the more inclusive tribe, the more open perspective on the world (in fact, do I even need the “almost”?).

Nor do I think we are actually ignoring their interests. We want to help them. We offer to help them. In fact, I want to give these people free money—that’s what a basic income would do, it would take money from people like me and give it to people like them—and they won’t let us, because that’s “socialism”! Rather, we are simply refusing to accept their offered solutions, because those so-called “solutions” are beyond unworkable; they are absurd, immoral and insane. We can’t bring back the coal mining jobs, unless we want Florida underwater in 50 years. We can’t reinstate the trade tariffs, unless we want millions of people in China to starve. We can’t tear down all the robots and force factories to use manual labor, unless we want to trigger a national—and then global—economic collapse. We can’t do it their way. So we’re trying to offer them another way, a better way, and they’re refusing to take it. So who here is ignoring the concerns of whom?

Of course, the fact that it’s really their fault doesn’t solve the problem. We do need to take it upon ourselves to do whatever we can, because, regardless of whose fault it is, the world will still suffer if we fail. And that presents us with our most difficult task of all, a task that I fully expect to spend a career trying to do and yet still probably failing: We must understand the human tribal instinct well enough that we can finally begin to change it. We must know enough about how human beings form their mental tribes that we can actually begin to shift those parameters. We must, in other words, cure bigotry—and we must do it now, for we are running out of time.

Belief in belief, and why it’s important

Oct 30, JDN 2457692

In my previous post on ridiculous beliefs, I passed briefly over this sentence:

“People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.”

Today I’d like to talk about the fact that “to profess” is a very important phrase in that sentence. Part of understanding ridiculous beliefs, I think, is understanding that many, if not most, of them are not actually proper beliefs. They are what Daniel Dennett calls “belief in belief”, and what has elsewhere been referred to as “anomalous belief”. They are not beliefs in the ordinary sense that we would line up with the other beliefs in our worldview and use them to anticipate experiences and motivate actions. They are something else, lone islands of belief that are not woven into our worldview. But all the same they are invested with importance, often moral or even ultimate importance; this one belief may not make any sense with everything else, but you must believe it, because it is a vital part of your identity and your tribe. To abandon it would not simply be mistaken; it would be heresy, it would be treason.

How do I know this? Mainly because nobody has tried to stone me to death lately.

The Bible is quite explicit about at least a dozen reasons I am supposed to be executed forthwith; you likely share many of them: Heresy, apostasy, blasphemy, nonbelief, sodomy, fornication, covetousness, taking God’s name in vain, eating shellfish (though I don’t anymore!), wearing mixed fiber, shaving, working on the Sabbath, making images of things, and my personal favorite, not stoning other people for committing such crimes (as we call it in game theory, a second-order punishment).

Yet I have met many people who profess to be “Bible-believing Christians”, and even may oppose some of these activities (chiefly sodomy, blasphemy, and nonbelief) on the grounds that they are against what the Bible says—and yet not one has tried to arrange my execution, nor have I ever seriously feared that they might.

Is this because we live in a secular society? Well, yes—but not simply that. It isn’t just that these people are afraid of being punished by our secular government should they murder me for my sins; they believe that it is morally wrong to murder me, and would rarely even consider the option. Someone could point them to the passage in Leviticus (20:16, as it turns out) that explicitly says I should be executed, and it would not change their behavior toward me.

On first glance this is quite baffling. If I thought you were about to drink a glass of water that contained cyanide, I would stop you, by force if necessary. So if they truly believe that I am going to be sent to Hell—infinitely worse than cyanide—then shouldn’t they be willing to use any means necessary to stop that from happening? And wouldn’t this be all the more true if they believe that they themselves will go to Hell should they fail to punish me?

If these “Bible-believing Christians” truly believed in Hell the way that I believe in cyanide—that is, as proper beliefs which anticipate experience and motivate action—then they would in fact try to force my conversion or execute me, and in doing so would believe that they are doing right. This used to be quite common in many Christian societies (most infamously in the Salem Witch Trials), and still is disturbingly common in many Muslim societies—ISIS doesn’t just throw gay men off rooftops and stone them as a weird idiosyncrasy; it is written in the Hadith that they’re supposed to. Nor is this sort of thing confined to terrorist groups; the “legitimate” government of Saudi Arabia routinely beheads atheists or imprisons homosexuals (though has a very capricious enforcement system, likely so that the monarchy can trump up charges to justify executing whomever they choose). Beheading people because the book said so is what your behavior would look like if you honestly believed, as a proper belief, that the Qur’an or the Bible or whatever holy book actually contained the ultimate truth of the universe. The great irony of calling religion people’s “deeply-held belief” is that it is in almost all circumstances the exact opposite—it is their most weakly held belief, the one that they could most easily sacrifice without changing their behavior.

Yet perhaps we can’t even say that to people, because they will get equally defensive and insist that they really do hold this very important anomalous belief, and how dare you accuse them otherwise. Because one of the beliefs they really do hold, as a proper belief, and a rather deeply-held one, is that you must always profess to believe your religion and defend your belief in it, and if anyone catches you not believing it that’s a horrible, horrible thing. So even though it’s obvious to everyone—probably even to you—that your behavior looks nothing like what it would if you actually believed in this book, you must say that you do, scream that you do if necessary, for no one must ever, ever find out that it is not a proper belief.

Another common trick is to try to convince people that their beliefs do affect their behavior, even when they plainly don’t. We typically use the words “religious” and “moral” almost interchangeably, when they are at best orthogonal and arguably even opposed. Part of why so many people seem to hold so rigidly to their belief-in-belief is that they think that morality cannot be justified without recourse to religion; so even though on some level they know religion doesn’t make sense, they are afraid to admit it, because they think that means admitting that morality doesn’t make sense. If you are even tempted by this inference, I present to you the entire history of ethical philosophy. Divine Command theory has been a minority view among philosophers for centuries.

Indeed, it is precisely because your moral beliefs are not based on your religion that you feel a need to resort to that defense of your religion. If you simply believed religion as a proper belief, you would base your moral beliefs on your religion, sure enough; but you’d also defend your religion in a fundamentally different way, not as something you’re supposed to believe, not as a belief that makes you a good person, but as something that is just actually true. (And indeed, many fanatics actually do defend their beliefs in those terms.) No one ever uses the argument that if we stop believing in chairs we’ll all become murderers, because chairs are actually there. We don’t believe in belief in chairs; we believe in chairs.

And really, if such a belief were completely isolated, it would not be a problem; it would just be this weird thing you say you believe that everyone really knows you don’t and it doesn’t affect how you behave, but okay, whatever. The problem is that it’s never quite isolated from your proper beliefs; it does affect some things—and in particular it can offer a kind of “support” for other real, proper beliefs that you do have, support which is now immune to rational criticism.

For example, as I already mentioned: Most of these “Bible-believing Christians” do, in fact, morally oppose homosexuality, and say that their reason for doing so is based on the Bible. This cannot literally be true, because if they actually believed the Bible they wouldn’t want gay marriage taken off the books, they’d want a mass pogrom of 4-10% of the population (depending how you count), on a par with the Holocaust. Fortunately their proper belief that genocide is wrong is overriding. But they have no such overriding belief supporting the moral permissibility of homosexuality or the personal liberty of marriage rights, so the very tenuous link to their belief-in-belief in the Bible is sufficient to tilt their actual behavior.

Similarly, if the people I meet who say they think maybe 9/11 was an inside job by our government really believed that, they would most likely be trying to organize a violent revolution; any government willing to murder 3,000 of its own citizens in a false flag operation is one that must be overturned and can probably only be overturned by force. At the very least, they would flee the country. If they lived in a country where the government is actually like that, like Zimbabwe or North Korea, they wouldn’t fear being dismissed as conspiracy theorists, they’d fear being captured and executed. The very fact that you live within the United States and exercise your free speech rights here says pretty strongly that you don’t actually believe our government is that evil. But they wouldn’t be so outspoken about their conspiracy theories if they didn’t at least believe in believing them.

I also have to wonder how many of our politicians who lean on the Constitution as their source of authority have actually read the Constitution, as it says a number of rather explicit things against, oh, say, the establishment of religion (First Amendment) or searches and arrests without warrants (Fourth Amendment) that they don’t much seem to care about. Some are better about this than others; Rand Paul, for instance, actually takes the Constitution pretty seriously (and is frequently found arguing against things like warrantless searches as a result!), but Ted Cruz for example says he has spent decades “defending the Constitution”, despite saying things like “America is a Christian nation” that directly violate the First Amendment. Cruz doesn’t really seem to believe in the Constitution; but maybe he believes in believing the Constitution. (It’s also quite possible he’s just lying to manipulate voters.)

 

Debunking the Simulation Argument

Oct 23, JDN 2457685

Every subculture of humans has words, attitudes, and ideas that hold it together. The obvious example is religions, but the same is true of sports fandoms, towns, and even scientific disciplines. (I would estimate that 40-60% of scientific jargon, depending on discipline, is not actually useful, but simply a way of exhibiting membership in the tribe. Even physicists do this: “quantum entanglement” is useful jargon, but “p-brane” surely isn’t. Statisticians too: Why say the clear and understandable “unequal variance” when you could show off by saying “heteroskedasticity”? In certain disciplines of the humanities this figure can rise as high as 90%: “imaginary” as a noun leaps to mind.)

One particularly odd idea that seems to define certain subcultures of very intelligent and rational people is the Simulation Argument, originally (and probably best) propounded by Nick Bostrom:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.

In this original formulation by Bostrom, the argument actually makes some sense. It can be escaped, because it makes some subtle anthropic assumptions that need to be considered more carefully (in short, there could be ancestor-simulations but we could still know we aren’t in one); but it deserves to be taken seriously. Indeed, I think proposition (2) is almost certainly true, and proposition (1) might be as well; thus I have no problem accepting the disjunction.

Of course, the typical form of the argument isn’t nearly so cogent. In popular outlets as prestigious as the New York Times, Scientific American and the New Yorker, the idea is simply presented as “We are living in a simulation.” The only major outlet I could find that properly presented Bostrom’s disjunction was PBS. Indeed, there are now some Silicon Valley billionaires who believe the argument, or at least think it merits enough attention to be worth funding research into how we might escape the simulation we are in. (Frankly, even if we were inside a simulation, it’s not clear that “escaping” would be something worthwhile or even possible.)

Yet most people, when presented with this idea, think it is profoundly silly and a waste of time.

I believe this is the correct response. I am 99.9% sure we are not living in a simulation.

But it’s one thing to know that an argument is wrong, and quite another to actually show why; in that respect the Simulation Argument is a lot like the Ontological Argument for God:

However, as Bertrand Russell observed, it is much easier to be persuaded that ontological arguments are no good than it is to say exactly what is wrong with them.

To resolve this problem, I am writing this post (at the behest of my Patreons) to provide you now with a concise and persuasive argument directly against the Simulation Argument. No longer will you have to rely on your intuition that it can’t be right; you actually will have compelling logical reasons to reject it.

Note that I will not deny the core principle of cognitive science that minds are computational and therefore in principle could be simulated in such a way that the “simulations” would be actual minds. That’s usually what defenders of the Simulation Argument assume you’re denying, and perhaps in many cases it is; but that’s not what I’m denying. Yeah, sure, minds are computational (probably). There’s still no reason to think we’re living in a simulation.

To make this refutation, I should definitely address the strongest form of the argument, which is Nick Bostrom’s original disjunction. As I already noted, I believe that the disjunction is in fact true; at least one of those propositions is almost certainly correct, and perhaps two of them.

Indeed, I can tell you which one: Proposition (2). That is, I see no reason whatsoever why an advanced “posthuman” species would want to create simulated universes remotely resembling our own.


First of all, let’s assume that we do make it that far and posthumans do come into existence. I really don’t have sufficient evidence to say this is so, and the combination of millions of racists and thousands of nuclear weapons does not bode particularly well for that probability. But I think there is at least some good chance that this will happen—perhaps 10%?—so, let’s concede that point for now, and say that yes, posthumans will one day exist.

To be fair, I am not a posthuman, and cannot say for certain what beings of vastly greater intelligence and knowledge than I might choose to do. But since we are assuming that they exist as the result of our descendants more or less achieving everything we ever hoped for—peace, prosperity, immortality, vast knowledge—one thing I think I can safely extrapolate is that they will be moral. They will have a sense of ethics and morality not too dissimilar from our own. It will probably not agree in every detail—certainly not with what ordinary people believe, but very likely not with what even our greatest philosophers believe. It will most likely be better than our current best morality—closer to the objective moral truth that underlies reality.

I say this because this is the pattern that has emerged throughout the advancement of civilization thus far, and the whole reason we’re assuming posthumans might exist is that we are projecting this advancement further into the future. Humans have, on average, in the long run, become more intelligent, more rational, more compassionate. We have given up entirely on ancient moral concepts that we now recognize to be fundamentally defective, such as “witchcraft” and “heresy”; we are in the process of abandoning others for which some of us see the flaws but others don’t, such as “blasphemy” and “apostasy”. We have dramatically expanded the rights of women and various minority groups. Indeed, we have expanded our concept of which beings are morally relevant, our “circle of concern”, from only those in our tribe on outward to whole nations, whole races of people—and for some of us, as far as all humans or even all vertebrates. Therefore I expect us to continue to expand this moral circle, until it encompasses all sentient beings in the universe. Indeed, on some level I already believe that, though I know I don’t actually live in accordance with that theory—blame me if you will for my weakness of will, but can you really doubt the theory? Does it not seem likely that this is the theory to which our posthuman descendants will ultimately converge?

If that is the case, then posthumans would never make a simulation remotely resembling the universe I live in.

Maybe not me in particular, for I live relatively well—though I must ask why the migraines were really necessary. But among humans in general, there are many millions who live in conditions of such abject squalor and suffering that to create a universe containing them can only be counted as the gravest of crimes, morally akin to the Holocaust.

Indeed, creating this universe must, by construction, literally include the Holocaust. Because the Holocaust happened in this universe, you know.

So unless you think that our posthuman descendants are monsters, demons really, immortal beings of vast knowledge and power who thrive on the death and suffering of other sentient beings, you cannot think that they would create our universe. They might create a universe of some sort—but they would not create this one. You may consider this a corollary of the Problem of Evil, which has always been one of the (many) knockdown arguments against the existence of God as depicted in any major religion.

To deny this, you must twist the simulation argument quite substantially, and say that only some of us are actual people, sentient beings instantiated by the simulation, while the vast majority are, for lack of a better word, NPCs. The millions of children starving in southeast Asia and central Africa aren’t real, they’re just simulated, so that the handful of us who are real have a convincing environment for the purposes of this experiment. Even then, it seems monstrous to deceive us in this way, to make us think that millions of children are starving just to see if we’ll try to save them.

Bostrom presents it as obvious that any species of posthumans would want to create ancestor-simulations, and to make this seem plausible he compares to the many simulations we already create with our current technology, which we call “video games”. But this is such a severe equivocation on the word “simulation” that it frankly seems disingenuous (or for the pun perhaps I should say dissimulation).

This universe can’t possibly be a simulation in the sense that Halo 4 is a simulation. Indeed, this is something that I know with near-perfect certainty, for I am a sentient being (“Cogito ergo sum” and all that). There is at least one actual sentient person here—me—and based on my observations of your behavior, I know with quite high probability that there are many others as well—all of you.

Whereas, if I thought for even a moment there was even a slight probability that Halo 4 contains actual sentient beings that I am murdering, I would never play the game again; indeed I think I would smash the machine, and launch upon a global argumentative crusade to convince everyone to stop playing violent video games forevermore. If I thought that these video game characters that I explode with virtual plasma grenades were actual sentient people—or even had a non-negligible chance of being such—then what I am doing would be literally murder.

So whatever else the posthumans would be doing by creating our universe inside some vast computer, it is not “simulation” in the sense of a video game. If they are doing this for amusement, they are monsters. Even if they are doing it for some higher purpose such as scientific research, I strongly doubt that it can be justified; and I even more strongly doubt that it could be justified frequently. Perhaps once or twice in the whole history of the civilization, as a last resort to achieve some vital scientific objective when all other methods have been thoroughly exhausted. Furthermore it would have to be toward some truly cosmic objective, such as forestalling the heat death of the universe. Anything less would not justify literally replicating thousands of genocides.

But the way Bostrom generates a nontrivial probability of us living in a simulation is by assuming that each posthuman civilization will create many simulations similar to our own, so that the prior probability of being in a simulation is so high that it overwhelms the much higher likelihood that we are in the real universe. (This is a deeply Bayesian argument; of that part, I approve. In Bayesian reasoning, the likelihood is the probability that we would observe the evidence we do given that the theory is true, while the prior is the probability that the theory is true, before we’ve seen any evidence. The probability of the theory actually being true is proportional to the likelihood multiplied by the prior.) But if the Foundation IRB will only approve the construction of a Synthetic Universe in order to achieve some cosmic objective, then the prior probability of being in a simulation is something like 2/3, or 9/10; and thus it is no match whatsoever for the roughly 10^12-to-1 evidence in favor of this being actual reality.
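As a sanity check on that last claim, here is the Bayesian bookkeeping in a few lines; the prior of 2/3 (or 9/10) and the 10^12 likelihood ratio are the rough illustrative figures from the paragraph above, not measurements.

```python
# A minimal sketch of the Bayesian update described above. The prior P(simulation)
# and the 10^12 likelihood ratio favoring "real" are the post's rough figures.

def posterior_real(prior_sim, likelihood_ratio_real):
    """P(real | evidence), given P(simulation) and P(evidence | real) / P(evidence | sim)."""
    prior_odds_real = (1.0 - prior_sim) / prior_sim
    posterior_odds_real = prior_odds_real * likelihood_ratio_real
    return posterior_odds_real / (1.0 + posterior_odds_real)

print(posterior_real(prior_sim=2/3, likelihood_ratio_real=1e12))
print(posterior_real(prior_sim=9/10, likelihood_ratio_real=1e12))
# Both come out within about 10^-11 of 1: that prior is no match for that evidence.
```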

Just what is this so compelling likelihood? That brings me to my next point, which is a bit more technical, but important because it’s really where the Simulation Argument truly collapses.

How do I know we aren’t in a simulation?

The fundamental equations of the laws of nature do not have closed-form solutions.

Take a look at the Schrodinger Equation, the Einstein field equations, the Navier-Stokes Equations, even Maxwell’s Equations (which are relatively well-behaved, all things considered). These are all systems of partial differential equations, extremely complex to solve. They are all defined over continuous time and space, which has uncountably many points in every interval (though there are some physicists who believe that spacetime may be discrete on the order of 10^-44 seconds). Not one of them has a general closed-form solution, by which I mean a formula into which you could just plug numbers for the parameters on one side of the equation and get the answer out the other. (x^3 + y^3 = 3 is not a closed-form solution, but y = (3 – x^3)^(1/3) is.) They have such exact solutions in certain special cases, but in general we can only solve them approximately, if at all.
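For what “closed-form” means in practice, here is a toy illustration built on the example equation above: in the lucky case you have a formula to plug into, and otherwise you are stuck with numerical approximation, which is roughly our situation with the real fundamental equations.

```python
# A toy illustration of the closed-form distinction, using the example equation
# x^3 + y^3 = 3 from the text. The closed form is a formula you can plug x into;
# without one, you have to fall back on numerical root-finding (bisection here).

def y_closed_form(x):
    return (3.0 - x ** 3) ** (1.0 / 3.0)  # valid for x^3 < 3

def y_numerical(x, lo=0.0, hi=2.0, tol=1e-12):
    """Solve x^3 + y^3 = 3 for y by bisection, pretending no formula were available."""
    f = lambda y: x ** 3 + y ** 3 - 3.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(y_closed_form(1.0), y_numerical(1.0))  # both about 1.2599, but only one is a formula
```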

This is not particularly surprising if you assume we’re in the actual universe. I have no particular reason to think that the fundamental laws underlying reality should be of a form that is exactly solvable to minds like my own, or even solvable at all in any but a trivial sense. (They must be “solvable” in the sense of actually resulting in something in particular happening at any given time, but that’s all.)

But it is extremely surprising if you assume we’re in a universe that is simulated by posthumans. If posthumans are similar to us, but… more so I guess, then when they set about to simulate a universe, they should do so in a fashion not too dissimilar from how we would do it. And how would we do it? We’d code in a bunch of laws into a computer in discrete time (and definitely not with time-steps of 10^-44 seconds either!), and those laws would have to be encoded as functions, not equations. There could be many inputs in many different forms, perhaps even involving mathematical operations we haven’t invented yet—but each configuration of inputs would have to yield precisely one output, if the computer program is to run at all.

Indeed, if they are really like us, then their computers will probably only be capable of one core operation—conditional bit flipping, 1 to 0 or 0 to 1 depending on some state—and the rest will be successive applications of that operation. Bit shifts are many bit flips, addition is many bit shifts, multiplication is many additions, exponentiation is many multiplications. We would therefore expect the fundamental equations of the simulated universe to have an extremely simple functional form, literally something that can be written out as many successive steps of “if A, flip X to 1” and “if B, flip Y to 0”. It could be a lot of such steps mind you—existing programs require billions or trillions of such operations—but one thing it could never be is a partial differential equation that cannot be solved exactly.
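Here is a rough sketch of that layering in ordinary Python rather than anything a posthuman would actually use: addition built out of nothing but bitwise operations, and multiplication built out of that addition.

```python
# A rough sketch of the layering described above: addition from bitwise AND/XOR
# and shifts, then multiplication from repeated shifted addition. This is an
# ordinary ripple-carry illustration, not a claim about actual posthuman hardware.

def add(a: int, b: int) -> int:
    """Add two non-negative integers using only bit operations."""
    while b != 0:
        carry = (a & b) << 1  # positions where both bits are 1 must carry
        a = a ^ b             # bitwise sum, ignoring carries
        b = carry
    return a

def multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers as repeated shifted addition."""
    result = 0
    while b != 0:
        if b & 1:
            result = add(result, a)
        a <<= 1
        b >>= 1
    return result

print(add(19, 23), multiply(6, 7))  # 42 42
```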

What fans of the Simulation Argument seem to forget is that while this simple set of operations is extremely general, capable of generating quite literally any possible computable function (Turing proved that), it is not capable of generating any function that isn’t computable, much less any equation that can’t be solved into a function. So unless the laws of the universe can actually be reduced to computable functions, it’s not even possible for us to be inside a computer simulation.

What is the probability that all the fundamental equations of the universe can be reduced to computable functions? Well, it’s difficult to assign a precise figure of course. I have no idea what new discoveries might be made in science or mathematics in the next thousand years (if I did, I would make a few and win the Nobel Prize). But given that we have been trying to get closed-form solutions for the fundamental equations of the universe and failing miserably since at least Isaac Newton, I think that probability is quite small.

Then there’s the fact that (again unless you believe some humans in our universe are NPCs) there are 7.3 billion minds (and counting) that you have to simulate at once, even assuming that the simulation only includes this planet and yet somehow perfectly generates an apparent cosmos that even behaves as we would expect under things like parallax and redshift. There’s the fact that whenever we try to study the fundamental laws of our universe, we are able to do so, and never run into any problems of insufficient resolution; so apparently at least this planet and its environs are being simulated at the scale of nanometers and femtoseconds. This is a ludicrously huge amount of data, and while I cannot rule out the possibility of some larger universe existing that would allow a computer large enough to contain it, you have a very steep uphill battle if you want to argue that this is somehow what our posthuman descendants will consider the best use of their time and resources. Bostrom uses the video game comparison to make it sound like they are just cranking out copies of Halo 917 (“Plasma rifles? How quaint!”) when in fact it amounts to assuming that our descendants will just casually create universes of 10^50 particles running over space intervals of 10^-9 meters and time-steps of 10^-15 seconds that contain billions of actual sentient beings and thousands of genocides, and furthermore do so in a way that somehow manages to make the apparent fundamental equations inside those universes unsolvable.

Indeed, I think it’s conservative to say that the likelihood ratio is 10^12—observing what we do is a trillion times more likely if this is the real universe than if it’s a simulation. Therefore, unless you believe that our posthuman descendants would have reason to create at least a billion simulations of universes like our own, you can assign a probability that we are in the actual universe of at least 99.9%.

As indeed I do.
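To make that closing arithmetic explicit, here is the same Bayes calculation run the other way, asking how much the number of simulations per real universe actually matters once you grant the (admittedly rough) 10^12 likelihood ratio.

```python
# The arithmetic behind the "at least 99.9%" figure, given the post's rough
# 10^12 likelihood ratio favoring the real universe.

def p_real(sims_per_real, likelihood_ratio_real=1e12):
    """P(real), with prior odds of 1 real universe to sims_per_real simulations."""
    posterior_odds_real = likelihood_ratio_real / sims_per_real
    return posterior_odds_real / (1.0 + posterior_odds_real)

print(p_real(1e9))   # ~0.999: even a billion simulations leaves ~99.9% odds of being real
print(p_real(1e12))  # ~0.5:   it takes about a trillion simulations just to reach even odds
```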

How do we reach people with ridiculous beliefs?

Oct 16, JDN 2457678

One of the most unfortunate facts in the world—indeed, perhaps the most unfortunate fact, from which most other unfortunate facts follow—is that it is quite possible for a human brain to sincerely and deeply hold a belief that is, by any objective measure, totally and utterly ridiculous.

And to be clear, I don’t just mean false; I mean ridiculous. People having false beliefs is an inherent part of being finite beings in a vast and incomprehensible universe. Monetarists are wrong, but they are not ludicrous. String theorists are wrong, but they are not absurd. Multiregionalism is wrong, but it is not nonsensical. Indeed, I, like anyone else, am probably wrong about a great many things, though of course if I knew which ones I’d change my mind. (Indeed, I admit a small but nontrivial probability of being wrong about the three things I just listed.)

I mean ridiculous beliefs. I mean that any rational, objective assessment would put the probability of that belief being true at something vanishingly small, 1 in 1 million at best. I’m talking about totally nonsensical beliefs, beliefs that go against overwhelming evidence; some of them are outright incoherent. Yet millions of people go on believing them.

For example, over 40% of Americans believe that human beings were created by God in their present form less than 10,000 years ago, and typically offer no evidence for this besides “The Bible says so.” (Strictly speaking, even that isn’t true—standard interpretations of the Bible say so. The Bible itself contains no clearly stated date for creation.) This despite the absolutely overwhelming body of evidence supporting the theory of evolution by Darwinian natural selection.

Over a third of Americans don’t believe in global warming, which is not only the subject of a complete consensus among credible climate scientists based on overwhelming evidence, but also one of the central threats facing human civilization over the 21st century. On a global scale this is rather like standing on a train track and saying you don’t believe in trains. (Or like the time my mother once told me about, when an alert went out to her office that there was a sniper in the area, indiscriminately shooting at civilians, and one of her co-workers refused to join the security protocol and declared smugly, “I don’t believe in snipers.” Fortunately, he was unharmed in the incident. This time.)

1/4 of Americans believe in astrology, and 1/4 of Americans believe that aliens have visited the Earth. (Not sure if it’s the same 1/4. Probably considerable but not total overlap.) The existence of extraterrestrial civilizations somewhere in this mind-bogglingly (perhaps infinitely) vast universe has probability 1. But visiting us is quite another matter, and there is absolutely no credible evidence of it. As for astrology? I shouldn’t have to explain why the position of Jupiter, much less Sirius, on your birthday is not a major influence on your behavior or life outcomes. Your obstetrician exerted a far stronger tidal force on you than Jupiter did at the moment you were born.

The majority of Americans believe in telepathy or extrasensory perception. I confess that I actually did when I was very young, though I think I disabused myself of this around the time I stopped believing in Santa Claus.

I love the term “extrasensory perception” because it is such an oxymoron; if you’re perceiving, it is via senses. “Sixth sense” is better, except that we actually already have at least nine senses: The ones you probably know, vision (sight), audition (hearing), olfaction (smell), gustation (taste), and tactition (touch)—and the ones you may not know, thermoception (heat), proprioception (body position), vestibulation (balance), and nociception (pain). These can probably be subdivided further—vision and spatial reasoning are dissociated in blind people, heat and cold are separate nerve pathways, pain and itching are distinct systems, and there are a variety of different sensors used for proprioception. So we really could have as many as twenty senses, depending on how you’re counting.

What about telepathy? Well, that is not actually impossible in principle; it’s just that there’s no evidence that humans actually do it. Smartphones do it almost literally constantly, transmitting data via high-frequency radio waves back and forth to one another. We could have evolved some sort of radio transceiver organ (perhaps an offshoot of an electric defense organ such as that of electric eels), but as it turns out we didn’t. Actually in some sense—which some might say is trivial, but I think it’s actually quite deep—we do have telepathy; it’s just that we transmit our thoughts not via radio waves or anything more exotic, but via sound waves (speech) and marks on paper (writing) and electronic images (what you’re reading right now). Human beings really do transmit our thoughts to one another, and this truly is a marvelous thing we should not simply take for granted (it is one of our most impressive feats of Mundane Magic); but somehow I don’t think that’s what people mean when they say they believe in psychic telepathy.

And lest you think this is a uniquely American phenomenon: The particular beliefs vary from place to place, but bizarre beliefs abound worldwide, from conspiracy theories in the UK to 9/11 “truthers” in Canada to HIV denialism in South Africa (fortunately on the wane). The American examples are more familiar to me and most of my readers are Americans, but wherever you are reading from, there are probably ridiculous beliefs common there.

I could go on, listing more objectively ridiculous beliefs that are surprisingly common; but the more I do that, the more I risk alienating you, in case you should happen to believe one of them. When you add up the dizzying array of ridiculous beliefs one could hold, odds are that most people you’d ever meet will have at least one of them. (“Not me!” you’re thinking; and perhaps you’re right. Then again, I’m pretty sure that the 4% or so of people who believe in the Reptilians think the same thing.)

Which brings me to my real focus: How do we reach these people?

One possible approach would be to just ignore them, leave them alone, or go about our business with them as though they did not have ridiculous beliefs. This is in fact the right thing to do under most circumstances, I think; when a stranger on the bus starts blathering about how the lizard people are going to soon reveal themselves and establish the new world order, I don’t think it’s really your responsibility to persuade that person to realign their beliefs with reality. Nodding along quietly would be acceptable, and it would be above and beyond the call of duty to simply say, “Um, no… I’m fairly sure that isn’t true.”

But this cannot always be the answer, if for no other reason than the fact that we live in a democracy, and people with ridiculous beliefs frequently vote according to them. Then people with ridiculous beliefs can take office, and make laws that affect our lives. Actually this would be true even if we had some other system of government; there’s nothing in particular to stop monarchs, hereditary senates, or dictators from believing ridiculous things. If anything, the opposite; dictators are known for their eccentricity precisely because there are no checks on their behavior.

At some point, we’re going to need to confront the fact that over half of the Republicans in the US Congress do not believe in climate change, and are making policy accordingly, rolling drunk on petroleum and treating the hangover with the hair of the dog.

We’re going to have to confront the fact that school boards in Southern states, particularly Texas, continually vote to strip the dreaded Darwinian evolution out of biology textbooks.

So we really do need to find a way to talk to people who have ridiculous beliefs, and engage with them, understand why they think the way they do, and then—hopefully at least—tilt them a little bit back toward rational reality. You will not be able to change their mind completely right away, but if each of us can at least chip away at their edifice of absurdity, then all together perhaps we can eventually bring them to enlightenment.

Of course, a good start is probably not to say you think that their beliefs are ridiculous, because people get very defensive when you do that, even—perhaps especially—when it’s true. People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.

This is the link that we must somehow break. We must show people that they are not defined by their beliefs, that it is okay to change your mind. We must be patient and compassionate—sometimes heroically so, as people spout offensive nonsense in our faces, sometimes offensive nonsense that directly attacks us personally. (“Atheists deserve Hell”, taken literally, would constitute something like a death threat except infinitely worse. While to them it very likely is just reciting a slogan, to the atheist listening it says that you believe that they are so evil, so horrible that they deserve eternal torture for believing what they do. And you get mad when we say your beliefs are ridiculous?)

We must also remind people that even very smart people can believe very dumb things—indeed, I’d venture a guess that most dumb things are in fact believed by smart people. Even the most intelligent human beings can only glimpse a tiny fraction of the universe, and all human brains are subject to the same fundamental limitations, the same core heuristics and biases. Make it clear that you’re saying you think their beliefs are false, not that they are stupid or crazy. And indeed, make it clear to yourself that this is indeed what you believe, because it ought to be. It can be tempting to think that only an idiot would believe something so ridiculous—and you are safe, for you are no idiot!—but the truth is far more humbling: Human brains are subject to many flaws, and guarding the fortress of the mind against error and deceit is a 24-7 occupation. Indeed, I hope that you will ask yourself: “What beliefs do I hold that other people might find ridiculous? Are they, in fact, ridiculous?”

Even then, it won’t be easy. Most people are strongly resistant to any change in belief, however small, and it is in the nature of ridiculous beliefs that they require radical changes in order to restore correspondence with reality. So we must try in smaller steps.

Maybe don’t try to convince them that 9/11 was actually the work of Osama bin Laden; start by pointing out that yes, steel does bend much more easily at the temperature at which jet fuel burns. Maybe don’t try to persuade them that astrology is meaningless; start by pointing out the ways that their horoscope doesn’t actually seem to fit them, or could be made to fit anybody. Maybe don’t try to get across the real urgency of climate change just yet, and instead point out that the “study” they read showing it was a hoax was clearly funded by oil companies, who would perhaps have a vested interest here. And as for ESP? I think it’s a good start just to point out that we have more than five senses already, and there are many wonders of the human brain that actual scientists know about well worth exploring—so who needs to speculate about things that have no scientific evidence?

How personality makes cognitive science hard

Aug 13, JDN 2457614

Why is cognitive science so difficult? First of all, let’s acknowledge that it is difficult—that even those of us who understand it better than most are still quite baffled by it in quite fundamental ways. The Hard Problem still looms large over us all, and while I know that the Chinese Room Argument is wrong, I cannot precisely pin down why.

The recursive, reflexive character of cognitive science is part of the problem; can a thing understand itself without understanding understanding itself, understanding understanding understanding itself, and on in an infinite regress? But this recursiveness applies just as much to economics and sociology, and honestly to physics and biology as well. We are physical biological systems in an economic and social system, yet most people at least understand these sciences at the most basic level—which is simply not true of cognitive science.

One of the most basic facts of cognitive science (indeed I am fond of calling it The Basic Fact of Cognitive Science) is that we are our brains, that everything human consciousness does is done by and within the brain. Yet the majority of humans believe in souls (including the majority of Americans and even the majority of Brits), and just yesterday I saw a news anchor say “Based on a new study, that feeling may originate in your brain!” He seriously said “may”. “may”? Why, next you’ll tell me that when my arms lift things, maybe they do it with muscles! Other scientists are often annoyed by how many misconceptions the general public has about science, but this is roughly the equivalent of a news anchor saying, “Based on a new study, human bodies may be made of cells!” or “Based on a new study, diamonds may be made of carbon atoms!” The misunderstanding of many sciences is widespread, but the misunderstanding of cognitive science is fundamental.

So what makes cognitive science so much harder? I have come to realize that there is a deep feature of human personality that makes cognitive science inherently difficult in a way other sciences are not.

Decades of research have uncovered a number of consistent patterns in human personality, where people’s traits tend to lie along a continuum from one extreme to another, and usually cluster near either end. Most people are familiar with a few of these, such as introversion/extraversion and optimism/pessimism; but the one that turns out to be important here is empathizing/systematizing.

Empathizers view the world as composed of sentient beings, living agents with thoughts, feelings, and desires. They are good at understanding other people and providing social support. Poets are typically empathizers.

Systematizers view the world as composed of interacting parts, interlocking components that have complex inner workings which can be analyzed and understood. They are good at solving math problems and tinkering with machines. Engineers are typically systematizers.

Most people cluster near one end of the continuum or the other; they are either strong empathizers or strong systematizers. (If you’re curious, there’s an online test you can take to find out which you are.)

But a rare few of us, perhaps as little as 2% and no more than 10%, are both; we are empathizer-systematizers, strong on both traits (showing that it’s not really a continuum between two extremes after all, and only seemed to be because the two traits are negatively correlated). A comparable number are also low on both traits, which must quite frankly make the world a baffling place in general.

Empathizer-systematizers understand the world as it truly is: Composed of sentient beings that are made of interacting parts.

The very title of this blog shows I am among this group: “human” for the empathizer, “economics” for the systematizer!

We empathizer-systematizers can intuitively grasp that there is no contradiction in saying that a person is sad because he lost his job and he is sad because serotonin levels in his cingulate gyrus are low—because it was losing his job that triggered other thoughts and memories that lowered serotonin levels in his cingulate gyrus and thereby made him sad. No one fully understands the details of how low serotonin feels like sadness—hence, the Hard Problem—but most people can’t even seem to grasp the connection at all. How can something as complex and beautiful as a human mind be made of… sparking gelatin?

Well, what would you prefer it to be made of? Silicon chips? We’re working on that. Something else? Magical fairy dust, perhaps? Pray tell, what material could the human mind be constructed from that wouldn’t bother you on a deep level?

No, what really seems to bother people is the very idea that a human mind can be constructed from material, that thoughts and feelings can be divisible into their constituent parts.

This leads people to adopt one of two extreme positions on cognitive science, both of which are quite absurd—frankly I’m not sure they are even coherent.

Pure empathizers often become dualists, saying that the mind cannot be divisible, cannot be made of material, but must be… something else, somehow, outside the material universe—whatever that means.

Pure systematizers instead often become eliminativists, acknowledging the functioning of the brain and then declaring proudly that the mind does not exist—that consciousness, emotion, and experience are all simply illusions that advanced science will one day dispense with—again, whatever that means.

I can at least imagine what a universe would be like if eliminativism were true and there were no such thing as consciousness—just a vast expanse of stars and rocks and dust, lifeless and empty. Of course, I know that I’m not in such a universe, because I am experiencing consciousness right now, and the illusion of consciousness is… consciousness. (You are not experiencing what you are experiencing right now, I say!) But I can at least visualize what such a universe would be like, and indeed it probably was our universe (or at least our solar system) up until about a billion years ago when the first sentient animals began to evolve.

Dualists, on the other hand, are speaking words, structured into grammatical sentences, but I’m not even sure they are forming coherent assertions. Sure, you can sort of imagine our souls being floating wisps of light and energy (ala the “ascended beings”, my least-favorite part of the Stargate series, which I otherwise love), but ultimately those have to be made of something, because nothing can be both fundamental and complex. Moreover, the fact that they interact with ordinary matter strongly suggests that they are made of ordinary matter (and to be fair to Stargate, at one point in the series Rodney with his already-great intelligence vastly increased declares confidently that ascended beings are indeed nothing more than “protons and electrons, protons and electrons”). Even if they were made of some different kind of matter like dark matter, they would need to obey a common system of physical laws, and ultimately we would come to think of them as matter. Otherwise, how do the two interact? If we are made of soul-stuff which is fundamentally different from other stuff, then how do we even know that other stuff exists? If we are not our bodies, then how do we experience pain when they are damaged and control them with our volition? The most coherent theory of dualism is probably Malebranche’s, which is quite literally “God did it”. Epiphenomenalism, which says that thoughts are just sort of an extra thing that also happens but has no effect (an “epiphenomenon”) on the physical brain, is also quite popular for some reason. People don’t quite seem to understand that the Law of Conservation of Energy directly forbids an “epiphenomenon” in this sense, because anything that happens involves energy, and that energy (unlike, say, money) can’t be created out of nothing; it has to come from somewhere. Analogies are often used: The whistle of a train, the smoke of a flame. But the whistle of a train is a pressure wave that vibrates the train; the smoke from a flame is made of particulates that could be used to smother the flame. At best, there are some phenomena that don’t affect each other very much—but any causal interaction at all makes dualism break down.

How can highly intelligent, highly educated philosophers and scientists make such basic errors? I think it has to be personality. They have deep, built-in (quite likely genetic) intuitions about the structure of the universe, and they just can’t shake them.

And I confess, it’s very hard for me to figure out what to say in order to break those intuitions, because my deep intuitions are so different. Just as it seems obvious to them that the world cannot be this way, it seems obvious to me that it is. It’s a bit like living in a world where 45% of people can see red but not blue and insist the American Flag is red and white, another 45% of people can see blue but not red and insist the flag is blue and white, and I’m here in the 10% who can see all colors and I’m trying to explain that the flag is red, white, and blue.

The best I can come up with is to use analogies, and computers make for quite good analogies, not least because their functioning is modeled on our thinking.

Is this word processor program (LibreOffice Writer, as it turns out) really here, or is it merely an illusion? Clearly it’s really here, right? I’m using it. It’s doing things right now. Parts of it are sort of illusions—it looks like a blank page, but it’s actually an LCD backlight shining through; it looks like ink, but it’s actually just pixels blocking that light. But there is clearly something here, an actual entity worth talking about which has properties that are usefully described without trying to reduce them to the constituent interactions of subatomic particles.

On the other hand, can it be reduced to the interactions of subatomic particles? Absolutely. A brief sketch is something like this: It’s a software program, running on an operating system, and these in turn are represented in the physical hardware as long binary sequences, stored as ever-so-slightly higher or lower voltages in particular hardware components, which in turn come down to electrons being shifted from one energy state to another. Those electrons move in precise accordance with the laws of quantum mechanics, I assure you; yet this in no way changes the fact that I’m typing a blog post on a word processor.
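(If you like, here is a tiny illustrative sketch of those software layers, written in Python purely for concreteness; the particular letter and encoding are just convenient examples, and the voltage and electron levels underneath are of course invisible from inside a running program.)

```python
# A toy illustration of descending levels of description for one character of
# text. The hardware levels below this (voltages, electron states) cannot be
# seen from inside a program; this only shows the software layers.

letter = "A"                             # the "word processor" level: a character
code_point = ord(letter)                 # the abstract level: Unicode code point 65
raw_bytes = letter.encode("utf-8")       # the storage level: the byte b'A'
bit_pattern = format(code_point, "08b")  # the binary level: '01000001'

print(letter, code_point, raw_bytes, bit_pattern)
```

None of those lower-level descriptions make the letter any less real; they are just the same thing described at a finer grain.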

Indeed, it’s not even particularly useful to know that the electrons are obeying the laws of quantum mechanics, and quite literally no possible computer that could be constructed in our universe could ever be large enough to fully simulate all these quantum interactions within the amount of time since the dawn of the universe. If we are to understand it at all, it must be at a much higher level—and the “software program” level really seems to be the best one for most circumstances. The vast majority of problems I’m likely to encounter are either at the software level or the macro hardware level; it’s conceivable that a race condition could emerge in the processor cache or the voltage could suddenly spike or even that a cosmic ray could randomly ionize a single vital electron, but these scenarios are far less likely to affect my life than, say, accidentally deleting the wrong file or having the battery run out because I forgot to plug it in.
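(For a rough sense of scale, here is a hedged back-of-envelope sketch; the figure of roughly 10^80 atoms in the observable universe is a standard rough estimate, not anything established in this post. Exactly tracking even 300 two-level quantum degrees of freedom would already require more amplitudes than there are atoms in the observable universe.)

```python
# Back-of-envelope: why exact quantum simulation of everyday hardware is hopeless.
# Exactly simulating n two-level quantum systems (roughly, qubits) requires on the
# order of 2**n complex amplitudes. The ~10**80 atoms figure is a standard rough
# estimate, used purely for illustration.

n = 300                       # a minuscule system by a laptop's standards
amplitudes = 2 ** n           # state-vector entries needed for exact simulation
atoms_in_universe = 10 ** 80  # rough standard estimate

print(f"amplitudes needed for {n} two-level systems: {amplitudes:.2e}")
print(f"atoms in the observable universe (rough):    {atoms_in_universe:.2e}")
print(f"shortfall factor:                            {amplitudes / atoms_in_universe:.2e}")
```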

Likewise, when dealing with a relationship problem, or mediating a conflict between two friends, it’s rarely relevant that some particular neuron is firing in someone’s nucleus accumbens, or that one of my friends is very low on dopamine in his mesolimbic system today. It could be, particularly if some sort of mental or neurological illness is involved, but in most cases the real issues are better understood as higher-level phenomena—people being angry, or tired, or sad. These emotions are ultimately constructed of action potentials and neurotransmitters, but that doesn’t make them any less real, nor does it change the fact that it is at the emotional level that most human matters are best understood.

Perhaps part of the problem is that human emotions take on moral significance, which other higher-level entities generally do not? But they sort of do, really, in a more indirect way. It matters a great deal morally whether or not climate change is a real phenomenon caused by carbon emissions (it is). Ultimately this moral significance can be tied to human experiences, so everything rests upon human experiences being real; but they are real, in much the same way that rocks and trees and carbon emissions are real. No amount of neuroscience will ever change that, just as no amount of biological science would disprove the existence of trees.

Indeed, some of the world’s greatest moral problems could be better solved if people were better empathizer-systematizers, and thus more willing to do cost-benefit analysis.