The mythology mindset

Feb 5 JDN 2459981

I recently finished reading Steven Pinker’s latest book Rationality. It’s refreshing, well-written, enjoyable, and basically correct with some small but notable errors that seem sloppy—but then you could have guessed all that from the fact that it was written by Steven Pinker.

What really makes the book interesting is an insight Pinker presents near the end, regarding the difference between the “reality mindset” and the “mythology mindset”.

It’s a pretty simple notion, but a surprisingly powerful one.

In the reality mindset, a belief is a model of how the world actually functions. It must be linked to the available evidence and integrated into a coherent framework of other beliefs. You can logically infer from how some parts work to how other parts must work. You can predict the outcomes of various actions. You live your daily life in the reality mindset; you couldn’t function otherwise.

In the mythology mindset, a belief is a narrative that fulfills some moral, emotional, or social function. It’s almost certainly untrue or even incoherent, but that doesn’t matter. The important thing is that it sends the right messages. It has the right moral overtones. It shows you’re a member of the right tribe.

The idea is similar to Dennett’s “belief in belief”, which I’ve written about before; but I think this characterization may actually be a better one, not least because people would be more willing to use it as a self-description. If you tell someone “You don’t really believe in God, you believe in believing in God”, they will object vociferously (which is, admittedly, what the theory would predict). But if you tell them, “Your belief in God is a form of the mythology mindset”, I think they are at least less likely to immediately reject your claim out of hand. “You believe in God a different way than you believe in cyanide” isn’t as obviously threatening to their identity.

A similar notion came up in a Psychology of Religion course I took, in which the professor discussed “anomalous beliefs” linked to various world religions. He picked on a bunch of obscure religions, often held by various small tribes. He asked for more examples from the class. Knowing he was nominally Catholic and not wanting to let mainstream religion off the hook, I presented my example: “This bread and wine are the body and blood of Christ.” To his credit, he immediately acknowledged it as a very good example.

It’s also not quite the same thing as saying that religion is a “metaphor”; that’s not a good answer for a lot of reasons, but perhaps chief among them is that people don’t say they believe metaphors. If I say something metaphorical and then you ask me, “Hang on; is that really true?” I will immediately acknowledge that it is not, in fact, literally true. Love is a rose with all its sweetness and all its thorns—but no, love isn’t really a rose. And when it comes to religious belief, saying that you think it’s a metaphor is basically a roundabout way of saying you’re an atheist.

From all these different directions, we seem to be converging on a single deeper insight: when people say they believe something, quite often, they clearly mean something very different by “believe” than what I would ordinarily mean.

I’m tempted even to say that they don’t really believe it—but in common usage, the word “belief” is used at least as often to refer to the mythology mindset as the reality mindset. (In fact, it sounds less weird to say “I believe in transubstantiation” than to say “I believe in gravity”.) So if they don’t really believe it, then they at least mythologically believe it.

Both mindsets seem to come very naturally to human beings, in particular contexts. And not just modern people, either. Humans have always been like this.

Ask that psychology professor about Jesus, and he’ll tell you a tall tale of life, death, and resurrection by a demigod. But ask him about the Stroop effect, and he’ll provide a detailed explanation of rigorous experimental protocol. He believes something about God; but he knows something about psychology.

Ask a hunter-gatherer how the world began, and he’ll surely spin you a similarly tall tale about some combination of gods and spirits and whatever else, and it will all be peculiarly particular to his own tribe and no other. But ask him how to gut a fish, and he’ll explain every detail with meticulous accuracy, with almost the same rigor as that scientific experiment. He believes something about the sky-god; but he knows something about fish.

To be a rationalist, then, is to aspire to live your whole life in the reality mindset. To seek to know rather than believe.

This isn’t about certainty. A rationalist can be uncertain about many things—in fact, it’s rationalists of all people who are most willing to admit and quantify their uncertainty.

This is about whether you allow your beliefs to float free as bare, almost meaningless assertions that you profess to show you are a member of the tribe, or you make them pay rent, directly linked to other beliefs and your own experience.

As long as I can remember, I have always aspired to do this. But not everyone does. In fact, I dare say most people don’t. And that raises a very important question: Should they? Is it better to live the rationalist way?

I believe that it is. I suppose I would, temperamentally. But say what you will about the Enlightenment and the scientific revolution, they have clearly revolutionized human civilization and made life much better today than it was for most of human existence. We are peaceful, safe, and well-fed in a way that our not-so-distant ancestors could only dream of, and it’s largely thanks to systems built under the principles of reason and rationality—that is, the reality mindset.

We would never have industrialized agriculture if we still thought in terms of plant spirits and sky gods. We would never have invented vaccines and antibiotics if we still believed disease was caused by curses and witchcraft. We would never have built power grids and the Internet if we still saw energy as a mysterious force permeating the world and not as a measurable, manipulable quantity.

This doesn’t mean that ancient people who saw the world in a mythological way were stupid. In fact, it doesn’t even mean that people today who still think this way are stupid. This is not about some innate, immutable mental capacity. It’s about a technology—or perhaps the technology, the meta-technology that makes all other technology possible. It’s about learning to think the same way about the mysterious and the familiar, using the same kind of reasoning about energy and death and sunlight as we already did about rocks and trees and fish. When encountering something new and mysterious, someone in the mythology mindset quickly concocts a fanciful tale about magical beings that inevitably serves to reinforce their existing beliefs and attitudes, without the slightest shred of evidence for any of it. Faced with the same mystery, someone in the reality mindset looks closer and tries to figure it out.

Still, this gives me some compassion for people with weird, crazy ideas. I can better make sense of how someone living in the modern world could believe that the Earth is 6,000 years old or that the world is ruled by lizard-people. Because they probably don’t really believe it, they just mythologically believe it—and they don’t understand the difference.

Mind reading is not optional

Nov 20 JDN 2459904

I have great respect for cognitive-behavioral therapy (CBT), and it has done a lot of good for me. (It is also astonishingly cost-effective; its QALY per dollar rate compares favorably to almost any other First World treatment, and loses only to treating high-impact Third World diseases like malaria and schistosomiasis.)

But there are certain aspects of it that have always been frustrating to me. Standard CBT techniques often present as ‘cognitive distortions’ what are in fact clearly necessary heuristics without which it would be impossible to function.

Perhaps the worst of these is so-called ‘mind reading’. The very phrasing of it makes it sound ridiculous: Are you suggesting that you have some kind of extrasensory perception? Are you claiming to be a telepath?

But in fact ‘mind reading’ is simply the use of internal cognitive models to forecast the thoughts, behaviors, and expectations of other human beings. And without it, it would be completely impossible to function in human society.

For instance, I have had therapists tell me that it is ‘mind reading’ for me to anticipate that people will have tacit expectations for my behavior that they will judge me for failing to meet, and I should simply wait for people to express their expectations rather than assuming them. I admit, life would be much easier if I could do that. But I know for a fact that I can’t. Indeed, I used to do that, as a child, and it got me in trouble all the time. People were continually upset at me for not doing things they had expected me to do but never bothered to actually mention. They thought these expectations were “obvious”; they were not, at least not to me.

It was often little things, and in hindsight some of these things seem silly: I didn’t know what a ‘made bed’ was supposed to look like, so I put it in a state that was functional for me, but that was not considered ‘making the bed’. (I have since learned that my way was actually better: It’s good to let sheets air out before re-using them.) I was asked to ‘clear the sink’, so I moved the dishes out of the sink and left them on the counter, not realizing that the implicit command was for me to wash those dishes, dry them, and put them away. I was asked to ‘bring the dinner plates to the table’, so I did that, and left them in a stack there, not realizing that I should be setting them out in front of each person’s chair and also bringing flatware. Of course I know better now. But how was I supposed to know then? It seems like I was expected to, though.

Most people just really don’t seem to realize how many subtle, tacit expectations are baked into every single task. I think neurodivergence is quite relevant here; I have a mild autism spectrum disorder, and so I think rather differently than most people. If you are neurotypical, then you probably can forecast other people’s expectations fairly well automatically, and so they may seem obvious to you. In fact, they may seem so obvious that you don’t even realize you’re doing it. Then when someone like me comes along and is consciously, actively trying to forecast other people’s expectations, and sometimes doing it poorly, you go and tell them to stop trying to forecast. But if they were to do that, they’d end up even worse off than they are. What you really need to be telling them is how to forecast better—but that would require insight into your own forecasting methods which you aren’t even consciously aware of.

Seriously, stop and think for a moment about all of the things other people expect you to do every day that are rarely if ever explicitly stated. How you are supposed to dress, how you are supposed to speak, how close you are supposed to stand to other people, how long you are supposed to hold eye contact—all of these are standards you will be expected to meet, whether or not any of them have ever been explicitly explained to you. You may do this automatically; or you may learn to do it consciously after being criticized for failing to do it. But one way or another, you must forecast what other people will expect you to do.

To my knowledge, no one has ever explicitly told me not to wear a Starfleet uniform to work. I am not aware of any part of the university dress code that explicitly forbids such attire. But I’m fairly sure it would not be a good idea. To my knowledge, no one has ever explicitly told me not to burst out into song in the middle of a meeting. But I’m still pretty sure I shouldn’t do that. To my knowledge, no one has ever explicitly told me what the ‘right of way’ rules are for walking down a crowded sidewalk, who should be expected to move out of the way of whom. But people still get mad if you mess up and bump into them.

Even when norms are stated explicitly, it is often as a kind of last resort, and the mere fact that you needed to have a norm stated is often taken as a mark against your character. I have been explicitly told in various contexts not to talk to myself or engage in stimming leg movements; but the way I was told has generally suggested that I would have been judged better if I hadn’t had to be told, if I had simply known the way that other people seem to know. (Or is it that they never felt any particular desire to stim?)

In fact, I think a major part of developing social skills and becoming more functional, to the point where a lot of people actually now seem a bit surprised to learn I have an autism spectrum disorder, has been improving my ability to forecast other people’s expectations for my behavior. There are dozens if not hundreds of norms that people expect you to follow at any given moment; most people seem to intuit them so easily that they don’t even realize they are there. But they are there all the same, and this is painfully evident to those of us who aren’t always able to immediately intuit them all.

Now, the fact remains that my current mental models are surely imperfect. I am often wrong about what other people expect of me. I’m even prepared to believe that some of my anxiety comes from believing that people have expectations more demanding than what they actually have. But I can’t simply abandon the idea of forecasting other people’s expectations. Don’t tell me to stop doing it; tell me how to do it better.

Moreover, there is a clear asymmetry here: If you think people want more from you than they actually do, you’ll be anxious, but people will like you and be impressed by you. If you think people want less from you than they actually do, people will be upset at you and look down on you. So, in the presence of uncertainty, there’s a lot of pressure to assume that the expectations are high. It would be best to get it right, of course; but when you aren’t sure you can get it right, you’re often better off erring on the side of caution—which is to say, the side of anxiety.

In short, mind reading isn’t optional. If you think it is, that’s only because you do it automatically.

Mindful of mindfulness

Sep 25 JDN 2459848

I have always had trouble with mindfulness meditation.

On the one hand, I find it extremely difficult to do: if there is one thing my mind is good at, it’s wandering. (I think in addition to my autism spectrum disorder, I may also have a smidgen of ADHD. I meet some of the criteria at least.) And it feels a little too close to a lot of practices that are obviously mumbo-jumbo nonsense, like reiki, qigong, and reflexology.

On the other hand, mindfulness meditation has been empirically shown to have large beneficial effects in study after study after study. It helps with not only depression, but also chronic pain. It even seems to improve immune function. The empirical data is really quite clear at this point. The real question is how it does all this.

And I am, above all, an empiricist. I bow before the data. So, when my new therapist directed me to an app that’s supposed to train me to do mindfulness meditation, I resolved that I would in fact give it a try.

Honestly, as of writing this, I’ve been using it less than a week; it’s probably too soon to make a good evaluation. But I did have some prior experience with mindfulness, so this was more like getting back into it rather than starting from scratch. And, well, I think it might actually be working. I feel a bit better than I did when I started.

If it is working, it doesn’t seem to me that the mechanism is greater focus or mental control. I don’t think I’ve really had time to meaningfully improve those skills, and to be honest, I have a long way to go there. The pre-recorded voice samples keep telling me it’s okay if my mind wanders, but I doubt the app developers planned for how much my mind can wander. When they suggest I try to notice each wandering thought, I feel like saying, “Do you want the complete stack trace, or just the final output? Because if I wrote down each terminal branch alone, my list would say something like ‘fusion reactors, ice skating, Napoleon’.”

I think some of the benefit is simply parasympathetic activation, that is, being more relaxed. I am, and have always been, astonishingly bad at relaxing. It’s not that I lack positive emotions: I can enjoy, I can be excited. Nor am I incapable of low-arousal emotions: I can get bored, I can be lethargic. I can also experience emotions that are negative and high-arousal: I can be despondent or outraged. But I have great difficulty reaching emotional states which are simultaneously positive and low-arousal, i.e. states of calm and relaxation. (See here for more on the valence/arousal model of emotional states.) To some extent I think this is due to innate personality: I am high in both Conscientiousness and Neuroticism, which basically amounts to being “high-strung”. But mindfulness has taught me that it’s also trainable, to some extent; I can get better at relaxing, and I already have.

And even more than that, I think the most important effect has been reminding and encouraging me to practice self-compassion. I am an intensely compassionate person, toward other people; but toward myself, I am brutal, demanding, unforgiving, even cruel. My internal monologue says terrible things to me that I would never say to anyone else. (Or at least, not to anyone else who wasn’t a mass murderer or something. I wouldn’t feel particularly bad about saying “You are a failure, you are broken, you are worthless, you are unworthy of love” to, say, Josef Stalin. And yes, these are in fact things my internal monologue has said to me.) Whenever I am unable to master a task I consider important, my automatic reaction is to denigrate myself for failing; I think the greatest benefit I am getting from practicing meditation is being encouraged to fight that impulse. That is, the most important value added by the meditation app has not been in telling me how to focus on my own breathing, but in reminding me to forgive myself when I do it poorly.

If this is right (as I said, it’s probably too soon to say), then we may at last be able to explain why meditation is simultaneously so weird and tied to obvious mumbo-jumbo on the one hand, and also so effective on the other. The actual function of meditation is to be a difficult cognitive task which doesn’t require outside support.

And then the benefit actually comes from doing this task, getting slowly better at it—feeling that sense of progress—and also from learning to forgive yourself when you do it badly. The task probably could have been anything: Find paths through mazes. Fill out Sudoku grids. Solve integrals. But these things are hard to do without outside resources: It’s basically impossible to draw a maze without solving it in the process. Generating a Sudoku grid with a unique solution is at least as hard as solving one (which is NP-complete). By the time you know whether a given function even has an elementary antiderivative, you’ve basically integrated it. But focusing on your breath? That you can do anywhere, anytime. And the difficulty of controlling all your wandering thoughts may be less a bug than a feature: It’s precisely because the task is so difficult that you will have reason to practice forgiving yourself for failure.

The arbitrariness of the task itself is how you can get a proliferation of different meditation techniques, and a wide variety of mythologies and superstitions surrounding them all, but still have them all be about equally effective in the end. Because it was never really about the task at all. It’s about getting better and failing gracefully.

It probably also helps that meditation is relaxing. Solving integrals might not actually work as well as focusing on your breath, even if you had a textbook handy full of integrals to solve. Breathing deeply is calming; integration by parts isn’t. But lots of things are calming, and some things may be calming to one person but not to another.

It is possible that there is yet some other benefit to be had directly via mindfulness itself. If there is, it will surely have more to do with anterior cingulate activation than realignment of qi. But such a particular benefit isn’t necessary to explain the effectiveness of meditation, and indeed would be hard-pressed to explain why so many different kinds of meditation all seem to work about as well.

Because it was never about what you’re doing—it was always about how.

Gender norms are weird.

Apr 3 JDN 2459673

Field Adjunct Xorlan nervously adjusted their antenna jewelry and twiddled their mandibles as they waited to be called before the Xenoanthropology Committee.

At last, it was Xorlan’s turn to speak. They stepped slowly, hesitantly up to the speaking perch, trying not to meet any of the dozens of quartets of eyes gazing upon them. “So… yes. The humans of Terra. I found something…” Their throat suddenly felt dry. “Something very unusual.”

The Committee Chair glared at Xorlan impatiently. “Go on, then.”

“Well, to begin, humans exhibit moderate sexual dimorphism, though much more in physical than mental traits.”

The Chair rolled all four of their eyes. “That is hardly unusual at all! I could name a dozen species on as many worlds—”

“Uh, if I may, I wasn’t finished. But the humans, you see, they endeavor greatly—at enormous individual and social cost—to emphasize their own dimorphism. They wear clothing that accentuates their moderate physical differences. They organize themselves into groups based primarily if not entirely around secondary sexual characteristics. Many of their languages even directly incorporate pronouns or even adjectives and nouns associated with these categorizations.”

Seemingly placated for now, the Chair was no longer glaring or rolling their eyes. “Continue.”

“They build complex systems of norms surrounding the appropriate dress and behavior of individuals based on these dimorphic characteristics. Moreover, they enforce these norms with an iron mandible—” Xorlan choked at their own cliched metaphor, regretting it immediately. “Well, uh, not literally, humans don’t have mandibles—but what I mean to say is, they enforce these norms extremely aggressively. Humans will berate, abuse, ostracize, in extreme cases even assault or kill one another over seemingly trivial violations of these norms.”

Now the Chair sounded genuinely interested. “We know religion is common among humans. Do the norms have some religious significance, perhaps?”

“Sometimes. But not always. Oftentimes the norms seem to be entirely separate from religious practices, yet are no less intensively enforced. Different groups of humans even have quite different norms, though I have noticed certain patterns, if you’ll turn to table 4 of my report—”

The Chair waved dismissively. “In due time, Field Adjunct. For now, tell us: Do the humans have a name for this strange practice?”

“Ah. Yes, in fact they do. They call it gender.”

We are so thoroughly accustomed to gender norms—in basically every human society—that we hardly even notice their existence, much less think to question them most of the time. But as I hope this little vignette about an alien anthropologist illustrates, these norms are actually quite profoundly weird.

Sexual dimorphism is not weird. A huge number of species have varying degrees of dimorphism, and mammals in particular are especially likely to exhibit significant dimorphism, from the huge antlers of a stag to the silver back of a gorilla. Human dimorphism is in a fairly moderate range; our males and females are neither exceptionally similar nor exceptionally different by most mammal standards.

No, what’s weird is gender—the way that, in nearly every human society, culture has taken our sexual dimorphism and expanded it into an incredibly intricate, incredibly draconian system of norms that everyone is expected to follow on pain of ostracism if not outright violence.

Imagine a government that passed laws implementing the following:

Shortly after your birth, you will be assigned to a group without your input, and will remain in it your entire life. Based on your group assignment, you must obey the following rules: You must wear only clothing on this approved list, and never anything on this excluded list. You must speak with a voice pitch within a particular octave range. You must stand and walk a certain way. You must express, or not express, your emotions under certain strictly defined parameters—for group A, anger is almost never acceptable, while for group B, anger is the only acceptable emotion in most circumstances. You are expected to eat certain approved foods and exclude other foods. You must exhibit the assigned level of dominance for your group. All romantic and sexual relations are to be only with those assigned to the opposite group. If you violate any of these rules, you will be punished severely.

We would surely see any such government as the epitome of tyranny. These rules are petty, arbitrary, oppressive, and disproportionately and capriciously enforced. And yet, when we ourselves have imposed these rules upon one another for millennia, in every society on Earth, it seems to us as though nothing is amiss.

Note that I’m not saying that men and women are the same in every way. That’s clearly not true physically; the differences in upper body strength and grip strength are frankly staggering. The average man is nearly twice as strong as the average woman, and an average 75-year-old man grips better with his left hand than an average 25-year-old woman grips with her right.

It isn’t really true mentally either: There are some robust correlations between gender and certain psychological traits. But they are just that: Correlations. Men are more likely to be dominant, aggressive, risk-seeking and visually oriented, while women are more likely to be submissive, nurturing, neurotic, and verbally oriented. There is still an enormous amount of variation within each group, such that knowing only someone’s gender actually tells you very little about their psychology.

And whatever differences there may be, however small or large, and whatever exceptions may exist, whether rare or ubiquitous—the question remains: Why enforce this? Why punish people for deviating from whatever trends may exist? Why is deviating from gender norms not simply unusual, but treated as immoral?

I don’t have a clear answer. People do generally enforce all sorts of social norms, some good and some bad; but gender norms in particular seem especially harshly enforced. People do generally feel uncomfortable with having their mental categories challenged or violated, but sporks and schnoodles have never received anything like the kind of hatred that is routinely directed at trans people. There’s something about gender in particular that seems to cut very deep into the core of human psychology.

Indeed, so deep that I doubt we’ll ever be truly free of gender norms. But perhaps we can at least reduce their draconian demands on us by remaining aware of just how weird those demands are.

Commitment and sophistication

Mar 13 JDN 2459652

One of the central insights of cognitive and behavioral economics is that understanding the limitations of our own rationality can help us devise mechanisms to overcome those limitations—that knowing we are not perfectly rational can make us more rational. The usual term for this is a somewhat vague one: behavioral economists generally call it simply sophistication.

For example, suppose that you are short-sighted and tend to underestimate the importance of the distant future. (This is true of most of us, to greater or lesser extent.)

It’s rational to consider the distant future less important than the present—things change in the meantime, and if we go far enough you may not even be around to see it. In fact, rationality alone doesn’t even say how much you should discount any given distance in the future. But most of us are inconsistent about our attitudes toward the future: We exhibit dynamic inconsistency.

For instance, suppose I ask you today whether you would like $100 today or $102 tomorrow. It is likely you’ll choose $100 today. But if I ask you whether you would like $100 365 days from now or $102 366 days from now, you’ll almost certainly choose the $102.


This means that if I asked you the second question first, then waited a year and asked you the first question, you’d change your mind—that’s inconsistent. Whichever choice is better shouldn’t systematically change over time. (It might happen to change, if your circumstances changed in some unexpected way. But on average it shouldn’t change.) Indeed, waiting a day for an extra $2 is typically going to be worth it; 2% daily interest is pretty hard to beat.
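
If you like, here’s a quick back-of-the-envelope sketch of that inconsistency in Python, using an assumed quasi-hyperbolic (“beta-delta”) discounter with made-up parameter values; it’s an illustration of the pattern, not an estimate of anyone’s actual preferences.

```python
# A minimal sketch of dynamic inconsistency under quasi-hyperbolic
# ("beta-delta") discounting. The parameter values are made up for
# illustration; they are not empirical estimates.

BETA = 0.7     # extra penalty on anything that is not "right now"
DELTA = 0.999  # ordinary per-day exponential discount factor

def present_value(amount, days_from_now):
    """Subjective value today of receiving `amount` after `days_from_now` days."""
    if days_from_now == 0:
        return amount
    return BETA * (DELTA ** days_from_now) * amount

# Deciding today between $100 today and $102 tomorrow:
print(present_value(100, 0), present_value(102, 1))      # 100.0 vs ~71.3 -> take the $100

# Deciding today between $100 in 365 days and $102 in 366 days:
print(present_value(100, 365), present_value(102, 366))  # ~48.6 vs ~49.5 -> take the $102
```

Viewed from a year away, the extra day is a negligible wait and the $2 wins; but once the $100 becomes “today”, it escapes the beta penalty entirely and the ranking flips, which is exactly the reversal described above.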

Now, suppose you have some option to make a commitment, something that will bind you to your earlier decision. It could be some sort of punishment for deviating from your earlier choice, some sort of reward for keeping to the path, or, in the most extreme example, a mechanism that simply won’t let you change your mind. (The literally classic example of this is Odysseus having his crew tie him to the mast so he can listen to the Sirens.)

If you didn’t know that your behavior was inconsistent, you’d never want to make such a commitment. You don’t expect to change your mind, and if you do change your mind, it would be because your circumstances changed in some unexpected way—in which case changing your mind would be the right thing to do. And if your behavior wasn’t inconsistent, this reasoning would be quite correct: No point in committing when you have less information.

But if you know that your behavior is inconsistent, you can sometimes improve the outcome for yourself by making a commitment. You can force your own behavior into consistency, even though you will later be tempted to deviate from your plan.

Yet there is a piece missing from this account, often not clearly enough stated: Why should we trust the version of you that has a year to plan over the version of you that is making the decision today? What’s the difference between those two versions of you that makes them inconsistent, and why is one more trustworthy than the other?

The biggest difference is emotional. You don’t really feel $100 a year from now, so you can do the math and see that 2% daily interest is pretty darn good. But $100 today makes you feel something—excitement over what you might buy, or relief over a bill you can now pay. (Actually that’s one of the few times when it would be rational to take $100 today: If otherwise you’re going to miss a deadline and pay a late fee.) And that feeling about $102 tomorrow just isn’t as strong.

We tend to think that our emotional selves and our rational selves are in conflict, and so we expect to be more rational when we are less emotional. There is some truth to this—strong emotions can cloud our judgments and make us behave rashly.

Yet this is only one side of the story. We also need emotions to be rational. There is a condition known as flat affect, often a symptom of various neurological disorders, in which emotional reactions are greatly blunted or even non-existent. People with flat affect aren’t more rational—they just do less. In the worst cases, they completely lose their motivation to do anything and become outright inert, a condition known as abulia.

Emotional judgments are often less accurate than thoughtfully reasoned arguments, but they are also much faster—and that’s why we have them. In many contexts, particularly when survival is at stake, doing something pretty well right away is often far better than waiting long enough to be sure you’ll get the right answer. Running away from a loud sound that turns out to be nothing is a lot better than waiting to carefully determine whether that sound was really a tiger—and finding that it was.

With this in mind, the cases where we should expect commitment to be effective are those that are unfamiliar, not only on an individual level, but in an evolutionary sense. I have no doubt that experienced stock traders can develop certain intuitions that make them better at understanding financial markets than randomly chosen people—but they still systematically underperform simple mathematical models, likely because finance is just so weird from an evolutionary perspective. So when deciding whether to accept some amount of money m1 at time t1 and some other amount of money m2 at time t2, your best bet is really to just do the math.

But this may not be the case for many other types of decisions. Sometimes how you feel in the moment really is the right signal to follow. Committing to work at your job every day may seem responsible, ethical, rational—but if you hate your job when you’re actually doing it, maybe it really isn’t how you should be spending your life. Buying a long-term gym membership to pressure yourself to exercise may seem like a good idea, but if you’re miserable every time you actually go to the gym, maybe you really need to be finding a better way to integrate exercise into your lifestyle.

There are no easy answers here. We can think of ourselves as really being made of two (if not more) individuals: A cold, calculating planner who looks far into the future, and a heated, emotional experiencer who lives in the moment. There’s a tendency to assume that the planner is our “true self”, the one we should always listen to, but this is wrong; we are both of those people, and a life well-lived requires finding the right balance between their conflicting desires.

Mental illness is different from physical illness.

Post 311 Oct 13 JDN 2458770

There’s something I have heard a lot of people say about mental illness that is obviously well-intentioned, but ultimately misguided: “Mental illness is just like physical illness.”

Sometimes they say it explicitly in those terms. Other times they make analogies, like “If you wouldn’t shame someone with diabetes for using insulin, why shame someone with depression for using SSRIs?”

Yet I don’t think this line of argument will ever meaningfully reduce the stigma surrounding mental illness, because, well, it’s obviously not true.

There are some characteristics of mental illness that are analogous to physical illness—but there are some that really are quite different. And these are not just superficial differences, the way that pancreatic disease is different from liver disease. No one would say that liver cancer is exactly the same as pancreatic cancer; but they’re both obviously of the same basic category. There are differences between physical and mental illness which are both obvious, and fundamental.

Here’s the biggest one: Talk therapy works on mental illness.

You can’t talk yourself out of diabetes. You can’t talk yourself out of a myocardial infarction. You can’t even talk yourself out of a migraine (though I’ll get back to that one in a little bit). But you can, in a very important sense, talk yourself out of depression.

In fact, talk therapy is one of the most effective treatments for most mental disorders. Cognitive behavioral therapy for depression is on its own as effective as most antidepressants (with far fewer harmful side effects), and the two combined are clearly more effective than either alone. Talk therapy is as effective as medication on bipolar disorder, and considerably better on social anxiety disorder.

To be clear: Talk therapy is not just people telling you to cheer up, or saying it’s “all in your head”, or suggesting that you get more exercise or eat some chocolate. Nor does it consist of you ruminating by yourself and trying to talk yourself out of your disorder. Cognitive behavioral therapy is a very complex, sophisticated series of techniques that require years of expert training to master. Yet, at its core, cognitive therapy really is just a very sophisticated form of talking.

The fact that mental disorders can be so strongly affected by talk therapy shows that there really is an important sense in which mental disorders are “all in your head”, and not just the trivial way that an axe wound or even a migraine is all in your head. It isn’t just the fact that it is physically located in your brain that makes a mental disorder different; it’s something deeper than that.

Here’s the best analogy I can come up with: Physical illness is hardware. Mental illness is software.

If a computer breaks after being dropped on the floor, that’s like an axe wound: An obvious, traumatic source of physical damage that is an unambiguous cause of the failure.

If a computer’s CPU starts overheating, that’s like a physical illness, like diabetes: There may be no particular traumatic cause, or even any clear cause at all, but there is obviously something physically wrong that needs physical intervention to correct.

But if a computer is suffering glitches and showing error messages when it tries to run particular programs, that is like mental illness: Something is wrong not on the low-level hardware, but on the high-level software.

These different types of problem require different types of solutions. If your CPU is overheating, you might want to see about replacing your cooling fan or your heat sink. But if your software is glitching while your CPU is otherwise running fine, there’s no point in replacing your fan or heat sink. You need to get a programmer in there to look at the code and find out where it’s going wrong. A talk therapist is like a programmer: The words they say to you are like code scripts they’re trying to get your processor to run correctly.

Of course, our understanding of computers is vastly better than our understanding of human brains, and as a result, programmers tend to get a lot better results than psychotherapists. (Interestingly they do actually get paid about the same, though! Programmers make about 10% more on average than psychotherapists, and both are solidly within the realm of average upper-middle-class service jobs.) But the basic process is the same: Using your expert knowledge of the system, find the right set of inputs that will fix the underlying code and solve the problem. At no point do you physically intervene on the system; you could do it remotely without ever touching it—and indeed, remote talk therapy is a thing.

What about other neurological illnesses, like migraine or fibromyalgia? Well, I think these are somewhere in between. They’re definitely more physical in some sense than a mental disorder like depression. There isn’t any cognitive content to a migraine the way there is to a depressive episode. When I feel depressed or anxious, I feel depressed or anxious about something. But there’s nothing a migraine is about. To use the technical term in cognitive science, neurological disorders lack the intentionality that mental disorders generally have. “What are you depressed about?” is a question you usually can answer. “What are you migrained about?” generally isn’t.

But like mental disorders, neurological disorders are directly linked to the functioning of the brain, and often seem to operate at a higher level of functional abstraction. The brain doesn’t have pain receptors on itself the way most of your body does; getting a migraine behind your left eye doesn’t actually mean that that specific lobe of your brain is what’s malfunctioning. It’s more like a general alert your brain is sending out that something is wrong, somewhere. And fibromyalgia often feels like it’s taking place in your entire body at once. Moreover, most neurological disorders are strongly correlated with mental disorders—indeed, the comorbidity of depression with migraine and fibromyalgia in particular is extremely high.

Which disorder causes the other? That’s a surprisingly difficult question. Intuitively we might expect the “more physical” disorder to be the primary cause, but that’s not always clear. Successful treatment for depression often improves symptoms of migraine and fibromyalgia as well (though the converse is also true). They seem to be mutually reinforcing one another, and it’s not at all clear which came first. I suppose if I had to venture a guess, I’d say the pain disorders probably have causal precedence over the mood disorders, but I don’t actually know that for a fact.

To stretch my analogy a little, it may be like a software problem that ends up causing a hardware problem, or a hardware problem that ends up causing a software problem. There actually have been a few examples of this, like games with graphics so demanding that they caused GPUs to overheat.

The human brain is a lot more complicated than a computer, and the distinction between software and hardware is fuzzier; we don’t actually have “code” that runs on a “processor”. We have synapses that continually fire on and off and rewire each other. The closest thing we have to code that gets processed in sequence would be our genome, and that is several orders of magnitude less complex than the structure of our brains. Aside from simply physically copying the entire brain down to every synapse, it’s not clear that you could ever “download” a mind, science fiction notwithstanding.

Indeed, anything that changes your mind necessarily also changes your brain; the effects of talking are generally subtler than the effects of a drug (and certainly subtler than the effects of an axe wound!), but they are nevertheless real, physical changes. (This is why it is so idiotic whenever the popular science press comes out with: “New study finds that X actually changes your brain!” where X might be anything from drinking coffee to reading romance novels. Of course it does! If it has an effect on your mind, it did so by having an effect on your brain. That’s the Basic Fact of Cognitive Science.) This is not so different from computers, however: Any change in software is also a physical change, in the form of some sequence of electrical charges that were moved from one place to another. Actual physical electrons are a few microns away from where they otherwise would have been because of what was typed into that code.

Of course I want to reduce the stigma surrounding mental illness. (For both selfish and altruistic reasons, really.) But blatantly false assertions don’t seem terribly productive toward that goal. Mental illness is different from physical illness; we can’t treat it the same.

How can we stop rewarding psychopathy?

Oct 1, JDN 2458028

A couple of weeks ago The New York Times ran an interesting article about how entrepreneurs were often juvenile delinquents, who then often turn into white-collar criminals. They didn’t quite connect the dots, though; they talked about the relevant trait driving this behavior as “rule-breaking”, when it is probably better defined as psychopathy. People like Martin Shkreli aren’t just “rule-breakers”; they are psychopaths. While only about 1% of humans in general are psychopaths, somewhere between 3% and 4% of business executives are psychopaths. I was unable to find any specific data assessing the prevalence of psychopathy among politicians, but if you just read the Hare checklist, it’s not hard to see that psychopathic traits are overrepresented among politicians as well.

This is obviously the result of selection bias; as a society, we are systematically appointing psychopaths to positions of wealth and power. Why are we doing this? How can we stop?

One very important factor here that may be especially difficult to deal with is desire. We generally think that in a free society, people should be allowed to seek out the sort of life they want to live. But one of the reasons that psychopaths are more likely to become rich and powerful is precisely that they want it more.

To most of us, being rich is probably something we want, but not the most important thing to us. We’d accept being poor if it meant we could be happy, surrounded by friends and family who love us, and able to make a great contribution to society. We would like to be rich, but it’s more important that we be good people. But to many psychopaths, being rich is the one single thing they care about. All those other considerations are irrelevant.

With power, matters are even more extreme: Most people actually seem convinced that they don’t want power at all. They associate power with corruption and cruelty (because, you know, so many of the people in power are psychopaths!), and they want no part of it.

So the saying goes: “Power tends to corrupt, and absolute power corrupts absolutely.” Does it, now? Did power corrupt George Washington and Abraham Lincoln? Did it corrupt Mahatma Gandhi and Nelson Mandela? I’m not saying that any of these men were without flaws, even serious ones—but was it power that made them so? Who would they have been, and more importantly, what would they have done, if they hadn’t had power? Would the world really have been better off if Abraham Lincoln and Nelson Mandela had stayed out of politics? I don’t think so.

Part of what we need, therefore, is to convince good people that wanting power is not inherently bad. Power just means the ability to do things; it’s what you do that matters. You should want power—the power to right wrongs, mend injustices, uplift humanity’s future. Thinking that the world would be better if you were in charge not only isn’t a bad thing—it is quite likely to be true. If you are not a psychopath, then the world would probably be better off if you were in charge of it.

Of course, that depends partly on what “in charge of the world” even means; it’s not like we have a global government, after all. But even suppose you were granted the power of an absolute dictatorship over all of humanity; what would you do with that power? My guess is that you’d probably do what I would do: Start by using that power to correct the greatest injustices, then gradually cede power to a permanent global democracy. That wouldn’t just be a good thing; it would be quite literally and without a doubt the best thing that ever happened. Of course, it would be all the better if we never built such a dictatorship in the first place; but mainly that’s because of the sort of people who tend to become dictators. A benevolent dictatorship really would be a wonderful thing; the problem is that dictators almost never remain benevolent. Dictatorship is simply too enticing to psychopaths.

And what if you don’t think you’re competent enough in policy to make such decisions? Simple: You don’t make them yourself, you delegate them to responsible and trustworthy people to make them for you. Recognizing your own limitations is one of the most important differences between a typical leader and a good leader.

Desire isn’t the only factor here, however. Even though psychopaths tend to seek wealth and power with more zeal than others, there are still a lot of good people trying to seek wealth and power. We need to look very carefully at the process of how we select our leaders.

Let’s start with the private sector. How are managers chosen? Mainly, by managers above them. What criteria do they use? Mostly, they use similarity. Managers choose other managers who are “like them”—middle-aged straight White men with psychopathic tendencies.

This is something that could be rectified with regulation; we could require businesses to choose a more diverse array of managers that is more representative of the population at large. While this would no doubt trigger many complaints of “government interference” and “inefficiency”, in fact it almost certainly would increase the long-term profitability of most corporations. Study after study after study shows that increased diversity, particularly including more equal representation of women, results in better business performance. A recent MIT study found that switching from an all-male or all-female management population to a 50-50 male/female split could increase profits by as much as forty percent. The reason boards of directors aren’t including more diversity is that they ultimately care more about protecting their old boys’ club (and increasing their own compensation, of course) than they do about maximizing profits for their shareholders.

I think it would actually be entirely reasonable to include regulations about psychopathy in particular; designate certain industries (such as lobbying and finance; I would not include medicine, as psychopaths actually seem to make pretty good neurosurgeons!) as “systemically vital” and require psychopathy screening tests as part of their licensing process. This is no small matter, and definitely does represent an incursion into civil liberties; but given the enormous potential benefits, I don’t think it can be dismissed out of hand. We do license professions; why shouldn’t at least a minimal capacity for empathy and ethical behavior be part of that licensing process?

Where the civil liberty argument becomes overwhelming is in politics. I don’t think we can justify any restrictions on who should be allowed to run for office. Frankly, I think even the age limits should be struck from the Constitution; you should be allowed to run for President at 18 if you want. Requiring psychological tests for political office borders on dystopian.

That means we need to somehow reform either the campaign system, the voting system, or the behavior of voters themselves.

Of course, we should reform all three. Let’s start with the voting system itself, as that is the simplest: We should be using range voting, and we should abolish the Electoral College. Districts should be replaced by proportional representation through reweighted range voting, eliminating gerrymandering once and for all without question.
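
For those who haven’t encountered it, here is a minimal sketch of how reweighted range voting works; this is my own illustrative implementation using one common weighting rule, not any official specification. Voters score every candidate, seats are filled one at a time, and each ballot loses weight as more of the candidates it scored highly get elected, which is what makes the outcome proportional.

```python
# A minimal sketch of reweighted range voting (RRV), illustrative only.
# Each ballot maps candidate -> score in [0, MAX_SCORE]. After each seat is
# filled, a ballot's weight becomes 1 / (1 + S/MAX_SCORE), where S is the sum
# of scores that ballot gave to candidates already elected.

MAX_SCORE = 5

def reweighted_range_voting(ballots, candidates, seats):
    winners = []
    for _ in range(seats):
        totals = {c: 0.0 for c in candidates if c not in winners}
        for ballot in ballots:
            elected_score = sum(ballot.get(w, 0) for w in winners)
            weight = 1.0 / (1.0 + elected_score / MAX_SCORE)
            for c in totals:
                totals[c] += weight * ballot.get(c, 0)
        winners.append(max(totals, key=totals.get))
    return winners

# Tiny example: 60% of voters love A and B, 40% love C. The two seats go to
# A and C, because A-lovers' ballots are down-weighted after A wins.
ballots = [{"A": 5, "B": 4, "C": 0}] * 60 + [{"A": 0, "B": 0, "C": 5}] * 40
print(reweighted_range_voting(ballots, ["A", "B", "C"], seats=2))  # ['A', 'C']
```

With ordinary (non-reweighted) range voting, the 60% bloc in that example would take both seats; the reweighting step is what gives the 40% bloc its proportional share.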

The campaign system is trickier. We could start by eliminating or tightly capping private and corporate campaign donations, and replace them with a system similar to the “Democracy Vouchers” being tested in Seattle. The basic idea is simple and beautiful: Everyone gets an equal amount of vouchers to give to whatever candidates they like, and then all the vouchers can be redeemed for campaign financing from public funds. It’s like everyone giving a donation (or monetary voting), but everyone has the same amount of “money”.

This would not solve all the problems, however. There is still an oligopoly of news media distorting our political discourse. There is still astonishingly bad journalism even in our most respected outlets, like the New York Times’ obsession with Comey’s letter and CNN’s wall-to-wall coverage of totally unfounded speculation about a missing airliner.

Then again, CNN’s ratings skyrocketed during that period. This shows that the problems run much deeper than a handful of bad journalists or corrupt media companies. These companies are, to a surprisingly large degree, just catering to what their audience has said it wants: “giving the people what they want”.

Our fundamental challenge, therefore, is to change what the people want. We have to somehow convince the public at large—or at least a big enough segment of the public at large—that they don’t really want TV news that spends hours telling them nothing and they don’t really want to elect the candidate who is the tallest or has the nicest hair. And we have to get them to actually change the way they behave accordingly.

When it comes to that part, I have no idea what to do. A voting population that is capable of electing Donald Trump—Electoral College nonsense notwithstanding, he won sixty million votes—is one that I honestly have no idea how to interface with at all. But we must try.

The replication crisis, and the future of science

Aug 27, JDN 2457628 [Sat]

After settling in a little bit in Irvine, I’m now ready to resume blogging, but for now it will be on a reduced schedule. I’ll release a new post every Saturday, at least for the time being.

Today’s post was chosen by Patreon vote, though only one person voted (this whole Patreon voting thing has not been as successful as I’d hoped). It’s about something we scientists really don’t like to talk about, but definitely need to: We are in the middle of a major crisis of scientific replication.

Whenever large studies are conducted attempting to replicate published scientific results, their ability to do so is almost always dismal.

Psychology is the one everyone likes to pick on, because their record is particularly bad. Only 39% of studies were really replicated with the published effect size, though a further 36% were at least qualitatively but not quantitatively similar. Yet economics has its own replication problem, and even medical research is not immune to replication failure.

It’s important not to overstate the crisis; the majority of scientific studies do at least qualitatively replicate. We are doing better than flipping a coin, which is better than one can say of financial forecasters.

There are three kinds of replication, and only one of them should be expected to give near-100% results. That kind is reanalysis—when you take the same data and use the same methods, you absolutely should get the exact same results. I favor making reanalysis a routine requirement of publication; if we can’t get your results by applying your statistical methods to your data, then your paper needs revision before we can entrust it to publication. A number of papers have failed on reanalysis, which is absurd and embarrassing; the worst offender was probably Reinhart-Rogoff, which was used in public policy decisions around the world despite having spreadsheet errors.

The second kind is direct replication—when you do the exact same experiment again and see if you get the same result within error bounds. This kind of replication should work something like 90% of the time, but in fact works more like 60% of the time.

The third kind is conceptual replication—when you do a similar experiment designed to test the same phenomenon from a different perspective. This kind of replication should work something like 60% of the time, but actually only works about 20% of the time.

Economists are well equipped to understand and solve this crisis, because it’s not actually about science. It’s about incentives. I facepalm every time I see another article by an aggrieved statistician about the “misunderstanding” of p-values; no, scientists aren’t misunderstanding anything. They know damn well how p-values are supposed to work. So why do they keep using them wrong? Because their jobs depend on doing so.

The first key point to understand here is “publish or perish”; academics in an increasingly competitive system are required to publish their research in order to get tenure, and frequently required to get tenure in order to keep their jobs at all. (Or they could become adjuncts, who are paid one-fifth as much.)

The second is the fundamentally defective way our research journals are run (as I have discussed in a previous post). As private for-profit corporations whose primary interest is in raising more revenue, our research journals aren’t trying to publish what will genuinely advance scientific knowledge. They are trying to publish what will draw attention to themselves. It’s a similar flaw to what has arisen in our news media; they aren’t trying to convey the truth, they are trying to get ratings to draw advertisers. This is how you get hours of meaningless fluff about a missing airliner and then a single chyron scroll about a war in Congo or a flood in Indonesia. Research journals haven’t fallen quite so far because they have reputations to uphold in order to attract scientists to read them and publish in them; but still, their fundamental goal is and has always been to raise attention in order to raise revenue.

The best way to do that is to publish things that are interesting. But if a scientific finding is interesting, that means it is surprising. It has to be unexpected or unusual in some way. And above all, it has to be positive; you have to have actually found an effect. Except in very rare circumstances, the null result is never considered interesting. This adds up to making journals publish what is improbable.

In particular, it creates a perfect storm for the abuse of p-values. A p-value, roughly speaking, is the probability of getting a result at least as extreme as the one you observed if there were no real effect at all—for instance, the probability that you’d observe this wage gap between men and women in your sample if in the real world men and women were paid the exact same wages. The standard heuristic is a p-value of 0.05; indeed, it has become so enshrined that it is almost an explicit condition of publication now. Your result must be less than 5% likely to happen if there is no real difference. But if you will only publish results that show a p-value below 0.05, then the papers that get published and read will only be the ones that found such p-values—which renders the p-values meaningless.

It was never particularly meaningful anyway; as we Bayesians have been trying to explain since time immemorial, it matters how likely your hypothesis was in the first place. For something like wage gaps where we’re reasonably sure, but maybe could be wrong, the p-value is not too unreasonable. But if the theory is almost certainly true (“does gravity fall off as the inverse square of distance?”), even a high p-value like 0.35 is still supportive, while if the theory is almost certainly false (“are human beings capable of precognition?”—actual study), even a tiny p-value like 0.001 is still basically irrelevant. We really should be using much more sophisticated inference techniques, but those are harder to do, and don’t provide the nice simple threshold of “Is it below 0.05?”
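
For a rough sense of how much the prior matters, here is a back-of-the-envelope Bayesian calculation. The false-positive rate, statistical power, and prior probabilities are numbers I have chosen for illustration, not estimates from any real study, and “significant” is treated simply as “p below 0.05”.

```python
# Toy Bayes calculation: how much should a "statistically significant" result
# shift our belief, given different prior probabilities of the hypothesis?
alpha, power = 0.05, 0.8  # assumed false-positive rate and power

def posterior(prior):
    """P(hypothesis true | result significant), by Bayes' rule."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

for name, prior in [("gravity-like (almost surely true)", 0.99),
                    ("wage gap (plausible)", 0.5),
                    ("precognition (almost surely false)", 0.001)]:
    print(f"{name}: prior {prior:.3f} -> posterior {posterior(prior):.3f}")
```

On these assumptions, a “significant” result moves the almost-certainly-false hypothesis from a 0.1% chance to only about a 1.6% chance (still almost certainly false), while barely budging the one we already believed.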

But okay, p-values can be useful in many cases—if they are used correctly and you see all the results. If you have effect X with p-values 0.03, 0.07, 0.01, 0.06, and 0.09, effect X is probably a real thing. If you have effect Y with p-values 0.04, 0.02, 0.29, 0.35, and 0.74, effect Y is probably not a real thing. But I’ve just set it up so that these would be published exactly the same. They each have two published papers with “statistically significant” results. The other papers never get published and therefore never get seen, so we throw away vital information. This is called the file drawer problem.
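
A tiny sketch of that filter in action, using the made-up p-values above:

```python
# The file drawer problem: if only results below 0.05 ever see print,
# effects X and Y become indistinguishable in the published record.
effect_x = [0.03, 0.07, 0.01, 0.06, 0.09]  # clustered near significance: probably real
effect_y = [0.04, 0.02, 0.29, 0.35, 0.74]  # mostly scattered: probably noise

for name, p_values in [("X", effect_x), ("Y", effect_y)]:
    published = [p for p in p_values if p < 0.05]
    filed_away = len(p_values) - len(published)
    print(f"Effect {name}: {len(published)} published {published}, "
          f"{filed_away} left in the file drawer")
```

Both effects end up with exactly two published “significant” findings; the three studies apiece that would have told them apart never see the light of day.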

Researchers often have a lot of flexibility in designing their experiments. If their only goal were to find truth, they would use this flexibility to test a variety of scenarios and publish all the results, so they can be compared holistically. But that isn’t their only goal; they also care about keeping their jobs so they can pay rent and feed their families. And under our current system, the only way to ensure that you can do that is by publishing things, which basically means only including the parts that showed up as statistically significant—otherwise, journals aren’t interested. And so we get huge numbers of papers published that tell us basically nothing, because we set up such strong incentives for researchers to give misleading results.

The saddest part is that this could be easily fixed.

First, reduce the incentives to publish by finding other ways to evaluate the skill of academics—like teaching for goodness’ sake. Working papers are another good approach. Journals already get far more submissions than they know what to do with, and most of these papers will never be read by more than a handful of people. We don’t need more published findings, we need better published findings—so stop incentivizing mere publication and start finding ways to incentivize research quality.

Second, eliminate private for-profit research journals. Science should be done by government agencies and nonprofits, not for-profit corporations. (And yes, I would apply this to pharmaceutical companies as well, which should really be pharmaceutical manufacturers who make cheap drugs based off of academic research and carry small profit margins.) Why? Again, it’s all about incentives. Corporations have no reason to want to find truth and every reason to want to tilt it in their favor.

Third, increase the number of tenured faculty positions. Instead of building so many new grand edifices to please your plutocratic donors, use your (skyrocketing) tuition money to hire more professors so that you can teach more students better. You can find even more funds if you cut the salaries of your administrators and football coaches. Come on, universities; you are the one industry in the world where labor demand and labor supply are the same people a few years later. You have no excuse for not having the smoothest market clearing in the world. You should never have gluts or shortages.

Fourth, require pre-registration of research studies (as some branches of medicine already do). If the study is sound, an optimal rational agent shouldn’t care in the slightest whether it had a positive or negative result, and if our ape brains won’t let us think that way, we need to establish institutions to force it to happen. Reviewers shouldn’t even see the effect size and p-value before the decision to publish is made; all they should care about is that the experiment makes sense and the proper procedure was conducted.

If we did all that, the replication crisis could be almost completely resolved, as the incentives would be realigned to more closely match the genuine search for truth.

Alas, I don’t see universities or governments or research journals having the political will to actually make such changes, which is very sad indeed.

“But wait, there’s more!”: The clever tricks of commercials

JDN 2457565

I’m sure you’ve all seen commercials like this dozens of times:

A person is shown (usually in black-and-white) trying to use an ordinary consumer product, and failing miserably. Often their failure can only be attributed to the most abject incompetence, but the narrator will explain otherwise: “Old product is so hard to use. Who can handle [basic household activity] and [simple instructions]?”

“Struggle no more!” he says (it’s almost always a masculine narrator), and the video turns to full color as the same person is shown using the new consumer product effortlessly. “With innovative high-tech new product, you can do [basic household activity] with ease in no time!”

“Best of all, new product, a $400 value, can be yours for just five easy payments of $19.95. That’s five easy payments of $19.95!”

And then, here it comes: “But wait. There’s more! Order within the next 15 minutes and you will get two new products, for the same low price. That’s $800 in value for just five easy payments of $19.95! And best of all, your satisfaction is guaranteed! If you don’t like new product, return it within 30 days for your money back!” (A much quieter, faster voice says: “Just pay shipping and handling.”)

“Call 555-1234. That’s 555-1234.”

“CALL NOW!”

Did you ever stop and think about why so many commercials follow this same precise format?

In short, because it works. Indeed, it works a good deal better than simply presenting the product’s actual upsides and downsides and reporting a sensible market price—even if that sensible market price is lower than the “five easy payments of $19.95”.

We owe this style of marketing to one Ron Popeil, a prolific inventor; but none of his inventions have had so much impact as the marketing methods he used to sell them.

Let’s go through step by step. Why is the person using the old product so incompetent? Surely they could sell their product without implying that we don’t know how to do basic household activities like boiling pasta and cutting vegetables?

Well, first of all, many of these products do nothing but automate such simple household activities (like the famous Veg-O-Matic, of “It slices! It dices!” fame, which cuts vegetables), so if they couldn’t at least suggest that this is a lot of work they’re saving us, we’d have no reason to want their product.

But there’s another reason as well: Watching someone else fumble with basic household appliances is funny, as any fan of the 1950s classic I Love Lucy would attest. (In fact, it may not be a coincidence that the one fumbling with the vegetables is often a woman who looks a lot like Lucy.) Meta-analyses of humor in advertising have shown that it draws attention and triggers positive feelings.

Why use black-and-white for the first part? The switch to color enhances the feeling of contrast, and the color video is more appealing. You wouldn’t consciously say “Wow, that slicer changed the tomatoes from an ugly grey to a vibrant red!” but your subconscious mind is still registering that association.

Then they will hit you with appealing but meaningless buzzwords. For technology it will be things like “innovative”, “ground-breaking”, “high-tech” and “state-of-the-art”, while for foods and nutritional supplements it will be things like “all-natural”, “organic”, “no chemicals”, and “just like homemade”. Such claims are generally either so vague as to be unverifiable (what constitutes “innovative”?), utterly tautological (in the chemical sense, every carbon-based substance is “organic”, and outside of certified foods the term is barely regulated), or transparently false but not specific enough to get them in trouble (“just like homemade” literally can’t be true if you’re buying it from a TV ad). These give you positive associations without forcing the company to commit to any claim they could actually be sued over. It’s the same principle as the Applause Lights that politicians bring to every speech: “Three cheers for moms!” “A delicious slice of homemade apple pie!” “God Bless America!”

Occasionally you’ll also hear buzzwords that do have some meaning, but often not nearly as strong as people imagine: “Patent pending” means that they applied for the patent and it wasn’t summarily rejected—but not that they’ll end up getting it approved. “Certified organic” means that the USDA signed off on the farming standards, which is better than nothing but leaves a lot of wiggle room for animal abuse and irresponsible environmental practices.

And then we get to the price. They’ll quote some ludicrous figure for its “value”, which may be a price that no one has ever actually paid for a product of this kind, then draw a line through it and replace it with the actual price, which will be far lower.

Indeed, not just lower: The actual price is almost always $19.99 or $19.95. If the product is too expensive to make for them to sell it at $19.95, they will sell it at several payments of $19.95, and emphasize that these are “easy” payments, as though the difficulty of writing the check were a major factor in people’s purchasing decisions. (That actually is a legitimate concern for micropayments, but not for buying kitchen appliances!) They’ll repeat the price because repetition improves memory and also makes statements more persuasive.

This is what we call psychological pricing, and it’s one of those enormous market distortions that, once you realize it’s there, you see everywhere and start to wonder how our whole market system hasn’t collapsed on itself from the sheer weight of our overwhelming irrationality. The price of a product sold on TV will almost always be just slightly less than $20.

In general, most prices will take the form of $X.95 or $X.99; Costco even has a code system hidden in the least significant digit. Continuous goods like gasoline can even be sold at fractional pennies, so they’ll usually be priced at $X.X99, not even a full penny below the round number. It really does seem to work; despite being an eminently trivial difference, and typically rounded up from what the price otherwise would have been, $19.95 just feels like less than $20.00.

Moreover, I have less data to support this particular hypothesis, but I think $20 is a special threshold, because $19.95 pops up so very, very often. I think most Americans have what we might call a “Jackson heuristic”, which goes as follows: If something costs less than a Jackson (a $20 bill, though hopefully they’ll put Harriet Tubman on it soon, making it a “Tubman heuristic”), you’re allowed to buy it on impulse without thinking too hard about whether it’s worth it. But if it costs more than a Jackson, you need to stop and think about it, and weigh the alternatives before you come to a decision. Since these TV ads are almost always aiming for the thoughtless impulse buy, they try to scrape in just under the Jackson heuristic.

Of course, inflation will change the precise figure over time; in the 1980s it was probably a Hamilton heuristic, in the 1970s a Lincoln heuristic, in the 1940s a Washington heuristic. Soon enough it will be a Grant heuristic, and then a Benjamin heuristic. In fact the rule is probably something like “the closest commonly-used cash denomination to half a milliQALY”, though nobody does that calculation consciously; the estimate is made automatically, without thinking. And it makes a certain amount of sense: you could spend a Jackson on impulse once a day, every single day, and it would still come to only about 20% of a typical income; hold it to once a week and you’re under 3%. So if you follow the Jackson heuristic on impulse buys every week or so, your impulse spending is a “statistically insignificant” proportion of your income. (Why do we use that threshold anyway? And suddenly we realize: The 95% confidence level is itself nothing more than a heuristic.)
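
As a rough check on that arithmetic (the income figure and the dollar value of a QALY here are assumptions I am plugging in for illustration, not measured facts):

```python
# Back-of-the-envelope arithmetic behind the "Jackson heuristic".
impulse_price = 19.95
annual_income = 35_000  # assumed typical US personal income
qaly_value = 50_000     # a commonly used ballpark valuation of one QALY

daily = impulse_price * 365
weekly = impulse_price * 52
print(f"Daily impulse buys:  ${daily:,.0f}/year = {daily / annual_income:.0%} of income")
print(f"Weekly impulse buys: ${weekly:,.0f}/year = {weekly / annual_income:.0%} of income")
print(f"Half a milliQALY at ${qaly_value:,}/QALY: ${0.0005 * qaly_value:.2f}")
```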

Then they take advantage of our difficulty in discounting time rationally by spreading the price into payments; “five easy payments of $19.95” sounds a lot more affordable than “$100”, but they are in fact basically the same. (You save $0.25 with the payment plan, or maybe as much as a few dollars if your cash flow is very bad and thus your temporal discount rate is very high.)
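
A quick sketch of that arithmetic, with discount rates chosen purely for illustration:

```python
# Present value of "five easy payments of $19.95" (first payment due immediately)
# under a few monthly discount rates.
payment, n_payments = 19.95, 5

def present_value(monthly_rate):
    return sum(payment / (1 + monthly_rate) ** t for t in range(n_payments))

for rate in (0.0, 0.005, 0.02):
    pv = present_value(rate)
    print(f"{rate:.1%}/month: PV = ${pv:.2f} (saves ${100 - pv:.2f} vs. $100 up front)")
```

Even at an impatient 2% per month, the plan is worth only about four dollars less than paying $100 up front.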

And then, finally, “But wait. There’s more!” They offer you another of the exact same product, knowing full well you’ll probably have no use for the second one. They’ll multiply their previous arbitrary “value” by 2 to get an even more ludicrous number. Now it sounds like they’re doing you a favor, so you’ll feel obliged to do one back by buying the product. Gifts often have this effect in experiments: People are significantly more motivated to answer a survey if you give them a small gift beforehand, even if they get to keep it without taking the survey.

They’ll tell you to call in the next 15 minutes so that you feel like part of an exclusive club (when in reality you could probably call at any time and get the same deal). This also ensures that you’re staying in impulse-buy mode, since if you wait longer to think, you’ll miss the window!

They will offer a “money-back guarantee” to give you a sense of trust in the product, and this would be a rational response, except for that little disclaimer: “Just pay shipping and handling.” For many products, especially nutritional supplements (which cost basically nothing to make), the “handling” fee is high enough that they don’t lose much money, if any, even if you immediately send it back for a refund. Besides, they know that hardly anyone actually bothers to return products. Retailers are currently in a panic about “skyrocketing” rates of product returns that are still under 10%.

Then, they’ll repeat their phone number, followed by a remarkably brazen direct command: “Call now!” Personally I tend to bristle at direct commands, even from legitimate authorities; but apparently I’m unusual in that respect, and most people will in fact obey direct commands from random strangers as long as they aren’t too demanding. A famous demonstration of this (one you could try yourself, if you’re feeling like a prankster) is to walk into a room, point at someone, and say, “You! Stand up!” They probably will. There’s a whole literature in social psychology about what makes people comply with commands of this sort.

And all of this, to make you buy a useless gadget you’ll try to use once and then leave in a cupboard somewhere. What untold billions of dollars in wealth are wasted this way?

Why it matters that torture is ineffective

JDN 2457531

Like “longest-ever-serving Speaker of the House sexually abuses teenagers” and “NSA spy program is trying to monitor the entire telephone and email system”, the news that the US government systematically tortures suspects is an egregious violation that goes to the highest levels of our government—that for some reason most Americans don’t particularly seem to care about.

The good news is that President Obama signed an executive order in 2009 banning torture domestically, reversing official policy under the Bush Administration, and then better yet in 2014 expanded the order to apply to all US interests worldwide. If this is properly enforced, perhaps our history of hypocrisy will finally be at its end. (Well, not if Trump wins…)

Yet as often seems to happen, there are two extremes in this debate and I think they’re both wrong.

The really disturbing side is “Torture works and we have to use it!” The preferred mode of argumentation for this is the “ticking time bomb scenario”, in which we have some urgent disaster to prevent (such as a nuclear bomb about to go off) and torture is the only way to stop it from happening. Surely then torture is justified? This argument may sound plausible, but as I’ll get to below, this is a lot like saying, “If aliens were attacking from outer space trying to wipe out humanity, nuclear bombs would probably be justified against them; therefore nuclear bombs are always justified and we can use them whenever we want.” If you can’t wait for my explanation, The Atlantic skewers the argument nicely.

Yet the opponents of torture have brought this sort of argument on themselves, by staking out a position so extreme as “It doesn’t matter if torture works! It’s wrong, wrong, wrong!” This kind of simplistic deontological reasoning is very appealing and intuitive to humans, because it casts the world into simple black-and-white categories. To show that this is not a strawman, here are several different people all making this same basic argument, that since torture is illegal and wrong it doesn’t matter if it works and there should be no further debate.

But the truth is, if it really were true that the only way to stop a nuclear bomb from leveling Los Angeles was to torture someone, it would be entirely justified—indeed obligatory—to torture that suspect and stop that nuclear bomb.

The problem with that argument is not just that this is not our usual scenario (though it certainly isn’t); it goes much deeper than that:

That scenario makes no sense. It wouldn’t happen.

To use the example the late Antonin Scalia used from an episode of 24 (perhaps the most egregious Fictional Evidence Fallacy ever committed), if there ever is a nuclear bomb planted in Los Angeles, that would literally be one of the worst things that ever happened in the history of the human race—literally a Holocaust in the blink of an eye. We should be prepared to cause extreme suffering and death in order to prevent it. But not only is that event (fortunately) very unlikely, torture would not help us.

Why? Because torture just doesn’t work that well.

It would be too strong to say that it doesn’t work at all; it’s possible that it could produce some valuable intelligence—though clear examples of such results are amazingly hard to come by. That said, some social scientists have found empirical results suggesting torture has some effectiveness. We can’t say with any certainty that it is completely useless. (For obvious reasons, a randomized controlled experiment in torture would be wildly unethical, so none have ever been attempted.) But to justify torture it isn’t enough that it could work sometimes; it has to work vastly better than any other method we have.

And our empirical data is in fact reliable enough to show that that is not the case. Torture often produces unreliable information, as we would expect from the game theory involved—your incentive is to stop the pain, not provide accurate intel; the psychological trauma that torture causes actually distorts memory and reasoning; and as a matter of fact basically all the useful intelligence obtained in the War on Terror was obtained through humane interrogation methods. As interrogation experts agree, torture just isn’t that effective.

In principle, there are four basic cases to consider:

1. Torture is vastly more effective than the best humane interrogation methods.

2. Torture is slightly more effective than the best humane interrogation methods.

3. Torture is as effective as the best humane interrogation methods.

4. Torture is less effective than the best humane interrogation methods.

The evidence points most strongly to case 4, which would make rejecting torture a no-brainer; if it doesn’t even work as well as other methods, it’s absurd to use it. You’re basically kicking puppies at that point—purely sadistic violence that accomplishes nothing. But the data isn’t clear enough for us to rule out case 3 or even case 2. There is only one case we can strictly rule out, and that is case 1.

But it was only in case 1 that torture could ever be justified!

If you’re trying to justify doing something intrinsically horrible, it’s not enough that it has some slight benefit.

People seem to have this bizarre notion that we have only two choices in morality:

Either we are strict deontologists, and wrong actions can never be justified by good outcomes, in which case apparently vaccines are morally wrong, because stabbing children with needles is wrong. To be fair, some people seem to actually believe this; but then, some people believe the Earth is less than 10,000 years old.

Or alternatively we adopt the bizarre strawman version of utilitarianism most people seem to have in mind, under which any wrong action can be justified by even the slightest good outcome, in which case all you need to do to justify slavery is show that it would lead to a 1% increase in per-capita GDP. Sadly, there honestly do seem to be economists who believe this sort of thing. Here’s one arguing that US chattel slavery was economically efficient, and some of the more extreme arguments for why sweatshops are good can take on this character. Sweatshops may be a necessary evil for the time being, but they are still an evil.

But what utilitarianism actually says (and I consider myself some form of nuanced rule-utilitarian, though actually I sometimes call it “deontological consequentialism” to emphasize that I mean to synthesize the best parts of the two extremes) is not that the ends always justify the means, but that the ends can justify the means—that it can be morally good or even obligatory to do something intrinsically bad (like stabbing children with needles) if it is the best way to accomplish some greater good (like saving them from measles and polio). But the good actually has to be greater, and it has to be the best way to accomplish that good.

To see why this latter proviso is important, consider the real-world ethical issues involved in psychology experiments. The benefits of psychology experiments are already quite large, and poised to grow as the science improves; one day the benefits of cognitive science to humanity may be even larger than the benefits of physics and biology are today. Imagine a world without mood disorders or mental illness of any kind; a world without psychopathy, where everyone is compassionate; a world where everyone is achieving their full potential for happiness and self-actualization. Cognitive science may yet make that world possible—and I haven’t even gotten into its applications in artificial intelligence.

To achieve that world, we will need a great many psychology experiments. But does that mean we can just corral people off the street and throw them into psychology experiments without their consent—or perhaps even their knowledge? That we can do whatever we want in those experiments, as long as it’s scientifically useful? No, it does not. We have ethical standards in psychology experiments for a very good reason, and while those ethical standards do slightly reduce the efficiency of the research process, the reduction is small enough that the moral choice is obviously to retain the ethics committees and accept the slight reduction in research efficiency. Yes, randomly throwing people into psychology experiments might actually be slightly better in purely scientific terms (larger and more random samples)—but it would be terrible in moral terms.

Along similar lines, even if torture works about as well or even slightly better than other methods, that’s simply not enough to justify it morally. Making a successful interrogation take 16 days instead of 17 simply wouldn’t be enough benefit to justify the psychological trauma to the suspect (and perhaps the interrogator!), the risk of harm to the falsely accused, or the violation of international human rights law. And in fact a number of terrorism suspects were waterboarded for months, so even the idea that it could shorten the interrogation is pretty implausible. If anything, torture seems to make interrogations take longer and give less reliable information—case 4.

A lot of people seem to have this impression that torture is amazingly, wildly effective, that a suspect who won’t crack after hours of humane interrogation can be tortured for just a few minutes and give you all the information you need. This is exactly what we do not find empirically; if he didn’t crack after hours of talk, he won’t crack after hours of torture. If you literally only have 30 minutes to find the nuke in Los Angeles, I’m sorry; you’re not going to find the nuke in Los Angeles. No adversarial interrogation is ever going to be completed that quickly, no matter what technique you use. Evacuate as many people to safe distances or underground shelters as you can in the time you have left.

This is why the “ticking time-bomb” scenario is so ridiculous (and so insidious); that’s simply not how interrogation works. The best methods we have for “rapid” interrogation of hostile suspects take hours or even days, and they are humane—building trust and rapport is the most important step. The goal is to get the suspect to want to give you accurate information.

For the purposes of the thought experiment, okay, you can stipulate that it would work (this is what the Stanford Encyclopedia of Philosophy does). But now all you’ve done is made the thought experiment more distant from the real-world moral question. The closest real-world examples we’ve ever had involved individual crimes, probably too small to justify the torture (as bad as a murdered child is, think about what you’re doing if you let the police torture people). But by the time the terrorism to be prevented is large enough to really be sufficient justification, it (1) hasn’t happened in the real world and (2) surely involves terrorists who are sufficiently ideologically committed that they’ll be able to resist the torture. If such a situation arises, of course we should try to get information from the suspects—but what we try should be our best methods, the ones that work most consistently, not the ones that “feel right” and maybe happen to work on occasion.

Indeed, the best explanation I have for why people use torture at all, given its horrible effects and, at best, mediocre effectiveness, is that it feels right.

When someone does something terrible (such as an act of terrorism), we rightfully reduce our moral valuation of them relative to everyone else. If you are even tempted to deny this, suppose a terrorist and a random civilian are both inside a burning building and you only have time to save one. Of course you save the civilian and not the terrorist. And that’s still true even if you know that once the terrorist was rescued he’d go to prison and never be a threat to anyone else. He’s just not worth as much.

In the most extreme circumstances, a person can be so terrible that their moral valuation should be effectively zero: If the only person in a burning building is Stalin, I’m not sure you should save him even if you easily could. But it is a grave moral mistake to think that a person’s moral valuation should ever go negative, yet I think this is something that people do when confronted with someone they truly hate. The federal agents torturing those terrorists didn’t merely think of them as worthless—they thought of them as having negative worth. They felt it was a positive good to harm them. But this is fundamentally wrong; no sentient being has negative worth. Some may be so terrible as to have essentially zero worth; and we are often justified in causing harm to some in order to save others. It would have been entirely justified to kill Stalin (as a matter of fact he died of a stroke in old age), to remove the continued threat he posed; but to torture him would not have made the world a better place, and actually might well have made it worse.

Yet I can see how psychologically it could be useful to have a mechanism in our brains that makes us hate someone so much we view them as having negative worth. It makes it a lot easier to harm them when necessary, makes us feel a lot better about ourselves when we do. The idea that any act of homicide is a tragedy but some of them are necessary tragedies is a lot harder to deal with than the idea that some people are just so evil that killing or even torturing them is intrinsically good. But some of the worst things human beings have ever done ultimately came from that place in our brains—and torture is one of them.