Adversity is not a gift

Nov 29 JDN 2459183

For the last several weeks I’ve been participating in a program called “positive intelligence” (which they abbreviate “PQ” even though that doesn’t make sense); it’s basically a self-help program designed to improve mood and increase productivity. I am generally skeptical of such things, and I could tell from the start that it was being massively oversold; but I had the opportunity to participate for free, and when I looked into the techniques involved, most of them seemed to be borrowed from cognitive-behavioral therapy and mindfulness meditation.

Overall, I would say that the program has had small but genuine benefits for me. I think the most helpful part was actually getting the chance to participate in group sessions (via Zoom of course) with others also going through the program. That kind of mutual social support can make a big difference. The group I joined was composed entirely of fellow economists (some other grad students, some faculty), so we had a lot of shared experiences.

Some of the techniques feel very foolish, and others just don’t seem to work for me; but I did find that at least some of the meditation techniques (which they annoyingly insist on calling by the silly name “PQ reps”) have helped me relax.

But there’s one part of the PQ program in particular that I just can’t buy into, and this is the idea that adversity is a gift and an opportunity.

They call it the “Sage perspective”: You observe the world without judging what is good or bad, and any time you think something is bad, you find a way to transform it into a gift and an opportunity. The claim is that everything—or nearly everything—that happens to you can make you better off. There’s a lot of overlap here with the attitude “Everything happens for a reason”.

I don’t doubt that sincerely believing this would make you happier. Nevertheless, it is obviously false.

If indeed adversity were a gift, we would seek it out. If getting fired or going bankrupt or getting sick were a gift and an opportunity, we’d work to make these things happen.

Yes, it’s true that sometimes an event which seems bad at the time can turn out to have good consequences in the long run. This is simply because we are unable to foresee all future ramifications. Sometimes things turn out differently than you think they will. But most of the time, when something seems bad, it is actually bad.

There might be some small amount of discomfort or risk that would be preferable to a life of complete safety and complacency; but we are perfectly capable of seeking out whatever discomfort or risk we choose. Most of us live with far more discomfort and risk than we would prefer, and simply have no choice in the matter.

If adversity were a gift, people would thank you for giving it to them. “Thanks for dumping me!” “Thanks for firing me!” “Thanks for punching me!” These aren’t the sort of thing we hear very often (at least not sincerely).

I think this is fairly obvious, honestly, so I won’t belabor it any further. But it raises a question: Is there a way to salvage the mental health benefits of this attitude while abandoning its obvious falsehood?

“Everything happens for a reason” doesn’t work; we live in a universe of deep randomness, ruled by the blind idiot gods of natural law.

“Every cloud has a silver lining” is better; but clearly not every bad thing has an upside, or if it does, the upside can be so small as to be utterly negligible. (What was the upside of the Rwandan genocide?) Restricted to ordinary events like getting fired, this one works pretty well; but it obviously fails for the most extreme traumas, and doesn’t seem particularly helpful for the death of a loved one either.

“What doesn’t kill me makes me stronger” is better still, but clearly not true in every case; some bad events that don’t actually kill us can traumatize us and make the rest of our lives harder. Perhaps “What doesn’t permanently damage me makes me stronger”?

I think the version of this attitude that I have found closest to the truth is “Everything is raw material”. Sometimes bad things just happen: Bad luck, or bad actions, can harm just about anyone at just about any time. But it is within our power to decide how we will respond to what happens to us, and wallowing in despair is almost never the best response.

Thus, while it is foolish to see adversity as a gift, it is not so foolish to see it as an opportunity. Don’t try to pretend that bad things aren’t bad. There’s no sense in denying that we would prefer some outcomes over others, and we feel hurt or disappointed when things don’t turn out how we wanted. Yet even what is bad can still contain within it chances to learn or make things better.

What’s wrong with “should”?

Nov 8 JDN 2459162

I have been a patient in cognitive behavioral therapy (CBT) for many years now. The central premise that thoughts can influence emotions is well-founded, and the results of CBT are empirically well supported.

One of the central concepts in CBT is cognitive distortions: There are certain systematic patterns in how we tend to think, which often result in beliefs and emotions that are disproportionate to reality.

Most of the cognitive distortions CBT deals with make sense to me—and I am well aware that my mind applies them frequently: All-or-nothing, jumping to conclusions, overgeneralization, magnification and minimization, mental filtering, discounting the positive, personalization, emotional reasoning, and labeling are all clearly distorted modes of thinking that nevertheless are extremely common.

But there’s one “distortion” on CBT lists that always bothers me: “should statements”.

Listen to this definition of what is allegedly a cognitive distortion:

Another particularly damaging distortion is the tendency to make “should” statements. Should statements are statements that you make to yourself about what you “should” do, what you “ought” to do, or what you “must” do. They can also be applied to others, imposing a set of expectations that will likely not be met.

When we hang on too tightly to our “should” statements about ourselves, the result is often guilt that we cannot live up to them. When we cling to our “should” statements about others, we are generally disappointed by their failure to meet our expectations, leading to anger and resentment.

So any time we use “should”, “ought”, or “must”, we are guilty of distorted thinking? In other words, all of ethics is a cognitive distortion? The entire concept of obligation is a symptom of a mental disorder?

Different sources on CBT will define “should statements” differently, and sometimes they offer a more nuanced definition that doesn’t have such extreme implications:

Individuals thinking in ‘shoulds’, ‘oughts’, or ‘musts’ have an ironclad view of how they and others ‘should’ and ‘ought’ to be. These rigid views or rules can generate feelings of anger, frustration, resentment, disappointment and guilt if not followed.

Example: You don’t like playing tennis but take lessons as you feel you ‘should’, and that you ‘shouldn’t’ make so many mistakes on the court, and that your coach ‘ought to’ be stricter on you. You also feel that you ‘must’ please him by trying harder.

This is particularly problematic, I think, because of the All-or-Nothing distortion, which does genuinely seem to be common among people with depression: Unless you are very clear from the start about where to draw the line, the mind will leap to saying that all statements involving the word “should” are wrong.

I think what therapists are trying to capture with this concept is something like having unrealistic expectations, or focusing too much on what could or should have happened instead of dealing with the actual situation you are in. But many seem to be unable to articulate that clearly, and instead end up asserting that the entire concept of moral obligation is a cognitive distortion.

There may be a deeper error here as well: The way we study mental illness doesn’t involve enough comparison with the control group. Psychologists are accustomed to asking the question, “How do people with depression think?”; but they are not accustomed to asking the question, “How do people with depression think compared to people who don’t?” If you want to establish that A causes B, it’s not enough to show that those with B have A; you must also show that those who don’t have B are less likely to have A.

This is an extreme example for illustration, but suppose someone became convinced that depression is caused by having a liver. They studied a bunch of people with depression, and found that they all had livers; hypothesis confirmed! Clearly, we need to remove the livers, and that will cure the depression.

The best example I can find of a study that actually asked that question looked at nursing students with varying levels of depression, and found that cognitive distortions explain about 20% of the variance in depression. This is a significant amount—but it still leaves a lot unexplained. And most of the research on depression doesn’t even seem to think to compare against people without depression.

My impression is that some cognitive distortions are genuinely more common among people with depression—but not all of them. There is an ongoing controversy over what’s called the depressive realism effect, which is the finding that in at least some circumstances the beliefs of people with mild depression seem to be more accurate than the beliefs of people with no depression at all. The result is controversial both because it seems to threaten the paradigm that depression is caused by distortions, and because it seems to be very dependent on context; sometimes depression makes people more accurate in their beliefs, other times it makes them less accurate.

Overall, I am inclined to think that most people have a variety of cognitive distortions, but we only tend to notice when those distortions begin causing distress—such as when they are involved in depression. Human thinking in general seems to be a muddled mess of heuristics, and the wonder is that we function as well as we do.

Does this mean that we should stop trying to remove cognitive distortions? Not at all. Distorted thinking can be harmful even if it doesn’t cause you distress: The obvious example is a fanatical religious or political belief that leads you to harm others. And indeed, recognizing and challenging cognitive distortions is a highly effective treatment for depression.

Actually, I created a simple cognitive distortion worksheet based on the TEAM-CBT approach developed by David Burns that has helped me a great deal in a remarkably short time. You can download the worksheet yourself and try it out. Start with a blank page and write down as many negative thoughts as you can, and then pick 3-5 that seem particularly extreme or unlikely. Then make a copy of the cognitive distortion worksheet for each of those thoughts and follow through it step by step. In particular, do not ignore the step “This thought shows the following good things about me and my core values:”; that often feels the strangest, but it’s a critical part of what makes the TEAM-CBT approach better than conventional CBT.

So yes, we should try to challenge our cognitive distortions. But the mere fact that a thought is distressing doesn’t imply that it is wrong, and giving up on the entire concept of “should” and “ought” is throwing out a lot of babies with that bathwater.

We should be careful about labeling any thoughts that depressed people have as cognitive distortions—and “should statements” is a clear example where many psychologists have overreached in what they characterize as a distortion.

Sincerity inflation

Aug 30 JDN 2459092

What is the most saccharine, empty, insincere way to end a letter? “Sincerely”.

Whence such irony? Well, we’ve all been using it for so long that we barely notice it anymore. It’s just the standard way to end a letter now.

This process is not unlike inflation: As more and more dollars get spent, the value of a dollar decreases, and as a word or phrase gets used more and more, its meaning weakens.

It’s hardly just the word “Sincerely” itself that has thus inflated. Indeed, almost any sincere expression of caring now feels empty. We routinely ask strangers “How are you?” when we don’t actually care how they are.

I felt this quite vividly when I was applying to GiveWell (alas, they decided not to hire me). I was trying to express how much I care about GiveWell’s mission to maximize the effectiveness of charity at saving lives, and it was quite hard to find the words. I kept finding myself saying things that anyone could say, whether they really cared or not. Fighting global poverty is nothing less than my calling in life—but how could I say that without sounding obsequious or hyperbolic? Anyone can say that they care about global poverty—and if you asked them, hardly anyone would say that they don’t care at all about saving African children from malaria—but how many people actually give money to the Against Malaria Foundation?

Or think about how uncomfortable it can feel to tell a friend that you care about them. I’ve seen quite a few posts on social media that are sort of scattershot attempts at this: “I love you all!” Since that is obviously not true—you do not in fact love all 286 of your Facebook friends—it has plausible deniability. But you secretly hope that the ones you really do care about will see its truth.

Where is this ‘sincerity inflation’ coming from? It can’t really be from overuse of sincerity in ordinary conversation—the question is precisely why such conversation is so rare.

But there is a clear source of excessive sincerity, and it is all around us: Advertising.

Every product is the “best”. They will all “change your life”. You “need” every single one. Every corporation “supports family”. Every product will provide “better living”. The product could be a toothbrush or an automobile; the ads are never really about the product. They are about how the corporation will make your family happy.

Consider the following hilarious subversion by the Steak-umms Twitter account (which is a candle in the darkness of these sad times; they have lots of really great posts about Coronavirus and critical thinking).

Kevin Farzard (who I know almost nothing about, but gather he’s a comedian?) wrote this on Twitter: “I just want one brand to tell me that we are not in this together and their health is our lowest priority”

Steak-umms diligently responded: “Kevin we are not in this together and your health is our lowest priority”

Why is this amusing? Because every other corporation—whose executives surely care less about public health than whatever noble creature runs the Steak-umms Twitter feed—has been saying the opposite: “We are all in this together and your health is our highest priority.”

We are so inundated with this saccharine sincerity by advertisers that we learn to tune it out—we have to, or else we’d go crazy and/or bankrupt. But this has an unfortunate side effect: We tune out expressions of caring when they come from other human beings as well.

Therefore let us endeavor to change this, to express our feelings clearly and plainly to those around us, while continuing to shield ourselves from the bullshit of corporations. (I choose that word carefully: These aren’t lies, they’re bullshit. They aren’t false so much as they are utterly detached from truth.) Part of this means endeavoring to be accepting and supportive when others express their feelings to us, not retreating into the comfort of dismissal or sarcasm. Restoring the value of our sincerity will require a concerted effort from many people acting at once.

For this project to succeed, we must learn to make a sharp distinction between the institutions that are trying to extract profits from us and the people who have relationships with us. This is not to say that human beings cannot lie or be manipulative; of course they can. Trust is necessary for all human relationships, but there is such a thing as too much trust. There is a right amount to trust others you do not know, and it is neither complete distrust nor complete trust. Higher levels of trust must be earned.

But at least human beings are not systematically designed to be amoral and manipulative—which corporations are. A corporation exists to do one thing: Maximize profit for its shareholders. Whatever else a corporation is doing, it is in service of that one ultimate end. Corporations can do many good things; but they sort of do it by accident, along the way toward their goal of maximizing profit. And when those good things stop being profitable, they stop doing them. Keep these facts in mind, and you may have an easier time ignoring everything that corporations say without training yourself to tune out all expressions of sincerity.

Then, perhaps one day it won’t feel so uncomfortable to tell people that we care about them.

Reflections on the Chinese Room

Jul 12 JDN 2459044

Perhaps the most famous thought experiment in the philosophy of mind, John Searle’s Chinese Room is the sort of argument that basically every expert knows is wrong, yet can’t quite explain what is wrong with it. Here’s a brief summary of the argument; for more detail you can consult Wikipedia or the Stanford Encyclopedia of Philosophy.

I am locked in a room. The only way to communicate with me is via a slot in the door, through which papers can be passed.

Someone on the other side of the door is passing me papers with Chinese writing on them. I do not speak any Chinese. Fortunately, there is a series of file cabinets in the room, containing instruction manuals which explain (in English) what an appropriate response in Chinese would be to any given input of Chinese characters. These instructions are simply conditionals like “After receiving input A B C, output X.”

I can follow these instructions and thereby ‘hold a conversation’ in Chinese with the person outside, despite never understanding Chinese.

This room is like a Turing Test. A computer is fed symbols and has instructions telling it to output symbols; it may ‘hold a conversation’, but it will never really understand language.

First, let me note that if this argument were right, it would pretty much doom the entire project of cognitive science. Searle seems to think that calling consciousness a “biological function” as opposed to a “computation” can somehow solve this problem; but this is not how functions work. We don’t say that a crane ‘isn’t really lifting’ because it’s not made of flesh and bone. We don’t say that an airplane ‘isn’t really flying’ because it doesn’t flap its wings like a bird. He often compares consciousness to digestion, which is unambiguously a biological function; but if you make a machine that processes food chemically in the same way as digestion, that is basically a digestion machine. (In fact there is a machine called a digester that basically does that.) If Searle is right that no amount of computation could ever get you to consciousness, then we basically have no idea how anything would ever get us to consciousness.

Second, I’m guessing that the argument sounds fairly compelling, especially if you’re not very familiar with the literature. Searle chose his examples very carefully to create a powerfully seductive analogy that tilts our intuitions in a particular direction.

There are various replies that have been made to the Chinese Room. Some have pointed out that the fact that I don’t understand Chinese doesn’t mean that the system doesn’t understand Chinese (the “Systems Reply”). Others have pointed out that in the real world, conscious beings interact with their environment; they don’t just passively respond to inputs (the “Robot Reply”).

Searle has his own counter-reply to these arguments: He insists that if instead of having all those instruction manuals, I memorized all the rules, and then went out in the world and interacted with Chinese speakers, it would still be the case that I didn’t actually understand Chinese. This seems quite dubious to me: For one thing, how is that different from what we would actually observe in someone who does understand Chinese? For another, once you’re interacting with people in the real world, they can do things like point to an object and say the word for it; in such interactions, wouldn’t you eventually learn to genuinely understand the language?

But I’d like to take a somewhat different approach, and instead attack the analogy directly. The argument I’m making here is very much in the spirit of Churchland’s Luminous Room reply, but a little more concrete.

I want you to stop and think about just how big those file cabinets would have to be.

For a proper Turing Test, you can’t have a pre-defined list of allowed topics and canned responses. You’re allowed to talk about anything and everything. There are thousands of symbols in Chinese. There’s no specified limit to how long the test needs to go, or how long each sentence can be.

After each 10-character sequence, the person in the room has to somehow sort through all those file cabinets and find the right set of instructions—not simply to find the correct response to that particular 10-character sequence, but to that sequence in the context of every other sequence that has occurred so far. “What do you think about that?” is a question that one answers very differently depending on what was discussed previously.

The key issue here is combinatoric explosion. Suppose we’re dealing with 100 statements, each 10 characters long, from a vocabulary of 10,000 characters. This means that there are ((10,000)^10)^100 = 10^4000 possible conversations. That’s a ludicrously huge number. It’s bigger than a googol. Even if each atom could store one instruction, there aren’t enough atoms in the known universe. After a few dozen sentences, simply finding the correct file cabinet would be worse than finding a needle in a haystack; it would be finding a hydrogen atom in the whole galaxy.

Even if you assume a shorter memory (which I don’t think is fair; human beings can absolutely remember 100 statements back), say only 10 statements, things aren’t much better: ((10,000)^10)^10 is 10^400, which is still more than the number of atoms in the known universe.

In fact, even if I assume no memory at all, just a simple Markov chain that responds only to your previous statement (which can be easily tripped up by asking the same question in a few different contexts), that would still be 10,000^10 = 10^40 sequences, which is at least a quintillion times the total data storage of every computer currently on Earth.
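
These numbers are easy to verify. Here is a minimal Python sketch; the 10^80 atom count and the 10^22 bytes of total global storage are rough outside estimates, not precise figures:

```python
# Sanity check of the conversation counts; Python ints are arbitrary-precision.
vocab, length = 10_000, 10
statements = vocab ** length          # 10^40 possible 10-character statements

full = statements ** 100              # 100 statements of context: 10^4000
short = statements ** 10              # 10 statements of context:  10^400
markov = statements                   # one statement of context:  10^40

googol = 10 ** 100
atoms = 10 ** 80                      # rough count of atoms in the observable universe

print(len(str(full)) - 1)             # 4000 -- i.e., full == 10^4000
print(full > googol, short > atoms)   # True True
print(markov // 10**22)               # ~10^18: a quintillion times ~10^22 bytes,
                                      # a rough guess at all storage on Earth
```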

And I’m supposed to imagine that this can be done by hand, in real time, in order to carry out a conversation?

Note that I am not simply saying that a person in a room is too slow for the Chinese Room to work. You can use an exaflop quantum supercomputer if you like; it’s still utterly impossible to store and sort through all possible conversations.

This means that, whatever is actually going on inside the head of a real human being, it is nothing like a series of instructions that say “After receiving input A B C, output X.” A human mind cannot even fathom the total set of possible conversations, much less have a cached response to every possible sequence. This means that rules that simple cannot possibly mimic consciousness. This doesn’t mean consciousness isn’t computational; it means you’re doing the wrong kind of computations.

I’m sure Searle’s response would be to say that this is a difference only of degree, not of kind. But is it, really? Sometimes a sufficiently large difference of degree might as well be a difference of kind. (Indeed, perhaps all differences of kind are really very large differences of degree. Remember, there is a continuous series of common ancestors that links you and me to bananas.)

Moreover, Searle has claimed that his point was about semantics rather than consciousness: In an exchange with Daniel Dennett he wrote “Rather he [Dennett] misstates my position as being about consciousness rather than about semantics.” Yet semantics is exactly how we would solve this problem of combinatoric explosion.

Suppose that instead of simply having a list of symbol sequences, the file cabinets contained detailed English-to-Chinese dictionaries and grammars. After reading and memorizing those, then conversing for a while with the Chinese speaker outside the room, who would deny that the person in the room understands Chinese? Indeed what other way is there to understand Chinese, if not reading dictionaries and talking to Chinese speakers?

Now imagine somehow converting those dictionaries and grammars into a form that a computer could directly apply. I don’t simply mean digitizing the dictionary; of course that’s easy, and it’s been done. I don’t even mean writing a program that translates automatically between English and Chinese; people are currently working on this sort of thing, and while still pretty poor, it’s getting better all the time.

No, I mean somehow coding the software so that the computer can respond to sentences in Chinese with appropriate responses in Chinese. I mean having some kind of mapping within the software of how different concepts relate to one another, with categorizations and associations built in.

I mean something like a searchable cross-referenced database, so that when asked the question, “What’s your favorite farm animal?” despite never having encountered this sentence before, the computer can go through a list of farm animals and choose one to designate as its ‘favorite’, and then store that somewhere so that later on when it is again asked it will give the same answer. And then when asked “Why do you like goats?” the computer can go through the properties of goats, choose some to be the ‘reason’ why it ‘likes’ them, and then adjust its future responses accordingly. If it decides that the reason is “horns are cute”, then when you mention some other horned animal, it updates to increase its probability of considering that animal “cute”.

I mean something like a program that is programmed to follow conversational conventions, so that when you ask it its name, it will not only tell you; it will ask you your name in return and store that information for later. And then it will map the sound of your name to known patterns of ethnic naming conventions, and so when you say your name is “Ling-Ling Xu” it asks “Is your family Chinese?” And then when you say “yes” it asks “What part of China are they from?” and then when you say “Shanghai” it asks “Did you grow up there?” and so on. It’s not that it has some kind of rule that says “Respond to ‘Shanghai’ with ‘Did you grow up there?’”; on the contrary, later in the conversation you may say “Shanghai” and get a different response because it was in a different context. In fact, if you were to keep spamming “Shanghai” over and over again, it would sound confused: “Why do you keep saying ‘Shanghai’? I don’t understand.”

In other words, I mean semantics. I mean something approaching how human beings actually seem to organize the meanings of words in their brains. Words map to other words and contexts, and some very fundamental words (like “pain” or “red”) map directly to sensory experiences. If you are asked to define what a word means, you generally either use a lot of other words, or you point to a thing and say “It means that.” Why can’t a robot do the same thing?
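
To make that a little more concrete, here is a toy sketch of the kind of cross-referenced store I have in mind. Every name and entry in it is invented for illustration, and a real system would be enormously richer:

```python
# A toy 'semantic network': words map to categories, traits, and conversation state.
# All data here is made up for illustration.
semantic_net = {
    "goat": {"is_a": "farm animal", "traits": ["horns", "climbing"]},
    "cow": {"is_a": "farm animal", "traits": ["horns", "mooing"]},
    "tulip": {"is_a": "flower", "traits": ["petals"]},
}

memory = {}  # persistent state carried across the conversation

def favorite(category):
    # Choose a 'favorite' once, then answer consistently ever after.
    if category not in memory:
        candidates = [w for w, v in semantic_net.items() if v["is_a"] == category]
        memory[category] = candidates[0]  # stand-in for a real preference process
    return memory[category]

def reason_for_liking(word):
    # Pick a trait as the 'reason', and remember it so that similar
    # words (e.g. other horned animals) can inherit the association.
    trait = semantic_net[word]["traits"][0]
    memory["liked_trait"] = trait
    return trait

print(favorite("farm animal"))    # 'goat' -- and the same answer next time
print(reason_for_liking("goat"))  # 'horns'
```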

I really cannot emphasize enough how radically different that process would be from simply having rules like “After receiving input A B C, output X.” I think part of why Searle’s argument is so seductive is that most people don’t have a keen grasp of computer science, so a task that is O(N^2), like what I just outlined above, doesn’t sound that different to them from a task that is O(10^(10^N)), like the simple input-output rules Searle describes. With a fast enough computer it wouldn’t matter, right? Well, if by “fast enough” you mean “faster than could possibly be built in our known universe”, I guess so. But O(N^2) tasks with N in the thousands are done by your computer all the time; no O(10^(10^N)) task will ever be accomplished for such an N within the Milky Way in the next ten billion years.
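
To see the gulf in plain numbers, here is a rough sketch; the exact constants are arbitrary, only the shapes of the curves matter:

```python
# O(N^2) step counts versus the *number of digits* in an O(10^(10^N)) step count.
for N in (10, 100, 1000):
    quadratic_steps = N ** 2        # done in microseconds on any modern machine
    digits_in_double_exp = 10 ** N  # 10^(10^N) has about 10^N digits
    print(N, quadratic_steps, digits_in_double_exp)

# By N = 100, the double-exponential step count already has more *digits*
# than there are atoms in the observable universe (~10^80).
```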

I suppose you could still insist that this robot, despite having the same conceptual mappings between words as we do, and acquiring new knowledge in the same way we do, and interacting in the world in the same way we do, and carrying on conversations of arbitrary length on arbitrary topics in ways indistinguishable from the way we do, still nevertheless “is not really conscious”. I don’t know how I would conclusively prove you wrong.

But I have two things to say about that: One, how do I know you aren’t such a machine? This is the problem of zombies. Two, is that really how you would react, if you met such a machine? When you see Lieutenant Commander Data on Star Trek: The Next Generation, is your thought “Oh, he’s just a calculating engine that makes a very convincing simulation of human behavior”? I don’t think it is. I think the natural, intuitive response is actually to assume that anything behaving that much like us is in fact a conscious being.

And that’s all the Chinese Room was anyway: Intuition. Searle never actually proved that the person in the room, or the person-room system, or the person-room-environment system, doesn’t actually understand Chinese. He just feels that way, and expects us to feel that way as well. But I contend that if you ever did actually meet a machine that really, truly passed the strictest form of a Turing Test, your intuition would say something quite different: You would assume that machine was as conscious as you and I.

Moral disagreement is not bad faith

Jun 7 JDN 2459008

One of the most dangerous moves to make in an argument is to accuse your opponent of bad faith. It’s a powerful, and therefore tempting, maneuver: If they don’t even really believe what they are saying, then you can safely ignore basically whatever comes out of their mouth. And part of why this is so tempting is that it is in fact occasionally true—people do sometimes misrepresent their true beliefs in various ways for various reasons. On the Internet especially, sometimes people are just trolling.

But unless you have really compelling evidence that someone is arguing in bad faith, you should assume good faith. You should assume that whatever they are asserting is what they actually believe. For if you assume bad faith and are wrong, you have just cut off any hope of civil discourse between the two of you. You have made it utterly impossible for either side to learn anything or change their mind in any way. If you assume good faith and are wrong, you may have been overly charitable; but in the end you are the one that is more likely to persuade any bystanders, not the one who was arguing in bad faith.

Furthermore, it is important to really make an effort to understand your opponent’s position as they understand it before attempting to respond to it. Far too many times, I have seen someone accused of bad faith by an opponent who simply did not understand their worldview—and did not even seem willing to try to understand their worldview.

In this post, I’m going to point out some particularly egregious examples of this phenomenon that I’ve found, all statements made by left-wing people in response to right-wing people. Why am I focusing on these? Well, for one thing, it’s as important to challenge bad arguments on your own side as it is to do so on the other side. I also think I’m more likely to be persuasive to a left-wing audience. I could find right-wing examples easily enough, but I think it would be less useful: It would be too tempting to think that this is something only the other side does.

Example 1: “Republicans Have Stopped Pretending to Care About Life”

The phrase “pro-life” means thinking that abortion is wrong. That’s all it means. It’s jargon at this point. The phrase has taken on this meaning independent of its constituent parts, just as a red herring need not be either red or a fish.

Stop accusing people of not being “truly pro-life” because they don’t adopt some other beliefs that are not related to abortion. Even if those would be advancing life in some sense (most people probably think that most things they think are good advance life in some sense!), they aren’t relevant to the concept of being “pro-life”. Moreover, being “pro-life” in the traditional conservative sense isn’t even about minimizing the harm of abortion or the abortion rate. It’s about emphasizing the moral wrongness of abortion itself, and often even criminalizing it.

I don’t think this is really so hard to understand. If someone truly, genuinely believes that abortion is murdering a child, it’s quite clear why they won’t be convinced by attempts at minimizing harm or trying to reduce the abortion rate via contraception or other social policy. Many policies are aimed at “reducing the demand for abortion”; would you want to “reduce the demand for murder”? No, you’d want murderers to be locked up. You wouldn’t care what their reasons were, and you wouldn’t be interested in using social policies to address those reasons. It’s not even hard to understand why this would be such an important issue to them, overriding almost anything else: If you thought that millions of people were murdering children you would consider that an extremely important issue too.

If you want to convince people to support Roe v. Wade, you’re going to have to change their actual belief that abortion is murder. You may even be able to convince them that they don’t really think abortion is murder—many conservatives support the death penalty for murder, but very few do so for abortion. But they clearly do think that abortion is a grave moral wrong, and you can’t simply end-run around that by calling them hypocrites because they don’t care about whatever other issue you think they should care about.

Example 2: “Stop pretending to care about human life if you support wars in the Middle East”

I had some trouble finding the exact wording of the meme I originally saw with this sentiment, but the gist of it was basically that if you support bombing Afghanistan, Libya, Iraq, and/or Syria, you have lost all legitimacy to claiming that you care about human life.

Say what you will about these wars (though to be honest I think what the US has done in Libya and Syria has done more good than harm), but simply supporting a war does not automatically undermine all your moral legitimacy. The kind of radical pacifism that requires us to never kill anyone ever is utterly unrealistic; the question is and has always been “Which people is it okay to kill, when and how and why?” Some wars are justified; we have to accept that.

It would be different if these were wars of genocidal extermination; I can see a case for saying that anyone who supported the Holocaust or the Rwandan Genocide has lost all moral legitimacy. But even then it isn’t really accurate to say that those people don’t care about human life; it’s much more accurate to say that they have assigned the group of people they want to kill to a subhuman status. Maybe you would actually get more traction by saying “They are human beings too!” rather than by accusing people of not believing in the value of human life.

And clearly these are not wars of extermination—if the US military wanted to exterminate an entire nation of people, they could do so much more efficiently than by using targeted airstrikes and conventional warfare. Remember: They have nuclear weapons. Even if you think that they wouldn’t use nukes because of fear of retaliation (Would Russia or China really retaliate using their own nukes if the US nuked Afghanistan or Iran?), it’s clear that they could have done a lot more to kill a lot more innocent people if that were actually their goal. It’s one thing to say they don’t take enough care not to kill innocent civilians—I agree with that. It’s quite another to say that they actively try to kill innocent civilians—that’s clearly not what is happening.

Example 3: “Stop pretending to be Christian if you won’t help the poor.”

This one I find a good deal more tempting: In the Bible, Jesus does spend an awful lot more words on helping the poor than he does on, well, almost anything else; and he doesn’t even once mention abortion or homosexuality. (The rest of the Bible does at least mention homosexuality, but it really doesn’t have any clear mentions of abortion.) So it really is tempting to say that anyone who doesn’t make helping the poor their number one priority can’t really be a Christian.

But the world is more complicated than that. People can truly and deeply believe some aspects of a religion while utterly rejecting others. They can do this more or less arbitrarily, in a way that may not even be logically coherent. They may even honestly believe every single word of the Bible to be the absolute perfect truth of an absolute perfect God, and yet there are still passages you could point them to that they would have to admit they don’t believe in. (There are literally hundreds of explicit contradictions in the Bible. Many are minor—though they still undermine any claim to absolute perfect truth—but some are really quite substantial. Does God forgive and forget, or does he visit revenge upon generations to come? That’s kind of a big deal! And should we be answering fools or not?) In some sense they don’t really believe that every word is true, then; but they do seem to believe in believing it.

Yes, it’s true; people can worship a penniless son of a carpenter who preached peace and charity and at the same time support cutting social welfare programs and bombing the Middle East. Such a worldview may not be entirely self-consistent; it’s certainly not the worldview that Jesus himself espoused. But it nevertheless is quite sincerely believed by many millions of people.

It may still be useful to understand the Bible in order to persuade Christians to help the poor more. There are certainly plenty of passages you can point them to where Jesus talks about how important it is to help the poor. Likewise, Jesus doesn’t seem to much like the rich, so it is fair to ask: How Christian is it for Republicans to keep cutting taxes on the rich? (I literally laughed out loud when I first saw this meme: “Celebrate Holy Week By Flogging a Banker: It’s What Jesus Would Have Done!“) But you should not accuse people of “pretending to be Christian”. They really do strongly identify themselves as Christian, and would sooner give up almost anything else about their identity. If you accuse them of pretending, all that will do is shut down the conversation.

Now, after all that, let me give one last example that doesn’t fit the trend, one example where I really do think the other side is acting in bad faith.

Example 4: “#AllLivesMatter is a lie. You don’t actually think all lives matter.”

I think this one is actually true. If you truly believed that all lives matter, you wouldn’t post the hashtag #AllLivesMatter in response to #BlackLivesMatter protests against police brutality.

First of all, you’d probably be supporting those protests. But even if you didn’t for some reason, that isn’t how you would use the hashtag. As a genuine expression of caring, the hashtag #AllLivesMatter would only really make sense for something like Oxfam or UNICEF: Here are these human lives that are in danger and we haven’t been paying enough attention to them, and here, you can follow my hashtag and give some money to help them because all lives matter. If it were really about all lives mattering, then you’d see the hashtag pop up after a tsunami in Southeast Asia or a famine in central Africa. (For a while I tried actually using it that way; I quickly found that it was overwhelmed by the bad faith usage and decided to give up.)

No, this hashtag really seems to be trying to use a genuinely reasonable moral norm—all lives matter—as a weapon against a political movement. We don’t see #AllLivesMatter popping up asking people to help save some lives—it’s always as a way of shouting down other people who want to save some lives. It’s a glib response that lets you turn away and ignore their pleas, without ever actually addressing the substance of what they are saying. If you really believed that all lives matter, you would not be so glib; you would want to understand how so many people are suffering and want to do something to help them. Even if you ultimately disagreed with what they were saying, you would respect them enough to listen.

The counterpart #BlueLivesMatter isn’t in bad faith, but it is disturbing in a different way: What are ‘blue lives’? People aren’t born police officers. They volunteer for that job. They can quit if they want. No one can quit being Black. Working as a police officer isn’t even especially dangerous! But it’s not a bad faith argument: These people really do believe that the lives of police officers are worth more—apparently much more—than the lives of Black civilians.

I do admit, the phrasing “#BlackLivesMatter” is a bit awkward, and could be read to suggest that other lives don’t matter, but it takes about 2 minutes of talking to someone (or reading a blog by someone) who supports those protests to gather that this is not their actual view. Perhaps they should have used #BlackLivesMatterToo, but when your misconception is that easily rectified, the responsibility to avoid it falls on you. (Then again, some people do seem to stoke this misconception: I was quite annoyed when a question was asked at a Democratic debate: “Do Black Lives Matter, or Do All Lives Matter?” The correct answer of course is “All lives matter, which is why I support the Black Lives Matter movement.”)

So, yes, bad faith arguments do exist, and sometimes we need to point them out. But I implore you, consider that a last resort, a nuclear option you’ll only deploy when all other avenues have been exhausted. Once you accuse someone of bad faith, you have shut down the conversation completely—preventing you, them, and anyone else who was listening from having any chance of learning or changing their mind.

Terrible but not likely, likely but not terrible

May 17 JDN 2458985

The human brain is a remarkably awkward machine. It’s really quite bad at organizing data, relying on associations rather than formal categories.

It is particularly bad at negation. For instance, if I tell you that right now, no matter what, you must not think about a yellow submarine, the first thing you will do is think about a yellow submarine. (You may even get the Beatles song stuck in your head, especially now that I’ve mentioned it.) A computer would never make such a grievous error.

The human brain is also quite bad at separation. Daniel Dennett coined a word “deepity” for a particular kind of deep-sounding but ultimately trivial aphorism that seems to be quite common, which relies upon this feature of the brain. A deepity has at least two possible readings: On one reading, it is true, but utterly trivial. On another, it would be profound if true, but it simply isn’t true. But if you experience both at once, your brain is triggered for both “true” and “profound” and yields “profound truth”. The example he likes to use is “Love is just a word”. Well, yes, “love” is in fact just a word, but who cares? Yeah, words are words. But love, the underlying concept it describes, is not just a word—though if it were that would change a lot.

One thing I’ve come to realize about my own anxiety is that it involves a wide variety of different scenarios I imagine in my mind, and broadly speaking these can be sorted into two categories: Those that are likely but not terrible, and those that are terrible but not likely.

In the former category we have things like taking an extra year to finish my dissertation; the mean time to completion for a PhD is over 8 years, so finishing in 6 instead of 5 can hardly be considered catastrophic.

In the latter category we have things like dying from COVID-19. Yes, I’m a male with type A blood and asthma living in a high-risk county; but I’m also a young, healthy nonsmoker living under lockdown. Even without knowing the true fatality rate of the virus, my chances of actually dying from it are surely less than 1%.

But when both of those scenarios are running through my brain at the same time, the first triggers a reaction for “likely” and the second triggers a reaction for “terrible”, and I get this feeling that something terrible is actually likely to happen. And indeed if my probability of dying were as high as my probability of needing a 6th year to finish my PhD, that would be catastrophic.

I suppose it’s a bit strange that the opposite doesn’t happen: I never seem to get the improbability of dying attached to the mildness of needing an extra year. The confusion never seems to trigger “neither terrible nor likely”. Or perhaps it does, and my brain immediately disregards that as not worthy of consideration? It makes a certain sort of sense: An event that is neither probable nor severe doesn’t seem to merit much anxiety.

I suspect that many other people’s brains work the same way, eliding distinctions between different outcomes and ending up with a sort of maximal product of probability and severity.
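
To make the conflation concrete, here is a minimal sketch with made-up numbers; the probabilities and severity scores are purely illustrative:

```python
# Each scenario has its own probability and severity (0-100 scale, invented numbers).
scenarios = {
    "extra year to finish the PhD": (0.5, 5),  # likely but not terrible
    "dying of COVID-19": (0.005, 100),         # terrible but not likely
}

for name, (p, severity) in scenarios.items():
    print(f"{name}: expected harm = {p * severity:.2f}")

# What the anxious brain seems to compute instead: the maximal product,
# pairing the highest probability with the highest severity.
worst_p = max(p for p, _ in scenarios.values())
worst_s = max(s for _, s in scenarios.values())
print(f"conflated 'terrible AND likely' feeling: {worst_p * worst_s:.2f}")
```
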
The solution to this is not an easy one: It requires deliberate effort and extensive practice, and benefits greatly from formal training by a therapist. Counter-intuitively, you need to actually focus more on the scenarios that cause you anxiety, and accept the anxiety that such focus triggers in you. I find that it helps to actually write down the details of each scenario as vividly as possible, and review what I have written later. After doing this enough times, you can build up a greater separation in your mind, and more clearly categorize—this one is likely but not terrible, that one is terrible but not likely. It isn’t a cure, but it definitely helps me a great deal. Perhaps it could help you.

Creativity and mental illness

Dec 1 JDN 2458819

There is some truth to the stereotype that artistic people are crazy. Mental illnesses, particularly bipolar disorder, are overrepresented among artists, writers, and musicians. Creative people score highly on literally all five of the Big Five personality traits: They are higher in Openness, higher in Conscientiousness, higher in Extraversion (that one actually surprised me), higher in Agreeableness, and higher in Neuroticism. Creative people just have more personality, it seems.

But in fact mental illness is not as overrepresented among creative people as most people think, and the highest probability of being a successful artist occurs when you have close relatives with mental illness, but are not yourself mentally ill. Those with mental illness actually tend to be most creative when their symptoms are in remission. This suggests that the apparent link between creativity and mental illness may actually increase over time, as treatments improve and remission becomes easier.

One possible source of the link is that artistic expression may be a form of self-medication: Art therapy does seem to have some promise in treating a variety of mental disorders (though it is not nearly as effective as talk therapy and medication). But that wouldn’t explain why family history of mental illness is actually a better predictor of creativity than mental illness itself.

My guess is that in order to be creative, you need to think differently than other people. You need to see the world in a way that others do not see it. Mental illness is surely not the only way to do that, but it’s definitely one way.

But creativity also requires basic functioning: If you are totally crippled by a mental illness, you’re not going to be very creative. So the people who are most creative have just enough craziness to think differently, but not so much that it takes over their lives.

This might even help explain how mental illness persisted in our population, despite its obvious survival disadvantages. It could be some form of heterozygote advantage.

The classic example of heterozygote advantage is sickle-cell anemia: If you have no copies of the sickle-cell gene, you’re normal. If you have two copies, you have sickle-cell anemia, which is very bad. But if you have only one copy, you’re healthy—and you’re resistant to malaria. Thus, high risk of malaria—as our ancestors certainly faced, living in central Africa—creates a selection pressure that keeps sickle-cell genes in the population, even though having two copies is much worse than having none at all.
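
The dynamics are easy to simulate. Here is a minimal sketch; the fitness values are hypothetical stand-ins chosen for illustration, not real epidemiological estimates:

```python
# Heterozygote advantage at one locus; relative fitnesses are hypothetical.
w_AA = 0.85  # no sickle-cell allele: vulnerable to malaria
w_AS = 1.00  # one copy: malaria-resistant, no anemia
w_SS = 0.20  # two copies: sickle-cell anemia

p = 0.01  # starting frequency of the sickle-cell allele S
for _ in range(500):
    q = 1 - p
    mean_w = q*q*w_AA + 2*p*q*w_AS + p*p*w_SS  # population mean fitness
    # Standard one-locus selection update: S's share of the next gene pool.
    p = (p*p*w_SS + p*q*w_AS) / mean_w

print(round(p, 3))  # ~0.158: the allele persists even though w_SS is terrible
```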

Mental illness might function something like this. I suspect it’s far more complicated than sickle-cell anemia, which is literally just two alleles of a single gene; but the overall process may be similar. If having just a little bit of bipolar disorder or schizophrenia makes you see the world differently than other people and makes you more creative, there are lots of reasons why that might improve the survival of your genes: There are the obvious problem-solving benefits, but also the simple fact that artists are sexy.

The downside of such “weird-thinking” genes is that they can go too far and make you mentally ill, perhaps if you have too many copies of them, or if you face an environmental trigger that sets them off. Sometimes the reason you see the world differently than everyone else is that you’re just seeing it wrong. But if the benefits of creativity are high enough—and they surely are—this could offset the risks, in an evolutionary sense.

But one thing is quite clear: If you are mentally ill, don’t avoid treatment for fear it will damage your creativity. Quite the opposite: A mental illness that is well treated and in remission is the optimal state for creativity. Go seek treatment, so that your creativity may blossom.

Pascal’s Mugging

Nov 10 JDN 2458798

In the Singularitarian community there is a paradox known as “Pascal’s Mugging”. The name is an intentional reference to Pascal’s Wager (and the link is quite apt, for reasons I’ll discuss in a later post).

There are a few different versions of the argument; Yudkowsky’s original argument in which he came up with the name “Pascal’s Mugging” relies upon the concept of the universe as a simulation and an understanding of esoteric mathematical notation. So here is a more intuitive version:

A strange man in a dark hood comes up to you on the street. “Give me five dollars,” he says, “or I will destroy an entire planet filled with ten billion innocent people. I cannot prove to you that I have this power, but how much is an innocent life worth to you? Even if it is as little as $5,000, are you really willing to bet on ten trillion to one odds that I am lying?”

Do you give him the five dollars? I suspect that you do not. Indeed, I suspect that you’d be less likely to give him the five dollars than if he had merely said he was homeless and asked for five dollars to help pay for food. (Also, you may have objected that you value innocent lives, even faraway strangers you’ll never meet, at more than $5,000 each—but if that’s the case, you should probably be donating more, because the world’s best charities can save a life for about $3,000.)

But therein lies the paradox: Are you really willing to bet on ten trillion to one odds?

This argument gives me much the same feeling as the Ontological Argument; as Russell said of the latter, “it is much easier to be persuaded that ontological arguments are no good than it is to say exactly what is wrong with them.” It wasn’t until I read this post on GiveWell that I could really formulate the answer clearly enough to explain it.

The apparent force of Pascal’s Mugging comes from the idea of expected utility: Even if the probability of an event is very small, if it has a sufficiently great impact, the expected utility can still be large.

The problem with this argument is that extraordinary claims require extraordinary evidence. If a man held a gun to your head and said he’d shoot you if you didn’t give him five dollars, you’d give him five dollars. This is a plausible claim and he has provided ample evidence. If he were instead wearing a bomb vest (or even just really puffy clothing that could conceal a bomb vest), and he threatened to blow up a building unless you gave him five dollars, you’d probably do the same. This is less plausible (what kind of terrorist only demands five dollars?), but it’s not worth taking the chance.

But when he claims to have a Death Star parked in orbit of some distant planet, primed to make another Alderaan, you are right to be extremely skeptical. And if he claims to be a being from beyond our universe, primed to destroy so many lives that we couldn’t even write the number down with all the atoms in our universe (which was actually Yudkowsky’s original argument), to say that you are extremely skeptical seems a grievous understatement.

That GiveWell post provides a way to make this intuition mathematically precise in terms of Bayesian logic. If you have a normal prior with mean 0 and standard deviation 1, and you are presented with a likelihood with mean X and standard deviation X, what should your posterior distribution be?

Normal priors are quite convenient; they conjugate nicely. The precision (inverse variance) of the posterior distribution is the sum of the two precisions, and the mean is a weighted average of the two means, weighted by their precision.

So the posterior variance is 1/(1 + 1/X^2).

The posterior mean is (1/(1+1/X^2))*(0) + ((1/X^2)/(1+1/X^2))*(X) = X/(X^2+1).

That is, the mean of the posterior distribution is just barely higher than zero—and in fact, it is decreasing in X, if X > 1.
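
If you would like to check the algebra, here is a quick sketch using sympy; it just re-derives the posterior mean above and confirms that it is decreasing for X > 1:

```python
# Verify the conjugate normal update symbolically.
import sympy as sp

X = sp.symbols('X', positive=True)
prior_prec = 1          # prior: mean 0, standard deviation 1
like_prec = 1 / X**2    # likelihood: mean X, standard deviation X

post_mean = (prior_prec * 0 + like_prec * X) / (prior_prec + like_prec)

print(sp.simplify(post_mean))              # X/(X**2 + 1)
print(sp.simplify(sp.diff(post_mean, X)))  # (1 - X^2)/(X^2 + 1)^2: negative for X > 1
```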

For those who don’t speak Bayesian: If someone says he’s going to have an effect of magnitude X, you should be less likely to believe him the larger that X is. And indeed this is precisely what our intuition said before: If he says he’s going to kill one person, believe him. If he says he’s going to destroy a planet, don’t believe him, unless he provides some really extraordinary evidence.

What sort of extraordinary evidence? To his credit, Yudkowsky imagined the sort of evidence that might actually be convincing:

If a poorly-dressed street person offers to save 10^(10^100) lives (a googolplex lives) for $5 using their Matrix Lord powers, and you claim to assign this scenario less than 10^-(10^100) probability, then apparently you should continue to believe absolutely that their offer is bogus even after they snap their fingers and cause a giant silhouette of themselves to appear in the sky.

This post he called “Pascal’s Muggle”, after the term from the Harry Potter series, since some of the solutions that had been proposed for dealing with Pascal’s Mugging had resulted in a situation almost as absurd, in which the mugger could exhibit powers beyond our imagining and yet nevertheless we’d never have sufficient evidence to believe him.

So, let me go on record as saying this: Yes, if someone snaps his fingers and causes the sky to rip open and reveal a silhouette of himself, I’ll do whatever that person says. The odds are still higher that I’m dreaming or hallucinating than that this is really a being from beyond our universe, but if I’m dreaming, it makes no difference, and if someone can make me hallucinate that vividly he can probably cajole the money out of me in other ways. And there might be just enough chance that this could be real that I’m willing to give up that five bucks.

These seem like really strange thought experiments, because they are. But like many good thought experiments, they can provide us with some important insights. In this case, I think they are telling us something about the way human reasoning can fail when faced with impacts beyond our normal experience: We are in danger of both over-estimating and under-estimating their effects, because our brains aren’t equipped to deal with magnitudes and probabilities on that scale. This has made me realize something rather important about both Singularitarianism and religion, but I’ll save that for next week’s post.

Mental illness is different from physical illness

Post 311 Oct 13 JDN 2458770

There’s something I have heard a lot of people say about mental illness that is obviously well-intentioned, but ultimately misguided: “Mental illness is just like physical illness.”

Sometimes they say it explicitly in those terms. Other times they make analogies, like “If you wouldn’t shame someone with diabetes for using insulin, why shame someone with depression for using SSRIs?”

Yet I don’t think this line of argument will ever meaningfully reduce the stigma surrounding mental illness, because, well, it’s obviously not true.

There are some characteristics of mental illness that are analogous to physical illness—but there are some that really are quite different. And these are not just superficial differences, the way that pancreatic disease is different from liver disease. No one would say that liver cancer is exactly the same as pancreatic cancer; but they’re both obviously of the same basic category. There are differences between physical and mental illness which are both obvious and fundamental.

Here’s the biggest one: Talk therapy works on mental illness.

You can’t talk yourself out of diabetes. You can’t talk yourself out of a myocardial infarction. You can’t even talk yourself out of migraine (though I’ll get back to that one in a little bit). But you can, in a very important sense, talk yourself out of depression.

In fact, talk therapy is one of the most effective treatments for most mental disorders. Cognitive behavioral therapy for depression is on its own as effective as most antidepressants (with far fewer harmful side effects), and the two combined are clearly more effective than either alone. Talk therapy is as effective as medication for bipolar disorder, and considerably more effective for social anxiety disorder.

To be clear: Talk therapy is not just people telling you to cheer up, or saying it’s “all in your head”, or suggesting that you get more exercise or eat some chocolate. Nor does it consist of you ruminating by yourself and trying to talk yourself out of your disorder. Cognitive behavioral therapy is a very complex, sophisticated series of techniques that require years of expert training to master. Yet, at its core, cognitive therapy really is just a very sophisticated form of talking.

The fact that mental disorders can be so strongly affected by talk therapy shows that there really is an important sense in which mental disorders are “all in your head”, and not just the trivial way that an axe wound or even a migraine is all in your head. It isn’t just the fact that it is physically located in your brain that makes a mental disorder different; it’s something deeper than that.

Here’s the best analogy I can come up with: Physical illness is hardware. Mental illness is software.

If a computer breaks after being dropped on the floor, that’s like an axe wound: An obvious, traumatic source of physical damage that is an unambiguous cause of the failure.

If a computer’s CPU starts overheating, that’s like a physical illness, like diabetes: There may be no particular traumatic cause, or even any clear cause at all, but there is obviously something physically wrong that needs physical intervention to correct.

But if a computer is suffering glitches and showing error messages when it tries to run particular programs, that is like mental illness: Something is wrong not in the low-level hardware, but in the high-level software.

These different types of problem require different types of solutions. If your CPU is overheating, you might want to see about replacing your cooling fan or your heat sink. But if your software is glitching while your CPU is otherwise running fine, there’s no point in replacing your fan or heat sink. You need to get a programmer in there to look at the code and find out where it’s going wrong. A talk therapist is like a programmer: The words they say to you are like code scripts they’re trying to get your processor to run correctly.

Of course, our understanding of computers is vastly better than our understanding of human brains, and as a result, programmers tend to get a lot better results than psychotherapists. (Interestingly they do actually get paid about the same, though! Programmers make about 10% more on average than psychotherapists, and both are solidly within the realm of average upper-middle-class service jobs.) But the basic process is the same: Using your expert knowledge of the system, find the right set of inputs that will fix the underlying code and solve the problem. At no point do you physically intervene on the system; you could do it remotely without ever touching it—and indeed, remote talk therapy is a thing.

What about other neurological illnesses, like migraine or fibromyalgia? Well, I think these are somewhere in between. They’re definitely more physical in some sense than a mental disorder like depression. There isn’t any cognitive content to a migraine the way there is to a depressive episode. When I feel depressed or anxious, I feel depressed or anxious about something. But there’s nothing a migraine is about. To use the technical term in cognitive science, neurological disorders lack the intentionality that mental disorders generally have. “What are you depressed about?” is a question you usually can answer. “What are you migrained about?” generally isn’t.

But like mental disorders, neurological disorders are directly linked to the functioning of the brain, and often seem to operate at a higher level of functional abstraction. The brain doesn’t have pain receptors on itself the way most of your body does; getting a migraine behind your left eye doesn’t actually mean that that specific lobe of your brain is what’s malfunctioning. It’s more like a general alert your brain is sending out that something is wrong, somewhere. And fibromyalgia often feels like it’s taking place in your entire body at once. Moreover, most neurological disorders are strongly correlated with mental disorders—indeed, the comorbidity of depression with migraine and fibromyalgia in particular is extremely high.

Which disorder causes the other? That’s a surprisingly difficult question. Intuitively we might expect the “more physical” disorder to be the primary cause, but that’s not always clear. Successful treatment for depression often improves symptoms of migraine and fibromyalgia as well (though the converse is also true). They seem to be mutually reinforcing one another, and it’s not at all clear which came first. I suppose if I had to venture a guess, I’d say the pain disorders probably have causal precedence over the mood disorders, but I don’t actually know that for a fact.

To stretch my analogy a little, it may be like a software problem that ends up causing a hardware problem, or a hardware problem that ends up causing a software problem. There actually have been a few examples of this, like games with graphics so demanding that they caused GPUs to overheat.

The human brain is a lot more complicated than a computer, and the distinction between software and hardware is fuzzier; we don’t actually have “code” that runs on a “processor”. We have synapses that continually fire on and off and rewire each other. The closest thing we have to code that gets processed in sequence would be our genome, and that is several orders of magnitude less complex than the structure of our brains. Aside from simply physically copying the entire brain down to every synapse, it’s not clear that you could ever “download” a mind, science fiction notwithstanding.

Indeed, anything that changes your mind necessarily also changes your brain; the effects of talking are generally subtler than the effects of a drug (and certainly subtler than the effects of an axe wound!), but they are nevertheless real, physical changes. (This is why it is so idiotic whenever the popular science press comes out with: “New study finds that X actually changes your brain!” where X might be anything from drinking coffee to reading romance novels. Of course it does! If it has an effect on your mind, it did so by having an effect on your brain. That’s the Basic Fact of Cognitive Science.) This is not so different from computers, however: Any change in software is also a physical change, in the form of some sequence of electrical charges that were moved from one place to another. Actual physical electrons are a few microns away from where they otherwise would have been because of what was typed into that code.

Of course I want to reduce the stigma surrounding mental illness. (For both selfish and altruistic reasons, really.) But blatantly false assertions don’t seem terribly productive toward that goal. Mental illness is different from physical illness; we can’t treat it the same.

Pinker Propositions

May 19 JDN 2458623

What do the following statements have in common?

1. “Capitalist countries have less poverty than Communist countries.”

2. “Black men in the US commit homicide at a higher rate than White men.”

3. “On average, in the US, Asian people score highest on IQ tests, White and Hispanic people score near the middle, and Black people score the lowest.”

4. “Men on average perform better at visual tasks, and women on average perform better on verbal tasks.”

5. “In the United States, White men are no more likely to be mass shooters than other men.”

6. “The genetic heritability of intelligence is about 60%.”

7. “The plurality of recent terrorist attacks in the US have been committed by Muslims.”

8. “The period of US military hegemony since 1945 has been the most peaceful period in human history.”

These statements have two things in common:

1. All of these statements are objectively true facts that can be verified by rich and reliable empirical data which is publicly available and uncontroversially accepted by social scientists.

2. If spoken publicly among left-wing social justice activists, all of these statements will draw resistance, defensiveness, and often outright hostility. Anyone making these statements is likely to be accused of racism, sexism, imperialism, and so on.

I call such propositions Pinker Propositions, after an excellent talk by Steven Pinker illustrating several of the above statements (which was then taken wildly out of context by social justice activists on social media).

The usual reaction to these statements suggests that people think they imply harmful far-right policy conclusions. This inference is utterly wrong: A nuanced understanding of each of these propositions does not in any way lead to far-right policy conclusions—in fact, some rather strongly support left-wing policy conclusions.

1. Capitalist countries have less poverty than Communist countries, because Communist countries are nearly always corrupt and authoritarian. Social democratic countries have the lowest poverty and the highest overall happiness (#ScandinaviaIsBetter).

2. Black men commit more homicide than White men because of poverty, discrimination, mass incarceration, and gang violence. Black men are also greatly overrepresented among victims of homicide, as most homicide is intra-racial. Homicide rates vary across ethnic and socioeconomic groups, and they shift over time with cultural and political changes.

3. IQ tests are a highly imperfect measure of intelligence, and the genetics of intelligence cut across our socially-constructed concept of race. There is far more within-group variation in IQ than between-group variation. Intelligence is not fixed at birth but is affected by nutrition, upbringing, exposure to toxins, and education—all of which statistically put Black people at a disadvantage. Nor does intelligence remain constant within populations: The Flynn Effect is the well-documented increase in intelligence which has occurred in almost every country over the past century. Far from justifying discrimination, these provide very strong reasons to improve opportunities for Black children. The lead and mercury in Flint’s water suppressed the brain development of thousands of Black children—that’s going to lower average IQ scores. But that says nothing about supposed “inherent racial differences” and everything about the catastrophic damage of environmental racism.

4. To be quite honest, I never even understood why this one shocks—or even surprises—people. It’s not even saying that men are “smarter” than women—overall IQ is almost identical. It’s just saying that men are more visual and women are more verbal. And this, I think, is actually quite obvious. I think the clearest evidence of this—the “interocular trauma” that will convince you the effect is real and worth talking about—is pornography. Visual porn is overwhelmingly consumed by men, even when it was designed for women (e.g. Playgirl: a majority of its readers are gay men, even though there are ten times as many straight women in the world as there are gay men). Conversely, erotic novels are overwhelmingly consumed by women. I think a lot of anti-porn feminism can actually be explained by this effect: Feminists (who are usually women, for obvious reasons) can say they are against “porn” when what they are really against is visual porn, because visual porn is consumed by men; then the kind of porn that they like (erotic literature) doesn’t count as “real porn”. And honestly they’re mostly against the current structure of the live-action visual porn industry, which is totally reasonable—but it’s a far cry from being against porn in general. I have some serious issues with how our farming system is currently set up, but I’m not against farming.

5. This one is interesting, because it’s a lack of a race difference, which is normally what the left wing wants to hear. The difference, of course, is that this alleged disparity would make White men look bad, and that’s apparently seen as a desirable goal for social justice. But the data just doesn’t bear it out: While indeed most mass shooters are White men, that’s because most Americans are White, which is a totally uninteresting reason. There’s no clear evidence of any racial disparity in mass shootings—though the gender disparity is absolutely overwhelming: It’s almost always men.

6. Heritability is a subtle concept; it doesn’t mean what most people seem to think it means. It doesn’t mean that 60% of your intelligence is due to your genes. Indeed, I’m not even sure what that sentence would actually mean; it’s like saying that 60% of the flavor of a cake is due to the eggs. What this heritability figure actually means is that when you compare across individuals in a population, and carefully control for environmental influences, you find that about 60% of the variance in IQ scores is explained by genetic factors (see the short simulation after this list). But this is within a particular population—here, US adults—and is absolutely dependent on all sorts of other variables. The more flexible one’s environment becomes, the more people self-select into their preferred environment, and the more heritable traits become. As a result, IQ actually becomes more heritable as children become adults, a phenomenon called the Wilson Effect.

7. This one might actually have some contradiction with left-wing policy. The disproportionate participation of Muslims in terrorism—controlling for just about anything you like: income, education, age, etc.—really does suggest that, at least at this point in history, there is some real ideological link between Islam and terrorism. But the fact remains that the vast majority of Muslims are not terrorists and do not support terrorism, and antagonizing all the people of an entire religion is fundamentally unjust as well as likely to backfire in various ways. We should instead be trying to encourage the spread of more tolerant forms of Islam, and maintaining the strict boundaries of secularism to prevent the encroachment of any religion on our system of government.

8. The fact that US military hegemony does seem to be a cause of global peace doesn’t imply that every single military intervention by the US is justified. In fact, it doesn’t even necessarily imply that any such interventions are justified—though I think one would be hard-pressed to say that the NATO intervention in the Kosovo War or the defense of Kuwait in the Gulf War was unjustified. It merely points out that having a hegemon is clearly preferable to having a multipolar world where many countries jockey for military supremacy. The Pax Romana was a time of peace but also authoritarianism; the Pax Americana is better, but that doesn’t prevent us from criticizing the real harms—including major war crimes—committed by the United States.
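Since heritability-as-variance-explained (item 6 above) trips up so many people, here is a toy simulation of the idea; this is my own illustration, not a model of real IQ data:

```python
import random

# Toy model: each person's trait score is a genetic component plus an
# independent environmental component. "Heritability" is the share of
# the total variance contributed by the genetic component.
random.seed(0)
N = 100_000
genetic = [random.gauss(0, 60 ** 0.5) for _ in range(N)]      # variance ~ 60
environment = [random.gauss(0, 40 ** 0.5) for _ in range(N)]  # variance ~ 40
trait = [g + e for g, e in zip(genetic, environment)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(f"Heritability: {variance(genetic) / variance(trait):.2f}")  # ~ 0.60
```

Notice that the 0.60 is a fact about this particular population: double the environmental variance (say, by putting lead in some children’s water) and the very same genes yield a much lower heritability. That is exactly why the figure cannot be read as “60% of your intelligence is due to your genes.”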

So it is entirely possible to know and understand these facts without adopting far-right political views.

Yet Pinker’s point—and mine—is that by suppressing these true facts, by responding with hostility or even ostracism to anyone who states them, we are actually adding fuel to the far-right fire. Instead of presenting the nuanced truth and explaining why it doesn’t imply such radical policies, we attack the messenger; and this leads people to conclude three things:

1. The left wing is willing to lie and suppress the truth in order to achieve political goals (they’re doing it right now).

2. These statements actually do imply right-wing conclusions (else why suppress them?).

3. Since these statements are true, that must mean the right-wing conclusions are actually correct.

Now (especially if you are someone who identifies unironically as “woke”), you might be thinking something like this: “Anyone who can be turned away from social justice so easily was never a real ally in the first place!”

This is a fundamentally and dangerously wrongheaded view. No one—not me, not you, not anyone—was born believing in social justice. You did not emerge from your mother’s womb ranting against colonialist imperialism. You had to learn what you now know. You came to believe what you now believe, after once believing something else that you now think is wrong. This is true of absolutely everyone everywhere. Indeed, the better you are, the more true it is; good people learn from their mistakes and grow in their knowledge.

This means that anyone who is now an ally of social justice once was not. And that, in turn, suggests that many people who are currently not allies could become so, under the right circumstances. They would probably not shift all at once—as I didn’t, and I doubt you did either—but if we are welcoming and open and honest with them, we can gradually tilt them toward greater and greater levels of support.

But if we reject them immediately for being impure, they never get the chance to learn, and we never get the chance to sway them. People who are currently uncertain of their political beliefs will become our enemies because we made them our enemies. We declared that if they would not immediately commit to everything we believe, then they may as well oppose us. They, quite reasonably unwilling to commit to a detailed political agenda they didn’t understand, decided that it would be easiest to simply oppose us.

And we don’t have to win over every person on every single issue. We merely need to win over a large enough critical mass on each issue to shift policies and cultural norms. Building a wider tent is not compromising on your principles; on the contrary, it’s how you actually win and make those principles a reality.

There will always be those we cannot convince, of course. And I admit, there is something deeply irrational about going from “those leftists attacked Charles Murray” to “I think I’ll start waving a swastika”. But humans aren’t always rational; we know this. You can lament this, complain about it, yell at people for being so irrational all you like—it won’t actually make people any more rational. Humans are tribal; we think in terms of teams. We need to make our team as large and welcoming as possible, and suppressing Pinker Propositions is not the way to do that.