Sincerity inflation

Aug 30 JDN 2459092

What is the most saccharine, empty, insincere way to end a letter? “Sincerely”.

Whence such irony? Well, we’ve all been using it for so long that we barely notice it anymore. It’s just the standard way to end a letter now.

This process is not unlike inflation: As more and more dollars get spent, the value of a dollar decreases, and as a word or phrase gets used more and more, its meaning weakens.

It’s hardly just the word “Sincerely” itself that has thus inflated. Indeed, almost any sincere expression of caring now feels empty. We routinely ask strangers “How are you?” when we don’t actually care how they are.

I felt this quite vividly when I was applying to GiveWell (alas, they decided not to hire me). I was trying to express how much I care about GiveWell’s mission to maximize the effectiveness of charity at saving lives, and it was quite hard to find the words. I kept finding myself saying things that anyone could say, whether they really cared or not. Fighting global poverty is nothing less than my calling in life—but how could I say that without sounding obsequious or hyperbolic? Anyone can say that they care about global poverty—and if you asked them, hardly anyone would say that they don’t care at all about saving African children from malaria—but how many people actually give money to the Against Malaria Foundation?

Or think about how uncomfortable it can feel to tell a friend that you care about them. I’ve seen quite a few posts on social media that are sort of scattershot attempts at this: “I love you all!” Since that is obviously not true—you do not in fact love all 286 of your Facebook friends—it has plausible deniability. But you secretly hope that the ones you really do care about will see its truth.

Where is this ‘sincerity inflation’ coming from? It can’t really be from overuse of sincerity in ordinary conversation—the question is precisely why such conversation is so rare.

But there is a clear source of excessive sincerity, and it is all around us: Advertising.

Every product is the “best”. They will all “change your life”. You “need” every single one. Every corporation “supports family”. Every product will provide “better living”. The product could be a toothbrush or an automobile; the ads are never really about the product. They are about how the corporation will make your family happy.

Consider the following hilarious subversion by the Steak-umms Twitter account (which is a candle in the darkness of these sad times; they have lots of really great posts about Coronavirus and critical thinking).

Kevin Farzard (who I know almost nothing about, but gather he’s a comedian?) wrote this on Twitter: “I just want one brand to tell me that we are not in this together and their health is our lowest priority”

Steak-umms diligently responded: “Kevin we are not in this together and your health is our lowest priority”

Why is this amusing? Because every other corporation—whose executives surely care less about public health than whatever noble creature runs the Steak-umms Twitter feed—has been saying the opposite: “We are all in this together and your health is our highest priority.”

We are so inundated with this saccharine sincerity by advertisers that we learn to tune it out—we have to, or else we’d go crazy and/or bankrupt. But this has an unfortunate side effect: We tune out expressions of caring when they come from other human beings as well.

Therefore let us endeavor to change this, to express our feelings clearly and plainly to those around us, while continuing to shield ourselves from the bullshit of corporations. (I choose that word carefully: These aren’t lies, they’re bullshit. They aren’t false so much as they are utterly detached from truth.) Part of this means endeavoring to be accepting and supportive when others express their feelings to us, not retreating into the comfort of dismissal or sarcasm. Restoring the value of our sincerity will require a concerted effort from many people acting at once.

For this project to succeed, we must learn to make a sharp distinction between the institutions that are trying to extract profits from us and the people who have relationships with us. This is not to say that human beings cannot lie or be manipulative; of course they can. Trust is necessary for all human relationships, but there is such a thing as too much trust. There is a right amount of trust to extend to people you do not know, and it is neither complete distrust nor complete trust. Higher levels of trust must be earned.

But at least human beings are not systematically designed to be amoral and manipulative—which corporations are. A corporation exists to do one thing: Maximize profit for its shareholders. Whatever else a corporation is doing, it is in service of that one ultimate end. Corporations can do many good things; but they sort of do them by accident, along the way toward their goal of maximizing profit. And when those good things stop being profitable, they stop doing them. Keep these facts in mind, and you may have an easier time ignoring everything that corporations say without training yourself to tune out all expressions of sincerity.

Then, perhaps one day it won’t feel so uncomfortable to tell people that we care about them.

Reflections on the Chinese Room

Jul 12 JDN 2459044

Perhaps the most famous thought experiment in the philosophy of mind, John Searle’s Chinese Room is the sort of argument that basically every expert knows is wrong, yet can’t quite explain what is wrong with it. Here’s a brief summary of the argument; for more detail you can consult Wikipedia or the Stanford Encyclopedia of Philosophy.

I am locked in a room. The only way to communicate with me is via a slot in the door, through which papers can be passed.

Someone on the other side of the door is passing me papers with Chinese writing on them. I do not speak any Chinese. Fortunately, there is a series of file cabinets in the room, containing instruction manuals which explain (in English) what an appropriate response in Chinese would be to any given input of Chinese characters. These instructions are simply conditionals like “After receiving input A B C, output X.”

I can follow these instructions and thereby ‘hold a conversation’ in Chinese with the person outside, despite never understanding Chinese.

This room is like a Turing Test. A computer is fed symbols and has instructions telling it to output symbols; it may ‘hold a conversation’, but it will never really understand language.
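To make Searle’s setup concrete, here is a minimal sketch of his instruction manuals as a pure lookup table from input sequences to canned outputs. The entries are invented placeholder romanizations for illustration, not a real rule set:

```python
# A minimal sketch of the Chinese Room's instruction manuals as Searle
# describes them: conditionals of the form "After receiving input A B C,
# output X," implemented as a lookup table. The entries are invented
# placeholders, not real Chinese.
RULE_BOOK = {
    ("ni", "hao"): "ni hao",            # greeting -> greeting
    ("ni", "hao", "ma"): "wo hen hao",  # "how are you?" -> "I'm fine"
}

def chinese_room(symbols):
    """Follow the instructions blindly: match the input, emit the output."""
    return RULE_BOOK.get(tuple(symbols), "?")  # no matching rule -> shrug

print(chinese_room(["ni", "hao", "ma"]))  # replies with zero understanding
```

The rest of this essay is about why a table like this cannot possibly scale to open-ended conversation.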

First, let me note that if this argument were right, it would pretty much doom the entire project of cognitive science. Searle seems to think that calling consciousness a “biological function” as opposed to a “computation” can somehow solve this problem; but this is not how functions work. We don’t say that a crane ‘isn’t really lifting’ because it’s not made of flesh and bone. We don’t say that an airplane ‘isn’t really flying’ because it doesn’t flap its wings like a bird. He often compares consciousness to digestion, which is unambiguously a biological function; but if you make a machine that processes food chemically in the same way as digestion, that is basically a digestion machine. (In fact there is a machine called a digester that basically does that.) If Searle is right that no amount of computation could ever get you to consciousness, then we basically have no idea how anything would ever get us to consciousness.

Second, I’m guessing that the argument sounds fairly compelling, especially if you’re not very familiar with the literature. Searle chose his examples very carefully to create a powerfully seductive analogy that tilts our intuitions in a particular direction.

There are various replies that have been made to the Chinese Room. Some have pointed out that the fact that I don’t understand Chinese doesn’t mean that the system doesn’t understand Chinese (the “Systems Reply”). Others have pointed out that in the real world, conscious beings interact with their environment; they don’t just passively respond to inputs (the “Robot Reply”).

Searle has his own counter-reply to these arguments: He insists that if instead of having all those instruction manuals, I memorized all the rules, and then went out in the world and interacted with Chinese speakers, it would still be the case that I didn’t actually understand Chinese. This seems quite dubious to me: For one thing, how is that different from what we would actually observe in someone who does understand Chinese? For another, once you’re interacting with people in the real world, they can do things like point to an object and say the word for it; in such interactions, wouldn’t you eventually learn to genuinely understand the language?

But I’d like to take a somewhat different approach, and instead attack the analogy directly. The argument I’m making here is very much in the spirit of Churchland’s Luminous Room reply, but a little more concrete.

I want you to stop and think about just how big those file cabinets would have to be.

For a proper Turing Test, you can’t have a pre-defined list of allowed topics and canned responses. You’re allowed to talk about anything and everything. There are thousands of symbols in Chinese. There’s no specified limit to how long the test needs to go, or how long each sentence can be.

After each 10-character sequence, the person in the room has to somehow sort through all those file cabinets and find the right set of instructions—not simply to find the correct response to that particular 10-character sequence, but to that sequence in the context of every other sequence that has occurred so far. “What do you think about that?” is a question that one answers very differently depending on what was discussed previously.

The key issue here is combinatoric explosion. Suppose we’re dealing with 100 statements, each 10 characters long, from a vocabulary of 10,000 characters. This means that there are ((10,000)^10)^100 = 10^4000 possible conversations. That’s a ludicrously huge number. It’s bigger than a googol. Even if each atom could store one instruction, there aren’t enough atoms in the known universe. After a few dozen sentences, simply finding the correct file cabinet would be worse than finding a needle in a haystack; it would be like finding a single hydrogen atom somewhere in the whole galaxy.

Even if you assume a shorter memory (which I don’t think is fair; human beings can absolutely remember 100 statements back), say only 10 statements, things aren’t much better: ((10,000)^10)^10 is 10^400, which still dwarfs the number of atoms in the known universe (roughly 10^80).

In fact, even if I assume no memory at all, just a simple Markov chain that responds only to your previous statement (which can be easily tripped up by asking the same question in a few different contexts), that would still be 10,000^10 = 10^40 sequences, which is at least a quintillion times the total data storage of every computer currently on Earth.
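For readers who want to check these numbers, here is a short Python sketch recomputing them. The digit counts come out exact, since everything involved is a power of ten:

```python
# Recomputing the essay's combinatorics: statements 10 characters long,
# drawn from a vocabulary of 10,000 characters, with different amounts
# of conversational memory.
vocab, length = 10_000, 10

per_statement = vocab ** length        # 10^40 possible single statements
markov        = per_statement          # memoryless: respond only to the last statement
short_memory  = per_statement ** 10    # remember the last 10 statements
full_memory   = per_statement ** 100   # remember all 100 statements

for name, n in [("no memory (Markov)", markov),
                ("10-statement memory", short_memory),
                ("100-statement memory", full_memory)]:
    # len(str(n)) - 1 gives the exponent exactly, since n is a power of 10
    print(f"{name}: 10^{len(str(n)) - 1} possible input sequences")
```

Python’s arbitrary-precision integers handle 10^4000 without complaint; the physical universe is the thing that can’t.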

And I’m supposed to imagine that this can be done by hand, in real time, in order to carry out a conversation?

Note that I am not simply saying that a person in a room is too slow for the Chinese Room to work. You can use an exaflop quantum supercomputer if you like; it’s still utterly impossible to store and sort through all possible conversations.

This means that, whatever is actually going on inside the head of a real human being, it is nothing like a series of instructions that say “After receiving input A B C, output X.” A human mind cannot even fathom the total set of possible conversations, much less have a cached response to every possible sequence. This means that rules that simple cannot possibly mimic consciousness. This doesn’t mean consciousness isn’t computational; it means you’re doing the wrong kind of computations.

I’m sure Searle’s response would be to say that this is a difference only of degree, not of kind. But is it, really? Sometimes a sufficiently large difference of degree might as well be a difference of kind. (Indeed, perhaps all differences of kind are really very large differences of degree. Remember, there is a continuous series of common ancestors that links you and me to bananas.)

Moreover, Searle has claimed that his point was about semantics rather than consciousness: In an exchange with Daniel Dennett he wrote “Rather he [Dennett] misstates my position as being about consciousness rather than about semantics.” Yet semantics is exactly how we would solve this problem of combinatoric explosion.

Suppose that instead of simply having a list of symbol sequences, the file cabinets contained detailed English-to-Chinese dictionaries and grammars. After reading and memorizing those, then conversing for a while with the Chinese speaker outside the room, who would deny that the person in the room understands Chinese? Indeed what other way is there to understand Chinese, if not reading dictionaries and talking to Chinese speakers?

Now imagine somehow converting those dictionaries and grammars into a form that a computer could directly apply. I don’t simply mean digitizing the dictionary; of course that’s easy, and it’s been done. I don’t even mean writing a program that translates automatically between English and Chinese; people are currently working on this sort of thing, and while still pretty poor, it’s getting better all the time.

No, I mean somehow coding the software so that the computer can respond to sentences in Chinese with appropriate responses in Chinese. I mean having some kind of mapping within the software of how different concepts relate to one another, with categorizations and associations built in.

I mean something like a searchable cross-referenced database, so that when asked the question, “What’s your favorite farm animal?” despite never having encountered this sentence before, the computer can go through a list of farm animals and choose one to designate as its ‘favorite’, and then store that somewhere so that later on when it is again asked it will give the same answer. And then when asked “Why do you like goats?” the computer can go through the properties of goats, choose some to be the ‘reason’ why it ‘likes’ them, and then adjust its future responses accordingly. If it decides that the reason is “horns are cute”, then when you mention some other horned animal, it updates to increase its probability of considering that animal “cute”.
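A toy sketch of what such a cross-referenced database might look like. Every name and fact in it is an invented placeholder, not a real knowledge base:

```python
# A toy cross-referenced "semantic memory": concepts carry properties,
# preferences are chosen once and then remembered, and a stated reason
# ("horns are cute") generalizes to other concepts sharing that property.
# All entries here are illustrative assumptions.
knowledge = {
    "goat":    {"kind": "farm animal", "properties": {"horns", "hooves"}},
    "pig":     {"kind": "farm animal", "properties": {"hooves", "curly tail"}},
    "gazelle": {"kind": "wild animal", "properties": {"horns", "hooves"}},
}
memory = {}  # persistent answers, so repeated questions get the same reply

def favorite(kind):
    if kind not in memory:  # decide once, then remember the decision
        candidates = [c for c, v in knowledge.items() if v["kind"] == kind]
        memory[kind] = candidates[0]
    return memory[kind]

def why_like(concept):
    reason = max(knowledge[concept]["properties"])  # pick one property deterministically
    memory["liked property"] = reason               # this will color future judgments
    return f"{reason} are cute"

def is_cute(concept):
    # anything sharing the liked property now inherits "cute"
    return memory.get("liked property") in knowledge[concept]["properties"]

print(favorite("farm animal"))  # chooses and remembers a favorite
print(why_like("goat"))         # "horns are cute"
print(is_cute("gazelle"))       # True: gazelles have horns too
```

The point is not that three dictionary lookups constitute understanding, but that the data structure scales with the number of concepts, not with the number of possible conversations.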

I mean something like a program that is programmed to follow conversational conventions, so when you ask it its name, it will not only tell you; it will also ask you your name in return, and store that information for later. And then it will map the sound of your name to known patterns of ethnic naming conventions, and so when you say your name is “Ling-Ling Xu” it asks “Is your family Chinese?” And then when you say “yes” it asks “What part of China are they from?” and then when you say “Shanghai” it asks “Did you grow up there?” and so on. It’s not that it has some kind of rule that says “Respond to ‘Shanghai’ with ‘Did you grow up there?’”; on the contrary, later in the conversation you may say “Shanghai” and get a different response because it was in a different context. In fact, if you were to keep spamming “Shanghai” over and over again, it would sound confused: “Why do you keep saying ‘Shanghai’? I don’t understand.”
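Here is a toy sketch of that kind of context-dependence: the same input gets different responses depending on conversation state, and out-of-context repetition draws confusion. All the dialogue logic is invented for illustration:

```python
# A toy context-dependent responder. Unlike a lookup table, the reply to
# "Shanghai" depends on the conversation state, and the state changes as
# the conversation proceeds. The states and replies are invented.
class Conversation:
    def __init__(self):
        self.state = "start"
        self.last_input = None
        self.repeats = 0

    def respond(self, utterance):
        # track repetition, which overrides everything else
        self.repeats = self.repeats + 1 if utterance == self.last_input else 0
        self.last_input = utterance
        if self.repeats >= 2:
            return f"Why do you keep saying '{utterance}'? I don't understand."
        if self.state == "start" and utterance == "Shanghai":
            self.state = "asked_grow_up"
            return "Did you grow up there?"
        if self.state == "asked_grow_up" and utterance == "Shanghai":
            self.state = "chatting"
            return "Right, you mentioned Shanghai. What was it like?"
        return "Tell me more."

c = Conversation()
print(c.respond("Shanghai"))  # "Did you grow up there?"
print(c.respond("Shanghai"))  # a different reply: the context has changed
print(c.respond("Shanghai"))  # confusion at the repetition
```

Note that the program never stores a rule pairing “Shanghai” with any one response; the pairing emerges from the state.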

In other words, I mean semantics. I mean something approaching how human beings actually seem to organize the meanings of words in their brains. Words map to other words and contexts, and some very fundamental words (like “pain” or “red”) map directly to sensory experiences. If you are asked to define what a word means, you generally either use a lot of other words, or you point to a thing and say “It means that.” Why can’t a robot do the same thing?

I really cannot emphasize enough how radically different that process would be from simply having rules like “After receiving input A B C, output X.” I think part of why Searle’s argument is so seductive is that most people don’t have a keen grasp of computer science, so the difference between a task that is O(N^2), like what I just outlined above, and a task that is O(10^(10^N)), like the simple input-output rules Searle describes, doesn’t sound that large to them. With a fast enough computer it wouldn’t matter, right? Well, if by “fast enough” you mean “faster than could possibly be built in our known universe”, I guess so. But O(N^2) tasks with N in the thousands are done by your computer all the time; no O(10^(10^N)) task will ever be accomplished for such an N within the Milky Way in the next ten billion years.
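To see the contrast numerically, a tiny sketch:

```python
# Contrasting the two growth rates named above: a polynomial O(N^2)
# workload versus the doubly exponential 10^(10^N) size of a complete
# cached-response table.
def quadratic_steps(n):
    return n ** 2            # routine for any modern computer

def lookup_table_size(n):
    return 10 ** (10 ** n)   # astronomically large almost immediately

print(quadratic_steps(1000))           # a million steps: trivial
print(len(str(lookup_table_size(3))))  # already a 1001-digit number at N = 3
```

At N = 3 the quadratic task is 9 steps; the table already has 10^1000 entries.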

I suppose you could still insist that this robot, despite having the same conceptual mappings between words as we do, and acquiring new knowledge in the same way we do, and interacting in the world in the same way we do, and carrying on conversations of arbitrary length on arbitrary topics in ways indistinguishable from the way we do, still nevertheless “is not really conscious”. I don’t know how I would conclusively prove you wrong.

But I have two things to say about that: One, how do I know you aren’t such a machine? This is the problem of zombies. Two, is that really how you would react, if you met such a machine? When you see Lieutenant Commander Data on Star Trek: The Next Generation, is your thought “Oh, he’s just a calculating engine that makes a very convincing simulation of human behavior”? I don’t think it is. I think the natural, intuitive response is actually to assume that anything behaving that much like us is in fact a conscious being.

And that’s all the Chinese Room was anyway: Intuition. Searle never actually proved that the person in the room, or the person-room system, or the person-room-environment system, doesn’t actually understand Chinese. He just feels that way, and expects us to feel that way as well. But I contend that if you ever did actually meet a machine that really, truly passed the strictest form of a Turing Test, your intuition would say something quite different: You would assume that machine was as conscious as you and I.

Moral disagreement is not bad faith

Jun 7 JDN 2459008

One of the most dangerous moves to make in an argument is to accuse your opponent of bad faith. It’s a powerful, and therefore tempting, maneuver: If they don’t even really believe what they are saying, then you can safely ignore basically whatever comes out of their mouth. And part of why this is so tempting is that it is in fact occasionally true—people do sometimes misrepresent their true beliefs in various ways for various reasons. On the Internet especially, sometimes people are just trolling.

But unless you have really compelling evidence that someone is arguing in bad faith, you should assume good faith. You should assume that whatever they are asserting is what they actually believe. For if you assume bad faith and are wrong, you have just cut off any hope of civil discourse between the two of you. You have made it utterly impossible for either side to learn anything or change their mind in any way. If you assume good faith and are wrong, you may have been overly charitable; but in the end you are the one that is more likely to persuade any bystanders, not the one who was arguing in bad faith.

Furthermore, it is important to really make an effort to understand your opponent’s position as they understand it before attempting to respond to it. Far too many times, I have seen someone accused of bad faith by an opponent who simply did not understand their worldview—and did not even seem willing to try to understand their worldview.

In this post, I’m going to point out some particularly egregious examples of this phenomenon that I’ve found, all statements made by left-wing people in response to right-wing people. Why am I focusing on these? Well, for one thing, it’s as important to challenge bad arguments on your own side as it is to do so on the other side. I also think I’m more likely to be persuasive to a left-wing audience. I could find right-wing examples easily enough, but I think it would be less useful: It would be too tempting to think that this is something only the other side does.

Example 1: “Republicans Have Stopped Pretending to Care About Life”

The phrase “pro-life” means thinking that abortion is wrong. That’s all it means. It’s jargon at this point. The phrase has taken on this meaning independent of its constituent parts, just as a red herring need not be either red or a fish.

Stop accusing people of not being “truly pro-life” because they don’t adopt some other beliefs that are not related to abortion. Even if those would be advancing life in some sense (most people probably think that most things they think are good advance life in some sense!), they aren’t relevant to the concept of being “pro-life”. Moreover, being “pro-life” in the traditional conservative sense isn’t even about minimizing the harm of abortion or the abortion rate. It’s about emphasizing the moral wrongness of abortion itself, and often even criminalizing it.

I don’t think this is really so hard to understand. If someone truly, genuinely believes that abortion is murdering a child, it’s quite clear why they won’t be convinced by attempts at minimizing harm or trying to reduce the abortion rate via contraception or other social policy. Many policies are aimed at “reducing the demand for abortion”; would you want to “reduce the demand for murder”? No, you’d want murderers to be locked up. You wouldn’t care what their reasons were, and you wouldn’t be interested in using social policies to address those reasons. It’s not even hard to understand why this would be such an important issue to them, overriding almost anything else: If you thought that millions of people were murdering children you would consider that an extremely important issue too.

If you want to convince people to support Roe v. Wade, you’re going to have to change their actual belief that abortion is murder. You may even be able to convince them that they don’t really think abortion is murder—many conservatives support the death penalty for murder, but very few do so for abortion. But they clearly do think that abortion is a grave moral wrong, and you can’t simply end-run around that by calling them hypocrites because they don’t care about whatever other issue you think they should care about.

Example 2: “Stop pretending to care about human life if you support wars in the Middle East”

I had some trouble finding the exact wording of the meme I originally saw with this sentiment, but the gist of it was basically that if you support bombing Afghanistan, Libya, Iraq, and/or Syria, you have lost all legitimacy to claiming that you care about human life.

Say what you will about these wars (though to be honest I think what the US has done in Libya and Syria has done more good than harm), but simply supporting a war does not automatically undermine all your moral legitimacy. The kind of radical pacifism that requires us to never kill anyone ever is utterly unrealistic; the question is and has always been “Which people is it okay to kill, when and how and why?” Some wars are justified; we have to accept that.

It would be different if these were wars of genocidal extermination; I can see a case for saying that anyone who supported the Holocaust or the Rwandan Genocide has lost all moral legitimacy. But even then it isn’t really accurate to say that those people don’t care about human life; it’s much more accurate to say that they have assigned the group of people they want to kill to a subhuman status. Maybe you would actually get more traction by saying “They are human beings too!” rather than by accusing people of not believing in the value of human life.

And clearly these are not wars of extermination—if the US military wanted to exterminate an entire nation of people, they could do so much more efficiently than by using targeted airstrikes and conventional warfare. Remember: They have nuclear weapons. Even if you think that they wouldn’t use nukes because of fear of retaliation (Would Russia or China really retaliate using their own nukes if the US nuked Afghanistan or Iran?), it’s clear that they could have done a lot more to kill a lot more innocent people if that were actually their goal. It’s one thing to say they don’t take enough care not to kill innocent civilians—I agree with that. It’s quite another to say that they actively try to kill innocent civilians—that’s clearly not what is happening.

Example 3: “Stop pretending to be Christian if you won’t help the poor.”

This one I find a good deal more tempting: In the Bible, Jesus does spend an awful lot more words on helping the poor than he does on, well, almost anything else; and he doesn’t even once mention abortion or homosexuality. (The rest of the Bible does at least mention homosexuality, but it really doesn’t have any clear mentions of abortion.) So it really is tempting to say that anyone who doesn’t make helping the poor their number one priority can’t really be a Christian.

But the world is more complicated than that. People can truly and deeply believe some aspects of a religion while utterly rejecting others. They can do this more or less arbitrarily, in a way that may not even be logically coherent. They may even honestly believe every single word of the Bible to be the absolute perfect truth of an absolute perfect God, and yet there are still passages you could point them to that they would have to admit they don’t believe in. (There are literally hundreds of explicit contradictions in the Bible. Many are minor—though they still undermine any claim to absolute perfect truth—but some are really quite substantial. Does God forgive and forget, or does he visit revenge upon generations to come? That’s kind of a big deal! And should we be answering fools or not?) In some sense they don’t really believe that every word is true, then; but they do seem to believe in believing it.

Yes, it’s true; people can worship a penniless son of a carpenter who preached peace and charity and at the same time support cutting social welfare programs and bombing the Middle East. Such a worldview may not be entirely self-consistent; it’s certainly not the worldview that Jesus himself espoused. But it nevertheless is quite sincerely believed by many millions of people.

It may still be useful to understand the Bible in order to persuade Christians to help the poor more. There are certainly plenty of passages you can point them to where Jesus talks about how important it is to help the poor. Likewise, Jesus doesn’t seem to much like the rich, so it is fair to ask: How Christian is it for Republicans to keep cutting taxes on the rich? (I literally laughed out loud when I first saw this meme: “Celebrate Holy Week By Flogging a Banker: It’s What Jesus Would Have Done!“) But you should not accuse people of “pretending to be Christian”. They really do strongly identify themselves as Christian, and would sooner give up almost anything else about their identity. If you accuse them of pretending, all that will do is shut down the conversation.

Now, after all that, let me give one last example that doesn’t fit the trend, one example where I really do think the other side is acting in bad faith.

Example 4: “#AllLivesMatter is a lie. You don’t actually think all lives matter.”

I think this one is actually true. If you truly believed that all lives matter, you wouldn’t post the hashtag #AllLivesMatter in response to #BlackLivesMatter protests against police brutality.

First of all, you’d probably be supporting those protests. But even if you didn’t for some reason, that isn’t how you would use the hashtag. As a genuine expression of caring, the hashtag #AllLivesMatter would only really make sense for something like Oxfam or UNICEF: Here are these human lives that are in danger and we haven’t been paying enough attention to them, and here, you can follow my hashtag and give some money to help them because all lives matter. If it were really about all lives mattering, then you’d see the hashtag pop up after a tsunami in Southeast Asia or a famine in central Africa. (For a while I tried actually using it that way; I quickly found that it was overwhelmed by the bad faith usage and decided to give up.)

No, this hashtag really seems to be trying to use a genuinely reasonable moral norm—all lives matter—as a weapon against a political movement. We don’t see #AllLivesMatter popping up asking people to help save some lives—it’s always as a way of shouting down other people who want to save some lives. It’s a glib response that lets you turn away and ignore their pleas, without ever actually addressing the substance of what they are saying. If you really believed that all lives matter, you would not be so glib; you would want to understand how so many people are suffering and want to do something to help them. Even if you ultimately disagreed with what they were saying, you would respect them enough to listen.

The counterpart #BlueLivesMatter isn’t in bad faith, but it is disturbing in a different way: What are ‘blue lives’? People aren’t born police officers. They volunteer for that job. They can quit if they want. No one can quit being Black. Working as a police officer isn’t even especially dangerous! But it’s not a bad faith argument: These people really do believe that the lives of police officers are worth more—apparently much more—than the lives of Black civilians.

I do admit, the phrasing “#BlackLivesMatter” is a bit awkward, and could be read to suggest that other lives don’t matter, but it takes about 2 minutes of talking to someone (or reading a blog by someone) who supports those protests to gather that this is not their actual view. Perhaps they should have used #BlackLivesMatterToo, but when your misconception is that easily rectified the responsibility to avoid it falls on you. (Then again, some people do seem to stoke this misconception: I was quite annoyed when a question was asked at a Democratic debate: “Do Black Lives Matter, or Do All Lives Matter?” The correct answer of course is “All lives matter, which is why I support the Black Lives Matter movement.”)

So, yes, bad faith arguments do exist, and sometimes we need to point them out. But I implore you, consider that a last resort, a nuclear option you’ll only deploy when all other avenues have been exhausted. Once you accuse someone of bad faith, you have shut down the conversation completely—preventing you, them, and anyone else who was listening from having any chance of learning or changing their mind.

Terrible but not likely, likely but not terrible

May 17 JDN 2458985

The human brain is a remarkably awkward machine. It’s really quite bad at organizing data, relying on associations rather than formal categories.

It is particularly bad at negation. For instance, if I tell you that right now, no matter what, you must not think about a yellow submarine, the first thing you will do is think about a yellow submarine. (You may even get the Beatles song stuck in your head, especially now that I’ve mentioned it.) A computer would never make such a grievous error.

The human brain is also quite bad at separation. Daniel Dennett coined a word “deepity” for a particular kind of deep-sounding but ultimately trivial aphorism that seems to be quite common, which relies upon this feature of the brain. A deepity has at least two possible readings: On one reading, it is true, but utterly trivial. On another, it would be profound if true, but it simply isn’t true. But if you experience both at once, your brain is triggered for both “true” and “profound” and yields “profound truth”. The example he likes to use is “Love is just a word”. Well, yes, “love” is in fact just a word, but who cares? Yeah, words are words. But love, the underlying concept it describes, is not just a word—though if it were that would change a lot.

One thing I’ve come to realize about my own anxiety is that it involves a wide variety of different scenarios I imagine in my mind, and broadly speaking these can be sorted into two categories: Those that are likely but not terrible, and those that are terrible but not likely.

In the former category we have things like taking an extra year to finish my dissertation; the mean time to completion for a PhD is over 8 years, so finishing in 6 instead of 5 can hardly be considered catastrophic.

In the latter category we have things like dying from COVID-19. Yes, I’m a male with type A blood and asthma living in a high-risk county; but I’m also a young, healthy nonsmoker living under lockdown. Even without knowing the true fatality rate of the virus, my chances of actually dying from it are surely less than 1%.

But when both of those scenarios are running through my brain at the same time, the first triggers a reaction for “likely” and the second triggers a reaction for “terrible”, and I get this feeling that something terrible is actually likely to happen. And indeed if my probability of dying were as high as my probability of needing a 6th year to finish my PhD, that would be catastrophic.

I suppose it’s a bit strange that the opposite doesn’t happen: I never seem to get the improbability of dying attached to the mildness of needing an extra year. The confusion never seems to trigger “neither terrible nor likely”. Or perhaps it does, and my brain immediately disregards that as not worthy of consideration? It makes a certain sort of sense: An event that is neither probable nor severe doesn’t seem to merit much anxiety.

I suspect that many other people’s brains work the same way, eliding distinctions between different outcomes and ending up with a sort of maximal product of probability and severity.

The solution to this is not an easy one: It requires deliberate effort and extensive practice, and benefits greatly from formal training by a therapist. Counter-intuitively, you need to actually focus more on the scenarios that cause you anxiety, and accept the anxiety that such focus triggers in you. I find that it helps to actually write down the details of each scenario as vividly as possible, and review what I have written later. After doing this enough times, you can build up a greater separation in your mind, and more clearly categorize—this one is likely but not terrible, that one is terrible but not likely. It isn’t a cure, but it definitely helps me a great deal. Perhaps it could help you.

Creativity and mental illness

Dec 1 JDN 2458819

There is some truth to the stereotype that artistic people are crazy. Mental illnesses, particularly bipolar disorder, are overrepresented among artists, writers, and musicians. Creative people score highly on literally all five of the Big Five personality traits: They are higher in Openness, higher in Conscientiousness, higher in Extraversion (that one actually surprised me), higher in Agreeableness, and higher in Neuroticism. Creative people just have more personality, it seems.

But in fact mental illness is not as overrepresented among creative people as most people think, and the highest probability of being a successful artist occurs when you have close relatives with mental illness, but are not yourself mentally ill. Those with mental illness actually tend to be most creative when their symptoms are in remission. This suggests that the apparent link between creativity and mental illness may actually increase over time, as treatments improve and remission becomes easier.

One possible source of the link is that artistic expression may be a form of self-medication: Art therapy does seem to have some promise in treating a variety of mental disorders (though it is not nearly as effective as conventional therapy and medication). But that wouldn’t explain why family history of mental illness is actually a better predictor of creativity than mental illness itself.

My guess is that in order to be creative, you need to think differently than other people. You need to see the world in a way that others do not see it. Mental illness is surely not the only way to do that, but it’s definitely one way.

But creativity also requires basic functioning: If you are totally crippled by a mental illness, you’re not going to be very creative. So the people who are most creative have just enough craziness to think differently, but not so much that it takes over their lives.

This might even help explain how mental illness persisted in our population, despite its obvious survival disadvantages. It could be some form of heterozygote advantage.

The classic example of heterozygote advantage is sickle-cell anemia: If you have no copies of the sickle-cell gene, you’re normal. If you have two copies, you have sickle-cell anemia, which is very bad. But if you have only one copy, you’re healthy—and you’re resistant to malaria. Thus, high risk of malaria—as our ancestors certainly faced, living in central Africa—creates a selection pressure that keeps sickle-cell genes in the population, even though having two copies is much worse than having none at all.

Mental illness might function something like this. I suspect it’s far more complicated than sickle-cell anemia, which is literally just two alleles of a single gene; but the overall process may be similar. If having just a little bit of bipolar disorder or schizophrenia makes you see the world differently than other people and makes you more creative, there are lots of reasons why that might improve the survival of your genes: There are the obvious problem-solving benefits, but also the simple fact that artists are sexy.

The downside of such “weird-thinking” genes is that they can go too far and make you mentally ill, perhaps if you have too many copies of them, or if you face an environmental trigger that sets them off. Sometimes the reason you see the world differently than everyone else is that you’re just seeing it wrong. But if the benefits of creativity are high enough—and they surely are—this could offset the risks, in an evolutionary sense.

But one thing is quite clear: If you are mentally ill, don’t avoid treatment for fear it will damage your creativity. Quite the opposite: A mental illness that is well treated and in remission is the optimal state for creativity. Go seek treatment, so that your creativity may blossom.

Pascal’s Mugging

Nov 10 JDN 2458798

In the Singularitarian community there is a paradox known as “Pascal’s Mugging”. The name is an intentional reference to Pascal’s Wager (and the link is quite apt, for reasons I’ll discuss in a later post).

There are a few different versions of the argument; Yudkowsky’s original argument in which he came up with the name “Pascal’s Mugging” relies upon the concept of the universe as a simulation and an understanding of esoteric mathematical notation. So here is a more intuitive version:

A strange man in a dark hood comes up to you on the street. “Give me five dollars,” he says, “or I will destroy an entire planet filled with ten billion innocent people. I cannot prove to you that I have this power, but how much is an innocent life worth to you? Even if it is as little as $5,000, are you really willing to bet on ten trillion to one odds that I am lying?”

Do you give him the five dollars? I suspect that you do not. Indeed, I suspect that you’d be less likely to give him the five dollars than if he had merely said he was homeless and asked for five dollars to help pay for food. (Also, you may have objected that you value innocent lives, even faraway strangers you’ll never meet, at more than $5,000 each—but if that’s the case, you should probably be donating more, because the world’s best charities can save a life for about $3,000.)

But therein lies the paradox: Are you really willing to bet on ten trillion to one odds?

This argument gives me much the same feeling as the Ontological Argument; as Russell said of the latter, “it is much easier to be persuaded that ontological arguments are no good than it is to say exactly what is wrong with them.” It wasn’t until I read this post on GiveWell that I could really formulate the answer clearly enough to explain it.

The apparent force of Pascal’s Mugging comes from the idea of expected utility: Even if the probability of an event is very small, if it has a sufficiently great impact, the expected utility can still be large.

The problem with this argument is that extraordinary claims require extraordinary evidence. If a man held a gun to your head and said he’d shoot you if you didn’t give him five dollars, you’d give him five dollars. This is a plausible claim and he has provided ample evidence. If he were instead wearing a bomb vest (or even just really puffy clothing that could conceal a bomb vest), and he threatened to blow up a building unless you gave him five dollars, you’d probably do the same. This is less plausible (what kind of terrorist only demands five dollars?), but it’s not worth taking the chance.

But when he claims to have a Death Star parked in orbit of some distant planet, primed to make another Alderaan, you are right to be extremely skeptical. And if he claims to be a being from beyond our universe, primed to destroy so many lives that we couldn’t even write the number down with all the atoms in our universe (which was actually Yudkowsky’s original argument), to say that you are extremely skeptical seems a grievous understatement.

That GiveWell post provides a way to make this intuition mathematically precise in terms of Bayesian logic. If you have a normal prior with mean 0 and standard deviation 1, and you are presented with a likelihood with mean X and standard deviation X, what should your posterior distribution be?

Normal priors are quite convenient: They are conjugate to normal likelihoods, so the posterior is also normal. The precision (inverse variance) of the posterior distribution is the sum of the two precisions, and the posterior mean is a weighted average of the two means, weighted by their precisions.

So the posterior variance is 1/(1 + 1/X^2).

The posterior mean is 1/(1+1/X^2)*(0) + (1/X^2)/(1+1/X^2)*(X) = X/(X^2+1).

That is, the mean of the posterior distribution is just barely higher than zero—and in fact, it is decreasing in X, if X > 1.

For those who don’t speak Bayesian: If someone says he’s going to have an effect of magnitude X, you should be less likely to believe him the larger that X is. And indeed this is precisely what our intuition said before: If he says he’s going to kill one person, believe him. If he says he’s going to destroy a planet, don’t believe him, unless he provides some really extraordinary evidence.
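If you’d like to check the arithmetic yourself, here is a minimal sketch in Python (mine, not from the GiveWell post), translating the conjugate-update formulas above directly:

```python
# Normal-normal Bayesian update, as described above.
# Prior: N(0, 1). Likelihood: mean X, standard deviation X.

def posterior(X):
    prior_mean, prior_prec = 0.0, 1.0        # precision = 1 / variance
    like_mean, like_prec = X, 1.0 / X**2
    post_prec = prior_prec + like_prec       # precisions add
    post_var = 1.0 / post_prec               # = 1 / (1 + 1/X^2)
    post_mean = (prior_prec * prior_mean + like_prec * like_mean) / post_prec
    return post_mean, post_var               # post_mean = X / (X^2 + 1)

# The posterior mean peaks at X = 1 and then shrinks as the claim grows:
for X in [1.0, 10.0, 10**6]:
    mean, _ = posterior(X)
    print(X, mean)
```

The larger the mugger’s claimed impact X, the smaller the posterior mean: exactly the “extraordinary claims” intuition.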

What sort of extraordinary evidence? To his credit, Yudkowsky imagined the sort of evidence that might actually be convincing:

If a poorly-dressed street person offers to save 10^(10^100) lives (a googolplex of lives) for $5 using their Matrix Lord powers, and you claim to assign this scenario less than 10^-(10^100) probability, then apparently you should continue to believe absolutely that their offer is bogus even after they snap their fingers and cause a giant silhouette of themselves to appear in the sky.

This post he called “Pascal’s Muggle”, after the term from the Harry Potter series, since some of the solutions that had been proposed for dealing with Pascal’s Mugging had resulted in a situation almost as absurd, in which the mugger could exhibit powers beyond our imagining and yet nevertheless we’d never have sufficient evidence to believe him.

So, let me go on record as saying this: Yes, if someone snaps his fingers and causes the sky to rip open and reveal a silhouette of himself, I’ll do whatever that person says. The odds are still higher that I’m dreaming or hallucinating than that this is really a being from beyond our universe, but if I’m dreaming, it makes no difference, and if someone can make me hallucinate that vividly he can probably cajole the money out of me in other ways. And there might be just enough chance that this could be real that I’m willing to give up that five bucks.

These seem like really strange thought experiments, because they are. But like many good thought experiments, they can provide us with some important insights. In this case, I think they are telling us something about the way human reasoning can fail when faced with impacts beyond our normal experience: We are in danger of both over-estimating and under-estimating their effects, because our brains aren’t equipped to deal with magnitudes and probabilities on that scale. This has made me realize something rather important about both Singularitarianism and religion, but I’ll save that for next week’s post.

Mental illness is different from physical illness.

Post 311 Oct 13 JDN 2458770

There’s something I have heard a lot of people say about mental illness that is obviously well-intentioned, but ultimately misguided: “Mental illness is just like physical illness.”

Sometimes they say it explicitly in those terms. Other times they make analogies, like “If you wouldn’t shame someone with diabetes for using insulin, why shame someone with depression for using SSRIs?”

Yet I don’t think this line of argument will ever meaningfully reduce the stigma surrounding mental illness, because, well, it’s obviously not true.

There are some characteristics of mental illness that are analogous to physical illness—but there are some that really are quite different. And these are not just superficial differences, the way that pancreatic disease is different from liver disease. No one would say that liver cancer is exactly the same as pancreatic cancer; but they’re both obviously of the same basic category. There are differences between physical and mental illness which are both obvious, and fundamental.

Here’s the biggest one: Talk therapy works on mental illness.

You can’t talk yourself out of diabetes. You can’t talk yourself out of a myocardial infarction. You can’t even talk yourself out of a migraine (though I’ll get back to that one in a little bit). But you can, in a very important sense, talk yourself out of depression.

In fact, talk therapy is one of the most effective treatments for most mental disorders. Cognitive behavioral therapy for depression is on its own as effective as most antidepressants (with far fewer harmful side effects), and the two combined are clearly more effective than either alone. Talk therapy is as effective as medication on bipolar disorder, and considerably better on social anxiety disorder.

To be clear: Talk therapy is not just people telling you to cheer up, or saying it’s “all in your head”, or suggesting that you get more exercise or eat some chocolate. Nor does it consist of you ruminating by yourself and trying to talk yourself out of your disorder. Cognitive behavioral therapy is a very complex, sophisticated series of techniques that require years of expert training to master. Yet, at its core, cognitive therapy really is just a very sophisticated form of talking.

The fact that mental disorders can be so strongly affected by talk therapy shows that there really is an important sense in which mental disorders are “all in your head”, and not just the trivial way that an axe wound or even a migraine is all in your head. It isn’t just the fact that it is physically located in your brain that makes a mental disorder different; it’s something deeper than that.

Here’s the best analogy I can come up with: Physical illness is hardware. Mental illness is software.

If a computer breaks after being dropped on the floor, that’s like an axe wound: An obvious, traumatic source of physical damage that is an unambiguous cause of the failure.

If a computer’s CPU starts overheating, that’s like a physical illness, like diabetes: There may be no particular traumatic cause, or even any clear cause at all, but there is obviously something physically wrong that needs physical intervention to correct.

But if a computer is suffering glitches and showing error messages when it tries to run particular programs, that is like mental illness: Something is wrong not in the low-level hardware, but in the high-level software.

These different types of problem require different types of solutions. If your CPU is overheating, you might want to see about replacing your cooling fan or your heat sink. But if your software is glitching while your CPU is otherwise running fine, there’s no point in replacing your fan or heat sink. You need to get a programmer in there to look at the code and find out where it’s going wrong. A talk therapist is like a programmer: The words they say to you are like code scripts they’re trying to get your processor to run correctly.

Of course, our understanding of computers is vastly better than our understanding of human brains, and as a result, programmers tend to get a lot better results than psychotherapists. (Interestingly they do actually get paid about the same, though! Programmers make about 10% more on average than psychotherapists, and both are solidly within the realm of average upper-middle-class service jobs.) But the basic process is the same: Using your expert knowledge of the system, find the right set of inputs that will fix the underlying code and solve the problem. At no point do you physically intervene on the system; you could do it remotely without ever touching it—and indeed, remote talk therapy is a thing.

What about other neurological illnesses, like migraine or fibromyalgia? Well, I think these are somewhere in between. They’re definitely more physical in some sense than a mental disorder like depression. There isn’t any cognitive content to a migraine the way there is to a depressive episode. When I feel depressed or anxious, I feel depressed or anxious about something. But there’s nothing a migraine is about. To use the technical term in cognitive science, neurological disorders lack the intentionality that mental disorders generally have. “What are you depressed about?” is a question you usually can answer. “What are you migrained about?” generally isn’t.

But like mental disorders, neurological disorders are directly linked to the functioning of the brain, and often seem to operate at a higher level of functional abstraction. The brain doesn’t have pain receptors on itself the way most of your body does; getting a migraine behind your left eye doesn’t actually mean that that specific lobe of your brain is what’s malfunctioning. It’s more like a general alert your brain is sending out that something is wrong, somewhere. And fibromyalgia often feels like it’s taking place in your entire body at once. Moreover, most neurological disorders are strongly correlated with mental disorders—indeed, the comorbidity of depression with migraine and fibromyalgia in particular is extremely high.

Which disorder causes the other? That’s a surprisingly difficult question. Intuitively we might expect the “more physical” disorder to be the primary cause, but that’s not always clear. Successful treatment for depression often improves symptoms of migraine and fibromyalgia as well (though the converse is also true). They seem to be mutually reinforcing one another, and it’s not at all clear which came first. I suppose if I had to venture a guess, I’d say the pain disorders probably have causal precedence over the mood disorders, but I don’t actually know that for a fact.

To stretch my analogy a little, it may be like a software problem that ends up causing a hardware problem, or a hardware problem that ends up causing a software problem. There actually have been a few examples of this, like games with graphics so demanding that they caused GPUs to overheat.

The human brain is a lot more complicated than a computer, and the distinction between software and hardware is fuzzier; we don’t actually have “code” that runs on a “processor”. We have synapses that continually fire on and off and rewire each other. The closest thing we have to code that gets processed in sequence would be our genome, and that is several orders of magnitude less complex than the structure of our brains. Aside from simply physically copying the entire brain down to every synapse, it’s not clear that you could ever “download” a mind, science fiction notwithstanding.

Indeed, anything that changes your mind necessarily also changes your brain; the effects of talking are generally subtler than the effects of a drug (and certainly subtler than the effects of an axe wound!), but they are nevertheless real, physical changes. (This is why it is so idiotic whenever the popular science press comes out with: “New study finds that X actually changes your brain!” where X might be anything from drinking coffee to reading romance novels. Of course it does! If it has an effect on your mind, it did so by having an effect on your brain. That’s the Basic Fact of Cognitive Science.) This is not so different from computers, however: Any change in software is also a physical change, in the form of some sequence of electrical charges that were moved from one place to another. Actual physical electrons are a few microns away from where they otherwise would have been because of what was typed into that code.

Of course I want to reduce the stigma surrounding mental illness. (For both selfish and altruistic reasons, really.) But blatantly false assertions don’t seem terribly productive toward that goal. Mental illness is different from physical illness; we can’t treat it the same.

Pinker Propositions

May 19 JDN 2458623

What do the following statements have in common?

1. “Capitalist countries have less poverty than Communist countries.”

2. “Black men in the US commit homicide at a higher rate than White men.”

3. “On average, in the US, Asian people score highest on IQ tests, White and Hispanic people score near the middle, and Black people score the lowest.”

4. “Men on average perform better at visual tasks, and women on average perform better on verbal tasks.”

5. “In the United States, White men are no more likely to be mass shooters than other men.”

6. “The genetic heritability of intelligence is about 60%.”

7. “The plurality of recent terrorist attacks in the US have been committed by Muslims.”

8. “The period of US military hegemony since 1945 has been the most peaceful period in human history.”

These statements have two things in common:

1. All of these statements are objectively true facts that can be verified by rich and reliable empirical data which is publicly available and uncontroversially accepted by social scientists.

2. If spoken publicly among left-wing social justice activists, all of these statements will draw resistance, defensiveness, and often outright hostility. Anyone making these statements is likely to be accused of racism, sexism, imperialism, and so on.

I call such propositions Pinker Propositions, after an excellent talk by Steven Pinker illustrating several of the above statements (which was then taken wildly out of context by social justice activists on social media).

The usual reaction to these statements suggests that people think they imply harmful far-right policy conclusions. This inference is utterly wrong: A nuanced understanding of each of these propositions does not in any way lead to far-right policy conclusions—in fact, some rather strongly support left-wing policy conclusions.

1. Capitalist countries have less poverty than Communist countries, because Communist countries are nearly always corrupt and authoritarian. Social democratic countries have the lowest poverty and the highest overall happiness (#ScandinaviaIsBetter).

2. Black men commit more homicide than White men because of poverty, discrimination, mass incarceration, and gang violence. Black men are also greatly overrepresented among victims of homicide, as most homicide is intra-racial. Homicide rates often vary across ethnic and socioeconomic groups, and these rates vary over time as a result of cultural and political changes.

3. IQ tests are a highly imperfect measure of intelligence, and the genetics of intelligence cut across our socially-constructed concept of race. There is far more within-group variation in IQ than between-group variation. Intelligence is not fixed at birth but is affected by nutrition, upbringing, exposure to toxins, and education—all of which statistically put Black people at a disadvantage. Nor does intelligence remain constant within populations: The Flynn Effect is the well-documented increase in intelligence which has occurred in almost every country over the past century. Far from justifying discrimination, these provide very strong reasons to improve opportunities for Black children. The lead and mercury in Flint’s water suppressed the brain development of thousands of Black children—that’s going to lower average IQ scores. But that says nothing about supposed “inherent racial differences” and everything about the catastrophic damage of environmental racism.

4. To be quite honest, I never even understood why this one shocks—or even surprises—people. It’s not even saying that men are “smarter” than women—overall IQ is almost identical. It’s just saying that men are more visual and women are more verbal. And this, I think, is actually quite obvious. I think the clearest evidence of this—the “interocular trauma” that will convince you the effect is real and worth talking about—is pornography. Visual porn is overwhelmingly consumed by men, even when it was designed for women (e.g. Playgirl; a majority of its readers are gay men, even though there are ten times as many straight women in the world as there are gay men). Conversely, erotic novels are overwhelmingly consumed by women. I think a lot of anti-porn feminism can actually be explained by this effect: Feminists (who are usually women, for obvious reasons) can say they are against “porn” when what they are really against is visual porn, because visual porn is consumed by men; then the kind of porn that they like (erotic literature) doesn’t count as “real porn”. And honestly they’re mostly against the current structure of the live-action visual porn industry, which is totally reasonable—but it’s a far cry from being against porn in general. I have some serious issues with how our farming system is currently set up, but I’m not against farming.

5. This one is interesting, because it’s a lack of a race difference, which normally is what the left wing always wants to hear. The difference of course is that this alleged difference would make White men look bad, and that’s apparently seen as a desirable goal for social justice. But the data just doesn’t bear it out: While indeed most mass shooters are White men, that’s because most Americans are White, which is a totally uninteresting reason. There’s no clear evidence of any racial disparity in mass shootings—though the gender disparity is absolutely overwhelming: It’s almost always men.

6. Heritability is a subtle concept; it doesn’t mean what most people seem to think it means. It doesn’t mean that 60% of your intelligence is due to your genes. Indeed, I’m not even sure what that sentence would actually mean; it’s like saying that 60% of the flavor of a cake is due to the eggs. What this heritability figure actually means is that when you compare across individuals in a population, and carefully control for environmental influences, you find that about 60% of the variance in IQ scores is explained by genetic factors. But this is within a particular population—here, US adults—and is absolutely dependent on all sorts of other variables. The more flexible one’s environment becomes, the more people self-select into their preferred environment, and the more heritable traits become. As a result, IQ actually becomes more heritable as children become adults, a phenomenon called the Wilson Effect.

7. This one might actually have some contradiction with left-wing policy. The disproportionate participation of Muslims in terrorism—controlling for just about anything you like, income, education, age etc.—really does suggest that, at least at this point in history, there is some real ideological link between Islam and terrorism. But the fact remains that the vast majority of Muslims are not terrorists and do not support terrorism, and antagonizing all the people of an entire religion is fundamentally unjust as well as likely to backfire in various ways. We should instead be trying to encourage the spread of more tolerant forms of Islam, and maintaining the strict boundaries of secularism to prevent the encroach of any religion on our system of government.

8. The fact that US military hegemony does seem to be a cause of global peace doesn’t imply that every single military intervention by the US is justified. In fact, it doesn’t even necessarily imply that any such interventions are justified—though I think one would be hard-pressed to say that the NATO intervention in the Kosovo War or the defense of Kuwait in the Gulf War was unjustified. It merely points out that having a hegemon is clearly preferable to having a multipolar world where many countries jockey for military supremacy. The Pax Romana was a time of peace but also authoritarianism; the Pax Americana is better, but that doesn’t prevent us from criticizing the real harms—including major war crimes—committed by the United States.

So it is entirely possible to know and understand these facts without adopting far-right political views.

Yet Pinker’s point—and mine—is that by suppressing these true facts, by responding with hostility or even ostracism to anyone who states them, we are actually adding fuel to the far-right fire. Instead of presenting the nuanced truth and explaining why it doesn’t imply such radical policies, we attack the messenger; and this leads people to conclude three things:

1. The left wing is willing to lie and suppress the truth in order to achieve political goals (they’re doing it right now).

2. These statements actually do imply right-wing conclusions (else why suppress them?).

3. Since these statements are true, that must mean the right-wing conclusions are actually correct.

Now (especially if you are someone who identifies unironically as “woke”), you might be thinking something like this: “Anyone who can be turned away from social justice so easily was never a real ally in the first place!”

This is a fundamentally and dangerously wrongheaded view. No one—not me, not you, not anyone—was born believing in social justice. You did not emerge from your mother’s womb ranting against colonialist imperialism. You had to learn what you now know. You came to believe what you now believe, after once believing something else that you now think is wrong. This is true of absolutely everyone everywhere. Indeed, the better you are, the more true it is; good people learn from their mistakes and grow in their knowledge.

This means that anyone who is now an ally of social justice once was not. And that, in turn, suggests that many people who are currently not allies could become so, under the right circumstances. They would probably not shift all at once—as I didn’t, and I doubt you did either—but if we are welcoming and open and honest with them, we can gradually tilt them toward greater and greater levels of support.

But if we reject them immediately for being impure, they never get the chance to learn, and we never get the chance to sway them. People who are currently uncertain of their political beliefs will become our enemies because we made them our enemies. We declared that if they would not immediately commit to everything we believe, then they may as well oppose us. They, quite reasonably unwilling to commit to a detailed political agenda they didn’t understand, decided that it would be easiest to simply oppose us.

And we don’t have to win over every person on every single issue. We merely need to win over a large enough critical mass on each issue to shift policies and cultural norms. Building a wider tent is not compromising on your principles; on the contrary, it’s how you actually win and make those principles a reality.

There will always be those we cannot convince, of course. And I admit, there is something deeply irrational about going from “those leftists attacked Charles Murray” to “I think I’ll start waving a swastika”. But humans aren’t always rational; we know this. You can lament this, complain about it, yell at people for being so irrational all you like—it won’t actually make people any more rational. Humans are tribal; we think in terms of teams. We need to make our team as large and welcoming as possible, and suppressing Pinker Propositions is not the way to do that.

Moral luck: How it matters, and how it doesn’t

Feb 10 JDN 2458525

The concept of moral luck is now relatively familiar to most philosophers, but I imagine most other people haven’t heard it before. It sounds like a contradiction, which is probably why it drew so much attention.

The term “moral luck” seems to have originated in an essay by Thomas Nagel, but the intuition is much older, dating at least back to Greek philosophy (and really probably older than that; we just don’t have good records that far back).

The basic argument is this:

Most people would say that if you had no control over something, you can’t be held morally responsible for it. It was just luck.

But if you look closely, everything we do—including things we would conventionally regard as moral actions—depends heavily on things we don’t have control over.

Therefore, either we can be held responsible for things we have no control over, or we can’t be held responsible for anything at all!

Neither approach seems very satisfying; hence the conundrum.

For example, consider four drivers:

Anna is driving normally, and nothing of note happens.

Bob is driving recklessly, but nothing of note happens.

Carla is driving normally, but a child stumbles out into the street and she runs the child over.

Dan is driving recklessly, and a child stumbles out into the street and he runs the child over.

The presence or absence of a child in the street was not in the control of any of the four drivers. Yet I think most people would agree that Dan should be held more morally responsible than Bob, and Carla should be held more morally responsible than Anna. (Whether Bob should be held more morally responsible than Carla is not as clear.) Yet both Bob and Dan were driving recklessly, and both Anna and Carla were driving normally. The moral evaluation seems to depend upon the presence of the child, which was not under the drivers’ control.

Other philosophers have argued that the difference is an epistemic one: We know the moral character of someone who drove recklessly and ran over a child better than the moral character of someone who drove recklessly and didn’t run over a child. But do we, really?

Another response is simply to deny that we should treat Bob and Dan any differently, and say that reckless driving is reckless driving, and safe driving is safe driving. For this particular example, maybe that works. But it’s not hard to come up with better examples where that doesn’t work:

Ted is a psychopathic serial killer. He kidnaps, rapes, and murders people. Maybe he can control whether or not he rapes and murders someone. But the reason he rapes and murders someone is that he is a psychopath. And he can’t control that he is a psychopath. So how can we say that his actions are morally wrong?

Obviously, we want to say that his actions are morally wrong.

I have heard one alternative, which is to consider psychopaths as morally equivalent to viruses: Zero culpability, zero moral value, something morally neutral but dangerous that we should contain or eradicate as swiftly as possible. HIV isn’t evil; it’s just harmful. We should kill it not because it deserves to die, but because it will kill us if we don’t. On this theory, Ted doesn’t deserve to be executed; it’s just that we must execute him in order to protect ourselves from the danger he poses.

But this quickly becomes unsatisfactory as well:

Jonas is a medical researcher whose work has saved millions of lives. Maybe he can control the research he works on, but he only works on medical research because he was born with a high IQ and strong feelings of compassion. He can’t control that he was born with a high IQ and strong feelings of compassion. So how can we say his actions are morally right?

This is the line of reasoning that quickly leads to saying that all actions are outside our control, and therefore morally neutral; and then the whole concept of morality falls apart.

So we need to draw the line somewhere; there has to be a space of things that aren’t in our control, but nonetheless carry moral weight. That’s moral luck.

Philosophers have actually identified four types of moral luck, which turns out to be tremendously useful in drawing that line.

Resultant luck is luck that determines the consequences of your actions, how things “turn out”. Happening to run over the child because you couldn’t swerve fast enough is resultant luck.

Circumstantial luck is luck that determines the sorts of situations you are in, and what moral decisions you have to make. A child happening to stumble across the street is circumstantial luck.

Constitutive luck is luck that determines who you are, your own capabilities, virtues, intentions and so on. Having a high IQ and strong feelings of compassion is constitutive luck.

Causal luck is the inherent luck written into the fabric of the universe that determines all events according to the fundamental laws of physics. Causal luck is everything and everywhere; it is written into the universal wavefunction.

I have a very strong intuition that this list is ordered; going from top to bottom makes things “less luck” in a vital sense.

Resultant luck is pure luck, what we originally meant when we said the word “luck”. It’s the roll of the dice.

Circumstantial luck is still mostly luck, but maybe not entirely; there are some aspects of it that do seem to be under our control.

Constitutive luck is maybe luck, sort of, but not really. Yes, “You’re lucky to be so smart” makes sense, but “You’re lucky to not be a psychopath” already sounds pretty weird. We’re entering territory here where our ordinary notions of luck and responsibility really don’t seem to apply.

Causal luck is not luck at all. Causal luck is really the opposite of luck: Without a universe with fundamental laws of physics to maintain causal order, none of our actions would have any meaning at all. They wouldn’t even really be actions; they’d just be events. You can’t do something in a world of pure chaos; things only happen. And being made of physical particles doesn’t make you any less what you are; a table made of wood is still a table, and a rocket made of steel is still a rocket. Thou art physics.

And that, my dear reader, is the solution to the problem of moral luck. Forget “causal luck”, which isn’t luck at all. Then, draw a hard line at constitutive luck: regardless of how you became who you are, you are responsible for what you do.

You don’t need to have control over who you are (what would that even mean!?).

You merely need to have control over what you do.

This is how the word “control” is normally used, by the way; when we say that a manufacturing process is “under control” or a pilot “has control” of an airplane, we aren’t asserting some grand metaphysical claim of ultimate causation. We’re merely saying that the system is working as it’s supposed to; the outputs coming out are within the intended parameters. This is all we need for moral responsibility as well.

In some cases, maybe people’s brains really are so messed up that we can’t hold them morally responsible; they aren’t “under control”. Okay, we’re back to the virus argument then: Contain or eradicate. If a brain tumor makes you so dangerous that we can’t trust you around sharp objects, unless we can take out that tumor, we’ll need to lock you up somewhere where you can’t get any sharp objects. Sorry. Maybe you don’t deserve that in some ultimate sense, but it’s still obviously what we have to do. And this is obviously quite exceptional; most people are not suffering from brain tumors that radically alter their personalities—and even most psychopaths are otherwise neurologically normal.

Ironically, it’s probably my fellow social scientists who will scoff the most at this answer. “But so much of what we are is determined by our neurochemistry/cultural norms/social circumstances/political institutions/economic incentives!” Yes, that’s true. And if we want to change those things to make us and others better, I’m all for it. (Well, neurochemistry is a bit problematic, so let’s focus on the others first—but if you can make a pill that cures psychopathy, I would support mandatory administration of that pill to psychopaths in positions of power.)

When you make a moral choice, we have to hold you responsible for that choice.

Maybe Ted is psychopathic and sadistic because there was too much lead in his water as a child. That’s a good reason to stop putting lead in people’s water (like we didn’t already have plenty!); but it’s not a good reason to let Ted off the hook for all those rapes and murders.

Maybe Jonas is intelligent and compassionate because his parents were wealthy and well-educated. That’s a good reason to make sure people are financially secure and well-educated (again, did we need more?); but it’s not a good reason to deny Jonas his Nobel Prize for saving millions of lives.

Yes, “personal responsibility” has been used by conservatives as an excuse to not solve various social and economic problems (indeed, it has specifically been used to stop regulations on lead in water and public funding for education). But that’s not actually anything wrong with personal responsibility. We should hold those conservatives personally responsible for abusing the term in support of their destructive social and economic policies. No moral freedom is lost by preventing lead from turning children into psychopaths. No personal liberty is destroyed by ensuring that everyone has access to a good education.

In fact, there is evidence that telling people who are suffering from poverty or oppression that they should take personal responsibility for their choices benefits them. Self-perceived victimhood is linked to all sorts of destructive behaviors, even controlling for prior life circumstances. Feminist theorists have written about how taking responsibility even when you are oppressed can empower you to make your life better. Yes, obviously, we should be helping people when we can. But telling them that they are hopeless unless we come in to rescue them isn’t helping them.

This way of thinking may require a delicate balance at times, but it’s not inconsistent. You can both fight against lead pollution and support the criminal justice system. You can believe in both public education and the Nobel Prize. We should be working toward a world where people are constituted with more virtue for reasons beyond their control, and where people are held responsible for the actions they take that are under their control.

We can continue to talk about “moral luck” referring to constitutive luck, I suppose, but I think the term obscures more than it illuminates. The “luck” that made you a good or a bad person is very different from the “luck” that decides how things happen to turn out.

What really works against bigotry

Sep 30 JDN 2458392

With Donald Trump in office, I think we all need to be thinking carefully about what got us to this point, how we have apparently failed in our response to bigotry. It’s good to see that Kavanaugh’s nomination vote has been delayed pending investigations, but we can’t hope to rely on individual criminal accusations to derail every potentially catastrophic candidate. The damage that someone like Kavanaugh would do to the rights of women, racial minorities, and LGBT people is too severe to risk. We need to attack this problem at its roots: Why are there so many bigoted leaders, and so many bigoted voters willing to vote for them?

The problem is hardly limited to the United States; we are witnessing a global crisis of far-right ideology, as even the UN has publicly recognized.

I think the left made a very dangerous wrong turn with the notion of “call-out culture”. There is now empirical data to support me on this. Publicly calling people racist doesn’t make them less racist—in fact, it usually makes them more racist. Angrily denouncing people doesn’t change their minds—it just makes you feel righteous. Our own accusatory, divisive rhetoric is part of the problem: By accusing anyone who even slightly deviates from our party line (say, by opposing abortion in some circumstances, as 75% of Americans do?) of being a fascist, we slowly but surely push more people toward actual fascism.

Call-out culture encourages a black-and-white view of the world, where there are “good guys” (us) and “bad guys” (them), and our only job is to fight as hard as possible against the “bad guys”. It frees us from the pain of nuance, complexity, and self-reflection—at only the cost of giving up any hope of actually understanding the real causes or solving the problem. Bigotry is not something that “other” people have, which you, fine upstanding individual, could never suffer from. We are all Judy Hopps.

This is not to say we should do nothing—indeed, that would be just as bad if not worse. The rise of neofascism has been possible largely because so many people did nothing. Knowing that there is bigotry in all of us shouldn’t stop us from recognizing that some people are far worse than others, or paralyze us against constructively improving ourselves and our society. See the shades of gray without succumbing to the Fallacy of Gray.

The most effective interventions at reducing bigotry are done in early childhood; obviously, it’s far too late for that when it comes to people like Trump and Kavanaugh.

But there are interventions that can work at reducing bigotry among adults. We need to first understand where the bigotry comes from—and it doesn’t always come from the same source. We need to be willing to look carefully—yes, even sympathetically—at people with bigoted views so that we can understand them.

There are deep, innate systems in the human brain that make bigotry come naturally to us. Even people on the left who devote their lives to combating discrimination against women, racial minorities and LGBT people can still harbor bigoted attitudes toward other groups—such as rural people or Republicans. If you think that all Republicans are necessarily racist, that’s not a serious understanding of what motivates Republicans—that’s just bigotry on your part. Trump is racist. Pence is racist. One could argue that voting for them constitutes, in itself, a racist act. But that does not mean that every single Republican voter is fundamentally and irredeemably racist.

It’s also important to have conversations face-to-face. I must admit that I am personally terrible at this; despite training myself extensively in etiquette and public speaking to the point where most people perceive me as charismatic, even charming, deep down I am still a strong introvert. I dislike talking in person, and dread talking over the phone. I would much prefer to communicate entirely in written electronic communication—but the data is quite clear on this: Face-to-face conversations work better at changing people’s minds. It may be awkward and uncomfortable, but by being there in person, you limit their ability to ignore you or dismiss you; you aren’t a tweet from the void, but an actual person, sitting there in front of them.

Speak with friends and family members. This, I know, can be especially awkward and painful. In the last few years I have lost connections with friends who were once quite close to me as a result of difficult political conversations. But we must speak up, for silence becomes complicity. And speaking up really can work.

Don’t expect people to change their entire worldview overnight. Focus on small, concrete policy ideas. Don’t ask them to change who they are; ask them to change what they believe. Ask them to justify and explain their beliefs—and really listen to them when they do. Be open to the possibility that you, too, might be wrong about something.

If they say “We should deport all illegal immigrants!”, point out that whenever we try this, a lot of fields go unharvested for lack of workers, and ask them why they are so concerned about illegal immigrants. If they say “Illegal immigrants come here and commit crimes!” point them to the statistical data showing that illegal immigrants actually commit fewer crimes on average than native-born citizens (probably because they are more afraid of what happens if they get caught).

If they are concerned about Muslim immigrants influencing our culture in harmful ways, first, acknowledge that there are legitimate concerns about Islamic cultural values (particularly toward women and LGBT people), but then point out that over 90% of Muslim-Americans are proud to be American, and that welcoming people is much more effective at getting them to assimilate into our culture than keeping them out and treating them as outsiders.

If they are concerned about “White people getting outnumbered”, first point out that White people are still over 70% of the US population, and in most rural areas only a tiny fraction of people are non-White. Point out that Census projections showing the US will be majority non-White by 2045 are based on naively extrapolating current trends, and we really have no idea what the world will look like almost 30 years from now. Next, ask them why they worry about being “outnumbered”; get them to consider that perhaps racial demographics don’t have to be a matter of zero-sum conflict.

After you’ve done this, you will feel frustrated and exhausted, and the relationship between you and the person you’re trying to convince will be strained. You will probably feel like you have accomplished absolutely nothing to change their mind—but you are wrong. Even if they don’t acknowledge any change in their beliefs, the mere fact that you sat down and asked them to justify what they believe, and presented calm, reasonable, cogent arguments against those beliefs will have an effect. It will be a small effect, difficult for you to observe in that moment. But it will still be an effect.

Think about the last time you changed your mind about something important. (I hope you can remember such a time; none of us were born being right about everything!) Did it happen all at once? Was there just one, single knock-down argument that convinced you? Probably not. (On some mathematical and scientific questions I’ve had that experience: Oh, wow, yeah, that proof totally demolishes what I believed. Well, I guess I was wrong. But most beliefs aren’t susceptible to such direct proof.) More likely, you were presented with arguments from a variety of sources over a long span of time, gradually chipping away at what you thought you knew. In the moment, you might not even have admitted that you thought any differently—even to yourself. But as the months or years went by, you believed something quite different at the end than you had at the beginning.

Your goal should be to catalyze that process in other people. Don’t take someone who is currently a frothing neo-Nazi and expect them to start marching with Black Lives Matter. Take someone who is currently a little bit uncomfortable about immigration, and calm their fears. Don’t take someone who thinks all poor people are subhuman filth and try to get them to support a basic income. Take someone who is worried about food stamps adding to our national debt, and show them how it is a small portion of our budget. Don’t take someone who thinks global warming was made up by the Chinese and try to get them to support a ban on fossil fuels. Take someone who is worried about gas prices going up as a result of carbon taxes and show them that carbon offsets would add only about $100 per person per year while saving millions of lives.

And if you’re ever on the other side, and someone has just changed your mind, even a little bit—say so. Thank them for opening your eyes. I think a big part of why we don’t spend more time trying to honestly persuade people is that so few people acknowledge us when we do.