Bayesian updating with irrational belief change

Jul 27 JDN 2460884

For the last few weeks I’ve been working at a golf course. (It’s a bit of an odd situation: I’m not actually employed by the golf course; I’m contracted by a nonprofit to be a “job coach” for a group of youths who are part of a work program that involves them working at the golf course.)

I hate golf. I have always hated golf. I find it boring and pointless—which, to be fair, is my reaction to most sports—and also an enormous waste of land and water. A golf course is also a great place for oligarchs to arrange collusion.

But I noticed something about being on the golf course every day, seeing people playing and working there: I feel like I hate it a bit less now.

This is almost certainly a mere-exposure effect: Simply being exposed to something many times makes it feel familiar, and that tends to make you like it more, or at least dislike it less. (There are some exceptions: repeated exposure to trauma can actually make you more sensitive to it, hating it even more.)

I kinda thought this would happen. I didn’t really want it to happen, but I thought it would.

This is very interesting from the perspective of Bayesian reasoning, because it is a theorem of Bayesian logic (an immediate consequence of the law of total expectation, though I cannot seem to find anyone giving it a name; it’s like a folk theorem, I guess) that the following is true:

The prior expectation of the posterior is the expectation of the prior.

The prior is what you believe before observing the evidence; the posterior is what you believe afterward. This theorem describes a relationship that holds between them.

This theorem means that, if I am being optimally rational, I should take into account all expected future evidence, not just evidence I have already seen. I should not expect to encounter evidence that will change my beliefs—if I did expect to see such evidence, I should change my beliefs right now!

This might be easier to grasp with an example.

Suppose I am trying to predict whether it will rain at 5:00 pm tomorrow, and I currently estimate that the probability of rain is 30%. This is my prior probability.

What will actually happen tomorrow is that it will rain or it won’t; so my posterior probability will either be 100% (if it rains) or 0% (if it doesn’t). But I had better assign a 30% chance to the event that will make me 100% certain it rains (namely, I see rain), and a 70% chance to the event that will make me 100% certain it doesn’t rain (namely, I see no rain); if I were to assign any other probabilities, then I must not really think the probability of rain at 5:00 pm tomorrow is 30%.

(The keen Bayesian will notice that the expected variance of my posterior need not be the variance of my prior: My initial variance is relatively high (it’s actually 0.3*0.7 = 0.21, because this is a Bernoulli distribution), because I don’t know whether it will rain or not; but my posterior variance will be 0, because I’ll know the answer once it rains or doesn’t.)

It’s a bit trickier to analyze, but this also works even if the evidence won’t make me certain. Suppose I am trying to determine the probability that some hypothesis is true. If I expect to see any evidence that might change my beliefs at all, then I should, on average, expect to see just as much evidence making me believe the hypothesis more as I see evidence that will make me believe the hypothesis less. If that is not what I expect, I should really change how much I believe the hypothesis right now!
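The theorem can be checked directly with a small sketch. All the numbers below (a 30% prior, likelihoods of 0.8 and 0.4 for the partial-evidence case) are illustrative assumptions, not from any real forecast:

```python
# Checking that the prior expectation of the posterior equals the prior.

prior = 0.30  # P(rain), or P(hypothesis) in the general case

# Case 1: decisive evidence (I either see rain or I don't).
# The posterior is 1 with probability 0.30 and 0 with probability 0.70.
expected_posterior = prior * 1.0 + (1 - prior) * 0.0
assert abs(expected_posterior - prior) < 1e-12  # back to 0.30

# The prior variance is that of a Bernoulli(0.3)...
prior_variance = prior * (1 - prior)  # 0.21
# ...but the posterior variance is 0 either way: variance shrinks, the mean doesn't.

# Case 2: partial evidence E, with P(E|H) = 0.8 and P(E|not H) = 0.4.
p_e_given_h, p_e_given_not_h = 0.8, 0.4
p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h  # P(E) = 0.52
post_if_e = prior * p_e_given_h / p_e                      # Bayes' rule: ~0.462
post_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)      # Bayes' rule: 0.125
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
assert abs(expected_posterior - prior) < 1e-12  # again exactly 0.30
```

Seeing E would raise my credence to about 46%, and not seeing it would lower it to 12.5%, but weighted by how likely each observation is, the average lands exactly back on the 30% I started with.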

So what does this mean for the golf example?

Was I wrong to hate golf quite so much before, because I knew that spending time on a golf course might make me hate it less?

I don’t think so.

See, the thing is: I know I’m not perfectly rational.

If I were indeed perfectly rational, then anything I expect to change my beliefs is a rational Bayesian update, and I should indeed factor it into my prior beliefs.

But if I know for a fact that I am not perfectly rational, that there are things which will change my beliefs in ways that make them deviate from rational Bayesian updating, then in fact I should not take those expected belief changes into account in my prior beliefs—since I expect to be wrong later, updating on that would just make me wrong now as well. I should only update on the expected belief changes that I believe will be rational.

This is something that a boundedly-rational person should do that neither a perfectly-rational nor perfectly-irrational person would ever do!

But maybe you don’t find the golf example convincing. Maybe you think I shouldn’t hate golf so much, and it’s not irrational for me to change my beliefs in that direction.


Very well. Let me give you a thought experiment which provides a very clear example of a time when you definitely would think your belief change was irrational.


To be clear, I’m not suggesting the two situations are in any way comparable; the golf thing is pretty minor, and for the thought experiment I’m intentionally choosing something quite extreme.

Here’s the thought experiment.

A mad scientist offers you a deal: Take this pill and you will receive $50 million. Naturally, you ask what the catch is. The catch, he explains, is that taking the pill will make you staunchly believe that the Holocaust didn’t happen. Take this pill, and you’ll be rich, but you’ll become a Holocaust denier. (I have no idea if making such a pill is even possible, but it’s a thought experiment, so bear with me. It’s certainly far less implausible than Swampman.)

I will assume that you are not, and do not want to become, a Holocaust denier. (If not, I really don’t know what else to say to you right now. It happened.) So if you take this pill, your beliefs will change in a clearly irrational way.

But I still think it’s probably justifiable to take the pill. This is absolutely life-changing money, for one thing, and being a random person who is a Holocaust denier isn’t that bad in the scheme of things. (Maybe it would be worse if you were in a position to have some kind of major impact on policy.) In fact, before taking the pill, you could write out a contract with a trusted friend that will force you to donate some of the $50 million to high-impact charities—and perhaps some of it to organizations that specifically fight Holocaust denial—thus ensuring that the net benefit to humanity is positive. Once you take the pill, you may be mad about the contract, but you’ll still have to follow it, and the net benefit to humanity will still be positive as reckoned by your prior, more correct, self.

It’s certainly not irrational to take the pill. There are perfectly-reasonable preferences you could have (indeed, likely do have) that would say that getting $50 million is more important than having incorrect beliefs about a major historical event.

And if it’s rational to take the pill, and you intend to take the pill, then of course it’s rational to believe that in the future, you will have taken the pill and you will become a Holocaust denier.

But it would be absolutely irrational for you to become a Holocaust denier right now because of that. The pill isn’t going to provide evidence that the Holocaust didn’t happen (for no such evidence exists); it’s just going to alter your brain chemistry in such a way as to make you believe that the Holocaust didn’t happen.

So here we have a clear example where you expect to be more wrong in the future.

Of course, if this really only happens in weird thought experiments about mad scientists, then it doesn’t really matter very much. But I contend it happens in reality all the time:

  • You know that by hanging around people with an extremist ideology, you’re likely to adopt some of that ideology, even if you really didn’t want to.
  • You know that if you experience a traumatic event, it is likely to make you anxious and fearful in the future, even when you have little reason to be.
  • You know that if you have a mental illness, you’re likely to form harmful, irrational beliefs about yourself and others whenever you have an episode of that mental illness.

Now, all of these belief changes are things you would likely try to guard against: If you are a researcher studying extremists, you might make a point of taking frequent vacations to talk with regular people and help yourself re-calibrate your beliefs back to normal. Nobody wants to experience trauma, and if you do, you’ll likely seek out therapy or other support to help heal yourself from that trauma. And one of the most important things they teach you in cognitive-behavioral therapy is how to challenge and modify harmful, irrational beliefs when they are triggered by your mental illness.

But these guarding actions only make sense precisely because the anticipated belief change is irrational. If you anticipate a rational change in your beliefs, you shouldn’t try to guard against it; you should factor it into what you already believe.

This also gives me a little more sympathy for Evangelical Christians who try to keep their children from being exposed to secular viewpoints. I think we both agree that having more contact with atheists will make their children more likely to become atheists—but we view this expected outcome differently.

From my perspective, this is a rational change, and it’s a good thing, and I wish they’d factor it into their current beliefs already. (Like hey, maybe if talking to a bunch of smart people and reading a bunch of books on science and philosophy makes you think there’s no God… that might be because… there’s no God?)

But I think, from their perspective, this is an irrational change, it’s a bad thing, the children have been “tempted by Satan” or something, and thus it is their duty to protect their children from this harmful change.

Of course, I am not a subjectivist. I believe there’s a right answer here, and in this case I’m pretty sure it’s mine. (Wouldn’t I always say that? No, not necessarily; there are lots of matters for which I believe that there are experts who know better than I do—that’s what experts are for, really—and thus if I find myself disagreeing with those experts, I try to educate myself more and update my beliefs toward theirs, rather than just assuming they’re wrong. I will admit, however, that a lot of people don’t seem to do this!)

But this does change how I might tend to approach the situation of exposing their children to secular viewpoints. I now understand better why they would see that exposure as a harmful thing, and thus be resistant to actions that otherwise seem obviously beneficial, like teaching kids science and encouraging them to read books. In order to get them to stop “protecting” their kids from the free exchange of ideas, I might first need to persuade them that introducing some doubt into their children’s minds about God isn’t such a terrible thing. That sounds really hard, but it at least clearly explains why they are willing to fight so hard against things that, from my perspective, seem good. (I could also try to convince them that exposure to secular viewpoints won’t make their kids doubt God, but the thing is… that isn’t true. I’d be lying.)

That is, Evangelical Christians are not simply incomprehensibly evil authoritarians who hate truth and knowledge; they quite reasonably want to protect their children from things that will harm them, and they firmly believe that being taught about evolution and the Big Bang will make their children more likely to suffer great harm—indeed, the greatest harm imaginable, the horror of an eternity in Hell. Convincing them that this is not the case—indeed, ideally, that there is no such place as Hell—sounds like a very tall order; but I can at least more keenly grasp the equilibrium they’ve found themselves in, where they believe that anything that challenges their current beliefs poses a literally existential threat. (Honestly, as a memetic adaptation, this is brilliant. Like a turtle, the meme has grown itself a nigh-impenetrable shell. No wonder it has managed to spread throughout the world.)

Wage-matching and the collusion under our noses

Jul 20 JDN 2460877

It was a minor epiphany for me when I learned, over the course of studying economics, that price-matching policies, while they seem like they benefit consumers, actually are a brilliant strategy for maintaining tacit collusion.

Consider a (Bertrand) market, with some small number n of firms in it.

Each firm announces a price, and then customers buy from whichever firm charges the lowest price. Firms can produce as much as they need to in order to meet this demand. (This makes the most sense for a service industry rather than as literal manufactured goods.)

In Nash equilibrium, all firms will charge the same price, because anyone who charged more would sell nothing. But what will that price be?

In the absence of price-matching, it will be just above the marginal cost of the service. Otherwise, it would be advantageous to undercut all the other firms by charging slightly less, and you could still make a profit. So the equilibrium price is basically the same as it would be in a perfectly-competitive market.

But now consider what happens if the firms can announce a price-matching policy.

If you were already planning on buying from firm 1 at price P1, and firm 2 announces that you can buy from them at some lower price P2, then you still have no reason to switch to firm 2, because you can still get price P2 from firm 1 as long as you show them the ad from the other firm. Under the very reasonable assumption that switching firms carries some cost (if nothing else, the effort of driving to a different store), people won’t switch—which means that any undercut strategy will fail.

Now, firms don’t need to set such low prices! They can set a much higher price, confident that if any other firm tries to undercut them, it won’t actually work—and thus, no one will try to undercut them. The new Nash equilibrium is now for the firms to charge the monopoly price.
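A toy duopoly makes the logic concrete. The linear demand curve and the specific numbers are my own illustrative assumptions, not part of the original argument: without price-matching, undercutting the collusive price is profitable (so it can’t be an equilibrium), while with matching, the undercutter gains nothing.

```python
# Toy Bertrand duopoly: demand D(p) = 100 - p, marginal cost c = 10 (made-up numbers).

C = 10

def demand(p):
    return 100 - p

def profit_split(p):
    """Each firm's profit when both charge p and split the market evenly."""
    return 0.5 * (p - C) * demand(p)

monopoly_price = 55  # maximizes total profit (p - C) * demand(p)

# Without price-matching: undercutting to 54 captures the entire market.
undercut_profit = (54 - C) * demand(54)
assert undercut_profit > profit_split(monopoly_price)  # deviation pays; collusion unravels

# With price-matching (plus any switching cost): customers stay put and just
# invoke the match, so the undercutter still serves only half the market,
# now at the lower price.
matched_undercut_profit = 0.5 * (54 - C) * demand(54)
assert matched_undercut_profit < profit_split(monopoly_price)  # deviation doesn't pay
```

With these numbers, each firm earns 1012.5 at the monopoly price; an undercutter earns 2024 without matching but only 1012 with it, so matching removes the incentive to cut prices at all.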

In the real world, it’s a bit more complicated than that; for various reasons they may not actually be able to sustain collusion at the monopoly price. But there is considerable evidence that price-matching schemes do allow firms to charge a higher price than they would in perfect competition. (Though the literature is not completely unanimous; there are a few who argue that price-matching doesn’t actually facilitate collusion—but they are a distinct minority.)

Thus, a policy that on its face seems like it’s helping consumers by giving them lower prices actually ends up hurting them by giving them higher prices.

Now I want to turn things around and consider the labor market.

What would price-matching look like in the labor market?

It would mean that whenever you are offered a higher wage at a different firm, you can point this out to the firm you are currently working at, and they will offer you a raise to that new wage, to keep you from leaving.

That sounds like a thing that happens a lot.

Indeed, pretty much the best way to get a raise, almost anywhere you may happen to work, is to show your employer that you have a better offer elsewhere. It’s not the only way to get a raise, and it doesn’t always work—but it’s by far the most reliable way, because it usually works.

This for me was another minor epiphany:

The entire labor market is full of tacit collusion.

The very fact that firms can afford to give you a raise when you have an offer elsewhere basically proves that they weren’t previously paying you all that you were worth. If they had actually been paying you your value of marginal product as they should in a competitive labor market, then when you showed them a better offer, they would say: “Sorry, I can’t afford to pay you any more; good luck in your new job!”

This is not a monopoly price but a monopsony price (or at least something closer to it); people are being systematically underpaid so that their employers can make higher profits.

And since the phenomenon of wage-matching is so ubiquitous, it looks like this is happening just about everywhere.

This simple model doesn’t tell us how much higher wages would be in perfect competition. It could be a small difference, or a large one. (It likely varies by industry, in fact.) But the simple fact that nearly every employer engages in wage-matching implies that nearly every employer is in fact colluding on the labor market.

This also helps explain another phenomenon that has sometimes puzzled economists: Why doesn’t raising the minimum wage increase unemployment? Well, it absolutely wouldn’t, if all the firms paying minimum wage are colluding in the labor market! And we already knew that most labor markets were shockingly concentrated.

What should be done about this?

Now there we have a thornier problem.

I actually think we could implement a law against price-matching on product and service markets relatively easily, since these are generally applied to advertised prices.

But a law against wage-matching would be quite tricky indeed. Wages are generally not advertised—a problem unto itself—and we certainly don’t want to ban raises in general.

Maybe what we should actually do is something like this: Offer a cash bonus (refundable tax credit?) to anyone who changes jobs in order to get a higher wage. Make this bonus large enough to offset the costs of switching jobs—which are clearly substantial. Then, the “undercut” (“overcut”?) strategy will become more effective; employers will have an easier time poaching workers from each other, and a harder time sustaining collusive wages.

Businesses would of course hate this policy, and lobby heavily against it. This is precisely the reaction we should expect if they are relying upon collusion to sustain their profits.

Universal human rights are more radical than is commonly supposed

Jul 13 JDN 2460870

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

So begins the second paragraph of the Declaration of Independence. It had to have been obvious to many people, even at the time, how incredibly hypocritical it was for men to sign that document and then go home to give orders to their slaves.

And today, even though the Universal Declaration of Human Rights was signed over 75 years ago, there are still human rights violations ongoing in many different countries—including right here in the United States.

Why is it so easy to get people to declare that they believe in universal human rights—but so hard to get them to actually act accordingly?

Other moral issues are not like this. While hypocrisy certainly exists in many forms, for the most part people’s moral claims align with their behavior. Most people say they are against murder—and sure enough, most people aren’t murderers. Most people say they are against theft—and indeed, most people don’t steal very often. And when it comes to things that most people do all the time, most people aren’t morally opposed to them—even things like eating meat, for which there is a pretty compelling moral case against it.

But universal human rights seems like something that is far more honored in the breach than the observance.

I think this is because most people don’t quite grasp just how radical universal human rights really are.

The tricky part is the universal. They are supposed to apply to everyone.

Even those people. Even the people you are thinking of right now as an exception. Even the people you hate the most. Yes, even them.

Depending on who you are, you might be thinking of different exceptions: People of a particular race, or religion, or nationality, perhaps; or criminals, or terrorists; or bigots, or fascists. But almost everyone has some group of people that they don’t really think deserves the full array of human rights.

So I am here to tell you that, yes, those people too. Universal human rights means everyone.

No exceptions.

This doesn’t mean that we aren’t allowed to arrest and imprison people for crimes. It doesn’t even mean that we aren’t sometimes justified in killing people—e.g. in war or self-defense. But it does mean that there is no one, absolutely no one, who is considered beneath human dignity. Any time we are to deprive someone of life or liberty, we must do so with absolute respect for their fundamental rights.

This also means that there is no one you should be spitting on, no one you should be torturing, no one you should be calling dehumanizing names. Sometimes violence is necessary, to protect yourself, or to preserve liberty, or to overthrow tyranny. But yes, even psychopathic tyrants are human beings, and still deserve human rights. If you cannot recognize a person’s humanity while still defending yourself against them, you need to do some serious soul-searching and ask yourself why not.

I think what happens is that, when most people are asked about “universal human rights”, they essentially exclude whoever they think doesn’t deserve rights from the very category of “human”. Then it essentially becomes a tautology: Everyone who deserves rights deserves rights.

And thus, everyone signs onto it—but it ends up meaning almost nothing. It doesn’t stop racism, or sexism, or police brutality, or mass incarceration, or rape, or torture, or genocide, because the people doing those things don’t think of the people they’re doing them to as actually human.

But no, the actual declaration says all human beings. Everyone. Even the people you hate. Even the people who hate you. Even people who want to torture and kill you. Yes, even them.

This is an incredibly radical idea.

It is frankly alien to a brain that evolved for tribalism; we are wired to think of the world in terms of in-groups and out-groups, and universal human rights effectively declare that everyone is in the in-group and the out-group doesn’t exist.

Indeed, perhaps too radical! I think a reasonable defense could be made of a view that some people (psychopathic tyrants?) really are just so evil that they don’t actually deserve basic human dignity. But I will say this: Usually the people arguing that some group of humans aren’t really humans end up being on the wrong side of history.

The one possible exception I can think of here is abortion: The people arguing that fetuses are not human beings and it should be permissible to kill them when necessary are, at least in my view, generally on the right side of history. But even then, I tend to be much more sympathetic to the view that abortion, like war and self-defense, should be seen as a tragically necessary evil, not an inherent good. The ideal scenario would be to never need it, and allowing it when it’s needed is simply a second-best solution. So I think we can actually still fit this into a view that fetuses are morally important and deserving of dignity; it’s just that sometimes the rights of one being can outweigh the rights of another.

And other than that, yeah, it’s pretty much the case that the people who want to justify enacting some terrible harm on some group of people because they say those people aren’t really people, end up being the ones that, sooner or later, the world recognizes as the bad guys.

So think about that, if there is still some group of human beings that you think of as not really human beings, not really deserving of universal human rights. Will history vindicate you—or condemn you?

Quantifying stereotypes

Jul 6 JDN 2460863

There are a lot of stereotypes in the world, from the relatively innocuous (“teenagers are rebellious”) to the extremely harmful (“Black people are criminals”).

Most stereotypes are not true.

But most stereotypes are not exactly false, either.

Here’s a list of forty stereotypes, all but one of which I got from this list of stereotypes:

(Can you guess which one? I’ll give you a hint: It’s a group I belong to and a stereotype I’ve experienced firsthand.)

  1. “Children are always noisy and misbehaving.”
  2. “Kids can’t understand complex concepts.”
  3. “Children are tech-savvy.”
  4. “Teenagers are always rebellious.”
  5. “Teenagers are addicted to social media.”
  6. “Adolescents are irresponsible and careless.”
  7. “Adults are always busy and stressed.”
  8. “Adults are responsible.”
  9. “Adults are not adept at using modern technologies.”
  10. “Elderly individuals are always grumpy.”
  11. “Old people can’t learn new skills, especially related to technology.”
  12. “The elderly are always frail and dependent on others.”
  13. “Women are emotionally more expressive and sensitive than men.”
  14. “Females are not as good at math or science as males.”
  15. “Women are nurturing, caring, and focused on family and home.”
  16. “Females are not as assertive or competitive as men.”
  17. “Men do not cry or express emotions openly.”
  18. “Males are inherently better at physical activities and sports.”
  19. “Men are strong, independent, and the primary breadwinners.”
  20. “Males are not as good at multitasking as females.”
  21. “African Americans are good at sports.”
  22. “African Americans are inherently aggressive or violent.”
  23. “Black individuals have a natural talent for music and dance.”
  24. “Asians are highly intelligent, especially in math and science.”
  25. “Asian individuals are inherently submissive or docile.”
  26. “Asians know martial arts.”
  27. “Latinos are uneducated.”
  28. “Hispanic individuals are undocumented immigrants.”
  29. “Latinos are inherently passionate and hot-tempered.”
  30. “Middle Easterners are terrorists.”
  31. “Middle Eastern women are oppressed.”
  32. “Middle Eastern individuals are inherently violent or aggressive.”
  33. “White people are privileged and unacquainted with hardship.”
  34. “White people are racist.”
  35. “White individuals lack rhythm in music or dance.”
  36. “Gay men are excessively flamboyant.”
  37. “Gay men have lisps.”
  38. “Lesbians are masculine.”
  39. “Bisexuals are promiscuous.”
  40. “Trans people get gender-reassignment surgery.”

If you view the above 40 statements as absolute statements about everyone in the category (the first-order operator “for all”), they are obviously false; there are clear counter-examples to every single one. If you view them as merely saying that there are examples of each (the first-order operator “there exists”), they are obviously true, but also utterly trivial, as you could just as easily find examples from other groups.

But I think there’s a third way to read them, which may be more what most people actually have in mind. Indeed, it kinda seems uncharitable not to read them this third way.

That way is:

“This is more true of the group I’m talking about than it is true of other groups.”

And that is not only a claim that can be true, it is a claim that can be quantified.

Recall my new favorite effect size measure; I like it because it’s so simple and intuitive. I’m not much for the official name, probability of superiority (especially in this context!), so I’m gonna call it the more down-to-earth chance of being higher.

It is exactly what it sounds like: If you compare a quantity X between group A and group B, what is the chance that the person in group A has a higher value of X?
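Given samples from the two groups, this measure can be estimated by comparing every pair. The function name and the tiny example numbers here are mine, purely for illustration:

```python
from itertools import product

def chance_of_being_higher(a, b, weak=False):
    """Estimate P(X_A > X_B) from two samples by comparing every pair.

    With weak=True, ties count as wins, giving P(X_A >= X_B) instead.
    This is the same quantity as the 'probability of superiority'.
    """
    wins = sum(1 for x, y in product(a, b) if (x >= y if weak else x > y))
    return wins / (len(a) * len(b))

# Tiny made-up samples of some trait in groups A and B:
a = [3, 5, 7]
b = [2, 5, 6]
print(chance_of_being_higher(a, b))             # strict: 5 of 9 pairs
print(chance_of_being_higher(a, b, weak=True))  # weak: 6 of 9 pairs (one tie)
```

A value of 0.5 means the two groups are indistinguishable on this trait; anything above 0.5 is the direction the stereotype claims.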

Let’s start at the top: If you take one randomly-selected child, and one randomly-selected adult, what is the chance that the child is one who is more prone to being noisy and misbehaving?

Probably pretty high.

Or let’s take number 13: If you take one randomly-selected woman and one randomly-selected man, what is the chance that the woman is the more emotionally expressive one?

Definitely more than half.

Or how about number 27: If you take one randomly-selected Latino and one randomly-selected non-Latino (especially if you choose a White or Asian person), what is the chance that the Latino is the less-educated one?

That one I can do fairly precisely: Since 95% of White Americans have completed high school but only 75% of Latino Americans have, while 28% of Whites have a bachelor’s degree and only 21% of Latinos do, the probability of the White person being at least as educated as the Latino person is about 82%.
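That 82% can be reproduced from the quoted shares by treating education as three ordered levels; the exact three-way split is my simplification of the underlying attainment data:

```python
# Education as three ordered levels, using the shares quoted above:
# 95% of White Americans finished high school (28% with a bachelor's),
# 75% of Latino Americans finished high school (21% with a bachelor's).
LEVELS = ["<HS", "HS only", "BA+"]

white  = {"<HS": 0.05, "HS only": 0.67, "BA+": 0.28}
latino = {"<HS": 0.25, "HS only": 0.54, "BA+": 0.21}

rank = {level: i for i, level in enumerate(LEVELS)}

# P(a randomly chosen White person is at least as educated as a random Latino person)
p_at_least = sum(
    white[w] * latino[l]
    for w in LEVELS
    for l in LEVELS
    if rank[w] >= rank[l]
)
print(round(p_at_least, 3))  # 0.822
```

So the weak chance of being higher works out to about 82% from those four figures alone.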

I don’t know the exact figures for all of these, and I didn’t want to spend all day researching 40 different stereotypes, but I am quite prepared to believe that at least all of the following exhibit a chance of being higher that is over 50%:

1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 15, 16, 17, 18, 19, 21, 24, 26, 27, 28, 29, 30, 31, 33, 34, 36, 37, 38, 40.

You may have noticed that that’s… most of them. I had to shrink the font a little to fit them all on one line.

I think 30 is an important one to mention, because while terrorists are a tiny proportion of the Middle Eastern population, they are in fact a much larger proportion of that population than they are of most other populations, and it doesn’t take that many terrorists to make a place dangerous. The Middle East is objectively a more dangerous place for terrorism than most other places, and only India and sub-Saharan Africa come close (both of which are also largely driven by Islamist terrorism). So while it’s bigoted to assume that any given Muslim or Middle Easterner is a terrorist, it is an objective fact that a disproportionate share of terrorists are Middle Eastern Muslims. Part of what I’m trying to do here is get people to more clearly distinguish between those two concepts, because one is true and the other is very, very false.

40 also deserves particular note, because the chance of being higher is almost certainly very close to 100%. While most trans people don’t get gender-reassignment surgery, virtually all people who get gender-reassignment surgery are trans.

Then again, you could see this as a limitation of the measure, since we might expect a 100% score to mean “it’s true of everyone in the group”, when here it simply means “if we ask people whether they have had gender-reassignment surgery, the trans people sometimes say yes and the cis people always say no.”


We could talk about a weak or strict chance of being higher: The weak chance is the chance of being greater than or equal to (which is the normal measure), while the strict chance is the chance of being strictly greater. In this case, the weak chance is nearly 100%, while the strict chance is hard to estimate but probably about 33% based on surveys.

This doesn’t mean that all stereotypes have some validity.

There are some stereotypes here, including a few pretty harmful ones, for which I’m not sure how the statistics would actually shake out:
10, 14, 22, 23, 25, 32, 35, 39

But I think we should be honestly prepared for the possibility that maybe there is some statistical validity to some of these stereotypes too, and instead of simply dismissing the stereotypes as false—or even bigoted—we should instead be trying to determine how true they are, and also look at why they might have some truth to them.

My proposal is to use the chance of being higher as a measure of the truth of a stereotype.

A stereotype is completely true if it has a chance of being higher of 100%.

It is completely false if it has a chance of being higher of 50%.

And it is completely backwards if it has a chance of being higher of 0%.

There is a unique affine transformation that does this: 2X-1.

100% maps to 100%, 50% maps to 0%, and 0% maps to -100%.
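As a sketch, the proposed truth score is just this affine map (the function name is mine):

```python
def stereotype_truth(chance_higher):
    """Map a chance-of-being-higher X in [0, 1] to a truth score 2X - 1 in [-1, 1]:
    +1 = completely true, 0 = completely false (no group difference),
    -1 = completely backwards."""
    return 2 * chance_higher - 1

assert stereotype_truth(1.0) == 1.0    # completely true
assert stereotype_truth(0.5) == 0.0    # completely false
assert stereotype_truth(0.0) == -1.0   # completely backwards
```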

With discrete outcomes, the difference between the weak and strict chance of being higher becomes very important. With a discrete outcome, you can have a 100% weak chance but a 1% strict chance, and honestly I’m really not sure whether we should say that stereotype is true or not.

For example, for the claim “trans men get bottom surgery”, the figures would be 100% and 6% respectively. The vast majority of trans men don’t get bottom surgery—but cis men almost never do. (Unless I count penis enlargement surgery? Then the numbers might be closer than you’d think, at least in the US where the vast majority of such surgery is performed.)

And for the claim “Middle Eastern Muslims are terrorists”, well, given two random people of whatever ethnicity or religion, they’re almost certainly not terrorists—but if one of them is, it’s probably the Middle Eastern Muslim. It may be better in this case to talk about the conditional chance of being higher: If you have two random people, you know that one is a terrorist and one isn’t, and one is a Middle Eastern Muslim and one isn’t, how likely is it that the Middle Eastern Muslim is the terrorist? Probably about 80%. Definitely more than 50%, but also not 100%. So that’s the sense in which the stereotype has some validity. It’s still the case that 99.999% of Middle Eastern Muslims aren’t terrorists, and so it remains bigoted to treat every Middle Eastern Muslim you meet like a terrorist.
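Assuming the two people are drawn independently, this conditional chance follows from Bayes' rule. Here's a sketch using the purely illustrative per-group rates quoted below in the four-probability comparison (0.001% vs. 0.0002%); `conditional_chance` is a hypothetical helper:

```python
def conditional_chance(p_group, p_other):
    """Given one person from each group and exactly one terrorist
    between them, the probability it's the person from the first
    group (assumes the two draws are independent)."""
    return (p_group * (1 - p_other)) / (
        p_group * (1 - p_other) + p_other * (1 - p_group)
    )

# Illustrative rates only: 0.001% vs. 0.0002%.
print(conditional_chance(1e-5, 2e-6))  # ≈ 0.83
```

Note that the answer depends only on the ratio of the two rates when both are tiny, which is why the result can be far above 50% even though both rates are minuscule.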

We could also work harder to more clearly distinguish between “Middle Easterners are terrorists” and “terrorists are Middle Easterners”; the former is really not true (99.999% are not), but the latter kinda is (the plurality of the world’s terrorists are in the Middle East).

Alternatively, for discrete traits we could just report all four probabilities, which would be something like this: 99.999% of Middle Eastern Muslims are not terrorists, and 0.001% are; 99.9998% of other Americans are not terrorists, and 0.0002% are. Compared to Muslim terrorists in the US, White terrorists actually are responsible for more attacks and a similar number of deaths, but largely because there just are a lot more White people in America.

These issues mainly arise when a trait is discrete. When the trait is itself quantitative (like rebelliousness, or math test scores), this is less of a problem, and the weak and strict chances of being higher are generally almost identical.
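To see why, here's a small sketch with simulated continuous scores (made-up Gaussian data): with a continuous trait, exact ties essentially never occur, so the weak and strict estimates coincide.

```python
import random

random.seed(0)  # reproducible made-up data
# Hypothetical continuous trait (e.g. a test score), with group A
# centered one standard deviation above group B.
group_a = [random.gauss(1.0, 1.0) for _ in range(200)]
group_b = [random.gauss(0.0, 1.0) for _ in range(200)]

pairs = [(a, b) for a in group_a for b in group_b]
weak = sum(a >= b for a, b in pairs) / len(pairs)
strict = sum(a > b for a, b in pairs) / len(pairs)
print(weak, strict)  # identical here: no exact ties among continuous values
```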


So instead of asking whether a stereotype is true, we could ask: How true is it?

Using measures like this, we will find that some stereotypes (like 1 and 4) probably have quite high truth levels; but others (like 14), if they are true at all, must have quite low truth levels: if there’s any difference between the groups at all, it’s a small one!

The lower a stereotype’s truth level, the less useful it is; indeed, this measure directly tells you how often you’d guess correctly, given one person from each group, which of them scores higher on the trait. If you couldn’t really predict, then why are you using the stereotype? Get rid of it.

Moreover, some stereotypes are clearly more harmful than others.

Even if it is statistically valid to say that Black people are more likely to commit crimes in the US than White people (it is), the kind of person who goes around saying “Black people are criminals” is (1) smearing all Black people with the behavior of a minority of them, and (2) likely to be racist in other ways. So we have good reason to be suspicious of people who say such things, even if there may be a statistical kernel of truth to their claims.

But we might still want to be a little more charitable, a little more forgiving, when people express stereotypes. They may make what sounds like a blanket absolute “for all” statement, but actually intend something much milder—something that might actually be true. They might not clearly grasp the distinction between “Middle Easterners are terrorists” and “terrorists are Middle Easterners”, and instead of denouncing them as a bigot immediately, you could try taking the time to listen to what they are saying and carefully explain what’s wrong with it.

Failing to be charitable like this—as we so often do—often feels to people like we are dismissing their lived experience. All the terrorists they can think of were Middle Eastern! All of the folks they know with a lisp turned out to be gay! Lived experience is ultimately anecdotal, but it still has a powerful effect on how people think (too powerful—see also availability heuristic), and it’s really not surprising that people would feel we are treating them unjustly if we immediately accuse them of bigotry simply for stating things that, based on their own experience, seem to be true.

I think there’s another harm here as well, which is that we damage our own credibility. If I believe that something is true and you tell me that I’m a bad person for believing it, that doesn’t make me not believe it—it makes me not trust you. You’ve presented yourself as the sort of person who wants to cover up the truth when it doesn’t fit your narrative. If you wanted to actually convince me that my belief is wrong, you could present evidence that might do that. (To be fair, this doesn’t always work; but sometimes it does!) But if you just jump straight to attacking my character, I don’t want to talk to you anymore.