On foxes and hedgehogs, part I

Aug 3 JDN 2460891

Today I finally got around to reading Expert Political Judgment by Philip E. Tetlock, more or less in a single sitting because I’ve been sick the last week with some pretty tight limits on what activities I can do. (It’s mostly been reading, watching TV, or playing video games that don’t require intense focus.)

It’s really an excellent book, and I now both understand why it came so highly recommended to me and pass that recommendation on to you: Read it.

The central thesis of the book really boils down to three propositions:

  1. Human beings, even experts, are very bad at predicting political outcomes.
  2. Some people, who use an open-minded strategy (called “foxes”), perform substantially better than other people, who use a more dogmatic strategy (called “hedgehogs”).
  3. When rewarding predictors with money, power, fame, prestige, and status, human beings systematically favor (over)confident “hedgehogs” over (correctly) humble “foxes”.

I decided I didn’t want to make this post about current events, but I think you’ll probably agree with me when I say:

That explains a lot.

How did Tetlock determine this?

Well, he studies the issue in several different ways, but the core experiment that drives his account is actually a rather simple one:

  1. He gathered a large group of subject-matter experts: Economists, political scientists, historians, and area-studies professors.
  2. He came up with a large set of questions about politics, economics, and similar topics, which could all be formulated as a set of probabilities: “How likely is this to get better/get worse/stay the same?” (For example, this was in the 1980s, so he asked about the fate of the Soviet Union: “By 1990, will they become democratic, remain as they are, or collapse and fragment?”)
  3. Each respondent answered a subset of the questions, some about their own particular field, some about another, more distant field; they assigned probabilities on an 11-point scale, from 0% to 100% in increments of 10%.
  4. A few years later, he compared the predictions to the actual results, scoring them using a Brier score, which penalizes you for assigning high probability to things that didn’t happen or low probability to things that did happen. (There’s a short code sketch of this scoring just after this list.)
  5. He compared the resulting scores between people with different backgrounds, on different topics, with different thinking styles, and a variety of other variables. He also benchmarked them using some automated algorithms like “always say 33%” and “always give ‘stay the same’ 100%”.
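To make step 4 concrete, here is a minimal sketch of how a multi-outcome Brier score can be computed; the question, the forecasters, and their probabilities are all hypothetical, not taken from Tetlock’s data.

```python
import numpy as np

def brier_score(predicted_probs, outcome_index):
    """Multi-outcome Brier score: sum of squared differences between the
    predicted probability vector and the one-hot vector of what actually
    happened. 0 is a perfect score; higher is worse."""
    p = np.asarray(predicted_probs, dtype=float)
    actual = np.zeros_like(p)
    actual[outcome_index] = 1.0
    return float(np.sum((p - actual) ** 2))

# Hypothetical question: will the situation get better / stay the same / get worse?
hedgehog = [0.0, 0.1, 0.9]    # very confident in "get worse"
fox      = [0.2, 0.5, 0.3]    # hedges across the outcomes
mindless = [1/3, 1/3, 1/3]    # the "always say 33%" benchmark

outcome = 1  # suppose things actually stayed the same
for name, probs in [("hedgehog", hedgehog), ("fox", fox), ("mindless", mindless)]:
    print(name, round(brier_score(probs, outcome), 3))
# hedgehog 1.62, fox 0.38, mindless 0.667 -- overconfidence gets punished hard
```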

I’ll show you the key results of that analysis momentarily, but to help it make more sense to you, let me elaborate a bit more on the “foxes” and “hedgehogs”. The notion was first popularized by Isaiah Berlin in an essay called, simply, The Hedgehog and the Fox.

“The fox knows many things, but the hedgehog knows one very big thing.”

That is, someone who reasons as a “fox” combines ideas from many different sources and perspectives, and tries to weigh them all together into some sort of synthesis that then yields a final answer. This process is messy and complicated, and rarely yields high confidence about anything.

Whereas, someone who reasons as a “hedgehog” has a comprehensive theory of the world, an ideology, that provides clear answers to almost any possible question, with the surely minor, insubstantial flaw that those answers are not particularly likely to be correct.

He also considered “hedge-foxes” (people who are mostly fox but also a little bit hedgehog) and “fox-hogs” (people who are mostly hedgehog but also a little bit fox).

Tetlock decomposes the scores into two components: calibration and discrimination. (Both are very overloaded words, but they are standard in the literature.)

Calibration is how well your stated probabilities matched up with the actual probabilities; that is, if you predicted 10% probability on 20 different events, you have very good calibration if precisely 2 of those events occurred, and very poor calibration if 18 of those events occurred.

Discrimination more or less describes how useful your predictions are, what information they contain above and beyond the simple base rate. If you just assign equal probability to all events, you probably will have reasonably good calibration, but you’ll have zero discrimination; whereas if you somehow managed to assign 100% to everything that happened and 0% to everything that didn’t, your discrimination would be perfect (and we would have to find out how you cheated, or else declare you clairvoyant).
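For the quantitatively inclined, here is a rough sketch of one standard way (a Murphy-style decomposition) to compute these two components for a set of binary forecasts. The measures on Tetlock’s graph are scaled so that higher is better, as described below; this sketch just computes the raw, unscaled components, with made-up data.

```python
import numpy as np

def calibration_and_discrimination(forecasts, outcomes):
    """forecasts: stated probabilities (e.g. on the 0%, 10%, ..., 100% scale);
    outcomes: 0/1 results. Returns (calibration_error, discrimination):
    lower calibration_error means better calibration, higher discrimination
    means more information beyond the base rate."""
    f = np.asarray(forecasts, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    base_rate = o.mean()
    cal_error, disc = 0.0, 0.0
    for p in np.unique(f):                        # bin forecasts by stated probability
        mask = (f == p)
        freq = o[mask].mean()                     # how often those events actually happened
        weight = mask.mean()
        cal_error += weight * (p - freq) ** 2     # stated probability vs. observed frequency
        disc      += weight * (freq - base_rate) ** 2   # how far the bins spread from the base rate
    return cal_error, disc

# A forecaster who always says the base rate: perfect calibration, zero discrimination.
forecasts = np.array([0.3] * 10)
outcomes  = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])   # 3 of 10 happened
print(calibration_and_discrimination(forecasts, outcomes))   # (0.0, 0.0)
```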

For both measures, higher is better. The ideal for each is 100%, but it’s virtually impossible to get 100% discrimination and actually not that hard to get 100% calibration if you just use the base rates for everything.


There is a bit of a tradeoff between these two: It’s not too hard to get reasonably good calibration if you just never go out on a limb, but then your predictions aren’t as useful; we could have mostly just guessed them from the base rates.

On the graph, you’ll see downward-sloping lines that are meant to represent this tradeoff: Two prediction methods that would yield the same overall score but different levels of calibration and discrimination will be on the same line. In a sense, two points on the same line are equally good methods that simply trade off usefulness and accuracy differently.

All right, let’s see the graph at last:

The pattern is quite clear: The more foxy you are, the better you do, and the more hedgehoggy you are, the worse you do.

I’d also like to point out the other two regions here: “Mindless competition” and “Formal models”.

The former includes really simple algorithms like “always return 33%” or “always give ‘stay the same’ 100%”. These perform shockingly well. The most sophisticated of these, “case-specific extrapolation” (35 and 36 on the graph, which basically assumes that each country will continue doing what it’s been doing), actually performs as well as, if not better than, even the foxes.

And what’s that at the upper-right corner, absolutely dominating the graph? That’s “Formal models”. This describes basically taking all the variables you can find and shoving them into a gigantic logit model, and then outputting the result. It’s computationally intensive and requires a lot of data (hence why he didn’t feel like it deserved to be called “mindless”), but it’s really not very complicated, and it’s the best prediction method, in every way, by far.
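To give the flavor of what such a model looks like, here is a toy multinomial logistic regression with invented predictors and randomly generated outcomes (so its output probabilities will hover near one-third each); the variable names and data are made up, not Tetlock’s.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Made-up country-level predictors: e.g. GDP growth, inflation, unrest index, regime age.
X = rng.normal(size=(500, 4))
# Made-up historical outcomes: 0 = got worse, 1 = stayed the same, 2 = got better.
y = rng.integers(0, 3, size=500)

# "Take all the variables you can find, shove them into a logit model, output the result."
model = LogisticRegression(max_iter=1000).fit(X, y)

new_case = rng.normal(size=(1, 4))
print(model.predict_proba(new_case))   # one probability per outcome, e.g. roughly [[0.32, 0.35, 0.33]]
```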

This has made me feel quite vindicated about a weird nerd thing I do: When I have a big decision to make (especially a financial decision), I create a spreadsheet and assemble a linear utility model to determine which choice will maximize my utility, under different parameterizations based on my past experiences. Whichever result seems to win the most robustly, I choose. This is fundamentally similar to the “formal models” prediction method, where the thing I’m trying to predict is my own happiness. (It’s a bit less formal, actually, since I don’t have detailed happiness data to feed into the regression.) And it has worked for me, astonishingly well. It definitely beats going by my own gut. I highly recommend it.
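For what it’s worth, here’s roughly what that spreadsheet exercise looks like translated into code; the options, attributes, scores, and weightings below are invented purely for illustration.

```python
# Hypothetical decision: three housing options scored on a few attributes (0-10 scales).
options = {
    "Apartment A": {"cost": 7, "commute": 8, "space": 4, "neighborhood": 6},
    "Apartment B": {"cost": 5, "commute": 5, "space": 7, "neighborhood": 7},
    "House C":     {"cost": 3, "commute": 4, "space": 9, "neighborhood": 8},
}
attributes = ["cost", "commute", "space", "neighborhood"]

# Several plausible weightings ("parameterizations") of how much each attribute matters to me.
weightings = [
    {"cost": 0.40, "commute": 0.30, "space": 0.20, "neighborhood": 0.10},
    {"cost": 0.25, "commute": 0.25, "space": 0.25, "neighborhood": 0.25},
    {"cost": 0.20, "commute": 0.20, "space": 0.30, "neighborhood": 0.30},
]

# Linear utility = weighted sum of attribute scores; count wins across weightings.
wins = {name: 0 for name in options}
for w in weightings:
    utilities = {name: sum(w[a] * scores[a] for a in attributes)
                 for name, scores in options.items()}
    wins[max(utilities, key=utilities.get)] += 1

print(wins)  # whichever option wins under the most weightings is the robust choice
```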

What does this mean?

Well, first of all, it means humans suck at predicting things. At least for this data set, even our experts don’t perform substantially better than mindless models like “always assume the base rate”.

Nor do experts perform much better in their own fields than in other fields, though they do all perform better than undergrads or random people (who somehow perform worse than the “mindless” models).

But Tetlock also investigates further, trying to better understand this “fox/hedgehog” distinction and why it yields different performance. He really bends over backwards to try to redeem the hedgehogs, in the following ways:

  1. He allows them to make post-hoc corrections to their scores, based on “value adjustments” (assigning higher probability to events that would be really important) and “difficulty adjustments” (assigning higher scores to questions where the three outcomes were close to equally probable) and “fuzzy sets” (giving some leeway on things that almost happened or things that might still happen later).
  2. He demonstrates a different, related experiment, in which certain manipulations can cause foxes to perform a lot worse than they normally would, and even yield really crazy results like probabilities that add up to 200%.
  3. He has a whole chapter that is a Socratic dialogue (seriously!) between four voices: A “hardline neopositivist”, a “moderate neopositivist”, a “reasonable relativist”, and an “unrelenting relativist”; and all but the “hardline neopositivist” agree that there is some legitimate place for the sort of post hoc corrections that the hedgehogs make to keep themselves from looking so bad.

This post is already getting a bit long, so that will conclude part I. Stay tuned for part II, next week!

Bayesian updating with irrational belief change

Jul 27 JDN 2460884

For the last few weeks I’ve been working at a golf course. (It’s a bit of an odd situation: I’m not actually employed by the golf course; I’m contracted by a nonprofit to be a “job coach” for a group of youths who are part of a work program that involves them working at the golf course.)

I hate golf. I have always hated golf. I find it boring and pointless—which, to be fair, is my reaction to most sports—and also an enormous waste of land and water. A golf course is also a great place for oligarchs to arrange collusion.

But I noticed something about being on the golf course every day, seeing people playing and working there: I feel like I hate it a bit less now.

This is almost certainly a mere-exposure effect: Simply being exposed to something many times makes it feel familiar, and that tends to make you like it more, or at least dislike it less. (There are some exceptions: repeated exposure to trauma can actually make you more sensitive to it, hating it even more.)

I kinda thought this would happen. I didn’t really want it to happen, but I thought it would.

This is very interesting from the perspective of Bayesian reasoning, because it is a theorem (though I cannot seem to find anyone naming the theorem; it’s like a folk theorem, I guess?) of Bayesian logic that the following is true:

The prior expectation of the posterior is the expectation of the prior.

The prior is what you believe before observing the evidence; the posterior is what you believe afterward. This theorem describes a relationship that holds between them.

This theorem means that, if I am being optimally rational, I should take into account all expected future evidence, not just evidence I have already seen. I should not expect to encounter evidence that will change my beliefs—if I did expect to see such evidence, I should change my beliefs right now!
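In symbols, this is just the law of total expectation applied to your posterior: if θ is the quantity (or the probability of the hypothesis) you are uncertain about and E is the evidence you expect to observe, then

$$\mathbb{E}\big[\,\mathbb{E}[\theta \mid E]\,\big] = \mathbb{E}[\theta],$$

where the outer expectation is taken over the possible evidence, weighted by how likely your current (prior) beliefs say each piece of evidence is.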

This might be easier to grasp with an example.

Suppose I am trying to predict whether it will rain at 5:00 pm tomorrow, and I currently estimate that the probability of rain is 30%. This is my prior probability.

What will actually happen tomorrow is that it will rain or it won’t; so my posterior probability will either be 100% (if it rains) or 0% (if it doesn’t). But I had better assign a 30% chance to the event that will make me 100% certain it rains (namely, I see rain), and a 70% chance to the event that will make me 100% certain it doesn’t rain (namely, I see no rain); if I were to assign any other probabilities, then I must not really think the probability of rain at 5:00 pm tomorrow is 30%.

(The keen Bayesian will notice that the expected variance of my posterior need not be the variance of my prior: My initial variance is relatively high (it’s actually 0.3*0.7 = 0.21, because this is a Bernoulli distribution), because I don’t know whether it will rain or not; but my posterior variance will be 0, because I’ll know the answer once it rains or doesn’t.)
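Spelled out for the rain example:

$$\mathbb{E}[\text{posterior}] = 0.3 \times 100\% + 0.7 \times 0\% = 30\% = \text{prior},$$

while the variance drops from $0.3 \times 0.7 = 0.21$ beforehand to $0$ once I see whether it rains.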

It’s a bit trickier to analyze, but this also works even if the evidence won’t make me certain. Suppose I am trying to determine the probability that some hypothesis is true. If I expect to see any evidence that might change my beliefs at all, then I should, on average, expect to see just as much evidence making me believe the hypothesis more as I see evidence that will make me believe the hypothesis less. If that is not what I expect, I should really change how much I believe the hypothesis right now!

So what does this mean for the golf example?

Was I wrong to hate golf quite so much before, because I knew that spending time on a golf course might make me hate it less?

I don’t think so.

See, the thing is: I know I’m not perfectly rational.

If I were indeed perfectly rational, then anything I expect to change my beliefs is a rational Bayesian update, and I should indeed factor it into my prior beliefs.

But if I know for a fact that I am not perfectly rational, that there are things which will change my beliefs in ways that make them deviate from rational Bayesian updating, then in fact I should not take those expected belief changes into account in my prior beliefs—since I expect to be wrong later, updating on that would just make me wrong now as well. I should only update on the expected belief changes that I believe will be rational.

This is something that a boundedly-rational person should do that neither a perfectly-rational nor perfectly-irrational person would ever do!

But maybe you don’t find the golf example convincing. Maybe you think I shouldn’t hate golf so much, and it’s not irrational for me to change my beliefs in that direction.


Very well. Let me give you a thought experiment which provides a very clear example of a time when you definitely would think your belief change was irrational.


To be clear, I’m not suggesting the two situations are in any way comparable; the golf thing is pretty minor, and for the thought experiment I’m intentionally choosing something quite extreme.

Here’s the thought experiment.

A mad scientist offers you a deal: Take this pill and you will receive $50 million. Naturally, you ask what the catch is. The catch, he explains, is that taking the pill will make you staunchly believe that the Holocaust didn’t happen. Take this pill, and you’ll be rich, but you’ll become a Holocaust denier. (I have no idea if making such a pill is even possible, but it’s a thought experiment, so bear with me. It’s certainly far less implausible than Swampman.)

I will assume that you are not, and do not want to become, a Holocaust denier. (If not, I really don’t know what else to say to you right now. It happened.) So if you take this pill, your beliefs will change in a clearly irrational way.

But I still think it’s probably justifiable to take the pill. This is absolutely life-changing money, for one thing, and being a random person who is a Holocaust denier isn’t that bad in the scheme of things. (Maybe it would be worse if you were in a position to have some kind of major impact on policy.) In fact, before taking the pill, you could write out a contract with a trusted friend that will force you to donate some of the $50 million to high-impact charities—and perhaps some of it to organizations that specifically fight Holocaust denial—thus ensuring that the net benefit to humanity is positive. Once you take the pill, you may be mad about the contract, but you’ll still have to follow it, and the net benefit to humanity will still be positive as reckoned by your prior, more correct, self.

It’s certainly not irrational to take the pill. There are perfectly-reasonable preferences you could have (indeed, likely do have) that would say that getting $50 million is more important than having incorrect beliefs about a major historical event.

And if it’s rational to take the pill, and you intend to take the pill, then of course it’s rational to believe that in the future, you will have taken the pill and you will become a Holocaust denier.

But it would be absolutely irrational for you to become a Holocaust denier right now because of that. The pill isn’t going to provide evidence that the Holocaust didn’t happen (for no such evidence exists); it’s just going to alter your brain chemistry in such a way as to make you believe that the Holocaust didn’t happen.

So here we have a clear example where you expect to be more wrong in the future.

Of course, if this really only happens in weird thought experiments about mad scientists, then it doesn’t really matter very much. But I contend it happens in reality all the time:

  • You know that by hanging around people with an extremist ideology, you’re likely to adopt some of that ideology, even if you really didn’t want to.
  • You know that if you experience a traumatic event, it is likely to make you anxious and fearful in the future, even when you have little reason to be.
  • You know that if you have a mental illness, you’re likely to form harmful, irrational beliefs about yourself and others whenever you have an episode of that mental illness.

Now, all of these belief changes are things you would likely try to guard against: If you are a researcher studying extremists, you might make a point of taking frequent vacations to talk with regular people and help yourself re-calibrate your beliefs back to normal. Nobody wants to experience trauma, and if you do, you’ll likely seek out therapy or other support to help heal yourself from that trauma. And one of the most important things they teach you in cognitive-behavioral therapy is how to challenge and modify harmful, irrational beliefs when they are triggered by your mental illness.

But these guarding actions only make sense precisely because the anticipated belief change is irrational. If you anticipate a rational change in your beliefs, you shouldn’t try to guard against it; you should factor it into what you already believe.

This also gives me a little more sympathy for Evangelical Christians who try to keep their children from being exposed to secular viewpoints. I think we both agree that having more contact with atheists will make their children more likely to become atheists—but we view this expected outcome differently.

From my perspective, this is a rational change, and it’s a good thing, and I wish they’d factor it into their current beliefs already. (Like hey, maybe if talking to a bunch of smart people and reading a bunch of books on science and philosophy makes you think there’s no God… that might be because… there’s no God?)

But I think, from their perspective, this is an irrational change, it’s a bad thing, the children have been “tempted by Satan” or something, and thus it is their duty to protect their children from this harmful change.

Of course, I am not a subjectivist. I believe there’s a right answer here, and in this case I’m pretty sure it’s mine. (Wouldn’t I always say that? No, not necessarily; there are lots of matters for which I believe that there are experts who know better than I do—that’s what experts are for, really—and thus if I find myself disagreeing with those experts, I try to educate myself more and update my beliefs toward theirs, rather than just assuming they’re wrong. I will admit, however, that a lot of people don’t seem to do this!)

But this does change how I might tend to approach the situation of exposing their children to secular viewpoints. I now understand better why they would see that exposure as a harmful thing, and thus be resistant to actions that otherwise seem obviously beneficial, like teaching kids science and encouraging them to read books. In order to get them to stop “protecting” their kids from the free exchange of ideas, I might first need to persuade them that introducing some doubt into their children’s minds about God isn’t such a terrible thing. That sounds really hard, but it at least clearly explains why they are willing to fight so hard against things that, from my perspective, seem good. (I could also try to convince them that exposure to secular viewpoints won’t make their kids doubt God, but the thing is… that isn’t true. I’d be lying.)

That is, Evangelical Christians are not simply incomprehensibly evil authoritarians who hate truth and knowledge; they quite reasonably want to protect their children from things that will harm them, and they firmly believe that being taught about evolution and the Big Bang will make their children more likely to suffer great harm—indeed, the greatest harm imaginable, the horror of an eternity in Hell. Convincing them that this is not the case—indeed, ideally, that there is no such place as Hell—sounds like a very tall order; but I can at least more keenly grasp the equilibrium they’ve found themselves in, where they believe that anything that challenges their current beliefs poses a literally existential threat. (Honestly, as a memetic adaptation, this is brilliant. Like a turtle, the meme has grown itself a nigh-impenetrable shell. No wonder it has managed to spread throughout the world.)

Universal human rights are more radical than is commonly supposed

Jul 13 JDN 2460870

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

So begins the second paragraph of the Declaration of Independence. It had to have been obvious to many people, even at the time, how incredibly hypocritical it was for men to sign that document and then go home to give orders to their slaves.

And today, even though the Universal Declaration of Human Rights was signed over 75 years ago, there are still human rights violations ongoing in many different countries—including right here in the United States.

Why is it so easy to get people to declare that they believe in universal human rights—but so hard to get them to actually act accordingly?

Other moral issues are not like this. While hypocrisy certainly exists in many forms, for the most part people’s moral claims align with their behavior. Most people say they are against murder—and sure enough, most people aren’t murderers. Most people say they are against theft—and indeed, most people don’t steal very often. And when it comes to things that most people do all the time, most people aren’t morally opposed to them—even things like eating meat, against which there is a pretty compelling moral case.

But universal human rights seems like something that is far more honored in the breach than the observance.

I think this is because most people don’t quite grasp just how radical universal human rights really are.

The tricky part is the universal. They are supposed to apply to everyone.

Even those people. Even the people you are thinking of right now as an exception. Even the people you hate the most. Yes, even them.

Depending on who you are, you might be thinking of different exceptions: People of a particular race, or religion, or nationality, perhaps; or criminals, or terrorists; or bigots, or fascists. But almost everyone has some group of people that they don’t really think deserves the full array of human rights.

So I am here to tell you that, yes, those people too. Universal human rights means everyone.

No exceptions.

This doesn’t mean that we aren’t allowed to arrest and imprison people for crimes. It doesn’t even mean that we aren’t sometimes justified in killing people—e.g. in war or self-defense. But it does mean that there is no one, absolutely no one, who is considered beneath human dignity. Any time we are to deprive someone of life or liberty, we must do so with absolute respect for their fundamental rights.

This also means that there is no one you should be spitting on, no one you should be torturing, no one you should be calling dehumanizing names. Sometimes violence is necessary, to protect yourself, or to preserve liberty, or to overthrow tyranny. But yes, even psychopathic tyrants are human beings, and still deserve human rights. If you cannot recognize a person’s humanity while still defending yourself against them, you need to do some serious soul-searching and ask yourself why not.

I think what happens is that when most people are asked about “universal human rights”, they essentially exclude whoever they think doesn’t deserve rights from the very category of “human”. Then it essentially becomes a tautology: Everyone who deserves rights deserves rights.

And thus, everyone signs onto it—but it ends up meaning almost nothing. It doesn’t stop racism, or sexism, or police brutality, or mass incarceration, or rape, or torture, or genocide, because the people doing those things don’t think of the people they’re doing them to as actually human.

But no, the actual declaration says all human beings. Everyone. Even the people you hate. Even the people who hate you. Even people who want to torture and kill you. Yes, even them.

This is an incredibly radical idea.

It is frankly alien to a brain that evolved for tribalism; we are wired to think of the world in terms of in-groups and out-groups, and universal human rights effectively declare that everyone is in the in-group and the out-group doesn’t exist.

Indeed, perhaps too radical! I think a reasonable defense could be made of a view that some people (psychopathic tyrants?) really are just so evil that they don’t actually deserve basic human dignity. But I will say this: Usually the people arguing that some group of humans aren’t really humans end up being on the wrong side of history.

The one possible exception I can think of here is abortion: The people arguing that fetuses are not human beings and it should be permissible to kill them when necessary are, at least in my view, generally on the right side of history. But even then, I tend to be much more sympathetic to the view that abortion, like war and self-defense, should be seen as a tragically necessary evil, not an inherent good. The ideal scenario would be to never need it, and allowing it when it’s needed is simply a second-best solution. So I think we can actually still fit this into a view that fetuses are morally important and deserving of dignity; it’s just that sometimes the rights of one being can outweigh the rights of another.

And other than that, yeah, it’s pretty much the case that the people who want to justify enacting some terrible harm on some group of people because they say those people aren’t really people, end up being the ones that, sooner or later, the world recognizes as the bad guys.

So think about that, if there is still some group of human beings that you think of as not really human beings, not really deserving of universal human rights. Will history vindicate you—or condemn you?

Quantifying stereotypes

Jul 6 JDN 2460863

There are a lot of stereotypes in the world, from the relatively innocuous (“teenagers are rebellious”) to the extremely harmful (“Black people are criminals”).

Most stereotypes are not true.

But most stereotypes are not exactly false, either.

Here’s a list of forty stereotypes, all but one of which I got from this list of stereotypes:

(Can you guess which one? I’ll give you a hint: It’s a group I belong to and a stereotype I’ve experienced firsthand.)

  1. “Children are always noisy and misbehaving.”
  2. “Kids can’t understand complex concepts.”
  3. “Children are tech-savvy.”
  4. “Teenagers are always rebellious.”
  5. “Teenagers are addicted to social media.”
  6. “Adolescents are irresponsible and careless.”
  7. “Adults are always busy and stressed.”
  8. “Adults are responsible.”
  9. “Adults are not adept at using modern technologies.”
  10. “Elderly individuals are always grumpy.”
  11. “Old people can’t learn new skills, especially related to technology.”
  12. “The elderly are always frail and dependent on others.”
  13. “Women are emotionally more expressive and sensitive than men.”
  14. “Females are not as good at math or science as males.”
  15. “Women are nurturing, caring, and focused on family and home.”
  16. “Females are not as assertive or competitive as men.”
  17. “Men do not cry or express emotions openly.”
  18. “Males are inherently better at physical activities and sports.”
  19. “Men are strong, independent, and the primary breadwinners.”
  20. “Males are not as good at multitasking as females.”
  21. “African Americans are good at sports.”
  22. “African Americans are inherently aggressive or violent.”
  23. “Black individuals have a natural talent for music and dance.”
  24. “Asians are highly intelligent, especially in math and science.”
  25. “Asian individuals are inherently submissive or docile.”
  26. “Asians know martial arts.”
  27. “Latinos are uneducated.”
  28. “Hispanic individuals are undocumented immigrants.”
  29. “Latinos are inherently passionate and hot-tempered.”
  30. “Middle Easterners are terrorists.”
  31. “Middle Eastern women are oppressed.”
  32. “Middle Eastern individuals are inherently violent or aggressive.”
  33. “White people are privileged and unacquainted with hardship.”
  34. “White people are racist.”
  35. “White individuals lack rhythm in music or dance.”
  36. “Gay men are excessively flamboyant.”
  37. “Gay men have lisps.”
  38. “Lesbians are masculine.”
  39. “Bisexuals are promiscuous.”
  40. “Trans people get gender-reassignment surgery.”

If you view the above 40 statements as absolute statements about everyone in the category (the first-order operator “for all”), they are obviously false; there are clear counter-examples to every single one. If you view them as merely saying that there are examples of each (the first-order operator “there exists”), they are obviously true, but also utterly trivial, as you could just as easily find examples from other groups.

But I think there’s a third way to read them, which may be more what most people actually have in mind. Indeed, it kinda seems uncharitable not to read them this third way.

That way is:

“This is more true of the group I’m talking about than it is true of other groups.”

And that is not only a claim that can be true, it is a claim that can be quantified.

Recall my new favorite effect size measure, which I like because it’s so simple and intuitive. I’m not much for the official name, probability of superiority (especially in this context!), so I’m gonna call it the more down-to-earth chance of being higher.

It is exactly what it sounds like: If you compare a quantity X between a randomly-selected person from group A and a randomly-selected person from group B, what is the chance that the person from group A has the higher value of X?

Let’s start at the top: If you take one randomly-selected child, and one randomly-selected adult, what is the chance that the child is one who is more prone to being noisy and misbehaving?

Probably pretty high.

Or let’s take number 13: If you take one randomly-selected woman and one randomly-selected man, what is the chance that the woman is the more emotionally expressive one?

Definitely more than half.

Or how about number 27: If you take one randomly-selected Latino and one randomly-selected non-Latino (especially if you choose a White or Asian person), what is the chance that the Latino is the less-educated one?

That one I can do fairly precisely: Since 95% of White Americans have completed high school but only 75% of Latino Americans have, while 28% of Whites have a bachelor’s degree and only 21% of Latinos do, the probability of the White person being at least as educated as the Latino person is about 82%.
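If you want to check that 82% figure, here is the arithmetic using the rates just quoted; lumping education into three coarse levels (less than high school, high school only, bachelor’s or more) is my simplification.

```python
# Education levels: 0 = less than high school, 1 = high school only, 2 = bachelor's or more.
# Distributions built from the figures quoted above (95%/28% for Whites, 75%/21% for Latinos).
white  = {0: 1 - 0.95, 1: 0.95 - 0.28, 2: 0.28}
latino = {0: 1 - 0.75, 1: 0.75 - 0.21, 2: 0.21}

# Chance that a randomly-selected White person is at least as educated
# as a randomly-selected Latino person.
p_at_least = sum(white[w] * latino[l]
                 for w in white
                 for l in latino
                 if w >= l)
print(round(p_at_least, 2))   # 0.82
```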

I don’t know the exact figures for all of these, and I didn’t want to spend all day researching 40 different stereotypes, but I am quite prepared to believe that at least all of the following exhibit a chance of being higher that is over 50%:

1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 15, 16, 17, 18, 19, 21, 24, 26, 27, 28, 29, 30, 31, 33, 34, 36, 37, 38, 40.

You may have noticed that that’s… most of them. I had to shrink the font a little to fit them all on one line.

I think 30 is an important one to mention, because while terrorists are a tiny proportion of the Middle Eastern population, they are in fact a much larger proportion of that population than they are of most other populations, and it doesn’t take that many terrorists to make a place dangerous. The Middle East is objectively a more dangerous place for terrorism than most other places; only India and sub-Saharan Africa come close (and terrorism in both of those regions is also largely driven by Islamist groups). So while it’s bigoted to assume that any given Muslim or Middle Easterner is a terrorist, it is an objective fact that a disproportionate share of terrorists are Middle Eastern Muslims. Part of what I’m trying to do here is get people to more clearly distinguish between those two concepts, because one is true and the other is very, very false.

40 also deserves particular note, because the chance of being higher is almost certainly very close to 100%. While most trans people don’t get gender-reassignment surgery, virtually all people who get gender-reassignment surgery are trans.

Then again, you could see this as a limitation of the measure, since we might expect a 100% score to mean “it’s true of everyone in the group”, when here it simply means “if we ask people whether they have had gender-reassignment surgery, the trans people sometimes say yes and the cis people always say no.”


We could talk about a weak or strict chance of being higher: The weak chance is the chance of being greater than or equal to (which is the normal measure), while the strict chance is the chance of being strictly greater. In this case, the weak chance is nearly 100%, while the strict chance is hard to estimate but probably about 33% based on surveys.
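Here is a minimal sketch of both versions, computed by brute force over all pairs from two samples; the data are made up to mirror the discrete yes/no case described above.

```python
import numpy as np

def chance_of_being_higher(a, b):
    """Weak and strict 'chance of being higher' for group A vs. group B:
    weak = P(A >= B), strict = P(A > B), over all pairs from the two samples."""
    a = np.asarray(a)[:, None]        # column vector
    b = np.asarray(b)[None, :]        # row vector
    weak   = float(np.mean(a >= b))   # compares every pair (a_i, b_j)
    strict = float(np.mean(a > b))
    return weak, strict

# Made-up discrete trait (0 = no, 1 = yes): group A sometimes says yes, group B never does.
group_a = np.array([1, 0, 0, 1, 0, 0, 0, 0, 0, 1])
group_b = np.zeros(10, dtype=int)
print(chance_of_being_higher(group_a, group_b))   # (1.0, 0.3): weak 100%, strict 30%
```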

This doesn’t mean that all stereotypes have some validity.

There are some stereotypes here, including a few pretty harmful ones, for which I’m not sure how the statistics would actually shake out:
10, 14, 22, 23, 25, 32, 35, 39

But I think we should be honestly prepared for the possibility that maybe there is some statistical validity to some of these stereotypes too, and instead of simply dismissing the stereotypes as false—or even bigoted—we should instead be trying to determine how true they are, and also look at why they might have some truth to them.

My proposal is to use the chance of being higher as a measure of the truth of a stereotype.

A stereotype is completely true if it has a chance of being higher of 100%.

It is completely false if it has a chance of being higher of 50%.

And it is completely backwards if it has a chance of being higher of 0%.

There is a unique affine transformation that does this: 2X-1.

100% maps to 100%, 50% maps to 0%, and 0% maps to -100%.

With discrete outcomes, the difference between the weak and strict chance of being higher becomes very important. With a discrete outcome, you can have a 100% weak chance but a 1% strict chance, and honestly I’m really not sure whether we should say that stereotype is true or not.

For example, for the claim “trans men get bottom surgery”, the figures would be 100% and 6% respectively. The vast majority of trans men don’t get bottom surgery—but cis men almost never do. (Unless I count penis enlargement surgery? Then the numbers might be closer than you’d think, at least in the US where the vast majority of such surgery is performed.)

And for the claim “Middle Eastern Muslims are terrorists”, well, given two random people of whatever ethnicity or religion, they’re almost certainly not terrorists—but if one of them is, it’s probably the Middle Eastern Muslim. It may be better in this case to talk about the conditional chance of being higher: If you have two random people, you know that one is a terrorist and one isn’t, and one is a Middle Eastern Muslim and one isn’t, how likely is it that the Middle Eastern Muslim is the terrorist? Probably about 80%. Definitely more than 50%, but also not 100%. So that’s the sense in which the stereotype has some validity. It’s still the case that 99.999% of Middle Eastern Muslims aren’t terrorists, and so it remains bigoted to treat every Middle Eastern Muslim you meet like a terrorist.

We could also work harder to more clearly distinguish between “Middle Easterners are terrorists” and “terrorists are Middle Easterners”; the former is really not true (99.999% are not), but the latter kinda is (the plurality of the world’s terrorists are in the Middle East).

Alternatively, for discrete traits we could just report all four probabilities, which would be something like this: 99.999% of Middle Eastern Muslims are not terrorists, and 0.001% are; 99.9998% of other Americans are not terrorists, and 0.0002% are. Compared to Muslim terrorists in the US, White terrorists actually are responsible for more attacks and a similar number of deaths, but largely because there just are a lot more White people in America.
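Using those illustrative rates, the conditional chance of being higher from a few paragraphs back works out to roughly

$$P(\text{the Middle Eastern Muslim is the terrorist} \mid \text{exactly one of the two is}) = \frac{0.00001 \times 0.999998}{0.00001 \times 0.999998 + 0.000002 \times 0.99999} \approx 83\%,$$

which is in the ballpark of the “about 80%” estimate above, even though both people are almost certainly not terrorists.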

These issues mainly arise when a trait is discrete. When the trait is itself quantitative (like rebelliousness, or math test scores), this is less of a problem, and the weak and strict chances of being higher are generally more or less the same.


So instead of asking whether a stereotype is true, we could ask: How true is it?

Using measures like this, we will find that some stereotypes probably have quite high truth levels, like 1 and 4; but others, if they are true at all, must have quite low truth levels, like 14 (if there’s a difference there at all, it’s a small one!).

The lower a stereotype’s truth level, the less useful it is; indeed, by this measure, it directly predicts how accurate you’d be at guessing someone’s score on the trait if you knew only the group they belong to. If you couldn’t really predict, then why are you using the stereotype? Get rid of it.

Moreover, some stereotypes are clearly more harmful than others.

Even if it is statistically valid to say that Black people are more likely to commit crimes in the US than White people (it is), the kind of person who goes around saying “Black people are criminals” is (1) smearing all Black people with the behavior of a minority of them, and (2) likely to be racist in other ways. So we have good reason to be suspicious of people who say such things, even if there may be a statistical kernel of truth to their claims.

But we might still want to be a little more charitable, a little more forgiving, when people express stereotypes. They may make what sounds like a blanket absolute “for all” statement, but actually intend something much milder—something that might actually be true. They might not clearly grasp the distinction between “Middle Easterners are terrorists” and “terrorists are Middle Easterners”, and instead of denouncing them as a bigot immediately, you could try taking the time to listen to what they are saying and carefully explain what’s wrong with it.

Failing to be charitable like this—as we so often do—often feels to people like we are dismissing their lived experience. All the terrorists they can think of were Middle Eastern! All of the folks they know with a lisp turned out to be gay! Lived experience is ultimately anecdotal, but it still has a powerful effect on how people think (too powerful—see also availability heuristic), and it’s really not surprising that people would feel we are treating them unjustly if we immediately accuse them of bigotry simply for stating things that, based on their own experience, seem to be true.

I think there’s another harm here as well, which is that we damage our own credibility. If I believe that something is true and you tell me that I’m a bad person for believing it, that doesn’t make me not believe it—it makes me not trust you. You’ve presented yourself as the sort of person who wants to cover up the truth when it doesn’t fit your narrative. If you wanted to actually convince me that my belief is wrong, you could present evidence that might do that. (To be fair, this doesn’t always work; but sometimes it does!) But if you just jump straight to attacking my character, I don’t want to talk to you anymore.

What does nonviolence mean?

Jun 15 JDN 2460842

As I write this, the LA protests and the crackdown upon them have continued since Friday and it is now Wednesday. In a radical and authoritarian move by Trump, Marines have been deployed (with shockingly incompetent logistics unbefitting the usually highly-efficient US military); but so far they have done very little. Reuters has been posting live updates on new developments.

The LAPD has deployed a variety of less-lethal weapons to disperse the protests, including rubber bullets, tear gas, and pepper balls; but so far they have not used lethal force. Protesters have been arrested, some for specific crimes—and others simply for violating curfew.

More recently, the protests have spread to other cities, including New York, Atlanta, Austin, Chicago, San Francisco, and Philadelphia. By the time this post goes live, there will probably be even more cities involved, and there may also be more escalation.

But for now, at least, the protests have been largely nonviolent.

And I thought it would be worthwhile to make it very clear what I mean by that, and why it is important.

I keep seeing a lot of leftist people on social media accepting the narrative that these protests are violent, but actively encouraging that; and some of them have taken to arrogantly accusing anyone who supports nonviolent protests over violent ones of either being naive idiots or acting in bad faith. (The most baffling part of this is that they seem to be saying that Martin Luther King and Mahatma Gandhi were naive idiots or were acting in bad faith? Is that what they meant to say?)

First of all, let me be absolutely clear that nonviolence does not mean comfortable or polite or convenient.

Anyone objecting to blocking traffic, strikes, or civil disobedience because they cause disorder and inconvenience genuinely does not understand the purpose of protest (or is a naive idiot or acting in bad faith). Effective protests are disruptive and controversial. They cause disorder.

Nonviolence does not mean always obeying the law.

Sometimes the law is itself unjust, and must be actively disobeyed. Most of the Holocaust was legal, after all.

Other times, it is necessary to break some laws (such as property laws, curfews, and laws against vandalism) in the service of higher goals.

I wouldn’t say that a law against vandalism is inherently unjust; but I would say that spray-painting walls and vehicles in the service of protecting human rights is absolutely justified, and even sometimes it’s necessary to break some windows or set some fires.

Nonviolence does not mean that nobody tries to call it violence.

Most governments are well aware that most of their citizens are much more willing to support a nonviolent movement than a violent one—more on this later—and thus will do whatever they can to characterize nonviolent movements as violent. They have two chief strategies for doing so:

  1. Characterize nonviolent but illegal acts, such as vandalism and destruction of property, as violence
  2. Actively try to instigate violence by treating nonviolent protesters as if they were violent, and then characterizing their attempts at self-defense as violence

As a great example of the latter, a man in Phoenix was arrested for assault because he kicked a tear gas canister back at police. But kicking back a canister that was shot at you is the most paradigmatic example of self-defense I could possibly imagine. If the system weren’t so heavily biased in favor of the police, a judge would order his release immediately.

Nonviolence does not mean that no one at the protests gets violent.

Any large group of people will contain outliers. Gather a protest of thousands of people, and surely some fraction of them will be violent radicals, or just psychopaths looking for an excuse to hurt someone. A nonviolent protest is one in which most people are nonviolent, and in which anyone who does get violent is shunned by the organizers of the movement.

Nonviolence doesn’t mean that violence will never be used against you.

On the contrary, the more authoritarian the regime—and thus the more justified your protest—the more likely it is that violent force will be used to suppress your nonviolent protests.

In some places it will be limited to less-lethal means (as it has so far in the current protests); but in others, even in ostensibly-democratic countries, it can result in lethal force being deployed against innocent people (as it did at Kent State in 1970).

When this happens, are you supposed to just stand there and get shot?

Honestly? Yes. I know that requires tremendous courage and self-sacrifice, but yes.

I’m not going to fault anyone for running or hiding or even trying to fight back (I’d be more of the “run” persuasion myself), but the most heroic action you could possibly take in that situation is in fact to stand there and get shot. Becoming a martyr is a terrible sacrifice, and I’m not sure it’s one I myself could ever make; but it really, really works. (Seriously, whole religions have been based on this!)

And when you get shot, for the love of all that is good in the world, make sure someone gets it on video.

The best thing you can do for your movement is to show the oppressors for what they truly are. If they are willing to shoot unarmed innocent people, and the world finds out about that, the world will turn against them. The more peaceful and nonviolent you can appear at the moment they shoot you, the more compelling that video will be when it is all over the news tomorrow.

A shockingly large number of social movements have pivoted sharply in public opinion after a widely-publicized martyrdom incident. If you show up peacefully to speak your minds and they shoot you, that is nonviolent protest working. That is your protest being effective.

I never said that nonviolent protest was easy or safe.

What is the core of nonviolence?

It’s really very simple. So simple, honestly, that I don’t understand why it’s hard to get across to people:

Nonviolence means you don’t initiate bodily harm against other human beings.

It does not necessarily preclude self-defense, so long as that self-defense is reasonable and proportionate; and it certainly does not in any way preclude breaking laws, damaging property, or disrupting civil order.


Nonviolence means you never throw the first punch.

Nonviolence is not simply a moral position, but a strategic one.

Some of the people you would be harming absolutely deserve it. I don’t believe in ACAB, but I do believe in SCAB, and nearly 30% of police officers are domestic abusers, who absolutely would deserve a good punch to the face. And this is all the more true of ICE officers, who aren’t just regular bastards; they are bastards whose core job is now enforcing the human rights violations of President Donald Trump. Kidnapping people with their unmarked uniforms and unmarked vehicles, ICE is basically the Gestapo.

But it’s still strategically very unwise for us to deploy violence. Why? Two reasons:

  1. Using violence is a sure-fire way to turn most Americans against our cause.
  2. We would probably lose.

Nonviolent protest is nearly twice as effective as violent insurrection. (If you take nothing else from this post, please take that.)

And the reason that nonviolent protest is so effective is that it changes minds.

Violence doesn’t do that; in fact, it tends to make people rally against you. Once you start killing people, even people who were on your side may start to oppose you—let alone anyone who was previously on the fence.

A successful violent revolution results in you having to build a government and enforce your own new laws against a population that largely still disagrees with you—and if you’re a revolution made of ACAB people, that sounds spectacularly difficult!

A successful nonviolent protest movement results in a country that agrees with you—and it’s extremely hard for even a very authoritarian regime to hang onto power when most of the people oppose it.

By contrast, the success rate of violent insurrections is not very high. Why?

Because they have all the guns, you idiot.

States try to maintain a monopoly on violence in their territory. They are usually pretty effective at doing so. Thus attacking a state when you are not a state puts you at a tremendous disadvantage.

Seriously; we are talking about the United States of America right now, the most powerful military hegemon the world has ever seen.

Maybe the people advocating violence don’t really understand this, but the US has not lost a major battle since 1945. Oh, yes, they’ve “lost wars”, but what that really means is that public opinion has swayed too far against the war for them to maintain morale (Vietnam) or their goals for state-building were so over-ambitious that they were basically impossible for anyone to achieve (Iraq and Afghanistan). If you tally up the actual number of soldiers killed, US troops always kill more than they lose, and typically by a very wide margin.


And even with the battles the US lost in WW1 and WW2, they still very much won the actual wars. So genuinely defeating the United States in open military conflict is not something that has happened since… I’m pretty sure the War of 1812.

Basically, advocating for a violent response to Trump is saying that you intend to do something that literally no one in the world—including major world military powers—has been able to accomplish in 200 years. The last time someone got close, the US nuked them.

If the protests in LA were genuinely the insurrectionists that Trump has been trying to characterize them as, those Marines would not only have been deployed, they would have started shooting. And I don’t know if you realize this, but US Marines are really good at shooting. It’s kind of their thing. Instead of skirmishes with rubber bullets and tear gas, we would have an absolute bloodbath. It would probably end up looking like the Tet Offensive, a battle where “unprepared” US forces “lost” because they lost 6,000 soldiers and “only” killed 45,000 in return. (The US military is so hegemonic that a kill ratio of more than 7 to 1 is considered a “loss” in the media and public opinion.)

Granted, winning a civil war is different from winning a conventional war; even if a civil war broke out, it’s unlikely that nukes would be used on American soil, for instance. But you’re still talking about a battle so uphill it’s more like trying to besiege Edinburgh Castle.

Our best hope in such a scenario, in fact, would probably be to get blue-state governments to assert control over US military forces in their own jurisdiction—which means that antagonizing Gavin Newsom, as I’ve been seeing quite a few leftists doing lately, seems like a really bad idea.

I’m not saying that winning a civil war would be completely impossible. Since we might be able to get blue-state governors to take control of forces in their own states and we would probably get support from Canada, France, and the United Kingdom, it wouldn’t be completely hopeless. But it would be extremely costly, millions of people would die, and victory would by no means be assured despite the overwhelming righteousness of our cause.

How about, for now at least, we stick to the methods that historically have proven twice as effective?

Patriotism for dark times

May 18 JDN 2460814

These are dark times indeed. ICE is now arresting people without warrants, uniforms or badges and detaining them in camps without lawyers or trials. That is, we now have secret police who are putting people in concentration camps. Don’t mince words here; these are not “arrests” or “deportations”, because those actions would require warrants and due process of law.

Fascism has arrived in America, and, just as predicted, it is indeed wrapped in the flag.

I don’t really have anything to say to console you about this. It’s absolutely horrific, and the endless parade of ever more insane acts and violations of civil rights under Trump’s regime has been seriously detrimental to my own mental health and that of nearly everyone I know.

But there is something I do want to say:

I believe the United States of America is worth saving.

I don’t think we need to burn it all down and start with something new. I think we actually had something pretty good here, and once Trump is finally gone and we manage to fix some of the tremendous damage he has done, I believe that we can put better safeguards in place to stop something like this from happening again.

Of course there are many, many ways that the United States could be made better—even before Trump took the reins and started wrecking everything. But when we consider what we might have had instead, the United States turns out looking a lot better than most of the alternatives.

Is the United States especially evil?

Every nation in the world has darkness in its history. The United States is assuredly no exception: Genocide against Native Americans, slavery, Jim Crow, and the Japanese internment to name a few. (I could easily name many more, but I think you get the point.) This country is certainly responsible for a great deal of evil.

But unlike a lot of people on the left, I don’t think the United States is uniquely or especially evil. In fact, I think we have quite compelling reasons to think that the United States overall has been especially good, and could be again.

How can I say such a thing about a country that has massacred natives, enslaved millions, and launched a staggering number of coups?

Well, here’s the thing:

Every country’s history is like that.

Some are better or worse than others, but it’s basically impossible to find a nation on Earth that hasn’t massacred, enslaved, or conquered another group—and often all three. I guess maybe some of the very youngest countries might count, those that were founded by overthrowing colonial rule within living memory. But certainly those regions and cultures all had similarly dark pasts.

So what actually makes the United States different?

What is distinctive about the United States, relative to other countries? It’s large, it’s wealthy, it’s powerful; that is certainly all true. But other nations and empires have been like that—Rome once was, and China has gained and lost such status multiple times throughout its long history.

Is it especially corrupt? No, its corruption ratings are on a par with other First World countries.

Is it especially unequal? Compared to the rest of the First World, certainly; but by world standards, not really. (The world is a very unequal place.)

But there are two things about the United States that really do seem unique.

The first is how the United States was founded.

Some countries just sort of organically emerged. They were originally tribes that lived in that area since time immemorial, and nobody really knows when they came about; they just sort of happened.

Most countries were created by conquering or overthrowing some other country. Usually one king wanted some territory that was held by another king, so he gathered an army and took over that territory and said it was his now. Or someone who wasn’t a king really wanted to become one, so he killed the current king and took his place on the throne.

And indeed, for most of history, most nations have been some variant of authoritarianism. Monarchy was probably the most common, but there were also various kinds of oligarchy, and sometimes military dictatorship. Even Athens, the oldest recorded “democracy”, was really an oligarchy of Greek male property owners. (Granted, the US also started out pretty much the same way.)

I’m glossing over a huge amount of variation and history here, of course. But what I really want to get at is just how special the founding of the United States was.

The United States of America was the first country on Earth to be designed.

Up until that point, countries just sort of emerged, or they governed however their kings wanted, or they sort of evolved over time as different interest groups jockeyed for control of the oligarchy.

But the Constitution of the United States was something fundamentally new. A bunch of very smart, well-read, well-educated people (okay, mostly White male property owners, with a few exceptions) gathered together to ask the bold question: “What is the best way to run a country?”

And they discussed and argued and debated over this, sometimes finding agreement, other times reaching awkward compromises that no one was really satisfied with. But when the dust finally settled, they had a blueprint for a better kind of nation. And then they built it.

This was a turning point in human history.

Since then, hundreds of constitutions have been written, and most nations on Earth have one of some sort (and many have gone through several). We now think of writing a constitution as what you do to make a country. But before the United States, it wasn’t! A king just took charge and did whatever he wanted! There were no rules; there was no document telling him what he could and couldn’t do.

Most countries for most of history really only had one rule:

L’Etat, c’est moi.

Yes, there was some precedent for a constitution, even going all the way back to the Magna Carta; but that wasn’t created when England was founded, it was foisted upon the king after England had already been around for centuries. And it was honestly still pretty limited in how it restricted the king.

Now, it turns out that the Founding Fathers made a lot of mistakes in designing the Constitution; but I think this is quite forgivable, for two reasons:

  1. They were doing this for the first time. Nobody had ever written a constitution before! Nobody had governed a democracy (even of the White male property-owner oligarchy sort) in centuries!
  2. They knew they would make mistakes—and they included in the Constitution itself a mechanism for amending it to correct those mistakes.

And amend it we have, 27 times so far, most importantly the Bill of Rights and the Fifteenth and Nineteenth Amendments, which together finally created true universal suffrage—a real democracy. And even in 1920, when the Nineteenth Amendment was ratified, universal suffrage was an extremely rare thing. Many countries had followed the example of the United States by then, but only a handful of them granted voting rights to women.

The United States really was a role model for modern democracy. It showed the world that a nation governed by its own people could be prosperous and powerful.

The second is how the United States expanded its influence.

Many have characterized the United States as an empire, because its influence is so strongly felt around the world. It is undeniably a hegemon, at least.

The US military is the world’s most powerful, accounting for by far the highest spending (more than the next 9 countries combined!) and 20 of the world’s 51 aircraft carriers (China has 5—and they’re much smaller). (The US military is arguably not the largest, since China has more soldiers and more ships. But US soldiers are much better trained and equipped, and the US Navy has far greater tonnage.) Most of the world’s currency exchange is done in dollars. Nearly all the world’s air traffic control is done in English. The English-language Internet is by far the largest, making up close to a majority of all pages by itself. Basically every computer in the world runs Windows, macOS, or Linux as its operating system—the first two created in the United States, the third dominated by American companies and contributors. And since the US attained its hegemony after World War 2, the world has enjoyed a long period of relative peace not seen in centuries, sometimes referred to as the Pax Americana. These all sound like characteristics of an empire.

Yet if it is an empire, the United States is a very unusual one.

Most empires are formed by conquest: Rome created an empire by conquering most of Europe and North Africa. Britain created an empire by colonizing and conquering natives all around the globe.

Yet aside from the Native Americans (which, I admit, is a big thing to discount) and a few other exceptions, the United States engaged in remarkably little conquest. Its influence is felt as surely across the globe as Britain’s was at the height of the British Empire, yet where under Britain all those countries were considered holdings of the Crown (until they all revolted), under the Pax Americana they all have their own autonomous governments, most of them democracies (albeit most of them significantly flawed—including the US itself, these days).

That is, the United States does not primarily spread its influence by conquering other nations. It primarily spreads its influence through diplomacy and trade. Its primary methods are peaceful and mutually-beneficial. And the world has become tremendously wealthier, more peaceful, and all around better off because of this.

Yes, there are some nuances here: The US certainly has engaged in a large number of coups intended to decide what sort of government other countries would have, especially in Latin America. Some of these coups were in favor of democratic governments, which might be justifiable; but many were in favor of authoritarian governments that were simply more capitalist, which is awful. (Then again, while the US was instrumental in supporting authoritarian capitalist regimes in Chile and South Korea, those two countries did ultimately turn into prosperous democracies—especially South Korea.)

So it still remains true that the United States is guilty of many horrible crimes; I’m not disputing that. What I’m saying is that if any other nation had been in its place, things would most likely have been worse. This is even true of Britain or France, which are close allies of the US and quite similar; both of these countries, when they had a chance at empire, took it by brutal force. Even Norway once had an empire built by conquest—though I’ll admit, that was a very long time ago.

I admit, it’s depressing that this is what a good nation looks like.

I think part of the reason why so many on the left imagine the United States to be uniquely evil is that they want to think that somewhere out there is a country that’s better than this, a country that doesn’t have staggering amounts of blood on its hands.

But no, this is pretty much as good as it gets. While there are a few countries with a legitimate claim to being better (mostly #ScandinaviaIsBetter), the vast majority of nations on Earth are not better than the United States; they are worse.

Humans have a long history of doing terrible things to other humans. Some say it’s in our nature. Others believe that it is the fault of culture or institutions. Likely both are true to some extent. But if you look closely into the history of just about anywhere on Earth, you will find violence and horror there.

What you won’t always find is a nation that marks a turning point toward global democracy, or a nation that establishes its global hegemony through peaceful and mutually-beneficial means. Those nations are few and far between, and indeed are best exemplified by the United States of America.

A knockdown proof of social preferences

Apr 27 JDN 2460793

In economics jargon, social preferences basically just means that people care about what happens to people other than themselves.

If you are not an economist, it should be utterly obvious that social preferences exist:

People generally care the most about their friends and family, less but still a lot about their neighbors and acquaintances, less but still moderately about other groups they belong to such as those delineated by race, gender, religion, and nationality (or for that matter alma mater), and less still but not zero about any randomly-selected human being. Most of us even care about the welfare of other animals, though we can be curiously selective about this: Abuse that would horrify most people if done to cats or dogs passes more or less ignored when it is committed against cows, pigs, and chickens.

For some people, there are also groups for which there seem to be negative social preferences, sometimes called “spiteful preferences”, but that doesn’t really seem to capture it: I think we need a stronger word, like hatred, for whatever emotion human beings feel when they are willing and eager to participate in genocide. Yet even that is still a social preference: If you want someone to suffer or die, you do care about what happens to them.

But if you are an economist, you’ll know that the very idea of social preferences remains controversial, even after it has been clearly and explicitly demonstrated by numerous randomized controlled experiments. (I will never forget the professor who put “altruism” in scare quotes in an email reply he sent me.)

Indeed, I have realized that the experimental evidence is so clear, so obvious, that it surprises me that I haven’t seen anyone present the really overwhelming knockdown evidence that ought to convince any reasonable skeptic. So that is what I have decided to do today.

Consider the following four economics experiments:

Dictator 1: Participant 1 chooses an allocation of $20, dividing it between themself and Participant 2. Whatever allocation Participant 1 chooses, Participant 2 must accept. Both participants get their allocated amounts.
Dictator 2: Participant 1 chooses an allocation of $20, choosing how much they get. Participant 1 gets their allocated amount. The rest of the money is burned.
Ultimatum 1: Participant 1 chooses an allocation of $20, dividing it between themself and Participant 2. Participant 2 may choose to accept or reject this allocation; if they accept, both participants get their allocated amounts. If they reject, both participants get nothing.
Ultimatum 2: Participant 1 chooses an allocation of $20, dividing it between themself and Participant 2. Participant 2 may choose to accept or reject this allocation; if they accept, both participants get their allocated amounts. If they reject, Participant 2 gets nothing, but Participant 1 still gets the allocated amount.

Dictator 1 and Ultimatum 1 are the standard forms of the Dictator Game and Ultimatum Game, which are experiments that have been conducted dozens if not hundreds of times and are the subject of a huge number of papers in experimental economics.

These experiments clearly demonstrate the existence of social preferences. But I think even most behavioral economists don’t quite seem to grasp just how compelling that evidence is.

This is because they have generally failed to compare against my other two experiments, Dictator 2 and Ultimatum 2.

If social preferences did not exist, Participant 1 would be completely indifferent about what happened to the money that they themself did not receive.

In that case, Dictator 1 and Dictator 2 should show the same result: Participant 1 chooses to get $20.

Likewise, Ultimatum 1 and Ultimatum 2 should show the same result: Participant 1 chooses to get $19, offering only $1 to Participant 2, and Participant 2 accepts. This is the outcome that is “rational” in the hyper-selfish neoclassical sense.

Much ink has already been spilled over the fact that these are not the typical outcomes of Dictator 1 and Ultimatum 1. Far more likely is that Participant 1 offers something close to $10, or even $10 exactly, in both games; and in Ultimatum 1, in the unlikely event that Participant 1 should offer only $1 or $2, Participant 2 will typically reject.

But what I’d like to point out today is that the “rational” neoclassical outcome is what would happen in Dictator 2 and Ultimatum 2, and that this is so obvious we probably don’t even need to run the experiments (but we might as well, just to be sure).

In Dictator 1, the money that Participant 1 doesn’t keep goes to Participant 2, and so they are deciding how to weigh their own interests against those of another. But in Dictator 2, Participant 1 is literally just deciding how much free money they will receive. The other money doesn’t go to anyone—not even back to the university conducting the experiment. It’s just burned. It provides benefit to no one. So the rational choice is in fact obvious: Take all of the free money. (Technically, burning money and thereby reducing the money supply would have a minuscule inflation-reducing effect across the entire economy. But even the full $20 would be several orders of magnitude too small for anyone to notice—and even a much larger amount like $10 billion would probably end up being offset by the actions of the Federal Reserve.)

Likewise, in both Ultimatum 1 and Ultimatum 2, the money that Participant 1 doesn’t keep will go to Participant 2. Their offer will thus probably be close to $10. But what I really want to focus in on is Participant 2’s choice: If they are offered only $1 or $2, will they accept? Neoclassical theory says that the “rational” choice is to accept it. But in Ultimatum 1, most people will reject it. Are they being irrational?

If they were simply being irrational—failing to maximize their own payoff—then they should reject just as often in Ultimatum 2. But I contend that they would in fact accept far more offers in Ultimatum 2 than they did in Ultimatum 1. Why? Because rejection doesn’t stop Participant 1 from getting what they demanded. There is no way to punish Participant 1 for an unfair offer in Ultimatum 2: It is literally just a question of whether you get $1 or $0.
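
To make that prediction concrete, here is a minimal sketch of Participant 2’s accept-or-reject choice under a simple Fehr-Schmidt-style utility with an assumed “envy” parameter. This is a toy model, not data from any actual experiment; the point is only that the very same utility function predicts rejecting lowball offers in Ultimatum 1 and accepting them in Ultimatum 2, because in the latter rejecting no longer hurts Participant 1.

```python
# A toy model (assumed parameters, not experimental data): Participant 2's
# utility is their own payoff minus an "envy" penalty for being behind
# Participant 1, in the spirit of Fehr-Schmidt inequity aversion.

def utility(own, other, envy=0.5):
    """Own payoff, minus a penalty proportional to how far behind you are."""
    return own - envy * max(other - own, 0)

def participant2_accepts(offer, total=20, game="Ultimatum 1", envy=0.5):
    """Does Participant 2 accept an offer of `offer` out of `total`?"""
    keep = total - offer                    # what Participant 1 demanded
    u_accept = utility(offer, keep, envy)
    if game == "Ultimatum 1":
        u_reject = utility(0, 0, envy)      # rejection wipes out both payoffs
    else:                                   # Ultimatum 2
        u_reject = utility(0, keep, envy)   # Participant 1 keeps their demand anyway
    return u_accept >= u_reject

for offer in (1, 2, 5, 10):
    print(f"offer ${offer}:",
          "Ultimatum 1 accepts:", participant2_accepts(offer, game="Ultimatum 1"),
          "| Ultimatum 2 accepts:", participant2_accepts(offer, game="Ultimatum 2"))
```

With this (arbitrary) envy parameter, the model rejects $1 and $2 offers in Ultimatum 1 but accepts them in Ultimatum 2, which is exactly the divergence I’m predicting.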

Like I said, I haven’t actually run these experiments. I’m not sure anyone has. But these results seem very obvious, and I would be deeply shocked if they did not turn out the way I expect. (Perhaps as shocked as so many neoclassical economists were when they first saw the results of experiments on Dictator 1 and Ultimatum 1!)

Thus, Dictator 2 and Ultimatum 2 should have outcomes much more like what neoclassical economics predicts than Dictator 1 and Ultimatum 1.

Yet the only difference—the only difference—between Dictator 1 and Dictator 2, and between Ultimatum 1 and Ultimatum 2, is what happens to someone else’s payoff when you make your decision. Your own payoff is exactly identical.

Thus, behavior changes when we change only the effects on the payoffs of other people; therefore people care about the payoffs of others; therefore social preferences exist.

QED.

Of course this still leaves the question of what sort of social preferences people have, and why:

  • Why are some people more generous than others? Why are people sometimes spiteful—or even hateful?
  • Is it genetic? Is it evolutionary? Is it learned? Is it cultural? Likely all of the above.
  • Are people implicitly thinking of themselves as playing in a broader indefinitely iterated game called “life” and using that to influence their decisions? Quite possibly.
  • Is maintaining a reputation of being a good person important to people? In general, I’m sure it is, but I don’t think it can explain the results of these economic experiments by itself—especially in versions where everything is completely anonymous.

But given the stark differences between Dictator 1 versus Dictator 2 and Ultimatum 1 versus Ultimatum 2 (and really, feel free to run the experiments!), I don’t think anyone can reasonably doubt that social preferences do, in fact, exist.

If you ever find someone who does doubt social preferences, point them to this post.

The Index of Necessary Expenditure

Mar 16 JDN 2460751

I’m still reeling from the fact that Donald Trump was re-elected President. He seemed obviously horrible at the time, and he still seems horrible now, for many of the same reasons as before (we all knew the tariffs were coming, and I think deep down we knew he would sell out Ukraine because he loves Putin), as well as some brand new ones (I did not predict DOGE would gain access to all the government payment systems, nor that Trump would want to start a “crypto fund”). Kamala Harris was not an ideal candidate, but she was a good candidate, and the comparison between the two could not have been starker.

Now that the dust has cleared and we have good data on voting patterns, I am now less convinced than I was that racism and sexism were decisive against Harris. I think they probably hurt her some, but given that she actually lost the most ground among men of color, racism seems like it really couldn’t have been a big factor. Sexism seems more likely to be a significant factor, but the fact that Harris greatly underperformed Hillary Clinton among Latina women at least complicates that view.

A lot of voters insisted that they voted on “inflation” or “the economy”. Setting aside for a moment how absurd it was—even at the time—to think that Trump (he of the tariffs and mass deportations!) was going to do anything beneficial for the economy, I would like to better understand how people could be so insistent that the economy was bad even though standard statistical measures said it was doing fine.

Krugman believes it was a “vibecession”, where people thought the economy was bad even though it wasn’t. I think there may be some truth to this.


But today I’d like to evaluate another possibility, that what people were really reacting against was not inflation per se but necessitization.

I first wrote about necessitization in 2020; as far as I know, the term is my own coinage. The basic notion is that while prices overall may not have risen all that much, prices of necessities have risen much faster, and the result is that people feel squeezed by the economy even as CPI growth remains low.

In this post I’d like to more directly evaluate that notion, by constructing an index of necessary expenditure (INE).

The core idea here is this:

What would you continue to buy, in roughly the same amounts, even if it doubled in price, because you simply can’t do without it?

For example, this is clearly true of housing: You can rent or you can own, but you can’t not have housing. Nor are most families going to buy multiple houses—and they can’t buy partial houses.

It’s also true of healthcare: You need whatever healthcare you need. Yes, depending on your conditions, you maybe could go without, but not without suffering, potentially greatly. Nor are you going to go out and buy a bunch of extra healthcare just because it’s cheap. You need what you need.

I think it’s largely true of education as well: You want your kids to go to college. If college gets more expensive, you might—of necessity—send them to a worse school or not allow them to complete their degree, but this would feel like a great hardship for your family. And in today’s economy you can’t not send your kids to college.

But this is not true of technology: While there is a case to be made that in today’s society you need a laptop in the house, the fact is that not so long ago most households did without one, and if laptops suddenly got a lot cheaper you very well might buy another one.

Well, it just so happens that housing, healthcare, and education have all gotten radically more expensive over time, while technology has gotten radically cheaper. So prima facie, this is looking pretty plausible.

But I wanted to get more precise about it. So here is the index I have constructed. I consider a family of four, two adults, two kids, making the median household income.

To get the median income, I’ll use this FRED series for median household income, then use this table of median federal tax burden to get an after-tax wage. (State taxes vary too much for me to usefully include them.) Since the tax table ends in 2020 which was anomalous, I’m going to extrapolate that 2021-2024 should be about the same as 2019.

I assume the kids go to public school, but the parents are saving up for college; to make the math simple, I’ll assume the family is saving enough for each kid to graduate with a four-year degree from a public university, and that saving is spread over 16 years of the child’s life. 2*4/16 = 0.5; this means that each year the family needs to come up with 0.5 years of cost of attendance. (I had to get the last few years from here, but the numbers are comparable.)

I assume the family owns two cars—with both adults working full time, they kinda have to—which I amortize over 10-year lifetimes; 2*1/10 = 0.2, so each year the family pays 0.2 times the value of an average midsize car. (The current average new car price is $33,226; I then use the CPI for cars to figure out what it was in previous years.)

I assume they pay a 30-year mortgage on the median home; they would pay interest on this mortgage, so I need to factor that in. I’ll assume they pay the average mortgage rate in that year, but I don’t want to have to do a full mortgage calculation (including PMI, points, down payment etc.) for each year, so I’ll say that the amount they pay is (1/30 + 0.5*(interest rate))*(home value) per year, which seems to be a reasonable approximation over the relevant range.
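
If you want to check that shortcut yourself, here is a quick sketch comparing it against the standard 30-year amortization formula at a few interest rates. Like the shortcut, it ignores PMI, points, and the down payment, and the home value is just a placeholder.

```python
# A quick check of the shortcut (1/30 + 0.5 * rate) * home_value against a
# standard 30-year fixed-rate amortization. Both ignore PMI, points, and the
# down payment; $300,000 is just a placeholder home value.

def exact_annual_payment(home_value, annual_rate, years=30):
    """Standard fixed-rate mortgage payment, annualized."""
    r = annual_rate / 12                      # monthly interest rate
    n = years * 12                            # number of monthly payments
    monthly = home_value * r / (1 - (1 + r) ** -n)
    return 12 * monthly

def shortcut_annual_payment(home_value, annual_rate, years=30):
    """The shortcut: straight-line principal plus half the interest rate."""
    return (1 / years + 0.5 * annual_rate) * home_value

for rate in (0.03, 0.05, 0.07, 0.09):
    exact = exact_annual_payment(300_000, rate)
    approx = shortcut_annual_payment(300_000, rate)
    print(f"rate {rate:.0%}: amortized ${exact:,.0f} vs shortcut ${approx:,.0f}")
```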

I assume that both adults have a 15-mile commute (this seems roughly commensurate with the current mean commute time of 26 minutes), both adults work 5 days per week, 50 weeks per year, and their cars get the median level of gas mileage. This means that they consume 2*15*2*5*50/(median MPG) = 15000/(median MPG) gallons of gasoline per year. I’ll use this BTS data for gas mileage. I’m intentionally not using median gasoline consumption, because when gas is cheap, people might take more road trips, which is consumption that could be avoided without great hardship when gas gets expensive. I will also assume that the kids take the bus to school, so that doesn’t contribute to the gasoline cost.

That I will multiply by the average price of gasoline in June of that year, which I have from the EIA since 1993. (I’ll extrapolate 1990-1992 as the same as 1993, which is conservative.)

I will assume that the family owns 2 cell phones, 1 computer, and 1 television. This is tricky, because the quality of these tech items has dramatically increased over time.

If you try to measure with equivalent buying power (e.g. a 1 MHz computer, a 20-inch CRT TV), then you’ll find that these items have gotten radically cheaper; $1000 in 1950 would only buy as much TV as $7 today, and a $50 Raspberry Pi’s 2.4 GHz processor is 150 times faster than the 16 MHz offered by an Apple PowerBook in 1991—despite the latter selling for $2500 nominally. So in dollars per gigahertz, the price of computers has fallen by an astonishing 7,500 times just since 1990.

But I think that’s an unrealistic comparison. The standards for what was considered necessary have also increased over time. I actually think it’s quite fair to assume that people have spent a roughly constant nominal amount on these items: about $500 for a TV, $1000 for a computer, and $500 for a cell phone. I’ll also assume that the TV and phones are good for 5 years while the computer is good for 2 years, which makes the total annual expenditure for 2 phones, a TV, and a computer equal to 2/5*500 + 1/5*500 + 1/2*1000 = 800. This is about what a family must spend every year to feel like they have an adequate amount of digital technology.

I will also assume that the family buys clothes with this equivalent purchasing power, with an index that goes from 166 in 1990 to 177 in 2024—also nearly constant in nominal terms. I’ll multiply that index by $10 because the average annual household spending on clothes is about $1700 today.

I will assume that the family buys the equivalent of five months of infant care per year; they surely spend more than this (in either time or money) when they have actual infants, but less as the kids grow. This amounts to about $5000 today, but was only $1600 in 1990—a 214% increase, or 3.42% per year.

For food expenditure, I’m going to use the USDA’s thrifty plan for June of that year. I’ll use the figures assuming that one child is 6 and the other is 9. I don’t have data before 1994, so I’ll extrapolate that with the average growth rate of 3.2%.

Food expenditures have been at a fairly consistent 11% of disposable income since 1990; so I’m going to include them as 2*11%*40*50*(after-tax median wage) = 440*(after-tax median wage).

The figures I had the hardest time getting were for utilities. It’s also difficult to know what to include: Is Internet access a necessity? Probably, nowadays—but not in 1990. Should I separate electric and natural gas, even though they are partial substitutes? But using these figures I estimate that utility costs rise at about 0.8% per year in CPI-adjusted terms, so what I’ll do is benchmark to $3800 in 2016 and assume that utility costs have risen by (0.8% + inflation rate) per year each year.

Healthcare is also a tough one; pardon the heteronormativity, but for simplicity I’m going to use the mean personal healthcare expenditures for one man and woman (aged 19-44) and one boy and one girl (aged 0-18). Unfortunately I was only able to find that for two-year intervals in the range from 2002 to 2020, so I interpolated and extrapolated both directions assuming the same average growth rate of 3.5%.
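
Several of these series (food before 1994, utilities, childcare, healthcare between survey years) get filled in the same basic way, so here is a minimal sketch of the interpolate-and-extrapolate-at-a-constant-growth-rate trick. The numbers in the example are placeholders, not the actual healthcare figures.

```python
# A minimal sketch of filling in a sparse annual series: geometric interpolation
# between known years, and extrapolation beyond them at an assumed growth rate.

def fill_series(known, years, growth):
    """
    known: dict mapping some years to known values.
    years: iterable of all years to fill.
    growth: assumed average annual growth rate (e.g. 0.035 for 3.5%),
            used to extrapolate beyond the known range.
    """
    known_years = sorted(known)
    out = {}
    for y in years:
        if y in known:
            out[y] = known[y]
        elif y < known_years[0]:
            # extrapolate backward at the assumed growth rate
            out[y] = known[known_years[0]] / (1 + growth) ** (known_years[0] - y)
        elif y > known_years[-1]:
            # extrapolate forward at the assumed growth rate
            out[y] = known[known_years[-1]] * (1 + growth) ** (y - known_years[-1])
        else:
            # geometric interpolation between the nearest known years
            lo = max(k for k in known_years if k < y)
            hi = min(k for k in known_years if k > y)
            r = (known[hi] / known[lo]) ** (1 / (hi - lo))   # implied annual growth
            out[y] = known[lo] * r ** (y - lo)
    return out

# Example with placeholder (not actual) biennial healthcare figures:
series = fill_series({2002: 6000, 2004: 6500, 2020: 11000},
                     range(2000, 2025), growth=0.035)
print(round(series[2003]), round(series[2024]))
```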

So let’s summarize what all is included here:

  • Estimated payment on a mortgage
  • 0.5 years of college tuition
  • Amortized cost of 2 cars
  • 15,000/(median MPG) gallons of gasoline
  • Amortized cost of 2 phones, 1 computer, and 1 television
  • Average spending on clothes
  • 11% of income on food
  • Estimated utilities spending
  • Estimated childcare equivalent to five months of infant care
  • Healthcare for one man, one woman, one boy, one girl
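
Putting those pieces together, here is a minimal sketch of how the components add up for a single year. Every input below is a rough placeholder for illustration only, not a value taken from the data series described above (FRED, BTS, EIA, USDA, and so on).

```python
# A minimal sketch of how the INE components combine for a single year.
# Every number below is a rough placeholder for illustration, NOT a value
# from the actual data series described above.

def necessary_expenditure(
    home_value,          # median home price
    mortgage_rate,       # average 30-year mortgage rate (e.g. 0.07 for 7%)
    cost_of_attendance,  # one year at a public university
    avg_car_price,       # average new midsize car
    median_mpg,          # median fuel economy
    gas_price,           # average price per gallon of gasoline
    clothes_index,       # clothing index, multiplied by $10
    infant_care_annual,  # cost of a full year of infant care
    food_annual,         # food budget (USDA thrifty plan / ~11% of after-tax income)
    utilities_annual,    # estimated utilities
    healthcare_annual,   # mean personal healthcare spending, summed over the family
):
    housing = (1/30 + 0.5 * mortgage_rate) * home_value  # approximate mortgage payment
    college = 0.5 * cost_of_attendance                   # 2 kids * 4 years / 16 years of saving
    cars = 0.2 * avg_car_price                           # 2 cars amortized over 10 years
    gasoline = 15000 / median_mpg * gas_price            # 2 commuters, 15 miles each way, 250 days
    tech = 2/5 * 500 + 1/5 * 500 + 1/2 * 1000            # 2 phones, 1 TV, 1 computer = $800/year
    clothes = 10 * clothes_index
    childcare = 5/12 * infant_care_annual                # ~5 months of infant care per year
    return sum([housing, college, cars, gasoline, tech, clothes,
                childcare, food_annual, utilities_annual, healthcare_annual])

# Illustrative placeholder inputs, very roughly 2024-ish magnitudes:
ine = necessary_expenditure(
    home_value=420_000, mortgage_rate=0.07, cost_of_attendance=25_000,
    avg_car_price=33_226, median_mpg=25, gas_price=3.50, clothes_index=177,
    infant_care_annual=12_000, food_annual=12_000, utilities_annual=5_000,
    healthcare_annual=16_000,
)
after_tax_income = 65_000  # placeholder median after-tax household income
print(f"INE = ${ine:,.0f} ({ine / after_tax_income:.0%} of after-tax income)")
```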

There are obviously many criticisms you could make of these choices. If I were writing a proper paper, I would search harder for better data and run robustness checks over the various estimation and extrapolation assumptions. But for these purposes I really just want a ballpark figure, something that will give me a sense of what rising cost of living feels like to most people.

What I found absolutely floored me. Over the range from 1990 to 2024:

  1. The Index of Necessary Expenditure rose by an average of 3.45% per year, almost a full percentage point higher than the average CPI inflation of 2.62% per year.
  2. Over the same period, after-tax income rose at a rate of 3.31%, faster than CPI inflation, but slightly slower than the growth rate of INE.
  3. The Index of Necessary Expenditure was over 100% of median after-tax household income every year except 2020.
  4. Since 2021, the Index of Necessary Expenditure has risen at an average rate of 5.74%, compared to CPI inflation of only 2.66%. In that same time, after-tax income has only grown at a rate of 4.94%.

Point 3 is the one that really stunned me. The only time in the last 34 years that a family of four has been able to actually pay for all necessities—just necessities—on a typical household income was during the COVID pandemic, and that in turn was only because the federal tax burden had been radically reduced in response to the crisis. This means that every single year, a typical American family has been either going further and further into debt, or scrimping on something really important—like healthcare or education.

No wonder people feel like the economy is failing them! It is!

In fact, I can even make sense now of how Trump could convince people with “Are you better off than you were four years ago?” in 2024 looking back at 2020—while the pandemic was horrific and the disruption to the economy was massive, thanks to the US government finally actually being generous to its citizens for once, people could just about actually make ends meet. That one year. In my entire life.

This is why people felt betrayed by Biden’s economy. For the first time most of us could remember, we actually had this brief moment when we could pay for everything we needed and still have money left over. And then, when things went back to “normal”, it was taken away from us. We were back to no longer making ends meet.

When I went into this, I expected to see that the INE had risen faster than both inflation and income, which was indeed the case. But I expected to find that INE was a large but manageable proportion of household income—maybe 70% or 80%—and slowly growing. Instead, I found that INE was greater than 100% of income in every year but one.

And the truth is, I’m not sure I’ve adequately covered all necessary spending! My figures for childcare and utilities are the most uncertain; those could easily go up or down by quite a bit. But even if I exclude them completely, the reduced INE is still greater than income in most years.

Suddenly the way people feel about the economy makes a lot more sense to me.

What’s fallacious about naturalism?

Jan 5 JDN 2460681

There is another line of attack against a scientific approach to morality, one which threatens all the more because it comes from fellow scientists. Even though they generally agree that morality is real and important, many scientists have suggested that morality is completely inaccessible to science. There are a few different ways that this claim can be articulated; the most common are Stephen Jay Gould’s concept of “non-overlapping magisteria” (NOMA), David Hume’s “is-ought problem”, and G.E. Moore’s “naturalistic fallacy”. As I will show, none of these pose serious threats to a scientific understanding of morality.

NOMA

Stephen Jay Gould, though a scientist, an agnostic, and a morally upright person, did not think that morality could be justified in scientific or naturalistic terms. He seemed convinced that moral truth could only be understood through religion, and indeed seemed to use the words “religion” and “morality” almost interchangeably:

The magisterium of science covers the empirical realm: what the Universe is made of (fact) and why does it work in this way (theory). The magisterium of religion extends over questions of ultimate meaning and moral value. These two magisteria do not overlap, nor do they encompass all inquiry (consider, for example, the magisterium of art and the meaning of beauty).

If we take Gould to be using a very circumscribed definition of “science” to just mean the so-called “natural sciences” like physics and chemistry, then the claim is trivial. Of course we cannot resolve moral questions about stem cell research entirely in terms of quantum physics or even entirely in terms of cellular biology; no one ever supposed that we could. Yes, it’s obvious that we need to understand the way people think and the way they interact in social structures. But that’s precisely what the fields of psychology, sociology, economics, and political science are designed to do. It would be like saying that quantum physics cannot by itself explain the evolution of life on Earth. This is surely true, but it’s hardly relevant.

Conversely, if we define science broadly to include all rational and empirical methods: physics, chemistry, geology, biology, psychology, sociology, astronomy, logic, mathematics, philosophy, history, archaeology, anthropology, economics, political science, and so on, then Gould’s claim would mean that there is no rational reason for thinking that rape and genocide are immoral.

And even if we suppose there is something wrong with using science to study morality, the alternative Gould offers us—religion—is far worse. As I’ve already shown in previous posts, religion is a very poor source of moral understanding. If morality is defined by religious tradition, then it is arbitrary and capricious, and real moral truth disintegrates.

Fortunately, we have no reason to think so. The entire history of ethical philosophy speaks against such notions; had Immanuel Kant and John Stuart Mill been alive to read Gould’s claims, they would have scoffed at them. I suspect Peter Singer and Thomas Pogge would scoff similarly today. Religion doesn’t offer any deep insights into morality, and reason often does; NOMA is simply wrong.

What’s the problem with “ought” and “is”?

The next common objection to a scientific approach to morality is the remark, after David Hume, that “one cannot derive an ought from an is”; due to a conflation with a loosely-related argument that G.E. Moore made later, the attempt to derive moral statements from empirical facts has come to be called the “naturalistic fallacy” (this is clearly not what Moore intended; I will address Moore’s actual point in a later post). But in truth, I do not really see where the fallacy is meant to lie; there is little difference in principle between deriving an “ought” from an “is” and deriving anything else from anything else.

First, let’s put aside direct inferences from “X is true” to “X ought to be true”; these are obviously fallacious. If that’s all Hume was saying, then he is of course correct; but this does little to undermine any serious scientific theory of morality. You can’t infer from “there are genocides” to “there ought to be genocides”; nor can you infer from “there ought to be happy people” to “there are happy people”; but nor would I or any other scientist seek to do so. This is a strawman of naturalistic morality.

It’s true that some people do attempt to draw similar inferences, usually stated in a slightly different form—but these are not moral scientists, they are invariably laypeople with little understanding of the subject. Arguments based on the claim that “homosexuality is unnatural” (therefore wrong) or “violence is natural” (therefore right) are guilty of this sort of fallacy, but I’ve never heard any credible philosopher or scientist support such arguments. (And by the way, homosexuality is nearly as common among animals as violence.)

A subtler way of reasoning from “is” to “ought” that is still problematic is the common practice of surveying people about their moral attitudes and experimentally testing their moral behaviors, sometimes called experimental philosophy. I do think this kind of research is useful and relevant, but it doesn’t get us as far as some people seem to think. Even if we were to prove that 100% of humans who have ever lived believe that cannibalism is wrong, it does not follow that cannibalism is in fact wrong. It is indeed evidence that there is something wrong with cannibalism—perhaps it is maladaptive to the point of being evolutionarily unstable, or it is so obviously wrong that even the most morally-blind individuals can detect its wrongness. But this extra step of explanation is necessary; it simply doesn’t follow from the fact that “everyone believes X is wrong” that in fact “X is wrong”. (Before 1900 just about everyone quite reasonably believed that the passage of time is the same everywhere regardless of location, speed or gravity; Einstein proved everyone wrong.) Moral realism demands that we admit people can be mistaken about their moral beliefs, just as they can be mistaken about other beliefs.

But these are not the only ways to infer from “is” to “ought”, and there are many ways to make such inferences that are in fact perfectly valid. For instance, I know at least two ways to validly prove moral claims from nonmoral claims. The first is by disjunction introduction (traditionally just called “addition”): “2+2=4, therefore 2+2=4 or genocide is wrong”. The second is by contradictory explosion: “2+2=5, therefore genocide is wrong”. Both of these arguments are logically valid. Obviously they are also quite trivial; “genocide is wrong” could be replaced by any other conceivable proposition (even a contradiction!), leaving an equally valid argument. Still, we have validly derived a moral statement from nonmoral statements, while obeying the laws of logic.
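
If you want to see that these really are formally valid, here is a minimal sketch in Lean 4, with “genocide is wrong” as a placeholder proposition; the proofs are one-liners precisely because the inferences are trivial.

```lean
-- A minimal Lean 4 sketch; `GenocideIsWrong` is just a placeholder proposition.

variable (GenocideIsWrong : Prop)

-- "2+2=4, therefore 2+2=4 or genocide is wrong" (disjunction introduction)
example (h : 2 + 2 = 4) : 2 + 2 = 4 ∨ GenocideIsWrong := Or.inl h

-- "2+2=5, therefore genocide is wrong" (anything follows from a falsehood)
example (h : 2 + 2 = 5) : GenocideIsWrong := absurd h (by decide)
```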

Moreover, it is clearly rational to infer a certain kind of “ought” from statements that entirely involve facts. For instance, it is rational to reason, “If you are cold, you ought to close the window”. This is an instrumental “ought” (it says what it is useful to do, given the goals that you have), not a moral “ought” (which would say what goals you should have in the first place). Hence, this is not really inferring moral claims from non-moral claims, since the “ought” isn’t really a moral “ought” at all; if the ends are immoral the means will be immoral too. (It would be equally rational in this instrumental sense to say, “If you want to destroy the world, you ought to get control of the nuclear launch codes”.) In fact this kind of instrumental rationality—doing what accomplishes our goals—actually gets us quite far in defining moral norms for real human beings; but clearly it does not get us far enough.

Finally, and most importantly, epistemic normativity, which any rational being must accept, is itself an inference from “is” to “ought”; it involves inferences like “It is raining, therefore you ought to believe that it is raining.”

With these considerations in mind, we must carefully rephrase Hume’s remark, to something like this:

One cannot nontrivially with logical certainty derive moral statements from entirely nonmoral statements.

This is indeed correct; but here the word “moral” carries no weight and could be replaced by almost anything. One cannot nontrivially with logical certainty derive physical statements from entirely nonphysical statements, nor nontrivially with logical certainty derive statements about fish from statements that are entirely not about fish. For all X, one cannot nontrivially with logical certainty derive statements about X from statements entirely unrelated to X. This is an extremely general truth. We could very well make it a logical axiom. In fact, if we do so, we pretty much get relevance logic, which takes the idea of “nontrivial” proofs to the extreme of actually considering trivial proofs invalid. Most logicians don’t go so far—they say that “2+2=5, therefore genocide is wrong” is technically a valid argument—but everyone agrees that such arguments are pointless and silly. In any case the word “moral” carries no weight here; it is no harder to derive an “ought” from an “is” than it is to derive a “fish” from a “molecule”.

Moreover, the claim that nonmoral propositions can never validly influence moral propositions is clearly false; the argument “Killing is wrong, shooting someone will kill them, therefore shooting someone is wrong” is entirely valid, and the moral proposition “shooting someone is wrong” is derived in large part from the nonmoral proposition “shooting someone will kill them”. In fact, the entire Frege-Geach argument against expressivism hinges upon the fact that we all realize that moral propositions function logically the same way as nonmoral propositions, and can interact with nonmoral propositions in all the usual ways. Even expressivists usually do not deny this; they simply try to come up with ways of rescuing expressivism despite this observation.

There are also ways of validly deriving moral propositions from entirely nonmoral propositions, in an approximate or probabilistic fashion. “Genocide causes a great deal of suffering and death, and almost everyone who has ever lived has agreed that suffering and death are bad and that genocide is wrong, therefore genocide is probably wrong” is a reasonably sound probabilistic argument that infers a moral conclusion based on entirely nonmoral premises, though it lacks the certainty of a logical proof.

We could furthermore take as axiom some definition of moral concepts in terms of nonmoral concepts, and then derive consequences of this definition with logical certainty. “A morally right action maximizes pleasure and minimizes pain. Genocide fails to maximize pleasure or minimize pain. Therefore genocide is not morally right.” Obviously one is free to challenge the definition, but that’s true of many different types of philosophical arguments, not a specific problem in arguments about morality.

So what exactly was Hume trying to say? I’m really not sure. Maybe he has in mind the sort of naive arguments that infer from “unnatural” to “wrong”; if so, he’s surely correct, but the argument does little to undermine any serious naturalistic theories of morality.

Why I celebrate Christmas

Dec 22 JDN 2460667

In my last several posts I’ve been taking down religion and religious morality. So it might seem strange, or even hypocritical, that I would celebrate Christmas, which is widely regarded as a Christian religious holiday. Allow me to explain.

First of all, Christmas is much older than Christianity.

It had other names before: Solstice celebrations, Saturnalia, Yuletide. But human beings of a wide variety of cultures around the world have been celebrating some kind of winter festival around the solstice since time immemorial.

Indeed, many of the traditions we associate with Christmas, such as decorating trees and having an—ahem—Yule log, are in fact derived from pre-Christian traditions that Christians simply adopted.

The reason different regions have their own unique Christmas traditions, such as Krampus, is most likely that these regions already had such traditions surrounding their winter festivals which likewise got absorbed into Christmas once Christianity took over. (Though oddly enough, Mari Lwyd seems to be much more recent, created in the 1800s.)

In fact, Christmas really has nothing to do with the birth of Jesus.

It’s wildly improbable that Jesus was born in December. Indeed, we have very little historical or even Biblical evidence of his birth date. (What little we do have strongly suggests it wasn’t in winter.)

The date of December 25 was almost certainly chosen in order to coincide—and therefore compete—with the existing Roman holiday of Dies Natalis Solis Invicti (literally, “the birthday of the invincible sun”), an ancient solstice celebration. Today the Winter Solstice is slightly earlier, but in the Julian calendar it was December 25.

In the past, Christians have sometimes suppressed Christmas celebration.

Particularly during the 17th century, most Protestant sects, especially the Puritans, regarded Christmas as a Catholic thing, and therefore strongly discouraged their own adherents from celebrating it.

Besides, Christmas is very secularized at this point.

Many have bemoaned its materialistic nature—and even economists have claimed it is “inefficient”—but gift-giving has become a central part of the celebration of Christmas, despite it being a relatively recent addition. Santa Claus has a whole fantasy magic narrative woven around him that is the source of countless movies and has absolutely nothing to do with Christianity.

I celebrate because we celebrate.

When I celebrate Christmas, I’m also celebrating Saturnalia, and Yuletide, and many of the hundreds of other solstice celebrations and winter festivals that human cultures around the world have held for thousands of years. I’m placing myself within a grander context, a unified human behavior that crosses lines of race, religion, and nationality.

Not all cultures celebrate the Winter Solstice, but a huge number do—and those that don’t have their own celebrations which often involve music and feasting and gift-giving too.

So Merry Christmas, and Happy Yuletide, and Io Saturnalia to you all.