Grief, a rationalist perspective

Aug 31 JDN 2460919

This post goes live on the 8th anniversary of my father’s death. Thus it seems an appropriate time to write about grief—indeed, it’s somewhat difficult for me to think about much else.

Far too often, the only perspectives on grief we hear are religious ones. Often, these take the form of consolation: “He’s in a better place now.” “You’ll see him again someday.”

Rationalism doesn’t offer such consolations. Technically one can be an atheist and still believe in an afterlife; but rationalism is stronger than mere atheism. It requires that we believe in scientific facts, and the permanent end of consciousness at death is a scientific fact. We know from direct experiments and observations in neuroscience that a destroyed brain cannot think, feel, see, hear, or remember—when your brain shuts down, whatever you are now will be gone.

It is the Basic Fact of Cognitive Science: There is no soul but the brain.

Moreover, I think, deep down, we all know that death is the end. Even religious people grieve. Their words may say that their loved one is in a better place, but their tears tell a different story.

Maybe it’s an evolutionary instinct, programmed deep into our minds like an ancestral memory, a voice that screams in our minds, insistent on being heard:

“Death is bad!”

If there is one crucial instinct a lifeform needs in order to survive, surely it is something like that one: The preference for life over death. In order to live in a hostile world, you have to want to live.

There are some people who don’t want to live, people who become suicidal. Sometimes even the person we are grieving was someone who chose to take their own life. Generally this is because they believe that their life from then on would be defined only by suffering. Usually, I would say they are wrong about that; but in some cases, maybe they are right, and choosing death is rational. Most of the time, life is worth living, even when we can’t see that.

But aside from such extreme circumstances, most of us feel most of the time that death is one of the worst things that could happen to us or our loved ones. And it makes sense that we feel that way. It is right to feel that way. It is rational to feel that way.

This is why grief hurts so much.

This is why you are not okay.

If the afterlife were real—or even plausible—then grief would not hurt so much. A loved one dying would be like a loved one traveling away to somewhere nice; bittersweet perhaps, maybe even sad—but not devastating the way that grief is. You don’t hold a funeral for someone who just booked a one-way trip to Hawaii, even if you know they aren’t ever coming back.

Religion tries to be consoling, but it typically fails. Because that voice in our heads is still there, repeating endlessly: “Death is bad!” “Death is bad!” “Death is bad!”

But what if religion does give people some comfort in such a difficult time? What if supposing something as nonsensical as Heaven numbs the pain for a little while?

In my view, you’d be better off using drugs. Drugs have side effects and can be addictive, but at least they don’t require you to fundamentally abandon your ontology. Mainstream religion isn’t simply false; it’s absurd. It’s one of the falsest things anyone has ever believed about anything. It’s obviously false. It’s ridiculous. It has never deserved any of the respect and reverence it so often receives.

And in a great many cases, religion is evil. Religion teaches people to be obedient to authoritarians, and to oppress those who are different. Some of the greatest atrocities in history were committed in the name of religion, and some of the worst oppression going on today is done in the name of religion.

Rationalists should give religion no quarter. It is better for someone to find solace in alcohol or cannabis than for them to find solace in religion.

And maybe, in the end, it’s better if they don’t find solace at all.

Grief is good. Grief is healthy. Grief is what we should feel when something as terrible as death happens. That voice screaming “Death is bad!” is right, and we should listen to it.

No, what we need is not to be paralyzed by grief, destroyed by grief. We need to withstand our grief and get through it. We must learn to be strong enough to bear what seems unbearable, not console ourselves with lies.

If you are a responsible adult, then when something terrible happens to you, you don’t pretend it isn’t real. You don’t conjure up a fantasy world in which everything is fine. You face your terrors. You learn to survive them. You make yourself strong enough to carry on. The death of a loved one is a terrible thing; you shouldn’t pretend otherwise. But it doesn’t have to destroy you. You can grow, and heal, and move on.

Moreover, grief has a noble purpose. From our grief we must find motivation to challenge death, to fight death wherever we find it. Those we have already lost are gone; it’s too late for them. But it’s not too late for the rest of us. We can keep fighting.

And through economic development and medical science, we do keep fighting.

In fact, little by little, we are winning the war on death.

Death has already lost its hold upon our children. For most of human history, nearly a third of children died before the age of 5. Now less than 1% do, in rich countries, and even in the poorest countries, it’s typically under 10%. With a little more development—development that is already happening in many places—we can soon bring everyone in the world to the high standard of the First World. We have basically won the war on infant and child mortality.

And death is losing its hold on the rest of us, too. Life expectancy at adulthood is also increasing, and more and more people are living into their nineties and even past one hundred.

It’s true, there still aren’t many people living to be 120 (and some researchers believe it will be a long time before this changes). But living to be 85 instead of 65 is already an extra 20 years of life—and these can be happy, healthy years too, not years of pain and suffering. They say that 60 is the new 50; physiologically, we are so much healthier than our ancestors that it’s as if we were ten years younger.

My sincere hope is that our grief for those we have lost and fear of losing those we still have will drive us forward to even greater progress in combating death. I believe that one day we will finally be able to slow, halt, perhaps even reverse aging itself, rendering us effectively immortal.

Religion promises us immortality, but it isn’t real.

Science offers us the possibility of immortality that’s real.

It won’t be easy to get there. It won’t happen any time soon. In all likelihood, we won’t live to see it ourselves. But one day, our descendants may achieve the grandest goal of all: Finally conquering death.

And even long before that glorious day, our lives are already being made longer and healthier by science. We are pushing death back, step by step, day by day. We are fighting, and we are winning.

Moreover, we as individuals are not powerless in this fight: you can fight death a little harder yourself, by becoming an organ donor, or by donating to organizations that fight global poverty or advance medical science. Let your grief drive you to help others, so that they don’t have to grieve as you do.

And if you need consolation in your grief, let it come from this truth: Death is rarer today than it was yesterday, and will be rarer still tomorrow. We can’t bring back those we have lost, but we can keep ourselves from losing more so soon.

Solving the student debt problem

Aug 24 JDN 2460912

A lot of people speak about student debt as a “crisis”, which makes it sound like the problem is urgent and will have severe consequences if we don’t soon intervene. I don’t think that’s right. While it’s miserable to be unable to pay your student loans, student loans don’t seem to be driving people to bankruptcy or homelessness the way that medical bills do.

Instead I think what we have here is a long-term problem, something that’s been building for a long time and will slowly but surely continue getting worse if we don’t change course. (I guess you can still call it a “crisis” if you want; climate change is also like this, and arguably a crisis.)

But there is a problem here: Student loan balances are rising much faster than other kinds of debt, and the burden falls hardest on Black women and students who went to for-profit schools. A big part of the problem seems to be predatory schools that charge high prices and make big promises but deliver poor results.

Making all this worse is the fact that some of the most important income-based repayment plans were overturned by a federal court, forcing everyone who was on them into forbearance. Income-based repayment was a big reason why student loans actually weren’t as bad a burden as their high loan balances might suggest; unlike a personal loan or a mortgage, if you didn’t have enough income to repay your student loans at the full amount, you could get on a plan that would let you make smaller payments, and if you paid on that plan for long enough—even if it didn’t add up to the full balance—your loans would be forgiven.

Now the forbearance is ending for a lot of borrowers, and so they are going into default; and most of that loan forgiveness has been ruled illegal. (Supposedly this is because Congress didn’t approve it. I’ll believe that was the reason when the courts overrule Trump’s tariffs, which clearly have just as thin a legal justification and will cause far more harm to us and the rest of the world.)

In theory, student loans don’t really seem like a bad idea.

College is expensive, because it requires highly-trained professors, who demand high salaries. (The tuition money also goes other places, of course….)

College is valuable, because it provides you with knowledge and skills that can improve your life and also increase your long-term earnings. It’s a big difference: Median salary for someone with a college degree is about $60k, while median salary for someone with only a high school diploma is about $34k.

Most people don’t have enough liquidity to pay for college.

So, we provide loans, so that people can pay for college, and then when they make more money after graduating, they can pay the loans back.

That’s the theory, anyway.

The problem is that average or even median salaries obscure a lot of variation. Some college graduates become doctors, lawyers, or stockbrokers and make huge salaries. Others can’t find jobs at all. In the absence of income-based repayment plans, all students have to pay back their loans in full, regardless of their actual income after graduation.

There is inherent risk in trying to build a career. Our loan system—especially with the recent changes—puts most of this risk on the student. We treat it as their fault they can’t get a good job, and then punish them with loans they can’t afford to repay.

In fact, right now the job market is pretty bad for recent graduates—while usually unemployment for recent college grads is lower than that of the general population, since about 2018 it has actually been higher. (It’s no longer sky-high like it was during COVID; 4.8% is not bad in the scheme of things.)

Actually the job market may even be worse than it looks, because the rate of new hires is the lowest it’s been since 2020. Our relatively low unemployment currently seems to reflect a lack of layoffs, not a healthy churn of people entering and leaving jobs. People seem to be locked into their jobs, and if they do leave them, finding another is quite difficult.

What I think we need is a system that makes the government take on more of the risk, instead of the students.

There are lots of ways to do this. Actually, the income-based repayment systems we used to have weren’t too bad.

But there is actually a way to do it without student loans at all. College could be free, paid for by taxes.


Now, I know what you’re thinking: Isn’t this unfair to people who didn’t go to college? Why should they have to pay?

Who said they were paying?

There could simply be a portion of the income tax that you only pay if you have a bachelor’s degree. Then you would only pay this tax if you both graduated from college and make a lot of money.

I don’t think this would create a strong incentive not to get a bachelor’s degree; the benefits of doing so would remain quite large, even if your taxes were a bit higher as a result.
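To make that concrete, here is a toy comparison under a purely hypothetical 3% graduate surtax, using the median salaries cited above; every other number (the flat base tax rate, the 40-year career) is an assumption for illustration only, not a policy proposal.

```python
# Toy comparison: does a hypothetical graduate surtax erase the earnings
# premium from a bachelor's degree? All numbers are illustrative assumptions.

def annual_take_home(salary, base_tax_rate=0.20, grad_surtax=0.0):
    """Take-home pay under a flat base tax plus an optional graduate surtax."""
    return salary * (1 - base_tax_rate - grad_surtax)

YEARS = 40  # assumed length of a working career

# Median salaries cited above: about $60k with a degree, $34k without.
grad_total    = annual_take_home(60_000, grad_surtax=0.03) * YEARS
nongrad_total = annual_take_home(34_000) * YEARS

print(f"Graduate (with 3% surtax): ${grad_total:,.0f} over {YEARS} years")
print(f"Non-graduate:              ${nongrad_total:,.0f} over {YEARS} years")
print(f"Remaining degree premium:  ${grad_total - nongrad_total:,.0f}")
```

Even with the surtax, the degree still comes out hundreds of thousands of dollars ahead over a career, which is why I doubt the disincentive would bite.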

It might create incentives to major in subjects that aren’t as closely linked to higher earnings—liberal arts instead of engineering, medicine, law, or business. But this I see as fundamentally a public good: The world needs people with liberal arts education. If the market fails to provide for them, the government should step in.

This plan is not as progressive as Elizabeth Warren’s proposal to use wealth taxes to fund free college; but it might be more politically feasible. The argument that people who didn’t go to college shouldn’t have to pay for people who did actually seems reasonable to me; but this system would ensure that in fact they don’t.

The transfer of wealth here would be from people who went to college and make a lot of money to people who went to college and don’t make a lot of money. It would be the government bearing some of the financial risk of taking on a career in an uncertain world.

Conflict without shared reality

Aug 17 JDN 2460905

Donald Trump has federalized the police in Washington D.C. and deployed the National Guard. He claims he is doing this in response to a public safety emergency and crime that is “out of control”.

Crime rates in Washington, D.C. are declining and overall at their lowest level in 30 years. Its violent crime rate has not been this low since the 1960s.

By any objective standard, there is no emergency here. Crime in D.C. is not by any means out of control.

Indeed, across the United States, homicide rates are as low as they have been in 60 years.

But we do not live in a world where politics is based on objective truth.

We live in a world where the public perception of reality itself is shaped by the political narrative.

One of the first things that authoritarians do to control these narratives is try to make their followers distrust objective sources. I watch in disgust as not simply the Babylon Bee (which is a right-wing satire site that tries really hard to be funny but never quite manages it) but even the Atlantic (a mainstream news outlet generally considered credible) feeds—in multiple articles—into this dangerous lie that crime is increasing and the official statistics are somehow misleading us about that.

Of course the Atlantic‘s take is much more nuanced; but quite frankly, now is not the time for nuance. A fascist is trying to take over our government, and he needs to be resisted at every turn by every means possible. You need to be calling him out on every single lie he tells—yes, every single one, I know there are a lot of them, and that’s kind of the point—rather than trying to find alternative framings on which maybe part of what he said could somehow be construed as reasonable from a certain point of view. Every time you make Trump sound more reasonable than he is—and mainstream news outlets have done this literally hundreds of times—you are pushing America closer to fascism.

I really don’t know what to do here.

It is impossible to resolve conflicts when they are not based on shared reality.

No policy can solve a crime wave that doesn’t exist. No trade agreement can stop unfair trading practices that aren’t happening. Nothing can stop vaccines from causing autism that they already don’t cause. There is no way to fix problems when those problems are completely imaginary.

I used to think that political conflict was about different values which had to be balanced against one another: Liberty versus security, efficiency versus equality, justice versus mercy. I thought that we all agreed on the basic facts and even most of the values, and were just disagreeing about how to weigh certain values over others.

Maybe I was simply naive; maybe it’s never been like that. But it certainly isn’t right now. We aren’t disagreeing about what should be done; we are disagreeing about what is happening in front of our eyes. We don’t simply have different priorities or even different values; it’s like we are living in different worlds.

I have read, from Jonathan Haidt among others, that conservatives largely understand what liberals want, but liberals don’t really understand what conservatives want. (I would like to take one of the tests they use in these experiments and see how I actually do; but I’ve never been able to find one.)

Haidt’s particular argument seems to be that liberals don’t “understand” the “moral dimensions” of loyalty, authority, and sanctity, because we only “understand” harm and fairness as the basis of morality. But just because someone says something is morally relevant, that doesn’t mean it is morally relevant! And indeed, based on more or less the entirety of ethical philosophy, I can say that harm and fairness are morality, and the others simply aren’t. They are distortions of morality, they are inherently evil, and we are right to oppose them at every turn. Loyalty, authority, and sanctity are what fed Nazi Germany and the Spanish Inquisition.

This claim that liberals don’t understand conservatives has always seemed very odd to me: I feel like I have a pretty clear idea what conservatives want, it’s just that what they want is terrible: Kick out the immigrants, take money from the poor and give it to the rich, and put rich straight Christian White men back in charge of everything. (I mean, really, if that’s not what they want, why do they keep voting for people who do it? Revealed preferences, people!)

Or, more sympathetically: They want to go back to a nostalgia-tinted vision of the 1950s and 1960s in which it felt like things were going well for our country—because they were blissfully ignorant of all the violence and injustice in the world. No, thank you, Black people and queer people do not want to go back to how we were treated in the 1950s—when segregation was legal and Alan Turing was chemically castrated. (And they also don’t seem to grasp that among the things that did make some things go relatively well in that period were unions, antitrust law and progressive taxes, which conservatives now fight against at every turn.)

But I think maybe part of what’s actually happening here is that a lot of conservatives actually “want” things that literally don’t make sense, because they rest upon assumptions about the world that simply aren’t true.

They want to end “out of control” crime that is the lowest it’s been in decades.

They want to stop schools from teaching things that they already aren’t teaching.

They want the immigrants to stop bringing drugs and crime that they aren’t bringing.

They want LGBT people to stop converting their children, which we already don’t and couldn’t. (And then they want to do their own conversions in the other direction—which also don’t work, but cause tremendous harm.)

They want liberal professors to stop indoctrinating their students in ways we already aren’t and can’t. (If we could indoctrinate our students, don’t you think we’d at least make them read the syllabus?)

They want to cut government spending by eliminating “waste” and “fraud” that amount to trivial sums, without cutting the things that are actually expensive, like Social Security, Medicare, and the military. They think we can balance the budget without cutting these things or raising taxes—which is just literally mathematically impossible.

They want to close off trade to bring back jobs that were sent offshore—but those jobs weren’t sent offshore, they were replaced by robots. (US manufacturing output is near its highest ever, even though manufacturing employment is half what it once was.)


And meanwhile, there’s a bunch of real problems that aren’t getting addressed: Soaring inequality, a dysfunctional healthcare system, climate change, the economic upheaval of AI—and they either don’t care about those, aren’t paying attention to them, or don’t even believe they exist.

It feels a bit like this:

You walk into a room and someone points a gun at you, shouting “Drop the weapon!” but you’re not carrying a weapon. And you show your hands, and try to explain that you don’t have a weapon, but they just keep shouting “Drop the weapon!” over and over again. Someone else has already convinced them that you have a weapon, and they expect you to drop that weapon, and nothing you say can change their mind about this.

What exactly should you do in that situation?

How do you avoid getting shot?

Do you drop something else and say it’s the weapon (make some kind of minor concession that looks vaguely like what they asked for)? Do you try to convince them that you have a right to the weapon (accept their false premise but try to negotiate around it)? Do you just run away (leave the country?)? Do you double down and try even harder to convince them that you really, truly, have no weapon?

I’m not saying that everyone on the left has a completely accurate picture of reality; there are clearly a lot of misconceptions on this side of the aisle as well. But at least among the mainstream center left, there seems to be a respect for objective statistics and a generally accurate perception of how the world works—the “reality-based community”. Sometimes liberals make mistakes, have bad ideas, or even tell lies; but I don’t hear a lot of liberals trying to fix problems that don’t exist or asking for the government budget to be changed in ways that violate basic arithmetic.

I really don’t know what to do here, though.

How do you change people’s minds when they won’t even agree on the basic facts?

On foxes and hedgehogs, part II

Aug 10 JDN 2460898

In last week’s post I described Philip E. Tetlock’s experiment showing that “foxes” (people who are open-minded and willing to consider alternative views) make more accurate predictions than “hedgehogs” (people who are dogmatic and conform strictly to a single ideology).

As I explained at the end of the post, he, uh, hedges on this point quite a bit, coming up with various ways that the hedgehogs might be able to redeem themselves, but still concluding that in most circumstances, the foxes seem to be more accurate.

Here are my thoughts on this:

I think he went too easy on the hedgehogs.

I consider myself very much a fox, and I honestly would never assign a probability of 0% or 100% to any physically possible event. Honestly I consider it a flaw in Tetlock’s design that he included those as options but didn’t include probabilities I would assign, like 1%, 0.1%, or 0.01%.

He only let people assign probabilities in 10% increments. So I guess if you thought something was 3% likely, you were supposed to round to 0%? That still feels terrible; I’d probably still write 10%. There weren’t any questions like “Aliens from the Andromeda Galaxy arrive to conquer our planet, thus rendering all previous political conflicts moot”, but had there been, I’d still be tempted not to put 0%. I suppose I would put 0% for that one, since in 99.999999% of cases I’d get it right (it wouldn’t happen) and earn more points. But for an event in the single-digit percentages? I’d mash the 10% button. I am pretty much allergic to overconfidence.

In fact, I think in my mind I basically try to use a logarithmic score, which, unlike a Brier score, severely (technically, infinitely) punishes you when something you called impossible happens, or something you called inevitable doesn’t. Like, really, if you’re doing it right, that should never, ever happen to you. If you assert that something has 0% probability and it happens, you have just conclusively disproven your worldview. (Admittedly it’s possible you could fix it with small changes—but a full discussion of that would get us philosophically too far afield; “outside the scope of this paper”, as they say.)

So honestly I think he was too lenient on overconfidence by using a Brier score, which does penalize this kind of catastrophic overconfidence, but only by a moderate amount. If you say that something has a 0% chance and then it happens, you get a Brier score of -1. But if you say that something has a 50% chance and then it happens (which it would, you know, 50% of the time), you’d get a Brier score of -0.25. So even absurd overconfidence isn’t really penalized that badly.

Compare this to a logarithmic rule: Say 0% and it happens, and you get negative infinity. You lose. You fail. Go home. Your worldview is bad and you should feel bad. This should never happen to you if you have a coherent worldview (modulo the fact that he didn’t let you say 0.01%).

So if I had designed this experiment, I would have given finer-grained options at the extremes, and then brought the hammer down on anybody who actually asserted a 0% chance of an event that actually occurred. (There’s no need for the finer-grained options elsewhere; over millennia of history, the difference between 0% and 0.1% is whether it won’t happen or it will—quite relevant for, say, full-scale nuclear war—while the difference between 40% and 42.1% is whether it’ll happen every 2 to 3 years or… every 2 to 3 years.)
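To make the contrast concrete, here is a minimal sketch in Python of the two scoring rules on a single binary event, using the same higher-is-better sign convention as the numbers above; Tetlock’s actual scoring is computed over many multi-outcome questions, so treat this as illustrative only.

```python
import math

def brier_score(p, outcome):
    """Negative squared error for a binary event; higher is better, 0 is perfect."""
    return -((p - outcome) ** 2)

def log_score(p, outcome):
    """Log of the probability assigned to what actually happened."""
    prob_of_actual = p if outcome == 1 else 1 - p
    return -math.inf if prob_of_actual == 0 else math.log(prob_of_actual)

# An event the forecaster called impossible... and then it happened.
print(brier_score(0.0, 1))  # -1.0  : bad, but survivable
print(log_score(0.0, 1))    # -inf  : worldview conclusively disproven

# A maximally uncertain 50% forecast on the same event.
print(brier_score(0.5, 1))  # -0.25
print(log_score(0.5, 1))    # about -0.69
```

Under the Brier rule, calling something impossible that then happens costs only four times as much as shrugging and saying 50%; the logarithmic rule makes that same mistake infinitely costly.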

But okay, let’s say we stick with the Brier score, because infinity is scary.

  1. About the adjustments:
    1. The “value adjustments” are just absolute nonsense. Those would be reasons to adjust your policy response, via your utility function—they are not a reason to adjust your probability. Yes, a nuclear terrorist attack would be a really big deal if it happened and we should definitely be taking steps to prevent that; but that doesn’t change the fact that the probability of one happening is something like 0.1% per year and none have ever happened. Predicting things that don’t happen is bad forecasting, even if the things you are predicting would be very important if they happened.
    2. The “difficulty adjustments” are sort of like applying a different scoring rule, so that I’m more okay with; but that wasn’t enough to make the hedgehogs look better than the foxes.
    3. The “fuzzy set” adjustments could be legitimate, but only under particular circumstances. Being “almost right” is only valid if you clearly showed that the result was anomalous because of some other unlikely event, and—because the timeframe was clearly specified in the questions—“might still happen” should still get fewer points than accurately predicting that it hasn’t happened yet. Moreover, it was very clear that people only ever applied these sort of changes when they got things wrong; they rarely if ever said things like “Oh, wow, I said that would happen and it did, but for completely different reasons that I didn’t expect—I was almost wrong there.” (Crazy example, but if the Soviet Union had been taken over by aliens, “the Soviet Union will fall” would be correct—but I don’t think you could really attribute that to good political prediction.)
  2. The second exercise shows that even the foxes are not great Bayesians, and that some manipulations can make people even more inaccurate than before; but the hedgehogs also perform worse and also make some of the same crazy mistakes and still perform worse overall than the foxes, even in that experiment.
  3. I guess he’d call me a “hardline neopositivist”? Because I think that an experiment asking people to predict things should require people to, um, actually predict things? The task was not to get the predictions wrong and then come up with clever excuses for why you were wrong that don’t challenge your worldview. The task was to not get the predictions wrong. Apparently this very basic level of scientific objectivity is now considered “hardline neopositivism”.

I guess we can reasonably acknowledge that making policy is about more than just prediction, and indeed maybe being consistent and decisive is advantageous in a game-theoretic sense (in much the same way that the way to win a game of Chicken is to very visibly throw away your steering wheel). So you could still make a case for why hedgehogs are good decision-makers or good leaders.

But I really don’t see how you weasel out of the fact that hedgehogs are really bad predictors. If I were running a corporation, or a government department, or an intelligence agency, I would want accurate predictions. I would not be interested in clever excuses or rich narratives. Maybe as leaders one must assemble such narratives in order to motivate people; so be it, there’s a division of labor there. Maybe I’d have a separate team of narrative-constructing hedgehogs to help me with PR or something. But the people who are actually analyzing the data should be people who are good at making accurate predictions, full stop.

And in fact, I don’t think hedgehogs are good decision-makers or good leaders. I think they are good politicians. I think they are good at getting people to follow them and believe what they say. But I do not think they are actually good at making the decisions that would be the best for society.

Indeed, I think this is a very serious problem.

I think we systematically elect people to higher office—and hire them for jobs, and approve them for tenure, and so on—because they express confidence rather than competence. We pick the people who believe in themselves the most, who (by regression to the mean if nothing else) are almost certainly the people who are most over-confident in themselves.

Given that confidence is easier to measure than competence in most areas, it might still make sense to choose confident people if confidence were really positively correlated with competence, but I’m not convinced that it is. I think part of what Tetlock is showing us is that the kind of cognitive style that yields high confidence—a hedgehog—simply is not the kind of cognitive style that yields accurate beliefs—a fox. People who are really good at their jobs are constantly questioning themselves, always open to new ideas and new evidence; but that also means that they hedge their bets, say “on the other hand” a lot, and often suffer from Impostor Syndrome. (Honestly, testing someone for Impostor Syndrome might be a better measure of competence than a traditional job interview! Then again, Goodhart’s Law.)

Indeed, I even see this effect within academic science; the best scientists I know are foxes through and through, but they’re never the ones getting published in top journals and invited to give keynote speeches at conferences. The “big names” are always hedgehog blowhards with some pet theory they developed in the 1980s that has failed to replicate but somehow still won’t die.

Moreover, I would guess that trustworthiness is actually pretty strongly inversely correlated to confidence—“con artist” is short for “confidence artist”, after all.

Then again, I tried to find rigorous research comparing openness (roughly speaking, “fox-ness”) or humility to honesty, and it was surprisingly hard to find. Maybe the humility-honesty link is just considered an obvious consensus in the literature, because there is a widely-used construct called honesty-humility. (In which case, yeah, my thinking on trustworthiness and confidence is an accepted fact among professional psychologists—but then, why don’t more people know that?)

But that still doesn’t tell me if there is any correlation between honesty-humility and openness.

I did find these studies showing that honesty-humility and openness are both positively correlated with well-being, with cooperation in experimental games, and with being left-wing; but that doesn’t actually prove they are positively correlated with each other. I guess it provides weak evidence in that direction, but only weak evidence. It’s entirely possible for A to be positively correlated with both B and C while B and C are uncorrelated or negatively correlated. (Living in Chicago is positively correlated with being a White Sox fan and positively correlated with being a Cubs fan, but being a White Sox fan is not positively correlated with being a Cubs fan!)

I also found studies showing that higher openness predicts less right-wing authoritarianism and higher honesty predicts less social conformity; but that wasn’t the question either.

Here’s a factor analysis specifically arguing for designing measures of honesty-humility so that they don’t correlate with other personality traits, so it can be seen as its own independent personality trait. There are some uncomfortable degrees of freedom in designing new personality metrics, which may make this sort of thing possible; and then by construction honesty-humility and openness would be uncorrelated, because any shared components were parceled out to one trait or the other.

So, I guess I can’t really confirm my suspicion here; maybe people who think like hedgehogs aren’t any less honest, or are even more honest, than people who think like foxes. But I’d still bet otherwise. My own life experience has been that foxes are honest and humble while hedgehogs are deceitful and arrogant.

Indeed, I believe that in systematically choosing confident hedgehogs as leaders, the world economy loses tens of trillions of dollars a year in inefficiencies. In fact, I think that we could probably end world hunger if we only ever put leaders in charge who were both competent and trustworthy.

Of course, in some sense that’s a pipe dream; we’re never going to get all good leaders, just as we’ll never get zero death or zero crime.

But based on how otherwise-similar countries have taken wildly different trajectories based on differences in leadership, I suspect that even relatively small changes in that direction could have quite large impacts on a society’s outcomes: South Korea isn’t perfect at picking its leaders; but surely it’s better than North Korea, and indeed that seems like one of the primary things that differentiates the two countries. Botswana is not a utopian paradise, but it’s a much nicer place to live than Nigeria, and a lot of the difference seems to come down to who is in charge, or who has been in charge for the last few decades.

And I could put in a jab here about the current state of the United States, but I’ll resist. If you read my blog, you already know my opinions on this matter.

On foxes and hedgehogs, part I

Aug 3 JDN 2460891

Today I finally got around to reading Expert Political Judgment by Philip E. Tetlock, more or less in a single sitting because I’ve been sick the last week with some pretty tight limits on what activities I can do. (It’s mostly been reading, watching TV, or playing video games that don’t require intense focus.)

It’s really an excellent book, and I now both understand why it came so highly recommended to me, and now pass on that recommendation to you: Read it.

The central thesis of the book really boils down to three propositions:

  1. Human beings, even experts, are very bad at predicting political outcomes.
  2. Some people, who use an open-minded strategy (called “foxes”), perform substantially better than other people, who use a more dogmatic strategy (called “hedgehogs”).
  3. When rewarding predictors with money, power, fame, prestige, and status, human beings systematically favor (over)confident “hedgehogs” over (correctly) humble “foxes”.

I decided I didn’t want to make this post about current events, but I think you’ll probably agree with me when I say:

That explains a lot.

How did Tetlock determine this?

Well, he studies the issue several different ways, but the core experiment that drives his account is actually a rather simple one:

  1. He gathered a large group of subject-matter experts: Economists, political scientists, historians, and area-studies professors.
  2. He came up with a large set of questions about politics, economics, and similar topics, which could all be formulated as a set of probabilities: “How likely is this to get better/get worse/stay the same?” (For example, this was in the 1980s, so he asked about the fate of the Soviet Union: “By 1990, will they become democratic, remain as they are, or collapse and fragment?”)
  3. Each respondent answered a subset of the questions, some about their own particular field, some about another, more distant field; they assigned probabilities on an 11-point scale, from 0% to 100% in increments of 10%.
  4. A few years later, he compared the predictions to the actual results, scoring them using a Brier score, which penalizes you for assigning high probability to things that didn’t happen or low probability to things that did happen.
  5. He compared the resulting scores between people with different backgrounds, on different topics, with different thinking styles, and a variety of other variables. He also benchmarked them using some automated algorithms like “always say 33%” and “always give ‘stay the same’ 100%”.

I’ll show you the key results of that analysis momentarily, but to help it make more sense to you, let me elaborate a bit more on the “foxes” and “hedgehogs”. The notion was first popularized by Isaiah Berlin in an essay called, simply, The Hedgehog and the Fox.

“The fox knows many things, but the hedgehog knows one very big thing.”

That is, someone who reasons as a “fox” combines ideas from many different sources and perspectives, and tries to weigh them all together into some sort of synthesis that then yields a final answer. This process is messy and complicated, and rarely yields high confidence about anything.

Whereas, someone who reasons as a “hedgehog” has a comprehensive theory of the world, an ideology, that provides clear answers to almost any possible question, with the surely minor, insubstantial flaw that those answers are not particularly likely to be correct.

He also considered “hedge-foxes” (people who are mostly fox but also a little bit hedgehog) and “fox-hogs” (people who are mostly hedgehog but also a little bit fox).

Tetlock decomposes the scores into two components: calibration and discrimination. (Both are very overloaded words, but they are standard in the literature.)

Calibration is how well your stated probabilities matched up with the actual probabilities; that is, if you predicted 10% probability on 20 different events, you have very good calibration if precisely 2 of those events occurred, and very poor calibration if 18 of those events occurred.

Discrimination more or less describes how useful your predictions are, what information they contain above and beyond the simple base rate. If you just assign equal probability to all events, you probably will have reasonably good calibration, but you’ll have zero discrimination; whereas if you somehow managed to assign 100% to everything that happened and 0% to everything that didn’t, your discrimination would be perfect (and we would have to find out how you cheated, or else declare you clairvoyant).

For both measures, higher is better. The ideal for each is 100%, but it’s virtually impossible to get 100% discrimination and actually not that hard to get 100% calibration if you just use the base rates for everything.
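Here is a rough sketch of how one might compute the two components from a batch of binary forecasts. Note that this version reports a calibration error (lower is better) alongside a resolution-style discrimination term (higher is better); it follows the standard decomposition of the Brier score rather than Tetlock’s exact indices, so the scaling differs from the percentages described above.

```python
from collections import defaultdict

def calibration_and_discrimination(forecasts, outcomes):
    """Brier-style decomposition for binary events.

    forecasts: stated probabilities (e.g. 0.0, 0.1, ..., 1.0)
    outcomes:  0/1 results for the same events
    Returns (calibration_error, discrimination): lower is better for the first,
    higher is better for the second.
    """
    bins = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        bins[p].append(y)

    n = len(outcomes)
    base_rate = sum(outcomes) / n

    # How far each stated probability sits from the frequency actually observed.
    calibration_error = sum(
        len(ys) * (p - sum(ys) / len(ys)) ** 2 for p, ys in bins.items()
    ) / n

    # How far the observed frequencies spread away from the overall base rate.
    discrimination = sum(
        len(ys) * (sum(ys) / len(ys) - base_rate) ** 2 for ys in bins.values()
    ) / n

    return calibration_error, discrimination

# A forecaster who always says the base rate: perfectly calibrated, zero discrimination.
print(calibration_and_discrimination([0.5] * 10, [1, 0] * 5))
```

In that last example, a forecaster who always says the base rate gets a perfect calibration error of 0 but also zero discrimination, which is exactly the tradeoff discussed next.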


There is a bit of a tradeoff between these two: It’s not too hard to get reasonably good calibration if you just never go out on a limb, but then your predictions aren’t as useful; we could have mostly just guessed them from the base rates.

On the graph, you’ll see downward-sloping lines that are meant to represent this tradeoff: Two prediction methods that would yield the same overall score but different levels of calibration and discrimination will be on the same line. In a sense, two points on the same line are equally good methods that prioritize usefulness over accuracy differently.

All right, let’s see the graph at last:

The pattern is quite clear: The more foxy you are, the better you do, and the more hedgehoggy you are, the worse you do.

I’d also like to point out the other two regions here: “Mindless competition” and “Formal models”.

The former includes really simple algorithms like “always return 33%” or “always give ‘stay the same’ 100%”. These perform shockingly well. The most sophisticated of these, “case-specific extrapolation” (35 and 36 on the graph), which basically assumes that each country will continue doing what it’s been doing, actually performs as well as, if not better than, even the foxes.

And what’s that at the upper-right corner, absolutely dominating the graph? That’s “Formal models”. This describes basically taking all the variables you can find and shoving them into a gigantic logit model, and then outputting the result. It’s computationally intensive and requires a lot of data (hence why he didn’t feel like it deserved to be called “mindless”), but it’s really not very complicated, and it’s the best prediction method, in every way, by far.

This has made me feel quite vindicated about a weird nerd thing I do: When I have a big decision to make (especially a financial decision), I create a spreadsheet and assemble a linear utility model to determine which choice will maximize my utility, under different parameterizations based on my past experiences. Whichever result seems to win the most robustly, I choose. This is fundamentally similar to the “formal models” prediction method, where the thing I’m trying to predict is my own happiness. (It’s a bit less formal, actually, since I don’t have detailed happiness data to feed into the regression.) And it has worked for me, astonishingly well. It definitely beats going by my own gut. I highly recommend it.
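For the curious, here is a minimal sketch of what that kind of model looks like in code; the options, criterion scores, and weightings are all made-up placeholders, and the robustness check is nothing fancier than counting which option wins under each candidate weighting.

```python
# A toy linear utility model for a decision, in the spirit of the spreadsheet
# approach described above. All options, criteria, and weights are hypothetical.

options = {
    # scores on each criterion, on an arbitrary 0-10 scale
    "rent apartment A": {"cost": 4, "commute": 8, "space": 5},
    "rent apartment B": {"cost": 7, "commute": 5, "space": 7},
    "stay put":         {"cost": 9, "commute": 6, "space": 4},
}

# Several plausible weightings ("parameterizations"), since I'm not sure
# exactly how much I care about each criterion.
weightings = [
    {"cost": 0.5, "commute": 0.3, "space": 0.2},
    {"cost": 0.3, "commute": 0.4, "space": 0.3},
    {"cost": 0.4, "commute": 0.2, "space": 0.4},
]

def utility(scores, weights):
    return sum(weights[criterion] * scores[criterion] for criterion in weights)

wins = {name: 0 for name in options}
for weights in weightings:
    best = max(options, key=lambda name: utility(options[name], weights))
    wins[best] += 1

# Choose whichever option wins most robustly across the weightings.
print(max(wins, key=wins.get), wins)
```

The point is not the specific numbers but the discipline: writing the weights down forces you to notice when a choice only wins under one convenient set of assumptions.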

What does this mean?

Well first of all, it means humans suck at predicting things. At least for this data set, even our experts don’t perform substantially better than mindless models like “always assume the base rate”.

Nor do experts perform much better in their own fields than in other fields; they do all perform better than undergrads or random people (who somehow perform worse than the “mindless” models).

But Tetlock also investigates further, trying to better understand this “fox/hedgehog” distinction and why it yields different performance. He really bends over backwards to try to redeem the hedgehogs, in the following ways:

  1. He allows them to make post-hoc corrections to their scores, based on “value adjustments” (assigning higher probability to events that would be really important) and “difficulty adjustments” (assigning higher scores to questions where the three outcomes were close to equally probable) and “fuzzy sets” (giving some leeway on things that almost happened or things that might still happen later).
  2. He demonstrates a different, related experiment, in which certain manipulations can cause foxes to perform a lot worse than they normally would, and even yield really crazy results like probabilities that add up to 200%.
  3. He has a whole chapter that is a Socratic dialogue (seriously!) between four voices: A “hardline neopositivist”, a “moderate neopositivist”, a “reasonable relativist”, and an “unrelenting relativist”; and all but the “hardline neopositivist” agree that there is some legitimate place for the sort of post hoc corrections that the hedgehogs make to keep themselves from looking so bad.

This post is already getting a bit long, so that will conclude part I. Stay tuned for part II, next week!

Bayesian updating with irrational belief change

Jul 27 JDN 2460884

For the last few weeks I’ve been working at a golf course. (It’s a bit of an odd situation: I’m not actually employed by the golf course; I’m contracted by a nonprofit to be a “job coach” for a group of youths who are part of a work program that involves them working at the golf course.)

I hate golf. I have always hated golf. I find it boring and pointless—which, to be fair, is my reaction to most sports—and also an enormous waste of land and water. A golf course is also a great place for oligarchs to arrange collusion.

But I noticed something about being on the golf course every day, seeing people playing and working there: I feel like I hate it a bit less now.

This is almost certainly a mere-exposure effect: Simply being exposed to something many times makes it feel familiar, and that tends to make you like it more, or at least dislike it less. (There are some exceptions: repeated exposure to trauma can actually make you more sensitive to it, hating it even more.)

I kinda thought this would happen. I didn’t really want it to happen, but I thought it would.

This is very interesting from the perspective of Bayesian reasoning, because it is a theorem of Bayesian logic (though I cannot seem to find anyone naming it; it’s like a folk theorem, I guess?) that the following is true:

The prior expectation of the posterior is the expectation of the prior.

The prior is what you believe before observing the evidence; the posterior is what you believe afterward. This theorem describes a relationship that holds between them.
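In symbols, for a hypothesis H and evidence E that can take one of a discrete set of values, this is just the law of total probability written in expectation form:

```latex
\mathbb{E}_{E}\left[\, P(H \mid E) \,\right]
  = \sum_{e} P(E = e)\, P(H \mid E = e)
  = \sum_{e} P(H,\ E = e)
  = P(H)
```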

This theorem means that, if I am being optimally rational, I should take into account all expected future evidence, not just evidence I have already seen. I should not expect to encounter evidence that will change my beliefs—if I did expect to see such evidence, I should change my beliefs right now!

This might be easier to grasp with an example.

Suppose I am trying to predict whether it will rain at 5:00 pm tomorrow, and I currently estimate that the probability of rain is 30%. This is my prior probability.

What will actually happen tomorrow is that it will rain or it won’t; so my posterior probability will either be 100% (if it rains) or 0% (if it doesn’t). But I had better assign a 30% chance to the event that will make me 100% certain it rains (namely, I see rain), and a 70% chance to the event that will make me 100% certain it doesn’t rain (namely, I see no rain); if I were to assign any other probabilities, then I must not really think the probability of rain at 5:00 pm tomorrow is 30%.

(The keen Bayesian will notice that the expected variance of my posterior need not be the variance of my prior: My initial variance is relatively high (it’s actually 0.3*0.7 = 0.21, because this is a Bernoulli distribution), because I don’t know whether it will rain or not; but my posterior variance will be 0, because I’ll know the answer once it rains or doesn’t.)

It’s a bit trickier to analyze, but this also works even if the evidence won’t make me certain. Suppose I am trying to determine the probability that some hypothesis is true. If I expect to see any evidence that might change my beliefs at all, then I should, on average, expect to see just as much evidence making me believe the hypothesis more as I see evidence that will make me believe the hypothesis less. If that is not what I expect, I should really change how much I believe the hypothesis right now!
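Here is a quick numerical check of that claim in Python, using a made-up hypothesis with a 30% prior and an imperfect binary test whose likelihoods I chose arbitrarily:

```python
# Numerical check that the expected posterior equals the prior.
# Hypothesis H with prior 0.3; an imperfect binary test with made-up likelihoods.

prior = 0.3
p_pos_given_h     = 0.9   # P(positive result | H true)
p_pos_given_not_h = 0.2   # P(positive result | H false)

# How likely each possible result is, before seeing it (prior predictive).
p_pos = prior * p_pos_given_h + (1 - prior) * p_pos_given_not_h
p_neg = 1 - p_pos

# Posterior after each possible result, by Bayes' rule.
post_if_pos = prior * p_pos_given_h / p_pos
post_if_neg = prior * (1 - p_pos_given_h) / p_neg

expected_posterior = p_pos * post_if_pos + p_neg * post_if_neg
print(expected_posterior)  # 0.3 (up to floating-point rounding): the prior
```

The posterior is higher than 30% after a positive result and lower after a negative one, but weighted by how likely each result is, the average lands right back on the prior.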

So what does this mean for the golf example?

Was I wrong to hate golf quite so much before, because I knew that spending time on a golf course might make me hate it less?

I don’t think so.

See, the thing is: I know I’m not perfectly rational.

If I were indeed perfectly rational, then anything I expect to change my beliefs is a rational Bayesian update, and I should indeed factor it into my prior beliefs.

But if I know for a fact that I am not perfectly rational, that there are things which will change my beliefs in ways that make them deviate from rational Bayesian updating, then in fact I should not take those expected belief changes into account in my prior beliefs—since I expect to be wrong later, updating on that would just make me wrong now as well. I should only update on the expected belief changes that I believe will be rational.

This is something that a boundedly-rational person should do that neither a perfectly-rational nor perfectly-irrational person would ever do!

But maybe you don’t find the golf example convincing. Maybe you think I shouldn’t hate golf so much, and it’s not irrational for me to change my beliefs in that direction.


Very well. Let me give you a thought experiment which provides a very clear example of a time when you definitely would think your belief change was irrational.


To be clear, I’m not suggesting the two situations are in any way comparable; the golf thing is pretty minor, and for the thought experiment I’m intentionally choosing something quite extreme.

Here’s the thought experiment.

A mad scientist offers you a deal: Take this pill and you will receive $50 million. Naturally, you ask what the catch is. The catch, he explains, is that taking the pill will make you staunchly believe that the Holocaust didn’t happen. Take this pill, and you’ll be rich, but you’ll become a Holocaust denier. (I have no idea if making such a pill is even possible, but it’s a thought experiment, so bear with me. It’s certainly far less implausible than Swampman.)

I will assume that you are not, and do not want to become, a Holocaust denier. (If not, I really don’t know what else to say to you right now. It happened.) So if you take this pill, your beliefs will change in a clearly irrational way.

But I still think it’s probably justifiable to take the pill. This is absolutely life-changing money, for one thing, and being a random person who is a Holocaust denier isn’t that bad in the scheme of things. (Maybe it would be worse if you were in a position to have some kind of major impact on policy.) In fact, before taking the pill, you could write out a contract with a trusted friend that will force you to donate some of the $50 million to high-impact charities—and perhaps some of it to organizations that specifically fight Holocaust denial—thus ensuring that the net benefit to humanity is positive. Once you take the pill, you may be mad about the contract, but you’ll still have to follow it, and the net benefit to humanity will still be positive as reckoned by your prior, more correct, self.

It’s certainly not irrational to take the pill. There are perfectly-reasonable preferences you could have (indeed, likely do have) that would say that getting $50 million is more important than having incorrect beliefs about a major historical event.

And if it’s rational to take the pill, and you intend to take the pill, then of course it’s rational to believe that in the future, you will have taken the pill and you will become a Holocaust denier.

But it would be absolutely irrational for you to become a Holocaust denier right now because of that. The pill isn’t going to provide evidence that the Holocaust didn’t happen (for no such evidence exists); it’s just going to alter your brain chemistry in such a way as to make you believe that the Holocaust didn’t happen.

So here we have a clear example where you expect to be more wrong in the future.

Of course, if this really only happens in weird thought experiments about mad scientists, then it doesn’t really matter very much. But I contend it happens in reality all the time:

  • You know that by hanging around people with an extremist ideology, you’re likely to adopt some of that ideology, even if you really didn’t want to.
  • You know that if you experience a traumatic event, it is likely to make you anxious and fearful in the future, even when you have little reason to be.
  • You know that if you have a mental illness, you’re likely to form harmful, irrational beliefs about yourself and others whenever you have an episode of that mental illness.

Now, all of these belief changes are things you would likely try to guard against: If you are a researcher studying extremists, you might make a point of taking frequent vacations to talk with regular people and help yourself re-calibrate your beliefs back to normal. Nobody wants to experience trauma, and if you do, you’ll likely seek out therapy or other support to help heal yourself from that trauma. And one of the most important things they teach you in cognitive-behavioral therapy is how to challenge and modify harmful, irrational beliefs when they are triggered by your mental illness.

But these guarding actions only make sense precisely because the anticipated belief change is irrational. If you anticipate a rational change in your beliefs, you shouldn’t try to guard against it; you should factor it into what you already believe.

This also gives me a little more sympathy for Evangelical Christians who try to keep their children from being exposed to secular viewpoints. I think we both agree that having more contact with atheists will make their children more likely to become atheists—but we view this expected outcome differently.

From my perspective, this is a rational change, and it’s a good thing, and I wish they’d factor it into their current beliefs already. (Like hey, maybe if talking to a bunch of smart people and reading a bunch of books on science and philosophy makes you think there’s no God… that might be because… there’s no God?)

But I think, from their perspective, this is an irrational change, it’s a bad thing, the children have been “tempted by Satan” or something, and thus it is their duty to protect their children from this harmful change.

Of course, I am not a subjectivist. I believe there’s a right answer here, and in this case I’m pretty sure it’s mine. (Wouldn’t I always say that? No, not necessarily; there are lots of matters for which I believe that there are experts who know better than I do—that’s what experts are for, really—and thus if I find myself disagreeing with those experts, I try to educate myself more and update my beliefs toward theirs, rather than just assuming they’re wrong. I will admit, however, that a lot of people don’t seem to do this!)

But this does change how I might tend to approach the situation of exposing their children to secular viewpoints. I now understand better why they would see that exposure as a harmful thing, and thus be resistant to actions that otherwise seem obviously beneficial, like teaching kids science and encouraging them to read books. In order to get them to stop “protecting” their kids from the free exchange of ideas, I might first need to persuade them that introducing some doubt into their children’s minds about God isn’t such a terrible thing. That sounds really hard, but it at least clearly explains why they are willing to fight so hard against things that, from my perspective, seem good. (I could also try to convince them that exposure to secular viewpoints won’t make their kids doubt God, but the thing is… that isn’t true. I’d be lying.)

That is, Evangelical Christians are not simply incomprehensibly evil authoritarians who hate truth and knowledge; they quite reasonably want to protect their children from things that will harm them, and they firmly believe that being taught about evolution and the Big Bang will make their children more likely to suffer great harm—indeed, the greatest harm imaginable, the horror of an eternity in Hell. Convincing them that this is not the case—indeed, ideally, that there is no such place as Hell—sounds like a very tall order; but I can at least more keenly grasp the equilibrium they’ve found themselves in, where they believe that anything that challenges their current beliefs poses a literally existential threat. (Honestly, as a memetic adaptation, this is brilliant. Like a turtle, the meme has grown itself a nigh-impenetrable shell. No wonder it has managed to spread throughout the world.)

Wage-matching and the collusion under our noses

Jul 20 JDN 2460877

It was a minor epiphany for me when I learned, over the course of studying economics, that price-matching policies, while they seem like they benefit consumers, actually are a brilliant strategy for maintaining tacit collusion.

Consider a (Bertrand) market, with some small number n of firms in it.

Each firm announces a price, and then customers buy from whichever firm charges the lowest price. Firms can produce as much as they need to in order to meet this demand. (This makes the most sense for a service industry rather than for literal manufactured goods.)

In Nash equilibrium, all firms will charge the same price, because anyone who charged more would sell nothing. But what will that price be?

In the absence of price-matching, it will be just above the marginal cost of the service. Otherwise, it would be advantageous to undercut all the other firms by charging slightly less, and you could still make a profit. So the equilibrium price is basically the same as it would be in a perfectly-competitive market.

But now consider what happens if the firms can announce a price-matching policy.

If you were already planning on buying from firm 1 at price P1, and firm 2 announces that you can buy from them at some lower price P2, then you still have no reason to switch to firm 2, because you can still get price P2 from firm 1 as long as you show them the ad from the other firm. Under the very reasonable assumption that switching firms carries some cost (if nothing else, the effort of driving to a different store), people won’t switch—which means that any undercut strategy will fail.

Now, firms don’t need to set such low prices! They can set a much higher price, confident that if any other firm tries to undercut them, it won’t actually work—and thus, no one will try to undercut them. The new Nash equilibrium is now for the firms to charge the monopoly price.

In the real world, it’s a bit more complicated than that; for various reasons they may not actually be able to sustain collusion at the monopoly price. But there is considerable evidence that price-matching schemes do allow firms to charge a higher price than they would in perfect competition. (Though the literature is not completely unanimous; there are a few who argue that price-matching doesn’t actually facilitate collusion—but they are a distinct minority.)

Thus, a policy that on its face seems like it’s helping consumers by giving them lower prices actually ends up hurting them by giving them higher prices.

Now I want to turn things around and consider the labor market.

What would price-matching look like in the labor market?

It would mean that whenever you are offered a higher wage at a different firm, you can point this out to the firm you are currently working at, and they will offer you a raise to that new wage, to keep you from leaving.

That sounds like a thing that happens a lot.

Indeed, pretty much the best way to get a raise, almost anywhere you may happen to work, is to show your employer that you have a better offer elsewhere. It’s not the only way to get a raise, and it doesn’t always work—but it’s by far the most reliable way, because it usually works.

This for me was another minor epiphany:

The entire labor market is full of tacit collusion.

The very fact that firms can afford to give you a raise when you have an offer elsewhere basically proves that they weren’t previously paying you all that you were worth. If they had actually been paying you your value of marginal product as they should in a competitive labor market, then when you showed them a better offer, they would say: “Sorry, I can’t afford to pay you any more; good luck in your new job!”

This is not a monopoly price but a monopsony price (or at least something closer to it); people are being systematically underpaid so that their employers can make higher profits.

And since the phenomenon of wage-matching is so ubiquitous, it looks like this is happening just about everywhere.

This simple model doesn’t tell us how much higher wages would be in perfect competition. It could be a small difference, or a large one. (It likely varies by industry, in fact.) But the simple fact that nearly every employer engages in wage-matching implies that nearly every employer is in fact colluding on the labor market.

This also helps explain another phenomenon that has sometimes puzzled economists: Why doesn’t raising the minimum wage increase unemployment? Well, it absolutely wouldn’t, if all the firms paying minimum wage are colluding in the labor market! And we already knew that most labor markets were shockingly concentrated.

What should be done about this?

Now there we have a thornier problem.

I actually think we could implement a law against price-matching on product and service markets relatively easily, since these are generally applied to advertised prices.

But a law against wage-matching would be quite tricky indeed. Wages are generally not advertised—a problem unto itself—and we certainly don’t want to ban raises in general.

Maybe what we should actually do is something like this: Offer a cash bonus (refundable tax credit?) to anyone who changes jobs in order to get a higher wage. Make this bonus large enough to offset the costs of switching jobs—which are clearly substantial. Then, the “undercut” (“overcut”?) strategy will become more effective; employers will have an easier time poaching workers from each other, and a harder time sustaining collusive wages.

Businesses would of course hate this policy, and lobby heavily against it. This is precisely the reaction we should expect if they are relying upon collusion to sustain their profits.

Universal human rights are more radical than is commonly supposed

Jul 13 JDN 2460870

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

So begins the second paragraph of the Declaration of Independence. It had to have been obvious to many people, even at the time, how incredibly hypocritical it was for men to sign that document and then go home to give orders to their slaves.

And today, even though the Universal Declaration of Human Rights was signed over 75 years ago, there are still human rights violations ongoing in many different countries—including right here in the United States.

Why is it so easy to get people to declare that they believe in universal human rights—but so hard to get them to actually act accordingly?

Other moral issues are not like this. While hypocrisy certainly exists in many forms, for the most part people’s moral claims align with their behavior. Most people say they are against murder—and sure enough, most people aren’t murderers. Most people say they are against theft—and indeed, most people don’t steal very often. And when it comes to things that most people do all the time, most people aren’t morally opposed to them—even things like eating meat, against which there is a pretty compelling moral case.

But universal human rights seems like something that is far more honored in the breach than the observance.

I think this is because most people don’t quite grasp just how radical universal human rights really are.

The tricky part is the universal. They are supposed to apply to everyone.

Even those people. Even the people you are thinking of right now as an exception. Even the people you hate the most. Yes, even them.

Depending on who you are, you might be thinking of different exceptions: People of a particular race, or religion, or nationality, perhaps; or criminals, or terrorists; or bigots, or fascists. But almost everyone has some group of people that they don’t really think deserves the full array of human rights.

So I am here to tell you that, yes, those people too. Universal human rights means everyone.

No exceptions.

This doesn’t mean that we aren’t allowed to arrest and imprison people for crimes. It doesn’t even mean that we aren’t sometimes justified in killing people—e.g. in war or self-defense. But it does mean that there is no one, absolutely no one, who is considered beneath human dignity. Any time we are to deprive someone of life or liberty, we must do so with absolute respect for their fundamental rights.

This also means that there is no one you should be spitting on, no one you should be torturing, no one you should be calling dehumanizing names. Sometimes violence is necessary, to protect yourself, or to preserve liberty, or to overthrow tyranny. But yes, even psychopathic tyrants are human beings, and still deserve human rights. If you cannot recognize a person’s humanity while still defending yourself against them, you need to do some serious soul-searching and ask yourself why not.

I think what happens is that, when most people are asked about “universal human rights”, they essentially exclude whoever they think doesn’t deserve rights from the very category of “human”. Then it essentially becomes a tautology: Everyone who deserves rights deserves rights.

And thus, everyone signs onto it—but it ends up meaning almost nothing. It doesn’t stop racism, or sexism, or police brutality, or mass incarceration, or rape, or torture, or genocide, because the people doing those things don’t think of the people they’re doing them to as actually human.

But no, the actual declaration says all human beings. Everyone. Even the people you hate. Even the people who hate you. Even people who want to torture and kill you. Yes, even them.

This is an incredibly radical idea.

It is frankly alien to a brain that evolved for tribalism; we are wired to think of the world in terms of in-groups and out-groups, and universal human rights effectively declare that everyone is in the in-group and the out-group doesn’t exist.

Indeed, perhaps too radical! I think a reasonable defense could be made of a view that some people (psychopathic tyrants?) really are just so evil that they don’t actually deserve basic human dignity. But I will say this: Usually the people arguing that some group of humans aren’t really humans end up being on the wrong side of history.

The one possible exception I can think of here is abortion: The people arguing that fetuses are not human beings and it should be permissible to kill them when necessary are, at least in my view, generally on the right side of history. But even then, I tend to be much more sympathetic to the view that abortion, like war and self-defense, should be seen as a tragically necessary evil, not an inherent good. The ideal scenario would be to never need it, and allowing it when it’s needed is simply a second-best solution. So I think we can actually still fit this into a view that fetuses are morally important and deserving of dignity; it’s just that sometimes the rights of one being can outweigh the rights of another.

And other than that, yeah, it’s pretty much the case that the people who want to justify enacting some terrible harm on some group of people because they say those people aren’t really people, end up being the ones that, sooner or later, the world recognizes as the bad guys.

So think about that, if there is still some group of human beings that you think of as not really human beings, not really deserving of universal human rights. Will history vindicate you—or condemn you?

Quantifying stereotypes

Jul 6 JDN 2460863

There are a lot of stereotypes in the world, from the relatively innocuous (“teenagers are rebellious”) to the extremely harmful (“Black people are criminals”).

Most stereotypes are not true.

But most stereotypes are not exactly false, either.

Here’s a list of forty stereotypes, all but one of which I got from this list of stereotypes:

(Can you guess which one? I’ll give you a hint: It’s a group I belong to and a stereotype I’ve experienced firsthand.)

  1. “Children are always noisy and misbehaving.”
  2. “Kids can’t understand complex concepts.”
  3. “Children are tech-savvy.”
  4. “Teenagers are always rebellious.”
  5. “Teenagers are addicted to social media.”
  6. “Adolescents are irresponsible and careless.”
  7. “Adults are always busy and stressed.”
  8. “Adults are responsible.”
  9. “Adults are not adept at using modern technologies.”
  10. “Elderly individuals are always grumpy.”
  11. “Old people can’t learn new skills, especially related to technology.”
  12. “The elderly are always frail and dependent on others.”
  13. “Women are emotionally more expressive and sensitive than men.”
  14. “Females are not as good at math or science as males.”
  15. “Women are nurturing, caring, and focused on family and home.”
  16. “Females are not as assertive or competitive as men.”
  17. “Men do not cry or express emotions openly.”
  18. “Males are inherently better at physical activities and sports.”
  19. “Men are strong, independent, and the primary breadwinners.”
  20. “Males are not as good at multitasking as females.”
  21. “African Americans are good at sports.”
  22. “African Americans are inherently aggressive or violent.”
  23. “Black individuals have a natural talent for music and dance.”
  24. “Asians are highly intelligent, especially in math and science.”
  25. “Asian individuals are inherently submissive or docile.”
  26. “Asians know martial arts.”
  27. “Latinos are uneducated.”
  28. “Hispanic individuals are undocumented immigrants.”
  29. “Latinos are inherently passionate and hot-tempered.”
  30. “Middle Easterners are terrorists.”
  31. “Middle Eastern women are oppressed.”
  32. “Middle Eastern individuals are inherently violent or aggressive.”
  33. “White people are privileged and unacquainted with hardship.”
  34. “White people are racist.”
  35. “White individuals lack rhythm in music or dance.”
  36. “Gay men are excessively flamboyant.”
  37. “Gay men have lisps.”
  38. “Lesbians are masculine.”
  39. “Bisexuals are promiscuous.”
  40. “Trans people get gender-reassignment surgery.”

If you view the above 40 statements as absolute statements about everyone in the category (the universal quantifier “for all”), they are obviously false; there are clear counter-examples to every single one. If you view them as merely saying that there are examples of each (the existential quantifier “there exists”), they are obviously true, but also utterly trivial, as you could just as easily find examples from other groups.

But I think there’s a third way to read them, which may be more what most people actually have in mind. Indeed, it kinda seems uncharitable not to read them this third way.

That way is:

“This is more true of the group I’m talking about than it is true of other groups.”

And that is not only a claim that can be true, it is a claim that can be quantified.

Recall my new favorite effect size measure, which I like because it’s so simple and intuitive. I’m not much for the official name, probability of superiority (especially in this context!), so I’m gonna call it the more down-to-earth chance of being higher.

It is exactly what it sounds like: If you compare a quantity X between group A and group B, what is the chance that a randomly chosen person from group A has a higher value of X than a randomly chosen person from group B?
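For a quantitative trait, you can estimate this directly from two samples. Here’s a minimal sketch (the data are simulated purely for illustration; with a continuous trait, exact ties are negligible):

```python
import random

def chance_of_being_higher(group_a, group_b):
    """Estimate P(a random member of A has a higher value than a random member of B)."""
    pairs = [(a, b) for a in group_a for b in group_b]
    return sum(a > b for a, b in pairs) / len(pairs)

# Simulated data: group A's trait is shifted slightly upward.
random.seed(0)
group_a = [random.gauss(0.3, 1.0) for _ in range(500)]
group_b = [random.gauss(0.0, 1.0) for _ in range(500)]

print(round(chance_of_being_higher(group_a, group_b), 2))  # roughly 0.58 for this shift
```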

Let’s start at the top: If you take one randomly-selected child, and one randomly-selected adult, what is the chance that the child is one who is more prone to being noisy and misbehaving?

Probably pretty high.

Or let’s take number 13: If you take one randomly-selected woman and one randomly-selected man, what is the chance that the woman is the more emotionally expressive one?

Definitely more than half.

Or how about number 27: If you take one randomly-selected Latino and one randomly-selected non-Latino (especially if you choose a White or Asian person), what is the chance that the Latino is the less-educated one?

That one I can do fairly precisely: Since 95% of White Americans have completed high school but only 75% of Latino Americans have, while 28% of Whites have a bachelor’s degree and only 21% of Latinos do, the probability of the White person being at least as educated as the Latino person is about 82%.
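Here’s one way to reproduce that figure, as a quick sketch: collapse education into three ordered levels (less than high school, high school but no bachelor’s, bachelor’s or more) using the shares just quoted, and compute the chance that the White person’s level is at least as high.

```python
# Education shares quoted above, collapsed into three ordered levels.
white  = {"less_than_HS": 0.05, "HS_no_BA": 0.95 - 0.28, "BA_plus": 0.28}
latino = {"less_than_HS": 0.25, "HS_no_BA": 0.75 - 0.21, "BA_plus": 0.21}

order = ["less_than_HS", "HS_no_BA", "BA_plus"]
rank = {level: i for i, level in enumerate(order)}

# Chance that a random White person is at least as educated as a random Latino person:
p_at_least_as_educated = sum(
    white[w] * latino[l]
    for w in order
    for l in order
    if rank[w] >= rank[l]
)

print(round(p_at_least_as_educated, 2))  # about 0.82
```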

I don’t know the exact figures for all of these, and I didn’t want to spend all day researching 40 different stereotypes, but I am quite prepared to believe that at least all of the following exhibit a chance of being higher that is over 50%:

1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 15, 16, 17, 18, 19, 21, 24, 26, 27, 28, 29, 30, 31, 33, 34, 36, 37, 38, 40.

You may have noticed that that’s… most of them. I had to shrink the font a little to fit them all on one line.

I think 30 is an important one to mention, because while terrorists are a tiny proportion of the Middle Eastern population, they are in fact a much larger proportion of that population than they are of most other populations, and it doesn’t take that many terrorists to make a place dangerous. The Middle East is objectively a more dangerous place for terrorism than most other places, with only India and sub-Saharan Africa coming close (and terrorism in both of those is also largely driven by Islamist groups). So while it’s bigoted to assume that any given Muslim or Middle Easterner is a terrorist, it is an objective fact that a disproportionate share of terrorists are Middle Eastern Muslims. Part of what I’m trying to do here is get people to more clearly distinguish between those two concepts, because one is true and the other is very, very false.

40 also deserves particular note, because the chance of being higher is almost certainly very close to 100%. While most trans people don’t get gender-reassignment surgery, virtually all people who get gender-reassignment surgery are trans.

Then again, you could see this as a limitation of the measure, since we might expect a 100% score to mean “it’s true of everyone in the group”, when here it simply means “if we ask people whether they have had gender-reassignment surgery, the trans people sometimes say yes and the cis people always say no.”


We could talk about a weak or strict chance of being higher: The weak chance is the chance of being greater than or equal to (which is the normal measure), while the strict chance is the chance of being strictly greater. In this case, the weak chance is nearly 100%, while the strict chance is hard to estimate but probably about 33% based on surveys.
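For a yes/no trait like this, both versions can be computed directly from the two groups’ rates. A minimal sketch, treating the ~33% figure as an assumed rate rather than a precise statistic:

```python
# Yes/no trait: has this person had gender-reassignment surgery?
p_trans = 0.33  # assumed share of trans people who have (rough survey-based estimate)
p_cis   = 0.0   # cis people essentially never have

# Weak chance: P(trans answer >= cis answer).
# It fails only if the trans person says no and the cis person says yes.
weak = 1 - (1 - p_trans) * p_cis

# Strict chance: P(trans answer > cis answer) = P(trans yes) * P(cis no).
strict = p_trans * (1 - p_cis)

print(weak, strict)  # 1.0 0.33
```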

This doesn’t mean that all stereotypes have some validity.

There are some stereotypes here, including a few pretty harmful ones, for which I’m not sure how the statistics would actually shake out:
10, 14, 22, 23, 25, 32, 35, 39

But I think we should be honestly prepared for the possibility that maybe there is some statistical validity to some of these stereotypes too, and instead of simply dismissing the stereotypes as false—or even bigoted—we should instead be trying to determine how true they are, and also look at why they might have some truth to them.

My proposal is to use the chance of being higher as a measure of the truth of a stereotype.

A stereotype is completely true if it has a chance of being higher of 100%.

It is completely false if it has a chance of being higher of 50%.

And it is completely backwards if it has a chance of being higher of 0%.

There is a unique affine transformation that does this: 2X-1.

100% maps to 100%, 50% maps to 0%, and 0% maps to -100%.

With discrete outcomes, the difference between weak and strict chance of being higher becomes very important. With a discrete outcome, you can have a 100% weak chance but a 1% strict chance, and honestly I’m really not sure whether we should say that stereotype is true or not.

For example, for the claim “trans men get bottom surgery”, the figures would be 100% and 6% respectively. The vast majority of trans men don’t get bottom surgery—but cis men almost never do. (Unless I count penis enlargement surgery? Then the numbers might be closer than you’d think, at least in the US where the vast majority of such surgery is performed.)

And for the claim “Middle Eastern Muslims are terrorists”, well, given two random people of whatever ethnicity or religion, they’re almost certainly not terrorists—but if one of them is, it’s probably the Middle Eastern Muslim. It may be better in this case to talk about the conditional chance of being higher: If you have two random people, and you know that one is a terrorist and one isn’t, and one is a Middle Eastern Muslim and one isn’t, how likely is it that the Middle Eastern Muslim is the terrorist? Probably about 80%. Definitely more than 50%, but also not 100%. So that’s the sense in which the stereotype has some validity. It’s still the case that 99.999% of Middle Eastern Muslims aren’t terrorists, and so it remains bigoted to treat every Middle Eastern Muslim you meet like a terrorist.

We could also work harder to more clearly distinguish between “Middle Easterners are terrorists” and “terrorists are Middle Easterners”; the former is really not true (99.999% are not), but the latter kinda is (the plurality of the world’s terrorists are in the Middle East).

Alternatively, for discrete traits we could just report all four probabilities, which would be something like this: 99.999% of Middle Eastern Muslims are not terrorists, and 0.001% are; 99.9998% of other Americans are not terrorists, and 0.0002% are. Compared to Muslim terrorists in the US, White terrorists actually are responsible for more attacks and a similar number of deaths, but largely because there just are a lot more White people in America.
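As a sanity check, the illustrative rates just given imply a conditional chance of being higher of roughly 83%, which is in line with the “probably about 80%” guess above. A quick sketch:

```python
# Illustrative rates from the paragraph above (not real statistics).
p_me  = 0.00001    # 0.001% of Middle Eastern Muslims are terrorists
p_oth = 0.000002   # 0.0002% of other Americans are terrorists

# Given one person from each group, and that exactly one of them is a terrorist,
# how likely is it that the terrorist is the Middle Eastern Muslim?
conditional = p_me * (1 - p_oth) / (p_me * (1 - p_oth) + p_oth * (1 - p_me))

print(round(conditional, 2))  # about 0.83
```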

These issues mainly arise when a trait is discrete. When the trait is itself quantitative (like rebelliousness, or math test scores), this is less of a problem, and the weak and strict chances of being higher are generally more or less the same.


So instead of asking whether a stereotype is true, we could ask: How true is it?

Using measures like this, we will find that some stereotypes probably have quite high truth levels, like 1 and 4; but others, like 14, must have quite low truth levels if they are true at all; if there’s a difference there, it’s a small one!

The lower a stereotype’s truth level, the less useful it is; indeed, this measure directly tells you how accurate you’d be at guessing someone’s score on the trait if you knew only the group they belong to. If you couldn’t really predict, then why are you using the stereotype? Get rid of it.

Moreover, some stereotypes are clearly more harmful than others.

Even if it is statistically valid to say that Black people are more likely to commit crimes in the US than White people (it is), the kind of person who goes around saying “Black people are criminals” is (1) smearing all Black people with the behavior of a minority of them, and (2) likely to be racist in other ways. So we have good reason to be suspicious of people who say such things, even if there may be a statistical kernel of truth to their claims.

But we might still want to be a little more charitable, a little more forgiving, when people express stereotypes. They may make what sounds like a blanket absolute “for all” statement, but actually intend something much milder—something that might actually be true. They might not clearly grasp the distinction between “Middle Easterners are terrorists” and “terrorists are Middle Easterners”, and instead of denouncing them as a bigot immediately, you could try taking the time to listen to what they are saying and carefully explain what’s wrong with it.

Failing to be charitable like this—as we so often do—often feels to people like we are dismissing their lived experience. All the terrorists they can think of were Middle Eastern! All of the folks they know with a lisp turned out to be gay! Lived experience is ultimately anecdotal, but it still has a powerful effect on how people think (too powerful—see also availability heuristic), and it’s really not surprising that people would feel we are treating them unjustly if we immediately accuse them of bigotry simply for stating things that, based on their own experience, seem to be true.

I think there’s another harm here as well, which is that we damage our own credibility. If I believe that something is true and you tell me that I’m a bad person for believing it, that doesn’t make me not believe it—it makes me not trust you. You’ve presented yourself as the sort of person who wants to cover up the truth when it doesn’t fit your narrative. If you wanted to actually convince me that my belief is wrong, you could present evidence that might do that. (To be fair, this doesn’t always work; but sometimes it does!) But if you just jump straight to attacking my character, I don’t want to talk to you anymore.

And just like that, we’re at war.

Jun 29 JDN 2460856

Israel attacked Iran. Iran counter-attacked. Then Israel requested US support.

President Trump waffled about giving that support; then, late on June 21 (US time—early June 22 Iran time), without any authorization from anyone else, he ordered an attack, using B-2 stealth bombers to drop GBU-57 MOP bombs on Iranian nuclear enrichment facilities.

So apparently we’re at war now, because Donald Trump decided we would be.

We could talk about the strategic question of whether that attack was a good idea. We could talk about the moral question of whether that attack was justified.

But I have in mind a different question: Why was he allowed to do that?

In theory, the United States Constitution grants Congress the authority to declare war. The President is the Commander-in-Chief of our military forces, but the decision to take the country to war is supposed to rest with Congress. What’s supposed to happen is that if a need for military action arises, Congress makes a declaration of war, and then the President orders the military into action.

Yet in fact we haven’t actually done that since 1942. Despite combat in Korea, Vietnam, Afghanistan, Iraq, Bosnia, Libya, Kosovo, and more, we have never officially declared war since World War 2. In some of these wars, there was a UN resolution and/or Congressional approval, so that’s sort of like getting a formal declaration of war. But in others, there was no such thing; the President just ordered our troops to fight, and they fought.

This is not what the Constitution says, nor is it what the War Powers Act says. The President isn’t supposed to be able to do this. And yet Presidents have done it over a dozen times.

How did this happen? Why have we, as a society, become willing to accept this kind of unilateral authority on such vitally important matters?

Part of the problem seems to be that Congress is (somewhat correctly) perceived as slow and dysfunctional. But that doesn’t seem like an adequate explanation, because surely if we were actually under imminent threat, even a dysfunctional Congress could find it in itself to approve a declaration of war. (And if we’re not under imminent threat, then it isn’t so urgent!)

I think the more important reason may be that Congress consistently fails to hold the President accountable for overstepping his authority. It doesn’t even seem to matter which party is in which branch; they just never actually seem to remove a President from office for overstepping his authority. (Indeed, while three Presidents have been impeached—Trump twice—not one has ever actually been removed from office for any reason.) The checks and balances that are supposed to rein in the President simply are not ever actually deployed.

As a result, the power of the Executive Branch has gradually expanded over time, as Presidents test the waters by asserting more authority—and then are literally never punished for doing so.

I suppose we have Congress to blame for this: They could be asserting their authority, and aren’t doing so. But voters bear some share of the blame as well: We could vote out representatives who fail to rein in the President, and we haven’t been doing that.

Surely it would also help to elect better Presidents (and almost literally anyone would have been better than Donald Trump), but part of the point of having a Constitution is that the system is supposed to be able to defend against occasionally putting someone awful in charge. But as we’ve seen, in practice those defenses seem to fall apart quite easily.

So now we live in a world where a maniac can simply decide to drop a bunch of bombs wherever he wants and nobody will stop him.