Bayesian updating with irrational belief change

Jul 27 JDN 2460884

For the last few weeks I’ve been working at a golf course. (It’s a bit of an odd situation: I’m not actually employed by the golf course; I’m contracted by a nonprofit to be a “job coach” for a group of youths who are part of a work program that involves them working at the golf course.)

I hate golf. I have always hated golf. I find it boring and pointless—which, to be fair, is my reaction to most sports—and also an enormous waste of land and water. A golf course is also a great place for oligarchs to arrange collusion.

But I noticed something about being on the golf course every day, seeing people playing and working there: I feel like I hate it a bit less now.

This is almost certainly a mere-exposure effect: Simply being exposed to something many times makes it feel familiar, and that tends to make you like it more, or at least dislike it less. (There are some exceptions: repeated exposure to trauma can actually make you more sensitive to it, hating it even more.)

I kinda thought this would happen. I didn’t really want it to happen, but I thought it would.

This is very interesting from the perspective of Bayesian reasoning, because it is a theorem of Bayesian logic (though I cannot seem to find anyone naming the theorem; it’s like a folk theorem, I guess?) that the following is true:

The prior expectation of the posterior is the expectation of the prior.

The prior is what you believe before observing the evidence; the posterior is what you believe afterward. This theorem describes a relationship that holds between them.
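To state this a bit more formally (my notation, but the result is just the law of total probability): if H is the hypothesis and E is the evidence I haven’t seen yet, then averaging the possible posteriors, weighted by how likely each piece of evidence is under my prior, gives back the prior:

$$\mathbb{E}\big[P(H \mid E)\big] \;=\; \sum_{e} P(E=e)\,P(H \mid E=e) \;=\; \sum_{e} P(H, E=e) \;=\; P(H).$$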

This theorem means that, if I am being optimally rational, I should take into account all expected future evidence, not just evidence I have already seen. I should not expect to encounter evidence that will change my beliefs—if I did expect to see such evidence, I should change my beliefs right now!

This might be easier to grasp with an example.

Suppose I am trying to predict whether it will rain at 5:00 pm tomorrow, and I currently estimate that the probability of rain is 30%. This is my prior probability.

What will actually happen tomorrow is that it will rain or it won’t; so my posterior probability will either be 100% (if it rains) or 0% (if it doesn’t). But I had better assign a 30% chance to the event that will make me 100% certain it rains (namely, I see rain), and a 70% chance to the event that will make me 100% certain it doesn’t rain (namely, I see no rain); if I were to assign any other probabilities, then I must not really think the probability of rain at 5:00 pm tomorrow is 30%.
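Spelled out as arithmetic, my expected posterior under those assignments is exactly my prior:

$$\mathbb{E}[\text{posterior}] \;=\; 0.3 \times 100\% \;+\; 0.7 \times 0\% \;=\; 30\%.$$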

(The keen Bayesian will notice that the expected variance of my posterior need not be the variance of my prior: My initial variance is relatively high (it’s actually 0.3*0.7 = 0.21, because this is a Bernoulli distribution), because I don’t know whether it will rain or not; but my posterior variance will be 0, because I’ll know the answer once it rains or doesn’t.)

It’s a bit trickier to analyze, but this also works even if the evidence won’t make me certain. Suppose I am trying to determine the probability that some hypothesis is true. If I expect to see any evidence that might change my beliefs at all, then on average I should expect the evidence that makes me believe the hypothesis more to be exactly balanced by the evidence that makes me believe it less. If that is not what I expect, I should really change how much I believe the hypothesis right now!
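Here is a minimal sketch of that claim in code, with a hypothetical noisy test whose error rates are entirely made up for illustration. Neither result settles the question, but the prior-weighted average of the two possible posteriors still equals the prior exactly.

```python
# Minimal sketch: even noisy evidence cannot shift the *expected* posterior.
# The prior and the test's error rates below are made up for illustration.

prior = 0.30            # P(hypothesis)
p_pos_if_true = 0.90    # P(positive result | hypothesis true)
p_pos_if_false = 0.20   # P(positive result | hypothesis false)

# How likely each result is, under the prior
p_pos = prior * p_pos_if_true + (1 - prior) * p_pos_if_false
p_neg = 1 - p_pos

# Bayes' rule for each possible result
posterior_if_pos = prior * p_pos_if_true / p_pos
posterior_if_neg = prior * (1 - p_pos_if_true) / p_neg

# Prior-weighted average of the posteriors recovers the prior
expected_posterior = p_pos * posterior_if_pos + p_neg * posterior_if_neg
print(round(expected_posterior, 10))  # 0.3, same as the prior
```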

So what does this mean for the golf example?

Was I wrong to hate golf quite so much before, because I knew that spending time on a golf course might make me hate it less?

I don’t think so.

See, the thing is: I know I’m not perfectly rational.

If I were indeed perfectly rational, then anything I expect to change my beliefs is a rational Bayesian update, and I should indeed factor it into my prior beliefs.

But if I know for a fact that I am not perfectly rational, that there are things which will change my beliefs in ways that make them deviate from rational Bayesian updating, then in fact I should not take those expected belief changes into account in my prior beliefs—since I expect to be wrong later, updating on that would just make me wrong now as well. I should only update on the expected belief changes that I believe will be rational.

This is something that a boundedly-rational person should do that neither a perfectly-rational nor perfectly-irrational person would ever do!

But maybe you don’t find the golf example convincing. Maybe you think I shouldn’t hate golf so much, and it’s not irrational for me to change my beliefs in that direction.


Very well. Let me give you a thought experiment which provides a very clear example of a time when you definitely would think your belief change was irrational.


To be clear, I’m not suggesting the two situations are in any way comparable; the golf thing is pretty minor, and for the thought experiment I’m intentionally choosing something quite extreme.

Here’s the thought experiment.

A mad scientist offers you a deal: Take this pill and you will receive $50 million. Naturally, you ask what the catch is. The catch, he explains, is that taking the pill will make you staunchly believe that the Holocaust didn’t happen. Take this pill, and you’ll be rich, but you’ll become a Holocaust denier. (I have no idea if making such a pill is even possible, but it’s a thought experiment, so bear with me. It’s certainly far less implausible than Swampman.)

I will assume that you are not, and do not want to become, a Holocaust denier. (If not, I really don’t know what else to say to you right now. It happened.) So if you take this pill, your beliefs will change in a clearly irrational way.

But I still think it’s probably justifiable to take the pill. This is absolutely life-changing money, for one thing, and being a random person who is a Holocaust denier isn’t that bad in the scheme of things. (Maybe it would be worse if you were in a position to have some kind of major impact on policy.) In fact, before taking the pill, you could write out a contract with a trusted friend that will force you to donate some of the $50 million to high-impact charities—and perhaps some of it to organizations that specifically fight Holocaust denial—thus ensuring that the net benefit to humanity is positive. Once you take the pill, you may be mad about the contract, but you’ll still have to follow it, and the net benefit to humanity will still be positive as reckoned by your prior, more correct, self.

It’s certainly not irrational to take the pill. There are perfectly-reasonable preferences you could have (indeed, likely do have) that would say that getting $50 million is more important than having incorrect beliefs about a major historical event.

And if it’s rational to take the pill, and you intend to take the pill, then of course it’s rational to believe that in the future, you will have taken the pill and you will become a Holocaust denier.

But it would be absolutely irrational for you to become a Holocaust denier right now because of that. The pill isn’t going to provide evidence that the Holocaust didn’t happen (for no such evidence exists); it’s just going to alter your brain chemistry in such a way as to make you believe that the Holocaust didn’t happen.

So here we have a clear example where you expect to be more wrong in the future.

Of course, if this really only happens in weird thought experiments about mad scientists, then it doesn’t really matter very much. But I contend it happens in reality all the time:

  • You know that by hanging around people with an extremist ideology, you’re likely to adopt some of that ideology, even if you really didn’t want to.
  • You know that if you experience a traumatic event, it is likely to make you anxious and fearful in the future, even when you have little reason to be.
  • You know that if you have a mental illness, you’re likely to form harmful, irrational beliefs about yourself and others whenever you have an episode of that mental illness.

Now, all of these belief changes are things you would likely try to guard against: If you are a researcher studying extremists, you might make a point of taking frequent vacations to talk with regular people and help yourself re-calibrate your beliefs back to normal. Nobody wants to experience trauma, and if you do, you’ll likely seek out therapy or other support to help heal yourself from that trauma. And one of the most important things they teach you in cognitive-behavioral therapy is how to challenge and modify harmful, irrational beliefs when they are triggered by your mental illness.

But these guarding actions only make sense precisely because the anticipated belief change is irrational. If you anticipate a rational change in your beliefs, you shouldn’t try to guard against it; you should factor it into what you already believe.

This also gives me a little more sympathy for Evangelical Christians who try to keep their children from being exposed to secular viewpoints. I think we both agree that having more contact with atheists will make their children more likely to become atheists—but we view this expected outcome differently.

From my perspective, this is a rational change, and it’s a good thing, and I wish they’d factor it into their current beliefs already. (Like hey, maybe if talking to a bunch of smart people and reading a bunch of books on science and philosophy makes you think there’s no God… that might be because… there’s no God?)

But I think, from their perspective, this is an irrational change, it’s a bad thing, the children have been “tempted by Satan” or something, and thus it is their duty to protect their children from this harmful change.

Of course, I am not a subjectivist. I believe there’s a right answer here, and in this case I’m pretty sure it’s mine. (Wouldn’t I always say that? No, not necessarily; there are lots of matters for which I believe that there are experts who know better than I do—that’s what experts are for, really—and thus if I find myself disagreeing with those experts, I try to educate myself more and update my beliefs toward theirs, rather than just assuming they’re wrong. I will admit, however, that a lot of people don’t seem to do this!)

But this does change how I might tend to approach the situation of exposing their children to secular viewpoints. I now understand better why they would see that exposure as a harmful thing, and thus be resistant to actions that otherwise seem obviously beneficial, like teaching kids science and encouraging them to read books. In order to get them to stop “protecting” their kids from the free exchange of ideas, I might first need to persuade them that introducing some doubt into their children’s minds about God isn’t such a terrible thing. That sounds really hard, but it at least clearly explains why they are willing to fight so hard against things that, from my perspective, seem good. (I could also try to convince them that exposure to secular viewpoints won’t make their kids doubt God, but the thing is… that isn’t true. I’d be lying.)

That is, Evangelical Christians are not simply incomprehensibly evil authoritarians who hate truth and knowledge; they quite reasonably want to protect their children from things that will harm them, and they firmly believe that being taught about evolution and the Big Bang will make their children more likely to suffer great harm—indeed, the greatest harm imaginable, the horror of an eternity in Hell. Convincing them that this is not the case—indeed, ideally, that there is no such place as Hell—sounds like a very tall order; but I can at least more keenly grasp the equilibrium they’ve found themselves in, where they believe that anything that challenges their current beliefs poses a literally existential threat. (Honestly, as a memetic adaptation, this is brilliant. Like a turtle, the meme has grown itself a nigh-impenetrable shell. No wonder it has managed to spread throughout the world.)

Wage-matching and the collusion under our noses

Jul 20 JDN 2460877

It was a minor epiphany for me when I learned, over the course of studying economics, that price-matching policies, while they seem like they benefit consumers, actually are a brilliant strategy for maintaining tacit collusion.

Consider a (Bertrand) market, with some small number n of firms in it.

Each firm announces a price, and then customers buy from whichever firm charges the lowest price. Firms can produce as much as they need to in order to meet this demand. (This makes the most sense for a service industry rather than for literal manufactured goods.)

In Nash equilibrium, all firms will charge the same price, because anyone who charged more would sell nothing. But what will that price be?

In the absence of price-matching, it will be just above the marginal cost of the service. Otherwise, it would be advantageous to undercut all the other firms by charging slightly less, and you could still make a profit. So the equilibrium price is basically the same as it would be in a perfectly-competitive market.

But now consider what happens if the firms can announce a price-matching policy.

If you were already planning on buying from firm 1 at price P1, and firm 2 announces that you can buy from them at some lower price P2, then you still have no reason to switch to firm 2, because you can still get price P2 from firm 1 as long as you show them the ad from the other firm. Under the very reasonable assumption that switching firms carries some cost (if nothing else, the effort of driving to a different store), people won’t switch—which means that any undercut strategy will fail.

Now, firms don’t need to set such low prices! They can set a much higher price, confident that if any other firm tries to undercut them, it won’t actually work—and thus, no one will try to undercut them. The new Nash equilibrium is now for the firms to charge the monopoly price.
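Here is a minimal sketch of that logic; it’s a toy model whose prices, switching cost, and customer numbers are illustrative assumptions, not figures from any source. Without price-matching, undercutting the high price captures the whole market; with price-matching, the same undercut gains the undercutter nothing and just lowers both firms’ profits.

```python
# Toy Bertrand duopoly with and without a price-matching policy.
# All numbers are illustrative assumptions.

MC = 10.0          # marginal cost per unit of the service
CUSTOMERS = 100    # customers, initially split evenly between the two firms
SWITCH_COST = 1.0  # hassle cost a customer pays to switch to the other firm

def profits(p1, p2, price_matching):
    """Return (profit1, profit2) for announced prices p1 and p2."""
    if price_matching:
        # Customers stay put; their current firm matches the lower announced price.
        margin = min(p1, p2) - MC
        return (margin * CUSTOMERS / 2, margin * CUSTOMERS / 2)
    # No matching: customers move to the cheaper firm, but only if the saving
    # exceeds the switching cost.
    if p2 < p1 - SWITCH_COST:
        return (0.0, (p2 - MC) * CUSTOMERS)
    if p1 < p2 - SWITCH_COST:
        return ((p1 - MC) * CUSTOMERS, 0.0)
    return ((p1 - MC) * CUSTOMERS / 2, (p2 - MC) * CUSTOMERS / 2)

P_HIGH = 50.0  # a collusive "monopoly" price, well above marginal cost

print(profits(P_HIGH, P_HIGH, price_matching=False))      # (2000, 2000): both charge high
print(profits(P_HIGH, P_HIGH - 2, price_matching=False))  # (0, 3800): undercutting pays
print(profits(P_HIGH, P_HIGH - 2, price_matching=True))   # (1900, 1900): undercutting just hurts both
```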

In the real world, it’s a bit more complicated than that; for various reasons they may not actually be able to sustain collusion at the monopoly price. But there is considerable evidence that price-matching schemes do allow firms to charge a higher price than they would in perfect competition. (Though the literature is not completely unanimous; there are a few who argue that price-matching doesn’t actually facilitate collusion—but they are a distinct minority.)

Thus, a policy that on its face seems like it’s helping consumers by giving them lower prices actually ends up hurting them by giving them higher prices.

Now I want to turn things around and consider the labor market.

What would price-matching look like in the labor market?

It would mean that whenever you are offered a higher wage at a different firm, you can point this out to the firm you are currently working at, and they will offer you a raise to that new wage, to keep you from leaving.

That sounds like a thing that happens a lot.

Indeed, pretty much the best way to get a raise, almost anywhere you may happen to work, is to show your employer that you have a better offer elsewhere. It’s not the only way to get a raise, and it doesn’t always work—but it’s by far the most reliable way, because it usually works.

This for me was another minor epiphany:

The entire labor market is full of tacit collusion.

The very fact that firms can afford to give you a raise when you have an offer elsewhere basically proves that they weren’t previously paying you all that you were worth. If they had actually been paying you your value of marginal product as they should in a competitive labor market, then when you showed them a better offer, they would say: “Sorry, I can’t afford to pay you any more; good luck in your new job!”

This is not a monopoly price but a monopsony price (or at least something closer to it); people are being systematically underpaid so that their employers can make higher profits.

And since the phenomenon of wage-matching is so ubiquitous, it looks like this is happening just about everywhere.

This simple model doesn’t tell us how much higher wages would be in perfect competition. It could be a small difference, or a large one. (It likely varies by industry, in fact.) But the simple fact that nearly every employer engages in wage-matching implies that nearly every employer is in fact colluding on the labor market.

This also helps explain another phenomenon that has sometimes puzzled economists: Why doesn’t raising the minimum wage increase unemployment? Well, it absolutely wouldn’t, if all the firms paying minimum wage are colluding in the labor market! And we already knew that most labor markets were shockingly concentrated.

What should be done about this?

Now there we have a thornier problem.

I actually think we could implement a law against price-matching on product and service markets relatively easily, since these are generally applied to advertised prices.

But a law against wage-matching would be quite tricky indeed. Wages are generally not advertised—a problem unto itself—and we certainly don’t want to ban raises in general.

Maybe what we should actually do is something like this: Offer a cash bonus (refundable tax credit?) to anyone who changes jobs in order to get a higher wage. Make this bonus large enough to offset the costs of switching jobs—which are clearly substantial. Then, the “undercut” (“overcut”?) strategy will become more effective; employers will have an easier time poaching workers from each other, and a harder time sustaining collusive wages.

Businesses would of course hate this policy, and lobby heavily against it. This is precisely the reaction we should expect if they are relying upon collusion to sustain their profits.

Universal human rights are more radical than is commonly supposed

Jul 13 JDN 2460870

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

So begins the second paragraph of the Declaration of Independence. It had to have been obvious to many people, even at the time, how incredibly hypocritical it was for men to sign that document and then go home to give orders to their slaves.

And today, even though the Universal Declaration of Human Rights was signed over 75 years ago, there are still human rights violations ongoing in many different countries—including right here in the United States.

Why is it so easy to get people to declare that they believe in universal human rights—but so hard to get them to actually act accordingly?

Other moral issues are not like this. While hypocrisy certainly exists in many forms, for the most part people’s moral claims align with their behavior. Most people say they are against murder—and sure enough, most people aren’t murderers. Most people say they are against theft—and indeed, most people don’t steal very often. And when it comes to things that most people do all the time, most people aren’t morally opposed to them—even things like eating meat, against which there is a pretty compelling moral case.

But universal human rights seem like something that is far more honored in the breach than in the observance.

I think this is because most people don’t quite grasp just how radical universal human rights really are.

The tricky part is the universal. They are supposed to apply to everyone.

Even those people. Even the people you are thinking of right now as an exception. Even the people you hate the most. Yes, even them.

Depending on who you are, you might be thinking of different exceptions: People of a particular race, or religion, or nationality, perhaps; or criminals, or terrorists; or bigots, or fascists. But almost everyone has some group of people that they don’t really think deserves the full array of human rights.

So I am here to tell you that, yes, those people too. Universal human rights means everyone.

No exceptions.

This doesn’t mean that we aren’t allowed to arrest and imprison people for crimes. It doesn’t even mean that we aren’t sometimes justified in killing people—e.g. in war or self-defense. But it does mean that there is no one, absolutely no one, who is considered beneath human dignity. Any time we are to deprive someone of life or liberty, we must do so with absolute respect for their fundamental rights.

This also means that there is no one you should be spitting on, no one you should be torturing, no one you should be calling dehumanizing names. Sometimes violence is necessary, to protect yourself, or to preserve liberty, or to overthrow tyranny. But yes, even psychopathic tyrants are human beings, and still deserve human rights. If you cannot recognize a person’s humanity while still defending yourself against them, you need to do some serious soul-searching and ask yourself why not.

I think what happens when most people are asked about “universal human rights” is that they essentially exclude whoever they think doesn’t deserve rights from the very category of “human”. Then it essentially becomes a tautology: Everyone who deserves rights deserves rights.

And thus, everyone signs onto it—but it ends up meaning almost nothing. It doesn’t stop racism, or sexism, or police brutality, or mass incarceration, or rape, or torture, or genocide, because the people doing those things don’t think of the people they’re doing them to as actually human.

But no, the actual declaration says all human beings. Everyone. Even the people you hate. Even the people who hate you. Even people who want to torture and kill you. Yes, even them.

This is an incredibly radical idea.

It is frankly alien to a brain that evolved for tribalism; we are wired to think of the world in terms of in-groups and out-groups, and universal human rights effectively declare that everyone is in the in-group and the out-group doesn’t exist.

Indeed, perhaps too radical! I think a reasonable defense could be made of a view that some people (psychopathic tyrants?) really are just so evil that they don’t actually deserve basic human dignity. But I will say this: Usually the people arguing that some group of humans aren’t really humans end up being on the wrong side of history.

The one possible exception I can think of here is abortion: The people arguing that fetuses are not human beings and it should be permissible to kill them when necessary are, at least in my view, generally on the right side of history. But even then, I tend to be much more sympathetic to the view that abortion, like war and self-defense, should be seen as a tragically necessary evil, not an inherent good. The ideal scenario would be to never need it, and allowing it when it’s needed is simply a second-best solution. So I think we can actually still fit this into a view that fetuses are morally important and deserving of dignity; it’s just that sometimes the rights of one being can outweigh the rights of another.

And other than that, yeah, it’s pretty much the case that the people who want to justify enacting some terrible harm on some group of people because they say those people aren’t really people, end up being the ones that, sooner or later, the world recognizes as the bad guys.

So think about that, if there is still some group of human beings that you think of as not really human beings, not really deserving of universal human rights. Will history vindicate you—or condemn you?

Quantifying stereotypes

Jul 6 JDN 2460863

There are a lot of stereotypes in the world, from the relatively innocuous (“teenagers are rebellious”) to the extremely harmful (“Black people are criminals”).

Most stereotypes are not true.

But most stereotypes are not exactly false, either.

Here’s a list of forty stereotypes, all but one of which I got from this list of stereotypes:

(Can you guess which one? I’ll give you a hint: It’s a group I belong to and a stereotype I’ve experienced firsthand.)

  1. “Children are always noisy and misbehaving.”
  2. “Kids can’t understand complex concepts.”
  3. “Children are tech-savvy.”
  4. “Teenagers are always rebellious.”
  5. “Teenagers are addicted to social media.”
  6. “Adolescents are irresponsible and careless.”
  7. “Adults are always busy and stressed.”
  8. “Adults are responsible.”
  9. “Adults are not adept at using modern technologies.”
  10. “Elderly individuals are always grumpy.”
  11. “Old people can’t learn new skills, especially related to technology.”
  12. “The elderly are always frail and dependent on others.”
  13. “Women are emotionally more expressive and sensitive than men.”
  14. “Females are not as good at math or science as males.”
  15. “Women are nurturing, caring, and focused on family and home.”
  16. “Females are not as assertive or competitive as men.”
  17. “Men do not cry or express emotions openly.”
  18. “Males are inherently better at physical activities and sports.”
  19. “Men are strong, independent, and the primary breadwinners.”
  20. “Males are not as good at multitasking as females.”
  21. “African Americans are good at sports.”
  22. “African Americans are inherently aggressive or violent.”
  23. “Black individuals have a natural talent for music and dance.”
  24. “Asians are highly intelligent, especially in math and science.”
  25. “Asian individuals are inherently submissive or docile.”
  26. “Asians know martial arts.”
  27. “Latinos are uneducated.”
  28. “Hispanic individuals are undocumented immigrants.”
  29. “Latinos are inherently passionate and hot-tempered.”
  30. “Middle Easterners are terrorists.”
  31. “Middle Eastern women are oppressed.”
  32. “Middle Eastern individuals are inherently violent or aggressive.”
  33. “White people are privileged and unacquainted with hardship.”
  34. “White people are racist.”
  35. “White individuals lack rhythm in music or dance.”
  36. “Gay men are excessively flamboyant.”
  37. “Gay men have lisps.”
  38. “Lesbians are masculine.”
  39. “Bisexuals are promiscuous.”
  40. “Trans people get gender-reassignment surgery.”

If you view the above 40 statements as absolute statements about everyone in the category (the universal quantifier “for all”), they are obviously false; there are clear counter-examples to every single one. If you view them as merely saying that there are examples of each (the existential quantifier “there exists”), they are obviously true, but also utterly trivial, as you could just as easily find examples from other groups.

But I think there’s a third way to read them, which may be more what most people actually have in mind. Indeed, it kinda seems uncharitable not to read them this third way.

That way is:

“This is more true of the group I’m talking about than it is true of other groups.”

And that is not only a claim that can be true, it is a claim that can be quantified.

Recall my new favorite effect size measure, which I like because it’s so simple and intuitive. I’m not much for its official name, probability of superiority (especially in this context!), so I’m gonna call it the more down-to-earth chance of being higher.

It is exactly what it sounds like: If you compare a quantity X between group A and group B, what is the chance that the person in group A has a higher value of X?
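Here’s a quick sketch of how you could compute it, with entirely made-up scores just for illustration. (The weak/strict option anticipates a distinction I’ll come back to below.)

```python
# "Chance of being higher" (probability of superiority), computed by brute force.
# The scores below are hypothetical, purely for illustration.

def chance_of_being_higher(group_a, group_b, strict=False):
    """Fraction of cross-group pairs (a, b) with a >= b (weak) or a > b (strict)."""
    pairs = [(a, b) for a in group_a for b in group_b]
    if strict:
        wins = sum(a > b for a, b in pairs)
    else:
        wins = sum(a >= b for a, b in pairs)
    return wins / len(pairs)

# Hypothetical "rebelliousness" scores:
teens  = [7, 8, 6, 9, 5]
adults = [3, 4, 6, 2, 5]
print(chance_of_being_higher(teens, adults))               # 0.96: well above 0.5
print(chance_of_being_higher(teens, adults, strict=True))  # 0.88: slightly lower
```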

Let’s start at the top: If you take one randomly-selected child, and one randomly-selected adult, what is the chance that the child is one who is more prone to being noisy and misbehaving?

Probably pretty high.

Or let’s take number 13: If you take one randomly-selected woman and one randomly-selected man, what is the chance that the woman is the more emotionally expressive one?

Definitely more than half.

Or how about number 27: If you take one randomly-selected Latino and one randomly-selected non-Latino (especially if you choose a White or Asian person), what is the chance that the Latino is the less-educated one?

That one I can do fairly precisely: Since 95% of White Americans have completed high school but only 75% of Latino Americans have, while 28% of Whites have a bachelor’s degree and only 21% of Latinos do, the probability of the White person being at least as educated as the Latino person is about 82%.
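Here is the arithmetic behind that figure, assuming just three education levels (less than high school, high school only, bachelor’s or more) built from the percentages above; the exact breakdown is a simplifying assumption on my part.

```python
# Reconstructing the ~82% "at least as educated" figure from the stated percentages.
# Assumes three education levels and that bachelor's holders all completed high school.

white  = {"no_hs": 1 - 0.95, "hs_only": 0.95 - 0.28, "ba": 0.28}
latino = {"no_hs": 1 - 0.75, "hs_only": 0.75 - 0.21, "ba": 0.21}

order = ["no_hs", "hs_only", "ba"]
rank = {level: i for i, level in enumerate(order)}

# Weak chance: the randomly-chosen White person is at least as educated
p_at_least = sum(
    white[w] * latino[l]
    for w in order
    for l in order
    if rank[w] >= rank[l]
)
print(round(p_at_least, 3))  # ~0.822
```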

I don’t know the exact figures for all of these, and I didn’t want to spend all day researching 40 different stereotypes, but I am quite prepared to believe that at least all of the following exhibit a chance of being higher that is over 50%:

1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 15, 16, 17, 18, 19, 21, 24, 26, 27, 28, 29, 30, 31, 33, 34, 36, 37, 38, 40.

You may have noticed that that’s… most of them. I had to shrink the font a little to fit them all on one line.

I think 30 is an important one to mention, because while terrorists are a tiny proportion of the Middle Eastern population, they are in fact a much larger proportion of that population than they are of most other populations, and it doesn’t take that many terrorists to make a place dangerous. The Middle East is objectively a more dangerous place for terrorism than most other places, with only India and sub-Saharan Africa coming close (and terrorism in both of those regions is also largely driven by Islamism). So while it’s bigoted to assume that any given Muslim or Middle Easterner is a terrorist, it is an objective fact that a disproportionate share of terrorists are Middle Eastern Muslims. Part of what I’m trying to do here is get people to more clearly distinguish between those two concepts, because one is true and the other is very, very false.

40 also deserves particular note, because the chance of being higher is almost certainly very close to 100%. While most trans people don’t get gender-reassignment surgery, virtually all people who get gender-reassignment surgery are trans.

Then again, you could see this as a limitation of the measure, since we might expect a 100% score to mean “it’s true of everyone in the group”, when here it simply means “if we ask people whether they have had gender-reassignment surgery, the trans people sometimes say yes and the cis people always say no.”


We could talk about a weak or strict chance of being higher: The weak chance is the chance of being greater than or equal to (which is the normal measure), while the strict chance is the chance of being strictly greater. In this case, the weak chance is nearly 100%, while the strict chance is hard to estimate but probably about 33% based on surveys.

This doesn’t mean that all stereotypes have some validity.

There are some stereotypes here, including a few pretty harmful ones, for which I’m not sure how the statistics would actually shake out:
10, 14, 22, 23, 25, 32, 35, 39

But I think we should be honestly prepared for the possibility that maybe there is some statistical validity to some of these stereotypes too, and instead of simply dismissing the stereotypes as false—or even bigoted—we should instead be trying to determine how true they are, and also look at why they might have some truth to them.

My proposal is to use the chance of being higher as a measure of the truth of a stereotype.

A stereotype is completely true if it has a chance of being higher of 100%.

It is completely false if it has a chance of being higher of 50%.

And it is completely backwards if it has a chance of being higher of 0%.

There is a unique affine transformation that does this: 2X-1.

100% maps to 100%, 50% maps to 0%, and 0% maps to -100%.

With discrete outcomes, the difference between the weak and strict chance of being higher becomes very important. With a discrete outcome, you can have a 100% weak chance but a 1% strict chance, and honestly I’m really not sure whether we should say that stereotype is true or not.

For example, for the claim “trans men get bottom surgery”, the figures would be 100% and 6% respectively. The vast majority of trans men don’t get bottom surgery—but cis men almost never do. (Unless I count penis enlargement surgery? Then the numbers might be closer than you’d think, at least in the US where the vast majority of such surgery is performed.)
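To make the weak/strict gap concrete for a binary trait, here is a tiny sketch using that example; the ~6% rate comes from the figures above, and treating the cis-male rate as essentially zero is a simplifying assumption.

```python
# Weak vs. strict chance of being higher for a yes/no trait (bottom surgery).
# Rates are approximate; the cis-male rate of zero is a simplifying assumption.

p_yes_trans_men = 0.06
p_yes_cis_men = 0.0

# Weak: the trans man's answer is at least the cis man's (yes >= anything, no >= no)
weak = p_yes_trans_men + (1 - p_yes_trans_men) * (1 - p_yes_cis_men)
# Strict: the trans man says yes and the cis man says no
strict = p_yes_trans_men * (1 - p_yes_cis_men)

print(weak, strict)  # ~1.0 and ~0.06
```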

And for the claim “Middle Eastern Muslims are terrorists”, well, given two random people of whatever ethnicity or religion, they’re almost certainly not terrorists—but if one of them is, it’s probably the Middle Eastern Muslim. It may be better in this case to talk about the conditional chance of being higher: If you have two random people, and you know that one is a terrorist and one isn’t, and one is a Middle Eastern Muslim and one isn’t, how likely is it that the Middle Eastern Muslim is the terrorist? Probably about 80%. Definitely more than 50%, but also not 100%. So that’s the sense in which the stereotype has some validity. It’s still the case that 99.999% of Middle Eastern Muslims aren’t terrorists, and so it remains bigoted to treat every Middle Eastern Muslim you meet like a terrorist.

We could also work harder to more clearly distinguish between “Middle Easterners are terrorists” and “terrorists are Middle Easterners”; the former is really not true (99.999% are not), but the latter kinda is (the plurality of the world’s terrorists are in the Middle East).

Alternatively, for discrete traits we could just report all four probabilities, which would be something like this: 99.999% of Middle Eastern Muslims are not terrorists, and 0.001% are; 99.9998% of other Americans are not terrorists, and 0.0002% are. Compared to Muslim terrorists in the US, White terrorists actually are responsible for more attacks and a similar number of deaths, but largely because there just are a lot more White people in America.

These issues mainly arise when a trait is discrete. When the trait is itself quantitative (like rebelliousness, or math test scores), this is less of a problem, and the weak and strong chances of being higher are generally more or less the same.


So instead of asking whether a stereotype is true, we could ask: How true is it?

Using measures like this, we will find that some stereotypes probably have quite high truth levels, like 1 and 4; but others, like 14, must have quite low truth levels if they are true at all: if there’s a difference, it’s a small one!

The lower a stereotype’s truth level, the less useful it is; indeed, this measure directly predicts how accurate you’d be at guessing someone’s score on the trait if you knew only the group they belong to. If you couldn’t really predict, then why are you using the stereotype? Get rid of it.

Moreover, some stereotypes are clearly more harmful than others.

Even if it is statistically valid to say that Black people are more likely to commit crimes in the US than White people (it is), the kind of person who goes around saying “Black people are criminals” is (1) smearing all Black people with the behavior of a minority of them, and (2) likely to be racist in other ways. So we have good reason to be suspicious of people who say such things, even if there may be a statistical kernel of truth to their claims.

But we might still want to be a little more charitable, a little more forgiving, when people express stereotypes. They may make what sounds like a blanket absolute “for all” statement, but actually intend something much milder—something that might actually be true. They might not clearly grasp the distinction between “Middle Easterners are terrorists” and “terrorists are Middle Easterners”, and instead of denouncing them as a bigot immediately, you could try taking the time to listen to what they are saying and carefully explain what’s wrong with it.

Failing to be charitable like this—as we so often do—often feels to people like we are dismissing their lived experience. All the terrorists they can think of were Middle Eastern! All of the folks they know with a lisp turned out to be gay! Lived experience is ultimately anecdotal, but it still has a powerful effect on how people think (too powerful—see also availability heuristic), and it’s really not surprising that people would feel we are treating them unjustly if we immediately accuse them of bigotry simply for stating things that, based on their own experience, seem to be true.

I think there’s another harm here as well, which is that we damage our own credibility. If I believe that something is true and you tell me that I’m a bad person for believing it, that doesn’t make me not believe it—it makes me not trust you. You’ve presented yourself as the sort of person who wants to cover up the truth when it doesn’t fit your narrative. If you wanted to actually convince me that my belief is wrong, you could present evidence that might do that. (To be fair, this doesn’t always work; but sometimes it does!) But if you just jump straight to attacking my character, I don’t want to talk to you anymore.

And just like that, we’re at war.

Jun 29 JDN 2460856

Israel attacked Iran. Iran counter-attacked. Then Israel requested US support.

President Trump waffled about giving that support, then, late Jun 21 (US time—early June 22 Iran time), without any authorization from anyone else, he ordered an attack, using B-2 stealth bombers to drop GBU-57 MOP bombs on Iranian nuclear enrichment facilities.

So apparently we’re at war now, because Donald Trump decided we would be.

We could talk about the strategic question of whether that attack was a good idea. We could talk about the moral question of whether that attack was justified.

But I have in mind a different question: Why was he allowed to do that?

In theory, the United States Constitution grants Congress the authority to declare war. The President is the Commander-in-Chief of our military forces, but only once war has actually been declared. What’s supposed to happen is that if a need for military action arises, Congress makes a declaration of war, and then the President orders the military into action.

Yet in fact we haven’t actually done that since 1942. Despite combat in Korea, Vietnam, Afghanistan, Iraq, Bosnia, Libya, Kosovo, and more, we have never officially declared war since World War 2. In some of these wars, there was a UN resolution and/or Congressional approval, so that’s sort of like getting a formal declaration of war. But in others, there was no such thing; the President just ordered our troops to fight, and they fought.

This is not what the Constitution says, nor is it what the War Powers Act says. The President isn’t supposed to be able to do this. And yet Presidents have done it over a dozen times.

How did this happen? Why have we, as a society, become willing to accept this kind of unilateral authority on such vitally important matters?

Part of the problem seems to be that Congress is (somewhat correctly) perceived as slow and dysfunctional. But that doesn’t seem like an adequate explanation, because surely if we were actually under imminent threat, even a dysfunctional Congress could find it in itself to approve a declaration of war. (And if we’re not under imminent threat, then it isn’t so urgent!)

I think the more important reason may be that Congress consistently fails to hold the President accountable for overstepping his authority. It doesn’t even seem to matter which party is in which branch; they just never actually seem to remove a President from office for overstepping his authority. (Indeed, while three Presidents have been impeached—Trump twice—not one has ever actually been removed from office for any reason.) The checks and balances that are supposed to rein in the President simply are not ever actually deployed.

As a result, the power of the Executive Branch has gradually expanded over time, as Presidents test the waters by asserting more authority—and then are literally never punished for doing so.

I suppose we have Congress to blame for this: They could be asserting their authority, and aren’t doing so. But voters bear some share of the blame as well: We could vote out representatives who fail to rein in the President, and we haven’t been doing that.

Surely it would also help to elect better Presidents (and almost literally anyone would have been better than Donald Trump), but part of the point of having a Constitution is that the system is supposed to be able to defend against occasionally putting someone awful in charge. But as we’ve seen, in practice those defenses seem to fall apart quite easily.

So now we live in a world where a maniac can simply decide to drop a bunch of bombs wherever he wants and nobody will stop him.

Toward a positive vision of the future

Jun 22 JDN 2460849

Things look pretty bleak right now. Wildfires rage across Canada, polluting the air across North America. Russia is still at war with Ukraine, and Israel seems to be trying to start a war with Iran. ICE continues sending agents without badges to kidnap people in unmarked vehicles and sending them to undisclosed locations. Climate change is getting worse, and US policy is pivoting from subsidizing renewables back to subsidizing fossil fuels. And Trump, now revealed to be a literal fascist, is still President.

But things can get better.

I can’t guarantee that they will, nor can I say when; but there is still hope that a better future is possible.

It has been very difficult to assemble a strong coalition against the increasingly extreme far-right in this country (epitomized by Trump). This seems odd, when most Americans hold relatively centrist views. Yes, more Americans identify as conservative than as liberal, but Trump isn’t a conservative; he’s a radical far-right fascist. Trump recently gave a speech endorsing ethnic cleansing, for goodness’ sake! I’m liberal, but I’d definitely vote for a conservative like Mitt Romney rather than a Stalinist! So why are “conservatives” voting for a fascist?

But setting aside the question of why people voted for Trump, we still have the question of why the left has not been able to assemble a strong coalition against him.

I think part of the problem is that the left really has two coalitions within it: The center left, who were relatively happy with the status quo before Trump and want to go back to that; and the far left, who were utterly unhappy with that status quo and want radical change. So while we all agree that Trump is awful, we don’t really agree on what he’s supposed to be replaced with.

It’s of course possible to be in between, and indeed I would say that I am. While clearly things were better under Obama and Biden than they have been under Trump, there were still a lot of major problems in this country that should have been priorities for national policy but weren’t:

  1. Above all, climate change—the Democrats at least try to do something against it, but not nearly enough. Our carbon emissions are declining, but it’s very unclear if we’ll actually hit our targets. The way we have been going, we’re in for a lot more hurricanes and wildfires and droughts.
  2. Housing affordability is still an absolute crisis; half of renters spend more than the targeted 30% of their income on housing, and a fourth spend more than 50%. Homelessness is now at a record high.
  3. Healthcare is still far too expensive in this country; we continue to spend far more than other First World countries without getting meaningfully better care.
  4. While rights and protections for LGB people have substantially improved in the last 30 years, rights and protections for trans people continue to lag behind.
  5. Racial segregation in housing remains the de facto norm, even though it is de jure illegal.
  6. Livestock remain exempt from the Animal Welfare Act, and in 2002 laboratory rats and mice were excluded as well, meaning that cruel or negligent treatment which would be illegal for cats and dogs is still allowed for livestock and lab rats.
  7. Income and wealth inequality in this country remains staggeringly high, and the super-rich continue to gain wealth at a terrifying rate.
  8. Our voting system is terrible—literally the worst possible system that can technically still be considered democracy.

This list is by no means exhaustive, but these are the issues that seem most salient to me.

2 and 3 both clearly showed up in my Index of Necessary Expenditure; these costs were the primary reason why raising a family of 4 was unaffordable on a median household income.

So it isn’t right to say that I was completely happy with how things were going before. But I still think of myself as center left, because I don’t believe we need to tear everything down and start over.

I have relatively simple recommendations that would go a long way toward solving all 8 of these problems:

Climate change could be greatly mitigated if we’d just tax carbon already, or implement a cap-and-trade system like California’s nationwide. If that’s too politically unpalatable, subsidize nuclear power, fusion research, and renewables instead. That’s way worse from a budget perspective, but for some reason Americans are just fanatically opposed to higher gas prices.

Housing affordability is politically thorny, but economically quite simple: Build more housing. Whatever we have to do to make that happen, we should do it. Maybe this involves changes to zoning or other regulations. Maybe it involves subsidies to developers. Maybe it involves deploying eminent domain to build public housing. Maybe it involves using government funds to build housing and then offering it for sale on the market. But whatever we do, we need more housing.

Healthcare costs are a trickier one; Obamacare helped, but wasn’t enough. I think what I would like to see next is an option to buy into Medicare; before you are old enough to get it for free, you can pay a premium to be covered by it. Because Medicare is much more efficient than private insurance, you could pay a lower premium and get better coverage, so a lot of people would likely switch (which is of course exactly why insurance companies would fight the policy at every turn). Even putting everyone on Medicare might not be enough; to really bring costs down, we may need to seriously address the fact that US doctors, particularly specialists, are just radically higher-paid than any other doctors in the world. Is an American doctor who gets $269,000 per year really 88% better than a French doctor who gets $143,000?

The policies we need for LGBT rights are mostly no-brainers.

Okay, I can admit to some reasonable nuance when it comes to trans women in pro sports (the statistical advantages they have over cis women are not as clear-cut as many people think, but they do seem to exist; average athletic performance for trans women seems to be somewhere in between the average for cis men and the average for cis women), but that’s really not a very important issue. Like, seriously, why do we care so much about pro sports? Either let people play sports according to their self-identified gender, or make the two options “cis women” and “other” and let trans people play the latter. And you can do the same thing with school sports, or you can eliminate them entirely because they are a stupid waste of academic resources; but either way this should not be considered a top priority policy question. (If parents want their kids to play sports, they can form their own leagues; the school shouldn’t be paying for it. Winning games is not one of the goals of an academic institution. If you want kids to get more exercise, give them more recess and reform the physical education system so it isn’t so miserable for the kids who need it most.)

But there is absolutely no reason not to let people use whatever pronouns and bathrooms they want; indeed, there doesn’t really seem to be a compelling reason to gender-segregate bathrooms in the first place, and removing that segregation would most benefit women, who often have to wait much longer in line for the bathroom. (The argument that this somehow protects women never made sense to me; if a man wants to assault women in the bathroom, what’s to stop him from just going into the women’s bathroom? It’s not like there’s a magic field that prevents men from entering. He’s already planning on committing a crime, so it doesn’t seem like he’s very liable to be held back by social norms. It’s worthwhile to try to find ways to prevent sexual assault, but segregating bathrooms does little or nothing toward that goal—and indeed, trans-inclusive bathrooms do not statistically correlate with higher rates of sexual assault.) But okay, fine, if you insist on having the segregation, at least require gender-neutral bathrooms as well. This is really not that difficult; it’s pretty clearly bigotry driving this, not serious policy concerns.

Not exempting any vertebrate animals from anti-cruelty legislation is an incredibly simple thing to do, obviously morally better, and the only reason we’re not doing it is that it would hurt agribusinesses and make meat more expensive. There is literally zero question what the morally right thing to do here is; the question is only how to get people to actually do that morally right thing.

Finally, how do we fix income inequality? Some people—including some economists—treat this as a very complicated, difficult question, but I don’t think it is. I think the really simple, obvious answer is actually the correct one: Tax rich people more, and use the proceeds to help poor people. We should be taxing the rich a lot more; I want something like the revenue-maximizing rate, estimated at about 70%. (And an even higher rate like the 90% we had in the 1950s is not out of the question.) These funds could either provide services like education and healthcare, or they could simply be direct cash transfers. But one way or another, the simplest, most effective way to reduce inequality is to tax the rich and help the poor. A lot of economists fear that this would hurt the overall economy, but particularly if these rates are really targeted at the super-rich (the top 0.01%), I don’t see how they could, because all those billions of dollars are very clearly monopoly rents rather than genuine productivity. If anything, making it harder to amass monopoly rents should make the economy more efficient. And taking say 90% of the roughly 10% return just the top 400 billionaires make on their staggering wealth would give us an additional $480 billion per year.
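As a back-of-the-envelope check, those two rates pin down the wealth total this assumes; the implied figure below is an inference from the numbers in this paragraph, not an independent statistic.

```python
# Working backward from the $480 billion/year figure, a ~10% return, and a 90% tax:
revenue = 480e9      # dollars per year
tax_rate = 0.90
return_rate = 0.10

implied_wealth = revenue / (tax_rate * return_rate)
print(f"Implied top-400 wealth: ${implied_wealth / 1e12:.1f} trillion")  # ~$5.3 trillion
```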

Fixing our voting system is also quite straightforward. Ranked-choice voting would be a huge improvement, and has already been implemented successfully in several states. Even better would be range voting, but so far very few places have been bold enough to actually try it. But even ranked-choice voting would remove most of the terrible incentives that plurality voting creates, and likely allow us to move beyond the two-party system into a much more representative multiparty system.

None of this requires overthrowing the entire system or dismantling capitalism.

That is, we can have a positive vision of the future that doesn’t require revolution or radical change.

Unfortunately, there’s still a very good chance we’ll do none of it.

What does nonviolence mean?

Jun 15 JDN 2460842

As I write this, the LA protests and the crackdown upon them have continued since Friday and it is now Wednesday. In a radical and authoritarian move by Trump, Marines have been deployed (with shockingly incompetent logistics unbefitting the usually highly-efficient US military); but so far they have done very little. Reuters has been posting live updates on new developments.

The LAPD has deployed a variety of less-lethal weapons to disperse the protests, including rubber bullets, tear gas, and pepper balls; but so far they have not used lethal force. Protesters have been arrested, some for specific crimes—and others simply for violating curfew.

More recently, the protests have spread to other cities, including New York, Atlanta, Austin, Chicago, San Francisco, and Philadelphia. By the time this post goes live, there will probably be even more cities involved, and there may also be more escalation.

But for now, at least, the protests have been largely nonviolent.

And I thought it would be worthwhile to make it very clear what I mean by that, and why it is important.

I keep seeing a lot of leftist people on social media not merely accepting the narrative that these protests are violent, but actively encouraging that violence; and some of them have taken to arrogantly accusing anyone who supports nonviolent protests over violent ones of either being naive idiots or acting in bad faith. (The most baffling part of this is that they seem to be saying that Martin Luther King and Mahatma Gandhi were naive idiots or were acting in bad faith? Is that what they meant to say?)

First of all, let me be absolutely clear that nonviolence does not mean comfortable or polite or convenient.

Anyone objecting to blocking traffic, strikes, or civil disobedience because they cause disorder and inconvenience genuinely does not understand the purpose of protest (or is a naive idiot or acting in bad faith). Effective protests are disruptive and controversial. They cause disorder.

Nonviolence does not mean always obeying the law.

Sometimes the law is itself unjust, and must be actively disobeyed. Most of the Holocaust was legal, after all.

Other times, it is necessary to break some laws (such as property laws, curfews, and laws against vandalism) in the service of higher goals.

I wouldn’t say that a law against vandalism is inherently unjust; but I would say that spray-painting walls and vehicles in the service of protecting human rights is absolutely justified, and even sometimes it’s necessary to break some windows or set some fires.

Nonviolence does not mean that nobody tries to call it violence.

Most governments are well aware that most of their citizens are much more willing to support a nonviolent movement than a violent one—more on this later—and thus will do whatever they can to characterize nonviolent movements as violence. They have two chief strategies for doing so:

  1. Characterize nonviolent but illegal acts, such as vandalism and destruction of property, as violence
  2. Actively try to instigate violence by treating nonviolent protesters as if they were violent, and then characterizing their attempts at self-defense as violence

As a great example of the latter, a man in Phoenix was arrested for assault because he kicked a tear gas canister back at police. But kicking back a canister that was shot at you is the most paradigmatic example of self-defense I could possibly imagine. If the system weren’t so heavily biased in favor of the police, a judge would order his release immediately.

Nonviolence does not mean that no one at the protests gets violent.

Any large group of people will contain outliers. Gather a protest of thousands of people, and surely some fraction of them will be violent radicals, or just psychopaths looking for an excuse to hurt someone. A nonviolent protest is one in which most people are nonviolent, and in which anyone who does get violent is shunned by the organizers of the movement.

Nonviolence doesn’t mean that violence will never be used against you.

On the contrary, the more authoritarian the regime—and thus the more justified your protest—the more likely it is that violent force will be used to suppress your nonviolent protests.

In some places it will be limited to less-lethal means (as it has so far in the current protests); but in others, even in ostensibly-democratic countries, it can result in lethal force being deployed against innocent people (as it did at Kent State in 1970).

When this happens, are you supposed to just stand there and get shot?

Honestly? Yes. I know that requires tremendous courage and self-sacrifice, but yes.

I’m not going to fault anyone for running or hiding or even trying to fight back (I’d be more of the “run” persuasion myself), but the most heroic action you could possibly take in that situation is in fact to stand there and get shot. Becoming a martyr is a terrible sacrifice, and I’m not sure it’s one I myself could ever make; but it really, really works. (Seriously, whole religions have been based on this!)

And when you get shot, for the love of all that is good in the world, make sure someone gets it on video.

The best thing you can do for your movement is to show the oppressors for what they truly are. If they are willing to shoot unarmed innocent people, and the world finds out about that, the world will turn against them. The more peaceful and nonviolent you can appear at the moment they shoot you, the more compelling that video will be when it is all over the news tomorrow.

A shockingly large number of social movements have pivoted sharply in public opinion after a widely-publicized martyrdom incident. If you show up peacefully to speak your minds and they shoot you, that is nonviolent protest working. That is your protest being effective.

I never said that nonviolent protest was easy or safe.

What is the core of nonviolence?

It’s really very simple. So simple, honestly, that I don’t understand why it’s hard to get across to people:

Nonviolence means you don’t initiate bodily harm against other human beings.

It does not necessarily preclude self-defense, so long as that self-defense is reasonable and proportionate; and it certainly does not in any way preclude breaking laws, damaging property, or disrupting civil order.


Nonviolence means you never throw the first punch.

Nonviolence is not simply a moral position, but a strategic one.

Some of the people you would be harming absolutely deserve it. I don’t believe in ACAB, but I do believe in SCAB, and nearly 30% of police officers are domestic abusers, who absolutely would deserve a good punch to the face. And this is all the more true of ICE officers, who aren’t just regular bastards; they are bastards whose core job is now enforcing the human rights violations of President Donald Trump. Kidnapping people with their unmarked uniforms and unmarked vehicles, ICE is basically the Gestapo.

But it’s still strategically very unwise for us to deploy violence. Why? Two reasons:

  1. Using violence is a sure-fire way to turn most Americans against our cause.
  2. We would probably lose.

Nonviolent protest is nearly twice as effective as violent insurrection. (If you take nothing else from this post, please take that.)

And the reason that nonviolent protest is so effective is that it changes minds.

Violence doesn’t do that; in fact, it tends to make people rally against you. Once you start killing people, even people who were on your side may start to oppose you—let alone anyone who was previously on the fence.

A successful violent revolution results in you having to build a government and enforce your own new laws against a population that largely still disagrees with you—and if you’re a revolution made of ACAB people, that sounds spectacularly difficult!

A successful nonviolent protest movement results in a country that agrees with you—and it’s extremely hard for even a very authoritarian regime to hang onto power when most of the people oppose it.

By contrast, the success rate of violent insurrections is not very high. Why?

Because they have all the guns, you idiot.

States try to maintain a monopoly on violence in their territory. They are usually pretty effective at doing so. Thus attacking a state when you are not a state puts you at a tremendous disadvantage.

Seriously; we are talking about the United States of America right now, the most powerful military hegemon the world has ever seen.

Maybe the people advocating violence don’t really understand this, but the US has not lost a major battle since 1945. Oh, yes, they’ve “lost wars”, but what that really means is that public opinion has swayed too far against the war for them to maintain morale (Vietnam) or their goals for state-building were so over-ambitious that they were basically impossible for anyone to achieve (Iraq and Afghanistan). If you tally up the actual number of soldiers killed, US troops always kill more than they lose, and typically by a very wide margin.


And even with the battles the US lost in WW1 and WW2, they still very much won the actual wars. So genuinely defeating the United States in open military conflict is not something that has happened since… I’m pretty sure the War of 1812.

Basically, advocating for a violent response to Trump is saying that you intend to do something that literally no one in the world—including major world military powers—has been able to accomplish in 200 years. The last time someone got close, the US nuked them.

If the protesters in LA were genuinely the insurrectionists that Trump has been trying to characterize them as, those Marines would not only have been deployed, they would have started shooting. And I don’t know if you realize this, but US Marines are really good at shooting. It’s kind of their thing. Instead of skirmishes with rubber bullets and tear gas, we would have had an absolute bloodbath. It would probably have ended up looking like the Tet Offensive, a battle where “unprepared” US forces “lost” because they lost 6,000 soldiers and “only” killed 45,000 in return. (The US military is so hegemonic that a kill ratio of more than 7 to 1 is considered a “loss” in the media and public opinion.)

Granted, winning a civil war is different from winning a conventional war; even if a civil war broke out, it’s unlikely that nukes would be used on American soil, for instance. But you’re still talking about a battle so uphill it’s more like trying to besiege Edinburgh Castle.

Our best hope in such a scenario, in fact, would probably be to get blue-state governments to assert control over US military forces in their own jurisdiction—which means that antagonizing Gavin Newsom, as I’ve been seeing quite a few leftists doing lately, seems like a really bad idea.

I’m not saying that winning a civil war would be completely impossible. Since we might be able to get blue-state governors to take control of forces in their own states and we would probably get support from Canada, France, and the United Kingdom, it wouldn’t be completely hopeless. But it would be extremely costly, millions of people would die, and victory would by no means be assured despite the overwhelming righteousness of our cause.

How about, for now at least, we stick to the methods that historically have proven twice as effective?

The CBO report on Trump’s terrible new budget

Jun 8 JDN 2460835

And now back to our regularly scheduled programming. We’re back to talking about economics, which in our current environment pretty much always means bad news. The budget the House passed is pretty much the same terrible one Trump proposed.

The Congressional Budget Office (CBO), one of those bureaucratic agencies that most people barely even realize exists, but is actually extremely useful, spectacularly competent, and indeed one of the most important and efficient agencies in the world, has released its official report on the Trump budget that recently passed the House. (Other such agencies include the Bureau of Labor Statistics and the Bureau of Economic Analysis. US economic statistics are among the best in the world—some refer to them as the “gold standard”, but I refuse to insult them in that way.)

The whole thing is pretty long, but you can get a lot of the highlights from the summary tables.

The tables are broken down by the House committee responsible for each set of changes; here are the effects on the federal budget deficit that the CBO predicts for the next 5 and 10 years. For these numbers, positive means more deficit (bad), negative means less deficit (good).

Committee                             5 years      10 years
Agriculture                           -88,304      -238,238
Armed Services                        124,602       143,992
Education and Workforce              -253,295      -349,142
Energy and Commerce                  -247,074      -995,062
Financial Services                       -373        -5,155
Homeland Security                      27,874        67,147
Judiciary                              26,989         6,910
Natural Resources                      -4,789       -20,158
Oversight and Government Reform       -17,449       -50,951
Transportation and Infrastructure        -361       -36,551
Ways and Means                      2,199,403     3,767,402

These are in units of millions of dollars.

Almost all the revenue comes from the Ways and Means committee, because that’s the committee that sets tax rates. (If you hate your taxes, don’t hate the IRS; hate the Ways and Means Committee.) So for all the other committees, we can basically read the effect on the deficit as the change in spending.
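
If you want to tally these numbers yourself, here’s a quick Python sketch (the figures are just transcribed from the 10-year column of the table above, in millions of dollars) that separates the Ways and Means changes from everything else:

```python
# 10-year effects on the deficit, in millions of dollars,
# transcribed from the CBO summary table above.
ten_year = {
    "Agriculture": -238_238,
    "Armed Services": 143_992,
    "Education and Workforce": -349_142,
    "Energy and Commerce": -995_062,
    "Financial Services": -5_155,
    "Homeland Security": 67_147,
    "Judiciary": 6_910,
    "Natural Resources": -20_158,
    "Oversight and Government Reform": -50_951,
    "Transportation and Infrastructure": -36_551,
    "Ways and Means": 3_767_402,
}

tax_changes = ten_year["Ways and Means"]
everything_else = sum(v for k, v in ten_year.items() if k != "Ways and Means")
savings = sum(v for v in ten_year.values() if v < 0)

print(f"Ways and Means (mostly tax cuts): {tax_changes:>12,}")
print(f"All other committees combined:    {everything_else:>12,}")
print(f"Spending cuts alone:              {savings:>12,}")
print(f"Net 10-year change in deficit:    {tax_changes + everything_else:>12,}")
```

Run it and you can see directly that without the Ways and Means changes, the rest of the bill would actually reduce the deficit.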

If this budget makes it through the Senate, Trump will almost certainly sign it into law. If that happens:

We’ll be cutting $238 billion from Agriculture Committee programs: Most of those cuts come from programs that provide food for poor people.

We’ll be adding $144 billion to the military budget, and a further $67 billion to “homeland security” (which here mostly means CBP and ICE). Honestly, I was expecting more, so I’m vaguely relieved.

We’ll be cutting $349 billion from Education and Workforce programs; this is mostly coming from the student loan system, so we can expect much more brutal repayment requirements for people with student loans.

We’ll be cutting almost $1 trillion from Energy and Commerce programs; this is mainly driven by massive cuts to Medicare and Medicaid (why are they handled by this committee? I don’t know).
The bill itself doesn’t clearly specify where those cuts would fall, so the CBO issued another report offering some scenarios for how they could be achieved. Every single scenario results in millions of people losing coverage, and the one that saves the most money would result in 5.5 million people losing some coverage and 2.4 million becoming completely uninsured.

The $20 billion from Natural Resources mostly involves rolling back environmental regulations, cutting renewable energy subsidies, and making it easier to lease federal lands for oil and gas drilling. All of these are bad, and none of them are surprising; but their effect on the budget is pretty small.

The Oversight and Government Reform portion is reducing the budget deficit by $51 billion mainly by forcing federal employees to contribute a larger share of their pensions—which is to say, basically cutting federal salaries across the board. While this has a small effect on the budget, it will impose substantial harm on the federal workforce (which has already been gutted by DOGE).

The Transportation and Infrastructure changes involve expansions of the Coast Guard (why are they not in Armed Services again?) along with across-the-board cuts of anything resembling support for sustainability or renewable energy; but the main way they actually decrease the deficit is by increasing the cost of registering cars. I think they’re trying to look like they are saving money by cutting “wasteful” (read: left-wing) programs, but in fact they mainly just made it more expensive to own a car—which, quite frankly, is probably a good thing from an environmental perspective.

Then, last but certainly not least, we come to the staggering $3.7 trillion increase in our 10-year deficit from the Ways and Means committee. What is this change that is more than twice as large as all the savings from the other committees combined?

Cutting taxes on rich people.

They are throwing some bones to the rest of the population, such as removing the taxes on tips and overtime (temporarily), and making a bunch of other changes to the tax code in terms of deductions and credits and such (because that’s what we needed, a more complicated tax code!); but the majority of the decrease in revenue comes from cutting income taxes, especially at the very highest brackets.

The University of Pennsylvania estimates that the poorest 40% of the population will actually see their after-tax incomes decrease as a result of the bill. Those in the 40th to 80th percentiles will see very little change. Only those in the richest 20% will see meaningful increases in income, and those will be largest for the top 5% and above.

Those in the 95th to 99th percentiles will see the greatest proportional gain, 3.5% of their income.

But the top 0.1% will see by far the greatest absolute gain, each gaining an average of $385,000 per year. Every one of these people already has an annual income of at least $4 million.

The median price of a house in the United States is $416,000.

That is, we are basically handing a free house to everyone in the top 0.1%—every year for the next 10 years.

That is why we’re adding $3.7 trillion to the national debt. So that the top 0.1% can have free houses.

Without these tax cuts, the new budget would actually reduce the deficit—which is really something we ought to be doing, because we’re running a deficit of $1.8 trillion per year and we’re not even in a recession. But because Republicans love nothing more than cutting taxes on the rich—indeed, sometimes it seems it is literally the only thing they care about—we’re going to make the deficit even bigger instead.

I can hope this won’t make it through the Senate, but I’m not holding my breath.

Open World without Level Scaling

Jun 1 JDN 2460828

This week I’m going to take a break from serious content and talk about something a little more fun and frivolous: Video games.

One of my pet peeves about a lot of video games, especially open-world games, is level scaling: As your character levels up and becomes stronger, enemies also become stronger, and so the effects basically cancel out. It’s kinda like inflation: Your wage goes up, but so do the prices, so you feel no change.

This became particularly salient for me when Oblivion Remastered was released, because Oblivion has some of the most egregious level scaling I’ve ever seen in a game. (Skyrim also has level scaling, but it’s not nearly as bad.)

This bothers me for several reasons:

  • It’s frustrating for players, and kinda defeats the point of leveling up: You put in all this effort to make your character stronger, and then your enemies just get stronger too, and it makes no difference.
  • It’s unrealistic and hurts immersion: Even if you are the chosen one, the world shouldn’t revolve around you this much. The undead who lay undisturbed in their tombs for centuries shouldn’t get more powerful over a month just because you did. The dragons laying waste to the countryside shouldn’t be weaker just because you are.
  • It creates incentives to metagame in strange ways: You sometimes want to avoid leveling up because it would make your enemies stronger. (This is especially true in Oblivion, because you can improve your skills without leveling up if you simply never sleep—and it’s actually strategically beneficial to do so. The easiest path to victory in Oblivion is to be a level-1 insomniac through the entire game.)
  • It’s a lazy solution: Rather than find a good way to maintain a constant sense of challenge throughout the game, they just have the hit points and damage automatically adjust.
  • If items are also leveled, it creates even worse incentives: You don’t want to go collect that ancient magical artifact yet, because if you wait a few levels, it will somehow be more powerful. (Oblivion does this, and I hated it so much I installed a mod that made it go away.)

I do appreciate the need to maintain a constant sense of challenge: You don’t want the early game to be absurdly difficult and the late game to be absurdly easy. But I have a proposal for how that could be achieved without level scaling:

  • Each type of enemy always has approximately the same level, so there are no surprises when encountering familiar enemies.
  • Make it possible to avoid or escape most fights, so that if you find yourself outmatched, you can flee and live to fight another day.
  • Make tactics and advantages matter more, so that a well-prepared player can defeat higher-level enemies, and a player who is ambushed by lower-level enemies is still in danger.
  • When it is necessary to face more difficult enemies at lower levels, provide allies to support the player or add other advantages.
  • When it is necessary to face easier enemies at higher levels, make them more numerous or add other disadvantages.
  • When quests would have time limits in the story, give them actual time limits in the game, and consequences for failure. None of this “you arrived just in time!” regardless of whether you went straight there or waited 10 days. This way, quests with easy enemies can still be challenging, because you are on a time limit. (Conversely, if you want players to be able to wait as long as they need to, make that make sense in the story.)
  • As the player levels up, they should change what kind of challenges they take on. The escaped prisoner with only a rusty dagger and the clothes on their back shouldn’t be facing dragons or demons, and the chosen one savior of the world whose sword and armor were forged from dragonbone shouldn’t have any trouble with a gang of bandits.

The last one is very important, so let me elaborate further by offering an example of how progression could—and in my view, should—have worked in Oblivion:

  1. At very low levels, you should mostly be avoiding combat. You can earn money and experience by doing odd jobs in towns or running errands—or by engaging in pickpocketing and burglary. You could hunt deer, because they don’t really fight back. You can maybe defend yourself against wolves or goblins, but only if you really need to.
  2. Then, once you have started improving your combat abilities, you can start taking on easier enemies: Goblins are now no problem, and you can go out hunting for wolves, bears or sabre cats. If you encounter Mythic Dawn cultists, you hope you’re in a city, so that the guards can save you; otherwise, you’d better run.
  3. After that, you can start escorting merchant caravans and taking on bandits.
  4. As you get to moderate levels, you can start facing down Mythic Dawn cultists even when the guards aren’t there to protect you.
  5. Then, you can start facing magical creatures, like trolls and minotaurs.
  6. Then, you can start exploring ancient ruins and facing undead.
  7. Then, once you are getting quite strong, you can fight mages and necromancers.
  8. Finally, once you are very powerful, you can travel through the Oblivion Gates to the Deadlands and face the Daedra. (This is basically travelling to Hell to fight demons.)

Notice how this still provides a steady progression of difficulty and reasonably constant challenge, but it doesn’t require any enemies to scale with you. Goblins are always weak, Daedra are always strong.
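
To make that concrete, here’s a minimal sketch (in Python; the enemy names and numbers are purely illustrative, not taken from any actual game) of what no level scaling means mechanically: enemy stats depend only on the enemy’s type, and the player’s level only affects which quests are suggested, never how strong the enemies are.

```python
# Every enemy type has a fixed level; nothing is rescaled to the player.
ENEMY_LEVELS = {
    "deer": 1,
    "wolf": 2,
    "goblin": 3,
    "bandit": 6,
    "cultist": 10,
    "troll": 14,
    "undead": 18,
    "necromancer": 24,
    "daedra": 30,
}

def spawn_enemy(kind: str) -> dict:
    """Stats are a function of the enemy type alone, never the player's level."""
    level = ENEMY_LEVELS[kind]
    return {"kind": kind, "level": level, "hp": 20 + 10 * level, "damage": 2 + level}

def suggested_quests(player_level: int, quests: dict) -> list:
    """Point the player toward quests whose fixed difficulty is near their level."""
    return [name for name, level in quests.items() if abs(level - player_level) <= 3]

# The same goblin is dangerous at level 1 and trivial at level 20,
# because leveling up changed the player, not the goblin.
print(spawn_enemy("goblin"))
print(suggested_quests(5, {"Clear the goblin cave": 3, "Close an Oblivion Gate": 28}))
```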

Moreover, I think it would be a much more satisfying progression for players: As their character grows more powerful, they can take on foes that they couldn’t before, and enemies that were once difficult become easy.

It does mean that players can’t just do literally anything in any order; but there’s still lots of flexibility in the open world, because there are many different places you can go with various quests to do at any given level of difficulty. (And there should generally be this option when being offered a quest: “I’m not ready for that yet, but I’ll come back later.”)

It might mean that the main quest is too difficult to do without doing some side quests first; but if players want to go through quickly, let them lower the difficulty settings, rather than effectively forcing that on them with level scaling. Moreover, if players want to speedrun higher difficulties by facing opponents that by all rights they should have no hope against, that could be a very compelling challenge—and give them some serious bragging rights if they succeed!

Conversely, if players feel overleveled for a quest they want to do, let them raise the difficulty settings, rather than forcing that on them too. And sometimes being overleveled can be fun; you feel powerful and dangerous.

I would also be all right with making level scaling optional: If some players like to play that way, okay, let them do that. But don’t make us all play that way. (Wartales does this, but in kind of a weird way; if you turn off level scaling, it assigns a difficulty level to each region, which means that the same enemies in Drombach are much more dangerous than they would be in Tiltren. What I want is for quests and enemies to have fixed difficulty levels—not regions.)

Baldur’s Gate 3 did this well: there is absolutely no level scaling in the game. (It helps that Dungeons and Dragons 5E already has a system where proper preparation can allow you to defeat enemies substantially higher level than you are.) It’s not quite as open-world, because there is a fairly clear progression of what order to do things in, at least until you reach Act 3; but if that’s the price we have to pay for no level scaling, I’m willing to live with that.

You hear me, Bethesda? I want no level scaling in Elder Scrolls VI!

How to teach people about vaccines

May 25 JDN 2460821

Vaccines are one of the greatest accomplishments in human history. They have saved hundreds of millions of lives with minimal cost and almost no downside at all. (For everyone who suffers a side effect from a vaccine, I guarantee you: Someone else would have had it much worse from the disease if they hadn’t been vaccinated.)

It’s honestly really astonishing just how much good vaccines have done for humanity.

Thus, it’s a bit of a mystery how there are so many people who oppose vaccines.

But this mystery becomes a little less baffling in light of behavioral economics. People assess the probability of an event mainly based on the availability heuristic: How many examples can they think of when it happened?

Precisely because vaccines have been so effective at preventing disease, we have now reached a point where diseases that were once commonplace are now virtually eradicated. Thus, parents considering whether to vaccinate their children think about whether they know anyone who has gotten sick from that disease, and they can’t think of anyone, so they assume that it’s not a real danger. Then, someone comes along and convinces them (based on utter lies that have been thoroughly debunked) that vaccines cause autism, and they get scared about autism, because they can think of someone they know who has autism.

But of course, the reason that they can’t think of anyone who died from measles or pertussis is because of the vaccines. So I think we need an educational campaign that makes these rates more vivid for people, which plays into the availability heuristic instead of against it.

Here’s my proposal for a little educational game that might help:

It functions quite similarly to a classic tabletop RPG like Dungeons & Dragons, only here the target numbers are based on real figures.


Gather a group of at least 100 people. (With too few people, some of the rarer diseases will likely produce no cases at all.)

Each person needs 3 10-sided dice. Preferably they would be different colors or somehow labeled, because we want one to represent the 100s digit, one the 10s digit, and one the 1s digit. (The numbers you can roll thus range uniformly from 0 to 999.) In TTRPG parlance, this is called a d1000.
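
If it helps to see the roll spelled out, here’s a one-function sketch in Python:

```python
import random

def roll_d1000() -> int:
    """Three d10s read as the hundreds, tens, and ones digits: uniform from 0 to 999."""
    hundreds, tens, ones = (random.randint(0, 9) for _ in range(3))
    return 100 * hundreds + 10 * tens + ones

print(roll_d1000())
```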

Give each person a worksheet that looks like this:

Disease         Before vaccine: Caught?   Before vaccine: Died?   After vaccine: Caught?   After vaccine: Died?
Diphtheria
Measles
Mumps
Pertussis
Polio
Rubella
Smallpox
Tetanus
Hep A
Hep B
Pneumococcal
Varicella

In the first round, use the figures for before the vaccine. In the second round, use the figures for after the vaccine.

For each disease in each round, there will be a certain roll that people need to get in order to not contract the disease: Roll that number or higher, and you are okay; roll below it, and you catch the disease.


Likewise, there will be a certain roll they need to get to survive if they contract it: Roll that number or higher, and you get sick but survive; roll below it, and you die.

Each time, name a disease, and then tell people what they need to roll to not catch it.

Have them all roll, and if they catch it, check off that box.

Then, for everyone who catches it, have them roll again to see if they survive it. If they die, check that box.

Based on the historical incidences which I have converted into lifetime prevalences, the target numbers are as follows:

Disease         Before: roll to not catch   Before: roll to survive   After: roll to not catch   After: roll to survive
Diphtheria              13                          87                         0                         0
Measles                244                           1                         0                         0
Mumps                   66                           0                         2                         0
Pertussis              123                          20                         4                         2
Polio                   20                          89                         0                         0
Rubella                 19                          11                         9                         0
Smallpox                20                          12                         0                         0
Tetanus                  1                         800                         1                        71
Hep A                   37                           1                         4                         1
Hep B                   22                           4                         4                         4
Pneumococcal            19                         103                        11                       119
Varicella              950                           1                       164                         0
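
If you’d rather dry-run the game than gather 100 people, here’s a rough Python sketch of one round using the targets above (only a few rows are filled in here; add the rest the same way). Rolling below the “not catch” target means you catch the disease, and rolling below the “survive” target means you die:

```python
import random

# (roll to not catch, roll to survive) per disease -- a few of the
# before-vaccine targets from the table above.
BEFORE = {
    "Measles": (244, 1),
    "Pertussis": (123, 20),
    "Varicella": (950, 1),
}

def play_round(targets: dict, group_size: int = 100) -> None:
    for disease, (catch_target, survive_target) in targets.items():
        rolls = [random.randint(0, 999) for _ in range(group_size)]
        caught = sum(r < catch_target for r in rolls)
        died = sum(random.randint(0, 999) < survive_target for _ in range(caught))
        print(f"{disease}: {caught} caught, {died} died")

play_round(BEFORE)
```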

What you should expect to see for a group of 100 is something like this (of course the results are random, so it won’t be this exactly):

Disease         Before: number caught   Before: number died   After: number caught   After: number died
Diphtheria              1                       0                     0                     0
Measles                24                       0                     0                     0
Mumps                   7                       0                     0                     0
Pertussis              12                       1                     0                     0
Polio                   2                       0                     0                     0
Rubella                 2                       0                     0                     0
Smallpox                2                       0                     0                     0
Tetanus                 0                       0                     0                     0
Hep A                   4                       0                     0                     0
Hep B                   2                       0                     0                     0
Pneumococcal            2                       1                     1                     1
Varicella              95                       0                    16                     0

You’ll find that not a lot of people have checked those “dead” boxes either before or after the vaccine. So if you just look at death rates, the difference may not seem that stark.

(Of course, over a world as big as ours, it adds up: The difference between the 0.25% death rate of pertussis before the vaccine and 0% today is 20 million people—roughly the number of people who live in the New York City metro area.)

But I think people will notice that a lot more people got sick in the “before-vaccine” world than the “after-vaccine” world. Moreover, those that did get sick will find themselves rolling the dice on dying; they’ll probably be fine, but you never know for sure.

Make sure people also notice that (except for pneumococcal disease), if you do get sick, the roll you need to survive is a lot higher without the vaccine. (If anyone does get unlucky enough to get tetanus in the first round, they’re probably gonna die!)

If anyone brings up autism, you can add an extra round where you roll for that too.

The supposedly “epidemic” prevalence of autism today is… 3.2%.

(Honestly I expected higher than that, but then, I hang around with a lot of queer and neurodivergent people. (So the availability heuristic got me too!))

Thus, what’s the roll to not get autism? 32.

Even with the expansive diagnostic criteria that include a lot of borderline cases like yours truly, you still only need to roll 32 on this d1000 to not get autism.

This means that only about 3 people in your group of 100 should end up getting autism, most likely fewer than the number who were saved from getting measles, mumps, and rubella by the vaccine, comparable to the number saved from getting most of the other diseases—and almost certainly fewer than the number saved from getting varicella.

So even if someone remains absolutely convinced that vaccines cause autism, you can now point out that vaccines also clearly save billions of people from getting sick and millions from dying.

Also, there are different kinds of autism. Some forms might not even be considered a disability if society were more accommodating; others are severely debilitating.

Recently clinicians have started to categorize “profound autism”, the kind that is severely debilitating. This constitutes about 25% of children with autism—but it’s a falling percentage over time, because broader diagnostic criteria are including more people as autistic, but not changing the number who are severely debilitated. (It is controversial exactly what should constitute “profound autism”, but I do think the construct is useful; there’s a big difference between someone like me who can basically function normally with some simple accommodations, and someone who never even learns to talk.)

So you can have the group do another roll, specifically for profound autism; that target number is now only 8.

There’s also one more demonstration you can do.

Aggregating over all these diseases, we can find the overall chance of dying from any of these diseases before and after the vaccine.

Have everyone roll for that, too:

Before the vaccines, the target number is 8. Afterward, it is 1.

If autism was brought up, make that comparison explicit.

Even if 100% of autism cases were caused by vaccines (which, I really must say, is ridiculous, as there’s no credible evidence that vaccines cause autism at all) that would still mean the following:

You are trading off a 32 in 1000 chance of your child being autistic and an 8 in 1000 chance of your child being profoundly autistic, against a 7 in 1000 chance of your child dying (the difference between the before-vaccine death roll of 8 and the after-vaccine death roll of 1).
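
Just to spell out that arithmetic, using only the numbers from the rolls above:

```python
p_autism = 32 / 1000        # supposed lifetime autism prevalence (3.2%)
p_profound = 8 / 1000       # profound autism (~25% of autism cases)
p_death_averted = (8 - 1) / 1000   # death risk without vaccines minus with them

# Even granting the (false) premise that vaccines cause all autism,
# refusing them only makes sense if autism is at least this fraction
# as bad as dying:
print(p_death_averted / p_autism)     # ~0.22, roughly one-fifth
print(p_death_averted / p_profound)   # ~0.88, nearly as bad as dying
```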

If someone is still skeptical of vaccines at this point, you should ask them point-blank:

Do you really think that being autistic is one-fifth as bad as dying?

Do you really think that being profoundly autistic is as bad as dying?