Adverse selection and all-you-can-eat

Jul 7 JDN 2460499

The concept of adverse selection is normally associated with finance and insurance, and it certainly has a lot of important applications there. But finance and insurance are complicated (possibly intentionally?) and a lot of people are intimidated by them, and it turns out there’s a much simpler example of this phenomenon, which most people should find familiar:

All-you-can-eat meals.

At most restaurants, you buy a specific amount of food: One cheeseburger, one large order of fries. But at some, you have another option: You can buy an indeterminate amount of food, as much as you are able to eat at one sitting.

Now think about this from the restaurant’s perspective: How do you price an all-you-can-eat meal and turn a profit? Your cost obviously depends on how much food you need to prepare, but you don’t know exactly how much each customer is going to eat.

Fortunately, you don’t need to! You only need to know how much people will eat on average. As long as the average customer’s meal costs you less than what they paid for it, you will continue to make a profit, even though some customers end up eating more than what they paid for.
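The break-even logic here is just an expected-value calculation. Here is a minimal sketch in Python, with all numbers invented purely for illustration:

```python
# Toy illustration (all numbers invented): an all-you-can-eat price only needs
# to beat the *average* cost of a customer's meal, not every individual meal.

def profit_per_customer(price, meal_costs):
    """Average profit per customer, given the restaurant's cost of each meal."""
    avg_cost = sum(meal_costs) / len(meal_costs)
    return price - avg_cost

# Some meals cost the restaurant only $6; one costs $19—more than the $15
# price. The restaurant still profits, because the average cost is $11.
costs = [6, 8, 10, 12, 19]
print(profit_per_customer(15, costs))  # → 4.0: profitable on average
```

The big eater who consumed $19 worth of food got a bargain, but the restaurant doesn’t care, as long as the averages work out.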

Insurance works the same way: Some people will cash in on their insurance, costing the company money; but most will not, providing the company with revenue. In fact, you could think of an all-you-can-eat meal as a form of food insurance.

So, all you need to do is figure out how much an average person eats in one meal, and price based on that, right?

Wrong. Here’s the problem: The people who eat at your restaurant aren’t a random sample of people. They are specifically the kind of people who eat at all-you-can-eat restaurants.

Someone who eats very little probably won’t want to go to your restaurant very much, because they’ll have to pay a high price for very little food. But someone with a big appetite will go to your restaurant frequently, because they get to eat a large amount of food for that same price.

This means that, on average, your customers will end up eating more than what an average restaurant customer eats. You’ll have to raise the price accordingly—which will make the effect even stronger.

This can end in one of two ways: Either an equilibrium is reached where the price is pretty high and most of the customers have big appetites, or no equilibrium is reached, and the restaurant either goes bankrupt or gets rid of its all-you-can-eat policy.
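The feedback loop above can be sketched in a few lines of Python. Everything here is a made-up toy model (the appetites, the cost ratio, and the margin are all assumptions, not data), but it exhibits both possible endings: a high-price equilibrium with only big eaters left, or a market that unravels entirely:

```python
# Toy adverse-selection model (all numbers invented). Each customer knows the
# retail value of the food they would eat ("appetite"); they attend only if
# that value is at least the posted price. The restaurant's cost per customer
# is cost_ratio * appetite, and it reprices at average cost plus a margin.

def simulate(appetites, cost_ratio, margin=2.0, rounds=50):
    price = cost_ratio * sum(appetites) / len(appetites) + margin
    for _ in range(rounds):
        customers = [a for a in appetites if a >= price]  # small eaters opt out
        if not customers:
            return None  # market unravels: no equilibrium, policy abandoned
        new_price = cost_ratio * sum(customers) / len(customers) + margin
        if new_price == price:
            return price  # stable: only big eaters remain, at a higher price
        price = new_price
    return price

print(simulate([10, 20, 30, 40, 50], cost_ratio=0.5))  # → 19.5 (big eaters only)
print(simulate([10, 20, 30, 40, 50], cost_ratio=1.0))  # → None (unravels)
```

In the first run, the equilibrium price (19.5) ends up above the naive population-average price (17.0), and the smallest eater never attends—exactly the “no low price with a wide variety of customers” outcome. In the second, each price hike drives out more customers until no one is left.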

But there’s basically no way to get the outcome that seems the best, which is a low price and a wide variety of people attending the restaurant. Those who eat very little just won’t show up.

That’s adverse selection. Because there’s no way to charge people who eat more a higher price (other than, you know, not being all-you-can-eat), people will self-select by choosing whether or not to attend, and the people who show up at your restaurant will be the ones with big appetites.

The same thing happens with insurance. Say we’re trying to price health insurance; it isn’t enough to know the average medical expenses of our population, even if we know a lot of specific demographic information. People who are very healthy may choose not to buy insurance, leaving us with only the less-healthy people buying our insurance—which will force us to raise the price of our insurance.

Once again, you’re not getting a random sample; you’re getting a sample of the kind of people who buy health insurance.

Obamacare was specifically designed to prevent this, by imposing a small fine on people who choose not to buy health insurance. The goal was to get more healthy people buying insurance, in order to bring the cost down. It worked, at least for a while—but now that the individual mandate has been nullified, adverse selection will once again rear its ugly head. Had our policymakers better understood this concept, they might not have removed the individual mandate.

Another option might occur to you, analogous to the restaurant: What if we just didn’t offer insurance, and made people pay for all their own healthcare? This would be like the restaurant ending its all-you-can-eat policy and charging for each new serving. Most restaurants do that, so maybe it’s the better option in general?

There are two problems here, one ethical, one economic.

The ethical problem is that people don’t deserve to be sick or injured. They didn’t choose those things. So it isn’t fair to let them suffer or bear all the costs of getting better. As a society, we should share in those costs. We should help people in need. (If you don’t already believe this, I don’t know how to convince you of it. But hopefully most people do already believe this.)

The economic problem is that some healthcare is rarely needed, but very expensive. That’s exactly the sort of situation where insurance makes sense, to spread the cost around. If everyone had to pay for their own care with no insurance at all, then most people who get severe illnesses simply wouldn’t be able to afford it. They’d go massively into debt, go bankrupt—people already do, even with insurance!—and still not even get much of the care they need. It wouldn’t matter that we have good treatments for a lot of cancers now; they are all very expensive, so most people with cancer would be unable to pay for them, and they’d just die anyway.

In fact, the net effect of such a policy would probably be to make us all poorer, because a lot of illness and disability would go untreated, making our workforce less productive. Even if you are very healthy and never need health insurance, it may still be in your own self-interest to support a policy of widespread health insurance, so that sick people get treated and can go back to work.

A world without all-you-can-eat restaurants wouldn’t be so bad. But a world without health insurance would be one in which millions of people suffer needlessly because they can’t afford healthcare.

Why does everyone work full-time?

Jun 30 JDN 2460492

Over 70% of US workers work “full-time”, that is, at least 40 hours a week. The average number of hours worked per week is 33.8, and the average number of overtime hours is only 3.6. So basically, about 2/3 of workers work almost exactly 40 hours per week.

We’re accustomed to this situation, so it may not seem strange to you. But stop and think for a moment: What are the odds that across every industry, exactly 40 hours per week is the most efficient arrangement?

Indeed, there is mounting evidence that in many industries, 40 hours is too much, and something like 35 or even 30 hours would actually be more efficient. Yet we continue to work 40-hour weeks.

This looks like a corner solution: Rather than choosing an optimal amount, we’re all up against some kind of constraint.


What’s the constraint? Well, the government requires (for most workers) that anything above 40 hours per week must be paid as overtime, that is, at a higher wage rate. So it looks like we would all be working more than 40 hours per week, but we hit the upper limit due to these regulations.

Does this mean we would be better off without the regulations? Clearly not. As I just pointed out, the evidence is mounting that 40 hours is too much, not too little. But why, then, would we all be trying to work so many hours?

I believe this is yet another example of hyper-competition, where competition drives us to an inefficient outcome.

Employers value employees who work a lot of hours. Indeed, I contend that they do so far more than makes any rational sense; they seem to care more about how many hours you work than about the actual quality or quantity of your output. Maybe this is because hours worked is easier to measure, or because it seems like a fairer estimate of your effort; but for whatever reason, employers really seem to reward employees who work a lot of hours, regardless of almost everything else.

In the absence of a limit on hours worked, then, employers are going to heap rewards on whoever works the most hours, and so people will be pressured to work more and more hours. Then we would all work ourselves to death, and it’s not even clear that this would be good for GDP.

Indeed, this seems to be what happened, before the 40-hour work week became the standard. In the 1800s, the average American worked over 60 hours per week. It wasn’t until the 1940s that 40-hour weeks became the norm.

But speaking of norms, that also seems to be a big factor here. The truth is, overtime isn’t really that expensive, and employers could be smarter about rewarding good work rather than more hours. But once a norm establishes itself in a society, it can be very hard to change. And right now, the norm is that 40 hours is a “normal” “standard” “full” work week—any more is above and beyond, and any less is inferior.

This is a problem, because a lot of people can’t work 40-hour weeks. Our standard for what makes someone “disabled” isn’t that you can’t work at all; it’s that you can’t work as much as society expects. I wonder how many people are currently living on disability who could have been working part-time, but there just weren’t enough part-time jobs available. The employment rate among people with a disability is only 41%, compared to 77% for the general population.

And it’s not that we need to work this much. Our productivity is now staggeringly high: We produce more than five times as much wealth per hour of work as we did as recently as the 1940s. So in theory, we should be able to live just as well while working one-fifth as much… but that’s clearly not what happened.

Keynes accurately predicted our high level of productivity; but he wrongly predicted that we would work less, when instead we just kept right on working almost as hard as before.

Indeed, it doesn’t even seem like we live five times as well while working just as much. Many things are better now—healthcare, entertainment, and of course electronics—but somehow, we really don’t feel like we are living better lives than our ancestors.

The Economic Policy Institute offers an explanation for this phenomenon: Our pay hasn’t kept up with our productivity.


Up until about 1980, productivity and pay rose in lockstep. But then they started to diverge, and they never again converged. Productivity continued to soar, while real wages only barely increased. The result is that since then, productivity has grown by 64%, and hourly pay has only grown 15%.

This is definitely part of the problem, but I think there’s more to it as well. Housing and healthcare have become so utterly unaffordable in this country that it really doesn’t matter that our cars are nice and our phones are dirt cheap. We are theoretically wealthier now, but most of that extra wealth goes into simply staying healthy and having a home. Our consumption has been necessitized.

If we can solve these problems, maybe people won’t feel a need to work so many hours. Or, maybe competition will continue to pressure them to work those hours… but at least we’ll actually feel richer when we do it.

No, the system is not working as designed

You say you’ve got a real solution…

Well, you know,

We’d all love to see the plan.

“Revolution”, the Beatles


Jun 16 JDN 2460478


There are several different versions of the meme, but they all follow the same basic format: Rejecting the statement “the system is broken and must be fixed”, they endorse the statement “the system is working exactly as intended and must be destroyed”.


This view is not just utterly wrong; it’s also incredibly dangerous.

First of all, it should be apparent to anyone who has ever worked in any large, complex organization—a corporation, a university, even a large nonprofit org—that no human system works exactly as intended. Some obviously function better than others, and most function reasonably well most of the time (probably because those that don’t function well fail and disappear, so there is a sort of natural selection process at work); but even with apparently simple goals and extensive resources, no complex organization will ever be able to coordinate its actions perfectly toward those goals.

But when we’re talking about “the system”, well, first of all:

What exactly is “the system”?

Is it government? Society as a whole? The whole culture, or some subculture? Is it local, national, or international? Are we talking about democracy, or maybe capitalism? The world isn’t just one system; it’s a complex network of interacting systems. So to be quite honest with you, I don’t even know what people are complaining about when they complain about “the system”. All I know is that there is some large institution that they don’t like.

Let’s suppose we can pin that down—say we’re talking about capitalism, for instance, or the US government. Then, there is still the obvious fact that any real-world implementation of a system is going to have failures. Particularly when millions of people are involved, no system is ever going to coordinate exactly toward achieving its goals as efficiently as possible. At best it’s going to coordinate reasonably well and achieve its goals most of the time.

But okay, let’s try to be as charitable as possible here.

What are people trying to say when they say this?

I think that fundamentally this is meant as an expression of Conflict Theory over Mistake Theory: The problems with the world aren’t due to well-intentioned people making honest mistakes, they are due to people being evil. The response isn’t to try to correct their mistakes; it’s to fight them (kill them?), because they are evil.

Well, it is certainly true that evil people exist. There are mass murderers and tyrants, rapists and serial killers. And though they may be less extreme, it is genuinely true that billionaires are disproportionately likely to be psychopaths and that those who aren’t typically share a lot of psychopathic traits.

But does this really look like the sort of system that was designed to optimize payoffs for a handful of psychopaths? Really? You can’t imagine any way that the world could be more optimized for that goal?

How about, say… feudalism?

Not that long ago, historically—less than a millennium—the world was literally ruled by those same sorts of uber-rich psychopaths, and they wielded absolute power over their subjects. In medieval times, your king could confiscate your wealth whenever he chose, or even have you executed on a whim. That system genuinely looks like it’s optimized for the power of a handful of evil people.

Democracy, on the other hand, actually looks like it’s trying to be better. Maybe sometimes it isn’t better—or at least isn’t enough better. But why would they even bother letting us vote, if they were building a system to optimize their own power over us? Why would we have these free speech protections—that allow you to post those memes without going to prison?

In fact, there are places today where near-absolute power really is concentrated in a handful of psychopaths, where authoritarian dictators still act very much like kings of yore. In North Korea or Russia or China, there really is a system in place that’s very well optimized to maximize the power of a few individuals over everyone else.

But in the United States, we don’t have that. Not yet, anyway. Our democracy is flawed and imperilled, but so far, it stands. It needs our constant vigilance to defend it, but so far, it stands.

This is precisely why these ideas are so dangerous.

If you tell people that the system is already as bad as it’s ever going to get, that the only hope now is to burn it all down and build something new, then those people aren’t going to stand up and defend what we still have. They aren’t going to fight to keep authoritarians out of office, because they don’t believe that their votes or donations or protests actually do anything to control who ends up in office.

In other words, they are acting exactly as the authoritarians want them to.

Short of your actual support, the best gift you can give your enemy is apathy.

If all the good people give up on democracy, then it will fail, and we will see something worse in its place. Your belief that the world can’t get any worse can make the world much, much worse.

I’m not saying our system of government couldn’t be radically improved. It absolutely could, even by relatively simple reforms, such as range voting and a universal basic income. But there are people who want to tear it all down, and if they succeed, what they put in its place is almost certainly going to be worse, not better.

That’s what happened in Communist countries, after all: They started with bad systems, they tore them down in the name of making something better—and then they didn’t make something better. They made something worse.

And I don’t think it’s an accident that Marxists are so often Conflict Theorists; Marx himself certainly was. Marx seemed convinced that all we needed to do was tear down the old system, and a new, better system would spontaneously emerge. But that isn’t how any of this works.

Good governance is actually really hard.

Life isn’t simple. People aren’t easy to coordinate. Conflicts of interest aren’t easy to resolve. Coordination failures are everywhere. If you tear down the best systems we have for solving these problems, with no vision at all of what you would replace them with, you’re not going to get something better.

Different people want different things. We have to resolve those disagreements somehow. There are lots of ways we could go about doing that. But so far, some variation on voting seems to be the best method we have for resolving disagreements fairly.

It’s true; some people out there are really just bad people. Some of what even good people want is ultimately not reasonable, or based on false presumptions. (Like people who want to “cut” foreign aid to 5% of the budget—when it is in fact about 1%.) Maybe there is some alternative system out there that could solve these problems better, ensure that only the reasonable voices with correct facts actually get heard.

If so, well, you know:

We’d all love to see the plan.

It’s not enough to recognize that our current system is flawed and posit that something better could exist. You need to actually have a clear vision of what that better system looks like. For if you go tearing down the current system without any idea of what to replace it with, you’re going to end up with something much worse.

Indeed, if you had a detailed plan of how to improve things, it’s quite possible you could convince enough people to get that plan implemented, without tearing down the whole system first.

We’ve done it before, after all:

We ended slavery, then racial segregation. We gave women the right to vote, then integrated them into the workforce. We removed the ban on homosexuality, and then legalized same-sex marriage.


We have a very clear track record of reform working. Things are getting better, on a lot of different fronts. (Maybe not all fronts, I admit.) When the moral case becomes overwhelming, we really can convince people to change their minds and then vote to change our policies.

We do not have such a track record when it comes to revolutions.

Yes, some revolutions have worked out well, such as the one that founded the United States. (But I really cannot emphasize this enough: they had a plan!) But plenty more have worked out very badly. Even France, which turned out okay in the end, had to go through a Napoleon phase first.

Overall, it seems like our odds are better when we treat the system as broken and try to fix it, than when we treat it as evil and try to tear it down.

The world could be a lot better than it is. But never forget: It could also be a lot worse.

Wrongful beneficence

Jun 9 JDN 2460471

One of the best papers I’ve ever read—one that in fact was formative in making me want to be an economist—is Wrongful Beneficence by Chris Meyers.

This paper opened my eyes to a whole new class of unethical behavior: Acts that unambiguously make everyone better off, but nevertheless are morally wrong. Hence, wrongful beneficence.

A lot of economists don’t even seem to believe in such things. They seem convinced that as long as no one is made worse off by a transaction, that transaction must be ethically defensible.

Chris Meyers convinced me that they are wrong.

The key insight here is that it’s still possible to exploit someone even if you make them better off. This happens when they are in a desperate situation and you take advantage of that to get an unfair payoff.


Here is one of the cases Meyers offers to demonstrate this:

Suppose Carole is driving across the desert on a desolate road when her car breaks down. After two days and two nights without seeing a single car pass by, she runs out of water and feels rather certain that she will perish if not rescued soon. Now suppose that Jason happens to drive down this road and finds Carole. He sees that her situation is rather desperate and that she needs (or strongly desires) to get to the nearest town as soon as possible. So Jason offers her a ride but only on the condition that […] [she gives him] her entire net worth, the title to her house and car, all of her money in the bank, and half of her earnings for the next ten years.

Carole obviously is better off than she would be if Jason hadn’t shown up—she might even have died. She freely consented to this transaction—again, because if she didn’t, she might die. Yet it seems absurd to say that Jason has done nothing wrong by making such an exorbitant demand. If he had asked her to pay for gas, or even to compensate him for his time at a reasonable rate, we’d have no objection. But to ask for her life savings, all her assets, and half her earnings for ten years? Obviously unfair—and obviously unethical. Jason is making Carole (a little) better off while making himself (a lot) better off, so everyone is benefited; but what he’s doing is obviously wrong.

Once you recognize that such behavior can exist, you start to see it all over the place, particularly in markets, where corporations are quite content to gouge their customers with high prices and exploit their workers with low wages—but still, technically, we’re better off than we would be with no products and no jobs at all.

Indeed, the central message of Wrongful Beneficence is actually about sweatshop labor: It’s not that the workers are worse off than they would have been (in general, they aren’t); it’s that they are so desperate that corporations can get away with exploiting them with obviously unfair wages and working conditions.

Maybe it would be easier just to move manufacturing back to First World countries?

Right-wingers are fond of making outlandish claims that making products at First World wages would be utterly infeasible; here’s one claiming that an iPhone would need to cost $30,000 if it were made in the US. In fact, the truth is that it would only need to cost about $40 more—because hardly any of its cost is actually going to labor. Most of its price is pure monopoly profit for Apple; most of the rest is components and raw materials. (Of course, if those also had to come from the US, the price would go up more; but even so, we’re talking something like double its original price, not thirty times. Workers in the US are indeed paid a lot more than workers in China; they are also more productive.)
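The arithmetic behind that claim is simple enough to sketch. The figures below are hypothetical round numbers (not Apple’s actual costs), chosen only to show why a small labor share means a small price change:

```python
# Hypothetical round numbers, for illustration only: when labor is a tiny
# share of the retail price, even a 5x wage increase barely moves the price.

def new_price(price, labor_share, wage_multiple):
    """Retail price after multiplying the labor portion of its cost."""
    labor = price * labor_share
    return price - labor + labor * wage_multiple

# A $1000 phone where 1% of the price goes to assembly labor, with those
# workers now paid 5x as much:
print(new_price(1000, 0.01, 5))  # → 1040.0: about $40 more, not 30x the price
```

To get anywhere near a $30,000 phone, labor would have to be nearly the entire cost of the product—the opposite of how modern electronics manufacturing actually works.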

It’s true that actually moving manufacturing from other countries back to the US would be a substantial undertaking, requiring retooling factories, retraining engineers, and so on; but it’s not like we’ve never done that sort of thing before. I’m sure it could not be done overnight; but of course it could be done. We do this sort of thing all the time.

Ironically, this sort of right-wing nonsense actually seems to feed the far left as well, supporting their conviction that all this prosperity around us is nothing more than an illusion, that all our wealth only exists because we steal it from others. But this could scarcely be further from the truth; our wealth comes from technology, not theft. If we offered a fairer bargain to poorer countries, we’d be a bit less rich, but they would be much less poor—the overall wealth in the world would in fact probably increase.

A better argument for not moving manufacturing back to the First World is that many Third World economies would collapse if they stopped manufacturing things for other countries, and that would be disastrous for millions of people.

And free trade really does increase efficiency and prosperity for all.

So, yes; let’s keep on manufacturing goods wherever it is cheapest to do so. But when we decide what’s cheapest, let’s evaluate that based on genuinely fair wages and working conditions, not the absolute cheapest that corporations think they can get away with.

Sometimes they may even decide that it’s not really cheaper to manufacture in poorer countries, because they need advanced technology and highly-skilled workers that are easier to come by in First World countries. In that case, bringing production back here is the right thing to do.

Of course, this raises the question:

What would be fair wages and working conditions?

That’s not so easy to answer. Since workers in Third World countries are less educated than workers in First World countries, and have access to less capital and worse technology, we should in fact expect them to be less productive and therefore get paid less. That may be unfair in some cosmic sense, but it’s not anyone’s fault, and it’s not any particular corporation’s responsibility to fix it.

But when there are products for which less than 1% of the sales price of the product goes to the workers who actually made the product, something is wrong. When the profit margin is often wildly larger than the total amount spent on labor, something is wrong.

It may be that we will never have precise thresholds we can set to decide what definitely is or is not exploitative; but that doesn’t mean we can’t ever recognize it when we see it. There are various institutional mechanisms we could use to enforce better wages and working conditions without ever making such a sharp threshold.

One of the simplest, in fact, is Fair Trade.

Fair Trade is by no means a flawless system; in fact there’s a lot of research debating how effective it is at achieving its goals. But it does seem to be accomplishing something. And it’s a system that we already have in place, operating successfully in many countries; it simply needs to be scaled up (and hopefully improved along the way).

One of the clearest pieces of evidence that it’s helping, in fact, is that farmers are willing to participate in it. That shows that it is beneficent.

Of course, that doesn’t mean that it’s genuinely fair! This could just be another kind of wrongful beneficence. Perhaps Fair Trade is really just less exploitative than all the available alternatives.

If so, then we need something even better still, some new system that will reliably pass on the increased cost for customers all the way down to increased wages for workers.

Fair Trade shows us something else, too: A lot of customers clearly are willing to pay a bit more in order to see workers treated better. Even if they weren’t, maybe they should be forced to. But the fact is, they are! Even those who are most adamantly opposed to Fair Trade can’t deny that people really are willing to pay more to help other people. (Yet another example of obvious altruism that neoclassical economists somehow manage to ignore.) They simply deny that it’s actually helping, which is an empirical matter.

But if this isn’t helping enough, fine; let’s find something else that does.

Go ahead and identify as a season

Jun 2 JDN 2460464

A few weeks back, Fox News was running the story that “kids today are identifying as seasons instead of genders”. I suspected that by “kids today” they meant “one particular person on the Internet”, but in fact it was even worse than that; the one person on the Internet they had used as an example hadn’t actually said what Fox claimed they said.

What they actually said was far more nuanced: It was basically that their fluid gender expression varied based on what kind of clothes they wear, which, naturally, varies with the seasons. So they end up feeling more masculine at certain times of year when they like to wear masculine clothing. Honestly, this would be pretty boring stuff if conservatives hadn’t blown it out of proportion.

But after thinking about it for a while, I decided that I don’t even care if kids want to identify as seasons.

It seems silly. I don’t understand why you’d want to do it. It would probably always feel weird to me. (And what pronouns do you even use for someone who identifies as “summer”?)

But ultimately, it seems completely, utterly harmless. So if there are in fact kids—or adults—out there who really feel that they want to identify their gender with a season, I’m here to tell you now:

Go right ahead and do that.

It’s really astonishing just what upsets conservatives in this world. Poverty? No big deal. Climate change? Probably a hoax or something. War? That’s just how it goes. But kids with weird genders!? The horror! The horror!

I think the reasoning here goes something like this:

  1. Civilization is built upon social constructions.
  2. Social constructions rely upon consensus behavior.
  3. Consensus behavior relies upon shared norms.
  4. Challenging any shared norms challenges all shared norms.
  5. Challenging any norm will cause it to collapse.
  6. Challenging gender norms is challenging a shared norm.
  7. Therefore, challenging gender norms will cause civilization to collapse.

Premises 1 through 3 are true, though I suspect that phrases like “social construction” would actually not sit well with most conservatives. (Part of their whole shtick seems to be that if you simply admit that money, government, and national identity are socially constructed, that in itself will cause them to immediately and irretrievably collapse. Never mind that I can tell you money is made up all day long, and you’ll still be able to spend it.)

Premise 6 is also true, indeed, nearly tautological.

And, indeed, the argument is valid; the conclusion would follow from the premises.

So of course we come to the two premises that aren’t true.


Premise 4 is wrong because you can challenge some norms but not others. I have yet to see anyone seriously challenge the norm against murder, for example. Nor does it even seem especially popular to challenge the norm in favor of democratic voting. But those are the kind of norms that actually sustain our civilization—not gender!

And premise 5 is even worse: A norm that can’t withstand even the slightest challenge is a norm that’s too weak to rely upon in the first place. If our civilization is to be strong and robust, it must allow its norms to be challenged, and those norms must be able to sustain themselves against the challenge. And indeed, if someone were to challenge the norm against murder or the norm in favor of democratic voting, there are plenty of things I could say to reply to that challenge. These norms aren’t arbitrary. They are strong because we can defend them.

What about gender norms? How defensible are they?

Well, uh… not very, it turns out.

The existence of sexes is defensible. Humans are sexually dimorphic, and the vast majority of humans can be readily classified as either male or female. Yes, there are exceptions even to that, and those people count too. But it’s a pretty useful and accurate heuristic to divide our species into two sexes.

But gender norms are so much more than this. We don’t simply recognize that some people have penises and others have vaginas. We attach all sorts of social and behavioral requirements to people based on their bodies, many of which are utterly arbitrary and culturally dependent. (Not all, to be fair: The stereotype that men are stronger than women is itself a very useful and accurate heuristic.)

Worse, we don’t merely assign stereotypes to predict behavior—which might sometimes be useful. We assign norms to control behavior. We tell people who deviate from those norms that they are bad. We abuse them, discriminate against them, ostracize them from society. This is really weird.

And for what?

What benefit do gender norms have?

I can see how norms against murder and in favor of democracy sustain our civilization. I’m just not seeing how norms against using she/her pronouns when you have a penis provide similar support.

It’s true, most human societies throughout history have had strict gender norms, so maybe that’s some sort of evidence in their favor… but how about we at least try not having them for a while? Or just relax them here and there, a little at a time, see how it goes? If indeed it seems to result in some sort of disaster, we’ll stop doing it. But I don’t see how it could—and so far, it hasn’t.

I think maybe the problem here is that conservatives don’t understand how to evaluate norms, or perhaps even that norms can be evaluated. To them, a rule is a rule, and you never challenge the rules, because if there were no rules, there would be chaos and destruction.

But challenging some rules—or even all rules—doesn’t mean having no rules! It means checking to make sure our rules are good rules, and if they aren’t, changing them so they are.

And since I see no particular reason why having two genders is an especially good rule, go ahead, make up some more if you want.

Go ahead and identify as a season, if you really want to.

Medical progress, at least, is real

May 26 JDN 2460457

The following vignettes are about me.

Well, one of them is about me as I actually am. The others are about the person I would have been, if someone very much like me, with the same medical conditions, had been born in a particular place and time. Someone in these times and places probably had actual experiences like this, though of course we’ll never know who they were.

976 BC, the hilled lands near the mouth of the river:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse from the gods, but at least they appreciate my ability to sometimes predict storms. I am lucky to even remain alive, as I am of little use to the tribe. I will most likely remain this way the rest of my life.

24 AD, Rome:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse from the gods, but at least they appreciate my ability to sometimes predict storms. I am lucky that my family was rich enough to teach me reading and mathematics, as I would be of little use for farm work, but can at least be somewhat productive as a scribe and a tutor. I will most likely remain this way the rest of my life.

1024 AD, England:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse imposed upon me by some witchcraft, but at least they appreciate my ability to sometimes predict storms. I am lucky that my family was rich enough to teach me reading and mathematics, as I would be of little use for farm work, but can at least be somewhat productive as a scribe and a tutor. I will most likely remain this way the rest of my life.

2024 AD, Michigan:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain pollens, fragrances, or chemicals, or if I awaken too early, or if I exert myself too much, or when the air pressure changes before a storm. Brain scans detected no gross abnormalities. I have been diagnosed with chronic migraine, but this is more a description of my symptoms than an explanation. I have tried over a dozen different preventative medications; most of them didn’t work at all, some of them worked but gave me intolerable side effects. (One didn’t work at all and put me in the hospital with a severe allergic reaction.) I’ve been more successful with acute medications, which at least work as advertised, but I have to ration them carefully to avoid rebound effects. And the most effective acute medication is a subcutaneous injection that makes me extremely nauseated unless I also take powerful anti-emetics along with it. I have had the most success with botulinum toxin injections, so I will be going back to that soon; but I am also looking into transcranial magnetic stimulation. Currently my condition is severe enough that I can’t return to full-time work, but I am hopeful that with future treatment I will be able to someday. For now, I can at least work as a writer and a tutor. Hopefully things get better soon.

3024 AD, Aegir 7, Ran System:

For a few months when I was fourteen years old, I woke up nearly every day in pain. Often it was mild, but occasionally it was severe. It often seemed to be worse when I encountered certain pollens, fragrances or chemicals, or if I awakened too early, or if I exerted myself too much, or when the air pressure changed before a storm. Brain scans detected no gross abnormalities, only subtle misfiring patterns. Genetic analysis confirmed I had chronic migraine type IVb, and treatment commenced immediately. Acute medications suppressed the pain while I underwent gene therapy and deep-effect transcranial magnetic stimulation. After three months of treatment, I was cured. That was an awful few months, but it’s twenty years behind me now. I can scarcely imagine how it might have impaired my life if it had gone on that whole time.

What is the moral of this story?

Medical progress is real.

Many people often doubt that society has made real progress. And in a lot of ways, maybe it hasn’t. Human nature is still the same, and so many of the problems we suffer have remained the same.

Economically, of course we have had tremendous growth in productivity and output, but it doesn’t really seem to have made us much happier. We have all this stuff, but we’re still struggling and miserable as a handful at the top become spectacularly, disgustingly rich.

Social progress seems to have gone better: Institutions have improved, more of the world is democratic than ever before, and women and minorities are better represented and better protected from oppression. Rates of violence have declined to some of their lowest levels in history. But even then, it’s pretty clear that we have a long, long way to go.

But medical progress is undeniable. We live longer, healthier lives than at any other point in history. Our infant and child mortality rates have plummeted. Even chronic conditions that seem intractable today (such as my chronic migraines) still show signs of progress; in a few generations they should be cured—in surely far less than the thousand years I’ve considered here.

Like most measures of progress, this change wasn’t slow and gradual over thousands of years; it happened remarkably suddenly. Humans went almost 200,000 years without any detectable progress in medicine, using basically the same herbs and tinctures (and a variety of localized and ever-changing superstitions) the entire time. Some of it worked (the herbs and tinctures, at least), but mostly it didn’t. Then, starting around the 18th century, as the Enlightenment took hold and the Industrial Revolution ramped up, everything began to change.

We began to test our medicine and see if it actually worked. (Yes, amazingly, somehow, nobody had actually ever thought to do that before—not in anything resembling a scientific way.) And when we learned that most of it didn’t, we began to develop new methods, and see if those worked; and when they didn’t either, we tried new things instead—until, finally, eventually, we actually found medicines that actually did something, medicines worthy of the name. Our understanding of anatomy and biology greatly improved as well, allowing us to make better predictions about the effects our medicines would have. And after a few hundred years of that—a few hundred, out of two hundred thousand years of our species—we actually reached the point where most medicine is effective and a variety of health conditions are simply curable or preventable, including diseases like malaria and polio that had once literally plagued us.

Scientific medicine brought humanity into a whole new era of existence.

I could have set the first vignette 10,000 years ago without changing it. But the final vignette could probably have been set only 200 years from now. I’m actually assuming remarkable stagnation by putting it in the 31st century; but presumably technological advancement will slow at some point, perhaps after we’ve more or less run out of difficult challenges to resolve. (Then again, for all I know, maybe my 31st century counterpart will be an emulated consciousness, and his chronic pain will be resolved in 17.482 seconds by a code update.)

Indeed, the really crazy thing about all this is that there are still millions of people who don’t believe in scientific medicine, who want to use “homeopathy” or “naturopathy” or “acupuncture” or “chiropractic” or whatever else—who basically want to go back to those same old herbs and tinctures that maybe sometimes kinda worked but probably not and nobody really knows. (I have a cousin who is a chiropractor. I try to be polite about it, but….) They point out the various ways that scientific medicine has failed—and believe me, I am painfully aware of those failures—but when the obvious solution is to improve scientific medicine, they instead want to turn the whole ship around, and go back to what we had before, which was obviously a million times worse.

And don’t tell me it’s harmless: One, it’s a complete waste of resources that could instead have been used for actual scientific medicine. (9% of all out-of-pocket spending on healthcare in the US is on “alternative medicine”—which is to say, on pointless nonsense.) Two, when you have a chronic illness and people keep shoving nonsense treatments in your face, you start to feel blamed for your condition: “Why haven’t you tried [other incredibly stupid idea that obviously won’t work]? You’re so closed-minded! Maybe your illness isn’t really that bad, or you’d be more desperate!” If “alternative medicine” didn’t exist, maybe these people could help me cope with the challenges of living with a chronic illness, or even just sympathize with me, instead of constantly shoving stupid nonsense in my face.

Not everything about the future looks bright.

In particular, I am pessimistic about the near-term future of artificial intelligence, which I think will cause a lot more problems than it solves and does have a small—but not negligible—risk of causing a global catastrophe.

I’m also not very optimistic about climate change; I don’t think it will wipe out our civilization or anything so catastrophic, but I do think it’s going to kill millions of people and we’ve done too little, too late to prevent that. We’re now doing about what we should have been doing in the 1980s.

But I am optimistic about scientific medicine. Every day, new discoveries are made. Every day, new treatments are invented. Yes, there is a lot we haven’t figured out how to cure yet; but people are working on it.

And maybe they could do it faster if we stopped wasting time on stuff that obviously won’t work.

How Effective Altruism hurt me

May 12 JDN 2460443

I don’t want this to be taken the wrong way. I still strongly believe in the core principles of Effective Altruism. Indeed, it’s shockingly hard to deny them, because basically they come out to this:

Doing more good is better than doing less good.

Then again, most people want to do good. Basically everyone agrees that more good is better than less good. So what’s the big deal about Effective Altruism?

Well, in practice, most people put shockingly little effort into trying to ensure that they are doing the most good they can. A lot of people just try to be nice people, without ever concerning themselves with the bigger picture. Many of these people don’t give to charity at all.

Then, even people who do give to charity typically give more or less at random—or worse, in proportion to how much mail those charities send them begging for donations. (Surely you can see how that is a perverse incentive?) They donate to religious organizations, which sometimes do good things, but fundamentally are founded upon ignorance, patriarchy, and lies.

Effective Altruism is a movement intended to fix this, to get people to see the bigger picture and focus their efforts on where they will do the most good. Vet charities not just for their honesty, but also their efficiency and cost-effectiveness:

Just how many mQALY can you buy with that $1?

That part I still believe in. There is a lot of value in assessing which charities are the most effective, and trying to get more people to donate to those high-impact charities.

But there is another side to Effective Altruism, which I now realize has severely damaged my mental health.

That is the sense of obligation to give as much as you possibly can.

Peter Singer is the most extreme example of this. He seems to have mellowed—a little—in more recent years, but in some of his most famous books he uses the following thought experiment:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.

I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.

Basically everyone agrees with this particular decision: Even if you are wearing a very expensive suit that will be ruined, even if you’ll miss something really important like a job interview or even a wedding—most people agree that if you ever come across a drowning child, you should save them.

(Oddly enough, when contemplating this scenario, nobody ever seems to consider the advice that most lifeguards give, which is to throw a life preserver and then go find someone qualified to save the child—because saving someone who is drowning is a lot harder and a lot riskier than most people realize. (“Reach or throw, don’t go.”) But that’s a bit beside the point.)

But Singer argues that we are basically in this position all the time. For somewhere between $500 and $3000, you—yes, you—could donate to a high-impact charity, and thereby save a child’s life.

Does it matter that many other people are better positioned to donate than you are? Does it matter that the child is thousands of miles away and you’ll never see them? Does it matter that there are actually millions of children, and you could never save them all by yourself? Does it matter that you’ll only save a child in expectation, rather than saving some specific child with certainty?

Singer says that none of this matters. For a long time, I believed him.

Now, I don’t.

For, if you actually walked by a drowning child that you could save, only at the cost of missing a wedding and ruining your tuxedo, you clearly should do that. (If it would risk your life, maybe not—and as I alluded to earlier, that’s more likely than you might imagine.) If you wouldn’t, there’s something wrong with you. You’re a bad person.

But most people don’t donate everything they could to high-impact charities. Even Peter Singer himself doesn’t. So if donating is the same as saving the drowning child, it follows that we are all bad people.

(Note: In general, if an ethical theory results in the conclusion that the whole of humanity is evil, there is probably something wrong with that ethical theory.)

Singer has tried to get out of this by saying we shouldn’t “sacrifice things of comparable importance”, and then somehow cash out what “comparable importance” means in such a way that it doesn’t require you to live on the street and eat scraps from trash cans. (Even though the people you’d be donating to largely do live that way.)

I’m not sure that really works, but okay, let’s say it does. Even so, it’s pretty clear that anything you spend money on purely for enjoyment would have to go. You would never eat out at restaurants, unless you could show that the time saved allowed you to get more work done and therefore donate more. You would never go to movies or buy video games, unless you could show that it was absolutely necessary for your own mental functioning. Your life would be work, work, work, then donate, donate, donate, and then do the absolute bare minimum to recover from working and donating so you can work and donate some more.

You would enslave yourself.

And all the while, you’d believe that you were never doing enough, you were never good enough, you are always a terrible person because you try to cling to any personal joy in your own life rather than giving, giving, giving all you have.

I now realize that Effective Altruism, as a movement, had been basically telling me to do that. And I’d been listening.

I now realize that Effective Altruism has given me this voice in my head, which I hear whenever I want to apply for a job or submit work for publication:

If you try, you will probably fail. And if you fail, a child will die.

The “if you try, you will probably fail” is just an objective fact. It’s inescapable. Any given job application or writing submission will probably fail.

Yes, maybe there’s some sort of bundling we could do to reframe that, as I discussed in an earlier post. But basically, this is correct, and I need to accept it.

Now, what about the second part? “If you fail, a child will die.” To most of you, that probably sounds crazy. And it is crazy. It’s way more pressure than any ordinary person should have in their daily life. This kind of pressure should be reserved for neurosurgeons and bomb squads.

But this is essentially what Effective Altruism taught me to believe. It taught me that every few thousand dollars I don’t donate is a child I am allowing to die. And since I can’t donate what I don’t have, it follows that every few thousand dollars I fail to get is another dead child.

And since Effective Altruism is so laser-focused on results above all else, it taught me that it really doesn’t matter whether I apply for the job and don’t get it, or never apply at all; the outcome is the same, and that outcome is that children suffer and die because I had no money to save them.

I think part of the problem here is that Effective Altruism is utilitarian through and through, and utilitarianism has very little place for good enough. There is better and there is worse; but there is no threshold at which you can say that your moral obligations are discharged and you are free to live your life as you wish. There is always more good that you could do, and therefore always more that you should do.

Do we really want to live in a world where to be a good person is to owe your whole life to others?

I do not believe in absolute selfishness. I believe that we owe something to other people. But I no longer believe that we owe everything. Sacrificing my own well-being at the altar of altruism has been incredibly destructive to my mental health, and I don’t think I’m the only one.

By all means, give to high-impact charities. But give a moderate amount—at most, tithe—and then go live your life. You don’t owe the world more than that.

Of men and bears

May 5 JDN 2460436

[CW: rape, violence, crime, homicide]

I think it started on TikTok, but I’m too old for TikTok, so I first saw it on Facebook and Twitter.

Men and women were asked:
“Would you rather be alone in the woods with a man, or a bear?”

Answers seem to have been pretty mixed. Some women still thought a man was a safer choice, but a significant number chose the bear.

Then when the question was changed to a woman, almost everyone chose the woman over the bear.

What can we learn from this?

I think the biggest thing it tells us is that a lot of women are afraid of men. If you are seriously considering the wild animal over the other human being, you’re clearly afraid.

A lot of the discourse on this seems to be assuming that they are right to be afraid, but I’m not so sure.

It’s not that the fear is unfounded: Most women will suffer some sort of harassment, and a sizeable fraction will suffer some sort of physical or sexual assault, at the hands of some men at some point in their lives.

But there is a cost to fear, and I don’t think we’re taking it properly into account here. I’m worried that encouraging women to fear men will only serve to damage relationships between men and women, the vast majority of which are healthy and positive. I’m worried that this fear is really the sort of overreaction to trauma that ends up causing its own kind of harm.

If you think that’s wrong, consider this:

A sizeable fraction of men will be physically assaulted by other men.

Should men fear each other?

Should all men fear all other men?

What does it do to a society when its whole population fears half of its population? Does that sound healthy? Does whatever small increment in security that might provide seem worth it?

Keep in mind that women being afraid of men doesn’t seem to be protecting them from harm right now. So even if there is genuine harm to be feared, the harm of that fear is actually a lot more obvious than the benefit of it. Our entire society becomes fearful and distrustful, and we aren’t actually any safer.

I’m worried that this is like our fear of terrorism, which made us sacrifice our civil liberties without ever clearly making us safer. What are women giving up due to their fear of men? Is it actually protecting them?

If you have any ideas for how we might actually make women safer, let’s hear them. But please, stop saying idiotic things like “Don’t be a rapist.” 95% of men already aren’t, and the 5% who are, are not going to listen to anything you—or I—say to them. (Bystander intervention programs can work. But just telling men to not be rapists does not.)

I’m all for teaching about consent, but it really isn’t that hard to do—and most rapists seem to understand it just fine, they just don’t care. They’ll happily answer on a survey that they “had sex with someone without their consent”. By all means, undermine rape myths; just don’t expect it to dramatically reduce the rate of rape.

I absolutely want to make people safer. But telling people to be afraid of people like me doesn’t actually seem to accomplish that.

And yes, it hurts when people are afraid of you.

This is not a small harm. This is not a minor trifle. Once we are old enough to be seen as “men” rather than “boys” (which seems to happen faster if you’re Black than if you’re White), men know that other people—men and women, but especially women—will fear us. We go through our whole lives having to be careful what we say, how we move, when we touch someone else, because we are shaped like rapists.

When my mother encounters a child, she immediately walks up to the child and starts talking to them, pointing, laughing, giggling. I can’t do that. If I tried to do the exact same thing, I would be seen as a predator. In fact, without children of my own, it’s safer for me to just not interact with children at all, unless they are close friends or family. This is a whole class of joyful, fulfilling experience that I just don’t get to have because people who look like me commit acts of violence.

Normally we’re all about breaking down prejudice, not treating people differently based on how they look—except when it comes to gender, apparently. It’s okay to fear men but not women.

Who is responsible for this?

Well, obviously the ones most responsible are actual rapists.

But they aren’t very likely to listen to me. If I know any rapists, I don’t know that they are rapists. If I did know, I would want them imprisoned. (Which is likely why they wouldn’t tell me if they were.)

Moreover, my odds of actually knowing a rapist are probably lower than you think, because I don’t like to spend time with men who are selfish, cruel, aggressive, misogynist, or hyper-masculine. The fact that 5% of men in general are rapists doesn’t mean that 5% of any non-random sample of men are rapists. I can only think of a few men I have ever known personally who I would even seriously suspect, and I’ve cut ties with all of them.

The fact that psychopaths are not slavering beasts, obviously different from the rest of us, does not mean that there is no way to tell who is a psychopath. It just means that you need to know what you’re actually looking for. When I once saw a glimmer of joy in someone’s eyes as he described the suffering of animals in an experiment, I knew in that moment he was a psychopath. (There are legitimate reasons to harm animals in scientific experiments—but a good person does not enjoy it.) He did not check most of the boxes of the “Slavering Beast theory”: He had many friends; he wasn’t consistently violent; he was a very good liar; he was quite accomplished in life; he was handsome and charismatic. But go through an actual psychopathy checklist, and you realize that every one of these features makes psychopathy more likely, not less.

I’m not even saying it’s easy to detect psychopaths. It’s not. Even experts need to look very closely and carefully, because psychopaths are often very good at hiding. But there are differences. And it really is true that the selfish, cruel, aggressive, misogynist, hyper-masculine men are more likely to be rapists than the generous, kind, gentle, feminist, androgynous men. It’s not a guarantee—there are lots of misogynists who aren’t rapists, and there are men who present as feminists in public but are rapists in private. But it is a tendency nevertheless. You don’t need to treat every man as equally dangerous, and I don’t think it’s healthy to do so.

Indeed, if I had the choice to be alone in the woods with either a gay male feminist or a woman I knew was cruel to animals, I’d definitely choose the man. These differences matter.

And maybe, just maybe, if we could tamp down this fear a little bit, men and women could have healthier interactions with one another and build stronger relationships. Even if the fear is justified, it could still be doing more harm than good.

So are you safer with a man, or a bear?

Let’s go back to the original thought experiment, and consider the actual odds of being attacked. Yes, the number of people actually attacked by bears is far smaller than the number of people actually attacked by men. (It’s also smaller than the number of people attacked by women, by the way.)

This is obviously because we are constantly surrounded by people, and rarely interact with bears.

In other words, that fact alone basically tells us nothing. It could still be true even if bears are far more dangerous than men, because people interact with bears far less often.

The real question is “How likely is an attack, given that you’re alone in the woods with one?”

Unfortunately, I was unable to find any useful statistics on this. There are a lot of vague statements like “Bears don’t usually attack humans” or “Bears only attack when startled or protecting their young”; okay. But how often is “usually”? How often are bears startled? What proportion of bears you might encounter are protecting their young?

So this is really a stab in the dark; but do you think it’s perhaps fair to say that maybe 10% of bear-human close encounters result in an attack?

That doesn’t seem like an unreasonably high number, at least. 90% not attacking sounds like “usually”. Being startled or protecting their young don’t seem like events much rarer than 10%. This estimate could certainly be wrong (and I’m sure it’s not precise), but it seems like the right order of magnitude.

So I’m going to take that as my estimate:

If you are alone in the woods with a bear, you have about a 10% chance of being attacked.

Now, what is the probability that a randomly-selected man would attack you, if you were alone in the woods with him?

This one can be much better estimated. It is roughly equal to the proportion of men who are psychopaths.


Now, figures on this vary too, partly because psychopathy comes in degrees. But at the low end we have about 1.2% of men and 0.3% of women who are really full-blown psychopaths, and at the high end we have about 10% of men and 2% of women who exhibit significant psychopathic traits.

I’d like to note two things about these figures:

  1. It still seems like the man is probably safer than the bear.
  2. Men are only about four or five times as likely to be psychopaths as women.

Admittedly, my bear estimate is very imprecise; so if, say, only 5% of bear encounters result in attacks and 10% of men would attack if you were alone in the woods, men could be more dangerous. But I think it’s unlikely. I’m pretty sure bears are more dangerous.

But the really interesting thing is that people who seemed ambivalent about man versus bear, or even were quite happy to choose the bear, seem quite consistent in choosing women over bears. And I’m not sure the gender difference is really large enough to justify that.

If 1.2% to 10% of men are enough for us to fear all men, why aren’t 0.3% to 2% of women enough for us to fear all women? Is there a threshold at 1% or 5% that flips us from “safe” to “dangerous”?

But aren’t men responsible for most violence, especially sexual violence?

Yes, but probably not by as much as you think.

The vast majority of rapes are committed by men, and most of those are against women. But the figures may not be as lopsided as you imagine; in a given year, about 0.3% of women are raped by a man, and about 0.1% of men are raped by a woman. Over their lifetimes, about 25% of women will be sexually assaulted, and about 5% of men will be. Rapes of men by women have gone even more under-reported than rapes in general, in part because it was only recently that being forced to penetrate someone was counted as a sexual assault—even though it very obviously is.

So men are about 5 times as likely to commit rape as women. That’s a big difference, but I bet it’s a lot smaller than what many of you believed. There are statistics going around that claim that as many as 99% of rapes are committed by men; those statistics are ignoring the “forced to penetrate” assaults, and thus basically defining rape of men by women out of existence.

Indeed, 5 to 1 is quite close to the ratio in psychopathy.

I think that’s no coincidence: In fact, I think it’s largely the case that the psychopaths and the rapists are the same people.

What about homicide?

While men are indeed much more likely to be perpetrators of homicide, they are also much more likely to be victims.

Of about 23,000 homicide offenders in 2022, 15,100 were known to be men, 2,100 were known to be women, and 5,800 were unknown (because we never caught them). Assuming that women are no more or less likely to be caught than men, we can ignore the unknown, and presume that the same gender ratio holds across all homicides: 12% are committed by women.

Of about 22,000 homicides in the US last year, 17,700 victims were men and 3,900 were women. So men are about 4.5 times as likely to be murdered as women in the US. Similar ratios hold in most First World countries (though total numbers are lower).

Overall, this means that men are about 7 times as likely to commit murder, but about 4.5 times as likely to suffer it.
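None of these ratios requires more than simple division; as a quick sanity check on the figures quoted above:

```python
# US homicide figures quoted above (2022): offenders and victims by gender.
male_offenders, female_offenders = 15_100, 2_100
male_victims, female_victims = 17_700, 3_900

# Share of known-offender homicides committed by women, assuming
# unknown offenders follow the same gender ratio as known ones.
female_share = female_offenders / (male_offenders + female_offenders)

offender_ratio = male_offenders / female_offenders  # men vs. women as offenders
victim_ratio = male_victims / female_victims        # men vs. women as victims

print(f"{female_share:.0%}")    # 12%
print(f"{offender_ratio:.1f}")  # 7.2
print(f"{victim_ratio:.1f}")    # 4.5
```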

So if we measure by rate of full-blown psychopathy, men are about 4 times as dangerous as women. If we measure by rate of moderate psychopathy, men are about 5 times as dangerous. If we measure by rate of rape, men are about 5 times as dangerous. And if we measure by rate of homicide, men are about 7 times as dangerous—but mainly to each other.

Put all this together, and I think it’s fair to summarize these results as:

Men are about five times as dangerous as women.

That’s not a small difference. But it’s also not an astronomical one. If you are right to be afraid of all men because they could rape or murder you, why are you not also right to be afraid of all women, who are one-fifth as likely to do the same?

Should we all fear everyone?

Surely you can see that isn’t a healthy way for a society to operate. Yes, there are real dangers in this world; but being constantly afraid of everyone will make you isolated, lonely, paranoid and probably depressed—and it may not even protect you.

It seems like a lot of men responding to the “man or bear” meme were honestly shocked that women are so afraid. If so, they have learned something important. Maybe that’s the value in the meme.

But the fear can be real, even justified, and still be hurting more than it’s helping. I don’t see any evidence that it’s actually making anyone any safer.

We need a better answer than fear.

Everyone includes your mother and Los Angeles

Apr 28 JDN 2460430

What are the chances that artificial intelligence will destroy human civilization?

A bunch of experts were surveyed on that question and similar questions, and half of respondents gave a probability of 5% or more; some gave probabilities as high as 99%.

This is incredibly bizarre.

Most AI experts are people who work in AI. They are actively participating in developing this technology. And yet half of them think that the technology they are working on right now has at least a 5% chance of destroying human civilization!?

It feels to me like they honestly don’t understand what they’re saying. They can’t really grasp at an intuitive level just what a 5% or 10% chance of global annihilation means—let alone a 99% chance.

If something has a 5% chance of killing everyone, we should consider that at least as bad as something that is guaranteed to kill 5% of people.

Probably worse, in fact, because you can recover from losing 5% of the population (we have, several times throughout history). But you cannot recover from losing everyone. So really, it’s like losing 5% of all future people who will ever live—which could be a very large number indeed.

But let’s be a little conservative here, and just count people who already, currently exist, and use 5% of that number.

5% of 8 billion people is 400 million people.

So anyone who is working on AI and also says that AI has a 5% chance of causing human extinction is basically saying: “In expectation, I’m supporting 20 Holocausts.”
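The expected-value arithmetic behind that claim is easy to check directly (the Holocaust comparison depends on which death-toll figure you use, so the sketch below only computes the expected-death total):

```python
# Expected deaths from a 5% chance of killing everyone currently alive.
world_population = 8_000_000_000
p_doom = 0.05

expected_deaths = p_doom * world_population
print(f"{expected_deaths:,.0f}")  # 400,000,000
```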

If you really think the odds are that high, why aren’t you demanding that any work on AI be tried as a crime against humanity? Why aren’t you out there throwing Molotov cocktails at data centers?

(To be fair, Eliezer Yudkowsky is actually calling for a global ban on AI that would be enforced by military action. That’s the kind of thing you should be doing if indeed you believe the odds are that high. But most AI doomsayers don’t call for such drastic measures, and many of them even continue working in AI as if nothing is wrong.)

I think this must be scope neglect, or something even worse.

If you thought a drug had a 99% chance of killing your mother, you would never let her take the drug, and you would probably sue the company for making it.

If you thought a technology had a 99% chance of destroying Los Angeles, you would never even consider working on that technology, and you would want that technology immediately and permanently banned.

So I would like to remind anyone who says they believe the danger is this great and yet continues working in the industry:

Everyone includes your mother and Los Angeles.

If AI destroys human civilization, that means AI destroys Los Angeles. However shocked and horrified you would be if a nuclear weapon were detonated in the middle of Hollywood, you should be at least that shocked and horrified by anyone working on advancing AI, if indeed you truly believe that there is at least a 5% chance of AI destroying human civilization.

But people just don’t seem to think this way. Their minds seem to take on a totally different attitude toward “everyone” than they would take toward any particular person or even any particular city. The notion of total human annihilation is just so remote, so abstract, they can’t even be afraid of it the way they are afraid of losing their loved ones.

This despite the fact that everyone includes all your loved ones.

If a drug had a 5% chance of killing your mother, you might let her take it—but only if that drug was the best way to treat some very serious disease. Chemotherapy can be about that risky—but you don’t go on chemo unless you have cancer.

If a technology had a 5% chance of destroying Los Angeles, I’m honestly having trouble thinking of scenarios in which we would be willing to take that risk. But the closest I can come to it is the Manhattan Project. If you’re currently fighting a global war against fascist imperialists, and they are also working on making an atomic bomb, then being the first to make an atomic bomb may in fact be the best option, even if you know that it carries a serious risk of utter catastrophe.

In any case, I think one thing is clear: You don’t take that kind of serious risk unless there is some very large benefit. You don’t take chemotherapy on a whim. You don’t invent atomic bombs just out of curiosity.

Where’s the huge benefit of AI that would justify taking such a huge risk?

Some forms of automation are clearly beneficial, but so far AI per se seems to have largely made our society worse. ChatGPT lies to us. Robocalls inundate us. Deepfakes endanger journalism. What’s the upside here? It makes a ton of money for tech companies, I guess?

Now, fortunately, I think 5% is too high an estimate.

(Scientific American agrees.)

My own estimate is that, over the next two centuries, there is about a 1% chance that AI destroys human civilization, and only a 0.1% chance that it results in human extinction.

This is still really high.

People seem to have trouble with that too.

“Oh, there’s a 99.9% chance we won’t all die; everything is fine, then?” No. There are plenty of other scenarios that would also be very bad, and a total extinction scenario is so terrible that even a 0.1% chance is not something we can simply ignore.

0.1% of people is still 8 million people.

I find myself in a very odd position: On the one hand, I think the probabilities that doomsayers are giving are far too high. On the other hand, I think the actions that are being taken—even by those same doomsayers—are far too small.

Most of them don’t seem to consider a 5% chance to be worthy of drastic action, while I consider a 0.1% chance to be well worthy of it. I would support a complete ban on all AI research immediately, just from that 0.1%.

The only research we should be doing that is in any way related to AI should involve how to make AI safer—absolutely no one should be trying to make it more powerful or apply it to make money. (Yet in reality, almost the opposite is the case.)

Because 8 million people is still a lot of people.

Is it fair to treat a 0.1% chance of killing everyone as equivalent to killing 0.1% of people?

Well, first of all, we have to consider the uncertainty. The difference between a 0.05% chance and a 0.15% chance is millions of people, but there’s probably no way we can actually measure it that precisely.

But it seems to me that something expected to kill between 4 million and 12 million people would still generally be considered very bad.
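The same population arithmetic gives that range directly (a quick sketch; the 0.05%–0.15% band is inferred from the 4-to-12-million range quoted above):

```python
# Expected deaths at the edges of the uncertainty band around a 0.1% risk.
world_population = 8_000_000_000

for p in (0.0005, 0.001, 0.0015):  # 0.05%, 0.1%, 0.15%
    print(f"{p:.2%} -> {p * world_population:,.0f} expected deaths")
```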

More importantly, there’s also a chance that AI will save people, or have similarly large benefits. We need to factor that in as well. Something that will kill 4-12 million people but also save 15-30 million people is probably still worth doing (but we should also be trying to find ways to minimize the harm and maximize the benefit).

The biggest problem is that we are deeply uncertain about both the upsides and the downsides. There are a vast number of possible outcomes from inventing AI. Many of those outcomes are relatively mundane; some are moderately good, others are moderately bad. But the moral question seems to be dominated by the big outcomes: With some small but non-negligible probability, AI could lead to either a utopian future or an utter disaster.

The way we are leaping directly into applying AI without even being anywhere close to understanding AI seems to me especially likely to lean toward disaster. No other technology has ever become so immediately widespread while also being so poorly understood.

So far, I’ve yet to see any convincing arguments that the benefits of AI are anywhere near large enough to justify this kind of existential risk. In the near term, AI really only promises economic disruption that will largely be harmful. Maybe one day AI could lead us into a glorious utopia of automated luxury communism, but we really have no way of knowing that will happen—and it seems pretty clear that Google is not going to do that.

Artificial intelligence technology is moving too fast. Even if it doesn’t become powerful enough to threaten our survival for another 50 years (which I suspect it won’t), if we continue on our current path of “make money now, ask questions never”, it’s still not clear that we would actually understand it well enough to protect ourselves by then—and in the meantime it is already causing us significant harm for little apparent benefit.

Why are we even doing this? Why does halting AI research feel like stopping a freight train?

I dare say it’s because we have handed over so much power to corporations.

The paperclippers are already here.

Surviving in an ad-supported world

Apr 21 JDN 2460423

Advertising is as old as money—perhaps even older. Scams have likewise been a part of human society since time immemorial.

But I think it’s fair to say that recently, since the dawn of the Internet at least, both advertising and scams have been proliferating, far beyond what they used to be.

We live in an ad-supported world.

News sites are full of ads. Search engines are full of ads. Even shopping sites are full of ads now; we literally came here planning to buy something, but that wasn’t good enough for you; you want us to also buy something else. Most of the ads are for legitimate products; but some are for scams. (And then there’s multi-level marketing, which is somewhere in between: technically not a scam.)

We’re so accustomed to getting spam emails, phone calls, and texts full of ads and scams that we just accept it as a part of our lives. But these are not something people had to live with even 50 years ago. This is a new, fresh Hell we have wrought for ourselves as a civilization.

AI promises to make this problem even worse. AI still isn’t very good at doing anything particularly useful; you can’t actually trust it to drive a truck or diagnose an X-ray. (There are people working on this sort of thing, but they haven’t yet succeeded.) But it’s already pretty good at making spam texts and phone calls. It’s already pretty good at catfishing people. AI isn’t smart enough to really help us, but it is smart enough to hurt us, especially those of us who are most vulnerable.

I think that this causes a great deal more damage to our society than is commonly understood.

It’s not just that ads are annoying (though they are), or that they undermine our attention span (though they do), or that they exploit the vulnerable (though they do).

I believe that an ad-supported world is a world where trust goes to die.

When the vast majority of your interactions with other people involve those people trying to get your money, some of them by outright fraud—but none of them really honestly—you have no choice but to ratchet down your sense of trust. It begins to feel as though financial transactions are the only form of interaction there is in the world.

But in fact most people can be trusted, and should be trusted—you are missing out on a great deal of what makes life worth living if you do not know how to trust.

The question is whom you trust. You should trust people you know, people you interact with personally and directly. Even strangers are more trustworthy than any corporation will ever be. And never are corporations more dishonest than when they are sending out ads.


The more the world fills with ads, the less room it has for trust.

Is there any way to stem this tide? Or are we simply doomed to live in the cyberpunk dystopia our forebears warned about, where everything is for sale and all available real estate is used for advertising?

Ads and scams only exist because they are profitable; so our goal should be to make them no longer profitable.

Here is one very simple piece of financial advice that will help protect you. Indeed, I believe it protects so well that, if everyone followed it consistently, we would stem the tide.

Only give money to people you have sought out yourself.

Only buy things you already knew you wanted.

Yes, of course you must buy things. We live in a capitalist society. You can’t survive without buying things. But this is how buying things should work:

You check your fridge and see you are out of milk. So you put “milk” on your grocery list, you go to the grocery store, you find some milk that looks good, and you buy it.

Or, your car is getting old and expensive to maintain, and you decide you need a new one. You run the numbers on your income and expenses, and come up with a budget for a new car. You go to the dealership, they help you pick out a car that fits your needs and your budget, and you buy it.

Your tennis shoes are getting frayed, and it’s time to replace them. You go online and search for “tennis shoes”, looking up sizes and styles until you find a pair that suits you. You order that pair.

You should be the one to decide that you need a thing, and then you should go out looking for it.

It’s okay to get help searching, or even listen to some sales pitches, as long as the whole thing was your idea from the start.

But if someone calls you, texts you, or emails you, asking for your money for something?

Don’t give them a cent.

Just don’t. Don’t do it. Even if it sounds like a good product. Even if it is a good product. If the product they are selling sounds so great that you decide you actually want to buy it, go look for it on your own. Shop around. If you can, go out of your way to buy it from a competing company.

Your attention is valuable. Don’t reward them for stealing it.

This applies to donations, too. Donation asks aren’t as awful as ads, let alone scams, but they are pretty obnoxious, and they only send those things out because people respond to them. If we all stopped responding, they’d stop sending.

Yes, you absolutely should give money to charity. But you should seek out the charities to donate to. You should use trusted sources (like GiveWell and Charity Navigator) to vet them for their reliability, transparency, and cost-effectiveness.

If you just receive junk mail asking you for donations, feel free to take out any little gifts they gave you (it’s often return address labels, for some reason), and then recycle the rest.

Don’t give to the ones who ask for it. Give to the ones who will use it the best.

Reward the charities that do good, not the charities that advertise well.

This is the rule to follow:

If someone contacts you—if they initiate the contact—refuse to give them any money. Ever.

Does this rule seem too strict? It is quite strict, in fact. It requires you to pass up many seemingly-appealing opportunities, and the more ads there are, the more opportunities you’ll need to pass up.

There may even be a few exceptions; no great harm befalls us if we buy Girl Scout cookies or donate to the ASPCA because the former knocked on our doors and the latter showed us TV ads. (Then again, you could just donate to feminist and animal rights charities without any ads or sales pitches.)

But in general, we live in a society that is absolutely inundated with people accosting us and trying to take our money, and they’re only ever going to stop trying to get our money if we stop giving it to them. They will not stop it out of the goodness of their hearts—no, not even the charities, who at least do have some goodness in their hearts. (And certainly not the scammers, who have none.)

They will only stop if it stops working.

So we need to make it stop working. We need to draw this line.

Trust the people around you, who have earned it. Do not trust anyone who seeks you out asking for money.

Telemarketing calls? Hang up. Spam emails? Delete. Junk mail? Recycle. TV ads? Mute and ignore.

And then, perhaps, future generations won’t have to live in an ad-supported world.