How Effective Altruism hurt me

May 12 JDN 2460443

I don’t want this to be taken the wrong way. I still strongly believe in the core principles of Effective Altruism. Indeed, it’s shockingly hard to deny them, because basically they come out to this:

Doing more good is better than doing less good.

Then again, most people want to do good. Basically everyone agrees that more good is better than less good. So what’s the big deal about Effective Altruism?

Well, in practice, most people put shockingly little effort into trying to ensure that they are doing the most good they can. A lot of people just try to be nice people, without ever concerning themselves with the bigger picture. Many of these people don’t give to charity at all.

Then, even people who do give to charity typically give to charities more or less at random—or worse, in proportion to how much mail those charities send them begging for donations. (Surely you can see how that is a perverse incentive?) They donate to religious organizations, which sometimes do good things, but fundamentally are founded upon ignorance, patriarchy, and lies.

Effective Altruism is a movement intended to fix this, to get people to see the bigger picture and focus their efforts on where they will do the most good. Vet charities not just for their honesty, but also their efficiency and cost-effectiveness:

Just how many mQALY can you buy with that $1?

That part I still believe in. There is a lot of value in assessing which charities are the most effective, and trying to get more people to donate to those high-impact charities.
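
If you want to see what that kind of vetting looks like in practice, here is a back-of-envelope sketch in Python. The $3,000 cost per life saved is the high end of the range I cite later in this post; the 60 QALYs gained per child saved is purely my own illustrative assumption, not a measured figure:

    # Back-of-envelope cost-effectiveness, with assumed (not measured) inputs.
    cost_per_life = 3_000    # dollars per child's life saved (high end of range)
    qalys_per_life = 60      # assumed healthy life-years gained; illustrative only
    mqaly_per_dollar = qalys_per_life * 1_000 / cost_per_life
    print(f"{mqaly_per_dollar:.0f} mQALY per dollar")  # -> 20 mQALY per dollar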

But there is another side to Effective Altruism, which I now realize has severely damaged my mental health.

That is the sense of obligation to give as much as you possibly can.

Peter Singer is the most extreme example of this. He seems to have mellowed—a little—in more recent years, but in some of his most famous books he uses the following thought experiment:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.

I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.

Basically everyone agrees with this particular decision: Even if you are wearing a very expensive suit that will be ruined, even if you’ll miss something really important like a job interview or even a wedding—most people agree that if you ever come across a drowning child, you should save them.

(Oddly enough, when contemplating this scenario, nobody ever seems to consider the advice that most lifeguards give, which is to throw a life preserver and then go find someone qualified to save the child—because saving someone who is drowning is a lot harder and a lot riskier than most people realize. (“Reach or throw, don’t go.”) But that’s a bit beside the point.)

But Singer argues that we are basically in this position all the time. For somewhere between $500 and $3000, you—yes, you—could donate to a high-impact charity, and thereby save a child’s life.

Does it matter that many other people are better positioned to donate than you are? Does it matter that the child is thousands of miles away and you’ll never see them? Does it matter that there are actually millions of children, and you could never save them all by yourself? Does it matter that you’ll only save a child in expectation, rather than saving some specific child with certainty?

Singer says that none of this matters. For a long time, I believed him.

Now, I don’t.

For, if you actually walked by a drowning child that you could save, only at the cost of missing a wedding and ruining your tuxedo, you clearly should do that. (If it would risk your life, maybe not—and as I alluded to earlier, that’s more likely than you might imagine.) If you wouldn’t, there’s something wrong with you. You’re a bad person.

But most people don’t donate everything they could to high-impact charities. Even Peter Singer himself doesn’t. So if donating is the same as saving the drowning child, it follows that we are all bad people.

(Note: In general, if an ethical theory results in the conclusion that the whole of humanity is evil, there is probably something wrong with that ethical theory.)

Singer has tried to get out of this by saying we shouldn’t “sacrifice things of comparable importance”, and then somehow cashing out what “comparable importance” means in such a way that it doesn’t require you to live on the street and eat scraps from trash cans. (Even though the people you’d be donating to largely do live that way.)

I’m not sure that really works, but okay, let’s say it does. Even so, it’s pretty clear that anything you spend money on purely for enjoyment would have to go. You would never eat out at restaurants, unless you could show that the time saved allowed you to get more work done and therefore donate more. You would never go to movies or buy video games, unless you could show that it was absolutely necessary for your own mental functioning. Your life would be work, work, work, then donate, donate, donate, and then do the absolute bare minimum to recover from working and donating so you can work and donate some more.

You would enslave yourself.

And all the while, you’d believe that you were never doing enough, you were never good enough, you were always a terrible person because you tried to cling to any personal joy in your own life rather than giving, giving, giving all you had.

I now realize that Effective Altruism, as a movement, had been basically telling me to do that. And I’d been listening.

I now realize that Effective Altruism has given me this voice in my head, which I hear whenever I want to apply for a job or submit work for publication:

If you try, you will probably fail. And if you fail, a child will die.

The “if you try, you will probably fail” is just an objective fact. It’s inescapable. Any given job application or writing submission will probably fail.

Yes, maybe there’s some sort of bundling we could do to reframe that, as I discussed in an earlier post. But basically, this is correct, and I need to accept it.

Now, what about the second part? “If you fail, a child will die.” To most of you, that probably sounds crazy. And it is crazy. It’s way more pressure than any ordinary person should have in their daily life. This kind of pressure should be reserved for neurosurgeons and bomb squads.

But this is essentially what Effective Altruism taught me to believe. It taught me that every few thousand dollars I don’t donate is a child I am allowing to die. And since I can’t donate what I don’t have, it follows that every few thousand dollars I fail to get is another dead child.

And since Effective Altruism is so laser-focused on results above all else, it taught me that it really doesn’t matter whether I apply for the job and don’t get it, or never apply at all; the outcome is the same, and that outcome is that children suffer and die because I had no money to save them.

I think part of the problem here is that Effective Altruism is utilitarian through and through, and utilitarianism has very little place for good enough. There is better and there is worse; but there is no threshold at which you can say that your moral obligations are discharged and you are free to live your life as you wish. There is always more good that you could do, and therefore always more that you should do.

Do we really want to live in a world where to be a good person is to owe your whole life to others?

I do not believe in absolute selfishness. I believe that we owe something to other people. But I no longer believe that we owe everything. Sacrificing my own well-being at the altar of altruism has been incredibly destructive to my mental health, and I don’t think I’m the only one.

By all means, give to high-impact charities. But give a moderate amount—at most, tithe—and then go live your life. You don’t owe the world more than that.

Of men and bears

May 5 JDN 2460436

[CW: rape, violence, crime, homicide]

I think it started on TikTok, but I’m too old for TikTok, so I first saw it on Facebook and Twitter.

Men and women were asked:
“Would you rather be alone in the woods with a man, or a bear?”

Answers seem to have been pretty mixed. Some women still thought a man was a safer choice, but a significant number chose the bear.

Then when the question was changed to a woman, almost everyone chose the woman over the bear.

What can we learn from this?

I think the biggest thing it tells us is that a lot of women are afraid of men. If you are seriously considering the wild animal over the other human being, you’re clearly afraid.

A lot of the discourse on this seems to be assuming that they are right to be afraid, but I’m not so sure.

It’s not that the fear is unfounded: Most women will suffer some sort of harassment, and a sizeable fraction will suffer some sort of physical or sexual assault, at the hands of some men at some point in their lives.

But there is a cost to fear, and I don’t think we’re taking it properly into account here. I’m worried that encouraging women to fear men will only serve to damage relationships between men and women, the vast majority of which are healthy and positive. I’m worried that this fear is really the sort of overreaction to trauma that ends up causing its own kind of harm.

If you think that’s wrong, consider this:

A sizeable fraction of men will be physically assaulted by other men.

Should men fear each other?

Should all men fear all other men?

What does it do to a society when its whole population fears half of its population? Does that sound healthy? Does whatever small increment in security that might provide seem worth it?

Keep in mind that women being afraid of men doesn’t seem to be protecting them from harm right now. So even if there is genuine harm to be feared, the harm of that fear is actually a lot more obvious than the benefit of it. Our entire society becomes fearful and distrustful, and we aren’t actually any safer.

I’m worried that this is like our fear of terrorism, which made us sacrifice our civil liberties without ever clearly making us safer. What are women giving up due to their fear of men? Is it actually protecting them?

If you have any ideas for how we might actually make women safer, let’s hear them. But please, stop saying idiotic things like “Don’t be a rapist.” 95% of men already aren’t, and the 5% who are, are not going to listen to anything you—or I—say to them. (Bystander intervention programs can work. But just telling men to not be rapists does not.)

I’m all for teaching about consent, but it really isn’t that hard to do—and most rapists seem to understand it just fine, they just don’t care. They’ll happily answer on a survey that they “had sex with someone without their consent”. By all means, undermine rape myths; just don’t expect it to dramatically reduce the rate of rape.

I absolutely want to make people safer. But telling people to be afraid of people like me doesn’t actually seem to accomplish that.

And yes, it hurts when people are afraid of you.

This is not a small harm. This is not a minor trifle. Once we are old enough to be seen as “men” rather than “boys” (which seems to happen faster if you’re Black than if you’re White), men know that other people—men and women, but especially women—will fear us. We go through our whole lives having to be careful what we say, how we move, when we touch someone else, because we are shaped like rapists.

When my mother encounters a child, she immediately walks up to the child and starts talking to them, pointing, laughing, giggling. I can’t do that. If I tried to do the exact same thing, I would be seen as a predator. In fact, without children of my own, it’s safer for me to just not interact with children at all, unless they are close friends or family. This is a whole class of joyful, fulfilling experience that I just don’t get to have because people who look like me commit acts of violence.

Normally we’re all about breaking down prejudice, not treating people differently based on how they look—except when it comes to gender, apparently. It’s okay to fear men but not women.

Who is responsible for this?

Well, obviously the ones most responsible are actual rapists.

But they aren’t very likely to listen to me. If I know any rapists, I don’t know that they are rapists. If I did know, I would want them imprisoned. (Which is likely why they wouldn’t tell me if they were.)

Moreover, my odds of actually knowing a rapist are probably lower than you think, because I don’t like to spend time with men who are selfish, cruel, aggressive, misogynist, or hyper-masculine. The fact that 5% of men in general are rapists doesn’t mean that 5% of any non-random sample of men are rapists. I can only think of a few men I have ever known personally who I would even seriously suspect, and I’ve cut ties with all of them.

The fact that psychopaths are not slavering beasts, obviously different from the rest of us, does not mean that there is no way to tell who is a psychopath. It just means that you need to know what you’re actually looking for. When I once saw a glimmer of joy in someone’s eyes as he described the suffering of animals in an experiment, I knew in that moment he was a psychopath. (There are legitimate reasons to harm animals in scientific experiments—but a good person does not enjoy it.) He did not check most of the boxes of the “Slavering Beast theory”: He had many friends; he wasn’t consistently violent; he was a very good liar; he was quite accomplished in life; he was handsome and charismatic. But go through an actual psychopathy checklist, and you realize that every one of these features makes psychopathy more likely, not less.

I’m not even saying it’s easy to detect psychopaths. It’s not. Even experts need to look very closely and carefully, because psychopaths are often very good at hiding. But there are differences. And it really is true that the selfish, cruel, aggressive, misogynist, hyper-masculine men are more likely to be rapists than the generous, kind, gentle, feminist, androgynous men. It’s not a guarantee—there are lots of misogynists who aren’t rapists, and there are men who present as feminists in public but are rapists in private. But it is a tendency nevertheless. You don’t need to treat every man as equally dangerous, and I don’t think it’s healthy to do so.

Indeed, if I had the choice to be alone in the woods with either a gay male feminist or a woman I knew was cruel to animals, I’d definitely choose the man. These differences matter.

And maybe, just maybe, if we could tamp down this fear a little bit, men and women could have healthier interactions with one another and build stronger relationships. Even if the fear is justified, it could still be doing more harm than good.

So are you safer with a man, or a bear?

Let’s go back to the original thought experiment, and consider the actual odds of being attacked. Yes, the number of people actually attacked by bears is far smaller than the number of people actually attacked by men. (It’s also smaller than the number of people attacked by women, by the way.)

This is obviously because we are constantly surrounded by people, and rarely interact with bears.

In other words, that fact alone basically tells us nothing. It could still be true even if bears are far more dangerous than men, because people interact with bears far less often.

The real question is “How likely is an attack, given that you’re alone in the woods with one?”

Unfortunately, I was unable to find any useful statistics on this. There are a lot of vague statements like “Bears don’t usually attack humans” or “Bears only attack when startled or protecting their young”; okay. But how often is “usually”? How often are bears startled? What proportion of bears you might encounter are protecting their young?

So this is really a stab in the dark; but do you think it’s perhaps fair to say that maybe 10% of bear-human close encounters result in an attack?

That doesn’t seem like an unreasonably high number, at least. 90% not attacking sounds like “usually”. Being startled or protecting their young don’t seem like events much rarer than 10%. This estimate could certainly be wrong (and I’m sure it’s not precise), but it seems like the right order of magnitude.

So I’m going to take that as my estimate:

If you are alone in the woods with a bear, you have about a 10% chance of being attacked.

Now, what is the probability that a randomly-selected man would attack you, if you were alone in the woods with him?

This one can be much better estimated. It is roughly equal to the proportion of men who are psychopaths.


Now, figures on this vary too, partly because psychopathy comes in degrees. But at the low end we have about 1.2% of men and 0.3% of women who are really full-blown psychopaths, and at the high end we have about 10% of men and 2% of women who exhibit significant psychopathic traits.

I’d like to note two things about these figures:

  1. It still seems like the man is probably safer than the bear.
  2. Men are only about four or five times as likely to be psychopaths as women.

Admittedly, my bear estimate is very imprecise; so if, say, only 5% of bear encounters result in attacks and 10% of men would attack if you were alone in the woods, men could be more dangerous. But I think it’s unlikely. I’m pretty sure bears are more dangerous.
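
Here is the comparison laid out explicitly, a minimal sketch that uses my guessed 10% bear figure and the psychopathy ranges above as a stand-in for human attack rates:

    # Crude comparison under stated assumptions: a guessed 10% chance that a
    # close bear encounter ends in an attack, versus psychopathy prevalence
    # as a rough proxy for the chance a random human would attack you.
    p_bear = 0.10                  # my order-of-magnitude guess
    p_man = (0.012, 0.10)          # low and high estimates for men
    p_woman = (0.003, 0.02)        # low and high estimates for women
    print(f"bear:  {p_bear:.1%}")
    print(f"man:   {p_man[0]:.1%} to {p_man[1]:.1%}")
    print(f"woman: {p_woman[0]:.1%} to {p_woman[1]:.1%}")
    # Even the high-end estimate for men merely ties the bear; the low end
    # is roughly an order of magnitude safer.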

But the really interesting thing is that people who seemed ambivalent about man versus bear, or even were quite happy to choose the bear, seem quite consistent in choosing women over bears. And I’m not sure the gender difference is really large enough to justify that.

If 1.2% to 10% of men are enough for us to fear all men, why aren’t 0.3% to 2% of women enough for us to fear all women? Is there a threshold at 1% or 5% that flips us from “safe” to “dangerous”?

But aren’t men responsible for most violence, especially sexual violence?

Yes, but probably not by as much as you think.

The vast majority of rapes are committed by men, and most of those are against women. But the figures may not be as lopsided as you imagine; in a given year, about 0.3% of women are raped by a man, and about 0.1% of men are raped by a woman. Over their lifetimes, about 25% of women will be sexually assaulted, and about 5% of men will be. Rapes of men by women have gone even more under-reported than rapes in general, in part because it was only recently that being forced to penetrate someone was counted as a sexual assault—even though it very obviously is.

So men are about 5 times as likely to commit rape as women. That’s a big difference, but I bet it’s a lot smaller than what many of you believed. There are statistics going around that claim that as many as 99% of rapes are committed by men; those statistics are ignoring the “forced to penetrate” assaults, and thus basically defining rape of men by women out of existence.

Indeed, 5 to 1 is quite close to the ratio in psychopathy.

I think that’s no coincidence: In fact, I think it’s largely the case that the psychopaths and the rapists are the same people.

What about homicide?

While men are indeed much more likely to be perpetrators of homicide, they are also much more likely to be victims.

Of about 23,000 homicide offenders in 2022, 15,100 were known to be men, 2,100 were known to be women, and 5,800 were unknown (because we never caught them). Assuming that women are no more or less likely to be caught than men, we can ignore the unknown, and presume that the same gender ratio holds across all homicides: 12% are committed by women.

Of about 22,000 homicides in the US last year, 17,700 victims were men and 3,900 were women. So men are about 4.5 times as likely as women to be murdered in the US. Similar ratios hold in most First World countries (though total numbers are lower).

Overall, this means that men are about 7 times as likely to commit murder, but about 4.5 times as likely to suffer it.
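
For those who want to check my arithmetic, here it is, with the assumption about unknown offenders made explicit:

    # US homicide counts from above (2022, rounded).
    male_offenders, female_offenders = 15_100, 2_100  # 5,800 unknown ignored,
    male_victims, female_victims = 17_700, 3_900      # assuming same gender ratio

    offender_ratio = male_offenders / female_offenders               # ~7.2
    female_share = female_offenders / (male_offenders + female_offenders)
    victim_ratio = male_victims / female_victims                     # ~4.5

    print(f"men are {offender_ratio:.1f}x as likely to commit murder")
    print(f"women commit {female_share:.0%} of homicides")
    print(f"men are {victim_ratio:.1f}x as likely to be murdered")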

So if we measure by rate of full-blown psychopathy, men are about 4 times as dangerous as women. If we measure by rate of moderate psychopathy, men are about 5 times as dangerous. If we measure by rate of rape, men are about 5 times as dangerous. And if we measure by rate of homicide, men are about 7 times as dangerous—but mainly to each other.

Put all this together, and I think it’s fair to summarize these results as:

Men are about five times as dangerous as women.

That’s not a small difference. But it’s also not an astronomical one. If you are right to be afraid of all men because they could rape or murder you, why are you not also right to be afraid of all women, who are one-fifth as likely to do the same?

Should we all fear everyone?

Surely you can see that isn’t a healthy way for a society to operate. Yes, there are real dangers in this world; but being constantly afraid of everyone will make you isolated, lonely, paranoid and probably depressed—and it may not even protect you.

It seems like a lot of men responding to the “man or bear” meme were honestly shocked that women are so afraid. If so, they have learned something important. Maybe that’s the value in the meme.

But the fear can be real, even justified, and still be hurting more than it’s helping. I don’t see any evidence that it’s actually making anyone any safer.

We need a better answer than fear.

Everyone includes your mother and Los Angeles

Apr 28 JDN 2460430

What are the chances that artificial intelligence will destroy human civilization?

A bunch of experts were surveyed on that question and similar questions, and half of respondents gave a probability of 5% or more; some gave probabilities as high as 99%.

This is incredibly bizarre.

Most AI experts are people who work in AI. They are actively participating in developing this technology. And yet half of them think that the technology they are working on right now has at least a 5% chance of destroying human civilization!?

It feels to me like they honestly don’t understand what they’re saying. They can’t really grasp at an intuitive level just what a 5% or 10% chance of global annihilation means—let alone a 99% chance.

If something has a 5% chance of killing everyone, we should consider that at least as bad as something that is guaranteed to kill 5% of people.

Probably worse, in fact, because you can recover from losing 5% of the population (we have, several times throughout history). But you cannot recover from losing everyone. So really, it’s like losing 5% of all future people who will ever live—which could be a very large number indeed.

But let’s be a little conservative here, and just count people who already, currently exist, and use 5% of that number.

5% of 8 billion people is 400 million people.

So anyone who is working on AI and also says that AI has a 5% chance of causing human extinction is basically saying: “In expectation, I’m supporting 20 Holocausts.”
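
The arithmetic, for the record (the 17 million figure for the Holocaust’s total death toll is one common estimate; estimates vary, which is why I rounded down to 20):

    # Treat a 5% chance of killing everyone as an expected death toll.
    p_doom = 0.05
    population = 8_000_000_000
    expected_deaths = p_doom * population          # 400 million
    holocaust_deaths = 17_000_000                  # rough total, all victims
    print(f"{expected_deaths / 1e6:.0f} million expected deaths")
    print(f"~{expected_deaths / holocaust_deaths:.0f} Holocausts")  # ~24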

If you really think the odds are that high, why aren’t you demanding that any work on AI be tried as a crime against humanity? Why aren’t you out there throwing Molotov cocktails at data centers?

(To be fair, Eliezer Yudkowsky is actually calling for a global ban on AI that would be enforced by military action. That’s the kind of thing you should be doing if indeed you believe the odds are that high. But most AI doomsayers don’t call for such drastic measures, and many of them even continue working in AI as if nothing is wrong.)

I think this must be scope neglect, or something even worse.

If you thought a drug had a 99% chance of killing your mother, you would never let her take the drug, and you would probably sue the company for making it.

If you thought a technology had a 99% chance of destroying Los Angeles, you would never even consider working on that technology, and you would want that technology immediately and permanently banned.

So I would like to remind anyone who says they believe the danger is this great and yet continues working in the industry:

Everyone includes your mother and Los Angeles.

If AI destroys human civilization, that means AI destroys Los Angeles. However shocked and horrified you would be if a nuclear weapon were detonated in the middle of Hollywood, you should be at least that shocked and horrified by anyone working on advancing AI, if indeed you truly believe that there is at least a 5% chance of AI destroying human civilization.

But people just don’t seem to think this way. Their minds seem to take on a totally different attitude toward “everyone” than they would take toward any particular person or even any particular city. The notion of total human annihilation is just so remote, so abstract, they can’t even be afraid of it the way they are afraid of losing their loved ones.

This despite the fact that everyone includes all your loved ones.

If a drug had a 5% chance of killing your mother, you might let her take it—but only if that drug was the best way to treat some very serious disease. Chemotherapy can be about that risky—but you don’t go on chemo unless you have cancer.

If a technology had a 5% chance of destroying Los Angeles, I’m honestly having trouble thinking of scenarios in which we would be willing to take that risk. But the closest I can come to it is the Manhattan Project. If you’re currently fighting a global war against fascist imperialists, and they are also working on making an atomic bomb, then being the first to make an atomic bomb may in fact be the best option, even if you know that it carries a serious risk of utter catastrophe.

In any case, I think one thing is clear: You don’t take that kind of serious risk unless there is some very large benefit. You don’t take chemotherapy on a whim. You don’t invent atomic bombs just out of curiosity.

Where’s the huge benefit of AI that would justify taking such a huge risk?

Some forms of automation are clearly beneficial, but so far AI per se seems to have largely made our society worse. ChatGPT lies to us. Robocalls inundate us. Deepfakes endanger journalism. What’s the upside here? It makes a ton of money for tech companies, I guess?

Now, fortunately, I think 5% is too high an estimate.

(Scientific American agrees.)

My own estimate is that, over the next two centuries, there is about a 1% chance that AI destroys human civilization, and only a 0.1% chance that it results in human extinction.

This is still really high.

People seem to have trouble with that too.

“Oh, there’s a 99.9% chance we won’t all die; everything is fine, then?” No. There are plenty of other scenarios that would also be very bad, and a total extinction scenario is so terrible that even a 0.1% chance is not something we can simply ignore.

0.1% of people is still 8 million people.

I find myself in a very odd position: On the one hand, I think the probabilities that doomsayers are giving are far too high. On the other hand, I think the actions that are being taken—even by those same doomsayers—are far too small.

Most of them don’t seem to consider a 5% chance to be worthy of drastic action, while I consider a 0.1% chance to be well worthy of it. I would support a complete ban on all AI research immediately, just from that 0.1%.

The only research we should be doing that is in any way related to AI should involve how to make AI safer—absolutely no one should be trying to make it more powerful or apply it to make money. (Yet in reality, almost the opposite is the case.)

Because 8 million people is still a lot of people.

Is it fair to treat a 0.1% chance of killing everyone as equivalent to killing 0.1% of people?

Well, first of all, we have to consider the uncertainty. The difference between a 0.05% chance and a 0.15% chance is millions of people, but there’s probably no way we can actually measure it that precisely.

But it seems to me that something expected to kill between 4 million and 12 million people would still generally be considered very bad.

More importantly, there’s also a chance that AI will save people, or have similarly large benefits. We need to factor that in as well. Something that will kill 4-12 million people but also save 15-30 million people is probably still worth doing (but we should also be trying to find ways to minimize the harm and maximize the benefit).
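
Here is that expected-value reasoning sketched out, using my 0.1% point estimate with an (assumed) 0.05%–0.15% uncertainty band, and the purely illustrative benefit range from the paragraph above:

    population = 8_000_000_000

    # Expected deaths across the uncertainty band.
    for p in (0.0005, 0.001, 0.0015):
        print(f"p = {p:.2%}: {p * population / 1e6:.0f} million expected deaths")
    # -> 4, 8, and 12 million

    # Illustrative offsetting benefit: lives saved by AI.
    kill_low, kill_high = 4e6, 12e6
    save_low, save_high = 15e6, 30e6
    print(f"net benefit: {(save_low - kill_high) / 1e6:.0f} to "
          f"{(save_high - kill_low) / 1e6:.0f} million lives")
    # -> 3 to 26 million; positive, but the spread is enormous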

The biggest problem is that we are deeply uncertain about both the upsides and the downsides. There are a vast number of possible outcomes from inventing AI. Many of those outcomes are relatively mundane; some are moderately good, others are moderately bad. But the moral question seems to be dominated by the big outcomes: With some small but non-negligible probability, AI could lead to either a utopian future or an utter disaster.

The way we are leaping directly into applying AI without even being anywhere close to understanding AI seems to me especially likely to lean toward disaster. No other technology has ever become so immediately widespread while also being so poorly understood.

So far, I’ve yet to see any convincing arguments that the benefits of AI are anywhere near large enough to justify this kind of existential risk. In the near term, AI really only promises economic disruption that will largely be harmful. Maybe one day AI could lead us into a glorious utopia of automated luxury communism, but we really have no way of knowing that will happen—and it seems pretty clear that Google is not going to do that.

Artificial intelligence technology is moving too fast. Even if it doesn’t become powerful enough to threaten our survival for another 50 years (which I suspect it won’t), if we continue on our current path of “make money now, ask questions never”, it’s still not clear that we would actually understand it well enough to protect ourselves by then—and in the meantime it is already causing us significant harm for little apparent benefit.

Why are we even doing this? Why does halting AI research feel like stopping a freight train?

I dare say it’s because we have handed over so much power to corporations.

The paperclippers are already here.

Surviving in an ad-supported world

Apr 21 JDN 2460423

Advertising is as old as money—perhaps even older. Scams have likewise been a part of human society since time immemorial.

But I think it’s fair to say that recently, since the dawn of the Internet at least, both advertising and scams have been proliferating, far beyond what they used to be.

We live in an ad-supported world.

News sites are full of ads. Search engines are full of ads. Even shopping sites are full of ads now; we literally came here planning to buy something, but that wasn’t good enough for you; you want us to also buy something else. Most of the ads are for legitimate products; but some are for scams. (And then there’s multi-level marketing, which is somewhere in between: technically not a scam.)

We’re so accustomed to getting spam emails, phone calls, and texts full of ads and scams that we just accept it as a part of our lives. But these are not something people had to live with even 50 years ago. This is a new, fresh Hell we have wrought for ourselves as a civilization.

AI promises to make this problem even worse. AI still isn’t very good at doing anything particularly useful; you can’t actually trust it to drive a truck or diagnose an X-ray. (There are people working on this sort of thing, but they haven’t yet succeeded.) But it’s already pretty good at making spam texts and phone calls. It’s already pretty good at catfishing people. AI isn’t smart enough to really help us, but it is smart enough to hurt us, especially those of us who are most vulnerable.

I think that this causes a great deal more damage to our society than is commonly understood.

It’s not just that ads are annoying (though they are), or that they undermine our attention span (though they do), or that they exploit the vulnerable (though they do).

I believe that an ad-supported world is a world where trust goes to die.

When the vast majority of your interactions with other people involve those people trying to get your money, some of them by outright fraud—but none of them really honestly—you have no choice but to ratchet down your sense of trust. It begins to feel as if financial transactions are the only form of interaction there is in the world.

But in fact most people can be trusted, and should be trusted—you are missing out on a great deal of what makes life worth living if you do not know how to trust.

The question is whom you trust. You should trust people you know, people you interact with personally and directly. Even strangers are more trustworthy than any corporation will ever be. And never are corporations more dishonest than when they are sending out ads.


The more the world fills with ads, the less room it has for trust.

Is there any way to stem this tide? Or are we simply doomed to live in the cyberpunk dystopia our forebears warned about, where everything is for sale and all available real estate is used for advertising?

Ads and scams only exist because they are profitable; so our goal should be to make them no longer profitable.

Here is one very simple piece of financial advice that will help protect you. Indeed, I believe it protects so well that, if everyone followed it consistently, we would stem the tide.

Only give money to people you have sought out yourself.

Only buy things you already knew you wanted.

Yes, of course you must buy things. We live in a capitalist society. You can’t survive without buying things. But this is how buying things should work:

You check your fridge and see you are out of milk. So you put “milk” on your grocery list, you go to the grocery store, you find some milk that looks good, and you buy it.

Or, your car is getting old and expensive to maintain, and you decide you need a new one. You run the numbers on your income and expenses, and come up with a budget for a new car. You go to the dealership, they help you pick out a car that fits your needs and your budget, and you buy it.

Your tennis shoes are getting frayed, and it’s time to replace them. You go online and search for “tennis shoes”, looking up sizes and styles until you find a pair that suits you. You order that pair.

You should be the one to decide that you need a thing, and then you should go out looking for it.

It’s okay to get help searching, or even listen to some sales pitches, as long as the whole thing was your idea from the start.

But if someone calls you, texts you, or emails you, asking for your money for something?

Don’t give them a cent.

Just don’t. Don’t do it. Even if it sounds like a good product. Even if it is a good product. If the product they are selling sounds so great that you decide you actually want to buy it, go look for it on your own. Shop around. If you can, go out of your way to buy it from a competing company.

Your attention is valuable. Don’t reward them for stealing it.

This applies to donations, too. Donation asks aren’t as awful as ads, let alone scams, but they are pretty obnoxious, and they only send those things out because people respond to them. If we all stopped responding, they’d stop sending.

Yes, you absolutely should give money to charity. But you should seek out the charities to donate to. You should use trusted sources (like GiveWell and Charity Navigator) to vet them for their reliability, transparency, and cost-effectiveness.

If you just receive junk mail asking you for donations, feel free to take out any little gifts they gave you (it’s often return address labels, for some reason), and then recycle the rest.

Don’t give to the ones who ask for it. Give to the ones who will use it the best.

Reward the charities that do good, not the charities that advertise well.

This is the rule to follow:

If someone contacts you—if they initiate the contact—refuse to give them any money. Ever.

Does this rule seem too strict? It is quite strict, in fact. It requires you to pass up many seemingly-appealing opportunities, and the more ads there are, the more opportunities you’ll need to pass up.

There may even be a few exceptions; no great harm befalls us if we buy Girl Scout cookies or donate to the ASPCA because the former knocked on our doors and the latter showed us TV ads. (Then again, you could just donate to feminist and animal rights charities without any ads or sales pitches.)

But in general, we live in a society that is absolutely inundated with people accosting us and trying to take our money, and they’re only ever going to stop trying to get our money if we stop giving it to them. They will not stop it out of the goodness of their hearts—no, not even the charities, who at least do have some goodness in their hearts. (And certainly not the scammers, who have none.)

They will only stop if it stops working.

So we need to make it stop working. We need to draw this line.

Trust the people around you, who have earned it. Do not trust anyone who seeks you out asking for money.

Telemarketing calls? Hang up. Spam emails? Delete. Junk mail? Recycle. TV ads? Mute and ignore.

And then, perhaps, future generations won’t have to live in an ad-supported world.

The Butlerian Jihad is looking better all the time

Mar 24 JDN 2460395

A review of The Age of Em by Robin Hanson

In the Dune series, the Butlerian Jihad was a holy war against artificial intelligence that resulted in a millennia-long taboo against all forms of intelligent machines. It was effectively a way to tell a story about the distant future without basically everything being about robots or cyborgs.

After reading Robin Hanson’s book, I’m starting to think that maybe we should actually do it.

Thus it is written: “Thou shalt not make a machine in the likeness of a human mind.”

Hanson says he’s trying to reserve judgment and present objective predictions without evaluation, but it becomes very clear throughout that this is the future he wants, as well as—or perhaps even instead of—the world he expects.

In many ways, it feels like he has done his very best to imagine a world of true neoclassical rational agents in perfect competition, a sort of sandbox for the toys he’s always wanted to play with. Throughout he very much takes the approach of a neoclassical economist, making heroic assumptions and then following them to their logical conclusions, without ever seriously asking whether those assumptions actually make any sense.

To his credit, Hanson does not buy into the hype that AGI will be successful any day now. He predicts that we will achieve the ability to fully emulate human brains and thus create a sort of black-box AGI that behaves very much like a human within about 100 years. Given how the Blue Brain Project has progressed (much slower than its own hype machine told us it would—and let it be noted that I predicted this from the very beginning), I think this is a fairly plausible time estimate. He refers to a mind emulated in this way as an “em”; I have mixed feelings about the term, but I suppose we did need some word for that, and it certainly has conciseness on its side.

Hanson believes that a true understanding of artificial intelligence will only come later, and the sort of AGI that can be taken apart and reprogrammed for specific goals won’t exist for at least a century after that. Both of these sober, reasonable predictions are deeply refreshing in a field that’s been full of people saying “any day now” for the last fifty years.

But Hanson’s reasonableness just about ends there.

In The Age of Em, government is exactly as strong as Hanson needs it to be. Somehow it simultaneously ensures a low crime rate among a population that doubles every few months while having no means of preventing that population growth. Somehow it ensures that there is no labor collusion and that corporations never break the law, but without imposing any regulations that might reduce efficiency in any way.

All of this begins to make more sense when you realize that Hanson’s true goal here is to imagine a world where neoclassical economics is actually true.

He realized it didn’t work on humans, so instead of giving up the theory, he gave up the humans.

Hanson predicts that ems will casually make short-term temporary copies of themselves called “spurs”, designed to perform a particular task and then get erased. I guess maybe he would, but I for one would not so cavalierly create another person and then make their existence dedicated to doing a single job before they die. The fact that I created this person, and they are very much like me, seem like reasons to care more about their well-being, not less! You’re asking me to enslave and murder my own child. (Honestly, the fact that Robin Hanson thinks ems will do this all the time says more about Robin Hanson than anything else.) Any remotely sane society of ems would ban the deletion of another em under any but the most extreme circumstances, and indeed treat it as tantamount to murder.

Hanson predicts that we will only copy the minds of a few hundred people. This is surely true at some point—the technology will take time to develop, and we’ll have to start somewhere. But I don’t see why we’d stop there, when we could continue to copy millions or billions of people; and his choices of who would be emulated, while not wildly implausible, are utterly terrifying.

He predicts that we’d emulate genius scientists and engineers; okay, fair enough, that seems right. I doubt that the benefits of doing so will be as high as many people imagine, because scientific progress actually depends a lot more on the combined efforts of millions of scientists than on rare sparks of brilliance by lone geniuses; but those people are definitely very smart, and having more of them around could be a good thing. I can also see people wanting to do this, and thus investing in making it happen.

He also predicts that we’d emulate billionaires. Now, as a prediction, I have to admit that this is actually fairly plausible; billionaires are precisely the sort of people who are rich enough to pay to be emulated and narcissistic enough to want to. But where Hanson really goes off the deep end here is that he sees this as a good thing. He seems to honestly believe that billionaires are so rich because they are so brilliant and productive. He thinks that a million copies of Elon Musk would produce a million hectobillionaires—when in reality it would produce a million squabbling narcissists, who would at best have to split the same $200 billion of wealth between them, and might very well end up with less because they squander it.

Hanson has a long section on trying to predict the personalities of ems. Frankly this could just have been dropped entirely; it adds almost nothing to the book, and the book is much too long. But the really striking thing to me about that section is what isn’t there. He goes through a long list of studies that found weak correlations between various personality traits like extroversion or openness and wealth—mostly comparing something like the 20th percentile to the 80th percentile—and then draws sweeping conclusions about what ems will be like, under the assumption that ems are all drawn from people in the 99.99999th percentile. (Yes, upper-middle-class people are, on average, more intelligent and more conscientious than lower-middle-class people. But do we even have any particular reason to think that the personalities of people who make $150,000 are relevant to understanding the behavior of people who make $15 billion?) But he completely glosses over the very strong correlations that specifically apply to people in that very top super-rich class: They’re almost all narcissists and/or psychopaths.

Hanson predicts a world where each em is copied many, many times—millions, billions, even trillions of times, and also in which the very richest ems are capable of buying parallel processing time that lets them accelerate their own thought processes to a million times faster than a normal human. (Is that even possible? Does consciousness work like that? Who knows!?) The world that Hanson is predicting is thus one where all the normal people get outnumbered and overpowered by psychopaths.

Basically this is the most abjectly dystopian cyberpunk hellscape imaginable. And he talks about it the whole time as if it were good.

It’s like he played the game Action Potential and thought, “This sounds great! I’d love to live there!” I mean, why wouldn’t you want to owe a life-debt on your own body and have to work 120-hour weeks for a trillion-dollar corporation just to make the payments on it?

Basically, Hanson doesn’t understand how wealth is actually acquired. He is educated as an economist, yet his understanding of capitalism basically amounts to believing in magic. He thinks that competitive markets just somehow perfectly automatically allocate wealth to whoever is most productive, and thus concludes that whoever is wealthy now must just be that productive.

I can see no other way to explain his wildly implausible predictions that the em economy will double every month or two. A huge swath of the book depends upon this assumption, but he waits until halfway through the book to even try to defend it, and then does an astonishingly bad job of doing so. (Honestly, even if you buy his own arguments—which I don’t—they seem to predict that population would grow with Moore’s Law—doubling every couple of years, not every couple of months.)
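
To see just how heroic that growth assumption is, compare the annual growth factors implied by different doubling times (a quick sketch; the doubling times are the ones from the paragraph above):

    # Annual growth factor implied by a doubling time given in months.
    for label, months in [("1 month (Hanson, fast)", 1),
                          ("2 months (Hanson, slow)", 2),
                          ("24 months (Moore's Law)", 24)]:
        factor = 2 ** (12 / months)
        print(f"doubling every {label}: x{factor:,.1f} per year")
    # -> x4,096.0 and x64.0 per year for Hanson, versus x1.4 for Moore's Law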

Whereas Keynes predicted based on sound economic principles that economic growth would more or less proceed apace and got his answer spot-on, Hanson predicts that for mysterious, unexplained reasons economic growth will suddenly increase by two orders of magnitude—and I’m pretty sure he’s going to be wildly wrong.

Hanson also predicts that ems will be on average poorer than we are, based on some sort of perfect-competition argument that doesn’t actually seem to mesh at all with his predictions of spectacularly rapid economic and technological growth. I think the best way to make sense of this is to assume that it means the trend toward insecure affluence will continue: Ems will have an objectively high standard of living in terms of what they own, what games they play, where they travel, and what they eat and drink (in simulation), but they will constantly be struggling to keep up with the rent on their homes—or even their own bodies. This is a world where (the very finest simulation of) Dom Perignon is $7 a bottle and wages are $980 an hour—but monthly rent is $284,000.
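
Taking those illustrative numbers at face value, the squeeze is easy to quantify:

    # Illustrative em budget from the paragraph above.
    wage = 980                    # dollars per hour
    rent = 284_000                # dollars per month
    hours_for_rent = rent / wage
    print(f"{hours_for_rent:.0f} hours per month just to cover rent")  # ~290
    print(f"~{hours_for_rent / 4.33:.0f} hours per week")  # ~67 (4.33 weeks/month)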

Early in the book Hanson argues that this life of poverty and scarcity will lead to more conservative values, on the grounds that people who are poorer now seem to be more conservative, and this has something to do with farmers versus foragers. Hanson’s explanation of all this is baffling; I will quote it at length, just so it’s clear I’m not misrepresenting it:

The other main (and independent) axis of value variation ranges between poor and rich societies. Poor societies place more value on conformity, security, and traditional values such as marriage, heterosexuality, religion, patriotism, hard work, and trust in authority. In contrast, rich societies place more value on individualism, self-direction, tolerance, pleasure, nature, leisure, and trust. When the values of individuals within a society vary on the same axis, we call this a left/liberal (rich) versus right/conservative (poor) axis.

Foragers tend to have values more like those of rich/liberal people today, while subsistence farmers tend to have values more like those of poor/conservative people today. As industry has made us richer, we have on average moved from conservative/farmer values to liberal/forager values. This value movement can make sense if cultural evolution used the social pressures farmers faced, such as conformity and religion, to induce humans, who evolved to find forager behaviors natural, to instead act like farmers. As we become rich, we don’t as strongly fear the threats behind these social pressures. This connection may result in part from disease; rich people are healthier, and healthier societies fear less.

The alternate theory that we have instead learned that rich forager values are more true predicts that values should have followed a random walk over time, and be mostly common across space. It also predicts the variance of value changes tracking the rate at which relevant information appears. But in fact industrial-era value changes have tracked the wealth of each society in much more steady and consistent fashion. And on this theory, why did foragers ever acquire farmer values?

[…]

In the scenario described in this book, many strange-to-forager behaviors are required, and median per-person (i.e. per-em) incomes return to near-subsistence levels. This suggests that the em era may reverse the recent forager-like trend toward more liberality; ems may have more farmer-like values.

The Age of Em, p. 26-27

There’s a lot to unpack here, but maybe it’s better to burn the whole suitcase.

First of all, it’s not entirely clear that this is really a single axis of variation, that foragers and farmers differ from each other in the same way as liberals and conservatives. There’s some truth to that at least—both foragers and liberals tend to be more generous, both farmers and conservatives tend to enforce stricter gender norms. But there are also clear ways that liberal values radically deviate from forager values: Forager societies are extremely xenophobic, and typically very hostile to innovation, inequality, or any attempts at self-aggrandizement (a phenomenon called “fierce egalitarianism”). San Francisco epitomizes rich, liberal values, but it would be utterly alien and probably regarded as evil by anyone from the Yanomamo.

Second, there is absolutely no reason to predict any kind of random walk. That’s just nonsense. Would you predict that scientific knowledge is a random walk, with each new era’s knowledge just a random deviation from the last’s? Maybe next century we’ll return to geocentrism, or phrenology will be back in vogue? On the theory that liberal values (or at least some liberal values) are objectively correct, we would expect them to advance as knowledge does, improving over time, and improving faster in places that have better institutions for research, education, and free expression. And indeed, this is precisely the pattern we have observed. (Those places are also richer, but that isn’t terribly surprising either!)

Third, while poorer regions are indeed more conservative, poorer people within a region actually tend to be more liberal. Nigeria is poorer and more conservative than Norway, and Mississippi is poorer and more conservative than Massachusetts. But higher-income households in the United States are more likely to vote Republican. I think this is particularly true of people living under insecure affluence: We see the abundance of wealth around us, and don’t understand why we can’t learn to share it better. We’re tired of fighting over scraps while the billionaires claim more and more. Millennials and Zoomers absolutely epitomize insecure affluence, and we also absolutely epitomize liberalism. So, if indeed ems live a life of insecure affluence, we should expect them to be like Zoomers: “Trans liberation now!” and “Eat the rich!” (Or should I say, “Delete the rich!”)

And really, doesn’t that make more sense? Isn’t that the trend our society has been on, for at least the last century? We’ve been moving toward more and more acceptance of women and minorities, more and more deviation from norms, more and more concern for individual rights and autonomy, more and more resistance to authority and inequality.

The funny thing is, that world sounds a lot better than the one Hanson is predicting.

A world of left-wing ems would probably run things a lot better than Hanson imagines: Instead of copying the same hundred psychopaths over and over until we fill the planet, have no room for anything else, and all struggle to make enough money just to stay alive, we could moderate our population to a more sustainable level, preserve diversity and individuality, and work toward living in greater harmony with each other and the natural world. We could take this economic and technological abundance and share it and enjoy it, instead of killing ourselves and each other to make more of it for no apparent reason.

The one good argument Hanson makes here is expressed in a single sentence: “And on this theory, why did foragers ever acquire farmer values?” That actually is a good question; why did we give up on leisure and egalitarianism when we transitioned from foraging to agriculture?

I think scarcity probably is relevant here: As food became scarcer, maybe because of climate change, people were forced into an agricultural lifestyle just to have enough to eat. Early agricultural societies were also typically authoritarian and violent. Under those conditions, people couldn’t be so generous and open-minded; they were surrounded by threats and on the verge of starvation.

I guess if Hanson is right that the em world is also one of poverty and insecurity, we might go back to those sort of values, borne of desperation. But I don’t see any reason to think we’d give up all of our liberal values. I would predict that ems will still be feminist, for instance; in fact, Hanson himself admits that since VR avatars would let us change gender presentation at will, gender would almost certainly become more fluid in a world of ems. Far from valuing heterosexuality more highly (as conservatives do, a “farmer value” according to Hanson), I suspect that ems will have no further use for that construct, because reproduction will be done by manufacturing, not sex, and it’ll be so easy to swap your body into a different one that hardly anyone will even keep the same gender their whole life. They’ll think it’s quaint that we used to identify so strongly with our own animal sexual dimorphism.

But maybe it is true that the scarcity induced by a hyper-competitive em world would make people more selfish, less generous, less trusting, more obsessed with work. Then let’s not do that! We don’t have to build that world! This isn’t a foregone conclusion!

There are many other paths yet available to us.

Indeed, perhaps the simplest would be to just ban artificial intelligence, at least until we can get a better handle on what we’re doing—and perhaps until we can institute the kind of radical economic changes necessary to wrest control of the world away from the handful of psychopaths currently trying their best to run it into the ground.

I admit, it would kind of suck to not get any of the benefits of AI, like self-driving cars, safer airplanes, faster medical research, more efficient industry, and better video games. It would especially suck if we did go full-on Butlerian Jihad and ban anything more complicated than a pocket calculator. (Our lifestyle might have to go back to what it was in—gasp! The 1950s!)

But I don’t think it would suck nearly as much as the world Robin Hanson thinks is in store for us if we continue on our current path.

So I certainly hope he’s wrong about all this.

Fortunately, I think he probably is.

How I feel is how things are

Mar 17 JDN 2460388

One of the most difficult things in life to learn is how to treat your own feelings and perceptions as feelings and perceptions—rather than simply as the way the world is.

A great many errors people make can be traced to this.

When we disagree with someone (whether it is as trivial as pineapple on pizza or as important as international law), we feel like they must be speaking in bad faith, they must be lying—because, to us, they are denying the way the world is. If the subject is important enough, we may become convinced that they are evil—for only someone truly evil could deny such important truths. (Ultimately, even holy wars may come from this perception.)

When we are overconfident, we not only can’t see that we are; we can scarcely even consider the possibility. Because we don’t simply feel confident; we are sure we will succeed. And thus if we do fail, as we often do, the result is devastating; it feels as if the world itself has changed in order to make our wishes not come true.

Conversely, when we succumb to Impostor Syndrome, we feel inadequate, and so become convinced that we are inadequate, and thus that anyone who says they believe we are competent must either be lying or else somehow deceived. And then we fear to tell anyone, because we know that our jobs and our status depend upon other people seeing us as competent—and we are sure that if they knew the truth, they’d no longer see us that way.

When people see their beliefs as reality, they don’t even bother to check whether their beliefs are accurate.

Why would you need to check whether the way things are is the way things are?

This is how common misconceptions persist—the information needed to refute them is widely available, but people simply don’t realize they need to go looking for it.

For lots of things, misconceptions aren’t very consequential. But some common misconceptions do have large consequences.

For instance, most Americans think that crime is increasing and is worse now than it was 30 or 50 years ago. (I tested this on my mother this morning; she thought so too.) It is in fact much, much better—violent crimes are about half as common in the US today as they were in the 1970s. Republicans are more likely to get this wrong than Democrats—but an awful lot of Democrats still get it wrong.

It’s not hard to see how that kind of misconception could drive voters into supporting “tough on crime” candidates who will enact needlessly harsh punishments and waste money on excessive police and incarceration. Indeed, when you look at our world-leading spending on police and incarceration (highest in absolute terms, third-highest as a portion of GDP), it’s pretty clear this is exactly what’s happening.

And it would be so easy—just look it up, right here, or here, or here—to correct that misconception. But people don’t even think to bother; they just know that their perception must be the truth. It never even occurs to them that they could be wrong, and so they don’t even bother to look.

This is not because people are stupid or lazy. (I mean, compared to what?) It’s because perceptions feel like the truth, and it’s shockingly difficult to see them as anything other than the truth.

It takes a very dedicated effort, and no small amount of training, to learn to see your own perceptions as how you see things rather than simply how things are.

I think part of what makes this so difficult is the existential terror that results when you realize that anything you believe—even anything you perceive—could potentially be wrong. Basically the entire field of epistemology is dedicated to understanding what we can and can’t be certain of—and the “can’t” is a much, much bigger set than the “can”.

In a sense, you can be certain of what you feel and perceive—you can be certain that you feel and perceive them. But you can’t be certain whether those feelings and perceptions correspond to your external reality.

When you are sad, you know that you are sad. You can be certain of that. But you don’t know whether you should be sad—whether you have a reason to be sad. Often, perhaps even usually, you do. But sometimes, the sadness comes from within you, or from misperceiving the world.

Once you learn to recognize your perceptions as perceptions, you can question them, doubt them, challenge them. Training your mind to do this is an important part of mindfulness meditation, and also of cognitive behavioral therapy.

But even after years of training, it’s still shockingly hard to do this, especially in the throes of a strong emotion. Simply seeing that what you’re feeling—about yourself, or your situation, or the world—is not an entirely accurate perception can take an incredible mental effort.

We really seem to be wired to see our perceptions as reality.

This makes a certain amount of sense, in evolutionary terms. In an ancestral environment where death was around every corner, we really didn’t have time to stop and think carefully about whether our perceptions were accurate.

Two ancient hominids hear a sound that might be a tiger. One immediately perceives it as a tiger, and runs away. The other stops to think, and then begins carefully examining his surroundings, looking for more conclusive evidence to determine whether it is in fact a tiger.

The latter is going to have more accurate beliefs—right up until the point where it is a tiger and he gets eaten.
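
Incidentally, you can put numbers on why the jumpy hominid wins. The two errors have wildly asymmetric costs, so even when the tiger is rarely there, jumpiness has the lower expected cost. A minimal sketch in Python, with every figure invented:

    # Every number here is invented; the point is only the asymmetry of the errors.
    p_tiger = 0.05            # how often the sound really is a tiger
    cost_flee = 1             # small cost: a wasted sprint, some lost foraging
    cost_eaten = 10_000       # catastrophic cost of being eaten
    p_caught_if_wait = 0.5    # chance the careful hominid is caught when it IS a tiger

    expected_cost_flee = cost_flee                                # always pay the small cost
    expected_cost_wait = p_tiger * p_caught_if_wait * cost_eaten  # rarely pay everything: 250

    print(expected_cost_flee < expected_cost_wait)                # True: jumpiness wins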

But in our world today, holding onto false beliefs may be more dangerous than stopping to analyze and challenge them. We may harm ourselves—and others—more by trusting our perceptions too much than by taking the time to analyze them.

Against Self-Delusion

Mar 10 JDN 2460381

Is there a healthy amount of self-delusion? Would we be better off convincing ourselves that the world is better than it really is, in order to be happy?

A lot of people seem to think so.

I most recently encountered this attitude in Kathryn Schulz’s book Being Wrong (I liked the TED talk much better, in part because it didn’t have this), but there are plenty of other examples.

You’ll even find advocates for this attitude in the scientific literature, particularly when talking about the Lake Wobegon Effect, optimism bias, and depressive realism.

Fortunately, the psychology community seems to be turning away from this, perhaps because of mounting empirical evidence that “depressive realism” isn’t a robust effect. When I searched today, it was easier to find pop psych articles against self-delusion than in favor of it. (I strongly suspect that would not have been true about 10 years ago.)

I have come up with a very simple, powerful argument against self-delusion:

If you’re allowed to delude yourself, why not just believe everything is perfect?

If you can paint your targets after shooting, why not always paint a bullseye?

The notion seems to be that deluding yourself will help you achieve your goals. But if you’re going to delude yourself, why bother achieving goals? You could just pretend to achieve goals. You could just convince yourself that you have achieved goals. Wouldn’t that be so much easier?

The idea seems to be, for instance, to get an aspiring writer to actually finish the novel and submit it to the publisher. But why shouldn’t she simply imagine she has already done so? Why not simply believe she’s already a bestselling author?

If there’s something wrong with deluding yourself into thinking you’re a bestselling author, why isn’t that exact same thing wrong with deluding yourself into thinking you’re a better writer than you are?

Once you have opened this Pandora’s Box of lies, it’s not clear how you can ever close it again. Why shouldn’t you just stop working, stop eating, stop doing anything at all, but convince yourself that your life is wonderful and die in a state of bliss?

Granted, this is not generally what people who favor (so-called) “healthy self-delusion” advocate. But it’s difficult to see any principled reason why they should reject it. Once you give up on tying your beliefs to reality, it’s difficult to see why you shouldn’t just say that anything goes.

Why are some deviations from reality okay, but not others? Is it because they are small? Small changes in belief can still have big consequences: Believe a car is ten meters behind where it really is, and it may just run you over.

The general approach of “healthy self-delusion” seems to be that it’s all right to believe that you are smarter, prettier, healthier, wiser, and more competent than you actually are, because that will make you more confident and therefore more successful.

Well, first of all, it’s worth pointing out that some people obviously go way too far in that direction and become narcissists. But okay, let’s say we find a way to avoid that. (It’s unclear exactly how, since, again, by construction, we aren’t tying ourselves to reality.)

In practice, the people who most often get this sort of advice are people who currently lack self-confidence, who doubt their own abilities—people who suffer from Impostor Syndrome. And for people like that (and I count myself among them), a certain amount of greater self-confidence would surely be a good thing.

The idea seems to be that deluding yourself to increase your confidence will get you to face challenges and take risks you otherwise wouldn’t have, and that this will yield good outcomes.

But there’s a glaring hole in this argument:

If you have to delude yourself in order to take a risk, you shouldn’t take that risk.

Risk-taking is not an unalloyed good. Russian Roulette is certainly risky, but it’s not a good career path.

There are in fact a lot of risks you simply shouldn’t take, because they aren’t worth it.

The right risks to take are the ones for which the expected benefit outweighs the expected cost: the ones with the highest expected utility. (That sounds simple, and in principle it is; but in practice, it can be extraordinarily difficult to determine.)
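
To make that concrete, here is a minimal sketch of the comparison, with every number invented; in real life, estimating them is exactly the extraordinarily difficult part:

    # All numbers invented for illustration.
    p_success = 0.10     # chance the risk pays off
    u_success = 100      # utility if it does
    u_failure = -10      # utility if it doesn't
    u_decline = 0        # utility of not taking the risk at all

    eu_take = p_success * u_success + (1 - p_success) * u_failure
    # = 0.10 * 100 + 0.90 * (-10) = 1.0

    print(eu_take > u_decline)   # True, but only barely; small changes flip it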

In other words, the right risks to take are the ones that are rational. The ones that a correct view of the world will instruct you to take.

That aspiring novelist, then, should write the book and submit it to publishers—if she’s actually any good at writing. If she’s actually terrible, then never submitting the book is the correct decision; she should spend more time honing her craft before she tries to finish it—or maybe even give up on it and do something else with her life.

What she needs, therefore, is not a confident assessment of her abilities, but an accurate one. She needs to believe that she is competent if and only if she actually is competent.

But I can also see how self-delusion can seem like good advice—and even work for some people.

If you start from an excessively negative view of yourself or the world, then giving yourself a more positive view will likely cause you to accomplish more things. If you’re constantly telling yourself that you are worthless and hopeless, then convincing yourself that you’re better than you thought is absolutely what you need to do. (Because it’s true.)

I can even see how convincing yourself that you are the best is useful—even though, by construction, most people aren’t. When you live in a hyper-competitive society like ours, where we are constantly told that winning is everything, losers are worthless, and second place is as bad as losing, it may help you get by to tell yourself that you really are the best, that you really can win. (Even weirder: “Winning isn’t everything; it’s the only thing.” Uh, that’s just… obviously false? Like, what is this even intended to mean that “Winning is everything” didn’t already say better?)

But that’s clearly not the right answer. You’re solving one problem by adding another. You shouldn’t believe you are the best; you should recognize that you don’t have to be. Second place is not as bad as losing—and neither is fifth, or tenth, or fiftieth place. The 100th-most successful author in the world still makes millions writing. The 1,000th-best musician does regular concert tours. The 10,000th-best accountant has a steady job. Even the 100,000th-best trucker can make a decent living. (Well, at least until the robots replace him.)

Honestly, it’d be great if our whole society would please get this memo. It’s no problem that “only a minority of schools play sport to a high level”—indeed, that’s literally inevitable. It’s also not clear that “60% of students read below grade level” is a problem, when “grade level” seems to be largely defined by averages. (Literacy is great and all, but what’s your objective standard for “what a sixth grader should be able to read”?)

We can’t all be the best. We can’t all even be above-average.

That’s okay. Below-average does not mean inadequate.

That’s the message we need to be sending:

You don’t have to be the best in order to succeed.

You don’t have to be perfect in order to be good enough.

You don’t even have to be above-average.

This doesn’t require believing anything that isn’t true. It doesn’t require overestimating your abilities or your chances. In fact, it asks you to believe something that is more true than “You have to be the best” or “Winning is everything”.

If what you want to do is actually worth doing, an accurate assessment will tell you that. And if an accurate assessment tells you not to do it, then you shouldn’t do it. So you have no reason at all to strive for anything other than accurate beliefs.

With this in mind, the fact that the empirical evidence for “depressive realism” is shockingly weak is not only unsurprising; it’s almost irrelevant. You can’t have evidence against being rational. If deluded people succeed more, that means something is very, very wrong; and the solution is clearly not to make more people deluded.

Of course, it’s worth pointing out that the evidence is shockingly weak: Depressed people show different biases, not less bias. And in fact they seem to be more overconfident in the following sense: They are more certain that what they predict will happen is what will actually happen.

So while most people think they will succeed when they will probably fail, depressed people are certain they will fail when in fact they could succeed. Both beliefs are inaccurate, but the depressed one is in an important sense more inaccurate: It tells you to give up, which is the wrong thing to do.

“Healthy self-delusion” ultimately amounts to trying to get you to do the right thing for the wrong reasons. But why? Do the right thing for the right reasons! If it’s really the right thing, it should have the right reasons!

Serenity and its limits

Feb 25 JDN 2460367

God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.

Of course I don’t care for its religious message (and the full prayer is even more overtly religious), but the serenity prayer does capture an important insight into some of the most difficult parts of human existence.

Some things are as we would like them to be. They don’t require our intervention. (Though we may still stand to benefit from teaching ourselves to savor them and express gratitude for them.)

Other things are not as we would like them to be. The best option, of course, would be to change them.

But such change is often difficult, and sometimes practically impossible.

Sometimes we don’t even know whether change is possible—that’s where the wisdom to know the difference comes in. This is a wisdom we often lack, but it’s at least worth striving for.

If it is impossible to change what we want to change, then we are left with only one choice:

Do we accept it, or not?

The serenity prayer tells us to accept it. There is wisdom in this. Often it is the right answer. Some things about our lives are awful, but simply cannot be changed by any known means.

Death, for instance.

Someday, perhaps, we will finally conquer death, and humanity—or whatever humanity has become—will enter a new era of existence. But today is not that day. When grieving the loss of people we love, ultimately our only option is to accept that they are gone, and do our best to appreciate what they left behind, and the parts of them that are still within us. They would want us to carry on and live full lives, not forever be consumed by grief.

There are many other things we’d like to change, and maybe someday we will, but right now, we simply don’t know how: diseases we can’t treat, problems we can’t solve, questions we can’t answer. It’s often useful for someone to be trying to push those frontiers, but for any given person, the best option is often to find a way to accept things as they are.

But there are also things I cannot change and yet will not accept.

Most of these things fall into one broad category:

Injustice.

I can’t end war, or poverty, or sexism, or racism, or homophobia. Neither can you. Neither can any one person, or any hundred people, or any thousand people, or probably even any million people. (If all it took were a million dreams, we’d be there already. A billion might be enough—though it would depend which billion people shared the dream.)

I can’t. You can’t. But we can.

And here I mean “we” in a very broad sense indeed: Humanity as a collective whole. All of us together can end injustice—and indeed that is the only way it ever could be ended, by our collective action. Collective action is what causes injustice, and collective action is what can end it.

I therefore consider serenity in the face of injustice to be a very dangerous thing.

At times, and to certain degrees, that serenity may be necessary.

Those who are right now in the grips of injustice may need to accept it in order to survive. Reflecting on the horror of a concentration camp won’t get you out of it. Embracing the terror of war won’t save you from being bombed. Weeping about the sorrow of being homeless won’t get you off the streets.

Even for those of us who are less directly affected, it may sometimes be wisest to blunt our rage and sorrow at injustice—for otherwise they could be paralyzing, and if we are paralyzed, we can’t help anyone.

Sometimes we may even need to withdraw from the fight for justice, simply because we are too exhausted to continue. I recently read a powerful analogy about this:

A choir can sing the same song forever, as long as its singers take turns resting.

If everyone tries to sing their very hardest all the time, the song must eventually end, as no one can sing forever. But if we rotate our efforts, so that at any given moment some are singing while others are resting, then we theoretically could sing for all time—as some of us die, others would be born to replace us in the song.
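
If you want to see the arithmetic of the analogy, here is a toy simulation, with ten singers and a staggered rest schedule I made up arbitrarily; as long as the rests are staggered, no round is ever silent:

    # Toy model of the choir; all parameters arbitrary.
    singers = 10
    rounds = 30
    rest_every = 3   # each singer rests one round in three, offset from the others

    for t in range(rounds):
        singing = [s for s in range(singers) if (t + s) % rest_every != 0]
        assert singing, "silence would require everyone to rest at once"
        # roughly 2/3 of the choir sings in any given round, and the song never stops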

For a literal choir this seems absurd: Who even wants to sing the same song forever? (Lamb Chop, I guess.)

But the fight for justice probably is one we will need to continue forever, in different forms in different times and places. There may never be a perfectly just society, and even if there is, there will be no guarantee that it remains so without eternal vigilance. Yet the fight is worth it: in so many ways our society is already more just than it once was, and could be made more so in the future.

This fight will only continue if we don’t accept the way things are. Even when any one of us can’t change the world—even if we aren’t sure how many of us it would take to change the world—we still have to keep trying.

But as in the choir, each one of us also needs to rest.

We can’t all be fighting all the time as hard as we can. (I suppose if literally everyone did that, the fight for justice would be immediately and automatically won. But that’s never going to happen. There will always be opposition.)

And when it is time for each of us to rest, perhaps some serenity is what we need after all. Perhaps there is a balance to be found here: We do not accept things as they are, but we do accept that we cannot change them immediately or single-handedly. We accept that our own strength is limited and sometimes we must withdraw from the fight.

So yes, we need some serenity. But not too much.

Enough serenity to accept that we won’t win the fight immediately or by ourselves, and sometimes we’ll need to stop fighting and rest. But not so much serenity that we give up the fight altogether.

For there are many things that I can’t change—but we can.

Love is more than chemicals

Feb 18 JDN 2460360

One of the biggest problems with the rationalist community is an inability to express sincerity and reverence.

I get it: Religion is the world’s greatest source of sincerity and reverence, and religion is the most widespread and culturally important source of irrationality. So we declare ourselves enemies of religion, and also end up being enemies of sincerity and reverence.

But in doing so, we lose something very important. We cut ourselves off from some of the greatest sources of meaning and joy in human life.

In fact, we may even be undermining our own goals: If we don’t offer people secular, rationalist forms of reverence, they may find they need to turn back to religion in order to fill that niche.

One of the most pernicious forms of this anti-sincerity, anti-reverence attitude (I can’t just say ‘insincere’ or ‘irreverent’, as those have different meanings) is surely this one:

Love is just a chemical reaction.

(I thought it seemed particularly apt to focus on this one during the week of Valentine’s Day.)

On the most casual of searches I could find at least half a dozen pop-sci articles and a YouTube video propounding this notion (though I could also find a few articles trying to debunk the notion as well).

People who say this sort of thing seem to think that they are being wise and worldly while the rest of us are just being childish and naive. They think we are seeing something that isn’t there. In fact, they are being jaded and cynical. They are failing to see something that is there.

(Perhaps the most extreme form of this was from Rick & Morty; and while Rick as a character is clearly intended to be jaded and cynical, far too many people also see him as a role model.)

Part of the problem may also be a failure to truly internalize the Basic Fact of Cognitive Science:

You are your brain.

No, your consciousness is not an illusion. It’s not an “epiphenomenon” (whatever that is; I’ve never encountered one in real life). Your mind is not fake or imaginary. Your mind actually exists—and it is a product of your brain. Both brain and mind exist, and are in fact the same.

It’s so hard for people to understand this that some become dualists, denying the unity of the brain and the mind. That, at least, I can sympathize with, even though we have compelling evidence that it is wrong. But there’s another tack people sometimes take, eliminative materialism, where they try to deny that the mind exists at all. And that I truly do not understand. How can you think that nobody can think? Yet intelligent, respected philosophers have claimed to believe such things.

Love is one of the most important parts of our lives.

This may be more true of humans than of literally any other entity in the known universe.

The only serious competition comes from other mammals: They are really the only other beings we know of that are capable of love. And even they don’t seem to be as good at it as we are; they can love only those closest to them, while we can love entire nations and even abstract concepts.

And once you go beyond that, even to reptiles—let alone fish, or amphibians, or insects, or molluscs—it’s not clear that other animals are really capable of love at all. They seem to be capable of some forms of thought and feeling: They get hungry, or angry, or horny. But do they really love?

And even the barest emotional capacities of an insect are still categorically beyond what most of the universe is capable of feeling, which is to say: Nothing. The vast, vast majority of the universe feels neither love nor hate, neither joy nor pain.

Yet humans can love, and do love, and it is a large part of what gives our lives meaning.

I don’t just mean romantic love here, though I do think it’s worth noting that people who dismiss the reality of romantic love somehow seem reluctant to do the same for the love parents have for their children—even though it’s made of pretty much the same brain chemicals. Perhaps there is a limit to their cynicism.

Yes, love is made of chemicals—because everything is made of chemicals. We live in a material, chemical universe. Saying that love is made of chemicals is an almost completely vacuous statement; it’s basically tantamount to saying that love exists.

In other contexts, you already understand this.

“That’s not a bridge, it’s just a bunch of iron atoms!” rightfully strikes you as an absurd statement to make. Yes, the bridge is made of steel, and steel is mostly iron, and everything is made of atoms… but clearly there’s a difference between a random pile of iron and a bridge.

“That’s not a computer, it’s just a bunch of silicon atoms!” similarly registers as nonsense: Yes, it is indeed mostly made of silicon, but beach sand and quartz crystals are not computers.

It is in this same sense that joy is made of dopamine and love is made of chemical reactions. Yes, those are in fact the constituent parts—but things are more than just their parts.

I think that on some level, even most rationalists recognize that love is more than some arbitrary chemical reaction. I think “love is just chemicals” is mainly something people turn to for a couple of reasons: Sometimes, they are so insistent on rejecting everything that even resembles religious belief that they end up rejecting all meaning and value in human life. Other times, they have been so heartbroken that they try to convince themselves love isn’t real—to dull the pain. (But of course if it weren’t, there would be no pain to dull.)

But love is no more (or less) a chemical reaction than any other human experience: The very belief “love is just a chemical reaction” is, itself, made of chemical reactions.

Everything we do is made of chemical reactions, because we are made of chemical reactions.

Part of the problem here—and with the Basic Fact of Cognitive Science in general—is that we really have no idea how this works. For most of what we deal with in daily life, and even an impressive swath of the overall cosmos, we have a fairly good understanding of how things work. We know how cars drive, how wind blows, why rain falls; we even know how cats purr and why birds sing. But when it comes to understanding how the physical matter of the brain generates the subjective experiences of thought, feeling, and belief—of which love is made—we lack even the most basic understanding. The correlation between the two is far too strong to deny; but as far as causal mechanisms go, we know absolutely nothing. (Indeed, worse than that: We can scarcely imagine a causal mechanism that would make any sense. We not only don’t know the answer; we don’t know what an answer would look like.)

So, no, I can’t tell you how we get from oxytocin and dopamine to love. I don’t know how that makes any sense. No one does. But we do know it’s true.

And just like everything else, love is more than the chemicals it’s made of.

Let’s call it “copytheft”

Feb 11 JDN 2460353

I have written previously about how ridiculous it is that we refer to the unauthorized copying of media such as music and video games as “piracy” as though it were somehow equivalent to capturing ships on the high seas.

In that post a few years ago I suggested calling it simply “unauthorized copying”, but that clearly isn’t catching on, perhaps because it’s simply too much of a mouthful. So today I offer a compromise:

Let’s call it “copytheft”.

That takes no longer to say than “piracy” (and only slightly longer to write), and far more clearly states what’s actually going on. No ships have been seized on the high seas; there has been no murder, arson, or slavery.

Yes, it’s debatable whether copytheft really constitutes theft—and I would generally argue that it does not—but just from hearing that word, you would probably infer that the following process took place:

  1. I took a thing.
  2. I made a copy of that thing that I wasn’t supposed to.
  3. I put the original thing back where it was, unharmed.

The paradigmatic example of this theft-copy-replace sequence would be a key, of course: You take someone’s key, copy it, then put the key back where it was, so you now can unlock their locks but they are none the wiser.

With unauthorized copying of media, you’re not exactly doing steps 1 and 3; the copier often has the media completely legitimately before they make the copy, and it may not even have a clear physical location to be put back to (it must be physically stored somewhere, but particularly if it’s streamed from the cloud it hardly matters where).

But you’re definitely doing step 2, and that was the only part that had a permanent effect; so I think the nomenclature still works well enough.

Copytheft also has a similar sound to copyleft, the use of alternative intellectual property mechanisms by authors to grant broader licensing than is ordinarily afforded by copyright, and also to copyfraud, the crime of claiming exclusive copyright to content that is in fact public domain. Hopefully that common structure will help the term get some purchase.

Of course, I can hardly bring a word into widespread use on my own. Others like you have to not only read it, but like it enough that you’re willing to actually use it—and then we need a certain critical mass of people using it in order to make it actually catch on.

So, I’d like to take a moment to offer some justification for why it’s worth switching to this new word.

First, it is admittedly imperfect; by containing the word “theft”, it already feels like we’re conceding something to the defenders of copyright.

But by including the word “copy” in the term, we can draw attention to the most important aspect that distinguishes copytheft from, well, theft:

The original owner still has the thing.

That’s the part they want us to forget, and exactly what the harsh word “piracy” leads you to forget. A ship that is captured by pirates is a ship that may never again sail for your own navy. A song that is “pirated”—copythefted—is one that not only the original owners, but also everyone who bought it, still have in exactly the same state they did before.

Thus it simply cannot be that copytheft takes money out of the hands of artists. At worst, it fails to give money to artists.

That could still be a bad thing: Artists need to pay bills too, and a world where nobody pays for any art is surely a world with a lot fewer artists—and one where those who remain are far more miserable. But it’s clearly a different sort of thing than ordinary theft, as nothing has been lost.

Moreover, it’s not clear that in most cases copytheft even does fail to give money that would otherwise have been given. Maybe sometimes it does—a certain proportion of people who copytheft a given song, film, or video game might have been willing to pay the original price if the copythefted version had not been available. But typically I suspect that people who’d be willing to pay full price… do pay full price. Thus, the people who are copythefting the media wouldn’t have bought it at full price anyway.

They might have bought it at some lower price, in which case that is foregone payment; but it’s surely considerably less than the “losses” often reported by the film and music industries, which seem to be based on the assumption that everyone who copythefts would have otherwise paid full price. And in fact many people might have been unwilling to buy at any nonzero price, and were only willing to copytheft the media precisely because it didn’t cost them any money or a great deal of effort to do so.
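
Here is a back-of-the-envelope version of that argument, with every figure invented purely for illustration; the point is how far apart the industry’s accounting method and a more plausible model can be:

    # All figures invented for illustration.
    copies = 100_000    # unauthorized copies of some game
    full_price = 60.00

    # The industry method: count every copy as a lost full-price sale.
    reported_loss = copies * full_price                        # $6,000,000

    # A still-made-up, but more plausible, model: only some would ever have paid.
    share_full = 0.05   # 5% would have paid full price
    share_sale = 0.15   # 15% would have paid $20 in a sale
    foregone = copies * (share_full * full_price + share_sale * 20.00)
    print(reported_loss, foregone)                             # 6,000,000 vs. 600,000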

And in fact if you think about it, what about people who would have been willing to pay more than the original price? Surely there were many of them as well, yet we don’t grant media corporations the right to that money. That is also money that they could have been given but weren’t—and we decided, as a society, that they didn’t deserve to have it. It’s not that it would be impossible to do so: We could give corporations the authority to price-discriminate on all of their media. (They probably couldn’t do it perfectly, but they could surely do it quite well.) But we made the policy choice to live in a world where media is sold by single-price monopolies rather than one where it is sold by price-discriminating monopolies.

The mere fact that someone might have been willing to pay you more money if the market were different does not entitle you to receive that money. It has not been stolen from you. Indeed, typically it’s more that you have not been allowed to exploit them. It’s usually the presence of competition that prevents corporations from receiving the absolute maximum profit they might potentially have received if they had full control over the market. Corporations making less profit than they otherwise would have is generally a sign of good economic policy—a sign that things are reasonably fair.
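
You can see the single-price-versus-price-discrimination point in a toy market. Suppose, purely hypothetically, a hundred buyers, where buyer i is willing to pay at most 100 - i dollars:

    # Toy linear-demand market; all assumptions invented.
    values = [100 - i for i in range(100)]   # buyer i will pay at most 100 - i

    # Single-price monopoly: one price for everyone, chosen to maximize revenue.
    revenue, price = max(
        (p * sum(1 for v in values if v >= p), p) for p in range(1, 101)
    )
    # Best case: a price around 50, revenue 2550; half the buyers walk away.

    # Perfect price discrimination: charge each buyer their maximum.
    discriminating_revenue = sum(values)     # 5050, nearly double

    print(revenue, discriminating_revenue)

We chose, as a matter of policy, the 2550 world; nobody calls the other 2500 stolen.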

Why else is “copytheft” a good word to use?

Above all, we do not allow our terms to be defined by our opponents.

We don’t allow them to insinuate that our technically violating draconian regulations designed to maximize the profits of Disney and Viacom somehow constitutes a terrible crime against other human beings.

“Piracy is not a victimless crime”, they will say.

Well, actual piracy isn’t. But copytheft? Yeah, uh, it kinda is.

Maybe not quite as victimless as, say, marijuana or psilocybin, which no one even has any rational reason to prefer you not do. But still, you’re not really making anyone else worse off—that sounds pretty victimless.

Of course, it does give us less reason to wear tricorn hats and eyepatches.

But guess what? You can still do that anyway!