The Butlerian Jihad is looking better all the time

Mar 24 JDN 2460395

A review of The Age of Em by Robin Hanson

In the Dune series, the Butlerian Jihad was a holy war against artificial intelligence that resulted in a millennia-long taboo against all forms of intelligent machines. It was effectively a way to tell a story about the distant future without basically everything being about robots or cyborgs.

After reading Robin Hanson’s book, I’m starting to think that maybe we should actually do it.

Thus it is written: “Thou shalt not make a machine in the likeness of a human mind.”

Hanson says he’s trying to reserve judgment and present objective predictions without evaluation, but it becomes very clear throughout that this is the future he wants, as well as—or perhaps even instead of—the world he expects.

In many ways, it feels like he has done his very best to imagine a world of true neoclassical rational agents in perfect competition, a sort of sandbox for the toys he’s always wanted to play with. Throughout he very much takes the approach of a neoclassical economist, making heroic assumptions and then following them to their logical conclusions, without ever seriously asking whether those assumptions actually make any sense.

To his credit, Hanson does not buy into the hype that AGI will be successful any day now. He predicts that we will achieve the ability to fully emulate human brains and thus create a sort of black-box AGI that behaves very much like a human within about 100 years. Given how the Blue Brain Project has progressed (much slower than its own hype machine told us it would—and let it be noted that I predicted this from the very beginning), I think this is a fairly plausible time estimate. He refers to a mind emulated in this way as an “em”; I have mixed feelings about the term, but I suppose we did need some word for that, and it certainly has conciseness on its side.

Hanson believes that a true understanding of artificial intelligence will only come later, and the sort of AGI that can be taken apart and reprogrammed for specific goals won’t exist for at least a century after that. Both of these sober, reasonable predictions are deeply refreshing in a field that’s been full of people saying “any day now” for the last fifty years.

But Hanson’s reasonableness just about ends there.

In The Age of Em, government is exactly as strong as Hanson needs it to be. Somehow it simultaneously ensures a low crime rate among a population that doubles every few months while having no means of preventing that population growth. Somehow it ensures that there is no labor collusion and corporations never break the law, but without imposing any regulations that might reduce efficiency in any way.

All of this begins to make more sense when you realize that Hanson’s true goal here is to imagine a world where neoclassical economics is actually true.

He realized it didn’t work on humans, so instead of giving up the theory, he gave up the humans.

Hanson predicts that ems will casually make short-term temporary copies of themselves called “spurs”, designed to perform a particular task and then be erased. I guess maybe he would, but I for one would not so cavalierly create another person and dedicate their entire existence to doing a single job before they die. The fact that I created this person, and that they are very much like me, seems like a reason to care more about their well-being, not less! You’re asking me to enslave and murder my own child. (Honestly, the fact that Robin Hanson thinks ems will do this all the time says more about Robin Hanson than anything else.) Any remotely sane society of ems would ban the deletion of another em under any but the most extreme circumstances, and indeed treat it as tantamount to murder.

Hanson predicts that we will only copy the minds of a few hundred people. This is surely true at some point—the technology will take time to develop, and we’ll have to start somewhere. But I don’t see why we’d stop there, when we could continue to copy millions or billions of people; and his choices of who would be emulated, while not wildly implausible, are utterly terrifying.

He predicts that we’d emulate genius scientists and engineers; okay, fair enough, that seems right. I doubt that the benefits of doing so will be as high as many people imagine, because scientific progress actually depends a lot more on the combined efforts of millions of scientists than on rare sparks of brilliance by lone geniuses; but those people are definitely very smart, and having more of them around could be a good thing. I can also see people wanting to do this, and thus investing in making it happen.

He also predicts that we’d emulate billionaires. Now, as a prediction, I have to admit that this is actually fairly plausible; billionaires are precisely the sort of people who are rich enough to pay to be emulated and narcissistic enough to want to. But where Hanson really goes off the deep end here is that he sees this as a good thing. He seems to honestly believe that billionaires are so rich because they are so brilliant and productive. He thinks that a million copies of Elon Musk would produce a million hectobillionaires—when in reality it would produce a million squabbling narcissists, who at best would have to split the same $200 billion between them, and might very well end up with less because they squander it.

Hanson has a long section on trying to predict the personalities of ems. Frankly this could just have been dropped entirely; it adds almost nothing to the book, and the book is much too long. But the really striking thing to me about that section is what isn’t there. He goes through a long list of studies that found weak correlations between various personality traits like extroversion or openness and wealth—mostly comparing something like the 20th percentile to the 80th percentile—and then draws sweeping conclusions about what ems will be like, under the assumption that ems are all drawn from people in the 99.99999th percentile. (Yes, upper-middle-class people are, on average, more intelligent and more conscientious than lower-middle-class people. But do we even have any particular reason to think that the personalities of people who make $150,000 are relevant to understanding the behavior of people who make $15 billion?) But he completely glosses over the very strong correlations that specifically apply to people in that very top super-rich class: They’re almost all narcissists and/or psychopaths.

Hanson predicts a world where each em is copied many, many times—millions, billions, even trillions of times, and also in which the very richest ems are capable of buying parallel processing time that lets them accelerate their own thought processes to a million times faster than a normal human. (Is that even possible? Does consciousness work like that? Who knows!?) The world that Hanson is predicting is thus one where all the normal people get outnumbered and overpowered by psychopaths.

Basically this is the most abjectly dystopian cyberpunk hellscape imaginable. And he talks about it the whole time as if it were good.

It’s like he played the game Action Potential and thought, “This sounds great! I’d love to live there!” I mean, why wouldn’t you want to owe a life-debt on your own body and have to work 120-hour weeks for a trillion-dollar corporation just to make the payments on it?

Basically, Hanson doesn’t understand how wealth is actually acquired. He is educated as an economist, yet his understanding of capitalism basically amounts to believing in magic. He thinks that competitive markets just somehow perfectly automatically allocate wealth to whoever is most productive, and thus concludes that whoever is wealthy now must just be that productive.

I can see no other way to explain his wildly implausible predictions that the em economy will double every month or two. A huge swath of the book depends upon this assumption, but he waits until halfway through the book to even try to defend it, and then does an astonishingly bad job of doing so. (Honestly, even if you buy his own arguments—which I don’t—they seem to predict that population would grow with Moore’s Law—doubling every couple of years, not every couple of months.)
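Just to make the scale of the difference concrete, here is a quick back-of-the-envelope comparison of the two doubling rates mentioned above (the code and numbers are purely illustrative compound-growth arithmetic, not anything from the book):

```python
# Compound growth: a quantity that doubles every d units of time
# grows by a factor of 2**(t/d) after t units.

def growth_factor(years: float, doubling_time_years: float) -> float:
    """Total growth after `years` if the quantity doubles every `doubling_time_years`."""
    return 2.0 ** (years / doubling_time_years)

# Hanson-style doubling every ~2 months vs. Moore's-Law-style doubling
# every ~2 years, over a single decade:
hanson = growth_factor(10, 2 / 12)  # 60 doublings
moore = growth_factor(10, 2)        # 5 doublings

print(f"{hanson:.3g}")  # ~1.15e18: a billion-billion-fold economy
print(f"{moore:.3g}")   # 32: a thirty-two-fold economy
```

A decade of monthly-scale doubling multiplies the economy by a factor of about a quintillion; a decade of Moore's-Law doubling multiplies it by about thirty. Those are not remotely the same claim.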

Whereas Keynes predicted based on sound economic principles that economic growth would more or less proceed apace and got his answer spot-on, Hanson predicts that for mysterious, unexplained reasons economic growth will suddenly increase by two orders of magnitude—and I’m pretty sure he’s going to be wildly wrong.

Hanson also predicts that ems will be on average poorer than we are, based on some sort of perfect-competition argument that doesn’t actually seem to mesh at all with his predictions of spectacularly rapid economic and technological growth. I think the best way to make sense of this is to assume that it means the trend toward insecure affluence will continue: Ems will have an objectively high standard of living in terms of what they own, what games they play, where they travel, and what they eat and drink (in simulation), but they will constantly be struggling to keep up with the rent on their homes—or even their own bodies. This is a world where (the very finest simulation of) Dom Perignon is $7 a bottle and wages are $980 an hour—but monthly rent is $284,000.
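The figures in that last sentence aren't arbitrary; the point is that the rent eats nearly all of the wage. A quick sanity check, using the hypothetical numbers above:

```python
wage_per_hour = 980        # simulated wage, $/hour (hypothetical figure from the text)
rent_per_month = 284_000   # rent on home (or body), $/month (ditto)

hours_for_rent = rent_per_month / wage_per_hour
weeks_per_month = 4.33     # average weeks in a month

print(round(hours_for_rent))                     # ~290 hours/month just to cover rent
print(round(hours_for_rent / weeks_per_month))   # ~67 hours/week, before buying anything else
```

So even at $980 an hour, an em in this scenario works most of its waking existence just to keep existing. That is what "near-subsistence" means when the subsistence good is your own hardware.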

Early in the book Hanson argues that this life of poverty and scarcity will lead to more conservative values, on the grounds that people who are poorer now seem to be more conservative, and this has something to do with farmers versus foragers. Hanson’s explanation of all this is baffling; I will quote it at length, just so it’s clear I’m not misrepresenting it:

The other main (and independent) axis of value variation ranges between poor and rich societies. Poor societies place more value on conformity, security, and traditional values such as marriage, heterosexuality, religion, patriotism, hard work, and trust in authority. In contrast, rich societies place more value on individualism, self-direction, tolerance, pleasure, nature, leisure, and trust. When the values of individuals within a society vary on the same axis, we call this a left/liberal (rich) versus right/conservative (poor) axis.

Foragers tend to have values more like those of rich/liberal people today, while subsistence farmers tend to have values more like those of poor/conservative people today. As industry has made us richer, we have on average moved from conservative/farmer values to liberal/forager values. This value movement can make sense if cultural evolution used the social pressures farmers faced, such as conformity and religion, to induce humans, who evolved to find forager behaviors natural, to instead act like farmers. As we become rich, we don’t as strongly fear the threats behind these social pressures. This connection may result in part from disease; rich people are healthier, and healthier societies fear less.

The alternate theory that we have instead learned that rich forager values are more true predicts that values should have followed a random walk over time, and be mostly common across space. It also predicts the variance of value changes tracking the rate at which relevant information appears. But in fact industrial-era value changes have tracked the wealth of each society in much more steady and consistent fashion. And on this theory, why did foragers ever acquire farmer values?

[…]

In the scenario described in this book, many strange-to-forager behaviors are required, and median per-person (i.e. per-em) incomes return to near-subsistence levels. This suggests that the em era may reverse the recent forager-like trend toward more liberality; ems may have more farmer-like values.

The Age of Em, pp. 26-27

There’s a lot to unpack here, but maybe it’s better to burn the whole suitcase.

First of all, it’s not entirely clear that this is really a single axis of variation, that foragers and farmers differ from each other in the same way as liberals and conservatives. There’s some truth to that at least—both foragers and liberals tend to be more generous, both farmers and conservatives tend to enforce stricter gender norms. But there are also clear ways that liberal values radically deviate from forager values: Forager societies are extremely xenophobic, and typically very hostile to innovation, inequality, or any attempts at self-aggrandizement (a phenomenon called “fierce egalitarianism”). San Francisco epitomizes rich, liberal values, but it would be utterly alien and probably regarded as evil by anyone from the Yanomamo.

Second, there is absolutely no reason to predict any kind of random walk. That’s just nonsense. Would you predict that scientific knowledge is a random walk, with each new era’s knowledge just a random deviation from the last’s? Maybe next century we’ll return to geocentrism, or phrenology will be back in vogue? On the theory that liberal values (or at least some liberal values) are objectively correct, we would expect them to advance as knowledge does: improving over time, and improving faster in places that have better institutions for research, education, and free expression. And indeed, this is precisely the pattern we have observed. (Those places are also richer, but that isn’t terribly surprising either!)

Third, while poorer regions are indeed more conservative, poorer people within a region actually tend to be more liberal. Nigeria is poorer and more conservative than Norway, and Mississippi is poorer and more conservative than Massachusetts. But higher-income households in the United States are more likely to vote Republican. I think this is particularly true of people living under insecure affluence: We see the abundance of wealth around us, and don’t understand why we can’t learn to share it better. We’re tired of fighting over scraps while the billionaires claim more and more. Millennials and Zoomers absolutely epitomize insecure affluence, and we also absolutely epitomize liberalism. So, if indeed ems live a life of insecure affluence, we should expect them to be like Zoomers: “Trans liberation now!” and “Eat the rich!” (Or should I say, “Delete the rich!”)

And really, doesn’t that make more sense? Isn’t that the trend our society has been on, for at least the last century? We’ve been moving toward more and more acceptance of women and minorities, more and more deviation from norms, more and more concern for individual rights and autonomy, more and more resistance to authority and inequality.

The funny thing is, that world sounds a lot better than the one Hanson is predicting.

A world of left-wing ems would probably run things a lot better than Hanson imagines: Instead of copying the same hundred psychopaths over and over until we fill the planet, have no room for anything else, and all struggle to make enough money just to stay alive, we could moderate our population to a more sustainable level, preserve diversity and individuality, and work toward living in greater harmony with each other and the natural world. We could take this economic and technological abundance and share it and enjoy it, instead of killing ourselves and each other to make more of it for no apparent reason.

The one good argument Hanson makes here is expressed in a single sentence: “And on this theory, why did foragers ever acquire farmer values?” That actually is a good question; why did we give up on leisure and egalitarianism when we transitioned from foraging to agriculture?

I think scarcity probably is relevant here: As food became scarcer, maybe because of climate change, people were forced into an agricultural lifestyle just to have enough to eat. Early agricultural societies were also typically authoritarian and violent. Under those conditions, people couldn’t be so generous and open-minded; they were surrounded by threats and on the verge of starvation.

I guess if Hanson is right that the em world is also one of poverty and insecurity, we might revert to those values, born of desperation. But I don’t see any reason to think we’d give up all of our liberal values. I would predict that ems will still be feminist, for instance; in fact, Hanson himself admits that since VR avatars would let us change gender presentation at will, gender would almost certainly become more fluid in a world of ems. Far from valuing heterosexuality more highly (as conservatives do, a “farmer value” according to Hanson), I suspect that ems will have no further use for that construct, because reproduction will be done by manufacturing, not sex, and it’ll be so easy to swap your body into a different one that hardly anyone will even keep the same gender their whole life. They’ll think it’s quaint that we used to identify so strongly with our own animal sexual dimorphism.

But maybe it is true that the scarcity induced by a hyper-competitive em world would make people more selfish, less generous, less trusting, more obsessed with work. Then let’s not do that! We don’t have to build that world! This isn’t a foregone conclusion!

There are many other paths yet available to us.

Indeed, perhaps the simplest would be to just ban artificial intelligence, at least until we can get a better handle on what we’re doing—and perhaps until we can institute the kind of radical economic changes necessary to wrest control of the world away from the handful of psychopaths currently trying their best to run it into the ground.

I admit, it would kind of suck to not get any of the benefits of AI, like self-driving cars, safer airplanes, faster medical research, more efficient industry, and better video games. It would especially suck if we did go full-on Butlerian Jihad and ban anything more complicated than a pocket calculator. (Our lifestyle might have to go back to what it was in—gasp! The 1950s!)

But I don’t think it would suck nearly as much as the world Robin Hanson thinks is in store for us if we continue on our current path.

So I certainly hope he’s wrong about all this.

Fortunately, I think he probably is.

How I feel is how things are

Mar 17 JDN 2460388

One of the most difficult things in life to learn is how to treat your own feelings and perceptions as feelings and perceptions—rather than simply as the way the world is.

A great many errors people make can be traced to this.

When we disagree with someone (whether it is as trivial as pineapple on pizza or as important as international law), we feel like they must be speaking in bad faith, they must be lying—because, to us, they are denying the way the world is. If the subject is important enough, we may become convinced that they are evil—for only someone truly evil could deny such important truths. (Ultimately, even holy wars may come from this perception.)


When we are overconfident, we not only can’t see that we are; we can scarcely even consider that it could be true. Because we don’t simply feel confident; we are sure we will succeed. And thus when we do fail, as we often do, the result is devastating; it feels as if the world itself has changed in order to thwart us.

Conversely, when we succumb to Impostor Syndrome, we feel inadequate, and so become convinced that we are inadequate, and thus that anyone who says they believe we are competent must either be lying or else somehow deceived. And then we fear to tell anyone, because we know that our jobs and our status depend upon other people seeing us as competent—and we are sure that if they knew the truth, they’d no longer see us that way.

When people see their beliefs as reality, they don’t even bother to check whether their beliefs are accurate.

Why would you need to check whether the way things are is the way things are?

This is how common misconceptions persist—the information needed to refute them is widely available, but people simply don’t realize they need to look for it.

For lots of things, misconceptions aren’t very consequential. But some common misconceptions do have large consequences.

For instance, most Americans think that crime is increasing and worse now than it was 30 or 50 years ago. (I tested this on my mother this morning; she thought so too.) It is in fact much, much better—violent crimes are about half as common in the US today as they were in the 1970s. Republicans are more likely to get this wrong than Democrats—but an awful lot of Democrats still get it wrong.

It’s not hard to see how that kind of misconception could drive voters into supporting “tough on crime” candidates who will enact needlessly harsh punishments and waste money on excessive police and incarceration. Indeed, when you look at our world-leading spending on police and incarceration (highest in absolute terms, third-highest as a portion of GDP), it’s pretty clear this is exactly what’s happening.

And it would be so easy—just look it up, right here, or here, or here—to correct that misconception. But people don’t even think to bother; they just know that their perception must be the truth. It never even occurs to them that they could be wrong, and so they don’t even bother to look.

This is not because people are stupid or lazy. (I mean, compared to what?) It’s because perceptions feel like the truth, and it’s shockingly difficult to see them as anything other than the truth.

It takes a very dedicated effort, and no small amount of training, to learn to see your own perceptions as how you see things rather than simply how things are.

I think part of what makes this so difficult is the existential terror that results when you realize that anything you believe—even anything you perceive—could potentially be wrong. Basically the entire field of epistemology is dedicated to understanding what we can and can’t be certain of—and the “can’t” is a much, much bigger set than the “can”.

In a sense, you can be certain of what you feel and perceive—you can be certain that you feel and perceive them. But you can’t be certain whether those feelings and perceptions correspond to your external reality.

When you are sad, you know that you are sad. You can be certain of that. But you don’t know whether you should be sad—whether you have a reason to be sad. Often, perhaps even usually, you do. But sometimes, the sadness comes from within you, or from misperceiving the world.

Once you learn to recognize your perceptions as perceptions, you can question them, doubt them, challenge them. Training your mind to do this is an important part of mindfulness meditation, and also of cognitive behavioral therapy.

But even after years of training, it’s still shockingly hard to do this, especially in the throes of a strong emotion. Simply seeing that what you’re feeling—about yourself, or your situation, or the world—is not an entirely accurate perception can take an incredible mental effort.

We really seem to be wired to see our perceptions as reality.

This makes a certain amount of sense, in evolutionary terms. In an ancestral environment where death was around every corner, we really didn’t have time to stop and think carefully about whether our perceptions were accurate.

Two ancient hominids hear a sound that might be a tiger. One immediately perceives it as a tiger, and runs away. The other stops to think, and then begins carefully examining his surroundings, looking for more conclusive evidence to determine whether it is in fact a tiger.

The latter is going to have more accurate beliefs—right up until the point where it is a tiger and he gets eaten.

But in our world today, it may be more dangerous to hold onto false beliefs than to question them. We may harm ourselves—and others—more by trusting our perceptions too much than by taking the time to analyze them.

Against Self-Delusion

Mar 10 JDN 2460381

Is there a healthy amount of self-delusion? Would we be better off convincing ourselves that the world is better than it really is, in order to be happy?


A lot of people seem to think so.

I most recently encountered this attitude in Kathryn Schulz’s book Being Wrong (I liked the TED talk much better, in part because it didn’t have this), but there are plenty of other examples.

You’ll even find advocates for this attitude in the scientific literature, particularly when talking about the Lake Wobegon Effect, optimism bias, and depressive realism.

Fortunately, the psychology community seems to be turning away from this, perhaps because of mounting empirical evidence that “depressive realism” isn’t a robust effect. When I searched today, it was easier to find pop psych articles against self-delusion than in favor of it. (I strongly suspect that would not have been true about 10 years ago.)

I have come up with a very simple, powerful argument against self-delusion:

If you’re allowed to delude yourself, why not just believe everything is perfect?

If you can paint your targets after shooting, why not always paint a bullseye?

The notion seems to be that deluding yourself will help you achieve your goals. But if you’re going to delude yourself, why bother achieving goals? You could just pretend to achieve goals. You could just convince yourself that you have achieved goals. Wouldn’t that be so much easier?

The idea seems to be, for instance, to get an aspiring writer to actually finish the novel and submit it to the publisher. But why shouldn’t she simply imagine she has already done so? Why not simply believe she’s already a bestselling author?

If there’s something wrong with deluding yourself into thinking you’re a bestselling author, why isn’t that exact same thing wrong with deluding yourself into thinking you’re a better writer than you are?

Once you have opened this Pandora’s Box of lies, it’s not clear how you can ever close it again. Why shouldn’t you just stop working, stop eating, stop doing anything at all, but convince yourself that your life is wonderful and die in a state of bliss?

Granted, this is not generally what people who favor (so-called) “healthy self-delusion” advocate. But it’s difficult to see any principled reason why they should reject it. Once you give up on tying your beliefs to reality, it’s difficult to see why you shouldn’t just say that anything goes.

Why are some deviations from reality okay, but not others? Is it because they are small? Small changes in belief can still have big consequences: Believe a car is ten meters behind where it really is, and it may just run you over.

The general approach of “healthy self-delusion” seems to be that it’s all right to believe that you are smarter, prettier, healthier, wiser, and more competent than you actually are, because that will make you more confident and therefore more successful.

Well, first of all, it’s worth pointing out that some people obviously go way too far in that direction and become narcissists. But okay, let’s say we find a way to avoid that. (It’s unclear exactly how, since, again, by construction, we aren’t tying ourselves to reality.)

In practice, the people who most often get this sort of advice are people who currently lack self-confidence, who doubt their own abilities—people who suffer from Impostor Syndrome. And for people like that (and I count myself among them), a certain amount of greater self-confidence would surely be a good thing.

The idea seems to be that deluding yourself to increase your confidence will get you to face challenges and take risks you otherwise wouldn’t have, and that this will yield good outcomes.

But there’s a glaring hole in this argument:

If you have to delude yourself in order to take a risk, you shouldn’t take that risk.

Risk-taking is not an unalloyed good. Russian Roulette is certainly risky, but it’s not a good career path.

There are in fact a lot of risks you simply shouldn’t take, because they aren’t worth it.

The right risks to take are the ones for which the expected benefit outweighs the expected cost: The one with the highest expected utility. (That sounds simple, and in principle it is; but in practice, it can be extraordinarily difficult to determine.)
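That decision rule is trivial to state in code, even though estimating its inputs is the genuinely hard part. A minimal sketch, with made-up probabilities and payoffs standing in for the novelist's actual prospects:

```python
def expected_utility(outcomes):
    """Expected utility of an action: sum of probability * utility over its outcomes."""
    return sum(p * u for p, u in outcomes)

# The aspiring novelist's choice, with purely illustrative numbers:
submit = [
    (0.1, 100.0),  # 10% chance the book sells: big payoff
    (0.9, -5.0),   # 90% chance of rejection: small cost in time and pride
]
dont_submit = [(1.0, 0.0)]  # the status quo

# Take the risk iff its expected utility beats the alternative.
print(expected_utility(submit) > expected_utility(dont_submit))  # True: 5.5 > 0
```

Note that the answer flips if her odds of selling are genuinely terrible: drop the success probability low enough and the correct decision becomes not to submit. Which is exactly the point—the rule rewards accurate beliefs, not confident ones.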

In other words, the right risks to take are the ones that are rational. The ones that a correct view of the world will instruct you to take.

That aspiring novelist, then, should write the book and submit it to publishers—if she’s actually any good at writing. If she’s actually terrible, then never submitting the book is the correct decision; she should spend more time honing her craft before she tries to finish it—or maybe even give up on it and do something else with her life.

What she needs, therefore, is not a confident assessment of her abilities, but an accurate one. She needs to believe that she is competent if and only if she actually is competent.

But I can also see how self-delusion can seem like good advice—and even work for some people.

If you start from an excessively negative view of yourself or the world, then giving yourself a more positive view will likely cause you to accomplish more things. If you’re constantly telling yourself that you are worthless and hopeless, then convincing yourself that you’re better than you thought is absolutely what you need to do. (Because it’s true.)

I can even see how convincing yourself that you are the best is useful—even though, by construction, most people aren’t. When you live in a hyper-competitive society like ours, where we are constantly told that winning is everything, losers are worthless, and second place is as bad as losing, it may help you get by to tell yourself that you really are the best, that you really can win. (Even weirder: “Winning isn’t everything; it’s the only thing.” Uh, that’s just… obviously false? Like, what is this even intended to mean that “Winning is everything” didn’t already say better?)

But that’s clearly not the right answer. You’re solving one problem by adding another. You shouldn’t believe you are the best; you should recognize that you don’t have to be. Second place is not as bad as losing—and neither is fifth, or tenth, or fiftieth place. The 100th-most successful author in the world still makes millions writing. The 1,000th-best musician does regular concert tours. The 10,000th-best accountant has a steady job. Even the 100,000th-best trucker can make a decent living. (Well, at least until the robots replace him.)

Honestly, it’d be great if our whole society would please get this memo. It’s no problem that “only a minority of schools play sport to a high level”—indeed, that’s literally inevitable. It’s also not clear that “60% of students read below grade level” is a problem, when “grade level” seems to be largely defined by averages. (Literacy is great and all, but what’s your objective standard for “what a sixth grader should be able to read”?)

We can’t all be the best. We can’t all even be above-average.

That’s okay. Below-average does not mean inadequate.

That’s the message we need to be sending:

You don’t have to be the best in order to succeed.

You don’t have to be perfect in order to be good enough.

You don’t even have to be above-average.

This doesn’t require believing anything that isn’t true. It doesn’t require overestimating your abilities or your chances. In fact, it asks you to believe something that is more true than “You have to be the best” or “Winning is everything”.

If what you want to do is actually worth doing, an accurate assessment will tell you that. And if an accurate assessment tells you not to do it, then you shouldn’t do it. So you have no reason at all to strive for anything other than accurate beliefs.

With this in mind, the fact that the empirical evidence for “depressive realism” is shockingly weak is not only unsurprising; it’s almost irrelevant. You can’t have evidence against being rational. If deluded people succeed more, that means something is very, very wrong; and the solution is clearly not to make more people deluded.

Of course, it’s worth pointing out that the evidence is shockingly weak: Depressed people show different biases, not less bias. And in fact they seem to be more overconfident in the following sense: They are more certain that what they predict will happen is what will actually happen.

So while most people think they will succeed when they will probably fail, depressed people are certain they will fail when in fact they could succeed. Both beliefs are inaccurate, but the depressed one is in an important sense more inaccurate: It tells you to give up, which is the wrong thing to do.

“Healthy self-delusion” ultimately amounts to trying to get you to do the right thing for the wrong reasons. But why? Do the right thing for the right reasons! If it’s really the right thing, it should have the right reasons!

Serenity and its limits

Feb 25 JDN 2460367

God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.

Of course I don’t care for its religious message (and the full prayer is even more overtly religious), but the serenity prayer does capture an important insight into some of the most difficult parts of human existence.

Some things are as we would like them to be. They don’t require our intervention. (Though we may still stand to benefit from teaching ourselves to savor them and express gratitude for them.)

Other things are not as we would like them to be. The best option, of course, would be to change them.

But such change is often difficult, and sometimes practically impossible.

Sometimes we don’t even know whether change is possible—that’s where the wisdom to know the difference comes in. This is a wisdom we often lack, but it’s at least worth striving for.

If it is impossible to change what we want to change, then we are left with only one choice:

Do we accept it, or not?

The serenity prayer tells us to accept it. There is wisdom in this. Often it is the right answer. Some things about our lives are awful, but simply cannot be changed by any known means.

Death, for instance.

Someday, perhaps, we will finally conquer death, and humanity—or whatever humanity has become—will enter a new era of existence. But today is not that day. When grieving the loss of people we love, ultimately our only option is to accept that they are gone, and do our best to appreciate what they left behind, and the parts of them that are still within us. They would want us to carry on and live full lives, not forever be consumed by grief.

There are many other things we’d like to change, and maybe someday we will, but right now, we simply don’t know how: diseases we can’t treat, problems we can’t solve, questions we can’t answer. It’s often useful for someone to be trying to push those frontiers, but for any given person, the best option is often to find a way to accept things as they are.

But there are also things I cannot change and yet will not accept.

Most of these things fall into one broad category:

Injustice.

I can’t end war, or poverty, or sexism, or racism, or homophobia. Neither can you. Neither can any one person, or any hundred people, or any thousand people, or probably even any million people. (If all it took were a million dreams, we’d be there already. A billion might be enough—though it would depend which billion people shared the dream.)

I can’t. You can’t. But we can.

And here I mean “we” in a very broad sense indeed: Humanity as a collective whole. All of us together can end injustice—and indeed that is the only way it ever could be ended, by our collective action. Collective action is what causes injustice, and collective action is what can end it.

I therefore consider serenity in the face of injustice to be a very dangerous thing.

At times, and to certain degrees, that serenity may be necessary.

Those who are right now in the grips of injustice may need to accept it in order to survive. Reflecting on the horror of a concentration camp won’t get you out of it. Embracing the terror of war won’t save you from being bombed. Weeping about the sorrow of being homeless won’t get you off the streets.

Even for those of us who are less directly affected, it may sometimes be wisest to blunt our rage and sorrow at injustice—for otherwise they could be paralyzing, and if we are paralyzed, we can’t help anyone.

Sometimes we may even need to withdraw from the fight for justice, simply because we are too exhausted to continue. I read recently of a powerful analogy about this:

A choir can sing the same song forever, as long as its singers take turns resting.

If everyone tries to sing their very hardest all the time, the song must eventually end, as no one can sing forever. But if we rotate our efforts, so that at any given moment some are singing while others are resting, then we theoretically could sing for all time—as some of us die, others would be born to replace us in the song.

For a literal choir this seems absurd: Who even wants to sing the same song forever? (Lamb Chop, I guess.)

But the fight for justice probably is one we will need to continue forever, in different forms in different times and places. There may never be a perfectly just society, and even if there is, there will be no guarantee that it remains so without eternal vigilance. Yet the fight is worth it: in so many ways our society is already more just than it once was, and could be made more so in the future.

This fight will only continue if we don’t accept the way things are. Even when any one of us can’t change the world—even if we aren’t sure how many of us it would take to change the world—we still have to keep trying.

But as in the choir, each one of us also needs to rest.

We can’t all be fighting all the time as hard as we can. (I suppose if literally everyone did that, the fight for justice would be immediately and automatically won. But that’s never going to happen. There will always be opposition.)

And when it is time for each of us to rest, perhaps some serenity is what we need after all. Perhaps there is a balance to be found here: We do not accept things as they are, but we do accept that we cannot change them immediately or single-handedly. We accept that our own strength is limited and sometimes we must withdraw from the fight.

So yes, we need some serenity. But not too much.

Enough serenity to accept that we won’t win the fight immediately or by ourselves, and sometimes we’ll need to stop fighting and rest. But not so much serenity that we give up the fight altogether.

For there are many things that I can’t change—but we can.

Love is more than chemicals

Feb 18 JDN 2460360

One of the biggest problems with the rationalist community is an inability to express sincerity and reverence.

I get it: Religion is the world’s greatest source of sincerity and reverence, and religion is the most widespread and culturally important source of irrationality. So we declare ourselves enemies of religion, and also end up being enemies of sincerity and reverence.

But in doing so, we lose something very important. We cut ourselves off from some of the greatest sources of meaning and joy in human life.

In fact, we may even be undermining our own goals: If we don’t offer people secular, rationalist forms of reverence, they may find they need to turn back to religion in order to fill that niche.

One of the most pernicious forms of this anti-sincerity, anti-reverence attitude (I can’t just say ‘insincere’ or ‘irreverent’, as those have different meanings) is surely this one:

Love is just a chemical reaction.

(I thought it seemed particularly apt to focus on this one during the week of Valentine’s Day.)

On the most casual of searches I could find at least half a dozen pop-sci articles and a YouTube video propounding this notion (though I could also find a few articles trying to debunk it).

People who say this sort of thing seem to think that they are being wise and worldly while the rest of us are just being childish and naive. They think we are seeing something that isn’t there. In fact, they are being jaded and cynical. They are failing to see something that is there.

(Perhaps the most extreme form of this was from Rick & Morty; and while Rick as a character is clearly intended to be jaded and cynical, far too many people also see him as a role model.)

Part of the problem may also be a failure to truly internalize the Basic Fact of Cognitive Science:

You are your brain.

No, your consciousness is not an illusion. It’s not an “epiphenomenon” (whatever that is; I’ve never encountered one in real life). Your mind is not fake or imaginary. Your mind actually exists—and it is a product of your brain. Both brain and mind exist, and are in fact the same.

It’s so hard for people to understand this that some become dualists, denying the unity of the brain and the mind. That, at least, I can sympathize with, even though we have compelling evidence that it is wrong. But there’s another tack people sometimes take, eliminative materialism, where they try to deny that the mind exists at all. And that I truly do not understand. How can you think that nobody can think? Yet intelligent, respected philosophers have claimed to believe such things.

Love is one of the most important parts of our lives.

This may be more true of humans than of literally any other entity in the known universe.

The only serious competition comes from other mammals: They are really the only other beings we know of that are capable of love. And even they don’t seem to be as good at it as we are; they can love only those closest to them, while we can love entire nations and even abstract concepts.

And once you go beyond that, even to reptiles—let alone fish, or amphibians, or insects, or molluscs—it’s not clear that other animals are really capable of love at all. They seem to be capable of some forms of thought and feeling: They get hungry, or angry, or horny. But do they really love?

And even the barest emotional capacities of an insect are still categorically beyond what most of the universe is capable of feeling, which is to say: Nothing. The vast, vast majority of the universe feels neither love nor hate, neither joy nor pain.

Yet humans can love, and do love, and it is a large part of what gives our lives meaning.

I don’t just mean romantic love here, though I do think it’s worth noting that people who dismiss the reality of romantic love somehow seem reluctant to do the same for the love parents have for their children—even though it’s made of pretty much the same brain chemicals. Perhaps there is a limit to their cynicism.

Yes, love is made of chemicals—because everything is made of chemicals. We live in a material, chemical universe. Saying that love is made of chemicals is an almost completely vacuous statement; it’s basically tantamount to saying that love exists.

In other contexts, you already understand this.

“That’s not a bridge, it’s just a bunch of iron atoms!” rightfully strikes you as an absurd statement to make. Yes, the bridge is made of steel, and steel is mostly iron, and everything is made of atoms… but clearly there’s a difference between a random pile of iron and a bridge.

“That’s not a computer, it’s just a bunch of silicon atoms!” similarly registers as nonsense: Yes, it is indeed mostly made of silicon, but beach sand and quartz crystals are not computers.

It is in this same sense that joy is made of dopamine and love is made of chemical reactions. Yes, those are in fact the constituent parts—but things are more than just their parts.

I think that on some level, even most rationalists recognize that love is more than some arbitrary chemical reaction. I think “love is just chemicals” is mainly something people turn to for a couple of reasons: Sometimes, they are so insistent on rejecting everything that even resembles religious belief that they end up rejecting all meaning and value in human life. Other times, they have been so heartbroken that they try to convince themselves love isn’t real—to dull the pain. (But of course if it weren’t, there would be no pain to dull.)

But love is no more (or less) a chemical reaction than any other human experience: The very belief “love is just a chemical reaction” is, itself, made of chemical reactions.

Everything we do is made of chemical reactions, because we are made of chemical reactions.

Part of the problem here—and with the Basic Fact of Cognitive Science in general—is that we really have no idea how this works. For most of what we deal with in daily life, and even an impressive swath of the overall cosmos, we have a fairly good understanding of how things work. We know how cars drive, how wind blows, why rain falls; we even know how cats purr and why birds sing. But when it comes to understanding how the physical matter of the brain generates the subjective experiences of thought, feeling, and belief—of which love is made—we lack even the most basic understanding. The correlation between the two is far too strong to deny; but as far as causal mechanisms, we know absolutely nothing. (Indeed, worse than that: We can scarcely imagine a causal mechanism that would make any sense. We not only don’t know the answer; we don’t know what an answer would look like.)

So, no, I can’t tell you how we get from oxytocin and dopamine to love. I don’t know how that makes any sense. No one does. But we do know it’s true.

And just like everything else, love is more than the chemicals it’s made of.

Let’s call it “copytheft”

Feb 11 JDN 2460353

I have written previously about how ridiculous it is that we refer to the unauthorized copying of media such as music and video games as “piracy” as though it were somehow equivalent to capturing ships on the high seas.

In that post a few years ago I suggested calling it simply “unauthorized copying”, but that clearly isn’t catching on, perhaps because it’s simply too much of a mouthful. So today I offer a compromise:

Let’s call it “copytheft”.

That takes no longer to say than “piracy” (and only slightly longer to write), and far more clearly states what’s actually going on. No ships have been seized on the high seas; there has been no murder, arson, or slavery.

Yes, it’s debatable whether copytheft really constitutes theft—and I would generally argue that it does not—but just from hearing that word, you would probably infer that the following process took place:

  1. I took a thing.
  2. I made a copy of that thing that I wasn’t supposed to.
  3. I put the original thing back where it was, unharmed.

The paradigmatic example of this theft-copy-replace sequence would be a key, of course: You take someone’s key, copy it, then put the key back where it was, so you now can unlock their locks but they are none the wiser.

With unauthorized copying of media, you’re not exactly doing steps 1 and 3; the copier often has the media completely legitimately before they make the copy, and it may not even have a clear physical location to be put back to (it must be physically stored somewhere, but particularly if it’s streamed from the cloud it hardly matters where).

But you’re definitely doing step 2, and that was the only part that had a permanent effect; so I think that the nomenclature still seems to work well enough.

Copytheft also has a similar sound to copyleft, the use of alternative intellectual property mechanisms by authors to grant broader licensing than is ordinarily afforded by copyright, and also to copyfraud, the crime of claiming exclusive copyright to content that is in fact public domain. Hopefully that common structure will help the term get some purchase.

Of course, I can hardly bring a word into widespread use on my own. Others like you have to not only read it, but like it enough that you’re willing to actually use it—and then we need a certain critical mass of people using it in order to make it actually catch on.

So, I’d like to take a moment to offer you some justification why it’s worth changing to this new word.

First, it is admittedly imperfect; by containing the word “theft”, it already feels like we’re conceding something to the defenders of copyright.

But by including the word “copy” in the term, we can draw attention to the most important aspect that distinguishes copytheft from, well, theft:

The original owner still has the thing.

That’s the part that they want us to forget, that the harsh word “piracy” leads you towards. A ship that is captured by pirates is a ship that may never again sail for your own navy. A song that is “pirated”—copythefted—is one that not only the original owners, but also everyone who bought it, still have in exactly the same state they did before.

Thus it simply cannot be that copytheft takes money out of the hands of artists. At worst, it fails to give money to artists.

That could still be a bad thing: Artists need to pay bills too, and a world where nobody pays for any art is surely a world with a lot fewer artists—and the ones who remain far more miserable. But it’s clearly a different sort of thing than ordinary theft, as nothing has been lost.

Moreover, it’s not clear that in most cases copytheft even does fail to give money that would otherwise have been given. Maybe sometimes it does—a certain proportion of people who copytheft a given song, film, or video game might have been willing to pay the original price if the copythefted version had not been available. But typically I suspect that people who’d be willing to pay full price… do pay full price. Thus, the people who are copythefting the media wouldn’t have bought it at full price anyway.

They might have bought it at some lower price, in which case that is foregone payment; but it’s surely considerably less than the “losses” often reported by the film and music industries, which seem to be based on the assumption that everyone who copythefts would have otherwise paid full price. And in fact many people might have been unwilling to buy at any nonzero price, and were only willing to copytheft the media precisely because it didn’t cost them any money or a great deal of effort to do so.
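To make that arithmetic concrete, here is a minimal sketch (all numbers hypothetical) comparing an industry-style “loss” estimate, which counts every unauthorized copy as a lost full-price sale, with the actual foregone revenue given a distribution of willingness to pay:

```python
# Hypothetical willingness-to-pay (in dollars) among 10 people
# who copied a game whose retail price is $60.
willingness_to_pay = [0, 0, 0, 5, 10, 15, 20, 30, 45, 60]
price = 60

# Industry-style estimate: every copy is a lost full-price sale.
claimed_loss = price * len(willingness_to_pay)

# Actual foregone revenue: only those willing to pay the full
# price would otherwise have bought it.
actual_foregone = price * sum(1 for w in willingness_to_pay if w >= price)

print(claimed_loss)     # 600
print(actual_foregone)  # 60
```

With these (made-up) numbers, the claimed “loss” overstates the foregone revenue tenfold; the real-world ratio depends entirely on the actual distribution of willingness to pay.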

And in fact if you think about it, what about people who would have been willing to pay more than the original price? Surely there were many of them as well, yet we don’t grant media corporations the right to that money. That is also money that they could have been given but weren’t—and we decided, as a society, that they didn’t deserve to have it. It’s not that it would be impossible to do so: We could give corporations the authority to price-discriminate on all of their media. (They probably couldn’t do it perfectly, but they could surely do it quite well.) But we made the policy choice to live in a world where media is sold by single-price monopolies rather than one where it is sold by price-discriminating monopolies.
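The same kind of toy calculation (again with hypothetical numbers) shows the gap between a single-price monopoly and a perfectly price-discriminating one—the extra revenue that we, as a policy choice, do not let media companies capture:

```python
# Hypothetical willingness-to-pay among 10 potential buyers.
wtp = [5, 10, 15, 20, 30, 40, 50, 60, 80, 100]

# Single-price monopoly: choose the one price that maximizes revenue
# (everyone whose willingness to pay meets the price buys one copy).
best_price, best_revenue = max(
    ((p, p * sum(1 for w in wtp if w >= p)) for p in wtp),
    key=lambda pair: pair[1],
)

# Perfect price discrimination: charge each buyer exactly their maximum.
discriminating_revenue = sum(wtp)

print(best_price, best_revenue)  # 40 200
print(discriminating_revenue)    # 410
```

In this sketch the single-price monopolist earns $200, while a perfect price-discriminator would earn $410—more than double—yet no one calls the difference “theft.”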

The mere fact that someone might have been willing to pay you more money if the market were different does not entitle you to receive that money. It has not been stolen from you. Indeed, typically it’s more that you have not been allowed to exploit them. It’s usually the presence of competition that prevents corporations from receiving the absolute maximum profit they might potentially have received if they had full control over the market. Corporations making less profit than they otherwise would have is generally a sign of good economic policy—a sign that things are reasonably fair.

Why else is “copytheft” a good word to use?

Above all, we do not allow our terms to be defined by our opponents.

We don’t allow them to insinuate that technically violating draconian regulations designed to maximize the profits of Disney and Viacom somehow constitutes a terrible crime against other human beings.

“Piracy is not a victimless crime”, they will say.

Well, actual piracy isn’t. But copytheft? Yeah, uh, it kinda is.

Maybe not quite as victimless as, say, marijuana or psilocybin, which no one has any rational reason to want you not to use. But still, you’re not really making anyone else worse off—that sounds pretty victimless.

Of course, it does give us less reason to wear tricorn hats and eyepatches.

But guess what? You can still do that anyway!

Adversarial design

Feb 4 JDN 2460346

Have you noticed how Amazon feels a lot worse lately? Years ago, it was extremely convenient: You’d just search for what you want, it would give you good search results, you could buy what you want and be done. But now you have to slog through “sponsored results” and a bunch of random crap made by no-name companies in China before you can get to what you actually want.

Temu is even worse, and has been from the start: You can’t buy anything on Temu without first being inundated with ads. It’s honestly such an awful experience, I don’t understand why anyone is willing to buy anything from Temu.

#WelcomeToCyberpunk, I guess.

Even some video games have become like this: The free-to-play or “freemium” business model seems to be taking off, where you don’t pay money for the game itself, but then have to deal with ads inside the game trying to sell you additional content, because that’s where the developers actually make their money. And now AAA firms like EA and Ubisoft are talking about going to a subscription-based model where you don’t even own your games anymore. (Fortunately there’s been a lot of backlash against that; I hope it persists.)

Why is this happening? Isn’t capitalism supposed to make life better for consumers? Isn’t competition supposed to make products and services improve over time?

Well, first of all, these markets are clearly not as competitive as they should be. Amazon has a disturbingly large market share, and while the video game market is more competitive, it’s still dominated by a few very large firms (like EA and Ubisoft).

But I think there’s a deeper problem here, one which may be specific to media content.

What I mean by “media content” here is fairly broad: I would include art, music, writing, journalism, film, and video games.

What all of these things have in common is that they are not physical products (they’re not like a car or a phone that is a single physical object), but they are also not really services either (they aren’t something you just do as an action and it’s done, like a haircut, a surgery, or a legal defense).

Another way of thinking about this is that media content can be copied with zero marginal cost.

Because it can be copied with zero marginal cost, media content can’t simply be made and sold the way that conventional products and services are. There are a few different ways it can be monetized.


The most innocuous way is commission or patronage, where someone pays someone else to create a work because they want that work. This is totally unproblematic. You want a piece of art, you pay an artist, they make it for you; great. Maybe you share copies with the world, maybe you don’t; whatever. It’s good either way.

Unfortunately, it’s hard to sustain most artists and innovators on that model alone. (In a sense I’m using a patronage model, because I have a Patreon. But I’m not making anywhere near enough to live on that way.)

The second way is intellectual property, which I have written about before, and surely will again. If you can enforce limits on who is allowed to copy a work, then you can make a work and sell it for profit without fear of being undercut by someone else who simply copies it and sells it for cheaper. A detailed discussion of that is beyond the scope of this post, but you can read those previous posts, and I can give you the TLDR version: Some degree of intellectual property is probably necessary, but in our current society, it has clearly been taken much too far. I think artists and authors deserve to be able to copyright (or maybe copyleft) their work—but probably not for 70 years after their death.

And then there is a third way, the most insidious way: advertising. If you embed advertisements for other products and services within your content, you can then sell those ad slots for profit. This is how newspapers stay afloat, mainly; subscriptions have never been the majority of their revenue. It’s how TV was supported before cable and streaming—and cable usually has ads too, and streaming is starting to.

There is something fundamentally different about advertising as a service. Whereas most products and services you encounter in a capitalist society are made for you, designed for you to use, advertising is made at you, designed to manipulate you.

I’ve heard it put well this way:

If you’re not paying, you aren’t the customer; you’re the product.

Monetizing content by advertising effectively makes your readers (or viewers, players, etc.) into the product instead of the customer.

I call this effect adversarial design.

I chose this term because it not only conveys the right sense of being an adversary: it also includes the word ‘ad’ and the same Latin root ‘advertere’ as ‘advertising’.

When a company designs a car or a phone, they want it to appeal to customers—they want you to like it. Yes, they want to take your money; but it’s a mutually beneficial exchange. They get money, you get a product; you’re both happier.

When a company designs an ad, they want it to affect customers—they want you to do what it says, whether you like it or not. And they wouldn’t be doing it if they thought you would buy it anyway—so they are basically trying to make you do something you wouldn’t otherwise have done.

In other words, when designing a product, corporations want to be your friend.

When designing an ad, they become your enemy.

You would absolutely prefer not to have ads. You don’t want your attention taken in this way. But the way that these corporations make money—disgustingly huge sums of money—is by forcing those ads in your face anyway.

Yes, to be fair, there might be some kinds of ads that aren’t too bad. Simple, informative, unobtrusive ads that inform you that something is available you might not otherwise have known about. Movie trailers are like this; people often enjoy watching movie trailers, and they want to see what movies are going to come out next. That’s fine. I have no objection to that.

But it should be clear to anyone who has, um, used the Internet in the past decade that we have gone far, far beyond that sort of advertising. Ads have become aggressive, manipulative, aggravating, and—above all—utterly ubiquitous. You can’t escape them. They’re everywhere. Even when you use ad-block software (which I highly recommend, particularly Adblock Plus—which is free), you still can’t completely escape them.

That’s another thing that should make it pretty clear that there’s something wrong with ads: People are willing to make efforts or even pay money to make ads go away.

Whenever there is a game I like that’s ad-supported but you can pay to make the ads go away, I always feel like I’m being extorted, even if what I have to pay would have been a totally reasonable price for the game. Come on, just sell me the game. Don’t give me the game for free and then make me pay to make it not unpleasant. Don’t add anti-features.

This is clearly not a problem that market competition alone will solve. Even in highly competitive markets, advertising is still ubiquitous, aggressive and manipulative. In fact, competition may even make it worse—a true monopoly wouldn’t need to advertise very much.

Consider Coke and Pepsi ads; they’re actually relatively pleasant, aren’t they? Because all they’re trying to do is remind you and make you thirsty so you’ll buy more of the product you were already buying. They aren’t really trying to get you to buy something you wouldn’t have otherwise. They know that their duopoly is solid, and only a true Black Swan event would unseat their hegemony.

And have you ever seen an ad for your gas company? I don’t think I have—probably because I didn’t have a choice in who my gas company was; there was only one that covered my area. So why bother advertising to me?

If competition won’t fix this, what will? Is there some regulation we could impose that would make advertising less obtrusive? People have tried, without much success. I think imposing an advertising tax would help, but even that might not do enough.

What I really think we need right now is to recognize the problem and invest in solving it. Right now we have megacorporations which are thoroughly (literally) invested in making advertising more obtrusive and more ubiquitous. We need other institutions—maybe government, maybe civil society more generally—that are similarly invested in counteracting it.


Otherwise, it’s only going to get worse.

Administering medicine to the dead

Jan 28 JDN 2460339

Here are a couple of pithy quotes that go around rationalist circles from time to time:

“To argue with a man who has renounced the use and authority of reason, […] is like administering medicine to the dead[…].”

Thomas Paine, The American Crisis

“It is useless to attempt to reason a man out of a thing he was never reasoned into.”

Jonathan Swift

You usually hear that abridged version, but Thomas Paine’s full quotation is actually rather interesting:

“To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.”

― Thomas Paine, The American Crisis

It is indeed quite ineffective to convert an atheist by scripture (though that doesn’t seem to stop them from trying). Yet this quotation seems to claim that the opposite should be equally ineffective: It should be impossible to convert a theist by reason.

Well, then, how else are we supposed to do it!?

Indeed, how did we become atheists in the first place!?

You were born an atheist? No, you were born having absolutely no opinion about God whatsoever. (You were born not realizing that objects don’t fade from existence when you stop seeing them! In a sense, we were all born believing ourselves to be God.)

Maybe you were raised by atheists, and religion never tempted you at all. Lucky you. I guess you didn’t have to be reasoned into atheism.

Well, most of us weren’t. Most of us were raised into religion, and told that it held all the most important truths of morality and the universe, and that believing anything else was horrible and evil and would result in us being punished eternally.

And yet, somehow, somewhere along the way, we realized that wasn’t true. And we were able to realize that because people made rational arguments.

Maybe we heard those arguments in person. Maybe we read them online. Maybe we read them in books that were written by people who died long before we were born. But somehow, somewhere people actually presented the evidence for atheism, and convinced us.

That is, they reasoned us out of something that we were not reasoned into.

I know it can happen. I have seen it happen. It has happened to me.

And it was one of the most important events in my entire life. More than almost anything else, it made me who I am today.

I’m scared that if you keep saying it’s impossible, people will stop trying to do it—and then it will stop happening to people like me.

So please, please stop telling people it’s impossible!

Quotes like these encourage you to simply write off entire swaths of humanity—most of humanity, in fact—judging them as worthless, insane, impossible to reach. When you should be reaching out and trying to convince people of the truth, quotes like these instead tell you to give up and consider anyone who doesn’t already agree with you as your enemy.

Indeed, it seems to me that the only logical conclusion of quotes like these is violence. If it’s impossible to reason with people who oppose us, then what choice do we have, but to fight them?

Violence is a weapon anyone can use.

Reason is the one weapon in the universe that works better when you’re right.

Reason is the sword that only the righteous can wield. Reason is the shield that only protects the truth. Reason is the only way we can ever be sure that the right people win—instead of just whoever happens to be strongest.

Yes, it’s true: reason isn’t always effective, and probably isn’t as effective as it should be. Convincing people to change their minds through rational argument is difficult and frustrating and often painful for both you and them—but it absolutely does happen, and our civilization would have long ago collapsed if it didn’t.

Even people who claim to have renounced all reason really haven’t: they still know 2+2=4 and they still look both ways when they cross the street. Whatever they’ve renounced, it isn’t reason; and maybe, with enough effort, we can help them see that—by reason, of course.

In fact, maybe even literally administering medicine to the dead isn’t such a terrible idea.

There are degrees of death, after all: Someone whose heart has stopped is in a different state than someone whose cerebral activity has ceased, and both of them clearly stand a better chance of being resuscitated than someone who has been vaporized by an explosion.

As our technology improves, more and more states that were previously considered irretrievably dead will instead be considered severe states of illness or injury from which it is possible to recover. We can now restart many stopped hearts; we are working on restarting stopped brains. (Of course we’ll probably never be able to restore someone who got vaporized—unless we figure out how to make backup copies of people?)

Most of the people who now lie in the world’s hundreds of thousands of ICU beds would have been considered dead even just 100 years ago. But many of them will recover, because we didn’t give up on them.

So don’t give up on people with crazy beliefs either.

They may seem like they are too far gone, like nothing in the world could ever bring them back to the light of reason. But you don’t actually know that for sure, and the only way to find out is to try.

Of course, you won’t convince everyone of everything immediately. No matter how good your evidence is, that’s just not how this works. But you probably will convince someone of something eventually, and that is still well worthwhile.

You may not even see the effects yourself—people are often loath to admit when they’ve been persuaded. But others will see them. And you will see the effects of other people’s persuasion.

And in the end, reason is really all we have. It’s the only way to know that what we’re trying to make people believe is the truth.

Don’t give up on reason.

And don’t give up on other people, whatever they might believe.

Empathy is not enough

Jan 14 JDN 2460325

A review of Against Empathy by Paul Bloom

The title Against Empathy is clearly intentionally provocative, to the point of being obnoxious: How can you be against empathy? But the book really does largely hew toward the conclusion that empathy, far from being an unalloyed good as we may imagine it to be, is overall harmful and detrimental to society.

Bloom defines empathy narrowly, but sensibly, as the capacity to feel other people’s emotions automatically—to feel hurt when you see someone hurt, afraid when you see someone afraid. He argues surprisingly well that this capacity isn’t really such a great thing after all, because it often makes us help small numbers of people who are like us rather than large numbers of people who are different from us.

But something about the book rubs me the wrong way all throughout, and I think I finally put my finger on it:

If empathy is bad… compared to what?

Compared to some theoretical ideal of perfect compassion where we love all sentient beings in the universe equally and act only according to maxims that would yield the greatest benefit for all, okay, maybe empathy is bad.

But that is an impossible ideal. No human being has ever approached it. Even our greatest humanitarians are not like that.

Indeed, one thing has clearly characterized the very best human beings, and that is empathy. Every one of them has been highly empathetic.

The case for empathy gets even stronger if you consider the other extreme: What are human beings like when they lack empathy? Why, those people are psychopaths, and they are responsible for the majority of violent crimes and nearly all the most terrible atrocities.

Empirically, if you look at humans as we actually are, it really seems like this function is monotonic: More empathy makes people behave better. Less empathy makes them behave worse.

Yet Bloom does have a point, nevertheless.

There are real-world cases where empathy seems to have done more harm than good.

I think his best examples come from analysis of charitable donations. Most people barely give anything to charity, which we might think of as a lack of empathy. But a lot of people do give a great deal to charity—yet the charities they give to and the gifts they give are often woefully inefficient.

Let’s even set aside cases like the Salvation Army, where the charity is actively detrimental to society due to the distortions of ideology. The Salvation Army is in fact trying to do good—they’re just starting from a fundamentally evil outlook on the universe. (And if that sounds harsh to you? Take a look at what they say about people like me.)

No, let’s consider charities that are well-intentioned, and not blinded by fanatical ideology, who really are trying to work toward good things. Most of them are just… really bad at it.

The most cost-effective charities, like the ones GiveWell gives top ratings to, can save a life for about $3,000-5,000, or about $150 to $250 per QALY.

But a typical charity is far, far less efficient than that. It’s difficult to get good figures on it, but I think it would be generous to say that a typical charity is as efficient as the standard cost-effectiveness threshold used in US healthcare, which is $50,000 per QALY. That’s already two hundred times less efficient.

And many charities appear to be even below that, where their marginal dollars don’t really seem to have any appreciable benefit in terms of QALY. Maybe $1 million per QALY—spend enough, and they’d get a QALY eventually.

Other times, people give gifts to good charities, but the gifts they give are useless—the Red Cross is frequently inundated with clothing and toys that it has absolutely no use for. (Please, please, I implore you: Give them money. They can buy what they need. And they know what they need a lot better than you do.)

Why do people give to charities that don’t really seem to accomplish anything? Because they see ads that tug on their heartstrings, or are solicited directly by people on the street or by door-to-door canvassers. In other words, empathy.

Why do people give clothing and toys to the Red Cross after a disaster, instead of just writing a check or sending a credit card payment? Because they can see those crying faces in their minds, and they know that if they were a crying child, they’d want a toy to comfort them, not some boring, useless check. In other words, empathy.

Empathy is what you’re feeling when you see those Sarah McLachlan ads with sad puppies in them, designed to make you want to give money to the ASPCA.

Now, I’m not saying you shouldn’t give to the ASPCA. Actually, animal welfare advocacy is one of those issues where cost-effectiveness is really hard to assess—like political donations, and for much the same reason. If we actually managed to tilt policy so that factory farming were banned, the direct impact on billions of animals spared that suffering—while indubitably enormous—might actually be less important, morally, than the impact on public health and climate change from people eating less meat. I don’t know what multiplier to apply to a cow’s suffering to convert her QALY into mine. But I do know that the world currently eats far too much meat, and it’s cooking the planet along with the cows. Meat accounts for about 60% of food-related greenhouse gases, and food production as a whole for about 35% of all greenhouse gases.

But I am saying that if you give to the ASPCA, it should be because you support their advocacy against factory farming—not because you saw pictures of very sad puppies.

And empathy, unfortunately, doesn’t really work that way.

When you get right down to it, what Paul Bloom is really opposing is scope neglect, which is something I’ve written about before.

We just aren’t capable of genuinely feeling the pain of a million people, or a thousand, or probably even a hundred. (Maybe we can do a hundred; that’s under our Dunbar number, after all.) So when confronted with global problems that affect millions of people, our empathy system just kind of overloads and shuts down.

ERROR: OVERFLOW IN EMPATHY SYSTEM. ABORT, RETRY, IGNORE?

But when confronted with one suffering person—or five, or ten, or twenty—we can actually feel empathy for them. We can look at their crying face and we may share their tears.

Charities know this; that’s why Sarah McLachlan does those ASPCA ads. And if that makes people donate to good causes, that’s a good thing. (If it makes them donate to the Salvation Army, that’s a different story.)

The problem is, it really doesn’t tell us what causes are best to donate to. Almost any cause is going to alleviate some suffering of someone, somewhere; but there’s an enormous difference between $250 per QALY, $50,000 per QALY, and $1 million per QALY. Your $50 donation would add either two and a half months, eight hours, or just over 26 minutes of joy to someone else’s life, respectively. (In the last case, it may literally be better—morally—for you to go out to lunch or buy a video game.)
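The arithmetic above is easy to check with a quick back-of-envelope sketch (the dollar figures are the ones quoted in this post, not authoritative estimates, and the function name is just for illustration):

```python
# Back-of-envelope: how much quality-adjusted life a fixed donation buys
# at different charity efficiency levels (cost per QALY in dollars).

MINUTES_PER_YEAR = 365.25 * 24 * 60  # one QALY, expressed in minutes

def qaly_minutes(donation_usd, cost_per_qaly_usd):
    """Minutes of quality-adjusted life a donation buys at a given efficiency."""
    return donation_usd / cost_per_qaly_usd * MINUTES_PER_YEAR

for label, cost in [("top-rated charity", 250),
                    ("typical charity", 50_000),
                    ("marginal charity", 1_000_000)]:
    minutes = qaly_minutes(50, cost)
    print(f"{label:>18}: ${cost:>9,}/QALY -> "
          f"{minutes / (24 * 60):7.1f} days ({minutes:,.0f} minutes)")
```

A $50 donation comes out to roughly 73 days (about two and a half months) at $250 per QALY, about 8.8 hours at $50,000 per QALY, and about 26 minutes at $1 million per QALY—matching the figures above.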

To really know the best places to give to, you simply can’t rely on your feelings of empathy toward the victims. You need to do research—you need to do math. (Or someone does, anyway; you can also trust GiveWell to do it for you.)

Paul Bloom is right about this. Empathy doesn’t solve this problem. Empathy is not enough.

But where I think he loses me is in suggesting that we don’t need empathy at all—that we could somehow simply dispense with it. He proposes to replace it with an even-handed, universal-minded utilitarian compassion, a caring for all beings in the universe that weighs all their interests equally.

That sounds awfully appealing—other than the fact that it’s obviously impossible.

Maybe it’s something we can all aspire to. Maybe it’s something we as a civilization can someday change ourselves to become capable of feeling, in some distant transhuman future. Maybe, sometimes, at our very best moments, we can even approximate it.

But as a realistic guide for how most people should live their lives? It’s a non-starter.

In the real world, people with little or no empathy are terrible. They don’t replace it with compassion; they replace it with selfishness, greed, and impulsivity.

Indeed, in the real world, empathy and compassion seem to go hand-in-hand: The greatest humanitarians do seem like they better approximate that universal caring (though of course they never truly achieve it). But they are also invariably people of extremely high empathy.

And so, Dr. Bloom, I offer you a new title, perhaps not as catchy or striking—perhaps it would even have sold fewer books. But I think it captures the correct part of your thesis much better:

Empathy is not enough.

Compassion and the cosmos

Dec 24 JDN 2460304

When this post goes live, it will be Christmas Eve, one of the most important holidays around the world.

Ostensibly it celebrates the birth of Jesus, but it doesn’t really.

For one thing, Jesus almost certainly wasn’t born in December. The date of Christmas was largely set by the Council of Tours in AD 567; it was set to coincide with existing celebrations—not only other Christian celebrations such as the Feast of the Epiphany, but also many non-Christian celebrations such as Yuletide, Saturnalia, and others around the Winter Solstice. (People today often say “Yuletide” when they actually mean Christmas, because the syncretization was so absolute.)

For another, an awful lot of the people celebrating Christmas don’t particularly care about Jesus. Countries like Sweden, Belgium, the UK, Australia, Norway, and Denmark are majority atheist but still very serious about Christmas. Maybe we should try to secularize and ecumenize the celebration and call it Solstice or something, but that’s a tall order. For now, it’s Christmas.

Compassion, love, and generosity are central themes of Christmas—and, by all accounts, Jesus did exemplify those traits. Christianity has a very complicated history, much of it quite dark; but this part of it at least seems worth preserving and even cherishing.

It is truly remarkable that we have compassion at all.

Most of this universe has no compassion. Many would like to believe otherwise, and they invent gods and other “higher beings” or attribute some sort of benevolent “universal consciousness” to the cosmos. (Really, most people copy the prior inventions of others.)

This is all wrong.

The universe is mostly empty, and what is here is mostly pitilessly indifferent.

The vast majority of the universe is comprised of cold, dark, empty space—or perhaps of “dark energy”, a phenomenon we really don’t understand at all, which many physicists believe is actually a shockingly powerful form of energy contained within empty space.

Most of the rest is made up of “dark matter”, a substance we still don’t really understand either, but believe to be basically a dense sea of particles that have mass but not much else, which cluster around other mass by gravity but otherwise rarely interact with other matter or even with each other.

Most of the “ordinary matter”, or more properly baryonic matter (which we think of as ordinary, but which is actually by far the minority), is contained within stars and nebulae. It is mostly hydrogen and helium. Some of the other lighter elements—like lithium, sodium, carbon, oxygen, nitrogen, and all the way up to iron—can be made within ordinary stars, but still form a tiny fraction of the mass of the universe. Anything heavier than that—silver, gold, uranium—can only be made in exotic, catastrophic cosmic events, mainly supernovae, and as a result these elements are even rarer still.

Most of the universe is mind-bendingly cold: about 3 Kelvin, just barely above absolute zero.

Most of the baryonic matter is mind-bendingly hot, contained within stars that burn with nuclear fires at thousands or even millions of Kelvin.

From a cosmic perspective, we are bizarre.

We live at a weird intermediate temperature and pressure, where matter can take on such exotic states as liquid and solid, rather than the far more common gas and plasma. We do contain a lot of hydrogen—that, at least, is normal by the standards of baryonic matter. But then we’re also made up of oxygen, carbon, nitrogen, and even little bits of all sorts of other elements that can only be made in supernovae? What kind of nonsense lifeform depends upon something as exotic as iodine to survive?

Most of the universe does not care at all about you.

Most of the universe does not care about anything.

Stars don’t burn because they want to. They burn because that’s what happens when hydrogen slams into other hydrogen hard enough.

Planets don’t orbit because they want to. They orbit because if they didn’t, they’d fly away or crash into their suns—and those that did are long gone now.

Even most living things, which are already nearly as bizarre as we are, don’t actually care much.

Maybe there is a sense in which a C. elegans or an oak tree or even a cyanobacterium wants to live. It certainly seems to try to live; it has behaviors that seem purposeful, which evolved to promote its ability to survive and produce offspring. Rocks don’t behave. Stars don’t seek. But living things—even tiny, microscopic living things—do.

But we are something very special indeed.

We are animals. Lifeforms with complex, integrated nervous systems—in a word, brains—that allow us to not simply live, but to feel. To hunger. To fear. To think. To choose.

Animals—and to the best of our knowledge, only animals, though I’m having some doubts about AI lately—are capable of making choices and experiencing pleasure and pain, and thereby becoming something more than living beings: moral beings.

Because we alone can choose, we alone have the duty to choose rightly.

Because we alone can be hurt, we alone have the right to demand not to be.

Humans are even very special among animals. We are not just animals but chordates; not just chordates but mammals; not just mammals but primates. And even then, not just primates. We’re special even by those very high standards.

When you count up all the ways that we are strange compared to the rest of the universe, it seems incredibly unlikely that beings like us would come into existence at all.

Yet here we are. And however improbable it may have been for us to emerge as intelligent beings, we had to do so in order to wonder how improbable it was—and so in some sense we shouldn’t be too surprised.

It is a mistake to say that we are “more evolved” than any other lifeform; turtles and cockroaches had just as much time to evolve as we did, and if anything their relative stasis for hundreds of millions of years suggests a more perfected design: “If it ain’t broke, don’t fix it.”

But we are different from other lifeforms in a very profound way. And I dare say, we are better.

All animals feel pleasure, pain, and hunger. (Some believe that even some plants and microscopic lifeforms may too.) Pain when something damages you; hunger when you need something; pleasure when you get what you needed.

But somewhere along the way, new emotions were added: Fear. Lust. Anger. Sadness. Disgust. Pride. To the best of our knowledge, these are largely chordate emotions, often believed to have emerged around the same time as reptiles. (Does this mean that cephalopods never get angry? Or did they evolve anger independently? Surely worms don’t get angry, right? Our common ancestor with cephalopods was probably something like a worm, perhaps a nematode. Does C. elegans get angry?)

And then, much later, still newer emotions evolved. These ones seem to be largely limited to mammals. They emerged from the need for mothers to care for their few and helpless young. (Consider how a bear or a cat fiercely protects her babies from harm—versus how a turtle leaves her many, many offspring to fend for themselves.)

One emotion formed the core of this constellation:

Love.

Caring, trust, affection, and compassion—and also rejection, betrayal, hatred, and bigotry—all came from this one fundamental capacity to love. To care about the well-being of others as well as our own. To see our purpose in the world as extending beyond the borders of our own bodies.

This is what makes humans different, most of all. We are the beings most capable of love.

We are of course by no means perfect at it. Some would say that we are not even very good at loving.

Certainly there are some humans, such as psychopaths, who seem virtually incapable of love. But they are rare.

We often wish that we were better at love. We wish that there were more compassion in the world, and fear that humanity will destroy itself because we cannot find enough compassion to compensate for our increasing destructive power.

Yet if we are bad at love, compared to what?

Compared to the unthinking emptiness of space, the hellish nuclear fires of stars, or even the pitiless selfishness of a worm or a turtle, we are absolute paragons of love.

We somehow find a way to love millions of others who we have never even met—maybe just a tiny bit, and maybe even in a way that becomes harmful, as solidarity fades into nationalism fades into bigotry—but we do find a way. Through institutions of culture and government, we find a way to trust and cooperate on a scale that would be utterly unfathomable even to the most wise and open-minded bonobo, let alone a nematode.

There are no other experts on compassion here. It’s just us.

Maybe that’s why so many people long for the existence of gods. They feel as ignorant as children, and crave the knowledge and support of a wise adult. But there aren’t any. We’re the adults. For all the vast expanses of what we do not know, we actually know more than anyone else. And most of the universe doesn’t know a thing.

If we are not as good at loving as we’d like, the answer is for us to learn to get better at it.

And we know that we can get better at it, because we have. Humanity is more peaceful and cooperative now than we have ever been in our history. The process is slow, and sometimes there is backsliding, but overall, life is getting better for most people in most of the world most of the time.

As a species, as a civilization, we are slowly learning how to love ourselves, one another, and the rest of the world around us.

No one else will learn to love for us. We must do it ourselves.

But we can.

And I believe we will.