How Effective Altruism hurt me

May 12 JDN 2460443

I don’t want this to be taken the wrong way. I still strongly believe in the core principles of Effective Altruism. Indeed, it’s shockingly hard to deny them, because basically they come out to this:

Doing more good is better than doing less good.

Then again, most people want to do good. Basically everyone agrees that more good is better than less good. So what’s the big deal about Effective Altruism?

Well, in practice, most people put shockingly little effort into trying to ensure that they are doing the most good they can. A lot of people just try to be nice people, without ever concerning themselves with the bigger picture. Many of these people don’t give to charity at all.

Then, even the people who do give to charity typically give more or less at random—or worse, in proportion to how much mail those charities send them begging for donations. (Surely you can see how that is a perverse incentive?) They donate to religious organizations, which sometimes do good things, but which are fundamentally founded upon ignorance, patriarchy, and lies.

Effective Altruism is a movement intended to fix this, to get people to see the bigger picture and focus their efforts on where they will do the most good. Vet charities not just for their honesty, but also their efficiency and cost-effectiveness:

Just how many mQALY can you buy with that $1?

That part I still believe in. There is a lot of value in assessing which charities are the most effective, and trying to get more people to donate to those high-impact charities.
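
For concreteness, here’s what that arithmetic looks like, as a minimal Python sketch. The cost-per-QALY figures are illustrative assumptions (the same rough numbers I use when discussing charity cost-effectiveness elsewhere on this blog):

```python
# How many mQALY (thousandths of a quality-adjusted life year) does $1 buy?
# All cost-per-QALY figures below are illustrative assumptions, not vetted data.

def mqaly_per_dollar(cost_per_qaly):
    """Milli-QALYs purchased per dollar at a given cost per QALY."""
    return 1000.0 / cost_per_qaly

for name, cost in [("top-rated charity", 250),          # assumed ~$250/QALY
                   ("typical charity", 50_000),         # assumed ~$50,000/QALY
                   ("ineffective charity", 1_000_000)]: # assumed ~$1M/QALY
    print(f"{name}: {mqaly_per_dollar(cost):.3f} mQALY per $1")

# Output:
# top-rated charity: 4.000 mQALY per $1
# typical charity: 0.020 mQALY per $1
# ineffective charity: 0.001 mQALY per $1
```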

But there is another side to Effective Altruism, which I now realize has severely damaged my mental health.

That is the sense of obligation to give as much as you possibly can.

Peter Singer is the most extreme example of this. He seems to have mellowed—a little—in more recent years, but in some of his most famous books he uses the following thought experiment:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.

I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.

Basically everyone agrees with this particular decision: Even if you are wearing a very expensive suit that will be ruined, even if you’ll miss something really important like a job interview or even a wedding—most people agree that if you ever come across a drowning child, you should save them.

(Oddly enough, when contemplating this scenario, nobody ever seems to consider the advice that most lifeguards give, which is to throw a life preserver and then go find someone qualified to save the child—because saving someone who is drowning is a lot harder and a lot riskier than most people realize. (“Reach or throw, don’t go.”) But that’s a bit beside the point.)

But Singer argues that we are basically in this position all the time. For somewhere between $500 and $3000, you—yes, you—could donate to a high-impact charity, and thereby save a child’s life.

Does it matter that many other people are better positioned to donate than you are? Does it matter that the child is thousands of miles away and you’ll never see them? Does it matter that there are actually millions of children, and you could never save them all by yourself? Does it matter that you’ll only save a child in expectation, rather than saving some specific child with certainty?

Singer says that none of this matters. For a long time, I believed him.

Now, I don’t.

For, if you actually walked by a drowning child that you could save, only at the cost of missing a wedding and ruining your tuxedo, you clearly should do that. (If it would risk your life, maybe not—and as I alluded to earlier, that’s more likely than you might imagine.) If you wouldn’t, there’s something wrong with you. You’re a bad person.

But most people don’t donate everything they could to high-impact charities. Even Peter Singer himself doesn’t. So if donating is the same as saving the drowning child, it follows that we are all bad people.

(Note: In general, if an ethical theory results in the conclusion that the whole of humanity is evil, there is probably something wrong with that ethical theory.)

Singer has tried to get out of this by saying we shouldn’t “sacrifice things of comparable importance”, and then somehow cashing out what “comparable importance” means in such a way that it doesn’t require you to live on the street and eat scraps from trash cans. (Even though the people you’d be donating to largely do live that way.)

I’m not sure that really works, but okay, let’s say it does. Even so, it’s pretty clear that anything you spend money on purely for enjoyment would have to go. You would never eat out at restaurants, unless you could show that the time saved allowed you to get more work done and therefore donate more. You would never go to movies or buy video games, unless you could show that it was absolutely necessary for your own mental functioning. Your life would be work, work, work, then donate, donate, donate, and then do the absolute bare minimum to recover from working and donating so you can work and donate some more.

You would enslave yourself.

And all the while, you’d believe that you were never doing enough, you were never good enough, you were always a terrible person, because you tried to cling to any personal joy in your own life rather than giving, giving, giving all you have.

I now realize that Effective Altruism, as a movement, had been basically telling me to do that. And I’d been listening.

I now realize that Effective Altruism has given me this voice in my head, which I hear whenever I want to apply for a job or submit work for publication:

If you try, you will probably fail. And if you fail, a child will die.

The “if you try, you will probably fail” is just an objective fact. It’s inescapable. Any given job application or writing submission will probably fail.

Yes, maybe there’s some sort of bundling we could do to reframe that, as I discussed in an earlier post. But basically, this is correct, and I need to accept it.

Now, what about the second part? “If you fail, a child will die.” To most of you, that probably sounds crazy. And it is crazy. It’s way more pressure than any ordinary person should have in their daily life. This kind of pressure should be reserved for neurosurgeons and bomb squads.

But this is essentially what Effective Altruism taught me to believe. It taught me that every few thousand dollars I don’t donate is a child I am allowing to die. And since I can’t donate what I don’t have, it follows that every few thousand dollars I fail to get is another dead child.

And since Effective Altruism is so laser-focused on results above all else, it taught me that it really doesn’t matter whether I apply for the job and don’t get it, or never apply at all; the outcome is the same, and that outcome is that children suffer and die because I had no money to save them.

I think part of the problem here is that Effective Altruism is utilitarian through and through, and utilitarianism has very little place for good enough. There is better and there is worse; but there is no threshold at which you can say that your moral obligations are discharged and you are free to live your life as you wish. There is always more good that you could do, and therefore always more that you should do.

Do we really want to live in a world where to be a good person is to owe your whole life to others?

I do not believe in absolute selfishness. I believe that we owe something to other people. But I no longer believe that we owe everything. Sacrificing my own well-being at the altar of altruism has been incredibly destructive to my mental health, and I don’t think I’m the only one.

By all means, give to high-impact charities. But give a moderate amount—at most, tithe—and then go live your life. You don’t owe the world more than that.

Of men and bears

May 5 JDN 2460436

[CW: rape, violence, crime, homicide]

I think it started on TikTok, but I’m too old for TikTok, so I first saw it on Facebook and Twitter.

Men and women were asked:
“Would you rather be alone in the woods with a man, or a bear?”

Answers seem to have been pretty mixed. Some women still thought a man was a safer choice, but a significant number chose the bear.

Then when the question was changed to a woman, almost everyone chose the woman over the bear.

What can we learn from this?

I think the biggest thing it tells us is that a lot of women are afraid of men. If you are seriously considering the wild animal over the other human being, you’re clearly afraid.

A lot of the discourse on this seems to be assuming that they are right to be afraid, but I’m not so sure.

It’s not that the fear is unfounded: Most women will suffer some sort of harassment, and a sizeable fraction will suffer some sort of physical or sexual assault, at the hands of some men at some point in their lives.

But there is a cost to fear, and I don’t think we’re taking it properly into account here. I’m worried that encouraging women to fear men will only serve to damage relationships between men and women, the vast majority of which are healthy and positive. I’m worried that this fear is really the sort of overreaction to trauma that ends up causing its own kind of harm.

If you think that’s wrong, consider this:

A sizeable fraction of men will be physically assaulted by other men.

Should men fear each other?

Should all men fear all other men?

What does it do to a society when its whole population fears half of its population? Does that sound healthy? Does whatever small increment in security that might provide seem worth it?

Keep in mind that women being afraid of men doesn’t seem to be protecting them from harm right now. So even if there is genuine harm to be feared, the harm of that fear is actually a lot more obvious than the benefit of it. Our entire society becomes fearful and distrustful, and we aren’t actually any safer.

I’m worried that this is like our fear of terrorism, which made us sacrifice our civil liberties without ever clearly making us safer. What are women giving up due to their fear of men? Is it actually protecting them?

If you have any ideas for how we might actually make women safer, let’s hear them. But please, stop saying idiotic things like “Don’t be a rapist.” 95% of men already aren’t, and the 5% who are, are not going to listen to anything you—or I—say to them. (Bystander intervention programs can work. But just telling men to not be rapists does not.)

I’m all for teaching about consent, but it really isn’t that hard to do—and most rapists seem to understand it just fine, they just don’t care. They’ll happily answer on a survey that they “had sex with someone without their consent”. By all means, undermine rape myths; just don’t expect it to dramatically reduce the rate of rape.

I absolutely want to make people safer. But telling people to be afraid of people like me doesn’t actually seem to accomplish that.

And yes, it hurts when people are afraid of you.

This is not a small harm. This is not a minor trifle. Once we are old enough to be seen as “men” rather than “boys” (which seems to happen faster if you’re Black than if you’re White), we know that other people—men and women, but especially women—will fear us. We go through our whole lives having to be careful what we say, how we move, when we touch someone else, because we are shaped like rapists.

When my mother encounters a child, she immediately walks up to the child and starts talking to them, pointing, laughing, giggling. I can’t do that. If I tried to do the exact same thing, I would be seen as a predator. In fact, without children of my own, it’s safer for me to just not interact with children at all, unless they are close friends or family. This is a whole class of joyful, fulfilling experience that I just don’t get to have because people who look like me commit acts of violence.

Normally we’re all about breaking down prejudice, not treating people differently based on how they look—except when it comes to gender, apparently. It’s okay to fear men but not women.

Who is responsible for this?

Well, obviously the ones most responsible are actual rapists.

But they aren’t very likely to listen to me. If I know any rapists, I don’t know that they are rapists. If I did know, I would want them imprisoned. (Which is likely why they wouldn’t tell me if they were.)

Moreover, my odds of actually knowing a rapist are probably lower than you think, because I don’t like to spend time with men who are selfish, cruel, aggressive, misogynist, or hyper-masculine. The fact that 5% of men in general are rapists doesn’t mean that 5% of any non-random sample of men are rapists. I can only think of a few men I have ever known personally who I would even seriously suspect, and I’ve cut ties with all of them.

The fact that psychopaths are not slavering beasts, obviously different from the rest of us, does not mean that there is no way to tell who is a psychopath. It just means that you need to know what you’re actually looking for. When I once saw a glimmer of joy in someone’s eyes as he described the suffering of animals in an experiment, I knew in that moment he was a psychopath. (There are legitimate reasons to harm animals in scientific experiments—but a good person does not enjoy it.) He did not check most of the boxes of the “Slavering Beast theory”: He had many friends; he wasn’t consistently violent; he was a very good liar; he was quite accomplished in life; he was handsome and charismatic. But go through an actual psychopathy checklist, and you realize that every one of these features makes psychopathy more likely, not less.

I’m not even saying it’s easy to detect psychopaths. It’s not. Even experts need to look very closely and carefully, because psychopaths are often very good at hiding. But there are differences. And it really is true that the selfish, cruel, aggressive, misogynist, hyper-masculine men are more likely to be rapists than the generous, kind, gentle, feminist, androgynous men. It’s not a guarantee—there are lots of misogynists who aren’t rapists, and there are men who present as feminists in public but are rapists in private. But it is a tendency nevertheless. You don’t need to treat every man as equally dangerous, and I don’t think it’s healthy to do so.

Indeed, if I had the choice to be alone in the woods with either a gay male feminist or a woman I knew was cruel to animals, I’d definitely choose the man. These differences matter.

And maybe, just maybe, if we could tamp down this fear a little bit, men and women could have healthier interactions with one another and build stronger relationships. Even if the fear is justified, it could still be doing more harm than good.

So are you safer with a man, or a bear?

Let’s go back to the original thought experiment, and consider the actual odds of being attacked. Yes, the number of people actually attacked by bears is far smaller than the number of people actually attacked by men. (It’s also smaller than the number of people attacked by women, by the way.)

This is obviously because we are constantly surrounded by people, and rarely interact with bears.

In other words, that fact alone basically tells us nothing. It could still be true even if bears are far more dangerous than men, because people interact with bears far less often.

The real question is “How likely is an attack, given that you’re alone in the woods with one?”

Unfortunately, I was unable to find any useful statistics on this. There are a lot of vague statements like “Bears don’t usually attack humans” or “Bears only attack when startled or protecting their young”; okay. But how often is “usually”? How often are bears startled? What proportion of bears you might encounter are protecting their young?

So this is really a stab in the dark; but do you think it’s perhaps fair to say that maybe 10% of bear-human close encounters result in an attack?

That doesn’t seem like an unreasonably high number, at least. 90% not attacking sounds like “usually”. Being startled or protecting their young don’t seem like events much rarer than 10%. This estimate could certainly be wrong (and I’m sure it’s not precise), but it seems like the right order of magnitude.

So I’m going to take that as my estimate:

If you are alone in the woods with a bear, you have about a 10% chance of being attacked.

Now, what is the probability that a randomly-selected man would attack you, if you were alone in the woods with him?

This one can be much better estimated. It is roughly equal to the proportion of men who are psychopaths.

Now, figures on this vary too, partly because psychopathy comes in degrees. But at the low end we have about 1.2% of men and 0.3% of women who are really full-blown psychopaths, and at the high end we have about 10% of men and 2% of women who exhibit significant psychopathic traits.

I’d like to note two things about these figures:

  1. It still seems like the man is probably safer than the bear.
  2. Men are only about four or five times as likely to be psychopaths as women.

Admittedly, my bear estimate is very imprecise; so if, say, only 5% of bear encounters result in attacks and 10% of men would attack if you were alone in the woods, men could be more dangerous. But I think it’s unlikely. I’m pretty sure bears are more dangerous.
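
To make the comparison explicit, here’s a minimal sketch; every probability in it is one of the guesses or ranges quoted above, not measured data:

```python
# Rough comparison of P(attack | alone in the woods with X).
# Every number here is a guess or range from the text above, not measured data.

p_bear = 0.10                  # my order-of-magnitude guess for bears
p_man = (0.012, 0.10)          # full-blown psychopathy vs. broad psychopathic traits
p_woman = (0.003, 0.02)

print(f"bear:  {p_bear:.1%}")
print(f"man:   {p_man[0]:.1%} to {p_man[1]:.1%}")
print(f"woman: {p_woman[0]:.1%} to {p_woman[1]:.1%}")
print(f"man/woman ratio: {p_man[0] / p_woman[0]:.0f}x to {p_man[1] / p_woman[1]:.0f}x")

# Even at the high end, the man merely ties the bear; at the low end he is
# roughly eight times safer. Either way, the man/woman gap (4-5x) is much
# smaller than the human/bear gap.
```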

But the really interesting thing is that people who seemed ambivalent about man versus bear, or even were quite happy to choose the bear, seem quite consistent in choosing women over bears. And I’m not sure the gender difference is really large enough to justify that.

If 1.2% to 10% of men are enough for us to fear all men, why aren’t 0.3% to 2% of women enough for us to fear all women? Is there a threshold at 1% or 5% that flips us from “safe” to “dangerous”?

But aren’t men responsible for most violence, especially sexual violence?

Yes, but probably not by as much as you think.

The vast majority of rapes are committed by men, and most of those are against women. But the figures may not be as lopsided as you imagine; in a given year, about 0.3% of women are raped by a man, and about 0.1% of men are raped by a woman. Over their lifetimes, about 25% of women will be sexually assaulted, and about 5% of men will be. Rapes of men by women have gone even more under-reported than rapes in general, in part because it was only recently that being forced to penetrate someone was counted as a sexual assault—even though it very obviously is.

So men are about 5 times as likely to commit rape as women. That’s a big difference, but I bet it’s a lot smaller than what many of you believed. There are statistics going around that claim that as many as 99% of rapes are committed by men; those statistics are ignoring the “forced to penetrate” assaults, and thus basically defining rape of men by women out of existence.

Indeed, 5 to 1 is quite close to the ratio in psychopathy.

I think that’s no coincidence: In fact, I think it’s largely the case that the psychopaths and the rapists are the same people.

What about homicide?

While men are indeed much more likely to be perpetrators of homicide, they are also much more likely to be victims.

Of about 23,000 homicide offenders in 2022, 15,100 were known to be men, 2,100 were known to be women, and 5,800 were unknown (because we never caught them). Assuming that women are no more or less likely to be caught than men, we can ignore the unknown, and presume that the same gender ratio holds across all homicides: 12% are committed by women.

Of about 22,000 homicides in the US last year, 17,700 victims were men and 3,900 were women. So men are about 4.5 times as likely to be murdered as women in the US. Similar ratios hold in most First World countries (though total numbers are lower).

Overall, this means that men are about 7 times as likely to commit murder, but about 4.5 times as likely to suffer it.
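
If you want to check those ratios, here’s the arithmetic as a short sketch, using the figures quoted above; the only assumption is the one already stated, that unsolved cases have the same gender split as solved ones:

```python
# Homicide offender and victim gender ratios, from the figures quoted above.
# Assumption: unsolved cases (unknown offender) have the same gender split.

offenders_men, offenders_women = 15_100, 2_100   # known-gender offenders, 2022
victims_men, victims_women = 17_700, 3_900       # victims

share_women = offenders_women / (offenders_men + offenders_women)
print(f"homicides committed by women: {share_women:.0%}")                    # ~12%
print(f"offender ratio (men:women): {offenders_men / offenders_women:.1f}")  # ~7.2
print(f"victim ratio (men:women):   {victims_men / victims_women:.1f}")      # ~4.5
```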

So if we measure by rate of full-blown psychopathy, men are about 4 times as dangerous as women. If we measure by rate of moderate psychopathy, men are about 5 times as dangerous. If we measure by rate of rape, men are about 5 times as dangerous. And if we measure by rate of homicide, men are about 7 times as dangerous—but mainly to each other.

Put all this together, and I think it’s fair to summarize these results as:

Men are about five times as dangerous as women.

That’s not a small difference. But it’s also not an astronomical one. If you are right to be afraid of all men because they could rape or murder you, why are you not also right to be afraid of all women, who are one-fifth as likely to do the same?

Should we all fear everyone?

Surely you can see that isn’t a healthy way for a society to operate. Yes, there are real dangers in this world; but being constantly afraid of everyone will make you isolated, lonely, paranoid and probably depressed—and it may not even protect you.

It seems like a lot of men responding to the “man or bear” meme were honestly shocked that women are so afraid. If so, they have learned something important. Maybe that’s the value in the meme.

But the fear can be real, even justified, and still be hurting more than it’s helping. I don’t see any evidence that it’s actually making anyone any safer.

We need a better answer than fear.

Everyone includes your mother and Los Angeles

Apr 28 JDN 2460430

What are the chances that artificial intelligence will destroy human civilization?

A bunch of experts were surveyed on that question and similar questions, and half of respondents gave a probability of 5% or more; some gave probabilities as high as 99%.

This is incredibly bizarre.

Most AI experts are people who work in AI. They are actively participating in developing this technology. And yet half of them think that the technology they are working on right now has at least a 5% chance of destroying human civilization!?

It feels to me like they honestly don’t understand what they’re saying. They can’t really grasp at an intuitive level just what a 5% or 10% chance of global annihilation means—let alone a 99% chance.

If something has a 5% chance of killing everyone, we should consider that at least as bad as something that is guaranteed to kill 5% of people.

Probably worse, in fact, because you can recover from losing 5% of the population (we have, several times throughout history). But you cannot recover from losing everyone. So really, it’s like losing 5% of all future people who will ever live—which could be a very large number indeed.

But let’s be a little conservative here, and just count people who already, currently exist, and use 5% of that number.

5% of 8 billion people is 400 million people.

So anyone who is working on AI and also says that AI has a 5% chance of causing human extinction is basically saying: “In expectation, I’m supporting 20 Holocausts.”
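
The expected-value arithmetic behind that claim is simple enough to spell out; here’s a sketch using the survey’s 5% figure along with my own lower estimates, which I’ll explain below:

```python
# Treat "a P% chance of killing everyone" as equivalent to killing P% of the
# current population (~8 billion). The 5% figure is from the survey; the 1%
# and 0.1% figures are my own estimates, explained below.

population = 8_000_000_000

for label, p in [("survey figure (5%)", 0.05),
                 ("my civilization-collapse estimate (1%)", 0.01),
                 ("my extinction estimate (0.1%)", 0.001)]:
    print(f"{label}: {p * population:,.0f} expected deaths")

print(f"uncertainty band (0.05% to 0.15%): "
      f"{0.0005 * population:,.0f} to {0.0015 * population:,.0f}")

# survey figure (5%): 400,000,000 expected deaths
# my civilization-collapse estimate (1%): 80,000,000 expected deaths
# my extinction estimate (0.1%): 8,000,000 expected deaths
# uncertainty band (0.05% to 0.15%): 4,000,000 to 12,000,000
```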

If you really think the odds are that high, why aren’t you demanding that any work on AI be tried as a crime against humanity? Why aren’t you out there throwing Molotov cocktails at data centers?

(To be fair, Eliezer Yudkowsky is actually calling for a global ban on AI that would be enforced by military action. That’s the kind of thing you should be doing if indeed you believe the odds are that high. But most AI doomsayers don’t call for such drastic measures, and many of them even continue working in AI as if nothing is wrong.)

I think this must be scope neglect, or something even worse.

If you thought a drug had a 99% chance of killing your mother, you would never let her take the drug, and you would probably sue the company for making it.

If you thought a technology had a 99% chance of destroying Los Angeles, you would never even consider working on that technology, and you would want that technology immediately and permanently banned.

So I would like to remind anyone who says they believe the danger is this great and yet continues working in the industry:

Everyone includes your mother and Los Angeles.

If AI destroys human civilization, that means AI destroys Los Angeles. However shocked and horrified you would be if a nuclear weapon were detonated in the middle of Hollywood, you should be at least that shocked and horrified by anyone working on advancing AI, if indeed you truly believe that there is at least a 5% chance of AI destroying human civilization.

But people just don’t seem to think this way. Their minds seem to take on a totally different attitude toward “everyone” than they would take toward any particular person or even any particular city. The notion of total human annihilation is just so remote, so abstract, they can’t even be afraid of it the way they are afraid of losing their loved ones.

This despite the fact that everyone includes all your loved ones.

If a drug had a 5% chance of killing your mother, you might let her take it—but only if that drug was the best way to treat some very serious disease. Chemotherapy can be about that risky—but you don’t go on chemo unless you have cancer.

If a technology had a 5% chance of destroying Los Angeles, I’m honestly having trouble thinking of scenarios in which we would be willing to take that risk. But the closest I can come to it is the Manhattan Project. If you’re currently fighting a global war against fascist imperialists, and they are also working on making an atomic bomb, then being the first to make an atomic bomb may in fact be the best option, even if you know that it carries a serious risk of utter catastrophe.

In any case, I think one thing is clear: You don’t take that kind of serious risk unless there is some very large benefit. You don’t take chemotherapy on a whim. You don’t invent atomic bombs just out of curiosity.

Where’s the huge benefit of AI that would justify taking such a huge risk?

Some forms of automation are clearly beneficial, but so far AI per se seems to have largely made our society worse. ChatGPT lies to us. Robocalls inundate us. Deepfakes endanger journalism. What’s the upside here? It makes a ton of money for tech companies, I guess?

Now, fortunately, I think 5% is too high an estimate.

(Scientific American agrees.)

My own estimate is that, over the next two centuries, there is about a 1% chance that AI destroys human civilization, and only a 0.1% chance that it results in human extinction.

This is still really high.

People seem to have trouble with that too.

“Oh, there’s a 99.9% chance we won’t all die; everything is fine, then?” No. There are plenty of other scenarios that would also be very bad, and a total extinction scenario is so terrible that even a 0.1% chance is not something we can simply ignore.

0.1% of people is still 8 million people.

I find myself in a very odd position: On the one hand, I think the probabilities that doomsayers are giving are far too high. On the other hand, I think the actions that are being taken—even by those same doomsayers—are far too small.

Most of them don’t seem to consider a 5% chance to be worthy of drastic action, while I consider a 0.1% chance to be well worthy of it. I would support a complete ban on all AI research immediately, just from that 0.1%.

The only research we should be doing that is in any way related to AI should involve how to make AI safer—absolutely no one should be trying to make it more powerful or apply it to make money. (Yet in reality, almost the opposite is the case.)

Because 8 million people is still a lot of people.

Is it fair to treat a 0.1% chance of killing everyone as equivalent to killing 0.1% of people?

Well, first of all, we have to consider the uncertainty. The difference between a 0.05% chance and a 0.15% chance is millions of lives in expectation, but there’s probably no way we can actually measure it that precisely.

But it seems to me that something expected to kill between 4 million and 12 million people would still generally be considered very bad.

More importantly, there’s also a chance that AI will save people, or have similarly large benefits. We need to factor that in as well. Something that will kill 4-12 million people but also save 15-30 million people is probably still worth doing (but we should also be trying to find ways to minimize the harm and maximize the benefit).

The biggest problem is that we are deeply uncertain about both the upsides and the downsides. There are a vast number of possible outcomes from inventing AI. Many of those outcomes are relatively mundane; some are moderately good, others are moderately bad. But the moral question seems to be dominated by the big outcomes: With some small but non-negligible probability, AI could lead to either a utopian future or an utter disaster.

The way we are leaping directly into applying AI without even being anywhere close to understanding AI seems to me especially likely to lean toward disaster. No other technology has ever become so immediately widespread while also being so poorly understood.

So far, I’ve yet to see any convincing arguments that the benefits of AI are anywhere near large enough to justify this kind of existential risk. In the near term, AI really only promises economic disruption that will largely be harmful. Maybe one day AI could lead us into a glorious utopia of automated luxury communism, but we really have no way of knowing that will happen—and it seems pretty clear that Google is not going to do that.

Artificial intelligence technology is moving too fast. Even if it doesn’t become powerful enough to threaten our survival for another 50 years (which I suspect it won’t), if we continue on our current path of “make money now, ask questions never”, it’s still not clear that we would actually understand it well enough to protect ourselves by then—and in the meantime it is already causing us significant harm for little apparent benefit.

Why are we even doing this? Why does halting AI research feel like stopping a freight train?

I dare say it’s because we have handed over so much power to corporations.

The paperclippers are already here.

Love is more than chemicals

Feb 18 JDN 2460360

One of the biggest problems with the rationalist community is an inability to express sincerity and reverence.

I get it: Religion is the world’s greatest source of sincerity and reverence, and religion is the most widespread and culturally important source of irrationality. So we declare ourselves enemies of religion, and also end up being enemies of sincerity and reverence.

But in doing so, we lose something very important. We cut ourselves off from some of the greatest sources of meaning and joy in human life.

In fact, we may even be undermining our own goals: If we don’t offer people secular, rationalist forms of reverence, they may find they need to turn back to religion in order to fill that niche.

One of the most pernicious forms of this anti-sincerity, anti-reverence attitude (I can’t just say ‘insincere’ or ‘irreverent’, as those have different meanings) is surely this one:

Love is just a chemical reaction.

(I thought it seemed particularly apt to focus on this one during the week of Valentine’s Day.)

On the most casual of searches I could find at least half a dozen pop-sci articles and a YouTube video propounding this notion (though I could also find a few articles trying to debunk it).

People who say this sort of thing seem to think that they are being wise and worldly while the rest of us are just being childish and naive. They think we are seeing something that isn’t there. In fact, they are being jaded and cynical. They are failing to see something that is there.

(Perhaps the most extreme form of this was from Rick & Morty; and while Rick as a character is clearly intended to be jaded and cynical, far too many people also see him as a role model.)

Part of the problem may also be a failure to truly internalize the Basic Fact of Cognitive Science:

You are your brain.

No, your consciousness is not an illusion. It’s not an “epiphenomenon” (whatever that is; I’ve never encountered one in real life). Your mind is not fake or imaginary. Your mind actually exists—and it is a product of your brain. Both brain and mind exist, and are in fact the same.

It’s so hard for people to understand this that some become dualists, denying the unity of the brain and the mind. That, at least, I can sympathize with, even though we have compelling evidence that it is wrong. But there’s another tack people sometimes take, eliminative materialism, where they try to deny that the mind exists at all. And that I truly do not understand. How can you think that nobody can think? Yet intelligent, respected philosophers have claimed to believe such things.

Love is one of the most important parts of our lives.

This may be more true of humans than of literally any other entity in the known universe.

The only serious competition comes from other mammals: They are really the only other beings we know of that are capable of love. And even they don’t seem to be as good at it as we are; they can love only those closest to them, while we can love entire nations and even abstract concepts.

And once you go beyond that, even to reptiles—let alone fish, or amphibians, or insects, or molluscs—it’s not clear that other animals are really capable of love at all. They seem to be capable of some forms of thought and feeling: They get hungry, or angry, or horny. But do they really love?

And even the barest emotional capacities of an insect are still categorically beyond what most of the universe is capable of feeling, which is to say: Nothing. The vast, vast majority of the universe feels neither love nor hate, neither joy nor pain.

Yet humans can love, and do love, and it is a large part of what gives our lives meaning.

I don’t just mean romantic love here, though I do think it’s worth noting that people who dismiss the reality of romantic love somehow seem reluctant to do the same for the love parents have for their children—even though it’s made of pretty much the same brain chemicals. Perhaps there is a limit to their cynicism.

Yes, love is made of chemicals—because everything is made of chemicals. We live in a material, chemical universe. Saying that love is made of chemicals is an almost completely vacuous statement; it’s basically tantamount to saying that love exists.

In other contexts, you already understand this.

“That’s not a bridge, it’s just a bunch of iron atoms!” rightfully strikes you as an absurd statement to make. Yes, the bridge is made of steel, and steel is mostly iron, and everything is made of atoms… but clearly there’s a difference between a random pile of iron and a bridge.

“That’s not a computer, it’s just a bunch of silicon atoms!” similarly registers as nonsense: Yes, it is indeed mostly made of silicon, but beach sand and quartz crystals are not computers.

It is in this same sense that joy is made of dopamine and love is made of chemical reactions. Yes, those are in fact the constituent parts—but things are more than just their parts.

I think that on some level, even most rationalists recognize that love is more than some arbitrary chemical reaction. I think “love is just chemicals” is mainly something people turn to for a couple of reasons: Sometimes, they are so insistent on rejecting everything that even resembles religious belief that they end up rejecting all meaning and value in human life. Other times, they have been so heartbroken that they try to convince themselves love isn’t real—to dull the pain. (But of course if it weren’t, there would be no pain to dull.)

But love is no more (or less) a chemical reaction than any other human experience: The very belief “love is just a chemical reaction” is, itself, made of chemical reactions.

Everything we do is made of chemical reactions, because we are made of chemical reactions.

Part of the problem here—and with the Basic Fact of Cognitive Science in general—is that we really have no idea how this works. For most of what we deal with in daily life, and even an impressive swath of the overall cosmos, we have a fairly good understanding of how things work. We know how cars drive, how wind blows, why rain falls; we even know how cats purr and why birds sing. But when it comes to understanding how the physical matter of the brain generates the subjective experiences of thought, feeling, and belief—of which love is made—we lack even the most basic understanding. The correlation between the two is far too strong to deny; but as far as causal mechanisms, we know absolutely nothing. (Indeed, worse than that: We can scarcely imagine a causal mechanism that would make any sense. We not only don’t know the answer; we don’t know what an answer would look like.)

So, no, I can’t tell you how we get from oxytocin and dopamine to love. I don’t know how that makes any sense. No one does. But we do know it’s true.

And just like everything else, love is more than the chemicals it’s made of.

Administering medicine to the dead

Jan 28 JDN 2460339

Here are a couple of pithy quotes that go around rationalist circles from time to time:

“To argue with a man who has renounced the use and authority of reason, […] is like administering medicine to the dead[…].”

Thomas Paine, The American Crisis

“It is useless to attempt to reason a man out of a thing he was never reasoned into.”

Jonathan Swift

You usually hear that abridged version, but Thomas Paine’s full quotation is actually rather interesting:

“To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.”

― Thomas Paine, The American Crisis

It is indeed quite ineffective to convert an atheist by scripture (though that doesn’t seem to stop them from trying). Yet this quotation seems to claim that the opposite should be equally ineffective: It should be impossible to convert a theist by reason.

Well, then, how else are we supposed to do it!?

Indeed, how did we become atheists in the first place!?

You were born an atheist? No, you were born having absolutely no opinion about God whatsoever. (You were born not realizing that objects don’t fade from existence when you stop seeing them! In a sense, we were all born believing ourselves to be God.)

Maybe you were raised by atheists, and religion never tempted you at all. Lucky you. I guess you didn’t have to be reasoned into atheism.

Well, most of us weren’t. Most of us were raised into religion, and told that it held all the most important truths of morality and the universe, and that believing anything else was horrible and evil and would result in us being punished eternally.

And yet, somehow, somewhere along the way, we realized that wasn’t true. And we were able to realize that because people made rational arguments.

Maybe we heard those arguments in person. Maybe we read them online. Maybe we read them in books that were written by people who died long before we were born. But somehow, somewhere people actually presented the evidence for atheism, and convinced us.

That is, they reasoned us out of something that we were not reasoned into.

I know it can happen. I have seen it happen. It has happened to me.

And it was one of the most important events in my entire life. More than almost anything else, it made me who I am today.

I’m scared that if you keep saying it’s impossible, people will stop trying to do it—and then it will stop happening to people like me.

So please, please stop telling people it’s impossible!

Quotes like these encourage you to simply write off entire swaths of humanity—most of humanity, in fact—judging them as worthless, insane, impossible to reach. When you should be reaching out and trying to convince people of the truth, quotes like these instead tell you to give up and consider anyone who doesn’t already agree with you as your enemy.

Indeed, it seems to me that the only logical conclusion of quotes like these is violence. If it’s impossible to reason with people who oppose us, then what choice do we have, but to fight them?

Violence is a weapon anyone can use.

Reason is the one weapon in the universe that works better when you’re right.

Reason is the sword that only the righteous can wield. Reason is the shield that only protects the truth. Reason is the only way we can ever be sure that the right people win—instead of just whoever happens to be strongest.

Yes, it’s true: reason isn’t always effective, and probably isn’t as effective as it should be. Convincing people to change their minds through rational argument is difficult and frustrating and often painful for both you and them—but it absolutely does happen, and our civilization would have long ago collapsed if it didn’t.

Even people who claim to have renounced all reason really haven’t: they still know 2+2=4 and they still look both ways when they cross the street. Whatever they’ve renounced, it isn’t reason; and maybe, with enough effort, we can help them see that—by reason, of course.

In fact, maybe even literally administering medicine to the dead isn’t such a terrible idea.

There are degrees of death, after all: Someone whose heart has stopped is in a different state than someone whose cerebral activity has ceased, and both of them clearly stand a better chance of being resuscitated than someone who has been vaporized by an explosion.

As our technology improves, more and more states that were previously considered irretrievably dead will instead be considered severe states of illness or injury from which it is possible to recover. We can now restart many stopped hearts; we are working on restarting stopped brains. (Of course we’ll probably never be able to restore someone who got vaporized—unless we figure out how to make backup copies of people?)

Most of the people who now live in the world’s hundreds of thousands of ICU beds would have been considered dead even just 100 years ago. But many of them will recover, because we didn’t give up on them.

So don’t give up on people with crazy beliefs either.

They may seem like they are too far gone, like nothing in the world could ever bring them back to the light of reason. But you don’t actually know that for sure, and the only way to find out is to try.

Of course, you won’t convince everyone of everything immediately. No matter how good your evidence is, that’s just not how this works. But you probably will convince someone of something eventually, and that is still well worthwhile.

You may not even see the effects yourself—people are often loath to admit when they’ve been persuaded. But others will see them. And you will see the effects of other people’s persuasion.

And in the end, reason is really all we have. It’s the only way to know that what we’re trying to make people believe is the truth.

Don’t give up on reason.

And don’t give up on other people, whatever they might believe.

Empathy is not enough

Jan 14 JDN 2460325

A review of Against Empathy by Paul Bloom

The title Against Empathy is clearly intentionally provocative, to the point of being obnoxious: How can you be against empathy? But the book really does largely hew toward the conclusion that empathy, far from being an unalloyed good as we may imagine it to be, is overall harmful and detrimental to society.

Bloom defines empathy narrowly, but sensibly, as the capacity to feel other people’s emotions automatically—to feel hurt when you see someone hurt, afraid when you see someone afraid. He argues surprisingly well that this capacity isn’t really such a great thing after all, because it often makes us help small numbers of people who are like us rather than large numbers of people who are different from us.

But something about the book rubs me the wrong way all throughout, and I think I finally put my finger on it:

If empathy is bad… compared to what?

Compared to some theoretical ideal of perfect compassion where we love all sentient beings in the universe equally and act only according to maxims that would yield the greatest benefit for all, okay, maybe empathy is bad.

But that is an impossible ideal. No human being has ever approached it. Even our greatest humanitarians are not like that.

Indeed, one thing has clearly characterized the very best human beings, and that is empathy. Every one of them has been highly empathetic.

The case for empathy gets even stronger if you consider the other extreme: What are human beings like when they lack empathy? Why, those people are psychopaths, and they are responsible for the majority of violent crimes and nearly all the most terrible atrocities.

Empirically, if you look at humans as we actually are, it really seems like this function is monotonic: More empathy makes people behave better. Less empathy makes them behave worse.

Yet Bloom does have a point, nevertheless.

There are real-world cases where empathy seems to have done more harm than good.

I think his best examples come from analysis of charitable donations. Most people barely give anything to charity, which we might think of as a lack of empathy. But a lot of people do give a great deal to charity—yet the charities they give to and the gifts they give are often woefully inefficient.

Let’s even set aside cases like the Salvation Army, where the charity is actively detrimental to society due to the distortions of ideology. The Salvation Army is in fact trying to do good—they’re just starting from a fundamentally evil outlook on the universe. (And if that sounds harsh to you? Take a look at what they say about people like me.)

No, let’s consider charities that are well-intentioned, and not blinded by fanatical ideology, who really are trying to work toward good things. Most of them are just… really bad at it.

The most cost-effective charities, like the ones GiveWell gives top ratings to, can save a life for about $3,000-5,000, or about $150 to $250 per QALY.

But a typical charity is far, far less efficient than that. It’s difficult to get good figures on it, but I think it would be generous to say that a typical charity is as efficient as the standard cost-effectiveness threshold used in US healthcare, which is $50,000 per QALY. That’s already two hundred times less efficient.

And many charities appear to be even below that, where their marginal dollars don’t really seem to have any appreciable benefit in terms of QALY. Maybe $1 million per QALY—spend enough, and they’d get a QALY eventually.

Other times, people give gifts to good charities, but the gifts they give are useless—the Red Cross is frequently inundated with clothing and toys that it has absolutely no use for. (Please, please, I implore you: Give them money. They can buy what they need. And they know what they need a lot better than you do.)

Why do people give to charities that don’t really seem to accomplish anything? Because they see ads that tug on their heartstrings, or get solicited donations directly by people on the street or door-to-door canvassers. In other words, empathy.

Why do people give clothing and toys to the Red Cross after a disaster, instead of just writing a check or sending a credit card payment? Because they can see those crying faces in their minds, and they know that if they were a crying child, they’d want a toy to comfort them, not some boring, useless check. In other words, empathy.

Empathy is what you’re feeling when you see those Sarah McLachlan ads with sad puppies in them, designed to make you want to give money to the ASPCA.

Now, I’m not saying you shouldn’t give to the ASPCA. Actually, animal welfare advocacy is one of those issues where cost-effectiveness is really hard to assess—like political donations, and for much the same reason. If we actually managed to tilt policy so that factory farming were banned, the direct impact on billions of animals spared that suffering—while indubitably enormous—might actually be less important, morally, than the impact on public health and climate change from people eating less meat. I don’t know what multiplier to apply to a cow’s suffering to convert her QALY into mine. But I do know that the world currently eats far too much meat, and it’s cooking the planet along with the cows. Meat accounts for about 60% of food-related greenhouse gases, and food production as a whole accounts for about 35% of all greenhouse gases.

But I am saying that if you give to the ASPCA, it should be because you support their advocacy against factory farming—not because you saw pictures of very sad puppies.

And empathy, unfortunately, doesn’t really work that way.

When you get right down to it, what Paul Bloom is really opposing is scope neglect, which is something I’ve written about before.

We just aren’t capable of genuinely feeling the pain of a million people, or a thousand, or probably even a hundred. (Maybe we can do a hundred; that’s under our Dunbar number, after all.) So when confronted with global problems that affect millions of people, our empathy system just kind of overloads and shuts down.

ERROR: OVERFLOW IN EMPATHY SYSTEM. ABORT, RETRY, IGNORE?

But when confronted with one suffering person—or five, or ten, or twenty—we can actually feel empathy for them. We can look at their crying face and we may share their tears.

Charities know this; that’s why Sarah McLachlan does those ASPCA ads. And if that makes people donate to good causes, that’s a good thing. (If it makes them donate to the Salvation Army, that’s a different story.)

The problem is, it really doesn’t tell us what causes are best to donate to. Almost any cause is going to alleviate some suffering of someone, somewhere; but there’s an enormous difference between $250 per QALY, $50,000 per QALY, and $1 million per QALY. Your $50 donation would add either two and a half months, eight hours, or just over 26 minutes of joy to someone else’s life, respectively. (In the latter case, it may literally be better—morally—for you to go out to lunch or buy a video game.)
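
Those conversions are easy to reproduce; here’s a minimal sketch using the three price points above:

```python
# What a $50 donation buys, in time added to someone else's life, at each of
# the three price points above.

donation = 50.0
hours_per_year = 365.25 * 24   # ~8,766 hours in a year

for cost_per_qaly in (250, 50_000, 1_000_000):
    qaly = donation / cost_per_qaly
    hours = qaly * hours_per_year
    print(f"${cost_per_qaly:,}/QALY: {qaly:.6f} QALY = {hours:,.1f} hours")

# Output (annotated):
# $250/QALY: 0.200000 QALY = 1,753.2 hours (about two and a half months)
# $50,000/QALY: 0.001000 QALY = 8.8 hours
# $1,000,000/QALY: 0.000050 QALY = 0.4 hours (just over 26 minutes)
```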

To really know the best places to give to, you simply can’t rely on your feelings of empathy toward the victims. You need to do research—you need to do math. (Or someone does, anyway; you can also trust GiveWell to do it for you.)

Paul Bloom is right about this. Empathy doesn’t solve this problem. Empathy is not enough.

But where I think he loses me is in suggesting that we don’t need empathy at all—that we could somehow simply dispense with it. His offer is to replace it with an even-handed, universal-minded utilitarian compassion, a caring for all beings in the universe that values all their interests evenly.

That sounds awfully appealing—other than the fact that it’s obviously impossible.

Maybe it’s something we can all aspire to. Maybe it’s something we as a civilization can someday change ourselves to become capable of feeling, in some distant transhuman future. Maybe, sometimes, at our very best moments, we can even approximate it.

But as a realistic guide for how most people should live their lives? It’s a non-starter.

In the real world, people with little or no empathy are terrible. They don’t replace it with compassion; they replace it with selfishness, greed, and impulsivity.

Indeed, in the real world, empathy and compassion seem to go hand-in-hand: The greatest humanitarians do seem like they better approximate that universal caring (though of course they never truly achieve it). But they are also invariably people of extremely high empathy.

And so, Dr. Bloom, I offer you a new title, perhaps not as catchy or striking—perhaps it would even have sold fewer books. But I think it captures the correct part of your thesis much better:

Empathy is not enough.

Compassion and the cosmos

Dec 24 JDN 2460304

When this post goes live, it will be Christmas Eve, one of the most important holidays around the world.

Ostensibly it celebrates the birth of Jesus, but it doesn’t really.

For one thing, Jesus almost certainly wasn’t born in December. The date of Christmas was largely set by the Council of Tours in AD 567; it was set to coincide with existing celebrations—not only other Christian celebrations such as the Feast of the Epiphany, but also many non-Christian celebrations such as Yuletide, Saturnalia, and others around the Winter Solstice. (People today often say “Yuletide” when they actually mean Christmas, because the syncretization was so absolute.)

For another, an awful lot of the people celebrating Christmas don’t particularly care about Jesus. Countries like Sweden, Belgium, the UK, Australia, Norway, and Denmark are majority atheist but still very serious about Christmas. Maybe we should try to secularize and ecumenize the celebration and call it Solstice or something, but that’s a tall order. For now, it’s Christmas.

Compassion, love, and generosity are central themes of Christmas—and, by all accounts, Jesus did exemplify those traits. Christianity has a very complicated history, much of it quite dark; but this part of it at least seems worth preserving and even cherishing.

It is truly remarkable that we have compassion at all.

Most of this universe has no compassion. Many would like to believe otherwise, and they invent gods and other “higher beings” or attribute some sort of benevolent “universal consciousness” to the cosmos. (Really, most people copy the prior inventions of others.)

This is all wrong.

The universe is mostly empty, and what is here is mostly pitilessly indifferent.

The vast majority of the universe is comprised of cold, dark, empty space—or perhaps of “dark energy”, a phenomenon we really don’t understand at all, which many physicists believe is actually a shockingly powerful form of energy contained within empty space.

Most of the rest is made up of “dark matter”, a substance we still don’t really understand either, but believe to be basically a dense sea of particles that have mass but not much else, which cluster around other mass by gravity but otherwise rarely interact with other matter or even with each other.

Most of the “ordinary matter”, or more properly baryonic matter (which we think of as ordinary, but which is actually by far the minority), is contained within stars and nebulae. It is mostly hydrogen and helium. Some of the other lighter elements—like lithium, sodium, carbon, oxygen, nitrogen, and all the way up to iron—can be made within ordinary stars, but still form a tiny fraction of the mass of the universe. Anything heavier than that—silver, gold, uranium—can only be made in exotic, catastrophic cosmic events, mainly supernovae, and as a result these elements are even rarer still.

Most of the universe is mind-bendingly cold: about 3 Kelvin, just barely above absolute zero.

Most of the baryonic matter is mind-bendingly hot, contained within stars that burn with nuclear fires at thousands or even millions of Kelvin.

From a cosmic perspective, we are bizarre.

We live at a weird intermediate temperature and pressure, where matter can take on such exotic states as liquid and solid, rather than the far more common gas and plasma. We do contain a lot of hydrogen—that, at least, is normal by the standards of baryonic matter. But then we’re also made up of oxygen, carbon, nitrogen, and even little bits of all sorts of other elements that can only be made in supernovae? What kind of nonsense lifeform depends upon something as exotic as iodine to survive?

Most of the universe does not care at all about you.

Most of the universe does not care about anything.

Stars don’t burn because they want to. They burn because that’s what happens when hydrogen slams into other hydrogen hard enough.

Planets don’t orbit because they want to. They orbit because if they didn’t, they’d fly away or crash into their suns—and those that did are long gone now.

Even most living things, which are already nearly as bizarre as we are, don’t actually care much.

Maybe there is a sense in which a C. elegans or an oak tree or even a cyanobacterium wants to live. It certainly seems to try to live; it has behaviors that seem purposeful, which evolved to promote its ability to survive and reproduce. Rocks don’t behave. Stars don’t seek. But living things—even tiny, microscopic living things—do.

But we are something very special indeed.

We are animals. Lifeforms with complex, integrated nervous systems—in a word, brains—that allow us to not simply live, but to feel. To hunger. To fear. To think. To choose.

Animals—and to the best of our knowledge, only animals, though I’m having some doubts about AI lately—are capable of making choices and experiencing pleasure and pain, and thereby becoming something more than living beings: moral beings.

Because we alone can choose, we alone have the duty to choose rightly.

Because we alone can be hurt, we alone have the right to demand not to be.

Humans are special even among animals. We are not just animals but chordates; not just chordates but mammals; not just mammals but primates. And even then, not just primates. We’re special even by those very high standards.

When you count up all the ways that we are strange compared to the rest of the universe, it seems incredibly unlikely that beings like us would come into existence at all.

Yet here we are. And however improbable it may have been for us to emerge as intelligent beings, we had to do so in order to wonder how improbable it was—and so in some sense we shouldn’t be too surprised.

It is a mistake to say that we are “more evolved” than any other lifeform; turtles and cockroaches had just as much time to evolve as we did, and if anything their relative stasis for hundreds of millions of years suggests a more perfected design: “If it ain’t broke, don’t fix it.”

But we are different from other lifeforms in a very profound way. And I dare say, we are better.

All animals feel pleasure, pain, and hunger. (Some believe that even some plants and microscopic lifeforms may too.) Pain when something damages you; hunger when you need something; pleasure when you get what you needed.

But somewhere along the way, new emotions were added: Fear. Lust. Anger. Sadness. Disgust. Pride. To the best of our knowledge, these are largely chordate emotions, often believed to have emerged around the same time as reptiles. (Does this mean that cephalopods never get angry? Or did they evolve anger independently? Surely worms don’t get angry, right? Our common ancestor with cephalopods was probably something like a worm, perhaps a nematode. Does C. elegans get angry?)

And then, much later, still newer emotions evolved. These ones seem to be largely limited to mammals. They emerged from the need for mothers to care for their few and helpless young. (Consider how a bear or a cat fiercely protects her babies from harm—versus how a turtle leaves her many, many offspring to fend for themselves.)

One emotion formed the core of this constellation:

Love.

Caring, trust, affection, and compassion—and also rejection, betrayal, hatred, and bigotry—all came from this one fundamental capacity to love. To care about the well-being of others as well as our own. To see our purpose in the world as extending beyond the borders of our own bodies.

This is what makes humans different, most of all. We are the beings most capable of love.

We are of course by no means perfect at it. Some would say that we are not even very good at loving.

Certainly there are some humans, such as psychopaths, who seem virtually incapable of love. But they are rare.

We often wish that we were better at love. We wish that there were more compassion in the world, and fear that humanity will destroy itself because we cannot find enough compassion to compensate for our increasing destructive power.

Yet if we are bad at love, compared to what?

Compared to the unthinking emptiness of space, the hellish nuclear fires of stars, or even the pitiless selfishness of a worm or a turtle, we are absolute paragons of love.

We somehow find a way to love millions of others who we have never even met—maybe just a tiny bit, and maybe even in a way that becomes harmful, as solidarity fades into nationalism fades into bigotry—but we do find a way. Through institutions of culture and government, we find a way to trust and cooperate on a scale that would be utterly unfathomable even to the most wise and open-minded bonobo, let alone a nematode.

There are no other experts on compassion here. It’s just us.

Maybe that’s why so many people long for the existence of gods. They feel as ignorant as children, and crave the knowledge and support of a wise adult. But there aren’t any. We’re the adults. For all the vast expanses of what we do not know, we actually know more than anyone else. And most of the universe doesn’t know a thing.

If we are not as good at loving as we’d like, the answer is for us to learn to get better at it.

And we know that we can get better at it, because we have. Humanity is more peaceful and cooperative now than we have ever been in our history. The process is slow, and sometimes there is backsliding, but overall, life is getting better for most people in most of the world most of the time.

As a species, as a civilization, we are slowly learning how to love ourselves, one another, and the rest of the world around us.

No one else will learn to love for us. We must do it ourselves.

But we can.

And I believe we will.

The problem with “human capital”

Dec 3 JDN 2460282

By now, human capital is a standard part of the jargon of economics. It has even begun to filter down into society at large. Business executives talk frequently about “investing in their employees”. Politicians describe their education policies as “investing in our children”.

The good news: This gives businesses a reason to train their employees, and governments a reason to support education.

The bad news: This is clearly the wrong reason, and it is inherently dehumanizing.

The notion of human capital means treating human beings as if they were a special case of machinery. It says that a business may own and value many forms of productive capital: Land, factories, vehicles, robots, patents, employees.

But wait: Employees?

Businesses don’t own their employees. They didn’t buy them. They can’t sell them. They couldn’t make more of them in another factory. They can’t recycle them when they are no longer profitable to maintain.

And the problem is precisely that they would if they could.

Indeed, they used to. Slavery pre-dates capitalism by millennia, but the two quite successfully coexisted for hundreds of years. From the dawn of civilization up until all too recently, people literally were capital assets—and we now remember it as one of the greatest horrors human beings have ever inflicted upon one another.

Nor is slavery truly defeated; it has merely been weakened and banished to the shadows. The percentage of the world’s population currently enslaved is as low as it has ever been, but there are still millions of people enslaved. In Mauritania, slavery wasn’t even illegal until 1981, and those laws weren’t strictly enforced until 2007. (By then, I had already graduated from high school!) One of the most shocking things about modern slavery is how cheaply human beings are willing to sell other human beings; I have bought sandwiches that cost more than some people have paid for other people.

The notion of “human capital” basically says that slavery is the correct attitude to have toward people. It says that we should value human beings for their usefulness, their productivity, their profitability.

Business executives are quite happy to see the world in that way. It makes the way they have spent their lives seem worthwhile—perhaps even best—while allowing them to turn a blind eye to the suffering they have neglected or even caused along the way.

I’m not saying that most economists believe in slavery; on the contrary, economists led the charge of abolitionism, and the reason we wear the phrase “the dismal science” like a badge is that the accusation was first leveled at us for our skepticism toward slavery.

Rather, I’m saying that jargon is not ethically neutral. The names we use for things have power; they affect how people view the world.

This is why I endeavor always to speak of net wealth rather than net worth—because a billionaire is not worth more than other people. I’m not even sure you should speak of the net worth of Tesla Incorporated; perhaps it would be better to simply speak of its net asset value or market capitalization. But at least Tesla is something you can buy and sell (piece by piece). Elon Musk is not.

Likewise, I think we need a new term for the knowledge, skills, training, and expertise that human beings bring to their work. It is clearly extremely important; in fact in some sense it’s the most important economic asset, as it’s the only one that can substitute for literally all the others—and the one that others can least substitute for.

Human ingenuity can’t substitute for air, you say? Tell that to Buzz Aldrin—or the people who were once babies that breathed liquid for their first months of life. Yes, it’s true, you need something for human ingenuity to work with; but it turns out that with enough ingenuity, you may not need much, or even anything in particular. One day we may manufacture the air, water and food we need to live from pure energy—or we may embody our minds in machines that no longer need those things.

Indeed, it is the expansion of human know-how and technology that has been responsible for the vast majority of economic growth. We may work a little harder than many of our ancestors (depending on which ancestors you have in mind), but we accomplish with that work far more than they ever could have, because we know so many things they did not.

All that capital we have now is the work of that ingenuity: Machines, factories, vehicles—even land, if you consider all the ways that we have intentionally reshaped the landscape.

Perhaps, then, what we really need to do is invert the expression:

Humans are not machines. Machines are embodied ingenuity.

We should not think of human beings as capital. We should think of capital as the creation of human beings.

Marx described capital as “embodied labor”, but that’s really less accurate: What makes a robot a robot is much less about the hours spent building it, than the centuries of scientific advancement needed to understand how to make it in the first place. Indeed, if that robot is made by another robot, no human need ever have done any labor on it at all. And its value comes not from the work put into it, but the work that comes out of it.

Like so much of neoliberal ideology, the notion of human capital seems to treat profit and economic growth as inherent ends in themselves. Human beings only become valued insofar as we advance the will of the almighty dollar. We forget that the whole reason we should care about economic growth in the first place is that it benefits people. Money is the means, not the end; people are the end, not the means.

We should not think in terms of “investing in children”, as if they were an asset that was meant to yield a return. We should think of enriching our children—of building a better world for them to live in.

We should not speak of “investing in employees”, as though they were just another asset. We should instead respect employees and seek to treat them with fairness and justice.

That would still give us plenty of reason to support education and training. But it would also give us a much better outlook on the world and our place in it.

You are worth more than your money or your job.

The economy exists for people, not the reverse.

Don’t ever forget that.

Time and How to Use It

Nov 5 JDN 2460254

A review of Four Thousand Weeks by Oliver Burkeman

The central message of Four Thousand Weeks: Time and How to Use It seems so obvious in hindsight that it’s difficult to understand why it feels so new and unfamiliar. It’s a much-needed reaction to the obsessive culture of “efficiency” and “productivity” that dominates the self-help genre. Its core message is remarkably simple:

You don’t have time to do everything you want, so stop trying.

I actually think Burkeman understands the problem incorrectly. He argues repeatedly that it is our mortality which makes our lives precious—that it is because we only get four thousand weeks of life that we must use our time well. But this strikes me as just yet more making excuses for the dragon.

Our lives would not be less precious if we lived a thousand years or a million. Indeed, our time would hardly be any less scarce! You still can’t read every book ever written if you live a million years—for every one of those million years, another 500,000 books will be published. You could visit every one of the 10,000 cities in the world, surely; but if you spend a week in each one, nearly two centuries will have passed by the time you get back to Paris for a second visit—I have to imagine you’ll have missed quite a bit of change in that time. (And this assumes that our population remains the same—do we really think it would, if humans could live a million years?)
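The arithmetic is easy to check. Here is a quick back-of-the-envelope sketch in Python, using the figures above (500,000 books a year, 10,000 cities, one week per city):

```python
# Sanity check of the scale involved, using the figures from the text.
books_per_year = 500_000
cities = 10_000
weeks_per_city = 1

years_per_tour = cities * weeks_per_city / 52   # one tour of every city
books_missed = books_per_year * years_per_tour  # published in the meantime

print(f"One tour of every city: {years_per_tour:.0f} years")      # ~192
print(f"Books published during that tour: {books_missed:,.0f}")   # ~96 million
```

Even an immortal reader falls further behind every single year.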

Even a truly immortal being that will live until the end of time needs to decide where to be at 7 PM this Saturday.

Yet Burkeman does grasp—and I fear that too many of us do not—that our time is precious, and when we try to do everything that seems worth doing, we end up failing to prioritize what really matters most.

What do most of us spend most of our lives doing? Whatever our bosses tell us to do. Aside from sleeping, the activity that human beings spend the largest chunk of their lives on is working.

This has made us tremendously, mind-bogglingly productive—our real GDP per capita is four times what it was in just 1950, and about eight times what it was in the 1920s. Projecting back further than that is a bit dicier, but assuming even 1% annual growth, it should be about twenty times what it was at the dawn of the Industrial Revolution. We could surely live better than medieval peasants did by working only a few hours per week; yet in fact on average we work more hours than they did—by some estimates, nearly twice as much. Rather than getting the same wealth for 5% of the work, or twice the wealth for 10%, we chose to get 40 times the wealth for twice the work.
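To check my own arithmetic here, a minimal sketch, assuming 1% annual growth sustained over roughly three centuries and taking the 40-times-the-wealth-for-twice-the-work figures at face value:

```python
# Compound growth: 1% per year over roughly three centuries since the
# dawn of the Industrial Revolution gives about a 20x increase.
print(f"{1.01 ** 300:.1f}x")  # ~19.8x

# The trade-off described above: ~40x a medieval peasant's wealth for
# ~2x the working hours. Matching their wealth at our productivity
# would therefore take 2/40 = 5% of their working hours.
print(f"{2 / 40:.0%} of the work for the same wealth")   # 5%
print(f"{4 / 40:.0%} of the work for twice the wealth")  # 10%
```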

It would be one thing if all this wealth and productivity actually seemed to make us happy. But does it?

Our physical health is excellent: We are tall, we live long lives—we are smarter, even, than people of the not-so-distant past. We have largely conquered disease as the ancients knew it. Even a ‘catastrophic’ global pandemic today kills a smaller share of the population than would die in a typical year from disease in ancient times. Even many of our most common physical ailments, such as obesity, heart disease, and diabetes, are more symptoms of abundance than poverty. Our higher rates of dementia and cancer are largely consequences of living longer lives—most medieval peasants simply didn’t make it long enough to get Alzheimer’s. I wonder sometimes how ancient people dealt with other common ailments such as migraine and sleep apnea; but my guess is that they basically just didn’t—since treatment was impossible, they learned to live with it. Maybe they consoled themselves with whatever placebo treatments the healers of their local culture offered.

Yet our mental health seems to be no better than ever—and depending on how you measure it, may actually be getting worse over time. Some of the measured increase is surely due to more sensitive diagnosis; but some of it may be a genuine increase—especially as a result of the COVID pandemic. I wasn’t able to find any good estimates of rates of depression or anxiety disorders in ancient or medieval times, so I guess I really can’t say whether this is a problem that’s getting worse. But it sure doesn’t seem to be getting better. We clearly have not solved the problem of depression the way we have solved the problem of infectious disease.

Burkeman doesn’t tell us to all quit our jobs and stop working. But he does suggest that if you are particularly unhappy at your current job (as I am), you may want to quit it and begin searching for something else (as I have). He reminds us that we often get stuck in a particular pattern and underestimate the possibilities that may be available to us.

And he has advice for those who want to stay in their current jobs, too: Do less. Don’t take on everything that is asked of you. Don’t work yourself to the bone. The rewards for working harder are far smaller than our society will tell you, and the costs of burning out are far higher. Do the work that is genuinely most important, and let the rest go.

Unlike most self-help books, Four Thousand Weeks offers very little in the way of practical advice. It’s more like a philosophical treatise, exhorting you to adopt a whole new outlook on time and how you use it. But he does offer a little bit of advice, near the end of the book, in “Ten Tools for Embracing Your Finitude” and “Five Questions”.

The ten tools are as follows:

Adopt a ‘fixed volume’ approach to productivity. Limit the number of tasks on your to-do list. Set aside a particular amount of time for productive work, and work only during that time.

I am relatively good at this one; I work only during certain hours on weekdays, and I resist the urge to work at other times.

Serialize, serialize, serialize. Do one major project at a time.

I am terrible at this one; I constantly flit between different projects, leaving most of them unfinished indefinitely. But I’m not entirely convinced I’d do better trying to focus on one in particular. I switch projects because I get stalled on the current one, not because I’m anxious about not doing the others. Unless I can find a better way to break those stalls, switching projects still gets more done than staying stuck on the same one.

Decide in advance what to fail at. Prioritize your life and accept that some things will fail.

We all, inevitably, fail to achieve everything we want to. What Burkeman is telling us to do is choose in advance which achievements we will fail at. Ask yourself: How much do you really care about keeping the kitchen clean and the lawn mowed? If you’re doing these things to satisfy other people’s expectations but you don’t truly care about them yourself, maybe you should just accept that people will frown upon you for your messy kitchen and overgrown lawn.

Focus on what you’ve already completed, not just on what’s left to complete. Make a ‘done list’ of tasks you have completed today—even small ones like “brushed teeth” and “made breakfast”—to remind yourself that you do in fact accomplish things.

I may try this one for a while. It feels a bit hokey to congratulate yourself on making breakfast—but when you are severely depressed, even small tasks like that can in fact feel like an ordeal.

Consolidate your caring. Be generous and kind, but pick your battles.

I’m not very good at this one either. Spending less time on social media has helped; I am no longer bombarded quite so constantly by worthy causes and global crises. Yet I still have a vague sense that I am not doing enough, that I should be giving more of myself to help others. For me this is partly colored by a feeling that I have failed to build a career that would have both allowed me to have direct impact on some issues and also made enough money to afford large donations.

Embrace boring and single-purpose technology. Downgrade your technology to reduce distraction.

I don’t do this one, but I also don’t see it as particularly good advice. Maybe taking Facebook and (the-platform-formerly-known-as-) Twitter off your phone’s home screen is a good idea. But the reason you go to social media isn’t that they are so easy to access. It’s that you are expected to, and that you try to use them to fill some kind of need in your life—though it’s unclear they ever actually fill it.

Seek out novelty in the mundane. Cultivate awareness and appreciation of the ordinary things around you.

This one is basically a stripped-down meditation technique. It does work, but it’s also a lot harder to do than most people seem to think. It is especially hard to do when you are severely depressed. One technique I’ve learned from therapy that is surprisingly helpful is to replace “I have to” with “I get to” whenever you can: You don’t have to scoop cat litter, you get to because you have an adorable cat. You don’t have to catch the bus to work, you get to because you have a job. You don’t have to make breakfast for your family, you get to because you have a loving family.

Be a ‘researcher’ in relationships. Cultivate curiosity rather than anxiety or judgment.

Human beings are tremendously varied and often unpredictable. If you worry about whether or not people will do what you want, you’ll be constantly worried. And I have certainly been there. It can help to take a stance of detachment, where you concern yourself less with getting the right outcome and more with learning about the people you are with. I think this can be taken too far—you can become totally detached from relationships, or you could put yourself in danger by failing to pass judgment on obviously harmful behaviors—but in moderation, it’s surprisingly powerful. The first time I ever enjoyed going to a nightclub, I went (at my therapist’s suggestion) as a social scientist, tasked with observing and cataloguing the behavior around me. I still didn’t feel fully integrated into the environment (and the music was still too damn loud!), but for once, I wasn’t anxious and miserable.

Cultivate instantaneous generosity. If you feel like doing something good for someone, just do it.

I’m honestly not sure whether this one is good advice. I used to follow it much more than I do now. Interacting with the Effective Altruism community taught me to temper these impulses, and instead of giving to every random charity or homeless person that asks for money, instead concentrate my donations into a few highly cost-effective charities. Objectively, concentrating donations in this way produces a larger positive impact on the world. But subjectively, it doesn’t feel as good, it makes people sad, and sometimes it can make you feel like a very callous person. Maybe there’s a balance to be had here: Give a little when the impulse strikes, but save up most of it for the really important donations.

Practice doing nothing.

This one is perhaps the most subversive, the most opposed to all standard self-help advice. Do nothing? Just rest? How can you say such a thing, when you just reminded us that we have only four thousand weeks to live? Yet this is in fact the advice most of us need to hear. We burn ourselves out because we forget how to rest.

I am also terrible at this one. I tend to get most anxious when I have between 15 and 45 minutes of free time before an activity, because 45 minutes doesn’t feel long enough to do anything, and 15 minutes feels too long to do nothing. Logically this doesn’t really make sense: Either you have time to do something, or you don’t. But it can be hard to find good ways to fill that sort of interval, because it requires the emotional overhead of starting and stopping a task.

Then, there are the five questions:

Where in your life or work are you currently pursuing comfort, when what’s called for is a little discomfort?

It seems odd to recommend discomfort as a goal, but I think what Burkeman is getting at is that we tend to get stuck in the comfortable and familiar, even when we would be better off reaching out and exploring into the unknown. I know that for me, finally deciding to quit this job was very uncomfortable; it required taking a big risk and going outside the familiar and expected. But I am now convinced it was the right decision.

Are you holding yourself to, and judging yourself by, standards of productivity or performance that are impossible to meet?

In a word? Yes. I’m sure I am. But this one is also slipperier than it may seem—for how do we really know what’s possible? And possible for whom? If you see someone else who seems to be living the life you think you want, is it just an illusion? Are they really suffering as badly as you? Or do they perhaps have advantages you don’t, which made it possible for them, but not for you? When people say they work 60 hours per week and you can barely manage 20, are they lying? Are you truly not investing enough effort? Or do you suffer from ailments they don’t, which make it impossible for you to commit those same hours?

In what ways have you yet to accept the fact that you are who you are, not the person you think you ought to be?

I think most of us have a lot of ways that we fail to accept ourselves: physically, socially, psychologically. We are never the perfect beings we aspire to be. And constantly aspiring to an impossible ideal will surely drain you. But I also fear that self-acceptance could be a dangerous thing: What if it makes us stop striving to improve? What if we could be better than we are, but we don’t bother? Would you want a murderous psychopath to practice self-acceptance? (Then again, do they already, whether we want them to or not?) How are we to know which flaws in ourselves should be accepted, and which repaired?

In which areas of your life are you still holding back until you feel like you know what you’re doing?

This one cut me very deep. I have several areas of my life where this accusation would be apt, and one in particular where I am plainly guilty as charged: Parenting. In a same-sex marriage, offspring don’t emerge automatically without intervention. If we want to have kids, we must do a great deal of work to secure adoption. And it has been much easier—safer, more comfortable—to simply put off that work, avoid the risk. I told myself we’d adopt once I finished grad school; but then I only got a temporary job, so I put it off again, saying we’d adopt once I found stability in my career. But what if I never find that stability? What if the rest of my career is always this precarious? What if I can always find some excuse to delay? The pain of never fulfilling that lifelong dream of parenthood might continue to gnaw at me forever.

How would you spend your days differently if you didn’t care so much about seeing your actions reach fruition?

This one is frankly useless. I hate it. It’s like when people say “What would you do if you knew you’d die tomorrow?” Obviously, you wouldn’t go to work, you wouldn’t pay your bills, you wouldn’t clean your bathroom. You might devote yourself single-mindedly to one creative task you hoped to make your legacy, or gather your family and friends to share one last day of love, or throw yourself into meaningless hedonistic pleasure. Those might even be things worth doing, on occasion. But you can’t do them every day. If you knew you were about to die, you absolutely would not live in any kind of sustainable way.

Similarly, if I didn’t care about seeing my actions reach fruition, I would continue to write stories and never worry about publishing them. I would make little stabs at research whenever I got curious, then give up once it started getting difficult or boring, and never bother writing the paper. I would continue flitting between a dozen random projects at once and never finish any of them. I might well feel happier—at least until it all came crashing down—but I would get absolutely nothing done.

Above all, I would never apply for any jobs, because applying for jobs is absolutely not about enjoying the journey. If you know for a fact that you won’t get an offer, you’re an idiot to bother applying. That is a task that is only worth doing if I believe that it will yield results—and indeed, a big part of why it’s so hard to bring myself to do it is that I have a hard time maintaining that belief.

If you read the surrounding context, Burkeman actually seems to intend something quite different from the question he wrote. He suggests devoting more time to big, long-term projects that require whole communities to complete. He likens this to laying bricks in a cathedral that we will never see finished.

I do think there is wisdom in this. But it isn’t a simple matter of not caring about results. Indeed, if you don’t care at all about whether the cathedral will stand, you won’t bother laying the bricks correctly. In some sense Burkeman is actually asking us to do the opposite: To care more about results, but specifically results that we may never live to see. Maybe he really intends to emphasize the word see—you care about your actions reaching fruition, but not whether or not you’ll ever see it.

Yet this, I am quite certain, is not my problem. When a psychiatrist once asked me, “What do you really want most in life?” I gave a very thoughtful answer: “To be remembered in a thousand years for my contribution to humanity.” (His response was glib: “You can’t control that.”) I still stand by that answer: If I could have whatever I want, no limits at all, three wishes from an all-powerful genie, two of them would be to solve some of the world’s greatest problems, and the third would be for the chance to live my life in a way that I knew would be forever remembered.

But I am slowly coming to realize that maybe I should abandon that answer. That psychiatrist’s answer was far too glib (he was in fact not a very good fit for me; I quickly switched to a different psychiatrist), but maybe it wasn’t fundamentally wrong. It may be impossible to predict, let alone control, whether our lives have that kind of lasting impact—and, almost by construction, most lives can’t.

Perhaps, indeed, I am too worried about whether the cathedral will stand. I only have a few bricks to lay myself, and while I can lay them the best I can, that ultimately will not be what decides the fate of the cathedral. A fire, or an earthquake, or simply some other bricklayer’s incompetence, could bring about its destruction—and there is nothing at all I can do to prevent that.

This post is already getting too long, so I should try to bring it to a close.

As the adage goes, perhaps if I had more time, I’d make it shorter.

How will AI affect inequality?

Oct 15 JDN 2460233

Will AI make inequality worse, or better? Could it do a bit of both? Does it depend on how we use it?

This is of course an extremely big question. In some sense it is the big economic question of the 21st century. The difference between the neofeudalist cyberpunk dystopia of Neuromancer and the social democratic utopia of Star Trek just about hinges on whether AI becomes a force for higher or lower inequality.

Krugman seems quite optimistic: Based on forecasts by Goldman Sachs, AI seems poised to automate more high-paying white-collar jobs than low-paying blue-collar ones.

But, well, it should be obvious that Goldman Sachs is not an impartial observer here. They do have reasons to get their forecasts right—their customers are literally invested in those forecasts—but like anyone who profits immensely from the status quo, they also have a broader agenda of telling the world that everything is going great and there’s no need to worry or change anything.

And when I look a bit closer at their graphs, it seems pretty clear that they aren’t actually answering the right question. They estimate an “exposure to AI” coefficient (somehow; their methodology is not clearly explained and lots of it is proprietary), and if it’s between 10% and 49% they call it “complementary” while if it’s 50% or above they call it “replacement”.
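As best I can reconstruct it, the classification boils down to a cutoff rule like the following. To be clear, this is only a sketch of the rule as they describe it; the actual coefficient estimation is proprietary, and the function here is my restatement:

```python
# Hypothetical restatement of the reported cutoffs, nothing more.
def classify_exposure(exposure: float) -> str:
    """Label an occupation by its estimated 'exposure to AI' coefficient."""
    if exposure >= 0.50:
        return "replacement"
    elif exposure >= 0.10:
        return "complementary"
    else:
        return "no automation"

# A job at 49% exposure is 'complementary' and one at 50% is 'replacement',
# with no reference to whether the expert human is actually still needed.
print(classify_exposure(0.49), classify_exposure(0.50))
```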

But that is not how complements and substitutes work. It isn’t a question of “how much of the work can be done by machine” (whatever that means). It’s a question of whether you will still need the expert human.

It could be that the machine does 90% of the work, but you still need a human being there to tell it what to do, and that would be complementary. (Indeed, this basically is how finance works right now, and I see no reason to think it will change any time soon.) Conversely, it could be that the machine only does 20% of the work, but that was the 20% that required expert skill, and so a once comfortable high-paying job can now be replaced by low-paid temp workers. (This is more or less what’s happening at Amazon warehouses: They are basically managed by AI, but humans still do most of the actual labor, and get paid peanuts for it.)

For their category “computer and mathematical”, they call it “complementary”, and I agree: We are still going to need people who can code. We’re still going to need people who know how to multiply matrices. We’re still going to need people who understand search algorithms. Indeed, if the past is any indicator, we’re going to need more and more of those people, and they’re going to keep getting paid higher and higher salaries. Someone has to make the AI, after all.

Yet I’m not quite so sure about the “mathematical” part in many cases. We may not actually need many people who can solve differential equations: perhaps a few to design the algorithms, but even then, a program running a simple finite-difference algorithm can often solve much more interesting problems than one built around analytic solutions, because one of the dirty secrets of differential equations is that for some of the most important ones (like the Navier-Stokes equations), we simply do not know how to solve them analytically. Once you have enough computing power, you can often stop trying to be clever and just brute-force the damn thing.
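To illustrate what I mean by brute force (my own toy example, not anything from these forecasts): the 1D heat equation happens to have a textbook analytic solution, but the finite-difference loop below neither knows nor cares, and the same approach works on equations with no known solution.

```python
import numpy as np

# Brute-force finite differences (FTCS scheme) for the 1D heat equation
# u_t = alpha * u_xx, with the boundaries held at zero.
alpha = 1.0
nx, nt = 101, 2000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha   # stability requires dt <= dx**2 / (2 * alpha)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)      # initial temperature profile

for _ in range(nt):
    # Replace the second spatial derivative with a centered difference.
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# For this special case we do know the analytic answer, so we can check:
exact = np.exp(-np.pi**2 * alpha * nt * dt) * np.sin(np.pi * x)
print(f"max error: {np.abs(u - exact).max():.2e}")
```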

Yet for “transportation and material movement”—that is, trucking—Goldman Sachs confidently forecasts mostly “no automation” with a bit of “complementary”. But this year—not at some distant point in the future, not in some sci-fi novel, this year in the actual world—the Governor of California already vetoed a bill that would have required automated trucks to have human drivers. The trucks aren’t on the roads yet—but if we are already making laws about them, they’re going to be, soon. (State legislatures are not known for their brilliant foresight or excessive long-term thinking.) And if the law doesn’t require them to have human drivers, they probably won’t have them; which means that hundreds of thousands of long-haul truckers will suddenly be out of work.

It’s also important to differentiate between different types of jobs that may fall under the same category or industry.

Neurosurgeons are not going anywhere, and improved robotics will only allow them to perform better, safer minimally invasive surgeries. Nor are nurses going anywhere, because some things just need an actual person physically there with the patient. But general practitioners, psychotherapists, and even radiologists are already seeing many of their tasks automated. So is “medicine” being automated or not? That depends on what sort of medicine you mean. And yet it clearly means an increase in inequality, because it’s the middle-paying jobs (like GPs) that are going away, while the high-paying jobs (like neurosurgeons) and the low-paying jobs (like nurses) remain.

Likewise, consider “legal services”, which is one of the few industries that Goldman Sachs thinks will be substantially replaced by AI. Are high-stakes trial lawyers like Sam Bernstein getting replaced? Clearly not. Nor would I expect most corporate lawyers to disappear. Human lawyers will still continue to perform at least a little bit better than AI law systems, and the rich will continue to use them, because a few million dollars for a few percentage points better odds of winning is absolutely worth it when billions of dollars are on the line. So which law services are going to get replaced by AI? First, routine legal questions, like how to renew your work visa or set up a living will—it’s already happening. Next, someone will probably decide that public defenders aren’t worth the cost and start automating the legal defenses of poor people who get accused of crimes. (And to be honest, it may not be much worse than how things currently are in the public defender system.) The advantage of such a change is that it will most likely bring court costs down—and that is desperately needed. But it may also tilt the courts even further in favor of the rich. It may also make it even harder to start a career as a lawyer, cutting off the bottom of the ladder.

Or consider “management”, which Goldman Sachs thinks will be “complementary”. Are CEOs going to get replaced by AI? No, because the CEOs are the ones making that decision. Certainly this is true for any closely-held firm: No CEO is going to fire himself. Theoretically, if shareholders and boards of directors pushed hard enough, they might be able to get a CEO of a publicly-traded corporation ousted in favor of an AI, and if the world were really made of neoclassical rational agents, that might actually happen. But in the real world, the rich have tremendous solidarity for each other (and only each other), and very few billionaires are going to take aim at other billionaires when it comes time to decide whose jobs should be replaced. Yet, there are a lot of levels of management below the CEO and board of directors, and many of those are already in the process of being replaced: Instead of relying on the expert judgment of a human manager, it’s increasingly common to develop “performance metrics”, feed them into an algorithm, and use that result to decide who gets raises and who gets fired. It all feels very “objective” and “impartial” and “scientific”—and usually ends up being both dehumanizing and ultimately not even effective at increasing profits. At some point, many corporations are going to realize that their middle managers aren’t actually making any important decisions anymore, and they’ll feed that into the algorithm, and it will tell them to fire the middle managers.
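A deliberately crude caricature of that pattern (every metric, weight, and threshold below is invented for illustration) shows how mechanical the whole “objective” process really is:

```python
# Hypothetical 'performance metric' pipeline of the kind described above.
# The weights and cutoffs are arbitrary -- which is rather the point.
def performance_review(tickets_closed: int, hours_logged: float,
                       peer_score: float) -> str:
    score = 0.5 * tickets_closed + 0.3 * hours_logged + 0.2 * peer_score
    if score >= 80:
        return "raise"
    elif score >= 50:
        return "no change"
    else:
        return "terminate"

# No human judgment anywhere in the loop:
print(performance_review(tickets_closed=90, hours_logged=160, peer_score=70))
```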

Thus, even though we think of “medicine”, “law”, and “management” as high-paying careers, the effect of AI is largely going to be to increase inequality within those industries. It isn’t the really high-paid doctors, managers, and lawyers who are going to get replaced.

I am therefore much less optimistic than Krugman about this. I do believe there are many ways that technology, including artificial intelligence, could be used to make life better for everyone, and even perhaps one day lead us into a glorious utopian future.

But I don’t see most of the people who have the authority to make important decisions for our society actually working towards such a future. They seem much more interested in maximizing their own profits or advancing narrow-minded ideologies. (Or, as most right-wing political parties do today: Advancing narrow-minded ideologies about maximizing the profits of rich people.) And if we simply continue on the track we’ve been on, our future is looking a lot more like Neuromancer than it is like Star Trek.