Reflections on fatherhood

Jun 24 JDN 2460485

I am writing this on Father’s Day, which has become something of a morose occasion for me—or at least a bittersweet one. I had always thought that I would become a father while my own father was still around, that my children would have a full set of grandparents. But that isn’t how my life has turned out.

Humans are unusual, among mammals, in having fathers. Yes, biologically, there is always a male involved. But most male mammals really don’t do much of the parenting; they leave that task more or less entirely to the females. So while every mammal has a mother, most really don’t have a father.

We’re also unusual in just how much parenting we need to survive. All babies are vulnerable, but human babies are exceptionally so. Most mammals are born at least able to walk. Even other altricial mammals are not as underdeveloped at birth as we are. In many ways, it seems that we come out of the womb before we’re really done, in order to spare our mothers an impossible birth.

And it is most likely due to this state of exceptional need that we became creatures of exceptional caring. Fatherhood is one of the clearest examples of this: Our males devote enormous effort to the care and support of their offspring, comparable to the efforts that our females devote (though, even in modern societies, not equal).

It’s ironic that many people don’t think of humans as a uniquely caring species. Some even seem to imagine that we are uniquely violent and cruel. But violence and cruelty are everywhere in nature; it’s their absence that needs explaining. Even bonobos are not as kind and cooperative as previously imagined, and eusocial species don’t generally cooperate outside their hives; humans may in fact be the most cooperative animal.

What about war? Is that not uniquely human, and thus proof of our inherent violence? Wars are indeed unusual in nature (though not nonexistent: ants and apes are both prone to them), but the part that’s unusual is not the violence—it’s the coordination. Almost all animals are violent to a greater or lesser degree. But it’s the rare ones who are cooperative enough to be violent en masse. And most human societies are at peace with most of their neighbors most of the time.

In fact, I think it is precisely because we are so caring that we are so aware of our own cruelty. A truly cruel species would be far more violent, but also wouldn’t care about how violent it was. It wouldn’t feel guilt or shame about being so violent. The reason we feel so ashamed of our own violence is that we are capable of imagining peace.

And part of why we are able to imagine a more caring world is that most of us are born into one, in the hands of our mothers and fathers. When we become adults, we find ourselves longing for the peace and security we felt in childhood. And while caring is largely seen as a mother’s job, security is very much seen as a father’s. We feel so helpless and exposed when we grow up, because we were so protected and safe as children.

My father certainly taught me a great deal about caring—caring so much, perhaps too much. I suppose I don’t actually know how much of it he actually taught me, versus how much was encoded in genes I got from him; but I do know that I grew up to be just like him in so many ways, both good and bad—so kind, so loyal, so loving, but also so wounded, so aggrieved, so hopeless. My father was more caring than anyone else I have ever known. He carried the weight of the world on his shoulders, and now so do I. My father died without achieving most of his lifelong dreams. One of my greatest fears is that I will do the same.

Being in a same-sex marriage has also radically changed my relationship with fatherhood. It’s no longer something that can happen to me by accident, or something that would more or less end up happening on its own if we simply stopped fighting it. It is now something I must actively choose, a commitment I must make, a task I must willfully devote myself toward. And so far, it has never seemed like the right time to take that leap of faith. Another great fear of mine is that it never will.

Life is a succession of tomorrows that turn all too quickly into yesterdays, of could-bes that fade into could-have-beens, of shoulds that shrivel into should-haves. The possibilities are vast, but not limitless; more and more limits get imposed as time goes on, until at last death imposes the most final limit of all.

I don’t want my life to pass me by while I’m waiting for something better that never comes. But I clearly can’t be satisfied with where I am now, and I don’t want to give up on all my dreams. How do I know what I should fight for, and what I should give up on?

I wish I could ask my father for advice.

Medical progress, at least, is real

May 26 JDN 2460457

The following vignettes are about me.

Well, one of them is about me as I actually am. The others are about the person I would have been, if someone very much like me, with the same medical conditions, had been born in a particular place and time. Someone in these times and places probably had actual experiences like this, though of course we’ll never know who they were.

976 BC, the hilled lands near the mouth of the river:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse from the gods, but at least they appreciate my ability to sometimes predict storms. I am lucky to even remain alive, as I am of little use to the tribe. I will most likely remain this way the rest of my life.

24 AD, Rome:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse from the gods, but at least they appreciate my ability to sometimes predict storms. I am lucky that my family was rich enough to teach me reading and mathematics, as I would be of little use for farm work, but can at least be somewhat productive as a scribe and a tutor. I will most likely remain this way the rest of my life.

1024 AD, England:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain plants, or if I awaken too early, or if I exert myself too much, or if a storm is coming. No one knows why. The healers have tried every herb and tincture imaginable in their efforts to cure me, but nothing has worked. The priests believe it is a curse imposed upon me by some witchcraft, but at least they appreciate my ability to sometimes predict storms. I am lucky that my family was rich enough to teach me reading and mathematics, as I would be of little use for farm work, but can at least be somewhat productive as a scribe and a tutor. I will most likely remain this way the rest of my life.

2024 AD, Michigan:

Since I was fourteen years old, I have woken up almost every day in pain. Often it is mild, but occasionally it is severe. It often seems to be worse when I encounter certain pollens, fragrances, or chemicals, or if I awaken too early, or if I exert myself too much, or when the air pressure changes before a storm. Brain scans detected no gross abnormalities. I have been diagnosed with chronic migraine, but this is more a description of my symptoms than an explanation. I have tried over a dozen different preventative medications; most of them didn’t work at all, some of them worked but gave me intolerable side effects. (One didn’t work at all and put me in the hospital with a severe allergic reaction.) I’ve been more successful with acute medications, which at least work as advertised, but I have to ration them carefully to avoid rebound effects. And the most effective acute medication is a subcutaneous injection that makes me extremely nauseated unless I also take powerful anti-emetics along with it. I have had the most success with botulinum toxin injections, so I will be going back to that soon; but I am also looking into transcranial magnetic stimulation. Currently my condition is severe enough that I can’t return to full-time work, but I am hopeful that with future treatment I will be able to someday. For now, I can at least work as a writer and a tutor. Hopefully things get better soon.

3024 AD, Aegir 7, Ran System:

For a few months when I was fourteen years old, I woke up nearly every day in pain. Often it was mild, but occasionally it was severe. It often seemed to be worse when I encountered certain pollens, fragrances or chemicals, or if I awakened too early, or if I exerted myself too much, or when the air pressure changed before a storm. Brain scans detected no gross abnormalities, only subtle misfiring patterns. Genetic analysis confirmed I had chronic migraine type IVb, and treatment commenced immediately. Acute medications suppressed the pain while I underwent gene therapy and deep-effect transcranial magnetic stimulation. After three months of treatment, I was cured. That was an awful few months, but it’s twenty years behind me now. I can scarcely imagine how it might have impaired my life if it had gone on that whole time.

What is the moral of this story?

Medical progress is real.

Many people doubt that society has made real progress. And in a lot of ways, maybe it hasn’t. Human nature is still the same, and so many of the problems we suffer have remained the same.

Economically, of course we have had tremendous growth in productivity and output, but it doesn’t really seem to have made us much happier. We have all this stuff, but we’re still struggling and miserable as a handful at the top become spectacularly, disgustingly rich.

Social progress seems to have gone better: Institutions have improved, more of the world is democratic than ever before, and women and minorities are better represented and better protected from oppression. Rates of violence have declined to some of their lowest levels in history. But even then, it’s pretty clear that we have a long, long way to go.

But medical progress is undeniable. We live longer, healthier lives than at any other point in history. Our infant and child mortality rates have plummeted. Even chronic conditions that seem intractable today (such as my chronic migraines) still show signs of progress; within a few generations they should be cured—surely in far less than the thousand years I’ve considered here.

Like most measures of progress, this change wasn’t slow and gradual over thousands of years; it happened remarkably suddenly. Humans went almost 200,000 years without any detectable progress in medicine, using basically the same herbs and tinctures (and a variety of localized and ever-changing superstitions) the entire time. Some of it worked (the herbs and tinctures, at least), but mostly it didn’t. Then, starting around the 18th century, as the Enlightenment took hold and the Industrial Revolution ramped up, everything began to change.

We began to test our medicine and see if it actually worked. (Yes, amazingly, somehow, nobody had actually ever thought to do that before—not in anything resembling a scientific way.) And when we learned that most of it didn’t, we began to develop new methods, and see if those worked; and when they didn’t either, we tried new things instead—until, finally, eventually, we actually found medicines that actually did something, medicines worthy of the name. Our understanding of anatomy and biology greatly improved as well, allowing us to make better predictions about the effects our medicines would have. And after a few hundred years of that—a few hundred, out of two hundred thousand years of our species—we actually reached the point where most medicine is effective and a variety of health conditions are simply curable or preventable, including diseases like malaria and polio that had once literally plagued us.

Scientific medicine brought humanity into a whole new era of existence.

I could have set the first vignette 10,000 years ago without changing it. But the final vignette I could probably have set only 200 years from now. I’m actually assuming remarkable stagnation by putting it in the 31st century; but presumably technological advancement will slow at some point, perhaps after we’ve more or less run out of difficult challenges to resolve. (Then again, for all I know, maybe my 31st century counterpart will be an emulated consciousness, and his chronic pain will be resolved in 17.482 seconds by a code update.)

Indeed, the really crazy thing about all this is that there are still millions of people who don’t believe in scientific medicine, who want to use “homeopathy” or “naturopathy” or “acupuncture” or “chiropractic” or whatever else—who basically want to go back to those same old herbs and tinctures that maybe sometimes kinda worked but probably not and nobody really knows. (I have a cousin who is a chiropractor. I try to be polite about it, but….) They point out the various ways that scientific medicine has failed—and believe me, I am painfully aware of those failures—but where the obvious solution would be to improve scientific medicine, they instead want to turn the whole ship around and go back to what we had before, which was obviously a million times worse.

And don’t tell me it’s harmless: One, it’s a complete waste of resources that could instead have been used for actual scientific medicine. (9% of all out-of-pocket spending on healthcare in the US is on “alternative medicine”—which is to say, on pointless nonsense.) Two, when you have a chronic illness and people keep shoving nonsense treatments in your face, you start to feel blamed for your condition: “Why haven’t you tried [other incredibly stupid idea that obviously won’t work]? You’re so closed-minded! Maybe your illness isn’t really that bad, or you’d be more desperate!” If “alternative medicine” didn’t exist, maybe these people could help me cope with the challenges of living with a chronic illness, or even just sympathize with me, instead of constantly shoving stupid nonsense in my face.

Not everything about the future looks bright.

In particular, I am pessimistic about the near-term future of artificial intelligence, which I think will cause a lot more problems than it solves and does have a small—but not negligible—risk of causing a global catastrophe.

I’m also not very optimistic about climate change; I don’t think it will wipe out our civilization or anything so catastrophic, but I do think it’s going to kill millions of people and we’ve done too little, too late to prevent that. We’re now doing about what we should have been doing in the 1980s.

But I am optimistic about scientific medicine. Every day, new discoveries are made. Every day, new treatments are invented. Yes, there is a lot we haven’t figured out how to cure yet; but people are working on it.

And maybe they could do it faster if we stopped wasting time on stuff that obviously won’t work.

How Effective Altruism hurt me

May 12 JDN 2460443

I don’t want this to be taken the wrong way. I still strongly believe in the core principles of Effective Altruism. Indeed, it’s shockingly hard to deny them, because basically they come out to this:

Doing more good is better than doing less good.

Then again, most people want to do good. Basically everyone agrees that more good is better than less good. So what’s the big deal about Effective Altruism?

Well, in practice, most people put shockingly little effort into trying to ensure that they are doing the most good they can. A lot of people just try to be nice people, without ever concerning themselves with the bigger picture. Many of these people don’t give to charity at all.

Then, even people who do give to charity typically give more or less at random—or worse, in proportion to how much mail those charities send them begging for donations. (Surely you can see how that is a perverse incentive?) They donate to religious organizations, which sometimes do good things, but fundamentally are founded upon ignorance, patriarchy, and lies.

Effective Altruism is a movement intended to fix this, to get people to see the bigger picture and focus their efforts on where they will do the most good. Vet charities not just for their honesty, but also their efficiency and cost-effectiveness:

Just how many mQALY can you buy with that $1?

That part I still believe in. There is a lot of value in assessing which charities are the most effective, and trying to get more people to donate to those high-impact charities.

But there is another side to Effective Altruism, which I now realize has severely damaged my mental health.

That is the sense of obligation to give as much as you possibly can.

Peter Singer is the most extreme example of this. He seems to have mellowed—a little—in more recent years, but in some of his most famous books he uses the following thought experiment:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.

I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.

Basically everyone agrees with this particular decision: Even if you are wearing a very expensive suit that will be ruined, even if you’ll miss something really important like a job interview or even a wedding—most people agree that if you ever come across a drowning child, you should save them.

(Oddly enough, when contemplating this scenario, nobody ever seems to consider the advice that most lifeguards give, which is to throw a life preserver and then go find someone qualified to save the child—because saving someone who is drowning is a lot harder and a lot riskier than most people realize. (“Reach or throw, don’t go.”) But that’s a bit beside the point.)

But Singer argues that we are basically in this position all the time. For somewhere between $500 and $3000, you—yes, you—could donate to a high-impact charity, and thereby save a child’s life.
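As an aside, this is where the mQALY arithmetic from earlier comes in. Here is a rough sketch in Python, assuming, purely for illustration, that saving a young child gains something like 60 quality-adjusted life years (that figure is my assumption, not a cited statistic):

# Illustrative only: convert "dollars per life saved" into mQALY per dollar.
QALYS_PER_LIFE_SAVED = 60  # assumed for illustration

for cost_per_life in (500, 3000):
    mqaly_per_dollar = QALYS_PER_LIFE_SAVED / cost_per_life * 1000
    print(f"${cost_per_life} per life -> about {mqaly_per_dollar:.0f} mQALY per $1")

Under those assumptions, a dollar buys somewhere between roughly 20 and 120 mQALY, which is exactly the kind of comparison Effective Altruism asks you to make.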

Does it matter that many other people are better positioned to donate than you are? Does it matter that the child is thousands of miles away and you’ll never see them? Does it matter that there are actually millions of children, and you could never save them all by yourself? Does it matter that you’ll only save a child in expectation, rather than saving some specific child with certainty?

Singer says that none of this matters. For a long time, I believed him.

Now, I don’t.

For, if you actually walked by a drowning child that you could save, only at the cost of missing a wedding and ruining your tuxedo, you clearly should do that. (If it would risk your life, maybe not—and as I alluded to earlier, that’s more likely than you might imagine.) If you wouldn’t, there’s something wrong with you. You’re a bad person.

But most people don’t donate everything they could to high-impact charities. Even Peter Singer himself doesn’t. So if donating is the same as saving the drowning child, it follows that we are all bad people.

(Note: In general, if an ethical theory results in the conclusion that the whole of humanity is evil, there is probably something wrong with that ethical theory.)

Singer has tried to get out of this by saying we shouldn’t “sacrifice things of comparable importance”, and then somehow cashing out what “comparable importance” means in such a way that it doesn’t require you to live on the street and eat scraps from trash cans. (Even though the people you’d be donating to largely do live that way.)

I’m not sure that really works, but okay, let’s say it does. Even so, it’s pretty clear that anything you spend money on purely for enjoyment would have to go. You would never eat out at restaurants, unless you could show that the time saved allowed you to get more work done and therefore donate more. You would never go to movies or buy video games, unless you could show that it was absolutely necessary for your own mental functioning. Your life would be work, work, work, then donate, donate, donate, and then do the absolute bare minimum to recover from working and donating so you can work and donate some more.

You would enslave yourself.

And all the while, you’d believe that you were never doing enough, you were never good enough, you were always a terrible person because you tried to cling to any personal joy in your own life rather than giving, giving, giving all you had.

I now realize that Effective Altruism, as a movement, had been basically telling me to do that. And I’d been listening.

I now realize that Effective Altruism has given me this voice in my head, which I hear whenever I want to apply for a job or submit work for publication:

If you try, you will probably fail. And if you fail, a child will die.

The “if you try, you will probably fail” is just an objective fact. It’s inescapable. Any given job application or writing submission will probably fail.

Yes, maybe there’s some sort of bundling we could do to reframe that, as I discussed in an earlier post. But basically, this is correct, and I need to accept it.

Now, what about the second part? “If you fail, a child will die.” To most of you, that probably sounds crazy. And it is crazy. It’s way more pressure than any ordinary person should have in their daily life. This kind of pressure should be reserved for neurosurgeons and bomb squads.

But this is essentially what Effective Altruism taught me to believe. It taught me that every few thousand dollars I don’t donate is a child I am allowing to die. And since I can’t donate what I don’t have, it follows that every few thousand dollars I fail to get is another dead child.

And since Effective Altruism is so laser-focused on results above all else, it taught me that it really doesn’t matter whether I apply for the job and don’t get it, or never apply at all; the outcome is the same, and that outcome is that children suffer and die because I had no money to save them.

I think part of the problem here is that Effective Altruism is utilitarian through and through, and utilitarianism has very little place for good enough. There is better and there is worse; but there is no threshold at which you can say that your moral obligations are discharged and you are free to live your life as you wish. There is always more good that you could do, and therefore always more that you should do.

Do we really want to live in a world where to be a good person is to owe your whole life to others?

I do not believe in absolute selfishness. I believe that we owe something to other people. But I no longer believe that we owe everything. Sacrificing my own well-being at the altar of altruism has been incredibly destructive to my mental health, and I don’t think I’m the only one.

By all means, give to high-impact charities. But give a moderate amount—at most, tithe—and then go live your life. You don’t owe the world more than that.

Of men and bears

May 5 JDN 2460436

[CW: rape, violence, crime, homicide]

I think it started on TikTok, but I’m too old for TikTok, so I first saw it on Facebook and Twitter.

Men and women were asked:
“Would you rather be alone in the woods with a man, or a bear?”

Answers seem to have been pretty mixed. Some women still thought a man was a safer choice, but a significant number chose the bear.

Then when the question was changed to a woman, almost everyone chose the woman over the bear.

What can we learn from this?

I think the biggest thing it tells us is that a lot of women are afraid of men. If you are seriously considering the wild animal over the other human being, you’re clearly afraid.

A lot of the discourse on this seems to be assuming that they are right to be afraid, but I’m not so sure.

It’s not that the fear is unfounded: Most women will suffer some sort of harassment, and a sizeable fraction will suffer some sort of physical or sexual assault, at the hands of some men at some point in their lives.

But there is a cost to fear, and I don’t think we’re taking it properly into account here. I’m worried that encouraging women to fear men will only serve to damage relationships between men and women, the vast majority of which are healthy and positive. I’m worried that this fear is really the sort of overreaction to trauma that ends up causing its own kind of harm.

If you think that’s wrong, consider this:

A sizeable fraction of men will be physically assaulted by other men.

Should men fear each other?

Should all men fear all other men?

What does it do to a society when its whole population fears half of its population? Does that sound healthy? Does whatever small increment in security that might provide seem worth it?

Keep in mind that women being afraid of men doesn’t seem to be protecting them from harm right now. So even if there is genuine harm to be feared, the harm of that fear is actually a lot more obvious than the benefit of it. Our entire society becomes fearful and distrustful, and we aren’t actually any safer.

I’m worried that this is like our fear of terrorism, which made us sacrifice our civil liberties without ever clearly making us safer. What are women giving up due to their fear of men? Is it actually protecting them?

If you have any ideas for how we might actually make women safer, let’s hear them. But please, stop saying idiotic things like “Don’t be a rapist.” 95% of men already aren’t, and the 5% who are, are not going to listen to anything you—or I—say to them. (Bystander intervention programs can work. But just telling men to not be rapists does not.)

I’m all for teaching about consent, but it really isn’t that hard to do—and most rapists seem to understand it just fine, they just don’t care. They’ll happily answer on a survey that they “had sex with someone without their consent”. By all means, undermine rape myths; just don’t expect it to dramatically reduce the rate of rape.

I absolutely want to make people safer. But telling people to be afraid of people like me doesn’t actually seem to accomplish that.

And yes, it hurts when people are afraid of you.

This is not a small harm. This is not a minor trifle. Once we are old enough to be seen as “men” rather than “boys” (which seems to happen faster if you’re Black than if you’re White), we know that other people—men and women, but especially women—will fear us. We go through our whole lives having to be careful what we say, how we move, when we touch someone else, because we are shaped like rapists.

When my mother encounters a child, she immediately walks up to the child and starts talking to them, pointing, laughing, giggling. I can’t do that. If I tried to do the exact same thing, I would be seen as a predator. In fact, without children of my own, it’s safer for me to just not interact with children at all, unless they are close friends or family. This is a whole class of joyful, fulfilling experience that I just don’t get to have because people who look like me commit acts of violence.

Normally we’re all about breaking down prejudice, not treating people differently based on how they look—except when it comes to gender, apparently. It’s okay to fear men but not women.

Who is responsible for this?

Well, obviously the ones most responsible are actual rapists.

But they aren’t very likely to listen to me. If I know any rapists, I don’t know that they are rapists. If I did know, I would want them imprisoned. (Which is likely why they wouldn’t tell me if they were.)

Moreover, my odds of actually knowing a rapist are probably lower than you think, because I don’t like to spend time with men who are selfish, cruel, aggressive, misogynist, or hyper-masculine. The fact that 5% of men in general are rapists doesn’t mean that 5% of any non-random sample of men are rapists. I can only think of a few men I have ever known personally who I would even seriously suspect, and I’ve cut ties with all of them.

The fact that psychopaths are not slavering beasts, obviously different from the rest of us, does not mean that there is no way to tell who is a psychopath. It just means that you need to know what you’re actually looking for. When I once saw a glimmer of joy in someone’s eyes as he described the suffering of animals in an experiment, I knew in that moment he was a psychopath. (There are legitimate reasons to harm animals in scientific experiments—but a good person does not enjoy it.) He did not check most of the boxes of the “Slavering Beast theory”: He had many friends; he wasn’t consistently violent; he was a very good liar; he was quite accomplished in life; he was handsome and charismatic. But go through an actual psychopathy checklist, and you realize that every one of these features makes psychopathy more likely, not less.

I’m not even saying it’s easy to detect psychopaths. It’s not. Even experts need to look very closely and carefully, because psychopaths are often very good at hiding. But there are differences. And it really is true that the selfish, cruel, aggressive, misogynist, hyper-masculine men are more likely to be rapists than the generous, kind, gentle, feminist, androgynous men. It’s not a guarantee—there are lots of misogynists who aren’t rapists, and there are men who present as feminists in public but are rapists in private. But it is a tendency nevertheless. You don’t need to treat every man as equally dangerous, and I don’t think it’s healthy to do so.

Indeed, if I had the choice to be alone in the woods with either a gay male feminist or a woman I knew was cruel to animals, I’d definitely choose the man. These differences matter.

And maybe, just maybe, if we could tamp down this fear a little bit, men and women could have healthier interactions with one another and build stronger relationships. Even if the fear is justified, it could still be doing more harm than good.

So are you safer with a man, or a bear?

Let’s go back to the original thought experiment, and consider the actual odds of being attacked. Yes, the number of people actually attacked by bears is far smaller than the number of people actually attacked by men. (It’s also smaller than the number of people attacked by women, by the way.)

This is obviously because we are constantly surrounded by people, and rarely interact with bears.

In other words, that fact alone basically tells us nothing. It could still be true even if bears are far more dangerous than men, because people interact with bears far less often.

The real question is “How likely is an attack, given that you’re alone in the woods with one?”

Unfortunately, I was unable to find any useful statistics on this. There are a lot of vague statements like “Bears don’t usually attack humans” or “Bears only attack when startled or protecting their young”; okay. But how often is “usually”? How often are bears startled? What proportion of bears you might encounter are protecting their young?

So this is really a stab in the dark; but do you think it’s perhaps fair to say that maybe 10% of bear-human close encounters result in an attack?

That doesn’t seem like an unreasonably high number, at least. 90% not attacking sounds like “usually”. Being startled or protecting their young don’t seem like events much rarer than 10%. This estimate could certainly be wrong (and I’m sure it’s not precise), but it seems like the right order of magnitude.

So I’m going to take that as my estimate:

If you are alone in the woods with a bear, you have about a 10% chance of being attacked.

Now, what is the probability that a randomly-selected man would attack you, if you were alone in the woods with him?

This one can be much better estimated. It is roughly equal to the proportion of men who are psychopaths.


Now, figures on this vary too, partly because psychopathy comes in degrees. But at the low end we have about 1.2% of men and 0.3% of women who are really full-blown psychopaths, and at the high end we have about 10% of men and 2% of women who exhibit significant psychopathic traits.

I’d like to note two things about these figures:

  1. It still seems like the man is probably safer than the bear.
  2. Men are only about four or five times as likely to be psychopaths as women.

Admittedly, my bear estimate is very imprecise; so if, say, only 5% of bear encounters result in attacks and 10% of men would attack if you were alone in the woods, men could be more dangerous. But I think it’s unlikely. I’m pretty sure bears are more dangerous.

But the really interesting thing is that people who seemed ambivalent about man versus bear, or even were quite happy to choose the bear, seem quite consistent in choosing women over bears. And I’m not sure the gender difference is really large enough to justify that.

If 1.2% to 10% of men are enough for us to fear all men, why aren’t 0.3% to 2% of women enough for us to fear all women? Is there a threshold at 1% or 5% that flips us from “safe” to “dangerous”?

But aren’t men responsible for most violence, especially sexual violence?

Yes, but probably not by as much as you think.

The vast majority of rapes are committed by men, and most of those are against women. But the figures may not be as lopsided as you imagine; in a given year, about 0.3% of women are raped by a man, and about 0.1% of men are raped by a woman. Over their lifetimes, about 25% of women will be sexually assaulted, and about 5% of men will be. Rapes of men by women have been even more under-reported than rapes in general, in part because it was only recently that being forced to penetrate someone was counted as a sexual assault—even though it very obviously is.

So men are about 5 times as likely to commit rape as women. That’s a big difference, but I bet it’s a lot smaller than what many of you believed. There are statistics going around that claim that as many as 99% of rapes are committed by men; those statistics are ignoring the “forced to penetrate” assaults, and thus basically defining rape of men by women out of existence.

Indeed, 5 to 1 is quite close to the ratio in psychopathy.

I think that’s no coincidence: In fact, I think it’s largely the case that the psychopaths and the rapists are the same people.

What about homicide?

While men are indeed much more likely to be perpetrators of homicide, they are also much more likely to be victims.

Of about 23,000 homicide offenders in 2022, 15,100 were known to be men, 2,100 were known to be women, and 5,800 were unknown (because we never caught them). Assuming that women are no more or less likely to be caught than men, we can ignore the unknown, and presume that the same gender ratio holds across all homicides: 12% are committed by women.

Of about 22,000 homicides in the US last year, 17,700 victims were men and 3,900 victims were women. So men are about 4.5 times as likely to be murdered as women in the US. Similar ratios hold in most First World countries (though total numbers are lower).

Overall, this means that men are about 7 times as likely to commit murder, but about 4.5 times as likely to suffer it.
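If you want to check my arithmetic, here it is, re-derived from the counts above (the only assumption is ignoring the unknown offenders, as I said):

# Re-deriving the ratios from the cited 2022 US counts.
male_offenders, female_offenders = 15_100, 2_100
male_victims, female_victims = 17_700, 3_900

print(f"{female_offenders / (male_offenders + female_offenders):.0%} of known offenders are women")  # ~12%
print(f"Men are {male_offenders / female_offenders:.1f}x as likely to commit murder")  # ~7.2x
print(f"Men are {male_victims / female_victims:.1f}x as likely to be murdered")  # ~4.5x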

So if we measure by rate of full-blown psychopathy, men are about 4 times as dangerous as women. If we measure by rate of moderate psychopathy, men are about 5 times as dangerous. If we measure by rate of rape, men are about 5 times as dangerous. And if we measure by rate of homicide, men are about 7 times as dangerous—but mainly to each other.

Put all this together, and I think it’s fair to summarize these results as:

Men are about five times as dangerous as women.

That’s not a small difference. But it’s also not an astronomical one. If you are right to be afraid of all men because they could rape or murder you, why are you not also right to be afraid of all women, who are one-fifth as likely to do the same?

Should we all fear everyone?

Surely you can see that isn’t a healthy way for a society to operate. Yes, there are real dangers in this world; but being constantly afraid of everyone will make you isolated, lonely, paranoid and probably depressed—and it may not even protect you.

It seems like a lot of men responding to the “man or bear” meme were honestly shocked that women are so afraid. If so, they have learned something important. Maybe that’s the value in the meme.

But the fear can be real, even justified, and still be hurting more than it’s helping. I don’t see any evidence that it’s actually making anyone any safer.

We need a better answer than fear.

What does “can” mean, anyway?

Apr 7 JDN 2460409

I don’t remember where, but I believe I once heard a “philosopher” defined as someone who asks the sort of question everyone knows the answer to, and doesn’t know the answer.

By that definition, I’m feeling very much a philosopher today.

“Can” is one of the most common words in the English language; the Oxford English Corpus lists it as the 53rd most common word. Similar words are found in essentially every language, and nearly always rank among their most common.

Yet when I try to precisely define what we mean by this word, it’s surprisingly hard.

Why, you might even say I can’t.

The very concept of “capability” is surprisingly slippery—just what is someone capable of?

My goal in this post is basically to make you as confused about the concept as I am.

I think that experiencing disabilities that include executive dysfunction has made me especially aware of just how complicated the concept of ability really is. This also relates back to my previous post questioning the idea of “doing your best”.

Here are some things that “can” might mean, or even sometimes seems to mean:

1. The laws of physics do not explicitly prevent it.

This seems far too broad. By this definition, you “can” do almost anything—as long as you don’t make free energy, reduce entropy, or exceed the speed of light.

2. The task is something that other human beings have performed in the past.

This is surely a lot better; it doesn’t say that I “can” fly to Mars or turn into a tree. But by this definition, I “can” sprint as fast as Usain Bolt and swim as long as Michael Phelps—which certainly doesn’t seem right. Indeed, not only would I say I can’t do that; I’d say I couldn’t do that, no matter how hard I tried.

3. The task is something that human beings in similar physical condition to my own have performed in the past.

Okay, we’re getting warmer. But just what do we mean, “similar condition”? No one else in the world is in exactly the same condition I am.

And even if those other people are in the same physical condition, their mental condition could be radically different. Maybe they’re smarter than I am, or more creative—or maybe they just speak Swahili. It doesn’t seem right to say that I can speak Swahili. Maybe I could speak Swahili, if I spent a lot of time and effort learning it. But at present, I can’t.

4. The task is something that human beings in similar physical and mental condition to my own have performed in the past.

Better still. This seems to solve the most obvious problems. It says that I can write blog posts (check), and I can’t speak Swahili (also check).

But it’s still not specific enough. For, even if we can clearly define what constitutes “people like me” (can we?), there are many different circumstances that people like me have been in, and what they did has varied quite a bit, depending on those circumstances.

People in extreme emergencies have performed astonishing feats of strength, such as lifting cars. Maybe I could do something like that, should the circumstance arise? But it certainly doesn’t seem right to say that I can lift cars.

5. The task is something that human beings in similar physical and mental condition to my own have performed in the past, in circumstances similar to my own.

That solves the above problems (provided we can sufficiently define “similar” for both people and circumstances). But it actually raises a different problem: If the circumstances were so similar, shouldn’t their behavior and mine be the same?

By that metric, it seems like the only way to know if I can do something is to actually do it. If I haven’t actually done it—in that mental state, in those circumstances—then I can’t really say I could have done it. At that point, “can” becomes a really funny way of saying “do”.

So it seems we may have narrowed down a little too much here.

And what about the idea that I could speak Swahili, if I studied hard? That seems to be something broader; maybe it’s this:

6. The task is something that human beings who are in a physical or mental condition attainable from my own have performed in the past.

But now we have to ask, what do we mean by “attainable”? We come right back to asking about capability again: What kind of effort can I make in order to learn Swahili, train as a pilot, or learn to SCUBA dive?

Maybe I could lift a car, if I had to do it to save my life or the life of a loved one. But without the adrenaline rush of such an emergency, I might be completely unable to do it, and even with that adrenaline rush, I’m sure the task would injure me severely. Thus, I don’t think it’s fair to say I can lift cars.

So how much can I lift? I have found that I can, as part of a normal workout, bench-press about 80 pounds. But I don’t think that is the limit of what I can lift; it’s more like what I can lift safely and comfortably for multiple sets of multiple reps without causing myself undue pain. For a single rep, I could probably do considerably more—though how much more is quite hard to say. 100 pounds? 120? (There are online calculators that supposedly will convert your multi-rep weight to a single-rep max, but for some reason they don’t seem to account for multiple sets. If I do 4 sets of 10 reps, is that 10 reps, or 40 reps? This is the difference between my one-rep max being 106 and it being 186. The former seems closer to the truth, but is probably still too low.)
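For what it’s worth, those calculators typically use something like the Epley estimate; I don’t know exactly which formula each site uses, but Epley’s happens to reproduce the two numbers I just quoted, depending on whether you count the reps per set or the total reps:

# Epley's one-rep-max estimate: 1RM ~ weight * (1 + reps / 30).
def epley_one_rep_max(weight_lb, reps):
    return weight_lb * (1 + reps / 30)

print(epley_one_rep_max(80, 10))  # ~106.7 lb, counting one set of 10
print(epley_one_rep_max(80, 40))  # ~186.7 lb, counting all 40 reps as one set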

If I absolutely had to—say, something that heavy has fallen on me and lifting it is the only way to escape—could I bench-press my own weight of about 215 pounds? I think so. But I’m sure it would hurt like hell, and I’d probably be sore for days afterward.

Now, consider tasks that require figuring something out, something I don’t currently know but could conceivably learn or figure out. It doesn’t seem right to say that I can solve the P versus NP problem or the Riemann Hypothesis. But it does seem right to say that I can at least work on those problems—I know enough about them that I can at least get started, if perhaps not make much real progress. Whereas most people, while they could theoretically read enough books about mathematics to one day know enough that they could do this, are not currently in a state where they could even begin to do that.

Here’s another question for you to ponder:

Can I write a bestselling novel?

Maybe that’s not fair. Making it a bestseller depends on all sorts of features of the market that aren’t entirely under my control. So let’s make it easier:

Can I write a novel?

I have written novels. So at first glance it seems obvious that I can write a novel.

But there are many days, especially lately, on which I procrastinate my writing and struggle to get any writing done. On such a day, can I write a novel? If someone held a gun to my head and demanded that I write the novel, could I get it done?

I honestly don’t know.

Maybe there’s some amount of pressure that would in fact compel me, even on the days of my very worst depression, to write the novel. Or maybe if you put that gun to my head, I’d just die. I don’t know.

But I do know one thing for sure: It would hurt.

Writing a novel on my worst days would require enormous effort and psychological pain—and honestly, I think it wouldn’t feel all that different from trying to lift 200 pounds.

Now we are coming to the real heart of the matter:

How much cost am I expected to pay, for it to still count as within my ability?

There are many things that I can do easily, that don’t really require much effort. But this varies too.

On most days, brushing my teeth is something I just can do—I remember to do it, I choose to do it, it happens; I don’t feel like I have exerted a great deal of effort or paid any substantial cost.

But there are days when even brushing my teeth is hard. Generally I do make it happen, so evidently I can do it—but it is no longer free and effortless the way it usually is.

There are other things which require effort, but are generally feasible, such as working out. Working out isn’t easy (essentially by design), but if I put in the effort, I can make it happen.

But again, some days are much harder than others.

Then there are things which require so much effort they feel impossible, even if they theoretically aren’t.

Right now, that’s where I’m at with trying to submit my work to journals or publishers. Each individual action is certainly something I should be physically able to take. I know the process of what to do—I’m not trying to solve the Riemann Hypothesis here. I have even done it before.

But right now, today, I don’t feel like I can do it. There may be some sense in which I “can”, but it doesn’t feel relevant.

And I felt the same way yesterday, and the day before, and pretty much every day for at least the past year.

I’m not even sure if there is an amount of pressure that could compel me to do it—e.g. if I had a gun to my head. Maybe there is. But I honestly don’t know for sure—and if it did work, once again, it would definitely hurt.

Others in the disability community have a way of describing this experience, which probably sounds strange if you haven’t heard it before:

“Do you have enough spoons?”

(For D&D fans, I’ve also heard others substitute “spell slots”.)

The idea is this: Suppose you are endowed with a certain number of spoons, which you can consume as a resource in order to achieve various tasks. The only way to replenish your spoons is rest.

Some tasks are cheap, requiring only 1 or 2 spoons. Others may be very costly, requiring 10, or 20, or perhaps even 50 or 100 spoons.

But the number of spoons you start with each morning may not always be the same. If you start with 200, then a task that requires 2 will seem trivial. But if you only start with 5, even those 2 will feel like a lot.

As you deplete your available spoons, you will find you need to ration which tasks you are able to complete; thus, on days when you wake up with fewer spoons, things that you would ordinarily do may end up not getting done.

I think submitting to a research journal is a 100-spoon task, and I simply haven’t woken up with more than 50 spoons in any given day within the last six months.

I don’t usually hear it formulated this way, but for me, I think the cost varies too.

I think that on a good day, brushing my teeth is a 0-spoon task (a “cantrip”, if you will); I could do it as many times as necessary without expending any detectable effort. But on a very bad day, it will cost me a couple of spoons just to do that. I’ll still get it done, but I’ll feel drained by it. I couldn’t keep doing it indefinitely. It will prevent me from being able to do something else, later in the day.

Writing is something that seems to vary a great deal in its spoon cost. On a really good day when I’m feeling especially inspired, I might get 5000 words written and feel like I’ve only spent 20 spoons; while on a really bad day, that same 20 spoons won’t even get me a single paragraph.
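If it helps to see the model written down, here is a toy version in Python; the tasks and costs are invented for illustration, not measurements of anything:

# A toy spoon-budget model: each task costs spoons, and tasks that would
# overdraw the day's remaining budget simply don't happen.
def simulate_day(starting_spoons, tasks):
    spoons = starting_spoons
    completed = []
    for name, cost in tasks.items():
        if cost <= spoons:
            spoons -= cost
            completed.append(name)
    return completed

tasks = {"brush teeth": 1, "work out": 10, "write a blog post": 20, "submit to a journal": 100}
print(simulate_day(200, tasks))  # a good day: everything gets done
print(simulate_day(50, tasks))   # a bad day: the 100-spoon task never happens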

It may occur to you to ask:

What is the actual resource being depleted here?

Just what are the spoons, anyway?

That, I really can’t say.

I don’t think it’s as simple as brain glucose, though there were a few studies that seemed to support such a view. If it were, drinking something sugary ought to fix it, and generally that doesn’t work (and if you do that too often, it’s bad for your health). Even weirder is that, for some people, just tasting sugar seems to help with self-control. My own guess is that if your particular problem is hypoglycemia, drinking sugar works, and otherwise, not so much.

There could literally be some sort of neurotransmitter reserve that gets depleted, or receptors that get overloaded; but I suspect it’s not even that simple either. These are the models we use because they’re the best we have—but the brain is in reality far more complicated than any of our models.

I’ve heard people say “I ran out of serotonin today”, but I’m fairly sure they didn’t actually get their cerebrospinal fluid tested first. (And since most of your serotonin is actually in your gut, if they really ran out they should be having severe gastrointestinal symptoms.) (I had my cerebrospinal fluid tested once; most agonizing pain of my life. To say that I don’t recommend the experience is such an understatement, it’s rather like saying Hell sounds like a bad vacation spot. Indeed, if I believed in Hell, I would have to imagine it feels like getting a spinal tap every day for eternity.)

So for now, the best I can say is, I really don’t know what spoons are. And I still don’t entirely know what “can” means. But at least maybe now you’re as confused as I am.

Bundling the stakes to recalibrate ourselves

Mar 31 JDN 2460402

In a previous post I reflected on how our minds evolved for an environment of immediate return: An immediate threat with high chance of success and life-or-death stakes. But the world we live in is one of delayed return: delayed consequences with low chance of success and minimal stakes.

We evolved for a world where you need to either jump that ravine right now or you’ll die; but we live in a world where you’ll submit a hundred job applications before finally getting a good offer.

Thus, our anxiety system is miscalibrated for our modern world, and this miscalibration causes us to have deep, chronic anxiety which is pathological, instead of brief, intense anxiety that would protect us from harm.

I had an idea for how we might try to jury-rig this system and recalibrate ourselves:

Bundle the stakes.

Consider job applications.

The obvious way to think about it is to consider each application, and decide whether it’s worth the effort.

Any particular job application in today’s market probably costs you 30 minutes, but you won’t hear back for 2 weeks, and you have maybe a 2% chance of success. But if you fail, all you lost was that 30 minutes. This is the exact opposite of what our brains evolved to handle.

So now suppose you think of it in terms of sending 100 job applications.

That will cost you 30 minutes × 100 = 3,000 minutes, or 50 hours. You still won’t hear back for weeks, but you’ve spent weeks, so that won’t feel as strange. And your chances of success after 100 applications are something like 1-(0.98)^100 = 87%.
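Here is that arithmetic spelled out in a few lines of Python (the 2% success rate and 30 minutes per application are just my assumptions from above):

# Bundling 100 independent applications, each assumed to take 30 minutes
# and to have a 2% chance of success.
p_single = 0.02
n = 100
hours_invested = 30 * n / 60
p_at_least_one_offer = 1 - (1 - p_single) ** n
print(f"{hours_invested:.0f} hours invested")             # 50 hours
print(f"{p_at_least_one_offer:.0%} chance of an offer")   # ~87%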

Even losing 50 hours over a few weeks is not the disaster that falling down a ravine is. But it still feels a lot more reasonable to be anxious about that than to be anxious about losing 30 minutes.

More importantly, we have radically changed the chances of success.

Each individual application will almost certainly fail, but all 100 together will probably succeed.

If we were optimally rational, these two framings would lead to the same decisions, by a rather deep mathematical law, the linearity of expectation (where each X_i is the outcome of a single application):
E[X_1 + X_2 + … + X_n] = n E[X]

Thus, the expected utility of doing something n times is precisely n times the expected utility of doing it once (all other things equal); and so, it doesn’t matter which way you look at it.

But of course we aren’t perfectly rational. We don’t actually respond to the expected utility. It’s still not entirely clear how we do assess probability in our minds (prospect theory seems to be onto something, but it’s computationally harder than rational probability, which means it makes absolutely no sense to evolve it).

If instead we are trying to match up our decisions with a much simpler heuristic that evolved for things like jumping over ravines, our representation of probability may be very simple indeed, something like “definitely”, “probably”, “maybe”, “probably not”, “definitely not”. (This is essentially my categorical prospect theory, which, like the stochastic overload model, is a half-baked theory that I haven’t published and at this point probably never will.)

2% chance of success is solidly “probably not” (or maybe something even stronger, like “almost definitely not”). Then, outcomes that are in that category are presumably weighted pretty low, because they generally don’t happen. Unless they are really good or really bad, it’s probably safest to ignore them—and in this case, they are neither.

But 87% chance of success is a clear “probably”; and outcomes in that category deserve our attention, even if their stakes aren’t especially high. And in fact, by bundling them, we have even made the stakes a bit higher—likely making the outcome a bit more salient.
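Just to make that concrete, here is one possible way the coarsening might look; the cutoffs are mine, invented purely for illustration, not part of any published theory:

# A made-up mapping from probabilities to the verbal categories above.
def coarse_category(p):
    if p < 0.01:
        return "definitely not"
    if p < 0.30:
        return "probably not"
    if p < 0.70:
        return "maybe"
    if p < 0.99:
        return "probably"
    return "definitely"

print(coarse_category(0.02))  # "probably not": one application
print(coarse_category(0.87))  # "probably": one hundred applications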

The goal is to change “this will never work” to “this is going to work”.

For an individual application, there’s really no way to do that (without self-delusion); maybe you can make the odds a little better than 2%, but you surely can’t make them so high they deserve to go all the way up to “probably”. (At best you might manage a “maybe”, if you’ve got the right contacts or something.)

But for the whole set of 100 applications, this is in fact the correct assessment. It will probably work. And if 100 doesn’t, 150 might; if 150 doesn’t, 200 might. At no point do you need to delude yourself into over-estimating the odds, because the actual odds are in your favor.

This isn’t perfect, though.

There’s a glaring problem with this technique that I still can’t resolve: It feels overwhelming.

Doing one job application is really not that big a deal. It accomplishes very little, but also costs very little.

Doing 100 job applications is an enormous undertaking that will take up most of your time for multiple weeks.

So if you are feeling demotivated, asking you to bundle the stakes is asking you to take on a huge, overwhelming task that surely feels utterly beyond you.

Also, when it comes to this particular example, I even managed to do 100 job applications and still get a pretty bad outcome: My only offer was Edinburgh, and I ended up being miserable there. I have reason to believe that these were exceptional circumstances (due to COVID), but it has still been hard to shake the feeling of helplessness I learned from that ordeal.

Maybe there’s some additional reframing that can help here. If so, I haven’t found it yet.

But maybe stakes bundling can help you, or someone out there, even if it can’t help me.

How I feel is how things are

Mar 17 JDN 2460388

One of the most difficult things in life to learn is how to treat your own feelings and perceptions as feelings and perceptions—rather than simply as the way the world is.

A great many errors people make can be traced to this.

When we disagree with someone (whether it is as trivial as pineapple on pizza or as important as international law), we feel like they must be speaking in bad faith, they must be lying—because, to us, they are denying the way the world is. If the subject is important enough, we may become convinced that they are evil—for only someone truly evil could deny such important truths. (Ultimately, even holy wars may come from this perception.)


When we are overconfident, we not only can’t see that we are; we can scarcely even consider that it could be true. Because we don’t simply feel confident; we are sure we will succeed. And thus if we do fail, as we often do, the result is devastating; it feels as if the world itself has changed in order to make our wishes not come true.

Conversely, when we succumb to Impostor Syndrome, we feel inadequate, and so become convinced that we are inadequate, and thus that anyone who says they believe we are competent must either be lying or else somehow deceived. And then we fear to tell anyone, because we know that our jobs and our status depend upon other people seeing us as competent—and we are sure that if they knew the truth, they’d no longer see us that way.

When people see their beliefs as reality, they don’t even bother to check whether their beliefs are accurate.

Why would you need to check whether the way things are is the way things are?

This is how common misconceptions persist—the information needed to refute them is widely available, but people simply don’t realize they need to look for it.

For lots of things, misconceptions aren’t very consequential. But some common misconceptions do have large consequences.

For instance, most Americans think that crime is increasing and worse now than it was 30 or 50 years ago. (I tested this on my mother this morning; she thought so too.) It is in fact much, much better—violent crimes are about half as common in the US today as they were in the 1970s. Republicans are more likely to get this wrong than Democrats—but an awful lot of Democrats still get it wrong.

It’s not hard to see how that kind of misconception could drive voters into supporting “tough on crime” candidates who will enact needlessly harsh punishments and waste money on excessive police and incarceration. Indeed, when you look at our world-leading spending on police and incarceration (highest in absolute terms, third-highest as a portion of GDP), it’s pretty clear this is exactly what’s happening.

And it would be so easy—just look it up, right here, or here, or here—to correct that misconception. But people don’t even think to bother; they just know that their perception must be the truth. It never even occurs to them that they could be wrong, and so they don’t even bother to look.

This is not because people are stupid or lazy. (I mean, compared to what?) It’s because perceptions feel like the truth, and it’s shockingly difficult to see them as anything other than the truth.

It takes a very dedicated effort, and no small amount of training, to learn to see your own perceptions as how you see things rather than simply how things are.

I think part of what makes this so difficult is the existential terror that results when you realize that anything you believe—even anything you perceive—could potentially be wrong. Basically the entire field of epistemology is dedicated to understanding what we can and can’t be certain of—and the “can’t” is a much, much bigger set than the “can”.

In a sense, you can be certain of what you feel and perceive—you can be certain that you feel and perceive them. But you can’t be certain whether those feelings and perceptions correspond to your external reality.

When you are sad, you know that you are sad. You can be certain of that. But you don’t know whether you should be sad—whether you have a reason to be sad. Often, perhaps even usually, you do. But sometimes, the sadness comes from within you, or from misperceiving the world.

Once you learn to recognize your perceptions as perceptions, you can question them, doubt them, challenge them. Training your mind to do this is an important part of mindfulness meditation, and also of cognitive behavioral therapy.

But even after years of training, it’s still shockingly hard to do this, especially in the throes of a strong emotion. Simply seeing that what you’re feeling—about yourself, or your situation, or the world—is not an entirely accurate perception can take an incredible mental effort.

We really seem to be wired to see our perceptions as reality.

This makes a certain amount of sense, in evolutionary terms. In an ancestral environment where death was around every corner, we really didn’t have time to stop and think carefully about whether our perceptions were accurate.

Two ancient hominids hear a sound that might be a tiger. One immediately perceives it as a tiger, and runs away. The other stops to think, and then begins carefully examining his surroundings, looking for more conclusive evidence to determine whether it is in fact a tiger.

The latter is going to have more accurate beliefs—right up until the point where it is a tiger and he gets eaten.

But in our world today, it may be more dangerous to hold onto false beliefs than to analyze and challenge our beliefs. We may harm ourselves—and others—more by trusting our perceptions too much than by taking the time to analyze them.

Against Self-Delusion

Mar 10 JDN 2460381

Is there a healthy amount of self-delusion? Would we be better off convincing ourselves that the world is better than it really is, in order to be happy?


A lot of people seem to think so.

I most recently encountered this attitude in Kathryn Schulz’s book Being Wrong (I liked the TED talk much better, in part because it didn’t have this), but there are plenty of other examples.

You’ll even find advocates for this attitude in the scientific literature, particularly when talking about the Lake Wobegon Effect, optimism bias, and depressive realism.

Fortunately, the psychology community seems to be turning away from this, perhaps because of mounting empirical evidence that “depressive realism” isn’t a robust effect. When I searched today, it was easier to find pop psych articles against self-delusion than in favor of it. (I strongly suspect that would not have been true about 10 years ago.)

I have come up with a very simple, powerful argument against self-delusion:

If you’re allowed to delude yourself, why not just believe everything is perfect?

If you can paint your targets after shooting, why not always paint a bullseye?

The notion seems to be that deluding yourself will help you achieve your goals. But if you’re going to delude yourself, why bother achieving goals? You could just pretend to achieve goals. You could just convince yourself that you have achieved goals. Wouldn’t that be so much easier?

The idea seems to be, for instance, to get an aspiring writer to actually finish the novel and submit it to the publisher. But why shouldn’t she simply imagine she has already done so? Why not simply believe she’s already a bestselling author?

If there’s something wrong with deluding yourself into thinking you’re a bestselling author, why isn’t that exact same thing wrong with deluding yourself into thinking you’re a better writer than you are?

Once you have opened this Pandora’s Box of lies, it’s not clear how you can ever close it again. Why shouldn’t you just stop working, stop eating, stop doing anything at all, but convince yourself that your life is wonderful and die in a state of bliss?

Granted, this is not generally what people who favor (so-called) “healthy self-delusion” advocate. But it’s difficult to see any principled reason why they should reject it. Once you give up on tying your beliefs to reality, it’s difficult to see why you shouldn’t just say that anything goes.

Why are some deviations from reality okay, but not others? Is it because they are small? Small changes in belief can still have big consequences: Believe a car is ten meters behind where it really is, and it may just run you over.

The general approach of “healthy self-delusion” seems to be that it’s all right to believe that you are smarter, prettier, healthier, wiser, and more competent than you actually are, because that will make you more confident and therefore more successful.

Well, first of all, it’s worth pointing out that some people obviously go way too far in that direction and become narcissists. But okay, let’s say we find a way to avoid that. (It’s unclear exactly how, since, again, by construction, we aren’t tying ourselves to reality.)

In practice, the people who most often get this sort of advice are people who currently lack self-confidence, who doubt their own abilities—people who suffer from Impostor Syndrome. And for people like that (and I count myself among them), a certain amount of greater self-confidence would surely be a good thing.

The idea seems to be that deluding yourself to increase your confidence will get you to face challenges and take risks you otherwise wouldn’t have, and that this will yield good outcomes.

But there’s a glaring hole in this argument:

If you have to delude yourself in order to take a risk, you shouldn’t take that risk.

Risk-taking is not an unalloyed good. Russian Roulette is certainly risky, but it’s not a good career path.

There are in fact a lot of risks you simply shouldn’t take, because they aren’t worth it.

The right risks to take are the ones for which the expected benefit outweighs the expected cost: the ones with the highest expected utility. (That sounds simple, and in principle it is; but in practice, it can be extraordinarily difficult to determine.)

In other words, the right risks to take are the ones that are rational. The ones that a correct view of the world will instruct you to take.

That aspiring novelist, then, should write the book and submit it to publishers—if she’s actually any good at writing. If she’s actually terrible, then never submitting the book is the correct decision; she should spend more time honing her craft before she tries to finish it—or maybe even give up on it and do something else with her life.

What she needs, therefore, is not a confident assessment of her abilities, but an accurate one. She needs to believe that she is competent if and only if she actually is competent.
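
To put a toy version of that in numbers, here is a minimal expected-utility sketch, where the acceptance probability and the payoffs are entirely made up; the only point is that the right decision falls out of an accurate estimate rather than an inflated one:

```python
# Toy expected-utility comparison; all numbers are hypothetical, on an arbitrary scale.

def expected_utility(p: float, u_success: float, u_failure: float) -> float:
    """Expected utility of taking a risk with success probability p."""
    return p * u_success + (1 - p) * u_failure

u_accepted = 100      # assumed payoff if the novel is accepted
u_rejected = -5       # assumed cost of the effort plus the sting of rejection
u_not_submitting = 0  # the status quo: don't submit, nothing changes

for p_accept in (0.10, 0.01):
    eu_submit = expected_utility(p_accept, u_accepted, u_rejected)
    decision = "submit" if eu_submit > u_not_submitting else "keep honing the craft"
    print(f"p = {p_accept:.0%}: EU(submit) = {eu_submit:+.2f} -> {decision}")

# Output:
# p = 10%: EU(submit) = +5.50 -> submit
# p = 1%: EU(submit) = -3.95 -> keep honing the craft
```

The same calculation, fed an honest 10% or an honest 1%, gives opposite advice; inflating the 1% to feel like 10% doesn’t make submitting the right call, it just hides that it isn’t.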

But I can also see how self-delusion can seem like good advice—and even work for some people.

If you start from an excessively negative view of yourself or the world, then giving yourself a more positive view will likely cause you to accomplish more things. If you’re constantly telling yourself that you are worthless and hopeless, then convincing yourself that you’re better than you thought is absolutely what you need to do. (Because it’s true.)

I can even see how convincing yourself that you are the best is useful—even though, by construction, most people aren’t. When you live in a hyper-competitive society like ours, where we are constantly told that winning is everything, losers are worthless, and second place is as bad as losing, it may help you get by to tell yourself that you really are the best, that you really can win. (Even weirder: “Winning isn’t everything; it’s the only thing.” Uh, that’s just… obviously false? Like, what is this even intended to mean that “Winning is everything” didn’t already say better?)

But that’s clearly not the right answer. You’re solving one problem by adding another. You shouldn’t believe you are the best; you should recognize that you don’t have to be. Second place is not as bad as losing—and neither is fifth, or tenth, or fiftieth place. The 100th-most successful author in the world still makes millions writing. The 1,000th-best musician does regular concert tours. The 10,000th-best accountant has a steady job. Even the 100,000th-best trucker can make a decent living. (Well, at least until the robots replace him.)

Honestly, it’d be great if our whole society would please get this memo. It’s no problem that “only a minority of schools play sport to a high level”—indeed, that’s literally inevitable. It’s also not clear that “60% of students read below grade level” is a problem, when “grade level” seems to be largely defined by averages. (Literacy is great and all, but what’s your objective standard for “what a sixth grader should be able to read”?)

We can’t all be the best. We can’t all even be above-average.

That’s okay. Below-average does not mean inadequate.

That’s the message we need to be sending:

You don’t have to be the best in order to succeed.

You don’t have to be perfect in order to be good enough.

You don’t even have to be above-average.

This doesn’t require believing anything that isn’t true. It doesn’t require overestimating your abilities or your chances. In fact, it asks you to believe something that is more true than “You have to be the best” or “Winning is everything”.

If what you want to do is actually worth doing, an accurate assessment will tell you that. And if an accurate assessment tells you not to do it, then you shouldn’t do it. So you have no reason at all to strive for anything other than accurate beliefs.

With this in mind, the fact that the empirical evidence for “depressive realism” is shockingly weak is not only unsurprising; it’s almost irrelevant. You can’t have evidence against being rational. If deluded people succeed more, that means something is very, very wrong; and the solution is clearly not to make more people deluded.

Of course, it’s worth pointing out that the evidence is shockingly weak: Depressed people show different biases, not less bias. And in fact they seem to be more overconfident in the following sense: They are more certain that what they predict will happen is what will actually happen.

So while most people think they will succeed when they will probably fail, depressed people are certain they will fail when in fact they could succeed. Both beliefs are inaccurate, but the depressed one is in an important sense more inaccurate: It tells you to give up, which is the wrong thing to do.

“Healthy self-delusion” ultimately amounts to trying to get you to do the right thing for the wrong reasons. But why? Do the right thing for the right reasons! If it’s really the right thing, it should have the right reasons!

Serenity and its limits

Feb 25 JDN 2460367

God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.

Of course I don’t care for its religious message (and the full prayer is even more overtly religious), but the serenity prayer does capture an important insight into some of the most difficult parts of human existence.

Some things are as we would like them to be. They don’t require our intervention. (Though we may still stand to benefit from teaching ourselves to savor them and express gratitude for them.)

Other things are not as we would like them to be. The best option, of course, would be to change them.

But such change is often difficult, and sometimes practically impossible.

Sometimes we don’t even know whether change is possible—that’s where the wisdom to know the difference comes in. This is a wisdom we often lack, but it’s at least worth striving for.

If it is impossible to change what we want to change, then we are left with only one choice:

Do we accept it, or not?

The serenity prayer tells us to accept it. There is wisdom in this. Often it is the right answer. Some things about our lives are awful, but simply cannot be changed by any known means.

Death, for instance.

Someday, perhaps, we will finally conquer death, and humanity—or whatever humanity has become—will enter a new era of existence. But today is not that day. When grieving the loss of people we love, ultimately our only option is to accept that they are gone, and do our best to appreciate what they left behind, and the parts of them that are still within us. They would want us to carry on and live full lives, not forever be consumed by grief.

There are many other things we’d like to change, and maybe someday we will, but right now, we simply don’t know how: diseases we can’t treat, problems we can’t solve, questions we can’t answer. It’s often useful for someone to be trying to push those frontiers, but for any given person, the best option is often to find a way to accept things as they are.

But there are also things I cannot change and yet will not accept.

Most of these things fall into one broad category:

Injustice.

I can’t end war, or poverty, or sexism, or racism, or homophobia. Neither can you. Neither can any one person, or any hundred people, or any thousand people, or probably even any million people. (If all it took were a million dreams, we’d be there already. A billion might be enough—though it would depend which billion people shared the dream.)

I can’t. You can’t. But we can.

And here I mean “we” in a very broad sense indeed: Humanity as a collective whole. All of us together can end injustice—and indeed that is the only way it ever could be ended, by our collective action. Collective action is what causes injustice, and collective action is what can end it.

I therefore consider serenity in the face of injustice to be a very dangerous thing.

At times, and to certain degrees, that serenity may be necessary.

Those who are right now in the grips of injustice may need to accept it in order to survive. Reflecting on the horror of a concentration camp won’t get you out of it. Embracing the terror of war won’t save you from being bombed. Weeping about the sorrow of being homeless won’t get you off the streets.

Even for those of us who are less directly affected, it may sometimes be wisest to blunt our rage and sorrow at injustice—for otherwise they could be paralyzing, and if we are paralyzed, we can’t help anyone.

Sometimes we may even need to withdraw from the fight for justice, simply because we are too exhausted to continue. I recently read a powerful analogy about this:

A choir can sing the same song forever, as long as its singers take turns resting.

If everyone tries to sing their very hardest all the time, the song must eventually end, as no one can sing forever. But if we rotate our efforts, so that at any given moment some are singing while others are resting, then we theoretically could sing for all time—as some of us die, others would be born to replace us in the song.

For a literal choir this seems absurd: Who even wants to sing the same song forever? (Lamb Chop, I guess.)

But the fight for justice probably is one we will need to continue forever, in different forms in different times and places. There may never be a perfectly just society, and even if there is, there will be no guarantee that it remains so without eternal vigilance. Yet the fight is worth it: in so many ways our society is already more just than it once was, and could be made more so in the future.

This fight will only continue if we don’t accept the way things are. Even when any one of us can’t change the world—even if we aren’t sure how many of us it would take to change the world—we still have to keep trying.

But as in the choir, each one of us also needs to rest.

We can’t all be fighting all the time as hard as we can. (I suppose if literally everyone did that, the fight for justice would be immediately and automatically won. But that’s never going to happen. There will always be opposition.)

And when it is time for each of us to rest, perhaps some serenity is what we need after all. Perhaps there is a balance to be found here: We do not accept things as they are, but we do accept that we cannot change them immediately or single-handedly. We accept that our own strength is limited and sometimes we must withdraw from the fight.

So yes, we need some serenity. But not too much.

Enough serenity to accept that we won’t win the fight immediately or by ourselves, and sometimes we’ll need to stop fighting and rest. But not so much serenity that we give up the fight altogether.

For there are many things that I can’t change—but we can.

Administering medicine to the dead

Jan 28 JDN 2460339

Here are a couple of pithy quotes that go around rationalist circles from time to time:

“To argue with a man who has renounced the use and authority of reason, […] is like administering medicine to the dead[…].”

― Thomas Paine, The American Crisis

“It is useless to attempt to reason a man out of a thing he was never reasoned into.”

― Jonathan Swift

You usually hear that abridged version, but Thomas Paine’s full quotation is actually rather interesting:

“To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.”

― Thomas Paine, The American Crisis

It is indeed quite ineffective to convert an atheist by scripture (though that doesn’t seem to stop them from trying). Yet this quotation seems to claim that the opposite should be equally ineffective: It should be impossible to convert a theist by reason.

Well, then, how else are we supposed to do it!?

Indeed, how did we become atheists in the first place!?

You were born an atheist? No, you were born having absolutely no opinion about God whatsoever. (You were born not realizing that objects don’t fade from existence when you stop seeing them! In a sense, we were all born believing ourselves to be God.)

Maybe you were raised by atheists, and religion never tempted you at all. Lucky you. I guess you didn’t have to be reasoned into atheism.

Well, most of us weren’t. Most of us were raised into religion, and told that it held all the most important truths of morality and the universe, and that believing anything else was horrible and evil and would result in us being punished eternally.

And yet, somehow, somewhere along the way, we realized that wasn’t true. And we were able to realize that because people made rational arguments.

Maybe we heard those arguments in person. Maybe we read them online. Maybe we read them in books that were written by people who died long before we were born. But somehow, somewhere people actually presented the evidence for atheism, and convinced us.

That is, they reasoned us out of something that we were not reasoned into.

I know it can happen. I have seen it happen. It has happened to me.

And it was one of the most important events in my entire life. More than almost anything else, it made me who I am today.

I’m scared that if you keep saying it’s impossible, people will stop trying to do it—and then it will stop happening to people like me.

So please, please stop telling people it’s impossible!

Quotes like these encourage you to simply write off entire swaths of humanity—most of humanity, in fact—judging them as worthless, insane, impossible to reach. When you should be reaching out and trying to convince people of the truth, quotes like these instead tell you to give up and consider anyone who doesn’t already agree with you as your enemy.

Indeed, it seems to me that the only logical conclusion of quotes like these is violence. If it’s impossible to reason with people who oppose us, then what choice do we have, but to fight them?

Violence is a weapon anyone can use.

Reason is the one weapon in the universe that works better when you’re right.

Reason is the sword that only the righteous can wield. Reason is the shield that only protects the truth. Reason is the only way we can ever be sure that the right people win—instead of just whoever happens to be strongest.

Yes, it’s true: reason isn’t always effective, and probably isn’t as effective as it should be. Convincing people to change their minds through rational argument is difficult and frustrating and often painful for both you and them—but it absolutely does happen, and our civilization would have long ago collapsed if it didn’t.

Even people who claim to have renounced all reason really haven’t: they still know 2+2=4 and they still look both ways when they cross the street. Whatever they’ve renounced, it isn’t reason; and maybe, with enough effort, we can help them see that—by reason, of course.

In fact, maybe even literally administering medicine to the dead isn’t such a terrible idea.

There are degrees of death, after all: Someone whose heart has stopped is in a different state than someone whose cerebral activity has ceased, and both of them clearly stand a better chance of being resuscitated than someone who has been vaporized by an explosion.

As our technology improves, more and more states that were previously considered irretrievably dead will instead be considered severe states of illness or injury from which it is possible to recover. We can now restart many stopped hearts; we are working on restarting stopped brains. (Of course we’ll probably never be able to restore someone who got vaporized—unless we figure out how to make backup copies of people?)

Most of the people who now occupy the world’s hundreds of thousands of ICU beds would have been considered dead even just 100 years ago. But many of them will recover, because we didn’t give up on them.

So don’t give up on people with crazy beliefs either.

They may seem like they are too far gone, like nothing in the world could ever bring them back to the light of reason. But you don’t actually know that for sure, and the only way to find out is to try.

Of course, you won’t convince everyone of everything immediately. No matter how good your evidence is, that’s just not how this works. But you probably will convince someone of something eventually, and that is still well worthwhile.

You may not even see the effects yourself—people are often loath to admit when they’ve been persuaded. But others will see them. And you will see the effects of other people’s persuasion.

And in the end, reason is really all we have. It’s the only way to know that what we’re trying to make people believe is the truth.

Don’t give up on reason.

And don’t give up on other people, whatever they might believe.