Expressivism

Sep 29 JDN 2460583

The theory of expressivism, often posited as an alternative to moral realism, is based on the observation by Hume that factual knowledge is not intrinsically motivating. I can believe that a food is nutritious and that I need nutrition to survive, but without some emotional experience to motivate me—hunger—I will nonetheless remain unmotivated to eat the nutritious food. Because morality is meant to be intrinsically motivating, says Hume, it must not involve statements of fact.

Yet really all Hume has shown is that if indeed facts are not intrinsically motivating, and moral statements are intrinsically motivating, then moral statements are not merely statements of fact. But even statements of fact are rarely merely statements of fact! If I were to walk down the street stating facts at random (lemurs have rings on their tails, the Sun is over one million kilometers in diameter, bicycles have two wheels, people sit on chairs, time dilates as you approach the speed of light, LGBT people suffer the highest per capita rate of hate crimes in the US, Coca-Cola in the United States contains high fructose corn syrup, humans and chimpanzees share 95-98% of our DNA), I would be seen as a very odd sort of person indeed. Even when I state a fact, I do so out of some motivation, frequently an emotional motivation. I’m often trying to explain, or to convince. Sometimes I am angry, and I want to express my anger and frustration. Other times I am sad and seeking consolation. I have many emotions, and I often use words to express them. Nonetheless, in the process I will make many statements of fact that are either true or false: “Humans and chimpanzees share 95-98% of our DNA” I might use to argue in favor of common descent; “Time dilates as you approach the speed of light” I have used to explain relativity theory; “LGBT people suffer the highest per capita rate of hate crimes in the US” I might use to argue in favor of some sort of gay rights policy. When I say “genocide is wrong!” I probably have some sort of emotional motivation for this—likely my outrage at an ongoing genocide. Yet I’m pretty sure it’s true that genocide is wrong.

Expressivism says that moral statements don’t express propositions at all; they express attitudes, relations to ideas that are not of the same kind as belief and disbelief, truth and falsehood. Much as “Hello!” or “Darn it!” don’t really state facts or inquire about facts, expressivists like Simon Blackburn and Allan Gibbard would say that “Genocide is wrong” doesn’t say anything about the facts of genocide; it merely expresses my attitude of moral disapproval toward genocide.

Yet expressivists can’t abandon all normativity—otherwise even the claim “expressivism is true” has no normative force. Allan Gibbard, like most expressivists, supports epistemic normativity—the principle that we ought to believe what is true. But this seems to me already a moral principle, and one that is not merely an attitude that some people happen to have, but in fact a fundamental axiom that ought to apply to any rational beings in any possible universe. What’s more, Gibbard agrees that some moral attitudes are more warranted than others, that “genocide is wrong” is more legitimate than “genocide is good”. But once we agree that there are objective normative truths and that moral attitudes can be more or less justified, how is this any different from moral realism?

Indeed, in terms of cognitive science I’m not sure beliefs and emotions are so easily separable in the first place. In some sense I think statements of fact can be intrinsically motivating—or perhaps it is better to put it this way: If your brain is working properly, certain beliefs and emotions will necessarily coincide. If you believe that you are about to be attacked by a tiger, and you don’t experience the emotion of fear, something is wrong; if you believe that you are about to die of starvation, and you don’t experience the emotion of hunger, something is wrong. Conversely, if you believe that you are safe from all danger, and yet you experience fear, something is wrong; if you believe that you have eaten plenty of food, yet you still experience hunger, something is wrong. When your beliefs and emotions don’t align, either your beliefs or your emotions are defective. I would say that the same is true of moral beliefs. If you believe that genocide is wrong but you are not motivated to resist genocide, something is wrong; if you believe that feeding your children is obligatory but you are not motivated to feed your children, something is wrong.

It may well be that without emotion, facts would never motivate us; but emotions can be warranted by facts. That is how we distinguish depression from sadness, mania from joy, phobia from fear. Indeed I am dubious of the entire philosophical project of noncognitivism, of which expressivism is the moral form. Noncognitivism is the idea that a given domain of mental processing is not cognitive—not based on thinking, reason, or belief. There is often a sense that noncognitive mental processing is “lower” than cognition, usually based on the idea that it is more phylogenetically conserved—that we think as men but feel as rats.

Yet in fact this is not how human emotions work at all. Poetry—mere words—often evokes the strongest of emotions. A text message of “I love you” or “I think we should see other people” can change the course of our lives. An ambulance in the driveway will pale the face of any parent. In 2001 the video footage of airplanes colliding with skyscrapers gave all of America nightmares for weeks. Yet stop and think about what text messages, ambulances, video footage, airplanes, and skyscrapers are—they are technologies so advanced, so irreducibly cognitive, that even the world’s technological superpower had none of them 200 years ago. (We didn’t have text messages forty years ago!) Even something as apparently dry as numbers can have profound emotional effects: In the statements “Your blood sugar is X mg/dL” to a diabetic, “You have Y years to live” to a cancer patient, or “Z people died” in a news report, the emotional effects are almost wholly dependent upon the value of the numbers X, Y, and Z—values of X = 100, Y = 50 and Z = 0 would be no cause for alarm (or perhaps even cause for celebration!), while values of X = 400, Y = 2, and Z = 10,000 would trigger immediate shock, terror and despair. The entire discipline of cognitive-behavioral psychotherapy depends upon the fact that talking to people about their thoughts and beliefs can have profound effects upon their emotions and actions—and in empirical studies, cognitive-behavioral psychotherapy is verified to work in a variety of circumstances and is more effective than medication for virtually every mental disorder. We do not think as men but feel as rats; we think and feel as human beings.

Because emotions are evolved instincts, because we have limited control over them, and because other animals have them, we are often inclined to suppose that they are simple, stupid, irrational—but on the contrary they are mind-bogglingly complex, brilliantly intelligent, and the essence of what it means to be a rational being. People who don’t have emotions aren’t rational—they are inert. In psychopathology a loss of capacity for emotion is known as flat affect, and it is often debilitating; it is found in schizophrenia and autism, and in its most extreme forms it causes catatonia—that is, a total lack of body motion. From Plato to Star Trek, Western culture has taught us to think that a loss of emotion would improve our rationality; but on the contrary, a loss of all emotion would render us completely vegetative. Lieutenant Commander Data without his emotion chip would stand in one place and do nothing—for this is what people without emotion actually do.

Indeed, attractive and aversive experiences—that is, emotions—are the core of goal-seeking behavior, without which rationality is impossible. Apparently simple experiences like pleasure and pain (let alone obviously complicated ones like jealousy and patriotism) are so complex that the most advanced robots in the world cannot even get close to simulating them. Injure a rat, and it will withdraw and cry out in pain; damage a robot (at least any less than a state-of-the-art research robot), and it will not react at all, continuing ineffectually through the same motions it was attempting a moment ago. This shows that rats are smarter than robots—an organism that continues on its way regardless of the stimulus is more like a plant than an animal.

Our emotions do sometimes fail us. They hurt us, they put us at risk, they make us behave in ways that are harmful or irrational. Yet to declare on these grounds that emotions are the enemy of reason would be like declaring that we should all poke out our eyes because sometimes we are fooled by optical illusions. It would be like saying that a shirt with one loose thread is unwearable, that a mathematician who once omits a negative sign should never again be trusted. This is not rationality but perfectionism. Like human eyes, human emotions are rational the vast majority of the time, and when they aren’t, this is cause for concern. Truly irrational emotions include mania, depression, phobia, and paranoia—and it’s no accident that we respond to these emotions with psychotherapy and medication.

Expressivism is legitimate precisely because it is not a challenger to moral realism. Personally, I think that expressivism is wrong because moral claims express facts as much as they express attitudes; but given our present state of knowledge about cognitive science, that is the sort of question upon which reasonable people can disagree. Moreover, the close ties between emotion and reason may ultimately entail that we are wrong to make the distinction in the first place. It is entirely reasonable, at our present state of knowledge, to think that moral judgments are primarily emotional rather than propositional. What is not reasonable, however, is the claim that moral statements cannot be objectively justified—the evidence against this claim is simply too compelling to ignore. If moral claims are emotions, they are emotions that can be objectively justified.

Against Moral Anti-Realism

Sep 22 JDN 2460576

Moral anti-realism is more philosophically sophisticated than relativism, but it is equally mistaken. It is what it sounds like: the negation of moral realism. Moral anti-realists hold that moral claims are meaningless because they rest upon presumptions about the world that fail to hold. To an anti-realist, “genocide is wrong” is meaningless because there is no such thing as “wrong”, much as to any sane person “unicorns have purple feathers” is meaningless because there are no such things as unicorns. They aren’t saying that genocide isn’t wrong—they’re saying that wrong itself is a defective concept.

The vast majority of people profess strong beliefs in moral truth, and indeed strong beliefs about particular moral issues, such as abortion, capital punishment, same-sex marriage, euthanasia, contraception, civil liberties, and war. There is at the very least a troubling tension here between academia and daily life.

This does not by itself prove that moral truths exist. Ordinary people could be simply wrong about these core beliefs. Indeed, I must acknowledge that most ordinary people clearly are deeply ignorant about certain things: only 55% of Americans believe that the theory of evolution is true, and only 66% of Americans agree that the majority of recent changes in Earth’s climate have been caused by human activity, when in reality these are scientific facts, empirically demonstrable through multiple lines of evidence, verified beyond all reasonable doubt, and universally accepted within the scientific community. In scientific terms there is no more doubt about evolution or climate change than there is about the shape of the Earth or the structure of the atom.

If there were similarly compelling reasons to be moral anti-realists, then the fact that most people believe in morality would be little different: Perhaps most ordinary people are simply wrong about these issues. But when asked to provide similarly compelling evidence for why they reject the moral views of ordinary people, moral anti-realists have little to offer.

Many anti-realists will note the diversity of moral opinions in the world, as John Burgess did, which would be rather like noting the diversity of beliefs about the soul as an argument against neuroscience, or noting the diversity of beliefs about the history of life as an argument against evolution. Many people are wrong about many things that science has shown to be the case; this is worrisome for various reasons, but it is not an argument against the validity of scientific knowledge. Similarly, a diversity of opinions about morality is worrisome, but hardly evidence against the validity of morality.

In fact, when they talk about such fundamental disagreements in morality, anti-realists don’t have very compelling examples. It’s easy to find fundamental disagreements about biology—ask an evolutionary biologist and a Creationist whether humans share an ancestor with chimpanzees. It’s easy to find fundamental disagreements about cosmology—ask a physicist and an evangelical Christian how the Earth began. It’s easy to find fundamental disagreements about climate—ask a climatologist and an oil company executive whether human beings are causing global warming. But where are these fundamental disagreements in morality? Sure, on specific matters there is some disagreement. There are differences between cultures regarding what animals it is acceptable to eat, and differences between cultures about what constitutes acceptable clothing, and differences on specific political issues. But in what society is it acceptable to kill people arbitrarily? Where is it all right to steal whatever you want? Where is lying viewed as a good thing? Where is it obligatory to eat only dirt? In what culture has wearing clothes been a crime? Moral realists are by no means committed to saying that everyone agrees about everything—but it does support our case to point out that most people agree on most things most of the time.

There are a few compelling cases of moral disagreement, but they hardly threaten moral realism. How might we show one culture’s norms to be better than another’s? Compare homicide rates. Compare levels of poverty. Compare overall happiness, perhaps using surveys—or even brain scans. This kind of data exists, and it has a fairly clear pattern: people living in social democratic societies (such as Sweden and Norway) are wealthier, safer, longer-lived, and overall happier than people in other societies. Moreover, the same publicly available data show that democratic societies in general do much better than authoritarian societies, by almost any measure. This is an empirical fact. It doesn’t necessarily mean that such societies are doing everything right—but they are clearly doing something right. And it really isn’t so implausible to say that what they are doing right is enforcing a good system of moral, political, and cultural norms.

Then again, perhaps some people would accept these empirical facts but still insist that their culture is superior; suppose the disagreement really is radical and intractable. This still leaves two possibilities for moral realism.

The most obvious answer would be to say that one group is wrong—that, objectively, one culture is better than another.

But even if that doesn’t work, there is another way: Perhaps both are right, or more precisely, perhaps these two cultural systems are equally good but incompatible. Is this relativism? Some might call it that, but if it is, it’s relativism of a very narrow kind. I am emphatically not saying that all existing cultures are equal, much less that all possible cultures are equal. Instead, I am saying that it is entirely possible to have two independent moral systems which prescribe different behaviors yet nonetheless result in equally-good overall outcomes.

I could make a mathematical argument involving local maxima of nonlinear functions, but instead I think I’ll use an example: Traffic laws.

In the United States, we drive on the right side of the road. In the United Kingdom, they drive on the left side. Which way is correct? Both are—both systems work well, and neither is superior in any discernible way. In fact, there are other systems that would be just as effective, like the system of all one-way roads that prevails in Manhattan.

Yet does this mean that we should abandon reason in our traffic planning, throw up our hands and declare that any traffic system is as good as any other? On the contrary—there are plenty of possible traffic systems that clearly don’t work. Pointing several one-way roads into one another with no exit is clearly not going to result in good traffic flow. Having each driver flip a coin to decide whether to drive on the left or the right would result in endless collisions. Moreover, our own system clearly isn’t perfect. Nearly 40,000 Americans die in car collisions every year; perhaps we can find a better system that will prevent some or all of these deaths. The mere fact that two, or three, or even 400 different systems of laws or morals are equally good does not entail that all systems are equally good. Even if two cultures really are equal, that doesn’t mean we need to abandon moral realism; it merely means that some problems have multiple solutions. “X² = 4; what is X?” has two perfectly correct answers (2 and -2), but it also has an infinite variety of wrong answers.
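To make the multiple-solutions point a bit more concrete, here is a toy numerical sketch (my own illustration, not part of the original argument, with arbitrary numbers): a simple nonlinear function with two equally good maxima, analogous to the two correct answers of X² = 4 amid infinitely many wrong ones.

    import numpy as np

    # Toy objective: f(x) = -(x^2 - 4)^2 has two equally good global maxima, at x = -2 and x = 2.
    # Think of each x as a candidate system of norms and f(x) as how well it works.
    def f(x):
        return -(x**2 - 4)**2

    xs = np.linspace(-4, 4, 8001)      # a grid of candidate "solutions"
    values = f(xs)
    best = values.max()                # the best achievable outcome (here, 0)

    # Every candidate achieving the best outcome counts as an equally correct answer.
    optima = xs[np.isclose(values, best, atol=1e-6)]
    print(optima)                      # [-2.  2.] : the two equally correct answers
    print(len(optima) / len(xs))       # ~0.00025 : almost every other candidate is worse

Two answers tie for best, yet almost every other candidate is simply worse, which is all the traffic-law analogy needs.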

In fact, moral disagreement may not be evidence of anti-realism at all. In order to disagree with someone, you must think that there is an objective fact to be decided. If moral statements were seen as arbitrary and subjective, then people wouldn’t argue about them very much. Imagine an argument: “Chocolate is the best flavor of ice cream!” “No, vanilla is the best!” This sort of argument might happen on occasion between seven-year-olds, but it is definitely not the sort of thing we hear from mature adults. This is because as adults we realize that tastes in ice cream really are largely subjective. An anti-realist can, in theory, account for this, if they can explain why moral values are falsely perceived as objective while values in taste are not; but if all values are really arbitrary and subjective, why is it that this is obvious to everyone in the one case and not the other? In fact, there are compelling reasons to think that we couldn’t perceive moral values as arbitrary even if we tried. Some people say “abortion is a right”, others say “abortion is murder”. Even if we were to say that these are purely arbitrary, we would still be left with the task of deciding what laws to make on abortion. Regardless of where the goals come from, some goals are just objectively incompatible.

Another common anti-realist argument rests upon the way that arguments about morality often become emotional and irrational. Charles Stevenson has made this argument; apparently Stevenson has never witnessed an argument about religion, science, or policy, certainly not one outside academia. Many laypeople will insist passionately that the free market is perfect, global warming is a lie, or the Earth is only 6,000 years old. (Often the same people, come to think of it.) People will grow angry and offended if such beliefs are disputed. Yet these are objectively false claims. Unless we want to be anti-realists about GDP, temperature and radiometric dating, emotional and irrational arguments cannot compel us to abandon realism.

Another frequent claim, commonly known as the “argument from queerness”, says that moral facts would need to be something very strange, usually imagined as floating obligations existing somewhere in space; but this is rather like saying that mathematical facts cannot exist because we do not see floating theorems in space and we have never met a perfect triangle. In fact, there is no such thing as a floating speed of light or a floating Schrodinger’s equation either, but no one thinks this is an argument against physics.

A subtler version of this argument, the original “argument from queerness” put forth by J.L. Mackie, says that moral facts are strange because they are intrinsically motivating, something no other kind of facts would be. This is no doubt true; but it seems to me a fairly trivial observation, since part of the definition of “moral fact” is that anything which has this kind of motivational force is a moral (or at least normative) fact. Any well-defined natural kind is subject to the same sort of argument. Spheres are perfectly round three-dimensional objects, something no other object is. Eyes are organs that perceive light, something no other organ does. Moral facts are indeed facts that categorically motivate action, which no other thing does—but so what? All this means is that we have a well-defined notion of what it means to be a moral fact.

Finally, it is often said that moral claims are too often based on religion, and religion is epistemically unfounded, so morality must fall as well. Now, unlike most people, I completely agree that religion is epistemically unfounded. Instead, the premise I take issue with is the idea that moral claims have anything to do with religion. A lot of people seem to think so; but in fact our most important moral values transcend religion and in many cases actually contradict it.

Now, it may well be that the majority of claims people make about morality are to some extent based in their religious beliefs. The majority of governments in history have been tyrannical; does that mean that government is inherently tyrannical, there is no such thing as a just government? The vast majority of human beings have never traveled in outer space; does that mean space travel is impossible? Similarly, I see no reason to say that simply because the majority of moral claims (maybe) are religious, therefore moral claims are inherently religious.

Generally speaking, moral anti-realists make a harsh distinction between morality and other domains of knowledge. They agree that there are such things as trucks and comets and atoms, but do not agree that there are such things as obligations and rights. Indeed, a typical moral anti-realist speaks as if they are being very rigorous and scientific while we moral realists are being foolish, romantic, even superstitious. Moral anti-realism has an attitude of superciliousness not seen in a scientific faction since behaviorism.

But in fact, I think moral anti-realism is the result of a narrow understanding of fundamental physics and cognitive science. It is a failure to drink deep enough of the Pierian spring. This is not surprising, since fundamental physics and cognitive science are so mind-bogglingly difficult that even the geniuses of the world barely grasp them. Quoth Feynman: “I think I can safely say that nobody understands quantum mechanics.” This was of course a bit overstated—Feynman surely knew that there are things we do understand about quantum physics, for he was among those who best understood them. Still, even the brightest minds in the world face total bafflement before problems like dark energy, quantum gravity, the binding problem, and the Hard Problem. It is no moral failing to have a narrow understanding of fundamental physics and cognitive science, for the world’s greatest minds have a scarcely broader understanding.

The failing comes from trying to apply this narrow understanding of fundamental science to moral problems without the humility to admit that the answers are never so simple. “Neuroscience proves we have no free will.” No it doesn’t! It proves we don’t have the kind of free will you thought we did. “We are all made of atoms, therefore there can be no such thing as right and wrong.” And what do you suppose we would have been made of if there were such things as right and wrong? Magical fairy dust?

Here is what I think moral anti-realists get wrong: They hear only part of what scientists say. Neuroscientists explain to them that the mind is a function of matter, and they hear it as if we had said there is only mindless matter. Physicists explain to them that we have much more precise models of atomic phenomena than we do of human behavior, and they hear it as if we had said that scientific models of human behavior are fundamentally impossible. They trust that we know very well what atoms are made of and very poorly what is right and wrong—when quite the opposite is the case.

In fact, the more we learn about physics and cognitive science, the more similar the two fields seem. There was a time when Newtonian mechanics ruled, when everyone thought that physical objects are made of tiny billiard balls bouncing around according to precise laws, while consciousness was some magical, “higher” spiritual substance that defied explanation. But now we understand that quantum physics is all chaos and probability, while cognitive processes can be mathematically modeled and brain waves can be measured in the laboratory. Something as apparently simple as a proton—let alone an extended, complex object, like a table or a comet—is fundamentally a functional entity, a unit of structure rather than substance. To be a proton is to be organized the way protons are and to do what protons do; and so to be human is to be organized the way humans are and to do what humans do. The eternal search for “stuff” of which everything is made has come up largely empty; eventually we may find the ultimate “stuff”, but when we do, it will already have long been apparent that substance is nowhere near as important as structure. Reductionism isn’t so much wrong as beside the point—when we want to understand what makes a table a table or what makes a man a man, it simply doesn’t matter what stuff they are made of. The table could be wood, glass, plastic, or metal; the man could be carbon, nitrogen and water like us, or else silicon and tantalum like Lieutenant Commander Data on Star Trek. Yes, structure must be made of something, and the substance does affect the structures that can be made out of it, but the structure is what really matters, not the substance.

Hence, I think it is deeply misguided to suggest that because human beings are made of molecules, this means that we are just the same thing as our molecules. Love is indeed made of oxytocin (among other things), but only in the sense that a table is made of wood. To know that love is made of oxytocin really doesn’t tell us very much about love; we need also to understand how oxytocin interacts with the bafflingly complex system that is a human brain—and indeed how groups of brains get together in relationships and societies. This is because love, like so much else, is not substance but function—something you do, not something you are made of.

It is not hard, rigorous science that says love is just oxytocin and happiness is just dopamine; it is naive, simplistic science. It is the sort of “science” that comes from overlaying old prejudices (like “matter is solid, thoughts are ethereal”) with a thin veneer of knowledge. To be a realist about protons but not about obligations is to be a realist about some functional relations and not others. It is to hear “mind is matter”, and fail to understand the is—the identity between them—instead acting as if we had said “there is no mind; there is only matter”. You may find it hard to believe that mind can be made of matter, as do we all; yet the universe cares not about our incredulity. The perfect correlation between neurochemical activity and cognitive activity has been verified in far too many experiments to doubt. Somehow, that kilogram of wet, sparking gelatin in your head is actually thinking and feeling—it is actually you.

And once we realize this, I do not think it is a great leap to realize that the vast collection of complex, interacting bodies moving along particular trajectories through space that was the Holocaust was actually wrong, really, objectively wrong.

Are eliminativists zombies?

May 19 JDN 2460450

There are lots of little variations, but basically all views on the philosophy of mind boil down to four possibilities:

  1. Dualism: Mind and body are two separate types of thing
  2. Monism: Mind and body are the same type of thing
  3. Idealism: Only mind exists; body isn’t real
  4. Eliminativism: Only body exists; mind isn’t real

Like most philosophers and cognitive scientists, I am a die-hard monist, specifically a physicalist: The mind and the body are the same type of thing. Indeed, they are parts of the same physical system.

I call it the Basic Fact of Cognitive Science, which so many fail to understand at their own peril:

You are your brain.

You are not a product of your brain; you are not an illusion created by your brain; you are not connected to your brain. You are your brain. Your consciousness is generated by the activity of your brain.

Understanding how this works is beyond current human knowledge. I ask only that you understand that it works. Treat it as a brute fact of the universe if you must.

But precisely because understanding this mechanism is so difficult (it has been aptly dubbed the Hard Problem), I am at least somewhat sympathetic to dualists, who say that the reason we can’t understand how the mind and brain are the same is that they aren’t, that there is some extra thing, the soul, which somehow makes consciousness and isn’t made of any material substance.

(If you want to get into the weeds a bit more, there are also “property dualists”, who try to bridge the gap between dualism and physicalism, but I think they are trying to have their cake and eat it too. So-called “predicate dualism” is really just physicalism; nobody says that tables or hurricanes are non-physical just because they are multiply-realizable.)

The problem, of course, is that dualism doesn’t actually explain anything. In fact, it adds a bunch of other mysteries that would then need to be explained, because there are clear, direct ways that consciousness interacts with physical matter. Affecting the body affects the mind, and vice-versa.

You don’t need anything as exotic as fMRI or brain injury studies to understand this. All you need to do is take a drug. In fact, all you need to do is get hungry and eat food. Eating food—obviously a physical process—makes you no longer hungry—a change in your conscious state. And the reason you ate food in the first place was because you were hungry—your mental state intervened on your bodily action.

The fact that mind and body are deeply connected is therefore an obvious fact, which should have been apparent to anyone throughout history. It doesn’t require any kind of deep scientific knowledge; all you have to do is pay close enough attention to your ordinary life.

But I can at least understand the temptation to be a dualist. Consciousness is weird and mysterious. It’s tempting to posit some whole new class of substance beyond anything we know in order to explain it.

Then there’s idealism, which theoretically, in principle, could be true—it’s just absurdly, vanishingly unlikely. Technically, all that I experience, qua experience, happens in my mind. So I can’t completely rule out the possibility that everything I think of as physical reality is actually just an illusion, and only my mind exists. It’s just that, well… the whole of my experience points pretty strongly to this not being the case. At the very least, it’s utterly impractical to live your life according to such a remote possibility.

That leaves eliminativism. And this, I confess, is the one I really don’t get.

Idealism, I can’t technically rule out; dualism, I understand the temptation; monism is in fact the truth. But eliminativism? I just can’t grok how anyone can actually believe it.

Then again, I think they sort of admit that.

The weirdest thing about eliminativism is that eliminativists are actually saying that things like beliefs and knowledge and feelings don’t actually exist.

If you ask an eliminativist if they believe eliminativism is true, they should answer “no”, because their assertion is precisely that nobody believes anything at all.

The more sophisticated eliminativists say that these “folk terms” are rough approximations to deeper concepts that cognitive science will someday understand. That’s not so ridiculous, but it would still be rather like saying that iron doesn’t exist because we now understand that an iron atom has precisely 26 protons. Perhaps indeed we will understand the mechanisms underlying beliefs better than we do now; but why would we need to stop calling them beliefs?

But some eliminativists—particularly behaviorists—seem to think that these “folk terms” are just stupid, unscientific notions that will one day be discarded the same way that phlogiston and elan vital were discarded. And that I absolutely cannot fathom.

Consciousness isn’t an explanation; it is what we were trying to explain.

You can’t just discard the phenomenon you were trying to make sense of! This isn’t giving up on phlogiston; it’s giving up on fire. This isn’t abandoning the notion of elan vital; it’s abandoning the distinction between life and death.

But the more I think about this, the more I wonder:

Maybe eliminativists are right—about themselves?

Maybe the reason they think the rest of us don’t have feelings and beliefs is that they actually don’t. They don’t understand all this talk about the inner light of consciousness, because they just don’t have it.

In other words:

Are eliminativists zombies?

No, not the shambling, “Brains! Brains!” kind of zombie; the philosophical concept of a zombie (sometimes written “p-zombie” to clarify). A zombie is a being that looks human, acts human, is externally indistinguishable from a human, yet has no internal experience. They walk and talk, but they don’t actually think. A zombie acts like us, but lacks the inner light of consciousness.

Of course, what I’d really be saying here is that they are almost indistinguishable, but you can sometimes tell them apart by their babbling about the non-existence of consciousness.

But really, almost indistinguishable makes more sense anyway; if they were literally impossible to tell apart under any conceivable test, it’s difficult to even make sense of what we mean when we say they are different. (I am certainly not the first to point this out, and indeed it’s often used as an argument against the existence of zombies.)

Do I actually think that eliminativists are zombies?

No. I don’t.

But the weird thing is that they seem to, and so I feel some compulsion to let them self-identify that way. It feels wrong to attribute beliefs to someone that they say they don’t actually hold, and eliminativists have said that they don’t hold any beliefs whatsoever.

Yet, somehow, I don’t think they’ll appreciate being called zombies, either.

Everyone includes your mother and Los Angeles

Apr 28 JDN 2460430

What are the chances that artificial intelligence will destroy human civilization?

A bunch of experts were surveyed on that question and similar questions, and half of respondents gave a probability of 5% or more; some gave probabilities as high as 99%.

This is incredibly bizarre.

Most AI experts are people who work in AI. They are actively participating in developing this technology. And yet half of them think that the technology they are working on right now has at least a 5% chance of destroying human civilization!?

It feels to me like they honestly don’t understand what they’re saying. They can’t really grasp at an intuitive level just what a 5% or 10% chance of global annihilation means—let alone a 99% chance.

If something has a 5% chance of killing everyone, we should consider that at least as bad as something that is guaranteed to kill 5% of people.

Probably worse, in fact, because you can recover from losing 5% of the population (we have, several times throughout history). But you cannot recover from losing everyone. So really, it’s like losing 5% of all future people who will ever live—which could be a very large number indeed.

But let’s be a little conservative here, and just count people who already, currently exist, and use 5% of that number.

5% of 8 billion people is 400 million people.

So anyone who is working on AI and also says that AI has a 5% chance of causing human extinction is basically saying: “In expectation, I’m supporting 20 Holocausts.”
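As a quick sanity check on that arithmetic, here is a back-of-the-envelope sketch (my own illustration; the population figure and probabilities are the ones quoted in this post):

    # Treat a probability p of killing everyone as equivalent to killing fraction p of people.
    WORLD_POPULATION = 8_000_000_000  # roughly 8 billion, as above

    def expected_deaths(p_extinction):
        """Expected number of deaths, counting only people alive today."""
        return p_extinction * WORLD_POPULATION

    for p in (0.05, 0.01, 0.001):
        print(f"{p:.1%} chance of extinction ~ {expected_deaths(p):,.0f} expected deaths")
    # 5.0% chance of extinction ~ 400,000,000 expected deaths
    # 1.0% chance of extinction ~ 80,000,000 expected deaths
    # 0.1% chance of extinction ~ 8,000,000 expected deaths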

If you really think the odds are that high, why aren’t you demanding that any work on AI be tried as a crime against humanity? Why aren’t you out there throwing Molotov cocktails at data centers?

(To be fair, Eliezer Yudkowsky is actually calling for a global ban on AI that would be enforced by military action. That’s the kind of thing you should be doing if indeed you believe the odds are that high. But most AI doomsayers don’t call for such drastic measures, and many of them even continue working in AI as if nothing is wrong.)

I think this must be scope neglect, or something even worse.

If you thought a drug had a 99% chance of killing your mother, you would never let her take the drug, and you would probably sue the company for making it.

If you thought a technology had a 99% chance of destroying Los Angeles, you would never even consider working on that technology, and you would want that technology immediately and permanently banned.

So I would like to remind anyone who says they believe the danger is this great and yet continues working in the industry:

Everyone includes your mother and Los Angeles.

If AI destroys human civilization, that means AI destroys Los Angeles. However shocked and horrified you would be if a nuclear weapon were detonated in the middle of Hollywood, you should be at least that shocked and horrified by anyone working on advancing AI, if indeed you truly believe that there is at least a 5% chance of AI destroying human civilization.

But people just don’t seem to think this way. Their minds seem to take on a totally different attitude toward “everyone” than they would take toward any particular person or even any particular city. The notion of total human annihilation is just so remote, so abstract, they can’t even be afraid of it the way they are afraid of losing their loved ones.

This despite the fact that everyone includes all your loved ones.

If a drug had a 5% chance of killing your mother, you might let her take it—but only if that drug was the best way to treat some very serious disease. Chemotherapy can be about that risky—but you don’t go on chemo unless you have cancer.

If a technology had a 5% chance of destroying Los Angeles, I’m honestly having trouble thinking of scenarios in which we would be willing to take that risk. But the closest I can come to it is the Manhattan Project. If you’re currently fighting a global war against fascist imperialists, and they are also working on making an atomic bomb, then being the first to make an atomic bomb may in fact be the best option, even if you know that it carries a serious risk of utter catastrophe.

In any case, I think one thing is clear: You don’t take that kind of serious risk unless there is some very large benefit. You don’t take chemotherapy on a whim. You don’t invent atomic bombs just out of curiosity.

Where’s the huge benefit of AI that would justify taking such a huge risk?

Some forms of automation are clearly beneficial, but so far AI per se seems to have largely made our society worse. ChatGPT lies to us. Robocalls inundate us. Deepfakes endanger journalism. What’s the upside here? It makes a ton of money for tech companies, I guess?

Now, fortunately, I think 5% is too high an estimate.

(Scientific American agrees.)

My own estimate is that, over the next two centuries, there is about a 1% chance that AI destroys human civilization, and only a 0.1% chance that it results in human extinction.

This is still really high.

People seem to have trouble with that too.

“Oh, there’s a 99.9% chance we won’t all die; everything is fine, then?” No. There are plenty of other scenarios that would also be very bad, and a total extinction scenario is so terrible that even a 0.1% chance is not something we can simply ignore.

0.1% of people is still 8 million people.

I find myself in a very odd position: On the one hand, I think the probabilities that doomsayers are giving are far too high. On the other hand, I think the actions that are being taken—even by those same doomsayers—are far too small.

Most of them don’t seem to consider a 5% chance to be worthy of drastic action, while I consider a 0.1% chance to be well worthy of it. I would support a complete ban on all AI research immediately, just from that 0.1%.

The only research we should be doing that is in any way related to AI should involve how to make AI safer—absolutely no one should be trying to make it more powerful or apply it to make money. (Yet in reality, almost the opposite is the case.)

Because 8 million people is still a lot of people.

Is it fair to treat a 0.1% chance of killing everyone as equivalent to killing 0.1% of people?

Well, first of all, we have to consider the uncertainty. The difference between a 0.05% chance and a 0.15% chance is millions of people, but there’s probably no way we can actually measure it that precisely.

But it seems to me that something expected to kill between 4 million and 12 million people would still generally be considered very bad.

More importantly, there’s also a chance that AI will save people, or have similarly large benefits. We need to factor that in as well. Something that will kill 4-12 million people but also save 15-30 million people is probably still worth doing (but we should also be trying to find ways to minimize the harm and maximize the benefit).
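Spelled out numerically (my own sketch, using the 0.05% to 0.15% band and the hypothetical 15 to 30 million lives-saved range just mentioned):

    WORLD_POPULATION = 8_000_000_000

    # Expected harm across the uncertainty band on extinction risk (0.05% to 0.15%).
    harm_low, harm_high = 0.0005 * WORLD_POPULATION, 0.0015 * WORLD_POPULATION
    print(f"Expected harm: {harm_low:,.0f} to {harm_high:,.0f} deaths")  # 4,000,000 to 12,000,000

    # Hypothetical offsetting benefit, using the 15 to 30 million lives-saved range above.
    saved_low, saved_high = 15_000_000, 30_000_000
    print(f"Net effect: {saved_low - harm_high:,.0f} to {saved_high - harm_low:,.0f} lives saved")
    # Worst case: 3,000,000 net lives saved; best case: 26,000,000

On those numbers the gamble comes out ahead, but both ranges are themselves deeply uncertain, which is exactly the problem.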

The biggest problem is that we are deeply uncertain about both the upsides and the downsides. There are a vast number of possible outcomes from inventing AI. Many of those outcomes are relatively mundane; some are moderately good, others are moderately bad. But the moral question seems to be dominated by the big outcomes: With some small but non-negligible probability, AI could lead to either a utopian future or an utter disaster.

The way we are leaping directly into applying AI without even being anywhere close to understanding AI seems to me especially likely to lean toward disaster. No other technology has ever become so immediately widespread while also being so poorly understood.

So far, I’ve yet to see any convincing arguments that the benefits of AI are anywhere near large enough to justify this kind of existential risk. In the near term, AI really only promises economic disruption that will largely be harmful. Maybe one day AI could lead us into a glorious utopia of automated luxury communism, but we really have no way of knowing that will happen—and it seems pretty clear that Google is not going to do that.

Artificial intelligence technology is moving too fast. Even if it doesn’t become powerful enough to threaten our survival for another 50 years (which I suspect it won’t), if we continue on our current path of “make money now, ask questions never”, it’s still not clear that we would actually understand it well enough to protect ourselves by then—and in the meantime it is already causing us significant harm for little apparent benefit.

Why are we even doing this? Why does halting AI research feel like stopping a freight train?

I dare say it’s because we have handed over so much power to corporations.

The paperclippers are already here.

Surviving in an ad-supported world

Apr 21 JDN 2460423

Advertising is as old as money—perhaps even older. Scams have likewise been a part of human society since time immemorial.

But I think it’s fair to say that recently, since the dawn of the Internet at least, both advertising and scams have been proliferating, far beyond what they used to be.

We live in an ad-supported world.

News sites are full of ads. Search engines are full of ads. Even shopping sites are full of ads now; we literally came here planning to buy something, but that wasn’t good enough for you; you want us to also buy something else. Most of the ads are for legitimate products; but some are for scams. (And then there’s multi-level marketing, which is somewhere in between: technically not a scam.)

We’re so accustomed to getting spam emails, phone calls, and texts full of ads and scams that we just accept it as a part of our lives. But these are not something people had to live with even 50 years ago. This is a new, fresh Hell we have wrought for ourselves as a civilization.

AI promises to make this problem even worse. AI still isn’t very good at doing anything particularly useful; you can’t actually trust it to drive a truck or diagnose an X-ray. (There are people working on this sort of thing, but they haven’t yet succeeded.) But it’s already pretty good at making spam texts and phone calls. It’s already pretty good at catfishing people. AI isn’t smart enough to really help us, but it is smart enough to hurt us, especially those of us who are most vulnerable.

I think that this causes a great deal more damage to our society than is commonly understood.

It’s not just that ads are annoying (though they are), or that they undermine our attention span (though they do), or that they exploit the vulnerable (though they do).

I believe that an ad-supported world is a world where trust goes to die.

When the vast majority of your interactions with other people involve those people trying to get your money, some of them by outright fraud—but none of them really honestly—you have no choice but to ratchet down your sense of trust. It begins to feel as if financial transactions are the only form of interaction there is in the world.

But in fact most people can be trusted, and should be trusted—you are missing out on a great deal of what makes life worth living if you do not know how to trust.

The question is whom you trust. You should trust people you know, people you interact with personally and directly. Even strangers are more trustworthy than any corporation will ever be. And never are corporations more dishonest than when they are sending out ads.


The more the world fills with ads, the less room it has for trust.

Is there any way to stem this tide? Or are we simply doomed to live in the cyberpunk dystopia our forebears warned about, where everything is for sale and all available real estate is used for advertising?

Ads and scams only exist because they are profitable; so our goal should be to make them no longer profitable.

Here is one very simple piece of financial advice that will help protect you. Indeed, I believe it can protect so well, that if everyone followed it consistently, we would stem the tide.

Only give money to people you have sought out yourself.

Only buy things you already knew you wanted.

Yes, of course you must buy things. We live in a capitalist society. You can’t survive without buying things. But this is how buying things should work:

You check your fridge and see you are out of milk. So you put “milk” on your grocery list, you go to the grocery store, you find some milk that looks good, and you buy it.

Or, your car is getting old and expensive to maintain, and you decide you need a new one. You run the numbers on your income and expenses, and come up with a budget for a new car. You go to the dealership, they help you pick out a car that fits your needs and your budget, and you buy it.

Your tennis shoes are getting frayed, and it’s time to replace them. You go online and search for “tennis shoes”, looking up sizes and styles until you find a pair that suits you. You order that pair.

You should be the one to decide that you need a thing, and then you should go out looking for it.

It’s okay to get help searching, or even listen to some sales pitches, as long as the whole thing was your idea from the start.

But if someone calls you, texts you, or emails you, asking for your money for something?

Don’t give them a cent.

Just don’t. Don’t do it. Even if it sounds like a good product. Even if it is a good product. If the product they are selling sounds so great that you decide you actually want to buy it, go look for it on your own. Shop around. If you can, go out of your way to buy it from a competing company.

Your attention is valuable. Don’t reward them for stealing it.

This applies to donations, too. Donation asks aren’t as awful as ads, let alone scams, but they are pretty obnoxious, and they only send those things out because people respond to them. If we all stopped responding, they’d stop sending.

Yes, you absolutely should give money to charity. But you should seek out the charities to donate to. You should use trusted sources (like GiveWell and Charity Navigator) to vet them for their reliability, transparency, and cost-effectiveness.

If you just receive junk mail asking you for donations, feel free to take out any little gifts they gave you (it’s often return address labels, for some reason), and then recycle the rest.

Don’t give to the ones who ask for it. Give to the ones who will use it the best.

Reward the charities that do good, not the charities that advertise well.

This is the rule to follow:

If someone contacts you—if they initiate the contact—refuse to give them any money. Ever.

Does this rule seem too strict? It is quite strict, in fact. It requires you to pass up many seemingly-appealing opportunities, and the more ads there are, the more opportunities you’ll need to pass up.

There may even be a few exceptions; no great harm befalls us if we buy Girl Scout cookies or donate to the ASPCA because the former knocked on our doors and the latter showed us TV ads. (Then again, you could just donate to feminist and animal rights charities without any ads or sales pitches.)

But in general, we live in a society that is absolutely inundated with people accosting us and trying to take our money, and they’re only ever going to stop trying to get our money if we stop giving it to them. They will not stop it out of the goodness of their hearts—no, not even the charities, who at least do have some goodness in their hearts. (And certainly not the scammers, who have none.)

They will only stop if it stops working.

So we need to make it stop working. We need to draw this line.

Trust the people around you, who have earned it. Do not trust anyone who seeks you out asking for money.

Telemarketing calls? Hang up. Spam emails? Delete. Junk mail? Recycle. TV ads? Mute and ignore.

And then, perhaps, future generations won’t have to live in an ad-supported world.

What does “can” mean, anyway?

Apr 7 JDN 2460409

I don’t remember where, but I believe I once heard a “philosopher” defined as someone who asks the sort of question everyone knows the answer to, and doesn’t know the answer.

By that definition, I’m feeling very much a philosopher today.

The word “can” is one of the most common in the English language; the Oxford English Corpus lists it as the 53rd most common word. Similar words are found in essentially every language, and nearly always rank among their most common.

Yet when I try to precisely define what we mean by this word, it’s surprisingly hard.

Why, you might even say I can’t.

The very concept of “capability” is surprisingly slippery—just what is someone capable of?

My goal in this post is basically to make you as confused about the concept as I am.

I think that experiencing disabilities that include executive dysfunction has made me especially aware of just how complicated the concept of ability really is. This also relates back to my previous post questioning the idea of “doing your best”.

Here are some things that “can” might mean, or even sometimes seems to mean:

1. The laws of physics do not explicitly prevent it.

This seems far too broad. By this definition, you “can” do almost anything—as long as you don’t make free energy, reduce entropy, or exceed the speed of light.

2. The task is something that other human beings have performed in the past.

This is surely a lot better; it doesn’t say that I “can” fly to Mars or turn into a tree. But by this definition, I “can” sprint as fast as Usain Bolt and swim as long as Michael Phelps—which certainly doesn’t seem right. Indeed, not only would I say I can’t do that; I’d say I couldn’t do that, no matter how hard I tried.

3. The task is something that human beings in similar physical condition to my own have performed in the past.

Okay, we’re getting warmer. But just what do we mean, “similar condition”? No one else in the world is in exactly the same condition I am.

And even if those other people are in the same physical condition, their mental condition could be radically different. Maybe they’re smarter than I am, or more creative—or maybe they just speak Swahili. It doesn’t seem right to say that I can speak Swahili. Maybe I could speak Swahili, if I spent a lot of time and effort learning it. But at present, I can’t.

4. The task is something that human beings in similar physical and mental condition to my own have performed in the past.

Better still. This seems to solve the most obvious problems. It says that I can write blog posts (check), and I can’t speak Swahili (also check).

But it’s still not specific enough. For, even if we can clearly define what constitutes “people like me” (can we?), there are many different circumstances in which people like me have been in, and what they did has varied quite a bit, depending on those circumstances.

People in extreme emergencies have performed astonishing feats of strength, such as lifting cars. Maybe I could do something like that, should the circumstance arise? But it certainly doesn’t seem right to say that I can lift cars.

5. The task is something that human beings in similar physical and mental condition to my own have performed in the past, in circumstances similar to my own.

That solves the above problems (provided we can sufficiently define “similar” for both people and circumstances). But it actually raises a different problem: If the circumstances were so similar, shouldn’t their behavior and mine be the same?

By that metric, it seems like the only way to know if I can do something is to actually do it. If I haven’t actually done it—in that mental state, in those circumstances—then I can’t really say I could have done it. At that point, “can” becomes a really funny way of saying “do”.

So it seems we may have narrowed down a little too much here.

And what about the idea that I could speak Swahili, if I studied hard? That seems to be something broader; maybe it’s this:

6. The task is something that human beings who are in physical or mental condition that is attainable from my own condition have performed in the past.

But now we have to ask, what do we mean by “attainable”? We come right back to asking about capability again: What kind of effort can I make in order to learn Swahili, train as a pilot, or learn to SCUBA dive?

Maybe I could lift a car, if I had to do it to save my life or the life of a loved one. But without the adrenaline rush of such an emergency, I might be completely unable to do it, and even with that adrenaline rush, I’m sure the task would injure me severely. Thus, I don’t think it’s fair to say I can lift cars.

So how much can I lift? I have found that I can, as part of a normal workout, bench-press about 80 pounds. But I don’t think that is the limit of what I can lift; it’s more like what I can lift safely and comfortably for multiple sets of multiple reps without causing myself undue pain. For a single rep, I could probably do considerably more—though how much more is quite hard to say. 100 pounds? 120? (There are online calculators that supposedly will convert your multi-rep weight to a single-rep max, but for some reason they don’t seem to be able to account for multiple sets. If I do 4 sets of 10 reps, is that 10 reps, or 40 reps? This is the difference between my one-rep max being 106 and it being 186. The former seems closer to the truth, but is probably still too low.)
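For the curious: those 106 and 186 figures are consistent with the common Epley estimate, 1RM ≈ weight × (1 + reps/30). Here is a quick sketch of that calculation (my own illustration, assuming that is roughly the formula such calculators use):

    def epley_one_rep_max(weight, reps):
        """Estimate a one-rep max from a multi-rep set using the Epley formula."""
        return weight * (1 + reps / 30)

    # 80 pounds for 10 reps, counted as one set versus as 4 x 10 = 40 total reps.
    print(round(epley_one_rep_max(80, 10)))  # ~107, close to the 106 quoted above
    print(round(epley_one_rep_max(80, 40)))  # ~187, close to the 186 quoted above

Treating all 40 reps as one giant set is, of course, exactly the ambiguity complained about above.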

If I absolutely had to—say, something that heavy has fallen on me and lifting it is the only way to escape—could I bench-press my own weight of about 215 pounds? I think so. But I’m sure it would hurt like hell, and I’d probably be sore for days afterward.

Now, consider tasks that require figuring something out, something I don’t currently know but could conceivably learn or figure out. It doesn’t seem right to say that I can solve the P/NP problem or the Riemann Hypothesis. But it does seem right to say that I can at least work on those problems—I know enough about them that I can at least get started, if perhaps not make much real progress. Whereas most people, while they could theoretically read enough books about mathematics to one day know enough that they could do this, are not currently in a state where they could even begin to do that.

Here’s another question for you to ponder:

Can I write a bestselling novel?

Maybe that’s no fair. Making it a bestseller depends on all sorts of features of the market that aren’t entirely under my control. So let’s make it easier:

Can I write a novel?

I have written novels. So at first glance it seems obvious that I can write a novel.

But there are many days, especially lately, on which I procrastinate my writing and struggle to get any writing done. On such a day, can I write a novel? If someone held a gun to my head and demanded that I write the novel, could I get it done?

I honestly don’t know.

Maybe there’s some amount of pressure that would in fact compel me, even on the days of my very worst depression, to write the novel. Or maybe if you put that gun to my head, I’d just die. I don’t know.

But I do know one thing for sure: It would hurt.

Writing a novel on my worst days would require enormous effort and psychological pain—and honestly, I think it wouldn’t feel all that different from trying to lift 200 pounds.

Now we are coming to the real heart of the matter:

How much cost am I expected to pay, for it to still count as within my ability?

There are many things that I can do easily, that don’t really require much effort. But this varies too.

On most days, brushing my teeth is something I just can do—I remember to do it, I choose to do it, it happens; I don’t feel like I have exerted a great deal of effort or paid any substantial cost.

But there are days when even brushing my teeth is hard. Generally I do make it happen, so evidently I can do it—but it is no longer free and effortless the way it usually is.

There are other things which require effort, but are generally feasible, such as working out. Working out isn’t easy (essentially by design), but if I put in the effort, I can make it happen.

But again, some days are much harder than others.

Then there are things which require so much effort they feel impossible, even if they theoretically aren’t.

Right now, that’s where I’m at with trying to submit my work to journals or publishers. Each individual action is certainly something I should be physically able to take. I know the process of what to do—I’m not trying to solve the Riemann Hypothesis here. I have even done it before.

But right now, today, I don’t feel like I can do it. There may be some sense in which I “can”, but it doesn’t feel relevant.

And I felt the same way yesterday, and the day before, and pretty much every day for at least the past year.

I’m not even sure if there is an amount of pressure that could compel me to do it—e.g. if I had a gun to my head. Maybe there is. But I honestly don’t know for sure—and if it did work, once again, it would definitely hurt.

Others in the disability community have a way of describing this experience, which probably sounds strange if you haven’t heard it before:

“Do you have enough spoons?”

(For D&D fans, I’ve also heard others substitute “spell slots”.)

The idea is this: Suppose you are endowed with a certain number of spoons, which you can consume as a resource in order to achieve various tasks. The only way to replenish your spoons is rest.

Some tasks are cheap, requiring only 1 or 2 spoons. Others may be very costly, requiring 10, or 20, or perhaps even 50 or 100 spoons.

But the number of spoons you start with each morning may not always be the same. If you start with 200, then a task that requires 2 will seem trivial. But if you only start with 5, even those 2 will feel like a lot.

As you deplete your available spoons, you will find you need to ration which tasks you are able to complete; thus, on days when you wake up with fewer spoons, things that you would ordinarily do may end up not getting done.

I think submitting to a research journal is a 100-spoon task, and I simply haven’t woken up with more than 50 spoons in any given day within the last six months.
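If it helps to make the metaphor concrete, here is a toy sketch of the idea; the spoon budgets and task costs are invented numbers, purely for illustration:

```python
# A toy model of the spoon metaphor; the budgets and costs are made up
# purely for illustration, not measurements of anything real.
def plan_day(spoons: int, tasks: list[tuple[str, int]]) -> list[str]:
    """Do tasks in priority order until the spoons run out."""
    done = []
    for name, cost in tasks:
        if cost <= spoons:
            spoons -= cost
            done.append(name)
    return done

tasks = [("brush teeth", 2), ("work out", 20), ("write", 30), ("submit to journal", 100)]

print(plan_day(200, tasks))  # a good day: everything gets done
print(plan_day(50, tasks))   # a bad day: the 100-spoon task never happens
```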

I don’t usually hear it formulated this way, but for me, I think the cost varies too.

I think that on a good day, brushing my teeth is a 0-spoon task (a “cantrip”, if you will); I could do it as many times as necessary without expending any detectable effort. But on a very bad day, it will cost me a couple of spoons just to do that. I’ll still get it done, but I’ll feel drained by it. I couldn’t keep doing it indefinitely. It will prevent me from being able to do something else, later in the day.

Writing is something that seems to vary a great deal in its spoon cost. On a really good day when I’m feeling especially inspired, I might get 5000 words written and feel like I’ve only spent 20 spoons; while on a really bad day, that same 20 spoons won’t even get me a single paragraph.

It may occur to you to ask:

What is the actual resource being depleted here?

Just what are the spoons, anyway?

That, I really can’t say.

I don’t think it’s as simple as brain glucose, though there were a few studies that seemed to support such a view. If it were, drinking something sugary ought to fix it, and generally that doesn’t work (and if you do that too often, it’s bad for your health). Even weirder is that, for some people, just tasting sugar seems to help with self-control. My own guess is that if your particular problem is hypoglycemia, drinking sugar works, and otherwise, not so much.

There could literally be some sort of neurotransmitter reserve that gets depleted, or receptors that get overloaded; but I suspect it’s not that simple either. These are the models we use because they’re the best we have—but the brain is in reality far more complicated than any of our models.

I’ve heard people say “I ran out of serotonin today”, but I’m fairly sure they didn’t actually get their cerebrospinal fluid tested first. (And since most of your serotonin is actually in your gut, if they really ran out they should be having severe gastrointestinal symptoms.) (I had my cerebrospinal fluid tested once; most agonizing pain of my life. To say that I don’t recommend the experience is such an understatement, it’s rather like saying Hell sounds like a bad vacation spot. Indeed, if I believed in Hell, I would have to imagine it feels like getting a spinal tap every day for eternity.)

So for now, the best I can say is, I really don’t know what spoons are. And I still don’t entirely know what “can” means. But at least maybe now you’re as confused as I am.

Bundling the stakes to recalibrate ourselves

Mar 31 JDN 2460402

In a previous post I reflected on how our minds evolved for an environment of immediate return: immediate consequences, with a high chance of success and life-or-death stakes. But the world we live in is one of delayed return: delayed consequences, with a low chance of success and minimal stakes.

We evolved for a world where you need to either jump that ravine right now or you’ll die; but we live in a world where you’ll submit a hundred job applications before finally getting a good offer.

Thus, our anxiety system is miscalibrated for our modern world, and this miscalibration gives us deep, chronic, pathological anxiety instead of the brief, intense anxiety that would protect us from harm.

I had an idea for how we might try to jury-rig this system and recalibrate ourselves:

Bundle the stakes.

Consider job applications.

The obvious way to think about it is to consider each application, and decide whether it’s worth the effort.

Any particular job application in today’s market probably costs you 30 minutes, but you won’t hear back for 2 weeks, and you have maybe a 2% chance of success. But if you fail, all you lost was that 30 minutes. This is the exact opposite of what our brains evolved to handle.

So now suppose you think of it in terms of sending 100 job applications.

That will cost you 100 × 30 minutes = 3,000 minutes, or 50 hours. You still won’t hear back for weeks, but you’ve spent weeks, so that won’t feel as strange. And your chances of getting at least one offer after 100 applications are something like 1 − (0.98)^100 ≈ 87%.
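(Here is that arithmetic spelled out, under the admittedly strong assumption that each application is an independent 2% shot:)

```python
# Bundling 100 applications, each taking 30 minutes with an assumed
# independent 2% chance of success.
n = 100
minutes_each = 30
p_each = 0.02

total_hours = n * minutes_each / 60
p_at_least_one = 1 - (1 - p_each) ** n

print(total_hours)               # 50.0 hours
print(round(p_at_least_one, 2))  # 0.87
```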

Even losing 50 hours over a few weeks is not the disaster that falling down a ravine is. But it still feels a lot more reasonable to be anxious about that than to be anxious about losing 30 minutes.

More importantly, we have radically changed the chances of success.

Each individual application will almost certainly fail, but all 100 together will probably succeed.

If we were optimally rational, these two methods would lead to the same outcomes, by a rather deep mathematical law, the linearity of expectation:
E[X_1 + X_2 + … + X_n] = n E[X]

Thus, the expected utility of doing something n times is precisely n times the expected utility of doing it once (all other things equal); and so, it doesn’t matter which way you look at it.
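Here is a quick numerical check of that claim, counting offers rather than utility so that everything stays linear; the 2% chance and the 100 applications are the same illustrative numbers as above:

```python
import random

# Monte Carlo check of linearity of expectation: the expected number of
# offers from 100 independent 2%-chance applications is 100 * 0.02 = 2,
# whether you think of them one at a time or as a single bundle.
random.seed(0)
trials = 100_000
total_offers = sum(
    sum(random.random() < 0.02 for _ in range(100))
    for _ in range(trials)
)
print(total_offers / trials)  # ≈ 2.0
```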

But of course we aren’t perfectly rational. We don’t actually respond to the expected utility. It’s still not entirely clear how we do assess probability in our minds (prospect theory seems to be onto something, but it’s computationally harder than rational probability, which means it makes absolutely no sense to evolve it).

If instead we are trying to match up our decisions with a much simpler heuristic that evolved for things like jumping over ravines, our representation of probability may be very simple indeed, something like “definitely”, “probably”, “maybe”, “probably not”, “definitely not”. (This is essentially my categorical prospect theory, which, like the stochastic overload model, is a half-baked theory that I haven’t published and at this point probably never will.)

2% chance of success is solidly “probably not” (or maybe something even stronger, like “almost definitely not”). Then, outcomes that are in that category are presumably weighted pretty low, because they generally don’t happen. Unless they are really good or really bad, it’s probably safest to ignore them—and in this case, they are neither.

But 87% chance of success is a clear “probably”; and outcomes in that category deserve our attention, even if their stakes aren’t especially high. And in fact, by bundling them, we have even made the stakes a bit higher—likely making the outcome a bit more salient.
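To make the coarse-category idea concrete, here is a hypothetical sketch; the cutoff probabilities are invented for illustration and are not taken from any worked-out version of the theory:

```python
# A hypothetical sketch of the coarse probability categories described
# above; the cutoffs are invented for illustration only.
def coarse_category(p: float) -> str:
    if p < 0.01:
        return "definitely not"
    elif p < 0.35:
        return "probably not"
    elif p < 0.65:
        return "maybe"
    elif p < 0.9:
        return "probably"
    else:
        return "definitely"

print(coarse_category(0.02))             # "probably not"
print(coarse_category(1 - 0.98 ** 100))  # "probably"
```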

The goal is to change “this will never work” to “this is going to work”.

For an individual application, there’s really no way to do that (without self-delusion); maybe you can make the odds a little better than 2%, but you surely can’t make them so high they deserve to go all the way up to “probably”. (At best you might manage a “maybe”, if you’ve got the right contacts or something.)

But for the whole set of 100 applications, this is in fact the correct assessment. It will probably work. And if 100 doesn’t, 150 might; if 150 doesn’t, 200 might. At no point do you need to delude yourself into over-estimating the odds, because the actual odds are in your favor.

This isn’t perfect, though.

There’s a glaring problem with this technique that I still can’t resolve: It feels overwhelming.

Doing one job application is really not that big a deal. It accomplishes very little, but also costs very little.

Doing 100 job applications is an enormous undertaking that will take up most of your time for multiple weeks.

So if you are feeling demotivated, asking you to bundle the stakes is asking you to take on a huge, overwhelming task that surely feels utterly beyond you.

Also, when it comes to this particular example, I even managed to do 100 job applications and still get a pretty bad outcome: My only offer was Edinburgh, and I ended up being miserable there. I have reason to believe that these were exceptional circumstances (due to COVID), but it has still been hard to shake the feeling of helplessness I learned from that ordeal.

Maybe there’s some additional reframing that can help here. If so, I haven’t found it yet.

But maybe stakes bundling can help you, or someone out there, even if it can’t help me.

The Butlerian Jihad is looking better all the time

Mar 24 JDN 2460395

A review of The Age of Em by Robin Hanson

In the Dune series, the Butlerian Jihad was a holy war against artificial intelligence that resulted in a millennia-long taboo against all forms of intelligent machines. It was effectively a way to tell a story about the distant future without basically everything being about robots or cyborgs.

After reading Robin Hanson’s book, I’m starting to think that maybe we should actually do it.

Thus it is written: “Thou shalt not make a machine in the likeness of a human mind.”

Hanson says he’s trying to reserve judgment and present objective predictions without evaluation, but it becomes very clear throughout that this is the future he wants, as well as—or perhaps even instead of—the world he expects.

In many ways, it feels like he has done his very best to imagine a world of true neoclassical rational agents in perfect competition, a sort of sandbox for the toys he’s always wanted to play with. Throughout he very much takes the approach of a neoclassical economist, making heroic assumptions and then following them to their logical conclusions, without ever seriously asking whether those assumptions actually make any sense.

To his credit, Hanson does not buy into the hype that AGI will be successful any day now. He predicts that we will achieve the ability to fully emulate human brains and thus create a sort of black-box AGI that behaves very much like a human within about 100 years. Given how the Blue Brain Project has progressed (much slower than its own hype machine told us it would—and let it be noted that I predicted this from the very beginning), I think this is a fairly plausible time estimate. He refers to a mind emulated in this way as an “em”; I have mixed feelings about the term, but I suppose we did need some word for that, and it certainly has conciseness on its side.

Hanson believes that a true understanding of artificial intelligence will only come later, and the sort of AGI that can be taken apart and reprogrammed for specific goals won’t exist for at least a century after that. Both of these sober, reasonable predictions are deeply refreshing in a field that’s been full of people saying “any day now” for the last fifty years.

But Hanson’s reasonableness just about ends there.

In The Age of Em, government is exactly as strong as Hanson needs it to be. Somehow it simultaneously ensures a low crime rate among a population that doubles every few months while also having no means of preventing that population growth. Somehow it ensures that there is no labor collusion and that corporations never break the law, but without imposing any regulations that might reduce efficiency in any way.

All of this begins to make more sense when you realize that Hanson’s true goal here is to imagine a world where neoclassical economics is actually true.

He realized it didn’t work on humans, so instead of giving up the theory, he gave up the humans.

Hanson predicts that ems will casually make short-term temporary copies of themselves called “spurs”, designed to perform a particular task and then get erased. I guess maybe he would, but I for one would not so cavalierly create another person and then make their existence dedicated to doing a single job before they die. The fact that I created this person, and that they are very much like me, seems like a reason to care more about their well-being, not less! You’re asking me to enslave and murder my own child. (Honestly, the fact that Robin Hanson thinks ems will do this all the time says more about Robin Hanson than anything else.) Any remotely sane society of ems would ban the deletion of another em under any but the most extreme circumstances, and indeed treat it as tantamount to murder.

Hanson predicts that we will only copy the minds of a few hundred people. This is surely true at some point—the technology will take time to develop, and we’ll have to start somewhere. But I don’t see why we’d stop there, when we could continue to copy millions or billions of people; and his choices of who would be emulated, while not wildly implausible, are utterly terrifying.

He predicts that we’d emulate genius scientists and engineers; okay, fair enough, that seems right. I doubt that the benefits of doing so will be as high as many people imagine, because scientific progress actually depends a lot more on the combined efforts of millions of scientists than on rare sparks of brilliance by lone geniuses; but those people are definitely very smart, and having more of them around could be a good thing. I can also see people wanting to do this, and thus investing in making it happen.

He also predicts that we’d emulate billionaires. Now, as a prediction, I have to admit that this is actually fairly plausible; billionaires are precisely the sort of people who are rich enough to pay to be emulated and narcissistic enough to want to. But where Hanson really goes off the deep end here is that he sees this as a good thing. He seems to honestly believe that billionaires are so rich because they are so brilliant and productive. He thinks that a million copies of Elon Musk would produce a million hectobillionaires—when in reality it would produce a million squabbling narcissists, who at best would have to split the same $200 billion between them, and might very well end up with less because they squander it.

Hanson has a long section on trying to predict the personalities of ems. Frankly this could just have been dropped entirely; it adds almost nothing to the book, and the book is much too long. But the really striking thing to me about that section is what isn’t there. He goes through a long list of studies that found weak correlations between various personality traits like extroversion or openness and wealth—mostly comparing something like the 20th percentile to the 80th percentile—and then draws sweeping conclusions about what ems will be like, under the assumption that ems are all drawn from people in the 99.99999th percentile. (Yes, upper-middle-class people are, on average, more intelligent and more conscientious than lower-middle-class people. But do we even have any particular reason to think that the personalities of people who make $150,000 are relevant to understanding the behavior of people who make $15 billion?) But he completely glosses over the very strong correlations that specifically apply to people in that very top super-rich class: They’re almost all narcissists and/or psychopaths.

Hanson predicts a world where each em is copied many, many times—millions, billions, even trillions of times, and also in which the very richest ems are capable of buying parallel processing time that lets them accelerate their own thought processes to a million times faster than a normal human. (Is that even possible? Does consciousness work like that? Who knows!?) The world that Hanson is predicting is thus one where all the normal people get outnumbered and overpowered by psychopaths.

Basically this is the most abjectly dystopian cyberpunk hellscape imaginable. And he talks about it the whole time as if it were good.

It’s like he played the game Action Potential and thought, “This sounds great! I’d love to live there!” I mean, why wouldn’t you want to owe a life-debt on your own body and have to work 120-hour weeks for a trillion-dollar corporation just to make the payments on it?

Basically, Hanson doesn’t understand how wealth is actually acquired. He is educated as an economist, yet his understanding of capitalism basically amounts to believing in magic. He thinks that competitive markets just somehow perfectly automatically allocate wealth to whoever is most productive, and thus concludes that whoever is wealthy now must just be that productive.

I can see no other way to explain his wildly implausible predictions that the em economy will double every month or two. A huge swath of the book depends upon this assumption, but he waits until halfway through the book to even try to defend it, and then does an astonishingly bad job of doing so. (Honestly, even if you buy his own arguments—which I don’t—they seem to predict that population would grow with Moore’s Law—doubling every couple of years, not every couple of months.)

Whereas Keynes predicted based on sound economic principles that economic growth would more or less proceed apace and got his answer spot-on, Hanson predicts that for mysterious, unexplained reasons economic growth will suddenly increase by two orders of magnitude—and I’m pretty sure he’s going to be wildly wrong.

Hanson also predicts that ems will be on average poorer than we are, based on some sort of perfect-competition argument that doesn’t actually seem to mesh at all with his predictions of spectacularly rapid economic and technological growth. I think the best way to make sense of this is to assume that it means the trend toward insecure affluence will continue: Ems will have an objectively high standard of living in terms of what they own, what games they play, where they travel, and what they eat and drink (in simulation), but they will constantly be struggling to keep up with the rent on their homes—or even their own bodies. This is a world where (the very finest simulation of) Dom Perignon is $7 a bottle and wages are $980 an hour—but monthly rent is $284,000.

Early in the book Hanson argues that this life of poverty and scarcity will lead to more conservative values, on the grounds that people who are poorer now seem to be more conservative, and this has something to do with farmers versus foragers. Hanson’s explanation of all this is baffling; I will quote it at length, just so it’s clear I’m not misrepresenting it:

The other main (and independent) axis of value variation ranges between poor and rich societies. Poor societies place more value on conformity, security, and traditional values such as marriage, heterosexuality, religion, patriotism, hard work, and trust in authority. In contrast, rich societies place more value on individualism, self-direction, tolerance, pleasure, nature, leisure, and trust. When the values of individuals within a society vary on the same axis, we call this a left/liberal (rich) versus right/conservative (poor) axis.

Foragers tend to have values more like those of rich/liberal people today, while subsistence farmers tend to have values more like those of poor/conservative people today. As industry has made us richer, we have on average moved from conservative/farmer values to liberal/forager values. This value movement can make sense if cultural evolution used the social pressures farmers faced, such as conformity and religion, to induce humans, who evolved to find forager behaviors natural, to instead act like farmers. As we become rich, we don’t as strongly fear the threats behind these social pressures. This connection may result in part from disease; rich people are healthier, and healthier societies fear less.

The alternate theory that we have instead learned that rich forager values are more true predicts that values should have followed a random walk over time, and be mostly common across space. It also predicts the variance of value changes tracking the rate at which relevant information appears. But in fact industrial-era value changes have tracked the wealth of each society in much more steady and consistent fashion. And on this theory, why did foragers ever acquire farmer values?

[…]

In the scenario described in this book, many strange-to-forager behaviors are required, and median per-person (i.e. per-em) incomes return to near-subsistence levels. This suggests that the em era may reverse the recent forager-like trend toward more liberality; ems may have more farmer-like values.

The Age of Em, p. 26-27

There’s a lot to unpack here, but maybe it’s better to burn the whole suitcase.

First of all, it’s not entirely clear that this is really a single axis of variation, that foragers and farmers differ from each other in the same way as liberals and conservatives. There’s some truth to that at least—both foragers and liberals tend to be more generous, both farmers and conservatives tend to enforce stricter gender norms. But there are also clear ways that liberal values radically deviate from forager values: Forager societies are extremely xenophobic, and typically very hostile to innovation, inequality, or any attempts at self-aggrandizement (a phenomenon called “fierce egalitarianism”). San Francisco epitomizes rich, liberal values, but it would be utterly alien and probably regarded as evil by anyone from the Yanomamo.

Second, there is absolutely no reason to predict any kind of random walk. That’s just nonsense. Would you predict that scientific knowledge is a random walk, with each new era’s knowledge just a random deviation from the last’s? Maybe next century we’ll return to geocentrism, or phrenology will be back in vogue? On the theory that liberal values (or at least some liberal values) are objectively correct, we would expect them to advance as knowledge does: improving over time, and improving faster in places that have better institutions for research, education, and free expression. And indeed, this is precisely the pattern we have observed. (Those places are also richer, but that isn’t terribly surprising either!)

Third, while poorer regions are indeed more conservative, poorer people within a region actually tend to be more liberal. Nigeria is poorer and more conservative than Norway, and Mississippi is poorer and more conservative than Massachusetts. But higher-income households in the United States are more likely to vote Republican. I think this is particularly true of people living under insecure affluence: We see the abundance of wealth around us, and don’t understand why we can’t learn to share it better. We’re tired of fighting over scraps while the billionaires claim more and more. Millennials and Zoomers absolutely epitomize insecure affluence, and we also absolutely epitomize liberalism. So, if indeed ems live a life of insecure affluence, we should expect them to be like Zoomers: “Trans liberation now!” and “Eat the rich!” (Or should I say, “Delete the rich!”)

And really, doesn’t that make more sense? Isn’t that the trend our society has been on, for at least the last century? We’ve been moving toward more and more acceptance of women and minorities, more and more deviation from norms, more and more concern for individual rights and autonomy, more and more resistance to authority and inequality.

The funny thing is, that world sounds a lot better than the one Hanson is predicting.

A world of left-wing ems would probably run things a lot better than Hanson imagines: Instead of copying the same hundred psychopaths over and over until we fill the planet, have no room for anything else, and all struggle to make enough money just to stay alive, we could moderate our population to a more sustainable level, preserve diversity and individuality, and work toward living in greater harmony with each other and the natural world. We could take this economic and technological abundance and share it and enjoy it, instead of killing ourselves and each other to make more of it for no apparent reason.

The one good argument Hanson makes here is expressed in a single sentence: “And on this theory, why did foragers ever acquire farmer values?” That actually is a good question; why did we give up on leisure and egalitarianism when we transitioned from foraging to agriculture?

I think scarcity probably is relevant here: As food became scarcer, maybe because of climate change, people were forced into an agricultural lifestyle just to have enough to eat. Early agricultural societies were also typically authoritarian and violent. Under those conditions, people couldn’t be so generous and open-minded; they were surrounded by threats and on the verge of starvation.

I guess if Hanson is right that the em world is also one of poverty and insecurity, we might go back to those sort of values, borne of desperation. But I don’t see any reason to think we’d give up all of our liberal values. I would predict that ems will still be feminist, for instance; in fact, Hanson himself admits that since VR avatars would let us change gender presentation at will, gender would almost certainly become more fluid in a world of ems. Far from valuing heterosexuality more highly (as conservatives do, a “farmer value” according to Hanson), I suspect that ems will have no further use for that construct, because reproduction will be done by manufacturing, not sex, and it’ll be so easy to swap your body into a different one that hardly anyone will even keep the same gender their whole life. They’ll think it’s quaint that we used to identify so strongly with our own animal sexual dimorphism.

But maybe it is true that the scarcity induced by a hyper-competitive em world would make people more selfish, less generous, less trusting, more obsessed with work. Then let’s not do that! We don’t have to build that world! This isn’t a foregone conclusion!

There are many other paths yet available to us.

Indeed, perhaps the simplest would be to just ban artificial intelligence, at least until we can get a better handle on what we’re doing—and perhaps until we can institute the kind of radical economic changes necessary to wrest control of the world away from the handful of psychopaths currently trying their best to run it into the ground.

I admit, it would kind of suck to not get any of the benefits of AI, like self-driving cars, safer airplanes, faster medical research, more efficient industry, and better video games. It would especially suck if we did go full-on Butlerian Jihad and ban anything more complicated than a pocket calculator. (Our lifestyle might have to go back to what it was in—gasp! The 1950s!)

But I don’t think it would suck nearly as much as the world Robin Hanson thinks is in store for us if we continue on our current path.

So I certainly hope he’s wrong about all this.

Fortunately, I think he probably is.

How I feel is how things are

Mar 17 JDN 2460388

One of the most difficult things in life to learn is how to treat your own feelings and perceptions as feelings and perceptions—rather than simply as the way the world is.

A great many errors people make can be traced to this.

When we disagree with someone (whether it is as trivial as pineapple on pizza or as important as international law), we feel like they must be speaking in bad faith, they must be lying—because, to us, they are denying the way the world is. If the subject is important enough, we may become convinced that they are evil—for only someone truly evil could deny such important truths. (Ultimately, even holy wars may come from this perception.)


When we are overconfident, we not only can’t see that we are; we can scarcely even consider that it could be true. Because we don’t simply feel confident; we are sure we will succeed. And thus if we do fail, as we often do, the result is devastating; it feels as if the world itself has changed in order to make our wishes not come true.

Conversely, when we succumb to Impostor Syndrome, we feel inadequate, and so become convinced that we are inadequate, and thus that anyone who says they believe we are competent must either be lying or else somehow deceived. And then we fear to tell anyone, because we know that our jobs and our status depend upon other people seeing us as competent—and we are sure that if they knew the truth, they’d no longer see us that way.

When people see their beliefs as reality, they don’t even bother to check whether their beliefs are accurate.

Why would you need to check whether the way things are is the way things are?

This is how common misconceptions persist—the information needed to refute them is widely available, but people simply don’t realize they need to go looking for that information.

For lots of things, misconceptions aren’t very consequential. But some common misconceptions do have large consequences.

For instance, most Americans think that crime is increasing and worse now than it was 30 or 50 years ago. (I tested this on my mother this morning; she thought so too.) It is in fact much, much better—violent crimes are about half as common in the US today as they were in the 1970s. Republicans are more likely to get this wrong than Democrats—but an awful lot of Democrats still get it wrong.

It’s not hard to see how that kind of misconception could drive voters into supporting “tough on crime” candidates who will enact needlessly harsh punishments and waste money on excessive police and incarceration. Indeed, when you look at our world-leading spending on police and incarceration (highest in absolute terms, third-highest as a portion of GDP), it’s pretty clear this is exactly what’s happening.

And it would be so easy—just look it up, right here, or here, or here—to correct that misconception. But people don’t even think to bother; they just know that their perception must be the truth. It never even occurs to them that they could be wrong, and so they don’t even bother to look.

This is not because people are stupid or lazy. (I mean, compared to what?) It’s because perceptions feel like the truth, and it’s shockingly difficult to see them as anything other than the truth.

It takes a very dedicated effort, and no small amount of training, to learn to see your own perceptions as how you see things rather than simply how things are.

I think part of what makes this so difficult is the existential terror that results when you realize that anything you believe—even anything you perceive—could potentially be wrong. Basically the entire field of epistemology is dedicated to understanding what we can and can’t be certain of—and the “can’t” is a much, much bigger set than the “can”.

In a sense, you can be certain of what you feel and perceive—you can be certain that you feel and perceive them. But you can’t be certain whether those feelings and perceptions correspond to your external reality.

When you are sad, you know that you are sad. You can be certain of that. But you don’t know whether you should be sad—whether you have a reason to be sad. Often, perhaps even usually, you do. But sometimes, the sadness comes from within you, or from misperceiving the world.

Once you learn to recognize your perceptions as perceptions, you can question them, doubt them, challenge them. Training your mind to do this is an important part of mindfulness meditation, and also of cognitive behavioral therapy.

But even after years of training, it’s still shockingly hard to do this, especially in the throes of a strong emotion. Simply seeing that what you’re feeling—about yourself, or your situation, or the world—is not an entirely accurate perception can take an incredible mental effort.

We really seem to be wired to see our perceptions as reality.

This makes a certain amount of sense, in evolutionary terms. In an ancestral environment where death was around every corner, we really didn’t have time to stop and think carefully about whether our perceptions were accurate.

Two ancient hominids hear a sound that might be a tiger. One immediately perceives it as a tiger, and runs away. The other stops to think, and then begins carefully examining his surroundings, looking for more conclusive evidence to determine whether it is in fact a tiger.

The latter is going to have more accurate beliefs—right up until the point where it is a tiger and he gets eaten.

But in our world today, it may be more dangerous to hold onto false beliefs than to analyze and challenge our beliefs. We may harm ourselves—and others—more by trusting our perceptions too much than by taking the time to analyze them.

Against Self-Delusion

Mar 10 JDN 2460381

Is there a healthy amount of self-delusion? Would we be better off convincing ourselves that the world is better than it really is, in order to be happy?


A lot of people seem to think so.

I most recently encountered this attitude in Kathryn Schulz’s book Being Wrong (I liked the TED talk much better, in part because it didn’t have this), but there are plenty of other examples.

You’ll even find advocates for this attitude in the scientific literature, particularly when talking about the Lake Wobegon Effect, optimism bias, and depressive realism.

Fortunately, the psychology community seems to be turning away from this, perhaps because of mounting empirical evidence that “depressive realism” isn’t a robust effect. When I searched today, it was easier to find pop psych articles against self-delusion than in favor of it. (I strongly suspect that would not have been true about 10 years ago.)

I have come up with a very simple, powerful argument against self-delusion:

If you’re allowed to delude yourself, why not just believe everything is perfect?

If you can paint your targets after shooting, why not always paint a bullseye?

The notion seems to be that deluding yourself will help you achieve your goals. But if you’re going to delude yourself, why bother achieving goals? You could just pretend to achieve goals. You could just convince yourself that you have achieved goals. Wouldn’t that be so much easier?

The idea seems to be, for instance, to get an aspiring writer to actually finish the novel and submit it to the publisher. But why shouldn’t she simply imagine she has already done so? Why not simply believe she’s already a bestselling author?

If there’s something wrong with deluding yourself into thinking you’re a bestselling author, why isn’t that exact same thing wrong with deluding yourself into thinking you’re a better writer than you are?

Once you have opened this Pandora’s Box of lies, it’s not clear how you can ever close it again. Why shouldn’t you just stop working, stop eating, stop doing anything at all, but convince yourself that your life is wonderful and die in a state of bliss?

Granted, this is not generally what people who favor (so-called) “healthy self-delusion” advocate. But it’s difficult to see any principled reason why they should reject it. Once you give up on tying your beliefs to reality, it’s difficult to see why you shouldn’t just say that anything goes.

Why are some deviations from reality okay, but not others? Is it because they are small? Small changes in belief can still have big consequences: Believe a car is ten meters behind where it really is, and it may just run you over.

The general approach of “healthy self-delusion” seems to be that it’s all right to believe that you are smarter, prettier, healthier, wiser, and more competent than you actually are, because that will make you more confident and therefore more successful.

Well, first of all, it’s worth pointing out that some people obviously go way too far in that direction and become narcissists. But okay, let’s say we find a way to avoid that. (It’s unclear exactly how, since, again, by construction, we aren’t tying ourselves to reality.)

In practice, the people who most often get this sort of advice are people who currently lack self-confidence, who doubt their own abilities—people who suffer from Impostor Syndrome. And for people like that (and I count myself among them), a certain amount of greater self-confidence would surely be a good thing.

The idea seems to be that deluding yourself to increase your confidence will get you to face challenges and take risks you otherwise wouldn’t have, and that this will yield good outcomes.

But there’s a glaring hole in this argument:

If you have to delude yourself in order to take a risk, you shouldn’t take that risk.

Risk-taking is not an unalloyed good. Russian Roulette is certainly risky, but it’s not a good career path.

There are in fact a lot of risks you simply shouldn’t take, because they aren’t worth it.

The right risks to take are the ones for which the expected benefit outweighs the expected cost: The one with the highest expected utility. (That sounds simple, and in principle it is; but in practice, it can be extraordinarily difficult to determine.)

In other words, the right risks to take are the ones that are rational. The ones that a correct view of the world will instruct you to take.

That aspiring novelist, then, should write the book and submit it to publishers—if she’s actually any good at writing. If she’s actually terrible, then never submitting the book is the correct decision; she should spend more time honing her craft before she tries to finish it—or maybe even give up on it and do something else with her life.

What she needs, therefore, is not a confident assessment of her abilities, but an accurate one. She needs to believe that she is competent if and only if she actually is competent.

But I can also see how self-delusion can seem like good advice—and even work for some people.

If you start from an excessively negative view of yourself or the world, then giving yourself a more positive view will likely cause you to accomplish more things. If you’re constantly telling yourself that you are worthless and hopeless, then convincing yourself that you’re better than you thought is absolutely what you need to do. (Because it’s true.)

I can even see how convincing yourself that you are the best is useful—even though, by construction, most people aren’t. When you live in a hyper-competitive society like ours, where we are constantly told that winning is everything, losers are worthless, and second place is as bad as losing, it may help you get by to tell yourself that you really are the best, that you really can win. (Even weirder: “Winning isn’t everything; it’s the only thing.” Uh, that’s just… obviously false? Like, what is this even intended to mean that “Winning is everything” didn’t already say better?)

But that’s clearly not the right answer. You’re solving one problem by adding another. You shouldn’t believe you are the best; you should recognize that you don’t have to be. Second place is not as bad as losing—and neither is fifth, or tenth, or fiftieth place. The 100th-most successful author in the world still makes millions writing. The 1,000th-best musician does regular concert tours. The 10,000th-best accountant has a steady job. Even the 100,000th-best trucker can make a decent living. (Well, at least until the robots replace him.)

Honestly, it’d be great if our whole society would please get this memo. It’s no problem that “only a minority of schools play sport to a high level”—indeed, that’s literally inevitable. It’s also not clear that “60% of students read below grade level” is a problem, when “grade level” seems to be largely defined by averages. (Literacy is great and all, but what’s your objective standard for “what a sixth grader should be able to read”?)

We can’t all be the best. We can’t all even be above-average.

That’s okay. Below-average does not mean inadequate.

That’s the message we need to be sending:

You don’t have to be the best in order to succeed.

You don’t have to be perfect in order to be good enough.

You don’t even have to be above-average.

This doesn’t require believing anything that isn’t true. It doesn’t require overestimating your abilities or your chances. In fact, it asks you to believe something that is more true than “You have to be the best” or “Winning is everything”.

If what you want to do is actually worth doing, an accurate assessment will tell you that. And if an accurate assessment tells you not to do it, then you shouldn’t do it. So you have no reason at all to strive for anything other than accurate beliefs.

With this in mind, the fact that the empirical evidence for “depressive realism” is shockingly weak is not only unsurprising; it’s almost irrelevant. You can’t have evidence against being rational. If deluded people succeed more, that means something is very, very wrong; and the solution is clearly not to make more people deluded.

Of course, it’s worth pointing out that the evidence is shockingly weak: Depressed people show different biases, not less bias. And in fact they seem to be more overconfident in the following sense: They are more certain that what they predict will happen is what will actually happen.

So while most people think they will succeed when they will probably fail, depressed people are certain they will fail when in fact they could succeed. Both beliefs are inaccurate, but the depressed one is in an important sense more inaccurate: It tells you to give up, which is the wrong thing to do.

“Healthy self-delusion” ultimately amounts to trying to get you to do the right thing for the wrong reasons. But why? Do the right thing for the right reasons! If it’s really the right thing, it should have the right reasons!