Expressivism

Sep 29 JDN 2460583

The theory of expressivism, often posited as an alternative to moral realism, is based on the observation by Hume that factual knowledge is not intrinsically motivating. I can believe that a food is nutritious and that I need nutrition to survive, but without some emotional experience to motivate me—hunger—I will nonetheless remain unmotivated to eat the nutritious food. Because morality is meant to be intrinsically motivating, says Hume, it must not involve statements of fact.

Yet really all Hume has shown is that if indeed facts are not intrinsically motivating, and moral statements are intrinsically motivating, then moral statements are not merely statements of fact. But even statements of fact are rarely merely statements of fact! If I were to walk down the street stating facts at random (lemurs have rings on their tails, the Sun is over one million kilometers in diameter, bicycles have two wheels, people sit on chairs, time dilates as you approach the speed of light, LGBT people suffer the highest per capita rate of hate crimes in the US, Coca-Cola in the United States contains high fructose corn syrup, humans and chimpanzees share 95-98% of our DNA), I would be seen as a very odd sort of person indeed. Even when I state a fact, I do so out of some motivation, frequently an emotional motivation. I’m often trying to explain, or to convince. Sometimes I am angry, and I want to express my anger and frustration. Other times I am sad and seeking consolation. I have many emotions, and I often use words to express them. Nonetheless, in the process I will make many statements of fact that are either true or false: “Humans and chimpanzees share 95-98% of our DNA” I might use to argue in favor of common descent; “Time dilates as you approach the speed of light” I have used to explain relativity theory; “LGBT people suffer the highest per capita rate of hate crimes in the US” I might use to argue in favor of some sort of gay rights policy. When I say “genocide is wrong!” I probably have some sort of emotional motivation for this—likely my outrage at an ongoing genocide. Nonetheless I’m pretty sure it’s true that genocide is wrong.

Expressivism says that moral statements don’t express propositions at all; rather, they express attitudes: relations to ideas that are not of the same kind as belief and disbelief, truth and falsehood. Much as “Hello!” or “Darn it!” don’t really state facts or inquire about facts, expressivists like Simon Blackburn and Allan Gibbard would say that “Genocide is wrong” doesn’t say anything about the facts of genocide; it merely expresses my attitude of moral disapproval toward genocide.

Yet expressivists can’t abandon all normativity—otherwise even the claim “expressivism is true” has no normative force. Allan Gibbard, like most expressivists, supports epistemic normativity—the principle that we ought to believe what is true. But this seems to me already a moral principle, and one that is not merely an attitude that some people happen to have, but in fact a fundamental axiom that ought to apply to any rational beings in any possible universe. Even more, Gibbard agrees that some moral attitudes are more warranted than others, that “genocide is wrong” is more legitimate than “genocide is good”. But once we agree that there are objective normative truths and that moral attitudes can be more or less justified, how is this any different from moral realism?

Indeed, in terms of cognitive science I’m not sure beliefs and emotions are so easily separable in the first place. In some sense I think statements of fact can be intrinsically motivating—or perhaps it is better to put it this way: If your brain is working properly, certain beliefs and emotions will necessarily coincide. If you believe that you are about to be attacked by a tiger, and you don’t experience the emotion of fear, something is wrong; if you believe that you are about to die of starvation, and you don’t experience the emotion of hunger, something is wrong. Conversely, if you believe that you are safe from all danger, and yet you experience fear, something is wrong; if you believe that you have eaten plenty of food, yet you still experience hunger, something is wrong. When your beliefs and emotions don’t align, either your beliefs or your emotions are defective. I would say that the same is true of moral beliefs. If you believe that genocide is wrong but you are not motivated to resist genocide, something is wrong; if you believe that feeding your children is obligatory but you are not motivated to feed your children, something is wrong.

It may well be that without emotion, facts would never motivate us; but emotions can be warranted by facts. That is how we distinguish depression from sadness, mania from joy, phobia from fear. Indeed, I am dubious of the entire philosophical project of noncognitivism, of which expressivism is the moral form. Noncognitivism is the idea that a given domain of mental processing is not cognitive—not based on thinking, reason, or belief. There is often a sense that noncognitive mental processing is “lower” than cognition, usually based on the idea that it is more phylogenetically conserved—that we think as men but feel as rats.

Yet in fact this is not how human emotions work at all. Poetry—mere words—often evokes the strongest of emotions. A text message of “I love you” or “I think we should see other people” can change the course of our lives. An ambulance in the driveway will pale the face of any parent. In 2001 the video footage of airplanes colliding with skyscrapers gave all of America nightmares for weeks. Yet stop and think about what text messages, ambulances, video footage, airplanes, and skyscrapers are—they are technologies so advanced, so irreducibly cognitive, that even the world’s technological superpower had none of them 200 years ago. (We didn’t have text messages forty years ago!) Even something as apparently dry as numbers can have profound emotional effects: In the statements “Your blood sugar is X mg/dL” to a diabetic, “You have Y years to live” to a cancer patient, or “Z people died” in a news report, the emotional effects are almost wholly dependent upon the value of the numbers X, Y, and Z—values of X = 100, Y = 50 and Z = 0 would be no cause for alarm (or perhaps even cause for celebration!), while values of X = 400, Y = 2, and Z = 10,000 would trigger immediate shock, terror and despair. The entire discipline of cognitive-behavioral psychotherapy depends upon the fact that talking to people about their thoughts and beliefs can have profound effects upon their emotions and actions—and in empirical studies, cognitive-behavioral psychotherapy is verified to work in a variety of circumstances and is more effective than medication for virtually every mental disorder. We do not think as men but feel as rats; we think and feel as human beings.

Because emotions are evolved instincts, because we have limited control over them, and because other animals have them, we are often inclined to suppose that they are simple, stupid, irrational—but on the contrary they are mind-bogglingly complex, brilliantly intelligent, and the essence of what it means to be a rational being. People who don’t have emotions aren’t rational—they are inert. In psychopathology a loss of capacity for emotion is known as flat affect, and it is often debilitating; it is often found in schizophrenia and autism, and in its most extreme forms it causes catatonia, a total lack of body motion. From Plato to Star Trek, Western culture has taught us to think that a loss of emotion would improve our rationality; but on the contrary, a loss of all emotion would render us completely vegetative. Lieutenant Commander Data without his emotion chip would stand in one place and do nothing—for this is what people without emotion actually do.

Indeed, attractive and aversive experiences—that is, emotions—are the core of goal-seeking behavior, without which rationality is impossible. Apparently simple experiences like pleasure and pain (let alone obviously complicated ones like jealousy and patriotism) are so complex that the most advanced robots in the world cannot even get close to simulating them. Injure a rat, and it will withdraw and cry out in pain; damage a robot (at least anything short of a state-of-the-art research robot), and it will not react at all, continuing ineffectually through the same motions it was attempting a moment ago. This shows that rats are smarter than robots—an organism that continues on its way regardless of the stimulus is more like a plant than an animal.

Our emotions do sometimes fail us. They hurt us, they put us at risk, they make us behave in ways that are harmful or irrational. Yet to declare on these grounds that emotions are the enemy of reason would be like declaring that we should all poke out our eyes because sometimes we are fooled by optical illusions. It would be like saying that a shirt with one loose thread is unwearable, or that a mathematician who once omits a negative sign should never again be trusted. This is not rationality but perfectionism. Like human eyes, human emotions are rational the vast majority of the time, and when they aren’t, this is cause for concern. Truly irrational emotions include mania, depression, phobia, and paranoia—and it’s no accident that we respond to these emotions with psychotherapy and medication.

Expressivism is legitimate precisely because it is not a challenger to moral realism. Personally, I think that expressivism is wrong because moral claims express facts as much as they express attitudes; but given our present state of knowledge about cognitive science, that is the sort of question upon which reasonable people can disagree. Moreover, the close ties between emotion and reason may ultimately entail that we are wrong to make the distinction in the first place. It is entirely reasonable, at our present state of knowledge, to think that moral judgments are primarily emotional rather than propositional. What is not reasonable, however, is the claim that moral statements cannot be objectively justified—the evidence against this claim is simply too compelling to ignore. If moral claims are emotions, they are emotions that can be objectively justified.

Against Moral Anti-Realism

Sep 22 JDN 2460576

Moral anti-realism is more philosophically sophisticated than relativism, but it is equally mistaken. It is what it sounds like: the negation of moral realism. Moral anti-realists hold that moral claims are meaningless because they rest upon presumptions about the world that fail to hold. To an anti-realist, “genocide is wrong” is meaningless because there is no such thing as “wrong”, much as to any sane person “unicorns have purple feathers” is meaningless because there are no such things as unicorns. They aren’t saying that genocide isn’t wrong—they’re saying that wrong itself is a defective concept.

The vast majority of people profess strong beliefs in moral truth, and indeed strong beliefs about particular moral issues, such as abortion, capital punishment, same-sex marriage, euthanasia, contraception, civil liberties, and war. There is at the very least a troubling tension here between academia and daily life.

This does not by itself prove that moral truths exist. Ordinary people could be simply wrong about these core beliefs. Indeed, I must acknowledge that most ordinary people clearly are deeply ignorant about certain things: only 55% of Americans believe that the theory of evolution is true, and only 66% of Americans agree that the majority of recent changes in Earth’s climate have been caused by human activity, when in reality these are scientific facts, empirically demonstrable through multiple lines of evidence and verified beyond all reasonable doubt; both evolution and climate change are universally accepted within the scientific community. In scientific terms there is no more doubt about evolution or climate change than there is about the shape of the Earth or the structure of the atom.

If there were similarly compelling reasons to be moral anti-realists, then the fact that most people believe in morality would be little different: Perhaps most ordinary people are simply wrong about these issues. But when asked to provide similarly compelling evidence for why they reject the moral views of ordinary people, moral anti-realists have little to offer.

Many anti-realists will note the diversity of moral opinions in the world, as John Burgess did, which would be rather like noting the diversity of beliefs about the soul as an argument against neuroscience, or noting the diversity of beliefs about the history of life as an argument against evolution. Many people are wrong about many things that science has shown to be the case; this is worrisome for various reasons, but it is not an argument against the validity of scientific knowledge. Similarly, a diversity of opinions about morality is worrisome, but hardly evidence against the validity of morality.

In fact, when they talk about such fundamental disagreements in morality, anti-realists don’t have very compelling examples. It’s easy to find fundamental disagreements about biology—ask an evolutionary biologist and a Creationist whether humans share an ancestor with chimpanzees. It’s easy to find fundamental disagreements about cosmology—ask a physicist and an evangelical Christian how the Earth began. It’s easy to find fundamental disagreements about climate—ask a climatologist and an oil company executive whether human beings are causing global warming. But where are these fundamental disagreements in morality? Sure, on specific matters there is some disagreement. There are differences between cultures regarding what animals it is acceptable to eat, and differences between cultures about what constitutes acceptable clothing, and differences on specific political issues. But in what society is it acceptable to kill people arbitrarily? Where is it all right to steal whatever you want? Where is lying viewed as a good thing? Where is it obligatory to eat only dirt? In what culture has wearing clothes been a crime? Moral realists are by no means committed to saying that everyone agrees about everything—but it does support our case to point out that most people agree on most things most of the time.

There are a few compelling cases of moral disagreement, but they hardly threaten moral realism. How might we show one culture’s norms to be better than another’s? Compare homicide rates. Compare levels of poverty. Compare overall happiness, perhaps using surveys—or even brain scans. This kind of data exists, and it has a fairly clear pattern: people living in social democratic societies (such as Sweden and Norway) are wealthier, safer, longer-lived, and overall happier than people in other societies. Moreover, using the same publicly-available data, democratic societies in general do much better than authoritarian societies, by almost any measure. This is an empirical fact. It doesn’t necessarily mean that such societies are doing everything right—but they are clearly doing something right. And it really isn’t so implausible to say that what they are doing right is enforcing a good system of moral, political, and cultural norms.

Then again, perhaps some people would accept these empirical facts but still insist that their culture is superior; suppose the disagreement really is radical and intractable. This still leaves two possibilities for moral realism.

The most obvious answer would be to say that one group is wrong—that, objectively, one culture is better than another.

But even if that doesn’t work, there is another way: Perhaps both are right, or more precisely, perhaps these two cultural systems are equally good but incompatible. Is this relativism? Some might call it that, but if it is, it’s relativism of a very narrow kind. I am emphatically not saying that all existing cultures are equal, much less that all possible cultures are equal. Instead, I am saying that it is entirely possible to have two independent moral systems which prescribe different behaviors yet nonetheless result in equally-good overall outcomes.

I could make a mathematical argument involving local maxima of nonlinear functions, but instead I think I’ll use an example: Traffic laws.

In the United States, we drive on the right side of the road. In the United Kingdom, they drive on the left side. Which way is correct? Both are—both systems work well, and neither is superior in any discernible way. In fact, there are other systems that would be just as effective, like the system of all one-way roads that prevails in Manhattan.

Yet does this mean that we should abandon reason in our traffic planning, throw up our hands and declare that any traffic system is as good as any other? On the contrary—there are plenty of possible traffic systems that clearly don’t work. Pointing several one-way roads into one another with no exit is clearly not going to result in good traffic flow. Having each driver flip a coin to decide whether to drive on the left or the right would result in endless collisions. Moreover, our own system clearly isn’t perfect. Nearly 40,000 Americans die in car collisions every year; perhaps we can find a better system that will prevent some or all of these deaths. The mere fact that two, or three, or even 400 different systems of laws or morals are equally good does not entail that all systems are equally good. Even if two cultures really are equal, that doesn’t mean we need to abandon moral realism; it merely means that some problems have multiple solutions. “X² = 4; what is X?” has two perfectly correct answers (2 and -2), but it also has an infinite variety of wrong answers.
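To make the local-maxima point from a few paragraphs back concrete, here is a minimal sketch (not from the original post) that treats the left/right driving convention as a coordination problem; the collision model and the probabilities are simplified assumptions chosen purely for illustration.

```python
# A minimal sketch, assuming a toy model: each driver independently drives on
# the left with probability p, and two drivers meeting head-on "collide" when
# they pick opposite sides. Illustrative only, not real traffic data.

def expected_collision_rate(p_left: float) -> float:
    """Probability that two independent drivers pick opposite sides."""
    p_right = 1.0 - p_left
    return 2.0 * p_left * p_right

for p in (0.0, 0.5, 1.0):
    print(f"P(drive left) = {p:.1f} -> collision rate {expected_collision_rate(p):.2f}")

# Output:
#   P(drive left) = 0.0 -> collision rate 0.00   (everyone drives right: one optimum)
#   P(drive left) = 0.5 -> collision rate 0.50   (coin-flipping: clearly bad)
#   P(drive left) = 1.0 -> collision rate 0.00   (everyone drives left: an equally good optimum)
```

Two distinct conventions sit at equally good optima, yet most of the possibilities in between are strictly worse—which is the sense in which admitting multiple solutions is nothing like admitting that anything goes.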

In fact, moral disagreement may not be evidence of anti-realism at all. In order to disagree with someone, you must think that there is an objective fact to be decided. If moral statements were seen as arbitrary and subjective, then people wouldn’t argue about them very much. Imagine an argument: “Chocolate is the best flavor of ice cream!” “No, vanilla is the best!” This sort of argument might happen on occasion between seven-year-olds, but it is definitely not the sort of thing we hear from mature adults. This is because as adults we realize that tastes in ice cream really are largely subjective. An anti-realist can, in theory, account for this, if they can explain why moral values are falsely perceived as objective while values in taste are not; but if all values really are arbitrary and subjective, why is it that this is obvious to everyone in the one case and not the other? In fact, there are compelling reasons to think that we couldn’t perceive moral values as arbitrary even if we tried. Some people say “abortion is a right”, others say “abortion is murder”. Even if we were to say that these are purely arbitrary, we would still be left with the task of deciding what laws to make on abortion. Regardless of where the goals come from, some goals are just objectively incompatible.

Another common anti-realist argument rests upon the way that arguments about morality often become emotional and irrational. Charles Stevenson has made this argument; apparently Stevenson has never witnessed an argument about religion, science, or policy, certainly not one outside academia. Many laypeople will insist passionately that the free market is perfect, global warming is a lie, or the Earth is only 6,000 years old. (Often the same people, come to think of it.) People will grow angry and offended if such beliefs are disputed. Yet these are objectively false claims. Unless we want to be anti-realists about GDP, temperature and radiometric dating, emotional and irrational arguments cannot compel us to abandon realism.

Another frequent claim, commonly known as the “argument from queerness”, says that moral facts would need to be something very strange, usually imagined as floating obligations existing somewhere in space; but this is rather like saying that mathematical facts cannot exist because we do not see floating theorems in space and we have never met a perfect triangle. In fact, there is no such thing as a floating speed of light or a floating Schrödinger’s equation either, but no one thinks this is an argument against physics.

A subtler version of this argument, the original “argument from queerness” put forth by J.L. Mackie, says that moral facts are strange because they are intrinsically motivating, something no other kind of facts would be. This is no doubt true; but it seems to me a fairly trivial observation, since part of the definition of “moral fact” is that anything which has this kind of motivational force is a moral (or at least normative) fact. Any well-defined natural kind is subject to the same sort of argument. Spheres are perfectly round three-dimensional objects, something no other object is. Eyes are organs that perceive light, something no other organ does. Moral facts are indeed facts that categorically motivate action, which no other thing does—but so what? All this means is that we have a well-defined notion of what it means to be a moral fact.

Finally, it is often said that moral claims are too often based on religion, and religion is epistemically unfounded, so morality must fall as well. Now, unlike most people, I completely agree that religion is epistemically unfounded. Instead, the premise I take issue with is the idea that moral claims have anything to do with religion. A lot of people seem to think so; but in fact our most important moral values transcend religion and in many cases actually contradict it.

Now, it may well be that the majority of claims people make about morality are to some extent based in their religious beliefs. The majority of governments in history have been tyrannical; does that mean that government is inherently tyrannical, there is no such thing as a just government? The vast majority of human beings have never traveled in outer space; does that mean space travel is impossible? Similarly, I see no reason to say that simply because the majority of moral claims (maybe) are religious, therefore moral claims are inherently religious.

Generally speaking, moral anti-realists make a harsh distinction between morality and other domains of knowledge. They agree that there are such things as trucks and comets and atoms, but do not agree that there are such things as obligations and rights. Indeed, a typical moral anti-realist speaks as if they are being very rigorous and scientific while we moral realists are being foolish, romantic, even superstitious. Moral anti-realism has an attitude of superciliousness not seen in a scientific faction since behaviorism.

But in fact, I think moral anti-realism is the result of a narrow understanding of fundamental physics and cognitive science. It is a failure to drink deep enough of the Pierian springs. This is not surprising, since fundamental physics and cognitive science are so mind-bogglingly difficult that even the geniuses of the world barely grasp them. Quoth Feynman: “I think I can safely say that nobody understands quantum mechanics.” This was of course a bit overstated—Feynman surely knew that there are things we do understand about quantum physics, for he was among those who best understood them. Still, even the brightest minds in the world face total bafflement before problems like dark energy, quantum gravity, the binding problem, and the Hard Problem. It is no moral failing to have a narrow understanding of fundamental physics and cognitive science, for the world’s greatest minds have a scarcely broader understanding.

The failing comes from trying to apply this narrow understanding of fundamental science to moral problems without the humility to admit that the answers are never so simple. “Neuroscience proves we have no free will.” No it doesn’t! It proves we don’t have the kind of free will you thought we did. “We are all made of atoms, therefore there can be no such thing as right and wrong.” And what do you suppose we would have been made of if there were such things as right and wrong? Magical fairy dust?

Here is what I think moral anti-realists get wrong: They hear only part of what scientists say. Neuroscientists explain to them that the mind is a function of matter, and they hear it as if we had said there is only mindless matter. Physicists explain to them that we have much more precise models of atomic phenomena than we do of human behavior, and they hear it as if we had said that scientific models of human behavior are fundamentally impossible. They trust that we know very well what atoms are made of and very poorly what is right and wrong—when quite the opposite is the case.

In fact, the more we learn about physics and cognitive science, the more similar the two fields seem. There was a time when Newtonian mechanics ruled, when everyone thought that physical objects are made of tiny billiard balls bouncing around according to precise laws, while consciousness was some magical, “higher” spiritual substance that defied explanation. But now we understand that quantum physics is all chaos and probability, while cognitive processes can be mathematically modeled and brain waves can be measured in the laboratory. Something as apparently simple as a proton—let alone an extended, complex object, like a table or a comet—is fundamentally a functional entity, a unit of structure rather than substance. To be a proton is to be organized the way protons are and to do what protons do; and so to be human is to be organized the way humans are and to do what humans do. The eternal search for “stuff” of which everything is made has come up largely empty; eventually we may find the ultimate “stuff”, but when we do, it will already have long been apparent that substance is nowhere near as important as structure. Reductionism isn’t so much wrong as beside the point—when we want to understand what makes a table a table or what makes a man a man, it simply doesn’t matter what stuff they are made of. The table could be wood, glass, plastic, or metal; the man could be carbon, nitrogen and water like us, or else silicon and tantalum like Lieutenant Commander Data on Star Trek. Yes, structure must be made of something, and the substance does affect the structures that can be made out of it, but the structure is what really matters, not the substance.

Hence, I think it is deeply misguided to suggest that because human beings are made of molecules, this means that we are just the same thing as our molecules. Love is indeed made of oxytocin (among other things), but only in the sense that a table is made of wood. To know that love is made of oxytocin really doesn’t tell us very much about love; we need also to understand how oxytocin interacts with the bafflingly complex system that is a human brain—and indeed how groups of brains get together in relationships and societies. This is because love, like so much else, is not substance but function—something you do, not something you are made of.

It is not hard, rigorous science that says love is just oxytocin and happiness is just dopamine; it is naive, simplistic science. It is the sort of “science” that comes from overlaying old prejudices (like “matter is solid, thoughts are ethereal”) with a thin veneer of knowledge. To be a realist about protons but not about obligations is to be a realist about some functional relations and not others. It is to hear “mind is matter”, and fail to understand the is—the identity between them—instead acting as if we had said “there is no mind; there is only matter”. You may find it hard to believe that mind can be made of matter, as do we all; yet the universe cares not about our incredulity. The perfect correlation between neurochemical activity and cognitive activity has been verified in far too many experiments to doubt. Somehow, that kilogram of wet, sparking gelatin in your head is actually thinking and feeling—it is actually you.

And once we realize this, I do not think it is a great leap to realize that the vast collection of complex, interacting bodies moving along particular trajectories through space that was the Holocaust was actually wrong, really, objectively wrong.

People need permission to disagree

Jul 21 JDN 2460513

Obviously, most of the blame for the rise of far-right parties in various countries has to go to the right-wing people who either joined up or failed to stop their allies from joining up. I would hope that goes without saying, but it probably doesn’t, so there, I said it; it’s mostly their fault.

But there is still some fault to go around, and I think we on the left need to do some soul-searching about this.

There is a very common mode of argumentation that is popular on the left, which I think is very dangerous:

“What? You don’t already agree with [policy idea]? You bigot!”

Often it’s not quite that blatant, but the implication is still there: If you don’t agree with this policy involving race, you’re a racist. If you don’t agree with this policy involving transgender rights, you’re a transphobe. If you don’t agree with this policy involving women’s rights, you are a sexist. And so on.

I understand why people think this way. But I also think it has pushed some people over to the right who might otherwise have been persuaded to join our side.

And here comes the comeback, I know:

“If being mistreated turns you into a Nazi, you were never a good ally to begin with.”

Well, first of all, not everyone who was pushed away from the left became a full-blown Nazi. Some of them just stopped listening to us, and started listening to whatever the right wing was saying instead.

Second, life is more complicated than that. Most people don’t really have well-defined political views, believe it or not. Most people sort of form their political views on the spot based on whoever else is around them and who they hear talking the loudest. Most swing voters are really low-information voters who don’t follow politics and make up their minds based on frankly stupid reasons.

And with this in mind, the mere fact that we are pushing people away with our rhetoric means that we are shifting what those low-information voters hear—and thereby giving away elections to the right.

When people disagree about moral questions, isn’t someone morally wrong?

Yes, by construction. (At least one must be; possibly everyone is.)

But we don’t always know who is wrong—and generally speaking, everyone goes into a conversation assuming that they themselves are right. But the ultimate goal of moral conversation is to get more people to be right and fewer people to be wrong, yes? If we treat it as morally wrong to disagree in the first place, we are shutting down any hope of reaching that goal.

Not everyone knows everything about everything.

That may seem perfectly obvious to you, but when you leap from “disagree with [policy]” to “bigot”, you are basically assuming the opposite. You are assuming that whoever you are speaking with knows everything you know about all the relevant considerations of politics and social science, and the only possible reason they could come to a different conclusion is that they have a fundamentally different preference, namely, they are a bigot.

Maybe you are indeed such an enlightened individual that you never get any moral questions wrong. (Maybe.) But can you really expect everyone else to be like that? Isn’t it unfair to ask that of absolutely everyone?

This is why:

People need permission to disagree.

In order for people to learn and grow in their understanding, they need permission to not know all the answers right away. In order for people to change their beliefs, they need permission to believe something that might turn out to be wrong later.


This is exactly the permission we are denying when we accuse anyone we disagree with of being a bigot. Instead of continuing the conversation in the hopes of persuading people to our point of view, we are shutting the conversation down with vitriol and name-calling.

Try to consider this from the opposite perspective.

You enter a conversation about an important political or moral issue. You hear the other person’s view expressed, and then you express your own. Immediately, they start accusing you of being morally defective, a racist, sexist, homophobic, and/or transphobic bigot. How likely are you to continue that conversation? How likely are you to go on listening to this person? How likely are you to change your mind about the original political issue?

In fact, might you even be less likely to change your mind than you would have been if you’d just heard their view expressed and then ended the conversation? I think so. I think just respectfully expressing an alternative view pushes people a little—not a lot, but a little—in favor of whatever view you have expressed. It tells them that someone else who is reasonable and intelligent believes X, so maybe X isn’t so unreasonable.

Conversely, when someone resorts to name-calling, what does that do to your evaluation of their views? They suddenly seem unreasonable. You begin to doubt everything they’re saying. You may even try to revise your view further away out of spite (though this is clearly not rational—reversed stupidity is not intelligence).

Think about that, before you resort to name-calling your opponents.

But now I know you’re thinking:

“But some people really are bigots!”

Yes, that’s true. And some of them may even be the sort of irredeemable bigot you’re imagining right now, someone for whom no amount of conversation could ever change their mind.

But I don’t think most people are like that. In fact, I don’t think most bigots are like that. I think even most people who hold bigoted views about whatever population could in fact be persuaded out of those views, under the right circumstances. And I think that the right circumstances involve a lot more patient, respectful conversation than angry name-calling. For we are all Judy Hopps.

Maybe I’m wrong. Maybe it doesn’t matter how patiently we argue. But it’s still morally better to be respectful and kind, so I’m going to do it.

You have my permission to disagree.

Against deontology

Aug 6 JDN 2460163

In last week’s post I argued against average utilitarianism, basically on the grounds that it devalues the lives of anyone who isn’t of above average happiness. But you might be tempted to take these as arguments against utilitarianism in general, and that is not my intention.

In fact I believe that utilitarianism is basically correct, though it needs some particular nuances that are often lost in various presentations of it.

Its leading rival is deontology, which is really a broad class of moral theories, some a lot better than others.

What characterizes deontology as a class is that it uses rules, rather than consequences; an act is just right or wrong regardless of its consequences—or even its expected consequences.

There are certain aspects of this which are quite appealing: In fact, I do think that rules have an important role to play in ethics, and as such I am basically a rule utilitarian. Actually trying to foresee all possible consequences of every action we might take is an absurd demand far beyond the capacity of us mere mortals, and so in practice we have no choice but to develop heuristic rules that can guide us.

But deontology says that these are no mere heuristics: They are in fact the core of ethics itself. Under deontology, wrong actions are wrong even if you know for certain that their consequences will be good.

Kantian ethics is one of the most well-developed deontological theories, and I am quite sympathetic to Kantian ethics. In fact, I used to consider myself one of its adherents, but I now consider that view a mistaken one.

Let’s first dispense with the views of Kant himself, which are obviously wrong. Kant explicitly said that lying is always, always, always wrong, and even when presented with obvious examples where you could tell a small lie to someone obviously evil in order to save many innocent lives, he stuck to his guns and insisted that lying is always wrong.

This is a bit anachronistic, but I think this example will be more vivid for modern readers, and it absolutely is consistent with what Kant wrote about the actual scenarios he was presented with:

You are living in Germany in 1945. You have sheltered a family of Jews in your attic to keep them safe from the Holocaust. Nazi soldiers have arrived at your door, and ask you: “Are there any Jews in this house?” Do you tell the truth?

I think it’s utterly, agonizingly obvious that you should not tell the truth. Exactly what you should do is less obvious: Do you simply lie and hope they buy it? Do you devise a clever ruse? Do you try to distract them in some way? Do you send them on a wild goose chase elsewhere? If you could overpower them and kill them, should you? What if you aren’t sure you can; should you still try? But one thing is clear: You don’t hand over the Jewish family to the Nazis.

Yet when presented with similar examples, Kant insisted that lying is always wrong. He had a theory to back it up, his Categorical Imperative: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

And, so his argument goes: Since it would be obviously incoherent to say that everyone should always lie, lying is wrong, and you’re never allowed to do it. He actually bites that bullet, a bullet the size of a howitzer round.

Modern deontologists—even those who consider themselves Kantians—are more sophisticated than this. They realize that you could make a rule like “Never lie, except to save the life of an innocent person” or “Never lie, except to stop a great evil.” Either of these would be quite adequate to solve this particular dilemma. And it’s absolutely possible to will that these would be universal laws, in the sense that they would apply to anyone. ‘Universal’ doesn’t have to mean ‘applies equally to all possible circumstances’.

There are also a couple of things that deontology does very well, which are worth preserving. One of them is supererogation: The idea that some acts are above and beyond the call of duty, that something can be good without being obligatory.

This is something most forms of utilitarianism are notoriously bad at. They show us a spectrum of worlds from the best to the worst, and tell us to make things better. But there’s nowhere we are allowed to stop, unless we somehow manage to make it all the way to the best possible world.

I find this kind of moral demand very tempting, which often leads me to feel a tremendous burden of guilt. I always know that I could be doing more than I do. I’ve written several posts about this in the past, in the hopes of fighting off this temptation in myself and others. (I am not entirely sure how well I’ve succeeded.)

Deontology does much better in this regard: Here are some rules. Follow them.

Many of the rules are in fact very good rules that most people successfully follow their entire lives: Don’t murder. Don’t rape. Don’t commit robbery. Don’t rule a nation tyrannically. Don’t commit war crimes.

Others are oft more honored in the breach than the observance: Don’t lie. Don’t be rude. Don’t be selfish. Be brave. Be generous. But a well-developed deontology can even deal with this, by saying that some rules are more important than others, and thus some sins are more forgivable than others.

Whereas a utilitarian—at least, anything but a very sophisticated utilitarian—can only say who is better and who is worse, a deontologist can say who is good enough: who has successfully discharged their moral obligations and is otherwise free to live their life as they choose. Deontology absolves us of guilt in a way that utilitarianism is very bad at.

Another good deontological principle is double-effect: Basically this says that if you are doing something that will have bad outcomes as well as good ones, it matters whether you intend the bad ones and what you do to try to mitigate them. There does seem to be a morally relevant difference between a bombing that kills civilians accidentally as part of an attack on a legitimate military target, and a so-called “strategic bombing” that directly targets civilians in order to maximize casualties—even if both occur as part of a justified war. (Both happen a lot—and it may even be the case that some of the latter were justified. The Tokyo firebombing and atomic bombs on Hiroshima and Nagasaki were very much in the latter category.)

There are ways to capture this principle (or something very much like it) in a utilitarian framework, but like supererogation, it requires a sophisticated, nuanced approach that most utilitarians don’t seem willing or able to take.

Now that I’ve said what’s good about it, let’s talk about what’s really wrong with deontology.

Above all: How do we choose the rules?

Kant seemed to think that mere logical coherence would yield a sufficiently detailed—perhaps even unique—set of rules for all rational beings in the universe to follow. This is obviously wrong, and seems to be simply a failure of his imagination. There is literally a countably infinite space of possible ethical rules that are logically consistent. (With probability 1 any given one is utter nonsense: “Never eat cheese on Thursdays”, “Armadillos should rule the world”, and so on—but these are still logically consistent.)

If you require the rules to be simple and general enough to always apply to everyone everywhere, you can narrow the space substantially; but this is also how you get obviously wrong rules like “Never lie.”

In practice, there are two ways we actually seem to do this: Tradition and consequences.

Let’s start with tradition. (It came first historically, after all.) You can absolutely make a set of rules based on whatever your culture has handed down to you since time immemorial. You can even write them down in a book that you declare to be the absolute infallible truth of the universe—and, amazingly enough, you can get millions of people to actually buy that.

The result, of course, is what we call religion. Some of its rules are good: Thou shalt not kill. Some are flawed but reasonable: Thou shalt not steal. Thou shalt not commit adultery. Some are nonsense: Thou shalt not covet thy neighbor’s goods.

And some, well… some rules of tradition are the source of many of the world’s most horrific human rights violations. Thou shalt not suffer a witch to live (Exodus 22:18). If a man also lie with mankind, as he lieth with a woman, both of them have committed an abomination: they shall surely be put to death; their blood shall be upon them (Leviticus 20:13).

Tradition-based deontology has in fact been the major obstacle to moral progress throughout history. It is not a coincidence that utilitarianism began to become popular right before the abolition of slavery, and there is an even more direct causal link between utilitarianism and the advancement of rights for women and LGBT people. When the sole argument you can make for moral rules is that they are ancient (or allegedly handed down by a perfect being), you can make rules that oppress anyone you want. But when rules have to be based on bringing happiness or preventing suffering, whole classes of oppression suddenly become untenable. “God said so” can justify anything—but “Who does it hurt?” can cut through.

It is an oversimplification, but not a terribly large one, to say that the arc of moral history has been drawn by utilitarians dragging deontologists kicking and screaming into a better future.

There is a better way to make rules, and that is based on consequences. And, in practice, most people who call themselves deontologists these days do this. They develop a system of moral rules based on what would be expected to lead to the overall best outcomes.

I like this approach. In fact, I agree with this approach. But it basically amounts to abandoning deontology and surrendering to utilitarianism.

Once you admit that the fundamental justification for all moral rules is the promotion of happiness and the prevention of suffering, you are basically a rule utilitarian. Rules then become heuristics for promoting happiness, not the fundamental source of morality itself.

I suppose it could be argued that this is not a surrender but a synthesis: We are looking for the best aspects of deontology and utilitarianism. That makes a lot of sense. But I keep coming back to the dark history of traditional rules, the fact that deontologists have basically been holding back human civilization since time immemorial. If deontology wants to be taken seriously now, it needs to prove that it has broken with that dark tradition. And frankly the easiest answer to me seems to be to just give up on deontology.

How much should we give of ourselves?

Jul 23 JDN 2460149

This is a question I’ve written about before, but it’s a very important one—perhaps the most important question I deal with on this blog—so today I’d like to come back to it from a slightly different angle.

Suppose you could sacrifice all the happiness in the rest of your life, making your own existence barely worth living, in exchange for saving the lives of 100 people you will never meet.

  1. Would it be good for you to do so?
  2. Should you do so?
  3. Are you a bad person if you don’t?
  4. Are all of the above really the same question?

Think carefully about your answer. It may be tempting to say “yes”. It feels righteous to say “yes”.

But in fact this is not hypothetical. It is the actual situation you are in.

This GiveWell article is entitled “Why is it so expensive to save a life?” but that’s incredibly weird, because the actual figure they give is astonishingly, mind-bogglingly, frankly disgustingly cheap: It costs about $4500 to save one human life. I don’t know how you can possibly find that expensive. I don’t understand how anyone can think, “Saving this person’s life might max out a credit card or two; boy, that sure seems expensive!”

The standard for healthcare policy in the US is that something is worth doing if it is able to save one quality-adjusted life year for less than $50,000. That’s one year for ten times as much. Even accounting for the shorter lifespans and worse lives in poor countries, saving someone from a poor country for $4500 is at least one hundred times as cost-effective as that.

To put it another way, if you are a typical middle-class person in the First World, with an after-tax income of about $25,000 per year, and you were to donate 90% of that after-tax income to high-impact charities, you could be expected to save 5 lives every year. Over the course of a 30-year career, that’s 150 lives saved.
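Here is a back-of-the-envelope check of the arithmetic above, a minimal sketch using only the figures the post itself cites ($4,500 per life, the $50,000-per-QALY policy threshold, a $25,000 after-tax income, a 90% donation rate over a 30-year career); the 50-year life-expectancy figure used in the comparison at the end is my own simplifying assumption, not a number from the post.

```python
# Back-of-the-envelope check of the figures quoted above; the inputs are the
# post's own numbers, not independent estimates.

cost_per_life = 4_500        # GiveWell-style cost to save one life (USD)
qaly_threshold = 50_000      # US policy threshold (USD per quality-adjusted life year)
after_tax_income = 25_000    # "typical middle-class" after-tax income (USD/year)
donation_rate = 0.90         # the hypothetical 90% donation scenario
career_years = 30

annual_donation = donation_rate * after_tax_income   # 22,500 USD/year
lives_per_year = annual_donation / cost_per_life      # 5.0 lives/year
lives_per_career = lives_per_year * career_years      # 150.0 lives

# Rough cost-effectiveness comparison, assuming (my assumption) that a saved
# life amounts to roughly 50 quality-adjusted life years:
cost_per_qaly = cost_per_life / 50                    # ~90 USD per QALY
advantage = qaly_threshold / cost_per_qaly            # ~555x the US threshold

print(lives_per_year, lives_per_career, cost_per_qaly, round(advantage))
# -> 5.0 150.0 90.0 556
```

Even with much more pessimistic adjustments for shorter and harder lives in poor countries, the ratio stays comfortably above the post’s “at least one hundred times” figure.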

You would of course be utterly miserable for those 30 years, having given away all the money you could possibly have used for any kind of entertainment or enjoyment, not to mention living in the cheapest possible housing—maybe even a tent in a homeless camp—and eating the cheapest possible food. But you could do it, and you would in fact be expected to save over 100 lives by doing so.

So let me ask you again:

  1. Would it be good for you to do so?
  2. Should you do so?
  3. Are you a bad person if you don’t?
  4. Are all of the above really the same question?

Peter Singer often writes as though the answer to all these questions is “yes”. But even he doesn’t actually live that way. He gives a great deal to charity, mind you; no one seems to know exactly how much, but estimates range from at least 10% to up to 50% of his income. My general impression is that he gives about 10% of his ordinary income and more like 50% of big prizes he receives (which are in fact quite numerous). Over the course of his life he has certainly donated at least a couple million dollars. Yet he clearly could give more than he does: He lives a comfortable, upper-middle-class life.

Peter Singer’s original argument for his view, from his essay “Famine, Affluence, and Morality”, is actually astonishingly weak. It involves imagining a scenario where a child is drowning in a lake and you could go save them, but only at the cost of ruining your expensive suit.

Obviously, you should save the child. We all agree on that. You are in fact a terrible person if you wouldn’t save the child.

But Singer tries to generalize this into a principle that requires us to donate almost all of our income to international charities, and that just doesn’t follow.

First of all, that suit is not worth $4500. Not if you’re a middle-class person. That’s a damn Armani. No one who isn’t a millionaire wears suits like that.

Second, in the imagined scenario, you’re the only one who can help the kid. All I have to do is change that one thing and already the answer is different: If right next to you there is a trained, certified lifeguard, they should save the kid, not you. And if there are a hundred other people at the lake, and none of them is saving the kid… probably there’s a good reason for that? (It could be bystander effect, but actually that’s much weaker than a lot of people think.) The responsibility doesn’t uniquely fall upon you.

Third, the drowning child is a one-off, emergency scenario that almost certainly will never happen to you, and if it does ever happen, will almost certainly only happen once. But donation is something you could always do, and you could do over and over and over again, until you have depleted all your savings and run up massive debts.

Fourth, in the hypothetical scenario, there is only one child. What if there were ten—or a hundred—or a thousand? What if you couldn’t possibly save them all by yourself? Should you keep going out there and saving children until you become exhausted and you yourself drown? Even if there is a lifeguard and a hundred other bystanders right there doing nothing?

And finally, in the drowning child scenario, you are right there. This isn’t some faceless stranger thousands of miles away. You can actually see that child in front of you. Peter Singer thinks that doesn’t matter—actually his central point seems to be that it doesn’t matter. But I think it does.

Singer writes:

It makes no moral difference whether the person I can help is a neighbor’s child ten yards away from me or a Bengali whose name I shall never know, ten thousand miles away.

That’s clearly wrong, isn’t it? Relationships mean nothing? Community means nothing? There is no moral value whatsoever to helping people close to us rather than random strangers on the other side of the planet?

One answer might be to say that the answer to question 4 is “no”. You aren’t a bad person for not doing everything you should, and even though something would be good if you did it, that doesn’t necessarily mean you should do it.

Perhaps some things are above and beyond the call of duty: Good, perhaps even heroic, if you’re willing to do them, but not something we are all obliged to do. The formal term for this is supererogatory. While I think that overall utilitarianism is basically correct and has done great things for human society, one thing I think most utilitarians miss is that they seem to deny that supererogatory actions exist.

Even then, I’m not entirely sure it is good to be this altruistic.

Someone who really believed that we owe as much to random strangers as we do to our friends and family would never show up to any birthday parties, because any time spent at a birthday party would be more efficiently spent earning-to-give to some high-impact charity. They would never visit their family on Christmas, because plane tickets are expensive and airplanes burn a lot of carbon.

They also wouldn’t concern themselves with whether their job is satisfying or even not totally miserable; they would only care about maximizing the total positive impact they can have on the world, either directly through their work or by raising as much money as possible and donating it all to charity.

They would rest only the minimum amount they require to remain functional, eat only the barest minimum of nutritious food, and otherwise work, work, work, constantly, all the time. If their body was capable of doing the work, they would continue doing the work. For there is not a moment to waste when lives are on the line!

A world full of people like that would be horrible. We would all live our entire lives in miserable drudgery trying to maximize the amount we can donate to faceless strangers on the other side of the planet. There would be no joy or friendship in that world, only endless, endless toil.

When I bring this up in the Effective Altruism community, I’ve heard people try to argue otherwise, basically saying that we would never need everyone to devote themselves to the cause at this level, because we’d soon solve all the big problems and be able to go back to enjoying our lives. I think that’s probably true—but it also kind of misses the point.

Yes, if everyone gave their fair share, that fair share wouldn’t have to be terribly large. But we know for a fact that most people are not giving their fair share. So what now? What should we actually do? Do you really want to live in a world where the morally best people are miserable all the time sacrificing themselves at the altar of altruism?

Yes, clearly, most people don’t do enough. In fact, most people give basically nothing to high-impact charities. We should be trying to fix that. But if I am already giving far more than my fair share, far more than I would have to give if everyone else were pitching in as they should—isn’t there some point at which I’m allowed to stop? Do I have to give everything I can or else I’m a monster?

The conclusion that we ought to make ourselves utterly miserable in order to save distant strangers feels deeply unsettling. It feels even worse if we say that we ought to do so, and worse still if we feel we are bad people if we don’t.

One solution would be to say that we owe absolutely nothing to these distant strangers. Yet that clearly goes too far in the opposite direction. There are so many problems in this world that could be fixed if more people cared just a little bit about strangers on the other side of the planet. Poverty, hunger, war, climate change… if everyone in the world (or really even just everyone in power) cared even 1% as much about random strangers as they do about themselves, all these would be solved.

Should you donate to charity? Yes! You absolutely should. Please, I beseech you, give some reasonable amount to charity—perhaps 5% of your income, or if you can’t manage that, maybe 1%.

Should you make changes in your life to make the world better? Yes! Small ones. Eat less meat. Take public transit instead of driving. Recycle. Vote.

But I can’t ask you to give 90% of your income and spend your entire life trying to optimize your positive impact. Even if it worked, it would be utter madness, and the world would be terrible if all the good people tried to do that.

I feel quite strongly that this is the right approach: Give something. Your fair share, or perhaps even a bit more, because you know not everyone will.

Yet it’s surprisingly hard to come up with a moral theory on which this is the right answer.

It’s much easier to develop a theory on which we owe absolutely nothing: egoism, or any deontology on which charity is not an obligation. And of course Singer-style utilitarianism says that we owe virtually everything: As long as QALYs can be purchased cheaper by GiveWell than by spending on yourself, you should continue donating to GiveWell.

I think part of the problem is that we have developed all these moral theories as if we were isolated beings, who act in a world that is simply beyond our control. It’s much like the assumption of perfect competition in economics: I am but one producer among thousands, so whatever I do won’t affect the price.

But what we really needed was a moral theory that could work for a whole society. Something that would still make sense if everyone did it—or better yet, still make sense if half the people did it, or 10%, or 5%. The theory cannot depend upon the assumption that you are the only one following it. It cannot simply “hold constant” the rest of society.

I have come to realize that the Effective Altruism movement, while probably mostly good for the world as a whole, has actually been quite harmful to the mental health of many of its followers, including myself. It has made us feel guilty for not doing enough, pressured us to burn ourselves out working ever harder to save the world. Because we do not give our last dollar to charity, we are told that we are murderers.

But there are real murderers in this world. While you were beating yourself up over not donating enough, Vladimir Putin was continuing his invasion of Ukraine, ExxonMobil was expanding its offshore drilling, Daesh was carrying out hundreds of terrorist attacks, QAnon was deluding millions of people, and the human trafficking industry was making $150 billion per year.

In other words, by simply doing nothing you are considerably better than the real monsters responsible for most of the world’s horror.

In fact, those starving children in Africa that you’re sending money to help? They wouldn’t need it, were it not for centuries of colonial imperialism followed by a series of corrupt and/or incompetent governments ruled mainly by psychopaths.

Indeed, the best way to save those people, in the long run, would be to fix their governments—as has been done in places like Namibia and Botswana. According to the World Development Indicators, the proportion of people living below the UN extreme poverty line (currently $2.15 per day at purchasing power parity) has fallen from 36% to 16% in Namibia since 2003, and from 42% to 15% in Botswana since 1984. Compare this to some countries that haven’t had good governments over that time: In Cote d’Ivoire the same poverty rate was 8% in 1985 but is 11% today (and was actually as high as 33% in 2015), while in Congo it remains at 35%. Then there are countries that are trying, but started out so poor that there is still a long way to go: Burkina Faso’s extreme poverty rate has fallen from 82% in 1994 to 30% today.

In other words, if you’re feeling bad about not giving enough, remember this: if everyone in the world were as good as you, you wouldn’t need to give a cent.

Of course, simply feeling good about yourself for not being a psychopath doesn’t accomplish very much either. Somehow we have to find a balance: Motivate people enough so that they do something, get them to do their share; but don’t pressure them to sacrifice themselves at the altar of altruism.

I think part of the problem here—and not just here—is that the people who most need to change are the ones least likely to listen. The kind of person who reads Peter Singer is already probably in the top 10% of most altruistic people, and really doesn’t need much more than a slight nudge to be doing their fair share. And meanwhile the really terrible people in the world have probably never picked up an ethics book in their lives, or if they have, they ignored everything it said.

I don’t quite know what to do about that. But I hope I can at least convince you—and myself—to take some of the pressure off when it feels like we’re not doing enough.

We ignorant, incompetent gods

May 21 JDN 2460086

A review of Homo Deus

The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.

E.O. Wilson

Homo Deus is a very good read—and despite its length, a quick one; as you can see, I read it cover to cover in a week. Yuval Noah Harari’s central point is surely correct: Our technology is reaching a threshold where it grants us unprecedented power and forces us to ask what it means to be human.

Biotechnology and artificial intelligence are now advancing so rapidly that advancements in other domains, such as aerospace and nuclear energy, seem positively mundane. Who cares about making flight or electricity a bit cleaner when we will soon have the power to modify ourselves or we’ll all be replaced by machines?

Indeed, we already have technology that would have seemed to ancient people like the powers of gods. We can fly; we can witness or even control events thousands of miles away; we can destroy mountains; we can wipe out entire armies in an instant; we can even travel into outer space.

Harari rightly warns us that our not-so-distant descendants are likely to have powers that we would see as godlike: Immortality, superior intelligence, self-modification, the power to create life.

And while it is scary to think about what they might do with that power if they think the way we do—as ignorant and foolish and tribal as we are—Harari points out that it is equally scary to think about what they might do if they don’t think the way we do—for then, how do they think? If their minds are genetically modified or even artificially created, who will they be? What values will they have, if not ours? Could they be better? What if they’re worse?

It is of course difficult to imagine values better than our own—if we thought those values were better, we’d presumably adopt them. But we should seriously consider the possibility, since presumably most of us believe that our values today are better than what most people’s values were 1000 years ago. If moral progress continues, does it not follow that people’s values will be better still 1000 years from now? Or at least that they could be?

I also think Harari overestimates just how difficult it is to anticipate the future. This may be a useful overcorrection; the world is positively infested with people making overprecise predictions about the future, often selling them for exorbitant fees (note that Harari was quite well-compensated for this book as well!). But our values are not so fundamentally alien to those of our forebears, and we have reason to suspect that our descendants’ values will be no more different from ours than ours are from theirs.

For instance, do you think that medieval people thought suffering and death were good? I assure you they did not. Nor did they believe that the supreme purpose in life is eating cheese. (They didn’t even believe the Earth was flat!) They did not have the concept of GDP, but they could surely appreciate the value of economic prosperity.

Indeed, our world today looks very much like a medieval peasant’s vision of paradise. Boundless food in endless variety. Near-perfect security against violence. Robust health, free from nearly all infectious disease. Freedom of movement. Representation in government! The land of milk and honey is here; there they are, milk and honey on the shelves at Walmart.

Of course, our paradise comes with caveats: Not least, we are by no means free of toil, but instead have invented whole new kinds of toil they could scarcely have imagined. If anything I would have to guess that coding a robot or recording a video lecture probably isn’t substantially more satisfying than harvesting wheat or smithing a sword; and reconciling receivables and formatting spreadsheets is surely less. Our tasks are physically much easier, but mentally much harder, and it’s not obvious which of those is preferable. And we are so very stressed! It’s honestly bizarre just how stressed we are, given the abundance in which we live; there is no reason for our lives to have stakes so high, and yet somehow they do. It is perhaps this stress and economic precarity that prevents us from feeling such joy as the medieval peasants would have imagined for us.

Of course, we don’t agree with our ancestors on everything. The medieval peasants were surely more religious, more ignorant, more misogynistic, more xenophobic, and more racist than we are. But projecting that trend forward mostly means less ignorance, less misogyny, less racism in the future; it means that future generations should see the world catch up to what the best of us already believe and strive for—hardly something to fear. The values that I believe in are surely not the values we as a civilization act upon, and I sorely wish they were. Perhaps someday they will be.

I can even imagine something that I myself would recognize as better than me: Me, but less hypocritical. Strictly vegan rather than lacto-ovo-vegetarian, or at least more consistent about only buying free range organic animal products. More committed to ecological sustainability, more willing to sacrifice the conveniences of plastic and gasoline. Able to truly respect and appreciate all life, even humble insects. (Though perhaps still not mosquitoes; this is war. They kill more of us than any other animal, including us.) Not even casually or accidentally racist or sexist. More courageous, less burnt out and apathetic. I don’t always live up to my own ideals. Perhaps someday someone will.

Harari fears something much darker, that we will be forced to give up on humanist values and replace them with a new techno-religion he calls Dataism, in which the supreme value is efficient data processing. I see very little evidence of this. If it feels like data is worshipped these days, it is only because data is profitable. Amazon and Google constantly seek out ever richer datasets and ever faster processing because that is how they make money. The real subject of worship here is wealth, and that is nothing new. Maybe there are some die-hard techno-utopians out there who long for us all to join the unified oversoul of all optimized data processing, but I’ve never met one, and they are clearly not the majority. (Harari also uses the word ‘religion’ in an annoyingly overbroad sense; he refers to communism, liberalism, and fascism as ‘religions’. Ideologies, surely; but religions?)

Harari in fact seems to think that ideologies are strongly driven by economic structures, so maybe he would even agree that it’s about profit for now, but thinks it will become religion later. But I don’t really see history fitting this pattern all that well. If monotheism is directly tied to the formation of organized bureaucracy and national government, then how did Egypt and Rome last so long with polytheistic pantheons? If atheism is the natural outgrowth of industrialized capitalism, then why are Africa and South America taking so long to get the memo? I do think that economic circumstances can constrain culture and shift what sort of ideas become dominant, including religious ideas; but there clearly isn’t this one-to-one correspondence he imagines. Moreover, there was never Coalism or Oilism aside from the greedy acquisition of these commodities as part of a far more familiar ideology: capitalism.

He also claims that all of science is now, or is close to, following a united paradigm under which everything is a data processing algorithm, which suggests he has not met very many scientists. Our paradigms remain quite varied, thank you; and if they do all have certain features in common, it’s mainly things like rationality, naturalism and empiricism that are more or less inherent to science. It’s not even the case that all cognitive scientists believe in materialism (though it probably should be); there are still dualists out there.

Moreover, when it comes to values, most scientists believe in liberalism. This is especially true if we use Harari’s broad sense (on which mainline conservatives and libertarians are ‘liberal’ because they believe in liberty and human rights), but even in the narrow sense of center-left. We are by no means converging on a paradigm where human life has no value because it’s all just data processing; maybe some scientists believe that, but definitely not most of us. If scientists ran the world, I can’t promise everything would be better, but I can tell you that Bush and Trump would never have been elected and we’d have a much better climate policy in place by now.

I do share many of Harari’s fears of the rise of artificial intelligence. The world is clearly not ready for the massive economic disruption that AI is going to cause all too soon. We still define a person’s worth by their employment, and think of ourselves primarily as a collection of skills; but AI is going to make many of those skills obsolete, and may make many of us unemployable. It would behoove us to think in advance about who we truly are and what we truly want before that day comes. I used to think that creative intellectual professions would be relatively secure; ChatGPT and Midjourney changed my mind. Even writers and artists may not be safe much longer.

Harari is so good at sympathetically explaining other views that he takes it to a fault. At times it is actually difficult to know whether he himself believes something and wants you to, or if he is just steelmanning someone else’s worldview. There’s a whole section on ‘evolutionary humanism’ where he details a worldview that is at best Nietzschean and at worst Nazi, but he makes it sound so seductive. I don’t think it’s what he believes, in part because he has similarly good things to say about liberalism and socialism—but it’s honestly hard to tell.

The weakest part of the book is when Harari talks about free will. Like most people, he just doesn’t get compatibilism. He spends a whole chapter talking about how science ‘proves we have no free will’, and it’s just the same old tired arguments hard determinists have always made.

He talks about how we can make choices based on our desires, but we can’t choose our desires; well of course we can’t! What would that even mean? If you could choose your desires, what would you choose them based on, if not your desires? Your desire-desires? Well, then, can you choose your desire-desires? What about your desire-desire-desires?

What even is this ultimate uncaused freedom that libertarian free will is supposed to consist in? No one seems capable of even defining it. (I’d say Kant got the closest: He defined it as the capacity to act based upon what ought to be rather than what is. But of course what we believe about ‘ought’ is fundamentally stored in our brains as a particular state, a way things are—so in the end, it’s an ‘is’ we act on after all.)

Maybe before you lament that something doesn’t exist, you should at least be able to describe that thing as a coherent concept? Woe is me, that 2 plus 2 is not equal to 5!

It is true that as our technology advances, manipulating other people’s desires will become more and more feasible. Harari overstates the case on so-called robo-rats; they aren’t really mind-controlled, it’s more like they are rewarded and punished. The rat chooses to go left because she knows you’ll make her feel good if she does; she’s still freely choosing to go left. (Dangling a carrot in front of a horse is fundamentally the same thing—and frankly, paying a wage isn’t all that different.) The day may yet come when stronger forms of control become feasible, and woe betide us when it does. Yet this is no threat to the concept of free will; we already knew that coercion was possible, and mind control is simply a more precise form of coercion.

Harari reports on a lot of interesting findings in neuroscience, which are important for people to know about, but they do not actually show that free will is an illusion. What they do show is that free will is thornier than most people imagine. Our desires are not fully unified; we are often ‘of two minds’ in a surprisingly literal sense. We are often tempted by things we know are wrong. We often aren’t sure what we really want. Every individual is in fact quite divisible; we literally contain multitudes.

We do need a richer account of moral responsibility that can deal with the fact that human beings often feel multiple conflicting desires simultaneously, and often experience events differently than we later go on to remember them. But at the end of the day, human consciousness is mostly unified, our choices are mostly rational, and our basic account of moral responsibility is mostly valid.

I think for now we should perhaps be less worried about what may come in the distant future, what sort of godlike powers our descendants may have—and more worried about what we are doing with the godlike powers we already have. We have the power to feed the world; why aren’t we? We have the power to save millions from disease; why don’t we? I don’t see many people blindly following this ‘Dataism’, but I do see an awful lot blindly following a 19th-century vision of capitalism.

And perhaps if we straighten ourselves out, the future will be in better hands.

Updating your moral software

Oct 23 JDN 2459876

I’ve noticed an odd tendency among politically active people, particularly social media slacktivists (a term I do not use pejoratively: slacktivism is highly cost-effective). They adopt new ideas very rapidly, trying to stay on the cutting edge of moral and political discourse—and then they denigrate and disparage anyone who fails to do the same as an irredeemable monster.

This can take many forms, such as “if you don’t buy into my specific take on Critical Race Theory, you are a racist”, “if you have any uncertainty about the widespread use of puberty blockers you are a transphobic bigot”, “if you give any credence to the medical consensus on risks of obesity you are fatphobic”, “if you think disabilities should be cured you’re an ableist”, and “if you don’t support legalizing abortion in all circumstances you are a misogynist”.

My intention here is not to evaluate any particular moral belief, though I’ll say the following: I am skeptical of Critical Race Theory, especially the 1619 Project, which seems to me to include substantial distortions of history. I am cautiously supportive of puberty blockers, because the medical data on their risks are ambiguous—while the sociological data on how much happier trans kids are when accepted are totally unambiguous. I am well aware of the medical data saying that the risks of obesity are overblown (but also not negligible, particularly for those who are very obese). Speaking as someone with a disability that causes me frequent, agonizing pain, yes, I want disabilities to be cured, thank you very much; accommodations are nice in the meantime, but the best long-term solution is to not need accommodations. (I’ll admit to some grey areas regarding certain neurodivergences such as autism and ADHD, and I would never want to force cures on people who don’t want them; but paralysis, deafness, blindness, diabetes, depression, and migraine are all absolutely worth finding cures for—the QALYs at stake here are massive—and it’s silly to say otherwise.) I think abortion should generally be legal and readily available in the first trimester (which is when most abortions happen anyway), but much more strictly regulated thereafter—but denying it to children and rape victims is a human rights violation.

What I really want to talk about today is not the details of the moral belief, but the attitude toward those who don’t share it. There are genuine racists, transphobes, fatphobes, ableists, and misogynists in the world. There are also structural institutions that can lead to discrimination despite most of the people involved having no particular intention to discriminate. It’s worthwhile to talk about these things, and to try to find ways to fix them. But does calling anyone who disagrees with you a monster accomplish that goal?

This seems particularly bad precisely when your own beliefs are so cutting-edge. If you have a really basic, well-established sort of progressive belief like “hiring based on race should be illegal”, “women should be allowed to work outside the home” or “sodomy should be legal”, then people who disagree with you pretty much are bigots. But when you’re talking about new, controversial ideas, there is bound to be some lag; people who adopted the last generation’s—or even the last year’s—progressive beliefs may not yet be ready to accept the new beliefs, and that doesn’t make them bigots.

Consider this: Were you born believing in your current moral and political beliefs?

I contend that you were not. You may have been born intelligent, open-minded, and empathetic. You may have been born into a progressive, politically-savvy family. But the fact remains that any particular belief you hold about race, or gender, or ethics was something you had to learn. And if you learned it, that means that at some point you didn’t already know it. How would you have felt back then, if, instead of calmly explaining it to you, people called you names for not believing in it?

Now, perhaps it is true that as soon as you heard your current ideas, you immediately adopted them. But that may not be the case—it may have taken you some time to learn or change your mind—and even if it was, it’s still not fair to denigrate anyone who takes a bit longer to come around. There are many reasons why someone might not be willing to change their beliefs immediately, and most of them are not indicative of bigotry or deep moral failings.

It may be helpful to think about this in terms of updating your moral software. You were born with a very minimal moral operating system (emotions such as love and guilt, the capacity for empathy), and over time you have gradually installed more and more sophisticated software on top of that OS. If someone literally wasn’t born with the right OS—we call these people psychopaths—then, yes, you have every right to hate, fear, and denigrate them. But most of the people we’re talking about do have that underlying operating system, they just haven’t updated all their software to the same version as yours. It’s both unfair and counterproductive to treat them as irredeemably defective simply because they haven’t updated to the newest version yet. They have the hardware, they have the operating system; maybe their download is just a little slower than yours.

In fact, if you are very fast to adopt new, trendy moral beliefs, you may in fact be adopting them too quickly—they haven’t been properly vetted by human experience just yet. You can think of this as like a beta version: The newest update has some great new features, but it’s also buggy and unstable. It may need to be fixed before it is really ready for widespread release. If that’s the case, then people aren’t even wrong not to adopt them yet! It isn’t necessarily bad that you have adopted the new beliefs; we need beta testers. But you should be aware of your status as a beta tester and be prepared both to revise your own beliefs if needed, and also to cut other people slack if they disagree with you.

I understand that it can be immensely frustrating to be thoroughly convinced that something is true and important and yet see so many people disagreeing with it. (I am an atheist activist after all, so I absolutely know what that feels like.) I understand that it can be immensely painful to watch innocent people suffer because they have to live in a world where other people have harmful beliefs. But you aren’t changing anyone’s mind or saving anyone from harm by calling people names. Patience, tact, and persuasion will win the long game, and the long game is really all we have.

And if it makes you feel any better, the long game may not be as long as it seems. The arc of history may have tighter curvature than we imagine. We certainly managed a complete flip of the First World consensus on gay marriage in just a single generation. We may be able to achieve similarly fast social changes in other areas too. But we haven’t accomplished the progress we have so far by being uncharitable or aggressive toward those who disagree.

I am emphatically not saying you should stop arguing for your beliefs. We need you to argue for your beliefs. We need you to argue forcefully and passionately. But when doing so, try not to attack the people who don’t yet agree with you—for they are precisely the people we need to listen to you.

The injustice of talent

Sep 4 JDN 2459827

Consider the following two principles of distributive justice.

A: People deserve to be rewarded in proportion to what they accomplish.

B: People deserve to be rewarded in proportion to the effort they put in.

Both principles sound pretty reasonable, don’t they? They both seem like sensible notions of fairness, and I think most people would broadly agree with both of them.

This is a problem, because they are mutually contradictory. We cannot possibly follow them both.

For, as much as our society would like to pretend otherwise—and I think this contradiction is precisely why our society would like to pretend otherwise—what you accomplish is not simply a function of the effort you put in.

Don’t get me wrong; it is partly a function of the effort you put in. Hard work does contribute to success. But it is neither sufficient, nor strictly necessary.

Rather, success is a function of three factors: Effort, Environment, and Talent.

Effort is the work you yourself put in, and basically everyone agrees you deserve to be rewarded for that.

Environment includes all the outside factors that affect you—including both natural and social environment. Inheritance, illness, and just plain luck are all in here, and there is general, if not universal, agreement that society should make at least some efforts to minimize inequality created by such causes.

And then, there is talent. Talent includes whatever capacities you innately have. It could be strictly genetic, or it could be acquired in childhood or even in the womb. But by the time you are an adult and responsible for your own life, these factors are largely fixed and immutable. This includes things like intelligence, disability, even height. The trillion-dollar question is: How much should we reward talent?

For talent clearly does matter. I will never swim like Michael Phelps, run like Usain Bolt, or shoot hoops like Steph Curry. It doesn’t matter how much effort I put in, how many hours I spend training—I will never reach their level of capability. Never. It’s impossible. I could certainly improve from my current condition; perhaps it would even be good for me to do so. But there are certain hard fundamental constraints imposed by biology that give them more potential in these skills than I will ever have.

Conversely, there are likely things I can do that they will never be able to do, though this is less obvious. Could Michael Phelps never be as good a programmer or as skilled a mathematician as I am? He certainly isn’t now. Maybe, with enough time, enough training, he could be; I honestly don’t know. But I can tell you this: I’m sure it would be harder for him than it was for me. He couldn’t breeze through college-level courses in differential equations and quantum mechanics the way I did. There is something I have that he doesn’t, and I’m pretty sure I was born with it. Call it spatial working memory, or mathematical intuition, or just plain IQ. Whatever it is, math comes easy to me in not so different a way from how swimming comes easy to Michael Phelps. I have talent for math; he has talent for swimming.

Moreover, these are not small differences. It’s not like we all come with basically the same capabilities with a little bit of variation that can be easily washed out by effort. We’d like to believe that—we have all sorts of cultural tropes that try to inculcate that belief in us—but it’s obviously not true. The vast majority of quantum physicists are people born with high IQ. The vast majority of pro athletes are people born with physical prowess. The vast majority of movie stars are people born with pretty faces. For many types of jobs, the determining factor seems to be talent.

This isn’t too surprising, actually—even if effort matters a lot, we would still expect talent to show up as the determining factor much of the time.

Let’s go back to that contest function model I used to analyze the job market a while back (the one that suggests we spend way too much time and money in the hiring process). This time let’s focus on the perspective of the employees themselves.

Each employee has a level of talent, h. Employee X has talent h_x and exerts effort x, producing output of a quality that is the product of these: h_x·x. Similarly, employee Z has talent h_z and exerts effort z, producing output h_z·z.

Then, there’s a certain amount of luck that factors in. The most successful output isn’t necessarily the best, or maybe what should have been the best wasn’t because some random circumstance prevailed. But we’ll say that the probability an individual succeeds is proportional to the quality of their output.

So the probability that employee X succeeds is: h_x·x / (h_x·x + h_z·z)

I’ll skip the algebra this time (if you’re interested you can look back at that previous post), but to make a long story short, in Nash equilibrium the two employees will exert exactly the same amount of effort.
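
For readers who don’t want to dig up that older post, here is a quick sketch of the skipped step, under my own simplifying assumption (not stated here) that each employee wins a prize V if they succeed and pays a constant marginal cost c per unit of effort. Employee X chooses x to maximize V·h_x·x / (h_x·x + h_z·z) − c·x, which gives the first-order condition V·h_x·h_z·z / (h_x·x + h_z·z)² = c. Employee Z’s condition is the mirror image: V·h_x·h_z·x / (h_x·x + h_z·z)² = c. Dividing one condition by the other gives z/x = 1, so in equilibrium x = z.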

Then, which one succeeds will be entirely determined by talent; because x = z, the probability that X succeeds is h_x / (h_x + h_z).

It’s not that effort doesn’t matter—it absolutely does matter, and in fact in this model, with zero effort you get zero output (which isn’t necessarily the case in real life). It’s that in equilibrium, everyone is exerting the same amount of effort; so what determines who wins is innate talent. And I gotta say, that sounds an awful lot like how professional sports works. It’s less clear whether it applies to quantum physicists.

But maybe we don’t really exert the same amount of effort! This is true. Indeed, it seems like actually effort is easier for people with higher talent—that the same hour spent running on a track is easier for Usain Bolt than for me, and the same hour studying calculus is easier for me than it would be for Usain Bolt. So in the end our equilibrium effort isn’t the same—but rather than compensating, this effect only serves to exaggerate the difference in innate talent between us.

It’s simple enough to generalize the model to allow for such a thing. For instance, I could say that the cost of producing a unit of effort is inversely proportional to your talent; then instead of h_x / (h_x + h_z), in equilibrium the probability of X succeeding would become h_x² / (h_x² + h_z²). The equilibrium effort would also be different, with x > z if h_x > h_z.
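
Here is a minimal numerical sketch of both versions of the model. The prize value V = 1, the cost parameter c = 1, the particular talent levels, and the grid-search method are all illustrative assumptions of mine, not anything from the original post; the sketch just finds the Nash equilibrium by iterating best responses and checks it against the closed-form probabilities above.

```python
# Numerical sketch of the contest model: two employees, talents hx and hz,
# each choosing effort to maximize V * P(success) - cost of effort.

def success_prob(hx, x, hz, z):
    """Probability that employee X succeeds, given talents and efforts."""
    return (hx * x) / (hx * x + hz * z)

def best_response(h_own, h_other, other_effort, V=1.0, c=1.0, talent_scaled_cost=False):
    """Effort maximizing V * P(success) - cost, found by a coarse grid search."""
    cost_rate = c / h_own if talent_scaled_cost else c
    best_effort, best_payoff = 0.001, float("-inf")
    for i in range(1, 2001):
        effort = i / 1000.0
        payoff = V * success_prob(h_own, effort, h_other, other_effort) - cost_rate * effort
        if payoff > best_payoff:
            best_effort, best_payoff = effort, payoff
    return best_effort

def nash_equilibrium(hx, hz, talent_scaled_cost=False):
    """Iterate best responses until efforts (approximately) settle down."""
    x, z = 0.1, 0.1
    for _ in range(100):
        x = best_response(hx, hz, z, talent_scaled_cost=talent_scaled_cost)
        z = best_response(hz, hx, x, talent_scaled_cost=talent_scaled_cost)
    return x, z

hx, hz = 2.0, 1.0  # X is twice as talented as Z

# Baseline: same marginal cost of effort for both. Efforts converge to
# equality, so X's win probability is hx / (hx + hz) = 2/3.
x, z = nash_equilibrium(hx, hz)
print(f"equal-cost case:  x={x:.3f}, z={z:.3f}, P(X wins)={success_prob(hx, x, hz, z):.3f}")

# Talent-scaled cost: effort is cheaper for the more talented, so x > z and
# X's win probability rises to hx^2 / (hx^2 + hz^2) = 4/5.
x, z = nash_equilibrium(hx, hz, talent_scaled_cost=True)
print(f"scaled-cost case: x={x:.3f}, z={z:.3f}, P(X wins)={success_prob(hx, x, hz, z):.3f}")
```

With h_x = 2 and h_z = 1, the equal-cost case converges to equal efforts and a 2/3 win probability for X, while the talent-scaled-cost case gives X both higher effort and a 4/5 win probability, exactly the pattern described above.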

Once we acknowledge that talent is genuinely important, we face an ethical problem. Do we want to reward people for their accomplishment (A), or for their effort (B)? There are good cases to be made for each.

Rewarding for accomplishment, which we might call meritocracy, will tend to, well, maximize accomplishment. We’ll get the best basketball players playing basketball, the best surgeons doing surgery. Moreover, accomplishment is often quite easy to measure, even when effort isn’t.

Rewarding for effort, which we might call egalitarianism, will give people the most control over their lives, and might well feel the most fair. Those who succeed will be precisely those who work hard, even if they do things they are objectively bad at. Even people who are born with very little talent will still be able to make a living by working hard. And it will ensure that people do work hard, which meritocracy can actually fail at: If you are extremely talented, you don’t really need to work hard because you just automatically succeed.

Capitalism, as an economic system, is very good at rewarding accomplishment. I think part of what makes socialism appealing to so many people is that it tries to reward effort instead. (Is it very good at that? Not so clear.)

The more extreme differences are actually in terms of disability. There’s a certain baseline level of activities that most people are capable of, which we think of as “normal”: most people can talk; most people can run, if not necessarily very fast; most people can throw a ball, if not pitch a proper curveball. But some people can’t throw. Some people can’t run. Some people can’t even talk. It’s not that they are bad at it; it’s that they are literally not capable of it. No amount of effort could have made Stephen Hawking into a baseball player—not even a bad one.

It’s these cases when I think egalitarianism becomes most appealing: It just seems deeply unfair that people with severe disabilities should have to suffer in poverty. Even if they really can’t do much productive work on their own, it just seems wrong not to help them, at least enough that they can get by. But capitalism by itself absolutely would not do that—if you aren’t making a profit for the company, they’re not going to keep you employed. So we need some kind of social safety net to help such people. And it turns out that such people are quite numerous, and our current system is really not adequate to help them.

But meritocracy has its pull as well. Especially when the job is really important—like surgery, not so much basketball—we really want the highest quality work. It’s not so important whether the neurosurgeon who removes your tumor worked really hard at it or found it a breeze; what we care about is getting that tumor out.

Where does this leave us?

I think we have no choice but to compromise, on both principles. We will reward both effort and accomplishment, to greater or lesser degree—perhaps varying based on circumstances. We will never be able to entirely reward accomplishment or entirely reward effort.

This is more or less what we already do in practice, so why worry about it? Well, because we don’t like to admit that it’s what we do in practice, and a lot of problems seem to stem from that.

We have people acting like billionaires are such brilliant, hard-working people just because they’re rich—because our society rewards effort, right? So they couldn’t be so successful if they didn’t work so hard, right? Right?

Conversely, we have people who denigrate the poor as lazy and stupid just because they are poor. Because it couldn’t possibly be that their circumstances were worse than yours? Or hey, even if they are genuinely less talented than you—do less talented people deserve to be homeless and starving?

We tell kids from a young age, “You can be whatever you want to be”, and “Work hard and you’ll succeed”; and these things simply aren’t true. There are limitations on what you can achieve through effort—limitations imposed by your environment, and limitations imposed by your innate talents.

I’m not saying we should crush children’s dreams; I’m saying we should help them to build more realistic dreams, dreams that can actually be achieved in the real world. And then, when they grow up, either they will actually succeed, or, when they don’t, at least they won’t hate themselves for failing to live up to what they were told they’d be able to do.

If you were wondering why Millennials are so depressed, that’s clearly a big part of it: We were told we could be and do whatever we wanted if we worked hard enough, and then that didn’t happen; and we had so internalized what we were told that we thought it had to be our fault that we failed. We didn’t try hard enough. We weren’t good enough. I have spent years feeling this way—on some level I do still feel this way—and it was not because adults tried to crush my dreams when I was a child, but on the contrary because they didn’t do anything to temper them. They never told me that life is hard, and people fail, and that I would probably fail at my most ambitious goals—and it wouldn’t be my fault, and it would still turn out okay.

That’s really it, I think: They never told me that it’s okay not to be wildly successful. They never told me that I’d still be good enough even if I never had any great world-class accomplishments. Instead, they kept feeding me the lie that I would have great world-class accomplishments; and then, when I didn’t, I felt like a failure and I hated myself. I think my own experience may be particularly extreme in this regard, but I know a lot of other people in my generation who had similar experiences, especially those who were also considered “gifted” as children. And we are all now suffering from depression, anxiety, and Impostor Syndrome.

All because nobody wanted to admit that talent, effort, and success are not the same thing.

Are people basically good?

Mar 20 JDN 2459659

I recently finished reading Humankind by Rutger Bregman. His central thesis is a surprisingly controversial one, yet one I largely agree with: People are basically good. Most people, in most circumstances, try to do the right thing.

Neoclassical economists in particular seem utterly scandalized by any such suggestion. No, they insist, people are selfish! They’ll take any opportunity to exploit each other! On this, Bregman is right and the neoclassical economists are wrong.

One of the best parts of the book is Bregman’s tale of several shipwrecked Tongan boys who were stranded on the remote island of ‘Ata, sometimes called “the real Lord of the Flies”, but with an outcome quite radically different from that of the novel. There were of course conflicts during their long time stranded, but the boys resolved most of these conflicts peacefully, and by the time help came over a year later they were still healthy and harmonious. Bregman himself was involved in the investigative reporting about these events, and his tale of how he came to meet some of the (now elderly) survivors and tell their tale is both enlightening and heartwarming.

Bregman spends a lot of time (perhaps too much time) analyzing classic experiments meant to elucidate human nature. He does a good job of analyzing the Milgram experiment—it’s valid, but it says more about our willingness to serve a cause than our blind obedience to authority. He utterly demolishes the Zimbardo experiment; I knew it was bad, but I hadn’t even realized how utterly worthless that so-called “experiment” actually is. Zimbardo basically paid people to act like abusive prison guards—specifically instructing them how to act!—and then claimed that he had discovered something deep in human nature. Bregman calls it a “hoax”, which might be a bit too strong—but it’s about as accurate as calling it an “experiment”. I think it’s more like a form of performance art.

Bregman’s criticism of Steven Pinker I find much less convincing. He cites a few other studies that purported to show the following: (1) the archaeological record is unreliable in assessing death rates in prehistoric societies (fair enough, but what else do we have?), (2) the high death rates in prehistoric cultures could be from predators such as lions rather than other humans (maybe, but that still means civilization is providing vital security!), (3) the Long Peace could be a statistical artifact because data on wars is so sparse (I find this unlikely, but I admit the Russian invasion of Ukraine does support such a notion), or (4) the Long Peace is the result of nuclear weapons, globalized trade, and/or international institutions rather than a change in overall attitudes toward violence (perfectly reasonable, but I’m not even sure Pinker would disagree).

I appreciate that Bregman does not lend credence to the people who want to use absolute death counts instead of relative death rates, who apparently would rather live in a prehistoric village of 100 people that gets wiped out by a plague (or for that matter on a Mars colony of 100 people who all die of suffocation when the life support fails) rather than remain in a modern city of a million people that has a few dozen murders each year. Zero homicides is better than 40, right? Personally, I care most about the question “How likely am I to die at any given time?”; and for that, relative death rates are the only valid measure. I don’t even see why we should particularly care about homicide versus other causes of death—I don’t see being shot as particularly worse than dying of Alzheimer’s (indeed, quite the contrary, other than the fact that Alzheimer’s is largely limited to old age and shooting isn’t). But all right, if violence is the question, then go ahead and use homicides—but it certainly should be rates and not absolute numbers. A larger human population is not an inherently bad thing.

I even appreciate that Bregman offers a theory (not an especially convincing one, but not an utterly ridiculous one either) of how agriculture and civilization could emerge even if hunter-gatherer life was actually better. It basically involves agriculture being discovered by accident, and then people gradually transitioning to a sedentary mode of life and not realizing their mistake until generations had passed and all the old skills were lost. There are various holes one can poke in this theory (Were the skills really lost? Couldn’t they be recovered from others? Indeed, haven’t people done that, in living memory, by “going native”?), but it’s at least better than simply saying “civilization was a mistake”.

Yet Bregman’s own account, particularly his discussion of how early civilizations all seem to have been slave states, seems to better support what I think is becoming the historical consensus, which is that civilization emerged because a handful of psychopaths gathered armies to conquer and enslave everyone around them. This is bad news for anyone who holds to a naively Whiggish view of history as a continuous march of progress (which I have oft heard accused but rarely heard endorsed), but it’s equally bad news for anyone who believes that all human beings are basically good and we should—or even could—return to a state of blissful anarchism.

Indeed, this is where Bregman’s view and mine part ways. We both agree that most people are mostly good most of the time. He even acknowledges that about 2% of people are psychopaths, which is a very plausible figure. (The figures I find most credible are about 1% of women and about 4% of men, which averages out to 2.5%. The prevalence you get also depends on how severely lacking in empathy someone needs to be in order to qualify. I’ve seen figures as low as 1% and as high as 4%.) What he fails to see is how that 2% of people can have large effects on society, wildly disproportionate to their number.

Consider the few dozen murders that are committed in any given city of a million people each year. Who is committing those murders? By and large, psychopaths. That’s more true of premeditated murder than of crimes of passion, but even the latter are far more likely to be committed by psychopaths than the general population.

Or consider those early civilizations that were nearly all authoritarian slave-states. What kind of person tends to govern an authoritarian slave-state? A psychopath. Sure, probably not every Roman emperor was a psychopath—but I’m quite certain that Commodus and Caligula were, and I suspect that Augustus and several others were as well. And the ones who don’t seem like psychopaths (like Marcus Aurelius) still seem like narcissists. Indeed, I’m not sure it’s possible to be an authoritarian emperor and not be at least a narcissist; should an ordinary person somehow find themselves in the role, I think they’d immediately set out to delegate authority and improve civil liberties.

This suggests that civilization was not so much a mistake as it was a crime—civilization was inflicted upon us by psychopaths and their greed for wealth and power. Like I said, not great for a “march of progress” view of history. Yet a lot has changed in the last few thousand years, and life in the 21st century at least seems overall pretty good—and almost certainly better than life on the African savannah 50,000 years ago.

In essence, what I think happened is that we invented a technology to turn the tables of civilization, using the same tools psychopaths had used to oppress us as a means to contain them. This technology was called democracy. The institutions of democracy allowed us to convert government from a means by which psychopaths oppress and extract wealth from the populace into a means by which the populace could prevent psychopaths from committing wanton acts of violence.

Is it perfect? Certainly not. Indeed, there are many governments today that much better fit the “psychopath oppressing people” model (e.g. Russia, China, North Korea), and even in First World democracies there are substantial abuses of power and violations of human rights. In fact, psychopaths are overrepresented among the police and also among politicians. Perhaps there are superior modes of governance yet to be found that would further reduce the power psychopaths have and thereby make life better for everyone else.

Yet it remains clear that democracy is better than anarchy. This is not so much because anarchy results in everyone behaving badly and causes immediate chaos (as many people seem to erroneously believe), but because it results in enough people behaving badly to be a problem—and because some of those people are psychopaths who will take advantage of the power vacuum to seize control for themselves.

Yes, most people are basically good. But enough people aren’t that it’s a problem.

Bregman seems to think that simply outnumbering the psychopaths is enough to keep them under control, but history clearly shows that it isn’t. We need institutions of governance to protect us. And for the most part, First World democracies do a fairly good job of that.

Indeed, I think Bregman’s perspective may be a bit clouded by being Dutch, as the Netherlands has one of the highest rates of trust in the world. Nearly 90% of people in the Netherlands trust their neighbors. Even the US has high levels of trust by world standards, at about 84%; a more typical country is India or Mexico at 64%, and the least-trusting countries are places like Gabon with 31% or Benin with a dismal 23%. Trust in government varies widely, from an astonishing 94% in Norway (then again, have you seen Norway? Their government is doing a bang-up job!) to 79% in the Netherlands, to closer to 50% in most countries (in this the US is more typical), all the way down to 23% in Nigeria (which seems equally justified). Some mysteries remain, like why more people trust the government in Russia than in Namibia. (Maybe people in Namibia are just more willing to speak their minds? They’re certainly much freer to do so.)

In other words, Dutch people are basically good. Not that the Netherlands has no psychopaths; surely they have a few just like everyone else. But they have strong, effective democratic institutions that provide both liberty and security for the vast majority of the population. And with the psychopaths under control, everyone else can feel free to trust each other and cooperate, even in the absence of obvious government support. It’s precisely because the government of the Netherlands is so unusually effective that someone living there can come to believe that government is unnecessary.

In short, Bregman is right that we should have donation boxes—and a lot of people seem to miss that (especially economists!). But he seems to forget that we need to keep them locked.

Locked donation boxes and moral variation

Aug 8 JDN 2459435

I haven’t been able to find the quote, but I think it was Kahneman who once remarked: “Putting locks on donation boxes shows that you have the correct view of human nature.”

I consider this a deep insight. Allow me to explain.

Some people think that human beings are basically good. Rousseau is commonly associated with this view, a notion that, left to our own devices, human beings would naturally gravitate toward an anarchic but peaceful society.

The question for people who think this needs to be: Why haven’t we? If your answer is “government holds us back”, you still need to explain why we have government. Government was not imposed upon us from On High in time immemorial. We were fairly anarchic (though not especially peaceful) in hunter-gatherer tribes for nearly 200,000 years before we established governments. How did that happen?

And if your answer to that is “a small number of tyrannical psychopaths forced government on everyone else”, you may not be wrong about that—but it already breaks your original theory, because we’ve just shown that human society cannot maintain a peaceful anarchy indefinitely.

Other people think that human beings are basically evil. Hobbes is most commonly associated with this view, that humans are innately greedy, violent, and selfish, and only by the overwhelming force of a government can civilization be maintained.

This view more accurately predicts the level of violence and death that generally accompanies anarchy, and can at least explain why we’d want to establish government—but it still has trouble explaining how we would establish government. It’s not as if we’re ruled by a single ubermensch with superpowers, or an army of robots created by a mad scientist in a secret underground laboratory. Running a government involves cooperation on an absolutely massive scale—thousands or even millions of unrelated, largely anonymous individuals—and this cooperation is not maintained entirely by force: Yes, there is some force involved, but most of what a government does most of the time is mediated by norms and customs, and if a government did ever try to organize itself entirely by force—not paying any of the workers, not relying on any notion of patriotism or civic duty—it would immediately and catastrophically collapse.

What is the right answer? Humans aren’t basically good or basically evil. Humans are basically varied.

I would even go so far as to say that most human beings are basically good. They follow a moral code, they care about other people, they work hard to support others, they try not to break the rules. Nobody is perfect, and we all make various mistakes. We disagree about what is right and wrong, and sometimes we even engage in actions that we ourselves would recognize as morally wrong. But most people, most of the time, try to do the right thing.

But some people are better than others. There are great humanitarians, and then there are ordinary folks. There are people who are kind and compassionate, and people who are selfish jerks.

And at the very opposite extreme from the great humanitarians is the roughly 1% of people who are outright psychopaths. About 5-10% of people have significant psychopathic traits, but about 1% are really full-blown psychopaths.

I believe it is fair to say that psychopaths are in fact basically evil. They are incapable of empathy or compassion. Morality is meaningless to them—they literally cannot distinguish moral rules from other rules. Other people’s suffering—even their very lives—means nothing to them except insofar as it is instrumentally useful. To a psychopath, other people are nothing more than tools, resources to be exploited—or obstacles to be removed.

Some philosophers have argued that this means that psychopaths are incapable of moral responsibility. I think this is wrong. I think it relies on a naive, pre-scientific notion of what “moral responsibility” is supposed to mean—one that was inevitably going to be destroyed once we had a greater understanding of the brain. Do psychopaths understand the consequences of their actions? Yes. Do rewards motivate psychopaths to behave better? Yes. Does the threat of punishment motivate them? Not really, but it was never that effective on anyone else, either. What kind of “moral responsibility” are we still missing? And how would our optimal action change if we decided that they do or don’t have moral responsibility? Would you still imprison them for crimes either way? Maybe it doesn’t matter whether or not it’s really a blegg.

Psychopaths are a small portion of our population, but are responsible for a large proportion of violent crimes. They are also overrepresented in top government positions as well as police officers, and it’s pretty safe to say that nearly every murderous dictator was a psychopath of one shade or another.

The vast majority of people are not psychopaths, and most people don’t even have any significant psychopathic traits. Yet psychopaths have an enormously disproportionate impact on society—nearly all of it harmful. If psychopaths did not exist, Rousseau might be right after all; we wouldn’t need government. If most people were psychopaths, Hobbes would be right; we’d long for the stability and security of government, but we could never actually cooperate enough to create it.

This brings me back to the matter of locked donation boxes.

Having a donation box is only worthwhile if most people are basically good: Asking people to give money freely in order to achieve some good only makes any sense if people are capable of altruism, empathy, cooperation. And it can’t be just a few, because you’d never raise enough money to be useful that way. It doesn’t have to be everyone, or maybe even a majority; but it has to be a large fraction. 90% is more than enough.

But locking things is only worthwhile if some people are basically evil: For a lock to make sense, there must be at least a few people who would be willing to break in and steal the money, even if it was earmarked for a very worthy cause. It doesn’t take a huge fraction of people, but it must be more than a negligible one. 1% to 10% is just about the right sort of range.
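
To put rough numbers on that logic, here is a back-of-the-envelope sketch; every figure in it is an illustrative assumption of mine, not data from anywhere.

```python
# Illustrative arithmetic for the donation-box argument above.
# All numbers are made up for the sake of the example.

passersby = 1000
donor_share = 0.90    # "most people are basically good"
thief_share = 0.02    # "some people are basically evil"
avg_donation = 2.00   # dollars per donor

collected = passersby * donor_share * avg_donation   # what the box takes in
would_be_thieves = round(passersby * thief_share)    # how many would empty it

# An unlocked box only has to meet ONE of those people to lose everything
# collected so far; a lock costs a few dollars, once.
print(f"Collected if people give freely: ${collected:,.0f}")
print(f"People who would empty an unlocked box: {would_be_thieves}")
```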

Hence, locked donation boxes are a phenomenon that would only exist in a world where most people are basically good—but some people are basically evil.

And this is in fact the world in which we live. It is a world where the Holocaust could happen but then be followed by the founding of the United Nations, a world where nuclear weapons would be invented and used to devastate cities, but then be followed by an era of nearly unprecedented peace. It is a world where governments are necessary to rein in violence, but also a world where governments can function (reasonably well) even in countries with hundreds of millions of people. It is a world with crushing poverty and people who work tirelessly to end it. It is a world where Exxon and BP despoil the planet for riches while WWF and Greenpeace fight back. It is a world where religions unite millions of people under a banner of peace and justice, and then go on crusades to murder thousands of other people who united under a different banner of peace and justice. It is a world of richness, complexity, uncertainty, conflict—variance.

It is not clear how much of this moral variance is innate versus acquired. If we somehow rewound the film of history and started it again with a few minor changes, it is not clear how many of us would end up the same and how many would be far better or far worse than we are. Maybe psychopaths were born the way they are, or maybe they were made that way by culture or trauma or lead poisoning. Maybe with the right upbringing or brain damage, we, too, could be axe murderers. Yet the fact remains—there are axe murderers, but we, and most people, are not like them.

So, are people good, or evil? Was Rousseau right, or Hobbes? Yes. Both. Neither. There is no one human nature; there are many human natures. We are capable of great good and great evil.

When we plan how to run a society, we must make it work the best we can with that in mind: We can assume that most people will be good most of the time—but we know that some people won’t, and we’d better be prepared for them as well.

Set out your donation boxes with confidence. But make sure they are locked.