Please, don’t let Trump win this

Oct 20 JDN 2460604

It’s almost time for the Presidential election in the United States. Right now, the race is too close to call; as of writing this post, FiveThirtyEight gives Harris a 53% chance of winning, and Trump a 46% chance.

It should not be this close. It should never have been this close. We have already seen what Trump is like in office, and it should have made absolutely no one happy. He is authoritarian, corrupt, incompetent, and narcissistic, and lately he’s starting to show signs of cognitive decline. He is a convicted felon and was involved in an attempted insurrection. His heavy-handed trade tariffs would surely cause severe economic damage both here and abroad, and above all, he wants to roll back rights for millions of Americans.

Almost anyone would be better than Trump. Harris would be obviously, dramatically better in almost every way. Yet somehow Trump is still doing well in the polls, and could absolutely still win this.

Please, do everything you can to stop that from happening.

Donate. Volunteer. Get out the vote. And above all, vote.

Part of the problem is our two-party system, which comes ultimately from our plurality voting system. As RangeVoting.org has remarked, our current system is basically the worst possible system that can still be considered democratic. Range voting would clearly be the best system, but failing that, at least we could have approval voting, or some kind of ranked-choice system. Allowing each voter to support only a single candidate creates deep, fundamental flaws in representation, especially vulnerability to candidate cloning: Multiple similar candidates whom people like can lose to a single candidate whom people dislike, because the vote gets split among them.

In fact, that’s almost certainly what happened with Trump: The only reason he won the primary the first time was that he had a small group of ardent supporters, while all the other candidates were similar and so got the mainstream Republican vote split between them. (Though it looks like the second time around he’d still win even if all the other similar candidates were consolidated—which frankly horrifies me.)
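The vote-splitting failure described above can be illustrated with a toy simulation. (The candidate labels, electorate split, and utility numbers here are invented purely for illustration, not drawn from any real polling data.)

```python
from collections import Counter

# Hypothetical electorate: 60% prefer either of two similar candidates
# (A1, A2) over B; 40% ardently prefer B. All numbers are invented.
voters = (
    [{"A1": 0.9, "A2": 0.8, "B": 0.1}] * 30 +  # like A1 slightly more
    [{"A1": 0.8, "A2": 0.9, "B": 0.1}] * 30 +  # like A2 slightly more
    [{"A1": 0.1, "A2": 0.1, "B": 0.9}] * 40    # ardent B supporters
)

def plurality(voters):
    # Each voter may name only their single favorite candidate.
    tally = Counter(max(v, key=v.get) for v in voters)
    return tally.most_common(1)[0][0], tally

def approval(voters, threshold=0.5):
    # Each voter approves every candidate they rate above the threshold.
    tally = Counter(c for v in voters for c, u in v.items() if u > threshold)
    return tally.most_common(1)[0][0], tally

print(plurality(voters))  # B wins 40-30-30: the majority's vote is split
print(approval(voters))   # a clone wins: A1 and A2 each get 60 approvals vs 40 for B
```

Under plurality, the 60% majority splits between the two clones and the minority candidate wins; under approval voting, voters can support both clones, so one of them wins. This is the cloning failure in miniature.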

But it isn’t just our voting system. The really terrifying thing about Trump is how popular he is among Republicans. Democrats hate him, but Republicans love him. I have tried talking with Republican family members about what they like about Trump, and they struggle to give me a sensible answer. It’s not his personality or his competence (how could it be?). For the most part, it isn’t even particular policies he supports. It’s just this weird free-floating belief that he was a good President and would be again.

There was one major exception to that: Single-issue voters who want to ban abortion. For these people, the only thing that matters is that Trump appointed the Supreme Court justices who overturned Roe v. Wade. I don’t know what to say to such people, since it seems so obvious to me that (1) a total abortion ban is too extreme, even if you want to reduce the abortion rate, (2) there are so many other issues that matter aside from abortion; you can’t simply ignore them all, (3) several other Republican candidates are equally committed to banning abortion but not nearly as corrupt or incompetent, and (4) the Supreme Court has already been appointed; there’s nothing more for Trump to do in that department that he hasn’t already done. But I guess there is at least something resembling a coherent policy preference here, if a baffling one.

Others also talked about his ideas on trade and immigration, but they didn’t seem to have a coherent idea of what a sensible trade or immigration policy looks like. They imagined that it was a reasonable thing to simply tariff all imports massively or expel all immigrants, despite the former being economically absurd and the latter being a human rights violation (and also an economic disaster). I guess that also counts as a policy preference, but it’s not simply baffling; it’s horrifying. I don’t know what to say to these people either.

But maybe that’s a terror I need to come to terms with: Some people don’t like Trump in spite of his terrible policy ideas; they like him because of them. They want a world where rights are rolled back for minorities and LGBT people and (above all) immigrants. They want a world where global trade is shut down and replaced by autarky. They imagine that these changes will somehow benefit them, even though all the evidence suggests that they would do nothing of the sort.

I have never feared Trump himself nearly so much as I fear the people of a country that could elect him. And should we re-elect him, I will fear the people of this country even more.

Please, don’t let that happen.

Freedom and volition

Oct 13 JDN 2460597

Introduction

What freedom do we have to choose some actions over others, and how are we responsible for what we do? Without some kind of freedom and responsibility, morality becomes meaningless—what does it matter what we ought to do if what we will do is completely inevitable? Morality becomes a trivial exercise, trying to imagine fanciful worlds in which things were not only other than they are, but other than they ever could be.

Many people think that science and morality are incompatible precisely because science requires determinism—the causal unity of the universe, wherein all effects have causes and all systems obey conservation laws. This seems to limit our capacity for freedom, since all our actions are determined by physical causes, and could (in principle) be predicted far in advance from the state of the universe around us. In fact, quantum mechanics isn’t necessarily deterministic (though in my preferred version, the Bohm interpretation, it is), but a small amount of randomness at the level of atoms and molecules doesn’t seem to add much in the way of human freedom.

The fear is that determinism undermines human agency; if we are part of a closed causal system, how can we be free to make our own choices? In fact, this is a mistake. Determinism isn’t the right question to be asking at all. There are really four possibilities to consider:

  • Acausalism: Actions are uncaused but inevitable; everything is ultimately random and meaningless.
  • Libertarianism: Actions are uncaused and free; we are the masters of our own destiny, independent of the laws of nature.
  • Fatalism: Actions are caused and inevitable; the universe is a clockwork machine of which we are components.
  • Compatibilism: Actions are caused but free; we are rational participants in the universe’s causal mechanism.

Acausalism

Hardly anyone holds to acausalism, but it is a logically coherent position. Perhaps the universe is ultimately random, meaningless—our actions caused neither by the laws of nature nor by our own wills, but simply by the random flutterings of molecular motion. In such a universe, we are not ultimately responsible for our actions, but nor can we stop ourselves from pretending that we are, for everything we think, say, and do is determined only by the roll of the dice. This is a hopeless, terrifying approach to reality, and it would drive one to suicide but for the fact that if it is true, suicide, just like everything else, must ultimately be decided by chance.

Libertarianism

Most people, if asked—including evolutionary biologists—seem to believe something like libertarianism. (This is metaphysical libertarianism, the claim that free will is real and intrinsically uncaused; it is not to be confused with political Libertarianism.) As human beings we have an intuitive sense that we are not like the rest of the universe. Leaves fall, but people climb; everything decays, but we construct. If this is right, then morality is unproblematic: Moral rules apply to agents with this sort of deep free will, and not to other things.

But libertarian free will runs into serious metaphysical problems. If I am infected by a virus, do I choose to become sick? If I am left without food, do I choose to starve? If I am hit by a car, do I choose to be injured? Anyone can see that this is not the case: No one chooses these things—they happen, as a result of the laws of nature—physics, chemistry, biology.

Yet, so much of our lives is determined by these kinds of events: How can Stephen Hawking be said to have chosen life as a physicist and not a basketball player when he spent his whole adult life crippled by amyotrophic lateral sclerosis? He could not possibly have been a professional basketball player, no matter how badly he might have desired to be. Perhaps he could have been an artist or a philosopher—but still, his options were severely limited by his biology.

Indeed, it is worse than this, for we do not choose our parents, our culture, our genes; yet all of these things strongly influence who we are. I have myopia and migraines not because I wanted them, not because I did something to cause them, but because I was born this way—and while myopia isn’t a serious problem with eyeglasses, migraines have adversely affected my life in many ways, and while treatment has helped me enormously, a full cure remains elusive. Culture influences us even more: It is entirely beyond my control that I speak English and live in an upper-middle-class American family; though I’m fairly happy with this result, I was never given a choice in the matter. All of these things have influenced what schools I’ve attended, what friends I’ve made, even what ideas I have considered. My brain itself is a physical system bound to the determinism of the universe. Therefore, in what sense can anything I do be considered free?

Fatalism

This reasoning leads quickly to fatalism, the notion that because everything we do is controlled by laws of nature, nothing we do is free, and we cannot rightly be held responsible for any of our actions. If this is true, then we still can’t stop ourselves from acting the way we do. People who murder will murder, people who punish murderers will punish murderers—it’s all inevitable. There may be slightly more hope in fatalism than acausalism, since it suggests that everything we do is done in some sense for a purpose, if not any purpose we would recognize or understand. Still, the thought that death and suffering, larceny and rape, starvation and genocide, are in all instances inevitable—this is the sort of idea that will keep a thoughtful person awake at night.

By way of reconciling determinism with libertarian free will, some thinkers (such as Michael Shermer) have suggested that free will is a “useful fiction”.

But the very concept of anything being useful presupposes at least a minimal degree of free will—the ability to choose actions based upon their usefulness. A fiction can only be useful if beliefs affect actions; so if there even is such a thing as a “useful fiction” (I’m quite dubious of the notion), free will certainly cannot be an example. The best one could say under fatalism would be something like “some people happen to believe in free will and can’t change that”; but that doesn’t make free will true, it just makes many people incorrigibly wrong.

Yet the inference to fatalism is not, itself, inevitable; it doesn’t follow from the fact that much or even most of what we do is beyond our control that all we do is beyond our control. Indeed, it makes intuitive sense to say that we are in control of certain things—what we eat, what we say, how we move our bodies. We feel at least that we are in control of these things, and we can operate quite effectively on this presumption.

On the other hand, different levels of analysis yield different results. At the level of the brain, at the level of biochemistry, and especially at the level of quantum physics, there is little difference between what we choose to do and what merely happens to us. Under a powerful enough microscope, being hit by a car and punching someone in the face look the same: It’s all protons and electrons interacting by exchanging photons.

Compatibilism

But free will is not inherently opposed to causality. In order to exercise free will, we must act not from chance, but from character; someone whose actions are random is not choosing freely, and conversely someone can freely choose to be completely predictable. It can be rational to choose some degree of randomness, but it cannot be rational to choose total randomness. As John Baer convincingly argues, at least some degree of causal determinacy is necessary for free will—hence, libertarianism is not viable, and a lack of determinism would lead only to acausalism. In the face of this knowledge, compatibilism is the obvious choice.

One thing that humans do that only a few other things do—some animals, perhaps computers if we’re generous—is reason; we consider alternatives and select the one we consider best. When water flows down a hill, it never imagines doing otherwise. When asteroids collide, they don’t consider other options. Yet we humans behave quite differently; we consider possibilities, reflect on our desires, seek to choose the best option. This process we call volition, and it is central to our experience of choice and freedom.

Another thing we do that other things don’t—except animals again, but definitely not computers this time—is feel emotion; we love and hurt, feel joy and sorrow. It is our emotions that motivate our actions, give them purpose. Water flowing downhill not only doesn’t choose to do so, it doesn’t care whether it does so. Sometimes things happen to us that we do not choose, but we always care.

This is what I mean when I say “free will”: experiences, beliefs, and actions are part of the same causal system. What we are affects what we think, what we think affects what we do. What we do affects what we are, and the system feeds back into itself. From this realization I can make sense of claims that people are good and bad, that acts are right and wrong; and without it I don’t think we could make sense of anything at all.

It’s not that we have some magical soul that lives outside our bodies; we are our bodies. Our brains are our souls. (I call this the Basic Fact of Cognitive Science: We are our brains.) Nor is it that neuron firings somehow “make” our thoughts and feelings as some kind of extra bonus; the patterns of neuron firings and the information that they process are our thoughts and feelings. Free will isn’t some mystical dualism; it is a direct consequence of the fact that we have capacities for conscious volition. Yes, our actions can be ultimately explained by the patterns in our brains. Of course they can! The patterns in our brains comprise our personalities, our beliefs, our memories, our desires.

Yes, the software of human consciousness is implemented on the hardware of the human brain. Why should we have expected something different? Whatever stuff makes consciousness, it is still stuff, and it obeys the laws that stuff obeys. We can imagine that we might be made of invisible fairy dust, but if that were so, then invisible fairy dust would need to be a real phenomenon and hence obey physical laws like the conservation of energy. Cognition is not opposed to physics; it is a subset of physics. Just as a computer obeys Turing’s laws if you program it but also Newton’s laws if you throw it, so humans are both mental and physical beings.

In fact, the intuitive psychology of free will is among the most powerfully and precisely predictive scientific theories ever devised, right alongside Darwinian evolution and quantum physics.

Consider the following experiment, conducted about twenty years ago. In November of 2006, I planned a road trip with several of my friends from our home in Ann Arbor to the Secular Student Alliance conference in Boston that was coming in April 2007. Months in advance, we researched hotels, we registered for the conference, we planned out how much we would need to spend. When the time came, we gathered in my car and drove the 1300 kilometers to the conference.

Now, stop and think for a moment: How did I know, in November 2006, that in April 2007, on a particular date and time, E.O. Wilson would be in a particular room and so would I? Because that’s what the schedule said. Consider for a moment these two extremely complicated extended bodies in space, each interacting with thousands of other such bodies continuously; no physicist could possibly have gathered enough data to predict six months in advance that the two bodies would each travel hundreds of kilometers over the Earth’s surface in order to meet within 10 meters of one another, remain there for roughly an hour, and then split apart and henceforth remain hundreds of kilometers apart. Yet our simple intuitive psychology could, and did, make just that prediction correctly.

Of course, in the face of incomplete data, no theory is perfect, and the prediction could have been wrong. Indeed, because Boston is exceedingly difficult to navigate (we got lost), the prediction that Steven Pinker and I would be in the same room at the same time the previous evening turned out not to be accurate. But even this is something that intuitive psychology could have taken into account better than any other scientific theory we have. Neither quantum physics nor stoichiometric chemistry nor evolutionary biology could have predicted that we’d get lost, nor recommended that if we ever return to Boston we should bring a smartphone with a GPS uplink; yet intuitive psychology can.

Moreover, intuitive psychology explicitly depends upon rational volition. If you had thought that I didn’t want to go to the conference, or that I was mistaken about the conference’s location, then you would have predicted that I would not occupy that spatial location at that time; and had these indeed been the case, that prediction would have been completely accurate. And yet, these predictions insist upon such entities as desires (wanting to go) and beliefs (being mistaken) that eliminativists, behaviorists, and epiphenomenalists have been insisting for years are pseudoscientific. Quite the opposite is the case: Eliminativism, behaviorism, and epiphenomenalism are pseudosciences.

Understanding the constituent parts of a process does not make the process an illusion. Rain did not stop falling when we developed mathematical models of meteorology. Fire did not stop being hot when we formalized statistical mechanics. Thunder did not stop being loud when we explained the wave properties of sound. Advances in computer technology have now helped us realize how real information processing can occur in systems made of physical parts that obey physical laws; it isn’t too great a stretch to think that human minds operate on similar principles. Just as the pattern of electrical firings in my computer really is Windows, the pattern of electrochemical firings in my brain really is my consciousness.

There is a kind of naive theology called “God of the gaps”; it rests upon the notion that whenever a phenomenon cannot be explained by science, this leaves room for God as an explanation. This theology is widely rejected by philosophers, because it implies that whenever science advances, religion must retreat. Libertarianism and fatalism rest upon the presumption of something quite similar, what I would call “free will of the gaps”. As cognitive science advances, we will discover more and more about the causation of human mental states; if this is enough to make us doubt free will, then “free will” was just another name for ignorance of cognitive science. I defend a much deeper sense of free will than this, one that is not at all threatened by scientific advancement.

Yes, our actions are caused—caused by what we think about the world! We are responsible for what we do not because it lacks causation, but because it has causation, specifically causation in our own beliefs, desires, and intentions. These beliefs, desires, and intentions are themselves implemented upon physical hardware, and we don’t fully understand how this implementation operates; but nonetheless the hardware is real and the phenomena are real, at least as real as such things as rocks, rivers, clouds, trees, dogs, and televisions, all of which are also complex functional ensembles of many smaller, simpler parts.

Conclusion

Libertarianism is largely discredited; we don’t have the mystical sort of free will that allows us to act outside of causal laws. But this doesn’t mean that we must accept fatalism; compatibilism is the answer. We have discovered many surprising things about cognitive science, and we will surely need to discover many more; but the fundamental truth of rational volition remains untarnished.

We know, to a high degree of certainty, that human beings are capable of volitional action. I contend that this is all the freedom we need—perhaps even all we could ever have. When a comet collides with Jupiter, and we ask “Why?”, the only sensible answer involves happenstance and laws of physics. When a leaf falls from a tree, and we ask “Why?”, we can do better, talking about evolutionary adaptations in the phylogenetic history of trees. But when a human being robs a bank, starts a war, feeds a child, or writes a book, and we ask “Why?”, we can move away from simple causes and talk about reasons—desires, intentions, beliefs; reasons, unlike mere causes, can make more or less sense, be more or less justified.

Psychological and neurological experiments have shown that volition is more complicated than we usually think—it can be strongly affected by situational factors, and it has more to do with inhibiting and selecting actions than with generating them, what Sukhvinder Obhi and Patrick Haggard call “not free will but free won’t”; yet still we have volitional control over many of our actions, and hence responsibility for them. In simple tasks, there is brain activity that predicts our behavior several seconds before we actually consciously experience the decision—but this is hardly surprising, since the brain needs to use processing power to actually generate a decision. Deliberation requires processing, not all of which can be conscious. It’s a little surprising that the activity can predict the decision in advance of the conscious experience of volition, but it can’t predict the decision perfectly, even in very simple tasks. (And in true real-life tasks, like choosing a college or a spouse, it basically can’t predict at all.) This shows that the conscious volition is doing something—perhaps inhibiting undesired behaviors or selecting desired ones. No compatibilist needs to be committed to the claim that subconscious urges have nothing to do with our decisions—since at least Freud that kind of free will has been clearly discredited.

Indeed, evolutionary psychology would be hard-pressed to explain an illusion of free will that isn’t free will. It simply doesn’t make sense for conscious volition to evolve unless it does something that affects our behavior in some way. Illusions are a waste of brain matter, which in turn is a waste of metabolic energy. (The idea that we would want to have free will in order to feel like life is worth living is profoundly silly: If our beliefs didn’t affect our behavior, our survival would be unrelated to whether we thought life was worth living!) You can make excuses and say that conscious experience is just an epiphenomenon upon neurological processes—an effect but not a cause—but there is no such thing as an “epiphenomenon” in physics as we know it. The smoke of a flame can smother that flame; the sound of a train is a sonic pressure wave that shakes the metal of the track. Anything that moves has energy, and energy is conserved. Epiphenomenalism would require new laws of physics, by which consciousness can be created ex nihilo, a new entity that requires no energy to make and “just happens” whenever certain matter is arranged in the right way.

Windows is not an “epiphenomenon” upon the electrons running through my computer’s processor core; the functional arrangement of those electrons is Windows—it implements Windows. I don’t see why we can’t say the same thing about my consciousness—that it is a software implementation by the computational hardware of my brain. Epiphenomenalists will often insist that they are being tough-minded scientists accepting the difficult facts while the rest of us are being silly and mystical; but they are talking about mysterious new physics and I’m talking about software-hardware interaction—so really, who is being mystical here?

In the future it may be possible to predict people’s behavior relatively accurately based on their brain activity—but so what? This only goes to show that the brain is the source of our decisions, which is precisely what compatibilism says. One can easily predict that rain will fall from clouds of a certain composition; but rain still falls from clouds. The fact that I can sometimes predict your behavior doesn’t make your behavior any less volitional; it only makes me a better psychologist (and for that matter a more functional human being). Moreover, detailed predictions of long-term behaviors will probably always remain impossible, due to the deep computational complexity involved. (If it were simple to predict who you’d marry, why would your brain expend so much effort working on the problem?)

For all these reasons, I say: Yes, we do have free will.

Defending Moral Realism


Oct 6 JDN 2460590

In the last few posts I have only considered arguments against moral realism, and shown them to be lacking. Yet if you were already convinced of moral anti-realism, this probably didn’t change your mind—it’s entirely possible to have a bad argument for a good idea. (Consider the following argument: “Whales are fish, fish are mammals, therefore whales are mammals.”) What you need is arguments for moral realism.

Fortunately, such arguments are not hard to find. My personal favorite was offered by one of my professors in a philosophy course: “I fail all moral anti-realists. If you think that’s unfair, don’t worry: You’re not a moral anti-realist.” In other words, if you want to talk coherently at all about what actions are good or bad, fair or unfair, then you cannot espouse moral anti-realism; and if you do espouse moral anti-realism, there is no reason for us not to simply ignore you (or imprison you!) and go on living out our moral beliefs—especially if you are right that morality is a fiction. Indeed, the reason we don’t actually imprison all moral anti-realists is precisely because we are moral realists, and we think it is morally wrong to imprison someone for espousing unpopular or even ridiculous beliefs.

That of course is a pragmatic argument, not very compelling on epistemological grounds, but there are other arguments that cut deeper. Perhaps the most compelling is the realization that rationality itself is a moral principle—it says that we ought to believe what conforms to reason and ought not to believe what does not. We need at least some core notion of normativity even to value truth and honesty, to seek knowledge, to even care whether moral realism is correct or incorrect. In a total moral vacuum, we can fight over our values and beliefs, we can kill each other over them, but we cannot discuss them or debate them, for discussion and debate themselves presuppose certain moral principles.

Typically moral anti-realists expect us to accept epistemic normativity, but if they do this then they cannot deny the legitimacy of all normative claims. If their whole argument rests upon undermining normativity, then it is self-defeating. If it doesn’t, then anti-realists need to explain the difference between “moral” and “normative”, and explain why the former is so much more suspect than the latter—but even then we have objective obligations that bind our behavior. The difference, I suppose, would involve a tight restriction on the domains of discourse in which normativity applies. Scientific facts? Normative. Interpersonal relations? Subjective. I suppose it’s logically coherent to say that it is objectively wrong to be a Creationist but not objectively wrong to be a serial killer; but this is nothing if not counter-intuitive.

Moreover, it is unclear to me what a universe would be like if it had no moral facts. In what sort of universe would it not be best to believe what is true? In what sort of universe would it not be wrong to harm others for selfish gains? In what sort of world would it be wrong to keep a promise, or good to commit genocide? It seems to me that we are verging on nonsense, rather like what happens if we try to imagine a universe where 2+2=5.

Moreover, there is a particular moral principle, which depends upon moral realism, yet is almost universally agreed upon, even by people who otherwise profess to be moral relativists or anti-realists.

I call it the Hitler Principle, and it’s quite simple:

The Holocaust was bad.

In large part, ethical philosophy since 1945 has been the attempt to systematically justify the Hitler Principle. Only if moral realism is true can we say that the Holocaust was bad, morally bad, unequivocally, objectively, universally, regardless of the beliefs, feelings, desires, culture or upbringing of its perpetrators. And if we can’t even say that, can we say anything at all? If the Holocaust wasn’t wrong, nothing is. And if nothing is wrong, then does it even matter if we believe what is true?

But then, stop and think for a moment: If we know this—if it’s so obvious to just about everyone that the Holocaust was wrong, so obvious that anyone who denies it we immediately recognize as evil or insane (or lying or playing games)—then doesn’t that already offer us an objective moral standard?

I contend that it does—that the Hitler Principle is so self-evident that it can form an objective standard by which to measure all moral theory. I would sooner believe the Sun revolves around the Earth than deny the Holocaust was wrong. I would sooner consider myself a brain in a vat than suppose that systematic extermination of millions of innocent people could ever be morally justified. Richard Swinburne, a philosopher of religion at Oxford, put it well: “it is more obvious to almost all of us that the genocide conducted by Hitler was morally wrong than that we are not now dreaming, or that the Earth is many millions of years old.” Because at least this one moral fact is so obviously, incorrigibly true, we can use it to test our theories of morality. Just as we would immediately reject any theory of physics which denied that the sky is blue, we should also reject any theory of morality which denies that the Holocaust was wrong. This might seem obvious, but by itself it is sufficient to confirm moral realism.

Similar arguments can be made for other moral propositions that virtually everyone accepts, like the following:

  1. Theft is wrong.
  2. Homicide is wrong.
  3. Lying is wrong.
  4. Rape is wrong.
  5. Kindness is good.
  6. Keeping promises is good.
  7. Happiness is good.
  8. Suffering is bad.

With appropriate caveats (lying isn’t always wrong, if it is justified by some greater good; homicide is permissible in self-defense; promises made under duress do not oblige; et cetera), all of these propositions are accepted by almost everyone, and most people hold them with greater certainty than they would hold any belief about empirical science. “Science proves that time is relative” is surprising and counter-intuitive, but people can accept it; “Science proves that homicide is good” is not something anyone would believe for an instant. There is wider agreement and greater confidence about these basic moral truths than there is about any fact in science, even “the Earth is round” or “gravity pulls things toward each other”—for well before Newton or even Archimedes, people still knew that homicide was wrong.

Though there are surely psychopaths who disagree (basically because their brains are defective), the vast majority of people agree on these fundamental moral claims. At least 95% of humans who have ever lived share this universal moral framework, under which the wrongness of genocide is as directly apprehensible as the blueness of the sky and the painfulness of a burn. Moral realism is on as solid an epistemic footing as any fact in science.

Expressivism

Sep 29 JDN 2460583

The theory of expressivism, often posited as an alternative to moral realism, is based on the observation by Hume that factual knowledge is not intrinsically motivating. I can believe that a food is nutritious and that I need nutrition to survive, but without some emotional experience to motivate me—hunger—I will nonetheless remain unmotivated to eat the nutritious food. Because morality is meant to be intrinsically motivating, says Hume, it must not involve statements of fact.

Yet really all Hume has shown is that if indeed facts are not intrinsically motivating, and moral statements are intrinsically motivating, then moral statements are not merely statements of fact. But even statements of fact are rarely merely statements of fact! If I were to walk down the street stating facts at random (lemurs have rings on their tails, the Sun is over one million kilometers in diameter, bicycles have two wheels, people sit on chairs, time dilates as you approach the speed of light, LGBT people suffer the highest per capita rate of hate crimes in the US, Coca-Cola in the United States contains high fructose corn syrup, humans and chimpanzees share 95-98% of our DNA), I would be seen as a very odd sort of person indeed. Even when I state a fact, I do so out of some motivation, frequently an emotional motivation. I’m often trying to explain, or to convince. Sometimes I am angry, and I want to express my anger and frustration. Other times I am sad and seeking consolation. I have many emotions, and I often use words to express them. Nonetheless, in the process I will make many statements of fact that are either true or false: “Humans and chimpanzees share 95-98% of our DNA” I might use to argue in favor of common descent; “Time dilates as you approach the speed of light” I have used to explain relativity theory; “LGBT people suffer the highest per capita rate of hate crimes in the US” I might use to argue in favor of some sort of gay rights policy. When I say “genocide is wrong!” I probably have some sort of emotional motivation for this—likely my outrage at an ongoing genocide. Nonetheless I’m pretty sure it’s true that genocide is wrong.

Expressivism says that moral statements don’t express propositions at all; they express attitudes, relations to ideas that are not of the same kind as belief and disbelief, truth and falsehood. Much as “Hello!” or “Darn it!” don’t really state facts or inquire about facts, expressivists like Simon Blackburn and Allan Gibbard would say that “Genocide is wrong” doesn’t say anything about the facts of genocide; it merely expresses my attitude of moral disapproval toward genocide.

Yet expressivists can’t abandon all normativity—otherwise even the claim “expressivism is true” has no normative force. Allan Gibbard, like most expressivists, supports epistemic normativity—the principle that we ought to believe what is true. But this seems to me already a moral principle, and one that is not merely an attitude that some people happen to have, but in fact a fundamental axiom that ought to apply to any rational beings in any possible universe. What is more, Gibbard agrees that some moral attitudes are more warranted than others, that “genocide is wrong” is more legitimate than “genocide is good”. But once we agree that there are objective normative truths and that moral attitudes can be more or less justified, how is this any different from moral realism?

Indeed, in terms of cognitive science I’m not sure beliefs and emotions are so easily separable in the first place. In some sense I think statements of fact can be intrinsically motivating—or perhaps it is better to put it this way: If your brain is working properly, certain beliefs and emotions will necessarily coincide. If you believe that you are about to be attacked by a tiger, and you don’t experience the emotion of fear, something is wrong; if you believe that you are about to die of starvation, and you don’t experience the emotion of hunger, something is wrong. Conversely, if you believe that you are safe from all danger, and yet you experience fear, something is wrong; if you believe that you have eaten plenty of food, yet you still experience hunger, something is wrong. When your beliefs and emotions don’t align, either your beliefs or your emotions are defective. I would say that the same is true of moral beliefs. If you believe that genocide is wrong but you are not motivated to resist genocide, something is wrong; if you believe that feeding your children is obligatory but you are not motivated to feed your children, something is wrong.

It may well be that without emotion, facts would never motivate us; but emotions can be warranted by facts. That is how we distinguish depression from sadness, mania from joy, phobia from fear. Indeed I am dubious of the entire philosophical project of noncognitivism, of which expressivism is the moral form. Noncognitivism is the idea that a given domain of mental processing is not cognitive—not based on thinking, reason, or belief. There is often a sense that noncognitive mental processing is “lower” than cognition, usually based on the idea that it is more phylogenetically conserved—that we think as men but feel as rats.

Yet in fact this is not how human emotions work at all. Poetry—mere words—often evokes the strongest of emotions. A text message of “I love you” or “I think we should see other people” can change the course of our lives. An ambulance in the driveway will pale the face of any parent. In 2001 the video footage of airplanes colliding with skyscrapers gave all of America nightmares for weeks. Yet stop and think about what text messages, ambulances, video footage, airplanes, and skyscrapers are—they are technologies so advanced, so irreducibly cognitive, that even the world’s technological superpower had none of them 200 years ago. (We didn’t have text messages forty years ago!) Even something as apparently dry as numbers can have profound emotional effects: In the statements “Your blood sugar is X mg/dL” to a diabetic, “You have Y years to live” to a cancer patient, or “Z people died” in a news report, the emotional effects are almost wholly dependent upon the value of the numbers X, Y, and Z—values of X = 100, Y = 50 and Z = 0 would be no cause for alarm (or perhaps even cause for celebration!), while values of X = 400, Y = 2, and Z = 10,000 would trigger immediate shock, terror and despair. The entire discipline of cognitive-behavioral psychotherapy depends upon the fact that talking to people about their thoughts and beliefs can have profound effects upon their emotions and actions—and in empirical studies, cognitive-behavioral psychotherapy is verified to work in a variety of circumstances and is more effective than medication for virtually every mental disorder. We do not think as men but feel as rats; we think and feel as human beings.

Because emotions are evolved instincts that we have limited control over and that we share with other animals, we are often inclined to suppose that they are simple, stupid, irrational—but on the contrary they are mind-bogglingly complex, brilliantly intelligent, and the essence of what it means to be a rational being. People who don’t have emotions aren’t rational—they are inert. In psychopathology a loss of capacity for emotion is known as flat affect, and it is often debilitating; it is often found in schizophrenia and autism, and in its most extreme forms it causes catatonia, that is, a total lack of body motion. From Plato to Star Trek, Western culture has taught us to think that a loss of emotion would improve our rationality; but on the contrary, a loss of all emotion would render us completely vegetative. Lieutenant Commander Data without his emotion chip would stand in one place and do nothing—for this is what people without emotion actually do.

Indeed, attractive and aversive experiences—that is, emotions—are the core of goal-seeking behavior, without which rationality is impossible. Apparently simple experiences like pleasure and pain (let alone obviously complicated ones like jealousy and patriotism) are so complex that the most advanced robots in the world cannot even get close to simulating them. Injure a rat, and it will withdraw and cry out in pain; damage a robot (at least anything short of a state-of-the-art research robot), and it will not react at all, continuing ineffectually through the same motions it was attempting a moment ago. This shows that rats are smarter than robots—an organism that continues on its way regardless of the stimulus is more like a plant than an animal.

Our emotions do sometimes fail us. They hurt us, they put us at risk, they make us behave in ways that are harmful or irrational. Yet to declare on these grounds that emotions are the enemy of reason would be like declaring that we should all poke out our eyes because sometimes we are fooled by optical illusions. It would be like saying that a shirt with one loose thread is unwearable, that a mathematician who once omits a negative sign should never again be trusted. This is not rationality but perfectionism. Like human eyes, human emotions are rational the vast majority of the time, and when they aren’t, this is cause for concern. Truly irrational emotions include mania, depression, phobia, and paranoia—and it’s no accident that we respond to these emotions with psychotherapy and medication.

Expressivism is legitimate precisely because it is not a challenger to moral realism. Personally, I think that expressivism is wrong because moral claims express facts as much as they express attitudes; but given our present state of knowledge about cognitive science, that is the sort of question upon which reasonable people can disagree. Moreover, the close ties between emotion and reason may ultimately entail that we are wrong to make the distinction in the first place. It is entirely reasonable, at our present state of knowledge, to think that moral judgments are primarily emotional rather than propositional. What is not reasonable, however, is the claim that moral statements cannot be objectively justified—the evidence against this claim is simply too compelling to ignore. If moral claims are emotions, they are emotions that can be objectively justified.

Against Moral Anti-Realism

Sep 22 JDN 2460576

Moral anti-realism is more philosophically sophisticated than relativism, but it is equally mistaken. It is what it sounds like: the negation of moral realism. Moral anti-realists hold that moral claims are meaningless because they rest upon presumptions about the world that fail to hold. To an anti-realist, “genocide is wrong” is meaningless because there is no such thing as “wrong”, much as to any sane person “unicorns have purple feathers” is meaningless because there are no such things as unicorns. They aren’t saying that genocide isn’t wrong—they’re saying that wrong itself is a defective concept.

The vast majority of people profess strong beliefs in moral truth, and indeed strong beliefs about particular moral issues, such as abortion, capital punishment, same-sex marriage, euthanasia, contraception, civil liberties, and war. There is at the very least a troubling tension here between academia and daily life.

This does not by itself prove that moral truths exist. Ordinary people could be simply wrong about these core beliefs. Indeed, I must acknowledge that most ordinary people clearly are deeply ignorant about certain things, as only 55% of Americans believe that the theory of evolution is true, and only 66% of Americans agree that most of the recent change in Earth’s climate has been caused by human activity, when in reality these are scientific facts, empirically demonstrable through multiple lines of evidence, verified beyond all reasonable doubt, and both evolution and climate change are universally accepted within the scientific community. In scientific terms there is no more doubt about evolution or climate change than there is about the shape of the Earth or the structure of the atom.

If there were similarly compelling reasons to be moral anti-realists, then the fact that most people believe in morality would be little different: Perhaps most ordinary people are simply wrong about these issues. But when asked to provide similarly compelling evidence for why they reject the moral views of ordinary people, moral anti-realists have little to offer.

Many anti-realists will note the diversity of moral opinions in the world, as John Burgess did, which would be rather like noting the diversity of beliefs about the soul as an argument against neuroscience, or noting the diversity of beliefs about the history of life as an argument against evolution. Many people are wrong about many things that science has shown to be the case; this is worrisome for various reasons, but it is not an argument against the validity of scientific knowledge. Similarly, a diversity of opinions about morality is worrisome, but hardly evidence against the validity of morality.

In fact, when they talk about such fundamental disagreements in morality, anti-realists don’t have very compelling examples. It’s easy to find fundamental disagreements about biology—ask an evolutionary biologist and a Creationist whether humans share an ancestor with chimpanzees. It’s easy to find fundamental disagreements about cosmology—ask a physicist and an evangelical Christian how the Earth began. It’s easy to find fundamental disagreements about climate—ask a climatologist and an oil company executive whether human beings are causing global warming. But where are these fundamental disagreements in morality? Sure, on specific matters there is some disagreement. There are differences between cultures regarding what animals it is acceptable to eat, and differences between cultures about what constitutes acceptable clothing, and differences on specific political issues. But in what society is it acceptable to kill people arbitrarily? Where is it all right to steal whatever you want? Where is lying viewed as a good thing? Where is it obligatory to eat only dirt? In what culture has wearing clothes been a crime? Moral realists are by no means committed to saying that everyone agrees about everything—but it does support our case to point out that most people agree on most things most of the time.

There are a few compelling cases of moral disagreement, but they hardly threaten moral realism. How might we show one culture’s norms to be better than another’s? Compare homicide rates. Compare levels of poverty. Compare overall happiness, perhaps using surveys—or even brain scans. This kind of data exists, and it has a fairly clear pattern: people living in social democratic societies (such as Sweden and Norway) are wealthier, safer, longer-lived, and overall happier than people in other societies. Moreover, the same publicly-available data show that democratic societies in general do much better than authoritarian societies, by almost any measure. This is an empirical fact. It doesn’t necessarily mean that such societies are doing everything right—but they are clearly doing something right. And it really isn’t so implausible to say that what they are doing right is enforcing a good system of moral, political, and cultural norms.

Then again, perhaps some people would accept these empirical facts but still insist that their culture is superior; suppose the disagreement really is radical and intractable. This still leaves two possibilities for moral realism.

The most obvious answer would be to say that one group is wrong—that, objectively, one culture is better than another.

But even if that doesn’t work, there is another way: Perhaps both are right, or more precisely, perhaps these two cultural systems are equally good but incompatible. Is this relativism? Some might call it that, but if it is, it’s relativism of a very narrow kind. I am emphatically not saying that all existing cultures are equal, much less that all possible cultures are equal. Instead, I am saying that it is entirely possible to have two independent moral systems which prescribe different behaviors yet nonetheless result in equally-good overall outcomes.

I could make a mathematical argument involving local maxima of nonlinear functions, but instead I think I’ll use an example: Traffic laws.

In the United States, we drive on the right side of the road. In the United Kingdom, they drive on the left side. Which way is correct? Both are—both systems work well, and neither is superior in any discernible way. In fact, there are other systems that would be just as effective, like the system of all one-way roads that prevails in Manhattan.

Yet does this mean that we should abandon reason in our traffic planning, throw up our hands and declare that any traffic system is as good as any other? On the contrary—there are plenty of possible traffic systems that clearly don’t work. Pointing several one-way roads into one another with no exit is clearly not going to result in good traffic flow. Having each driver flip a coin to decide whether to drive on the left or the right would result in endless collisions. Moreover, our own system clearly isn’t perfect. Nearly 40,000 Americans die in car collisions every year; perhaps we can find a better system that will prevent some or all of these deaths. The mere fact that two, or three, or even 400 different systems of laws or morals are equally good does not entail that all systems are equally good. Even if two cultures really are equal, that doesn’t mean we need to abandon moral realism; it merely means that some problems have multiple solutions. “X² = 4; what is X?” has two perfectly correct answers (2 and -2), but it also has an infinite variety of wrong answers.
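The mathematical argument alluded to above can be made concrete: a nonlinear objective can have two separated local maxima of exactly the same height, so “different but equally good” is a perfectly ordinary mathematical situation. A minimal sketch in Python (the function here is my own choice, purely illustrative):

```python
# f(x) = -(x^2 - 1)^2 has two distinct global maxima of equal height,
# at x = -1 and x = +1: two different "solutions", equally good.
def f(x):
    return -(x * x - 1) ** 2

# Scan a fine grid and collect every point that attains the best value.
xs = [i / 1000 for i in range(-2000, 2001)]
best = max(f(x) for x in xs)
peaks = [x for x in xs if abs(f(x) - best) < 1e-12]

print(peaks)  # -> [-1.0, 1.0]: two separate peaks, one height
```

Either peak is a correct answer to “maximize f”, just as driving on the left or on the right is a correct answer to “coordinate traffic”; yet almost every other point on the curve is strictly worse, so acknowledging the tie in no way makes every answer equally good.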

In fact, moral disagreement may not be evidence of anti-realism at all. In order to disagree with someone, you must think that there is an objective fact to be decided. If moral statements were seen as arbitrary and subjective, then people wouldn’t argue about them very much. Imagine an argument: “Chocolate is the best flavor of ice cream!” “No, vanilla is the best!” This sort of argument might happen on occasion between seven-year-olds, but it is definitely not the sort of thing we hear from mature adults. This is because as adults we realize that tastes in ice cream really are largely subjective. An anti-realist can, in theory, account for this, if they can explain why moral values are falsely perceived as objective while values in taste are not; but if all values really are arbitrary and subjective, why is it that this is obvious to everyone in the one case and not the other? In fact, there are compelling reasons to think that we couldn’t perceive moral values as arbitrary even if we tried. Some people say “abortion is a right”, others say “abortion is murder”. Even if we were to say that these are purely arbitrary, we would still be left with the task of deciding what laws to make on abortion. Regardless of where the goals come from, some goals are just objectively incompatible.

Another common anti-realist argument rests upon the way that arguments about morality often become emotional and irrational. Charles Stevenson has made this argument; apparently Stevenson has never witnessed an argument about religion, science, or policy, certainly not one outside academia. Many laypeople will insist passionately that the free market is perfect, global warming is a lie, or the Earth is only 6,000 years old. (Often the same people, come to think of it.) People will grow angry and offended if such beliefs are disputed. Yet these are objectively false claims. Unless we want to be anti-realists about GDP, temperature and radiometric dating, emotional and irrational arguments cannot compel us to abandon realism.

Another frequent claim, commonly known as the “argument from queerness”, says that moral facts would need to be something very strange, usually imagined as floating obligations existing somewhere in space; but this is rather like saying that mathematical facts cannot exist because we do not see floating theorems in space and we have never met a perfect triangle. In fact, there is no such thing as a floating speed of light or a floating Schrödinger’s equation either, but no one thinks this is an argument against physics.

A subtler version of this argument, the original “argument from queerness” put forth by J.L. Mackie, says that moral facts are strange because they are intrinsically motivating, something no other kind of facts would be. This is no doubt true; but it seems to me a fairly trivial observation, since part of the definition of “moral fact” is that anything which has this kind of motivational force is a moral (or at least normative) fact. Any well-defined natural kind is subject to the same sort of argument. Spheres are perfectly round three-dimensional objects, something no other object is. Eyes are organs that perceive light, something no other organ does. Moral facts are indeed facts that categorically motivate action, which no other thing does—but so what? All this means is that we have a well-defined notion of what it means to be a moral fact.

Finally, it is often said that moral claims are too often based on religion, and religion is epistemically unfounded, so morality must fall as well. Now, unlike most people, I completely agree that religion is epistemically unfounded. Instead, the premise I take issue with is the idea that moral claims have anything to do with religion. A lot of people seem to think so; but in fact our most important moral values transcend religion and in many cases actually contradict it.

Now, it may well be that the majority of claims people make about morality are to some extent based in their religious beliefs. The majority of governments in history have been tyrannical; does that mean that government is inherently tyrannical, there is no such thing as a just government? The vast majority of human beings have never traveled in outer space; does that mean space travel is impossible? Similarly, I see no reason to say that simply because the majority of moral claims (maybe) are religious, therefore moral claims are inherently religious.

Generally speaking, moral anti-realists make a harsh distinction between morality and other domains of knowledge. They agree that there are such things as trucks and comets and atoms, but do not agree that there are such things as obligations and rights. Indeed, a typical moral anti-realist speaks as if they are being very rigorous and scientific while we moral realists are being foolish, romantic, even superstitious. Moral anti-realism has an attitude of superciliousness not seen in a scientific faction since behaviorism.

But in fact, I think moral anti-realism is the result of a narrow understanding of fundamental physics and cognitive science. It is a failure to drink deep enough of the Pierian springs. This is not surprising, since fundamental physics and cognitive science are so mind-bogglingly difficult that even the geniuses of the world barely grasp them. Quoth Feynman: “I think I can safely say that nobody understands quantum mechanics.” This was of course a bit overstated—Feynman surely knew that there are things we do understand about quantum physics, for he was among those who best understood them. Still, even the brightest minds in the world face total bafflement before problems like dark energy, quantum gravity, the binding problem, and the Hard Problem. It is no moral failing to have a narrow understanding of fundamental physics and cognitive science, for the world’s greatest minds have a scarcely broader understanding.

The failing comes from trying to apply this narrow understanding of fundamental science to moral problems without the humility to admit that the answers are never so simple. “Neuroscience proves we have no free will.” No it doesn’t! It proves we don’t have the kind of free will you thought we did. “We are all made of atoms, therefore there can be no such thing as right and wrong.” And what do you suppose we would have been made of if there were such things as right and wrong? Magical fairy dust?

Here is what I think moral anti-realists get wrong: They hear only part of what scientists say. Neuroscientists explain to them that the mind is a function of matter, and they hear it as if we had said there is only mindless matter. Physicists explain to them that we have much more precise models of atomic phenomena than we do of human behavior, and they hear it as if we had said that scientific models of human behavior are fundamentally impossible. They trust that we know very well what atoms are made of and very poorly what is right and wrong—when quite the opposite is the case.

In fact, the more we learn about physics and cognitive science, the more similar the two fields seem. There was a time when Newtonian mechanics ruled, when everyone thought that physical objects are made of tiny billiard balls bouncing around according to precise laws, while consciousness was some magical, “higher” spiritual substance that defied explanation. But now we understand that quantum physics is all chaos and probability, while cognitive processes can be mathematically modeled and brain waves can be measured in the laboratory. Something as apparently simple as a proton—let alone an extended, complex object, like a table or a comet—is fundamentally a functional entity, a unit of structure rather than substance. To be a proton is to be organized the way protons are and to do what protons do; and so to be human is to be organized the way humans are and to do what humans do. The eternal search for “stuff” of which everything is made has come up largely empty; eventually we may find the ultimate “stuff”, but when we do, it will already have long been apparent that substance is nowhere near as important as structure. Reductionism isn’t so much wrong as beside the point—when we want to understand what makes a table a table or what makes a man a man, it simply doesn’t matter what stuff they are made of. The table could be wood, glass, plastic, or metal; the man could be carbon, nitrogen and water like us, or else silicon and tantalum like Lieutenant Commander Data on Star Trek. Yes, structure must be made of something, and the substance does affect the structures that can be made out of it, but the structure is what really matters, not the substance.

Hence, I think it is deeply misguided to suggest that because human beings are made of molecules, this means that we are just the same thing as our molecules. Love is indeed made of oxytocin (among other things), but only in the sense that a table is made of wood. To know that love is made of oxytocin really doesn’t tell us very much about love; we need also to understand how oxytocin interacts with the bafflingly complex system that is a human brain—and indeed how groups of brains get together in relationships and societies. This is because love, like so much else, is not substance but function—something you do, not something you are made of.

It is not hard, rigorous science that says love is just oxytocin and happiness is just dopamine; it is naive, simplistic science. It is the sort of “science” that comes from overlaying old prejudices (like “matter is solid, thoughts are ethereal”) with a thin veneer of knowledge. To be a realist about protons but not about obligations is to be a realist about some functional relations and not others. It is to hear “mind is matter”, and fail to understand the is—the identity between them—instead acting as if we had said “there is no mind; there is only matter”. You may find it hard to believe that mind can be made of matter, as do we all; yet the universe cares not about our incredulity. The perfect correlation between neurochemical activity and cognitive activity has been verified in far too many experiments to doubt. Somehow, that kilogram of wet, sparking gelatin in your head is actually thinking and feeling—it is actually you.

And once we realize this, I do not think it is a great leap to realize that the vast collection of complex, interacting bodies moving along particular trajectories through space that was the Holocaust was actually wrong, really, objectively wrong.

Against Moral Relativism

Moral relativism is surprisingly common, especially among undergraduate students. There are also some university professors who espouse it, typically but not always from sociology, gender studies or anthropology departments (examples include Marshall Sahlins, Stanley Fish, Susan Harding, Richard Rorty, Michael Fischer, and Alison Renteln). There is a fairly long tradition of moral relativism, from Edvard Westermarck in the 1930s to Melville Herskovits, to more recently Francis Snare and David Wong in the 1980s. In 1947, the American Anthropological Association released a formal statement declaring that moral relativism was the official position of the anthropology community, though this has since been retracted.

All of this is very, very bad, because moral relativism is an incredibly naive moral philosophy and a dangerous one at that. Vitally important efforts to advance universal human rights are conceptually and sometimes even practically undermined by moral relativists. Indeed, look at that date again: 1947, two years after the end of World War II. The world’s civilized cultures had just finished the bloodiest conflict in history, including some ten million people murdered in cold blood for their religion and ethnicity, and the very survival of the human species hung in the balance with the advent of nuclear weapons—and the American Anthropological Association was insisting that morality is meaningless independent of cultural standards? Were they trying to offer an apologia for genocide?

What is relativism trying to say, anyway? Often the arguments get tied up in knots. Consider a particular example, infanticide. Moral relativists will sometimes argue, for example, that infanticide is wrong in the modern United States but permissible in ancient Inuit society. But is this itself an objectively true normative claim? If it is, then we are moral realists. Indeed, the dire circumstances of ancient Inuit society would surely justify certain life-and-death decisions we wouldn’t otherwise accept. (Compare “If we don’t strangle this baby, we may all starve to death” and “If we don’t strangle this baby, we will have to pay for diapers and baby food”.) Circumstances can change what is moral, and this includes the circumstances of our cultural and ecological surroundings. So there could well be an objective normative fact that infanticide is justified by the circumstances of ancient Inuit life. But if there are objective normative facts, this is moral realism. And if there are no objective normative facts, then all moral claims are basically meaningless. Someone could just as well claim that infanticide is good for modern Americans and bad for ancient Inuits, or that larceny is good for liberal-arts students but bad for engineering students.

If instead all we mean is that particular acts are perceived as wrong in some societies but not in others, this is a factual claim, and on certain issues the evidence bears it out. But without some additional normative claim about whose beliefs are right, it is morally meaningless. Indeed, the idea that whatever society believes is right is a particularly foolish form of moral realism, as it would justify any behavior—torture, genocide, slavery, rape—so long as society happens to practice it, and it would never justify any kind of change in any society, because the status quo is by definition right. Indeed, it’s not even clear that this is logically coherent, because different cultures disagree, and within each culture, individuals disagree. To say that an action is “right for some, wrong for others” doesn’t solve the problem—because either it is objectively normatively right or it isn’t. If it is, then it’s right, and it can’t be wrong; and if it isn’t—if nothing is objectively normatively right—then relativism itself collapses as no more sound than any other belief.

In fact, the most difficult part of defending common-sense moral realism is explaining why it isn’t universally accepted. Why are there so many relativists? Why do so many anthropologists and even some philosophers scoff at the most fundamental beliefs that virtually everyone in the world has?

I should point out that it is indeed relativists, and not realists, who scoff at the most fundamental beliefs of other people. Relativists are fond of taking a stance of indignant superiority in which moral realism is just another form of “ethnocentrism” or “imperialism”. The most common point of contention recently is female circumcision, which is considered completely normal or even good in some African societies but is viewed with disgust and horror by most Western people. Other common examples include abortion, clothing (especially the Islamic burqa and hijab), male circumcision, and marriage; given the incredible diversity in human food, clothing, language, religion, behavior, and technology, there are surprisingly few moral issues on which different cultures disagree—but relativists like to milk them for all they’re worth!

But I dare you, anthropologists: Take a poll. Ask people which is more important to them: their belief that, say, female circumcision is immoral, or their belief that moral right and wrong are objective truths? Virtually anyone in any culture anywhere in the world would sooner admit they are wrong about some particular moral issue than assent to the claim that there is no such thing as a wrong moral belief. I for one would sooner abandon just about any belief I hold than abandon the belief that there are objective normative truths. I would sooner agree that the Earth is flat and 6,000 years old, that the sky is green, that I am a brain in a vat, that homosexuality is a crime, that women are inferior to men, or that the Holocaust was a good thing—than I would ever agree that there is no such thing as right or wrong. This is of course because once I agreed that there is no objective normative truth, I would be forced to abandon everything else as well—since without objective normativity there is no epistemic normativity, and hence no justice, no truth, no knowledge, no science. If there is nothing objective to say about how we ought to think and act, then we might as well say the Earth is flat and the sky is green.

So yes, when I encounter other cultures with other values and ideas, I am forced to deal with the fact that they and I disagree about many things, important things that people really should agree upon. We disagree about God, about the afterlife, about the nature of the soul; we disagree about many specific ethical norms, like those regarding racial equality, feminism, sexuality and vegetarianism. We may disagree about economics, politics, social justice, even family values. But as long as we are all humans, we probably agree about a lot of other important things, like “murder is wrong”, “stealing is bad”, and “the sky is blue”. And one thing we definitely do not disagree about—the one cornerstone upon which all future communication can rest—is that these things matter, that they really do describe actual features of an actual world that are worth knowing. If it turns out that I am wrong about these things, I would want to know! I’d much rather find out I’d been living the wrong way than keep living the same way while pretending it doesn’t matter. I don’t think I am alone in this; indeed, I suspect that the reason people get so angry when I tell them that religion is untrue is precisely because they realize how important it is. One thing religious people never say is “Well, God is imaginary to you, perhaps; but to me God is real. Truth is relative.” I’ve heard atheists defend other people’s beliefs in such terms—but no one ever defends their own beliefs that way. No Evangelical Baptist thinks that Christianity is an arbitrary social construction. No Muslim thinks that Islam is just one equally-valid perspective among many. It is you, relativists, who deny people’s fundamental beliefs.

Yet the fact that relativists accuse realists of being chauvinistic hints at the deeper motivations of moral relativism. In a word: Guilt. Moral relativism is an outgrowth of the baggage of moral guilt and self-loathing that Western societies have built up over the centuries. Don’t get me wrong: Western cultures have done terrible things, many terrible things, all too recently. We needn’t go so far back as the Crusades or the ethnocidal “colonization” of the Americas; we need only look to the carpet-bombing of Dresden in 1945 or the defoliation of Vietnam in the 1960s, or even the torture program as recently as 2009. There is much evil that even the greatest nations of the world have to answer for. For all our high ideals, even America, the nation of “life, liberty, and the pursuit of happiness”, the culture of “liberty and justice for all”, has murdered thousands of innocent people—and by “murder” I mean murder, killing not merely by accident in the collateral damage of necessary war, but indeed in acts of intentional and selfish cruelty. Not all war is evil—but many wars are, and America has fought in some of them. No Communist radical could ever burn so much of the flag as the Pentagon itself has burned in acts of brutality.

Yet it is an absurd overreaction to suggest that there is nothing good about Western culture, nothing valuable about secularism, liberal democracy, market economics, or technological development. It is even more absurd to carry the suggestion further, to the idea that civilization was a mistake and we should all go back to our “natural” state as hunter-gatherers. Yet there are anthropologists working today who actually say such things. And then, as if we had not already traversed so far beyond the shores of rationality that we can no longer see the light of home, relativists take it one step further and assert that any culture is as good as any other.

Think about what this would mean, if it were true. To say that all cultures are equal is to say that science, education, wealth, technology, medicine—all of these are worthless. It is to say that democracy is no better than tyranny, security is no better than civil war, secularism is no better than theocracy. It is to say that racism is as good as equality, sexism is as good as feminism, feudalism is as good as capitalism.

Many relativists seem worried that moral realism can be used by the powerful and privileged to oppress others—the cishet White males who rule the world (and let’s face it, cishet White males do, pretty much, rule the world!) can use the persuasive force of claiming objective moral truth in order to oppress women and minorities. Yet what is wrong with oppressing women and minorities, if there is no such thing as objective moral truth? Only under moral realism is oppression truly wrong.

Why is America so bad at public transit?

Sep 8 JDN 2460562

In most of Europe, 20-30% of the population commutes daily by public transit. In the US, only 13% do.

Even countries much poorer than the US have more widespread use of public transit; Kenya, Russia, and Venezuela all have very high rates of public transit use.

Cities around the world are rapidly expanding and improving their subway systems; cities in the US are not.

Germany, France, Spain, Italy, and Japan are all building huge high-speed rail networks. We have essentially none.

Even Canada has better public transit than we do, and their population is just as spread out as ours.

Why are we so bad at this?

Surprisingly, it isn’t really that we are lacking in rail network. We actually have more kilometers of rail than China or the EU—though shockingly little of it is electrified, and we had nearly twice as many kilometers of rail a century ago. But we use this rail network almost entirely for freight, not passengers.

Is it that we aren’t spending enough government funds? Sort of. But it’s worth noting that we cover a higher proportion of public transit costs with government funds than most other countries. How can this be? It’s because transit systems get more efficient as they get larger, and attract more passengers as they provide better service. So when you provide really bad service, you end up spending more per passenger, and you need more government subsidies to stay afloat.

Cost is definitely part of it: It costs between two and seven times as much to build the same amount of light rail network in the US as it does in most EU countries. But that just raises another question: Why is it so much more expensive here?

This comparison isn’t with China—of course China is cheaper; they have a dictatorship, they abuse their workers, they pay peanuts. None of that is true of France or Germany, democracies where wages are just as high and worker protections are actually a good deal stronger than here. Yet it still costs two to seven times as much to build the same amount of rail in the US as it does in France or Germany.

Another part of the problem seems to be that public transit in the US is viewed as a social welfare program, rather than an infrastructure program: Rather than seeing it as a vital function of government that supports a strong economy, we see it as a last resort for people too poor to buy cars. And then it becomes politicized, because the right wing in the US hates social welfare programs and will do anything to make sure that they are cut down as much as possible.

It wasn’t always this way.

As recently as 1970, most US major cities had strong public transit systems. But now it’s really only the coastal cities that have them; cities throughout the South and Midwest have massively divested from their public transit. This goes along with a pattern of deindustrialization and suburbanization: These cities are stagnating economically and their citizens are moving out to the suburbs, so there’s no money for public transit and there’s more need for roads.

But the decline of US public transit goes back even further than that. Average transit trips per person in the US fell from 115 per year in 1950 to 36 per year in 1970.

This long, slow decline has only gotten worse as a result of the COVID pandemic; with more and more people working remotely, there’s just less need for commuting in general. (Then again, that also means fewer car miles, so it’s probably a good thing from an environmental perspective.)

Once a public transit system starts failing, it gets caught in a vicious cycle: it loses revenue, so it cuts back on service, so it becomes more inconvenient, so it loses even more revenue. Really successful public transit systems require very heavy investment in order to maintain fast, convenient service across an entire city. Any less than that, and people will just turn to cars instead.

Currently, the public transit systems in most US cities are suffering severe financial problems, largely as a result of the pandemic; they are facing massive shortfalls in their budgets. The federal government often helps with the capital costs of buying vehicles and laying down new lines, but not with the operating costs of actually running the system.

There seems to be some kind of systemic failure in the US in particular; something about our politics, or our economy, or our culture just makes us uniquely bad at building and maintaining public transit.

What should we do about this?

One option would be to do nothing—laissez faire. Maybe cars are just a more efficient mode of transportation, or better for what Americans want, and we should accept that.

But when you look at the externalities involved, it becomes clear that this is not the right approach. While cars produce enormous amounts of pollution and carbon emissions, public transit is much, much cleaner. (Electric cars are better than diesel buses, but still worse than trams and light rail—and besides, the vast majority of cars use gasoline.) Just for clean air and climate change alone, we have strong reasons to want fewer cars and more public transit.

And there are positive externalities of public transit too; it’s been estimated that for every $1 spent on public transit, a city gains $5 in economic activity. We’re leaving a lot of money on the table by failing to invest in something so productive.

We need a fundamental shift in how Americans think about public transit. Not as a last resort for the poor, but as a default option for everyone. Not as a left-wing social welfare program, but as a vital component of our nation’s infrastructure.

Whenever people get stuck in traffic, instead of resenting other drivers (who are in exactly the same boat!), they should resent that the government hasn’t supported more robust public transit systems—and then they should go out and vote for candidates and policies that will change that.

Of course, with everything else that’s wrong with our economy and our political system, I can understand why this might not be a priority right now. But sooner or later we are going to need to fix this, or it’s just going to keep getting worse and worse.

Housing should be cheap

Sep 1 JDN 2460555

We are of two minds about housing in our society. On the one hand, we recognize that shelter is a necessity, and we want it to be affordable for all. On the other hand, we see real estate as an asset, and we want it to appreciate in value and thereby provide a store of wealth. So on the one hand we want it to be cheap, but on the other hand we want it to be expensive. And of course it can’t be both.

This is not a uniquely American phenomenon. As Noah Smith points out, it seems to be how things are done in almost every country in the world. It may be foolish for me to try to turn such a tide. But I’m going to try anyway.

Housing should be cheap.

For some reason, inflation is seen as a bad thing for every other good, necessity and luxury alike; but when it comes to housing in particular—the single biggest expense for almost everyone—suddenly we are conflicted about it, and think that maybe inflation is a good thing actually.

This is because owning a home that appreciates in value provides the illusion of increasing wealth.

Yes, I said illusion. In some particular circumstances it can sometimes increase real wealth, but when housing is getting more expensive everywhere at once (which is basically true), it doesn’t actually increase real wealth—because you still need to have a home. So while you’d get more money if you sold your current home, you’d have to go buy another home that would be just as expensive. That extra wealth is largely imaginary.

In fact, what isn’t an illusion is your increased property tax bill. If you aren’t planning on selling your home any time soon, you should really see its appreciation as a bad thing; now you suddenly owe more in taxes.

Home equity lines of credit complicate this a bit; for some reason we let people collateralize part of the home—even though the whole home is already collateralized with a mortgage to someone else—and thereby turn that largely-imaginary wealth into actual liquid cash. This is just one more way that our financial system is broken; we shouldn’t be offering these lines of credit, just as we shouldn’t be creating mortgage-backed securities. Cleverness is not a virtue in finance; banking should be boring.

But you’re probably still not convinced. So I’d like you to consider a simple thought experiment, where we take either view to the extreme: Make housing 100 times cheaper or 100 times more expensive.

Currently, the median US house costs about $400,000. So in Cheap World, houses cost $4,000. In Expensive World, they cost $40 million.

In Cheap World, there is no homelessness. Seriously, zero. It would make no sense at all for the government not to simply buy everyone a house. If you want to also buy your own house—or a dozen—go ahead, that’s fine; but you get one for free, paid for by tax dollars, because that’s cheaper than a year of schooling for a high-school student; in fact it’s not much more than what we’d currently spend to house someone in a homeless shelter for a year. So given the choice between offering someone two years at a shelter and ensuring they are never homeless again, it’s pretty obvious we should choose the latter. Thus, in Cheap World, we all have a roof over our heads. And instead of storing their wealth in their homes, people in Cheap World store their wealth in stocks and bonds, which have better returns anyway.

In Expensive World, the top 1% are multi-millionaires who own homes, maybe the top 10% can afford rent, and the remaining 89% of the population are homeless. There’s simply no way to allocate the wealth of our society such that a typical middle class household has $40 million. We’re just not that rich. We probably never will be that rich. It may not even be possible to make a society that rich. In Expensive World, most people live in tents on the streets, because housing has been priced out of reach for all but the richest families.

Cheap World sounds like an amazing place to live. Expensive World is a horrific dystopia. The only thing I changed was the price of housing.


Yes, I changed it a lot; but that was to make the example as clear as possible, and it’s not even as extreme as it probably sounds. At 10% annual growth, 100 times more expensive only takes 49 years. At the current growth rate of housing prices of about 5% per year, it would take 95 years. A century from now, if we don’t fix our housing market, we will live in Expensive World. (Yes, we’ll most likely be richer then too; but will we be that much richer? Median income has not been rising nearly as fast as median housing price. If current trends continue, median income will be 5 times bigger and housing prices will be 100 times bigger—that’s still terrible.)
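Those growth figures are easy to verify: the question is just how many years it takes for prices to multiply 100-fold at a fixed annual rate. A minimal sketch of the arithmetic (the 10% and 5% rates are the post’s round numbers, not a forecast; the function name is just illustrative):

```python
import math

# Years until prices multiply by a given factor at a fixed annual growth rate:
# solve (1 + rate)^years = factor  =>  years = ln(factor) / ln(1 + rate),
# rounded up to the first whole year at which the factor is reached.
def years_to_multiply(factor, annual_rate):
    return math.ceil(math.log(factor) / math.log(1 + annual_rate))

print(years_to_multiply(100, 0.10))  # 49 years at 10% annual growth
print(years_to_multiply(100, 0.05))  # 95 years at 5% annual growth
```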

We’re already seeing something that feels a lot like Expensive World in some of our most expensive cities. San Francisco has ludicrously expensive housing and also a massive homelessness crisis—this is not a coincidence. Homelessness does still exist in more affordable cities, but clearly not at the same crisis level.

I think part of the problem is that people don’t really understand what wealth is. They see the number go up, and they think that means there is more wealth. Real wealth consists in goods, not in prices. The wealth we have is made of real things, not monetary prices. Prices merely decide how wealth is allocated.

A home is wealth, yes. But it’s the same amount of real wealth regardless of what price it has, because what matters is what it’s good for. If you become genuinely richer by selling an appreciated home, you gained that extra wealth from somewhere else; it was not contained within your home. You have appropriated wealth that someone else used to have. You haven’t created wealth; you’ve merely obtained it.

For you as an individual, that may not make a difference; you still get richer. But as a society, it makes all the difference: Moving wealth around doesn’t make our society richer, and all higher prices can do is move wealth around.

This means that rising housing prices simply cannot make our whole society richer. Better houses could do that. More houses could do that. But simply raising the price tag isn’t making our society richer. If it makes anyone richer—which, again, typically it does not—it does so by moving wealth from somewhere else. And since homeowners are generally richer than non-homeowners (even aside from their housing wealth!), more expensive homes means moving wealth from poorer people to richer people—increased inequality.

We used to have affordable housing, just a couple of generations ago. But we may never have truly affordable housing again, because people really don’t like to see that number go down, and they vote for policies accordingly—especially at the local level. Our best hope right now seems to be to keep it from going up faster than the growth rate of income, so that homes don’t become any more unaffordable than they already are.

But frankly I’m not optimistic. I think part of the cyberpunk dystopia we’re careening towards is Expensive World.

How to detect discrimination, empirically

Aug 25 JDN 2460548

For concreteness, I’ll use men and women as my example, though the same principles would apply for race, sexual orientation, and so on. Suppose we find that there are more men than women in a given profession; does this mean that women are being discriminated against?

Not necessarily. Maybe women are less interested in that kind of work, or innately less qualified. Is there a way we can determine empirically that it really is discrimination?

It turns out that there is. All we need is a reliable measure of performance in that profession. Then, we compare performance between men and women, and that comparison can tell us whether discrimination is happening or not. The key insight is that workers in a job are not a random sample; they are a selected sample. The results of that selection can tell us whether discrimination is happening.

Here’s a simple model to show how this works.

Suppose there are five different skill levels in the job, from 1 to 5 where 5 is the most skilled. And suppose there are 5 women and 5 men in the population.

1. Baseline

The baseline case to consider is when innate talents are equal and there is no discrimination. In that case, we should expect men and women to be equally represented in the profession.

For the simplest case, let’s say that there is one person at each skill level:

| Men | Women |
|-----|-------|
| 1   | 1     |
| 2   | 2     |
| 3   | 3     |
| 4   | 4     |
| 5   | 5     |

Now suppose that everyone above a certain skill threshold gets hired. Since we’re assuming no discrimination, the threshold should be the same for men and women. Let’s say it’s 3; then these are the people who get hired:

| Hired Men | Hired Women |
|-----------|-------------|
| 3         | 3           |
| 4         | 4           |
| 5         | 5           |

The result is that not only are there the same number of men and women in the job, their skill levels are also the same. There are just as many highly-competent men as highly-competent women.

2. Innate Differences

Now, suppose there is some innate difference in talent between men and women for this job. For most jobs this seems suspicious, but consider pro sports: Men really are better at basketball, in general, than women, and this is pretty clearly genetic. So it’s not absurd to suppose that for at least some jobs, there might be some innate differences. What would that look like?


Again suppose a population of 5 men and 5 women, but now the women are a bit less qualified: There are two 1s and no 5s among the women.

| Men | Women |
|-----|-------|
| 1   | 1     |
| 2   | 1     |
| 3   | 2     |
| 4   | 3     |
| 5   | 4     |

Then, this is the group that will get hired:

| Hired Men | Hired Women |
|-----------|-------------|
| 3         | 3           |
| 4         | 4           |
| 5         |             |

The result will be fewer women who are on average less qualified. The most highly-qualified individuals at that job will be almost entirely men. (In this simple model, entirely men; but you can easily extend it so that there are a few top-qualified women.)

This is in fact what we see for a lot of pro sports; in a head-to-head match, even the best WNBA teams would generally lose against most NBA teams. That’s what it looks like when there are real innate differences.

But it’s hard to find clear examples outside of sports. The genuine, large differences in size and physical strength between the sexes just don’t seem to be associated with similar differences in mental capabilities or even personality. You can find some subtler effects, but nothing very large—and certainly nothing large enough to explain the huge gender gaps in various industries.

3. Discrimination

What does it look like when there is discrimination?

Now assume that men and women are equally qualified, but it’s harder for women to get hired, because of discrimination. The key insight here is that this amounts to women facing a higher threshold. Where men only need to have level 3 competence to get hired, women need level 4.

So if the population looks like this:

| Men | Women |
|-----|-------|
| 1   | 1     |
| 2   | 2     |
| 3   | 3     |
| 4   | 4     |
| 5   | 5     |

The hired employees will look like this:

| Hired Men | Hired Women |
|-----------|-------------|
| 3         |             |
| 4         | 4           |
| 5         | 5           |

Once again we’ll have fewer women in the profession, but they will be on average more qualified. The top-performing individuals will be as likely to be women as they are to be men, while the lowest-performing individuals will be almost entirely men.

This is the kind of pattern we observe when there is discrimination. Do we see it in real life?

Yes, we see it all the time.

Corporations with women CEOs are more profitable.

Women doctors have better patient outcomes.

Startups led by women are more likely to succeed.

This shows that there is some discrimination happening, somewhere in the process. Does it mean that individual firms are actively discriminating in their hiring process? No, it doesn’t. The discrimination could be happening somewhere else; maybe it happens during education, or once women get hired. Maybe it’s a product of sexism in society as a whole, that isn’t directly under the control of employers. But it must be in there somewhere. If women are both rarer and more competent, there must be some discrimination going on.

What if there is also innate difference? We can detect that too!

4. Both

Suppose now that men are on average more talented, but there is also discrimination against women. Then the population might look like this:

| Men | Women |
|-----|-------|
| 1   | 1     |
| 2   | 1     |
| 3   | 2     |
| 4   | 3     |
| 5   | 4     |

And the hired employees might look like this:

| Hired Men | Hired Women |
|-----------|-------------|
| 3         |             |
| 4         |             |
| 5         | 4           |

In such a scenario, you’ll see a large gender imbalance, but there may not be a clear difference in competence. The tiny fraction of women who get hired will perform about as well as the men, on average.

Of course, this assumes that the two effects are of equal strength. In reality, we might see a whole spectrum of possibilities, from very strong discrimination with no innate differences, all the way to very large innate differences with no discrimination. The outcomes will then be similarly along a spectrum: When discrimination is much larger than innate difference, women will be rare but more competent. When innate difference is much larger than discrimination, women will be rare and less competent. And when there is a mix of both, women will be rare but won’t show as much difference in competence.

Moreover, if you look closer at the distribution of performance, you can still detect the two effects independently. If the lowest-performing workers are almost all men, that’s evidence of discrimination against women; while if the highest-performing workers are almost all men, that’s evidence of innate difference. And if you look at the table above, that’s exactly what we see: Both the 3 and the 5 are men, indicating the presence of both effects.
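The entire toy model can be sketched in a few lines of code. This is just an illustration of the logic above, with the skill lists taken from the tables; `hire` and `mean` are made-up helper names, and hiring is modeled as keeping everyone at or above a (possibly gender-specific) threshold:

```python
# Hiring as threshold selection: keep everyone at or above the threshold.
def hire(skills, threshold):
    return [s for s in skills if s >= threshold]

def mean(xs):
    return sum(xs) / len(xs)

men = [1, 2, 3, 4, 5]          # one person at each skill level
women_equal = [1, 2, 3, 4, 5]  # equal innate talent
women_lower = [1, 1, 2, 3, 4]  # innately less qualified (case 2)

# Case 1 (baseline): same talent, same threshold -> identical hired groups.
assert hire(men, 3) == hire(women_equal, 3)

# Case 2 (innate difference): fewer women hired, less skilled on average.
print(mean(hire(men, 3)), mean(hire(women_lower, 3)))   # 4.0 3.5

# Case 3 (discrimination): women face a higher threshold -> fewer women
# hired, but those hired are MORE skilled on average than the men.
print(mean(hire(men, 3)), mean(hire(women_equal, 4)))   # 4.0 4.5

# Case 4 (both effects): women are rare, but average skill roughly
# matches the men, masking both effects in the mean.
print(mean(hire(men, 3)), mean(hire(women_lower, 4)))   # 4.0 4.0

# Lowering women's threshold back to the men's restores the baseline.
assert hire(women_equal, 3) == hire(men, 3)
```

The diagnostic pattern falls right out of the averages: rarer-but-better signals a higher threshold (discrimination), rarer-and-worse signals an innate difference, and rare-but-similar is consistent with both at once.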

What does affirmative action do?

Effectively, affirmative action lowers the threshold for hiring women (or minorities) in order to equalize representation in the workplace. In the presence of discrimination raising that threshold, this is exactly what we need! It can take us from case 3 (discrimination) to case 1 (equality), or from case 4 (both discrimination and innate difference) to case 2 (innate difference only).

Of course, it’s possible to overshoot, applying more affirmative action than we should. If we achieve better representation of women, but the lowest performers at the job are women, then we have overshot, and are effectively now discriminating against men. Fortunately, there is very little evidence of this in practice. In general, even with affirmative action programs in place, we tend to find that the lowest performers are still men—so there is still discrimination against women that we’ve failed to compensate for.

What if we can’t measure competence?

Of course, it’s possible that we don’t have good measures of competence in a given industry. (One must wonder how firms decide who to hire, but frankly I’m prepared to believe they’re just really bad at it.) Then we can’t observe discrimination statistically in this way. What do we do then?

Well, there is at least one avenue left for us to detect discrimination: We can do direct experiments comparing resumes with male names versus female names. These sorts of experiments typically don’t find very much, though—at least for women. For different races, they absolutely do find strong results. They also find evidence of discrimination against people with disabilities, older people, and people who are physically unattractive. There’s also evidence of intersectional effects, where women of particular ethnic groups get discriminated against even when women in general don’t.

But this will only pick up discrimination if it occurs during the hiring process. The advantage of having a competence measure is that it can detect discrimination that occurs anywhere—even outside employer control. Of course, if we don’t know where the discrimination is happening, that makes it very hard to fix; so the two approaches are complementary.

And there is room for new methods too; right now we don’t have a good way to detect discrimination in promotion decisions, for example. Many of us suspect that it occurs, but unless you have a good measure of competence, you can’t really distinguish promotion discrimination from innate differences in talent. We don’t have a good method for testing that in a direct experiment, either, because unlike hiring, we can’t just use fake resumes with masculine or feminine names on them.

Why are groceries so expensive?

Aug 18 JDN 2460541

There has been unusually high inflation over the past few years, mostly attributable to the COVID pandemic and its aftermath. But groceries in particular seem to have gotten especially expensive. We’ve all felt it: Eggs, milk, and toilet paper soared to extreme prices and then, even when they came back down, never came down all the way.

Why would this be?

Did it involve supply chain disruptions? Sure. Was it related to the war in Ukraine? Probably.

But it clearly wasn’t just those things—because, as the FTC recently found, grocery stores have been colluding and price-gouging. Large grocery chains like Walmart and Kroger have a lot of market power, and they used that power to raise prices considerably faster than was necessary to keep up with their increased costs; as a result, they made record profits. Their costs did genuinely increase, but they increased their prices even more, and ended up being better off.

The big chains were also better able to protect their own supply chains than smaller companies, and so the effects of the pandemic further entrenched the market power of a handful of corporations. Some of them also imposed strict delivery requirements on their suppliers, pressuring them to prioritize the big companies over the small ones.

This kind of thing is what happens when we let oligopolies take control. When only a few companies control the market, prices go up, quality goes down, and inequality gets worse.

For far too long, institutions like the FTC have failed to challenge the ever tighter concentration of our markets in the hands of a small number of huge corporations.

And it’s not just grocery stores.

Our media is dominated by five corporations: Disney, WarnerMedia, NBCUniversal, Sony, and Paramount.

Our cell phone service is 99% controlled by three corporations: T-Mobile, Verizon, and AT&T.

Our music industry is dominated by three corporations: Sony, Universal, and Warner.

Two-thirds of US airline traffic is carried by four airlines: American, Delta, Southwest, and United.

Nearly 40% of US commercial banking assets are controlled by just three banks: JPMorgan Chase, Bank of America, and Citigroup.

Do I even need to mention the incredible market share Google has in search—over 90%—or Facebook has in social media—over 50%?

And most of these lists used to be longer. Disney recently acquired 21st Century Fox. Viacom recently merged with CBS and then became Paramount. Universal recently acquired EMI. Our markets aren’t simply alarmingly concentrated; they have also been getting more concentrated over time.

Institutions like the FTC are supposed to be protecting us from oligopolies, by ensuring that corporations can’t merge and acquire each other once they reach a certain market share. But decades of underfunding and laissez-faire ideology have weakened these institutions. So many mergers that obviously shouldn’t have been allowed were allowed, because no regulatory agency had the will and the strength to stop them.

The good news is that this is finally beginning to change: The Department of Justice has recently (finally!) sued Google for maintaining a monopoly on Internet search. And among grocery stores in particular, the FTC is challenging Kroger’s acquisition of Albertsons—though it remains unclear whether that challenge will succeed.

Hopefully this is a sign that antitrust regulators have found their teeth again, and will continue to prosecute cases against oligopolies. A lot of that may depend on who ends up in the White House this November.