Against Moral Anti-Realism

Sep 22 JDN 2460576

Moral anti-realism is more philosophically sophisticated than relativism, but it is equally mistaken. It is what it sounds like: the negation of moral realism. Moral anti-realists hold that moral claims are meaningless because they rest upon presumptions about the world that fail to hold. To an anti-realist, “genocide is wrong” is meaningless because there is no such thing as “wrong”, much as to any sane person “unicorns have purple feathers” is meaningless because there are no such things as unicorns. They aren’t saying that genocide isn’t wrong—they’re saying that wrong itself is a defective concept.

The vast majority of people profess strong beliefs in moral truth, and indeed strong beliefs about particular moral issues, such as abortion, capital punishment, same-sex marriage, euthanasia, contraception, civil liberties, and war. There is at the very least a troubling tension here between academia and daily life.

This does not by itself prove that moral truths exist. Ordinary people could be simply wrong about these core beliefs. Indeed, I must acknowledge that most ordinary people clearly are deeply ignorant about certain things: only 55% of Americans believe that the theory of evolution is true, and only 66% of Americans agree that the majority of recent changes in Earth’s climate have been caused by human activity. In reality these are scientific facts, empirically demonstrable through multiple lines of evidence and verified beyond all reasonable doubt; both evolution and climate change are universally accepted within the scientific community. In scientific terms there is no more doubt about evolution or climate change than there is about the shape of the Earth or the structure of the atom.

If there were similarly compelling reasons to be moral anti-realists, then the fact that most people believe in morality would be little different: Perhaps most ordinary people are simply wrong about these issues. But when asked to provide similarly compelling evidence for why they reject the moral views of ordinary people, moral anti-realists have little to offer.

Many anti-realists will note the diversity of moral opinions in the world, as John Burgess did, which would be rather like noting the diversity of beliefs about the soul as an argument against neuroscience, or noting the diversity of beliefs about the history of life as an argument against evolution. Many people are wrong about many things that science has shown to be the case; this is worrisome for various reasons, but it is not an argument against the validity of scientific knowledge. Similarly, a diversity of opinions about morality is worrisome, but hardly evidence against the validity of morality.

In fact, when they talk about such fundamental disagreements in morality, anti-realists don’t have very compelling examples. It’s easy to find fundamental disagreements about biology—ask an evolutionary biologist and a Creationist whether humans share an ancestor with chimpanzees. It’s easy to find fundamental disagreements about cosmology—ask a physicist and an evangelical Christian how the Earth began. It’s easy to find fundamental disagreements about climate—ask a climatologist and an oil company executive whether human beings are causing global warming. But where are these fundamental disagreements in morality? Sure, on specific matters there is some disagreement. There are differences between cultures regarding what animals it is acceptable to eat, and differences between cultures about what constitutes acceptable clothing, and differences on specific political issues. But in what society is it acceptable to kill people arbitrarily? Where is it all right to steal whatever you want? Where is lying viewed as a good thing? Where is it obligatory to eat only dirt? In what culture has wearing clothes been a crime? Moral realists are by no means committed to saying that everyone agrees about everything—but it does support our case to point out that most people agree on most things most of the time.

There are a few compelling cases of moral disagreement, but they hardly threaten moral realism. How might we show one culture’s norms to be better than another’s? Compare homicide rates. Compare levels of poverty. Compare overall happiness, perhaps using surveys—or even brain scans. This kind of data exists, and it has a fairly clear pattern: people living in social democratic societies (such as Sweden and Norway) are wealthier, safer, longer-lived, and overall happier than people in other societies. Moreover, using the same publicly-available data, democratic societies in general do much better than authoritarian societies, by almost any measure. This is an empirical fact. It doesn’t necessarily mean that such societies are doing everything right—but they are clearly doing something right. And it really isn’t so implausible to say that what they are doing right is enforcing a good system of moral, political, and cultural norms.

Then again, perhaps some people would accept these empirical facts but still insist that their culture is superior; suppose the disagreement really is radical and intractable. This still leaves two possibilities for moral realism.

The most obvious answer would be to say that one group is wrong—that, objectively, one culture is better than another.

But even if that doesn’t work, there is another way: Perhaps both are right, or more precisely, perhaps these two cultural systems are equally good but incompatible. Is this relativism? Some might call it that, but if it is, it’s relativism of a very narrow kind. I am emphatically not saying that all existing cultures are equal, much less that all possible cultures are equal. Instead, I am saying that it is entirely possible to have two independent moral systems which prescribe different behaviors yet nonetheless result in equally-good overall outcomes.

I could make a mathematical argument involving local maxima of nonlinear functions, but instead I think I’ll use an example: Traffic laws.

In the United States, we drive on the right side of the road. In the United Kingdom, they drive on the left side. Which way is correct? Both are—both systems work well, and neither is superior in any discernible way. In fact, there are other systems that would be just as effective, like the system of all one-way roads that prevails in Manhattan.

Yet does this mean that we should abandon reason in our traffic planning, throw up our hands and declare that any traffic system is as good as any other? On the contrary—there are plenty of possible traffic systems that clearly don’t work. Pointing several one-way roads into one another with no exit is clearly not going to result in good traffic flow. Having each driver flip a coin to decide whether to drive on the left or the right would result in endless collisions. Moreover, our own system clearly isn’t perfect. Nearly 40,000 Americans die in car collisions every year; perhaps we can find a better system that will prevent some or all of these deaths. The mere fact that two, or three, or even 400 different systems of laws or morals are equally good does not entail that all systems are equally good. Even if two cultures really are equal, that doesn’t mean we need to abandon moral realism; it merely means that some problems have multiple solutions. “X² = 4; what is X?” has two perfectly correct answers (2 and −2), but it also has an infinite variety of wrong answers.

In fact, moral disagreement may not be evidence of anti-realism at all. In order to disagree with someone, you must think that there is an objective fact to be decided. If moral statements were seen as arbitrary and subjective, then people wouldn’t argue about them very much. Imagine an argument: “Chocolate is the best flavor of ice cream!” “No, vanilla is the best!” This sort of argument might happen on occasion between seven-year-olds, but it is definitely not the sort of thing we hear from mature adults. This is because as adults we realize that tastes in ice cream really are largely subjective. An anti-realist can, in theory, account for this, if they can explain why moral values are falsely perceived as objective while values in taste are not; but if all values are really arbitrary and subjective, why is it that this is obvious to everyone in the one case and not the other? In fact, there are compelling reasons to think that we couldn’t perceive moral values as arbitrary even if we tried. Some people say “abortion is a right”; others say “abortion is murder”. Even if we were to say that these are purely arbitrary, we would still be left with the task of deciding what laws to make on abortion. Regardless of where the goals come from, some goals are just objectively incompatible.

Another common anti-realist argument rests upon the way that arguments about morality often become emotional and irrational. Charles Stevenson has made this argument; apparently Stevenson has never witnessed an argument about religion, science, or policy, certainly not one outside academia. Many laypeople will insist passionately that the free market is perfect, global warming is a lie, or the Earth is only 6,000 years old. (Often the same people, come to think of it.) People will grow angry and offended if such beliefs are disputed. Yet these are objectively false claims. Unless we want to be anti-realists about GDP, temperature and radiometric dating, emotional and irrational arguments cannot compel us to abandon realism.

Another frequent claim, commonly known as the “argument from queerness”, says that moral facts would need to be something very strange, usually imagined as floating obligations existing somewhere in space; but this is rather like saying that mathematical facts cannot exist because we do not see floating theorems in space and we have never met a perfect triangle. In fact, there is no such thing as a floating speed of light or a floating Schrödinger’s equation either, but no one thinks this is an argument against physics.

A subtler version of this argument, the original “argument from queerness” put forth by J.L. Mackie, says that moral facts are strange because they are intrinsically motivating, something no other kind of facts would be. This is no doubt true; but it seems to me a fairly trivial observation, since part of the definition of “moral fact” is that anything which has this kind of motivational force is a moral (or at least normative) fact. Any well-defined natural kind is subject to the same sort of argument. Spheres are perfectly round three-dimensional objects, something no other object is. Eyes are organs that perceive light, something no other organ does. Moral facts are indeed facts that categorically motivate action, which no other thing does—but so what? All this means is that we have a well-defined notion of what it means to be a moral fact.

Finally, it is often said that moral claims are too often based on religion, and religion is epistemically unfounded, so morality must fall as well. Now, unlike most people, I completely agree that religion is epistemically unfounded. Instead, the premise I take issue with is the idea that moral claims have anything to do with religion. A lot of people seem to think so; but in fact our most important moral values transcend religion and in many cases actually contradict it.

Now, it may well be that the majority of claims people make about morality are to some extent based in their religious beliefs. But the majority of governments in history have been tyrannical; does that mean that government is inherently tyrannical, and that there is no such thing as a just government? The vast majority of human beings have never traveled in outer space; does that mean space travel is impossible? Similarly, I see no reason to say that simply because the majority of moral claims (maybe) are religious, moral claims are therefore inherently religious.

Generally speaking, moral anti-realists make a harsh distinction between morality and other domains of knowledge. They agree that there are such things as trucks and comets and atoms, but do not agree that there are such things as obligations and rights. Indeed, a typical moral anti-realist speaks as if they are being very rigorous and scientific while we moral realists are being foolish, romantic, even superstitious. Moral anti-realism has an attitude of superciliousness not seen in a scientific faction since behaviorism.

But in fact, I think moral anti-realism is the result of a narrow understanding of fundamental physics and cognitive science. It is a failure to drink deep enough of the Pierian springs. This is not surprising, since fundamental physics and cognitive science are so mind-bogglingly difficult that even the geniuses of the world barely grasp them. Quoth Feynman: “I think I can safely say that nobody understands quantum mechanics.” This was of course a bit overstated—Feynman surely knew that there are things we do understand about quantum physics, for he was among those who best understood them. Still, even the brightest minds in the world face total bafflement before problems like dark energy, quantum gravity, the binding problem, and the Hard Problem. It is no moral failing to have a narrow understanding of fundamental physics and cognitive science, for the world’s greatest minds have a scarcely broader understanding.

The failing comes from trying to apply this narrow understanding of fundamental science to moral problems without the humility to admit that the answers are never so simple. “Neuroscience proves we have no free will.” No it doesn’t! It proves we don’t have the kind of free will you thought we did. “We are all made of atoms, therefore there can be no such thing as right and wrong.” And what do you suppose we would have been made of if there were such things as right and wrong? Magical fairy dust?

Here is what I think moral anti-realists get wrong: They hear only part of what scientists say. Neuroscientists explain to them that the mind is a function of matter, and they hear it as if we had said there is only mindless matter. Physicists explain to them that we have much more precise models of atomic phenomena than we do of human behavior, and they hear it as if we had said that scientific models of human behavior are fundamentally impossible. They trust that we know very well what atoms are made of and very poorly what is right and wrong—when quite the opposite is the case.

In fact, the more we learn about physics and cognitive science, the more similar the two fields seem. There was a time when Newtonian mechanics ruled, when everyone thought that physical objects are made of tiny billiard balls bouncing around according to precise laws, while consciousness was some magical, “higher” spiritual substance that defied explanation. But now we understand that quantum physics is all chaos and probability, while cognitive processes can be mathematically modeled and brain waves can be measured in the laboratory. Something as apparently simple as a proton—let alone an extended, complex object, like a table or a comet—is fundamentally a functional entity, a unit of structure rather than substance. To be a proton is to be organized the way protons are and to do what protons do; and so to be human is to be organized the way humans are and to do what humans do. The eternal search for “stuff” of which everything is made has come up largely empty; eventually we may find the ultimate “stuff”, but when we do, it will already have long been apparent that substance is nowhere near as important as structure. Reductionism isn’t so much wrong as beside the point—when we want to understand what makes a table a table or what makes a man a man, it simply doesn’t matter what stuff they are made of. The table could be wood, glass, plastic, or metal; the man could be carbon, nitrogen and water like us, or else silicon and tantalum like Lieutenant Commander Data on Star Trek. Yes, structure must be made of something, and the substance does affect the structures that can be made out of it, but the structure is what really matters, not the substance.

Hence, I think it is deeply misguided to suggest that because human beings are made of molecules, this means that we are just the same thing as our molecules. Love is indeed made of oxytocin (among other things), but only in the sense that a table is made of wood. To know that love is made of oxytocin really doesn’t tell us very much about love; we need also to understand how oxytocin interacts with the bafflingly complex system that is a human brain—and indeed how groups of brains get together in relationships and societies. This is because love, like so much else, is not substance but function—something you do, not something you are made of.

It is not hard, rigorous science that says love is just oxytocin and happiness is just dopamine; it is naive, simplistic science. It is the sort of “science” that comes from overlaying old prejudices (like “matter is solid, thoughts are ethereal”) with a thin veneer of knowledge. To be a realist about protons but not about obligations is to be a realist about some functional relations and not others. It is to hear “mind is matter”, and fail to understand the is—the identity between them—instead acting as if we had said “there is no mind; there is only matter”. You may find it hard to believe that mind can be made of matter, as do we all; yet the universe cares not about our incredulity. The perfect correlation between neurochemical activity and cognitive activity has been verified in far too many experiments to doubt. Somehow, that kilogram of wet, sparking gelatin in your head is actually thinking and feeling—it is actually you.

And once we realize this, I do not think it is a great leap to realize that the vast collection of complex, interacting bodies moving along particular trajectories through space that was the Holocaust was actually wrong, really, objectively wrong.

Against Moral Relativism

Moral relativism is surprisingly common, especially among undergraduate students. There are also some university professors who espouse it, typically but not always from sociology, gender studies or anthropology departments (examples include Marshall Sahlins, Stanley Fish, Susan Harding, Richard Rorty, Michael Fischer, and Alison Renteln). There is a fairly long tradition of moral relativism, from Edvard Westermarck in the 1930s to Melville Herskovits, to more recently Francis Snare and David Wong in the 1980s. In 1947, the American Anthropological Association released a formal statement declaring that moral relativism was the official position of the anthropology community, though this has since been retracted.

All of this is very, very bad, because moral relativism is an incredibly naive moral philosophy and a dangerous one at that. Vitally important efforts to advance universal human rights are conceptually and sometimes even practically undermined by moral relativists. Indeed, look at that date again: 1947, two years after the end of World War II. The world’s civilized cultures had just finished the bloodiest conflict in history, including some ten million people murdered in cold blood for their religion and ethnicity, and the very survival of the human species hung in the balance with the advent of nuclear weapons—and the American Anthropological Association was insisting that morality is meaningless independent of cultural standards? Were they trying to offer an apologia for genocide?

What is relativism trying to say, anyway? Often the arguments get tied up in knots. Consider a particular example, infanticide. Moral relativists will sometimes argue, for example, that infanticide is wrong in the modern United States but permissible in ancient Inuit society. But is this itself an objectively true normative claim? If it is, then we are moral realists. Indeed, the dire circumstances of ancient Inuit society would surely justify certain life-and-death decisions we wouldn’t otherwise accept. (Compare “If we don’t strangle this baby, we may all starve to death” and “If we don’t strangle this baby, we will have to pay for diapers and baby food”.) Circumstances can change what is moral, and this includes the circumstances of our cultural and ecological surroundings. So there could well be an objective normative fact that infanticide is justified by the circumstances of ancient Inuit life. But if there are objective normative facts, this is moral realism. And if there are no objective normative facts, then all moral claims are basically meaningless. Someone could just as well claim that infanticide is good for modern Americans and bad for ancient Inuits, or that larceny is good for liberal-arts students but bad for engineering students.

If instead all we mean is that particular acts are perceived as wrong in some societies but not in others, this is a factual claim, and on certain issues the evidence bears it out. But without some additional normative claim about whose beliefs are right, it is morally meaningless. Indeed, the idea that whatever society believes is right is a particularly foolish form of moral realism, as it would justify any behavior—torture, genocide, slavery, rape—so long as society happens to practice it, and it would never justify any kind of change in any society, because the status quo is by definition right. Indeed, it’s not even clear that this is logically coherent, because different cultures disagree, and within each culture, individuals disagree. To say that an action is “right for some, wrong for others” doesn’t solve the problem—because either it is objectively normatively right or it isn’t. If it is, then it’s right, and it can’t be wrong; and if it isn’t—if nothing is objectively normatively right—then relativism itself collapses as no more sound than any other belief.

In fact, the most difficult part of defending common-sense moral realism is explaining why it isn’t universally accepted. Why are there so many relativists? Why do so many anthropologists and even some philosophers scoff at the most fundamental beliefs that virtually everyone in the world has?

I should point out that it is indeed relativists, and not realists, who scoff at the most fundamental beliefs of other people. Relativists are fond of taking a stance of indignant superiority in which moral realism is just another form of “ethnocentrism” or “imperialism”. The most common battleground recently is the issue of female circumcision, which is considered completely normal or even good in some African societies but is viewed with disgust and horror by most Western people. Other common choices include abortion, clothing (especially the Islamic burqa and hijab), male circumcision, and marriage. Given the incredible diversity in human food, clothing, language, religion, behavior, and technology, there are surprisingly few moral issues on which different cultures disagree—but relativists like to milk them for all they’re worth!

But I dare you, anthropologists: Take a poll. Ask people which is more important to them, their belief that, say, female circumcision is immoral, or their belief that moral right and wrong are objective truths? Virtually anyone in any culture anywhere in the world would sooner admit they are wrong about some particular moral issue than they would assent to the claim that there is no such thing as a wrong moral belief. I for one would be more willing to abandon just about any belief I hold before I would abandon the belief that there are objective normative truths. I would sooner agree that the Earth is flat and 6,000 years old, that the sky is green, that I am a brain in a vat, that homosexuality is a crime, that women are inferior to men, or that the Holocaust was a good thing—than I would ever agree that there is no such thing as right or wrong. This is of course because once I agreed that there is no objective normative truth, I would be forced to abandon everything else as well—since without objective normativity there is no epistemic normativity, and hence no justice, no truth, no knowledge, no science. If there is nothing objective to say about how we ought to think and act, then we might as well say the Earth is flat and the sky is green.

So yes, when I encounter other cultures with other values and ideas, I am forced to deal with the fact that they and I disagree about many things, important things that people really should agree upon. We disagree about God, about the afterlife, about the nature of the soul; we disagree about many specific ethical norms, like those regarding racial equality, feminism, sexuality and vegetarianism. We may disagree about economics, politics, social justice, even family values. But as long as we are all humans, we probably agree about a lot of other important things, like “murder is wrong”, “stealing is bad”, and “the sky is blue”. And one thing we definitely do not disagree about—the one cornerstone upon which all future communication can rest—is that these things matter, that they really do describe actual features of an actual world that are worth knowing. If it turns out that I am wrong about these things, I would want to know! I’d much rather find out I’d been living the wrong way than keep living the same way while pretending it doesn’t matter. I don’t think I am alone in this; indeed, I suspect that the reason people get so angry when I tell them that religion is untrue is precisely because they realize how important it is. One thing religious people never say is “Well, God is imaginary to you, perhaps; but to me God is real. Truth is relative.” I’ve heard atheists defend other people’s beliefs in such terms—but no one ever defends their own beliefs that way. No Evangelical Baptist thinks that Christianity is an arbitrary social construction. No Muslim thinks that Islam is just one equally-valid perspective among many. It is you, relativists, who deny people’s fundamental beliefs.

Yet the fact that relativists accuse realists of being chauvinistic hints at the deeper motivations of moral relativism. In a word: Guilt. Moral relativism is an outgrowth of the baggage of moral guilt and self-loathing that Western societies have built up over the centuries. Don’t get me wrong: Western cultures have done terrible things, many terrible things, all too recently. We needn’t go so far back as the Crusades or the ethnocidal “colonization” of the Americas; we need only look to the carpet-bombing of Dresden in 1945 or the defoliation of Vietnam in the 1960s, or even the torture program as recently as 2009. There is much evil that even the greatest nations of the world have to answer for. For all our high ideals, even America, the nation of “life, liberty, and the pursuit of happiness”, the culture of “liberty and justice for all”, has murdered thousands of innocent people—and by “murder” I mean murder, killing not merely by accident in the collateral damage of necessary war, but indeed in acts of intentional and selfish cruelty. Not all war is evil—but many wars are, and America has fought in some of them. No Communist radical could ever burn so much of the flag as the Pentagon itself has burned in acts of brutality.

Yet it is an absurd overreaction to suggest that there is nothing good about Western culture, nothing valuable about secularism, liberal democracy, market economics, or technological development. It is even more absurd to carry the suggestion further, to the idea that civilization was a mistake and we should all go back to our “natural” state as hunter-gatherers. Yet there are anthropologists working today who actually say such things. And then, as if we had not already traversed so far beyond the shores of rationality that we can no longer see the light of home, then relativists take it one step further and assert that any culture is as good as any other.

Think about what this would mean, if it were true. To say that all cultures are equal is to say that science, education, wealth, technology, medicine—all of these are worthless. It is to say that democracy is no better than tyranny, security is no better than civil war, secularism is no better than theocracy. It is to say that racism is as good as equality, sexism is as good as feminism, feudalism is as good as capitalism.

Many relativists seem worried that moral realism can be used by the powerful and privileged to oppress others—the cishet White males who rule the world (and let’s face it, cishet White males do, pretty much, rule the world!) can use the persuasive force of claiming objective moral truth in order to oppress women and minorities. Yet what is wrong with oppressing women and minorities, if there is no such thing as objective moral truth? Only under moral realism is oppression truly wrong.

Why is America so bad at public transit?

Sep 8 JDN 2460562

In most of Europe, 20-30% of the population commutes daily by public transit. In the US, only 13% do.

Even countries much poorer than the US have more widespread use of public transit; Kenya, Russia, and Venezuela all have very high rates of public transit use.

Cities around the world are rapidly expanding and improving their subway systems; but not here in the US.

Germany, France, Spain, Italy, and Japan are all building huge high-speed rail networks. We have essentially none.

Even Canada has better public transit than we do, and their population is just as spread out as ours.

Why are we so bad at this?

Surprisingly, it isn’t really that we are lacking in rail network. We actually have more kilometers of rail than China or the EU—though shockingly little of it is electrified, and we had nearly twice as many kilometers of rail a century ago. But we use this rail network almost entirely for freight, not passengers.

Is it that we aren’t spending enough government funds? Sort of. But it’s worth noting that we cover a higher proportion of public transit costs with government funds than most other countries. How can this be? It’s because transit systems get more efficient as they get larger, and attract more passengers as they provide better service. So when you provide really bad service, you end up spending more per passenger, and you need more government subsidies to stay afloat.

Cost is definitely part of it: It costs between two and seven times as much to build the same amount of light rail network in the US as it does in most EU countries. But that just raises another question: Why is it so much more expensive here?

This isn’t comparing with China—of course China is cheaper; they have a dictatorship, they abuse their workers, they pay peanuts. None of that is true of France or Germany, democracies where wages are just as high and worker protections are actually a good deal stronger than here. Yet it still costs two to seven times as much to build the same amount of rail in the US as it does in France or Germany.

Another part of the problem seems to be that public transit in the US is viewed as a social welfare program, rather than an infrastructure program: Rather than seeing it as a vital function of government that supports a strong economy, we see it as a last resort for people too poor to buy cars. And then it becomes politicized, because the right wing in the US hates social welfare programs and will do anything to make sure that they are cut down as much as possible.

It wasn’t always this way.

As recently as 1970, most US major cities had strong public transit systems. But now it’s really only the coastal cities that have them; cities throughout the South and Midwest have massively divested from their public transit. This goes along with a pattern of deindustrialization and suburbanization: These cities are stagnating economically and their citizens are moving out to the suburbs, so there’s no money for public transit and there’s more need for roads.

But the decline of US public transit goes back even further than that. Average transit trips per person in the US fell from 115 per year in 1950 to 36 per year in 1970.

This long, slow decline has only gotten worse as a result of the COVID pandemic; with more and more people working remotely, there’s just less need for commuting in general. (Then again, that also means fewer car miles, so it’s probably a good thing from an environmental perspective.)

Once a public transit system starts failing, it becomes a vicious cycle: It loses revenue, so it cuts back on service, so it becomes more inconvenient, so it loses even more revenue. Really successful public transit systems require very heavy investment in order to maintain fast, convenient service across an entire city. Any less than that, and people will just turn to cars instead.

Currently, the public transit systems in most US cities are suffering severe financial problems, largely as a result of the pandemic; they are facing massive shortfalls in their budgets. The federal government often helps with the capital costs of buying vehicles and laying down new lines, but not with the operating costs of actually running the system.

There seems to be some kind of systemic failure in the US in particular; something about our politics, or our economy, or our culture just makes us uniquely bad at building and maintaining public transit.

What should we do about this?

One option would be to do nothing—laissez faire. Maybe cars are just a more efficient mode of transportation, or better for what Americans want, and we should accept that.

But when you look at the externalities involved, it becomes clear that this is not the right approach. While cars produce enormous amounts of pollution and carbon emissions, public transit is much, much cleaner. (Electric cars are better than diesel buses, but still worse than trams and light rail—and besides, the vast majority of cars use gasoline.) Just for clean air and climate change alone, we have strong reasons to want fewer cars and more public transit.

And there are positive externalities of public transit too; it’s been estimated that for every $1 spent on public transit, a city gains $5 in economic activity. We’re leaving a lot of money on the table by failing to invest in something so productive.

We need a fundamental shift in how Americans think about public transit. Not as a last resort for the poor, but as a default option for everyone. Not as a left-wing social welfare program, but as a vital component of our nation’s infrastructure.

Whenever people get stuck in traffic, instead of resenting other drivers (who are in exactly the same boat!), they should resent that the government hasn’t supported more robust public transit systems—and then they should go out and vote for candidates and policies that will change that.

Of course, with everything else that’s wrong with our economy and our political system, I can understand why this might not be a priority right now. But sooner or later we are going to need to fix this, or it’s just going to keep getting worse and worse.

Housing should be cheap

Sep 1 JDN 2460555

We are of two minds about housing in our society. On the one hand, we recognize that shelter is a necessity, and we want it to be affordable for all. On the other hand, we see real estate as an asset, and we want it to appreciate in value and thereby provide a store of wealth. So on the one hand we want it to be cheap, but on the other hand we want it to be expensive. And of course it can’t be both.

This is not a uniquely American phenomenon. As Noah Smith points out, it seems to be how things are done in almost every country in the world. It may be foolish for me to try to turn such a tide. But I’m going to try anyway.

Housing should be cheap.

For some reason, inflation is seen as a bad thing for every other good, necessity and luxury alike; but when it comes to housing in particular—the single biggest expense for almost everyone—suddenly we are conflicted about it, and think that maybe inflation is a good thing actually.

This is because owning a home that appreciates in value provides the illusion of increasing wealth.

Yes, I said illusion. In some particular circumstances it can sometimes increase real wealth, but when housing is getting more expensive everywhere at once (which is basically true), it doesn’t actually increase real wealth—because you still need to have a home. So while you’d get more money if you sold your current home, you’d have to go buy another home that would be just as expensive. That extra wealth is largely imaginary.

In fact, what isn’t an illusion is your increased property tax bill. If you aren’t planning on selling your home any time soon, you should really see its appreciation as a bad thing; now you suddenly owe more in taxes.

Home equity lines of credit complicate this a bit; for some reason we let people collateralize part of the home—even though the whole home is already collateralized with a mortgage to someone else—and thereby turn that largely-imaginary wealth into actual liquid cash. This is just one more way that our financial system is broken; we shouldn’t be offering these lines of credit, just as we shouldn’t be creating mortgage-backed securities. Cleverness is not a virtue in finance; banking should be boring.

But you’re probably still not convinced. So I’d like you to consider a simple thought experiment, where we take either view to the extreme: Make housing 100 times cheaper or 100 times more expensive.

Currently, houses cost about $400,000. So in Cheap World, houses cost $4,000. In Expensive World, they cost $40 million.

In Cheap World, there is no homelessness. Seriously, zero. It would make no sense at all for the government not to simply buy everyone a house. If you want to also buy your own house—or a dozen—go ahead, that’s fine; but you get one for free, paid for by tax dollars, because that’s cheaper than a year of schooling for a high-school student; in fact it’s not much more than what we currently spend to house someone in a homeless shelter for a year. So given the choice between paying for two years in a shelter and making sure someone is never homeless again, it’s pretty obvious we should choose the latter. Thus, in Cheap World, we all have a roof over our heads. And instead of storing their wealth in their homes, people in Cheap World store their wealth in stocks and bonds, which have better returns anyway.

In Expensive World, the top 1% are multi-millionaires who own homes, maybe the top 10% can afford rent, and the remaining 89% of the population are homeless. There’s simply no way to allocate the wealth of our society such that a typical middle class household has $40 million. We’re just not that rich. We probably never will be that rich. It may not even be possible to make a society that rich. In Expensive World, most people live in tents on the streets, because housing has been priced out of reach for all but the richest families.

Cheap World sounds like an amazing place to live. Expensive World is a horrific dystopia. The only thing I changed was the price of housing.


Yes, I changed it a lot; but that was to make the example as clear as possible, and it’s not even as extreme as it probably sounds. At 10% annual growth, 100 times more expensive only takes 49 years. At the current growth rate of housing prices of about 5% per year, it would take 95 years. A century from now, if we don’t fix our housing market, we will live in Expensive World. (Yes, we’ll most likely be richer then too; but will we be that much richer? Median income has not been rising nearly as fast as median housing price. If current trends continue, median income will be 5 times bigger and housing prices will be 100 times bigger—that’s still terrible.)
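You can check that compound-growth arithmetic yourself; here is a quick back-of-the-envelope sketch in Python (purely illustrative, just solving (1 + r)^t = 100 for t):

import math

# Years of steady compound growth until prices are 100 times higher:
for rate in (0.10, 0.05):
    years = math.log(100) / math.log(1 + rate)
    print(f"{rate:.0%} per year: 100x after about {years:.1f} years")

# Prints roughly 48.3 years at 10% and 94.4 years at 5%, i.e. the 49 and
# 95 whole years cited above.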

We’re already seeing something that feels a lot like Expensive World in some of our most expensive cities. San Francisco has ludicrously expensive housing and also a massive homelessness crisis—this is not a coincidence. Homelessness does still exist in more affordable cities, but clearly not at the same crisis level.

I think part of the problem is that people don’t really understand what wealth is. They see the number go up, and they think that means there is more wealth. But real wealth consists in goods, not in prices: the wealth we have is made of real things, and prices merely decide how that wealth is allocated.

A home is wealth, yes. But it’s the same amount of real wealth regardless of what price it has, because what matters is what it’s good for. If you become genuinely richer by selling an appreciated home, you gained that extra wealth from somewhere else; it was not contained within your home. You have appropriated wealth that someone else used to have. You haven’t created wealth; you’ve merely obtained it.

For you as an individual, that may not make a difference; you still get richer. But as a society, it makes all the difference: Moving wealth around doesn’t make our society richer, and all higher prices can do is move wealth around.

This means that rising housing prices simply cannot make our whole society richer. Better houses could do that. More houses could do that. But simply raising the price tag isn’t making our society richer. If it makes anyone richer—which, again, typically it does not—it does so by moving wealth from somewhere else. And since homeowners are generally richer than non-homeowners (even aside from their housing wealth!), more expensive homes means moving wealth from poorer people to richer people—increased inequality.

We used to have affordable housing, just a couple of generations ago. But we may never have truly affordable housing again, because people really don’t like to see that number go down, and they vote for policies accordingly—especially at the local level. Our best hope right now seems to be to keep it from going up faster than the growth rate of income, so that homes don’t become any more unaffordable than they already are.

But frankly I’m not optimistic. I think part of the cyberpunk dystopia we’re careening towards is Expensive World.

How to detect discrimination, empirically

Aug 25 JDN 2460548

For concreteness, I’ll use men and women as my example, though the same principles would apply for race, sexual orientation, and so on. Suppose we find that there are more men than women in a given profession; does this mean that women are being discriminated against?

Not necessarily. Maybe women are less interested in that kind of work, or innately less qualified. Is there a way we can determine empirically that it really is discrimination?

It turns out that there is. All we need is a reliable measure of performance in that profession. Then, we compare performance between men and women, and that comparison can tell us whether discrimination is happening or not. The key insight is that workers in a job are not a random sample; they are a selected sample. The results of that selection can tell us whether discrimination is happening.

Here’s a simple model to show how this works.

Suppose there are five different skill levels in the job, from 1 to 5, where 5 is the most skilled. And suppose there are 5 men and 5 women in the population.

1. Baseline

The baseline case to consider is when innate talents are equal and there is no discrimination. In that case, we should expect men and women to be equally represented in the profession.

For the simplest case, let’s say that there is one person at each skill level:

Men   Women
1     1
2     2
3     3
4     4
5     5

Now suppose that everyone above a certain skill threshold gets hired. Since we’re assuming no discrimination, the threshold should be the same for men and women. Let’s say it’s 3; then these are the people who get hired:

Hired Men   Hired Women
3           3
4           4
5           5

The result is that not only are there the same number of men and women in the job, but their skill levels are also the same. There are just as many highly-competent men as highly-competent women.
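If you prefer code to tables, here is the same toy model in a few lines of Python (purely illustrative; the skill levels and the threshold of 3 are just the made-up numbers above):

def hire(skills, threshold):
    # Everyone at or above the hiring threshold gets hired.
    return [s for s in skills if s >= threshold]

men = [1, 2, 3, 4, 5]
women = [1, 2, 3, 4, 5]  # baseline: identical talent distributions

hired_men = hire(men, 3)
hired_women = hire(women, 3)  # same threshold for everyone: no discrimination

print(hired_men, hired_women)  # [3, 4, 5] [3, 4, 5]: equal numbers, equal skill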

2. Innate Differences

Now, suppose there is some innate difference in talent between men and women for this job. For most jobs this seems suspicious, but consider pro sports: Men really are better at basketball, in general, than women, and this is pretty clearly genetic. So it’s not absurd to suppose that for at least some jobs, there might be some innate differences. What would that look like?


Again suppose a population of 5 men and 5 women, but now the women are a bit less qualified: There are two 1s and no 5s among the women.

Men   Women
1     1
2     1
3     2
4     3
5     4

Then, this is the group that will get hired:

Hired Men   Hired Women
3           3
4           4
5

The result will be fewer women who are on average less qualified. The most highly-qualified individuals at that job will be almost entirely men. (In this simple model, entirely men; but you can easily extend it so that there are a few top-qualified women.)
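In the same toy sketch, the innate-difference case keeps the threshold equal and shifts the women’s talent distribution down instead:

def hire(skills, threshold):  # same helper as in the baseline sketch
    return [s for s in skills if s >= threshold]

hired_men = hire([1, 2, 3, 4, 5], 3)    # [3, 4, 5]
hired_women = hire([1, 1, 2, 3, 4], 3)  # [3, 4]: fewer women, and none at the top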

This is in fact what we see for a lot of pro sports; in a head-to-head match, even the best WNBA teams would generally lose against most NBA teams. That’s what it looks like when there are real innate differences.

But it’s hard to find clear examples outside of sports. The genuine, large differences in size and physical strength between the sexes just don’t seem to be associated with similar differences in mental capabilities or even personality. You can find some subtler effects, but nothing very large—and certainly nothing large enough to explain the huge gender gaps in various industries.

3. Discrimination

What does it look like when there is discrimination?

Now assume that men and women are equally qualified, but it’s harder for women to get hired, because of discrimination. The key insight here is that this amounts to women facing a higher threshold. Where men only need to have level 3 competence to get hired, women need level 4.

So if the population looks like this:

Men   Women
1     1
2     2
3     3
4     4
5     5

The hired employees will look like this:

Hired Men   Hired Women
3
4           4
5           5

Once again we’ll have fewer women in the profession, but they will be on average more qualified. The top-performing individuals will be as likely to be women as they are to be men, while the lowest-performing individuals will be almost entirely men.
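In code, the only change from the baseline sketch is that women face a higher threshold:

def hire(skills, threshold):  # same helper as before
    return [s for s in skills if s >= threshold]

hired_men = hire([1, 2, 3, 4, 5], 3)    # [3, 4, 5], average skill 4.0
hired_women = hire([1, 2, 3, 4, 5], 4)  # [4, 5], average skill 4.5

print(min(hired_men), min(hired_women))  # 3 4: the lowest performer is a man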

This is the kind of pattern we observe when there is discrimination. Do we see it in real life?

Yes, we see it all the time.

Corporations with women CEOs are more profitable.

Women doctors have better patient outcomes.

Startups led by women are more likely to succeed.

This shows that there is some discrimination happening, somewhere in the process. Does it mean that individual firms are actively discriminating in their hiring process? No, it doesn’t. The discrimination could be happening somewhere else; maybe it happens during education, or once women get hired. Maybe it’s a product of sexism in society as a whole, that isn’t directly under the control of employers. But it must be in there somewhere. If women are both rarer and more competent, there must be some discrimination going on.

What if there is also innate difference? We can detect that too!

4. Both

Suppose now that men are on average more talented, but there is also discrimination against women. Then the population might look like this:

Men   Women
1     1
2     1
3     2
4     3
5     4

And the hired employees might look like this:

Hired Men   Hired Women
3
4
5           4

In such a scenario, you’ll see a large gender imbalance, but there may not be a clear difference in competence. The tiny fraction of women who get hired will perform about as well as the men, on average.

Of course, this assumes that the two effects are of equal strength. In reality, we might see a whole spectrum of possibilities, from very strong discrimination with no innate differences, all the way to very large innate differences with no discrimination. The outcomes will then be similarly along a spectrum: When discrimination is much larger than innate difference, women will be rare but more competent. When innate difference is much larger than discrimination, women will be rare and less competent. And when there is a mix of both, women will be rare but won’t show as much difference in competence.

Moreover, if you look closer at the distribution of performance, you can still detect the two effects independently. If the lowest-performing workers are almost all men, that’s evidence of discrimination against women; while if the highest-performing workers are almost all men, that’s evidence of innate difference. And if you look at the table above, that’s exactly what we see: Both the 3 and the 5 are men, indicating the presence of both effects.
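To make that concrete, here is a rough sketch of the diagnostic applied to the toy model’s case-4 hiring pool (my own made-up labels, of course):

# Who occupies the bottom and the top of the hired pool?
hired = [("M", 3), ("M", 4), ("M", 5), ("W", 4)]  # case 4: both effects

worst = min(hired, key=lambda p: p[1])  # ('M', 3)
best = max(hired, key=lambda p: p[1])   # ('M', 5)

if worst[0] == "M":
    print("Lowest performer is a man: evidence of discrimination against women.")
if best[0] == "M":
    print("Highest performer is a man: evidence of innate difference.")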

What does affirmative action do?

Effectively, affirmative action lowers the threshold for hiring women (or minorities) in order to equalize representation in the workplace. In the presence of discrimination raising that threshold, this is exactly what we need! It can take us from case 3 (discrimination) to case 1 (equality), or from case 4 (both discrimination and innate difference) to case 2 (innate difference only).
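In the toy model, that just means resetting the women’s threshold back to the men’s:

def hire(skills, threshold):  # same helper as before
    return [s for s in skills if s >= threshold]

# Discrimination had pushed the women's threshold to 4; affirmative action
# resets it to 3, restoring case 1:
print(hire([1, 2, 3, 4, 5], 3))  # [3, 4, 5], matching the men again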

Of course, it’s possible for us to overshoot, using more affirmative action than we should have. If we achieve better representation of women, but the lowest performers at the job are women, then we have overshot, effectively now discriminating against men. Fortunately, there is very little evidence of this in practice. In general, even with affirmative action programs in place, we tend to find that the lowest performers are still men—so there is still discrimination against women that we’ve failed to compensate for.

What if we can’t measure competence?

Of course, it’s possible that we don’t have good measures of competence in a given industry. (One must wonder how firms decide who to hire, but frankly I’m prepared to believe they’re just really bad at it.) Then we can’t observe discrimination statistically in this way. What do we do then?

Well, there is at least one avenue left for us to detect discrimination: We can do direct experiments comparing resumes with male names versus female names. These sorts of experiments typically don’t find very much, though—at least for women. For different races, they absolutely do find strong results. They also find evidence of discrimination against people with disabilities, older people, and people who are physically unattractive. There’s also evidence of intersectional effects, where women of particular ethnic groups get discriminated against even when women in general don’t.

But this will only pick up discrimination if it occurs during the hiring process. The advantage of having a competence measure is that it can detect discrimination that occurs anywhere—even outside employer control. Of course, if we don’t know where the discrimination is happening, that makes it very hard to fix; so the two approaches are complementary.

And there is room for new methods too; right now we don’t have a good way to detect discrimination in promotion decisions, for example. Many of us suspect that it occurs, but unless you have a good measure of competence, you can’t really distinguish promotion discrimination from innate differences in talent. We don’t have a good method for testing that in a direct experiment, either, because unlike hiring, we can’t just use fake resumes with masculine or feminine names on them.

Why are groceries so expensive?

Aug 18 JDN 2460541

There has been unusually high inflation the past few years, mostly attributable to the COVID pandemic and its aftermath. But groceries in particular seem to have gotten especially expensive. We’ve all felt it: Eggs, milk, and toilet paper soared to extreme prices and then, even when they came back down, never came down all the way.

Why would this be?

Did it involve supply chain disruptions? Sure. Was it related to the war in Ukraine? Probably.

But it clearly wasn’t just those things—because, as the FTC recently found, grocery stores have been colluding and price-gouging. Large grocery chains like Walmart and Kroger have a lot of market power, and they used that power to raise prices considerably faster than was necessary to keep up with their increased costs; as a result, they made record profits. Their costs did genuinely increase, but they increased their prices even more, and ended up being better off.

The big chains were also better able to protect their own supply chains than smaller companies, and so the effects of the pandemic further entrenched the market power of a handful of corporations. Some of them also imposed strict delivery requirements on their suppliers, pressuring them to prioritize the big companies over the small ones.

This kind of thing is what happens when we let oligopolies take control. When only a few companies control the market, prices go up, quality goes down, and inequality gets worse.

For far too long, institutions like the FTC have failed to challenge the ever tighter concentration of our markets in the hands of a small number of huge corporations.

And it’s not just grocery stores.

Our media is dominated by five corporations: Disney, WarnerMedia, NBCUniversal, Sony, and Paramount.

Our cell phone service is 99% controlled by three corporations: T-Mobile, Verizon, and AT&T.

Our music industry is dominated by three corporations: Sony, Universal, and Warner.

Two-thirds of US airline traffic is carried by just four airlines: American, Delta, Southwest, and United.

Nearly 40% of US commercial banking assets are controlled by just three banks: JPMorgan Chase, Bank of America, and Citigroup.

Do I even need to mention the incredible market share Google has in search—over 90%—or Facebook has in social media—over 50%?

And most of these lists used to be longer. Disney recently acquired 21st Century Fox. Viacom recently merged with CBS and then became Paramount. Universal recently acquired EMI. Our markets aren’t simply alarmingly concentrated; they have also been getting more concentrated over time.

Institutions like the FTC are supposed to be protecting us from oligopolies, by ensuring that corporations can’t merge and acquire each other once they reach a certain market share. But decades of underfunding and laissez-faire ideology have weakened these institutions. So many mergers that obviously shouldn’t have been allowed were allowed, because no regulatory agency had the will and the strength to stop them.

The good news is that this is finally beginning to change: The Justice Department has recently (finally!) sued Google for maintaining a monopoly on Internet search. And among grocery stores in particular, the FTC is challenging Kroger’s acquisition of Albertsons—though it remains unclear whether that challenge will succeed.

Hopefully this is a sign that our antitrust regulators have found their teeth again, and will continue to prosecute antitrust cases against oligopolies. A lot of that may depend on who ends up in the White House this November.

How games enrich our lives

Aug 11 JDN 2460534

I’m writing this post just after getting back from Gen Con, one of the world’s largest gaming conventions. After several days of basically constant activity from the time we woke up to the time we went to bed, I’m looking forward to some downtime to recuperate.

This year, we were there not just to have fun, but also to pitch our own game, a card-based storytelling game called Pax ad Astra. We already have one offer from a small publisher, but we’re currently waiting to hear back from several others to see if we can do better.

Games might seem like a frivolous thing, a waste of valuable time; but in fact they can enrich our lives in many ways. They deserve to be respected as an art form unto themselves.

Gen Con is primarily a tabletop game convention, but some of the best examples of what I want to say come from video games, so I’ll be using examples of both.

Games can be beautiful. Climb up a mountain in Breath of the Wild and just look out over the expanse. It’s not quite the same as overlooking a real mountain vista, but it’s shockingly close.

Games can be moving. The Life is Strange series has so many powerful emotional moments it’s honestly a little overwhelming.

Games can be political. The game Monopoly was originally intended as an argument against monopoly capitalism (which is deeply ironic in hindsight). Cyberpunk fiction has always been trying to warn us about the future we’re building, and that message comes across even clearer when you’re immersed in a game, whether the tabletop version Cyberpunk RED or the video game version Cyberpunk 2077. Even a game like Call of Duty: Black Ops, which many might initially dismiss as another mindless shooter, can actually have some profound statements to make about war, covert operations, and the moral compromises they always entail.

Games can challenge us to think. Even some of the most ancient games, like Senet and Go, required deep strategic thinking in order to win. Modern games continue this tradition in an endless variety of ways, from Catan to StarCraft.

Games can teach us. I don’t just mean games that are designed to be educational, though certainly plenty of those exist. Minecraft involves active participation in building and changing the world around you; it’s every bit as good a learning toy as Lego, but with almost endless blocks to work with.

Games let us explore our own identity. One of the great things about role-playing games such as Dungeons & Dragons (or its digital counterpart, Baldur’s Gate 3) is that they allow us to inhabit someone different from ourselves, and explore what it’s like to be someone else. We can learn a lot about ourselves and others through such experiences. I know an awful lot of transgender people who played RPGs as different genders before they transitioned.

Games are immersive. One certainly can get immersed into a book or a film, but the interactivity of a game makes that immersion much more powerful. The difference between hearing about someone doing something, watching them do something, and doing it yourself can be quite profound. Video games are especially immersive; they can really make it feel like you are right there, actually participating in the action. Part of what makes Call of Duty: Black Ops so effective in its political messaging is the fact that you aren’t just seeing all these morally-ambiguous actions; you’re actively participating in them, and being forced to make your own difficult choices.

But in the end, games are fun. Maybe sometimes they are a frivolous time-wasting activity—and maybe, as a society, we need to have more respect for frivolous time-wasting activities. Human beings need rest and recreation to function. We aren’t machines. We can’t be productive all the time.

Can Kamala Harris win this?

Aug 4 JDN 2460527

This election is historic in several ways.

First of all, there’s Trump, who is now on record saying “after this one, you won’t have to vote anymore”. (His own side is trying to downplay this, but does that not sound incredibly authoritarian? Is he not suggesting that there will be no future elections, or that all future elections will be shams? How else are we supposed to interpret this?)

Second, we have had a major candidate for President suddenly step down in the middle of the campaign, leaving his Vice President to take on the nomination. No previous candidate has ever stepped down this late in the race.

But third and perhaps most importantly, we have a woman of color running as a major party candidate for President of the United States. Even if she loses, it will be historic. And if she wins, it will be even more so.

I do think that Biden was right to step down. The narrative had swung too hard against him: People saw him as old, weak, even senile. Whether or not this was really an accurate assessment of his abilities, I honestly don’t know. But I do know that enough people believed it that it was clearly hurting his chances of winning the election—and when the alternative is Trump, that’s just not something we could afford.

But now the big question arises:

Can Kamala Harris succeed where Joe Biden could not?

It definitely seems like voters are more passionate about Harris than they were about Biden; maybe America wasn’t ready for yet another rich White straight male President. (Or at least maybe Democrats weren’t; Republicans don’t seem to mind Trump.)

But will that passion really translate to electoral success where we need it most?

A more objective answer comes from looking at poll numbers: Are hers better than his? Yes, they are, by a few percentage points—but it still looks like a tossup with Trump. Depending on which poll you read on which day, Harris may be up by several points, or Trump may be ahead by a few points instead. Basically, we are within the margin of error here.
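(To put rough numbers on that: a typical national poll samples about 1,000 voters, and the standard 95% margin of error for a single candidate’s share is 1.96 × √(p(1−p)/n), which for a share near 50% and n = 1,000 comes out to about ±3 percentage points. And since a candidate’s lead is the difference between two such estimates, the lead needs to be roughly twice that, around 6 points, before a single poll can reliably distinguish it from a tie. That’s a back-of-the-envelope calculation; real polls involve weighting and other design effects that usually make the true uncertainty even larger.)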

This is particularly scary because of the idiocy of the Electoral College; right now it looks like the most likely scenario is that Harris wins the popular vote, but Trump still becomes President—just like what happened with Hillary Clinton the first time Trump won.

The Electoral College was supposed to prevent “tyranny of the majority” by stopping authoritarian populist demagogues from taking office. Since it literally caused exactly the outcome it was designed to prevent, it has clearly failed, and needs to be abolished. Seriously, we need to enact the National Popular Vote Interstate Compact ASAP. The compact takes effect once its member states control a majority of electoral votes (270), and the signatories so far account for 209, so it would only take a few more states (or one big state) to put us over the threshold and render the Electoral College irrelevant.

Unfortunately, that doesn’t seem very likely to happen in time for November. Which means that in order to win this election, we not only have to get the most votes; we also need to win enough swing states. It’s incredibly stupid and undemocratic that this is the case, but it is the case. (Frankly, it’s stupid and undemocratic that we have a single first-past-the-post vote instead of ranked-choice or range voting; but that’s also something we seem to be stuck with for the time being.)

A lot of this is going to come down to who Harris chooses as her running mate. Fortunately, Trump seems to have chosen poorly in J. D. Vance; that’s good news for Democrats, and ultimately good news for America. Harris is a lot more competent than Trump, and will almost certainly choose a better running mate.

And perhaps that, in the end, is the greatest reason to have hope:

Competence and reasonableness have advantages.

What’s the deal with Trump supporters?

Jul 28 JDN 2460520

I have never understood how this Presidential election is a close one. On the one hand, we have a decent President with many redeeming qualities who has done a great job, but is getting old; on the other hand, we have a narcissistic, authoritarian con man (who is almost as old). It should be obvious who the right choice is here.

And yet, half the country disagrees. I really don’t get it. Other Republican candidates actually have had redeeming qualities, and I could understand why someone might support them; but Trump has basically none.

I have even asked some of my relatives who support Trump why they do, what they see in him, and I could never get a straight answer.

I now think I know why: They don’t want to admit the true answer.

Political scientists have been studying this, and they’ve come to some very unsettling conclusions. The two strongest predictors of support for Trump are authoritarianism and racial resentment: in plainer terms, hatred of minorities.

In other words, people support Trump not in spite of what makes him awful, but because of it. They are happy to finally have a politician publicly supporting their hateful, bigoted views. And since they believe in authoritarian hierarchy, his desire to become a dictator doesn’t worry them; they may even welcome it, believing that he’ll use that power to hurt the right people. They like him because he promises retribution against social change, and his constant fear-mongering only reinforces that.

This isn’t the conclusion I was hoping for. I wanted there to be something sympathetic, some alternative view of the world that could be reasoned with. But when bigotry and authoritarianism are the main predictors of a candidate’s support, it seems that reasonableness has pretty much failed.

I wanted there to be something I had missed, something I wasn’t seeing about Trump—or about Biden—that would explain how good, reasonable people could support the former over the latter. But the data just doesn’t seem to show anything. There is an urban/rural divide; there is a generational divide; and there is an educational divide. Maybe there’s something there; certainly I can sympathize with old people in rural areas with low education. But by far the best way to tell whether someone supports Trump is to find out whether they are racist, sexist, xenophobic, and authoritarian. How am I supposed to sympathize with that? Where can we find common ground here?

There seems to be something deep and primal that motivates Trump supporters: Fear of change, tribal identity, or simply anger. It doesn’t seem to be rational. Ask them what policies Trump has done or plans to do that they like, and they often can’t name any. But they are certain in their hearts that he will “Make America Great Again”.

What do we do about this? We can win this election—maybe—but that’s only the beginning. Somehow we need to root out the bigotry that drives support for Trump and his ilk, and I really don’t know how to do that.

I don’t know what else to say here. This all feels so bleak. This election has become a battle for the soul of America: Are we a pluralistic democracy that celebrates diversity, or are we a nation of racist, sexist, xenophobic authoritarians?

Did we push too hard, too fast for social change? Did we leave too many people behind, people who felt coerced into compliance rather than persuaded of our moral correctness? Is this a temporary backlash that we can bear as the arc of the moral universe bends toward justice? Or is this the beginning of a slow and agonizing march toward neo-fascism?

I have never feared Trump himself nearly so much as I fear a nation that could elect him—especially one that could re-elect him.

People need permission to disagree

Jul 21 JDN 2460513

Obviously, most of the blame for the rise of far-right parties in various countries has to go to the right-wing people who either joined up or failed to stop their allies from joining up. I would hope that goes without saying, but it probably doesn’t, so there, I said it; it’s mostly their fault.

But there is still some fault to go around, and I think we on the left need to do some soul-searching about this.

There is a very common mode of argumentation that is popular on the left, which I think is very dangerous:

“What? You don’t already agree with [policy idea]? You bigot!”

Often it’s not quite that blatant, but the implication is still there: If you don’t agree with this policy involving race, you’re a racist. If you don’t agree with this policy involving transgender rights, you’re a transphobe. If you don’t agree with this policy involving women’s rights, you’re a sexist. And so on.

I understand why people think this way. But I also think it has pushed some people over to the right who might otherwise have been persuaded to join our side.

And here comes the comeback, I know:

“If being mistreated turns you into a Nazi, you were never a good ally to begin with.”

Well, first of all, not everyone who was pushed away from the left became a full-blown Nazi. Some of them just stopped listening to us, and started listening to whatever the right wing was saying instead.

Second, life is more complicated than that. Most people don’t really have well-defined political views, believe it or not. Most people form their political views on the spot, based on whoever else is around them and whoever they hear talking the loudest. Most swing voters are low-information voters who don’t follow politics and make up their minds for frankly stupid reasons.

And with this in mind, the mere fact that we are pushing people away with our rhetoric means that we are shifting what those low-information voters hear—and thereby giving away elections to the right.

When people disagree about moral questions, isn’t someone morally wrong?

Yes, by construction. (At least one must be; possibly everyone is.)

But we don’t always know who is wrong—and generally speaking, everyone goes into a conversation assuming that they themselves are right. The ultimate goal of moral conversation is to get more people to be right and fewer people to be wrong, yes? If we treat it as morally wrong to disagree in the first place, we are shutting down any hope of reaching that goal.

Not everyone knows everything about everything.

That may seem perfectly obvious to you, but when you leap from “disagree with [policy]” to “bigot”, you are basically assuming the opposite. You are assuming that whoever you are speaking with knows everything you know about all the relevant considerations of politics and social science, and the only possible reason they could come to a different conclusion is that they have a fundamentally different preference, namely, they are a bigot.

Maybe you are indeed such an enlightened individual that you never get any moral questions wrong. (Maybe.) But can you really expect everyone else to be like that? Isn’t it unfair to ask that of absolutely everyone?

This is why:

People need permission to disagree.

In order for people to learn and grow in their understanding, they need permission to not know all the answers right away. In order for people to change their beliefs, they need permission to believe something that might turn out to be wrong later.

This is exactly the permission we are denying when we accuse anyone we disagree with of being a bigot. Instead of continuing the conversation in the hopes of persuading people to our point of view, we are shutting the conversation down with vitriol and name-calling.

Try to consider this from the opposite perspective.

You enter a conversation with someone about an important political or moral issue. You hear their view, and then you express your own. Immediately, they start accusing you of being morally defective: a racist, sexist, homophobic, and/or transphobic bigot. How likely are you to continue that conversation? How likely are you to go on listening to this person? How likely are you to change your mind about the original political issue?

In fact, might you even be less likely to change your mind than you would have been if you’d just heard their view expressed and then ended the conversation? I think so. I think just respectfully expressing an alternative view pushes people a little—not a lot, but a little—in favor of whatever view you have expressed. It tells them that someone else who is reasonable and intelligent believes X, so maybe X isn’t so unreasonable.

Conversely, when someone resorts to name-calling, what does that do to your evaluation of their views? They suddenly seem unreasonable. You begin to doubt everything they’re saying. You may even try to revise your view further away out of spite (though this is clearly not rational—reversed stupidity is not intelligence).

Think about that, before you resort to name-calling your opponents.

But now I know what you’re thinking:

“But some people really are bigots!”

Yes, that’s true. And some of them may even be the sort of irredeemable bigot you’re imagining right now, someone for whom no amount of conversation could ever change their mind.

But I don’t think most people are like that. In fact, I don’t think most bigots are like that. I think even most people who hold bigoted views about some group could in fact be persuaded out of those views, under the right circumstances. And I think the right circumstances involve a lot more patient, respectful conversation than angry name-calling. For we are all Judy Hopps.

Maybe I’m wrong. Maybe it doesn’t matter how patiently we argue. But it’s still morally better to be respectful and kind, so I’m going to do it.

You have my permission to disagree.