Homeschooling and too much freedom

Nov 19 JDN 2460268

Allowing families to homeschool their children increases freedom, quite directly and obviously. This is a large part of the political argument in favor of homeschooling, and likely a large part of why homeschooling is so popular within the United States in particular.

In the US, about 3% of students are homeschooled. This seems like a small proportion, but it’s enough to have some cultural and political impact, and it’s considerably larger than the proportion who are homeschooled in most other countries.

Moreover, homeschooling rates greatly increased as a result of COVID, and it’s anyone’s guess when, or even whether, they will go back down. I certainly hope they do; here’s why.

A lot of criticism of homeschooling focuses on academic outcomes: Are the students learning enough English and math? This concern is largely unfounded; statistically, academic outcomes of homeschooled students don’t seem to be any worse than those of public school students; by some measures, they are actually better. Nor is there clear evidence that homeschooled kids are any less developed socially; most of them get that social development through other networks, such as churches and sports teams.

No, my concern is not that they won’t learn enough English and math. It’s that they won’t learn enough history and science. Specifically, the parts of history and science that contradict the religious beliefs of the parents who are homeschooling them.

One way to study this would be to compare the test scores of homeschooled kids on, say, algebra and chemistry (which do not directly threaten Christian evangelical beliefs) to their scores on, say, biology and neuroscience (which absolutely, fundamentally do). Lying somewhere in between are physics (F=ma is no threat to Christianity, but the Big Bang is) and history (Christian nationalists happily teach that Thomas Jefferson wrote the Declaration of Independence, but often omit that he owned slaves). If homeschooled kids are indeed indoctrinated, we should see particular lacunae in their knowledge where the facts contradict their ideology. Unfortunately, I wasn’t able to find any such studies.
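Here is a sketch of what such an analysis might look like, on purely hypothetical numbers (as far as I can tell, no such dataset exists; the group means below are invented solely to show the comparison):

```python
# Hypothetical group means on standardized tests (made up; only the comparison matters).
scores = {
    ("homeschool", "neutral"):     78.0,   # e.g. algebra, chemistry
    ("homeschool", "threatening"): 65.0,   # e.g. biology, neuroscience
    ("public",     "neutral"):     74.0,
    ("public",     "threatening"): 73.0,
}

# Difference-in-differences: does the homeschool gap open up specifically on the
# subjects that conflict with the parents' ideology?
did = ((scores[("homeschool", "threatening")] - scores[("homeschool", "neutral")])
       - (scores[("public", "threatening")] - scores[("public", "neutral")]))
print(f"Difference-in-differences: {did:+.1f} points")
# Here: -12.0 points. A pattern like this in real data would be the predicted
# ideological lacuna; roughly equal gaps across subjects would cut against it.
```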

But even if their academic outcomes are worse in certain domains, so what? What about the freedom of parents to educate their children how they choose? What about the freedom of children to not be subjected to the pain of public school?

It will come as no surprise to most of you that I did well in school. In almost everything, really: math, science, philosophy, English, and Latin were my best subjects, and I earned basically flawless grades in them. But I also did very well in creative writing, history, art, and theater, and fairly well in music. My only poor performance was in gym class (as I’ve written about before).

It may come as some surprise when I tell you that I did not particularly enjoy school. In elementary school I had few friends—and one of my closest ended up being abusive to me. Middle school I mostly enjoyed—despite the onset of my migraines. High school started out utterly miserable, though it got a little better—a little—once I transferred to Community High School. Throughout high school, I was lonely, stressed, anxious, and depressed most of the time, and had migraine headaches of one intensity or another nearly every single day. (Sadly, most of that is true now as well; but I at least had a period of college and grad school where it wasn’t, and hopefully I will again once this job is behind me.)

I was good at school. I enjoyed much of the content of school. But I did not particularly enjoy school.

Thus, I can quite well understand why it is tempting to say that kids should be allowed to be schooled at home, if that is what they and their parents want. (Of course, a problem already arises there: What if child and parent disagree? Whose choice actually matters? In practice, it’s usually the parent’s.)

On the whole, public school is a fairly toxic social environment: Cliquish, hyper-competitive, stressful, often full of conflict between genders, races, classes, sexual orientations, and of course the school-specific one, nerds versus jocks (I’d give you two guesses which team I was on, but you’re only gonna need one). Public school sucks.

Then again, many of these problems and conflicts persist into adult life—so perhaps it’s better preparation than we care to admit. Maybe it’s better to be exposed to bias and conflict so that you can learn to cope with them, rather than sheltered from them.

But there is a more important reason why we may need public school, why it may even be worth coercing parents and children into that system against their will.

Public school forces you to interact with people different from you.

At a public school, you cannot avoid being thrown in the same classroom with students of other races, classes, and religions. This is of course more true if your school system is diverse rather than segregated—and all the more reason that the persistent segregation of many of our schools is horrific—but it’s still somewhat true even in a relatively homogeneous school. I was fortunate enough to go to a public school in Ann Arbor, where there was really quite substantial diversity. But even where there is less diversity, there is still usually some diversity—if not race, then class, or religion.

Certainly any public school has more diversity than homeschooling, where parents have the power to specifically choose precisely which other families their children will interact with, and will almost always choose those of the same race, class, and—above all—religious denomination as themselves.

The result is that homeschooled children often grow up indoctrinated into a dogmatic, narrow-minded worldview, convinced that the particular beliefs they were raised in are the objectively, absolutely correct ones and all others are at best mistaken and at worst outright evil. They are trained to reject conflict and dissent, to not even expose themselves to other people’s ideas, because those are seen as dangerous—corrupting.

Moreover, for most homeschooling parents—not all, but most—this is clearly the express intent. They want to raise their children in a particular set of beliefs. They want to inoculate them against the corrupting influences of other ideas. They are not afraid of their kids being bullied in school; they are afraid of them reading books that contradict the Bible.

This article has the headline “Homeschooled children do not grow up to be more religious”, yet its core finding is exactly the opposite of that:

The Cardus Survey found that homeschooled young adults were not noticeably different in their religious lives from their peers who had attended private religious schools, though they were more religious than peers who had attended public or Catholic schools.

No more religious than private religious schools!? That’s still very religious. No, the fair comparison is to public schools, which clearly show lower rates of religiosity among the same demographics. (The interesting case is Catholic schools; they, it turns out, also churn out atheists with remarkable efficiency; I credit the Jesuit norm of top-quality liberal education.) This is clear evidence that religious homeschooling does make children more religious, and so does most private religious education.

Another finding in that same article sounds good, but is misleading:

Indiana University professor Robert Kunzman, in his careful study of six homeschooling families, found that, at least for his sample, homeschooled children tended to become more tolerant and less dogmatic than their parents as they grew up.


This is probably just regression to the mean. The parents who give their kids religious homeschooling are largely the most dogmatic and intolerant, so we would expect their kids to come out less dogmatic and intolerant simply because children of extreme parents tend to be less extreme than their parents—but probably still pretty dogmatic and intolerant. (Also, do I have to point out that n=6 barely even constitutes a study!?) This is like the fact that the sons of NBA players are usually shorter than their fathers—but still quite tall.
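A minimal simulation of that point (all numbers invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model: a child's "dogmatism" tracks the parent's only imperfectly.
# Religious-homeschooling parents are selected from the far tail of the distribution.
transmission = 0.5  # assumed parent-child correlation
parents = rng.normal(0.0, 1.0, 1_000_000)
homeschool_parents = parents[parents > 2.0]  # the most dogmatic ~2% of parents

children = transmission * homeschool_parents + rng.normal(
    0.0, np.sqrt(1 - transmission**2), homeschool_parents.size
)

print(f"Parents:  {homeschool_parents.mean():.2f} SD above the population average")
print(f"Children: {children.mean():.2f} SD above the population average")
# The children come out less dogmatic than their parents (regression to the mean),
# but still well above average -- just like the NBA players' sons and their height.
```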

Homeschooling is directly linked to a lot of terrible things: Young-Earth Creationism, Christian nationalism, homophobia, and shockingly widespread child abuse.

While most right-wing families don’t homeschool, most homeschooling families are right-wing: Between 60% and 70% of homeschooling families vote Republican in most elections. More left-wing voters are homeschooling now with the recent COVID-driven surge in homeschooling, but the right-wing still retains a strong majority for now.

Of course, there are a growing number of left-wing and non-religious families who use homeschooling. Does this mean that the threat of indoctrination is gone? I don’t think so. I once knew someone who was homeschooled by a left-wing non-religious family and still ended up adopting an extremely narrow-minded extremist worldview—simply a left-wing non-religious one. In some sense a left-wing non-religious narrow-minded extremism is better than a right-wing religious narrow-minded extremism, but it’s still narrow-minded extremism. Whatever such a worldview gets right is mainly by the Stopped Clock Principle. It still misses many important nuances, and is still closed to new ideas and new evidence.

Of course this is not a necessary feature of homeschooling. One absolutely could homeschool children into a worldview that is open-minded and tolerant. Indeed, I’m sure some parents do. But statistics suggest that most do not, and this makes sense: When parents want to indoctrinate their children into narrow-minded worldviews, homeschooling allows them to do that far more effectively than if they had sent their children to public school. Whereas if you want to teach your kids open-mindedness and tolerance, exposing them to a diverse environment makes that easier, not harder.

In other words, the problem is that homeschooling gives parents too much control; in a very real sense, this is too much freedom.

When can freedom be too much? It seems absurd at first. But there are at least two cases where it makes sense to say that someone has too much freedom.

The first is paternalism: Sometimes people really don’t know what’s best for them, and giving them more freedom will just allow them to hurt themselves. This notion is easily abused—it has been abused many times, for example against disabled people and colonized populations. For that reason, we are right to be very skeptical of it when applied to adults of sound mind. But what about children? That’s who we are talking about after all. Surely it’s not absurd to suggest that children don’t always know what’s best for them.

The second is the paradox of tolerance: The freedom to take away other people’s freedom is not a freedom we can afford to protect. And homeschooling that indoctrinates children into narrow-minded worldviews is a threat to other people’s freedom—not only those who will be oppressed by a new generation of extremists, but also the children themselves who are never granted the chance to find their own way.

Both reasons apply in this case: paternalism for the children, the paradox of tolerance for the parents. We have a civic responsibility to ensure that children grow up in a rich and diverse environment, so that they learn open-mindedness and tolerance. This is important enough that we should be willing to impose constraints on freedom in order to achieve it. Democracy cannot survive a citizenry who are molded from birth into narrow-minded extremists. There are parents who want to mold their children that way—and we cannot afford to let them.

From where I’m sitting, that means we need to ban homeschooling, or at least very strictly regulate it.

Israel, Palestine, and the World Bank’s disappointing priorities

Nov 12 JDN 2460261

Israel and Palestine are once again at war. (There are a disturbing number of different years in which one could have written that sentence.) The BBC has a really nice section of their website dedicated to reporting on various facets of the war. The New York Times also has a section on it, but it seems a little tilted in favor of Israel.

This time, it started with a brutal attack by Hamas, and now Israel has—as usual—overreacted and retaliated with a level of force that is sure to feed the ongoing cycle of extremism. All across social media I see people wanting me to take one side or the other, often even making good points: “Hamas slaughters innocents” and “Israel is a de facto apartheid state” are indeed both important points I agree with. But if you really want to know my ultimate opinion, it’s that this whole thing is fundamentally evil and stupid because human beings are suffering and dying over nothing but lies. All religions are false, most of them are evil, and we need to stop killing each other over them.

Anti-Semitism and Islamophobia are both morally wrong insofar as they involve harming, abusing or discriminating against actual human beings. Let people dress however they want, celebrate whatever holidays they want, read whatever books they want. Even if their beliefs are obviously wrong, don’t hurt them if they aren’t hurting anyone else. But both Judaism and Islam—and Christianity, and more besides—are fundamentally false, wrong, evil, stupid, and detrimental to the advancement of humanity.

That’s the thing that so much of the public conversation is too embarrassed to say; we’re supposed to pretend that they aren’t fighting over beliefs that are obviously false. We’re supposed to respect each particular flavor of murderous nonsense, and always find some other cause to explain the conflict. It’s over culture (what culture?); it’s over territory (whose territory?); it’s a retaliation for past conflict (over what?). We’re not supposed to say out loud that all of this violence ultimately hinges upon people believing in nonsense. Even if the conflict wouldn’t disappear overnight if everyone suddenly stopped believing in God—and are we sure it wouldn’t? Let’s try it—it clearly could never have begun, if everyone had started with rational beliefs in the first place.

But I don’t really want to talk about that right now. I’ve said enough. Instead I want to talk about something a little more specific, something less ideological and more symptomatic of systemic structural failures. Something you might have missed amidst the chaos.

The World Bank recently released a report on the situation, focused heavily on the looming threat of… higher oil prices. (And of course there has been breathless reporting from various outlets regarding a headline figure of $150 per barrel, which is explicitly stated in the report as an unlikely “worst-case scenario”.)

There are two very big reasons why I found this dismaying.


The first, of course, is that there are obviously far more important concerns here than commodity prices. Yes, I know that this report is part of an ongoing series of Commodity Markets Outlook reports, but the fact that this is the sort of thing that the World Bank has ongoing reports about is also saying something important about the World Bank’s priorities. They release monthly commodity forecasts and full Commodity Markets Outlook reports that come out twice a year, unlike the World Development Reports that only come out once a year. The World Bank doesn’t release a twice-annual Conflict Report or a twice-annual Food Security Report. (Even the FAO, which publishes an annual State of Food Security and Nutrition in the World report, also publishes a State of Agricultural Markets report just as often.)

The second is that, when reading the report, one can clearly tell that whoever wrote it thinks that rising oil and gas prices are inherently bad. They keep talking about all of these negative consequences that higher oil prices could have, and seem utterly unaware of the really enormous upside here: We may finally get a chance to do something about climate change.

You see, one of the most basic reasons why we haven’t been able to fix climate change is that oil is too damn cheap. Its market price has consistently failed to reflect its actual costs. Part of that is due to oil subsidies around the world, which have held the price lower than it would be even in a free market; but most of it is due to the simple fact that pollution and carbon emissions don’t cost money for the people who produce them, even though they do cost the world.
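To put rough numbers on that under-pricing (these are my own illustrative assumptions, not figures from the report): burning a barrel of oil releases somewhere around 0.4 tonnes of CO2, and common estimates of the social cost of carbon run from roughly $50 to nearly $200 per tonne.

```python
# Back-of-envelope: how much climate damage is missing from the price of a barrel?
# Assumptions (mine, purely illustrative): ~0.43 tonnes CO2 per barrel burned,
# social cost of carbon somewhere between $50 and $190 per tonne.
co2_per_barrel = 0.43  # tonnes of CO2
for scc in (50, 100, 190):  # dollars per tonne
    external_cost = co2_per_barrel * scc
    print(f"At ${scc}/tonne: ~${external_cost:.0f} per barrel of unpriced damage")
# Even the low end is a sizable fraction of the ~$80 market price; the high end
# roughly doubles it. That is the sense in which oil is "too damn cheap."
```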

Fortunately, wind and solar power are also getting very cheap, and are now at the point where they can outcompete oil and gas for electrical power generation. But that’s not enough. We need to remove oil and gas from everything: heating, manufacturing, agriculture, transportation. And that is far easier to do if oil and gas suddenly become more expensive and so people are forced to stop using them.

Now, granted, many of the downsides in that report are genuine: Because oil and gas are such vital inputs to so many economic processes, it really is true that making them more expensive will make lots of other things more expensive, and in particular could increase food insecurity by making farming more expensive. But if that’s what we’re concerned about, we should be focusing on that: What policies can we use to make sure that food remains available to all? And one of the best things we could be doing toward that goal is finding ways to make agriculture less dependent on oil.

By focusing on oil prices instead, the World Bank is encouraging the world to double down on the very oil subsidies that are holding climate policy back. Even food subsidies—which certainly have their own problems—would be an obviously better solution, and yet they are barely mentioned.

In fact, if you actually read the report, it shows that fears of food insecurity seem unfounded: Food prices are actually declining right now. Grain prices in particular seem to be falling back down remarkably quickly after their initial surge when Russia invaded Ukraine. Of course that could change, but it’s a really weird attitude toward the world to see something good and respond with, “Yes, but it might change!” This is how people with anxiety disorders (and I would know) think—which makes it seem as though much of the economic policy community suffers from some kind of collective equivalent of an anxiety disorder.

There also seems to be a collective sense that higher prices are always bad. This is hardly just a World Bank phenomenon; on the contrary, it seems to pervade all of economic thought, including the most esteemed economists, the most powerful policymakers, and even most of the general population of citizens. (The one major exception seems to be housing, where the sense is that higher prices are always good—even when the world is in a chronic global housing shortage that leaves millions homeless.) But prices can be too low or too high. And oil prices are clearly, definitely too low. Prices should reflect the real cost of production—all the real costs of production. It should cost money to pollute other people’s air.

In fact I think the whole report is largely a nothingburger: Oil prices haven’t even risen all that much so far—we’re still at $80 per barrel last I checked—and the one thing that is true about the so-called Efficient Market Hypothesis is that forecasting future prices is a fool’s errand. But it’s still deeply unsettling to see such intelligent, learned experts so clearly panicking over the mere possibility that there could be a price change which would so obviously be good for the long-term future of humanity.

There is plenty more worth saying about the Israel-Palestine conflict, and in particular what sort of constructive policy solutions we might be able to find that would actually result in any kind of long-term peace. I’m no expert on peace negotiations, and frankly I admit that I would probably be a liability: if I were ever personally involved in such a negotiation, I’d be tempted to tell both sides that they are idiots and fanatics. (The headline the next morning: “Israeli and Palestinian Delegates Agree on One Thing: They Hate the US Ambassador”.)

The World Bank could have plenty to offer here, yet so far they’ve been too focused on commodity prices. Their thinking is a little too much ‘bank’ and not enough ‘world’.

It is a bit ironic, though also vaguely encouraging, that there are those within the World Bank itself who recognize this problem: Just a few weeks ago Ajay Banga gave a speech to the World Bank about “a world free of poverty on a livable planet”.

Yes. Those sound like the right priorities. Now maybe you could figure out how to turn that lip service into actual policy.

The inequality of factor mobility

Sep 24 JDN 2460212

I’ve written before about how free trade has brought great benefits, but also great costs. It occurred to me this week that there is a fairly simple reason why free trade has never been as good for the world as the models would suggest: Some factors of production are harder to move than others.

To some extent this is due to policy, especially immigration policy. But it isn’t just that. There are certain inherent limitations that render some kinds of inputs more mobile than others.

Broadly speaking, there are five kinds of inputs to production: Land, labor, capital, goods, and—oft forgotten—ideas.

You can of course parse them differently: Some would subdivide different types of labor or capital, and some things are hard to categorize this way. The same product, such as an oven or a car, can be a good or capital depending on how it’s used. (Or, consider livestock: is that labor, or capital? Or perhaps it’s a good? Oddly, it’s often discussed as land, which just seems absurd.) Maybe ideas can be considered a form of capital. There is a whole literature on human capital, which I increasingly find distasteful, because it seems to imply that economists couldn’t figure out how to value human beings except by treating them as a machine or a financial asset.

But this five-way categorization is particularly useful for what I want to talk about today. Because the rate at which those things move is very different.

Ideas move instantly. It takes literally milliseconds to transmit an idea anywhere in the world. This wasn’t always true; in ancient times ideas didn’t move much faster than people, and it wasn’t until the invention of the telegraph that their transit really became instantaneous. But it is certainly true now; once this post is published, it can be read in a hundred different countries in seconds.

Goods move in hours. Air shipping can take a product just about anywhere in less than a day. Sea shipping is a bit slower, but not radically so. It’s never been easier to move goods all around the world, and this has been the great success of free trade.

Capital moves in weeks. Here it might be useful to subdivide different types of capital: It’s surely faster to move an oven or even a car (the more good-ish sort of capital) than it is to move an entire factory (capital par excellence). But all in all, we can move stuff pretty fast these days. If you want to move your factory to China or Indonesia, you can probably get it done in a matter of weeks or at most months.

Labor moves in months. This one is a bit ironic, since it is surely easier to carry a single human person—or even a hundred human people—than all the equipment necessary to run an entire factory. But moving labor isn’t just a matter of physically carrying people from one place to another. It’s not like tourism, where you just pack and go. Moving labor requires uprooting people from where they used to live and letting them settle in a new place. It takes a surprisingly long time to establish yourself in a new environment—frankly even after two years in Edinburgh I’m not sure I quite managed it. And all the additional restrictions we’ve added involving border crossings and immigration laws and visas only make it that much slower.

Land moves never. This one seems perfectly obvious, but is also often neglected. You can’t pick up a mountain, a lake, a forest, or even a corn field and carry it across the border. (Yes, eventually plate tectonics will move our land around—but that’ll be millions of years.) Basically, land stays put—and so do all the natural environments and ecosystems on that land. Land isn’t as important for production as it once was; before industrialization, we were dependent on the land for almost everything. But we absolutely still are dependent on the land! If all the topsoil in the world suddenly disappeared, the economy wouldn’t simply collapse: the human race would face extinction. Moreover, a lot of fixed infrastructure, while technically capital, is no more mobile than land. We couldn’t much more easily move the Interstate Highway System to China than we could move Denali.

So far I have said nothing particularly novel. Yeah, clearly it’s much easier to move a mathematical theorem (if such a thing can even be said to “move”) than it is to move a factory, and much easier to move a factory than to move a forest. So what?

But now let’s consider the impact this has on free trade.

Ideas can move instantly, so free trade in ideas would allow all the world to instantaneously share all ideas. This isn’t quite what happens—but in the Internet age, we’re remarkably close to it. If anything, the world’s governments seem to be doing their best to stop this from happening: One of our most strictly-enforced trade agreements, the TRIPS Agreement, is about stopping ideas from spreading too easily. And as far as I can tell, region-coding on media goes against everything free trade stands for, yet here we are. (Why, it’s almost as if these policies are more about corporate profits than they ever were about freedom!)

Goods and capital can move quickly. This is where we have really felt the biggest effects of free trade: Everything in the US says “made in China” because the capital is moved to China and then the goods are moved back to the US.

But it would honestly have made more sense to move all those workers instead. For all their obvious flaws, US institutions and US infrastructure are clearly superior to those in China. (Indeed, consider this: We may be so aware of the flaws because the US is especially transparent.) So, the most absolutely efficient way to produce all those goods would be to leave the factories in the US, and move the workers from China instead. If free trade were to achieve its greatest promises, this is the sort of thing we would be doing.


Of course that is not what we did. There are various reasons for this: A lot of the people in China would rather not have to leave. The Chinese government would not want them to leave. A lot of people in the US would not want them to come. The US government might not want them to come.

Most of these reasons are ultimately political: People don’t want to live around people who are from a different nation and culture. They don’t consider those people to be deserving of the same rights and status as those of their own country.

It may sound harsh to say it that way, but it’s clearly the truth. If the average American person valued a random Chinese person exactly the same as they valued a random other American person, our immigration policy would look radically different. US immigration is relatively permissive by world standards, and that is a great part of American success. Yet even here there is a very stark divide between the citizen and the immigrant.

There are morally and economically legitimate reasons to regulate immigration. There may even be morally and economically legitimate reasons to value those in your own nation above those in other nations (though I suspect they would not justify the degree that most people do). But the fact remains that in terms of pure efficiency, the best thing to do would obviously be to move all the people to the place where productivity is highest and do everything there.

But wouldn’t moving people there reduce the productivity? Yes. Somewhat. If you actually tried to concentrate the entire world’s population into the US, productivity in the US would surely go down. So, okay, fine; stop moving people to a more productive place when it has ceased to be more productive. What this should do is average out all the world’s labor productivity to the same level—but a much higher level than the current world average, and frankly probably quite close to its current maximum.

Once you consider that moving people and things does have real costs, maybe fully equalizing productivity wouldn’t make sense. But it would be close. The differences in productivity across countries would be small.

They are not small.

Labor productivity worldwide varies tremendously. I don’t count Ireland, because that’s Leprechaun Economics (this is really US GDP with accounting tricks, not Irish GDP). So the prize for highest productivity goes to Norway, at $100 per worker hour (#ScandinaviaIsBetter). The US is doing the best among large countries, at an impressive $73 per hour. And at the very bottom of the list, we have places like Bangladesh at $4.79 per hour and Cambodia at $3.43 per hour. So, roughly speaking, there is a 20-to-1 ratio between the US and the least productive countries, and closer to 30-to-1 if you start from Norway.

I could believe that it’s not worth it to move US production at $73 per hour to Norway to get it up to $100 per hour. (For one thing, where would we fit it all?) But I find it far more dubious that it wouldn’t make sense to move most of Cambodia’s labor to the US. (Even all 16 million people is less than what the US added between 2010 and 2020.) Even given the fact that these Cambodian workers are less healthy and less educated than American workers, they would almost certainly be more productive on the other side of the Pacific, quite likely ten times as productive as they are now. Yet we haven’t moved them, and have no plans to.
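A crude back-of-envelope on what that gap could mean (the workforce size and hours per year are my rough assumptions; the hourly productivity figures are the ones cited above):

```python
# Rough assumptions (mine): Cambodian labor force ~9 million, ~2,000 work hours/year.
# Hourly productivity from the figures above: ~$3.43 in Cambodia, ~$73 in the US.
workers = 9_000_000
hours_per_year = 2_000
cambodia_rate, us_rate = 3.43, 73.0

current_output = workers * hours_per_year * cambodia_rate
# Suppose relocated workers reached only half the US rate, not the full $73:
relocated_output = workers * hours_per_year * us_rate * 0.5

print(f"Current annual output:    ${current_output / 1e9:.0f} billion")
print(f"Relocated (half US rate): ${relocated_output / 1e9:.0f} billion")
print(f"Implied gain:             ${(relocated_output - current_output) / 1e9:.0f} billion per year")
# Hundreds of billions of dollars per year, from one small country -- which is
# why leaving labor immobile plausibly leaves trillions on the table worldwide.
```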

That leaves the question of whether we will move our capital to them. We have been doing so in China, and it worked (to a point). Before that, we did it in Korea and Japan, and it worked. Cambodia will probably come along sooner or later. For now, that seems to be the best we can do.

But I still can’t shake the thought that the world is leaving trillions of dollars on the table by refusing to move people. The inequality of factor mobility seems to be a big part of the world’s inequality, period.

What is anxiety for?

Sep 17 JDN 2460205

As someone who experiences a great deal of anxiety, I have often struggled to understand what it could possibly be useful for. We have this whole complex system of evolved emotions, and yet more often than not it seems to harm us rather than help us. What’s going on here? Why do we even have anxiety? What even is anxiety, really? And what is it for?

There’s actually an extensive body of research on this, though very few firm conclusions. (One of the best accounts I’ve read, sadly, is paywalled.)

For one thing, there seem to be a lot of positive feedback loops involved in anxiety: Panic attacks make you more anxious, triggering more panic attacks; being anxious disrupts your sleep, which makes you more anxious. Positive feedback loops can very easily spiral out of control, resulting in responses that are wildly disproportionate to the stimulus that triggered them.
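A toy illustration of why such loops get out of hand (the gain values are invented; only the qualitative behavior matters):

```python
# Toy positive-feedback loop: tonight's anxiety feeds tomorrow's. A gain above 1
# spirals; a gain below 1 settles back down.
def simulate(gain: float, anxiety: float = 1.0, nights: int = 10) -> float:
    for _ in range(nights):
        anxiety *= gain  # poor sleep, panic attacks, etc. amplify the next day's anxiety
    return anxiety

for gain in (0.9, 1.0, 1.3):
    print(f"gain {gain}: anxiety after 10 nights = {simulate(gain):.1f}x the initial level")
# With gain 1.3, ten nights multiply anxiety roughly 14-fold -- a response wildly
# out of proportion to whatever stimulus started it.
```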

A certain amount of stress response is useful, even when the stakes are not life-or-death. But beyond a certain point, more stress becomes harmful rather than helpful. This is the Yerkes-Dodson effect, for which I developed my stochastic overload model (which I still don’t know if I’ll ever publish, ironically enough, because of my own excessive anxiety). Realizing that anxiety can have benefits can also take some of the bite out of having chronic anxiety, and, ironically, reduce that anxiety a little. The trick is finding ways to break those positive feedback loops.

I think one of the most useful insights to come out of this research is the smoke-detector principle, which is a fundamentally economic concept. It sounds quite simple: When dealing with an uncertain danger, sound the alarm if the expected benefit of doing so exceeds the expected cost.

This has profound implications when risk is highly asymmetric—as it usually is. Running away from a shadow or a noise that probably isn’t a lion carries some cost; you wouldn’t want to do it all the time. But it is surely nowhere near as bad as failing to run away when there is an actual lion. Indeed, it might be fair to say that failing to run away from an actual lion counts as one of the worst possible things that could ever happen to you, and could easily be 100 times as bad as running away when there is nothing to fear.

With this in mind, if you have a system for detecting whether or not there is a lion, how sensitive should you make it? Extremely sensitive. You should in fact try to calibrate it so that 99% of the time you experience the fear and want to run away, there is not a lion. Because the 1% of the time when there is one, it’ll all be worth it.
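A minimal formalization of that calibration, using the 100-to-1 ratio from the example above (the specific probabilities are just illustrative):

```python
# Smoke-detector principle: flee whenever the expected cost of staying exceeds
# the cost of fleeing. C = cost of a needless flight; K = cost of ignoring a
# real lion, assumed here to be 100 times worse.
C, K = 1.0, 100.0

def should_flee(p_lion: float) -> bool:
    """Flee iff the expected cost of staying (p * K) exceeds the cost of fleeing (C)."""
    return p_lion * K > C

for p in (0.001, 0.005, 0.02, 0.2):
    print(f"P(lion) = {p:>5.1%}: {'flee' if should_flee(p) else 'stay'}")
# The threshold is C / K = 1%, so a well-calibrated alarm should fire even when
# it is ~99% likely to be a false alarm -- which is exactly the claim above.
```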

Yet this is far from a complete explanation of anxiety as we experience it. For one thing, there has never been, in my entire life, even a 1% chance that I’m going to be attacked by a lion. Even standing in front of a lion enclosure at the zoo, my chances of being attacked are considerably less than that—for a zoo that allowed 1% of its customers to be attacked would not stay in business very long.

But for another thing, it isn’t really lions I’m afraid of. The things that make me anxious are generally not things that would be expected to do me bodily harm. Sure, I generally try to avoid walking down dark alleys at night, and I look both ways before crossing the street, and those are activities directly designed to protect me from bodily harm. But I actually don’t feel especially anxious about those things! Maybe I would if I actually had to walk through dark alleys a lot, but I don’t, and on the rare occasion that I did, I think I’d feel afraid at the time but fine afterward, rather than experiencing persistent, pervasive, overwhelming anxiety. (Whereas, if I’m anxious about reading emails, and I do manage to read emails, I’m usually still anxious afterward.) When it comes to crossing the street, I feel very little fear at all, even though perhaps I should—indeed, it has been remarked that when it comes to the perils of motor vehicles, human beings suffer from a very dangerous lack of fear. We should be much more afraid than we are—and our failure to be afraid kills thousands of people.

No, the things that make me anxious are invariably social: Meetings, interviews, emails, applications, rejection letters. Also parties, networking events, and back when I needed them, dates. They involve interacting with other people—and in particular being evaluated by other people. I never felt particularly anxious about exams, except maybe a little before my PhD qualifying exam and my thesis defenses; but I can understand those who do, because it’s the same thing: People are evaluating you.

This suggests that anxiety, at least of the kind that most of us experience, isn’t really about danger; it’s about status. We aren’t worried that we will be murdered or tortured or even run over by a car. We’re worried that we will lose our friends, or get fired; we are worried that we won’t get a job, won’t get published, or won’t graduate.

And yet it is striking to me that it often feels just as bad as if we were afraid that we were going to die. In fact, in the most severe instances where anxiety feeds into depression, it can literally make people want to die. How can that be evolutionarily adaptive?

Here it may be helpful to remember that in our ancestral environment, status and survival were oft one and the same. Humans are the most social organisms on Earth; I even sometimes describe us as hypersocial, a whole new category of social that no other organism seems to have achieved. We cooperate with others of our species on a mind-bogglingly grand scale, and are utterly dependent upon vast interconnected social systems far too large and complex for us to truly understand, let alone control.

In this historical epoch, these social systems are especially vast and incomprehensible; but at least for most of us in First World countries, they are also forgiving in a way that is fundamentally alien to our ancestors’ experience. It was not so long ago that a failed hunt or a bad harvest would mean your family starved unless you could successfully beseech your community for aid—which meant that your very survival could depend upon being in the good graces of that community. But now we have food stamps, so even if everyone in your town hates you, you still get to eat. Of course some societies are more forgiving (Sweden) than others (the United States); and virtually all societies could be even more forgiving than they are. But even the relatively cutthroat competition of the US today has far less genuine risk of truly catastrophic failure than what most human beings lived through for most of our existence as a species.

I have found this realization helpful—hardly a cure, but helpful, at least: What are you really afraid of? When you feel anxious, your body often tells you that the stakes are overwhelming, life-or-death; but if you stop and think about it, in the world we live in today, that’s almost never true. Failing at one important task at work probably won’t get you fired—and even getting fired won’t really make you starve.

In fact, we might be less anxious if it were! For our bodies’ fear system seems to be optimized for the following scenario: An immediate threat with high chance of success and life-or-death stakes. Spear that wild animal, or jump over that chasm. It will either work or it won’t, you’ll know immediately; it probably will work; and if it doesn’t, well, that may be it for you. So you’d better not fail. (I think it’s interesting how much of our fiction and media involves these kinds of events: The hero would surely and promptly die if he fails, but he won’t fail, for he’s the hero! We often seem more comfortable in that sort of world than we do in the one we actually live in.)

Whereas the life we live in now is one of delayed consequences with low chance of success and minimal stakes. Send out a dozen job applications. Hear back in a week from three that want to interview you. Do those interviews and maybe one will make you an offer—but honestly, probably not. Next week do another dozen. Keep going like this, week after week, until finally one says yes. Each failure actually costs you very little—but you will fail, over and over and over and over.
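To see how lopsided the odds in this environment really are, here is a small illustration (the per-application success rate is an assumption I made up for the example):

```python
# Illustrative: each application succeeds independently with probability p.
# Failure is the normal outcome; what accumulates is the chance of at least one success.
p = 0.02        # assumed odds that any single application leads to an offer
per_week = 12   # applications sent per week, as in the example above

for weeks in (1, 4, 12, 26):
    n = per_week * weeks
    at_least_one = 1 - (1 - p) ** n
    print(f"After {weeks:2d} week(s) ({n:3d} applications): "
          f"{at_least_one:.0%} chance of at least one offer")
# Each individual failure costs almost nothing, and success eventually becomes
# likely -- but only after absorbing dozens or hundreds of rejections.
```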

In other words, we have transitioned from an environment of immediate return to one of delayed return.

The result is that a system which was optimized to tell us never fail or you will die is being put through situations where failure is constantly repeated. I think deep down there is a part of us that wonders, “How are you still alive after failing this many times?” If you had fallen in as many ravines as I have received rejection letters, you would assuredly be dead many times over.

Yet perhaps our brains are not quite as miscalibrated as they seem. Again I come back to the fact that anxiety always seems to be about people and evaluation; it’s different from immediate life-or-death fear. I actually experience very little life-or-death fear, which makes sense; I live in a very safe environment. But I experience anxiety almost constantly—which also makes a certain amount of sense, seeing as I live in an environment where I am being almost constantly evaluated by other people.

One theory posits that anxiety and depression are a dual mechanism for dealing with social hierarchy: You are anxious when your position in the hierarchy is threatened, and depressed when you have lost it. Primates like us do seem to care an awful lot about hierarchies—and I’ve written before about how this explains some otherwise baffling things about our economy.

But I for one have never felt especially invested in hierarchy. At least, I have very little desire to be on top of the hierarchy. I don’t want to be on the bottom (for I know how such people are treated); and I strongly dislike most of the people who are actually on top (for they’re most responsible for treating the ones on the bottom that way). I also have ‘a problem with authority’; I don’t like other people having power over me. But if I were to somehow find myself ruling the world, one of the first things I’d do is try to figure out a way to transition to a more democratic system. So it’s less like I want power, and more like I want power to not exist. Which means that my anxiety can’t really be about fearing to lose my status in the hierarchy—in some sense, I want that, because I want the whole hierarchy to collapse.

If anxiety involved the fear of losing high status, we’d expect it to be common among those with high status. Quite the opposite is the case. Anxiety is more common among people who are more vulnerable: Women, racial minorities, poor people, people with chronic illness. LGBT people have especially high rates of anxiety. This suggests that it isn’t high status we’re afraid of losing—though it could still be that we’re a few rungs above the bottom and afraid of falling all the way down.

It also suggests that anxiety isn’t entirely pathological. Our brains are genuinely responding to circumstances. Maybe they are over-responding, or responding in a way that is not ultimately useful. But the anxiety is at least in part a product of real vulnerabilities. Some of what we’re worried about may actually be real. If you cannot carry yourself with the confidence of a mediocre White man, it may be simply because his status is fundamentally secure in a way yours is not, and he has been afforded a great many advantages you never will be. He never had a Supreme Court ruling decide his rights.

I cannot offer you a cure for anxiety. I cannot even really offer you a complete explanation of where it comes from. But perhaps I can offer you this: It is not your fault. Your brain evolved for a very different world than this one, and it is doing its best to protect you from the very different risks this new world engenders. Hopefully one day we’ll figure out a way to get it calibrated better.

Why are political speeches so vacuous?

Aug 27 JDN 2460184

In last week’s post I talked about how posters for shows at the Fringe seem to be attention-grabbing but almost utterly devoid of useful information.

This brings to mind another sort of content that also fits that description: political speeches.

While there are some exceptions—including in fact some of the greatest political speeches ever made, such as Martin Luther King’s “I have a dream” or Dwight Eisenhower’s “Cross of Iron”—on the whole, most political speeches seem to be incredibly vacuous.

Each country probably has its own unique flavor of vacuousness, but in the US they talk about motherhood, and apple pie, and American exceptionalism. “I love my great country, we are an amazing country, I’m so proud to live here” is basically the extent of the information conveyed within what could well be a full hour-long oration.

This raises a question: Why? Why don’t political speeches typically contain useful information?

It’s not that there’s no useful information to be conveyed: There are all sorts of things that people would like to know about a political candidate, including how honest they are, how competent they are, and the whole range of policies they intend to support or oppose on a variety of issues.

But most of what you’d like to know about a candidate actually comes in one of two varieties: Cheap talk, or controversy.

Cheap talk is the part related to being honest and competent. Basically every voter wants candidates who are honest and competent, and we know all too well that not all candidates qualify. The problem is, how do they show that they are honest and competent? They could simply assert it, but that’s basically meaningless—anybody could assert it. In fact, Donald Trump is the candidate who leaps to mind as the most eager to frequently assert his own honesty and competence, and also the most successful candidate in at least my lifetime who seems to utterly and totally lack anything resembling these qualities.

So unless you are clever enough to find ways to demonstrate your honesty and competence, you’re really not accomplishing anything by asserting it. Most people simply won’t believe you, and they’re right not to. So it doesn’t make much sense to spend a lot of effort trying to make such assertions.

Alternatively, you could try to talk about policy: say what you would like to do about climate change, the budget, the military, the healthcare system, or any of dozens of other political questions. That would absolutely be useful information for voters, and it isn’t just cheap talk, because different candidates really do intend to do different things, and voters would like to know which is which.

The problem, then, is that it’s controversial. Not everyone is going to agree with your particular take on any given political issue—even within your own party there is bound to be substantial disagreement.

If enough voters were sufficiently rational about this, and could coolly evaluate a candidate’s policies, accepting the pros and cons, then it would still make sense to deliver this information. I for one would rather vote for someone I know agrees with me 90% of the time than someone who won’t even tell me what they intend to do while in office.

But in fact most voters are not sufficiently rational about this. Voters react much more strongly to negative information than positive information: A candidate you agree with 9 times out of 10 can still make you utterly outraged by their stance on issue number 10. This is a specific form of the more general phenomenon of negativity bias: Psychologically, people just react a lot more strongly to bad things than to good things. Negativity bias has strong effects on how people vote, especially young people.

Rather than a cool-headed, rational assessment of pros and cons, most voters base their decision on deal-breakers: “I could never vote for a Republican” or “I could never vote for someone who wants to cut the military”. Only after they’ve excluded a large portion of candidates based on these heuristics do they even try to look closer at the detailed differences between candidates.

This means that, if you are a candidate, your best option is to avoid offering any deal-breakers. You want to say things that almost nobody will strongly disagree with—because any strong disagreement could be someone’s deal-breaker and thereby hurt your poll numbers.

And what’s the best way to not say anything that will offend or annoy anyone? Not say anything at all. Campaign managers basically need to Mirandize their candidates: You have the right to remain silent. Anything you say can and will be used against you in the court of public opinion.

But in fact you can’t literally remain silent—when running for office, you are expected to make a lot of speeches. So you do the next best thing: You say a lot of words, but convey very little meaning. You say things like “America is great” and “I love apple pie” and “Moms are heroes” that, while utterly vapid, are very unlikely to make anyone particularly angry at you or be any voter’s deal-breaker.

And then we get into a Nash equilibrium where everyone is talking like this, nobody is saying anything, and political speeches become entirely devoid of useful content.
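Here is a toy version of that equilibrium (the payoff numbers are invented; only the structure matters): with deal-breaker voters, a substantive speech wins a few enthusiasts but loses more people who hit their deal-breaker, so being vacuous is each candidate’s best response no matter what the opponent does.

```python
# Toy two-candidate game. Payoffs are net votes gained; the numbers are made up
# purely to illustrate the structure: substance triggers more deal-breakers than
# it wins converts.
payoffs = {
    ("vacuous", "vacuous"):         (0, 0),
    ("vacuous", "substantive"):     (3, -3),
    ("substantive", "vacuous"):     (-3, 3),
    ("substantive", "substantive"): (-1, -1),
}
strategies = ("vacuous", "substantive")

def best_response(opponent: str) -> str:
    """Candidate 1's payoff-maximizing speech style, holding the opponent fixed."""
    return max(strategies, key=lambda s: payoffs[(s, opponent)][0])

for opponent in strategies:
    print(f"If the opponent is {opponent:11s}: best response is {best_response(opponent)}")
# "Vacuous" is the best response to everything, so (vacuous, vacuous) is the
# unique Nash equilibrium -- even though voters learn nothing from it.
```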

What can we as voters do about this? Individually, perhaps nothing. Collectively, literally everything.

If we could somehow shift the equilibrium so that candidates who are brave enough to make substantive, controversial claims get rewarded for it—even when we don’t entirely agree with them—while those who continue to recite insipid nonsense are punished, then candidates will absolutely change how they speak.

But this would require a lot of people to change, more or less all at once. A sufficiently large critical mass of voters would need to be willing to support candidates specifically because they made detailed policy proposals, even if we didn’t particularly like those policy proposals.

Obviously, if their policy proposals were terrible, we’d have good reason to reject them; but for this to work, we need to be willing to support a lot of things that are just… kind of okay. Because it’s vanishingly unlikely that the first candidates who are brave enough to say what they intend will also be ones whose intentions we entirely agree with. We need to set some kind of threshold of minimum agreement, and reward anyone who exceeds it. We need to ask ourselves if our deal-breakers really need to be deal-breakers.

Against deontology

Aug 6 JDN 2460163

In last week’s post I argued against average utilitarianism, basically on the grounds that it devalues the lives of anyone who isn’t of above average happiness. But you might be tempted to take these as arguments against utilitarianism in general, and that is not my intention.

In fact I believe that utilitarianism is basically correct, though it needs some particular nuances that are often lost in various presentations of it.

Its leading rival is deontology, which is really a broad class of moral theories, some a lot better than others.

What characterizes deontology as a class is that it uses rules, rather than consequences; an act is just right or wrong regardless of its consequences—or even its expected consequences.

There are certain aspects of this which are quite appealing: In fact, I do think that rules have an important role to play in ethics, and as such I am basically a rule utilitarian. Actually trying to foresee all possible consequences of every action we might take is an absurd demand far beyond the capacity of us mere mortals, and so in practice we have no choice but to develop heuristic rules that can guide us.

But deontology says that these are no mere heuristics: They are in fact the core of ethics itself. Under deontology, wrong actions are wrong even if you know for certain that their consequences will be good.

Kantian ethics is one of the most well-developed deontological theories, and I am quite sympathetic to Kantian ethics. In fact, I used to consider myself one of its adherents, but I now consider that view mistaken.

Let’s first dispense with the views of Kant himself, which are obviously wrong. Kant explicitly said that lying is always, always, always wrong, and even when presented with obvious examples where you could tell a small lie to someone obviously evil in order to save many innocent lives, he stuck to his guns and insisted that lying is always wrong.

This is a bit anachronistic, but I think this example will be more vivid for modern readers, and it absolutely is consistent with what Kant wrote about the actual scenarios he was presented with:

You are living in Germany in 1945. You have sheltered a family of Jews in your attic to keep them safe from the Holocaust. Nazi soldiers have arrived at your door, and ask you: “Are there any Jews in this house?” Do you tell the truth?

I think it’s utterly, agonizingly obvious that you should not tell the truth. Exactly what you should do is less obvious: Do you simply lie and hope they buy it? Do you devise a clever ruse? Do you try to distract them in some way? Do you send them on a wild goose chase elsewhere? If you could overpower them and kill them, should you? What if you aren’t sure you can; should you still try? But one thing is clear: You don’t hand over the Jewish family to the Nazis.

Yet when presented with similar examples, Kant insisted that lying is always wrong. He had a theory to back it up, his Categorical Imperative: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

And, so his argument goes: Since it would be obviously incoherent to say that everyone should always lie, lying is wrong, and you’re never allowed to do it. He actually bites that bullet, a bullet the size of a howitzer round.

Modern deontologists—even those who consider themselves Kantians—are more sophisticated than this. They realize that you could make a rule like “Never lie, except to save the life of an innocent person” or “Never lie, except to stop a great evil.” Either of these would be quite adequate to solve this particular dilemma. And it’s absolutely possible to will that these would be universal laws, in the sense that they would apply to anyone. ‘Universal’ doesn’t have to mean ‘applies equally to all possible circumstances’.

There are also a couple of things that deontology does very well, which are worth preserving. One of them is supererogation: The idea that some acts are above and beyond the call of duty, that something can be good without being obligatory.

This is something most forms of utilitarianism are notoriously bad at. They show us a spectrum of worlds from the best to the worst, and tell us to make things better. But there’s nowhere we are allowed to stop, unless we somehow manage to make it all the way to the best possible world.

I find this kind of moral demand very tempting, which often leads me to feel a tremendous burden of guilt. I always know that I could be doing more than I do. I’ve written several posts about this in the past, in the hopes of fighting off this temptation in myself and others. (I am not entirely sure how well I’ve succeeded.)

Deontology does much better in this regard: Here are some rules. Follow them.

Many of the rules are in fact very good rules that most people successfully follow their entire lives: Don’t murder. Don’t rape. Don’t commit robbery. Don’t rule a nation tyrannically. Don’t commit war crimes.

Others are oft more honored in the breach than the observance: Don’t lie. Don’t be rude. Don’t be selfish. Be brave. Be generous. But a well-developed deontology can even deal with this, by saying that some rules are more important than others, and thus some sins are more forgivable than others.

Whereas a utilitarian—at least, anything but a very sophisticated utilitarian—can only say who is better and who is worse, a deontologist can say who is good enough: who has successfully discharged their moral obligations and is otherwise free to live their life as they choose. Deontology absolves us of guilt in a way that utilitarianism is very bad at.

Another good deontological principle is double-effect: Basically this says that if you are doing something that will have bad outcomes as well as good ones, it matters whether you intend the bad one and what you do to try to mitigate it. There does seem to be a morally relevant difference between a bombing that kills civilians accidentally as part of an attack on a legitimate military target, and a so-called “strategic bombing” that directly targets civilians in order to maximize casualties—even if both occur as part of a justified war. (Both happen a lot—and it may even be the case that some of the latter were justified. The Tokyo firebombing and atomic bombs on Hiroshima and Nagasaki were very much in the latter category.)

There are ways to capture this principle (or something very much like it) in a utilitarian framework, but like supererogation, it requires a sophisticated, nuanced approach that most utilitarians don’t seem willing or able to take.

Now that I’ve said what’s good about it, let’s talk about what’s really wrong with deontology.

Above all: How do we choose the rules?

Kant seemed to think that mere logical coherence would yield a sufficiently detailed—perhaps even unique—set of rules for all rational beings in the universe to follow. This is obviously wrong, and seems to be simply a failure of his imagination. There is literally a countably infinite space of possible ethical rules that are logically consistent. (With probability 1 any given one is utter nonsense: “Never eat cheese on Thursdays”, “Armadillos should rule the world”, and so on—but these are still logically consistent.)

If you require the rules to be simple and general enough to always apply to everyone everywhere, you can narrow the space substantially; but this is also how you get obviously wrong rules like “Never lie.”

In practice, there are two ways we actually seem to do this: Tradition and consequences.

Let’s start with tradition. (It came first historically, after all.) You can absolutely make a set of rules based on whatever your culture has handed down to you since time immemorial. You can even write them down in a book that you declare to be the absolute infallible truth of the universe—and, amazingly enough, you can get millions of people to actually buy that.

The result, of course, is what we call religion. Some of its rules are good: Thou shalt not kill. Some are flawed but reasonable: Thou shalt not steal. Thou shalt not commit adultery. Some are nonsense: Thou shalt not covet thy neighbor’s goods.

And some, well… some rules of tradition are the source of many of the world’s most horrific human rights violations. Thou shalt not suffer a witch to live (Exodus 22:18). If a man also lie with mankind, as he lieth with a woman, both of them have committed an abomination: they shall surely be put to death; their blood shall be upon them (Leviticus 20:13).

Tradition-based deontology has in fact been the major obstacle to moral progress throughout history. It is not a coincidence that utilitarianism began gaining popularity right before the abolition of slavery, and there is an even more direct causal link between utilitarianism and the advancement of rights for women and LGBT people. When the sole argument you can make for moral rules is that they are ancient (or allegedly handed down by a perfect being), you can make rules that oppress anyone you want. But when rules have to be based on bringing happiness or preventing suffering, whole classes of oppression suddenly become untenable. “God said so” can justify anything—but “Who does it hurt?” can cut through.

It is an oversimplification, but not a terribly large one, to say that the arc of moral history has been drawn by utilitarians dragging deontologists kicking and screaming into a better future.

There is a better way to make rules, and that is based on consequences. And, in practice, most people who call themselves deontologists these days do this. They develop a system of moral rules based on what would be expected to lead to the overall best outcomes.

I like this approach. In fact, I agree with this approach. But it basically amounts to abandoning deontology and surrendering to utilitarianism.

Once you admit that the fundamental justification for all moral rules is the promotion of happiness and the prevention of suffering, you are basically a rule utilitarian. Rules then become heuristics for promoting happiness, not the fundamental source of morality itself.

I suppose it could be argued that this is not a surrender but a synthesis: We are looking for the best aspects of deontology and utilitarianism. That makes a lot of sense. But I keep coming back to the dark history of traditional rules, the fact that deontologists have basically been holding back human civilization since time immemorial. If deontology wants to be taken seriously now, it needs to prove that it has broken with that dark tradition. And frankly the easiest answer to me seems to be to just give up on deontology.

How much should we give of ourselves?

Jul 23 JDN 2460149

This is a question I’ve written about before, but it’s a very important one—perhaps the most important question I deal with on this blog—so today I’d like to come back to it from a slightly different angle.

Suppose you could sacrifice all the happiness in the rest of your life, making your own existence barely worth living, in exchange for saving the lives of 100 people you will never meet.

  1. Would it be good for you to do so?
  2. Should you do so?
  3. Are you a bad person if you don’t?
  4. Are all of the above really the same question?

Think carefully about your answer. It may be tempting to say “yes”. It feels righteous to say “yes”.

But in fact this is not hypothetical. It is the actual situation you are in.

This GiveWell article is titled “Why is it so expensive to save a life?”, which is incredibly weird, because the actual figure they give is astonishingly, mind-bogglingly, frankly disgustingly cheap: It costs about $4500 to save one human life. I don’t know how you can possibly find that expensive. I don’t understand how anyone can think, “Saving this person’s life might max out a credit card or two; boy, that sure seems expensive!”

The standard for healthcare policy in the US is that an intervention is worth doing if it can save one quality-adjusted life year for less than $50,000. That’s a single year of life for more than ten times the cost of saving an entire life. Even accounting for the shorter lifespans and worse lives in poor countries, saving someone in a poor country for $4500 is at least one hundred times as cost-effective as that.

To put it another way, if you are a typical middle-class person in the First World, with an after-tax income of about $25,000 per year, and you donated 90% of that income to high-impact charities, you could expect to save 5 lives every year. Over the course of a 30-year career, that’s 150 lives saved.
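For those who like to check the arithmetic, here is a rough back-of-the-envelope sketch in Python. All the figures are the illustrative estimates quoted above, not precise data, and the assumption of 10 QALYs per life saved is mine, chosen conservatively.

    # Back-of-the-envelope arithmetic using the illustrative figures quoted above.
    cost_per_life = 4_500            # GiveWell's rough estimate: dollars per life saved
    us_threshold_per_qaly = 50_000   # standard US cost-effectiveness bar: dollars per QALY

    after_tax_income = 25_000        # typical First World middle-class after-tax income, per year
    donation_rate = 0.90             # the extreme 90% scenario discussed here
    career_years = 30

    annual_donation = donation_rate * after_tax_income   # $22,500 per year
    lives_per_year = annual_donation / cost_per_life      # 5 lives per year
    lives_per_career = lives_per_year * career_years      # 150 lives over a 30-year career

    # Even if a life saved bought only 10 quality-adjusted life years (a deliberately
    # conservative assumption), $4,500 per life works out to $450 per QALY --
    # over 100 times better than the $50,000-per-QALY standard.
    qalys_per_life = 10
    implied_cost_per_qaly = cost_per_life / qalys_per_life
    relative_effectiveness = us_threshold_per_qaly / implied_cost_per_qaly

    print(lives_per_year, lives_per_career, round(relative_effectiveness))   # 5.0 150.0 111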

You would of course be utterly miserable for those 30 years, having given away all the money you could possibly have used for any kind of entertainment or enjoyment, not to mention living in the cheapest possible housing—maybe even a tent in a homeless camp—and eating the cheapest possible food. But you could do it, and you would in fact be expected to save over 100 lives by doing so.

So let me ask you again:

  1. Would it be good for you to do so?
  2. Should you do so?
  3. Are you a bad person if you don’t?
  4. Are all of the above really the same question?

Peter Singer often writes as though the answer to all these questions is “yes”. But even he doesn’t actually live that way. He gives a great deal to charity, mind you; no one seems to know exactly how much, but estimates range from 10% to 50% of his income. My general impression is that he gives about 10% of his ordinary income and more like 50% of big prizes he receives (which are in fact quite numerous). Over the course of his life he has certainly donated at least a couple million dollars. Yet he clearly could give more than he does: He lives a comfortable, upper-middle-class life.

Peter Singer’s original argument for his view, from his essay “Famine, Affluence, and Morality”, is actually astonishingly weak. It involves imagining a scenario where a child is drowning in a lake and you could go save them, but only at the cost of ruining your expensive suit.

Obviously, you should save the child. We all agree on that. You are in fact a terrible person if you wouldn’t save the child.

But Singer tries to generalize this into a principle that requires us to donate almost all of our income to international charities, and that just doesn’t follow.

First of all, that suit is not worth $4500. Not if you’re a middle-class person. That’s a damn Armani. No one who isn’t a millionaire wears suits like that.

Second, in the imagined scenario, you’re the only one who can help the kid. All I have to do is change that one thing and already the answer is different: If right next to you there is a trained, certified lifeguard, they should save the kid, not you. And if there are a hundred other people at the lake, and none of them is saving the kid… probably there’s a good reason for that? (It could be bystander effect, but actually that’s much weaker than a lot of people think.) The responsibility doesn’t uniquely fall upon you.

Third, the drowning child is a one-off, emergency scenario that almost certainly will never happen to you, and if it does ever happen, will almost certainly only happen once. But donation is something you could always do, and you could do over and over and over again, until you have depleted all your savings and run up massive debts.

Fourth, in the hypothetical scenario, there is only one child. What if there were ten—or a hundred—or a thousand? What if you couldn’t possibly save them all by yourself? Should you keep going out there and saving children until you become exhausted and you yourself drown? Even if there is a lifeguard and a hundred other bystanders right there doing nothing?

And finally, in the drowning child scenario, you are right there. This isn’t some faceless stranger thousands of miles away. You can actually see that child in front of you. Peter Singer thinks that doesn’t matter—actually his central point seems to be that it doesn’t matter. But I think it does.

Singer writes:

It makes no moral difference whether the person I can help is a neighbor’s child ten yards away from me or a Bengali whose name I shall never know, ten thousand miles away.

That’s clearly wrong, isn’t it? Relationships mean nothing? Community means nothing? There is no moral value whatsoever to helping people close to us rather than random strangers on the other side of the planet?

One answer might be to say that the answer to question 4 is “no”. You aren’t a bad person for not doing everything you should, and even though something would be good if you did it, that doesn’t necessarily mean you should do it.

Perhaps some things are above and beyond the call of duty: Good, perhaps even heroic, if you’re willing to do them, but not something we are all obliged to do. The formal term for this is supererogatory. While I think utilitarianism is basically correct and has done great things for human society, most utilitarians seem to miss something important here: they deny that supererogatory actions exist.

Even then, I’m not entirely sure it is good to be this altruistic.

Someone who really believed that we owe as much to random strangers as we do to our friends and family would never show up to any birthday parties, because any time spent at a birthday party would be more efficiently spent earning-to-give to some high-impact charity. They would never visit their family on Christmas, because plane tickets are expensive and airplanes burn a lot of carbon.

They also wouldn’t concern themselves with whether their job is satisfying or even tolerable; they would care only about maximizing the total positive impact they can have on the world, whether directly through their work or by earning as much money as possible and donating it all to charity.

They would rest only the minimum amount they require to remain functional, eat only the barest minimum of nutritious food, and otherwise work, work, work, constantly, all the time. If their body was capable of doing the work, they would continue doing the work. For there is not a moment to waste when lives are on the line!

A world full of people like that would be horrible. We would all live our entire lives in miserable drudgery trying to maximize the amount we can donate to faceless strangers on the other side of the planet. There would be no joy or friendship in that world, only endless, endless toil.

When I bring this up in the Effective Altruism community, I’ve heard people try to argue otherwise, basically saying that we would never need everyone to devote themselves to the cause at this level, because we’d soon solve all the big problems and be able to go back to enjoying our lives. I think that’s probably true—but it also kind of misses the point.

Yes, if everyone gave their fair share, that fair share wouldn’t have to be terribly large. But we know for a fact that most people are not giving their fair share. So what now? What should we actually do? Do you really want to live in a world where the morally best people are miserable all the time sacrificing themselves at the altar of altruism?

Yes, clearly, most people don’t do enough. In fact, most people give basically nothing to high-impact charities. We should be trying to fix that. But if I am already giving far more than my fair share, far more than I would have to give if everyone else were pitching in as they should—isn’t there some point at which I’m allowed to stop? Do I have to give everything I can or else I’m a monster?

The conclusion that it would be good to make ourselves utterly miserable in order to save distant strangers feels deeply unsettling. It feels even worse if we say that we ought to do so, and worse still if we say that we are bad people when we don’t.

One solution would be to say that we owe absolutely nothing to these distant strangers. Yet that clearly goes too far in the opposite direction. There are so many problems in this world that could be fixed if more people cared just a little bit about strangers on the other side of the planet. Poverty, hunger, war, climate change… if everyone in the world (or really even just everyone in power) cared even 1% as much about random strangers as they do about themselves, all these would be solved.

Should you donate to charity? Yes! You absolutely should. Please, I beseech you, give some reasonable amount to charity—perhaps 5% of your income, or if you can’t manage that, maybe 1%.

Should you make changes in your life to make the world better? Yes! Small ones. Eat less meat. Take public transit instead of driving. Recycle. Vote.

But I can’t ask you to give 90% of your income and spend your entire life trying to optimize your positive impact. Even if it worked, it would be utter madness, and the world would be terrible if all the good people tried to do that.

I feel quite strongly that this is the right approach: Give something. Your fair share, or perhaps even a bit more, because you know not everyone will.

Yet it’s surprisingly hard to come up with a moral theory on which this is the right answer.

It’s much easier to develop a theory on which we owe absolutely nothing: egoism, or any deontology on which charity is not an obligation. And of course Singer-style utilitarianism says that we owe virtually everything: As long as QALYs can be purchased more cheaply through GiveWell than by spending on yourself, you should keep donating to GiveWell.

I think part of the problem is that we have developed all these moral theories as if we were isolated beings, who act in a world that is simply beyond our control. It’s much like the assumption of perfect competition in economics: I am but one producer among thousands, so whatever I do won’t affect the price.

But what we really needed was a moral theory that could work for a whole society. Something that would still make sense if everyone did it—or better yet, still make sense if half the people did it, or 10%, or 5%. The theory cannot depend upon the assumption that you are the only one following it. It cannot simply “hold constant” the rest of society.

I have come to realize that the Effective Altruism movement, while probably mostly good for the world as a whole, has actually been quite harmful to the mental health of many of its followers, including myself. It has made us feel guilty for not doing enough, pressured us to burn ourselves out working ever harder to save the world. Because we do not give our last dollar to charity, we are told that we are murderers.

But there are real murderers in this world. While you were beating yourself up over not donating enough, Vladimir Putin was continuing his invasion of Ukraine, ExxonMobil was expanding its offshore drilling, Daesh was carrying out hundreds of terrorist attacks, QAnon was deluding millions of people, and the human trafficking industry was making $150 billion per year.

In other words, by simply doing nothing you are considerably better than the real monsters responsible for most of the world’s horror.

In fact, those starving children in Africa that you’re sending money to help? They wouldn’t need it, were it not for centuries of colonial imperialism followed by a series of corrupt and/or incompetent governments ruled mainly by psychopaths.

Indeed the best way to save those people, in the long run, would be to fix their governments—as has been done in places like Namibia and Botswana. According to the World Development Indicators, the proportion of people living below the UN extreme poverty line (currently $2.15 per day at purchasing power parity) has fallen from 36% to 16% in Namibia since 2003, and from 42% to 15% in Botswana since 1984. Compare this to some countries that haven’t had good governments over that time: In Cote d’Ivoire the same poverty rate was 8% in 1985 but is 11% today (and was actually as high as 33% in 2015), while in Congo it remains at 35%. Then there are countries that are trying, but started out so poor that they still have a long way to go: Burkina Faso’s extreme poverty rate has fallen from 82% in 1994 to 30% today.

In other words, if you’re feeling bad about not giving enough, remember this: if everyone in the world were as good as you, you wouldn’t need to give a cent.

Of course, simply feeling good about yourself for not being a psychopath doesn’t accomplish very much either. Somehow we have to find a balance: Motivate people enough so that they do something, get them to do their share; but don’t pressure them to sacrifice themselves at the altar of altruism.

I think part of the problem here—and not just here—is that the people who most need to change are the ones least likely to listen. The kind of person who reads Peter Singer is already probably in the top 10% of most altruistic people, and really doesn’t need much more than a slight nudge to be doing their fair share. And meanwhile the really terrible people in the world have probably never picked up an ethics book in their lives, or if they have, they ignored everything it said.

I don’t quite know what to do about that. But I hope I can at least convince you—and myself—to take some of the pressure off when it feels like we’re not doing enough.

How to make political conversation possible

Jun 25 JDN 2460121

Every man has the right to an opinion, but no man has a right to be wrong in his facts.

~Bernard Baruch

We shouldn’t expect political conversation to be easy. Politics inherently involves conflict. There are various competing interests and different ethical views involved in any political decision. Budgets are inherently limited, and spending must be prioritized. Raising taxes supports public goods but hurts taxpayers. A policy that reduces inflation may increase unemployment. A policy that promotes growth may also increase inequality. Freedom must sometimes be weighed against security. Compromises must be made that won’t make everyone happy—often they aren’t anyone’s first choice.

But in order to have useful political conversations, we need to have common ground. It’s one thing to disagree about what should be done—it’s quite another to ‘disagree’ about the basic facts of the world. Reasonable people can disagree about what constitutes the best policy choice. But when you start insisting upon factual claims that are empirically false, you become inherently unreasonable.

What terrifies me about our current state of political discourse is that we do not seem to have this common ground. We can’t even agree about basic facts of the world. Unless we can fix this, political conversation will be impossible.

I am tempted to say “anymore”—it at least feels to me like politics used to be different. But maybe it’s always been this way, and the Internet simply made the unreasonable voices louder. Overall rates of belief in most conspiracy theories haven’t changed substantially over time. Plenty of earlier eras have been called ‘the golden age of conspiracy theory’. Maybe this has always been a problem. Maybe the greatest reason humanity has never been able to achieve peace is that large swaths of humanity can’t even agree on the basic facts.

Donald Trump exemplified this fact-less approach to politics, and QAnon remains a disturbingly significant force in our politics today. It’s impossible to have a sensible conversation with people who are convinced that you’re supporting a secret cabal of Satanic child molesters—and all the more impossible because they were willing to become convinced of that on literally zero evidence. But Trump was not the first conspiracist candidate, and will not be the last.

Robert F. Kennedy Jr. now seems to be challenging Trump for the title of ‘most unreasonable Presidential candidate’, as he has advocated an astonishing variety of bizarre, unfounded claims: that vaccines are deadly, that antidepressants are responsible for mass shootings, that COVID was a Chinese bioweapon. He even claims things that can be quickly refuted simply by looking up the figures: He says that Switzerland’s gun ownership rate is comparable to the US, when in fact it’s only about one-fourth as high. No other country even comes close to the extraordinarily high rate of gun ownership in the US; we are the only country in the world with more privately-owned guns than people. (We also have by far the most military weapons, but that’s a somewhat different issue.)

What should we be doing about this? I think at this point it’s clear that simply sitting back and hoping it goes away on its own is not working. There is a widespread fear that engaging with bizarre theories simply grants them attention, but I think we have no serious alternative. They aren’t going to disappear if we simply ignore them.

That still leaves the question of how to engage. Simply arguing with their claims directly and presenting mainstream scientific evidence appears to be remarkably ineffective. They will simply dismiss the credibility of the scientific evidence, often by exaggerating genuine flaws in scientific institutions. The journal system is broken? Big Pharma has far too much influence? Established ideas take too long to become unseated? All true. But that doesn’t mean that magic beans cure cancer.

A more effective—not easy, and certainly not infallible, but more effective—strategy seems to be to look deeper into why people say the things they do. I emphasize the word ‘say’ here, because it often seems to be the case that people don’t really believe in conspiracy theories the way they believe in ordinary facts. It’s more the mythology mindset.

Rather than address the claims directly, you need to address the person making the claims. Before getting into any substantive content, you must first build rapport and show empathy—a process some call pre-suasion. Then, rather than seeking out the evidence that supports their claims—as there will be virtually none—try to find out what emotional need the conspiracy theory satisfies for them: How does it help them make sense of the terrifying chaos of the world? How does professing belief in something that initially seems absurd and horrific actually make the world seem more orderly and secure in their mind?


For instance, consider the claim that 9/11 was an inside job. At face value, this is horrifying: The US government is so evil it was prepared to launch an attack on our own soil, against our own citizens, in order to justify starting a war in another country? Against such a government, I think violent insurrection is the only viable response. But if you consider it from another perspective, it makes the world less terrifying: At least, there is someone in control. An attack like 9/11 means that the world is governed by chaos: Even we in the seemingly-impregnable fortress of American national security are in fact vulnerable to random attacks by small groups of dedicated fanatics. In the conspiracist vision of the world, the US government becomes a terrible villain; but at least the world is governed by powerful, orderly forces—not random chaos.

Or consider one of the most widespread (and, to be fair, one of the least implausible) conspiracy theories: That JFK was assassinated not by a single fanatic, but by an organized agency—the KGB, or the CIA, or the Vice President. In the real world, the President of the United States—the most powerful man on the entire planet—can occasionally be felled by a single individual who is dedicated enough and lucky enough. In the conspiracist world, such a powerful man can only be killed by someone similarly powerful. The world may be governed by an evil elite—but at least it is governed. The rules may be evil, but at least there are rules.

Understanding this can give you some sympathy for people who profess conspiracies: They are struggling to cope with the pain of living in a chaotic, unpredictable, disorderly world. They cannot deny that terrible events happen, but by attributing them to unseen, organized forces, they can at least believe that those terrible events are part of some kind of orderly plan.


At the same time, you must constantly guard against seeming arrogant or condescending. (This is where I usually fail; it’s so hard for me to take these ideas seriously.) You must present yourself as open-minded and interested in speaking in good faith. If they sense that you aren’t taking them seriously, people will simply shut down and refuse to talk any further.

It’s also important to recognize that most people with bizarre beliefs aren’t simply gullible. It isn’t that they believe whatever anyone tells them. On the contrary, they seem to suffer from misplaced skepticism: They doubt the credible sources and believe the unreliable ones. They are hyper-aware of the genuine problems with mainstream sources, and yet somehow totally oblivious to the far more glaring failures of the sources they themselves trust.

Moreover, you should never expect to change someone’s worldview in a single conversation. That simply isn’t how human beings work. The only times I have ever seen anyone completely change their opinion on something in a single sitting involved mathematical proofs—showing a proper proof really can flip someone’s opinion all by itself. Yet even scientists working in their own fields of expertise generally require multiple sources of evidence, combined over some period of time, before they will truly change their minds.

Your goal, then, should not be to convince someone that their bizarre belief is wrong. Rather, convince them that some of the sources they trust are just as unreliable as the ones they doubt. Or point out some gaps in the story they hadn’t considered. Or offer an alternative account of events that explains the outcome without requiring the existence of a secret evil cabal. Don’t try to tear down the entire wall all at once; chip away at it, one little piece at a time—and one day, it will crumble.

Hopefully if we do this enough, we can make useful political conversation possible.

We ignorant, incompetent gods

May 21 JDN 2460086

A review of Homo Deus

The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.

E.O. Wilson

Homo Deus is a very good read—and despite its length, a quick one; as you can see, I read it cover to cover in a week. Yuval Noah Harari’s central point is surely correct: Our technology is reaching a threshold where it grants us unprecedented power and forces us to ask what it means to be human.

Biotechnology and artificial intelligence are now advancing so rapidly that advancements in other domains, such as aerospace and nuclear energy, seem positively mundane. Who cares about making flight or electricity a bit cleaner when we will soon have the power to modify ourselves or we’ll all be replaced by machines?

Indeed, we already have technology that would have seemed to ancient people like the powers of gods. We can fly; we can witness or even control events thousands of miles away; we can destroy mountains; we can wipe out entire armies in an instant; we can even travel into outer space.

Harari rightly warns us that our not-so-distant descendants are likely to have powers that we would see as godlike: Immortality, superior intelligence, self-modification, the power to create life.

And while it is scary to think about what they might do with that power if they think the way we do—as ignorant and foolish and tribal as we are—Harari points out that it is equally scary to think about what they might do if they don’t think the way we do—for then, how do they think? If their minds are genetically modified or even artificially created, who will they be? What values will they have, if not ours? Could they be better? What if they’re worse?

It is of course difficult to imagine values better than our own—if we thought those values were better, we’d presumably adopt them. But we should seriously consider the possibility, since presumably most of us believe that our values today are better than what most people’s values were 1000 years ago. If moral progress continues, does it not follow that people’s values will be better still 1000 years from now? Or at least that they could be?

I also think Harari overestimates just how difficult it is to anticipate the future. This may be a useful overcorrection; the world is positively infested with people making overprecise predictions about the future, often selling them for exorbitant fees (note that Harari was quite well-compensated for this book as well!). But our values are not so fundamentally alien from those of our forebears, and we have reason to suspect that our descendants’ values will differ from ours no more than ours differ from our forebears’.

For instance, do you think that medieval people thought suffering and death were good? I assure you they did not. Nor did they believe that the supreme purpose in life is eating cheese. (They didn’t even believe the Earth was flat!) They did not have the concept of GDP, but they could surely appreciate the value of economic prosperity.

Indeed, our world today looks very much like a medieval peasant’s vision of paradise. Boundless food in endless variety. Near-perfect security against violence. Robust health, free from nearly all infectious disease. Freedom of movement. Representation in government! The land of milk and honey is here; there they are, milk and honey on the shelves at Walmart.

Of course, our paradise comes with caveats: Not least, we are by no means free of toil, but instead have invented whole new kinds of toil they could scarcely have imagined. If anything I would have to guess that coding a robot or recording a video lecture probably isn’t substantially more satisfying than harvesting wheat or smithing a sword; and reconciling receivables and formatting spreadsheets is surely less. Our tasks are physically much easier, but mentally much harder, and it’s not obvious which of those is preferable. And we are so very stressed! It’s honestly bizarre just how stressed we are, given the abundance in which we live; there is no reason for our lives to have stakes so high, and yet somehow they do. It is perhaps this stress and economic precarity that prevents us from feeling such joy as the medieval peasants would have imagined for us.

Of course, we don’t agree with our ancestors on everything. The medieval peasants were surely more religious, more ignorant, more misogynistic, more xenophobic, and more racist than we are. But projecting that trend forward mostly means less ignorance, less misogyny, less racism in the future; it means that future generations should see the world catch up to what the best of us already believe and strive for—hardly something to fear. The values I believe in are surely not the ones we as a civilization act upon, and I sorely wish they were. Perhaps someday they will be.

I can even imagine something that I myself would recognize as better than me: Me, but less hypocritical. Strictly vegan rather than lacto-ovo-vegetarian, or at least more consistent about only buying free range organic animal products. More committed to ecological sustainability, more willing to sacrifice the conveniences of plastic and gasoline. Able to truly respect and appreciate all life, even humble insects. (Though perhaps still not mosquitoes; this is war. They kill more of us than any other animal, including us.) Not even casually or accidentally racist or sexist. More courageous, less burnt out and apathetic. I don’t always live up to my own ideals. Perhaps someday someone will.

Harari fears something much darker, that we will be forced to give up on humanist values and replace them with a new techno-religion he calls Dataism, in which the supreme value is efficient data processing. I see very little evidence of this. If it feels like data is worshipped these days, it is only because data is profitable. Amazon and Google constantly seek out ever richer datasets and ever faster processing because that is how they make money. The real subject of worship here is wealth, and that is nothing new. Maybe there are some die-hard techno-utopians out there who long for us all to join the unified oversoul of all optimized data processing, but I’ve never met one, and they are clearly not the majority. (Harari also uses the word ‘religion’ in an annoyingly overbroad sense; he refers to communism, liberalism, and fascism as ‘religions’. Ideologies, surely; but religions?)

Harari in fact seems to think that ideologies are strongly driven by economic structures, so maybe he would even agree that it’s about profit for now, but thinks it will become religion later. But I don’t really see history fitting this pattern all that well. If monotheism is directly tied to the formation of organized bureaucracy and national government, then how did Egypt and Rome last so long with polytheistic pantheons? If atheism is the natural outgrowth of industrialized capitalism, then why are Africa and South America taking so long to get the memo? I do think that economic circumstances can constrain culture and shift what sort of ideas become dominant, including religious ideas; but there clearly isn’t this one-to-one correspondence he imagines. Moreover, there was never Coalism or Oilism aside from the greedy acquisition of these commodities as part of a far more familiar ideology: capitalism.

He also claims that all of science is now, or is close to, following a united paradigm under which everything is a data processing algorithm, which suggests he has not met very many scientists. Our paradigms remain quite varied, thank you; and if they do all have certain features in common, it’s mainly things like rationality, naturalism and empiricism that are more or less inherent to science. It’s not even the case that all cognitive scientists believe in materialism (though it probably should be); there are still dualists out there.

Moreover, when it comes to values, most scientists believe in liberalism. This is especially true if we use Harari’s broad sense (on which mainline conservatives and libertarians are ‘liberal’ because they believe in liberty and human rights), but even in the narrow sense of center-left. We are by no means converging on a paradigm where human life has no value because it’s all just data processing; maybe some scientists believe that, but definitely not most of us. If scientists ran the world, I can’t promise everything would be better, but I can tell you that Bush and Trump would never have been elected and we’d have a much better climate policy in place by now.

I do share many of Harari’s fears of the rise of artificial intelligence. The world is clearly not ready for the massive economic disruption that AI is going to cause all too soon. We still define a person’s worth by their employment, and think of ourselves primarily as collection of skills; but AI is going to make many of those skills obsolete, and may make many of us unemployable. It would behoove us to think in advance about who we truly are and what we truly want before that day comes. I used to think that creative intellectual professions would be relatively secure; ChatGPT and Midjourney changed my mind. Even writers and artists may not be safe much longer.

Harari is so good at sympathetically explaining other views that it becomes a fault. At times it is actually difficult to know whether he himself believes something and wants you to believe it too, or whether he is just steelmanning someone else’s worldview. There’s a whole section on ‘evolutionary humanism’ where he details a worldview that is at best Nietzschean and at worst Nazi, but he makes it sound so seductive. I don’t think it’s what he believes, in part because he has similarly good things to say about liberalism and socialism—but it’s honestly hard to tell.

The weakest part of the book is when Harari talks about free will. Like most people, he just doesn’t get compatibilism. He spends a whole chapter talking about how science ‘proves we have no free will’, and it’s just the same old tired arguments hard determinists have always made.

He talks about how we can make choices based on our desires, but we can’t choose our desires; well of course we can’t! What would that even mean? If you could choose your desires, what would you choose them based on, if not your desires? Your desire-desires? Well, then, can you choose your desire-desires? What about your desire-desire-desires?

What even is this ultimate uncaused freedom that libertarian free will is supposed to consist in? No one seems capable of even defining it. (I’d say Kant got the closest: He defined it as the capacity to act based upon what ought rather than what is. But of course what we believe about ‘ought’ is fundamentally stored in our brains as a particular state, a way things are—so in the end, it’s an ‘is’ we act on after all.)

Maybe before you lament that something doesn’t exist, you should at least be able to describe that thing as a coherent concept? Woe is me, that 2 plus 2 is not equal to 5!

It is true that as our technology advances, manipulating other people’s desires will become more and more feasible. Harari overstates the case on so-called robo-rats; they aren’t really mind-controlled, it’s more like they are rewarded and punished. The rat chooses to go left because she knows you’ll make her feel good if she does; she’s still freely choosing to go left. (Dangling a carrot in front of a horse is fundamentally the same thing—and frankly, paying a wage isn’t all that different.) The day may yet come where stronger forms of control become feasible, and woe betide us when it does. Yet this is no threat to the concept of free will; we already knew that coercion was possible, and mind control is simply a more precise form of coercion.

Harari reports on a lot of interesting findings in neuroscience, which are important for people to know about, but they do not actually show that free will is an illusion. What they do show is that free will is thornier than most people imagine. Our desires are not fully unified; we are often ‘of two minds’ in a surprisingly literal sense. We are often tempted by things we know are wrong. We often aren’t sure what we really want. Every individual is in fact quite divisible; we literally contain multitudes.

We do need a richer account of moral responsibility that can deal with the fact that human beings often feel multiple conflicting desires simultaneously, and often experience events differently than we later go on to remember them. But at the end of the day, human consciousness is mostly unified, our choices are mostly rational, and our basic account of moral responsibility is mostly valid.

I think for now we should perhaps be less worried about what may come in the distant future, what sort of godlike powers our descendants may have—and more worried about what we are doing with the godlike powers we already have. We have the power to feed the world; why aren’t we? We have the power to save millions from disease; why don’t we? I don’t see many people blindly following this ‘Dataism’, but I do see an awful lot blindly following a 19th-century vision of capitalism.

And perhaps if we straighten ourselves out, the future will be in better hands.

Why does democracy work?

May 14 JDN 2460079

A review of Democracy for Realists

I don’t think it can be seriously doubted that democracy does, in fact, work. Not perfectly, by any means; but the evidence is absolutely overwhelming that more democratic societies are better than more authoritarian societies by just about any measure you could care to use.

When I first started reading Democracy for Realists and saw their scathing, at times frothing criticism of mainstream ideas of democracy, I thought they were going to try to disagree with that; but in the end they don’t. Achen and Bartels do agree that democracy works; they simply think that why and how it works is radically different from what most people think.

It is, however, a very long-winded book, and in dire need of better editing. Most of the middle section is taken up by a deluge of empirical analysis, most of which amounts to over-interpreting the highly ambiguous results of underpowered linear regressions on extremely noisy data. The sheer quantity of these analyses seems intended to overwhelm any realization that no particular one is especially compelling. But a hundred weak arguments don’t add up to a single strong one.

To their credit, the authors often include the actual scatter plots; but when you look at those scatter plots, you find yourself wondering how anyone could be so convinced these effects are real and important. Many of them look less like real effects and more like star fields onto which you could just as easily draw new constellations.

Their econometric techniques are a bit dubious, as well; at one point they said they “removed outliers”, but the examples they gave as “outliers” were the observations most distant from their regression line, rather than from the rest of the data. Removing the things furthest from your regression line will always—always—make your regression seem stronger. But that’s not what outliers are. Other times, they add weird controls or exclude parts of the sample for dubious reasons, and I get the impression that these are the cherry-picked results of a much larger exploration. (Why in the world would you exclude Catholics from a study of abortion attitudes? And this study on shark attacks seems awfully specific….) And of course if you run 20 regressions at random, you can expect that at least one of them will probably show up with p < 0.05. I think they are mainly just following the norms of their discipline—but those norms are quite questionable.
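That last point is just the arithmetic of multiple comparisons; here is a minimal sketch in Python (the figure of 20 regressions is purely illustrative, and it assumes the tests are independent and the null is true for all of them):

    # Chance that at least one of 20 independent regressions on pure noise
    # comes up "significant" at p < 0.05 just by luck (illustrative sketch).
    alpha = 0.05
    n_regressions = 20
    p_at_least_one = 1 - (1 - alpha) ** n_regressions
    print(round(p_at_least_one, 2))   # ~0.64 -- nearly a two-in-three chance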

They don’t ever get into much detail as to what sort of practical institutional changes they would recommend, so it’s hard to know whether I would agree with those. Some of their suggestions, such as more stringent rules on campaign spending, I largely agree with. Others, such as their opposition to popular referenda and recommendation for longer term limits, I have more mixed feelings about. But none seem totally ridiculous or even particularly radical, and they really don’t offer much detail about any of them. I thought they were going to tell me that appointment of judges is better than election (a view many experts share), or that the Electoral College is a good system (which far fewer experts would assent to, at least since George W. Bush and Donald Trump). In fact they didn’t do that; they remain eerily silent on substantive questions like this.

Honestly, what little they have to say about institutional policy feels a bit tacked on at the end, as if they suddenly realized that they ought to say something useful rather than just spend the whole time tearing down another theory.

In fact, I came to wonder if they really were tearing down anyone’s actual theory, or if this whole book was really just battering a strawman. Does anyone really think that voters are completely rational? At one point they speak of an image of the ‘sovereign omnicompetent voter’; is that something anyone really believes in?

It does seem like many people believe in making government more responsive to the people, whereas Achen and Bartels seem to have the rather distinct goal of making government make better decisions. They were able to find at least a few examples—though I know not how far and wide they had to search—where it seemed like more popular control resulted in worse outcomes, such as water fluoridation and funding for fire departments. So maybe the real substantive disagreement here is over whether more or less direct democracy is a good idea. And that is indeed a reasonable question. But one need not believe that voters are superhuman geniuses to think that referenda are better than legislation. Simply showing that voters are limited in their capacity and bound to group identity is not enough to answer that question.


In fact, I think that Achen and Bartels seriously overestimate the irrationality of voters, because they don’t seem to appreciate that group identity is often a good proxy for policy—in fact, they don’t even really seem to see social policy as policy at all. Consider this section (p. 238):

“In this pre-Hitlerian age it must have seemed to most Jews that there were no crucial issues dividing the major parties” (Fuchs 1956, 63). Yet by 1923, a very substantial majority of Jews had abandoned their Republican loyalties and begun voting for the Democrats. What had changed was not foreign policy, but rather the social status of Jews within one of America’s major political parties. In a very visible way, the Democrats had become fully accepting and incorporating of religious minorities, both Catholics and Jews. The result was a durable Jewish partisan realignment grounded in “ethnic solidarity”, in Gamm’s characterization.

Gee, I wonder why Jews would suddenly care a great deal which party was more respectful toward people like them? Okay, the Holocaust hadn’t happened yet, but anti-Semitism is very old indeed, and it was visibly creeping upward during that era. And just in general, if one party is clearly more anti-Semitic than the other, why wouldn’t Jews prefer the one that is less hateful toward them? How utterly blinded by privilege do you need to be to not see that this is an important policy difference?

Perhaps because they are both upper-middle-class straight White cisgender men (and, I would venture to guess, nominally but not devoutly Protestant), Achen and Bartels seem to have no concept that social policy directly affects people of minority identity, that knowing that one party accepts people like you and the other doesn’t is a damn good reason to prefer one over the other. This is not a game where we are rooting for our home team. This directly affects our lives.

I know quite a few transgender people, and not a single one is a Republican. It’s not because all trans people hate low taxes. It’s because the Republican Party has declared war on trans people.

This may also lead to trans people being more left-wing generally, as once you’re in a group you tend to absorb some views from others in that group (and, I’ll admit, Marxists and anarcho-communists seem overrepresented among LGBT people). But I absolutely know some LGBT people who would like to vote conservative for economic policy reasons, but realize they can’t, because it means voting for bigots who hate them and want to actively discriminate against them. There is nothing irrational or even particularly surprising about this choice. It would take a very powerful overriding reason for anyone to want to vote for someone who publicly announces hatred toward them.

Indeed, for me the really baffling thing is that there are political parties that publicly announce hatred toward particular groups. It seems like a really weird strategy for winning elections. That is the thing that needs to be explained here; why isn’t inclusiveness—at least a smarmy lip-service toward inclusiveness, like ‘Diversity, Equity, and Inclusion’ offices at universities—the default behavior of all successful politicians? Why don’t they all hug a Latina trans woman after kissing a baby and taking a selfie with the giant butter cow? Why is not being an obvious bigot considered a left-wing position?

Since it obviously is the case that many voters don’t want this hatred (at the very least, its targets!), in order for it not to damage electoral chances, it must be that some other voters do want this hatred. Perhaps they themselves define their own identity in opposition to other people’s identities. They certainly talk that way a lot: We hear White people fearing ‘replacement’ by shifting racial demographics, when no sane forecaster thinks that European haplotypes are in any danger of disappearing any time soon. The central argument against gay marriage was always that it would somehow destroy straight marriage, by some mechanism never explained.

Indeed, perhaps it is this very blindness toward social policy that makes Achen and Bartels unable to see the benefits of more direct democracy. When you are laser-focused on economic policy, as they are, then it seems to you as though policy questions are mainly technical matters of fact, and thus what we need are qualified experts. (Though even then, it is not purely a matter of fact whether we should care more about inequality than growth, or more about unemployment than inflation.)

But once you include social policy, you see that politics often involves very real, direct struggles between conflicting interests and differing moral views, and that by the time you’ve decided which view is the correct one, you already have your answer for what must be done. There is no technical question of gay marriage; there is only a moral one. We don’t need expertise on such questions; we need representation. (Then again, it’s worth noting that courts have sometimes advanced rights more effectively than direct democratic votes; so having your interests represented isn’t as simple as getting an equal vote.)

Achen and Bartels even include a model in the appendix where politicians are modeled as either varying in competence or controlled by incentives; never once does it consider that they might differ in whose interests they represent. Yet I don’t vote for a particular politician just because I think they are more intelligent, or as part of some kind of deterrence mechanism to keep them from misbehaving (I certainly hope the courts do a better job of that!); I vote for them because I think they represent the goals and interests I care about. We aren’t asking who is smarter, we are asking who is on our side.

The central question that I think the book raises is one that the authors don’t seem to have much to offer on: If voters are so irrational, why does democracy work? I do think there is strong evidence that voters are irrational, though maybe not as irrational as Achen and Bartels seem to think. Honestly, I don’t see how anyone can watch Donald Trump get elected President of the United States and not think that voters are irrational. (The book was written before that; apparently there’s a new edition with a preface about Trump, but my copy doesn’t have that.) But it isn’t at all obvious to me what to do with that information, because even if so-called elites are in fact more competent than average citizens—which may or may not be true—the fact remains that their interests are never completely aligned. Thus far, representative democracy of one stripe or another seems to be the best mechanism we have for finding people who have sufficient competence while also keeping them on a short enough leash.

And perhaps that’s why democracy works as well as it does; it gives our leaders enough autonomy to let them generally advance their goals, but also places limits on how badly misaligned our leaders’ goals can be from our own.