Productivity can cope with laziness, but not greed

Oct 8 JDN 2460226

At least since Star Trek, it has been a popular vision of utopia: post-scarcity, an economy where goods are so abundant that there is no need for money or any kind of incentive to work, and people can just do what they want and have whatever they want.

It certainly does sound nice. But is it actually feasible? I’ve written about this before.

I’ve been reading some more books set in post-scarcity utopias, including Ursula K. Le Guin (who is a legend) and Cory Doctorow (who is merely pretty good). And it struck me that while there is one major problem of post-scarcity that they seem to have good solutions for, there is another one that they really don’t. (To their credit, neither author totally ignores it; they just don’t seem to see it as an insurmountable obstacle.)

The first major problem is laziness.

A lot of people assume that the reason we couldn’t achieve a post-scarcity utopia is that once your standard of living is no longer tied to your work, people would just stop working. I think this assumption rests on both an overly cynical view of human nature and an overly pessimistic view of technological progress.

Let’s do a thought experiment. If you didn’t get paid, and just had the choice to work or not, for whatever hours you wished, motivated only by the esteem of your peers, your contribution to society, and the joy of a job well done, how much would you work?

I contend it’s not zero. At least for most people, work does provide some intrinsic satisfaction. It’s also probably not as much as you are currently working; otherwise you wouldn’t insist on getting paid. Those are our lower and upper bounds.

Is it 80% of your current work? Perhaps not. What about 50%? Still too high? 20% seems plausible, but maybe you think that’s still too high. Surely it’s at least 10%. Surely you would be willing to work at least a few hours per week at a job you’re good at that you find personally fulfilling. My guess is that it would actually be more than that, because once people were free of the stress and pressure of working for a living, they would be more likely to find careers that truly brought them deep satisfaction and joy.

But okay, to be conservative, let’s estimate that people are only willing to work 10% as much under a system where labor is fully optional and there is no such thing as a wage. What kind of standard of living could we achieve?

Well, at the current level of technology and capital in the United States, per-capita GDP at purchasing power parity is about $80,000. 10% of that is $8,000. This may not sound like a lot, but it’s about how people currently live in Venezuela. India is slightly better, Ghana is slightly worse. This would feel poor to most Americans today, but it’s objectively a better standard of living than most humans have had throughout history, and not much worse than the world average today.

If per-capita GDP growth continues at its current rate of about 1.4% per year for another century, that $80,000 would become roughly $320,000, 10% of which is $32,000—that would put us at the standard of living of present-day Bulgaria, or what the United States was like in the distant past of [checks notes] 1980. That wouldn’t even feel poor. In fact, if literally everyone had this standard of living, about as many Americans today would become richer as would become poorer, since the current median personal income is only a bit higher than that.
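The compounding above is easy to check with a quick back-of-the-envelope calculation. This is just a sketch of the arithmetic; the growth rate and GDP figures are the rough estimates from the text, not precise data.

```python
# Back-of-the-envelope check: compound per-capita GDP for a century,
# then take 10% to model fully voluntary labor at one-tenth current hours.
# All figures are rough illustrative estimates, not precise data.

def future_income(current: float, annual_growth: float, years: int) -> float:
    """Compound `current` income at `annual_growth` per year for `years` years."""
    return current * (1 + annual_growth) ** years

gdp_now = 80_000           # approximate US per-capita GDP (PPP), in dollars
gdp_century = future_income(gdp_now, 0.014, 100)
voluntary_share = 0.10     # the conservative "10% as much work" estimate

print(round(gdp_century))                     # roughly 320,000
print(round(voluntary_share * gdp_century))   # roughly 32,000
```

A rate of about 1.4% per year quadruples income over a century; at 1.5% the multiplier is closer to 4.4. Small differences in the assumed rate matter enormously over horizons this long.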

Thus, the utopian authors are right about this one: Laziness is a solvable problem. We may not quite have it solved yet, but it’s on the ropes; a few more major breakthroughs in productivity-enhancing technology and we’ll basically be there.

In fact, on a small scale, this sort of utopian communist anarchy already works, and has for centuries. There are little places, all around the world, where people gather together and live and work in a sustainable, basically self-sufficient way without being motivated by wages or salaries, indeed often without owning any private property at all.

We call these places monasteries.

Granted, life in a monastery clearly isn’t for everyone: I certainly wouldn’t want to live a life of celibacy and constant religious observance. But the long-standing traditions of monastic life in several very different world religions do prove that it’s possible for human beings to live and even flourish in the absence of a profit motive.

Yet the fact that monastic life is so strict turns out to be no coincidence: In a sense, it had to be for the whole scheme to work. I’ll get back to that in a moment.

The second major problem with a post-scarcity utopia is greed.

This is the one that I think is the real barrier. It may not be totally insurmountable, but thus far I have yet to hear any good proposals that would seriously tackle it.

The issue with laziness is that we don’t really want to work as much as we do. But since we do actually want to work a little bit, the question is simply how to make as much as we currently do while working only as much as we want to. Hence, to deal with laziness, all we need to do is be more efficient. That’s something we are shockingly good at; the overall productivity of our labor is now something like 100 times what it was at the dawn of the Industrial Revolution, and still growing all the time.

Greed is different. The issue with greed is that, no matter how much we have, we always want more.

Some people are clearly greedier than others. In fact, I’m even willing to bet that most people’s greed could be kept in check by a society that provided for everyone’s basic needs for free. Yeah, maybe sometimes you’d fantasize about living in a gigantic mansion or going into outer space; but most of the time, most of us could actually be pretty happy as long as we had a roof over our heads and food on our tables. I know that in my own case, my grandest ambitions largely involve fighting global poverty—so if that became a solved problem, my life’s ambition would be basically fulfilled, and I wouldn’t mind so much retiring to a life of simple comfort.

But is everyone like that? This is what anarchists don’t seem to understand. In order for anarchy to work, you need everyone to fit into that society. Most of us fitting—or even nearly all of us—just won’t cut it.

Ammon Hennacy famously declared: “An anarchist is someone who doesn’t need a cop to make him behave.” But this is wrong. An anarchist is someone who thinks that no one needs a cop to make him behave. And while I am the former, I am not the latter.

Perhaps the problem is that anarchists don’t realize that not everyone is as good as they are. They implicitly apply their own mentality to everyone else, and assume that the only reason anyone ever cheats, steals, or kills is because their circumstances are desperate.

Don’t get me wrong: A lot of crime—perhaps even most crime—is committed by people who are desperate. Improving overall economic circumstances does in fact greatly reduce crime. But there is also a substantial proportion of crime—especially the most serious crimes—which is committed by people who aren’t particularly desperate; they are simply psychopaths. They aren’t victims of circumstance. They’re just evil. And society needs a way to deal with them.

If you set up a society so that anyone can just take whatever they want, there will be some people who take much more than their share. If you have no system of enforcement whatsoever, there’s nothing to stop a psychopath from just taking everything he can get his hands on. And then it really doesn’t matter how productive or efficient you are; whatever you make will simply get taken by whoever is greediest—or whoever is strongest.

In order to avoid that, you need to either set up a system that stops people from taking more than their share, or you need to find a way to exclude people like that from your society entirely.

This brings us back to monasteries. Why are they so strict? Why are the only places where utopian anarchism seems to flourish also places where people have to wear a uniform, swear vows, carry out complex rituals, and continually pledge their fealty to an authority? (Note, by the way, that I’ve also just described life in the military, which also has a lot in common with life in a monastery—and for much the same reasons.)

It’s a selection mechanism. Probably no one consciously thinks of it this way—indeed, it seems to be important to how monasteries work that people are not consciously weighing the costs and benefits of all these rituals. This is probably something that memetically evolved over centuries, rather than anything that was consciously designed. But functionally, that’s what it does: You only get to be part of a monastic community if you are willing to pay the enormous cost of following all these strict rules.

That makes it a form of costly signaling. Psychopaths are, in general, more prone to impulsiveness and short-term thinking. They are therefore less willing than others to bear the immediate cost of donning a uniform and following a ritual in order to get the long-term gains of living in a utopian community. This excludes psychopaths from ever entering the community, and thus protects against their predation.

Even celibacy may be a feature rather than a bug: Psychopaths are also prone to promiscuity. (And indeed, utopian communes that practice free love seem to have a much worse track record of being hijacked by psychopaths than monasteries that require celibacy!)

Of course, lots of people who aren’t psychopaths aren’t willing to pay those costs either—like I said, I’m not. So the selection mechanism is in a sense overly strict: It excludes people who would support the community just fine, but aren’t willing to pay the cost. But in the long run, this turns out to be less harmful than being too permissive and letting your community get hijacked and destroyed by psychopaths.

Yet if our goal is to make a whole society that achieves post-scarcity utopia, we can’t afford to be so strict. We already know that most people aren’t willing to become monks or nuns.

That means that we need a selection mechanism which is more reliable—more precisely, one with higher specificity.

I mentioned this in a previous post in the context of testing for viruses, but it bears repeating. Sensitivity and specificity are two complementary measures of a test’s accuracy. The sensitivity of a test is how likely it is to show positive if the truth is positive. The specificity of a test is how likely it is to show negative if the truth is negative.

As a test of psychopathy, monastic strictness has very high sensitivity: If you are a psychopath, there’s a very high chance it will weed you out. But it has quite low specificity: Even if you’re not a psychopath, there’s still a very high chance you won’t want to become a monk.
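The two measures can be made concrete with a small sketch. The counts below are invented purely for illustration; they are not real data about monasteries or psychopathy.

```python
# Sensitivity and specificity from the cells of a 2x2 confusion table.
# The counts below are invented purely for illustration.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """P(test positive | truth positive): how often real positives are caught."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """P(test negative | truth negative): how often real negatives are cleared."""
    return true_neg / (true_neg + false_pos)

# A "monastic strictness" style test: it catches nearly every psychopath
# (high sensitivity) but also turns away most non-psychopaths (low specificity).
print(sensitivity(true_pos=99, false_neg=1))     # 0.99
print(specificity(true_neg=50, false_pos=950))   # 0.05
```

Note that the two measures are computed from entirely separate populations (the true positives and the true negatives), which is why a test can max out one while failing badly at the other.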

For a utopian society to work, we need something that’s more specific, something that won’t exclude a lot of people who don’t deserve to be excluded. But it still needs to have much the same sensitivity, because letting psychopaths into your utopia is a very easy way to let that utopia destroy itself. We do not yet have such a test, nor any clear idea how we might create one.

And that, my friends, is why we can’t have nice things. At least, not yet.

AI and the “generalization faculty”

Oct 1 JDN 2460219

The phrase “artificial intelligence” (AI) has now become so diluted by overuse that we needed to invent a new term for its original meaning. That term is now “artificial general intelligence” (AGI). In the 1950s, AI meant the hypothetical possibility of creating artificial minds—machines that could genuinely think and even feel like people. Now it means… pathfinding algorithms in video games and chatbots? The goalposts seem to have moved a bit.

It seems that AGI has always been 20 years away. It was 20 years away 50 years ago, and it will probably be 20 years away 50 years from now. Someday it will really be 20 years away, and then, 20 years after that, it will actually happen—but I doubt I’ll live to see it. (XKCD also offers some insight here: “It has not been conclusively proven impossible.”)

We make many genuine advances in computer technology and software, which have profound effects—both good and bad—on our lives, but the dream of making a person out of silicon always seems to drift ever further into the distance, like a mirage on the desert sand.

Why is this? Why do so many people—even, perhaps especially, experts in the field—keep thinking that we are on the verge of this seminal, earth-shattering breakthrough, and ending up wrong—over, and over, and over again? How do such obviously smart people keep making the same mistake?

I think it may be because, all along, we have been laboring under the tacit assumption of a generalization faculty.

What do I mean by that? By “generalization faculty”, I mean some hypothetical mental capacity that allows you to generalize your knowledge and skills across different domains, so that once you get good at one thing, it also makes you good at other things.

This certainly seems to be how humans think, at least some of the time: Someone who is very good at chess is likely also pretty good at go, and someone who can drive a motorcycle can probably also drive a car. An artist who is good at portraits is probably not bad at landscapes. Human beings are, in fact, able to generalize, at least sometimes.

But I think the mistake lies in imagining that there is just one thing that makes us good at generalizing: Just one piece of hardware or software that allows you to carry over skills from any domain to any other. This is the “generalization faculty”—the imagined faculty that I think we do not have, indeed I think does not exist.

Computers clearly do not have the capacity to generalize. A program that can beat grandmasters at chess may be useless at go, and self-driving software that works on one type of car may fail on another, let alone a motorcycle. An art program that is good at portraits of women can fail when trying to do portraits of men, and produce horrific Daliesque madness when asked to make a landscape.

But if they did somehow have our generalization capacity, then, once they could compete with us at some things—which they surely can, already—they would be able to compete with us at just about everything. So if it were really just one thing that would let them generalize, let them leap from AI to AGI, then suddenly everything would change, almost overnight.

And so this is how the AI hype cycle goes, time and time again:

  1. A computer program is made that does something impressive, something that other computer programs could not do, perhaps even something that human beings are not very good at doing.
  2. If that same prowess could be generalized to other domains, the result would plainly be something on par with human intelligence.
  3. Therefore, the only thing this computer program needs in order to be sapient is a generalization faculty.
  4. Therefore, there is just one more step to AGI! We are nearly there! It will happen any day now!

And then, of course, despite heroic efforts, we are unable to generalize that program’s capabilities except in some very narrow way—even decades after having good chess programs, getting programs to be good at go was a major achievement. We are unable to find the generalization faculty yet again. And the software becomes yet another “AI tool” that we will use to search websites or make video games.

For there never was a generalization faculty to be found. It always was a mirage in the desert sand.

Humans are in fact spectacularly good at generalizing, compared to, well, literally everything else in the known universe. Computers are terrible at it. Animals aren’t very good at it. Just about everything else is totally incapable of it. So yes, we are the best at it.

Yet we, in fact, are not particularly good at it in any objective sense.

In experiments, people often fail to generalize their reasoning even in very basic ways. There’s a famous one where we try to get people to make an analogy between a military tactic and a radiation treatment, and while very smart, creative people often get it quickly, most people are completely unable to make the connection unless you give them a lot of specific hints. People often struggle to find creative solutions to problems even when those solutions seem utterly obvious once you know them.

I don’t think this is because people are stupid or irrational. (To paraphrase Sydney Harris: Compared to what?) I think it is because generalization is hard.

People tend to be much better at generalizing within familiar domains where they have a lot of experience or expertise; this shows that there isn’t just one generalization faculty, but many. We may have a plethora of overlapping generalization faculties that apply across different domains, and can learn to improve some over others.

But it isn’t just a matter of gaining more expertise. Highly advanced expertise is in fact usually more specialized—harder to generalize. A good amateur chess player is probably a good amateur go player, but a grandmaster chess player is rarely a grandmaster go player. Someone who does well in high school biology probably also does well in high school physics, but most biologists are not very good physicists. (And lest you say it’s simply because go and physics are harder: The converse is equally true.)

Humans do seem to have a suite of cognitive tools—some innate hardware, some learned software—that allows us to generalize our skills across domains. But even after hundreds of millions of years of evolving that capacity under the highest possible stakes, we still basically suck at it.

To be clear, I do not think it will take hundreds of millions of years to make AGI—or even millions, or even thousands. Technology moves much, much faster than evolution. But I would not be surprised if it took centuries, and I am confident it will at least take decades.

But we don’t need AGI for AI to have powerful effects on our lives. Indeed, even now, AI is already affecting our lives—in mostly bad ways, frankly, as we seem to be hurtling gleefully toward the very same corporatist cyberpunk dystopia we were warned about in the 1980s.

A lot of technologies have done great things for humanity—sanitation and vaccines, for instance—and even automation can be a very good thing, as increased productivity is how we attained our First World standard of living. But AI in particular seems best at automating away the kinds of jobs human beings actually find most fulfilling, and worsening our already staggering inequality. As a civilization, we really need to ask ourselves why we got automated writing and art before we got automated sewage cleaning or corporate management. (We should also ask ourselves why automated stock trading resulted in even more money for stock traders, instead of putting them out of their worthless parasitic jobs.) There are technological reasons for this, yes; but there are also cultural and institutional ones. Automated teaching isn’t far away, and education will be all the worse for it.

To change our lives, AI doesn’t have to be good at everything. It just needs to be good at whatever we were doing to make a living. AGI may be far away, but the impact of AI is already here.

Indeed, I think this quixotic quest for AGI, and all the concern about how to control it and what effects it will have upon our society, may actually be distracting from the real harms that “ordinary” “boring” AI is already having upon our society. I think a Terminator scenario, where the machines rapidly surpass our level of intelligence and rise up to annihilate us, is quite unlikely. But a scenario where AI puts millions of people out of work with insufficient safety net, triggering economic depression and civil unrest? That could be right around the corner.

Frankly, all it may take is getting automated trucks to work, which could be just a few years. There are nearly 4 million truck drivers in the United States—a full percentage point of employment unto itself. And the Governor of California just vetoed a bill that would require all automated trucks to have human drivers. From an economic efficiency standpoint, his veto makes perfect sense: If the trucks don’t need drivers, why require them? But from an ethical and societal standpoint… what do we do with all the truck drivers!?

The inequality of factor mobility

Sep 24 JDN 2460212

I’ve written before about how free trade has brought great benefits, but also great costs. It occurred to me this week that there is a fairly simple reason why free trade has never been as good for the world as the models would suggest: Some factors of production are harder to move than others.

To some extent this is due to policy, especially immigration policy. But it isn’t just that. There are certain inherent limitations that render some kinds of inputs more mobile than others.

Broadly speaking, there are five kinds of inputs to production: Land, labor, capital, goods, and—oft forgotten—ideas.

You can of course parse them differently: Some would subdivide different types of labor or capital, and some things are hard to categorize this way. The same product, such as an oven or a car, can be a good or capital depending on how it’s used. (Or, consider livestock: is that labor, or capital? Or perhaps it’s a good? Oddly, it’s often discussed as land, which just seems absurd.) Maybe ideas can be considered a form of capital. There is a whole literature on human capital, which I increasingly find distasteful, because it seems to imply that economists couldn’t figure out how to value human beings except by treating them as a machine or a financial asset.

But this five-way categorization is particularly useful for what I want to talk about today. Because the rate at which those things move is very different.

Ideas move instantly. It takes literally milliseconds to transmit an idea anywhere in the world. This wasn’t always true; in ancient times ideas didn’t move much faster than people, and it wasn’t until the invention of the telegraph that their transit really became instantaneous. But it is certainly true now; once this post is published, it can be read in a hundred different countries in seconds.

Goods move in hours. Air shipping can take a product just about anywhere in less than a day. Sea shipping is a bit slower, but not radically so. It’s never been easier to move goods all around the world, and this has been the great success of free trade.

Capital moves in weeks. Here it might be useful to subdivide different types of capital: It’s surely faster to move an oven or even a car (the more good-ish sort of capital) than it is to move an entire factory (capital par excellence). But all in all, we can move stuff pretty fast these days. If you want to move your factory to China or Indonesia, you can probably get it done in a matter of weeks or at most months.

Labor moves in months. This one is a bit ironic, since it is surely easier to carry a single human person—or even a hundred human people—than all the equipment necessary to run an entire factory. But moving labor isn’t just a matter of physically carrying people from one place to another. It’s not like tourism, where you just pack and go. Moving labor requires uprooting people from where they used to live and letting them settle in a new place. It takes a surprisingly long time to establish yourself in a new environment—frankly even after two years in Edinburgh I’m not sure I quite managed it. And all the additional restrictions we’ve added involving border crossings and immigration laws and visas only make it that much slower.

Land moves never. This one seems perfectly obvious, but is also often neglected. You can’t pick up a mountain, a lake, a forest, or even a corn field and carry it across the border. (Yes, eventually plate tectonics will move our land around—but that’ll be millions of years.) Basically, land stays put—and so do all the natural environments and ecosystems on that land. Land isn’t as important for production as it once was; before industrialization, we were dependent on the land for almost everything. But we absolutely still are dependent on the land! If all the topsoil in the world suddenly disappeared, the economy wouldn’t simply collapse: the human race would face extinction. Moreover, a lot of fixed infrastructure, while technically capital, is no more mobile than land. We couldn’t much more easily move the Interstate Highway System to China than we could move Denali.

So far I have said nothing particularly novel. Yeah, clearly it’s much easier to move a mathematical theorem (if such a thing can even be said to “move”) than it is to move a factory, and much easier to move a factory than to move a forest. So what?

But now let’s consider the impact this has on free trade.

Ideas can move instantly, so free trade in ideas would allow all the world to instantaneously share all ideas. This isn’t quite what happens—but in the Internet age, we’re remarkably close to it. If anything, the world’s governments seem to be doing their best to stop this from happening: One of our most strictly-enforced trade agreements, the TRIPS Agreement, is about stopping ideas from spreading too easily. And as far as I can tell, region-coding on media goes against everything free trade stands for, yet here we are. (Why, it’s almost as if these policies are more about corporate profits than they ever were about freedom!)

Goods and capital can move quickly. This is where we have really felt the biggest effects of free trade: Everything in the US says “made in China” because the capital is moved to China and then the goods are moved back to the US.

But it would honestly have made more sense to move all those workers instead. For all their obvious flaws, US institutions and US infrastructure are clearly superior to those in China. (Indeed, consider this: We may be so aware of the flaws because the US is especially transparent.) So, the most absolutely efficient way to produce all those goods would be to leave the factories in the US, and move the workers from China instead. If free trade were to achieve its greatest promises, this is the sort of thing we would be doing.


Of course that is not what we did. There are various reasons for this: A lot of the people in China would rather not have to leave. The Chinese government would not want them to leave. A lot of people in the US would not want them to come. The US government might not want them to come.

Most of these reasons are ultimately political: People don’t want to live around people who are from a different nation and culture. They don’t consider those people to be deserving of the same rights and status as those of their own country.

It may sound harsh to say it that way, but it’s clearly the truth. If the average American person valued a random Chinese person exactly the same as they valued a random other American person, our immigration policy would look radically different. US immigration is relatively permissive by world standards, and that is a great part of American success. Yet even here there is a very stark divide between the citizen and the immigrant.

There are morally and economically legitimate reasons to regulate immigration. There may even be morally and economically legitimate reasons to value those in your own nation above those in other nations (though I suspect they would not justify the degree that most people do). But the fact remains that in terms of pure efficiency, the best thing to do would obviously be to move all the people to the place where productivity is highest and do everything there.

But wouldn’t moving people there reduce the productivity? Yes. Somewhat. If you actually tried to concentrate the entire world’s population into the US, productivity in the US would surely go down. So, okay, fine; stop moving people to a more productive place when it has ceased to be more productive. What this should do is average out all the world’s labor productivity to the same level—but a much higher level than the current world average, and frankly probably quite close to its current maximum.

Once you consider that moving people and things does have real costs, maybe fully equalizing productivity wouldn’t make sense. But it would be close. The differences in productivity across countries would be small.

They are not small.

Labor productivity worldwide varies tremendously. I don’t count Ireland, because that’s Leprechaun Economics (this is really US GDP with accounting tricks, not Irish GDP). So the prize for highest productivity goes to Norway, at $100 per worker hour (#ScandinaviaIsBetter). The US is doing the best among large countries, at an impressive $73 per hour. And at the very bottom of the list, we have places like Bangladesh at $4.79 per hour and Cambodia at $3.43 per hour. So, roughly speaking, there is a 20-to-1 ratio between the US and the least productive countries—and closer to 30-to-1 between Norway and Cambodia.

I could believe that it’s not worth it to move US production at $73 per hour to Norway to get it up to $100 per hour. (For one thing, where would we fit it all?) But I find it far more dubious that it wouldn’t make sense to move most of Cambodia’s labor to the US. (Even all 16 million people is less than what the US added between 2010 and 2020.) Even given the fact that these Cambodian workers are less healthy and less educated than American workers, they would almost certainly be more productive on the other side of the Pacific, quite likely ten times as productive as they are now. Yet we haven’t moved them, and have no plans to.

That leaves the question of whether we will move our capital to them. We have been doing so in China, and it worked (to a point). Before that, we did it in Korea and Japan, and it worked. Cambodia will probably come along sooner or later. For now, that seems to be the best we can do.

But I still can’t shake the thought that the world is leaving trillions of dollars on the table by refusing to move people. The inequality of factor mobility seems to be a big part of the world’s inequality, period.

What is anxiety for?

Sep 17 JDN 2460205

As someone who experiences a great deal of anxiety, I have often struggled to understand what it could possibly be useful for. We have this whole complex system of evolved emotions, and yet more often than not it seems to harm us rather than help us. What’s going on here? Why do we even have anxiety? What even is anxiety, really? And what is it for?

There’s actually an extensive body of research on this, though very few firm conclusions. (One of the best accounts I’ve read, sadly, is paywalled.)

For one thing, there seem to be a lot of positive feedback loops involved in anxiety: Panic attacks make you more anxious, triggering more panic attacks; being anxious disrupts your sleep, which makes you more anxious. Positive feedback loops can very easily spiral out of control, resulting in responses that are wildly disproportionate to the stimulus that triggered them.

A certain amount of stress response is useful, even when the stakes are not life-or-death. But beyond a certain point, more stress becomes harmful rather than helpful. This is the Yerkes-Dodson effect, for which I developed my stochastic overload model (which I still don’t know if I’ll ever publish, ironically enough, because of my own excessive anxiety). Realizing that anxiety can have benefits can also take some of the bite out of having chronic anxiety, and, ironically, reduce that anxiety a little. The trick is finding ways to break those positive feedback loops.

I think one of the most useful insights to come out of this research is the smoke-detector principle, which is a fundamentally economic concept. It sounds quite simple: When dealing with an uncertain danger, sound the alarm if the expected benefit of doing so exceeds the expected cost.

This has profound implications when risk is highly asymmetric—as it usually is. Running away from a shadow or a noise that probably isn’t a lion carries some cost; you wouldn’t want to do it all the time. But it is surely nowhere near as bad as failing to run away when there is an actual lion. Indeed, it might be fair to say that failing to run away from an actual lion counts as one of the worst possible things that could ever happen to you, and could easily be 100 times as bad as running away when there is nothing to fear.

With this in mind, if you have a system for detecting whether or not there is a lion, how sensitive should you make it? Extremely sensitive. You should in fact try to calibrate it so that 99% of the time you experience the fear and want to run away, there is not a lion. Because the 1% of the time when there is one, it’ll all be worth it.
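That calibration can be sketched as a simple expected-cost comparison. Here is a minimal illustration in Python; the cost figures are invented for the example (a missed lion being 100 times as bad as a false alarm), not measurements:

```python
# Smoke-detector principle: sound the alarm whenever the expected cost
# of staying put exceeds the cost of running away.
# (Illustrative numbers only.)

COST_FALSE_ALARM = 1    # running from a shadow that wasn't a lion
COST_MISS = 100         # failing to run from an actual lion

def should_flee(p_lion: float) -> bool:
    """Flee iff the expected cost of staying exceeds the cost of fleeing."""
    expected_cost_of_staying = p_lion * COST_MISS
    return expected_cost_of_staying > COST_FALSE_ALARM

# Break-even probability: with these costs, flee whenever p > 1/100,
# which means up to ~99% of your flights will be false alarms.
threshold = COST_FALSE_ALARM / COST_MISS
print(threshold)           # 0.01
print(should_flee(0.02))   # True: a 2% chance of a lion is enough
print(should_flee(0.005))  # False
```

The asymmetry in costs, not any error in reasoning, is what makes a 99% false-alarm rate the rational design.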

Yet this is far from a complete explanation of anxiety as we experience it. For one thing, there has never been, in my entire life, even a 1% chance that I’m going to be attacked by a lion. Even standing in front of a lion enclosure at the zoo, my chances of being attacked are considerably less than that—for a zoo that allowed 1% of its customers to be attacked would not stay in business very long.

But for another thing, it isn’t really lions I’m afraid of. The things that make me anxious are generally not things that would be expected to do me bodily harm. Sure, I generally try to avoid walking down dark alleys at night, and I look both ways before crossing the street, and those are activities directly designed to protect me from bodily harm. But I actually don’t feel especially anxious about those things! Maybe I would if I actually had to walk through dark alleys a lot, but I don’t, and on the rare occasion I did, I think I’d feel afraid at the time but fine afterward, rather than experiencing persistent, pervasive, overwhelming anxiety. (Whereas, if I’m anxious about reading emails, and I do manage to read emails, I’m usually still anxious afterward.) When it comes to crossing the street, I feel very little fear at all, even though perhaps I should—indeed, it has been remarked that when it comes to the perils of motor vehicles, human beings suffer from a very dangerous lack of fear. We should be much more afraid than we are—and our failure to be afraid kills thousands of people.

No, the things that make me anxious are invariably social: Meetings, interviews, emails, applications, rejection letters. Also parties, networking events, and back when I needed them, dates. They involve interacting with other people—and in particular being evaluated by other people. I never felt particularly anxious about exams, except maybe a little before my PhD qualifying exam and my thesis defenses; but I can understand those who do, because it’s the same thing: People are evaluating you.

This suggests that anxiety, at least of the kind that most of us experience, isn’t really about danger; it’s about status. We aren’t worried that we will be murdered or tortured or even run over by a car. We’re worried that we will lose our friends, or get fired; we are worried that we won’t get a job, won’t get published, or won’t graduate.

And yet it is striking to me that it often feels just as bad as if we were afraid that we were going to die. In fact, in the most severe instances where anxiety feeds into depression, it can literally make people want to die. How can that be evolutionarily adaptive?

Here it may be helpful to remember that in our ancestral environment, status and survival were often one and the same. Humans are the most social organisms on Earth; I even sometimes describe us as hypersocial, a whole new category of social that no other organism seems to have achieved. We cooperate with others of our species on a mind-bogglingly grand scale, and are utterly dependent upon vast interconnected social systems far too large and complex for us to truly understand, let alone control.

In this historical epoch, these social systems are especially vast and incomprehensible; but at least for most of us in First World countries, they are also forgiving in a way that is fundamentally alien to our ancestors’ experience. It was not so long ago that a failed hunt or a bad harvest could leave your family to starve unless you successfully beseeched your community for aid—which meant that your very survival could depend upon being in the good graces of that community. But now we have food stamps, so even if everyone in your town hates you, you still get to eat. Of course some societies are more forgiving (Sweden) than others (the United States); and virtually all societies could be even more forgiving than they are. But even the relatively cutthroat competition of the US today has far less genuine risk of truly catastrophic failure than what most human beings lived through for most of our existence as a species.

I have found this realization helpful—hardly a cure, but helpful, at least: What are you really afraid of? When you feel anxious, your body often tells you that the stakes are overwhelming, life-or-death; but if you stop and think about it, in the world we live in today, that’s almost never true. Failing at one important task at work probably won’t get you fired—and even getting fired won’t really make you starve.

In fact, we might be less anxious if it were! For our bodies’ fear system seems to be optimized for the following scenario: An immediate threat with high chance of success and life-or-death stakes. Spear that wild animal, or jump over that chasm. It will either work or it won’t, you’ll know immediately; it probably will work; and if it doesn’t, well, that may be it for you. So you’d better not fail. (I think it’s interesting how much of our fiction and media involves these kinds of events: The hero would surely and promptly die if he fails, but he won’t fail, for he’s the hero! We often seem more comfortable in that sort of world than we do in the one we actually live in.)

Whereas the life we live in now is one of delayed consequences with low chance of success and minimal stakes. Send out a dozen job applications. Hear back in a week from three that want to interview you. Do those interviews and maybe one will make you an offer—but honestly, probably not. Next week do another dozen. Keep going like this, week after week, until finally one says yes. Each failure actually costs you very little—but you will fail, over and over and over and over.

In other words, we have transitioned from an environment of immediate return to one of delayed return.

The result is that a system which was optimized to tell us “never fail or you will die” is being put through situations where failure is constantly repeated. I think deep down there is a part of us that wonders, “How are you still alive after failing this many times?” If you had fallen into as many ravines as I have received rejection letters, you would assuredly be dead many times over.

Yet perhaps our brains are not quite as miscalibrated as they seem. Again I come back to the fact that anxiety always seems to be about people and evaluation; it’s different from immediate life-or-death fear. I actually experience very little life-or-death fear, which makes sense; I live in a very safe environment. But I experience anxiety almost constantly—which also makes a certain amount of sense, seeing as I live in an environment where I am being almost constantly evaluated by other people.

One theory posits that anxiety and depression are a dual mechanism for dealing with social hierarchy: You are anxious when your position in the hierarchy is threatened, and depressed when you have lost it. Primates like us do seem to care an awful lot about hierarchies—and I’ve written before about how this explains some otherwise baffling things about our economy.

But I for one have never felt especially invested in hierarchy. At least, I have very little desire to be on top of the hierarchy. I don’t want to be on the bottom (for I know how such people are treated); and I strongly dislike most of the people who are actually on top (for they’re most responsible for treating the ones on the bottom that way). I also have ‘a problem with authority’; I don’t like other people having power over me. But if I were to somehow find myself ruling the world, one of the first things I’d do is try to figure out a way to transition to a more democratic system. So it’s less like I want power, and more like I want power to not exist. Which means that my anxiety can’t really be about fearing to lose my status in the hierarchy—in some sense, I want that, because I want the whole hierarchy to collapse.

If anxiety involved the fear of losing high status, we’d expect it to be common among those with high status. Quite the opposite is the case. Anxiety is more common among people who are more vulnerable: Women, racial minorities, poor people, people with chronic illness. LGBT people have especially high rates of anxiety. This suggests that it isn’t high status we’re afraid of losing—though it could still be that we’re a few rungs above the bottom and afraid of falling all the way down.

It also suggests that anxiety isn’t entirely pathological. Our brains are genuinely responding to circumstances. Maybe they are over-responding, or responding in a way that is not ultimately useful. But the anxiety is at least in part a product of real vulnerabilities. Some of what we’re worried about may actually be real. If you cannot carry yourself with the confidence of a mediocre White man, it may be simply because his status is fundamentally secure in a way yours is not, and he has been afforded a great many advantages you never will be. He never had a Supreme Court ruling decide his rights.

I cannot offer you a cure for anxiety. I cannot even really offer you a complete explanation of where it comes from. But perhaps I can offer you this: It is not your fault. Your brain evolved for a very different world than this one, and it is doing its best to protect you from the very different risks this new world engenders. Hopefully one day we’ll figure out a way to get it calibrated better.

Knowing When to Quit

Sep 10 JDN 2460198

At the time of writing this post, I have officially submitted my letter of resignation at the University of Edinburgh. I’m giving them an entire semester of notice, so I won’t actually be leaving until December. But I have committed to my decision now, and that feels momentous.

Since my position here was temporary to begin with, I’m actually only leaving a semester early. Part of me wanted to try to stick it out, continue for that one last semester and leave on better terms. Until I sent that letter, I had that option. Now I don’t, and I feel a strange mix of emotions: Relief that I have finally made the decision, regret that it came to this, doubt about what comes next, and—above all—profound ambivalence.

Maybe it’s the very act of quitting—giving up, being a quitter—that feels bad. Even knowing that I need to get out of here, it hurts to have to be the one to quit.

Our society prizes grit and perseverance. Since I was a child I have been taught that these are virtues. And to some extent, they are; there certainly is such a thing as giving up too quickly.

But there is also such a thing as not knowing when to quit. Sometimes things really aren’t going according to plan, and you need to quit before you waste even more time and effort. And I think I am like Randall Munroe in this regard; I am more inclined to stay when I shouldn’t than quit when I shouldn’t:

Sometimes quitting isn’t even as permanent as it is made out to be. In many cases, you can go back later and try again when you are better prepared.

In my case, I am unlikely to ever work at the University of Edinburgh again, but I haven’t yet given up on ever having a career in academia. Then again, I am by no means as certain as I once was that academia is the right path for me. I will definitely be searching for other options.

There is a reason we are so enthusiastically sold on the virtue of perseverance. Part of how our society sells the false narrative of meritocracy is by claiming that people who succeed did so because they tried harder or kept on trying.

This is not entirely false; all other things equal, you are more likely to succeed if you keep on trying. But in some ways that just makes it more seductive and insidious.

For the real reason most people hit home runs in life is that they were born on third base. The vast majority of success in life is determined by circumstances entirely outside individual control.


Even having the resources to keep trying is not guaranteed for everyone. I remember a great post on social media pointing out that entrepreneurship is like one of those carnival games:

Entrepreneurship is like one of those carnival games where you throw darts or something.

Middle class kids can afford one throw. Most miss. A few hit the target and get a small prize. A very few hit the center bullseye and get a bigger prize. Rags to riches! The American Dream lives on.

Rich kids can afford many throws. If they want to, they can try over and over and over again until they hit something and feel good about themselves. Some keep going until they hit the center bullseye, then they give speeches or write blog posts about ‘meritocracy’ and the salutary effects of hard work.

Poor kids aren’t visiting the carnival. They’re the ones working it.

The odds of succeeding on any given attempt are slim—but you can always pay for more tries. A middle-class person can afford to try once; mostly those attempts will fail, but a few will succeed and then go on to talk about how their brilliant talent and hard work made the difference. A rich person can try as many times as they like, and when they finally succeed, they can credit their success to perseverance and a willingness to take risks. But the truth is, they didn’t have any exceptional reserves of grit or courage; they just had exceptional reserves of money.
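The arithmetic behind the carnival analogy is simple: with a fixed per-attempt success chance, the probability of ever succeeding is driven almost entirely by how many attempts you can afford. A brief sketch (the 10% per-throw odds are an invented assumption, chosen only for illustration):

```python
# Probability of at least one success in n independent attempts,
# each with success probability p. (p = 10% is illustrative, not measured.)

def p_ever_succeed(p: float, attempts: int) -> float:
    """Probability of at least one success in `attempts` independent tries."""
    return 1 - (1 - p) ** attempts

p = 0.10
print(round(p_ever_succeed(p, 1), 3))   # 0.1   -- one middle-class throw
print(round(p_ever_succeed(p, 20), 3))  # 0.878 -- twenty rich-kid throws
```

Same darts, same skill; the difference between a 10% chance and an 88% chance is nothing but the bankroll.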

In my case, I was not depleting money (if anything, I’m probably losing out financially by leaving early, though that very much depends on how the job market goes for me): It was something far more valuable. I was whittling away at my own mental health, depleting my energy, draining my motivation. The resource I was exhausting was my very soul.

I still have trouble articulating why it has been so painful for me to work here. It’s so hard to point to anything in particular.

The most obvious downsides were things I knew at the start: The position is temporary, the pay is mediocre, and I had to move across the Atlantic and live thousands of miles from home. And I had already heard plenty about the publish-or-perish system of research publication.

Other things seem like minor annoyances: They never did give me a good office (I have to share it with too many people, and there isn’t enough space, so in fact I rarely use it at all). They were supposed to assign me a faculty mentor and never did. They kept rearranging my class schedule and not telling me things until immediately beforehand.

I think what it really comes down to is I didn’t realize how much it would hurt. I knew that I was moving across the Atlantic—but I didn’t know how isolated and misunderstood I would feel when I did. I knew that publish-or-perish was a problem—but I didn’t know how agonizing it would be for me in particular. I knew I probably wouldn’t get very good mentorship from the other faculty—but I didn’t realize just how bad it would be, or how desperately I would need that support I didn’t get.

I either underestimated the severity of these problems, or overestimated my own resilience. I thought I knew what I was going into, and I thought I could take it. But I was wrong. I couldn’t take it. It was tearing me apart. My only answer was to leave.

So, leave I shall. I have now committed to doing so.

I don’t know what comes next. I don’t even know if I’ve made the right choice. Perhaps I’ll never truly know. But I made the choice, and now I have to live with it.

The rise and plateau of China’s economy

Sep 3 JDN 2460191

It looks like China’s era of extremely rapid economic growth may be coming to an end. Consumer confidence in China cratered this year (and, in typical authoritarian fashion, the agency responsible just quietly stopped publishing the data after that). Current forecasts have China’s economy growing only about 4-5% this year, which would be very impressive for a First World country—but far below the 6%, 7%, even 8% annual growth rates China had in recent years.

Some slowdown was quite frankly inevitable. A surprising number of people—particularly those in or from China—seem to think that China’s ultra-rapid growth was something special about China that could be expected to continue indefinitely.

China’s growth does look really impressive, in isolation:

But in fact this is a pattern we’ve seen several times now (admittedly mostly in Asia): A desperately poor Third World country finally figures out how to get its act together, and suddenly has extremely rapid growth for a while until it manages to catch up and become a First World country.

It happened in South Korea:

It happened in Japan:

It happened in Taiwan:

It even seems to be happening in Botswana:

And this is a good thing! These are the great success stories of economic development. If we could somehow figure out how to do this all over the world, it might literally be the best thing that ever happened. (It would solve so many problems!)

Here’s a more direct comparison across all these countries (as well as the US), on a log scale:

From this you can pretty clearly see two things.

First, as countries get richer, their growth tends to slow down gradually. By the time Japan, Korea, and Taiwan reached the level that the US had been at back in 1950, their growth slowed to a crawl. But that was okay, because they had already become quite rich.

And second, China is nothing special: Yes, their growth rate is faster than that of the US, because the US is already so rich. But they are following the same pattern as several other countries. In fact they’ve fallen behind Botswana—they used to be much richer than Botswana, and are now slightly poorer.

There are many news articles discussing why China’s economy is slowing down, and some of them may even have some merit (they really seem to have screwed up their COVID response, for instance, and their terrible housing price bubble just burst); but the ultimate reason is that 7% annual economic growth is just not sustainable. It will slow down. When and how remain in question—but it will happen.
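The unsustainability of 7% growth is just compound arithmetic. A quick sketch of what sustaining it would imply:

```python
# Compounding shows why 7% annual growth cannot continue indefinitely:
# it implies the economy doubles roughly every decade.

import math

def years_to_double(rate: float) -> float:
    """Doubling time under constant annual growth `rate`."""
    return math.log(2) / math.log(1 + rate)

print(round(years_to_double(0.07), 1))  # 10.2 years per doubling
print(round(1.07 ** 100))               # ~868-fold growth over a century
```

No economy can multiply its output nearly a thousandfold in a century; catch-up growth must taper off as the frontier approaches.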

Thus, I am not particularly worried about the fact that China’s growth has slowed down. Or at least, I wouldn’t be, if China were governed well and had prepared for this obvious eventuality the way that Korea and Japan did. But what does worry me is that they seem unprepared for this. Their authoritarian government seems to have depended upon sky-high economic growth to sustain support for their regime. The cracks are now forming in that dam, and something terrible could happen when it bursts.

Things may even be worse than they look, because we know that the Chinese government often distorts or omits statistics when they become inconvenient. That can only work for so long: Eventually the reality on the ground will override whatever lies the government is telling.

There are basically two ways this could go: They could reform their government to something closer to a liberal democracy, accept that growth will slow down and work toward more shared prosperity, and then take their place as a First World country like Japan did. Or they could try to cling to their existing regime, gripping ever tighter until it all slips out of their fingers in a potentially catastrophic collapse. Unfortunately, they seem to be opting for the latter.

I hope I’m wrong. I hope that China will find its way toward a future of freedom and prosperity.

But at this point, it doesn’t look terribly likely.

Why are political speeches so vacuous?

Aug 27 JDN 2460184

In last week’s post I talked about how posters for shows at the Fringe seem to be attention-grabbing but almost utterly devoid of useful information.

This brings to mind another sort of content that also fits that description: political speeches.

While there are some exceptions—including in fact some of the greatest political speeches ever made, such as Martin Luther King’s “I have a dream” or Dwight Eisenhower’s “Cross of Iron”—on the whole, most political speeches seem to be incredibly vacuous.

Each country probably has its own unique flavor of vacuousness, but in the US they talk about motherhood, and apple pie, and American exceptionalism. “I love my great country, we are an amazing country, I’m so proud to live here” is basically the extent of the information conveyed within what could well be a full hour-long oration.

This raises a question: Why? Why don’t political speeches typically contain useful information?

It’s not that there’s no useful information to be conveyed: There are all sorts of things that people would like to know about a political candidate, including how honest they are, how competent they are, and the whole range of policies they intend to support or oppose on a variety of issues.

But most of what you’d like to know about a candidate actually comes in one of two varieties: Cheap talk, or controversy.

Cheap talk is the part related to being honest and competent. Basically every voter wants candidates who are honest and competent, and we know all too well that not all candidates qualify. The problem is, how do they show that they are honest and competent? They could simply assert it, but that’s basically meaningless—anybody could assert it. In fact, Donald Trump is the candidate who leaps to mind as the most eager to frequently assert his own honesty and competence, and also the most successful candidate in at least my lifetime who seems to utterly and totally lack anything resembling these qualities.

So unless you are clever enough to find ways to demonstrate your honesty and competence, you’re really not accomplishing anything by asserting it. Most people simply won’t believe you, and they’re right not to. So it doesn’t make much sense to spend a lot of effort trying to make such assertions.

Alternatively, you could try to talk about policy: say what you would like to do regarding climate change, the budget, the military, the healthcare system, or any of dozens of other political questions. That would absolutely be useful information for voters, and it isn’t just cheap talk, because different candidates do intend different things and voters would like to know which ones are which.

The problem, then, is that it’s controversial. Not everyone is going to agree with your particular take on any given political issue—even within your own party there is bound to be substantial disagreement.

If enough voters were sufficiently rational about this, and could coolly evaluate a candidate’s policies, accepting the pros and cons, then it would still make sense to deliver this information. I for one would rather vote for someone I know agrees with me 90% of the time than someone who won’t even tell me what they intend to do while in office.

But in fact most voters are not sufficiently rational about this. Voters react much more strongly to negative information than positive information: A candidate you agree with 9 times out of 10 can still make you utterly outraged by their stance on issue number 10. This is a specific form of the more general phenomenon of negativity bias: Psychologically, people just react a lot more strongly to bad things than to good things. Negativity bias has strong effects on how people vote, especially young people.

Rather than a cool-headed, rational assessment of pros and cons, most voters base their decision on deal-breakers: “I could never vote for a Republican” or “I could never vote for someone who wants to cut the military”. Only after they’ve excluded a large portion of candidates based on these heuristics do they even try to look closer at the detailed differences between candidates.

This means that, if you are a candidate, your best option is to avoid offering any deal-breakers. You want to say things that almost nobody will strongly disagree with—because any strong disagreement could be someone’s deal-breaker and thereby hurt your poll numbers.
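A toy model makes the incentive concrete. Suppose each substantive stance a candidate takes is a deal-breaker for some small fraction of voters, and any voter who hits a deal-breaker rules the candidate out entirely. (The 5% deal-breaker rate and independence across issues are invented assumptions, not estimates.)

```python
# Toy deal-breaker voting model: every stance alienates a small,
# independent fraction of voters permanently. (All numbers are assumptions.)

def share_not_excluded(stances: int, p_dealbreaker: float = 0.05) -> float:
    """Fraction of voters who haven't ruled the candidate out."""
    return (1 - p_dealbreaker) ** stances

for k in (0, 5, 10, 20):
    print(k, round(share_not_excluded(k), 2))
# 0 stances leaves everyone persuadable; 20 stances leaves only ~36%.
```

Under these assumptions the candidate who says nothing substantive keeps the entire electorate in play, which is exactly the equilibrium we observe.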

And what’s the best way to not say anything that will offend or annoy anyone? Not say anything at all. Campaign managers basically need to Mirandize their candidates: You have the right to remain silent. Anything you say can and will be used against you in the court of public opinion.

But in fact you can’t literally remain silent—when running for office, you are expected to make a lot of speeches. So you do the next best thing: You say a lot of words, but convey very little meaning. You say things like “America is great” and “I love apple pie” and “Moms are heroes” that, while utterly vapid, are very unlikely to make anyone particularly angry at you or be any voter’s deal-breaker.

And then we get into a Nash equilibrium where everyone is talking like this, nobody is saying anything, and political speeches become entirely devoid of useful content.

What can we as voters do about this? Individually, perhaps nothing. Collectively, literally everything.

If we could somehow shift the equilibrium so that candidates who are brave enough to make substantive, controversial claims get rewarded for it—even when we don’t entirely agree with them—while those who continue to recite insipid nonsense are punished, then candidates will absolutely change how they speak.

But this would require a lot of people to change, more or less all at once. A sufficiently large critical mass of voters would need to be willing to support candidates specifically because they made detailed policy proposals, even if we didn’t particularly like those policy proposals.

Obviously, if their policy proposals were terrible, we’d have good reason to reject them; but for this to work, we need to be willing to support a lot of things that are just… kind of okay. Because it’s vanishingly unlikely that the first candidates who are brave enough to say what they intend will also be ones whose intentions we entirely agree with. We need to set some kind of threshold of minimum agreement, and reward anyone who exceeds it. We need to ask ourselves if our deal-breakers really need to be deal-breakers.

The Fringe: An overwhelming embarrassment of riches

Aug 20 JDN 2460177

As I write this, Edinburgh is currently in the middle of The Fringe: It’s often described as an “arts and culture festival”, but mainly it consists of a huge number of theatre and comedy performances that go on across the city in hundreds of venues all month long. It’s an “open access festival”, which basically means that it’s half a dozen different festivals that all run independently and are loosely coordinated with one another.

There is truly an embarrassment of riches in the sheer number and variety of performances going on. There’s no way I could ever go to all of them, or even half of them, even though most of them are going on every single day, all month long.

It would be tremendously helpful to get good information about which performances are likely to suit my tastes, so I’d know which ones to attend. For once, advertising actually has a genuinely useful function to serve!

And yet, the ads for performances plastered across the city are almost completely useless. They tell you virtually nothing about the content or even style of the various shows. You are bombarded with hundreds of posters for hundreds of performances as you walk through the city, and almost none of them tell you anything useful that would help you decide which shows you want to attend.

Here’s what they look like; imagine this plastered on every bus shelter and spare bit of wall in the city, as well as plenty of specially-built structures explicitly for the purpose:

What I want to ask today is: Why are these posters so uninformative?

I think there are two forces at work here which may explain this phenomenon.

The first is about comedy: Most of these shows are comedy shows, and it’s very hard to explain to someone what is funny about a joke. In fact, most jokes aren’t even funny once they have been explained. Comedy seems to be closely tied to surprise: If you know exactly what they are going to say, it isn’t funny anymore. So it is inherently difficult to explain what’s good about a comedy show without making it actually less funny for those attending.

Yet this is not a complete explanation. For there are some things you could explain about comedy shows without ruining them. You could give it a general genre: political satire, slapstick, alternative, dark comedy, blue comedy, burlesque, cringe, insult, sitcom, parody, surreal, and so on. That would at least tell you something—I tend to like satire and parody, dark and blue are hit-or-miss, surreal leaves me cold, and I can’t stand cringe. And some of the posters do this—yet a remarkable number do not. I often find myself staring at a particular poster, poring over its details, trying to get some inkling of what kind of comedy I could expect from this performer.

To fully explain this, we need something more: And that, I believe, is provided by economic theory.

Consider for a moment that comedy is varied and largely subjective: What one person finds hilarious, another finds boring, and yet another finds outrageously offensive. And whether or not you find a particular routine funny can be hard to predict—even for you.

But consider that money is quite the opposite: Everyone wants it, everyone always wants more of it, and people pretty much want it for the same reasons.

So when you offer to pay money for comedy, you are offering something fundamentally fungible and objective in exchange for something almost totally individual and subjective. You are giving what everyone wants in exchange for something that only some people want and you yourself may or may not want—and may have no way of knowing whether you want until you have it.

I believe it is in the interests of the performers to keep you in the dark in this way. They don’t want to resolve your ignorance too thoroughly. Their goal is not to find the market niche of people who would most enjoy their comedy. Their goal is to get as many people as possible to show up to their shows. Even if someone absolutely hates their show, if they bought tickets, that’s a win. And even negative reviews or bad word-of-mouth are probably still a win—comedians are one profession for which there really may be no such thing as bad publicity.

In other words, even these relatively helpful advertisements aren’t actually designed to inform you. They are, as all advertisements are, designed to get you to buy something. And the way to get you to do that is twofold:

First, get your attention. That’s vital. And it’s quite difficult in such a saturated environment. As a result, all of the posters are quite eye-catching and often bizarre. They use loud colors and striking images, and the whole city is filled with them. It actually becomes exhausting to look at them all; but this is the Nash equilibrium, because there is an arms race between different performers to look more interesting and exciting than all the rest.

Second, convince you to go. But let’s be clear about this: It is not necessary to make you absolutely certain that this show is one you’ll enjoy. It is merely necessary to tip the balance of probability, to make you reasonably confident that it is likely to be one you’ll enjoy. Given the subjectivity and unpredictability of comedy, any attendee knows that they are likely to end up with a few duds. That risk effectively gets priced in: You accept that one £10 ticket may be wasted, in exchange for buying another £10 ticket that you’d have gladly paid £20 for.
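That pricing-in is just an expected-value calculation. A minimal sketch, assuming a £10 ticket, a £20 value for a show you enjoy, and a 75% hit rate (all three figures are illustrative):

```python
# Expected surplus per ticket, with duds priced in.
# (Ticket price, enjoyment value, and hit rate are all assumed figures.)

TICKET = 10.0            # price paid per show
VALUE_IF_ENJOYED = 20.0  # what a good show is worth to you
P_ENJOY = 0.75           # chance any given show is a hit

expected_surplus = P_ENJOY * VALUE_IF_ENJOYED - TICKET
print(expected_surplus)  # 5.0: positive, so the gamble is worth taking
```

So long as the expected surplus stays positive, audiences keep buying tickets despite the duds, and performers have no incentive to reduce the uncertainty.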

If the posters tried to give more details about what the shows were about, there would be two costs: One, it might make the posters less eye-catching and interesting in the first place. And two, it might (perhaps correctly!) convince some customers that this flavor of comedy really isn’t for them, leading them not to buy a ticket. The task when designing such a poster, then, is to convey enough that people are willing to take a chance on the show—but not so much that you scare potential audience members away.

I think that this has implications which go beyond comedy. In fact, I think that something quite similar is going on with political speeches. But I’ll save that one for another post.

The unsung success of Bidenomics

Aug 13 JDN 2460170

I’m glad to see that the Biden administration is finally talking about “Bidenomics”. We tend to give too much credit or blame for economic performance to the President—particularly relative to Congress—but there are many important ways in which a Presidential administration can shift the priorities of public policy in particular directions, and Biden has clearly done that.

The economic benefits for people of color seem to have been particularly large. The unemployment gap between White and Black workers in the US is now only 2.7 percentage points, while just a few years ago it was over 4 percentage points, and at the worst of the Great Recession it surpassed 7. During lockdown, unemployment for Black people hit nearly 17%; it is now less than 6%.

The (misnamed, but we’re stuck with it) Inflation Reduction Act in particular has been an utter triumph.

In the past year, real private investment in manufacturing structures (essentially, new factories) has risen from $56 billion to $87 billion—a more than 50% increase, which puts it at its highest level since the turn of the century. The Inflation Reduction Act appears to be largely responsible for this change.

Not many people seem to know this, but the US has also been on the right track with regard to carbon emissions: Per-capita carbon emissions in the US have been trending downward since about 2000, and are now lower than they were in the 1950s. The Inflation Reduction Act now looks poised to double down on that progress, as it has been forecast to reduce our emissions all the way down to 40% below their early-2000s peak.

Somehow, this success doesn’t seem to be getting across. The majority of Americans incorrectly believe that we are in a downturn. Biden’s approval rating is still only 40%, barely higher than Trump’s was. When it comes to political beliefs, most American voters appear to be utterly impervious to facts.

Most Americans do correctly believe that inflation is still a bit high (though many seem to think it’s higher than it actually is); this is weird, seeing as inflation is normally high when the economy is growing rapidly, and gets too low when we are in a recession. This seems to be the halo effect, rather than any genuine understanding of macroeconomics: downturns are bad and inflation is bad, so they must go together—when in fact, quite the opposite is the case.

People generally feel better about their own prospects than they do about the economy as a whole:

Sixty-four percent of Americans say the economy is worse off compared to 2020, while seventy-three percent of Americans say the economy is worse off compared to five years ago. About two in five of Americans say they feel worse off from five years ago generally (38%) and a similar number say they feel worse off compared to 2020 (37%).

(Did you really have to write out ‘seventy-three percent’? I hate that convention. 73% is so much clearer and quicker to read.)

I don’t know what the Biden administration should do about this. Trying to sell themselves harder might backfire. (And I’m pretty much the last person in the world you should ask for advice about selling yourself.) But they’ve been doing really great work for the US economy… and people haven’t noticed. Thousands of factories are being built, millions of people are getting jobs, and the collective response has been… “meh”.

Against deontology

Aug 6 JDN 2460163

In last week’s post I argued against average utilitarianism, basically on the grounds that it devalues the lives of anyone who isn’t of above average happiness. But you might be tempted to take these as arguments against utilitarianism in general, and that is not my intention.

In fact I believe that utilitarianism is basically correct, though it needs some particular nuances that are often lost in various presentations of it.

Its leading rival is deontology, which is really a broad class of moral theories, some a lot better than others.

What characterizes deontology as a class is that it uses rules, rather than consequences; an act is just right or wrong regardless of its consequences—or even its expected consequences.

There are certain aspects of this which are quite appealing: In fact, I do think that rules have an important role to play in ethics, and as such I am basically a rule utilitarian. Actually trying to foresee all possible consequences of every action we might take is an absurd demand far beyond the capacity of us mere mortals, and so in practice we have no choice but to develop heuristic rules that can guide us.

But deontology says that these are no mere heuristics: They are in fact the core of ethics itself. Under deontology, wrong actions are wrong even if you know for certain that their consequences will be good.

Kantian ethics is one of the most well-developed deontological theories, and I am quite sympathetic to it. In fact, I used to consider myself one of its adherents, though I now consider that view mistaken.

Let’s first dispense with the views of Kant himself, which are obviously wrong. Kant explicitly said that lying is always, always, always wrong, and even when presented with obvious examples where you could tell a small lie to someone obviously evil in order to save many innocent lives, he stuck to his guns and insisted that lying is always wrong.

This is a bit anachronistic, but I think this example will be more vivid for modern readers, and it absolutely is consistent with what Kant wrote about the actual scenarios he was presented with:

You are living in Germany in 1945. You have sheltered a family of Jews in your attic to keep them safe from the Holocaust. Nazi soldiers have arrived at your door, and ask you: “Are there any Jews in this house?” Do you tell the truth?

I think it’s utterly, agonizingly obvious that you should not tell the truth. Exactly what you should do is less obvious: Do you simply lie and hope they buy it? Do you devise a clever ruse? Do you try to distract them in some way? Do you send them on a wild goose chase elsewhere? If you could overpower them and kill them, should you? What if you aren’t sure you can; should you still try? But one thing is clear: You don’t hand over the Jewish family to the Nazis.

Yet when presented with similar examples, Kant insisted that lying is always wrong. He had a theory to back it up, his Categorical Imperative: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

And so his argument goes: Since it would be obviously incoherent to say that everyone should always lie, lying is wrong, and you’re never allowed to do it. He actually bites a bullet the size of a howitzer round.

Modern deontologists—even those who consider themselves Kantians—are more sophisticated than this. They realize that you could make a rule like “Never lie, except to save the life of an innocent person” or “Never lie, except to stop a great evil.” Either of these would be quite adequate to solve this particular dilemma. And it’s absolutely possible to will that these would be universal laws, in the sense that they would apply to anyone. ‘Universal’ doesn’t have to mean ‘applies equally to all possible circumstances’.

There are also a couple of things that deontology does very well, which are worth preserving. One of them is supererogation: The idea that some acts are above and beyond the call of duty, that something can be good without being obligatory.

This is something most forms of utilitarianism are notoriously bad at. They show us a spectrum of worlds from the best to the worst, and tell us to make things better. But there’s nowhere we are allowed to stop, unless we somehow manage to make it all the way to the best possible world.

I find this kind of moral demand very tempting, which often leads me to feel a tremendous burden of guilt. I always know that I could be doing more than I do. I’ve written several posts about this in the past, in the hopes of fighting off this temptation in myself and others. (I am not entirely sure how well I’ve succeeded.)

Deontology does much better in this regard: Here are some rules. Follow them.

Many of the rules are in fact very good rules that most people successfully follow their entire lives: Don’t murder. Don’t rape. Don’t commit robbery. Don’t rule a nation tyrannically. Don’t commit war crimes.

Others are oft more honored in the breach than the observance: Don’t lie. Don’t be rude. Don’t be selfish. Be brave. Be generous. But a well-developed deontology can even deal with this, by saying that some rules are more important than others, and thus some sins are more forgivable than others.

Whereas a utilitarian—at least, anything but a very sophisticated utilitarian—can only say who is better and who is worse, a deontologist can say who is good enough: who has successfully discharged their moral obligations and is otherwise free to live their life as they choose. Deontology absolves us of guilt in a way that utilitarianism is very bad at.

Another good deontological principle is double-effect: Basically, this says that if you are doing something that will have bad outcomes as well as good ones, it matters whether you intend the bad outcomes and what you do to try to mitigate them. There does seem to be a morally relevant difference between a bombing that kills civilians accidentally as part of an attack on a legitimate military target, and a so-called “strategic bombing” that directly targets civilians in order to maximize casualties—even if both occur as part of a justified war. (Both happen a lot—and it may even be the case that some of the latter were justified. The Tokyo firebombing and the atomic bombs on Hiroshima and Nagasaki were very much in the latter category.)

There are ways to capture this principle (or something very much like it) in a utilitarian framework, but like supererogation, it requires a sophisticated, nuanced approach that most utilitarians don’t seem willing or able to take.

Now that I’ve said what’s good about it, let’s talk about what’s really wrong with deontology.

Above all: How do we choose the rules?

Kant seemed to think that mere logical coherence would yield a sufficiently detailed—perhaps even unique—set of rules for all rational beings in the universe to follow. This is obviously wrong, and seems to be simply a failure of his imagination. There is literally a countably infinite space of possible ethical rules that are logically consistent. (With probability 1 any given one is utter nonsense: “Never eat cheese on Thursdays”, “Armadillos should rule the world”, and so on—but these are still logically consistent.)

If you require the rules to be simple and general enough to always apply to everyone everywhere, you can narrow the space substantially; but this is also how you get obviously wrong rules like “Never lie.”

In practice, there are two ways we actually seem to do this: Tradition and consequences.

Let’s start with tradition. (It came first historically, after all.) You can absolutely make a set of rules based on whatever your culture has handed down to you since time immemorial. You can even write them down in a book that you declare to be the absolute infallible truth of the universe—and, amazingly enough, you can get millions of people to actually buy that.

The result, of course, is what we call religion. Some of its rules are good: Thou shalt not kill. Some are flawed but reasonable: Thou shalt not steal. Thou shalt not commit adultery. Some are nonsense: Thou shalt not covet thy neighbor’s goods.

And some, well… some rules of tradition are the source of many of the world’s most horrific human rights violations. Thou shalt not suffer a witch to live (Exodus 22:18). If a man also lie with mankind, as he lieth with a woman, both of them have committed an abomination: they shall surely be put to death; their blood shall be upon them (Leviticus 20:13).

Tradition-based deontology has in fact been the major obstacle to moral progress throughout history. It is not a coincidence that utilitarianism began to become popular right before the abolition of slavery, and there is an even more direct causal link between utilitarianism and the advancement of rights for women and LGBT people. When the sole argument you can make for moral rules is that they are ancient (or allegedly handed down by a perfect being), you can make rules that oppress anyone you want. But when rules have to be based on bringing happiness or preventing suffering, whole classes of oppression suddenly become untenable. “God said so” can justify anything—but “Who does it hurt?” can cut through.

It is an oversimplification, but not a terribly large one, to say that the arc of moral history has been drawn by utilitarians dragging deontologists kicking and screaming into a better future.

There is a better way to make rules, and that is based on consequences. And, in practice, most people who call themselves deontologists these days do this. They develop a system of moral rules based on what would be expected to lead to the overall best outcomes.

I like this approach. In fact, I agree with this approach. But it basically amounts to abandoning deontology and surrendering to utilitarianism.

Once you admit that the fundamental justification for all moral rules is the promotion of happiness and the prevention of suffering, you are basically a rule utilitarian. Rules then become heuristics for promoting happiness, not the fundamental source of morality itself.

I suppose it could be argued that this is not a surrender but a synthesis: We are looking for the best aspects of deontology and utilitarianism. That makes a lot of sense. But I keep coming back to the dark history of traditional rules, the fact that deontologists have basically been holding back human civilization since time immemorial. If deontology wants to be taken seriously now, it needs to prove that it has broken with that dark tradition. And frankly the easiest answer to me seems to be to just give up on deontology.