What is the real impact of AI on the environment?

Oct 19 JDN 2460968

The conventional wisdom is that AI is consuming a huge amount of electricity and water for very little benefit, but when I delved a bit deeper into the data, the results came out a lot more ambiguous. I still agree with the “very little benefit” part, but the energy costs of AI may not actually be as high as many people believe.

So how much energy does AI really use?

This article in MIT Technology Review estimates that by 2028, AI will account for 50% of data center energy usage and 6% of all US energy. But two things strike me about that:

  1. This is a forecast. It’s not what’s currently happening.
  2. 6% of all US energy doesn’t really sound that high, actually.

Note that transportation accounts for 37% of US energy consumed. Clearly we need to bring that down; but it seems odd to panic about a forecast of something that uses one-sixth of that.

Currently, AI is only 14% of data center energy usage. That forecast has it rising to 50%. Could that happen? Sure. But it hasn’t happened yet. Data centers are being rapidly expanded, but that’s not just for AI; it’s for everything the Internet does, as more and more people get access to the Internet and use it for more and more demanding tasks (like cloud computing and video streaming).

Indeed, a lot of the worry really seems to be related to forecasts. Here’s an even more extreme forecast suggesting that AI will account for 21% of global energy usage by 2030. What’s that based on? I have no idea; they don’t say. The article just basically says it “could happen”; okay, sure, a lot of things could happen. And I feel like this sort of forecast comes from the same wide-eyed people who say that the Singularity is imminent and AI will soon bring us to a glorious utopia. (And hey, if it did, that would obviously be worth 21% of global energy usage!)

Even more striking to me is the fact that a lot of other uses of data centers are clearly much more demanding. YouTube uses about 50 times as much energy as ChatGPT; yet nobody seems to be panicking that YouTube is an environmental disaster.

What is a genuine problem is that data centers have strong economies of scale, and so it’s advantageous to build a few very large ones instead of a lot of small ones; and when you build a large data center in a small town it puts a lot of strain on the local energy grid. But that’s not the same thing as saying that data centers in general are wastes of energy; on the contrary, they’re the backbone of the Internet and we all use them almost constantly every day. We should be working on ways to make sure that small towns aren’t harmed by building data centers near them; but we shouldn’t stop building data centers.

What about water usage?

Well, here’s an article estimating that training GPT-3 evaporated hundreds of thousands of liters of fresh water. Once again I have a few notes about that:

  1. Evaporating water is just about the best thing you could do to it aside from leaving it there. It’s much better than polluting it (which is what most water usage does); it’s not even close. That water will simply rain back down later.
  2. Total water usage in the US is estimated at over 300 billion gallons (1.1 trillion liters) per day. Most of that is due to power generation and irrigation. (The best way to save water as a consumer? Become vegetarian—then you’re getting a lot more calories per irrigated acre.)
  3. A typical US household uses about 100 gallons (380 liters) of water per person per day.

So this means that training GPT-3 cost about 4 seconds of US water consumption, or the same as what a single small town uses each day. Once again, that doesn’t seem like something worth panicking over.
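
If you want to check the “small town” comparison yourself, here is a minimal back-of-the-envelope sketch in Python. The 700,000-liter figure is my assumption standing in for “hundreds of thousands of liters” (it is the number most often cited for GPT-3), and the per-person figure is the household estimate above.

    # Rough sanity check on the "small town" comparison.
    # TRAINING_WATER_LITERS is an assumed stand-in for "hundreds of
    # thousands of liters"; the per-person figure is ~100 gallons/day.
    TRAINING_WATER_LITERS = 700_000
    LITERS_PER_PERSON_PER_DAY = 380

    town_population = TRAINING_WATER_LITERS / LITERS_PER_PERSON_PER_DAY
    print(f"A town of ~{town_population:,.0f} people uses that much water each day.")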

A lot of this seems to be that people hear big-sounding numbers and don’t really have the necessary perspective on those numbers. Of course any service that is used by millions of people is going to consume what sounds like a lot of electricity. But in terms of usage per person, or compared to other services with similar reach, AI really doesn’t seem to be uniquely demanding.

This is not to let AI off the hook.

I still agree that the benefits of AI have so far been small, and the risks—both in the relatively short term, of disrupting our economy and causing unemployment, and in the long term, even endangering human civilization itself—are large. I would in fact support an international ban on all for-profit and military research and development of AI; a technology this powerful should be under the control of academic institutions and civilian governments, not corporations.

But I don’t think we need to worry too much about the environmental impact of AI just yet. If we clean up our energy grid (which has just gotten much easier thanks to cheap renewables) and transportation systems, the additional power draw from data centers really won’t be such a big problem.

For my mother, on her 79th birthday

Sep 21 JDN 2460940

When this post goes live, it will be my mother’s 79th birthday. I think birthdays are not a very happy time for her anymore.

I suppose nobody really likes getting older; children are excited to grow up, but once you hit about 25 or 26 (the age at which you can rent a car at the normal rate and the age at which you have to get your own health insurance, respectively) and it becomes “getting older” instead of “growing up”, the excitement rapidly wears off. Even by 30, I don’t think most people are very enthusiastic about their birthdays. Indeed, for some people, I think it might be downhill past 21—you wanted to become an adult, but you had no interest in aging beyond that point.

But I think it gets worse as you get older. As you get into your seventies and eighties, you begin to wonder which birthday will finally be your last; actually I think my mother has been wondering about this even earlier than that, because her brothers died in their fifties, her sister died in her sixties, and my father died at 63. At this point she has outlived a lot of people she loved. I think there is a survivor’s guilt that sets in: “Why do I get to keep going, when they didn’t?”

These are also very hard times in general; Trump and the people who enable him have done tremendous damage to our government, our society, and the world at large in a shockingly short amount of time. It feels like all the safeguards we were supposed to have suddenly collapsed and we gave free rein to a madman.

But while there are many loved ones we have lost, there are many we still have; and our set of loved ones need not be fixed, destined only to dwindle with each new funeral. We can meet new people, and they can become part of our lives. New children can be born into our family, and they can make our family grow. It is my sincere hope that my mother still has grandchildren yet to meet; in my case they would probably need to be adopted, as the usual biological route is pretty much out of the question, and surrogacy seems beyond our budget for the foreseeable future. But we would still love them, and she could still love them, and it is worth sticking around in this world in order to be a part of their lives.

I also believe that this is not the end for American liberal democracy. This is a terrible time, no doubt. Much that we thought would never happen already has, and more still will. It must be so unsettling, so uncanny, for someone who grew up in the triumphant years after America helped defeat fascism in Europe, to grow older and then see homegrown American fascism rise ascendant here. Even those of us who knew history all too well still seem doomed to repeat it.

At this point it is clear that victory over corruption, racism, and authoritarianism will not be easy, will not be swift, may never be permanent—and is not even guaranteed. But it is still possible. There is still enough hope left that we can and must keep fighting for an America worth saving. I do not know when we will win; I do not even know for certain that we will, in fact, win. But I believe we will.

I believe that while it seems powerful—and does everything it can to both promote that image and abuse what power it does have—fascism is a fundamentally weak system, a fundamentally fragile system, which simply cannot sustain itself once a handful of critical leaders are dead, deposed, or discredited. Liberal democracy is kinder, gentler—and also slower, at times even clumsier—than authoritarianism, and so it may seem weak to those whose view of strength is that of the savanna ape or the playground bully; but this is an illusion. Liberal democracy is fundamentally strong, fundamentally resilient. There is power in kindness, inclusion, and cooperation that the greedy and cruel cannot see. Fascism in Germany arrived and disappeared within a generation; democracy in America has stood for nearly 250 years.

We don’t know how much more time we have, Mom; none of us do. I have heard it said that you should live your life as though you will live both a short life and a long one; but honestly, you should probably live your life as though you will live a randomly-decided amount of time that is statistically predicted by actuarial tables—because you will. Yes, the older you get, the less time you have left (almost tautologically); but especially in this age of rapid technological change, none of us really know whether we’ll die tomorrow or live another hundred years.

I think right now, you feel like there isn’t much left to look forward to. But I promise you there is. Maybe it’s hard to see right now; indeed, maybe you—or I, or anyone—won’t ever get to see it. But a brighter future is possible, and it’s worth it to keep going, especially if there’s any way that we might be able to make that brighter future happen sooner.

What does nonviolence mean?

Jun 15 JDN 2460842

As I write this, the LA protests and the crackdown upon them have continued since Friday and it is now Wednesday. In a radical and authoritarian move by Trump, Marines have been deployed (with shockingly incompetent logistics unbefitting the usually highly-efficient US military); but so far they have done very little. Reuters has been posting live updates on new developments.

The LAPD has deployed a variety of less-lethal weapons to disperse the protests, including rubber bullets, tear gas, and pepper balls; but so far they have not used lethal force. Protesters have been arrested, some for specific crimes—and others simply for violating curfew.

More recently, the protests have spread to other cities, including New York, Atlanta, Austin, Chicago, San Francisco, and Philadelphia. By the time this post goes live, there will probably be even more cities involved, and there may also be more escalation.

But for now, at least, the protests have been largely nonviolent.

And I thought it would be worthwhile to make it very clear what I mean by that, and why it is important.

I keep seeing a lot of leftist people on social media not only accepting the narrative that these protests are violent, but actively encouraging that; and some of them have taken to arrogantly accusing anyone who supports nonviolent protests over violent ones of either being naive idiots or acting in bad faith. (The most baffling part of this is that they seem to be saying that Martin Luther King and Mahatma Gandhi were naive idiots or were acting in bad faith? Is that what they meant to say?)

First of all, let me be absolutely clear that nonviolence does not mean comfortable or polite or convenient.

Anyone objecting to blocking traffic, strikes, or civil disobedience because they cause disorder and inconvenience genuinely does not understand the purpose of protest (or is a naive idiot or acting in bad faith). Effective protests are disruptive and controversial. They cause disorder.

Nonviolence does not mean always obeying the law.

Sometimes the law is itself unjust, and must be actively disobeyed. Most of the Holocaust was legal, after all.

Other times, it is necessary to break some laws (such as property laws, curfews, and laws against vandalism) in the service of higher goals.

I wouldn’t say that a law against vandalism is inherently unjust; but I would say that spray-painting walls and vehicles in the service of protecting human rights is absolutely justified, and even sometimes it’s necessary to break some windows or set some fires.

Nonviolence does not mean that nobody tries to call it violence.

Most governments are well aware that most of their citizens are much more willing to support a nonviolent movement than a violent one—more on this later—and thus will do whatever they can to characterize nonviolent movements as violent. They have two chief strategies for doing so:

  1. Characterize nonviolent but illegal acts, such as vandalism and destruction of property, as violence
  2. Actively try to instigate violence by treating nonviolent protesters as if they were violent, and then characterizing their attempts at self-defense as violence

As a great example of the latter, a man in Phoenix was arrested for assault because he kicked a tear gas canister back at police. But kicking back a canister that was shot at you is the most paradigmatic example of self-defense I could possibly imagine. If the system weren’t so heavily biased in favor of the police, a judge would order his release immediately.

Nonviolence does not mean that no one at the protests gets violent.

Any large group of people will contain outliers. Gather a protest of thousands of people, and surely some fraction of them will be violent radicals, or just psychopaths looking for an excuse to hurt someone. A nonviolent protest is one in which most people are nonviolent, and in which anyone who does get violent is shunned by the organizers of the movement.

Nonviolence doesn’t mean that violence will never be used against you.

On the contrary, the more authoritarian the regime—and thus the more justified your protest—the more likely it is that violent force will be used to suppress your nonviolent protests.

In some places it will be limited to less-lethal means (as it has so far in the current protests); but in others, even in ostensibly-democratic countries, it can result in lethal force being deployed against innocent people (as it did at Kent State in 1970).

When this happens, are you supposed to just stand there and get shot?

Honestly? Yes. I know that requires tremendous courage and self-sacrifice, but yes.

I’m not going to fault anyone for running or hiding or even trying to fight back (I’d be more of the “run” persuasion myself), but the most heroic action you could possibly take in that situation is in fact to stand there and get shot. Becoming a martyr is a terrible sacrifice, and not one I’m sure I myself could ever make; but it really, really works. (Seriously, whole religions have been based on this!)

And when you get shot, for the love of all that is good in the world, make sure someone gets it on video.

The best thing you can do for your movement is to show the oppressors for what they truly are. If they are willing to shoot unarmed innocent people, and the world finds out about that, the world will turn against them. The more peaceful and nonviolent you can appear at the moment they shoot you, the more compelling that video will be when it is all over the news tomorrow.

A shockingly large number of social movements have pivoted sharply in public opinion after a widely-publicized martyrdom incident. If you show up peacefully to speak your minds and they shoot you, that is nonviolent protest working. That is your protest being effective.

I never said that nonviolent protest was easy or safe.

What is the core of nonviolence?

It’s really very simple. So simple, honestly, that I don’t understand why it’s hard to get across to people:

Nonviolence means you don’t initiate bodily harm against other human beings.

It does not necessarily preclude self-defense, so long as that self-defense is reasonable and proportionate; and it certainly does not in any way preclude breaking laws, damaging property, or disrupting civil order.


Nonviolence means you never throw the first punch.

Nonviolence is not simply a moral position, but a strategic one.

Some of the people you would be harming absolutely deserve it. I don’t believe in ACAB, but I do believe in SCAB, and nearly 30% of police officers are domestic abusers, who absolutely would deserve a good punch to the face. And this is all the more true of ICE officers, who aren’t just regular bastards; they are bastards whose core job is now enforcing the human rights violations of President Donald Trump. Kidnapping people with their unmarked uniforms and unmarked vehicles, ICE is basically the Gestapo.

But it’s still strategically very unwise for us to deploy violence. Why? Two reasons:

  1. Using violence is a sure-fire way to turn most Americans against our cause.
  2. We would probably lose.

Nonviolent protest is nearly twice as effective as violent insurrection. (If you take nothing else from this post, please take that.)

And the reason that nonviolent protest is so effective is that it changes minds.

Violence doesn’t do that; in fact, it tends to make people rally against you. Once you start killing people, even people who were on your side may start to oppose you—let alone anyone who was previously on the fence.

A successful violent revolution results in you having to build a government and enforce your own new laws against a population that largely still disagrees with you—and if you’re a revolution made of ACAB people, that sounds spectacularly difficult!

A successful nonviolent protest movement results in a country that agrees with you—and it’s extremely hard for even a very authoritarian regime to hang onto power when most of the people oppose it.

By contrast, the success rate of violent insurrections is not very high. Why?

Because they have all the guns, you idiot.

States try to maintain a monopoly on violence in their territory. They are usually pretty effective at doing so. Thus attacking a state when you are not a state puts you at a tremendous disadvantage.

Seriously; we are talking about the United States of America right now, the most powerful military hegemon the world has ever seen.

Maybe the people advocating violence don’t really understand this, but the US has not lost a major battle since 1945. Oh, yes, they’ve “lost wars”, but what that really means is that public opinion has swayed too far against the war for them to maintain morale (Vietnam) or their goals for state-building were so over-ambitious that they were basically impossible for anyone to achieve (Iraq and Afghanistan). If you tally up the actual number of soldiers killed, US troops always kill more than they lose, and typically by a very wide margin.


And even with the battles the US lost in WW1 and WW2, they still very much won the actual wars. So genuinely defeating the United States in open military conflict is not something that has happened since… I’m pretty sure the War of 1812.

Basically, advocating for a violent response to Trump is saying that you intend to do something that literally no one in the world—including major world military powers—has been able to accomplish in 200 years. The last time someone got close, the US nuked them.

If the protesters in LA were genuinely the insurrectionists that Trump has been trying to characterize them as, those Marines would not only have been deployed, they would have started shooting. And I don’t know if you realize this, but US Marines are really good at shooting. It’s kind of their thing. Instead of skirmishes with rubber bullets and tear gas, we would have an absolute bloodbath. It would probably end up looking like the Tet Offensive, a battle where “unprepared” US forces “lost” because they lost 6,000 soldiers and “only” killed 45,000 in return. (The US military is so hegemonic that a kill ratio of more than 7 to 1 is considered a “loss” in the media and public opinion.)

Granted, winning a civil war is different from winning a conventional war; even if a civil war broke out, it’s unlikely that nukes would be used on American soil, for instance. But you’re still talking about a battle so uphill it’s more like trying to besiege Edinburgh Castle.

Our best hope in such a scenario, in fact, would probably be to get blue-state governments to assert control over US military forces in their own jurisdiction—which means that antagonizing Gavin Newsom, as I’ve been seeing quite a few leftists doing lately, seems like a really bad idea.

I’m not saying that winning a civil war would be completely impossible. Since we might be able to get blue-state governors to take control of forces in their own states and we would probably get support from Canada, France, and the United Kingdom, it wouldn’t be completely hopeless. But it would be extremely costly, millions of people would die, and victory would by no means be assured despite the overwhelming righteousness of our cause.

How about, for now at least, we stick to the methods that historically have proven twice as effective?

On land acknowledgments

Dec 29 JDN 2460674

Noah Smith and Brad DeLong, both of whom I admire, have recently written about the practice of land acknowledgments. Smith is wholeheartedly against them. DeLong has a more nuanced view. Smith in fact goes so far as to argue that there is no moral basis for considering these lands to be ‘Native lands’ at all, which DeLong rightly takes issue with.

I feel like this might be an issue where it would be better to focus on Native American perspectives. (Not that White people aren’t allowed to talk about it; just that we tend to hear from them on everything, and this is something where maybe they’re less likely to know what they’re talking about.)

It turns out that Native views on land acknowledgments are also quite mixed; some see them as a pointless, empty gesture; others see them as a stepping-stone to more serious policy changes that are necessary. There is general agreement that more concrete actions, such as upholding treaties and maintaining tribal sovereignty, are more important.

I have to admit I’m much more in the ‘empty gesture’ camp. I’m only one-fourth Native (so I’m Whiter than I am not), but my own view on this is that land acknowledgments aren’t really accomplishing very much, and in fact aren’t even particularly morally defensible.

Now, I know that it’s not realistic to actually “give back” all the land in the United States (or Australia, or anywhere where indigenous people were forced out by colonialism). Many of the tribes that originally lived on the land are gone, scattered to the winds, or now living somewhere else that they were forced to (predominantly Oklahoma). Moreover, there are now more non-Native people living on that land than there ever were Native people living on it, and forcing them all out would be just as violent and horrific as forcing out the Native people was in the first place.

I even appreciate Smith’s point that there is something problematic about assigning ownership of land to bloodlines of people just because they happened to be the first ones living there. Indeed, as he correctly points out, they often weren’t the first ones living there; different tribes have been feuding and warring with each other since time immemorial, and it’s likely that any given plot of land was held by multiple different tribes at different times even before colonization.

Let’s make this a little more concrete.

Consider the Beaver Wars.


The Beaver Wars were a series of conflicts between the Haudenosaunee (that’s what they call themselves; to a non-Native audience they are better known by what the French called them, Iroquois) and several other tribes. Now, that was after colonization, and the French were involved, and part of what they were fighting over was the European fur trade—so the story is a bit complicated by that. But it’s a conflict we have good historical records of, and it’s pretty clear that many of these rivalries long pre-dated the arrival of the French.

The Haudenosaunee were brutal in the Beaver Wars. They slaughtered thousands, including many helpless civilians, and effectively wiped out several entire tribes, including the Erie and Susquehannock, and devastated several others, including the Mohicans and the Wyandot. Many historians consider these to be acts of genocide. Surely any land that the Haudenosaunee claimed as a result of the Beaver Wars is as illegitimate as land claimed by colonial imperialism? Indeed, isn’t it colonial imperialism?

Yet we have no reason to believe that these brutal wars were unique to the Haudenosaunee, or that they only occurred after colonization. Our historical records aren’t as clear going that far back, because many Native tribes didn’t keep written records—in fact, many didn’t even have a written language. But what we do know suggests that a great many tribes warred with a great many other tribes, and land was gained and lost in warfare, going back thousands of years.

Indeed, it seems to be a sad fact of human history that virtually all land, indigenous or colonized, is actually owned by a group that conquered another group (that conquered another group, that conquered another group…). European colonialism was simply the most recent conquest.

But this doesn’t make European colonialism any more justifiable. Rather, it raises a deeper question:

How should we decide who owns what land?

The simplest way, and the way that we actually seem to use most of the time, is to simply take whoever currently owns the land as its legitimate ownership. “Possession is nine-tenths of the law” was always nonsense when it comes to private property (that’s literally what larceny means!), but when it comes to national sovereignty, it is basically correct. Once a group manages to organize itself well enough to enforce control over a territory, we pretty much say that it’s their territory now and they’re allowed to keep it.

Does that mean that anyone is just allowed to take whatever land they can successfully conquer and defend? That the world must simply accept that chaos and warfare are inevitable? Fortunately, there is a solution to this problem.

The Westphalian solution.

The current solution to this problem is what’s called Westphalian sovereignty, after the Peace of Westphalia, two closely-related treaties that were signed in Westphalia (a region of Germany) in 1648. Those treaties established a precedent in international law that nations are entitled to sovereignty over their own territory; other nations are not allowed to invade and conquer them, and if anyone tries, the whole international community should fight to resist any such attempt.

Effectively, what Westphalia did was establish that whoever controlled a given territory right now (where “right now” means 1648) now gets the right to hold it forever—and everyone else not only has to accept that, they are expected to defend it. Now, clearly this has not been followed precisely; new nations have gained independence from their empires (like the United States), nations have separated into pieces (like India and Pakistan, the Balkans, and most recently South Sudan), and sometimes even nations have successfully conquered each other and retained control—but the latter has been considerably rarer than it was before the establishment of Westphalian sovereignty. (Indeed, part of what makes the Ukraine War such an aberration is that it is a brazen violation of Westphalian sovereignty the likes of which we haven’t seen since the Second World War.)

This was, as far as I can tell, a completely pragmatic solution, with absolutely no moral basis whatsoever. We knew in 1648, and we know today, that virtually every nation on Earth was founded in bloodshed, its land taken from others (who took it from others, who took it from others…). And it was timed in such a way that European colonialism became etched in stone—no European power was allowed to take over another European power’s colonies anymore, but they were all allowed to keep all the colonies they already had, and the people living in those colonies didn’t get any say in the matter.

Since then, most (but by no means all) of those colonies have revolted and gained their own independence. But by the time it happened, there were large populations of former colonists, and the indigenous populations were often driven out, dramatically reduced, or even outright exterminated. There is something unsettling about founding a new democracy like the United States or Australia after centuries of injustice and oppression have allowed a White population to establish a majority over the indigenous population; had indigenous people been democratically represented all along, things would probably have gone a lot differently.

What do land acknowledgments accomplish?

I think that the intent behind land acknowledgments is to recognize and commemorate this history of injustice, in the hopes of somehow gaining some kind of at least partial restitution. The intentions here are good, and the injustices are real.

But there is something fundamentally wrong with the way most land acknowledgments are done, because they basically just push the sovereignty back one step: They assert that whoever held the land before Europeans came along is the land’s legitimate owner. But what about the people before them (and the people before them, and the people before them)? How far back in the chain of violence are we supposed to go before we declare a given group’s conquests legitimate?

How far back can we go?

Most of these events happened many centuries ago and were never written down, and all we have now is vague oral histories that may or may not even be accurate. Particularly when one tribe forces out another, it rather behooves the conquering tribe to tell the story in their own favor, as one of “reclaiming” land that was rightfully theirs all along, whether or not that was actually true—as they say, history is written by the victors. (I think it’s actually more true when the history is never actually written.) And in some cases it’s probably even true! In others, that land may have been contested between the two tribes for so long that nobody honestly knows who owned it first.

It feels wrong to legitimate the conquests of colonial imperialism, but it feels just as wrong to simply push it back one step—or three steps, or seven steps.

I think that ultimately what we must do is acknowledge this entire history.

We must acknowledge that this land was stolen by force from Native Americans, and also that most of those Native Americans acquired their land by stealing it by force from other Native Americans, and the chain goes back farther than we have records. We must acknowledge that this is by no means unique to the United States but in fact a universal feature of almost all land held by anyone anywhere in the world. We must acknowledge that this chain of violence and conquest has been a part of human existence since time immemorial—and affirm our commitment to end it, once and for all.

That doesn’t simply mean accepting the current allocation of land; land, like many other resources, is clearly distributed unequally and unfairly. But it does mean that however we choose to allocate land, we must do so by a fair and peaceful process, not by force and conquest. The chain of violence that has driven human history for thousands of years must finally be brought to an end.

Why is America so bad at public transit?

Sep 8 JDN 2460562

In most of Europe, 20-30% of the population commutes daily by public transit. In the US, only 13% do.

Even countries much poorer than the US have more widespread use of public transit; Kenya, Russia, and Venezuela all have very high rates of public transit use.

Cities around the world are rapidly expanding and improving their subway systems; but here in the US, we are not.

Germany, France, Spain, Italy, and Japan are all building huge high-speed rail networks. We have essentially none.

Even Canada has better public transit than we do, and their population is just as spread out as ours.

Why are we so bad at this?

Surprisingly, it isn’t really that we are lacking in rail network. We actually have more kilometers of rail than China or the EU—though shockingly little of it is electrified, and we had nearly twice as many kilometers of rail a century ago. But we use this rail network almost entirely for freight, not passengers.

Is it that we aren’t spending enough government funds? Sort of. But it’s worth noting that we cover a higher proportion of public transit costs with government funds than most other countries. How can this be? It’s because transit systems get more efficient as they get larger, and attract more passengers as they provide better service. So when you provide really bad service, you end up spending more per passenger, and you need more government subsidies to stay afloat.

Cost is definitely part of it: It costs between two and seven times as much to build the same amount of light rail network in the US as it does in most EU countries. But that just raises another question: Why is it so much more expensive here?

This isn’t comparing with China—of course China is cheaper; they have a dictatorship, they abuse their workers, they pay peanuts. None of that is true of France or Germany, democracies where wages are just as high and worker protections are actually a good deal stronger than here. Yet it still costs two to seven times as much to build the same amount of rail in the US as it does in France or Germany.

Another part of the problem seems to be that public transit in the US is viewed as a social welfare program, rather than an infrastructure program: Rather than seeing it as a vital function of government that supports a strong economy, we see it as a last resort for people too poor to buy cars. And then it becomes politicized, because the right wing in the US hates social welfare programs and will do anything to make sure that they are cut down as much as possible.

It wasn’t always this way.

As recently as 1970, most US major cities had strong public transit systems. But now it’s really only the coastal cities that have them; cities throughout the South and Midwest have massively divested from their public transit. This goes along with a pattern of deindustrialization and suburbanization: These cities are stagnating economically and their citizens are moving out to the suburbs, so there’s no money for public transit and there’s more need for roads.

But the decline of US public transit goes back even further than that. Average transit trips per person in the US fell from 115 per year in 1950 to 36 per year in 1970.

This long, slow decline has only gotten worse as a result of the COVID pandemic; with more and more people working remotely, there’s just less need for commuting in general. (Then again, that also means fewer car miles, so it’s probably a good thing from an environmental perspective.)

Once public transit starts failing, it becomes a vicious cycle: The system loses revenue, so it cuts back on service, so it becomes more inconvenient, so it loses even more revenue. Really successful public transit systems require very heavy investment in order to maintain fast, convenient service across an entire city. Any less than that, and people will just turn to cars instead.

Currently, the public transit systems in most US cities are suffering severe financial problems, largely as a result of the pandemic; they are facing massive shortfalls in their budgets. The federal government often helps with the capital costs of buying vehicles and laying down new lines, but not with the operating costs of actually running the system.

There seems to be some kind of systemic failure in the US in particular; something about our politics, or our economy, or our culture just makes us uniquely bad at building and maintaining public transit.

What should we do about this?

One option would be to do nothing—laissez faire. Maybe cars are just a more efficient mode of transportation, or better for what Americans want, and we should accept that.

But when you look at the externalities involved, it becomes clear that this is not the right approach. While cars produce enormous amounts of pollution and carbon emissions, public transit is much, much cleaner. (Electric cars are better than diesel buses, but still worse than trams and light rail—and besides, the vast majority of cars use gasoline.) Just for clean air and climate change alone, we have strong reasons to want fewer cars and more public transit.

And there are positive externalities of public transit too; it’s been estimated that for every $1 spent on public transit, a city gains $5 in economic activity. We’re leaving a lot of money on the table by failing to invest in something so productive.

We need a fundamental shift in how Americans think about public transit. Not as a last resort for the poor, but as a default option for everyone. Not as a left-wing social welfare program, but as a vital component of our nation’s infrastructure.

Whenever people get stuck in traffic, instead of resenting other drivers (who are in exactly the same boat!), they should resent that the government hasn’t supported more robust public transit systems—and then they should go out and vote for candidates and policies that will change that.

Of course, with everything else that’s wrong with our economy and our political system, I can understand why this might not be a priority right now. But sooner or later we are going to need to fix this, or it’s just going to keep getting worse and worse.

Reflections at the crossroads

Jan 21 JDN 2460332

When this post goes live, I will have just passed my 36th birthday. (That means I’ve lived for about 1.1 billion seconds, so in order to be as rich as Elon Musk, I’d need to have made, on average, since birth, $200 per second—$720,000 per hour.)
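
For the curious, the arithmetic behind that parenthetical looks roughly like this; Musk’s net worth is my assumption (around $230 billion at the time, and it fluctuates a lot), so treat the output as approximate.

    # Rough arithmetic behind the "$200 per second" parenthetical.
    # The net worth figure is an assumption (~$230 billion); it fluctuates.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600
    age_seconds = 36 * SECONDS_PER_YEAR      # ~1.1 billion seconds
    net_worth = 230e9                        # dollars, assumed

    per_second = net_worth / age_seconds
    print(f"About {age_seconds:.2e} seconds lived")
    print(f"Needed: ${per_second:,.0f} per second, or ${per_second * 3600:,.0f} per hour")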

I certainly feel a lot better turning 36 than I did 35. I don’t have any particular additional accomplishments to point to, but my life has already changed quite a bit, in just that one year: Most importantly, I quit my job at the University of Edinburgh, and I am currently in the process of moving out of the UK and back home to Michigan. (We moved the cat over Christmas, and the movers have already come and taken most of our things away; it’s really just us and our luggage now.)

But I still don’t know how to field the question that people have been asking me since I announced my decision to do this months ago:

“What’s next?”

I’m at a crossroads now, trying to determine which path to take. Actually maybe it’s more like a roundabout; it has a whole bunch of different paths, surely not just two or three. The road straight ahead is labeled “stay in academia”; the others at the roundabout are things like “freelance writing”, “software programming”, “consulting”, and “tabletop game publishing”. There’s one well-paved and superficially enticing road that I’m fairly sure I don’t want to take, labeled “corporate finance”.

Right now, I’m just kind of driving around in circles.

Most people don’t seem to quit their jobs without a clear plan for where they will go next. Often they wait until they have another offer in hand that they intend to take. But when I realized just how miserable that job was making me, I made the—perhaps bold, perhaps courageous, perhaps foolish—decision to get out as soon as I possibly could.

It’s still hard for me to fully understand why working at Edinburgh made me so miserable. Many features of an academic career are very appealing to me. I love teaching, I like doing research; I like the relatively flexible hours (and kinda need them, because of my migraines).

I often construct formal decision models to help me make big choices—generally it’s a linear model, where I simply rate each option by its relative quality in a particular dimension, then try different weightings of all the different dimensions. I’ve used this successfully to pick out cars, laptops, even universities. I’m not entrusting my decisions to an algorithm; I often find myself tweaking the parameters to try to get a particular result—but that in itself tells me what I really want, deep down. (Don’t do that in research—people do, and it’s bad—but if the goal is to make yourself happy, your gut feelings are important too.)
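
As an illustration only, here is a minimal sketch of that kind of linear model; the options, dimensions, ratings, and weights below are all hypothetical, not my actual numbers.

    # A minimal sketch of the linear decision model described above:
    # rate each option on each dimension, take a weighted sum, and see
    # how the ranking shifts as the weights change. All names and numbers
    # here are hypothetical, for illustration only.
    options = {
        "university teaching": {"enjoyment": 8, "income": 5, "flexibility": 7},
        "freelance writing":   {"enjoyment": 9, "income": 3, "flexibility": 9},
        "corporate finance":   {"enjoyment": 3, "income": 9, "flexibility": 4},
    }

    def score(ratings, weights):
        return sum(weights[dim] * value for dim, value in ratings.items())

    for weights in ({"enjoyment": 0.5, "income": 0.3, "flexibility": 0.2},
                    {"enjoyment": 0.3, "income": 0.5, "flexibility": 0.2}):
        ranking = sorted(options, key=lambda name: score(options[name], weights),
                         reverse=True)
        print(weights, "->", ranking)

Notice that the second weighting flips the ranking: watching which weights you find yourself reaching for is exactly the part that tells you what you really want.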

My decision models consistently rank university teaching quite high. It generally only gets beaten by freelance writing—which means that maybe I should give freelance writing another try after all.

And yet, my actual experience at Edinburgh was miserable.

What went wrong?

Well, first of all, I should acknowledge that when I separate out the job “university professor” into teaching and research as separate jobs in my decision model, and include all that goes into both jobs—not just the actual teaching, but the grading and administrative tasks; not just doing the research, but also trying to fund and publish it—they both drop lower on the list, and research drops down a lot.

Also, I would rate them both even lower now, having more direct experience of just how awful the exam-grading, grant-writing and journal-submitting can be.

Designing and then grading an exam was tremendously stressful: I knew that many of my students’ futures rested on how they did on exams like this (especially in the UK system, where exams are absurdly overweighted! In most of my classes, the final exam was at least 60% of the grade!). I struggled mightily to make the exam as fair as I could, all the while knowing that it would never really feel fair and I didn’t even have the time to make it the best it could be. You really can’t assess how well someone understands an entire subject in a multiple-choice exam designed to take 90 minutes. It’s impossible.

The worst part of research for me was the rejection.

I mentioned in a previous post how I am hypersensitive to rejection; applying for grants and submitting to journals brought the worst feelings of rejection I’ve felt in any job. It felt like they were evaluating not only the value of my work, but my worth as a scientist. Failure felt like being told that my entire career was a waste of time.

It was even worse than the feeling of rejection in freelance writing (which is one of the few things that my model tells me is bad about freelancing as a career for me, along with relatively low and uncertain income). I think the difference is that a book publisher is saying “We don’t think we can sell it.”—’we’ and ‘sell’ being vital. They aren’t saying “this is a bad book; it shouldn’t exist; writing it was a waste of time.”; they’re just saying “It’s not a subgenre we generally work with.” or “We don’t think it’s what the market wants right now.” or even “I personally don’t care for it.”. They acknowledge their own subjective perspective and the fact that it’s ultimately dependent on forecasting the whims of an extremely fickle marketplace. They aren’t really judging my book, and they certainly aren’t judging me.

But in research publishing, it was different. Yes, it’s all in very polite language, thoroughly spiced with sophisticated jargon (though some reviewers are more tactful than others). But when your grant application gets rejected by a funding agency or your paper gets rejected by a journal, the sense really basically is “This project is not worth doing.”; “This isn’t good science.”; “It was/would be a waste of time and money.”; “This (theory or experiment you’ve spent years working on) isn’t interesting or important.” Nobody ever came out and said those things, nor did they come out and say “You’re a bad economist and you should feel bad.”; but honestly a couple of the reviews did kinda read to me like they wanted to say that. They thought that the whole idea that human beings care about each other is fundamentally stupid and naive and not worth talking about, much less running experiments on.

It isn’t so much that I believed them that my work was bad science. I did make some mistakes along the way (but nothing vital; I’ve seen far worse errors by Nobel Laureates). I didn’t have very large samples (because every person I add to the experiment is money I have to pay, and therefore funding I have to come up with). But overall I do believe that my work is sufficiently rigorous to be worth publishing in scientific journals.

It’s more that I came to feel that my work is considered bad, that the kind of work I wanted to do would forever be an uphill battle against an implacable enemy. I already feel exhausted by that battle, and it had only barely begun. I had thought that behavioral economics was a more successful paradigm by now, that it had largely displaced the neoclassical assumptions that came before it; but I was wrong. Except specifically in journals dedicated to experimental and behavioral economics (of which prestigious journals are few—I quickly exhausted them), it really felt like a lot of the feedback I was getting amounted to, “I refuse to believe your paradigm.”.

Part of the problem, also, was that there simply aren’t that many prestigious journals, and they don’t take that many papers. The top 5 journals—which, for whatever reason, command far more respect than any other journals among economists—each accept only about 5-10% of their submissions. Surely more than that are worth publishing; and, to be fair, much of what they reject probably gets published later somewhere else. But it makes a shockingly large difference in your career how many “top 5s” you have; other publications almost don’t matter at all. So once you don’t get into any of those (which of course I didn’t), should you even bother trying to publish somewhere else?

And what else almost doesn’t matter? Your teaching. As long as you show up to class and grade your exams on time (and don’t, like, break the law or something), research universities basically don’t seem to care how good a teacher you are. That was certainly my experience at Edinburgh. (Honestly even their responses to professors sexually abusing their students are pretty unimpressive.)

Some of the other faculty cared, I could tell; there were even some attempts to build a community of colleagues to support each other in improving teaching. But the administration seemed almost actively opposed to it; they didn’t offer any funding to support the program—they wouldn’t even buy us pizza at the meetings, the sort of thing I had as an undergrad for my activist groups—and they wanted to take the time we spent in such pedagogy meetings out of our grading time (probably because if they didn’t, they’d either have to give us less grading, or some of us would be over our allotted hours and they’d owe us compensation).

And honestly, it is teaching that I consider the higher calling.

The difference between 0 people knowing something and 1 knowing it is called research; the difference between 1 person knowing it and 8 billion knowing it is called education.

Yes, of course, research is important. But if all the research suddenly stopped, our civilization would stagnate at its current level of technology, but otherwise continue unimpaired. (Frankly it might spare us the cyberpunk dystopia/AI apocalypse we seem to be hurtling rapidly toward.) Whereas if all education suddenly stopped, our civilization would slowly decline until it ultimately collapsed into the Stone Age. (Actually it might even be worse than that; even Stone Age cultures pass on knowledge to their children, just not through formal teaching. If you include all the ways parents teach their children, it may be literally true that humans cannot survive without education.)

Yet research universities seem to get all of their prestige from their research, not their teaching, and prestige is the thing they absolutely value above all else, so they devote the vast majority of their energy toward valuing and supporting research rather than teaching. In many ways, the administrators seem to see teaching as an obligation, as something they have to do in order to make money that they can spend on what they really care about, which is research.

As such, they are always making classes bigger and bigger, trying to squeeze out more tuition dollars (well, in this case, pounds) from the same number of faculty contact hours. It becomes impossible to get to know all of your students, much less give them all sufficient individual attention. At Edinburgh they even had the gall to refer to their seminars as “tutorials” when they typically had 20+ students. (That is not tutoring!) And then of course there were the lectures, which often had over 200 students.

I suppose it could be worse: It could be athletics they spend all their money on, like most Big Ten universities. (The University of Michigan actually seems to strike a pretty good balance: they are certainly not hurting for athletic funding, but they also devote sizeable chunks of their budget to research, medicine, and yes, even teaching. And unlike virtually all other varsity athletic programs, University of Michigan athletics turns a profit!)

If all the varsity athletics in the world suddenly disappeared… I’m not convinced we’d be any worse off, actually. We’d lose a source of entertainment, but it could probably be easily replaced by, say, Netflix. And universities could re-focus their efforts on academics, instead of acting like a free training and selection system for the pro leagues. The University of California, Irvine certainly seemed no worse off for its lack of varsity football. (Though I admit it felt a bit strange, even to a consummate nerd like me, to have a varsity League of Legends team.)

They keep making the experience of teaching worse and worse, even as they cut faculty salaries and make our jobs more and more precarious.

That might be what really made me most miserable, knowing how expendable I was to the university. If I hadn’t quit when I did, I would have been out after another semester anyway, and going through this same process a bit later. It wasn’t even that I was denied tenure; it was never on the table in the first place. And perhaps because they knew I wouldn’t stay anyway, they didn’t invest anything in mentoring or supporting me. Ostensibly I was supposed to be assigned a faculty mentor immediately; I know the first semester was crazy because of COVID, but after two and a half years I still didn’t have one. (I had a small research budget, which they reduced in the second year; that was about all the support I got. I used it—once.)

So if I do continue on that “academia” road, I’m going to need to do a lot of things differently. I’m not going to put up with a lot of things that I did. I’ll demand a long-term position—if not tenure-track, at least renewable indefinitely, like a lecturer position (as it is in the US, where the tenure-track position is called “assistant professor” and “lecturer” is permanent but not tenured; in the UK, “lecturers” are tenure-track—except at Oxford, and as of 2021, Cambridge—just to confuse you). Above all, I’ll only be applying to schools that actually have some track record for valuing teaching and supporting their faculty.

And if I can’t find any such positions? Then I just won’t apply at all. I’m not going in with the “I’ll take what I can get” mentality I had last time. Our household finances are stable enough that I can afford to wait awhile.

But maybe I won’t even do that. Maybe I’ll take a different path entirely.

For now, I just don’t know.

Productivity can cope with laziness, but not greed

Oct 8 JDN 2460226

At least since Star Trek, it has been a popular vision of utopia: post-scarcity, an economy where goods are so abundant that there is no need for money or any kind of incentive to work, and people can just do what they want and have whatever they want.

It certainly does sound nice. But is it actually feasible? I’ve written about this before.

I’ve been reading some more books set in post-scarcity utopias, including works by Ursula K. Le Guin (who is a legend) and Cory Doctorow (who is merely pretty good). And it struck me that while there is one major problem of post-scarcity that they seem to have good solutions for, there is another one that they really don’t. (To their credit, neither author totally ignores it; they just don’t seem to see it as an insurmountable obstacle.)

The first major problem is laziness.

A lot of people assume that the reason we couldn’t achieve a post-scarcity utopia is that once your standard of living is no longer tied to your work, people would just stop working. I think this assumption rests on both an overly cynical view of human nature and an overly pessimistic view of technological progress.

Let’s do a thought experiment. If you didn’t get paid, and just had the choice to work or not, for whatever hours you wished, motivated only by the esteem of your peers, your contribution to society, and the joy of a job well done, how much would you work?

I contend it’s not zero. At least for most people, work does provide some intrinsic satisfaction. It’s also probably not as much as you are currently working; otherwise you wouldn’t insist on getting paid. Those are our lower and upper bounds.

Is it 80% of your current work? Perhaps not. What about 50%? Still too high? 20% seems plausible, but maybe you think that’s still too high. Surely it’s at least 10%. Surely you would be willing to work at least a few hours per week at a job you’re good at that you find personally fulfilling. My guess is that it would actually be more than that, because once people were free of the stress and pressure of working for a living, they would be more likely to find careers that truly brought them deep satisfaction and joy.

But okay, to be conservative, let’s estimate that people are only willing to work 10% as much under a system where labor is fully optional and there is no such thing as a wage. What kind of standard of living could we achieve?

Well, at the current level of technology and capital in the United States, per-capita GDP at purchasing power parity is about $80,000. 10% of that is $8,000. This may not sound like a lot, but it’s about how people currently live in Venezuela. India is slightly better, Ghana is slightly worse. This would feel poor to most Americans today, but it’s objectively a better standard of living than most humans have had throughout history, and not much worse than the world average today.

If per-capita GDP growth continues at its current rate of about 1.5% per year for another century, that $80,000 would become $320,000, 10% of which is $32,000—that would put us at the standard of living of present-day Bulgaria, or what the United States was like in the distant past of [checks notes] 1980. That wouldn’t even feel poor. In fact if literally everyone had this standard of living, nearly as many Americans today would be richer as would be poorer, since the current median personal income is only a bit higher than that.
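
Here is the compounding arithmetic from this section as a small sketch; note that exact 1.5% growth over a century multiplies income by about 4.4, so the $320,000 figure in the text is rounding down to a clean quadrupling.

    # Compound-growth arithmetic behind the paragraph above. The 10% work
    # share and ~1.5% growth rate are the post's assumptions, not forecasts.
    gdp_per_capita = 80_000   # current US GDP per capita at PPP, roughly
    work_share = 0.10         # assume people work only 10% as much
    growth_rate = 0.015       # per-capita growth per year
    years = 100

    today = gdp_per_capita * work_share
    future = gdp_per_capita * (1 + growth_rate) ** years * work_share
    print(f"Voluntary-work income today:        ${today:,.0f}")
    print(f"Voluntary-work income in {years} years: ${future:,.0f}")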

Thus, the utopian authors are right about this one: Laziness is a solvable problem. We may not quite have it solved yet, but it’s on the ropes; a few more major breakthroughs in productivity-enhancing technology and we’ll basically be there.

In fact, on a small scale, this sort of utopian communist anarchy already works, and has for centuries. There are little places, all around the world, where people gather together and live and work in a sustainable, basically self-sufficient way without being motivated by wages or salaries, indeed often without owning any private property at all.

We call these places monasteries.

Granted, life in a monastery clearly isn’t for everyone: I certainly wouldn’t want to live a life of celibacy and constant religious observance. But the long-standing traditions of monastic life in several very different world religions do prove that it’s possible for human beings to live and even flourish in the absence of a profit motive.

Yet the fact that monastic life is so strict turns out to be no coincidence: In a sense, it had to be for the whole scheme to work. I’ll get back to that in a moment.

The second major problem with a post-scarcity utopia is greed.

This is the one that I think is the real barrier. It may not be totally insurmountable, but thus far I have yet to hear any good proposals that would seriously tackle it.

The issue with laziness is that we don’t really want to work as much as we do. But since we do actually want to work a little bit, the question is simply how to make as much as we currently do while working only as much as we want to. Hence, to deal with laziness, all we need to do is be more efficient. That’s something we are shockingly good at; the overall productivity of our labor is now something like 100 times what it was at the dawn of the Industrial Revolution, and still growing all the time.

Greed is different. The issue with greed is that, no matter how much we have, we always want more.

Some people are clearly greedier than others. In fact, I’m even willing to bet that most people’s greed could be kept in check by a society that provided for everyone’s basic needs for free. Yeah, maybe sometimes you’d fantasize about living in a gigantic mansion or going into outer space; but most of the time, most of us could actually be pretty happy as long as we had a roof over our heads and food on our tables. I know that in my own case, my grandest ambitions largely involve fighting global poverty—so if that became a solved problem, my life’s ambition would be basically fulfilled, and I wouldn’t mind so much retiring to a life of simple comfort.

But is everyone like that? This is what anarchists don’t seem to understand. In order for anarchy to work, you need everyone to fit into that society. Most of us, or even nearly all of us, fitting in just won’t cut it.

Ammon Hennacy famously declared: “An anarchist is someone who doesn’t need a cop to make him behave.” But this is wrong. An anarchist is someone who thinks that no one needs a cop to make him behave. And while I am the former, I am not the latter.

Perhaps the problem is that anarchists don’t realize that not everyone is as good as they are. They implicitly apply their own mentality to everyone else, and assume that the only reason anyone ever cheats, steals, or kills is because their circumstances are desperate.

Don’t get me wrong: A lot of crime—perhaps even most crime—is committed by people who are desperate. Improving overall economic circumstances does in fact greatly reduce crime. But there is also a substantial proportion of crime—especially the most serious crimes—which is committed by people who aren’t particularly desperate; they are simply psychopaths. They aren’t victims of circumstance. They’re just evil. And society needs a way to deal with them.

If you set up a society so that anyone can just take whatever they want, there will be some people who take much more than their share. If you have no system of enforcement whatsoever, there’s nothing to stop a psychopath from just taking everything he can get his hands on. And then it really doesn’t matter how productive or efficient you are; whatever you make will simply get taken by whoever is greediest—or whoever is strongest.

In order to avoid that, you need to either set up a system that stops people from taking more than their share, or you need to find a way to exclude people like that from your society entirely.

This brings us back to monasteries. Why are they so strict? Why are the only places where utopian anarchism seems to flourish also places where people have to wear a uniform, swear vows, carry out complex rituals, and continually pledge their fealty to an authority? (Note, by the way, that I’ve also just described life in the military, which also has a lot in common with life in a monastery—and for much the same reasons.)

It’s a selection mechanism. Probably no one consciously thinks of it this way—indeed, it seems to be important to how monasteries work that people are not consciously weighing the costs and benefits of all these rituals. This is probably something that memetically evolved over centuries, rather than anything that was consciously designed. But functionally, that’s what it does: You only get to be part of a monastic community if you are willing to pay the enormous cost of following all these strict rules.

That makes it a form of costly signaling. Psychopaths are, in general, more prone to impulsiveness and short-term thinking. They are therefore less willing than others to bear the immediate cost of donning a uniform and following a ritual in order to get the long-term gains of living in a utopian community. This excludes psychopaths from ever entering the community, and thus protects against their predation.

Even celibacy may be a feature rather than a bug: Psychopaths are also prone to promiscuity. (And indeed, utopian communes that practice free love seem to have a much worse track record of being hijacked by psychopaths than monasteries that require celibacy!)

Of course, lots of people who aren’t psychopaths aren’t willing to pay those costs either—like I said, I’m not. So the selection mechanism is in a sense overly strict: It excludes people who would support the community just fine, but aren’t willing to pay the cost. But in the long run, this turns out to be less harmful than being too permissive and letting your community get hijacked and destroyed by psychopaths.

Yet if our goal is to make a whole society that achieves post-scarcity utopia, we can’t afford to be so strict. We already know that most people aren’t willing to become monks or nuns.

That means that we need a selection mechanism which is more reliable—more precisely, one with higher specificity.

I mentioned this in a previous post in the context of testing for viruses, but it bears repeating. Sensitivity and specificity are two complementary measures of a test’s accuracy. The sensitivity of a test is how likely it is to show positive if the truth is positive. The specificity of a test is how likely it is to show negative if the truth is negative.

As a test of psychopathy, monastic strictness has very high sensitivity: If you are a psychopath, there’s a very high chance it will weed you out. But it has quite low specificity: Even if you’re not a psychopath, there’s still a very high chance you won’t want to become a monk.
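For concreteness, here’s how those two measures are computed, as a tiny Python sketch with made-up illustrative numbers (a “positive” here means the test flags you, i.e., monastic strictness screens you out):

```python
def sensitivity(true_pos, false_neg):
    """P(test flags you | you really are a psychopath)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """P(test clears you | you really are not a psychopath)."""
    return true_neg / (true_neg + false_pos)

# Purely hypothetical numbers for illustration: of 100 psychopaths,
# strict monastic rules screen out 99; of 1,000 non-psychopaths,
# only 50 are willing to take the vows anyway.
print(sensitivity(true_pos=99, false_neg=1))    # 0.99 -- very high
print(specificity(true_neg=50, false_pos=950))  # 0.05 -- very low
```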

For a utopian society to work, we need something that’s more specific, something that won’t exclude a lot of people who don’t deserve to be excluded. But it still needs to have much the same sensitivity, because letting psychopaths into your utopia is a very easy way to let that utopia destroy itself. We do not yet have such a test, nor any clear idea how we might create one.

And that, my friends, is why we can’t have nice things. At least, not yet.

The inequality of factor mobility

Sep 24 JDN 2460212

I’ve written before about how free trade has brought great benefits, but also great costs. It occurred to me this week that there is a fairly simple reason why free trade has never been as good for the world as the models would suggest: Some factors of production are harder to move than others.

To some extent this is due to policy, especially immigration policy. But it isn’t just that. There are certain inherent limitations that render some kinds of inputs more mobile than others.

Broadly speaking, there are five kinds of inputs to production: Land, labor, capital, goods, and—oft forgotten—ideas.

You can of course parse them differently: Some would subdivide different types of labor or capital, and some things are hard to categorize this way. The same product, such as an oven or a car, can be a good or capital depending on how it’s used. (Or, consider livestock: is that labor, or capital? Or perhaps it’s a good? Oddly, it’s often discussed as land, which just seems absurd.) Maybe ideas can be considered a form of capital. There is a whole literature on human capital, which I increasingly find distasteful, because it seems to imply that economists couldn’t figure out how to value human beings except by treating them as a machine or a financial asset.

But this five-way categorization is particularly useful for what I want to talk about today. Because the rate at which those things move is very different.

Ideas move instantly. It takes literally milliseconds to transmit an idea anywhere in the world. This wasn’t always true; in ancient times ideas didn’t move much faster than people, and it wasn’t until the invention of the telegraph that their transit really became instantaneous. But it is certainly true now; once this post is published, it can be read in a hundred different countries in seconds.

Goods move in hours. Air shipping can take a product just about anywhere in less than a day. Sea shipping is a bit slower, but not radically so. It’s never been easier to move goods all around the world, and this has been the great success of free trade.

Capital moves in weeks. Here it might be useful to subdivide different types of capital: It’s surely faster to move an oven or even a car (the more good-ish sort of capital) than it is to move an entire factory (capital par excellence). But all in all, we can move stuff pretty fast these days. If you want to move your factory to China or Indonesia, you can probably get it done in a matter of weeks or at most months.

Labor moves in months. This one is a bit ironic, since it is surely easier to carry a single human person—or even a hundred human people—than all the equipment necessary to run an entire factory. But moving labor isn’t just a matter of physically carrying people from one place to another. It’s not like tourism, where you just pack and go. Moving labor requires uprooting people from where they used to live and letting them settle in a new place. It takes a surprisingly long time to establish yourself in a new environment—frankly even after two years in Edinburgh I’m not sure I quite managed it. And all the additional restrictions we’ve added involving border crossings and immigration laws and visas only make it that much slower.

Land moves never. This one seems perfectly obvious, but is also often neglected. You can’t pick up a mountain, a lake, a forest, or even a corn field and carry it across the border. (Yes, eventually plate tectonics will move our land around—but that’ll be millions of years.) Basically, land stays put—and so do all the natural environments and ecosystems on that land. Land isn’t as important for production as it once was; before industrialization, we were dependent on the land for almost everything. But we absolutely still are dependent on the land! If all the topsoil in the world suddenly disappeared, the economy wouldn’t simply collapse: the human race would face extinction. Moreover, a lot of fixed infrastructure, while technically capital, is no more mobile than land. We could no more easily move the Interstate Highway System to China than we could move Denali.

So far I have said nothing particularly novel. Yeah, clearly it’s much easier to move a mathematical theorem (if such a thing can even be said to “move”) than it is to move a factory, and much easier to move a factory than to move a forest. So what?

But now let’s consider the impact this has on free trade.

Ideas can move instantly, so free trade in ideas would allow all the world to instantaneously share all ideas. This isn’t quite what happens—but in the Internet age, we’re remarkably close to it. If anything, the world’s governments seem to be doing their best to stop this from happening: One of our most strictly-enforced trade agreements, the TRIPS Agreement, is about stopping ideas from spreading too easily. And as far as I can tell, region-coding on media goes against everything free trade stands for, yet here we are. (Why, it’s almost as if these policies are more about corporate profits than they ever were about freedom!)

Goods and capital can move quickly. This is where we have really felt the biggest effects of free trade: Everything in the US says “made in China” because the capital is moved to China and then the goods are moved back to the US.

But it would honestly have made more sense to move all those workers instead. For all their obvious flaws, US institutions and US infrastructure are clearly superior to those in China. (Indeed, consider this: We may be so aware of the flaws because the US is especially transparent.) So, the most absolutely efficient way to produce all those goods would be to leave the factories in the US, and move the workers from China instead. If free trade were to achieve its greatest promises, this is the sort of thing we would be doing.


Of course that is not what we did. There are various reasons for this: A lot of the people in China would rather not have to leave. The Chinese government would not want them to leave. A lot of people in the US would not want them to come. The US government might not want them to come.

Most of these reasons are ultimately political: People don’t want to live around people who are from a different nation and culture. They don’t consider those people to be deserving of the same rights and status as those of their own country.

It may sound harsh to say it that way, but it’s clearly the truth. If the average American person valued a random Chinese person exactly the same as they valued a random other American person, our immigration policy would look radically different. US immigration is relatively permissive by world standards, and that is a great part of American success. Yet even here there is a very stark divide between the citizen and the immigrant.

There are morally and economically legitimate reasons to regulate immigration. There may even be morally and economically legitimate reasons to value those in your own nation above those in other nations (though I suspect they would not justify the degree that most people do). But the fact remains that in terms of pure efficiency, the best thing to do would obviously be to move all the people to the place where productivity is highest and do everything there.

But wouldn’t moving people there reduce the productivity? Yes. Somewhat. If you actually tried to concentrate the entire world’s population into the US, productivity in the US would surely go down. So, okay, fine; stop moving people to a more productive place when it has ceased to be more productive. What this should do is average out all the world’s labor productivity to the same level—but a much higher level than the current world average, and frankly probably quite close to its current maximum.

Once you consider that moving people and things does have real costs, maybe fully equalizing productivity wouldn’t make sense. But it would be close. The differences in productivity across countries would be small.

They are not small.

Labor productivity worldwide varies tremendously. I don’t count Ireland, because that’s Leprechaun Economics (this is really US GDP with accounting tricks, not Irish GDP). So the prize for highest productivity goes to Norway, at $100 per worker hour (#ScandinaviaIsBetter). The US is doing the best among large countries, at an impressive $73 per hour. And at the very bottom of the list, we have places like Bangladesh at $4.79 per hour and Cambodia at $3.43 per hour. So, roughly speaking, there is a 20-to-1 ratio between the US and the least productive countries, and nearly 30-to-1 between the most productive and the least.
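Taking those per-hour figures at face value, the ratios are straightforward (a small Python sketch; the numbers are just the ones quoted above):

```python
# Output per worker hour (USD), as quoted above
productivity = {
    "Norway": 100.00,
    "United States": 73.00,
    "Bangladesh": 4.79,
    "Cambodia": 3.43,
}

top, bottom = max(productivity.values()), min(productivity.values())
print(f"Norway vs. Cambodia: {top / bottom:.0f} to 1")                        # ~29 to 1
print(f"US vs. Cambodia: {productivity['United States'] / bottom:.0f} to 1")  # ~21 to 1
```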

I could believe that it’s not worth it to move US production at $73 per hour to Norway to get it up to $100 per hour. (For one thing, where would we fit it all?) But I find it far more dubious that it wouldn’t make sense to move most of Cambodia’s labor to the US. (Even all 16 million people is less than what the US added between 2010 and 2020.) Even given the fact that these Cambodian workers are less healthy and less educated than American workers, they would almost certainly be more productive on the other side of the Pacific, quite likely ten times as productive as they are now. Yet we haven’t moved them, and have no plans to.

That leaves the question of whether we will move our capital to them. We have been doing so in China, and it worked (to a point). Before that, we did it in Korea and Japan, and it worked. Cambodia will probably come along sooner or later. For now, that seems to be the best we can do.

But I still can’t shake the thought that the world is leaving trillions of dollars on the table by refusing to move people. The inequality of factor mobility seems to be a big part of the world’s inequality, period.

There should be a glut of nurses.

Jan 15 JDN 2459960

It will not be news to most of you that there is a worldwide shortage of healthcare staff, especially nurses and emergency medical technicians (EMTs). I would like you to stop and think about the utterly terrible policy failure this represents. Maybe if enough people do, we can figure out a way to fix it.

It goes without saying—yet bears repeating—that people die when you don’t have enough nurses and EMTs. Indeed, surely a large proportion of the 2.6 million (!) deaths each year from medical errors are attributable to this. It is likely that at least one million lives per year could be saved by fixing this problem worldwide. In the US alone, over 250,000 deaths per year are caused by medical errors; so we’re looking at something like 100,000 lives we could save each year by removing staffing shortages.

Precisely because these jobs have such high stakes, the mere fact that we would ever see the word “shortage” beside “nurse” or “EMT” was already clear evidence of dramatic policy failure.

This is not like other jobs. A shortage of accountants or baristas or even teachers, while a bad thing, is something that market forces can be expected to correct in time, and it wouldn’t be unreasonable to simply let them do so—meaning, let wages rise on their own until the market is restored to equilibrium. A “shortage” of stockbrokers or corporate lawyers would in fact be a boon to our civilization. But a shortage of nurses or EMTs or firefighters (yes, there are those too!) is a disaster.

Partly this is due to the COVID pandemic, which has been longer and more severe than any but the most pessimistic analysts predicted. But there were shortages of nurses before COVID. There should not have been. There should have been a massive glut.

Even if there hadn’t been a shortage of healthcare staff before the pandemic, the fact that there wasn’t a glut was already a problem.

This is what a properly-functioning healthcare policy would look like: Most nurses are bored most of the time. They are widely regarded as overpaid. People go into nursing because it’s a comfortable, easy career with very high pay and usually not very much work. Hospitals spend most of their time with half their beds empty and half of their ambulances parked while the drivers and EMTs sit around drinking coffee and watching football games.

Why? Because healthcare, especially emergency care, involves risk, and the stakes couldn’t be higher. If the number of severely sick people doubles—as in, say, a pandemic—a hospital that usually runs at 98% capacity won’t be able to deal with them. But a hospital that usually runs at 50% capacity will.

COVID exposed to the world what a careful analysis would already have shown: There was not nearly enough redundancy in our healthcare system. We had been optimizing for a narrow-minded, short-sighted notion of “efficiency” over what we really needed, which was resiliency and robustness.

I’d like to compare this to two other types of jobs.

The first is stockbrokers. Set aside for a moment the fact that most of what they do is worthless if not actively detrimental to human society. Suppose that their most adamant boosters are correct and what they do is actually really important and beneficial.

Their experience is almost like what I just said nurses ought to be. They are widely regarded (correctly) as very overpaid. There is never any shortage of them; there are people lining up to be hired. People go into the work not because they care about it or even because they are particularly good at it, but because they know it’s an easy way to make a lot of money.

The one thing that seems to be different from my image may not be as different as it seems. Stockbrokers work long hours, but nobody can really explain why. Frankly most of what they do can be—and has been—successfully automated. Since there simply isn’t that much work for them to do, my guess is that most of the time they spend “working” 60-80 hour weeks is not actually working, but sitting around pretending to work. Since most financial forecasters are outperformed by a simple diversified portfolio, the most profitable action for most stock analysts to take most of the time would be nothing.

It may also be that stockbrokers work hard at sales—trying to convince people to buy and sell for bad reasons in order to earn sales commissions. This would at least explain why they work so many hours, though it would make it even harder to believe that what they do benefits society. So if we imagine our “ideal” stockbroker who makes the world a better place, I think they mostly just use a simple algorithm and maybe adjust it every month or two. They make better returns than their peers, but spend 38 hours a week goofing off.

There is a massive glut of stockbrokers. This is what it looks like when a civilization is really optimized to be good at something.

The second is soldiers. Say what you will about them, no one can dispute that their job has stakes of life and death. A lot of people seem to think that the world would be better off without them, but that’s at best only true if everyone got rid of them; if you don’t have soldiers but other countries do, you’re going to be in big trouble. (“We’ll beat our swords into liverwurst / Down by the East Riverside; / But no one wants to be the first!”) So unless and until we can solve that mother of all coordination problems, we need to have soldiers around.

What is life like for a soldier? Well, they don’t seem overpaid; if anything, underpaid. (Maybe some of the officers are overpaid, but clearly not most of the enlisted personnel. Part of the problem there is that “pay grade” is nearly synonymous with “rank”—it’s a primate hierarchy, not a rational wage structure. Then again, so are most industries; the military just makes it more explicit.) But there do seem to be enough of them. Military officials may lament “shortages” of soldiers, but they never actually seem to want for troops to deploy when they really need them. And if a major war really did start that required all available manpower, the draft could be reinstated and then suddenly they’d have it—the authority to coerce compliance is precisely how you can avoid having a shortage while keeping your workers underpaid. (Russia’s soldier shortage is genuine—something about being utterly outclassed by your enemy’s technological superiority in an obviously pointless imperialistic war seems to hurt your recruiting numbers.)

What is life like for a typical soldier? The answer may surprise you. The overwhelming answer in surveys and interviews (which also fits with the experiences I’ve heard about from friends and family in the military) is that life as a soldier is boring: “All you do is wake up in the morning and push rubbish around camp. Bosnia was scary for about 3 months. After that it was boring. That is pretty much day to day life in the military. You are bored.”

This isn’t new, nor even an artifact of not being in any major wars: Union soldiers in the US Civil War had the same complaint. Even in World War I, a typical soldier spent only half the time on the front, and when on the front only saw combat 1/5 of the time. War is boring.

In other words, there is a massive glut of soldiers. Most of them don’t even know what to do with themselves most of the time.

This makes perfect sense. Why? Because an army needs to be resilient. And to be resilient, you must be redundant. If you only had exactly enough soldiers to deploy in a typical engagement, you’d never have enough for a really severe engagement. If on average you had enough, that means you’d spend half the time with too few. And the costs of having too few soldiers are utterly catastrophic.

This is probably an evolutionary outcome, in fact; civilizations may have tried to have “leaner” militaries that didn’t have so much redundancy, and those civilizations were conquered by other civilizations that were more profligate. (This is not to say that we couldn’t afford to cut military spending at all; it’s one thing to have the largest military in the world—I support that, actually—but quite another to have more than the next 10 combined.)

What’s the policy solution here? It’s actually pretty simple.

Pay nurses and EMTs more. A lot more. Whatever it takes to get to the point where we not only have enough, but have so many people lining up to join we don’t even know what to do with them all. If private healthcare firms won’t do it, force them to—or, all the more reason to nationalize healthcare. The stakes are far too high to leave things as they are.

Would this be expensive? Sure.

Removing the shortage of EMTs wouldn’t even be that expensive. There are only about 260,000 EMTs in the US, and they get paid the appallingly low median salary of $36,000. That means we’re currently spending only about $9 billion per year on EMTs. We could double their salaries and double their numbers for only an extra $27 billion—about 0.1% of US GDP.

Nurses would cost more. There are about 5 million nurses in the US, with an average salary of about $78,000, so we’re currently spending about $390 billion a year on nurses. We probably can’t afford to double both salary and staffing. But maybe we could increase both by 20%, costing about an extra $170 billion per year.

Altogether that would cost about $200 billion per year. To save one hundred thousand lives.

That’s $2 million per life saved, or about $40,000 per QALY. The usual estimate for the value of a statistical life is about $10 million, and the usual threshold for a cost-effective medical intervention is $50,000-$100,000 per QALY; so we’re well under both. This isn’t as efficient as buying malaria nets in Africa, but it’s more efficient than plenty of other things we’re spending on. And this isn’t even counting additional benefits of better care that go beyond lives saved.
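Here is the back-of-envelope arithmetic in one place, as a Python sketch. The headcounts and salaries are the rough figures above, and the ~50 QALYs per life saved is simply the assumption implied by comparing $2 million per life with $40,000 per QALY.

```python
# EMTs: double both salaries and headcount (4x current spending)
emt_spending = 260_000 * 36_000                 # ~$9.4 billion now
emt_extra = emt_spending * (2 * 2 - 1)          # ~$28 billion extra

# Nurses: raise both salaries and headcount by 20% (1.44x current spending)
nurse_spending = 5_000_000 * 78_000             # ~$390 billion now
nurse_extra = nurse_spending * (1.2 * 1.2 - 1)  # ~$172 billion extra

total_extra = emt_extra + nurse_extra           # ~$200 billion per year
lives_saved = 100_000
qalys_per_life = 50                             # assumption implied by the figures above

print(f"Extra spending: ${total_extra / 1e9:.0f} billion per year")
print(f"Per life saved: ${total_extra / lives_saved / 1e6:.1f} million")
print(f"Per QALY: ${total_extra / (lives_saved * qalys_per_life):,.0f}")
```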

In fact if we nationalized US healthcare we could get more than these amounts in savings from not wasting our money on profits for insurance and drug companies—simply making the US healthcare system as cost-effective as Canada’s would save $6,000 per American per year, or a whopping $1.9 trillion. At that point we could double the number of nurses and their salaries and still be spending less.

No, it’s not because nurses and doctors are paid much less in Canada than the US. That’s true in some countries, but not Canada. The median salary for nurses in Canada is about $95,500 CAD, which is $71,000 US at current exchange rates. Doctors in Canada can make anywhere from $80,000 to $400,000 CAD, which is $60,000 to $300,000 US. Nor are healthcare outcomes in Canada worse than the US; if anything, they’re better, as Canadians live an average of four years longer than Americans. No, the radical difference in cost—a factor of 2 to 1—between Canada and the US comes from privatization. Privatization is supposed to make things more efficient and lower costs, but it has absolutely not done that in US healthcare.

And if our choice is between spending more money and letting hundreds of thousands or millions of people die every year, that’s no choice at all.

Why do poor people dislike inflation?

Jun 5 JDN 2459736

The United States and United Kingdom are both very unaccustomed to inflation. Neither has seen double-digit inflation since the 1980s.

Here’s US inflation since 1990:

And here is the same graph for the UK:

While a return to double-digits remains possible, at this point it likely won’t happen, and if it does, it will occur only briefly.

This is no doubt a major reason why the dollar and the pound are widely used as reserve currencies (especially the dollar); that stability, in turn, is likely due to the fact that they are managed by the world’s most competent central banks. Brexit would almost have made sense if the UK had been pressured to join the Euro; but they weren’t, because everyone knew the pound was better managed.

The Euro also doesn’t have much inflation, but if anything they err on the side of too low, mainly because Germany appears to believe that inflation is literally Hitler. In fact, the rise of the Nazis didn’t have much to do with the Weimar hyperinflation. The Great Depression was by far a greater factor—unemployment is much, much worse than inflation. (By the way, it’s weird that you can put that graph back to the 1980s. It, uh, wasn’t the Euro then. Euros didn’t start circulating until 1999. Is that an aggregate of the franc and the deutsche mark and whatever else? The Euro itself has never had double-digit inflation—ever.)

But it’s always a little surreal for me to see how panicked people in the US and UK get when our inflation rises a couple of percentage points. There seems to be an entire subgenre of economics news that basically consists of rich people saying the sky is falling because inflation has risen—or will, or may rise—by two points. (Hey, anybody got any ideas how we can get them to panic like this over rises in sea level or aggregate temperature?)

Compare this to some other countries that have real inflation: In Brazil, 10% inflation is a pretty typical year. In Argentina, 10% is a really good year—they’re currently pushing 60%. Kenya’s inflation is pretty well under control now, but it went over 30% during the crisis in 2008. Botswana was doing a nice job of bringing down their inflation until the COVID pandemic threw them out of whack, and now they’re hitting double-digits too. And of course there’s always Zimbabwe, which seemed to look at Weimar Germany and think, “We can beat that.” (80,000,000,000% in one month!? Any time you find yourself talking about billions of percent, something has gone terribly, terribly wrong.)

Hyperinflation is a real problem—it isn’t what put Hitler into power, but it has led to real crises in Germany, Zimbabwe, and elsewhere. Once you start getting over 100% per year, and especially when it starts rapidly accelerating, that’s a genuine crisis. Moreover, even though they clearly don’t constitute hyperinflation, I can see why people might legitimately worry about price increases of 20% or 30% per year. (Let alone 60% like Argentina is dealing with right now.) But why is going from 2% to 6% any cause for alarm? Yet alarmed we seem to be.

I can even understand why rich people would be upset about inflation (though the magnitude of their concern does still seem disproportionate). Inflation erodes the value of financial assets, because most bonds, options, etc. are denominated in nominal, not inflation-adjusted terms. (Though there are such things as inflation-indexed bonds.) So high inflation can in fact make rich people slightly less rich.

But why in the world are so many poor people upset about inflation?

Inflation doesn’t just erode the value of financial assets; it also erodes the value of financial debts. And most poor people have more debts than they have assets—indeed, it’s not uncommon for poor people to have substantial debt and no financial assets to speak of (what little wealth they have being non-financial, e.g. a car or a home). Thus, their net wealth position improves as prices rise.

The interest rate response can compensate for this to some extent, but most people’s debts are fixed-rate. Moreover, if it’s the higher interest rates you’re worried about, you should want the Federal Reserve and the Bank of England not to fight inflation too hard, because the way they fight it is chiefly by raising interest rates.

In surveys, almost everyone thinks that inflation is very bad: 92% think that controlling inflation should be a high priority, and 90% think that if inflation gets too high, something very bad will happen. This is greater agreement among Americans than is found for statements like “I like apple pie” or “kittens are nice”, and comparable to “fair elections are important”!

I admit, I question the survey design here: I would answer ‘yes’ to both questions if we’re talking about a theoretical 10,000% hyperinflation, but ‘no’ if we’re talking about a realistic 10% inflation. So I would like to see, but could not find, a survey asking people what level of inflation is sufficient cause for concern. But since most of these people seemed concerned about actual, realistic inflation (85% reported anger at seeing actual, higher prices), it still suggests a lot of strong feelings that even mild inflation is bad.

So it does seem to be the case that a lot of poor and middle-class people really strongly dislike inflation even in the actual, mild levels in which it occurs in the US and UK.

The main fear seems to be that inflation will erode people’s purchasing power—that as the price of gasoline and groceries rise, people won’t be able to eat as well or drive as much. And that, indeed, would be a real loss of utility worth worrying about.

But in fact this makes very little sense: Most forms of income—particularly labor income, which is the only real income for some 80%-90% of the population—actually increase with inflation, more or less one-to-one. Yes, there’s some delay—you won’t get your annual cost-of-living raise immediately, but several months down the road. But this could have at most a small effect on your real consumption.

To see this, suppose that inflation has risen from 2% to 6%. (Really, you need not suppose; it has.) Now consider your cost-of-living raise, which nearly everyone gets. It will presumably rise the same way: So if it was 3% before, it will now be 7%. Now consider how much your purchasing power is affected over the course of the year.

For concreteness, let’s say your initial income was $3,000 per month at the start of the year (a fairly typical amount for a middle-class American, indeed almost exactly the median personal income). Let’s compare the case of no inflation with a 1% raise, 2% inflation with a 3% raise, and 5% inflation with a 6% raise.

If there was no inflation, your real income would remain simply $3,000 per month, until the end of the year when it would become $3,030 per month. That’s the baseline to compare against.

If inflation is 2%, your real income would gradually fall, by about 0.16% per month, before being bumped up 3% at the end of the year. So in January you’d have $3,000, in February $2,995, in March $2,990. Come December, your real income has fallen to $2,941. But then next January it will immediately be bumped up 3% to $3,029, almost the same as it would have been with no inflation at all. The total lost income over the entire year is about $380, or about 1% of your total income.

If inflation instead rises to 6%, your real income will fall by 0.49% per month, reaching a minimum of $2,830 in December before being bumped back up to $3,028 next January. Your total loss for the whole year will be about $1110, or about 3% of your total income.

Indeed, it’s a pretty good heuristic to say that for an inflation rate of x% with annual cost-of-living raises, your loss of real income relative to having no inflation at all is about (x/2)%. (This breaks down for really high levels of inflation, at which point it becomes a wild over-estimate, since even 200% inflation doesn’t make your real income go to zero.)
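If you want to check those numbers, here is a small Python sketch. It assumes, as above, a $3,000 nominal monthly paycheck that is only adjusted once, at the end of the year, while prices rise smoothly month by month.

```python
def real_erosion(amount, inflation, months=12):
    """Total real value lost over a year when a fixed nominal amount
    is eroded month by month at the given annual inflation rate."""
    r = (1 + inflation) ** (1 / 12)   # monthly price-level growth factor
    return sum(amount - amount / r**k for k in range(1, months + 1))

for inflation in (0.02, 0.06):
    loss = real_erosion(3_000, inflation)
    print(f"{inflation:.0%} inflation: ~${loss:,.0f} lost, "
          f"{loss / (3_000 * 12):.1%} of annual income")
# 2% inflation: ~$385 lost, 1.1% of annual income
# 6% inflation: ~$1,114 lost, 3.1% of annual income -- i.e., roughly (x/2)%
```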

This isn’t nothing, of course. You’d feel it. Going from 2% to 6% inflation at an income of $3000 per month is like losing $700 over the course of a year, which could be a month of groceries for a family of four. (Not that anyone can really raise a family of four on a single middle-class income these days. When did The Simpsons begin to seem aspirational?)

But this isn’t the whole story. Suppose that this same family of four had a mortgage payment of $1000 per month; that is also decreasing in real value by the same proportion. And let’s assume it’s a fixed-rate mortgage, as most are, so we don’t have to factor in any changes in interest rates.

With no inflation, their mortgage payment remains $1000. It’s 33.3% of their income this year, and it will be 33.0% of their income next year after they get that 1% raise.

With 2% inflation, their mortgage payment will also fall by 0.16% per month; $998 in February, $996 in March, and so on, down to $980 in December. This amounts to an increase in real income of about $130—taking away a third of the loss that was introduced by the inflation.

With 6% inflation, their mortgage payment will also fall by 0.49% per month; $995 in February, $990 in March, and so on, until it’s only $943 in December. This amounts to an increase in real income of over $370—again taking away a third of the loss.

Indeed, it’s no coincidence that it’s one third; the proportion of lost real income you’ll get back by cheaper mortgage payments is precisely the proportion of your income that was spent on mortgage payments at the start—so if, like too many Americans, they are paying more than a third of their income on mortgage, their real loss of income from inflation will be even lower.
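And the mortgage offset is the same arithmetic applied to the fixed $1,000 payment instead of the paycheck. Continuing the sketch above (the helper is repeated here so this snippet runs on its own):

```python
def real_erosion(amount, inflation, months=12):
    """Total real value lost over a year by a fixed nominal amount."""
    r = (1 + inflation) ** (1 / 12)
    return sum(amount - amount / r**k for k in range(1, months + 1))

for inflation in (0.02, 0.06):
    loss = real_erosion(3_000, inflation)   # eroding paycheck, as before
    gain = real_erosion(1_000, inflation)   # cheaper real mortgage payment
    print(f"{inflation:.0%}: lose ~${loss:,.0f}, regain ~${gain:,.0f} "
          f"({gain / loss:.0%} of the loss)")
# 2%: lose ~$385, regain ~$128 (33% of the loss)
# 6%: lose ~$1,114, regain ~$371 (33% of the loss)
```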

And what if they are renting instead? They’re probably on an annual lease, so that payment won’t increase in nominal terms either—and hence will decrease in real terms, in just the same way as a mortgage payment. Likewise car payments, credit card payments, any debt that has a fixed interest rate. If they’re still paying back student loans, their financial situation is almost certainly improved by inflation.

This means that the real loss from an increase of inflation from 2% to 6% is something like 1.5% of total income, or about $500 for a typical American adult. That’s clearly not nearly as bad as a similar increase in unemployment, which would translate one-to-one into lost income on average; moreover, this loss would be concentrated among people who lost their jobs, so it’s actually worse than that once you account for risk aversion. It’s clearly better to lose 1% of your income than to have a 1% chance of losing nearly all your income—and inflation is the former while unemployment is the latter.

Indeed, the only reason you lost purchasing power at all was that your cost-of-living increases didn’t occur often enough. If instead you had a labor contract that instituted cost-of-living raises every month, or even every paycheck, instead of every year, you would get all the benefits of a cheaper mortgage and virtually none of the costs of a weaker paycheck. Convince your employer to make this adjustment, and you will actually benefit from higher inflation.

So if poor and middle-class people are upset about eroding purchasing power, they should be mad at their employers for not implementing more frequent cost-of-living adjustments; the inflation itself really isn’t the problem.