Keynesian economics: It works, bitches

Jan 23 JDN 2459613

(I couldn’t resist; for the uninitiated, my slightly off-color title is referencing this XKCD comic.)

When faced with a bad recession, Keynesian economics prescribes the following response: Expand the money supply. Cut interest rates. Increase government spending, but decrease taxes. The bigger the recession, the more we should do all these things—especially increasing spending, because interest rates will often get pushed to zero, creating what’s called a liquidity trap.

Take a look at these two FRED graphs, both since the 1950s.
The first is interest rates (specifically the Fed funds effective rate):

The second is the US federal deficit as a proportion of GDP:

Interest rates were pushed to zero right after the 2008 recession, and didn’t start coming back up until 2016. Then as soon as we hit the COVID recession, they were dropped back to zero.

The deficit looks even more remarkable. At the 2009 trough of the recession, the deficit was large, nearly 10% of GDP; but then it was quickly reduced back to normal, to between 2% and 4% of GDP. And that initial surge is as much explained by GDP and tax receipts falling as by spending increasing.

Yet in 2020 we saw something quite different: The deficit became huge. Literally off the chart, nearly 15% of GDP. A staggering $2.8 trillion. We’ve not had a deficit that large as a proportion of GDP since WW2. We’ve never had a deficit that large in real billions of dollars.

Deficit hawks came out of the woodwork to complain about this, and for once I was worried they might actually be right. Their most credible complaint was that it would trigger inflation, and they weren’t wrong about that: Inflation became a serious concern for the first time in decades.

But these recessions were very large, and when you actually run the numbers, this deficit was the correct magnitude for what Keynesian models tell us to do. I wouldn’t have thought our government had the will and courage to actually do it, but I am very glad to have been wrong about that, for one very simple reason:

It worked.

In 2009, we didn’t actually fix the recession. We blunted it; we stopped it from getting worse. But we never really restored GDP; we just let it get back to its normal growth rate after it had plummeted, and eventually caught back up to where we had been.

2021 went completely differently. With a much larger deficit, we fixed this recession. We didn’t just stop the fall; we reversed it. We aren’t just back to normal growth rates—we are back to the same level of GDP, as if the recession had never happened.

This contrast is quite obvious from a graph of US GDP:

In 2008 and 2009, GDP slumps downward, and then just… resumes its previous trend. It’s like we didn’t do anything to fix the recession, and just allowed the overall strong growth of our economy to carry us through.

The pattern in 2020 is completely different. GDP plummets downward—much further, much faster than in the Great Recession. But then it immediately surges back upward. By the end of 2021, it was above its pre-recession level, and looks to be back on its growth trend. With a recession this deep, if we’d just waited like we did last time, it would have taken four or five years to reach this point—we actually did it in less than one.

I wrote earlier about how this is a weird recession, one that actually seems to fit Real Business Cycle theory. Well, it was weird in another way as well: We fixed it. We actually had the courage to do what Keynes told us to do in 1936, and it worked exactly as it was supposed to.

Indeed, to go from unemployment of almost 15% in April of 2020 to under 4% in December of 2021 is fast enough that I feel like I’m getting whiplash. We have never seen unemployment drop that fast. Krugman is fond of comparing this to “morning in America”, but that’s really an understatement. Pitch black one moment, shining bright the next: this isn’t a sunrise, it’s pulling open a blackout curtain.

And all of this while the pandemic is still going on! The omicron variant has brought case numbers to their highest levels ever, though fortunately death rates so far are still below last year’s peak.

I’m not sure I have the words to express what a staggering achievement of economic policy it is to so rapidly and totally repair the economic damage caused by a pandemic while that pandemic is still happening. It’s the equivalent of repairing an airplane that is not only still in flight, but still taking anti-aircraft fire.

Why, it seems that Keynes fellow may have been onto something, eh?

Reversals in progress against poverty

Jan 16 JDN 2459606

I don’t need to tell you that the COVID pandemic has been very bad for the world. Yet perhaps the worst outcome of the pandemic is one that most people don’t recognize: It has reversed years of progress against global poverty.

Estimates of the number of people who will be thrown into extreme poverty as a result of the pandemic are consistently around 100 million, though some forecasts have predicted this will rise to 150 million, or, in the most pessimistic scenarios, even as high as 500 million.

Pre-COVID projections showed the global poverty rate falling steadily from 8.4% in 2019 to 6.3% by 2030. But COVID resulted in the first upward surge in global poverty in decades, and updated models now suggest that the global poverty rate in 2030 will be as high as 7.0%. That difference is 0.7% of a forecasted population of 8.5 billion—so that’s a difference of 59 million people.
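The arithmetic behind that 59 million figure is simple enough to check directly; a minimal sketch (variable names are mine, figures as quoted above):

```python
# Back-of-the-envelope check of the "59 million people" figure.
pre_covid_rate = 0.063    # projected 2030 global poverty rate before COVID
post_covid_rate = 0.070   # updated post-COVID projection for 2030
population_2030 = 8.5e9   # forecasted world population in 2030

extra_poor = (post_covid_rate - pre_covid_rate) * population_2030
print(f"{extra_poor / 1e6:.1f} million additional people in extreme poverty")
```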

This is a terrible reversal of fortune, and a global tragedy. Tens or perhaps even hundreds of millions of people will suffer the pain of poverty because of this global pandemic and the numerous missteps by many of the world’s governments—not least the United States—in response to it.

Yet it’s important to keep in mind that this is a short-term reversal in a long-term trend toward reduced poverty. Yes, the most optimistic predictions are turning out to be wrong—but the general pattern of dramatic reductions in global poverty over the late 20th and early 21st century is still holding up.

That post-COVID estimate of a global poverty rate of 7.0% needs to be compared against the fact that as recently as 1980 the global poverty rate at the same income level (adjusted for inflation and purchasing power, of course) was a whopping 44%.

This pattern makes me feel deeply ambivalent about the effects of globalization on inequality. While it now seems clear that globalization has exacerbated inequality within First World countries—and triggered a terrible backlash of right-wing populism as a result—it also seems clear that globalization was a major reason for the dramatic reductions in global poverty in the past few decades.

I think the best answer I’ve been able to come up with is that globalization is overall a good thing, and we must continue it—but we also need to be much more mindful of its costs, and we must make policy that mitigates those costs. Expanded trade has winners and losers, and we should be taxing the winners to compensate the losers. To make good economic policy, it simply isn’t enough to increase aggregate GDP; you actually have to make life better for everyone (or at least as many people as you can).

Unfortunately, knowing what policies to make is only half the battle. We must actually implement those policies, which means winning elections, which means restoring the public’s faith in the authority of economic experts.

Some of the people voting for Donald Trump were just what Hillary Clinton correctly (if tone-deafly) referred to as “deplorables”: racists, misogynists, xenophobes. But I think that many others weren’t voting for Trump but against Clinton; they weren’t embracing far-right populism but rather rejecting center-left technocratic globalization. They were tired of being told what to do by experts who didn’t seem to care about them or their interests.

And the thing is, they were right about that. Not about voting for Trump—that’s unforgivable—but about the fact that expert elites had been ignoring their interests and needed a wake-up call. There were a hundred better ways of making that wake-up call that didn’t involve putting a narcissistic, incompetent maniac in charge of the world’s largest economy, military and nuclear arsenal, and millions of people should be ashamed of themselves for not taking those better options. Yet the fact remains: The wake-up call was necessary, and we should be responding to it.

We expert elites (I think I can officially carry that card, now that I have a PhD and a faculty position at a leading research university) need to do a much better job of two things: First, articulating the case for our policy recommendations in a way that ordinary people can understand, so that they feel justified and not simply rammed down people’s throats; and second, recognizing the costs and downsides of these policies and taking action to mitigate them whenever possible.

For instance: Yes, we need to destroy all the coal jobs. They are killing workers and the planet. Coal companies need to be transitioned to new industries or else shut down. This is not optional. It must be done. But we also need to explain to those coal miners why it’s necessary to move on from coal to solar and nuclear, and we need to be implementing various policies to help those workers move on to better, safer jobs that pay as well and don’t involve filling their lungs with soot and the atmosphere with carbon dioxide. We need to articulate, emphasize—and loudly repeat—that this isn’t about hurting coal miners to help everyone else, but about helping everyone, coal miners included, and that if anyone gets hurt it will only be a handful of psychopathic billionaires who already have more money than any human being could possibly need or deserve.

Another example: We cannot stop trading with India and China. Hundreds of millions of innocent people would suddenly be thrown out of work and into poverty if we did. We need the products they make for us, and they need the money we pay for those products. But we must also acknowledge that trading with poor countries does put downward pressure on wages back home, and take action to help First World workers who are now forced to compete with global labor markets. Maybe this takes the form of better unemployment benefits, or job-matching programs, or government-sponsored job training. But we cannot simply shrug and let people lose their jobs and their homes because the factories they worked in were moved to China.

Strange times for the labor market

Jan 9 JDN 2459589

Labor markets have been behaving quite strangely lately, due to COVID and its consequences. As I said in an earlier post, the COVID recession was the one recession I can think of that actually seemed to follow Real Business Cycle theory—where it was labor supply, not demand, that drove employment.

I dare say that for the first time in decades, the US government actually followed Keynesian policy. US federal government spending surged from $4.8 trillion to $6.8 trillion in a single year:

That is a staggering amount of additional spending; I don’t think any country in history has ever increased its spending by that much in a single year, even inflation-adjusted. Yet in response to a recession that severe, this is exactly what Keynesian models prescribed—and for once, we listened. Instead of balking at the big numbers, we went ahead and spent the money.

And apparently it worked, because unemployment spiked to the worst levels seen since the Great Depression, then suddenly plummeted back to normal almost immediately:

Nor was this just the result of people giving up on finding work. U-6, the broader unemployment measure that includes people who are underemployed or have given up looking for work, shows the same unprecedented pattern:

The oddest part is that people are now quitting their jobs at the highest rate seen in over 20 years:


This phenomenon has been dubbed the Great Resignation, and while its causes are still unclear, it is clearly the most important change in the labor market in decades.

In a previous post I hypothesized that this surge in strikes and quits was a coordination effect: The sudden, consistent shock to all labor markets at once gave people a focal point to coordinate their decision to strike.

But it’s also quite possible that it was the Keynesian stimulus that did it: The relief payments made it safe for people to leave jobs they had long hated, and they leapt at the opportunity.

When that huge surge in government spending was proposed, the usual voices came out of the woodwork to warn of terrible inflation. It’s true, inflation has been higher lately than usual, nearly 7% last year. But we still haven’t hit the double-digit inflation rates we had in the late 1970s and early 1980s:

Indeed, most of the inflation we’ve had can be explained by the shortages created by the supply chain crisis, along with a very interesting substitution effect created by the pandemic. As services shut down, people bought goods instead: Home gyms instead of gym memberships, wifi upgrades instead of restaurant meals.

As a result, the price of durable goods actually rose, when it had previously been falling for decades. That broader pattern is worth emphasizing: As technology advances, services like healthcare and education get more expensive, durable goods like phones and washing machines get cheaper, and nondurable goods like food and gasoline fluctuate but ultimately stay about the same. But in the last year or so, durable goods have gotten more expensive too, because people want to buy more while supply chains are able to deliver less.

This suggests that the inflation we are seeing is likely to go away in a few years, once the pandemic is better under control (or else reduced to something like a new influenza: a virus that is always there, but that we learn to live with).

But I don’t think the effects on the labor market will be so transitory. The strikes and quits we’ve been seeing lately really are at a historic level, and they are likely to have a long-lasting effect on how work is organized. Employers are panicking about having to raise wages and whining about how “no one wants to work” (meaning, of course, no one wants to work at the current wage and conditions on offer). The correct response is the one from Goodfellas [language warning].

For the first time in decades, there are actually more job vacancies than unemployed workers:

This means that the tables have turned. The bargaining power is suddenly in the hands of workers again, after being in the hands of employers for as long as I’ve been alive. Of course it’s impossible to know whether some other shock could yield another reversal; but for now, it looks like we are finally on the verge of major changes in how labor markets operate—and I for one think it’s about time.

Reasons for optimism in 2022

Jan 2 JDN 2459582

When this post goes live, we will have begun the year 2022.

That still sounds futuristic, somehow. We’ve been in the 21st century long enough that most of my students were born in it and nearly all of them are old enough to drink (to be fair, it’s the UK, so “old enough to drink” only means 18). Yet “the year 2022” still seems like it belongs in science fiction, and not on our wall calendars.

2020 and 2021 were quite bad years. Death rates and poverty rates surged around the world. Almost all of that was directly or indirectly due to COVID.

Yet there are two things we should keep in perspective.

First, those death rates and poverty rates surged to what we used to consider normal 50 years ago. These are not uniquely bad times; indeed, they are still better than most of human history.

Second, there are many reasons to think that 2022—or perhaps a bit later than that, 2025 or 2030—will be better.

The Omicron variant is highly contagious, but so far does not appear to be as deadly as previous variants. COVID seems to be evolving to be more like influenza: Catching it will be virtually inevitable, but dying from it will be very rare.

Things are also looking quite good on the climate change front: Renewable energy production is growing at breathtaking speed and is now cheaper than almost every other form of energy. It’s awful that we panicked and locked down nuclear energy for the last 50 years, but at this point we may no longer need it: Solar and wind are just that good now.

Battery technology is also rapidly improving, giving us denser, cheaper, more stable batteries that may soon allow us to solve the intermittency problem: the wind may not always blow and the sun may not always shine, but if you have big enough batteries you don’t need them to. (You can get a really good feel for how much difference good batteries make in energy production by playing Factorio, or, more whimsically, Mewnbase.)

If we do go back to nuclear energy, it may not be fission anymore, but fusion. Now that we have nearly reached that vital milestone of break-even, investment in fusion technology has rapidly increased.


Fusion has basically all of the benefits of fission with none of the drawbacks. Unlike renewables, it can produce enormous amounts of energy in a way that can be easily scaled and controlled independently of weather conditions. Unlike fission, it requires no exotic nuclear fuels (deuterium can be readily attained from water), and produces no long-lived radioactive waste. (Indeed, development is ongoing of methods that could use fusion products to reduce the waste from fission reactors, making the effective rate of nuclear waste production for fusion negative.) Like both renewables and fission, it produces no carbon emissions other than those required to build the facility (mainly due to concrete).

Of course, technology is only half the problem: we still need substantial policy changes to get carbon emissions down. We’ve already dragged our feet for decades too long, and we will pay the price for that. But anyone saying that climate change is an inevitable catastrophe hasn’t been paying attention to recent developments in solar panels.

Technological development in general seems to be speeding up lately, after having stalled quite a bit in the early 2000s. Moore’s Law may be leveling off, but the technological frontier may simply be moving away from digital computing power and onto other things, such as biotechnology.

Star Trek told us that we’d have prototype warp drives by the 2060s but we wouldn’t have bionic implants to cure blindness until the 2300s. They seem to have gotten it backwards: We may never have warp drive, but we’ve got those bionic implants today.

Neural interfaces are allowing paralyzed people to move, speak, and now even write.

After decades of failed promises, gene therapy is finally becoming useful in treating real human diseases. CRISPR changes everything.

We are also entering a new era of space travel, thanks largely to SpaceX and their remarkable reusable rockets. The payload cost to LEO is a standard measure of the cost of space travel, which describes the cost of carrying a certain mass of cargo up to low Earth orbit. By this measure, costs have declined from nearly $20,000 per kg to only $1,500 per kg since the 1960s. Elon Musk claims that he can reduce the cost to as low as $10 per kg. I’m skeptical, to say the least—but even dropping it to $500 or $200 would be a dramatic improvement and open up many new options for space exploration and even colonization.

To put this in perspective, the cost of carrying a human being to the International Space Station (about 100 kg to LEO) has fallen from $2 million to $150,000. A further decrease to $200 per kg would lower that to $20,000, opening the possibility of space tourism; $20,000 might be something even upper-middle-class people could do as a once-in-a-lifetime vacation. If Musk is really right that he can drop it all the way to $10 per kg, the cost to carry a person to the ISS would be only $1000—something middle-class people could do regularly. (“Should we do Paris for our anniversary this year, or the ISS?”) Indeed, a cost that low would open the possibility of space-based shipping—for when you absolutely must have the product delivered from China to California in the next 2 hours.
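The conversions in that paragraph are just cost-per-kilogram times payload mass; a quick sketch (the function name is mine, the 100 kg per passenger is the rough estimate used above):

```python
def trip_cost(cost_per_kg_to_leo, payload_kg=100):
    """Rough cost of lifting one passenger (about 100 kg of payload,
    as assumed above) to low Earth orbit."""
    return cost_per_kg_to_leo * payload_kg

# The four price points discussed: 1960s, today, optimistic, Musk's claim.
for per_kg in (20_000, 1_500, 200, 10):
    print(f"${per_kg:>6,}/kg  ->  ${trip_cost(per_kg):>9,.0f} per person")
```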

Another way to put this in perspective is to convert these prices per mass in terms of those of commodities, such as precious metals. $20,000 per kg is nearly the price of solid platinum. $500 per kg is about the price of sterling silver. $10 per kg is roughly the price of copper.

The reasons for optimism are not purely technological. There has also been significant social progress just in the last few years, with major milestones on LGBT rights being made around the world in 2020 and 2021. Same-sex marriage is now legally recognized over nearly the entire Western Hemisphere.

None of that changes the fact that we are still in a global pandemic which seems to be increasingly out of control. I can’t tell you whether 2022 will be better than 2021, or just more of the same—or perhaps even worse.

But while these times are hard, overall the world is still making progress.

A very Omicron Christmas

Dec 26 JDN 2459575

Remember back in spring of 2020 when we thought that this pandemic would quickly get under control and life would go back to normal? How naive we were.

The newest Omicron strain seems to be the most infectious yet—even people who are fully vaccinated are catching it. The good news is that it also seems to be less deadly than most of the earlier strains. COVID is evolving to spread itself better, but not be as harmful to us—much as influenza and cold viruses evolved. While weekly cases are near an all-time peak, weekly deaths are well below the worst they had been.

Indeed, at this point, it’s looking like COVID will more or less be with us forever. In the most likely scenario, the virus will continue to evolve to be more infectious but less lethal, and then we will end up with another influenza on our hands: A virus that can’t be eradicated, gets huge numbers of people sick, but only kills a relatively small number. At some point we will decide that the risk of getting sick is low enough that it isn’t worth forcing people to work remotely or maybe even wear masks. And we’ll relax various restrictions and get back to normal with this new virus a regular part of our lives.


Merry Christmas?

But it’s not all bad news. The vaccination campaign has been staggeringly successful—now the total number of vaccine doses exceeds the world population, so the average human being has been vaccinated for COVID at least once.

And while 5.3 million deaths due to the virus over the last two years sounds terrible, it should be compared against the baseline of well over 100 million deaths from all causes during that same interval, and the fact that worldwide death rates have been rapidly declining. Had COVID not happened, 2021 would be like 2019, which had nearly the lowest death rate on record, at 7,579 deaths per million people per year. As it is, we’re looking at something more like 10,000 deaths per million people per year (1%), or roughly what we considered normal way back in the long-ago times of… the 1980s. To get even as bad as things were in the 1950s, we would have to double our current death rate.

Indeed, there’s something quite remarkable about the death rate we had in 2019, before the pandemic hit: 7,579 per million is only 0.76%. A being with a constant annual death rate of 0.76% would have a life expectancy of over 130 years. This very low death rate is partly due to demographics: The current world population is unusually young and healthy because the world recently went through huge surges in population growth. Due to demographic changes the UN forecasts that our death rate will start to climb again as fertility falls and the average age increases; but they are still predicting it will stabilize at about 11,200 per million per year, which would be a life expectancy of 90. And that estimate could well be too pessimistic, if medical technology continues advancing at anything like its current rate.
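The life-expectancy figures here follow from a standard property of a constant hazard rate: if the chance of dying in any given year is a constant p, the expected lifespan is 1/p years. A quick check (the function name is mine):

```python
def life_expectancy_years(deaths_per_million_per_year):
    """Expected lifespan of a hypothetical being whose annual death rate
    is constant: simply the reciprocal of that rate."""
    annual_rate = deaths_per_million_per_year / 1_000_000
    return 1 / annual_rate

print(life_expectancy_years(7_579))   # 2019 world rate: about 132 years
print(life_expectancy_years(11_200))  # UN stabilized forecast: about 89 years
```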

We call it Christmas, but it’s really a syncretized amalgamation of holidays: Yule, Saturnalia, various Solstice celebrations. (Indeed, there’s no particular reason to think Jesus was even born in December.) Most Northern-hemisphere civilizations have some sort of Solstice holiday, and we’ve greedily co-opted traditions from most of them. The common theme really seems to be this:

Now it is dark, but band together and have hope, for the light shall return.

Diurnal beings in northerly latitudes instinctively fear the winter, when it becomes dark and cold and life becomes more hazardous—but we have learned to overcome this fear together, and we remind ourselves that light and warmth will return by ritual celebrations.

The last two years have made those celebrations particularly difficult, as we have needed to isolate ourselves in order to keep ourselves and others safe. Humans are fundamentally social at a level most people—even most scientists—do not seem to grasp: We need contact with other human beings as deeply and vitally as we need food or sleep.

The Internet has allowed us to get some level of social contact while isolated, which has been a tremendous boon; but I think many of us underestimated how much we would miss real face-to-face contact. I think much of the vague sense of malaise we’ve all been feeling, even when we aren’t sick and even when we’ve largely adapted our daily routine to working remotely, comes from this: We just aren’t getting the chance to see people in person nearly as often as we want—as often as we hadn’t even realized we needed.

So, if you do travel to visit family this holiday season, I understand your need to do so. But be careful. Get vaccinated—three times, if you can. Don’t have any contact with others who are at high risk if you do have any reason to think you’re infected.

Let’s hope next Christmas is better.

The economics of interstellar travel

Dec 19 JDN 2459568

Since these are rather dark times—the Omicron strain means that COVID is still very much with us, after nearly two years—I thought we could all use something a bit more light-hearted and optimistic.

In 1978 Paul Krugman wrote a paper entitled “The Theory of Interstellar Trade”, which has what is surely one of the greatest abstracts of all time:

This paper extends interplanetary trade theory to an interstellar setting. It is chiefly concerned with the following question: how should interest charges on goods in transit be computed when the goods travel at close to the speed of light? This is a problem because the time taken in transit will appear less to an observer travelling with the goods than to a stationary observer. A solution is derived from economic theory, and two useless but true theorems are proved.

The rest of the paper is equally delightful, and well worth a read. Of particular note are these two sentences, which should give you a feel: “The rest of the paper is, will be, or has been, depending on the reader’s inertial frame, divided into three sections.” and “This extension is left as an exercise for interested readers because the author does not understand general relativity, and therefore cannot do it himself.”

As someone with training in both economics and relativistic physics, I can tell you that Krugman’s analysis is entirely valid, given its assumptions. (Really, this is unsurprising: He’s a Nobel Laureate. One could imagine he got his physics wrong, but he didn’t—and of course he didn’t get his economics wrong.) But, like much high-falutin economic theory, it relies upon assumptions that are unlikely to be true.

Set aside the assumptions of perfect competition and unlimited arbitrage that yield Krugman’s key result of equalized interest rates. These are indeed implausible, but they’re also so standard in economics as to be pedestrian.

No, what really concerns me is this: Why bother with interstellar trade at all?

Don’t get me wrong: I’m all in favor of interstellar travel and interstellar colonization. I want humanity to expand and explore the galaxy (or rather, I want that to be done by whatever humanity becomes, likely some kind of cybernetically and biogenetically enhanced transhumans in endless varieties we can scarcely imagine). But once we’ve gone through all the effort to spread ourselves to distant stars, it’s not clear to me that we’d ever have much reason to trade across interstellar distances.

If we ever manage to invent efficient, reliable, affordable faster-than-light (FTL) travel à la Star Trek, sure. In that case, there’s no fundamental difference between interstellar trade and any other kind of trade. But that’s not what Krugman’s paper is about, as its key theorems are actually about interest rates and prices in different inertial reference frames, which is only relevant if you’re limited to relativistic—that is, slower-than-light—velocities.

Moreover, as far as we can tell, that’s impossible. Yes, there are still some vague slivers of hope left with the Alcubierre Drive, wormholes, etc.; but by far the most likely scenario is that FTL travel is simply impossible and always will be.

FTL communication is much more plausible, as it merely requires the exploitation of nonlocal quantum entanglement outside quantum equilibrium; if the Bohm Interpretation is correct (as I strongly believe it is), then this is a technological problem rather than a theoretical one. At best this might one day lead to some form of nonlocal teleportation—but definitely not FTL starships. Since our souls are made of software, sending information can, in principle, send a person; but we almost surely won’t be sending mass faster than light.

So let’s assume, as Krugman did, that we will be limited to travel close to, but less than, the speed of light. (I recently picked up a term for this from Ursula K. Le Guin: “NAFAL”, “nearly-as-fast-as-light”.)

This means that any transfer of material from one star system to another will take, at minimum, years. It could even be decades or centuries, depending on how close to the speed of light we are able to get.

Assuming we have abundant antimatter or some similarly extremely energy-dense propulsion, it would be reasonable to expect that we could build interstellar spacecraft capable of accelerating at approximately Earth gravity (i.e. 1 g) for several years at a time. This would be quite comfortable for the crew of the ship—it would just feel like standing on Earth. And it turns out that this is sufficient to attain velocities quite close to the speed of light over the distances to nearby stars.

I will spare you the complicated derivation, but there are well-known equations which allow us to convert from proper acceleration (the acceleration felt on a spacecraft, i.e. 1 g in this case) to maximum velocity and total travel time, and they imply that a vessel which was constantly accelerating at 1 g (speeding up for the first half, then slowing down for the second half) could reach most nearby stars within about 50 to 100 years Earth time, or as little as 10 to 20 years ship time.
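Those well-known equations are the relativistic rocket equations; here is a sketch in Python (my own variable and function names), working in years and light-years so that c = 1, for a “flip-and-burn” profile that accelerates for the first half of the trip and decelerates for the second:

```python
import math

# 1 g expressed in light-years per year squared (about 1.03)
G = 9.80665 * (3.155_76e7)**2 / 9.460_73e15

def brachistochrone_trip(distance_ly, accel=G):
    """Constant proper acceleration for the first half of the trip,
    constant deceleration for the second half (relativistic rocket
    equations, with c = 1 in units of light-years and years).
    Returns (Earth time in years, ship time in years, peak speed / c)."""
    half = distance_ly / 2
    t_half = math.sqrt((accel * half + 1)**2 - 1) / accel  # coordinate time
    tau_half = math.acosh(accel * half + 1) / accel        # proper time
    beta_peak = accel * t_half / math.sqrt(1 + (accel * t_half)**2)
    return 2 * t_half, 2 * tau_half, beta_peak

# Alpha Centauri, 4.37 light-years away: roughly 6 years Earth time,
# 3.6 years ship time, peaking near 0.95c.
earth_t, ship_t, v = brachistochrone_trip(4.37)
print(f"Earth: {earth_t:.1f} yr, ship: {ship_t:.1f} yr, peak: {v:.2f}c")
```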

With higher levels of acceleration, you can shorten the trip; but that would require designing ships (or engineering crews?) in such a way as to sustain these high levels of acceleration for years at a time. Humans can sustain 3 g’s for hours, but not for years.

Even with only 1-g acceleration, the fuel costs for such a trip are staggering: Even with antimatter fuel you need dozens or hundreds of times as much mass in fuel as you have in payload—and with anything less than antimatter it’s basically just not possible. Yet there is nothing in the laws of physics saying you can’t do it, and I believe that someday we will.
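To put a rough number on that, an ideal photon rocket burning at constant proper acceleration a for total proper time τ has mass ratio M0/M1 = exp(aτ/c). The sketch below assumes a perfectly efficient matter–antimatter photon rocket (real engines would be far worse), with the trip parameters purely illustrative:

```python
import math

G = 1.032  # 1 g expressed in light-years per year squared (c = 1)

def fuel_per_payload(thrust_proper_time_yr, accel=G):
    """Fuel mass per unit payload for an ideal photon rocket burning at
    constant proper acceleration: M0/M1 = exp(a * tau / c), minus the
    payload itself."""
    return math.exp(accel * thrust_proper_time_yr) - 1

# A 1-g trip to Alpha Centauri spends roughly 3.6 years of ship time
# under thrust (accelerating, then decelerating), giving a fuel mass
# of roughly 40 times the payload even in this best case.
print(fuel_per_payload(3.6))
```

Longer trips are exponentially worse, which is why even “dozens or hundreds” of times the payload mass is an optimistic figure.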

Yet I sincerely doubt we would want to make such trips often. It’s one thing to send occasional waves of colonists, perhaps one each generation. It’s quite another to establish real two-way trade in goods.

Imagine placing an order for something—anything—and not receiving it for another 50 years. Even if, as I hope and believe, our descendants have attained far longer lifespans than we have, asymptotically approaching immortality, it seems unlikely that they’d be willing to wait decades for their shipments to arrive. In the same amount of time you could establish an entire industry in your own star system, built from the ground up, fully scaled to service entire planets.

In order to justify such a transit, you need to be carrying something truly impossible to produce locally. And there just won’t be very many such things.

People, yes. Definitely in the first wave of colonization, but likely in later waves as well, people will want to move themselves and their families across star systems, and will be willing to wait (especially since the time they experience on the ship won’t be nearly as daunting).

And there will be knowledge and experiences that are unique to particular star systems—but we’ll be sending that by radio signal and it will only take as many years as there are light-years between us; or we may even manage to figure out FTL ansibles and send it even faster than that.

It’s difficult for me to imagine what sort of goods could ever be so precious, so irreplaceable, that it would actually make sense to trade them across an interstellar distance. All habitable planets are likely to be made of essentially the same elements, in approximately the same proportions; whatever you may want, it’s almost certainly going to be easier to get it locally than it would be to buy it from another star system.

This is also why I think alien invasion is unlikely: There’s nothing they would particularly want from us that they couldn’t get more easily. Their most likely reason for invading would be specifically to conquer and rule us.

Certainly if you want gold or neodymium or deuterium, it’ll be thousands of times easier to get it at home. But even if you want something hard to make, like antimatter, or something organic and unique, like oregano, building up the industry to manufacture a product or the agriculture to grow a living organism is almost certainly going to be faster and easier than buying it from another solar system.

This is why I believe that for the first generation of interstellar colonists, imports will be textbooks, blueprints, and schematics to help build, and films, games, and songs to stay entertained and tied to home; exports will consist of scientific data about the new planet as well as artistic depictions of life on an alien world. For later generations, it won’t be so lopsided: The colonies will have new ideas in science and engineering as well as new art forms to share. Billions of people on Earth and thousands or millions on each colony world will await each new transmission of knowledge and art with bated breath.

Long-distance trade historically was mainly conducted via precious metals such as gold; but if interstellar travel is feasible, gold is going to be dirt cheap. Any civilization capable of even sending a small intrepid crew of colonists to Epsilon Eridani is going to consider mining asteroids an utterly trivial task.

Will such transactions involve money? Will we sell these ideas, or simply give them away? Unlike my previous post where I focused on the local economy, here I find myself agreeing with Star Trek: Money isn’t going to make sense for interstellar travel. Unless we have very fast communication, the time lag between paying money out and then seeing it circulate back will be so long that the money returned to you will be basically worthless. And that’s assuming you figure out a way to make transactions clear that doesn’t require real-time authentication—because you won’t have it.

Consider Epsilon Eridani, a plausible choice for one of the first star systems we will colonize. That’s 10.5 light-years away, so a round-trip signal will take 21 years. If inflation is a steady 2%, that means that $100 today will need to come back as $151 to have the same value by the time you hear back from your transaction. If you had the option to invest in a 5% bond instead, you’d have $279 by then. And this is a nearby star.
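
Those figures are just compound growth, easy to verify:

```python
principal = 100.0
years = 21  # round-trip signal time to Epsilon Eridani (10.5 ly each way)

# Value needed to match $100 today under steady 2% inflation:
inflation_adjusted = principal * 1.02 ** years
# Value of the same $100 put in a 5% bond instead:
bond_value = principal * 1.05 ** years

print(f"inflation-matched: ${inflation_adjusted:.2f}")  # about $151.57
print(f"5% bond:           ${bond_value:.2f}")          # about $278.60
```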

It would be much easier to simply trade data for data, maybe just gigabyte for gigabyte or maybe by some more sophisticated notion of relative prices. You don’t need to worry about what your dollar will be worth 20 years from now; you know how much effort went into designing that blueprint for an antimatter processor and you know how much you’ll appreciate seeing that VR documentary on the rings of Aegir. You may even have in mind how much it cost you to pay people to design prototypes and how much you can sell the documentary for; but those monetary transactions will be conducted within your own star system, independently of whatever monetary system prevails on other stars.

Indeed, it’s likely that we wouldn’t even bother trying to negotiate how much to send—because that itself would have such overhead and face the same time-lags—and would instead simply make a habit of sending everything we possibly can. Such interchanges could be managed by governments at each end, supported by public endowments. “This year’s content from Epsilon Eridani, brought to you by the Smithsonian Institution.”

We probably won’t ever have—or need, or want—huge freighter ships carrying containers of goods from star to star. But with any luck, we will one day have art and ideas from across the galaxy shared by all of the endless variety of beings humanity has become.

Economists aren’t that crazy

Dec 12 JDN 2459561

I’ve been seeing this meme go around lately, and I felt a need to respond:

Economics: “Humans only value things monetarily.”

Sociology: “Uh, I don’t…”

Economics: “Humans are always rational and value is calculated by complex internal calculus.”

Sociology: “Uhhh, Psy, can you help?”

Psychology: “That’s not how humans…”

Economics: “ALSO MY SYSTEM WILL GROW EXPONENTIALLY FOREVER!”

Physics: drops teacup

I have plenty of criticisms to make of neoclassical economics—but this is clearly unfair.

Economists aren’t that crazy.

Above all, economists don’t actually believe in exponential growth forever. I literally have never met one who does. The mainstream, uncontroversial (I daresay milquetoast) neoclassical growth model, the Solow-Swan model, predicts a long-run decline in the rate of economic growth. Indeed, I would not be surprised to find that long-run per-capita GDP growth is asymptotic, meaning that there is some standard of living that we can never expect the world to exceed. It’s simply a question of what that limit is, and it is most likely a good deal better than how we live even in First World countries.

It’s nothing more than a strawman of neoclassical economics to assert otherwise. Yes, economists do believe that current growth can and should continue for some time yet—though even among them it is controversial how long it will continue. But they absolutely do not believe that we can expect 3% annual growth in per-capita GDP for the next 1000 years. And indeed, it is precisely their mathematical sophistication that makes this so: They would be the first to recognize that this implies a 6.8 trillion-fold increase in standard of living, which is obviously ludicrous. A much more plausible figure for that timescale is something like 0.2%, which would be only a 7-fold increase over that same interval. And if you really want to extrapolate to millions of years, the only plausible long-run economic growth rate over that period is basically 0%. Yet billions of lives hinge upon whether it is actually 0.0001%, 0.0002%, or 0.0003%—if indeed human beings don’t go extinct long before then.
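
The arithmetic behind those factors is simple compounding:

```python
# 3% annual growth compounded for 1000 years:
growth_3pct = 1.03 ** 1000
# 0.2% annual growth compounded for 1000 years:
growth_02pct = 1.002 ** 1000

print(f"{growth_3pct:.2e}x")   # about 6.87e+12: a 6.8 trillion-fold increase
print(f"{growth_02pct:.1f}x")  # about 7.4x: roughly a 7-fold increase
```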

What about the other two claims? Well, neoclassical economists do have a habit of assuming greater rationality than human beings actually exhibit, and of trying to value everything in monetary terms. And economists are nothing if not arrogant in their relationship to other fields of social science. So here, at least, there is a kernel of truth.

Yet that makes this at best hyperbole for comedic effect—and at worst highly misleading as to what actual economists believe. You can find a few fringe economists who might seriously assent to the claim “humans are always rational”, and you can easily find plenty of amoral corporate shills who are paid to say such things on TV. (Krugman refers to them as “professionally conservative economists”.)

Moreover, I think the behavioral economics paradigm still hasn’t penetrated fully enough—most economists will give lip service to the idea of irrational behavior without being willing to seriously face up to how frequent it is or what this implies for policy. But no serious mainstream economist actually believes that all human beings are always rational.

And while there is surely a tendency to over-emphasize monetary costs and try to put everything in monetary terms, I don’t think I’ve ever met an economist who genuinely believes that all humans value everything monetarily. At most they might think that everyone should value everything monetarily—and even then the only ones who say things like this are weird fringe figures like that guy who hates Christmas.

Am I reading too much into a joke? Maybe. But given how poorly most people understand economics, this kind of joke can do real damage. It’s already a big problem that (aforementioned) corporate shills can present themselves as economic experts, but if popular culture is accustomed to dismissing the claims of actual economic experts, that makes matters much worse. And rather than the playful ribbing that neoclassical economists well deserve (like Jon Stewart gave them: “People are screwy.” “You’re just now figuring this out?”), this meme mocks economists aggressively enough that it seems to be trying to actively undermine their credibility.

If COVID taught us anything, it should be that expertise matters. Trusting experts more than we did would have saved thousands of lives—and trusting them less would have doomed even more.

So maybe a joke that will make people trust economic experts less isn’t so harmless after all?

Low-skill jobs

Dec 5 JDN 2459554

I’ve seen this claim going around social media for a while now: “Low-skill jobs are a classist myth created to justify poverty wages.”

I can understand why people would say things like this. I even appreciate that many low-skill jobs are underpaid and unfairly stigmatized. But it’s going a bit too far to claim that there is no such thing as a low-skill job.

Suppose all the world’s physicists and all the world’s truckers suddenly had to trade jobs for a month. Who would have a harder time?

If a mathematician were asked to do the work of a janitor, they’d be annoyed. If a janitor were asked to do the work of a mathematician, they’d be completely nonplussed.

I could keep going: Compare robotics engineers to dockworkers or software developers to fruit pickers.

Higher pay does not automatically equate to higher skills: welders are clearly more skilled than stock traders. Give any welder a million-dollar account and a few days of training, and they could do just as well as the average stock trader (which is to say, worse than the S&P 500). Give any stock trader welding equipment and a similar amount of training, and they’d be lucky to not burn their fingers off, much less actually usefully weld anything.

This is not to say that any random person off the street could do just as well as a janitor or dockworker as someone who has years of experience at that job. It is simply to say that they could do better—and pick up the necessary skills faster—than a random person trying to work as a physicist or software developer.

Moreover, this does justify some difference in pay. If some jobs are easier than others, in the sense that more people are qualified to do them, then the harder jobs will need to pay more in order to attract good talent—if they didn’t, they’d risk their high-skill workers going and working at the low-skill jobs instead.

This is of course assuming all else equal, which is clearly not the case. No two jobs are the same, and there are plenty of other considerations that go into choosing someone’s wage: For one, not simply what skills are required, but also the effort and unpleasantness involved in doing the work. I’m entirely prepared to believe that being a dockworker is less fun than being a physicist, and this should reduce the differential in pay between them. Indeed, it may have: Dockworkers are paid relatively well as far as low-skill jobs go—though nowhere near what physicists are paid. Then again, productivity is also a vital consideration, and there is a general tendency that high-skill jobs tend to be objectively more productive: A handful of robotics engineers can do what was once the work of hundreds of factory laborers.

There are also ways for a worker to be profitable without being particularly productive—that is, to be very good at rent-seeking. This is arguably the case for lawyers and real estate agents, and undeniably the case for derivatives traders and stockbrokers. Corporate executives aren’t stupid; they wouldn’t pay these workers astronomical salaries if they weren’t making money doing so. But it’s quite possible to make lots of money without actually producing anything of particular value for human society.

But that doesn’t mean that wages are always fair. Indeed, I dare say they typically are not. One of the most important determinants of wages is bargaining power. Unions don’t increase skill and probably don’t increase productivity—but they certainly increase wages, because they increase bargaining power.

And this is also something that’s correlated with lower levels of skill, because the more people there are who know how to do what you do, the harder it is for you to make yourself irreplaceable. A mathematician who works on the frontiers of conformal geometry or Teichmueller theory may literally be one of ten people in the world who can do what they do (quite frankly, even the number of people who know what they do is considerably constrained, though probably still at least in the millions). A dockworker, even one who is particularly good at loading cargo skillfully and safely, is still competing with millions of other people with similar skills. The easier a worker is to replace, the less bargaining power they have—in much the same way that a monopoly has higher profits than an oligopoly, which has higher profits than a competitive market.

This is why I support unions. I’m also a fan of co-ops, and an ardent supporter of progressive taxation and safety regulations. So don’t get me wrong: Plenty of low-skill workers are mistreated and underpaid, and they deserve better.

But that doesn’t change the fact that it’s a lot easier to be a janitor than a physicist.

Risk compensation is not a serious problem

Nov 28 JDN 2459547

Risk compensation. It’s one of those simple but counter-intuitive ideas that economists love, and it has been a major consideration in regulatory policy since the 1970s.

The idea is this: The risk we face in our actions is partly under our control. It requires effort to reduce risk, and effort is costly. So when an external source, such as a government regulation, reduces our risk, we will compensate by reducing the effort we expend, and thus our risk will decrease less, or maybe not at all. Indeed, perhaps we’ll even overcompensate and make our risk worse!

It’s often used as an argument against various kinds of safety efforts: Airbags will make people drive worse! Masks will make people go out and get infected!

The basic theory here is sound: Effort to reduce risk is costly, and people try to reduce costly things.

Indeed, it’s theoretically possible that risk compensation could yield the exact same risk, or even more risk than before—or at least, I wasn’t able to prove that for any possible risk profile and cost function it couldn’t happen.

But I wasn’t able to find any actual risk profiles or cost functions that would yield this result, even for a quite general form. Here, let me show you.

Let’s say there’s some possible harm H. There is also some probability that it will occur, which you can mitigate with some choice x. For simplicity let’s say that it’s one-to-one, so that your risk of H occurring is precisely 1-x. Since probabilities must be between 0 and 1, thus so must x.

Reducing that risk costs effort. I won’t say much about that cost, except to call it c(x) and assume the following:

(1) It is increasing: More effort reduces risk more and costs more than less effort.

(2) It is convex: Reducing risk from a high level to a low level (e.g. 0.9 to 0.8) costs less than reducing it from a low level to an even lower level (e.g. 0.2 to 0.1).

These both seem like eminently plausible—indeed, nigh-unassailable—assumptions. And they result in the following total expected cost (the opposite of your expected utility):

(1-x)H + c(x)

Now let’s suppose there’s some policy which will reduce your risk by a factor r, which must be between 0 and 1. Your cost then becomes:

r(1-x)H + c(x)

Minimizing this yields the following result:

rH = c'(x)

where c'(x) is the derivative of c(x). Since c(x) is increasing and convex, c'(x) is positive and increasing.

Thus, if I make r smaller—an external source of less risk—then I will reduce the optimal choice of x. This is risk compensation.

But have I reduced or increased the amount of risk?

The total risk is r(1-x); since r decreased and so did x, it’s not clear whether this went up or down. Indeed, it’s theoretically possible to have cost functions that would make it go up—but I’ve never seen one.

For instance, suppose we assume that c(x) = ax^b, where a and b are constants. This seems like a pretty general form, doesn’t it? To maintain the assumption that c(x) is increasing and convex, I need a > 0 and b > 1. (If 0 < b < 1, you get a function that’s increasing but concave. If b=1, you get a linear function and some weird corner solutions where you either expend no effort at all or all possible effort.)

Then I’m trying to minimize:

r(1-x)H + ax^b

This results in a closed-form solution for x:

x = (rH/ab)^(1/(b-1))

Since b>1, 1/(b-1) > 0.


Thus, the optimal choice of x is increasing in rH and decreasing in ab. That is, reducing the harm H or the overall risk r will make me put in less effort, while reducing the cost of effort (via either a or b) will make me put in more effort. These all make sense.
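
The closed-form solution can be double-checked numerically. Here is a small sketch comparing a brute-force grid search against the formula, using parameter values I’ve picked purely for illustration:

```python
# Minimize r*(1-x)*H + a*x**b over x in [0, 1] by brute force,
# and compare with the closed form x = (r*H/(a*b))**(1/(b-1)).
r, H, a, b = 0.8, 1.0, 2.0, 3.0  # illustrative values only

def expected_cost(x):
    return r * (1 - x) * H + a * x**b

xs = [i / 100_000 for i in range(100_001)]
x_grid = min(xs, key=expected_cost)
x_closed = (r * H / (a * b)) ** (1 / (b - 1))

print(f"grid search: x = {x_grid:.4f}")
print(f"closed form: x = {x_closed:.4f}")
```

The two answers agree to within the grid resolution, as they should whenever the closed-form solution lands in the interior of [0, 1].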

Can I ever increase the overall risk by reducing r? Let’s see.


My total risk r(1-x) is therefore:

r(1-x) = r[1-(rH/ab)^(1/(b-1))]

Can making r smaller ever make this larger?

Well, let’s compare it against the case when r=1. We want to see if there’s a case where the residual risk is actually larger:

r[1-(rH/ab)^(1/(b-1))] > 1-(H/ab)^(1/(b-1))

Expanding the left side (the r outside the brackets multiplies the r inside them, so the exponent becomes b/(b-1)):

r – r^(b/(b-1)) (H/ab)^(1/(b-1)) > 1 – (H/ab)^(1/(b-1))

Write K = (H/ab)^(1/(b-1)), which is just the optimal effort when r=1. The inequality rearranges to K > (1-r)/(1-r^(b/(b-1))), and since b/(b-1) > 1, the right-hand side exceeds (b-1)/b for every r < 1. So risk compensation can only increase total risk in the corner case where baseline effort already exceeded (b-1)/b; that is, where people were already mitigating away nearly all of the risk, so the absolute amount of risk at stake is small either way. Outside that corner, reducing risk externally reduces total risk even after compensation.

Now, to be fair, this isn’t a fully general model. I had to assume some specific functional forms. But I didn’t assume much, did I?

Indeed, there is a fully general argument that externally reduced risk will never harm you. It’s quite simple.

There are three states to consider: In state A, you have your original level of risk and your original level of effort to reduce it. In state B, you have an externally reduced level of risk and your original level of effort. In state C, you have an externally reduced level of risk, and you compensate by reducing your effort.

Which states make you better off?

Well, clearly state B is better than state A: You get reduced risk at no cost to you.

Furthermore, state C must be better than state B: You voluntarily chose to risk-compensate precisely because it made you better off.

Therefore, as long as your preferences are rational, state C is better than state A.

Externally reduced risk will never make you worse off.

QED. That’s it. That’s the whole proof.

But I’m a behavioral economist, am I not? What if people aren’t being rational? Perhaps there’s some behavioral bias that causes people to overcompensate for reduced risks. That’s ultimately an empirical question.

So, what does the empirical data say? Risk compensation is almost never a serious problem in the real world. Measures designed to increase safety, lo and behold, actually increase safety. Removing safety regulations, astonishingly enough, makes people less safe and worse off.

If we ever do find a case where risk compensation is very large, then I guess we can remove that safety measure, or find some way to get people to stop overcompensating. But in the real world this has basically never happened.

It’s still a fair question whether any given safety measure is worth the cost: Implementing regulations can be expensive, after all. And while many people would like to think that “no amount of money is worth a human life”, nobody does—or should, or even can—act like that in the real world. You wouldn’t drive to work or get out of bed in the morning if you honestly believed that.

If it would cost $4 billion to save one expected life, it’s definitely not worth it. Indeed, you should still be able to see that even if you don’t think lives can be compared with other things—because $4 billion could save an awful lot of lives if you spent it more efficiently. (Probably over a million, in fact, as current estimates of the marginal cost to save one life are about $2,300.) Inefficient safety interventions don’t just cost money—they prevent us from doing other, more efficient safety interventions.

And as for airbags and wearing masks to prevent COVID? Yes, definitely 100% worth it, as both interventions have already saved tens if not hundreds of thousands of lives.

How can we fix medical residency?

Nov 21 JDN 2459540

Most medical residents work 60 or more hours per week, and nearly 20% work 80 or more hours. 66% of medical residents report sleeping 6 hours or less each night, and 20% report sleeping 5 hours or less.

It’s not as if sleep deprivation is a minor thing: Worldwide, across all jobs, nearly 750,000 deaths annually are attributable to long working hours, most of these due to sleep deprivation.


By some estimates, medical errors account for as many as 250,000 deaths per year in the US alone. Even the most conservative estimates say that at least 25,000 deaths per year in the US are attributable to medical errors. It seems quite likely that long working hours increase the rate of dangerous errors (though it has been difficult to determine precisely how much).

Indeed, the more we study stress and sleep deprivation, the more we learn how incredibly damaging they are to health and well-being. Yet we seem to have set up a system almost intentionally designed to maximize the stress and sleep deprivation of our medical professionals. Some of them simply burn out and leave the profession (about 18% of surgical residents quit); surely an even larger number of people never enter medicine in the first place because they know they would burn out.

Even once a doctor makes it through residency and has learned to cope with absurd hours, this most likely distorts their whole attitude toward stress and sleep deprivation. They are likely to not consider them “real problems”, because they were able to “tough it out”—and they are likely to assume that their patients can do the same. One of the primary functions of a doctor is to reduce pain and suffering, and by putting doctors through unnecessary pain and suffering as part of their training, we are teaching them that pain and suffering aren’t really so bad and you should just grin and bear it.

We are also systematically selecting against doctors who have disabilities that would make it difficult to work these double-time hours—which means that the doctors who are most likely to sympathize with disabled patients are being systematically excluded from the profession.

There have been some attempts to regulate the working hours of residents, but they have generally not been effective. I think this is for three reasons:

1. They weren’t actually trying hard enough. A cap of 80 hours per week is still 40 hours too high; it looks more like an attempt at better PR than a fix for the actual problem.

2. Their enforcement mechanisms left too much opportunity to cheat the system, and in fact most medical residents simply became pressured to continue over-working and under-report their hours.

3. They don’t seem to have considered how to manage the transition without reducing the total number of resident-hours, so residents received less training and hospitals were left short-staffed.

The solution to problem 1 is obvious: The cap needs to be lower. Much lower.

The solution to problem 2 is trickier: What sort of enforcement mechanism would prevent hospitals from gaming the system?

I believe the answer is very steep overtime pay requirements, coupled with regular and intensive auditing. Every hour a medical resident goes over their cap, they should have to be paid triple time. Audits should be performed frequently, randomly and without notice. And if a hospital is caught falsifying their records, they should be required to pay all missing hours to all medical residents at quintuple time. And Medicare and Medicaid should not be allowed to reimburse these additional payments—they must come directly out of the hospital’s budget.

Under the current system, the “punishment” is usually a threat of losing accreditation, which is too extreme and too harmful to the residents. Precisely because this is such a drastic measure, it almost never happens. The punishment needs to be small enough that we will actually enforce it; and it needs to hurt the hospital, not the residents—overtime pay would do precisely that.

That brings me to problem 3: How can we ensure that we don’t reduce the total number of resident-hours?

This is important for two reasons: Each resident needs a certain number of hours of training to become a skilled doctor, and residents provide a significant proportion of hospital services. Of the roughly 1 million doctors in the US, about 140,000 are medical residents.

The answer is threefold:

1. Increase the number of residency slots (we have a global doctor shortage anyway).

2. Extend the duration of residency so that each resident gets the same number of total work hours.

3. Gradually phase in so that neither increase needs to be too fast.

Currently a typical residency is about 4 years. 4 years of 80-hour weeks is equivalent to 8 years of 40-hour weeks. The goal is for each resident to get 320 hour-years of training.

With 140,000 current residents averaging 4 years, a typical cohort is about 35,000. So the goal is to each year have at least (35,000 residents per cohort)(4 cohorts)(80 hours per week) = 11 million resident-hours per week.

In cohort 1, we reduce the cap to 70 hours, and increase the number of accepted residents to 40,000. Residents in cohort 1 will continue their residency for 4 years, 7 months. This gives each one 321 hour-years of training.

In cohort 2, we reduce the cap to 60 hours, and increase the number of accepted residents to 46,000.

Residents in cohort 2 will continue their residency for 5 years, 4 months. This gives each one 320 hour-years of training.

In cohort 3, we reduce the cap to 55 hours, and increase the number of accepted residents to 50,000.

Residents in cohort 3 will continue their residency for 6 years. This gives each one 330 hour-years of training.

In cohort 4, we reduce the cap to 50 hours, and increase the number of accepted residents to 56,000. Residents in cohort 4 will continue their residency for 6 years, 6 months. This gives each one 325 hour-years of training.

In cohort 5, we reduce the cap to 45 hours, and increase the number of accepted residents to 60,000. Residents in cohort 5 will continue their residency for 7 years, 2 months. This gives each one 322 hour-years of training.

In cohort 6, we reduce the cap to 40 hours, and increase the number of accepted residents to 65,000. Residents in cohort 6 will continue their residency for 8 years. This gives each one 320 hour-years of training.

In cohort 7, we keep the cap at 40 hours, and increase the number of accepted residents to 70,000. This is now the new standard, with 8-year residencies with 40-hour weeks.
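
The cohort schedule above is easy to tabulate and check (“hour-years” meaning weekly hours multiplied by years of residency):

```python
# (weekly-hour cap, residency length in months, residents admitted)
cohorts = [
    (70, 4 * 12 + 7, 40_000),
    (60, 5 * 12 + 4, 46_000),
    (55, 6 * 12,     50_000),
    (50, 6 * 12 + 6, 56_000),
    (45, 7 * 12 + 2, 60_000),
    (40, 8 * 12,     65_000),
    (40, 8 * 12,     70_000),
]

for i, (cap, months, admitted) in enumerate(cohorts, start=1):
    hour_years = cap * months / 12
    print(f"cohort {i}: {cap} h/wk x {months / 12:.2f} yr "
          f"= {hour_years:.0f} hour-years, {admitted:,} residents")
```

Each cohort comes out at or just above the 320 hour-year target, confirming the arithmetic in the schedule.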

I’ve made a graph here of what this does to the available number of resident-hours each year. There is a brief 5% dip in year 4, but by the time we reach year 14 we’ve actually doubled the total number of available resident-hours at any given time—without increasing the total amount of work each resident does, simply keeping them longer and working them less intensively each year. Given that quality of work is reduced by working longer hours, it’s likely that even this brief reduction in hours would not result in any reduced quality of care for patients.

[residency_hours.png]

I have thus managed to increase the number of available resident-hours, ensure that each resident gets the same amount of training as before, and still radically reduce the work hours from 80 per week to 40 per week. The additional recruitment each year is never more than 6,000 new residents or 15% of the current number of residents.

It takes several years to effect this transition. This is unavoidable if we are trying to avoid massive increases in recruitment, though if we were prepared to simply double the number of admitted residents each year we could immediately transition to 40-hour work weeks in a single cohort and the available resident-hours would then strictly increase every year.

This plan is likely not the optimal one; I don’t know enough about the details of how costly it would be to admit more residents, and it’s possible that some residents might actually prefer a briefer, more intense residency rather than a longer, less stressful one. (Though it’s worth noting that most people greatly underestimate the harms of stress and sleep deprivation, and doctors don’t seem to be any better in this regard.)

But this plan does prove one thing: There are solutions to this problem. It can be done. If our medical system isn’t solving this problem, it is not because solutions do not exist—it is because those in charge are choosing not to take them.