Russia has invaded Ukraine.

Mar 6 JDN 2459645

Russia has invaded Ukraine. No doubt you have heard it by now, as it’s all over the news in dozens of outlets, from CNN to NBC to The Guardian to Al-Jazeera. And as well it should be, as this is the first time in history that a nuclear power has annexed another country. Yes, nuclear powers have fought wars before—the US just got out of one in Afghanistan, as you may recall. They have even started wars and led invasions—the US did that in Iraq. And certainly, countries have been annexing and conquering other countries for millennia. But never before—never before, in human history—has a nuclear-armed state invaded another country simply to claim it as part of itself. (Trump said he thought the US should have done something like that, and the world was rightly horrified.)

Ukraine is not a nuclear power—not anymore. The Soviet Union built up a great deal of its nuclear production in Ukraine, and in 1991 when Ukraine became independent it still had a sizable nuclear arsenal. But starting in 1994 Ukraine began disarming that arsenal, and now it is gone. Now that Russia has invaded them, the government of Ukraine has begun publicly reconsidering their agreements to disarm their nuclear arsenal.

Russia’s invasion of Ukraine has just disproved the most optimistic models of international relations, which basically said that major-power wars for territory ended with WW2. Some credited nuclear weapons, others the United Nations, still others a general improvement in trade integration and living standards around the world. But they’ve all turned out to be wrong; maybe such wars are rarer, but they can clearly still happen, because one just did.

I would say that only two major theories of the Long Peace are still left standing in light of this invasion: nuclear deterrence and the democratic peace. Ukraine gave up its nuclear arsenal and later got attacked—that’s consistent with nuclear deterrence. Russia under Putin is nearly as authoritarian as the Soviet Union, and Ukraine is a “hybrid regime” (let’s call it a solid D), so there’s no reason the democratic peace would stop this invasion. But any model which posits that trade or the UN prevents war is pretty much off the table now, as Ukraine had very extensive trade with both Russia and the EU, and the UN has been utterly toothless so far. (Maybe we could say the UN prevents wars except those led by permanent Security Council members.)

Well, then, what if the nuclear deterrence theory is right? What would have happened if Ukraine had kept its nuclear weapons? Would that have made this situation better, or worse? It could have made it better, if it acted as a deterrent against Russian aggression. But it could also have made it much, much worse, if it resulted in a nuclear exchange between Russia and Ukraine.

This is the problem with nukes. They are not a guarantee of safety. They are a guarantee of fat tails. To explain what I mean by that, let’s take a brief detour into statistics.

A fat-tailed distribution is one for which very extreme events have non-negligible probability. For some distributions, like a uniform distribution, events are clearly contained within a certain interval and nothing outside is even possible. For others, like a normal distribution or lognormal distribution, extreme events are theoretically possible, but so vanishingly improbable they aren’t worth worrying about. But for fat-tailed distributions like a Cauchy distribution or a Pareto distribution, extreme events are not so improbable. They may be unlikely, but they are not so unlikely they can simply be ignored. Indeed, they can actually dominate the average—most of what happens, happens in a handful of extreme events.
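To make the contrast concrete, here is a quick numerical sketch, using only Python’s standard library and the standard parameterizations of each distribution (the Pareto shape value is an arbitrary choice for illustration), comparing how fast the right tails decay:

```python
import math

def normal_tail(x):
    """P(X > x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def cauchy_tail(x):
    """P(X > x) for a standard Cauchy."""
    return 0.5 - math.atan(x) / math.pi

def pareto_tail(x, alpha=1.5, xm=1.0):
    """P(X > x) for a Pareto with minimum xm and shape alpha."""
    return (xm / x) ** alpha if x > xm else 1.0

for x in (5, 10, 50):
    print(f"x={x:>3}: normal {normal_tail(x):.2e}, "
          f"cauchy {cauchy_tail(x):.2e}, pareto {pareto_tail(x):.2e}")
```

The normal tail collapses to effectively zero almost immediately, while the Cauchy and Pareto tails remain at probabilities you genuinely have to plan around.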

Deaths in war seem to be fat-tailed, even in conventional warfare. They seem to follow a Pareto distribution. There are lots of tiny skirmishes, relatively frequent regional conflicts, occasional major wars, and a handful of super-deadly global wars. This kind of pattern tends to emerge when a phenomenon is self-reinforcing by positive feedback—hence why we also see it in distributions of income and wildfire intensity.
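A small simulation illustrates how a Pareto tail lets a handful of events dominate the total. The shape parameter here is a hypothetical choice for illustration, not an empirical estimate of war deaths:

```python
import random

random.seed(42)  # fixed seed for reproducible draws

ALPHA = 1.1  # shape near 1 => very heavy tail (illustrative, not estimated)
N = 100_000

# random.paretovariate(alpha) draws from a Pareto distribution with minimum 1
deaths = [random.paretovariate(ALPHA) for _ in range(N)]
deaths.sort(reverse=True)

total = sum(deaths)
top_share = sum(deaths[: N // 100]) / total  # share held by the largest 1% of events

print(f"Largest 1% of events account for {top_share:.0%} of all deaths")
```

With a shape this close to 1, the top 1% of events typically carries well over half the total—exactly the “most of what happens, happens in a handful of extreme events” pattern.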

Fat-tailed distributions typically (though not always—it’s easy to construct counterexamples, like the Cauchy distribution with low values truncated off) have another property as well, which is that minor events are common. More common, in fact, than they would be under a normal distribution. What seems to happen is that the probability mass moves away from the moderate outcomes and shifts to both the extreme outcomes and the minor ones.

Nuclear weapons fit this pattern perfectly. They may in fact reduce the probability of moderate, regional conflicts, in favor of increasing the probability of tiny skirmishes or peaceful negotiations. But they also increase the probability of utterly catastrophic outcomes—a full-scale nuclear war could kill billions of people. It probably wouldn’t wipe out all of humanity, and more recent analyses suggest that a catastrophic “nuclear winter” is unlikely. But even 2 billion people dead would be literally the worst thing that has ever happened, and nukes could make it happen in hours when such a death toll by conventional weapons would take years.

If we could somehow guarantee that such an outcome would never occur, then the lower rate of moderate conflicts nuclear weapons provide would justify their existence. But we can’t. It hasn’t happened yet, but it doesn’t have to happen often to be terrible. Really, just once would be bad enough.

Let us hope, then, that the democratic peace turns out to be the theory that’s right. Because a more democratic world would clearly be better—while a more nuclearized world could be better, but could also be much, much worse.

Who still uses cash?

Feb 27 JDN 2459638

If you had to guess, what is the most common denomination of US dollar bills? You might check your wallet: $1? $20?

No, it’s actually $100. There are 13.1 billion $1 bills, 11.7 billion $20 bills, and 16.4 billion $100 bills. And since $100 bills are worth more, the vast majority of US dollar value in circulation is in those $100 bills—indeed, $1.64 trillion of the total $2.05 trillion cash supply.
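The arithmetic behind that claim, using the circulation figures above:

```python
# Approximate circulation counts from the post (billions of notes)
counts_billions = {1: 13.1, 20: 11.7, 100: 16.4}

# Total dollar value in circulation for each denomination
values = {d: d * n * 1e9 for d, n in counts_billions.items()}

for d, v in values.items():
    print(f"${d:>3} bills: ${v / 1e12:.2f} trillion")

# $100 bills alone: 16.4 billion notes x $100 = $1.64 trillion,
# out of the ~$2.05 trillion total cash supply
share_of_cash = values[100] / 2.05e12
print(f"$100-bill share of all cash value: {share_of_cash:.0%}")
```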

This is… odd, to say the least. When’s the last time you spent a $100 bill? Then again, when’s the last time you spent… cash? In a typical week, 30% of Americans use no cash at all.

In the United States, cash is used for 26% of transactions, compared to 28% for debit card and 23% for credit cards. The US is actually a relatively cash-heavy country by First World standards. In the Netherlands and Scandinavia, cash is almost unheard of. When I last visited Amsterdam a couple of months ago, businesses were more likely to take US credit cards than they were to take cash euros.

A list of countries most reliant on cash shows mostly very poor countries, like Chad, Angola, and Burkina Faso. But even in Sub-Saharan Africa, mobile money is dominant in Botswana, Kenya and Uganda.

And yet the cash money supply is still quite large: $2.05 trillion is only a third of the US monetary base, but it’s still a huge amount of money. If most people aren’t using it, who is? And why is so much of it in the form of $100 bills?

It turns out that the answer to the second question can provide an answer to the first. $100 bills are not widely used for consumer purchases—indeed, most businesses won’t even accept them. (Honestly that has always bothered me: What exactly does “legal tender” mean, if you’re allowed to categorically refuse $100 bills? It’d be one thing to say “we can’t accept payment when we can’t make change”, and obviously nobody seriously expects you to accept $10,000 bills; but what if you have a $97 purchase?) When people spend cash, it’s mainly ones, fives, and twenties.

Who uses $100 bills? People who want to store money in a way that is anonymous, easily transportable—including across borders—and stable against market fluctuations. Drug dealers leap to mind (and indeed the money-laundering that HSBC did for drug cartels was largely in the form of thick stacks of $100 bills). Of course it isn’t just drug dealers, or even just illegal transactions, but it is mostly people who want to cross borders. 80% of US $100 bills are in circulation outside the United States. Since 80% of US cash is in the form of $100 bills, this means that nearly two-thirds of all US dollars are outside the US.
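The back-of-the-envelope arithmetic for that “two-thirds” figure:

```python
# Two figures from the post:
share_abroad = 0.80    # fraction of $100 bills circulating outside the US
share_hundreds = 0.80  # fraction of US cash value held as $100 bills

# Dollars abroad as a fraction of all US cash value. This ignores any
# smaller bills held abroad, so it is really a lower bound.
dollars_abroad = share_abroad * share_hundreds
print(f"At least {dollars_abroad:.0%} of US cash value is outside the US")
```

0.8 × 0.8 = 0.64, which is where “nearly two-thirds” comes from.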

Knowing this, I have to wonder: Why does the Federal Reserve continue printing so many $100 bills? Okay, once they’re out there, it may be hard to get them back. But they do wear out eventually. (In fact, US dollars wear out faster than most currencies, because they are made of linen instead of plastic. Surprisingly, this actually makes them less eco-friendly despite being more biodegradable. Of course, the most eco-friendly method of payment is mobile payments, since their marginal environmental impact is basically zero.) So they could simply stop printing them, and eventually the global supply would dwindle.

They clearly haven’t done this—indeed, there were more $100 bills printed last year than any previous year, increasing the global supply by 2 billion bills, or $200 billion. Why not? Are they trying to keep money flowing for drug dealers? Even if the goal is to substitute for failing currencies in other countries (a somewhat odd, if altruistic, objective), wouldn’t that be more effective with $1 and $5 bills? $100 is a lot of money for people in Chad or Angola! Chad’s per-capita GDP is a staggeringly low $600 per year; that means that a $100 bill to a typical person in Chad would be like me holding onto a $10,000 bill (those exist, technically). Surely they’d prefer $1 bills—which would still feel to them like $100 bills feel to me. Even in middle-income countries, $100 is quite a bit; Ecuador actually uses the US dollar as its main currency, but their per-capita GDP is only $5,600, so $100 to them feels like $1000 to us.
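To sketch the purchasing-power comparison: the US per-capita GDP below is an assumed round figure for illustration (it is not from the post); the Chad and Ecuador figures are the ones quoted above.

```python
US_GDP_PC = 60_000  # assumed round figure for US per-capita GDP (illustrative)
gdp_pc = {"Chad": 600, "Ecuador": 5_600}  # per-capita GDP figures from the post

# What a $100 bill "feels like" locally, scaled by the income ratio
equivalents = {name: 100 * US_GDP_PC / g for name, g in gdp_pc.items()}

for name, eq in equivalents.items():
    print(f"A $100 bill in {name} is like ${eq:,.0f} to a typical American")
```

The ratios reproduce the comparisons in the text: $100 in Chad scales to $10,000, and $100 in Ecuador to roughly $1,000.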

If you want to usefully increase the money supply to stimulate consumer spending, print $20 bills—or just increase some numbers in bank reserve accounts. Printing $100 bills is honestly baffling to me. It seems at best inept, and at worst possibly corrupt—maybe they do want to support drug cartels?

Cryptocurrency and its failures

Jan 30 JDN 2459620

It started out as a neat idea, though very much a solution in search of a problem. Using encryption, could we decentralize currency and eliminate the need for a central bank?

Well, it’s been a few years now, and we have seen how well that went. Bitcoin recently crashed, but it has always been astonishingly volatile. As a speculative asset, such volatility is often tolerable—for many, even profitable. But as a currency, it is completely unbearable. People need to know that their money will be a store of value and a medium of exchange—and something that changes price from one minute to the next is neither.

Some of cryptocurrency’s failures have been hilarious, like the ill-fated island called [yes, really] “Cryptoland”, which crashed and burned when they couldn’t find any investors to help them buy the island.

Others have been darkly comic, but tragic in their human consequences. Chief among these was the failed attempt by El Salvador to make Bitcoin an official currency.

At the time, President Bukele justified it with an economically baffling argument: The total value of all Bitcoin in the world is $680 billion; therefore, if even 1% of that gets invested in El Salvador, GDP will increase by $6.8 billion, which is 25%!

First of all, that would only happen if 1% of all Bitcoin were invested in El Salvador each year—otherwise you’re looking at a one-time injection of money, not an increase in GDP.

But more importantly, this is like saying that the total US dollar supply is $6 trillion, (that’s physically cash; the actual money supply is considerably larger) so maybe by dollarizing your economy you can get 1% of that—$60 billion, baby! No, that’s not how any of this works. Dollarizing could still be a good idea (though it didn’t go all that well in El Salvador), but it won’t give you some kind of share in the US economy. You can’t collect dividends on US GDP.
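To see the stock-versus-flow confusion in numbers (the El Salvador GDP figure below is simply the one implied by Bukele’s own “25%” claim, not an independent statistic):

```python
btc_market_cap = 680e9            # total Bitcoin value cited by Bukele
injection = 0.01 * btc_market_cap # the hoped-for 1% = $6.8 billion
sv_gdp = injection / 0.25         # GDP implied by the "25%" claim: ~$27 billion

print(f"One-time injection: ${injection / 1e9:.1f}B "
      f"against implied GDP of ${sv_gdp / 1e9:.1f}B")

# GDP measures a yearly flow of production. A one-off $6.8B purchase of
# assets is not $6.8B of annual output; to lift GDP by 25% every year,
# 1% of all Bitcoin would have to flow in every single year.
```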

There is actually a silver lining in how El Salvador’s Bitcoin experiment failed: Nobody bought into it in the first place. They couldn’t convince people to buy government assets that were backed by Bitcoin (perhaps because the assets were a strictly worse deal than just, er, buying Bitcoin). So the human cost of this idiotic experiment should be relatively minimal: It’s not like people are losing their homes over this.

That is, unless President Bukele doubles down, which he now appears to be doing. Even people who are big fans of cryptocurrency are unimpressed with El Salvador’s approach to it.

It would be one thing if there were some stable cryptocurrency that one could try pegging one’s national currency to, but there isn’t. Even so-called stablecoins are generally pegged to… regular currencies, typically the US dollar but also sometimes the Euro or a few other currencies. (I’ve seen the Australian Dollar and the Swiss Franc, but oddly enough, not the Pound Sterling.)

Or a country could try issuing its own cryptocurrency, as an all-digital currency instead of one that is partly paper. It’s not totally clear to me what advantages this would have over the current system (in which most of the money supply is bank deposits, i.e. already digital), but it would at least preserve the key advantage of having a central bank that can regulate your money supply.

But no, President Bukele decided to take an already-existing cryptocurrency, backed by nothing but the whims of the market, and make it legal tender. Somehow he missed the fact that a currency which rises and falls by 10% in a single day is generally considered bad.

Why? Is he just an idiot? I mean, maybe, though Bukele’s approval rating is astonishingly high. (And El Salvador is… mostly democratic. Unlike, say, Putin’s, I think these approval ratings are basically real.) But that’s not the only reason. My guess is that he was gripped by the same FOMO that has gripped everyone else who evangelizes for Bitcoin. The allure of easy money is often irresistible.

Consider President Bukele’s position. You’re governing a poor, war-torn country which has had economic problems of various types since its founding. When the national currency collapsed a generation ago, the country was put on the US dollar, but that didn’t solve the problem. So you’re looking for a better solution to the monetary doldrums your country has been in for decades.

You hear about a fancy new monetary technology, “cryptocurrency”, which has all the tech people really excited and seems to be making tons of money. You don’t understand a thing about it—hardly anyone seems to, in fact—but you know that people with a lot of insider knowledge of technology and finance are really invested in it, so it seems like there must be something good here. So, you decide to launch a program that will convert your country’s currency from the US dollar to one of these new cryptocurrencies—and you pick the most famous one, which is also extremely valuable, Bitcoin.

Could cryptocurrencies be the future of money, you wonder? Could this be the way to save your country’s economy?

Despite all the evidence that had already accumulated that cryptocurrency wasn’t working, I can understand why Bukele would be tempted by that dream. Just as we’d all like to get free money without having to work, he wanted to save his country’s economy without having to implement costly and unpopular reforms.

But there is no easy money. Not really. Some people get lucky; but they ultimately benefit from other people’s hard work.

The lesson here is deeper than cryptocurrency. Yes, clearly, it was a dumb idea to try to make Bitcoin a national currency, and it will get even dumber if Bukele really does double down on it. But more than that, we must all resist the lure of easy money. If it sounds too good to be true, it probably is.

Keynesian economics: It works, bitches

Jan 23 JDN 2459613

(I couldn’t resist; for the uninitiated, my slightly off-color title is referencing this XKCD comic.)

When faced with a bad recession, Keynesian economics prescribes the following response: Expand the money supply. Cut interest rates. Increase government spending, but decrease taxes. The bigger the recession, the more we should do all these things—especially increasing spending, because interest rates will often get pushed to zero, creating what’s called a liquidity trap.

Take a look at these two FRED graphs, both since the 1950s.
The first is interest rates (specifically the Fed funds effective rate):

The second is the US federal deficit as a proportion of GDP:

Interest rates were pushed to zero right after the 2008 recession, and didn’t start coming back up until 2016. Then as soon as we hit the COVID recession, they were dropped back to zero.

The deficit looks even more remarkable. At the 2009 trough of the recession, the deficit was large, nearly 10% of GDP; but then it was quickly reduced back to normal, to between 2% and 4% of GDP. And that initial surge is as much explained by GDP and tax receipts falling as by spending increasing.

Yet in 2020 we saw something quite different: The deficit became huge. Literally off the chart, nearly 15% of GDP. A staggering $2.8 trillion. We’ve not had a deficit that large as a proportion of GDP since WW2. We’ve never had a deficit that large in real billions of dollars.

Deficit hawks came out of the woodwork to complain about this, and for once I was worried they might actually be right. Their most credible complaint was that it would trigger inflation, and they weren’t wrong about that: Inflation became a serious concern for the first time in decades.

But these recessions were very large, and when you actually run the numbers, this deficit was the correct magnitude for what Keynesian models tell us to do. I wouldn’t have thought our government had the will and courage to actually do it, but I am very glad to have been wrong about that, for one very simple reason:

It worked.

In 2009, we didn’t actually fix the recession. We blunted it; we stopped it from getting worse. But we never really restored GDP, we just let it get back to its normal growth rate after it had plummeted, and eventually caught back up to where we had been.

2021 went completely differently. With a much larger deficit, we fixed this recession. We didn’t just stop the fall; we reversed it. We aren’t just back to normal growth rates—we are back to the same level of GDP, as if the recession had never happened.

This contrast is quite obvious from the graph of US GDP:

In 2008 and 2009, GDP slumps downward, and then just… resumes its previous trend. It’s like we didn’t do anything to fix the recession, and just allowed the overall strong growth of our economy to carry us through.

The pattern in 2020 is completely different. GDP plummets downward—much further, much faster than in the Great Recession. But then it immediately surges back upward. By the end of 2021, it was above its pre-recession level, and looks to be back on its growth trend. With a recession this deep, if we’d just waited like we did last time, it would have taken four or five years to reach this point—we actually did it in less than one.

I wrote earlier about how this is a weird recession, one that actually seems to fit Real Business Cycle theory. Well, it was weird in another way as well: We fixed it. We actually had the courage to do what Keynes told us to do in 1936, and it worked exactly as it was supposed to.

Indeed, to go from unemployment of almost 15% in April of 2020 to under 4% in December of 2021 is fast enough that I feel like I’m getting whiplash. We have never seen unemployment drop that fast. Krugman is fond of comparing this to “morning in America”, but that’s really an understatement. Pitch black one moment, shining bright the next: this isn’t a sunrise, it’s pulling open a blackout curtain.

And all of this while the pandemic is still going on! The omicron variant has brought case numbers to their highest levels ever, though fortunately death rates so far are still below last year’s peak.

I’m not sure I have the words to express what a staggering achievement of economic policy it is to so rapidly and totally repair the economic damage caused by a pandemic while that pandemic is still happening. It’s the equivalent of repairing an airplane that is not only still in flight, but still taking anti-aircraft fire.

Why, it seems that Keynes fellow may have been onto something, eh?

Reversals in progress against poverty

Jan 16 JDN 2459606

I don’t need to tell you that the COVID pandemic has been very bad for the world. Yet perhaps the worst outcome of the pandemic is one that most people don’t recognize: It has reversed years of progress against global poverty.

Estimates of the number of people who will be thrown into extreme poverty as a result of the pandemic are consistently around 100 million, though some forecasts have predicted this will rise to 150 million, or, in the most pessimistic scenarios, even as high as 500 million.

Pre-COVID projections showed the global poverty rate falling steadily from 8.4% in 2019 to 6.3% by 2030. But COVID resulted in the first upward surge in global poverty in decades, and updated models now suggest that the global poverty rate in 2030 will be as high as 7.0%. That difference is 0.7% of a forecasted population of 8.5 billion—so that’s a difference of 59 million people.
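Checking that arithmetic:

```python
baseline_2030 = 0.063    # pre-COVID projected global poverty rate for 2030
updated_2030 = 0.070     # post-COVID projection
population_2030 = 8.5e9  # forecasted world population

extra_poor = (updated_2030 - baseline_2030) * population_2030
print(f"Additional people in extreme poverty in 2030: "
      f"{extra_poor / 1e6:.1f} million")
```

The 0.7 percentage-point gap times 8.5 billion people gives roughly 59–60 million, matching the figure in the text up to rounding.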

This is a terrible reversal of fortune, and a global tragedy. Tens or perhaps even hundreds of millions of people will suffer the pain of poverty because of this global pandemic and the numerous missteps by many of the world’s governments—not least the United States—in response to it.

Yet it’s important to keep in mind that this is a short-term reversal in a long-term trend toward reduced poverty. Yes, the most optimistic predictions are turning out to be wrong—but the general pattern of dramatic reductions in global poverty over the late 20th and early 21st century are still holding up.

That post-COVID estimate of a global poverty rate of 7.0% needs to be compared against the fact that as recently as 1980, the global poverty rate at the same income threshold (adjusted for inflation and purchasing power, of course) was a whopping 44%.

This pattern makes me feel deeply ambivalent about the effects of globalization on inequality. While it now seems clear that globalization has exacerbated inequality within First World countries—and triggered a terrible backlash of right-wing populism as a result—it also seems clear that globalization was a major reason for the dramatic reductions in global poverty in the past few decades.

I think the best answer I’ve been able to come up with is that globalization is overall a good thing, and we must continue it—but we also need to be much more mindful of its costs, and we must make policy that mitigates those costs. Expanded trade has winners and losers, and we should be taxing the winners to compensate the losers. To make good economic policy, it simply isn’t enough to increase aggregate GDP; you actually have to make life better for everyone (or at least as many people as you can).

Unfortunately, knowing what policies to make is only half the battle. We must actually implement those policies, which means winning elections, which means restoring the public’s faith in the authority of economic experts.

Some of the people voting for Donald Trump were just what Hillary Clinton correctly (if tone-deafly) referred to as “deplorables”: racists, misogynists, xenophobes. But I think that many others weren’t voting for Trump but against Clinton; they weren’t embracing far-right populism but rather rejecting center-left technocratic globalization. They were tired of being told what to do by experts who didn’t seem to care about them or their interests.

And the thing is, they were right about that. Not about voting for Trump—that’s unforgivable—but about the fact that expert elites had been ignoring their interests and needed a wake-up call. There were a hundred better ways of making that wake-up call that didn’t involve putting a narcissistic, incompetent maniac in charge of the world’s largest economy, military and nuclear arsenal, and millions of people should be ashamed of themselves for not taking those better options. Yet the fact remains: The wake-up call was necessary, and we should be responding to it.

We expert elites (I think I can officially carry that card, now that I have a PhD and a faculty position at a leading research university) need to do a much better job of two things: First, articulating the case for our policy recommendations in a way that ordinary people can understand, so that they feel justified and not simply rammed down people’s throats; and second, recognizing the costs and downsides of these policies and taking action to mitigate them whenever possible.

For instance: Yes, we need to destroy all the coal jobs. They are killing workers and the planet. Coal companies need to be transitioned to new industries or else shut down. This is not optional. It must be done. But we also need to explain to those coal miners why it’s necessary to move on from coal to solar and nuclear, and we need to be implementing various policies to help those workers move on to better, safer jobs that pay as well and don’t involve filling their lungs with soot and the atmosphere with carbon dioxide. We need to articulate, emphasize—and loudly repeat—that this isn’t about hurting coal miners to help everyone else, but about helping everyone, coal miners included, and that if anyone gets hurt it will only be a handful of psychopathic billionaires who already have more money than any human being could possibly need or deserve.

Another example: We cannot stop trading with India and China. Hundreds of millions of innocent people would suddenly be thrown out of work and into poverty if we did. We need the products they make for us, and they need the money we pay for those products. But we must also acknowledge that trading with poor countries does put downward pressure on wages back home, and take action to help First World workers who are now forced to compete with global labor markets. Maybe this takes the form of better unemployment benefits, or job-matching programs, or government-sponsored job training. But we cannot simply shrug and let people lose their jobs and their homes because the factories they worked in were moved to China.

Strange times for the labor market

Jan 9 JDN 2459589

Labor markets have been behaving quite strangely lately, due to COVID and its consequences. As I said in an earlier post, the COVID recession was the one recession I can think of that actually seemed to follow Real Business Cycle theory—where it was labor supply, not demand, that drove employment.

I dare say that for the first time in decades, the US government actually followed Keynesian policy. US federal government spending surged from $4.8 trillion to $6.8 trillion in a single year:

That is a staggering amount of additional spending; I don’t think any country in history has ever increased their spending by that large an amount in a single year, even inflation-adjusted. Yet in response to a recession that severe, this is exactly what Keynesian models prescribed—and for once, we listened. Instead of balking at the big numbers, we went ahead and spent the money.

And apparently it worked, because unemployment spiked to the worst levels seen since the Great Depression, then suddenly plummeted back to normal almost immediately:

Nor was this just the result of people giving up on finding work. U-6, the broader unemployment measure that includes people who are underemployed or have given up looking for work, shows the same unprecedented pattern:

The oddest part is that people are now quitting their jobs at the highest rate seen in over 20 years:

[FRED_quits.png]

This phenomenon has been dubbed the Great Resignation, and while its causes are still unclear, it is clearly the most important change in the labor market in decades.

In a previous post I hypothesized that this surge in strikes and quits was a coordination effect: The sudden, consistent shock to all labor markets at once gave people a focal point to coordinate their decision to strike.

But it’s also quite possible that it was the Keynesian stimulus that did it: The relief payments made it safe for people to leave jobs they had long hated, and they leapt at the opportunity.

When that huge surge in government spending was proposed, the usual voices came out of the woodwork to warn of terrible inflation. It’s true, inflation has been higher lately than usual, nearly 7% last year. But we still haven’t hit the double-digit inflation rates we had in the late 1970s and early 1980s:

Indeed, most of the inflation we’ve had can be explained by the shortages created by the supply chain crisis, along with a very interesting substitution effect created by the pandemic. As services shut down, people bought goods instead: Home gyms instead of gym memberships, wifi upgrades instead of restaurant meals.

As a result, the price of durable goods actually rose, when it had previously been falling for decades. That broader pattern is worth emphasizing: As technology advances, services like healthcare and education get more expensive, durable goods like phones and washing machines get cheaper, and nondurable goods like food and gasoline fluctuate but ultimately stay about the same. But in the last year or so, durable goods have gotten more expensive too, because people want to buy more while supply chains are able to deliver less.

This suggests that the inflation we are seeing is likely to go away in a few years, once the pandemic is better under control (or else reduced to something like influenza: the virus is always there, but we learn to live with it).

But I don’t think the effects on the labor market will be so transitory. The strikes and quits we’ve been seeing lately really are at a historic level, and they are likely to have a long-lasting effect on how work is organized. Employers are panicking about having to raise wages and whining about how “no one wants to work” (meaning, of course, no one wants to work at the current wage and conditions on offer). The correct response is the one from Goodfellas [language warning].

For the first time in decades, there are actually more job vacancies than unemployed workers:

This means that the tables have turned. The bargaining power is suddenly in the hands of workers again, after being in the hands of employers for as long as I’ve been alive. Of course it’s impossible to know whether some other shock could yield another reversal; but for now, it looks like we are finally on the verge of major changes in how labor markets operate—and I for one think it’s about time.

Reasons for optimism in 2022

Jan 2 JDN 2459582

When this post goes live, we will have begun the year 2022.

That still sounds futuristic, somehow. We’ve been in the 21st century long enough that most of my students were born in it and nearly all of them are old enough to drink (to be fair, it’s the UK, so “old enough to drink” only means 18). Yet “the year 2022” still seems like it belongs in science fiction, and not on our wall calendars.

2020 and 2021 were quite bad years. Death rates and poverty rates surged around the world. Almost all of that was directly or indirectly due to COVID.

Yet there are two things we should keep in perspective.

First, those death rates and poverty rates surged to what we used to consider normal 50 years ago. These are not uniquely bad times; indeed, they are still better than most of human history.

Second, there are many reasons to think that 2022—or perhaps a bit later than that, 2025 or 2030—will be better.

The Omicron variant is highly contagious, but so far does not appear to be as deadly as previous variants. COVID seems to be evolving to be more like influenza: Catching it will be virtually inevitable, but dying from it will be very rare.

Things are also looking quite good on the climate change front: Renewable energy production is growing at breathtaking speed and is now cheaper than almost every other form of energy. It’s awful that we panicked and locked down nuclear energy for the last 50 years, but at this point we may no longer need it: Solar and wind are just that good now.

Battery technology is also rapidly improving, giving us denser, cheaper, more stable batteries that may soon allow us to solve the intermittency problem: the wind may not always blow and the sun may not always shine, but if you have big enough batteries you don’t need them to. (You can get a really good feel for how much difference good batteries make in energy production by playing Factorio, or, more whimsically, Mewnbase.)

If we do go back to nuclear energy, it may not be fission anymore, but fusion. Now that we have nearly reached that vital milestone of break-even, investment in fusion technology has rapidly increased.


Fusion has basically all of the benefits of fission with none of the drawbacks. Unlike renewables, it can produce enormous amounts of energy in a way that can be easily scaled and controlled independently of weather conditions. Unlike fission, it requires no exotic nuclear fuels (deuterium can be readily obtained from water), and produces no long-lived radioactive waste. (Indeed, development is ongoing of methods that could use fusion products to reduce the waste from fission reactors, making the effective rate of nuclear waste production for fusion negative.) Like both renewables and fission, it produces no carbon emissions other than those required to build the facility (mainly due to concrete).

Of course, technology is only half the problem: we still need substantial policy changes to get carbon emissions down. We’ve already dragged our feet for decades too long, and we will pay the price for that. But anyone saying that climate change is an inevitable catastrophe hasn’t been paying attention to recent developments in solar panels.

Technological development in general seems to be speeding up lately, after having stalled quite a bit in the early 2000s. Moore’s Law may be leveling off, but the technological frontier may simply be moving away from digital computing power and onto other things, such as biotechnology.

Star Trek told us that we’d have prototype warp drives by the 2060s but we wouldn’t have bionic implants to cure blindness until the 2300s. They seem to have gotten it backwards: We may never have warp drive, but we’ve got those bionic implants today.

Neural interfaces are allowing paralyzed people to move, speak, and now even write.

After decades of failed promises, gene therapy is finally becoming useful in treating real human diseases. CRISPR changes everything.

We are also entering a new era of space travel, thanks largely to SpaceX and their remarkable reusable rockets. The payload cost to LEO is a standard measure of the cost of space travel, which describes the cost of carrying a certain mass of cargo up to low Earth orbit. By this measure, costs have declined from nearly $20,000 per kg to only $1,500 per kg since the 1960s. Elon Musk claims that he can reduce the cost to as low as $10 per kg. I’m skeptical, to say the least—but even dropping it to $500 or $200 would be a dramatic improvement and open up many new options for space exploration and even colonization.

To put this in perspective, the cost of carrying a human being to the International Space Station (about 100 kg to LEO) has fallen from $2 million to $150,000. A further decrease to $200 per kg would lower that to $20,000, opening the possibility of space tourism; $20,000 might be something even upper-middle-class people could do as a once-in-a-lifetime vacation. If Musk is really right that he can drop it all the way to $10 per kg, the cost to carry a person to the ISS would be only $1,000—something middle-class people could do regularly. (“Should we do Paris for our anniversary this year, or the ISS?”) Indeed, a cost that low would open the possibility of space-based shipping—for when you absolutely must have the product delivered from China to California in the next 2 hours.
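The arithmetic here is simple enough to check directly. A quick back-of-the-envelope sketch (the ~100 kg-per-passenger payload figure is the assumption; the per-kg prices are the ones quoted above):

```python
def trip_cost(cost_per_kg, payload_kg=100):
    """Cost to lift one passenger's ~100 kg of payload to low Earth orbit."""
    return cost_per_kg * payload_kg

# Per-kg launch prices discussed in the text
for label, per_kg in [("1960s", 20_000), ("today", 1_500),
                      ("near future?", 200), ("Musk's target", 10)]:
    print(f"{label:>13}: ${trip_cost(per_kg):>9,.0f} per person")
```

This reproduces the $2 million, $150,000, $20,000, and $1,000 figures in the paragraph above.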

Another way to put this in perspective is to convert these prices per mass in terms of those of commodities, such as precious metals. $20,000 per kg is nearly the price of solid platinum. $500 per kg is about the price of sterling silver. $10 per kg is roughly the price of copper.

The reasons for optimism are not purely technological. There has also been significant social progress just in the last few years, with major milestones on LGBT rights being made around the world in 2020 and 2021. Same-sex marriage is now legally recognized over nearly the entire Western Hemisphere.

None of that changes the fact that we are still in a global pandemic which seems to be increasingly out of control. I can’t tell you whether 2022 will be better than 2021, or just more of the same—or perhaps even worse.

But while these times are hard, overall the world is still making progress.

A very Omicron Christmas

Dec 26 JDN 2459575

Remember back in spring of 2020 when we thought that this pandemic would quickly get under control and life would go back to normal? How naive we were.

The newest Omicron strain seems to be the most infectious yet—even people who are fully vaccinated are catching it. The good news is that it also seems to be less deadly than most of the earlier strains. COVID is evolving to spread itself better, but not be as harmful to us—much as influenza and cold viruses evolved. While weekly cases are near an all-time peak, weekly deaths are well below the worst they had been.

Indeed, at this point, it’s looking like COVID will more or less be with us forever. In the most likely scenario, the virus will continue to evolve to be more infectious but less lethal, and then we will end up with another influenza on our hands: A virus that can’t be eradicated, gets huge numbers of people sick, but only kills a relatively small number. At some point we will decide that the risk of getting sick is low enough that it isn’t worth forcing people to work remotely or maybe even wear masks. And we’ll relax various restrictions and get back to normal with this new virus a regular part of our lives.


Merry Christmas?

But it’s not all bad news. The vaccination campaign has been staggeringly successful—the total number of vaccine doses administered now exceeds the world population, so on average each human being has received more than one dose of COVID vaccine.

And while 5.3 million deaths due to the virus over the last two years sounds terrible, it should be compared against the baseline rate of 15 million deaths during that same interval, and the fact that worldwide death rates have been rapidly declining. Had COVID not happened, 2021 would be like 2019, which had nearly the lowest death rate on record, at 7,579 deaths per million people per year. As it is, we’re looking at something more like 10,000 deaths per million people per year (1%), or roughly what we considered normal way back in the long-ago times of… the 1980s. To get even as bad as things were in the 1950s, we would have to double our current death rate.

Indeed, there’s something quite remarkable about the death rate we had in 2019, before the pandemic hit: 7,579 per million is only 0.76%. A being with a constant annual death rate of 0.76% would have a life expectancy of over 130 years. This very low death rate is partly due to demographics: The current world population is unusually young and healthy because the world recently went through huge surges in population growth. Due to demographic changes the UN forecasts that our death rate will start to climb again as fertility falls and the average age increases; but they are still predicting it will stabilize at about 11,200 per million per year, which would be a life expectancy of 90. And that estimate could well be too pessimistic, if medical technology continues advancing at anything like its current rate.
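The life-expectancy figures here follow from a simple model: with a constant annual death rate $d$, lifespan is geometrically distributed, so the expected lifespan is $1/d$:

```latex
E[\text{lifespan}] \;=\; \sum_{t=1}^{\infty} t \, d \, (1-d)^{t-1} \;=\; \frac{1}{d}
```

Plugging in the rates above: $d = 0.758\%$ gives $1/0.00758 \approx 132$ years, and the UN's forecast of $d = 1.12\%$ gives $1/0.0112 \approx 89$ years.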

We call it Christmas, but it’s really a syncretized amalgamation of holidays: Yule, Saturnalia, various Solstice celebrations. (Indeed, there’s no particular reason to think Jesus was even born in December.) Most Northern-hemisphere civilizations have some sort of Solstice holiday, and we’ve greedily co-opted traditions from most of them. The common theme really seems to be this:

Now it is dark, but band together and have hope, for the light shall return.

Diurnal beings in northerly latitudes instinctively fear the winter, when it becomes dark and cold and life becomes more hazardous—but we have learned to overcome this fear together, and we remind ourselves that light and warmth will return by ritual celebrations.

The last two years have made those celebrations particularly difficult, as we have needed to isolate ourselves in order to keep ourselves and others safe. Humans are fundamentally social at a level most people—even most scientists—do not seem to grasp: We need contact with other human beings as deeply and vitally as we need food or sleep.

The Internet has allowed us to get some level of social contact while isolated, which has been a tremendous boon; but I think many of us underestimated how much we would miss real face-to-face contact. I think much of the vague sense of malaise we’ve all been feeling even when we aren’t sick and even when we’ve largely adapted our daily routine to working remotely comes from this: We just aren’t getting the chance to see people in person nearly as often as we want—as often as we hadn’t even realized we needed.

So, if you do travel to visit family this holiday season, I understand your need to do so. But be careful. Get vaccinated—three times, if you can. Don’t have any contact with others who are at high risk if you do have any reason to think you’re infected.

Let’s hope next Christmas is better.

What’s wrong with police unions?

Nov 14 JDN 2459531

In a previous post I talked about why unions, even though they are collusive, are generally a good thing. But there is one very important exception to this rule: Police unions are almost always harmful.

Most recently, police unions have been leading the charge to fight vaccine mandates. This despite the fact that COVID-19 now kills more police officers than any other cause. They threatened that huge numbers of officers would leave if the mandates were imposed—but it didn’t happen.

But there is a much broader pattern than this: Police unions systematically take the side of individual police officers over the interests of public safety. Even the most incompetent, negligent, or outright murderous behavior by police officers will typically be defended by police unions. (One encouraging development is that lately even some police unions have been reluctant to defend the most outrageous killings by police officers—but this is very much the exception, not the rule.)

Police unions are also unusual among unions in their political ties. Conservatives generally oppose unions, but are much friendlier toward police unions. At the other end of the spectrum, socialists normally love unions, but have distanced themselves from police unions for a long time. (The argument in that article that this is because “no other job involves killing people” is a bit weird: Ostensibly, the circumstances in which police are allowed to kill people are not all that different from the circumstances in which private citizens are. Just like us, they’re only supposed to use deadly force to prevent death or grievous bodily harm to themselves or others. The main thing that police are allowed to do that we aren’t is imprison people. Killing isn’t supposed to be a major part of the job.)

Police unions also have some other weird features. The total membership of all police unions exceeds the total number of police officers in the United States, because a single officer is often affiliated with multiple unions—normally not at all how unions work. Police unions are also especially powerful and well-organized among unions. They are especially well-funded, and their members are especially loyal.

If we were to adopt a categorical view that unions are always good or always bad—as many people seem to want to—it’s difficult to see why police unions should be different from teachers’ unions or factory workers’ unions. But my argument was very careful not to make such categorical statements. Unions aren’t always or inherently good; they are usually good, because of how they are correcting a power imbalance between workers and corporations.

But when it comes to police, the situation is quite different. Police unions give more bargaining power to government officers against… what? Public accountability? The democratic system? Corporate CEOs are accountable only to their shareholders, but the mayors and city councils who decide police policy are elected (in most of the UK, even police commissioners are directly elected). It’s not clear that there was an imbalance in bargaining power here we would want to correct.

A similar case could be made against all public-sector unions, and indeed that case often is extended to teachers’ unions. If we must sacrifice teachers’ unions in order to destroy police unions, I’d be prepared to bite that bullet. But there are vital differences here as well. Teachers are not responsible for imprisoning people, and bad teachers almost never kill people. (In the rare cases in which teachers have committed murder, they have been charged to the full extent of the law, just as they would be in any other profession.) There surely is some misconduct by teachers that some unions may be protecting, but the harm caused by that misconduct is far lower than the harm caused by police misconduct. Teacher unions also provide a layer of protection for teachers to exercise autonomy, promoting academic freedom.

The form of teacher misconduct I would be most concerned about is sexual abuse of students. And while I’ve seen many essays claiming that teacher unions protect sexual abusers, the only concrete evidence I could find on the subject was a teachers’ union publicly complaining that the government had failed to pass stricter laws against sexual abuse by teachers. The research on teacher misconduct mainly focuses on other causal factors aside from union representation.

Even this Fox News article cherry-picking the worst examples of unions protecting abusive teachers includes line after line like “he was ultimately fired”, “he was pressured to resign”, and “his license was suspended”. So their complaint seems to be that it wasn’t done fast enough? But a fair justice system is necessarily slow. False accusations are rare, but they do happen—we can’t just take someone’s word for it. Ensuring that you don’t get fired until the district mounts strong evidence of misconduct against you is exactly what unions should be doing.

Whether unions are good or bad in a particular industry is ultimately an empirical question. So let’s look at the data, shall we? Teacher unions are positively correlated with school performance. But police unions are positively correlated with increased violent misconduct. There you have it: Teacher unions are good, but police unions are bad.

Labor history in the making

Oct 24 JDN 2459512

To say that these are not ordinary times would be a grave understatement. I don’t need to tell you all the ways that this interminable pandemic has changed the lives of people all around the world.

But one in particular is of notice to economists: Labor in the United States is fighting back.

Quit rates are at historic highs. Over 100,000 workers in a variety of industries are simultaneously on strike, ranging from farmworkers to nurses and freelance writers to university lecturers.

After decades of quiescence to ever-worsening working conditions, it seems that finally American workers are mad as hell and not gonna take it anymore.

It’s about time, frankly. The real question is why it took this long. Working conditions in the US have been systematically worse than in the rest of the First World since at least the 1980s. After I started work in the UK, it was substantially easier to get the leave I needed to attend my own wedding (held in the US) than it would have been at the same kind of job in the US, because UK law requires employers to grant leave from an employee's first day of work, while US federal law, and the law in many states, doesn't require leave at all for anyone—not even people who are sick or recently gave birth.

So, why did it happen now? What changed? The pandemic threw our lives into turmoil, that much is true. But it didn’t fundamentally change the power imbalance between workers and employers. Why was that enough?

I think I know why. The shock from the pandemic didn’t have to be enough to actually change people’s minds about striking—it merely had to be enough to convince people that others would show up. It wasn’t the first-order intention “I want to strike” that changed; it was the second-order belief “Other people want to strike too”.

For a labor strike is a coordination game par excellence. If 1 person strikes, they get fired and replaced. If 2 or 3 or 10 strike, most likely the same thing. But if 10,000 strike? If 100,000 strike? Suddenly corporations have no choice but to give in.

The most important question on your mind when you are deciding whether or not to strike is not, “Do I hate my job?” but “Will my co-workers have my back?”.

Coordination games exhibit a very fascinating—and still not well-understood—phenomenon known as Schelling points. People will typically latch onto certain seemingly-arbitrary features of their choices, and do so well enough that simply having such a focal point can radically increase the level of successful coordination.

I believe that the pandemic shock was just such a Schelling point. It didn’t change most people’s working conditions all that much: though I can see why nurses in particular would be upset, it’s not clear to me that being a university lecturer is much worse now than it was a year ago. But what the pandemic did do was change everyone’s working conditions, all at once. It was a sudden shock toward work dissatisfaction that applied to almost the entire workforce.

Thus, many people who were previously on the fence about striking were driven over the edge—and then this in turn made others willing to take the leap as well, suddenly confident that they would not be acting alone.

Another important feature of the pandemic shock was that it took away a lot of what people had left to lose. Consider the two following games.

Game A: You and 100 other people each separately, without communicating, decide to choose X or Y. If you all choose X, you each get $20. But if even one of you chooses Y, then everyone who chooses Y gets $1 but everyone who chooses X gets nothing.

Game B: Same as the above, except that if anyone chooses Y, everyone who chooses Y also gets nothing.

Game A is tricky, isn’t it? You want to choose X, and you’d be best off if everyone did. But can you really trust 100 other people to all choose X? Maybe you should take the safe bet and choose Y—but then, they’re thinking the same way.


Game B, on the other hand, is painfully easy: Choose X. Obviously choose X. There’s no downside, and potentially a big upside.

In terms of game theory, both games have the same two Nash equilibria: all-X and all-Y. But in the second game, I made X a weakly dominant strategy, and that made all the difference.

We could run these games in the lab, and I’m pretty sure I know what we’d find: In game A, most people choose X, but some people don’t, and if you repeat the game more and more people choose Y. But in game B, almost everyone chooses X and keeps on choosing X. Maybe they don’t get unanimity every time, but they probably do get it most of the time—because why wouldn’t you choose X? (These are testable hypotheses! I could in fact run this experiment! Maybe I should?)
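Those hypotheses are also easy to sketch in simulation before going to the lab. Here is a minimal, hypothetical version (the agent model, parameters, and error rate are all my own assumptions, not data from any actual experiment): 100 myopic agents repeatedly best-respond to the previous round's play, with a small "trembling hand" error rate. In game A, a single tremble to Y makes Y everyone's best response, so cooperation collapses; in game B, X weakly dominates, so play stays near all-X.

```python
import random

def payoff(mine, others, game):
    """Game A: all-X pays $20 each; if anyone plays Y, Y-players get $1, X-players $0.
    Game B: identical, except Y-players also get $0."""
    if mine == "X":
        return 20 if all(c == "X" for c in others) else 0
    return 1 if game == "A" else 0

def simulate(game, n=100, rounds=50, noise=0.02, seed=1):
    """Myopic best-response dynamics with occasional mistakes.
    Returns how many of the n players choose X in the final round."""
    rng = random.Random(seed)
    choices = ["X"] * n  # everyone starts out cooperating
    for _ in range(rounds):
        new = []
        for i in range(n):
            others = choices[:i] + choices[i + 1:]
            # Best response to what everyone else did last round; ties go to X.
            best = max(["X", "Y"], key=lambda c: payoff(c, others, game))
            if rng.random() < noise:  # trembling hand
                best = "Y" if best == "X" else "X"
            new.append(best)
        choices = new
    return choices.count("X")
```

With these (arbitrary) parameters, game A reliably ends up near all-Y while game B stays near all-X—the same pattern I'd expect from human subjects.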

It’s hard to say at this point how effective these strikes will be. Surely there will be some concessions won—there are far too many workers striking for them all to get absolutely nothing. But it remains uncertain whether the concessions will be small, token changes just to break up the strikes, or serious, substantive restructuring of how work is done in the United States.

If the latter sounds overly optimistic, consider that this is basically what happened in the New Deal. Those massive—and massively successful—reforms were not generated out of nowhere; they were the result of the economic crisis of the Great Depression and substantial pressure by organized labor. We may yet see a second New Deal (a Green New Deal?) in the 2020s if labor organizations can continue putting the pressure on.

The most important thing in making such a grand effort possible is believing that it’s possible—only if enough people believe it can happen will enough people take the risk and put in the effort to make it happen. Apathy and cynicism are the most powerful weapons of the status quo.


We are witnessing history in the making. Let’s make it in the right direction.