Reasons for optimism in 2022

Jan 2 JDN 2459582

When this post goes live, we will have begun the year 2022.

That still sounds futuristic, somehow. We’ve been in the 21st century long enough that most of my students were born in it and nearly all of them are old enough to drink (to be fair, it’s the UK, so “old enough to drink” only means 18). Yet “the year 2022” still seems like it belongs in science fiction, and not on our wall calendars.

2020 and 2021 were quite bad years. Death rates and poverty rates surged around the world. Almost all of that was directly or indirectly due to COVID.

Yet there are two things we should keep in perspective.

First, those death rates and poverty rates surged to what we used to consider normal 50 years ago. These are not uniquely bad times; indeed, they are still better than most of human history.

Second, there are many reasons to think that 2022—or perhaps a bit later than that, 2025 or 2030—will be better.

The Omicron variant is highly contagious, but so far does not appear to be as deadly as previous variants. COVID seems to be evolving to be more like influenza: Catching it will be virtually inevitable, but dying from it will be very rare.

Things are also looking quite good on the climate change front: Renewable energy production is growing at breathtaking speed and is now cheaper than almost every other form of energy. It’s awful that we panicked and locked down nuclear energy for the last 50 years, but at this point we may no longer need it: Solar and wind are just that good now.

Battery technology is also rapidly improving, giving us denser, cheaper, more stable batteries that may soon allow us to solve the intermittency problem: the wind may not always blow and the sun may not always shine, but if you have big enough batteries you don’t need them to. (You can get a really good feel for how much difference good batteries make in energy production by playing Factorio, or, more whimsically, Mewnbase.)

If we do go back to nuclear energy, it may not be fission anymore, but fusion. Now that we have nearly reached that vital milestone of break-even, investment in fusion technology has rapidly increased.


Fusion has basically all of the benefits of fission with none of the drawbacks. Unlike renewables, it can produce enormous amounts of energy in a way that can be easily scaled and controlled independently of weather conditions. Unlike fission, it requires no exotic nuclear fuels (deuterium can be readily obtained from water), and produces no long-lived radioactive waste. (Indeed, methods are being developed that could use fusion products to reduce the waste from fission reactors, making the effective rate of nuclear waste production for fusion negative.) Like both renewables and fission, it produces no carbon emissions other than those required to build the facility (mainly due to concrete).

Of course, technology is only half the problem: we still need substantial policy changes to get carbon emissions down. We’ve already dragged our feet for decades too long, and we will pay the price for that. But anyone saying that climate change is an inevitable catastrophe hasn’t been paying attention to recent developments in solar panels.

Technological development in general seems to be speeding up lately, after having stalled quite a bit in the early 2000s. Moore’s Law may be leveling off, but the technological frontier may simply be moving away from digital computing power and onto other things, such as biotechnology.

Star Trek told us that we’d have prototype warp drives by the 2060s but we wouldn’t have bionic implants to cure blindness until the 2300s. They seem to have gotten it backwards: We may never have warp drive, but we’ve got those bionic implants today.

Neural interfaces are allowing paralyzed people to move, speak, and now even write.

After decades of failed promises, gene therapy is finally becoming useful in treating real human diseases. CRISPR changes everything.

We are also entering a new era of space travel, thanks largely to SpaceX and their remarkable reusable rockets. The payload cost to LEO is a standard measure of the cost of space travel, which describes the cost of carrying a certain mass of cargo up to low Earth orbit. By this measure, costs have declined from nearly $20,000 per kg to only $1,500 per kg since the 1960s. Elon Musk claims that he can reduce the cost to as low as $10 per kg. I’m skeptical, to say the least—but even dropping it to $500 or $200 would be a dramatic improvement and open up many new options for space exploration and even colonization.

To put this in perspective, the cost of carrying a human being to the International Space Station (about 100 kg to LEO) has fallen from $2 million to $150,000. A further decrease to $200 per kg would lower that to $20,000, opening the possibility of space tourism; $20,000 might be something even upper-middle-class people could do as a once-in-a-lifetime vacation. If Musk is really right that he can drop it all the way to $10 per kg, the cost to carry a person to the ISS would be only $1000—something middle-class people could do regularly. (“Should we do Paris for our anniversary this year, or the ISS?”) Indeed, a cost that low would open the possibility of space-based shipping—for when you absolutely must have the product delivered from China to California in the next 2 hours.
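
If you want to check those figures yourself, here is a minimal Python sketch of that arithmetic, using the launch prices quoted above and the same rough 100 kg per person; the labels are mine, not precise mission costs.

```python
# Rough cost of sending one person (~100 kg of payload) to low Earth orbit,
# at the launch prices discussed above. These are round numbers, not quotes.
mass_kg = 100

launch_prices = [
    ("1960s-era launch",       20_000),
    ("current Falcon 9-class",  1_500),
    ("hypothetical future",       500),
    ("hypothetical future",       200),
    ("Musk's claimed target",      10),
]

for label, dollars_per_kg in launch_prices:
    cost = mass_kg * dollars_per_kg
    print(f"{label:24s} ${dollars_per_kg:>6,}/kg  ->  ${cost:>9,} per person")
```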

Another way to put this in perspective is to compare these prices per kilogram to those of commodities, such as precious metals. $20,000 per kg is nearly the price of solid platinum. $500 per kg is about the price of sterling silver. $10 per kg is roughly the price of copper.

The reasons for optimism are not purely technological. There has also been significant social progress just in the last few years, with major milestones on LGBT rights being made around the world in 2020 and 2021. Same-sex marriage is now legally recognized over nearly the entire Western Hemisphere.

None of that changes the fact that we are still in a global pandemic which seems to be increasingly out of control. I can’t tell you whether 2022 will be better than 2021, or just more of the same—or perhaps even worse.

But while these times are hard, overall the world is still making progress.

A very Omicron Christmas

Dec 26 JDN 2459575

Remember back in spring of 2020 when we thought that this pandemic would quickly get under control and life would go back to normal? How naive we were.

The newest Omicron strain seems to be the most infectious yet—even people who are fully vaccinated are catching it. The good news is that it also seems to be less deadly than most of the earlier strains. COVID is evolving to spread itself better, but not be as harmful to us—much as influenza and cold viruses evolved. While weekly cases are near an all-time peak, weekly deaths are well below the worst they had been.

Indeed, at this point, it’s looking like COVID will more or less be with us forever. In the most likely scenario, the virus will continue to evolve to be more infectious but less lethal, and then we will end up with another influenza on our hands: A virus that can’t be eradicated, gets huge numbers of people sick, but only kills a relatively small number. At some point we will decide that the risk of getting sick is low enough that it isn’t worth forcing people to work remotely or maybe even wear masks. And we’ll relax various restrictions and get back to normal with this new virus a regular part of our lives.


Merry Christmas?

But it’s not all bad news. The vaccination campaign has been staggeringly successful—the total number of vaccine doses administered now exceeds the world population, an average of more than one dose per human being.

And while 5.3 million deaths due to the virus over the last two years sounds terrible, it should be compared against the baseline rate of 15 million deaths during that same interval, and the fact that worldwide death rates have been rapidly declining. Had COVID not happened, 2021 would be like 2019, which had nearly the lowest death rate on record, at 7,579 deaths per million people per year. As it is, we’re looking at something more like 10,000 deaths per million people per year (1%), or roughly what we considered normal way back in the long-ago times of… the 1980s. To get even as bad as things were in the 1950s, we would have to double our current death rate.

Indeed, there’s something quite remarkable about the death rate we had in 2019, before the pandemic hit: 7,579 per million is only 0.76%. A being with a constant annual death rate of 0.76% would have a life expectancy of over 130 years. This very low death rate is partly due to demographics: The current world population is unusually young and healthy because the world recently went through huge surges in population growth. Due to demographic changes the UN forecasts that our death rate will start to climb again as fertility falls and the average age increases; but they are still predicting it will stabilize at about 11,200 per million per year, which would be a life expectancy of 90. And that estimate could well be too pessimistic, if medical technology continues advancing at anything like its current rate.
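
For anyone who wants to see where those life expectancy numbers come from, here is a quick sketch; the 1/d rule assumes a constant annual death rate, which is only a rough approximation for real populations.

```python
# If the probability of dying each year is a constant d, the expected number
# of years lived is 1/d (a geometric distribution). Figures are those above.
for label, deaths_per_million in [("2019 world death rate", 7_579),
                                  ("UN projected plateau ", 11_200)]:
    d = deaths_per_million / 1_000_000
    print(f"{label}: {d:.2%} per year -> life expectancy ~ {1/d:.0f} years")
```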

We call it Christmas, but it’s really a syncretized amalgamation of holidays: Yule, Saturnalia, various Solstice celebrations. (Indeed, there’s no particular reason to think Jesus was even born in December.) Most Northern-hemisphere civilizations have some sort of Solstice holiday, and we’ve greedily co-opted traditions from most of them. The common theme really seems to be this:

Now it is dark, but band together and have hope, for the light shall return.

Diurnal beings in northerly latitudes instinctively fear the winter, when it becomes dark and cold and life becomes more hazardous—but we have learned to overcome this fear together, and we remind ourselves that light and warmth will return by ritual celebrations.

The last two years have made those celebrations particularly difficult, as we have needed to isolate ourselves in order to keep ourselves and others safe. Humans are fundamentally social at a level most people—even most scientists—do not seem to grasp: We need contact with other human beings as deeply and vitally as we need food or sleep.

The Internet has allowed us to get some level of social contact while isolated, which has been a tremendous boon; but I think many of us underestimated how much we would miss real face-to-face contact. I think much of the vague sense of malaise we’ve all been feeling even when we aren’t sick and even when we’ve largely adapted our daily routine to working remotely comes from this: We just aren’t getting the chance to see people in person nearly as often as we want—as often as we hadn’t even realized we needed.

So, if you do travel to visit family this holiday season, I understand your need to do so. But be careful. Get vaccinated—three times, if you can. Don’t have any contact with others who are at high risk if you do have any reason to think you’re infected.

Let’s hope next Christmas is better.

What’s wrong with police unions?

Nov 14 JDN 2459531

In a previous post I talked about why unions, even though they are collusive, are generally a good thing. But there is one very important exception to this rule: Police unions are almost always harmful.

Most recently, police unions have been leading the charge to fight vaccine mandates. This despite the fact that COVID-19 now kills more police officers than any other cause. They threatened that huge numbers of officers would leave if the mandates were imposed—but it didn’t happen.

But there is a much broader pattern than this: Police unions systematically take the side of individual police officers over the interests of public safety. Even the most incompetent, negligent, or outright murderous behavior by police officers will typically be defended by police unions. (One encouraging development is that lately even some police unions have been reluctant to defend the most outrageous killings by police officers—but this is very much the exception, not the rule.)

Police unions are also unusual among unions in their political ties. Conservatives generally oppose unions, but are much friendlier toward police unions. At the other end of the spectrum, socialists normally love unions, but have distanced themselves from police unions for a long time. (The argument in that article that this is because “no other job involves killing people” is a bit weird: Ostensibly, the circumstances in which police are allowed to kill people are not all that different from the circumstances in which private citizens are. Just like us, they’re only supposed to use deadly force to prevent death or grievous bodily harm to themselves or others. The main thing that police are allowed to do that we aren’t is imprison people. Killing isn’t supposed to be a major part of the job.)

Police unions also have some other weird features. The total membership of all police unions exceeds the total number of police officers in the United States, because a single officer is often affiliated with multiple unions—not at all how unions normally work. Police unions are also especially powerful and well-organized compared to other unions: they are especially well-funded, and their members are especially loyal.

If we were to adopt a categorical view that unions are always good or always bad—as many people seem to want to—it’s difficult to see why police unions should be different from teachers’ unions or factory workers’ unions. But my argument was very careful not to make such categorical statements. Unions aren’t always or inherently good; they are usually good, because of how they are correcting a power imbalance between workers and corporations.

But when it comes to police, the situation is quite different. Police unions give more bargaining power to government officers against… what? Public accountability? The democratic system? Corporate CEOs are accountable only to their shareholders, but the mayors and city councils who decide police policy are elected (in most of the UK, even police commissioners are directly elected). It’s not clear that there was an imbalance in bargaining power here we would want to correct.

A similar case could be made against all public-sector unions, and indeed that case often is extended to teachers’ unions. If we must sacrifice teachers’ unions in order to destroy police unions, I’d be prepared to bite that bullet. But there are vital differences here as well. Teachers are not responsible for imprisoning people, and bad teachers almost never kill people. (In the rare cases in which teachers have committed murder, they have been charged to the full extent of the law, just as they would be in any other profession.) There surely is some misconduct by teachers that some unions may be protecting, but the harm caused by that misconduct is far lower than the harm caused by police misconduct. Teacher unions also provide a layer of protection for teachers to exercise autonomy, promoting academic freedom.

The form of teacher misconduct I would be most concerned about is sexual abuse of students. And while I’ve seen many essays claiming that teacher unions protect sexual abusers, the only concrete evidence I could find on the subject was a teachers’ union publicly complaining that the government had failed to pass stricter laws against sexual abuse by teachers. The research on teacher misconduct mainly focuses on other causal factors aside from union representation.

Even this Fox News article cherry-picking the worst examples of unions protecting abusive teachers includes line after line like “he was ultimately fired”, “he was pressured to resign”, and “his license was suspended”. So their complaint seems to be that it wasn’t done fast enough? But a fair justice system is necessarily slow. False accusations are rare, but they do happen—we can’t just take someone’s word for it. Ensuring that you don’t get fired until the district mounts strong evidence of misconduct against you is exactly what unions should be doing.

Whether unions are good or bad in a particular industry is ultimately an empirical question. So let’s look at the data, shall we? Teacher unions are positively correlated with school performance. But police unions are positively correlated with increased violent misconduct. There you have it: Teacher unions are good, but police unions are bad.

Labor history in the making

Oct 24 JDN 2459512

To say that these are not ordinary times would be a grave understatement. I don’t need to tell you all the ways that this interminable pandemic has changed the lives of people all around the world.

But one in particular is of notice to economists: Labor in the United States is fighting back.

Quit rates are at historic highs. Over 100,000 workers in a variety of industries are simultaneously on strike, ranging from farmworkers to nurses and freelance writers to university lecturers.

After decades of quiescence to ever-worsening working conditions, it seems that finally American workers are mad as hell and not gonna take it anymore.

It’s about time, frankly. The real question is why it took this long. Working conditions in the US have been systematically worse than in the rest of the First World since at least the 1980s. It was substantially easier to get the leave I needed to attend my own wedding—in the US—after starting work in the UK than it would have been at the same kind of job in the US, because UK law requires employers to grant leave from the day an employee starts work, while US federal law and the law in many states don’t require leave at all for anyone—not even people who are sick or recently gave birth.

So, why did it happen now? What changed? The pandemic threw our lives into turmoil, that much is true. But it didn’t fundamentally change the power imbalance between workers and employers. Why was that enough?

I think I know why. The shock from the pandemic didn’t have to be enough to actually change people’s minds about striking—it merely had to be enough to convince people that others would show up. It wasn’t the first-order intention “I want to strike” that changed; it was the second-order belief “Other people want to strike too”.

For a labor strike is a coordination game par excellence. If 1 person strikes, they get fired and replaced. If 2 or 3 or 10 strike, most likely the same thing. But if 10,000 strike? If 100,000 strike? Suddenly corporations have no choice but to give in.

The most important question on your mind when you are deciding whether or not to strike is not, “Do I hate my job?” but “Will my co-workers have my back?”.

Coordination games exhibit a fascinating—and still not well-understood—phenomenon known as the Schelling point, or focal point. People will typically latch onto certain seemingly-arbitrary features of their choices, and do so reliably enough that simply having such a focal point can radically increase the level of successful coordination.

I believe that the pandemic shock was just such a Schelling point. It didn’t change most people’s working conditions all that much: though I can see why nurses in particular would be upset, it’s not clear to me that being a university lecturer is much worse now than it was a year ago. But what the pandemic did do was change everyone’s working conditions, all at once. It was a sudden shock toward work dissatisfaction that applied to almost the entire workforce.

Thus, many people who were previously on the fence about striking were driven over the edge—and then this in turn made others willing to take the leap as well, suddenly confident that they would not be acting alone.

Another important feature of the pandemic shock was that it took away a lot of what people had left to lose. Consider the following two games.

Game A: You and 100 other people each separately, without communicating, decide to choose X or Y. If you all choose X, you each get $20. But if even one of you chooses Y, then everyone who chooses Y gets $1 but everyone who chooses X gets nothing.

Game B: Same as the above, except that if anyone chooses Y, everyone who chooses Y also gets nothing.

Game A is tricky, isn’t it? You want to choose X, and you’d be best off if everyone did. But can you really trust 100 other people to all choose X? Maybe you should take the safe bet and choose Y—but then, they’re thinking the same way.


Game B, on the other hand, is painfully easy: Choose X. Obviously choose X. There’s no downside, and potentially a big upside.

In terms of game theory, both games have the same two Nash equilibria: all-X and all-Y. But in the second game, I made all-X also a weakly dominant strategy equilibrium, and that made all the difference.

We could run these games in the lab, and I’m pretty sure I know what we’d find: In game A, most people choose X, but some people don’t, and if you repeat the game more and more people choose Y. But in game B, almost everyone chooses X and keeps on choosing X. Maybe they don’t get unanimity every time, but they probably do get it most of the time—because why wouldn’t you choose X? (These are testable hypotheses! I could in fact run this experiment! Maybe I should?)
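
Here is a minimal simulation sketch of that incentive structure (it models the payoffs of the games above, not real human behavior): it estimates the expected value of choosing X when you believe each co-worker will independently choose X with probability p.

```python
import random

def payoff(my_choice, others, game):
    """One player's payoff in the 101-player game described above.
    Game A: unanimous X pays $20 each; otherwise Y players get $1, X players $0.
    Game B: same, except Y players also get $0."""
    if my_choice == "X":
        return 20 if all(c == "X" for c in others) else 0
    return 1 if game == "A" else 0

def expected_x_payoff(p, game, n_others=100, trials=20_000):
    """Monte Carlo estimate of the payoff to choosing X when each of the
    other players independently chooses X with probability p."""
    total = 0
    for _ in range(trials):
        others = ["X" if random.random() < p else "Y" for _ in range(n_others)]
        total += payoff("X", others, game)
    return total / trials

if __name__ == "__main__":
    for p in (0.90, 0.95, 0.99, 1.00):
        ev = expected_x_payoff(p, "A")
        print(f"Game A, p = {p:.2f}: choosing X is worth ~${ev:.2f}; "
              f"choosing Y guarantees $1")
    # In Game B, choosing Y guarantees $0, so X weakly dominates regardless of p.
```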

It’s hard to say at this point how effective these strikes will be. Surely there will be some concessions won—there are far too many workers striking for them all to get absolutely nothing. But it remains uncertain whether the concessions will be small, token changes just to break up the strikes, or serious, substantive restructuring of how work is done in the United States.

If the latter sounds overly optimistic, consider that this is basically what happened in the New Deal. Those massive—and massively successful—reforms were not generated out of nowhere; they were the result of the economic crisis of the Great Depression and substantial pressure by organized labor. We may yet see a second New Deal (a Green New Deal?) in the 2020s if labor organizations can continue putting the pressure on.

The most important thing in making such a grand effort possible is believing that it’s possible—only if enough people believe it can happen will enough people take the risk and put in the effort to make it happen. Apathy and cynicism are the most powerful weapons of the status quo.


We are witnessing history in the making. Let’s make it in the right direction.

Stupid problems, stupid solutions

Oct 17 JDN 2459505

Krugman thinks we should Mint The Coin: Mint a $1 trillion platinum coin and then deposit it at the Federal Reserve, thus creating, by fiat, the money to pay for the current budget without increasing the national debt.

This sounds pretty stupid. Quite frankly, it is stupid. But sometimes stupid problems require stupid solutions. And the debt ceiling is an incredibly stupid problem.

Let’s be clear about this: Congress already passed the budget. They had a right to vote it down—that is indeed their Constitutional responsibility. But they passed it. And now that the budget is passed, including all its various changes to taxes and spending, it necessarily requires a certain amount of debt increase to make it work.

There’s really no reason to have a debt ceiling at all. This is an arbitrary self-imposed credit constraint on the US government, which is probably the single institution in the world that least needs to worry about credit constraints. The US is currently borrowing at extremely low interest rates, and has never defaulted in 200 years. There is no reason it should be worrying about taking on additional debt, especially when it is being used to pay for important long-term investments such as infrastructure and education.

But if we’re going to have a debt ceiling, it should be a simple formality. Congress does the calculation to see how much debt will be needed, and if it accepts that amount, passes the budget and raises the debt ceiling as necessary. If for whatever reason they don’t want to incur the additional debt, they should make changes to the budget accordingly—not pass the budget and then act shocked when they need to raise the debt ceiling.

In fact, there is a pretty good case to be made that the debt ceiling is a violation of the Fourteenth Amendment, which states in Section 4: “The validity of the public debt of the United States, authorized by law, including debts incurred for payment of pensions and bounties for services in suppressing insurrection or rebellion, shall not be questioned.” This was originally intended to ensure the validity of Civil War debt, but it has been interpreted by the Supreme Court to mean that all US public debt legally incurred is valid and thus render the debt ceiling un-Constitutional.

Of course, actually sending it to the Supreme Court would take a long time—too long to avoid turmoil in financial markets if the debt ceiling is not raised. So perhaps Krugman is right: Perhaps it’s time to Mint The Coin and fight stupid with stupid.

Unending nightmares

Sep 19 JDN 2459477

We are living in a time of unending nightmares.

As I write this, we have just passed the 20th anniversary of 9/11. Yet only in the past month were US troops finally withdrawn from Afghanistan—and that withdrawal was immediately followed by a total collapse of the Afghan government and a reinstatement of the Taliban. The United States had been at war for nearly 20 years, spending trillions of dollars and causing thousands of deaths—and seems to have accomplished precisely nothing.

(Some left-wing circles have been saying that the Taliban offered surrender all the way back in 2001; this is not accurate. Alternet even refers to it as an “unconditional surrender,” which is utter nonsense. No one in their right mind—not even the most die-hard imperialist—would ever refuse an unconditional surrender, and the US most certainly did nothing of the sort.)

The Taliban did offer a peace deal in 2001, which would have involved giving the US control of Kandahar and turning Osama bin Laden over to a neutral country (not to the US or any US ally). It would also have granted amnesty to a number of high-level Taliban leaders, which was a major sticking point for the US. In hindsight, should they have taken the deal? Obviously. But I don’t think that was nearly so clear at the time—nor would it have been particularly palatable to most of the American public to leave Osama bin Laden under house arrest in some neutral country (which they never specified by the way; somewhere without US extradition, presumably?) and grant amnesty to the top leaders of the Taliban.

Thus, even after the 20-year nightmare of the war that refused to end, we are still back to the nightmare we were in before—Afghanistan ruled by fanatics who will oppress millions.

Yet somehow this isn’t even the worst unending nightmare, for after a year and a half we are still in the throes of a global pandemic which has now caused over 4.6 million deaths. We are still wearing masks wherever we go—at least, those of us who are complying with the rules. We have gotten vaccinated already, but likely will need booster shots—at least, those of us who believe in vaccines.

The most disturbing part of it all is how many people still aren’t willing to follow the most basic demands of public health agencies.

In case you thought this was just an American phenomenon: Just a few days ago I looked out the window of my apartment to see a protest in front of the Scottish Parliament complaining about vaccine and mask mandates, with signs declaring it all a hoax. (Yes, my current temporary apartment overlooks the Scottish Parliament.)

Some of those signs displayed a perplexing innumeracy. One sign claimed that the vaccines must be stopped because they had killed 1,400 people in the UK. This is not actually true; while there have been 1,400 people in the UK who died after receiving a vaccine, 48 million people in the UK have gotten the vaccine, and many of them were old and/or sick, so, purely by statistics, we’d expect some of them to die shortly afterward. Less than 100 of these deaths are in any way attributable to the vaccine.

But suppose for a moment that we took the figure at face value, and assumed, quite implausibly, that everyone who died shortly after getting the vaccine was in fact killed by the vaccine. This 1,400 figure needs to be compared against the 156,000 UK deaths attributable to COVID itself. Since 7 million people in the UK have tested positive for the virus, this is a fatality rate of over 2%. Even if we suppose that literally everyone in the UK who hasn’t been vaccinated in fact had the virus, that would still only be 20 million (the UK population of 68 million – the 48 million vaccinated) people, so the death rate for COVID itself would still be at least 0.8%—a staggeringly high fatality rate for a pandemic airborne virus.

Meanwhile, even on this ridiculous overestimate of the deaths caused by the vaccine, the fatality rate for vaccination would be at most 0.003%. Thus, even by the anti-vaxers’ own claims, the vaccine is nearly 300 times safer than catching the virus. If we use the official estimates of a 1.9% COVID fatality rate and 100 deaths caused by the vaccines, the vaccines are in fact over 9000 times safer.
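
To make that arithmetic easier to follow, here it is as a short Python script; the inputs are the same rounded figures quoted above, not independent estimates.

```python
# UK figures as quoted above (rounded, and deliberately generous to the
# anti-vaccine argument wherever there is a choice to make).
uk_population    = 68_000_000
vaccinated       = 48_000_000
deaths_after_vax = 1_400      # died after a vaccine dose, from any cause
covid_deaths     = 156_000
covid_cases      = 7_000_000

cfr_confirmed = covid_deaths / covid_cases                  # ~2.2%
cfr_generous  = covid_deaths / (uk_population - vaccinated) # assume every unvaccinated person infected
vax_risk      = deaths_after_vax / vaccinated               # attribute every post-vaccine death to the vaccine

print(f"COVID fatality rate (confirmed cases):          {cfr_confirmed:.2%}")
print(f"COVID fatality rate (absurdly generous):        {cfr_generous:.2%}")
print(f"'Vaccine' fatality rate (absurdly pessimistic): {vax_risk:.4%}")
print(f"Safety ratio even on those assumptions:         {cfr_generous / vax_risk:.0f}x")
print(f"Safety ratio with official estimates:           {0.019 / (100 / vaccinated):.0f}x")
```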

Yet it does seem to be worse in the United States, as while 22% of Americans described themselves as opposed to vaccination in general, only about 2% of Britons said the same.

But this did not translate to such a large difference in actual vaccination: While 70% of people in the UK have received the vaccine, 64% of people in the US have. Both of these figures are tantalizingly close to, yet clearly below, the at least 84% necessary to achieve herd immunity. (Actually some early estimates thought 60-70% might be enough—but epidemiologists no longer believe this, and some think that even 90% wouldn’t be enough.)
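
For context on where numbers like that come from: the textbook herd immunity threshold for a homogeneous population is 1 − 1/R0, where R0 is the basic reproduction number. The R0 values in this sketch are illustrative assumptions on my part, not official estimates.

```python
# Herd-immunity threshold = 1 - 1/R0 for a homogeneous, well-mixed population.
# The R0 values here are illustrative assumptions, not figures from this post.
for label, r0 in [("early 2020 estimates ", 2.5),
                  ("Delta-like variant   ", 6.0),
                  ("even more contagious ", 8.0)]:
    print(f"{label}: R0 = {r0:.1f} -> threshold = {1 - 1/r0:.0%}")
```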

Indeed, the predominant tone I get from trying to keep up on the current news in epidemiology is fatalism: It’s too late, we’ve already failed to contain the virus, we won’t reach herd immunity, we won’t ever eradicate it. At this point they now all seem to think that COVID is going to become the new influenza, always with us, a major cause of death that somehow recedes into the background and seems normal to us—but COVID, unlike influenza, may stick around all year long. The one glimmer of hope is that influenza itself was severely hampered by the anti-pandemic procedures, and influenza cases and deaths are indeed down in both the US and UK (though not zero, nor as drastically reduced as many have reported).

The contrast between terrorism and pandemics is a sobering one, as pandemics kill far more people, yet somehow don’t provoke anywhere near as committed a response.

9/11 was a massive outlier in terrorism, at 3,000 deaths on a single day; otherwise the average annual death rate by terrorism is about 20,000 worldwide, mostly committed by Islamist groups. Yet the threat is not actually to Americans in particular; annual deaths due to terrorism in the US are less than 100—and most of these by right-wing domestic terrorists, not international Islamists.

Meanwhile, in an ordinary year, influenza would kill 50,000 Americans and somewhere between 300,000 and 700,000 people worldwide. COVID in the past year and a half has killed over 650,000 Americans and 4.6 million people worldwide—annualize that and it comes to over 400,000 per year in the US and about 3 million per year worldwide.
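
Annualizing those totals (roughly 18 months of pandemic at the time of writing) is a one-line calculation:

```python
# Annualize roughly 18 months of COVID deaths for comparison with typical
# annual tolls (~50,000 US influenza deaths; ~20,000 worldwide terrorism deaths).
months = 18
for label, total in [("COVID deaths, US", 650_000),
                     ("COVID deaths, worldwide", 4_600_000)]:
    print(f"{label}: ~{total * 12 / months:,.0f} per year")
```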

Yet in response to terrorism we as a country were prepared to spend $2.3 trillion, lose nearly 4,000 US and allied troops, and kill nearly 50,000 civilians—not even counting the over 60,000 enemy soldiers killed. It’s not even clear that this accomplished anything as far as reducing terrorism—by some estimates it actually made it worse.

Were we prepared to respond so aggressively to pandemics? Certainly not to influenza; we somehow treat all those deaths as normal or inevitable. In response to COVID we did spend a great deal of money, even more than the wars in fact—a total of nearly $6 trillion. This was a very pleasant surprise to me (it’s the first time in my lifetime I’ve witnessed a serious, not watered-down Keynesian fiscal stimulus in the United States). And we imposed lockdowns—but these were all too quickly removed, despite the pleading of public health officials. It seems that our governments tried to impose an aggressive response, but then too many of the citizens pushed back against it, unwilling to give up their “freedom” (read: convenience) in the name of public safety.

For the wars, all most of us had to do was pay some taxes and sit back and watch; but for the pandemic we were actually expected to stay home, wear masks, and get shots? Forget it.

Politics was clearly a very big factor here: In the US, the COVID death rate map and the 2020 election map look almost equivalent: By and large, people who voted for Biden have been wearing masks and getting vaccinated, while people who voted for Trump have not.

But pandemic response is precisely the sort of thing you can’t do halfway. If one area is containing a virus and another isn’t, the virus will still remain uncontained. (As some have remarked, it’s rather like having a “peeing section” of a swimming pool. Much worse, actually, as urine contains relatively few bacteria—but not zero—and is quickly diluted by the huge quantities of water in a swimming pool.)

Indeed, that seems to be what has happened, and why we can’t seem to return to normal life despite months of isolation. Since enough people are refusing to make any effort to contain the virus, the virus remains uncontained, and the only way to protect ourselves from it is to continue keeping restrictions in place indefinitely.

Had we simply kept the original lockdowns in place awhile longer and then made sure everyone got the vaccine—preferably by paying them for doing it, rather than punishing them for not—we might have been able to actually contain the virus and then bring things back to normal.

But as it is, this is what I think is going to happen: At some point, we’re just going to give up. We’ll see that the virus isn’t getting any more contained than it ever was, and we’ll be so tired of living in isolation that we’ll finally just give up on doing it anymore and take our chances. Some of us will continue to get our annual vaccines, but some won’t. Some of us will continue to wear masks, but most won’t. The virus will become a part of our lives, just as influenza did, and we’ll convince ourselves that millions of deaths is no big deal.

And then the nightmare will truly never end.

An unusual recession, a rapid recovery

Jul 11 JDN 2459407

It seems like an egregious understatement to say that the last couple of years have been unusual. The COVID-19 pandemic was historic, comparable in threat—though not in outcome—to the 1918 influenza pandemic.

At this point it looks like we may not be able to fully eradicate COVID. And there are still many places around the world where variants of the virus continue to spread. I personally am a bit worried about the recent surge in the UK; it might add some obstacles (as if I needed any more) to my move to Edinburgh. Yet even in hard-hit places like India and Brazil things are starting to get better. Overall, it seems like the worst is over.

This pandemic disrupted our society in so many ways, great and small, and we are still figuring out what the long-term consequences will be.

But as an economist, one of the things I found most unusual is that this recession fit Real Business Cycle theory.

Real Business Cycle theory (henceforth RBC) posits that recessions are caused by negative technology shocks which result in a sudden drop in labor supply, reducing employment and output. This is generally combined with sophisticated mathematical modeling (DSGE or GTFO), and it typically leads to the conclusion that the recession is optimal and we should do nothing to correct it (which was after all the original motivation of the entire theory—they didn’t like the interventionist policy conclusions of Keynesian models). Alternatively it could suggest that, if we can, we should try to intervene to produce a positive technology shock (but nobody’s really sure how to do that).

For a typical recession, this is utter nonsense. It is obvious to anyone who cares to look that major recessions like the Great Depression and the Great Recession were caused by a lack of labor demand, not supply. There is no apparent technology shock to cause either recession. Instead, they seem to be precipitated by a financial crisis, which then causes a liquidity crisis that leads to a downward spiral: layoffs reduce spending, which causes more layoffs. Millions of people lose their jobs and become desperate to find new ones, with hundreds of people applying to each opening. RBC predicts a shortage of labor where there is instead a glut. RBC predicts that wages should go up in recessions—but they almost always go down.

But for the COVID-19 recession, RBC actually had some truth to it. We had something very much like a negative technology shock—namely the pandemic. COVID-19 greatly increased the cost of working and the cost of shopping. This led to a reduction in labor demand as usual, but also a reduction in labor supply for once. And while we did go through a phase in which hundreds of people applied to each new opening, we then followed it up with a labor shortage and rising wages. A fall in labor supply should create inflation, and we now have the highest inflation we’ve had in decades—but there’s good reason to think it’s just a transitory spike that will soon settle back to normal.

The recovery from this recession was also much more rapid: Once vaccines started rolling out, the economy began to recover almost immediately. We recovered most of the employment losses in just the first six months, and we’re on track to recover completely in half the time it took after the Great Recession.

This makes it the exception that proves the rule: Now that you’ve seen a recession that actually resembles RBC, you can see just how radically different it was from a typical recession.

Moreover, even in this weird recession the usual policy conclusions from RBC are off-base. It would have been disastrous to withhold the economic relief payments—which I’m happy to say even most Republicans realized. The one thing that RBC got right as far as policy is that a positive technology shock was our salvation—vaccines.

Indeed, while the cause of this recession was very strange and not what Keynesian models were designed to handle, our government largely followed Keynesian policy advice—and it worked. We ran massive government deficits—over $3 trillion in 2020—and the result was rapid recovery in consumer spending and then employment. I honestly wouldn’t have thought our government had the political will to run a deficit like that, even when the economic models told them they should; but I’m very glad to be wrong. We ran the huge deficit just as the models said we should—and it worked. I wonder how the 2010s might have gone differently had we done the same after 2008.

Perhaps we’ve learned from some of our mistakes.

A prouder year for America, and for me

Jul 4 JDN 2459380

Living under Trump from 2017 to 2020, it was difficult to be patriotic. How can we be proud of a country that would put a man like that in charge? And then there was the COVID pandemic, which initially the US handled terribly—largely because of the aforementioned Trump.

But then Biden took office, and almost immediately things started to improve. This is a testament to how important policy can be—and how different the Democrats and Republicans have become.

The US now has one of the best rates of COVID vaccination in the world (though lately progress seems to be stalling and other countries are catching up). Daily cases in the US are now the lowest they have been since March 2020. Even real GDP is almost back up to its pre-pandemic level (even per-capita), and the surge of inflation we got as things began to re-open already seems to be subsiding.

I can actually celebrate the 4th of July with some enthusiasm this year, whereas the last four years involved continually reminding myself that I was celebrating the liberating values of America’s founding, not the current terrible state of its government. Of course our government policy still retains many significant flaws—but it isn’t the utter embarrassment it was just a year ago.

This may be my last 4th of July to celebrate for the next few years, as I will soon be moving to Scotland (more on that in a moment).

2020 was a very bad year, but even halfway through it’s clear that 2021 is going to be a lot better.

This was true for just about everyone. I was no exception.

The direct effects of the pandemic on me were relatively minor.

Transitioning to remote work was even easier than I expected it to be; in fact I was even able to run experiments online using the same research subject pool as we’d previously used for the lab. Not only did I not suffer any financial hardship from the lockdowns, I ended up better off because of the relief payments (and the freezing of student loan payments, as well as the ludicrous stock boom, which I managed to buy into near the trough). Ordering groceries online for delivery is so convenient I’m tempted to continue it after the pandemic is over (though it does cost more).

I was careful and/or fortunate enough not to get sick (now that I am fully vaccinated, my future risk is negligible), as were most of my friends and family. I am not close to anyone who died from the virus, though I do have some second-order links to some who died (grandparents of a couple of my friends, the thesis advisor of one of my co-authors).

It was other things that really made 2020 a miserable year for me. Some of them were indirect effects of the pandemic, and some may not even have been related.

For me, 2020 was a year full of disappointments. It was the year I nearly finished my dissertation and went on the job market, applying for over one hundred jobs—and got zero offers. It was the year I was scheduled to present at an international conference—which was then canceled. It was the year my papers were rejected by multiple journals. It was the year I was scheduled to be married—and then we were forced to postpone the wedding.

But now, in 2021, several of these situations are already improving. We will be married on October 9, and most (though assuredly not all) of the preparations for the wedding are now done. My dissertation is now done except for some formalities. After over a year of searching and applying to over two hundred postings in all, I finally found a job, a postdoc position at the University of Edinburgh. (A postdoc isn’t ideal, but on the other hand, Edinburgh is more prestigious than I thought I’d be able to get.) I still haven’t managed to publish any papers, but I no longer feel as desperate a need to do so now that I’m not scrambling to find a job. Now of course we have to plan for a move overseas, though fortunately the university will reimburse our costs for the visa and most of the moving expenses.

Of course, 2021 isn’t over—neither is the COVID pandemic. But already it looks like it’s going to be a lot better than 2020.

When to give up

Jun 6 JDN 2459372

Perseverance is widely regarded as a virtue, and for good reason. Often one of the most important deciding factors in success is the capacity to keep trying after repeated failure. I think this has been a major barrier for me personally; many things came easily to me when I was young, and I internalized the sense that if something doesn’t come easily, it must be beyond my reach.

Yet it’s also worth noting that this is not the only deciding factor—some things really are beyond our capabilities. Indeed, some things are outright impossible. And we often don’t know what is possible and what isn’t.

This raises the question: When should we persevere, and when should we give up?

There is actually reason to think that people often don’t give up when they should. Steven Levitt (of Freakonomics fame) recently published a study that asked people who were on the verge of a difficult decision to flip a coin, and then base their decision on the coin flip: Heads, make a change; tails, keep things as they are. Many didn’t actually follow the coin flip—but enough did that there was a statistical difference between those who saw heads and those who saw tails. The study found that the people who flipped heads and made a change were on average happier a couple of years later than the people who flipped tails and kept things as they were.

This question is particularly salient for me lately, because the academic job market has gone so poorly for me. I’ve spent most of my life believing that academia is where I belong; my intellect and my passion for teaching and research have convinced me and many others that this is the right path for me. But now that I have a taste of what it is actually like to apply for tenure-track jobs and submit papers to journals, I am utterly miserable. I hate every minute of it. I’ve spent the entire past year depressed and feeling like I have accomplished absolutely nothing.

In theory, once one actually gets tenure it’s supposed to get easier. But that could be a long way away—or it might never happen at all. As it is, there’s basically no chance I’ll get a tenure track position this year, and it’s unclear what my chances would be if I tried again next year.

If I could actually get a paper published, that would no doubt improve my odds of landing a better job next year. But I haven’t been able to do that, and each new rejection cuts so deep that I can barely stand to look at my papers anymore, much less actually continue submitting them. And apparently even tenured professors still get their papers rejected repeatedly, which means that this pain will never go away. I simply cannot imagine being happy if this is what I am expected to do for the rest of my life.

I found this list of criteria for when you should give up something—and most of them fit me. I’m not sure I know in my heart it can’t work out, but I increasingly suspect that. I’m not sure I want it anymore, now that I have a better idea of what it’s really like. Pursuing it is definitely making me utterly miserable. I wouldn’t say it’s the only reason, but I definitely do worry what other people will think if I quit; I feel like I’d be letting a lot of people down. I also wonder who I am without it, where I belong if not here. I don’t know what other paths are out there, but maybe there is something better. This constant stream of failure and rejection has definitely made me feel like I hate myself. And above all, when I imagine quitting, I absolutely feel an enormous sense of relief.

Publishing in journals seems to be the thing that successful academics care about most, and it means almost nothing to me anymore. I only want it because of all the pressure to have it, because of all the rewards that come from having it. It has become fully instrumental to me, with no intrinsic meaning or value. I have no particular desire to be lauded by the same system that lauded Fischer Black or Kenneth Rogoff—both of whose egregious and easily-avoidable mistakes are responsible for the suffering of millions of people around the world.

I want people to read my ideas. But people don’t actually read journals. They skim them. They read the abstracts. They look at the graphs and regression tables. (You have the meeting that should have been an email? I raise you the paper that should have been a regression table.) They see if there’s something in there that they should be citing for their own work, and if there is, maybe then they actually read the paper—but everyone is so hyper-specialized that only a handful of people will ever actually want to cite any given paper. The vast majority of research papers are incredibly tedious to read and very few people actually bother. As a method for disseminating ideas, this is perhaps slightly better than standing on a street corner and shouting into a megaphone.

I would much rather write books; people sometimes actually read books, especially when they are written for a wide audience and hence not forced into the straitjacket of standard ‘scientific writing’ that no human being actually gets any enjoyment out of writing or reading. I’ve seen a pretty clear improvement in writing quality of papers written by Nobel laureates—after they get their Nobels or similar accolades. Once they establish themselves, they are free to actually write in ways that are compelling and interesting, rather than having to present everything in the most dry, tedious way possible. If your paper reads like something that a normal person would actually find interesting or enjoyable to read, you will be—as I have been—immediately told that you must remove all such dangerous flavor until the result is as tasteless as possible.

No, the purpose of research journals is not to share ideas. Its function is not to share, but to evaluate. And it isn’t even really to evaluate research—it’s to evaluate researchers. It’s to outsource the efforts of academic hiring to an utterly unaccountable and arbitrary system run mostly by for-profit corporations. It may have some secondary effect of evaluating ideas for validity; at least the really awful ideas are usually excluded. But its primary function is to decide the academic pecking order.

I had thought that scientific peer review was supposed to select for truth. Perhaps sometimes it does. It seems to do so reasonably well in the natural sciences, at least. But in the social sciences? That’s far less clear. Peer-reviewed papers are much more likely to be accurate than any randomly-selected content; but there are still a disturbingly large number of peer-reviewed published papers that are utterly wrong, and some unknown but undoubtedly vast number of good papers that have never seen the light of day.

Then again, when I imagine giving up on an academic career, I don’t just feel relief—I also feel regret and loss. I feel like I’ve wasted years of my life putting together a dream that has now crumbled in my hands. I even feel some anger, some sense that I was betrayed by those who told me that this was about doing good research when it turns out it’s actually about being thick-skinned enough that you can take an endless assault of rejections. It feels like I’ve been running a marathon, and I just rounded a curve to discover that the last five miles must be ridden on horseback, when I don’t have a horse, I have no equestrian training, and in fact I’m allergic to horses.

I wish someone had told me it would be like this. Maybe they tried and I didn’t listen. They did say that papers would get rejected. They did say that the tenure track was high-pressure and publish-or-perish was a major source of anxiety. But they never said that it would tear at my soul like this. They never said that I would have to go through multiple rounds of agony, self-doubt, and despair in order to get even the slightest recognition for my years of work. They never said that the whole field would treat me like I’m worthless because I can’t satisfy the arbitrary demands of a handful of anonymous reviewers. They never said that I would begin to feel worthless after several rounds of this.

That’s really what I want to give up on. I want to give up on hitching my financial security, my career, my future, my self-worth to a system as capricious as peer review.

I don’t want to give up on research. I don’t want to give up on teaching. I still believe strongly in discovering new truths and sharing them with others. I’m just increasingly realizing that academia isn’t nearly as good at that as I thought it was.

It isn’t even that I think it’s impossible for me to succeed in academia. I think that if I continued trying to get a tenure-track job, I would land one eventually. Maybe next year. Or maybe I’d spend a few years at a postdoc first. And I’d probably manage to publish some paper in some reasonably respectable journal at some point in the future. But I don’t know how long it would take, or how good a journal it would be—and I’m already past the point where I really don’t care anymore, where I can’t afford to care, where if I really allowed myself to care it would only devastate me when I inevitably fail again. Now that I see what is really involved in the process, how arduous and arbitrary it is, publishing in a journal means almost nothing to me. I want to be validated; I want to be appreciated; I want to be recognized. But the system is set up to provide nothing but rejection, rejection, rejection. If even the best work won’t be recognized immediately and even the worst work can make it with enough tries, then the whole system begins to seem meaningless. It’s just rolls of the dice. And I didn’t sign up to be a gambler.

The job market will probably be better next year than it was this year. But how much better? Yes, there will be more openings, but there will also be more applicants: Everyone who would normally be on the market, plus everyone like me who didn’t make it this year, plus everyone who decided to hold back this year because they knew they wouldn’t make it (as I probably should have done). Yes, in a normal year, I could be fairly confident of getting some reasonably decent position—but this wasn’t a normal year, and next year won’t be one either, and the one after that might still not be. If I can’t get a paper published in a good journal between now and then—and I’m increasingly convinced that I can’t—then I really can’t expect my odds to be greatly improved from what they were this time around. And if I don’t know that this terrible gauntlet is going to lead to something good, I’d really much rather avoid it altogether. It was miserable enough when I went into it being (over)confident that it would work out all right.

Perhaps the most important question when deciding whether to give up is this: What will happen if you do? What alternatives do you have? If giving up means dying, then don’t give up. (“Learn to let go” is very bad advice to someone hanging from the edge of a cliff.) But while it may feel that way sometimes, rarely does giving up on a career or a relationship or a project yield such catastrophic results.

When people are on the fence about making a change and then do so, even based on the flip of a coin, it usually makes them better off. Note that this is different from saying you should make all your decisions randomly; if you are confident that you don’t want to make a change, don’t make a change. This advice is for people who feel like they want a change but are afraid to take the chance, people who find themselves ambivalent about what direction to go next—people like me.

I don’t know where I should go next. I don’t know where I belong. I know it isn’t Wall Street. I’m pretty sure it’s not consulting. Maybe it’s nonprofits. Maybe it’s government. Maybe it’s freelance writing. Maybe it’s starting my own business. I guess I’d still consider working in academia; if Purdue called me back to say they made a terrible mistake and they want me after all, I’d probably take the offer. But since such an outcome is now vanishingly unlikely, perhaps it’s time, after all, to give up.

Social science is broken. Can we fix it?

May 16 JDN 2459349

Social science is broken. I am of course not the first to say so. The Atlantic recently published an article outlining the sorry state of scientific publishing, and several years ago Slate Star Codex published a lengthy post (with somewhat harsher language than I generally use on this blog) showing how parapsychology, despite being obviously false, can still meet the standards that most social science is expected to meet. I myself discussed the replication crisis in social science on this very blog a few years back.

I was pessimistic then that the incentives of scientific publishing would be fixed any time soon, and I am even more pessimistic now.

Back then I noted that journals are often run by for-profit corporations that care more about getting attention than getting the facts right, university administrations are incompetent and top-heavy, and publish-or-perish creates cutthroat competition without providing incentives for genuinely rigorous research. But these are widely known facts, even if so few in the scientific community seem willing to face up to them.

Now I am increasingly concerned that the reason we aren’t fixing this system is that the people with the most power to fix it don’t want to. (Indeed, as I have learned more about political economy I have come to believe this more and more about all the broken institutions in the world. American democracy has its deep flaws because politicians like it that way. China’s government is corrupt because that corruption is profitable for many of China’s leaders. Et cetera.)

I know economics best, so that is where I will focus; but most of what I’m saying here applies just as well to other social sciences such as sociology and psychology. (Indeed, it was a psychology journal that published Daryl Bem’s parapsychology results.)

Rogoff and Reinhart’s 2010 article “Growth in a Time of Debt”, which was a weak correlation-based argument to begin with, was later revealed (by an intrepid grad student named Thomas Herndon) to rest on deep, fundamental errors: a spreadsheet formula that accidentally excluded several countries, selective omission of available data, and an unconventional weighting scheme. Yet the article remains published, without any notice of retraction or correction, in the American Economic Review, probably the most prestigious journal in economics (and undeniably in the vaunted “Top Five”). And the paper itself was widely used by governments around the world to justify massive austerity policies, which backfired with catastrophic consequences.
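
To get a feel for how easily an error like that can matter, here is a minimal Python sketch with entirely invented numbers; this is not the Reinhart-Rogoff dataset, just an illustration of how a calculation that silently drops a row, the way a mis-dragged spreadsheet range can, shifts the headline average.

# Toy example with made-up growth rates: how silently dropping a row
# from an average changes the headline figure.
growth_by_country = {
    "A": 2.0,   # hypothetical GDP growth rates (%) for high-debt countries
    "B": 1.0,
    "C": 3.0,
    "D": -1.0,
    "E": 5.0,
}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

full_sample = mean(growth_by_country.values())
# Mimic a formula that stops one row early, silently omitting country "E".
truncated = mean(list(growth_by_country.values())[:-1])

print(f"Full-sample average growth:   {full_sample:.2f}%")  # 2.00%
print(f"Average with one row dropped: {truncated:.2f}%")    # 1.25%

With these invented numbers the headline average falls from 2.00% to 1.25% just from losing one row; the real errors were more complicated than that, but the mechanism is the same: small mechanical mistakes can move headline numbers a lot.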

Why wouldn’t the AER remove the article from their website? Or issue a retraction? Or at least add a note on the page explaining the errors? If their primary concern were scientific truth, they would have done something like this. Their failure to do so is a silence that speaks volumes, a hound that didn’t bark in the night.

It’s rational, if incredibly selfish, for Rogoff and Reinhart themselves to not want a retraction. It was one of their most widely-cited papers. But why wouldn’t AER’s editors want to retract a paper that had been so embarrassingly debunked?

And so I came to realize: These are all people who have succeeded in the current system. Their work is valued, respected, and supported by the system of scientific publishing as it stands. If we were to radically change that system, as we would necessarily have to do in order to re-align incentives toward scientific truth, they would stand to lose, because they would suddenly be competing against people who are not as good at hitting the magical p < 0.05 threshold, but who are at least as good at actual science as they are, and perhaps better.
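
For readers wondering why that threshold is such a weak bar, here is a minimal simulation sketch in Python (standard library only, numbers purely illustrative): imagine a researcher who runs twenty independent tests of hypotheses that are all truly null. Under the null each p-value is uniform on [0, 1], so the chance that at least one test clears p < 0.05 is 1 - 0.95^20, or roughly 64%, even though every underlying effect is zero.

import random

def run_study(n_tests=20, alpha=0.05):
    # Simulate n_tests independent tests of true-null hypotheses;
    # under the null, each p-value is uniform on [0, 1].
    p_values = [random.random() for _ in range(n_tests)]
    return any(p < alpha for p in p_values)

random.seed(0)
trials = 10_000
hits = sum(run_study() for _ in range(trials))
print(f"Share of 20-test studies with at least one 'significant' result: {hits / trials:.1%}")
# Expect roughly 64% (1 - 0.95**20), even though every effect is zero.

A publication system that rewards clearing that threshold, rather than the full record of what was tried, will reliably reward noise.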

I know how they would respond to this criticism: I’m someone who hasn’t succeeded in the current system, so I’m biased against it. This is true, to some extent. Indeed, I take it quite seriously, because while tenured professors stand to lose prestige, they can’t really lose their jobs even if there is a sudden flood of far superior research. So in directly economic terms, we would expect the bias against the current system among grad students, adjuncts, and assistant professors to be larger than the bias in favor of the current system among tenured professors and prestigious researchers.

Yet there are other motives aside from money: Norms and social status are among the most powerful motivations human beings have, and these biases are far stronger in favor of the current system—even among grad students and junior faculty. Grad school is many things, some good, some bad; but one of them is a ritual gauntlet that indoctrinates you into the belief that working in academia is the One True Path, without which your life is a failure. If your claim is that grad students are upset at the current system only because we overestimate our own qualifications and it’s all just sour grapes, you need to explain the prevalence of Impostor Syndrome among us. By and large, grad students don’t overestimate our abilities—we underestimate them. If we think we’re as good at this as you are, that probably means we’re better. Indeed I have little doubt that Thomas Herndon is a better economist than Kenneth Rogoff will ever be.

I have additional evidence that insider bias is important here: When Paul Romer, a Nobel laureate, left academia, he published an utterly scathing criticism of the state of academic macroeconomics. That is, once he had escaped the incentives toward insider bias, he turned against the entire field.

Romer pulls absolutely no punches: He literally compares the standard methods of DSGE models to “phlogiston” and “gremlins”. And the paper is worth reading, because it’s obviously entirely correct; every single punch lands on target. It’s also a pretty fun read, at least if you have the background knowledge to appreciate the dry in-jokes. (Much like the Sokal hoax, “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity.” I still laugh out loud every time I read the phrase “hegemonic Zermelo-Fraenkel axioms”, though I realize most people would be utterly nonplussed. For the uninitiated, these are the Zermelo-Fraenkel axioms, the standard foundation of modern set theory. Can’t you just see the colonialist imperialism in sentences like “\forall x \, \forall y \, [\forall z \, (z \in x \iff z \in y) \implies x = y]”? That one just says that two sets with exactly the same elements are the same set.)

In other words, the Upton Sinclair Principle seems to apply here: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.” The people with the most power to change the system of scientific publishing are journal editors and prestigious researchers, and they are the people for whom the current system is running quite swimmingly.

It’s not that good science can’t succeed in the current system—it often does. In fact, I’m willing to grant that it almost always does, eventually. When the evidence has mounted for long enough and the most adamant of the ancien regime finally retire or die, then, at last, the paradigm will shift. But this process takes literally decades longer than it should. In principle, a wrong theory can be invalidated by a single rigorous experiment. In practice, it generally takes about 30 years of experiments, most of which don’t get published, until the powers that be finally give in.

This delay has serious consequences. It means that many of the researchers working on the forefront of a new paradigm—precisely the people that the scientific community ought to be supporting most—will suffer from being unable to publish their work, get grant funding, or even get hired in the first place. It means that not only will good science take too long to win, but that much good science will never get done at all, because the people who wanted to do it couldn’t find the support they needed to do so. This means that the delay is in fact much longer than it appears: Because it took 30 years for one good idea to take hold, all the other good ideas that would have sprung from it in that time will be lost, at least until someone in the future comes up with them.

I don’t think I’ll ever forget it: At the AEA conference a few years back, I went to a luncheon celebrating Richard Thaler, one of the founders of behavioral economics, whom I regard as one of the top 5 greatest economists of the 20th century (I’m thinking something like “Keynes > Nash > Thaler > Ramsey > Schelling”). Yes, now he is being rightfully recognized for his seminal work: he won a Nobel, he has an endowed chair at Chicago, and he got an AEA luncheon in his honor, among many other accolades. But it was not always so. Someone speaking at the luncheon offhandedly remarked something like, “Did we think Richard would win a Nobel? Honestly, most of us weren’t sure he’d get tenure.” Most of the room laughed; I had to resist the urge to scream. If Richard Thaler wasn’t certain to get tenure, then the entire system is broken. This would be like finding out that Erwin Schrödinger or Niels Bohr wasn’t sure he would get tenure in physics.

A. Gary Shilling, a renowned Wall Street economist (read: One Who Has Turned to the Dark Side), once remarked (the quote is often falsely attributed to Keynes): “Markets can remain irrational a lot longer than you and I can remain solvent.” In the same spirit, I would say this: The scientific community can remain wrong a lot longer than you and I can extend our graduate fellowships and tenure clocks.