# An unusual recession, a rapid recovery

Jul 11 JDN 2459407

It seems like an egregious understatement to say that the last couple of years have been unusual. The COVID-19 pandemic was historic, comparable in threat—though not in outcome—to the 1918 influenza pandemic.

At this point it looks like we may not be able to fully eradicate COVID. And there are still many places around the world where variants of the virus continue to spread. I personally am a bit worried about the recent surge in the UK; it might add some obstacles (as if I needed any more) to my move to Edinburgh. Yet even in hard-hit places like India and Brazil things are starting to get better. Overall, it seems like the worst is over.

This pandemic disrupted our society in so many ways, great and small, and we are still figuring out what the long-term consequences will be.

But as an economist, one of the things I found most unusual is that this recession fit Real Business Cycle theory.

Real Business Cycle theory (henceforth RBC) posits that recessions are caused by negative technology shocks which result in a sudden drop in labor supply, reducing employment and output. This is generally combined with sophisticated mathematical modeling (DSGE or GTFO), and it typically leads to the conclusion that the recession is optimal and we should do nothing to correct it (which was after all the original motivation of the entire theory—they didn’t like the interventionist policy conclusions of Keynesian models). Alternatively it could suggest that, if we can, we should try to intervene to produce a positive technology shock (but nobody’s really sure how to do that).

For a typical recession, this is utter nonsense. It is obvious to anyone who cares to look that major recessions like the Great Depression and the Great Recession were caused by a lack of labor demand, not supply. There is no apparent technology shock to cause either recession. Instead, they seem to be precipitated by a financial crisis, which then causes a crisis of liquidity that leads to a downward spiral: layoffs reduce spending, which causes more layoffs. Millions of people lose their jobs and become desperate to find new ones, with hundreds of people applying to each opening. RBC predicts a shortage of labor where there is instead a glut. RBC predicts that wages should go up in recessions—but they almost always go down.

But for the COVID-19 recession, RBC actually had some truth to it. We had something very much like a negative technology shock—namely the pandemic. COVID-19 greatly increased the cost of working and the cost of shopping. This led to a reduction in labor demand as usual, but also a reduction in labor supply for once. And while we did go through a phase in which hundreds of people applied to each new opening, we then followed it up with a labor shortage and rising wages. A fall in labor supply should create inflation, and we now have the highest inflation we’ve had in decades—but there’s good reason to think it’s just a transitory spike that will soon settle back to normal.

The recovery from this recession was also much more rapid: Once vaccines started rolling out, the economy began to recover almost immediately. We recovered most of the employment losses in just the first six months, and we’re on track to recover completely in half the time it took after the Great Recession.

This makes it the exception that proves the rule: Now that you’ve seen a recession that actually resembles RBC, you can see just how radically different it was from a typical recession.

Moreover, even in this weird recession the usual policy conclusions from RBC are off-base. It would have been disastrous to withhold the economic relief payments—which I’m happy to say even most Republicans realized. The one thing that RBC got right as far as policy is that a positive technology shock was our salvation—vaccines.

# Good news for a change

Mar 28 JDN 2459302

When President Biden made his promise to deliver 100 million vaccine doses to Americans within his first 100 days, many were skeptical. Perhaps we had grown accustomed to the anti-scientific attitudes and utter incompetence of Trump’s administration, and no longer believed that the US federal government could do anything right.

The skeptics were wrong. For the promise has not only been kept, it has been greatly exceeded. As of this writing, Biden has been President for 60 days and we have already administered 121 million vaccine doses. If we continue at the current rate, it is likely that we will have administered over 200 million vaccine doses and fully vaccinated over 100 million Americans by Biden’s promised 100-day timeline—twice as fast as what was originally promised. Biden has made another bold promise: Every adult in the United States vaccinated by the end of May. I admit I’m not confident it can be done—but I wasn’t confident we’d hit 100 million by now either.
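
As a rough sanity check, the extrapolation above can be reproduced in a few lines of Python. The dose and day counts are the figures quoted in this post; the linear projection is of course only a back-of-the-envelope assumption, not a forecast:

```python
# Linear extrapolation of the vaccination pace described above.
# Inputs (121 million doses in 60 days, a 100-day target) come from the text.
doses_so_far = 121_000_000
days_elapsed = 60
target_days = 100

daily_rate = doses_so_far / days_elapsed   # about 2 million doses per day
projected = daily_rate * target_days       # about 202 million doses by day 100

print(f"{daily_rate:,.0f} doses/day -> {projected:,.0f} doses by day {target_days}")
```

At the current pace, the projection comes out just over 200 million doses by day 100, which is where the "twice as fast as promised" figure comes from.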

In fact, the US now has one of the best rates of COVID vaccination in the world, with the proportion of our population vaccinated far above the world average and below only Israel, UAE, Chile, the UK, and Bahrain (plus some tiny countries like Monaco). Indeed, we actually have the largest absolute number of vaccinated individuals in the world, surpassing even China and India.

It turns out that the now-infamous map saying that the US and UK were among the countries best-prepared for a pandemic wasn’t so wrong after all; it’s just that having such awful administration for four years made our otherwise excellent preparedness fail. Put someone good in charge, and yes, indeed, it turns out that the US can deal with pandemics quite well.

The overall rate of new COVID cases in the US began to plummet right around the time the vaccination program gained steam, and has plateaued around 50,000 per day for the past few weeks. This is still much too high, but it is a vast improvement over the 200,000 cases per day we had in early January. Our death rate due to COVID now hovers around 1,500 people per day—that’s still a 9/11 every two days. But this is half what our death rate was at its worst. And since our baseline death rate is 7,500 deaths per day, 1,800 of them by heart disease, this now means that COVID is no longer the leading cause of death in the United States; heart disease has once again reclaimed its throne. Of course, people dying from heart disease is still a bad thing; but it’s at least a sign of returning to normalcy.

Worldwide, the pandemic is slowing down, but still by no means defeated, with over 400,000 new cases and 7,500 deaths every day. The US rate of 17 new cases per 100,000 people per day is about 3 times the world average, but comparable to Germany (17) and Norway (18), and nowhere near as bad as Chile (30), Brazil (35), France (37), or Sweden (45), let alone the very hardest-hit places like Serbia (71), Hungary (78), Jordan (83), Czechia (90), and Estonia (110). (That big gap between Norway and Sweden? It’s because Sweden resisted using lockdowns.) And there is cause for optimism even in these places, as vaccination rates already exceed total COVID cases.
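
For those curious how these per-capita figures are computed, here is a minimal sketch in Python. The population and case numbers are rough illustrative values consistent with the figures quoted above, not exact data:

```python
# Back-of-the-envelope computation of a "new cases per 100,000 per day" rate,
# as used in the country comparisons above. Figures are approximate.
us_population = 330_000_000
us_new_cases_per_day = 55_000  # roughly consistent with the ~50,000/day plateau

rate_per_100k = us_new_cases_per_day / us_population * 100_000
print(f"{rate_per_100k:.0f} new cases per 100,000 people per day")
```

The same arithmetic, run on each country’s population and case counts, produces the comparison table of rates in the paragraph above.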

I can see a few patterns in the rate of vaccination by state: very isolated states have managed to vaccinate their population fastest—Hawaii and Alaska have done very well, and even most of the territories have done quite well (though notably not Puerto Rico). The south has done poorly (for obvious reasons), but not as poorly as I might have feared; even Texas and Mississippi have given at least one dose to 21% of their population. New England has been prioritizing getting as many people with at least one dose as possible, rather than trying to fully vaccinate each person; I think this is the right strategy.

We must continue to stay home when we can and wear masks when we go out. This will definitely continue for at least a few more months, and the vaccine rollout may not even be finished in many countries by the end of the year. In the worst-case scenario, COVID may become an endemic virus that we can’t fully eradicate and we’ll have to keep getting vaccinated every year like we do for influenza (though the good news there is that it likely wouldn’t be much more dangerous than influenza at that point either—though another influenza is nothing to, er, sneeze at).

Yet there is hope at last. Things are finally getting better.

# What if everyone owned their own home?

Mar 14 JDN 2459288

In last week’s post I suggested that if we are to use the term “gentrification”, it should specifically apply to the practice of buying homes for the purpose of renting them out.

But don’t people need to be able to rent homes? Surely we couldn’t have a system where everyone always owned their own home?

Or could we?

The usual argument for why renting is necessary is that people don’t want to commit to living in one spot for 15 or 30 years, the length of a mortgage. And this is quite reasonable; very few careers today offer the kind of stability that lets you commit in advance to 15 or more years of working in the same place. (Tenured professors are one of the few exceptions, and I dare say this has given academic economists some severe blind spots regarding the costs and risks involved in changing jobs.)

But how much does renting really help with this? One does not rent a home for a few days or even a few weeks at a time. If you are staying somewhere for an interval that short, you generally room with a friend or pay for a hotel. (Or get an AirBNB, which is sort of intermediate between the two.)

One only rents housing for months at a time—in fact, most leases are 12-month leases. But since the average time to sell a house is 60-90 days, in what sense is renting actually less of a commitment than buying? It feels like less of a commitment to most people—but I’m not sure it really is less of a commitment.

There is a certainty that comes with renting—you know that once your lease is up you’re free to leave, whereas selling your house will on average take two or three months, but could very well be faster or slower than that.

Another potential advantage of renting is that you have a landlord who is responsible for maintaining the property. But this advantage is greatly overstated: First of all, if they don’t do it (and many surely don’t), you actually have very little recourse in practice. Moreover, if you own your own home, you don’t actually have to do all the work yourself; you could pay carpenters and plumbers and electricians to do it for you—which is all that most landlords were going to do anyway.

All of the “additional costs” of owning over renting, such as maintenance and property taxes, are going to be factored into your rent in the first place. This is a good argument for recognizing that a $1000 mortgage payment is not equivalent to a $1000 rent payment—the rent payment is all-inclusive in a way the mortgage is not. But it isn’t a good argument for renting over buying in general.

Being foreclosed on a mortgage is a terrible experience—but surely no worse than being evicted from a rental. If anything, foreclosure is probably not as bad, because you can essentially only be foreclosed for nonpayment, since the bank only owns the loan; landlords can and do evict people for all sorts of reasons, because they own the home. In particular, you can’t be foreclosed for annoying your neighbors or damaging the property. If you own your home, you can cut a hole in a wall any time you like. (Not saying you should necessarily—just that you can, and nobody can take your home away for doing so.)

I think the primary reason that people rent instead of buying is the cost of a down payment. For some reason, we have decided as a society that you should be expected to pay 10%-20% of the cost of a home up front, or else you never deserve to earn any equity in your home whatsoever. This is one of many ways that being rich makes it easier to get richer—but it is probably the most important one holding back most of the middle class of the First World.

And make no mistake, that’s what this is: It’s a social norm. There is no deep economic reason why a down payment needs to be anything in particular—or even why down payments in general are necessary.

There is some evidence that higher down payments are associated with less risk of default, but it’s not as strong as many people seem to think. The big HUD study on the subject found that one percentage point of down payment reduces default risk by about as much as 5 points of credit rating: So you should prefer to offer a mortgage to someone with an 800 rating and no down payment over someone with a 650 rating and a 20% down payment.
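
To make that tradeoff concrete, here is a small Python sketch of the claimed equivalence. The one-percentage-point-equals-five-points conversion is the HUD finding as summarized above; the `effective_score` helper is purely illustrative, not an actual underwriting formula:

```python
# Sketch of the equivalence described above: one percentage point of down
# payment reduces default risk about as much as 5 points of credit score.
POINTS_PER_DOWN_PAYMENT_PCT = 5  # illustrative conversion rate from the text

def effective_score(credit_score, down_payment_pct):
    """Credit score adjusted for down payment, under the 1 pp = 5 pt rule."""
    return credit_score + POINTS_PER_DOWN_PAYMENT_PCT * down_payment_pct

no_down = effective_score(800, 0)    # 800: strong credit, no down payment
big_down = effective_score(650, 20)  # 650 + 100 = 750: weaker credit, 20% down

# The strong-credit borrower still looks like the safer bet:
print(no_down, big_down)
```

Under this rule of thumb, even a 20% down payment cannot make up a 150-point credit gap, which is the point of the comparison in the paragraph above.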

Also, it’s not as if mortgage lenders are unprotected from default (unlike, say, credit card lenders). Above all, they can foreclose on the house. So why is it so important to reduce the risk of default in the first place? Why do you need extra collateral in the form of a down payment, when you’ve already got an entire house of collateral?

It may be that this is actually a good opportunity for financial innovation, a phrase that should in general strike terror in one’s heart. Most of the time “financial innovation” means “clever ways of disguising fraud”. Previous attempts at “innovating” mortgages have resulted in such monstrosities as “interest-only mortgages” (a literal oxymoron, since by definition a mortgage must have a termination date—a date at which the debt “dies”), “balloon payments”, and “adjustable rate mortgages”—all of which increase risk of default while as far as I can tell accomplishing absolutely nothing. “Subprime” lending created many excuses for irresponsible or outright predatory lending—and then, above all, securitization of mortgages allowed banks to offload the risk they had taken on to third parties who typically had no idea what they were getting.

Volcker was too generous when he said that the last great financial innovation was the ATM; no, that was an innovation in electronics (and we’ve had plenty of those). The last great financial innovation I can think of is the joint-stock corporation in the 1550s. But I think a new type of mortgage contract that minimizes default risk without requiring large up-front payments might actually qualify as a useful form of financial innovation.

It would also be useful to have mortgages that make it easier to move, perhaps by putting payments on hold while the home is up for sale. That way people wouldn’t have to make two mortgage payments at once as they move from one place to another, and the bank will see that money eventually—paid for by the new buyer and their mortgage.

Indeed, ideally I’d like to eliminate foreclosure as well, so that no one has to be kicked out of their homes. How might we do that?

Well, as a pandemic response measure, we should have simply instituted a freeze on all evictions and foreclosures for the duration of the pandemic. Some states did, in fact—but many didn’t, and the federal moratoria on evictions were limited. This is the kind of emergency power that government should have, to protect people from a disaster. So far it appears that the number of evictions was effectively reduced from tens of millions to tens of thousands by these measures—but evicting anyone during a pandemic is a human rights violation.

But as a long-term policy, simply banning evictions wouldn’t work. No one would want to lend out mortgages, knowing that they had no recourse if the debtor stopped paying. Even buyers with good credit might get excluded from the market, since once they actually received the house they’d have very little incentive to actually make their payments on time.

But if there are no down payments and no foreclosures, that means mortgage lenders have no collateral. How are they supposed to avoid defaults?

One option would be wage garnishment. If you have the money and are simply refusing to pay it, the courts could require your employer to send the money directly to your creditors. If you have other assets, those could be garnished as well.

And what if you don’t have the money, perhaps because you’re unemployed? Well, then, this isn’t really a problem of incentives at all. It isn’t that you’re choosing not to pay, it’s that you can’t pay. Taking away such people’s homes would protect banks financially, but at a grave human cost.

One option would be to simply say that the banks should have to bear the risk: That’s part of what their huge profits are supposed to be compensating them for, the willingness to take on risks others won’t. The main downside here is the fact that it would probably make it more difficult to get a mortgage and raise the interest rates that you would need to pay once you do.

Another option would be some sort of government program to make up the difference, by offering grants or guaranteed loans to homeowners who can’t afford to pay their mortgages. Since most such instances are likely to be temporary, the government wouldn’t be on the hook forever—just long enough for people to get back on their feet.

Here the downside would be the same as any government spending: higher taxes or larger budget deficits. But honestly it probably wouldn’t take all that much; while the total value of all mortgages is very large, only a small portion is in default at any given time. Typically only about 2-4% of all mortgages in the US are in default. Even 4% of the $10 trillion total value of all US mortgages is about $400 billion, which sounds like a lot—but the government wouldn’t owe that full amount, just whatever portion is actually late. I couldn’t easily find figures on that, but I’d be surprised if it’s more than 10% of the total value of these mortgages that would need to be paid by the government. $40 billion is about 1% of the annual federal budget.

Reforms to our healthcare system would also help tremendously, as medical expenses are a leading cause of foreclosure in the United States (and literally nowhere else—every other country with the medical technology to make medicine this expensive also has a healthcare system that shares the burden). Here there is virtually no downside: Our healthcare system is ludicrously expensive without producing outcomes any better than the much cheaper single-payer systems in Canada, the UK, and France.

All of this sounds difficult and complicated, I suppose. Some may think that it’s not worth it. But I believe that there is a very strong moral argument for universal homeownership and ending eviction: Your home is your own, and no one else’s. No one has a right to take your home away from you.

This is also fundamentally capitalist: It is the private ownership of capital by its users, the acquisition of wealth through ownership of assets. The system of landlords and renters honestly doesn’t seem so much capitalist as it does feudal: We even call them “lords”, for goodness’ sake!

As an added bonus, if everyone owned their own homes, then perhaps we wouldn’t have to worry about “gentrification”, since rising property values would always benefit residents.

# In search of reasonable conservatism

Feb 21 JDN 2459267

This is a very tumultuous time for American politics. Donald Trump was impeached not once, but twice—giving him the dubious title of having been impeached as many times as the previous 44 US Presidents combined. He was not convicted either time, not because the evidence for his crimes was lacking—it was in fact utterly overwhelming—but because of obvious partisan bias: Republican Senators didn’t want to vote against a Republican President. All 50 of the Democratic Senators, but only 7 of the 50 Republican Senators, voted to convict Trump. The required number of votes to convict was 67.

Some degree of partisan bias is to be expected. Indeed, the votes looked an awful lot like Bill Clinton’s impeachment, in which all Democrats and only a handful of Republicans voted to acquit.

But Bill Clinton’s impeachment trial was nowhere near as open-and-shut as Donald Trump’s. He was being tried for perjury and obstruction of justice, over lies he told about acts that were unethical, but not illegal or un-Constitutional. I’m a little disappointed that no Democrats voted against him, but I think acquittal was probably the right verdict. There’s something very odd about being tried for perjury because you lied about something that wasn’t even a crime. Ironically, had it been illegal, he could have invoked the Fifth Amendment instead of lying and they wouldn’t have been able to touch him.

So the only way the perjury charge could actually stick was because it wasn’t illegal. But that isn’t what perjury is supposed to be about: It’s supposed to be used for things like false accusations and planted evidence. Refusing to admit that you had an affair that’s honestly no one’s business but your family’s really shouldn’t be a crime, regardless of your station.

So let us not imagine an equivalency here: Bill Clinton was being tried for crimes that were only crimes because he lied about something that wasn’t a crime. Donald Trump was being tried for manipulating other countries to interfere in our elections, obstructing investigations by Congress, and above all attempting to incite a coup. Partisan bias was evident in all three trials, but only Trump’s trials were about sedition against the United States.

That is to say, I expect to see partisan bias; it would be unrealistic not to. But I expect that bias to be limited. I expect there to be lines beyond which partisans will refuse to go. The Republican Party in the United States today has shown us that they have no such lines. (Or if there are, they are drawn far too high. What would he have to do, bomb an American city? He incited an invasion of the Capitol Building, for goodness’ sake! And that was after so terribly mishandling a pandemic that he caused roughly 200,000 excess American deaths!)

Temperamentally, I like to compromise. I want as many people to be happy as possible, even if that means not always getting exactly what I would personally prefer. I wanted to believe that there were reasonable conservatives in our government, professional statespersons with principles who simply had honest disagreements about various matters of policy. I can now confirm that there are at most 7 such persons in the US Senate, and at most 10 such persons in the US House of Representatives.

So of the 261 Republicans in Congress, no more than 17 are actually reasonable statespersons who do not let partisan bias override their most basic principles of justice and democracy. And even these 17 are by no means certain: There were good strategic reasons to vote against Trump, even if the actual justice meant nothing to you.

Trump’s net disapproval rating was nearly the highest of any US President ever. Carter and Bush I had periods where they fared worse, but overall fared better. Johnson, Ford, Reagan, Obama, Clinton, Bush II, and even Nixon were consistently more approved than Trump. Kennedy and Eisenhower completely blew him out of the water—at their worst, Kennedy and Eisenhower were nearly 30 percentage points above Trump at his best. With Trump this unpopular, cutting ties with him would make sense for the same reason rats desert a sinking ship. And yet somehow partisan loyalty won out for 94% of Republicans in Congress.

Politics is the mind-killer, and I fear that this sort of extreme depravity on the part of Republicans in Congress will make it all too easy to dismiss conservatism as a philosophy in general. I actually worry about that; not all conservative ideas are wrong!

Low corporate taxes actually make a lot of sense. Minimum wage isn’t that harmful, but it’s also not that beneficial. Climate change is a very serious threat, but it’s simply not realistic to jump directly to fully renewable energy—we need something for the transition, probably nuclear energy. Capitalism is overall the best economic system, and isn’t particularly bad for the environment. Industrial capitalism has brought us a golden age. Rent control is a really bad idea. Fighting racism is important, but there are ways in which woke culture has clearly gone too far. Indeed, perhaps the worst thing about woke culture is the way it denies past successes for civil rights and numbs us with hopelessness. Above all, groupthink is incredibly dangerous.

Once we become convinced that any deviation from the views of the group constitutes immorality or even treason, we become incapable of accepting new information and improving our own beliefs. We may start with ideas that are basically true and good, but we are not omniscient, and even the best ideas can be improved upon. Also, the world changes, and ideas that were good a generation ago may no longer be applicable to the current circumstances. The only way—the only way—to solve that problem is to always remain open to new ideas and new evidence.

Therefore my lament is not just for conservatives, who now find themselves represented by craven ideologues; it is also for liberals, who no longer have an opposition party worth listening to.

Indeed, it’s a little hard to feel bad for the conservatives, because they voted for these maniacs. Maybe they didn’t know what they were getting? But they’ve had chances to remove most of them, and didn’t do so. At best I’d say I pity them for being so deluded by propaganda that they can’t see the harm their votes have done.

But I’m actually quite worried that the ideologues on the left will now feel vindicated; their caricatured view of Republicans as moustache-twirling cartoon villains turned out to be remarkably accurate, at least for Trump himself. Indeed, it was hard not to think of the ridiculous “destroying the environment for its own sake” of Captain Planet villains when Trump insisted on subsidizing coal power—which by the way didn’t even work.

The key, I think, is to recognize that reasonable conservatives do exist—there just aren’t very many of them in Congress right now. A significant number of Americans want low taxes, deregulation, and free markets but are horrified by Trump and what the Republican Party has become—indeed, at least a few write for the National Review. The mere fact that an idea comes from Republicans is not a sufficient reason to dismiss that idea.

Indeed, I’m going to say something even stronger: The mere fact that an idea comes from a racist or a bigot is not a sufficient reason to dismiss that idea. If the idea itself is racist or bigoted, yes, that’s a reason to think it is wrong. But even bad people sometimes have good ideas.

The reasonable conservatives seem to be in hiding at the moment; I’ve searched for them, and had difficulty finding more than a handful. Yet we must not give up the search. Politics should not appear one-sided.

# Love in a time of quarantine

Feb 14 JDN 2459260

This is our first Valentine’s Day of quarantine—and hopefully our last. With Biden now already taking action and the vaccine rollout proceeding more or less on schedule, there is good reason to think that this pandemic will be behind us by the end of this year.

Yet for now we remain isolated from one another, attempting to substitute superficial digital interactions for the authentic comforts of real face-to-face contact. And anyone who is single, or forced to live away from their loved ones, during quarantine is surely having an especially hard time right now.

I have been quite fortunate in this regard: My fiancé and I have lived together for several years, and during this long period of isolation we’ve at least had each other—if basically no one else. But even I have felt a strong difference, considerably stronger than I expected it would be: Despite many of my interactions already being conducted via the Internet, needing to do so with all interactions feels deeply constraining. Nearly all of my work can be done remotely—but not quite all, and even what can be done remotely doesn’t always work as well remotely. I am moderately introverted, and I still feel substantially deprived; I can only imagine how awful it must be for the strongly extraverted. As awkward as face-to-face interactions can be, and as much as I hate making phone calls, somehow Zoom video calls are even worse than either.

Being unable to visit someone’s house for dinner and games, or go out to dinner and actually sit inside a restaurant, leaves a surprisingly large emotional void. Nothing in particular feels radically different, but the sum of so many small differences adds up to a rather large one. I think I felt it the most when we were forced to cancel our usual travel back to Michigan over the holiday season.

Make no mistake: Social interaction is not simply something humans enjoy, or are good at. Social interaction is a human need. We need social interaction in much the same way that we need food or sleep.

The United Nations considers solitary confinement for more than two weeks to be torture. Long periods in solitary confinement are strongly correlated with suicide—so in that sense, isolation can kill you. Think about the incredibly poor quality of social interactions that goes on in most prisons: Endless conflict, abuse, racism, frequent violence—and then consider that the one thing that inmates find most frightening is to be deprived of that social contact. This is not unlike being fed nothing but stale bread and water, and then suddenly having even that taken away from you.

Even less extreme forms of social isolation—like most of us are feeling right now—have as detrimental an effect on health as smoking or alcoholism, and considerably worse than obesity. Long-term social isolation increases overall mortality risk by more than one-fourth. Robust social interaction is critical for long-term health, both physically and mentally.

This does not mean that the quarantines were a bad idea—on the contrary, we should have enforced them more aggressively, so as to contain the pandemic faster and ultimately need less time in quarantine. Timing is critical here: Successfully containing the pandemic early is much easier than trying to bring it back under control once it has already spread. When the pandemic began, lockdown might have been able to stop the spread. At this point, vaccines are really our only hope of containment.

But it does mean that if you feel terrible lately, there is a very good reason for this, and you are not alone. Due to forces much larger than any of us can control, forces that even the world’s most powerful governments are struggling to contain, you are currently being deprived of a basic human need. And especially if you are on your own this Valentine’s Day, remember that there are people who love you, even if they can’t be there with you right now.

# What happened with GameStop?

Feb 7 JDN 2459253

No doubt by now you’ve heard about the recent bubble in GameStop stock that triggered several trading stops, nearly destroyed a hedge fund, and launched a thousand memes. What really strikes me about this whole thing is how ordinary it is: This is basically the sort of thing that happens in our financial markets all the time. So why are so many people suddenly paying so much attention to it?

There are a few important ways this is unusual: Most importantly, the bubble was triggered by a large number of middle-class people investing small amounts, rather than by a handful of billionaires or hedge funds. It’s also more explicitly collusive than usual, with public statements in writing about what stocks are being manipulated rather than hushed whispers between executives at golf courses. Partly as a consequence of these, the response from the government and the financial industry has been quite different as well, trying to halt trading and block transactions in a way that they would never do if the crisis had been caused by large financial institutions.

If you’re interested in the technical details of what happened, what a short squeeze is and how it can make a hedge fund lose enormous amounts of money unexpectedly, I recommend this summary by KQED.

But the gist of it is simple enough: Melvin Capital placed huge bets that GameStop stock would fall in price, and a coalition of middle-class traders coordinated on Reddit to screw them over by buying a bunch of GameStop stock and driving up the price. It worked, and Melvin Capital lost something on the order of $3–5 billion in just a few days.

The particular kind of bet they placed is called a short, and it’s a completely routine practice on Wall Street, despite the fact that I’ve never quite understood why it should be allowed.

The essence of a short is quite simple: When you short, you are selling something you don’t own. You “borrow” it (it isn’t really even borrowing), and then sell it to someone else, promising to buy it back and return it to where you borrowed it from at some point in the future. This amounts to a bet that the price will decline, so that the price at which you buy it is lower than the price at which you sold it.
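To make the payoff concrete, here is a small sketch of the arithmetic, using my own hypothetical prices and share counts (not figures from any real trade):

```python
# Hypothetical short sale: borrow shares, sell them now, buy them back later.
borrow_price = 20.00   # price per share when you borrow and sell
shares = 100

proceeds = borrow_price * shares  # cash received from selling the borrowed shares

# If the price falls, you buy back cheaply and pocket the difference:
buyback_low = 15.00
profit_if_falls = proceeds - buyback_low * shares

# If the price rises, you must buy back at the higher price -- and since there
# is no ceiling on how high a price can go, the potential loss is unbounded.
buyback_high = 300.00  # a GameStop-style squeeze
loss_if_rises = buyback_high * shares - proceeds

print(profit_if_falls)  # 500.0
print(loss_if_rises)    # 28000.0
```

That unbounded downside is exactly what a short squeeze exploits: the higher the price is driven, the more the short seller loses.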

Doesn’t that seem like an odd thing to be allowed to do? Normally you can’t sell something you have merely borrowed. I can’t borrow a car and then sell it; car title in fact exists precisely to prevent this from happening. If I were to borrow your coat and then sell it to a thrift store, I’d have committed larceny. It’s really quite immaterial whether I plan to buy it back afterward; in general we do not allow people to sell things that they do not own.

Now perhaps the problem is that when I borrow your coat or your car, you expect me to return that precise object—not a similar coat or a car of equivalent Blue Book value, but your coat or your car. When I borrow a share of GameStop stock, no one really cares whether it is that specific share which I return—indeed, it would be almost impossible to even know whether it was. So in that way it’s a bit like borrowing money: If I borrow $20 from you, you don’t expect me to pay back that precise $20 bill. Indeed you’d be shocked if I did, since presumably I borrowed it in order to spend it or invest it, so how would I ever get it back?

But you also don’t sell money, generally speaking. Yes, there are currency exchanges and money-market accounts; but these are rather exceptional cases. In general, money is not bought and sold the way coats or cars are.

What about consumable commodities? You probably don’t care too much about any particular banana, sandwich, or gallon of gasoline. Perhaps in some circumstances we might “loan” someone a gallon of gasoline, intending them to repay us at some later time with a different gallon of gasoline. But far more likely, I think, would be simply giving a friend a gallon of gasoline and then not expecting any particular repayment except perhaps a vague offer of providing a similar favor in the future. I have in fact heard someone say the sentence “Can I borrow your sandwich?”, but it felt very odd when I heard it. (Indeed, I responded something like, “No, you can keep it.”)

And in order to actually be shorting gasoline (which is a thing that you, too, can do, perhaps even right now, if you have a margin account on a commodities exchange), it isn’t enough to borrow a gallon with the expectation of repaying a different gallon; you must also sell that gallon you borrowed. And now it seems very odd indeed to say to a friend, “Hey, can I borrow a gallon of gasoline so that I can sell it to someone for a profit?”

The usual arguments for why shorting should be allowed are much like the arguments for exotic financial instruments in general: “Increase liquidity”, “promote efficient markets”. These arguments are so general and so ubiquitous that they essentially amount to the strongest form of laissez-faire: Whatever Wall Street bankers feel like doing is fine and good and part of what makes American capitalism great.

In fact, I was never quite clear why margin accounts are something we decided to allow; margin trading is inherently high-leverage and thus inherently high-risk. Borrowing money in order to arbitrage financial assets doesn’t just seem like a very risky thing to do; it has been implicated, one way or another, in virtually every financial crisis that has ever occurred. It would be an exaggeration to say that leveraged arbitrage is the one single cause of financial crises, but it would be a shockingly small exaggeration. I think it is absolutely fair to say that if leveraged arbitrage did not exist, financial crises would be much rarer.
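The danger of margin is easy to see in the arithmetic: leverage multiplies both gains and losses by the leverage ratio. A minimal sketch, with hypothetical numbers of my own choosing:

```python
# Hypothetical margin position: $10,000 of your own equity, levered 10x.
equity = 10_000.0
leverage = 10
position = equity * leverage          # $100,000 of assets, $90,000 of it borrowed
debt = position - equity

# The same percentage move in the asset is amplified 10x in your equity.
outcomes = {}
for price_move in (0.05, -0.05, -0.10):
    new_value = position * (1 + price_move)
    outcomes[price_move] = new_value - debt   # what's left after repaying the loan
    print(f"{price_move:+.0%} asset move -> equity ${outcomes[price_move]:,.0f}")

# A mere -10% move in the asset wipes out essentially all of your equity,
# because you still owe the full $90,000 you borrowed.
```

A +5% move in the asset becomes a +50% gain on your equity, and a -10% move leaves you with nothing; that asymmetry between ordinary-sized price moves and total ruin is why leveraged positions unwind so violently in a crisis.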

Indeed, I am increasingly dubious of the whole idea of allowing arbitrage in general. Some amount of arbitrage may be unavoidable; there may always be people who see that prices are different for the same item in two different markets, and then exploit that difference before anyone can stop them. But this is a bit like saying that theft is probably inevitable: Yes, every human society that has had a system of property ownership (which is most of them—even communal hunter-gatherers have rules about personal property), has had some amount of theft. That doesn’t mean there is nothing we can do to reduce theft, or that we should simply allow theft wherever it occurs.

The moral argument against arbitrage is straightforward enough: You’re not doing anything. No good is produced; no service is provided. You are making money without actually contributing any real value to anyone. You just make money by having money. This is what people in the Middle Ages found suspicious about lending money at interest; but lending money actually is doing something—sometimes people need more money than they have, and lending it to them is providing a useful service for which you deserve some compensation.

A common argument economists make is that arbitrage will make prices more “efficient”, but when you ask them what they mean by “efficient”, the answer they give is that it removes arbitrage opportunities! So the good thing about arbitrage is that it stops you from doing more arbitrage?

And what if it doesn’t stop you? Many of the ways to exploit price gaps (particularly the simplest ones like “where it’s cheap, buy it; where it’s expensive, sell it”) will automatically close those gaps, but it’s not at all clear to me that all the ways to exploit price gaps will necessarily do so. And even if it’s a small minority of market manipulation strategies that exploit gaps without closing them, those are precisely the strategies that will be most profitable in the long run, because they don’t undermine their own success. Then, left to their own devices, markets will evolve to use such strategies more and more, because those are the strategies that work.

That is, in order for arbitrage to be beneficial, it must always be beneficial; there must be no way to exploit price gaps without inevitably closing those price gaps. If that is not the case, then evolutionary pressure will push more and more of the financial system toward using methods of arbitrage that don’t close gaps—or even exacerbate them. And indeed, when you look at how ludicrously volatile and crisis-prone our financial system has become, it sure looks an awful lot like an evolutionary equilibrium where harmful arbitrage strategies have evolved to dominate.

A world where arbitrage actually led to efficient pricing would be a world where the S&P 500 rises a steady 0.02% per day, each and every day. Maybe you’d see a big move when there was actually a major event, like the start of a war or the invention of a vaccine for a pandemic. You’d probably see a jump up or down of a percentage point or two with each quarterly Fed announcement. But daily moves of even five or six percentage points would be a very rare occurrence—because the real expected long-run aggregate value of the 500 largest publicly-traded corporations in America is what the S&P 500 is supposed to represent, and that is not a number that should change very much very often. The fact that I couldn’t really tell you what that number is without multi-trillion-dollar error bars is so much the worse for anyone who thinks that financial markets can somehow get it exactly right every minute of every day.
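As a sanity check on that figure (my own back-of-the-envelope arithmetic, assuming roughly 252 trading days per year and daily compounding):

```python
# A steady 0.02% daily gain, compounded over one year of trading days.
daily_return = 0.0002
trading_days = 252

annual = (1 + daily_return) ** trading_days - 1
print(f"{annual:.2%}")   # about 5.2% per year
```

So the hypothetical "efficient" market described above would deliver a calm return of about 5% a year, which is in the neighborhood of long-run equity returns; the point is that actual markets deliver something like that on average while swinging wildly around it day to day.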

Moreover, it’s not hard to imagine how we might close price gaps without simply allowing people to exploit them. There could be a bunch of economists at the Federal Reserve whose job it is to locate markets where there are arbitrage opportunities, and then a bundle of government funds that they can allocate to buying and selling assets in order to close those price gaps. Any profits made are received by the treasury; any losses taken are borne by the treasury. The economists would get paid a comfortable salary, and perhaps get bonuses based on doing a good job in closing large or important price gaps; but there is no need to give them even a substantial fraction of the proceeds, much less all of it. This is already how our money supply is managed, and it works quite well, indeed obviously much better than an alternative with “skin in the game”: Can you imagine the dystopian nightmare we’d live in if the Chair of the Federal Reserve actually received even a 1% share of the US money supply? (Actually I think that’s basically what happened in Zimbabwe: The people who decided how much money to print got to keep a chunk of the money that was printed.)

I don’t actually think this GameStop bubble is all that important in itself. A decade from now, it may be no more memorable than Left Shark or the Macarena. But what is really striking about it is how little it differs from business-as-usual on Wall Street. The fact that a few million Redditors can gather together to buy a stock “for the lulz” or to “stick it to the Man” and thereby bring hedge funds to their knees is not such a big deal in itself, but it is symptomatic of much deeper structural flaws in our financial system.