What happens when a bank fails

Mar 19 JDN 2460023

As of March 10, Silicon Valley Bank (SVB) has failed and officially been put into receivership under the FDIC. A bank that held $209 billion in assets has suddenly become insolvent.

This is the second-largest bank failure in US history, after Washington Mutual (WaMu) in 2008. In fact it will probably have more serious consequences than WaMu, for two reasons:

1. WaMu collapsed as part of the Great Recession, so there were already a lot of other things going on and a lot of policy responses already in place.

2. WaMu was mostly a conventional commercial bank that held deposits and loans for consumers, so its deposits were largely protected by the FDIC, and thus its bankruptcy didn’t cause contagion that spread out to the rest of the system. (Other banks—shadow banks—did during the crash, but not so much WaMu.) SVB mostly served tech startups, so a whopping 89% of its deposits were not protected by FDIC insurance.

You’ve likely heard of many of the companies that had accounts at SVB: Roku, Roblox, Vimeo, even Vox. Stocks of the US financial industry lost $100 billion in value in two days.

The good news is that this will not be catastrophic. It probably won’t even trigger a recession (though the high interest rates we’ve been having lately potentially could drive us over that edge). Because this is commercial banking, it’s done out in the open, with transparency and reasonably good regulation. The FDIC knows what they are doing, and even though they aren’t covering all those deposits directly, they intend to find a buyer for the bank who will, and odds are good that they’ll be able to cover at least 80% of the lost funds.

In fact, while this one is exceptionally large, bank failures are not really all that uncommon. There have been nearly 100 failures of banks with assets over $1 billion in the US alone just since the 1970s. The FDIC exists to handle bank failures, and generally does the job well.

Then again, it’s worth asking whether we should really have a banking system in which failures are so routine.

The reason banks fail is kind of a dark open secret: They don’t actually have enough money to cover their deposits.

Banks loan away most of their cash, and rely upon the fact that most of their depositors will not want to withdraw their money at the same time. They are required to keep a certain ratio in reserves, but it’s usually fairly small, like 10%. This is called fractional-reserve banking.

As long as less than 10% of deposits get withdrawn at any given time, this works. But if a bunch of depositors suddenly decide to take out their money, the bank may not have enough to cover it all, and suddenly become insolvent.
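To make that concrete, here is a minimal sketch with made-up numbers: a bank keeping 10% reserves stays liquid only as long as withdrawals fit within that cushion.

```python
# Minimal sketch of fractional-reserve fragility (hypothetical numbers).
# A bank takes $100M in deposits, keeps 10% in reserve, and loans out the rest.
deposits = 100_000_000
reserve_ratio = 0.10
reserves = deposits * reserve_ratio      # cash on hand: ~$10M
loans = deposits - reserves              # illiquid: ~$90M

def can_cover(withdrawals: float) -> bool:
    """The bank stays liquid only if withdrawals fit within reserves."""
    return withdrawals <= reserves

print(can_cover(0.08 * deposits))  # 8% withdrawn: bank is fine
print(can_cover(0.15 * deposits))  # 15% withdrawn: a run breaks the bank
```

Nothing about the bank's fundamentals has to change between those two cases; only the depositors' behavior does.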

In fact, the fear that a bank might become insolvent can actually cause it to become insolvent, in a self-fulfilling prophecy. Once depositors get word that the bank is about to fail, they rush to be the first to get their money out before it disappears. This is a bank run, and it’s basically what happened to SVB.

The FDIC was originally created to prevent or mitigate bank runs. Not only did they provide insurance that reduced the damage in the event of a bank failure; by assuring depositors that their money would be recovered even if the bank failed, they also reduced the chances of a bank run becoming a self-fulfilling prophecy.


Indeed, SVB is the exception that proves the rule, as they failed largely because their deposits were mainly not FDIC insured.

Fractional-reserve banking effectively allows banks to create money, in the form of credit that they offer to borrowers. That credit gets deposited in other banks, which then go on to loan it out to still others; the result is that there is more money in the system than was ever actually printed by the central bank.

In most economies this commercial bank money is a far larger quantity than the money actually printed by the central bank—often nearly 10 to 1. This ratio is called the money multiplier.

Indeed, it’s not a coincidence that the reserve ratio is 10% and the multiplier is 10; the theoretical maximum multiplier is always the inverse of the reserve ratio, so if you require reserves of 10%, the highest multiplier you can get is 10. Had we required 20% reserves, the multiplier would drop to 5.
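You can verify that inverse relationship by iterating the lend-and-redeposit process; this little sketch (using the round numbers above) converges to 1 divided by the reserve ratio.

```python
# Sketch: total money created by repeated lending and redepositing.
# Each round, banks keep `reserve_ratio` of new deposits and loan out the rest.
def money_multiplier(reserve_ratio: float, rounds: int = 1000) -> float:
    total = 0.0
    deposit = 1.0  # one unit of central bank money deposited initially
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # loaned out, then redeposited elsewhere
    return total  # geometric series, converging to 1 / reserve_ratio

print(round(money_multiplier(0.10), 2))  # 10.0
print(round(money_multiplier(0.20), 2))  # 5.0
```

This is the theoretical maximum; in practice the multiplier is lower, because banks hold excess reserves and people hold cash.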

Most countries have fractional-reserve banking, and have for centuries; but it’s actually a pretty weird system if you think about it.

Back when we were on the gold standard, fractional-reserve banking was a way of cheating, getting our money supply to be larger than the supply of gold would actually allow.

But now that we are on a pure fiat money system, it’s worth asking what fractional-reserve banking actually accomplishes. If we need more money, the central bank could just print more. Why do we delegate that task to commercial banks?

David Friedman of the Cato Institute had some especially harsh words on this, but honestly I find them hard to disagree with:

Before leaving the subject of fractional reserve systems, I should mention one particularly bizarre variant — a fractional reserve system based on fiat money. I call it bizarre because the essential function of a fractional reserve system is to reduce the resource cost of producing money, by allowing an ounce of reserves to replace, say, five ounces of currency. The resource cost of producing fiat money is zero; more precisely, it costs no more to print a five-dollar bill than a one-dollar bill, so the cost of having a larger number of dollars in circulation is zero. The cost of having more bills in circulation is not zero but small. A fractional reserve system based on fiat money thus economizes on the cost of producing something that costs nothing to produce; it adds the disadvantages of a fractional reserve system to the disadvantages of a fiat system without adding any corresponding advantages. It makes sense only as a discreet way of transferring some of the income that the government receives from producing money to the banking system, and is worth mentioning at all only because it is the system presently in use in this country.

Our banking system evolved gradually over time, and seems to have held onto many features that made more sense in an earlier era. Back when we had arbitrarily tied our central bank money supply to gold, creating a new money supply that was larger may have been a reasonable solution. But today, it just seems to be handing the reins over to private corporations, giving them more profits while forcing the rest of society to bear more risk.

The obvious alternative is full-reserve banking, where banks are simply required to hold 100% of their deposits in reserve and the multiplier drops to 1. This idea has been supported by a number of quite prominent economists, including Milton Friedman.

It’s not just a right-wing idea: The left-wing organization Positive Money is dedicated to advocating for a full-reserve banking system in the UK and EU. (The ECB VP’s criticism of the proposal is utterly baffling to me: it “would not create enough funding for investment and growth.” Um, you do know you can print more money, right? Hm, come to think of it, maybe the ECB doesn’t know that, because they think inflation is literally Hitler. There are legitimate criticisms to be had of Positive Money’s proposal, but “There won’t be enough money under this fiat money system” is a really weird take.)

There’s a relatively simple way to gradually transition from our current system to a full-reserve system: Simply increase the reserve ratio over time, and print more central bank money to keep the total money supply constant. If we find that it seems to be causing more problems than it solves, we could stop or reverse the trend.
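The bookkeeping of that transition is straightforward, at least in this stylized form (ignoring shadow banking and velocity effects): if the total money supply M is to stay constant while the reserve ratio r rises, base money must grow toward B = M × r.

```python
# Sketch: base money needed to hold the total money supply constant
# as the reserve ratio is raised toward 100%. Numbers are illustrative.
TOTAL_MONEY = 20_000  # say, $20 trillion (in billions), held fixed

for r in [0.10, 0.25, 0.50, 0.75, 1.00]:
    base_needed = TOTAL_MONEY * r  # since M = B / r at the maximum multiplier
    print(f"reserve ratio {r:.0%}: base money {base_needed:,.0f}, multiplier {1/r:.1f}")
```

At r = 100%, base money equals the entire money supply and the multiplier is 1: full-reserve banking.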

Krugman has pointed out that this wouldn’t really fix the problems in the banking system, which actually seem to be much worse in the shadow banking sector than in conventional commercial banking. This is clearly right, but it isn’t really an argument against trying to improve conventional banking. I guess if stricter regulations on conventional banking push more money into the shadow banking system, that’s bad; but really that just means we should be imposing stricter regulations on the shadow banking system first (or simultaneously).

We don’t need to accept bank runs as a routine part of the financial system. There are other ways of doing things.

Is the cure for inflation worse than the disease?

Nov 13 JDN 2459897

A lot of people seem really upset about inflation. I’ve previously discussed why this is a bit weird; inflation really just isn’t that bad. In fact, I am increasingly concerned that the usual methods for fixing inflation are considerably worse than inflation itself.

To be clear, I’m not talking about hyperinflation: if you are getting triple-digit inflation or more, you are clearly printing too much money and you need to stop. And there are places in the world where this happens.

But what about just regular, ordinary inflation, even when it’s fairly high? Prices rising at 8% or 9% or even 11% per year? What catastrophe befalls our society when this happens?

Okay, sure, if we could snap our fingers and make prices all stable without cost, that would be worth doing. But we can’t. All of our mechanisms for reducing inflation come with costs—and often very high costs.

The chief mechanism by which inflation is currently controlled is open-market operations by central banks such as the Federal Reserve, the Bank of England, and the European Central Bank. These central banks try to reduce inflation by selling bonds, which lowers the price of bonds and reduces capital available to banks, and thereby increases interest rates. This also effectively removes money from the economy, as banks are using that money to buy bonds instead of lending it out. (It is chiefly in this odd indirect sense that the central bank manages the “money supply”.)
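The link between bond prices and interest rates is easiest to see with a one-year zero-coupon bond (the numbers here are purely illustrative):

```python
# Sketch: why selling bonds raises interest rates.
# A one-year zero-coupon bond pays its face value at maturity;
# the implied annual yield is (face - price) / price.
def implied_yield(price: float, face: float = 100.0) -> float:
    return (face - price) / price

print(f"{implied_yield(98.0):.2%}")  # bond trading at 98: ~2.04% yield
print(f"{implied_yield(95.0):.2%}")  # central bank sells, price falls to 95: ~5.26% yield
```

When the central bank sells bonds, the extra supply pushes bond prices down, which mechanically pushes yields (interest rates) up.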

But how does this actually reduce inflation? It’s remarkably indirect. The higher interest rates discourage people from buying houses and discourage companies from hiring workers, which reduces economic growth—or even triggers a recession—and that, in turn, is supposed to bring down prices. There’s actually a lot we still don’t know about how this works or how long it should be expected to take. What we do know is that the pain hits quickly and the benefits arise only months or even years later.

As Krugman has rightfully pointed out, the worst pain of the 1970s was not the double-digit inflation; it was the recessions that Paul Volcker’s economic policy triggered in response to that inflation. The inflation wasn’t exactly a good thing; but for most people, the cure was much worse than the disease.

Most laypeople seem to think that prices somehow go up without wages going up, but that simply isn’t how it works. Prices and wages rise at close to the same rate in most countries most of the time. In fact, inflation is often driven chiefly by rising wages rather than the other way around. There are often lags between when the inflation hits and when people see their wages rise; but these lags can actually be in either direction—inflation first or wages first—and for moderate amounts of inflation they are clearly less harmful than the high rates of unemployment that we would get if we fought inflation more aggressively with monetary policy.

Economists are also notoriously vague about exactly how they expect the central bank to reduce inflation. They use complex jargon or broad euphemisms. But when they do actually come out and say they want to reduce wages, it tends to outrage people. Well, that’s one of three main ways that interest rates actually reduce inflation: They reduce wages, they cause unemployment, or they stop people from buying houses. That’s pretty much all that central banks can do.

There may be other ways to reduce inflation, like windfall profits taxes, antitrust action, or even price controls. The first two are basically no-brainers; we should always be taxing windfall profits (if they really are due to a windfall outside a corporation’s control, there’s no incentive to distort), and we should absolutely be increasing antitrust action (why did we reduce it in the first place?). Price controls are riskier—they really do create shortages—but then again, is that really worse than lower wages or unemployment? Because the usual strategy involves lower wages and unemployment.

It’s a little ironic: The people who are usually all about laissez-faire are the ones who panic about inflation and want the government to take drastic action; meanwhile, I’m usually in favor of government intervention, but when it comes to moderate inflation, I think maybe we should just let it be.

The United Kingdom in transition

Oct 30 JDN 2459883

When I first decided to move to Edinburgh, I certainly did not expect it to be such a historic time. The pandemic was already in full swing, but I thought that would be all. But this year I was living in the UK when its leadership changed in two historic ways:

First, there was the death of Queen Elizabeth II, and the coronation of King Charles III.

Second, there was the resignation of Boris Johnson, the appointment of Elizabeth Truss, and then, so rapidly I feel like I have whiplash, the resignation of Elizabeth Truss.

In other words, I have seen the end of the longest-reigning monarch and the rise and fall of the shortest-serving prime minister in the history of the United Kingdom. The three-hundred-year history of the United Kingdom.

The prior probability of such a 300-year-historic event happening during my own 3-year term in the UK is approximately 1%. Yet, here we are. A new king, one of a handful of genuine First World monarchs to be coronated in the 21st century. The others are the Netherlands, Belgium, Spain, Monaco, Andorra, and Luxembourg; none of these have even a third the population of the UK, and if we include every Commonwealth Realm (believe it or not, “realm” is in fact still the official term), Charles III is now king of a supranational union with a population of over 150 million people—half the size of the United States. (Yes, he’s your king too, Canada!) Note that Charles III is not king of the entire Commonwealth of Nations, which includes now-independent nations such as India, Pakistan, and South Africa; that successor to the British Empire contains 54 nations and has a population of over 2 billion.

I still can’t quite wrap my mind around this idea of having a king. It feels even more ancient and anachronistic than the 400-year-old university I work at. Of course I knew that we had a queen before, and that she was old and would presumably die at some point and probably be replaced; but that wasn’t really salient information to me until she actually did die and then there was a ten-mile-long queue to see her body and now next spring they will be swearing in this new guy as the monarch of the fourteen realms. It now feels like I’m living in one of those gritty satirical fractured fairy tales. Maybe it’s an urban fantasy setting; it feels a lot like Shrek, to be honest.

Yet other than feeling surreal, none of this has affected my life all that much. I haven’t even really felt the effects of inflation: Groceries and restaurant meals seem a bit more expensive than they were when we arrived, but it’s well within what our budget can absorb; we don’t have a car here, so we don’t care about petrol prices; and we haven’t even been paying more than usual in natural gas because of the subsidy programs. Actually it’s probably been good for our household finances that the pound is so weak and the dollar is so strong. I have been much more directly affected by the university union strikes: being temporary contract junior faculty (read: expendable), I am ineligible to strike and hence had to cross a picket line at one point.

Perhaps this is what history has always felt like for most people: The kings and queens come and go, but life doesn’t really change. But I honestly felt more directly affected by Trump living in the US than I did by Truss living in the UK.

This may be in part because Elizabeth Truss was a very unusual politician; she combined crazy far-right economic policy with generally fairly progressive liberal social policy. A right-wing libertarian, one might say. (As Krugman notes, such people are astonishingly rare in the electorate.) Her socially-liberal stance meant that she wasn’t trying to implement horrific hateful policies against racial minorities or LGBT people the way that Trump was, and for once her horrible economic policies were recognized immediately as such and quickly rescinded. Unlike Trump, Truss did not get the chance to appoint any supreme court justices who could go on to repeal abortion rights.

Then again, Truss couldn’t have appointed any judges if she’d wanted to. The UK Supreme Court is really complicated, and I honestly don’t understand how it works; but from what I do understand, the Prime Minister appoints the Lord Chancellor, the Lord Chancellor forms a commission to appoint the President of the Supreme Court, and the President of the Supreme Court forms a commission to appoint new Supreme Court judges. But I think the monarch is considered the ultimate authority and can veto any appointment along the way. (Or something. Sometimes I get the impression that no one truly understands the UK system, and they just sort of go with doing things as they’ve always been done.) This convoluted arrangement seems to grant the court considerably more political independence than its American counterpart; also, unlike the US Supreme Court, the UK Supreme Court is not allowed to explicitly overturn primary legislation. (Fun fact: The Lord Chancellor is also the Keeper of the Great Seal of the Realm, because Great Britain hasn’t quite figured out that the 13th century ended yet.)

It’s sad and ironic that it was precisely by not being bigoted and racist that Truss ensured she would not have sufficient public support for her absurd economic policies. There’s a large segment of the population of both the US and UK—aptly, if ill-advisedly, referred to by Clinton as “deplorables”—who will accept any terrible policy as long as it hurts the right people. But Truss failed to appeal to that crucial demographic, and so could find no one to support her. Hence, her approval rating fell to a dismal 10%, and she was outlasted by a head of lettuce.

At the time of writing, the new prime minister has not yet been announced, but the smart money is on Rishi Sunak. (I mean that quite literally; he’s leading in prediction markets.) He’s also socially liberal but fiscally conservative, but unlike Truss he seems to have at least some vague understanding of how economics works. Sunak is also popular in a way Truss never was (though that popularity has been declining recently). So I think we can expect to get new policies which are in the same general direction as what Truss wanted—lower taxes on the rich, more privatization, less spent on social services—but at least Sunak is likely to do so in a way that makes the math(s?) actually add up.

All of this is unfortunate, but largely par for the course for the last few decades. It compares quite favorably to the situation in the US, where somehow a large chunk of Americans either don’t believe that an insurrection attempt occurred, are fine with it, or blame the other side, and as the guardrails of democracy continue breaking, somehow gasoline prices appear to be one of the most important issues in the midterm election.

You know what? Living through history sucks. I don’t want to live in “interesting times” anymore.

The era of the eurodollar is upon us

Oct 16 JDN 2459869

I happen to be one of those weirdos who liked the game Cyberpunk 2077. It was hardly flawless, and had many unforced errors (like letting you choose your gender, but not making voice type independent from pronouns? That has to be, like, three lines of code to make your game significantly more inclusive). But overall I thought it did a good job of representing a compelling cyberpunk world that is dystopian but not totally hopeless, and had rich, compelling characters, along with reasonably good gameplay. The high level of character customization sets a new standard (aforementioned errors notwithstanding), and I for one appreciate how they pushed the envelope for sexuality in a AAA game.

It’s still not explicit—though I’m sure there are mods for that—but at least you can in fact get naked, and people talk about sex in a realistic way. It’s still weird to me that showing a bare breast or a penis is seen as ‘adult’ in the same way as showing someone’s head blown off (Remind me: Which of the three will nearly everyone have seen from the time they were a baby? Which will at least 50% of children see from birth, guaranteed, and virtually 100% of adults sooner or later? Which can you see on Venus de Milo and David?), but it’s at least some progress in our society toward a healthier relationship with sex.

A few things about the game’s world still struck me as odd, though. Chiefly it has to be the weird alternate history where apparently we have experimental AI and mind-uploading in the 2020s, but… those things are still experimental in the 2070s? So our technological progress was through the roof for the early 2000s, and then just completely plateaued? They should have had Johnny Silverhand’s story take place in something like 2050, not 2023. (You could leave essentially everything else unchanged! V could still have grown up hearing tales of Silverhand’s legendary exploits, because 2050 was 27 years ago in 2077; canonically, V is 28 years old when the game begins. Honestly it makes more sense in other ways: Rogue looks like she’s in her 60s, not her 80s.)

Another weird thing is the currency they use: They call it the “eurodollar”, and the symbol is, as you might expect, €$. When the game first came out, that seemed especially ridiculous, since euros were clearly worth more than dollars and basically always had been.

Well, they aren’t anymore. In fact, euros and dollars are now trading almost exactly at parity, and have been for weeks. CD Projekt Red was right: In the 2020s, the era of the eurodollar is upon us after all.

Of course, we’re unlikely to actually merge the two currencies any time soon. (Can you imagine how Republicans would react if such a thing were proposed?) But the weird thing is that we could! It almost is like the two currencies are interchangeable—for the first time in history.

It isn’t so much that the euro is weak; it’s that the dollar is strong. When I first moved to the UK, the pound was trading at about $1.40. It is now trading at $1.10! If it continues dropping as it has, it could even reach parity as well! We might have, for the first time in history, the dollar, the pound, and the euro functioning as one currency. Get the Canadian dollar too (currently much too weak), and we’ll have the Atlantic Union dollar I use in some of my science fiction (I imagine the AU as an expansion of NATO into an economic union that gradually becomes its own government).

Then again, the pound is especially weak right now because it plunged after the new prime minister announced an utterly idiotic economic plan. (Conservatives refusing to do basic math and promising that tax cuts would fix everything? Why, it felt like being home again! In all the worst ways.)

This is largely a bad thing. A strong dollar means that the US trade deficit will increase, and also that other countries will have trouble buying our exports. Conversely, with their stronger dollars, Americans will buy more imports from other countries. The combination of these two effects will make inflation worse in other countries (though it could reduce it in the US).

It’s not so bad for me personally, as my husband’s income is largely in dollars while our expenses are in pounds. (My income is in pounds and thus unaffected.) So a strong dollar and a weak pound means our real household income is about £4,000 higher than it would otherwise have been—which is not a small difference!
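To illustrate the size of that effect (with a hypothetical dollar income, not our actual finances):

```python
# Sketch: how a weaker pound raises the pound value of dollar income.
# The $20,000 figure is hypothetical, chosen only to illustrate the mechanism.
dollar_income = 20_000
old_rate = 1.40  # dollars per pound when we moved to the UK
new_rate = 1.10  # dollars per pound now

income_then = dollar_income / old_rate  # about £14,300
income_now = dollar_income / new_rate   # about £18,200
print(f"gain: £{income_now - income_then:,.0f}")  # roughly £3,900
```

The same dollars simply buy more pounds now, so dollar earners living in the UK come out ahead.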

In general, the level of currency exchange rates isn’t very important. It’s changes in exchange rates that matter. The changes in relative prices will shift around a lot of economic activity, causing friction both in the US and in its (many) trading partners. Eventually all those changes should result in the exchange rates converging to a new, stable equilibrium; but that can take a long time, and exchange rates can fluctuate remarkably fast. In the meantime, such large shifts in exchange rates are going to cause even more chaos in a world already shaken by the COVID pandemic and the war in Ukraine.

Good news on the climate, for a change

Aug 7 JDN 2459799

In what is surely the biggest political surprise of the decade—if not the century—Joe Manchin suddenly changed his mind and signed onto a budget reconciliation bill that will radically shift US climate policy. He was the last vote needed for the bill to make it through the Senate via reconciliation (as he often is, because he’s pretty much a DINO).

Because the Senate is ridiculous, there are still several layers of procedure the bill must go through before it can actually pass. But since the parliamentarian was appointed by a Democrat and the House had already passed an even stronger climate bill, it looks like at least most of it will make it through. The reconciliation process means we only need a bare majority, so even if all the Republicans vote against it—which they very likely will—it can still get through, with Vice President Harris’s tiebreaking vote. (Because our Senate is 50-50, Harris is on track to cast the most tie-breaking votes of any US Vice President by the end of her term.) Reconciliation also can’t be filibustered.

While it includes a lot of expenditures, particularly tax credits for clean energy and electric cars, the bill includes tax increases and closed loopholes so that it will actually decrease the deficit and likely reduce inflation—which Manchin said was a major reason he was willing to support it. But more importantly, it promises to reduce US carbon emissions by a staggering 40% by 2030.

The US currently produces about 15 tons of CO2 equivalent per person per year, so reducing that by 40% would drop it to only 9 tons per person per year. This would move us from nearly as bad as Saudi Arabia to nearly as good as Norway. It still won’t mean we are doing as well as France or the UK—but at least we’ll no longer be dragging down the rest of the First World.
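The arithmetic is simple enough to check directly (the per-capita figures are approximate):

```python
# Sketch of the per-capita emissions arithmetic; figures are approximate.
us_per_capita = 15.0  # tons CO2-equivalent per person per year, roughly
reduction = 0.40      # promised cut by 2030

after = us_per_capita * (1 - reduction)
print(after)  # 9.0 tons per person per year
```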

And this isn’t a pie-in-the-sky promise: Independent forecasts suggest that these policies may really be able to reduce our emissions that much that fast. It’s honestly a little hard for me to believe; but that’s what the experts are saying.

Manchin wants to call it the Inflation Reduction Act, though its effect on inflation is likely to be modest. Some economists—even quite center-right ones—think it may actually reduce inflation quite a bit, and we basically all agree that it at least won’t increase inflation very much. Since the effects on inflation are likely to be small, we really don’t have to worry about them: whatever it does to inflation, the important thing is that this bill reduces carbon emissions.

Honestly, it’ll be kind of disgusting if this actually does work—because it’s so easy. This bill will have almost no downside. Its macroeconomic effects will be minor, maybe even positive. There was no reason it needed to be this hard-fought. Even if it didn’t have tax increases to offset it—which it absolutely does—the total cost of this bill over the next ten years would be less than six months of military spending, so cutting military spending by 5% would cover it. We have cured our unbearable headaches by finally realizing we could stop hitting ourselves in the head. (And the Republicans want us to keep hitting ourselves and will do whatever they can to make that happen.)

So, yes, it’s very sad that it took us this long. And even 60% of our current emissions is still too much emissions for a stable climate. But let’s take a moment to celebrate, because this is a genuine victory—and we haven’t had a lot of those in a while.

Krugman and rockets and feathers

Jul 17 JDN 2459778

Well, this feels like a milestone: Paul Krugman just wrote a column about a topic I’ve published research on. He didn’t actually cite our paper—in fact the literature review he links to is from 2014—but the topic is very much what we were studying: Asymmetric price transmission, ‘rockets and feathers’. He’s even talking about it from the perspective of industrial organization and market power, which is right in line with our results (and a bit different from the mainstream consensus among economic policy pundits).

The phenomenon is a well-documented one: When the price of an input (say, crude oil) rises, the price of outputs made from that input (say, gasoline) rise immediately, and basically one to one, sometimes even more than one to one. But when the price of an input falls, the price of outputs only falls slowly and gradually, taking a long time to converge to the same level as the input prices. Prices go up like a rocket, but down like a feather.

Many different explanations have been proposed to explain this phenomenon, and they aren’t all mutually exclusive. They include various aspects of market structure, substitution of inputs, and use of inventories to smooth the effects of prices.

One that I find particularly unpersuasive is the notion of menu costs: That it requires costly effort to actually change your prices, and this somehow results in the asymmetry. Most gas stations have digital price boards; it requires almost zero effort for them to change prices whenever they want. Moreover, there’s no clear reason this would result in asymmetry between raising and lowering prices. Some models extend the notion of “menu cost” to include expected customer responses, which is a much better explanation; but I think that’s far beyond the original meaning of the concept. If you fear to change your price because of how customers may respond, finding a cheaper way to print price labels won’t do a thing to change that.

But our paper—and Krugman’s article—is about one factor in particular: market power. We don’t see prices behave this way in highly competitive markets. We see it the most in oligopolies: Markets where there are only a small number of sellers, who thus have some control over how they set their prices.

Krugman explains it as follows:

When oil prices shoot up, owners of gas stations feel empowered not just to pass on the cost but also to raise their markups, because consumers can’t easily tell whether they’re being gouged when prices are going up everywhere. And gas stations may hang on to these extra markups for a while even when oil prices fall.

That’s actually a somewhat different mechanism from the one we found in our experiment, which is that asymmetric price transmission can be driven by tacit collusion. Explicit collusion is illegal: You can’t just call up the other gas stations and say, “Let’s all set the price at $5 per gallon.” But you can tacitly collude by responding to how they set their prices, and not trying to undercut them even when you could get a short-run benefit from doing so. It’s actually very similar to an Iterated Prisoner’s Dilemma: Cooperation is better for everyone, but worse for you as an individual; to get everyone to cooperate, it’s vital to severely punish those who don’t.

In our experiment, the participants were acting as businesses setting their prices. The customers were fully automated, so there was no opportunity to “fool” them in this way. We also excluded any kind of menu costs or product inventories. But we still saw prices go up like rockets and down like feathers. Moreover, prices were always substantially higher than costs, especially during that phase when they are falling down like feathers.

Our explanation goes something like this: Businesses are trying to use their market power to maintain higher prices and thereby make higher profits, but they have to worry about other businesses undercutting their prices and taking all the business. Moreover, they also have to worry about others thinking that they are trying to undercut prices—they want to be perceived as cooperating, not defecting, in order to preserve the collusion and avoid being punished.

Consider how this affects their decisions when input prices change. If the price of oil goes up, then there’s no reason not to raise the price of gasoline immediately, because that isn’t violating the collusion. If anything, it’s being nice to your fellow colluders; they want prices as high as possible. You’ll want to raise the prices as high and fast as you can get away with, and you know they’ll do the same. But if the price of oil goes down, now gas stations are faced with a dilemma: You could lower prices to get more customers and make more profits, but the other gas stations might consider that a violation of your tacit collusion and could punish you by cutting their prices even more. Your best option is to lower prices very slowly, so that you can take advantage of the change in the input market, but also maintain the collusion with other gas stations. By slowly cutting prices, you can ensure that you are doing it together, and not trying to undercut other businesses.
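This asymmetry is easy to see in a toy model. The sketch below is not our actual experimental design, just an illustration of the tacit-collusion logic: a seller passes cost increases through immediately, but after a cost decrease it only closes a fraction of the gap each period, so it can't be mistaken for trying to undercut its rivals.

```python
# A toy model of "rockets and feathers": cost increases are passed through
# immediately; after a cost decrease, the seller drifts down slowly.

def update_price(price, old_cost, new_cost, markup=1.0, caution=0.2):
    """One period's price move for a tacitly colluding seller."""
    target = new_cost + markup
    if new_cost > old_cost:
        return target                                # rocket: jump straight up
    return price + caution * (target - price)        # feather: drift down

def simulate(costs, markup=1.0, caution=0.2):
    """Price path given a list of per-period input costs."""
    prices = [costs[0] + markup]
    for old, new in zip(costs, costs[1:]):
        prices.append(update_price(prices[-1], old, new, markup, caution))
    return prices

# Cost jumps from 3 to 4 at t=1, falls back to 3 at t=2, then stays flat.
prices = simulate([3.0, 4.0, 3.0, 3.0, 3.0, 3.0])
# The price leaps to 5.0 in a single step, then creeps down: 4.8, 4.64,
# 4.51... staying above the collusive target of 4.0 the whole way.
```

The `caution` parameter is the interesting knob: it stands in for how afraid each seller is of being read as a defector, and the smaller it is, the longer prices stay elevated after costs fall.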

Krugman’s explanation and ours are not mutually exclusive; in fact I think both are probably happening. They have one important feature in common, which fits the empirical data: Markets with less competition show greater degrees of asymmetric price transmission. The more concentrated the oligopoly, the more we see rockets and feathers.

They also share an important policy implication: Market power can make inflation worse. Contrary to what a lot of economic policy pundits have been saying, it isn’t ridiculous to think that breaking up monopolies or putting pressure on oligopolies to lower their prices could help reduce inflation. It probably won’t be as reliably effective as the Fed’s buying and selling of bonds to adjust interest rates—but we’re also doing that, and the two are not mutually exclusive. Besides, breaking up monopolies is a generally good thing to do anyway.

It’s not that unusual that I find myself agreeing with Krugman. I think what makes this one feel weird is that I have more expertise on the subject than he does.

Why do poor people dislike inflation?

Jun 5 JDN 2459736

The United States and United Kingdom are both very unaccustomed to inflation. Neither has seen double-digit inflation since the 1980s.

Here’s US inflation since 1990:

And here is the same graph for the UK:

While a return to double-digits remains possible, at this point it likely won’t happen, and if it does, it will occur only briefly.

This is no doubt a major reason why the dollar and the pound are widely used as reserve currencies (especially the dollar), and is likely due to the fact that they are managed by the world’s most competent central banks. Brexit would almost have made sense if the UK had been pressured to join the Euro; but they weren’t, because everyone knew the pound was better managed.

The Euro also doesn’t have much inflation, but if anything they err on the side of too low, mainly because Germany appears to believe that inflation is literally Hitler. In fact, the rise of the Nazis didn’t have much to do with the Weimar hyperinflation. The Great Depression was by far a greater factor—unemployment is much, much worse than inflation. (By the way, it’s weird that that graph extends back to the 1980s. It, uh, wasn’t the Euro then. Euros didn’t start circulating until 1999. Is that an aggregate of the franc and the deutsche mark and whatever else? The Euro itself has never had double-digit inflation—ever.)

But it’s always a little surreal for me to see how panicked people in the US and UK get when our inflation rises a couple of percentage points. There seems to be an entire subgenre of economics news that basically consists of rich people saying the sky is falling because inflation has risen—or will, or may rise—by two points. (Hey, anybody got any ideas how we can get them to panic like this over rises in sea level or aggregate temperature?)

Compare this to some other countries that have real inflation: In Brazil, 10% inflation is a pretty typical year. In Argentina, 10% is a really good year—they’re currently pushing 60%. Kenya’s inflation is pretty well under control now, but it went over 30% during the crisis in 2008. Botswana was doing a nice job of bringing down their inflation until the COVID pandemic threw them out of whack, and now they’re hitting double-digits too. And of course there’s always Zimbabwe, which seemed to look at Weimar Germany and think, “We can beat that.” (80,000,000,000% in one month!? Any time you find yourself talking about billion percent, something has gone terribly, terribly wrong.)

Hyperinflation is a real problem—it isn’t what put Hitler into power, but it has led to real crises in Germany, Zimbabwe, and elsewhere. Once you start getting over 100% per year, and especially when it starts rapidly accelerating, that’s a genuine crisis. Moreover, even though they clearly don’t constitute hyperinflation, I can see why people might legitimately worry about price increases of 20% or 30% per year. (Let alone 60% like Argentina is dealing with right now.) But why is going from 2% to 6% any cause for alarm? Yet alarmed we seem to be.

I can even understand why rich people would be upset about inflation (though the magnitude of their concern does still seem disproportionate). Inflation erodes the value of financial assets, because most bonds, options, etc. are denominated in nominal, not inflation-adjusted terms. (Though there are such things as inflation-indexed bonds.) So high inflation can in fact make rich people slightly less rich.

But why in the world are so many poor people upset about inflation?

Inflation doesn’t just erode the value of financial assets; it also erodes the value of financial debts. And most poor people have more debts than they have assets—indeed, it’s not uncommon for poor people to have substantial debt and no financial assets to speak of (what little wealth they have being non-financial, e.g. a car or a home). Thus, their net wealth position improves as prices rise.

The interest rate response can compensate for this to some extent, but most people’s debts are fixed-rate. Moreover, if it’s the higher interest rates you’re worried about, you should want the Federal Reserve and the Bank of England not to fight inflation too hard, because the way they fight it is chiefly by raising interest rates.

In surveys, almost everyone thinks that inflation is very bad: 92% think that controlling inflation should be a high priority, and 90% think that if inflation gets too high, something very bad will happen. This is greater agreement among Americans than is found for statements like “I like apple pie” or “kittens are nice”, and comparable to “fair elections are important”!

I admit, I question the survey design here: I would answer ‘yes’ to both questions if we’re talking about a theoretical 10,000% hyperinflation, but ‘no’ if we’re talking about a realistic 10% inflation. So I would like to see, but could not find, a survey asking people what level of inflation is sufficient cause for concern. But since most of these people seemed concerned about actual, realistic inflation (85% reported anger at seeing actual, higher prices), it still suggests a lot of strong feelings that even mild inflation is bad.

So it does seem to be the case that a lot of poor and middle-class people really strongly dislike inflation even in the actual, mild levels in which it occurs in the US and UK.

The main fear seems to be that inflation will erode people’s purchasing power—that as the price of gasoline and groceries rise, people won’t be able to eat as well or drive as much. And that, indeed, would be a real loss of utility worth worrying about.

But in fact this makes very little sense: Most forms of income—particularly labor income, which is the only real income for some 80%-90% of the population—actually increase with inflation, more or less one-to-one. Yes, there’s some delay—you won’t get your annual cost-of-living raise immediately, but several months down the road. But this could have at most a small effect on your real consumption.

To see this, suppose that inflation has risen from 2% to 6%. (Really, you need not suppose; it has.) Now consider your cost-of-living raise, which nearly everyone gets. It will presumably rise the same way: So if it was 3% before, it will now be 7%. Now consider how much your purchasing power is affected over the course of the year.

For concreteness, let’s say your initial income was $3,000 per month at the start of the year (a fairly typical amount for a middle-class American, indeed almost exactly the median personal income). Let’s compare the case of no inflation with a 1% raise, 2% inflation with a 3% raise, and 6% inflation with a 7% raise.

If there was no inflation, your real income would remain simply $3,000 per month, until the end of the year when it would become $3,030 per month. That’s the baseline to compare against.

If inflation is 2%, your real income would gradually fall, by about 0.16% per month, before being bumped up 3% at the end of the year. So in January you’d have $3,000, in February $2,995, in March $2,990. Come December, your real income has fallen to $2,941. But then next January it will immediately be bumped up 3% to $3,029, almost the same as it would have been with no inflation at all. The total lost income over the entire year is about $380, or about 1% of your total income.

If inflation instead rises to 6%, your real income will fall by 0.49% per month, reaching a minimum of $2,830 in December before being bumped back up to $3,028 next January. Your total loss for the whole year will be about $1,110, or about 3% of your total income.

Indeed, it’s a pretty good heuristic to say that for an inflation rate of x% with annual cost-of-living raises, your loss of real income relative to having no inflation at all is about (x/2)%. (This breaks down for really high levels of inflation, at which point it becomes a wild over-estimate, since even 200% inflation doesn’t make your real income go to zero.)
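Here's a quick Python check of that heuristic, using the same $3,000-per-month example. (I'm using the convention that the month-m paycheck has been eroded by m months of accumulated inflation, which is what reproduces the yearly totals above.)

```python
# Real income lost over one year of inflation, relative to a no-inflation
# baseline, for someone earning $3,000/month with one annual COLA raise.

def real_income_loss(annual_inflation, monthly_income=3000.0):
    """Total real dollars lost over 12 months, in start-of-year dollars.

    Month m's paycheck is deflated by m months of accumulated inflation
    (m = 1..12), which reproduces the yearly totals in the text.
    """
    f = (1 + annual_inflation) ** (1 / 12)   # monthly price-level factor
    return sum(monthly_income * (1 - f ** -m) for m in range(1, 13))

for x in (0.02, 0.06):
    exact = real_income_loss(x)
    heuristic = (x / 2) * 12 * 3000.0        # the (x/2)% rule of thumb
    print(f"{x:.0%} inflation: lose about ${exact:,.0f} "
          f"(heuristic says ${heuristic:,.0f})")
# Roughly $380 lost at 2% and $1,110 at 6%, as in the text; the (x/2)%
# rule gives $360 and $1,080, which is close.
```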

This isn’t nothing, of course. You’d feel it. Going from 2% to 6% inflation at an income of $3000 per month is like losing $700 over the course of a year, which could be a month of groceries for a family of four. (Not that anyone can really raise a family of four on a single middle-class income these days. When did The Simpsons begin to seem aspirational?)

But this isn’t the whole story. Suppose that this same family of four had a mortgage payment of $1000 per month; that is also decreasing in real value by the same proportion. And let’s assume it’s a fixed-rate mortgage, as most are, so we don’t have to factor in any changes in interest rates.

With no inflation, their mortgage payment remains $1000. It’s 33.3% of their income this year, and it will be 33.0% of their income next year after they get that 1% raise.

With 2% inflation, their mortgage payment will also fall by 0.16% per month; $998 in February, $996 in March, and so on, down to $980 in December. This amounts to an increase in real income of about $130—taking away a third of the loss that was introduced by the inflation.

With 6% inflation, their mortgage payment will also fall by 0.49% per month; $995 in February, $990 in March, and so on, until it’s only $943 in December. This amounts to an increase in real income of over $370—again taking away a third of the loss.

Indeed, it’s no coincidence that it’s one third; the proportion of lost real income you’ll get back by cheaper mortgage payments is precisely the proportion of your income that was spent on mortgage payments at the start—so if, like too many Americans, they are paying more than a third of their income on mortgage, their real loss of income from inflation will be even lower.
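A short sketch makes the share-of-income point concrete (same conventions as the income example above):

```python
# A fixed nominal payment shrinks in real terms exactly like the paycheck,
# so it claws back (payment / income) of the real income lost to inflation.

def real_erosion(annual_inflation, monthly_amount):
    """Real dollars by which a fixed monthly amount erodes over 12 months."""
    f = (1 + annual_inflation) ** (1 / 12)
    return sum(monthly_amount * (1 - f ** -m) for m in range(1, 13))

for x in (0.02, 0.06):
    lost = real_erosion(x, 3000.0)    # erosion of a $3,000 paycheck
    saved = real_erosion(x, 1000.0)   # erosion of a $1,000 mortgage payment
    print(f"{x:.0%}: lose ${lost:,.0f} on income, save ${saved:,.0f} "
          f"on the mortgage ({saved / lost:.0%} of the loss)")
# At 2%: save about $130 of the ~$380 lost; at 6%: save over $370 of the
# ~$1,110 lost. Exactly one third each time: the mortgage's share of income.
```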

And what if they are renting instead? They’re probably on an annual lease, so that payment won’t increase in nominal terms either—and hence will decrease in real terms, in just the same way as a mortgage payment. Likewise car payments, credit card payments, any debt that has a fixed interest rate. If they’re still paying back student loans, their financial situation is almost certainly improved by inflation.

This means that the real loss from an increase of inflation from 2% to 6% is something like 1.5% of total income, or about $500 for a typical American adult. That’s clearly not nearly as bad as a similar increase in unemployment, which would translate one-to-one into lost income on average; moreover, this loss would be concentrated among people who lost their jobs, so it’s actually worse than that once you account for risk aversion. It’s clearly better to lose 1% of your income than to have a 1% chance of losing nearly all your income—and inflation is the former while unemployment is the latter.

Indeed, the only reason you lost purchasing power at all was that your cost-of-living increases didn’t occur often enough. If instead you had a labor contract that instituted cost-of-living raises every month, or even every paycheck, instead of every year, you would get all the benefits of a cheaper mortgage and virtually none of the costs of a weaker paycheck. Convince your employer to make this adjustment, and you will actually benefit from higher inflation.

So if poor and middle-class people are upset about eroding purchasing power, they should be mad at their employers for not implementing more frequent cost-of-living adjustments; the inflation itself really isn’t the problem.

Who still uses cash?

Feb 27 JDN 2459638

If you had to guess, what is the most common denomination of US dollar bills? You might check your wallet: $1? $20?

No, it’s actually $100. There are 13.1 billion $1 bills, 11.7 billion $20 bills, and 16.4 billion $100 bills. And since $100 bills are worth more, the vast majority of US dollar value in circulation is in those $100 bills—indeed, $1.64 trillion of the total $2.05 trillion cash supply.

This is… odd, to say the least. When’s the last time you spent a $100 bill? Then again, when’s the last time you spent… cash? In a typical week, 30% of Americans use no cash at all.

In the United States, cash is used for 26% of transactions, compared to 28% for debit cards and 23% for credit cards. The US is actually a relatively cash-heavy country by First World standards. In the Netherlands and Scandinavia, cash is almost unheard of. When I last visited Amsterdam a couple of months ago, businesses were more likely to take US credit cards than they were to take cash euros.

A list of countries most reliant on cash shows mostly very poor countries, like Chad, Angola, and Burkina Faso. But even in Sub-Saharan Africa, mobile money is dominant in Botswana, Kenya and Uganda.

And yet the cash money supply is still quite large: $2.05 trillion is only a third of the US monetary base, but it’s still a huge amount of money. If most people aren’t using it, who is? And why is so much of it in the form of $100 bills?

It turns out that the answer to the second question can provide an answer to the first. $100 bills are not widely used for consumer purchases—indeed, most businesses won’t even accept them. (Honestly that has always bothered me: What exactly does “legal tender” mean, if you’re allowed to categorically refuse $100 bills? It’d be one thing to say “we can’t accept payment when we can’t make change”, and obviously nobody seriously expects you to accept $10,000 bills; but what if you have a $97 purchase?) When people spend cash, it’s mainly ones, fives, and twenties.

Who uses $100 bills? People who want to store money in a way that is anonymous, easily transportable—including across borders—and stable against market fluctuations. Drug dealers leap to mind (and indeed the money-laundering that HSBC did for drug cartels was largely in the form of thick stacks of $100 bills). Of course it isn’t just drug dealers, or even just illegal transactions, but it is mostly people who want to cross borders. 80% of US $100 bills are in circulation outside the United States. Since 80% of US cash is in the form of $100 bills, this means that nearly two-thirds of all US dollars are outside the US.
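The arithmetic behind that two-thirds figure is simple enough to check:

```python
# Back-of-the-envelope check on the cash figures quoted above.
hundreds_value = 16.4e9 * 100    # 16.4 billion $100 bills = $1.64 trillion
total_cash = 2.05e12             # total US cash in circulation

share_in_hundreds = hundreds_value / total_cash   # 0.80
share_of_all_cash_abroad = 0.80 * share_in_hundreds
# 0.80 * 0.80 = 0.64: nearly two-thirds of all US cash is held abroad.
```

(This ignores whatever smaller bills also circulate abroad, so if anything it's an underestimate.)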

Knowing this, I have to wonder: Why does the Federal Reserve continue printing so many $100 bills? Okay, once they’re out there, it may be hard to get them back. But they do wear out eventually. (In fact, US dollars wear out faster than most currencies, because they are made of linen instead of plastic. Surprisingly, this actually makes them less eco-friendly despite being more biodegradable. Of course, the most eco-friendly method of payment is mobile payments, since their marginal environmental impact is basically zero.) So they could simply stop printing them, and eventually the global supply would dwindle.

They clearly haven’t done this—indeed, there were more $100 bills printed last year than any previous year, increasing the global supply by 2 billion bills, or $200 billion. Why not? Are they trying to keep money flowing for drug dealers? Even if the goal is to substitute for failing currencies in other countries (a somewhat odd, if altruistic, objective), wouldn’t that be more effective with $1 and $5 bills? $100 is a lot of money for people in Chad or Angola! Chad’s per-capita GDP is a staggeringly low $600 per year; that means that a $100 bill to a typical person in Chad would be like me holding onto a $10,000 bill (those exist, technically). Surely they’d prefer $1 bills—which would still feel to them like $100 bills feel to me. Even in middle-income countries, $100 is quite a bit; Ecuador actually uses the US dollar as its main currency, but their per-capita GDP is only $5,600, so $100 to them feels like $1000 to us.
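To make those comparisons concrete, here's the scaling I have in mind. Note that the US per-capita GDP figure (~$65,000) is my own round number for illustration, not a figure from any official source:

```python
# How big a dollar amount "feels" in a poorer country, scaled by per-capita
# GDP. US_GDP_PC is an assumed round figure, not an official statistic.
US_GDP_PC = 65_000

def equivalent_feel(amount, local_gdp_pc, us_gdp_pc=US_GDP_PC):
    """US-dollar amount representing the same share of per-capita income."""
    return amount * us_gdp_pc / local_gdp_pc

chad = equivalent_feel(100, 600)       # ~$10,833: like holding a $10,000 bill
ecuador = equivalent_feel(100, 5600)   # ~$1,161: roughly a $1,000 bill
```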

If you want to usefully increase the money supply to stimulate consumer spending, print $20 bills—or just increase some numbers in bank reserve accounts. Printing $100 bills is honestly baffling to me. It seems at best inept, and at worst possibly corrupt—maybe they do want to support drug cartels?

Keynesian economics: It works, bitches

Jan 23 JDN 2459613

(I couldn’t resist; for the uninitiated, my slightly off-color title is referencing this XKCD comic.)

When faced with a bad recession, Keynesian economics prescribes the following response: Expand the money supply. Cut interest rates. Increase government spending, but decrease taxes. The bigger the recession, the more we should do all these things—especially increasing spending, because interest rates will often get pushed to zero, creating what’s called a liquidity trap.

Take a look at these two FRED graphs, both since the 1950s.
The first is interest rates (specifically the Fed funds effective rate):

The second is the US federal deficit as a proportion of GDP:

Interest rates were pushed to zero right after the 2008 recession, and didn’t start coming back up until 2016. Then as soon as we hit the COVID recession, they were dropped back to zero.

The deficit looks even more remarkable. At the 2009 trough of the recession, the deficit was large, nearly 10% of GDP; but then it was quickly reduced back to normal, to between 2% and 4% of GDP. And that initial surge is as much explained by GDP and tax receipts falling as by spending increasing.

Yet in 2020 we saw something quite different: The deficit became huge. Literally off the chart, nearly 15% of GDP. A staggering $2.8 trillion. We’ve not had a deficit that large as a proportion of GDP since WW2. We’ve never had a deficit that large in real billions of dollars.

Deficit hawks came out of the woodwork to complain about this, and for once I was worried they might actually be right. Their most credible complaint was that it would trigger inflation, and they weren’t wrong about that: Inflation became a serious concern for the first time in decades.

But these recessions were very large, and when you actually run the numbers, this deficit was the correct magnitude for what Keynesian models tell us to do. I wouldn’t have thought our government had the will and courage to actually do it, but I am very glad to have been wrong about that, for one very simple reason:

It worked.

In 2009, we didn’t actually fix the recession. We blunted it; we stopped it from getting worse. But we never really restored GDP, we just let it get back to its normal growth rate after it had plummeted, and eventually caught back up to where we had been.

2021 went completely differently. With a much larger deficit, we fixed this recession. We didn’t just stop the fall; we reversed it. We aren’t just back to normal growth rates—we are back to the same level of GDP, as if the recession had never happened.

This contrast is quite obvious from the graph of US GDP:

In 2008 and 2009, GDP slumps downward, and then just… resumes its previous trend. It’s like we didn’t do anything to fix the recession, and just allowed the overall strong growth of our economy to carry us through.

The pattern in 2020 is completely different. GDP plummets downward—much further, much faster than in the Great Recession. But then it immediately surges back upward. By the end of 2021, it was above its pre-recession level, and looks to be back on its growth trend. With a recession this deep, if we’d just waited like we did last time, it would have taken four or five years to reach this point—we actually did it in less than one.

I wrote earlier about how this is a weird recession, one that actually seems to fit Real Business Cycle theory. Well, it was weird in another way as well: We fixed it. We actually had the courage to do what Keynes told us to do in 1936, and it worked exactly as it was supposed to.

Indeed, to go from unemployment almost 15% in April of 2020 to under 4% in December of 2021 is fast enough I feel like I’m getting whiplash. We have never seen unemployment drop that fast. Krugman is fond of comparing this to “morning in America”, but that’s really an understatement. Pitch black one moment, shining bright the next: this isn’t a sunrise, it’s pulling open a blackout curtain.

And all of this while the pandemic is still going on! The omicron variant has brought case numbers to their highest levels ever, though fortunately death rates so far are still below last year’s peak.

I’m not sure I have the words to express what a staggering achievement of economic policy it is to so rapidly and totally repair the economic damage caused by a pandemic while that pandemic is still happening. It’s the equivalent of repairing an airplane that is not only still in flight, but still taking anti-aircraft fire.

Why, it seems that Keynes fellow may have been onto something, eh?

Strange times for the labor market

Jan 9 JDN 2459589

Labor markets have been behaving quite strangely lately, due to COVID and its consequences. As I said in an earlier post, the COVID recession was the one recession I can think of that actually seemed to follow Real Business Cycle theory—where it was labor supply, not demand, that drove employment.

I dare say that for the first time in decades, the US government actually followed Keynesian policy. US federal government spending surged from $4.8 trillion to $6.8 trillion in a single year:

That is a staggering amount of additional spending; I don’t think any country in history has ever increased their spending by that large an amount in a single year, even inflation-adjusted. Yet in response to a recession that severe, this is exactly what Keynesian models prescribed—and for once, we listened. Instead of balking at the big numbers, we went ahead and spent the money.

And apparently it worked: unemployment spiked to the worst levels seen since the Great Depression, but then suddenly plummeted back to normal almost immediately:

Nor was this just the result of people giving up on finding work. U-6, the broader unemployment measure that includes people who are underemployed or have given up looking for work, shows the same unprecedented pattern:

The oddest part is that people are now quitting their jobs at the highest rate seen in over 20 years:


This phenomenon has been dubbed the Great Resignation, and while its causes are still unclear, it is clearly the most important change in the labor market in decades.

In a previous post I hypothesized that this surge in strikes and quits was a coordination effect: The sudden, consistent shock to all labor markets at once gave people a focal point to coordinate their decision to strike.

But it’s also quite possible that it was the Keynesian stimulus that did it: The relief payments made it safe for people to leave jobs they had long hated, and they leapt at the opportunity.

When that huge surge in government spending was proposed, the usual voices came out of the woodwork to warn of terrible inflation. It’s true, inflation has been higher lately than usual, nearly 7% last year. But we still haven’t hit the double-digit inflation rates we had in the late 1970s and early 1980s:

Indeed, most of the inflation we’ve had can be explained by the shortages created by the supply chain crisis, along with a very interesting substitution effect created by the pandemic. As services shut down, people bought goods instead: Home gyms instead of gym memberships, wifi upgrades instead of restaurant meals.

As a result, the price of durable goods actually rose, when it had previously been falling for decades. That broader pattern is worth emphasizing: As technology advances, services like healthcare and education get more expensive, durable goods like phones and washing machines get cheaper, and nondurable goods like food and gasoline fluctuate but ultimately stay about the same. But in the last year or so, durable goods have gotten more expensive too, because people want to buy more while supply chains are able to deliver less.

This suggests that the inflation we are seeing is likely to go away in a few years, once the pandemic is better under control (or else becomes endemic, like influenza: the virus is always there, but we learn to live with it).

But I don’t think the effects on the labor market will be so transitory. The strikes and quits we’ve been seeing lately really are at a historic level, and they are likely to have a long-lasting effect on how work is organized. Employers are panicking about having to raise wages and whining about how “no one wants to work” (meaning, of course, no one wants to work at the current wage and conditions on offer). The correct response is the one from Goodfellas [language warning].

For the first time in decades, there are actually more job vacancies than unemployed workers:

This means that the tables have turned. The bargaining power is suddenly in the hands of workers again, after being in the hands of employers for as long as I’ve been alive. Of course it’s impossible to know whether some other shock could yield another reversal; but for now, it looks like we are finally on the verge of major changes in how labor markets operate—and I for one think it’s about time.