I’m glad to see that the Biden administration is finally talking about “Bidenomics”. We tend to give too much credit or blame for economic performance to the President—particularly relative to Congress—but there are many important ways in which a Presidential administration can shift the priorities of public policy in particular directions, and Biden has clearly done that.
The economic benefits for people of color seem to have been particularly large. The unemployment gap between White and Black workers in the US is now only 2.7 percentage points, while just a few years ago it was over 4pp and at the worst of the Great Recession it surpassed 7pp. During lockdown, unemployment for Black people hit nearly 17%; it is now less than 6%.
The (misnamed, but we’re stuck with it) Inflation Reduction Act in particular has been an utter triumph.
In the past year, real private investment in manufacturing structures (essentially, new factories) has risen from $56 billion to $87 billion—a more than 50% increase, which puts it at its highest level since the turn of the century. The Inflation Reduction Act appears to be largely responsible for this change.
Not many people seem to know this, but the US has also been on the right track with regard to carbon emissions: Per-capita carbon emissions in the US have been trending downward since about 2000, and are now lower than they were in the 1950s. The Inflation Reduction Act now looks poised to double down on that progress, as it has been forecast to reduce our emissions all the way down to 40% below their early-2000s peak.
Most Americans do correctly believe that inflation is still a bit high (though many seem to think it’s higher than it is); this is weird, seeing as inflation is normally high when the economy is growing rapidly, and gets too low when we are in a recession. This seems to be the Halo Effect, rather than any genuine understanding of macroeconomics: downturns are bad and inflation is bad, so they must go together—when in fact, quite the opposite is the case.
Sixty-four percent of Americans say the economy is worse off compared to 2020, while seventy-three percent of Americans say the economy is worse off compared to five years ago. About two in five of Americans say they feel worse off from five years ago generally (38%) and a similar number say they feel worse off compared to 2020 (37%).
(Did you really have to write out ‘seventy-three percent’? I hate that convention. 73% is so much clearer and quicker to read.)
I don’t know what the Biden administration should do about this. Trying to sell themselves harder might backfire. (And I’m pretty much the last person in the world you should ask for advice about selling yourself.) But they’ve been doing really great work for the US economy… and people haven’t noticed. Thousands of factories are being built, millions of people are getting jobs, and the collective response has been… “meh”.
As of March 9, Silicon Valley Bank (SVB) has failed and officially been put into receivership under the FDIC. A bank that held $209 billion in assets has suddenly become insolvent.
This is the second-largest bank failure in US history, after Washington Mutual (WaMu) in 2008. In fact it will probably have more serious consequences than WaMu, for two reasons:
1. WaMu collapsed as part of the Great Recession, so there were already a lot of other things going on and a lot of policy responses already in place.
2. WaMu was mostly a conventional commercial bank that held deposits and loans for consumers, so its deposits were largely protected by the FDIC, and thus its bankruptcy didn’t cause contagion that spread out to the rest of the system. (Other banks—shadow banks—did during the crash, but not so much WaMu.) SVB mostly served tech startups, so a whopping 89% of its deposits were not protected by FDIC insurance.
You’ve likely heard of many of the companies that had accounts at SVB: Roku, Roblox, Vimeo, even Vox. Stocks of the US financial industry lost $100 billion in value in two days.
The good news is that this will not be catastrophic. It probably won’t even trigger a recession (though the high interest rates we’ve been having lately potentially could drive us over that edge). Because this is commercial banking, it’s done out in the open, with transparency and reasonably good regulation. The FDIC knows what they are doing, and even though they aren’t covering all those deposits directly, they intend to find a buyer for the bank who will, and odds are good that they’ll be able to cover at least 80% of the lost funds.
Then again, it’s worth asking whether we should really have a banking system in which failures are so routine.
The reason banks fail is kind of a dark open secret: They don’t actually have enough money to cover their deposits.
Banks loan away most of their cash, and rely upon the fact that most of their depositors will not want to withdraw their money at the same time. They are required to keep a certain ratio in reserves, but it’s usually fairly small, like 10%. This is called fractional-reserve banking.
As long as less than 10% of deposits get withdrawn at any given time, this works. But if a bunch of depositors suddenly decide to take out their money, the bank may not have enough to cover it all, and suddenly become insolvent.
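The arithmetic here is simple enough to sketch in a few lines of code (hypothetical numbers, not any real bank’s balance sheet):

```python
# Toy model of a fractional-reserve bank that holds exactly the
# required 10% of deposits in reserve; it can only pay out
# withdrawals up to the reserves it actually has on hand.
RESERVE_RATIO = 0.10

def can_cover(deposits: float, withdrawals: float) -> bool:
    reserves = deposits * RESERVE_RATIO
    return withdrawals <= reserves

# With $100M in deposits, the bank holds $10M in reserves:
assert can_cover(100e6, 8e6)        # 8% withdrawn at once: covered
assert not can_cover(100e6, 15e6)   # 15% withdrawn at once: it can't pay
```

In reality a bank facing a shortfall can borrow reserves or sell assets first, but if the gap is big enough (or its assets have lost value, as SVB’s bonds had), that’s when it fails.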
In fact, the fear that a bank might become insolvent can actually cause it to become insolvent, in a self-fulfilling prophecy. Once depositors get word that the bank is about to fail, they rush to be the first to get their money out before it disappears. This is a bank run, and it’s basically what happened to SVB.
The FDIC was originally created to prevent or mitigate bank runs. Not only did they provide insurance that reduced the damage in the event of a bank failure; by assuring depositors that their money would be recovered even if the bank failed, they also reduced the chances of a bank run becoming a self-fulfilling prophecy.
Indeed, SVB is the exception that proves the rule, as they failed largely because their assets were mainly not FDIC insured.
Fractional-reserve banking effectively allows banks to create money, in the form of credit that they offer to borrowers. That credit gets deposited in other banks, which then go on to loan it out to still others; the result is that there is more money in the system than was ever actually printed by the central bank.
In most economies this commercial bank money is a far larger quantity than the central bank money actually printed by the central bank—often nearly 10 to 1. This ratio is called the money multiplier.
Indeed, it’s not a coincidence that the reserve ratio is 10% and the multiplier is 10; the theoretical maximum multiplier is always the inverse of the reserve ratio, so if you require reserves of 10%, the highest multiplier you can get is 10. Had we required 20% reserves, the multiplier would drop to 5.
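The multiplier ceiling falls out of a geometric series: each round of lending re-deposits the unreserved fraction, and the rounds sum to 1/ratio times the original deposit. A quick sketch:

```python
# The money multiplier as a geometric series: in each lending round,
# banks keep `ratio` of new deposits in reserve and lend out the rest,
# which comes back into the system as another deposit.
def total_deposits(initial: float, ratio: float, rounds: int = 500) -> float:
    total, deposit = 0.0, initial
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - ratio)   # only the lent-out fraction is re-deposited
    return total

# A 10% reserve requirement caps the multiplier at 10:
assert round(total_deposits(100, 0.10)) == 1000
# Raising the requirement to 20% cuts the ceiling to 5:
assert round(total_deposits(100, 0.20)) == 500
```

(The real-world multiplier is below this theoretical maximum, since banks often hold excess reserves and some cash leaks out of the banking system.)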
Most countries have fractional-reserve banking, and have for centuries; but it’s actually a pretty weird system if you think about it.
Back when we were on the gold standard, fractional-reserve banking was a way of cheating, getting our money supply to be larger than the supply of gold would actually allow.
But now that we are on a pure fiat money system, it’s worth asking what fractional-reserve banking actually accomplishes. If we need more money, the central bank could just print more. Why do we delegate that task to commercial banks?
Before leaving the subject of fractional reserve systems, I should mention one particularly bizarre variant — a fractional reserve system based on fiat money. I call it bizarre because the essential function of a fractional reserve system is to reduce the resource cost of producing money, by allowing an ounce of reserves to replace, say, five ounces of currency. The resource cost of producing fiat money is zero; more precisely, it costs no more to print a five-dollar bill than a one-dollar bill, so the cost of having a larger number of dollars in circulation is zero. The cost of having more bills in circulation is not zero but small. A fractional reserve system based on fiat money thus economizes on the cost of producing something that costs nothing to produce; it adds the disadvantages of a fractional reserve system to the disadvantages of a fiat system without adding any corresponding advantages. It makes sense only as a discreet way of transferring some of the income that the government receives from producing money to the banking system, and is worth mentioning at all only because it is the system presently in use in this country.
Our banking system evolved gradually over time, and seems to have held onto many features that made more sense in an earlier era. Back when we had arbitrarily tied our central bank money supply to gold, creating a new money supply that was larger may have been a reasonable solution. But today, it just seems to be handing the reins over to private corporations, giving them more profits while forcing the rest of society to bear more risk.
The obvious alternative is full-reserve banking, where banks are simply required to hold 100% of their deposits in reserve and the multiplier drops to 1. This idea has been supported by a number of quite prominent economists, including Milton Friedman.
It’s not just a right-wing idea: The left-wing organization Positive Money is dedicated to advocating for a full-reserve banking system in the UK and EU. (The ECB VP’s criticism of the proposal is utterly baffling to me: it “would not create enough funding for investment and growth.” Um, you do know you can print more money, right? Hm, come to think of it, maybe the ECB doesn’t know that, because they think inflation is literally Hitler. There are legitimate criticisms to be had of Positive Money’s proposal, but “There won’t be enough money under this fiat money system” is a really weird take.)
There’s a relatively simple way to gradually transition from our current system to a full-reserve system: Simply increase the reserve ratio over time, and print more central bank money to keep the total money supply constant. If we find that it seems to be causing more problems than it solves, we could stop or reverse the trend.
Krugman has pointed out that this wouldn’t really fix the problems in the banking system, which actually seem to be much worse in the shadow banking sector than in conventional commercial banking. This is clearly right, but it isn’t really an argument against trying to improve conventional banking. I guess if stricter regulations on conventional banking push more money into the shadow banking system, that’s bad; but really that just means we should be imposing stricter regulations on the shadow banking system first (or simultaneously).
We don’t need to accept bank runs as a routine part of the financial system. There are other ways of doing things.
A lot of people seem really upset about inflation. I’ve previously discussed why this is a bit weird; inflation really just isn’t that bad. In fact, I am increasingly concerned that the usual methods for fixing inflation are considerably worse than inflation itself.
To be clear, I’m not talking about hyperinflation—if you are getting triple-digit inflation or more, you are clearly printing too much money and you need to stop. And there are places in the world where this happens.
But what about just regular, ordinary inflation, even when it’s fairly high? Prices rising at 8% or 9% or even 11% per year? What catastrophe befalls our society when this happens?
Okay, sure, if we could snap our fingers and make prices all stable without cost, that would be worth doing. But we can’t. All of our mechanisms for reducing inflation come with costs—and often very high costs.
The chief mechanism by which inflation is currently controlled is open-market operations by central banks such as the Federal Reserve, the Bank of England, and the European Central Bank. These central banks try to reduce inflation by selling bonds, which lowers the price of bonds and reduces capital available to banks, and thereby increases interest rates. This also effectively removes money from the economy, as banks are using that money to buy bonds instead of lending it out. (It is chiefly in this odd indirect sense that the central bank manages the “money supply”.)
But how does this actually reduce inflation? It’s remarkably indirect. The higher interest rates discourage people from buying houses and companies from hiring workers, which reduces economic growth—or even triggers a recession—which then is supposed to bring down prices. There’s actually a lot we still don’t know about how this works or how long it should be expected to take. What we do know is that the pain hits quickly and the benefits arrive only months or even years later.
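The first link in that chain—selling bonds pushes their price down and their implied yield up—is just arithmetic. For a zero-coupon bond (all numbers here are made-up for illustration):

```python
# A bond's implied annual yield moves inversely with its market price.
# For a zero-coupon bond paying `face` dollars after `years`:
def implied_yield(face: float, price: float, years: float) -> float:
    return (face / price) ** (1 / years) - 1

# A $1,000 ten-year bond trading at $820 yields about 2% per year.
# If central-bank bond sales push the price down to $700, the same
# bond now yields about 3.6%: lower price, higher interest rate.
assert implied_yield(1000, 700, 10) > implied_yield(1000, 820, 10)
```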
As Krugman has rightfully pointed out, the worst pain of the 1970s was not the double-digit inflation; it was the recessions that Paul Volcker’s economic policy triggered in response to that inflation. The inflation wasn’t exactly a good thing; but for most people, the cure was much worse than the disease.
Most laypeople seem to think that prices somehow go up without wages going up, but that simply isn’t how it works. Prices and wages rise at close to the same rate in most countries most of the time. In fact, inflation is often driven chiefly by rising wages rather than the other way around. There are often lags between when the inflation hits and when people see their wages rise; but these lags can actually be in either direction—inflation first or wages first—and for moderate amounts of inflation they are clearly less harmful than the high rates of unemployment that we would get if we fought inflation more aggressively with monetary policy.
Economists are also notoriously vague about exactly how they expect the central bank to reduce inflation. They use complex jargon or broad euphemisms. But when they do actually come out and say they want to reduce wages, it tends to outrage people. Well, that’s one of three main ways that interest rates actually reduce inflation: They reduce wages, they cause unemployment, or they stop people from buying houses. That’s pretty much all that central banks can do.
There may be other ways to reduce inflation, like windfall profits taxes, antitrust action, or even price controls. The first two are basically no-brainers; we should always be taxing windfall profits (if they really are due to a windfall outside a corporation’s control, there’s no incentive to distort), and we should absolutely be increasing antitrust action (why did we reduce it in the first place?). Price controls are riskier—they really do create shortages—but then again, is that really worse than lower wages or unemployment? Because the usual strategy involves lower wages and unemployment.
It’s a little ironic: The people who are usually all about laissez-faire are the ones who panic about inflation and want the government to take drastic action; meanwhile, I’m usually in favor of government intervention, but when it comes to moderate inflation, I think maybe we should just let it be.
When I first decided to move to Edinburgh, I certainly did not expect it to be such a historic time. The pandemic was already in full swing, but I thought that would be all. But this year I was living in the UK when its leadership changed in two historic ways:
First, there was the death of Queen Elizabeth II, and the coronation of King Charles III.
Second, there was the resignation of Boris Johnson, the appointment of Elizabeth Truss, and then, so rapidly I feel like I have whiplash, the resignation of Elizabeth Truss.
In other words, I have seen the end of the longest-reigning monarch and the rise and fall of the shortest-serving prime minister in the history of the United Kingdom. The three-hundred-year history of the United Kingdom.
The prior probability of such a 300-year-historic event happening during my own 3-year term in the UK is approximately 1%. Yet, here we are. A new king, one of a handful of genuine First World monarchs to be coronated in the 21st century. The others are the Netherlands, Belgium, Spain, Monaco, Andorra, and Luxembourg; none of these have even a third the population of the UK, and if we include every Commonwealth Realm (believe it or not, “realm” is in fact still the official term), Charles III is now king of a supranational union with a population of over 150 million people—half the size of the United States. (Yes, he’s your king too, Canada!) Note that Charles III is not king of the entire Commonwealth of Nations, which includes now-independent nations such as India, Pakistan, and South Africa; that successor to the British Empire contains 54 nations and has a population of over 2 billion.
I still can’t quite wrap my mind around this idea of having a king. It feels even more ancient and anachronistic than the 400-year-old university I work at. Of course I knew that we had a queen before, and that she was old and would presumably die at some point and probably be replaced; but that wasn’t really salient information to me until she actually did die and then there was a ten-mile-long queue to see her body and now next spring they will be swearing in this new guy as the monarch of the fourteen realms. It now feels like I’m living in one of those gritty satirical fractured fairy tales. Maybe it’s an urban fantasy setting; it feels a lot like Shrek, to be honest.
Yet other than feeling surreal, none of this has affected my life all that much. I haven’t even really felt the effects of inflation: Groceries and restaurant meals seem a bit more expensive than they were when we arrived, but it’s well within what our budget can absorb; we don’t have a car here, so we don’t care about petrol prices; and we haven’t even been paying more than usual in natural gas because of the subsidy programs. Actually it’s probably been good for our household finances that the pound is so weak and the dollar is so strong. I have been much more directly affected by the university union strikes: being temporary contract junior faculty (read: expendable), I am ineligible to strike and hence had to cross a picket line at one point.
Perhaps this is what history has always felt like for most people: The kings and queens come and go, but life doesn’t really change. But I honestly felt more directly affected by Trump living in the US than I did by Truss living in the UK.
This may be in part because Elizabeth Truss was a very unusual politician; she combined crazy far-right economic policy with generally fairly progressive liberal social policy. A right-wing libertarian, one might say. (As Krugman notes, such people are astonishingly rare in the electorate.) Her socially-liberal stance meant that she wasn’t trying to implement horrific hateful policies against racial minorities or LGBT people the way that Trump was, and for once her horrible economic policies were recognized immediately as such and quickly rescinded. Unlike Trump, Truss did not get the chance to appoint any supreme court justices who could go on to repeal abortion rights.
Then again, Truss couldn’t have appointed any judges if she’d wanted to. The UK Supreme Court is really complicated, and I honestly don’t understand how it works; but from what I do understand, the Prime Minister appoints the Lord Chancellor, the Lord Chancellor forms a commission to appoint the President of the Supreme Court, and the President of the Supreme Court forms a commission to appoint new Supreme Court judges. But I think the monarch is considered the ultimate authority and can veto any appointment along the way. (Or something. Sometimes I get the impression that no one truly understands the UK system, and they just sort of go with doing things as they’ve always been done.) This convoluted arrangement seems to grant the court considerably more political independence than its American counterpart; also, unlike the US Supreme Court, the UK Supreme Court is not allowed to explicitly overturn primary legislation. (Fun fact: The Lord Chancellor is also the Keeper of the Great Seal of the Realm, because Great Britain hasn’t quite figured out that the 13th century ended yet.)
It’s sad and ironic that it was precisely by not being bigoted and racist that Truss ensured she would not have sufficient public support for her absurd economic policies. There’s a large segment of the population of both the US and UK—aptly, if ill-advisedly, referred to by Clinton as “deplorables”—who will accept any terrible policy as long as it hurts the right people. But Truss failed to appeal to that crucial demographic, and so could find no one to support her. Hence, her approval rating fell to a dismal 10%, and she was outlasted by a head of lettuce.
At the time of writing, the new prime minister has not yet been announced, but the smart money is on Rishi Sunak. (I mean that quite literally; he’s leading in prediction markets.) He’s also socially liberal but fiscally conservative, but unlike Truss he seems to have at least some vague understanding of how economics works. Sunak is also popular in a way Truss never was (though that popularity has been declining recently). So I think we can expect to get new policies which are in the same general direction as what Truss wanted—lower taxes on the rich, more privatization, less spent on social services—but at least Sunak is likely to do so in a way that makes the math(s?) actually add up.
I happen to be one of those weirdos who liked the game Cyberpunk 2077. It was hardly flawless, and had many unforced errors (like letting you choose your gender, but not making voice type independent from pronouns? That has to be, like, three lines of code to make your game significantly more inclusive). But overall I thought it did a good job of representing a compelling cyberpunk world that is dystopian but not totally hopeless, and had rich, compelling characters, along with reasonably good gameplay. The high level of character customization sets a new standard (aforementioned errors notwithstanding), and I for one appreciate how they pushed the envelope for sexuality in a AAA game.
It’s still not explicit—though I’m sure there are mods for that—but at least you can in fact get naked, and people talk about sex in a realistic way. It’s still weird to me that showing a bare breast or a penis is seen as ‘adult’ in the same way as showing someone’s head blown off (Remind me: Which of the three will nearly everyone have seen from the time they were a baby? Which will at least 50% of children see from birth, guaranteed, and virtually 100% of adults sooner or later? Which can you see on the Venus de Milo and David?), but it’s at least some progress in our society toward a healthier relationship with sex.
A few things about the game’s world still struck me as odd, though. Chiefly it has to be the weird alternate history where apparently we have experimental AI and mind-uploading in the 2020s, but… those things are still experimental in the 2070s? So our technological progress was through the roof for the early 2000s, and then just completely plateaued? They should have had Johnny Silverhand’s story take place in something like 2050, not 2023. (You could leave essentially everything else unchanged! V could still have grown up hearing tales of Silverhand’s legendary exploits, because 2050 was 27 years ago in 2077; canonically, V is 28 years old when the game begins. Honestly it makes more sense in other ways: Rogue looks like she’s in her 60s, not her 80s.)
Another weird thing is the currency they use: They call it the “eurodollar”, and the symbol is, as you might expect, €$. When the game first came out, that seemed especially ridiculous, since euros were clearly worth more than dollars and basically always had been.
Of course, we’re unlikely to actually merge the two currencies any time soon. (Can you imagine how Republicans would react if such a thing were proposed?) But the weird thing is that we could! It’s almost as if the two currencies are interchangeable—for the first time in history.
It isn’t so much that the euro is weak; it’s that the dollar is strong. When I first moved to the UK, the pound was trading at about $1.40. It is now trading at $1.10! If it continues dropping as it has, it could even reach parity as well! We might have, for the first time in history, the dollar, the pound, and the euro functioning as one currency. Get the Canadian dollar too (currently much too weak), and we’ll have the Atlantic Union dollar I use in some of my science fiction (I imagine the AU as an expansion of NATO into an economic union that gradually becomes its own government).

Then again, the pound is especially weak right now because it plunged after the new prime minister announced an utterly idiotic economic plan. (Conservatives refusing to do basic math and promising that tax cuts would fix everything? Why, it felt like being home again! In all the worst ways.)
This is largely a bad thing. A strong dollar means that the US trade deficit will increase, and also that other countries will have trouble buying our exports. Conversely, with their stronger dollars, Americans will buy more imports from other countries. The combination of these two effects will make inflation worse in other countries (though it could reduce it in the US).
It’s not so bad for me personally, as my husband’s income is largely in dollars while our expenses are in pounds. (My income is in pounds and thus unaffected.) So a strong dollar and a weak pound mean our real household income is about £4,000 higher than it would otherwise have been—which is not a small difference!
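The arithmetic behind a figure like that is simple; here’s a sketch using a purely hypothetical $20,000/year of dollar-denominated income spent in pounds:

```python
# Hypothetical illustration: suppose $20,000/year of household income
# is earned in dollars but spent in pounds. The dollar strengthening
# from $1.40/GBP to $1.10/GBP is worth roughly 3,900 GBP per year.
def pounds_bought(dollars: float, usd_per_gbp: float) -> float:
    return dollars / usd_per_gbp

gain = pounds_bought(20_000, 1.10) - pounds_bought(20_000, 1.40)
assert 3_800 < gain < 4_000
```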
In general, the level of currency exchange rates isn’t very important. It’s changes in exchange rates that matter. The changes in relative prices will shift around a lot of economic activity, causing friction both in the US and in its (many) trading partners. Eventually all those changes should result in the exchange rates converging to a new, stable equilibrium; but that can take a long time, and exchange rates can fluctuate remarkably fast. In the meantime, such large shifts in exchange rates are going to cause even more chaos in a world already shaken by the COVID pandemic and the war in Ukraine.
Manchin wants to call it the Inflation Reduction Act, but it probably won’t actually reduce inflation very much. Then again, some economists—even quite center-right ones—think it may reduce inflation quite a bit, and we basically all agree that it at least won’t increase inflation very much. Since the effects on inflation are likely to be small, we really don’t have to worry about them: whatever it does to inflation, the important thing is that this bill reduces carbon emissions.
Honestly, it’ll be kind of disgusting if this actually does work—because it’s so easy. This bill will have almost no downside. Its macroeconomic effects will be minor, maybe even positive. There was no reason it needed to be this hard-fought. Even if it didn’t have tax increases to offset it—which it absolutely does—the total cost of this bill over the next ten years would be less than six months of military spending, so cutting military spending by 5% would cover it. We have cured our unbearable headaches by finally realizing we could stop hitting ourselves in the head. (And the Republicans want us to keep hitting ourselves and will do whatever they can to make that happen.)
So, yes, it’s very sad that it took us this long. And even 60% of our current emissions is still too much for a stable climate. But let’s take a moment to celebrate, because this is a genuine victory—and we haven’t had a lot of those in a while.
Well, this feels like a milestone: Paul Krugman just wrote a column about a topic I’ve published research on. He didn’t actually cite our paper—in fact the literature review he links to is from 2014—but the topic is very much what we were studying: Asymmetric price transmission, ‘rockets and feathers’. He’s even talking about it from the perspective of industrial organization and market power, which is right in line with our results (and a bit different from the mainstream consensus among economic policy pundits).
The phenomenon is a well-documented one: When the price of an input (say, crude oil) rises, the price of outputs made from that input (say, gasoline) rise immediately, and basically one to one, sometimes even more than one to one. But when the price of an input falls, the price of outputs only falls slowly and gradually, taking a long time to converge to the same level as the input prices. Prices go up like a rocket, but down like a feather.
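A stylized version of this asymmetry is easy to simulate: full, immediate pass-through on the way up, partial adjustment on the way down. (The markup and adjustment speed below are made-up parameters for illustration, not estimates from our paper.)

```python
# "Rockets and feathers" pricing rule: when the input cost rises, the
# output price jumps to the new target at once; when it falls, the
# price only closes a fraction of the gap each period.
def simulate(costs, markup=1.0, down_speed=0.2):
    prices, price = [], costs[0] + markup
    for c in costs:
        target = c + markup
        price = target if target >= price else price + down_speed * (target - price)
        prices.append(price)
    return prices

costs = [10] * 5 + [14] * 5 + [10] * 10   # cost spikes up, then falls back
prices = simulate(costs)
assert prices[5] == 15.0    # up like a rocket: full pass-through immediately
assert prices[10] > 14.0    # down like a feather: still far above the target
assert all(p >= c + 1.0 - 1e-9 for p, c in zip(prices, costs))  # never below cost + markup
```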
Many different explanations have been proposed to explain this phenomenon, and they aren’t all mutually exclusive. They include various aspects of market structure, substitution of inputs, and use of inventories to smooth the effects of prices.
One that I find particularly unpersuasive is the notion of menu costs: That it requires costly effort to actually change your prices, and this somehow results in the asymmetry. Most gas stations have digital price boards; it requires almost zero effort for them to change prices whenever they want. Moreover, there’s no clear reason this would result in asymmetry between raising and lowering prices. Some models extend the notion of “menu cost” to include expected customer responses, which is a much better explanation; but I think that’s far beyond the original meaning of the concept. If you fear to change your price because of how customers may respond, finding a cheaper way to print price labels won’t do a thing to change that.
But our paper—and Krugman’s article—is about one factor in particular: market power. We don’t see prices behave this way in highly competitive markets. We see it the most in oligopolies: Markets where there are only a small number of sellers, who thus have some control over how they set their prices.
Krugman explains it as follows:
When oil prices shoot up, owners of gas stations feel empowered not just to pass on the cost but also to raise their markups, because consumers can’t easily tell whether they’re being gouged when prices are going up everywhere. And gas stations may hang on to these extra markups for a while even when oil prices fall.
That’s actually a somewhat different mechanism from the one we found in our experiment, which is that asymmetric price transmission can be driven by tacit collusion. Explicit collusion is illegal: You can’t just call up the other gas stations and say, “Let’s all set the price at $5 per gallon.” But you can tacitly collude by responding to how they set their prices, and not trying to undercut them even when you could get a short-run benefit from doing so. It’s actually very similar to an Iterated Prisoner’s Dilemma: Cooperation is better for everyone, but worse for you as an individual; to get everyone to cooperate, it’s vital to severely punish those who don’t.
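To see why the threat of punishment sustains the collusion, here’s a minimal Iterated Prisoner’s Dilemma sketch, reading “cooperate” as keeping prices high and “defect” as undercutting. (The payoff numbers are the standard textbook ones, not anything from our experiment.)

```python
# One-shot payoffs for the row player: temptation > reward > punishment > sucker.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strategy_a, strategy_b, rounds=20):
    """Each strategy sees only the opponent's history of moves."""
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(history_b), strategy_b(history_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        history_a.append(a)
        history_b.append(b)
    return score_a, score_b

grim = lambda opp: "D" if "D" in opp else "C"        # punish any defection forever
always_cooperate = lambda opp: "C"
defect_once = lambda opp: "D" if len(opp) == 0 else "C"  # grab the one-time gain

# Sustained cooperation beats a one-time undercut, because the
# punishment wipes out the short-run gain:
coop_score = play(grim, always_cooperate)[0]
cheat_score = play(grim, defect_once)[1]
assert coop_score > cheat_score
```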
In our experiment, the participants were acting as businesses setting their prices. The customers were fully automated, so there was no opportunity to “fool” them in this way. We also excluded any kind of menu costs or product inventories. But we still saw prices go up like rockets and down like feathers. Moreover, prices were always substantially higher than costs, especially during the phase when they were falling like feathers.
Our explanation goes something like this: Businesses are trying to use their market power to maintain higher prices and thereby make higher profits, but they have to worry about other businesses undercutting their prices and taking all the business. Moreover, they also have to worry about others thinking that they are trying to undercut prices—they want to be perceived as cooperating, not defecting, in order to preserve the collusion and avoid being punished.
Consider how this affects their decisions when input prices change. If the price of oil goes up, then there’s no reason not to raise the price of gasoline immediately, because that isn’t violating the collusion. If anything, it’s being nice to your fellow colluders; they want prices as high as possible. You’ll want to raise the prices as high and fast as you can get away with, and you know they’ll do the same. But if the price of oil goes down, now gas stations are faced with a dilemma: You could lower prices to get more customers and make more profits, but the other gas stations might consider that a violation of your tacit collusion and could punish you by cutting their prices even more. Your best option is to lower prices very slowly, so that you can take advantage of the change in the input market, but also maintain the collusion with other gas stations. By slowly cutting prices, you can ensure that you are doing it together, and not trying to undercut other businesses.
Krugman’s explanation and ours are not mutually exclusive; in fact I think both are probably happening. They have one important feature in common, which fits the empirical data: Markets with less competition show greater degrees of asymmetric price transmission. The more concentrated the oligopoly, the more we see rockets and feathers.
They also share an important policy implication: Market power can make inflation worse. Contrary to what a lot of economic policy pundits have been saying, it isn’t ridiculous to think that breaking up monopolies or putting pressure on oligopolies to lower their prices could help reduce inflation. It probably won’t be as reliably effective as the Fed’s buying and selling of bonds to adjust interest rates—but we’re also doing that, and the two are not mutually exclusive. Besides, breaking up monopolies is a generally good thing to do anyway.
It’s not that unusual that I find myself agreeing with Krugman. I think what makes this one feel weird is that I have more expertise on the subject than he does.
While a return to double-digits remains possible, at this point it likely won’t happen, and if it does, it will occur only briefly.
This is no doubt a major reason why the dollar and the pound are widely used as reserve currencies (especially the dollar), and is likely due to the fact that they are managed by the world’s most competent central banks. Brexit would almost have made sense if the UK had been pressured to join the Euro; but they weren’t, because everyone knew the pound was better managed.
The Euro also doesn’t have much inflation, but if anything they err on the side of too low, mainly because Germany appears to believe that inflation is literally Hitler. In fact, the rise of the Nazis didn’t have much to do with the Weimar hyperinflation. The Great Depression was by far a greater factor—unemployment is much, much worse than inflation. (By the way, it’s weird that that graph extends back to the 1980s. It, uh, wasn’t the Euro then. Euros didn’t start circulating until 1999. Is that an aggregate of the franc and the deutsche mark and whatever else? The Euro itself has never had double-digit inflation—ever.)
But it’s always a little surreal for me to see how panicked people in the US and UK get when our inflation rises a couple of percentage points. There seems to be an entire subgenre of economics news that basically consists of rich people saying the sky is falling because inflation has risen—or will, or may rise—by two points. (Hey, anybody got any ideas how we can get them to panic like this over rises in sea level or aggregate temperature?)
Hyperinflation is a real problem—it isn’t what put Hitler into power, but it has led to real crises in Germany, Zimbabwe, and elsewhere. Once you start getting over 100% per year, and especially when it starts rapidly accelerating, that’s a genuine crisis. Moreover, even though they clearly don’t constitute hyperinflation, I can see why people might legitimately worry about price increases of 20% or 30% per year. (Let alone 60% like Argentina is dealing with right now.) But why is going from 2% to 6% any cause for alarm? Yet alarmed we seem to be.
I can even understand why rich people would be upset about inflation (though the magnitude of their concern does still seem disproportionate). Inflation erodes the value of financial assets, because most bonds, options, etc. are denominated in nominal, not inflation-adjusted terms. (Though there are such things as inflation-indexed bonds.) So high inflation can in fact make rich people slightly less rich.
But why in the world are so many poor people upset about inflation?
Inflation doesn’t just erode the value of financial assets; it also erodes the value of financial debts. And most poor people have more debts than they have assets—indeed, it’s not uncommon for poor people to have substantial debt and no financial assets to speak of (what little wealth they have being non-financial, e.g. a car or a home). Thus, their net wealth position improves as prices rise.
The interest rate response can compensate for this to some extent, but most people’s debts are fixed-rate. Moreover, if it’s the higher interest rates you’re worried about, you should want the Federal Reserve and the Bank of England not to fight inflation too hard, because the way they fight it is chiefly by raising interest rates.
I admit, I question the survey design here: I would answer ‘yes’ to both questions if we’re talking about a theoretical 10,000% hyperinflation, but ‘no’ if we’re talking about a realistic 10% inflation. So I would like to see, but could not find, a survey asking people what level of inflation is sufficient cause for concern. But since most of these people seemed concerned about actual, realistic inflation (85% reported anger at seeing actual, higher prices), it still suggests a lot of strong feelings that even mild inflation is bad.
So it does seem to be the case that a lot of poor and middle-class people really strongly dislike inflation even in the actual, mild levels in which it occurs in the US and UK.
The main fear seems to be that inflation will erode people’s purchasing power—that as the price of gasoline and groceries rise, people won’t be able to eat as well or drive as much. And that, indeed, would be a real loss of utility worth worrying about.
But in fact this makes very little sense: Most forms of income—particularly labor income, which is the only real income for some 80%-90% of the population—actually increase with inflation, more or less one-to-one. Yes, there’s some delay—you won’t get your annual cost-of-living raise immediately, but several months down the road. But this could have at most a small effect on your real consumption.
To see this, suppose that inflation has risen from 2% to 6%. (Really, you need not suppose; it has.) Now consider your cost-of-living raise, which nearly everyone gets. It will presumably rise the same way: So if it was 3% before, it will now be 7%. Now consider how much your purchasing power is affected over the course of the year.
For concreteness, let’s say your initial income was $3,000 per month at the start of the year (a fairly typical amount for a middle-class American, indeed almost exactly the median personal income). Let’s compare the case of no inflation with a 1% raise, 2% inflation with a 3% raise, and 6% inflation with a 7% raise.
If there was no inflation, your real income would remain simply $3,000 per month, until the end of the year when it would become $3,030 per month. That’s the baseline to compare against.
If inflation is 2%, your real income would gradually fall, by about 0.16% per month, before being bumped up 3% at the end of the year. So in January you’d have $3,000, in February $2,995, in March $2,990. Come December, your real income has fallen to $2,941. But then next January it will immediately be bumped up 3% to $3,029, almost the same as it would have been with no inflation at all. The total lost income over the entire year is about $380, or about 1% of your total income.
If inflation instead rises to 6%, your real income will fall by 0.49% per month, reaching a minimum of $2,830 in December before being bumped back up to $3,028 next January. Your total loss for the whole year will be about $1110, or about 3% of your total income.
Indeed, it’s a pretty good heuristic to say that for an inflation rate of x% with annual cost-of-living raises, your loss of real income relative to having no inflation at all is about (x/2)%. (This breaks down for really high levels of inflation, at which point it becomes a wild over-estimate, since even 200% inflation doesn’t make your real income go to zero.)
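Since this is all just arithmetic, it’s easy to check. Here’s a quick Python sketch of the paycheck-erosion calculation above—my own back-of-the-envelope illustration, assuming prices compound monthly and the raise arrives the following January:

```python
def yearly_real_loss(monthly_income: float, inflation: float) -> float:
    """Real income lost over one year relative to zero inflation,
    when prices rise every month but pay only catches up the next
    January."""
    loss = 0.0
    for month in range(1, 13):
        # Each paycheck buys a little less as the price level compounds.
        real_pay = monthly_income / (1 + inflation) ** (month / 12)
        loss += monthly_income - real_pay
    return loss

print(round(yearly_real_loss(3000, 0.02)))  # roughly $380, ~1% of $36,000
print(round(yearly_real_loss(3000, 0.06)))  # roughly $1,110, ~3% of $36,000
```

Which is exactly the (x/2)% heuristic: 2% inflation costs about 1% of annual income, 6% inflation costs about 3%.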
This isn’t nothing, of course. You’d feel it. Going from 2% to 6% inflation at an income of $3000 per month is like losing $700 over the course of a year, which could be a month of groceries for a family of four. (Not that anyone can really raise a family of four on a single middle-class income these days. When did The Simpsons begin to seem aspirational?)
But this isn’t the whole story. Suppose that this same family of four had a mortgage payment of $1000 per month; that is also decreasing in real value by the same proportion. And let’s assume it’s a fixed-rate mortgage, as most are, so we don’t have to factor in any changes in interest rates.
With no inflation, their mortgage payment remains $1000. It’s 33.3% of their income this year, and it will be 33.0% of their income next year after they get that 1% raise.
With 2% inflation, their mortgage payment will also fall by 0.16% per month; $998 in February, $996 in March, and so on, down to $980 in December. This amounts to an increase in real income of about $130—taking away a third of the loss that was introduced by the inflation.
With 6% inflation, their mortgage payment will also fall by 0.49% per month; $995 in February, $990 in March, and so on, until it’s only $943 in December. This amounts to an increase in real income of over $370—again taking away a third of the loss.
Indeed, it’s no coincidence that it’s one third; the proportion of lost real income you’ll get back by cheaper mortgage payments is precisely the proportion of your income that was spent on mortgage payments at the start—so if, like too many Americans, they are paying more than a third of their income on mortgage, their real loss of income from inflation will be even lower.
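The one-third offset falls out of the same back-of-the-envelope math. A small sketch (using the assumed figures from above: $1,000/month fixed mortgage, $3,000/month income):

```python
def real_value_shed(fixed_payment: float, inflation: float) -> float:
    """Real value a fixed nominal payment loses over a year of
    inflation -- which is the debtor's gain."""
    gain = 0.0
    for month in range(1, 13):
        gain += fixed_payment - fixed_payment / (1 + inflation) ** (month / 12)
    return gain

income, mortgage = 3000, 1000
wage_loss = real_value_shed(income, 0.06)    # same erosion math as the paycheck
mortgage_gain = real_value_shed(mortgage, 0.06)
print(round(mortgage_gain))          # a bit over $370 at 6% inflation
print(mortgage_gain / wage_loss)     # exactly mortgage/income = 1/3
```

Because the erosion is proportional, the recovered fraction is always the payment’s share of income, whatever the inflation rate.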
And what if they are renting instead? They’re probably on an annual lease, so that payment won’t increase in nominal terms either—and hence will decrease in real terms, in just the same way as a mortgage payment. Likewise car payments, credit card payments, any debt that has a fixed interest rate. If they’re still paying back student loans, their financial situation is almost certainly improved by inflation.
This means that the real loss from an increase of inflation from 2% to 6% is something like 1.5% of total income, or about $500 for a typical American adult. That’s clearly not nearly as bad as a similar increase in unemployment, which would translate one-to-one into lost income on average; moreover, this loss would be concentrated among people who lost their jobs, so it’s actually worse than that once you account for risk aversion. It’s clearly better to lose 1% of your income than to have a 1% chance of losing nearly all your income—and inflation is the former while unemployment is the latter.
Indeed, the only reason you lost purchasing power at all was that your cost-of-living increases didn’t occur often enough. If instead you had a labor contract that instituted cost-of-living raises every month, or even every paycheck, instead of every year, you would get all the benefits of a cheaper mortgage and virtually none of the costs of a weaker paycheck. Convince your employer to make this adjustment, and you will actually benefit from higher inflation.
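To see how much the timing matters, here’s a hedged sketch comparing annual with monthly cost-of-living adjustments (same $3,000/month and 6% inflation; `months_between_raises` is my own illustrative parameter, not anything standard):

```python
def real_loss(monthly_income: float, inflation: float,
              months_between_raises: int) -> float:
    """Real income lost in a year when cost-of-living raises arrive
    every `months_between_raises` months."""
    f = (1 + inflation) ** (1 / 12)   # monthly price-level growth
    nominal, price, loss = monthly_income, 1.0, 0.0
    for month in range(1, 13):
        price *= f
        loss += monthly_income - nominal / price
        if month % months_between_raises == 0:
            nominal *= (1 + inflation) ** (months_between_raises / 12)
    return loss

print(round(real_loss(3000, 0.06, 12)))  # annual raise: roughly $1,110
print(round(real_loss(3000, 0.06, 1)))   # monthly raises: under $200
```

With monthly adjustments the paycheck tracks prices almost exactly, so most of the erosion vanishes while the cheaper mortgage remains.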
So if poor and middle-class people are upset about eroding purchasing power, they should be mad at their employers for not implementing more frequent cost-of-living adjustments; the inflation itself really isn’t the problem.
No, it’s actually $100. There are 13.1 billion $1 bills, 11.7 billion $20 bills, and 16.4 billion $100 bills. And since $100 bills are worth more, the vast majority of US dollar value in circulation is in those $100 bills—indeed, $1.64 trillion of the total $2.05 trillion cash supply.
In the United States, cash is used for 26% of transactions, compared to 28% for debit card and 23% for credit cards. The US is actually a relatively cash-heavy country by First World standards. In the Netherlands and Scandinavia, cash is almost unheard of. When I last visited Amsterdam a couple of months ago, businesses were more likely to take US credit cards than they were to take cash euros.
And yet the cash money supply is still quite large: $2.05 trillion is only a third of the US monetary base, but it’s still a huge amount of money. If most people aren’t using it, who is? And why is so much of it in the form of $100 bills?
It turns out that the answer to the second question can provide an answer to the first. $100 bills are not widely used for consumer purchases—indeed, most businesses won’t even accept them. (Honestly that has always bothered me: What exactly does “legal tender” mean, if you’re allowed to categorically refuse $100 bills? It’d be one thing to say “we can’t accept payment when we can’t make change”, and obviously nobody seriously expects you to accept $10,000 bills; but what if you have a $97 purchase?) When people spend cash, it’s mainly ones, fives, and twenties.
Who uses $100 bills? People who want to store money in a way that is anonymous, easily transportable—including across borders—and stable against market fluctuations. Drug dealers leap to mind (and indeed the money-laundering that HSBC did for drug cartels was largely in the form of thick stacks of $100 bills). Of course it isn’t just drug dealers, or even just illegal transactions, but it is mostly people who want to cross borders. 80% of US $100 bills are in circulation outside the United States. Since 80% of US cash is in the form of $100 bills, this means that nearly two-thirds of all US dollars are outside the US.
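The fractions here are easy to verify. A quick check using the figures quoted above (note the final number is a rough lower bound, since it ignores other denominations held overseas):

```python
total_cash = 2.05e12            # total US cash in circulation, per the figures above
hundreds_value = 100 * 16.4e9   # value held as $100 bills: $1.64 trillion

share_in_hundreds = hundreds_value / total_cash  # share of cash value in $100s
share_of_hundreds_abroad = 0.80                  # fraction of $100 bills held abroad
share_of_dollars_abroad = share_in_hundreds * share_of_hundreds_abroad

print(round(share_in_hundreds, 2))        # 0.8
print(round(share_of_dollars_abroad, 2))  # 0.64 -- "nearly two-thirds"
```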
Knowing this, I have to wonder: Why does the Federal Reserve continue printing so many $100 bills? Okay, once they’re out there, it may be hard to get them back. But they do wear out eventually. (In fact, US dollars wear out faster than most currencies, because they are made of a cotton-linen blend instead of plastic. Surprisingly, this actually makes them less eco-friendly despite being more biodegradable. Of course, the most eco-friendly method of payment is mobile payments, since their marginal environmental impact is basically zero.) So they could simply stop printing them, and eventually the global supply would dwindle.
They clearly haven’t done this—indeed, there were more $100 bills printed last year than any previous year, increasing the global supply by 2 billion bills, or $200 billion. Why not? Are they trying to keep money flowing for drug dealers? Even if the goal is to substitute for failing currencies in other countries (a somewhat odd, if altruistic, objective), wouldn’t that be more effective with $1 and $5 bills? $100 is a lot of money for people in Chad or Angola! Chad’s per-capita GDP is a staggeringly low $600 per year; that means that a $100 bill to a typical person in Chad would be like me holding onto a $10,000 bill (those exist, technically). Surely they’d prefer $1 bills—which would still feel to them like $100 bills feel to me. Even in middle-income countries, $100 is quite a bit; Ecuador actually uses the US dollar as its main currency, but their per-capita GDP is only $5,600, so $100 to them feels like $1000 to us.
If you want to usefully increase the money supply to stimulate consumer spending, print $20 bills—or just increase some numbers in bank reserve accounts. Printing $100 bills is honestly baffling to me. It seems at best inept, and at worst possibly corrupt—maybe they do want to support drug cartels?
(I couldn’t resist; for the uninitiated, my slightly off-color title is referencing this XKCD comic.)
When faced with a bad recession, Keynesian economics prescribes the following response: Expand the money supply. Cut interest rates. Increase government spending and decrease taxes. The bigger the recession, the more we should do all these things—especially increasing spending, because interest rates will often get pushed to zero, creating what’s called a liquidity trap.
Take a look at these two FRED graphs, both since the 1950s. The first is interest rates (specifically the Fed funds effective rate):
The second is the US federal deficit as a proportion of GDP:
Interest rates were pushed to zero right after the 2008 recession, and didn’t start coming back up until 2016. Then as soon as we hit the COVID recession, they were dropped back to zero.
The deficit looks even more remarkable. At the 2009 trough of the recession, the deficit was large, nearly 10% of GDP; but then it was quickly reduced back to normal, to between 2% and 4% of GDP. And that initial surge is as much explained by GDP and tax receipts falling as by spending increasing.
Yet in 2020 we saw something quite different: The deficit became huge. Literally off the chart, nearly 15% of GDP. A staggering $2.8 trillion. We’ve not had a deficit that large as a proportion of GDP since WW2. We’ve never had a deficit that large in real billions of dollars.
Deficit hawks came out of the woodwork to complain about this, and for once I was worried they might actually be right. Their most credible complaint was that it would trigger inflation, and they weren’t wrong about that: Inflation became a serious concern for the first time in decades.
But these recessions were very large, and when you actually run the numbers, this deficit was the correct magnitude for what Keynesian models tell us to do. I wouldn’t have thought our government had the will and courage to actually do it, but I am very glad to have been wrong about that, for one very simple reason:
It worked.
In 2009, we didn’t actually fix the recession. We blunted it; we stopped it from getting worse. But we never really restored GDP, we just let it get back to its normal growth rate after it had plummeted, and eventually caught back up to where we had been.
2021 went completely differently. With a much larger deficit, we fixed this recession. We didn’t just stop the fall; we reversed it. We aren’t just back to normal growth rates—we are back to the same level of GDP, as if the recession had never happened.
This contrast is quite obvious from the graph of US GDP:
In 2008 and 2009, GDP slumps downward, and then just… resumes its previous trend. It’s like we didn’t do anything to fix the recession, and just allowed the overall strong growth of our economy to carry us through.
The pattern in 2020 is completely different. GDP plummets downward—much further, much faster than in the Great Recession. But then it immediately surges back upward. By the end of 2021, it was above its pre-recession level, and looks to be back on its growth trend. With a recession this deep, if we’d just waited like we did last time, it would have taken four or five years to reach this point—we actually did it in less than one.
Indeed, to go from unemployment almost 15% in April of 2020 to under 4% in December of 2021 is fast enough I feel like I’m getting whiplash. We have never seen unemployment drop that fast. Krugman is fond of comparing this to “morning in America”, but that’s really an understatement. Pitch black one moment, shining bright the next: this isn’t a sunrise, it’s pulling open a blackout curtain.
I’m not sure I have the words to express what a staggering achievement of economic policy it is to so rapidly and totally repair the economic damage caused by a pandemic while that pandemic is still happening. It’s the equivalent of repairing an airplane that is not only still in flight, but still taking anti-aircraft fire.
Why, it seems that Keynes fellow may have been onto something, eh?