Reflections on the Index of Necessary Expenditure

Mar 16 JDN 2460751

In last week’s post I constructed an Index of Necessary Expenditure (INE), attempting to estimate the total cost of all of the things a family needs and can’t do without, like housing, food, clothing, cars, healthcare, and education. What I found shocked me: The median family cannot afford all necessary expenditures.

I have a couple more thoughts about that.

I still don’t understand why people care so much about gas prices.

Gasoline was a relatively small contribution to the INE. It was more than clothing but less than utilities, and absolutely dwarfed by housing, food, or college. I thought that since I only counted a 15-mile commute, maybe I didn’t actually include enough gasoline usage; but based on this estimate of about $2000 per driver, I was in about the right range: my estimate for the same year was $3350 for a 2-car family.

I think I still have to go with my salience hypothesis: Gasoline is the only price that we plaster in real-time on signs on the side of the road. So people are constantly aware of it, even though it isn’t actually that important.

The price surge that should be upsetting people is housing.

If the price of homes had only risen with the rate of CPI inflation instead of what it actually did, the median home price in 2024 would be only $234,000 instead of the $396,000 it actually is; and by my estimation that would save a typical family $11,000 per year—a whopping 15% of their income, and nearly enough to make the INE affordable by itself.
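To see where a saving of that size could come from, here is a rough back-of-the-envelope sketch in Python: the annual payment difference on a 30-year fixed mortgage, assuming a 6% rate and no down payment. (These mortgage assumptions are mine, not necessarily the ones used in the INE itself; the point is just the order of magnitude.)

```python
# A rough sketch (not the INE's own method): annual mortgage payment difference
# between the actual median price and the CPI-only counterfactual, assuming a
# 30-year fixed mortgage at 6% and no down payment.
def annual_mortgage_payment(principal, annual_rate=0.06, years=30):
    r, n = annual_rate / 12, years * 12
    return 12 * principal * r / (1 - (1 + r) ** -n)

actual_price, cpi_only_price = 396_000, 234_000
saving = annual_mortgage_payment(actual_price) - annual_mortgage_payment(cpi_only_price)
print(f"~${saving:,.0f} per year")   # ~$11,655 per year, in the ballpark of the $11,000 figure
```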

Now, I’ll consider some possible objections to my findings.

Objection 1: A typical family doesn’t actually spend this much on these things.

You’re right, they don’t! Because they couldn’t possibly. Even with substantial debt, you just can’t sustainably spend 125% of your after-tax household income.

My goal here was not to estimate how much families actually spend; it was to estimate how much they need to spend in order to live a good life and not feel deprived.


What I have found is that most American families feel deprived. They are forced to sacrifice something really important—like healthcare, or education, or owning a home—because they simply can’t afford it.

What I’m trying to do here is find the price of the American Dream; and what I’ve found is that the American Dream has a price that most Americans cannot afford.

Objection 2: You should use median healthcare spending, not mean.

I did in fact use mean figures instead of median for healthcare expenditures, mainly because only the mean was readily available. Mean spending is higher than median spending (just as mean income is higher than median income), so you might say that I’ve overestimated healthcare expenditure—and in a sense that’s definitely true. The median family spends less than this on healthcare.

But the reason that the median family spends less than this on healthcare is not that they want to, but that they have to. Healthcare isn’t a luxury that people buy more of because they are richer. People buy either as much as they need or as much as they can afford—whichever is lower, which is typically the latter. Using the mean instead of the median is a crude way to account for that, but I think it’s a defensible one.

But okay, let’s go ahead and cut the estimate of healthcare spending in half; even if you do that, the INE is still larger than after-tax median household income in most years.

Objection 3: A typical family isn’t a family of four, it’s a family of three.

Yes, the mean number of people in a family household in the US is 3.22 (the median is 3).

This is a very bad thing.

Part of what I seem to be finding here is that a family of four is unaffordable—literally impossible to afford—on a typical family income.

But a healthy society is one in which typical families have two or three children. That is what we need in order to achieve population replacement. When families get smaller than that, we aren’t having enough children, and our population will decline—which means that we’ll have too many old people relative to young people. This puts enormous pressure on healthcare and pension systems, which rely upon the fact that young people produce more, in order to pay for the fact that old people cost more.

The ideal average number of births per woman is about 2.1; this is what would give us a steady population. No US state has fertility above this level. The only reason the US population is growing rather than shrinking is that we are taking in immigrants.

This is bad. This is not sustainable. If the reason families aren’t having enough kids is that they can’t afford them—and this fits with other research on the subject—then this economic failure damages our entire society, and it needs to be fixed.

Objection 4: Many families buy their cars used.

Perhaps 1/10 of a new car every year isn’t an ideal estimate of how much people spend on their cars, but if anything I think it’s conservative, because if you only buy a car every 10 years, and it was already used when you bought it, you’re going to need to spend a lot on maintaining it—quite possibly more than it would cost to get a new one. Motley Fool actually estimates the ownership cost of just one car at substantially more than I estimated for two cars. So if anything your complaint should be that I’ve underestimated the cost by not adequately including maintenance and insurance.

Objection 5: Not everyone gets a four-year college degree.

Fair enough; a substantial proportion get associate’s degrees, and most people get no college degree at all. But some also get graduate degrees, which are even more expensive (ask me how I know).

Moreover, in today’s labor market, having a college degree makes a huge difference in your future earnings; a bachelor’s degree increases your lifetime earnings by a whopping 84%. In theory it’s okay to have a society where most people don’t go to college; in practice, in our society, not going to college puts you at a tremendous disadvantage for the rest of your life. So we either need to find a way to bring wages up for those who don’t go to college, or find a way to bring the cost of college down.

This is probably one of the things that families actually choose to scrimp on, only sending one kid to college or none at all. But because college is such a huge determinant of earnings, this perpetuates intergenerational inequality: Only rich families can afford to send their kids to college, and only kids who went to college grow up to have rich families.

Objection 6: You don’t actually need to save for college; you can use student loans.

Yes, you can, and in practice, most people who go to college do. But while this solves the liquidity problem (having enough money right now), it does not solve the solvency problem (having enough money in the long run). Failing to save for college and relying on student loans just means pushing the cost of college onto your children—and since we’ve been doing that for over a generation, feel free to replace the category “college savings” with “repaying student loans”; it won’t meaningfully change the results.

Housing should be cheap

Sep 1 JDN 2460555

We are of two minds about housing in our society. On the one hand, we recognize that shelter is a necessity, and we want it to be affordable for all. On the other hand, we see real estate as an asset, and we want it to appreciate in value and thereby provide a store of wealth. So on the one hand we want it to be cheap, but on the other hand we want it to be expensive. And of course it can’t be both.

This is not a uniquely American phenomenon. As Noah Smith points out, it seems to be how things are done in almost every country in the world. It may be foolish for me to try to turn such a tide. But I’m going to try anyway.

Housing should be cheap.

For some reason, inflation is seen as a bad thing for every other good, necessity and luxury alike; but when it comes to housing in particular—the single biggest expense for almost everyone—suddenly we are conflicted about it, and think that maybe inflation is a good thing actually.

This is because owning a home that appreciates in value provides the illusion of increasing wealth.

Yes, I said illusion. In some particular circumstances it can sometimes increase real wealth, but when housing is getting more expensive everywhere at once (which is basically true), it doesn’t actually increase real wealth—because you still need to have a home. So while you’d get more money if you sold your current home, you’d have to go buy another home that would be just as expensive. That extra wealth is largely imaginary.

In fact, what isn’t an illusion is your increased property tax bill. If you aren’t planning on selling your home any time soon, you should really see its appreciation as a bad thing; now you suddenly owe more in taxes.

Home equity lines of credit complicate this a bit; for some reason we let people collateralize part of the home—even though the whole home is already collateralized with a mortgage to someone else—and thereby turn that largely-imaginary wealth into actual liquid cash. This is just one more way that our financial system is broken; we shouldn’t be offering these lines of credit, just as we shouldn’t be creating mortgage-backed securities. Cleverness is not a virtue in finance; banking should be boring.

But you’re probably still not convinced. So I’d like you to consider a simple thought experiment, where we take either view to the extreme: Make housing 100 times cheaper or 100 times more expensive.

Currently, houses cost about $400,000. So in Cheap World, houses cost $4,000. In Expensive World, they cost $40 million.

In Cheap World, there is no homelessness. Seriously, zero. It would make no sense at all for the government not to simply buy everyone a house. If you want to also buy your own house—or a dozen—go ahead, that’s fine; but you get one for free, paid for by tax dollars, because that’s cheaper than a year of schooling for a high-school student; it’s in fact not much more than what we’d currently spend to house someone in a homeless shelter for a year. So given the choice between offering someone two years at a shelter and ensuring they are never homeless again, it’s pretty obvious we should choose the latter. Thus, in Cheap World, we all have a roof over our heads. And instead of storing their wealth in their homes, people in Cheap World store their wealth in stocks and bonds, which have better returns anyway.

In Expensive World, the top 1% are multi-millionaires who own homes, maybe the top 10% can afford rent, and the remaining 89% of the population are homeless. There’s simply no way to allocate the wealth of our society such that a typical middle class household has $40 million. We’re just not that rich. We probably never will be that rich. It may not even be possible to make a society that rich. In Expensive World, most people live in tents on the streets, because housing has been priced out of reach for all but the richest families.

Cheap World sounds like an amazing place to live. Expensive World is a horrific dystopia. The only thing I changed was the price of housing.


Yes, I changed it a lot; but that was to make the example as clear as possible, and it’s not even as extreme as it probably sounds. At 10% annual growth, 100 times more expensive only takes 49 years. At the current growth rate of housing prices of about 5% per year, it would take 95 years. A century from now, if we don’t fix our housing market, we will live in Expensive World. (Yes, we’ll most likely be richer then too; but will we be that much richer? Median income has not been rising nearly as fast as median housing price. If current trends continue, median income will be 5 times bigger and housing prices will be 100 times bigger—that’s still terrible.)
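If you want to check that arithmetic yourself, here is a tiny Python snippet that computes how many whole years it takes a price to grow 100-fold at a constant annual rate:

```python
import math

def years_to_multiply(factor, annual_growth):
    """Whole years needed for a price to grow by `factor` at a constant annual rate."""
    return math.ceil(math.log(factor) / math.log(1 + annual_growth))

for rate in (0.10, 0.05):
    print(f"At {rate:.0%} annual growth, a 100x increase takes {years_to_multiply(100, rate)} years.")
# At 10% annual growth, a 100x increase takes 49 years.
# At 5% annual growth, a 100x increase takes 95 years.
```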

We’re already seeing something that feels a lot like Expensive World in some of our most expensive cities. San Francisco has ludicrously expensive housing and also a massive homelessness crisis—this is not a coincidence. Homelessness does still exist in more affordable cities, but clearly not at the same crisis level.

I think part of the problem is that people don’t really understand what wealth is. They see the number go up, and they think that means there is more wealth. But real wealth consists in goods, not in prices; prices merely decide how that wealth is allocated.

A home is wealth, yes. But it’s the same amount of real wealth regardless of what price it has, because what matters is what it’s good for. If you become genuinely richer by selling an appreciated home, you gained that extra wealth from somewhere else; it was not contained within your home. You have appropriated wealth that someone else used to have. You haven’t created wealth; you’ve merely obtained it.

For you as an individual, that may not make a difference; you still get richer. But as a society, it makes all the difference: Moving wealth around doesn’t make our society richer, and all higher prices can do is move wealth around.

This means that rising housing prices simply cannot make our whole society richer. Better houses could do that. More houses could do that. But simply raising the price tag isn’t making our society richer. If it makes anyone richer—which, again, typically it does not—it does so by moving wealth from somewhere else. And since homeowners are generally richer than non-homeowners (even aside from their housing wealth!), more expensive homes means moving wealth from poorer people to richer people—increased inequality.

We used to have affordable housing, just a couple of generations ago. But we may never have truly affordable housing again, because people really don’t like to see that number go down, and they vote for policies accordingly—especially at the local level. Our best hope right now seems to be to keep it from going up faster than the growth rate of income, so that homes don’t become any more unaffordable than they already are.

But frankly I’m not optimistic. I think part of the cyberpunk dystopia we’re careening towards is Expensive World.

Why are groceries so expensive?

Aug 18 JDN 2460541

There has been unusually high inflation over the past few years, mostly attributable to the COVID pandemic and its aftermath. But groceries in particular seem to have gotten especially expensive. We’ve all felt it: Eggs, milk, and toilet paper soared to extreme prices and then, even when they came back down, never came down all the way.

Why would this be?

Did it involve supply chain disruptions? Sure. Was it related to the war in Ukraine? Probably.

But it clearly wasn’t just those things—because, as the FTC recently found, grocery stores have been colluding and price-gouging. Large grocery chains like Walmart and Kroger have a lot of market power, and they used that power to raise prices considerably faster than was necessary to keep up with their increased costs; as a result, they made record profits. Their costs did genuinely increase, but they increased their prices even more, and ended up being better off.

The big chains were also better able to protect their own supply chains than smaller companies, and so the effects of the pandemic further entrenched the market power of a handful of corporations. Some of them also imposed strict delivery requirements on their suppliers, pressuring them to prioritize the big companies over the small ones.

This kind of thing is what happens when we let oligopolies take control. When only a few companies control the market, prices go up, quality goes down, and inequality gets worse.

For far too long, institutions like the FTC have failed to challenge the ever tighter concentration of our markets in the hands of a small number of huge corporations.

And it’s not just grocery stores.

Our media is dominated by five corporations: Disney, WarnerMedia, NBCUniversal, Sony, and Paramount.

Our cell phone service is 99% controlled by three corporations: T-Mobile, Verizon, and AT&T.

Our music industry is dominated by three corporations: Sony, Universal, and Warner.

Two-thirds of US airline traffic is carried by four airlines: American, Delta, Southwest, and United.

Nearly 40% of US commercial banking assets are controlled by just three banks: JPMorgan Chase, Bank of America, and Citigroup.

Do I even need to mention the incredible market share Google has in search—over 90%—or Facebook has in social media—over 50%?

And most of these lists used to be longer. Disney recently acquired 21st Century Fox. Viacom recently merged with CBS and then became Paramount. Universal recently acquired EMI. Our markets aren’t simply alarmingly concentrated; they have also been getting more concentrated over time.

Institutions like the FTC are supposed to be protecting us from oligopolies, by ensuring that corporations can’t merge and acquire each other once they reach a certain market share. But decades of underfunding and laissez-faire ideology have weakened these institutions. So many mergers that obviously shouldn’t have been allowed were allowed, because no regulatory agency had the will and the strength to stop them.

The good news is that this is finally beginning to change: The Department of Justice has recently (finally!) sued Google for maintaining a monopoly on Internet search. And among grocery stores in particular, the FTC is challenging Kroger’s acquisition of Albertsons—though it remains unclear whether that challenge will succeed.

Hopefully this is a sign that the FTC has found its teeth again, and will continue to prosecute anti-trust cases against oligopolies. A lot of that may depend on who ends up in the White House this November.

Why does everyone work full-time?

Jun 30 JDN 2460492

Over 70% of US workers work “full-time”, that is, at least 40 hours a week. The average number of hours worked per week is 33.8, and the average number of overtime hours is only 3.6. So basically, about 2/3 of workers work almost exactly 40 hours per week.

We’re accustomed to this situation, so it may not seem strange to you. But stop and think for a moment: What are the odds that across every industry, exactly 40 hours per week is the most efficient arrangement?

Indeed, there is mounting evidence that in many industries, 40 hours is too much, and something like 35 or even 30 hours would actually be more efficient. Yet we continue to work 40-hour weeks.

This looks like a corner solution: Rather than choosing an optimal amount, we’re all up against some kind of constraint.


What’s the constraint? Well, the government requires (for most workers) that anything above 40 hours per week must be paid as overtime, that is, at a higher wage rate. So it looks like we would all be working more than 40 hours per week if we could, but we pile up against the limit that these regulations create.
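For concreteness, here is a minimal sketch of that overtime rule. The time-and-a-half multiplier is the standard US rule for covered workers; the $20 wage is just an illustrative number.

```python
# Minimal sketch of the overtime constraint: hours past 40 are paid at 1.5x the
# base wage (the standard US rule for covered workers). The $20/hour wage is
# just an illustrative number.
def weekly_pay(hours, wage=20.0):
    regular = min(hours, 40) * wage
    overtime = max(hours - 40, 0) * wage * 1.5
    return regular + overtime

for h in (35, 40, 45, 50):
    print(f"{h} hours -> ${weekly_pay(h):,.0f}")
# 35 hours -> $700
# 40 hours -> $800
# 45 hours -> $950
# 50 hours -> $1,100
```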

Does this mean we would be better off without the regulations? Clearly not. As I just pointed out, the evidence is mounting that 40 hours is too much, not too little. But why, then, would we all be trying to work so many hours?

I believe this is yet another example of hyper-competition, where competition drives us to an inefficient outcome.

Employers value employees who work a lot of hours. Indeed, I contend that they do so far more than makes any rational sense; they seem to care more about how many hours you work than about the actual quality or quantity of your output. Maybe this is because hours worked is easier to measure, or because it seems like a fairer estimate of your effort; but for whatever reason, employers really seem to reward employees who work a lot of hours, regardless of almost everything else.

In the absence of a limit on hours worked, then, employers are going to heap rewards on whoever works the most hours, and so people will be pressured to work more and more hours. Then we would all work ourselves to death, and it’s not even clear that this would be good for GDP.

Indeed, this seems to be what happened, before the 40-hour work week became the standard. In the 1800s, the average American worked over 60 hours per week. It wasn’t until the 1940s that 40-hour weeks became the norm.

But speaking of norms, that also seems to be a big factor here. The truth is, overtime isn’t really that expensive, and employers could be smarter about rewarding good work rather than more hours. But once a norm establishes itself in a society, it can be very hard to change. And right now, the norm is that 40 hours is a “normal” “standard” “full” work week—any more is above and beyond, and any less is inferior.

This is a problem, because a lot of people can’t work 40-hour weeks. Our standard for what makes someone “disabled” isn’t that you can’t work at all; it’s that you can’t work as much as society expects. I wonder how many people are currently living on disability who could have been working part-time, but there just weren’t enough part-time jobs available. The employment rate among people with a disability is only 41%, compared to 77% of the general population.

And it’s not that we need to work this much. Our productivity is now staggeringly high: We produce more than five times as much wealth per hour of work as we did as recently as the 1940s. So in theory, we should be able to live just as well while working one-fifth as much… but that’s clearly not what happened.

Keynes accurately predicted our high level of productivity; but he wrongly predicted that we would work less, when instead we just kept right on working almost as hard as before.

Indeed, it doesn’t even seem like we live five times as well while working just as much. Many things are better now—healthcare, entertainment, and of course electronics—but somehow, we really don’t feel like we are living better lives than our ancestors.

The Economic Policy Institute offers an explanation for this phenomenon: Our pay hasn’t kept up with our productivity.


Up until about 1980, productivity and pay rose in lockstep. But then they started to diverge, and they never again converged. Productivity continued to soar, while real wages only barely increased. The result is that since then, productivity has grown by 64%, and hourly pay has only grown 15%.

This is definitely part of the problem, but I think there’s more to it as well. Housing and healthcare have become so utterly unaffordable in this country that it really doesn’t matter that our cars are nice and our phones are dirt cheap. We are theoretically wealthier now, but most of that extra wealth goes into simply staying healthy and having a home. Our consumption has been necessitized.

If we can solve these problems, maybe people won’t feel a need to work so many hours. Or, maybe competition will continue to pressure them to work those hours… but at least we’ll actually feel richer when we do it.

How is the economy doing this well?

Apr 14 JDN 2460416

We are living in a very weird time, economically. The COVID pandemic created huge disruptions throughout our economy, from retail shops closing to shortages in shipping containers. The result was a severe recession with the worst unemployment since the Great Depression.

Now, a few years later, we have fully recovered.

Here’s a graph from FRED showing our unemployment and inflation rates since 1990 [technical note: I’m using the urban CPI; there are a few other inflation measures you could use instead, but they look much the same]:

Inflation fluctuates pretty quickly, while unemployment moves much more slowly.

There are a lot of things we can learn from this graph:

  1. Before COVID, we had pretty low inflation; from 1990 to 2019, inflation averaged about 2.4%, just over the Fed’s 2% target.
  2. Before COVID, we had moderate to high unemployment; it rarely went below 5%, and for several years after the 2008 crash it was over 7%—which is why we called it the Great Recession.
  3. The only times we actually had negative inflation—deflation—were during recessions, and coincided with high unemployment; so, no, we really don’t want prices to come down.
  4. During COVID, we had a massive spike in unemployment up to almost 15%, but then it came back down much more rapidly than it had in the Great Recession.
  5. After COVID, there was a surge in inflation, peaking at almost 10%.
  6. That inflation surge was short-lived; by the end of 2022 inflation was back down to 4%.
  7. Unemployment now stands at 3.8% while inflation is at 2.7%.

What I really want to emphasize right now is point 7, so let me repeat it:

Unemployment now stands at 3.8% while inflation is at 2.7%.

Yes, technically, 2.7% is above our inflation target. But honestly, I’m not sure it should be. I don’t see any particular reason to think that 2% is optimal, and based on what we’ve learned from the Great Recession, I actually think 3% or even 4% would be perfectly reasonable inflation targets. No, we don’t want to be going into double-digits (and we certainly don’t want true hyperinflation); but 4% inflation really isn’t a disaster, and we should stop treating it like it is.

2.7% inflation is actually pretty close to the 2.4% inflation we’d been averaging from 1990 to 2019. So I think it’s fair to say that inflation is back to normal.

But the really wild thing is that unemployment isn’t back to normal: It’s much better than that.

To get some more perspective on this, let’s extend our graph backward all the way to 1950:

Inflation has been much higher than it is now. In the late 1970s, it was consistently as high as it got during the post-COVID surge. But it has never been substantially lower than it is now; a little above the 2% target really seems to be what stable, normal inflation looks like in the United States.

On the other hand, unemployment is almost never this low. It was for a few years in the early 1950s and the late 1960s; but otherwise, it has always been higher—and sometimes much higher. It did not dip below 5% for the entire period from 1971 to 1994.

In our intro macroeconomics courses, they hammer the Phillips Curve into us: it supposedly says that unemployment is inversely related to inflation, so that it’s impossible to have both low inflation and low unemployment.

But we’re looking at it, right now. It’s here, right in front of us. What wasn’t supposed to be possible has now been achieved. E pur si muove.

There was supposed to be this terrible trade-off between inflation and unemployment, leaving our government with the stark dilemma of either letting prices surge or letting millions remain out of work. I had always been on the “inflation” side: I thought that rising prices were far less of a problem than people out of work.

But we just learned that the entire premise was wrong.

You can have both. You don’t have to choose.

Right here, right now, we have both. All we need to do is keep doing whatever we’re doing.

One response might be: what if we can’t? What if this is unsustainable? (Then again, conservatives never seemed terribly concerned about sustainability before….)

It’s worth considering. One thing that doesn’t look so great now is the federal deficit. It got extremely high during COVID, and it’s still pretty high now. But as a proportion of GDP, it isn’t anywhere near as high as it was during WW2, and we certainly made it through that all right:

So, yeah, we should probably see if we can bring the budget back to balanced—probably by raising taxes. But this isn’t an urgent problem. We have time to sort it out. 15% unemployment was an urgent problem—and we fixed it.

In fact in some ways the economy is even doing better now than it looks. Unemployment for Black people has never been this low, since we’ve been keeping track of it:

Black people had basically learned to live with 8% or 9% unemployment as if it were normal; but now, for the first time ever—ever—their unemployment rate is down to only 5%.

This isn’t because people are dropping out of the labor force. Broad unemployment, which includes people marginally attached to the labor force, people employed part-time not by choice, and people who gave up looking for work, is also at historic lows, despite surging to almost 23% during COVID:

In fact, overall employment among people 25-54 years old (considered “prime age”—old enough to not be students, young enough to not be retired) is nearly the highest it has ever been, and radically higher than it was before the 1980s (because women entered the workforce):

So this is not an illusion: More Americans really are working now. And employment has become more inclusive of women and minorities.

I really don’t understand why President Biden isn’t more popular. Biden inherited the worst unemployment since the Great Depression, and turned it around into an economic situation so good that most economists thought it was impossible. A 39% approval rating does not seem consistent with that kind of staggering economic improvement.

And yes, there are a lot of other factors involved aside from the President; but for once I think he really does deserve a lot of the credit here. Programs he enacted to respond to COVID brought us back to work quicker than many thought possible. Then, the Inflation Reduction Act made historic progress at fighting climate change—and also, lo and behold, reduced inflation.

He’s not a particularly charismatic figure. He is getting pretty old for this job (or any job, really). But Biden’s economic policy has been amazing, and he deserves more credit for it.

How do we stop overspending on healthcare?

Dec 10 JDN 2460290

I don’t think most Americans realize just how much more the US spends on healthcare than other countries. This is true not simply in absolute terms—of course it is, the US is rich and huge—but in relative terms: As a portion of GDP, our healthcare spending is a major outlier.

Here’s a graph from Healthsystemtracker.org that illustrates it quite nicely: Almost all other First World countries share a simple linear relationship between their per-capita GDP and their per-capita healthcare spending. But one of these things is not like the other ones….

The outlier in the other direction is Ireland, but that’s because their GDP is wildly inflated by Leprechaun Economics. (Notice that it looks like Ireland is by far the richest country in the sample! This is clearly not the case in reality.) With a corrected estimate of their true economic output, they are also quite close to the line.

Since US GDP per capita ($70,181) is in between that of Denmark ($64,898) and Norway ($80,496), both of which have very good healthcare systems (#ScandinaviaIsBetter), we would expect that US spending on healthcare would similarly be in between. But while Denmark spends $6,384 per person per year on healthcare and Norway spends $7,065 per person per year, the US spends $12,914.

That is, the US spends nearly twice as much as it should on healthcare.

The absolute difference between what we should spend and what we actually spend is nearly $6,000 per person per year. Multiply that out by the 330 million people in the US, and…

The US overspends on healthcare by nearly $2 trillion per year.
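Here is the back-of-the-envelope version of that calculation, using the per-capita figures above. The linear interpolation between Denmark and Norway is my own stand-in for the trend line in the chart, and it lands a little above the rounded numbers in the text:

```python
# Back-of-the-envelope check on the overspending figure, using the per-capita
# numbers quoted above. The linear interpolation between Denmark and Norway is
# my own stand-in for the trend line in the Health System Tracker chart.
gdp = {"Denmark": 64_898, "Norway": 80_496, "US": 70_181}       # GDP per capita
health = {"Denmark": 6_384, "Norway": 7_065, "US": 12_914}      # health spending per capita

# Where the US "should" fall if it sat on the Denmark-Norway line:
frac = (gdp["US"] - gdp["Denmark"]) / (gdp["Norway"] - gdp["Denmark"])
expected_us = health["Denmark"] + frac * (health["Norway"] - health["Denmark"])

gap = health["US"] - expected_us                 # roughly $6,000 per person per year
total = gap * 330_000_000                        # US population, rounded
print(f"expected ~${expected_us:,.0f}, gap ~${gap:,.0f}, total ~${total/1e12:.1f} trillion/year")
# expected ~$6,615, gap ~$6,299, total ~$2.1 trillion/year
```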

This might be worth it, if health in the US were dramatically better than health in other countries. (In that case I’d be saying that other countries spend too little.) But plainly it is not.

Probably the simplest and most comparable measure of health across countries is life expectancy. US life expectancy is 76 years, and has increased over time. But if you look at the list of countries by life expectancy, the US is not even in the top 50. Our life expectancy looks more like middle-income countries such as Algeria, Brazil, and China than it does like Norway or Sweden, who should be our economic peers.

There are of course many things that factor into life expectancy aside from healthcare: poverty and homicide are both much worse in the US than in Scandinavia. But then again, poverty is much worse in Algeria, and homicide is much worse in Brazil, and yet they somehow manage to nearly match the US in life expectancy (actually exceeding it in some recent years).

The US somehow manages to spend more on healthcare than everyone else, while getting outcomes that are worse than any country of comparable wealth—and even some that are far poorer.

This is largely why there is a so-called “entitlements crisis” (as many a libertarian think tank is fond of calling it). Since libertarians want to cut Social Security most of all, they like to lump it in with Medicare and Medicaid as an “entitlement” in “crisis”; but in fact we only need a few minor adjustments to the tax code to make sure that Social Security remains solvent for decades to come. It’s healthcare spending that’s out of control.

Here, take a look.

This is the ratio of Social Security spending to GDP from 1966 to the present. Notice how it has been mostly flat since the 1980s, other than a slight increase in the Great Recession.

This is the ratio of Medicare spending to GDP over the same period. Even ignoring the first few years while it was ramping up, it rose from about 0.6% in the 1970s to almost 4% in 2020, and only started to decline in the last few years (and it’s probably too early to say whether that will continue).

Medicaid has a similar pattern: It rose steadily from 0.2% in 1966 to over 3% today—and actually doesn’t even show any signs of leveling off.

If you look at Medicare and Medicaid together, they surged from just over 1% of GDP in 1970 to nearly 7% today:

Put another way: in 1982, Social Security was 4.8% of GDP while Medicare and Medicaid combined were 2.4% of GDP. Today, Social Security is 4.9% of GDP while Medicare and Medicaid are 6.8% of GDP.

Social Security spending barely changed at all; healthcare spending more than doubled. If we reduced our Medicare and Medicaid spending as a portion of GDP back to what it was in 1982, we would save 4.4% of GDP—that is, 4.4% of over $25 trillion per year, so $1.1 trillion per year.
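For anyone who wants to check that arithmetic, here it is spelled out (all figures are the ones quoted above):

```python
# Rough check on the 1982-vs-today comparison above (shares of GDP, figures from the post).
shares_1982 = {"Social Security": 4.8, "Medicare+Medicaid": 2.4}
shares_now  = {"Social Security": 4.9, "Medicare+Medicaid": 6.8}
gdp_now = 25e12  # "over $25 trillion per year"

for program in shares_1982:
    delta = shares_now[program] - shares_1982[program]
    print(f"{program}: {delta:+.1f} points of GDP")
# Social Security: +0.1 points of GDP
# Medicare+Medicaid: +4.4 points of GDP

savings = (shares_now["Medicare+Medicaid"] - shares_1982["Medicare+Medicaid"]) / 100 * gdp_now
print(f"Rolling healthcare back to its 1982 share would free up ~${savings/1e12:.1f} trillion/year")
# ~$1.1 trillion/year
```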

Of course, we can’t simply do that; if we cut benefits that much, millions of people would suddenly lose access to healthcare they need.

The problem is not that we are spending frivolously, wasting the money on treatments no one needs. On the contrary, both Medicare and Medicaid carefully vet what medical services they are willing to cover, and if anything probably deny services more often than they should.

No, the problem runs deeper than this.

Healthcare is too expensive in the United States.

We simply pay more for just about everything, and especially for specialist doctors and hospitals.

In most other countries, doctors are paid like any other white-collar profession. They are well off, comfortable, certainly, but few of them are truly rich. But in the US, we think of doctors as an upper-class profession, and expect them to be rich.

Median doctor salaries are $98,000 in France and $138,000 in the UK—but a whopping $316,000 in the US. Germany and Canada are somewhere in between, at $183,000 and $195,000 respectively.

Nurses, on the other hand, are paid only a little more in the US than in Western Europe. This means that the pay difference between doctors and nurses is much higher in the US than most other countries.

US prices on brand-name medication are frankly absurd. Our generic medications are typically cheaper than other countries, but our brand name pills often cost twice as much. I noticed this immediately on moving to the UK: I had always been getting generics before, because the brand name pills cost ten times as much, but when I moved here, suddenly I started getting all brand-name medications (at no cost to me), because the NHS was willing to buy the actual brand name products, and didn’t have to pay through the nose to do so.

But the really staggering differences are in hospitals.

Let’s compare the prices of a few inpatient procedures between the US and Switzerland. Switzerland, you should note, is a very rich country that spends a lot on healthcare and has nearly the world’s highest life expectancy. So it’s not like they are skimping on care. (Nor is it that prices in general are lower in Switzerland; on the contrary, they are generally higher.)

A coronary bypass in Switzerland costs about $33,000. In the US, it costs $76,000.

A spinal fusion in Switzerland costs about $21,000. In the US? $52,000.

Angioplasty in Switzerland: $9,000. In the US? $32,000.

Hip replacement: Switzerland? $16,000. The US? $28,000.

Knee replacement: Switzerland? $19,000. The US? $27,000.

Cholecystectomy: Switzerland? $8,000. The US? $16,000.

Appendectomy: Switzerland? $7,000. The US? $13,000.

Caesarian section: Switzerland? $8,000. The US? $11,000.

Hospital prices are even lower in Germany and Spain, whose life expectancies are not as high as Switzerland’s—but are still higher than the US’s.

These prices are so much lower that in fact if you were considering getting surgery for a chronic condition in the US, don’t. Buy plane tickets to Europe and get the procedure done there. Spend an extra few thousand dollars on a nice European vacation and you’d still end up saving money. (Obviously if you need it urgently you have no choice but to use your nearest hospital.) I know that if I ever need a knee replacement (which, frankly, is likely, given my height), I’m gonna go to Spain and thereby save $22,000 relative to what it would cost in the US. That’s a difference of a car.

Combine this with the fact that the US is the only First World country without universal healthcare, and maybe you can see why we’re also the only country in the world where people are afraid to call an ambulance because they don’t think they can afford it. We are also the only country in the world with a medical debt crisis.

Where is all this extra money going?

Well, a lot of it goes to those doctors who are paid three times as much as in France. That, at least, seems defensible: If we want the best doctors in the world maybe we need to pay for them. (Then again, do we have the best doctors in the world? If so, why is our life expectancy so mediocre?)

But a significant portion is going to shareholders.

You probably already knew that there are pharmaceutical companies that rake in huge profits on those overpriced brand-name medications. The top five US pharma companies took in net earnings of nearly $82 billion last year. Pharmaceutical companies typically take in much higher profit margins than other companies: a typical corporation makes about 8% of its revenue in profit, while pharmaceutical companies average nearly 14%.

But you may not have realized that a surprisingly large proportion of hospitals are for-profit businesses—even though they make most of their revenue from Medicare and Medicaid.

I was surprised to find that the US is not unusual in that; in fact, for-profit hospitals exist in dozens of countries, and the fraction of US hospital capacity that is for-profit isn’t even particularly high by world standards.

What is especially large is the profits of US hospitals. Seven US healthcare corporations each posted net incomes over $1 billion in 2021.

Even nonprofit US hospitals are tremendously profitable—as oxymoronic as that may sound. In fact, mean operating profit is higher among nonprofit hospitals in the US than among for-profit hospitals. So even the hospitals that aren’t supposed to be run for profit… pretty much still are. They get tax deductions as if they were charities—but they really don’t act like charities.

They are basically nonprofit in name only.

So fixing this will not be as simple as making all hospitals nonprofit. We must also restructure the institutions so that nonprofit hospitals are genuinely nonprofit, and no longer nonprofit in name only. It’s normal for a nonprofit to have a little bit of profit or loss—nobody can make everything always balance perfectly—but these hospitals have been raking in huge profits and keeping it all in cash instead of using it to reduce prices or improve services. In the study I linked above, those 2,219 “nonprofit” hospitals took in operating profits averaging $43 million each—for a total of $95 billion.

Between pharmaceutical companies and hospitals, that’s a total of over $170 billion per year just in profit. (That’s more than we spend on food stamps, even after the surge due to COVID.) This is pure grift. It must be stopped.

But that still doesn’t explain why we’re spending $2 trillion more than we should! So after all, I must leave you with a question:

What is America doing wrong? Why is our healthcare so expensive?

Israel, Palestine, and the World Bank’s disappointing priorities

Nov 12 JDN 2460261

Israel and Palestine are once again at war. (There are a disturbing number of different years in which one could have written that sentence.) The BBC has a really nice section of their website dedicated to reporting on various facets of the war. The New York Times also has a section on it, but it seems a little tilted in favor of Israel.

This time, it started with a brutal attack by Hamas, and now Israel has—as usual—overreacted and retaliated with a level of force that is sure to feed the ongoing cycle of extremism. All across social media I see people wanting me to take one side or the other, often even making good points: “Hamas slaughters innocents” and “Israel is a de facto apartheid state” are indeed both important points I agree with. But if you really want to know my ultimate opinion, it’s that this whole thing is fundamentally evil and stupid because human beings are suffering and dying over nothing but lies. All religions are false, most of them are evil, and we need to stop killing each other over them.

Anti-Semitism and Islamophobia are both morally wrong insofar as they involve harming, abusing or discriminating against actual human beings. Let people dress however they want, celebrate whatever holidays they want, read whatever books they want. Even if their beliefs are obviously wrong, don’t hurt them if they aren’t hurting anyone else. But both Judaism and Islam—and Christianity, and more besides—are fundamentally false, wrong, evil, stupid, and detrimental to the advancement of humanity.

That’s the thing that so much of the public conversation is too embarrassed to say; we’re supposed to pretend that they aren’t fighting over beliefs that are obviously false. We’re supposed to respect each particular flavor of murderous nonsense, and always find some other cause to explain the conflict. It’s over culture (what culture?); it’s over territory (whose territory?); it’s a retaliation for past conflict (over what?). We’re not supposed to say out loud that all of this violence ultimately hinges upon people believing in nonsense. Even if the conflict wouldn’t disappear overnight if everyone suddenly stopped believing in God—and are we sure it wouldn’t? Let’s try it—it clearly could never have begun, if everyone had started with rational beliefs in the first place.

But I don’t really want to talk about that right now. I’ve said enough. Instead I want to talk about something a little more specific, something less ideological and more symptomatic of systemic structural failures. Something you might have missed amidst the chaos.

The World Bank recently released a report on the situation focused heavily on the looming threat of… higher oil prices. (And of course there has been breathless reporting from various outlets regarding a headline figure of $150 per barrel which is explicitly stated in the report as an unlikely “worst-case scenario”.)

There are two very big reasons why I found this dismaying.


The first, of course, is that there are obviously far more important concerns here than commodity prices. Yes, I know that this report is part of an ongoing series of Commodity Markets Outlook reports, but the fact that this is the sort of thing that the World Bank has ongoing reports about is also saying something important about the World Bank’s priorities. They release monthly commodity forecasts and full Commodity Markets Outlook reports that come out twice a year, unlike the World Development Reports that only come out once a year. The World Bank doesn’t release a twice-annual Conflict Report or a twice-annual Food Security Report. (Even the FAO, which publishes an annual State of Food Security and Nutrition in the World report, also publishes a State of Agricultural Markets report just as often.)

The second is that, when reading the report, one can clearly tell that whoever wrote it thinks that rising oil and gas prices are inherently bad. They keep talking about all of these negative consequences that higher oil prices could have, and seem utterly unaware of the really enormous upside here: We may finally get a chance to do something about climate change.

You see, one of the most basic reasons why we haven’t been able to fix climate change is that oil is too damn cheap. Its market price has consistently failed to reflect its actual costs. Part of that is due to oil subsidies around the world, which have held the price lower than it would be even in a free market; but most of it is due to the simple fact that pollution and carbon emissions don’t cost money for the people who produce them, even though they do cost the world.

Fortunately, wind and solar power are also getting very cheap, and are now at the point where they can outcompete oil and gas for electrical power generation. But that’s not enough. We need to remove oil and gas from everything: heating, manufacturing, agriculture, transportation. And that is far easier to do if oil and gas suddenly become more expensive and so people are forced to stop using them.

Now, granted, many of the downsides in that report are genuine: Because oil and gas are such vital inputs to so many economic processes, it really is true that making them more expensive will make lots of other things more expensive, and in particular could increase food insecurity by making farming more expensive. But if that’s what we’re concerned about, we should be focusing on that: What policies can we use to make sure that food remains available to all? And one of the best things we could be doing toward that goal is finding ways to make agriculture less dependent on oil.

By focusing on oil prices instead, the World Bank is encouraging the world to double down on the very oil subsidies that are holding climate policy back. Even food subsidies—which certainly have their own problems—would be an obviously better solution, and yet they are barely mentioned.

In fact, if you actually read the report, it shows that fears of food insecurity seem unfounded: Food prices are actually declining right now. Grain prices in particular seem to be falling back down remarkably quickly after their initial surge when Russia invaded Ukraine. Of course that could change, but it’s a really weird attitude toward the world to see something good and respond with, “Yes, but it might change!” This is how people with anxiety disorders (and I would know) think—which makes it seem as though much of the economic policy community suffers from some kind of collective equivalent of an anxiety disorder.

There also seems to be a collective sense that higher prices are always bad. This is hardly just a World Bank phenomenon; on the contrary, it seems to pervade all of economic thought, including the most esteemed economists, the most powerful policymakers, and even most of the general population of citizens. (The one major exception seems to be housing, where the sense is that higher prices are always good—even when the world is in a chronic global housing shortage that leaves millions homeless.) But prices can be too low or too high. And oil prices are clearly, definitely too low. Prices should reflect the real cost of production—all the real costs of production. It should cost money to pollute other people’s air.

In fact I think the whole report is largely a nothingburger: Oil prices haven’t even risen all that much so far—we’re still at $80 per barrel last I checked—and the one thing that is true about the so-called Efficient Market Hypothesis is that forecasting future prices is a fool’s errand. But it’s still deeply unsettling to see such intelligent, learned experts so clearly panicking over the mere possibility that there could be a price change which would so obviously be good for the long-term future of humanity.

There is plenty more worth saying about the Israel-Palestine conflict, and in particular what sort of constructive policy solutions we might be able to find that would actually result in any kind of long-term peace. I’m no expert on peace negotiations, and frankly I admit that if I were ever personally involved in such a negotiation, it would probably be a liability that I’d be tempted to tell both sides that they are idiots and fanatics. (The headline the next morning: “Israeli and Palestinian Delegates Agree on One Thing: They Hate the US Ambassador”.)

The World Bank could have plenty to offer here, yet so far they’ve been too focused on commodity prices. Their thinking is a little too much ‘bank’ and not enough ‘world’.

It is a bit ironic, though also vaguely encouraging, that there are those within the World Bank itself who recognize this problem: Just a few weeks ago Ajay Banga gave a speech to the World Bank about “a world free of poverty on a livable planet”.

Yes. Those sound like the right priorities. Now maybe you could figure out how to turn that lip service into actual policy.

The unsung success of Bidenomics

Aug 13 JDN 2460170

I’m glad to see that the Biden administration is finally talking about “Bidenomics”. We tend to give too much credit or blame for economic performance to the President—particularly relative to Congress—but there are many important ways in which a Presidential administration can shift the priorities of public policy in particular directions, and Biden has clearly done that.

The economic benefits for people of color seem to have been particularly large. The unemployment gap between White and Black workers in the US is now only 2.7 percentage points, while just a few years ago it was over 4pp and at the worst of the Great Recession it surpassed 7pp. During lockdown, unemployment for Black people hit nearly 17%; it is now less than 6%.

The (misnamed, but we’re stuck with it) Inflation Reduction Act in particular has been an utter triumph.

In the past year, real private investment in manufacturing structures (essentially, new factories) has risen from $56 billion to $87 billion—an over 50% increase, which puts it the highest it has been since the turn of the century. The Inflation Reduction Act appears to be largely responsible for this change.

Not many people seem to know this, but the US has also been on the right track with regard to carbon emissions: Per-capita carbon emissions in the US have been trending downward since about 2000, and are now lower than they were in the 1950s. The Inflation Reduction Act now looks poised to double down on that progress, as it has been forecast to reduce our emissions all the way down to 40% below their early-2000s peak.

Somehow, this success doesn’t seem to be getting across. The majority of Americans incorrectly believe that we are in a downturn. Biden’s approval rating is still only 40%, barely higher than Trump’s was. When it comes to political beliefs, most American voters appear to be utterly impervious to facts.

Most Americans do correctly believe that inflation is still a bit high (though many seem to think it’s higher than it is); but it’s weird to combine that with the belief that we’re in a downturn, seeing as inflation is normally high when the economy is growing rapidly, and gets too low when we are in a recession. This seems to be the halo effect, rather than any genuine understanding of macroeconomics: downturns are bad and inflation is bad, so they must go together—when in fact, quite the opposite is the case.

People generally feel better about their own prospects than they do about the economy as a whole:

Sixty-four percent of Americans say the economy is worse off compared to 2020, while seventy-three percent of Americans say the economy is worse off compared to five years ago. About two in five of Americans say they feel worse off from five years ago generally (38%) and a similar number say they feel worse off compared to 2020 (37%).

(Did you really have to write out ‘seventy-three percent’? I hate that convention. 73% is so much clearer and quicker to read.)

I don’t know what the Biden administration should do about this. Trying to sell themselves harder might backfire. (And I’m pretty much the last person in the world you should ask for advice about selling yourself.) But they’ve been doing really great work for the US economy… and people haven’t noticed. Thousands of factories are being built, millions of people are getting jobs, and the collective response has been… “meh”.

What happens when a bank fails

Mar 19 JDN 2460023

As of March 9, Silicon Valley Bank (SVB) has failed and officially been put into receivership under the FDIC. A bank that held $209 billion in assets has suddenly become insolvent.

This is the second-largest bank failure in US history, after Washington Mutual (WaMu) in 2008. In fact it will probably have more serious consequences than WaMu, for two reasons:

1. WaMu collapsed as part of the Great Recession, so there were already a lot of other things going on and a lot of policy responses already in place.

2. WaMu was mostly a conventional commercial bank that held deposits and loans for consumers, so its deposits were largely protected by the FDIC, and thus its bankruptcy didn’t cause contagion that spread out to the rest of the system. (Other banks—shadow banks—did during the crash, but not so much WaMu.) SVB mostly served tech startups, so a whopping 89% of its deposits were not protected by FDIC insurance.

You’ve likely heard of many of the companies that had accounts at SVB: Roku, Roblox, Vimeo, even Vox. Stocks of the US financial industry lost $100 billion in value in two days.

The good news is that this will not be catastrophic. It probably won’t even trigger a recession (though the high interest rates we’ve been having lately potentially could drive us over that edge). Because this is commercial banking, it’s done out in the open, with transparency and reasonably good regulation. The FDIC knows what they are doing, and even though they aren’t covering all those deposits directly, they intend to find a buyer for the bank who will, and odds are good that they’ll be able to cover at least 80% of the lost funds.

In fact, while this one is exceptionally large, bank failures are not really all that uncommon. There have been nearly 100 failures of banks with assets over $1 billion in the US alone just since the 1970s. The FDIC exists to handle bank failures, and generally does the job well.

Then again, it’s worth asking whether we should really have a banking system in which failures are so routine.

The reason banks fail is kind of a dark open secret: They don’t actually have enough money to cover their deposits.

Banks loan away most of their cash, and rely upon the fact that most of their depositors will not want to withdraw their money at the same time. They are required to keep a certain ratio in reserves, but it’s usually fairly small, like 10%. This is called fractional-reserve banking.

As long as less than 10% of deposits get withdrawn at any given time, this works. But if a bunch of depositors suddenly decide to take out their money, the bank may not have enough to cover it all, and suddenly become insolvent.

In fact, the fear that a bank might become insolvent can actually cause it to become insolvent, in a self-fulfilling prophecy. Once depositors get word that the bank is about to fail, they rush to be the first to get their money out before it disappears. This is a bank run, and it’s basically what happened to SVB.

The FDIC was originally created to prevent or mitigate bank runs. Not only did they provide insurance that reduced the damage in the event of a bank failure; by assuring depositors that their money would be recovered even if the bank failed, they also reduced the chances of a bank run becoming a self-fulfilling prophecy.


Indeed, SVB is the exception that proves the rule, as they failed largely because their deposits were mainly not FDIC insured.

Fractional-reserve banking effectively allows banks to create money, in the form of credit that they offer to borrowers. That credit gets deposited in other banks, which then go on to loan it out to still others; the result is that there is more money in the system than was ever actually printed by the central bank.

In most economies this commercial bank money is a far larger quantity than the central bank money actually printed by the central bank—often nearly 10 to 1. This ratio is called the money multiplier.

Indeed, it’s not a coincidence that the reserve ratio is 10% and the multiplier is 10; the theoretical maximum multiplier is always the inverse of the reserve ratio, so if you require reserves of 10%, the highest multiplier you can get is 10. Had we required 20% reserves, the multiplier would drop to 5.
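Here is where that inverse relationship comes from, as a quick sketch (stylized assumptions: every loan gets redeposited somewhere in the banking system, and banks hold exactly the required reserve and nothing more):

```python
# Money creation as a geometric series (stylized: every loan is redeposited,
# and banks hold exactly the required reserve and nothing more).

def total_money_created(initial_deposit, reserve_ratio, rounds=1000):
    total, deposit = 0.0, float(initial_deposit)
    for _ in range(rounds):
        total += deposit                  # each deposit adds to the money supply
        deposit *= (1 - reserve_ratio)    # the rest is loaned out and redeposited
    return total

for r in (0.10, 0.20, 1.00):
    created = total_money_created(100, r)
    print(f"reserve ratio {r:.0%}: $100 of central bank money -> ~${created:,.0f} "
          f"total (multiplier ~{created / 100:.1f}, theoretical max {1 / r:.1f})")
```

Each round of lending adds a bit less money than the last, and the total converges to the initial deposit divided by the reserve ratio—hence a maximum multiplier of 1 over the reserve ratio.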

Most countries have fractional-reserve banking, and have for centuries; but it’s actually a pretty weird system if you think about it.

Back when we were on the gold standard, fractional-reserve banking was a way of cheating, getting our money supply to be larger than the supply of gold would actually allow.

But now that we are on a pure fiat money system, it’s worth asking what fractional-reserve banking actually accomplishes. If we need more money, the central bank could just print more. Why do we delegate that task to commercial banks?

David Friedman, writing for the Cato Institute, had some especially harsh words on this, but honestly I find them hard to disagree with:

Before leaving the subject of fractional reserve systems, I should mention one particularly bizarre variant — a fractional reserve system based on fiat money. I call it bizarre because the essential function of a fractional reserve system is to reduce the resource cost of producing money, by allowing an ounce of reserves to replace, say, five ounces of currency. The resource cost of producing fiat money is zero; more precisely, it costs no more to print a five-dollar bill than a one-dollar bill, so the cost of having a larger number of dollars in circulation is zero. The cost of having more bills in circulation is not zero but small. A fractional reserve system based on fiat money thus economizes on the cost of producing something that costs nothing to produce; it adds the disadvantages of a fractional reserve system to the disadvantages of a fiat system without adding any corresponding advantages. It makes sense only as a discreet way of transferring some of the income that the government receives from producing money to the banking system, and is worth mentioning at all only because it is the system presently in use in this country.

Our banking system evolved gradually over time, and seems to have held onto many features that made more sense in an earlier era. Back when we had arbitrarily tied our central bank money supply to gold, letting banks create a larger effective money supply may have been a reasonable workaround. But today, it just seems to be handing the reins over to private corporations, giving them more profits while forcing the rest of society to bear more risk.

The obvious alternative is full-reserve banking, where banks are simply required to hold 100% of their deposits in reserve and the multiplier drops to 1. This idea has been supported by a number of quite prominent economists, including Milton Friedman.

It’s not just a right-wing idea: The left-wing organization Positive Money is dedicated to advocating for a full-reserve banking system in the UK and EU. (The ECB VP’s criticism of the proposal is utterly baffling to me: it “would not create enough funding for investment and growth.” Um, you do know you can print more money, right? Hm, come to think of it, maybe the ECB doesn’t know that, because they think inflation is literally Hitler. There are legitimate criticisms to be had of Positive Money’s proposal, but “There won’t be enough money under this fiat money system” is a really weird take.)

There’s a relatively simple way to gradually transition from our current system to a full-reserve system: Simply increase the reserve ratio over time, and print more central bank money to keep the total money supply constant. If we find that it seems to be causing more problems than it solves, we could stop or reverse the trend.
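As a back-of-the-envelope illustration of what that transition would require (using the idealized multiplier from above and a made-up $20 trillion money supply, not actual Fed figures):

```python
# Back-of-the-envelope transition schedule to full reserves, using the
# idealized multiplier (1 / reserve ratio) and a made-up $20 trillion money supply.

target_money_supply = 20_000_000_000_000  # hypothetical total, not an actual Fed figure

for reserve_ratio in (0.10, 0.25, 0.50, 0.75, 1.00):
    # To keep the total constant, central bank money must equal total * reserve ratio.
    base_needed = target_money_supply * reserve_ratio
    print(f"reserve ratio {reserve_ratio:.0%}: central bank money needed "
          f"~${base_needed / 1e12:.0f} trillion (multiplier {1 / reserve_ratio:.1f})")
```

At a 100% reserve ratio the central bank has printed the entire money supply itself, which is exactly the full-reserve endpoint.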

Krugman has pointed out that this wouldn’t really fix the problems in the banking system, which actually seem to be much worse in the shadow banking sector than in conventional commercial banking. This is clearly right, but it isn’t really an argument against trying to improve conventional banking. I guess if stricter regulations on conventional banking push more money into the shadow banking system, that’s bad; but really that just means we should be imposing stricter regulations on the shadow banking system first (or simultaneously).

We don’t need to accept bank runs as a routine part of the financial system. There are other ways of doing things.

Is the cure for inflation worse than the disease?

Nov 13 JDN 2459897

A lot of people seem really upset about inflation. I’ve previously discussed why this is a bit weird; inflation really just isn’t that bad. In fact, I am increasingly concerned that the usual methods for fixing inflation are considerably worse than inflation itself.

To be clear, I’m not talking about hyperinflation: if you are getting triple-digit inflation or more, you are clearly printing too much money and you need to stop. And there are places in the world where this happens.

But what about just regular, ordinary inflation, even when it’s fairly high? Prices rising at 8% or 9% or even 11% per year? What catastrophe befalls our society when this happens?

Okay, sure, if we could snap our fingers and make prices all stable without cost, that would be worth doing. But we can’t. All of our mechanisms for reducing inflation come with costs—and often very high costs.

The chief mechanism by which inflation is currently controlled is open-market operations by central banks such as the Federal Reserve, the Bank of England, and the European Central Bank. These central banks try to reduce inflation by selling bonds, which lowers the price of bonds and reduces capital available to banks, and thereby increases interest rates. This also effectively removes money from the economy, as banks are using that money to buy bonds instead of lending it out. (It is chiefly in this odd indirect sense that the central bank manages the “money supply”.)
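The bond-price/interest-rate link is easiest to see with the simplest possible case, a zero-coupon bond (the numbers here are made up for illustration): the yield is just the implied return from buying at today’s price and receiving the face value at maturity, so pushing the price down pushes the yield up.

```python
# The simplest case of the bond price / interest rate link: a zero-coupon bond.
# Yield is the implied annual return from buying at `price` and getting `face` back
# at maturity, so a lower price means a higher yield.

def zero_coupon_yield(price, face=100.0, years=1.0):
    return (face / price) ** (1 / years) - 1

for price in (98.0, 96.0, 94.0):   # heavy selling pushes the price down...
    print(f"price ${price:.2f} -> implied yield {zero_coupon_yield(price):.2%}")  # ...and the yield up
```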

But how does this actually reduce inflation? It’s remarkably indirect. The higher interest rates discourage people from buying houses and companies from hiring workers; that reduces economic growth—or even triggers a recession—and the resulting slowdown is what is supposed to bring down prices. There’s actually a lot we still don’t know about how this works or how long it should be expected to take. What we do know is that the pain hits quickly and the benefits arrive only months or even years later.

As Krugman has rightfully pointed out, the worst pain of the 1970s was not the double-digit inflation; it was the recessions that Paul Volcker’s economic policy triggered in response to that inflation. The inflation wasn’t exactly a good thing; but for most people, the cure was much worse than the disease.

Most laypeople seem to think that prices somehow go up without wages going up, but that simply isn’t how it works. Prices and wages rise at close to the same rate in most countries most of the time. In fact, inflation is often driven chiefly by rising wages rather than the other way around. There are often lags between when the inflation hits and when people see their wages rise; but these lags can actually be in either direction—inflation first or wages first—and for moderate amounts of inflation they are clearly less harmful than the high rates of unemployment that we would get if we fought inflation more aggressively with monetary policy.

Economists are also notoriously vague about exactly how they expect the central bank to reduce inflation. They use complex jargon or broad euphemisms. But when they do actually come out and say they want to reduce wages, it tends to outrage people. Well, that’s one of three main ways that interest rates actually reduce inflation: They reduce wages, they cause unemployment, or they stop people from buying houses. That’s pretty much all that central banks can do.

There may be other ways to reduce inflation, like windfall profits taxes, antitrust action, or even price controls. The first two are basically no-brainers; we should always be taxing windfall profits (if profits really are due to a windfall outside a corporation’s control, taxing them doesn’t distort incentives), and we should absolutely be increasing antitrust action (why did we reduce it in the first place?). Price controls are riskier—they really do create shortages—but then again, is that really worse than lower wages or unemployment? Because the usual strategy involves lower wages and unemployment.

It’s a little ironic: The people who are usually all about laissez-faire are the ones who panic about inflation and want the government to take drastic action; meanwhile, I’m usually in favor of government intervention, but when it comes to moderate inflation, I think maybe we should just let it be.