Small deviations can have large consequences.

Jun 26 JDN 2459757

A common rejoinder that behavioral economists get from neoclassical economists is that most people are mostly rational most of the time, so what’s the big deal? If humans are 90% rational, why worry so much about the other 10%?

Well, it turns out that small deviations from rationality can have surprisingly large consequences. Let’s consider an example.

Suppose we have a market for some asset. Without even trying to veil my ulterior motive, let’s make that asset Bitcoin. Its fundamental value is of course $0; it’s not backed by anything (not even taxes or a central bank), it has no particular uses that aren’t already better served by existing methods, and it’s not even scalable.

Now, suppose that 99% of the population rationally recognizes that the fundamental value of the asset is indeed $0. But 1% of the population doesn’t; they irrationally believe that the asset is worth $20,000. What will the price of that asset be, in equilibrium?

If you assume that the majority will prevail, it should be $0. If you did some kind of weighted average, you’d think maybe its price will be something positive but relatively small, like $200. But is this actually the price it will take on?

Consider someone who currently owns 1 unit of the asset, and recognizes that it is fundamentally worthless. What should they do? Well, if they also know that there are people out there who believe it is worth $20,000, the answer is obvious: They should sell it to those people. Indeed, they should sell it for something quite close to $20,000 if they can.

Now, suppose they don’t already own the asset, but are considering whether or not to buy it. They know it’s worthless, but they also know that there are people who will buy it for close to $20,000. Here’s the kicker: This is a reason for them to buy it at anything meaningfully less than $20,000.

Suppose, for instance, they could buy it for $10,000. Spending $10,000 to buy something you know is worthless seems like a terribly irrational thing to do. But it isn’t irrational, if you also know that somewhere out there is someone who will pay $20,000 for that same asset and you have a reasonable chance of finding that person and selling it to them.

The equilibrium outcome, then, is that the price of the asset will be almost $20,000! Even though 99% of the population recognizes that this asset is worthless, the fact that 1% of people believe it’s worth as much as a car will result in it selling at that price. Thus, even a slight deviation from a perfectly-rational population can result in a market that is radically at odds with reality.
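
To make this concrete, here is a minimal sketch in Python. (The meeting probability q and holding cost c are purely illustrative assumptions of mine, not estimates of anything.) A rational trader who holds the asset finds one of the believers with probability q each period, and pays a cost c per period to keep holding and searching; their reservation value v then solves v = q × 20,000 + (1 − q) × v − c:

    # Hypothetical parameters: q = chance per period of finding a believer
    # who will pay $20,000; c = cost per period of holding and searching.
    def reservation_value(q=0.05, c=10.0, believer_price=20000.0):
        """Iterate the fixed point v = q*believer_price + (1 - q)*v - c."""
        v = 0.0
        for _ in range(5000):  # contraction with factor (1 - q), so this converges
            v = max(0.0, q * believer_price + (1 - q) * v - c)
        return v

    print(f"${reservation_value():,.0f}")  # $19,800, i.e. 20000 - c/q

Even a trader who knows perfectly well that the asset is worthless should be willing to pay nearly the full $20,000 for it; the discount of c/q just reflects how costly the search for a believer is.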

And it gets worse! Suppose that in fact everyone knows that the asset is worthless, but most people think that there is some small portion of the population who believes the asset has value. Then, it will still be priced at that value in equilibrium, as people trade it back and forth searching in vain for the person who really wants it! (This is called the Greater Fool theory.)

That is, the price of an asset in a free market—even in a market where most people are mostly rational most of the time—will in fact be determined by the highest price anyone believes that anyone else thinks it has. And this is true of essentially any asset market—any market where people are buying something, not to use it, but to sell it to someone else.

Of course, beliefs—and particularly beliefs about beliefs—can very easily change, so that equilibrium price could move in any direction basically without warning.

Suddenly, the cycle of bubble and crash, boom and bust, doesn’t seem so surprising, does it? The wonder is that prices ever become stable at all.


Then again, do they? Last I checked, the only prices that were remotely stable were for goods like apples and cars and televisions, goods that are bought and sold to be consumed. (Or national currencies managed by competent central banks, whose entire job involves doing whatever it takes to keep those prices stable.) For pretty much everything else—and certainly any purely financial asset that isn’t a national currency—prices are indeed precisely as wildly unpredictable and utterly irrational as this model would predict.

So much for the Efficient Market Hypothesis? Sadly, I doubt that the people who still believe this nonsense will be convinced.

Multilevel selection: A tale of three tribes

Jun 19 JDN 2459750

There’s something odd about the debate in evolutionary theory about multilevel selection (sometimes called “group selection”). On one side are the mainstream theorists who insist that selection only happens at the individual level (or is it the gene level?); and on the other are devout group-selectionists who insist that group selection is everywhere and the only possible explanation of altruism.

Both of these sides are wrong. Selection does happen at multiple levels, but it’s entirely possible for altruism to emerge without it.

The usual argument by the mainstream is that group selection would require the implausible assumption that groups live and die on the same timescale as individuals. The usual argument by group-selectionists is that there’s no other explanation for why humans are so altruistic. But neither of these things is true.

There is plenty of discussion out there about why group selection isn’t necessary for altruism: Kin selection is probably the clearest example. So I’m going to focus on showing that group selection can work even when groups live and die much slower than individuals.

To do this, I would like to present you a model. It’s a very pared-down, simplified version, but it is nevertheless a valid evolutionary game theory model.

Consider a world where the only kind of interaction is Iterated Prisoner’s Dilemmas. For the uninitiated, an Iterated Prisoner’s Dilemma is as follows.

Time goes on forever. At each point in time, some people are born, and some people die; people have a limited lifespan and some general idea of how long it is, but nobody can predict for sure when they will die. (So far, this isn’t even a model; all of this is literally true.)

In this world, people are randomly matched with others one on one, and they play a game together, where each person can choose either “Cooperate” or “Defect”. They choose in secret and reveal simultaneously. If both choose “Cooperate”, each gets 3 points. If both choose “Defect”, each gets 2 points. If one chooses “Cooperate” and the other chooses “Defect”, the “Cooperate” person gets only 1 point while the “Defect” person gets 4 points.

What are these points? Since this is evolution, let’s call them offspring. An average lifetime score of 4 points means 4 offspring per couple per generation—you get rapid population growth. 1 point means 1 offspring per couple per generation—your genes will gradually die out.

That makes the payoffs follow this table:


         C       D
C      3, 3    1, 4
D      4, 1    2, 2

(Rows are your move, columns are the other player’s; each cell lists your points, then theirs.)

There are two very notable properties of this game; together they seem paradoxical, which is probably why the game has such broad applicability and such enduring popularity.

  1. Everyone, as a group, is always better off if more people choose “Cooperate”.
  2. Each person, as an individual, regardless of what the others do, is always better off choosing “Defect”.
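
Both properties are easy to verify mechanically from the table; here is a quick check in Python, with the payoffs above hard-coded:

    # Payoffs from the table above: (my points, other player's points).
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
              ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

    # Property 2: whatever the other player does, "Defect" pays me more.
    for other in ("C", "D"):
        assert PAYOFF[("D", other)][0] > PAYOFF[("C", other)][0]

    # Property 1: total points fall each time a "C" becomes a "D" (6 > 5 > 4).
    assert sum(PAYOFF[("C", "C")]) > sum(PAYOFF[("C", "D")]) > sum(PAYOFF[("D", "D")])
    print("Both properties hold.")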

Thus, Iterated Prisoner’s Dilemmas are ideal for understanding altruism, as they directly model a conflict between individual self-interest and group welfare. (They didn’t do a good job of explaining it in A Beautiful Mind, but that one line in particular was correct: the Prisoner’s Dilemma is precisely what proves “Adam Smith was wrong.”)

Each person is matched with someone else at random for a few rounds, and then re-matched with someone else; and nobody knows how long they will be with any particular person. (For technical reasons, with these particular payoffs, the chance of going to another round needs to be at least 50%; but that’s not too important for what I have to say here.)

Now, suppose there are three tribes of people, who are related by family ties but also still occasionally intermingle with one another.

In the Hobbes tribe, people always play “Defect”.

In the Rousseau tribe, people always play “Cooperate”.

In the Axelrod tribe, people play “Cooperate” the first time they meet someone, then copy whatever the other person did in the previous round. (This is called “tit for tat”.)

How will these tribes evolve? In the long run, will all tribes survive, or will some prevail over others?

The Rousseau tribe seems quite nice; everyone always gets along! Unfortunately, the Rousseau tribe will inevitably and catastrophically collapse. As soon as a single Hobbes gets in, or a mutation arises to make someone behave like a Hobbes, that individual will become far more successful than everyone else, have vastly more offspring, and ultimately take over the entire population.

The Hobbes tribe seems pretty bad, but it’ll be stable. If a Rousseau should come visit, they’ll just be ruthlessly exploited, making the Hobbeses better off. If an Axelrod arrives, they’ll learn not to be exploited (after the first encounter), but they won’t do any better than the Hobbeses do.

What about the Axelrod tribe? They seem similar to the Rousseau tribe, because everyone is choosing “Cooperate” all the time—will they suffer the same fate? No, they won’t! They’ll do just fine, it turns out. Should a Rousseau come to visit, nobody will even notice; they’ll just keep on choosing “Cooperate” and everything will be fine. And what if a Hobbes comes? They’ll try to exploit the Axelrods, and succeed at first—but soon enough they will be punished for their sins, and in the long run they’ll be worse off (this is why the probability of continuing needs to be sufficiently high).
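
In case you’re wondering where that threshold comes from, here is the standard check with these payoffs: if the chance of another round is d, a Hobbes who meets an Axelrod gets 4 points in the first round and 2 in every round thereafter, for an expected total of 4 + 2d/(1 − d); cooperating instead would have earned 3 every round, for a total of 3/(1 − d). Cooperation is the better deal exactly when 3/(1 − d) ≥ 4 + 2d/(1 − d), which simplifies to 3 ≥ 4 − 2d, i.e. d ≥ 1/2.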

The net result, then, will be that the Rousseau tribe dies out and only the Hobbes and Axelrod tribes remain. But that’s not the end of the story.

Look back at that payoff table. Both tribes are stable, but the Hobbeses are getting 2 points each round, while the Axelrods are getting 3. Remember that these are offspring per couple per generation. This means that the Hobbes tribe will have a roughly constant population, while the Axelrods will have an increasing population.

If the two tribes then come into conflict, perhaps competing over resources, the larger population will most likely prevail. This means that, in the long run, the Axelrod tribe will come to dominate. In the end, all the world will be ruled by Axelrods.
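
If you’d like to see this play out numerically, here is a minimal evolutionary-game-theory sketch in Python. (The 90% continuation probability and the number of simulated matches are arbitrary choices of mine; any continuation probability above 50% gives the same qualitative picture.)

    import random

    # Each strategy sees the opponent's previous move (None in round 1).
    def hobbes(opp_last):   return "D"                                    # always defect
    def rousseau(opp_last): return "C"                                    # always cooperate
    def axelrod(opp_last):  return "C" if opp_last is None else opp_last  # tit-for-tat

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (1, 4),
              ("D", "C"): (4, 1), ("D", "D"): (2, 2)}

    rng = random.Random(42)

    def match(s1, s2, cont=0.9):
        """Play one iterated match; after each round another follows with
        probability cont. Returns player 1's average points per round."""
        last1 = last2 = None
        total, rounds = 0, 0
        while True:
            m1, m2 = s1(last2), s2(last1)
            total, rounds = total + PAYOFF[(m1, m2)][0], rounds + 1
            last1, last2 = m1, m2
            if rng.random() > cont:
                return total / rounds

    tribes = {"Hobbes": hobbes, "Rousseau": rousseau, "Axelrod": axelrod}
    for name1, s1 in tribes.items():
        for name2, s2 in tribes.items():
            avg = sum(match(s1, s2) for _ in range(10000)) / 10000
            print(f"{name1:8} vs {name2:8}: {avg:.2f} points per round")

The Axelrods average a full 3 points per round with one another, while the Hobbeses get exactly 2 among themselves and never do better than about 2.5 against anyone except a Rousseau; that roughly one-point gap is the population growth differential that group-level selection then acts on.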

And indeed, most human beings behave like Axelrods: We’re nice to most people most of the time, but we’re no chumps. Betray our trust, and you will be punished severely. (It seems we also have a small incursion of Hobbeses: We call them psychopaths. Perhaps there are a few Rousseaus among us as well, whom the Hobbeses exploit.)

What is this? It’s multilevel selection. It’s group selection, if you like that term. There’s clearly no better way to describe it.

Moreover, we can’t simply stop at reciprocal altruism, as most mainstream theorists do; yes, Axelrods exhibit reciprocal altruism. But that’s not the only equilibrium! Why is reciprocal altruism so common? Why in the real world are there fifty Axelrods for every Hobbes? Multilevel selection.

And at no point did I assume either (1) that individual selection wasn’t operating, or (2) that the timescales of groups and individuals were the same. Indeed, I’m explicitly assuming the opposite: Individual selection continues to work at every generation, and groups only live or die over many generations.

The key insight that makes this possible is that the game is iterated—it happens over many rounds, and nobody knows exactly how many. This results in multiple Nash equilibria for individual selection, and then group selection can occur over equilibria.

This is by no means restricted to the Prisoner’s Dilemma. In fact, any nontrivial game will result in multiple equilibria when it is iterated, and group selection should always favor the groups that choose a relatively cooperative, efficient outcome. As long as such a strategy emerges by mutation, and gets some chance to get a foothold, it will be successful in the long run.

Indeed, since these conditions don’t seem all that difficult to meet, we would expect that group selection should actually occur quite frequently, and should be a major explanation for a lot of important forms of altruism.

And in fact this seems to be the case. Humans look awfully group-selected. (Like I said, we behave very much like Axelrods.) Many other social species, such as apes, dolphins, and wolves, do as well. There is altruism in nature that doesn’t look group-selected, for instance among eusocial insects; but much of the really impressive altruism seems more like equilibrium selection at the group level than it does like direct selection at the individual level.

Even multicellular life can be considered group selection: A bunch of cells “agree” to set aside some of their own interest in self-replication in favor of supporting a common, unified whole. (And should any mutated cells try to defect and multiply out of control, what happens? We call that cancer.) This can only work when there are multiple equilibria to select from at the individual level—but there nearly always are.

I finally have a published paper.

Jun 12 JDN 2459743

Here it is, my first peer-reviewed publication: “Imperfect Tacit Collusion and Asymmetric Price Transmission”, in the Journal of Economic Behavior & Organization.

Due to the convention in economics that authors are displayed alphabetically, I am listed third of four, and will typically be collapsed into “Bulutay et al.”. I don’t actually think it should be “Julius et al.”; I think Dave Hales did the most important work, and I wanted it to be “Hales et al.”; but anything non-alphabetical is unusual in economics, and it would have taken a strong justification to convince the others to go along with it. This is a very stupid norm (and I attribute approximately 20% of Daron Acemoglu’s superstar status to it), but like any norm, it is difficult to dislodge.

I thought I would feel different when this day finally came. I thought I would feel joy, or at least satisfaction. I had been hoping that satisfaction would finally spur me forward in resubmitting my single-author paper, “Experimental Public Goods Games with Progressive Taxation”, so I could finally get a publication that actually does have “Julius (2022)” (or, at this rate, 2023, 2024…?). But that motivating satisfaction never came.

I did feel some vague sense of relief: Thank goodness, this ordeal is finally over and I can move on. But that doesn’t have the same motivating force; it doesn’t make me want to go back to the other papers I can now hardly bear to look at.

This reaction (or lack thereof?) could be attributed to circumstances: I have been through a lot lately. I was already overwhelmed by finishing my dissertation and going on the job market, and then there was the pandemic, and I had to postpone my wedding, and then when I finally got a job we had to suddenly move abroad, and then it was awful finding a place to live, and then we actually got married (which was lovely, but still stressful), and it took months to get my medications sorted with the NHS, and then I had a sudden resurgence of migraines which kept me from doing most of my work for weeks, and then I actually caught COVID and had to deal with that for a few weeks too. So it really isn’t too surprising that I’d be exhausted and depressed after all that.

Then again, it could be something deeper. I didn’t feel this way about my wedding. That genuinely gave me the joy and satisfaction that I had been expecting; I think it really was the best day of my life so far. So it isn’t as if I’m incapable of these feelings under my current state.

Rather, I fear that I am becoming more permanently disillusioned with academia. Now that I see how the sausage is made, I am no longer so sure I want to be one of the people making it. Publishing that paper didn’t feel like I had accomplished something, or even made some significant contribution to human knowledge. In fact, the actual work of publication was mostly done by my co-authors, because I was too overwhelmed by the job market at the time. But what I did have to do—and what I’ve tried to do with my own paper—felt like a miserable, exhausting ordeal.

More and more, I’m becoming convinced that a single experiment tells us very little, and we are being asked to present each one as if it were a major achievement when it’s more like a single brick in a wall.

But whatever new knowledge our experiments may have gleaned, that part was done years ago. We could have simply posted the draft as a working paper on the web and moved on, and the world would know just as much and our lives would have been a lot easier.

Oh, but then it would not have the imprimatur of peer review! And for our careers, that means absolutely everything. (Literally, when they’re deciding tenure, nothing else seems to matter.) But for human knowledge, does it really mean much? The more referee reports I’ve read, the more arbitrary they feel to me. This isn’t an objective assessment of scientific merit; it’s the half-baked opinion of a single randomly chosen researcher who may know next to nothing about the topic—or worse, have a vested interest in defending a contrary paradigm.

Yes, of course, what gets through peer review is of considerably higher quality than any randomly-selected content on the Internet. (The latter can be horrifically bad.) But is this not also true of what gets submitted for peer review? In fact, aren’t many blogs written by esteemed economists (say, Krugman? Romer? Nate Silver?) of considerably higher quality as well, despite having virtually none of the gatekeepers? I think Krugman’s blog is nominally edited by the New York Times, and Silver has a whole staff at FiveThirtyEight (they’re hiring, in fact!), but I’m fairly certain Romer just posts whatever he wants like I do. Of course, they had to establish their reputations (Krugman and Romer each won a Nobel). But still, it seems like maybe peer-review isn’t doing the most important work here.

Even blogs by far less famous economists (e.g. Miles Kimball, Brad DeLong) are also very good, and probably contribute more to advancing the knowledge of the average person than any given peer-reviewed paper, simply because they are more readable and more widely read. What we call “research” means going from zero people knowing a thing to maybe a dozen people knowing it; “publishing” means going from a dozen to at most a thousand; to go from a thousand to a billion, we call that “education”.

They all matter, of course; but I think we tend to overvalue research relative to education. A world where a few people know something is really not much better than a world where nobody does, while a world where almost everyone knows something can be radically superior. And the more I see just how far behind the cutting edge of research most economists are—let alone most average people—the more apparent it becomes to me that we are investing far too much in expanding that cutting edge (and far, far too much in gatekeeping who gets to do that!) and not nearly enough in disseminating that knowledge to humanity.

I think maybe that’s why finally publishing a paper felt so anticlimactic for me. I know that hardly anyone will ever actually read the damn thing. Just getting to this point took far more effort than it should have; dozens if not hundreds of hours of work, months of stress and frustration, all to satisfy whatever arbitrary criteria the particular reviewers happened to use so that we could all clear this stupid hurdle and finally get that line on our CVs. (And we wonder why academics are so depressed?) Far from being inspired to do the whole process again, I feel as if I have finally emerged from the torture chamber and may at last get some chance for my wounds to heal.

Even publishing fiction was not this miserable. Don’t get me wrong; it was miserable, especially for me, as I hate and fear rejection to the very core of my being in a way most people do not seem to understand. But there at least the subjectivity and arbitrariness of the process is almost universally acknowledged. Agents and editors don’t speak of your work being “flawed” or “wrong”; they don’t even say it’s “unimportant” or “uninteresting”. They say it’s “not a good fit” or “not what we’re looking for right now”. (Journal editors sometimes make noises like that too, but there’s always a subtext of “If this were better science, we’d have taken it.”) Unlike peer reviewers, they don’t come back with suggestions for “improvements” that are often pointless or utterly infeasible.

And unlike peer reviewers, fiction publishers acknowledge their own subjectivity and that of the market they serve. Nobody really thinks that Fifty Shades of Grey was good in any deep sense; but it was popular and successful, and that’s all the publisher really cares about. As a result, failing to be the next Fifty Shades of Grey ends up stinging a lot less than failing to be the next article in American Economic Review. Indeed, I’ve never had any illusions that my work would be popular among mainstream economists. But I once labored under the belief that it would be more important that it is true; and I guess I now consider that an illusion.

Moreover, fiction writers understand that rejection hurts; I’ve been shocked how few academics actually seem to. Nearly every writing conference I’ve ever been to has at least one seminar on dealing with rejection, often several; at academic conferences, I’ve literally never seen one. There seems to be a completely different mindset among academics—at least, the successful, tenured ones—about the process of peer review, what it means, even how it feels. When I try to talk with my mentors about the pain of getting rejected, they just… don’t get it. They offer me guidance on how to deal with anger at rejection, when that is not at all what I feel—what I feel is utter, hopeless, crushing despair.

There is a type of person who reacts to rejection with anger: Narcissists. (Look no further than the textbook example, Donald Trump.) I am coming to fear that I’m just not narcissistic enough to be a successful academic. I’m not even utterly lacking in narcissism: I am almost exactly average for a Millennial on the Narcissistic Personality Inventory. I score fairly high on Authority and Superiority (I consider myself a good leader and a highly competent individual) but very low on Exploitativeness and Self-Sufficiency (I don’t like hurting people and I know no man is an island). Then again, maybe I’m just narcissistic in the wrong way: I score quite low on “grandiose narcissism”, but relatively high on “vulnerable narcissism”. I hate to promote myself, but I find rejection devastating. This combination seems to be exactly what doesn’t work in academia. But it seems to be par for the course among writers and poets. Perhaps I have the mind of a scientist, but I have the soul of a poet. (Send me through the wormhole! Please? Please!?)

Why do poor people dislike inflation?

Jun 5 JDN 2459736

The United States and United Kingdom are both very unaccustomed to inflation. Neither has seen double-digit inflation since the 1980s.

Here’s US inflation since 1990:

[Graph: US inflation rate since 1990]

And here is the same graph for the UK:

[Graph: UK inflation rate since 1990]

While a return to double-digits remains possible, at this point it likely won’t happen, and if it does, it will occur only briefly.

This stability is no doubt a major reason why the dollar and the pound are widely used as reserve currencies (especially the dollar), and it is likely due to the fact that they are managed by the world’s most competent central banks. Brexit would almost have made sense if the UK had been pressured to join the Euro; but they weren’t, because everyone knew the pound was better managed.

The Euro also doesn’t have much inflation, but if anything they err on the side of too low, mainly because Germany appears to believe that inflation is literally Hitler. In fact, the rise of the Nazis didn’t have much to do with the Weimar hyperinflation. The Great Depression was by far a greater factor—unemployment is much, much worse than inflation. (By the way, it’s weird that you can put that graph back to the 1980s. It, uh, wasn’t the Euro then. Euros didn’t start circulating until 1999. Is that an aggregate of the franc and the deutsche mark and whatever else? The Euro itself has never had double-digit inflation—ever.)

But it’s always a little surreal for me to see how panicked people in the US and UK get when our inflation rises a couple of percentage points. There seems to be an entire subgenre of economics news that basically consists of rich people saying the sky is falling because inflation has risen—or will, or may rise—by two points. (Hey, anybody got any ideas how we can get them to panic like this over rises in sea level or aggregate temperature?)

Compare this to some other countries that have real inflation: In Brazil, 10% inflation is a pretty typical year. In Argentina, 10% is a really good year—they’re currently pushing 60%. Kenya’s inflation is pretty well under control now, but it went over 30% during the crisis in 2008. Botswana was doing a nice job of bringing down their inflation until the COVID pandemic threw them out of whack, and now they’re hitting double-digits too. And of course there’s always Zimbabwe, which seemed to look at Weimar Germany and think, “We can beat that.” (80,000,000,000% in one month!? Any time you find yourself talking about billions of percent, something has gone terribly, terribly wrong.)

Hyperinflation is a real problem—it isn’t what put Hitler into power, but it has led to real crises in Germany, Zimbabwe, and elsewhere. Once you start getting over 100% per year, and especially when it starts rapidly accelerating, that’s a genuine crisis. Moreover, even though they clearly don’t constitute hyperinflation, I can see why people might legitimately worry about price increases of 20% or 30% per year. (Let alone 60% like Argentina is dealing with right now.) But why is going from 2% to 6% any cause for alarm? Yet alarmed we seem to be.

I can even understand why rich people would be upset about inflation (though the magnitude of their concern does still seem disproportionate). Inflation erodes the value of financial assets, because most bonds, options, etc. are denominated in nominal, not inflation-adjusted terms. (Though there are such things as inflation-indexed bonds.) So high inflation can in fact make rich people slightly less rich.

But why in the world are so many poor people upset about inflation?

Inflation doesn’t just erode the value of financial assets; it also erodes the value of financial debts. And most poor people have more debts than they have assets—indeed, it’s not uncommon for poor people to have substantial debt and no financial assets to speak of (what little wealth they have being non-financial, e.g. a car or a home). Thus, their net wealth position improves as prices rise.

The interest rate response can compensate for this to some extent, but most people’s debts are fixed-rate. Moreover, if it’s the higher interest rates you’re worried about, you should want the Federal Reserve and the Bank of England not to fight inflation too hard, because the way they fight it is chiefly by raising interest rates.

In surveys, almost everyone thinks that inflation is very bad: 92% think that controlling inflation should be a high priority, and 90% think that if inflation gets too high, something very bad will happen. This is greater agreement among Americans than is found for statements like “I like apple pie” or “kittens are nice”, and comparable to “fair elections are important”!

I admit, I question the survey design here: I would answer ‘yes’ to both questions if we’re talking about a theoretical 10,000% hyperinflation, but ‘no’ if we’re talking about a realistic 10% inflation. So I would like to see, but could not find, a survey asking people what level of inflation is sufficient cause for concern. But since most of these people seemed concerned about actual, realistic inflation (85% reported anger at seeing actual, higher prices), it still suggests a lot of strong feelings that even mild inflation is bad.

So it does seem to be the case that a lot of poor and middle-class people really strongly dislike inflation even in the actual, mild levels in which it occurs in the US and UK.

The main fear seems to be that inflation will erode people’s purchasing power—that as the price of gasoline and groceries rise, people won’t be able to eat as well or drive as much. And that, indeed, would be a real loss of utility worth worrying about.

But in fact this makes very little sense: Most forms of income—particularly labor income, which is the only real income for some 80%-90% of the population—actually increase with inflation, more or less one-to-one. Yes, there’s some delay—you won’t get your annual cost-of-living raise immediately, but several months down the road. But this could have at most a small effect on your real consumption.

To see this, suppose that inflation has risen from 2% to 6%. (Really, you need not suppose; it has.) Now consider your cost-of-living raise, which nearly everyone gets. It will presumably rise the same way: So if it was 3% before, it will now be 7%. Now consider how much your purchasing power is affected over the course of the year.

For concreteness, let’s say your initial income was $3,000 per month at the start of the year (a fairly typical amount for a middle-class American, indeed almost exactly the median personal income). Let’s compare the case of no inflation with a 1% raise, 2% inflation with a 3% raise, and 6% inflation with a 7% raise.

If there was no inflation, your real income would remain simply $3,000 per month, until the end of the year when it would become $3,030 per month. That’s the baseline to compare against.

If inflation is 2%, your real income would gradually fall, by about 0.16% per month, before being bumped up 3% at the end of the year. So in January you’d have $3,000, in February $2,995, in March $2,990. Come December, your real income has fallen to $2,941. But then next January it will immediately be bumped up 3% to $3,029, almost the same as it would have been with no inflation at all. The total lost income over the entire year is about $380, or about 1% of your total income.

If inflation instead rises to 6%, your real income will fall by 0.49% per month, reaching a minimum of $2,830 in December before being bumped back up to $3,028 next January. Your total loss for the whole year will be about $1,110, or about 3% of your total income.

Indeed, it’s a pretty good heuristic to say that for an inflation rate of x% with annual cost-of-living raises, your loss of real income relative to having no inflation at all is about (x/2)%. (This breaks down for really high levels of inflation, at which point it becomes a wild over-estimate, since even 200% inflation doesn’t make your real income go to zero.)
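
Here is that arithmetic as a minimal Python sketch, using the stylized setup above ($3,000 per month, with the cost-of-living raise arriving only next January); it reproduces the roughly $380 and $1,110 figures and the (x/2)% heuristic:

    def yearly_real_loss(monthly_nominal, inflation, months=12):
        """Real value lost over one year, vs. the zero-inflation baseline, by a
        flat nominal monthly amount that only gets adjusted at the next January."""
        f = (1 + inflation) ** (1 / 12)  # monthly price growth factor
        return sum(monthly_nominal - monthly_nominal / f**k
                   for k in range(1, months + 1))

    MONTHLY = 3000  # the stylized middle-class income from above
    for inflation in (0.02, 0.06):
        loss = yearly_real_loss(MONTHLY, inflation)
        print(f"{inflation:.0%} inflation: lose ${loss:,.0f}, "
              f"or {loss / (12 * MONTHLY):.1%} of income (heuristic: {inflation / 2:.1%})")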

This isn’t nothing, of course. You’d feel it. Going from 2% to 6% inflation at an income of $3000 per month is like losing $700 over the course of a year, which could be a month of groceries for a family of four. (Not that anyone can really raise a family of four on a single middle-class income these days. When did The Simpsons begin to seem aspirational?)

But this isn’t the whole story. Suppose that this same family of four had a mortgage payment of $1000 per month; that is also decreasing in real value by the same proportion. And let’s assume it’s a fixed-rate mortgage, as most are, so we don’t have to factor in any changes in interest rates.

With no inflation, their mortgage payment remains $1000. It’s 33.3% of their income this year, and it will be 33.0% of their income next year after they get that 1% raise.

With 2% inflation, their mortgage payment will also fall by 0.16% per month; $998 in February, $996 in March, and so on, down to $980 in December. This amounts to an increase in real income of about $130—taking away a third of the loss that was introduced by the inflation.

With 6% inflation, their mortgage payment will also fall by 0.49% per month; $995 in February, $990 in March, and so on, until it’s only $943 in December. This amounts to an increase in real income of over $370—again taking away a third of the loss.

Indeed, it’s no coincidence that it’s one third; the proportion of lost real income you’ll get back by cheaper mortgage payments is precisely the proportion of your income that was spent on mortgage payments at the start—so if, like too many Americans, they are paying more than a third of their income on mortgage, their real loss of income from inflation will be even lower.
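
Extending the same sketch to the fixed $1,000 mortgage payment shows the one-third offset directly, since a flat nominal payment loses real value at exactly the same rate as the flat nominal paycheck:

    # Same yearly_real_loss function as in the sketch above.
    def yearly_real_loss(monthly_nominal, inflation, months=12):
        f = (1 + inflation) ** (1 / 12)
        return sum(monthly_nominal - monthly_nominal / f**k
                   for k in range(1, months + 1))

    for inflation in (0.02, 0.06):
        income_loss = yearly_real_loss(3000, inflation)  # weaker paycheck
        debt_relief = yearly_real_loss(1000, inflation)  # lighter real mortgage payment
        print(f"{inflation:.0%}: lose ${income_loss:,.0f}, get back ${debt_relief:,.0f} "
              f"({debt_relief / income_loss:.0%} of the loss)")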

And what if they are renting instead? They’re probably on an annual lease, so that payment won’t increase in nominal terms either—and hence will decrease in real terms, in just the same way as a mortgage payment. Likewise car payments, credit card payments, any debt that has a fixed interest rate. If they’re still paying back student loans, their financial situation is almost certainly improved by inflation.

This means that the real loss from an increase of inflation from 2% to 6% is something like 1.5% of total income, or about $500 for a typical American adult. That’s clearly not nearly as bad as a similar increase in unemployment, which would translate one-to-one into lost income on average; moreover, this loss would be concentrated among people who lost their jobs, so it’s actually worse than that once you account for risk aversion. It’s clearly better to lose 1% of your income than to have a 1% chance of losing nearly all your income—and inflation is the former while unemployment is the latter.

Indeed, the only reason you lost purchasing power at all was that your cost-of-living increases didn’t occur often enough. If instead you had a labor contract that instituted cost-of-living raises every month, or even every paycheck, instead of every year, you would get all the benefits of a cheaper mortgage and virtually none of the costs of a weaker paycheck. Convince your employer to make this adjustment, and you will actually benefit from higher inflation.

So if poor and middle-class people are upset about eroding purchasing power, they should be mad at their employers for not implementing more frequent cost-of-living adjustments; the inflation itself really isn’t the problem.