A lot of people seem really upset about inflation. I’ve previously discussed why this is a bit weird; inflation really just isn’t that bad. In fact, I am increasingly concerned that the usual methods for fixing inflation are considerably worse than inflation itself.
To be clear, I’m not talking about hyperinflation—if you are getting triple-digit inflation or more, you are clearly printing too much money and you need to stop. And there are places in the world where this happens.
But what about just regular, ordinary inflation, even when it’s fairly high? Prices rising at 8% or 9% or even 11% per year? What catastrophe befalls our society when this happens?
Okay, sure, if we could snap our fingers and make prices all stable without cost, that would be worth doing. But we can’t. All of our mechanisms for reducing inflation come with costs—and often very high costs.
The chief mechanism by which inflation is currently controlled is open-market operations by central banks such as the Federal Reserve, the Bank of England, and the European Central Bank. These central banks try to reduce inflation by selling bonds, which lowers bond prices (and thus raises yields) and drains reserves from banks, thereby increasing interest rates. This also effectively removes money from the economy, as banks are using that money to buy bonds instead of lending it out. (It is chiefly in this odd, indirect sense that the central bank manages the “money supply”.)
But how does this actually reduce inflation? It’s remarkably indirect. The higher interest rates discourage people from buying houses and companies from hiring workers, which reduces economic growth—or even triggers a recession—and that slowdown is what is supposed to bring down prices. There’s actually a lot we still don’t know about how this works or how long it should be expected to take. What we do know is that the pain hits quickly and the benefits arrive only months or even years later.
As Krugman has rightfully pointed out, the worst pain of the 1970s was not the double-digit inflation; it was the recessions that Paul Volcker’s economic policy triggered in response to that inflation. The inflation wasn’t exactly a good thing; but for most people, the cure was much worse than the disease.
Most laypeople seem to think that prices somehow go up without wages going up, but that simply isn’t how it works. Prices and wages rise at close to the same rate in most countries most of the time. In fact, inflation is often driven chiefly by rising wages rather than the other way around. There are often lags between when the inflation hits and when people see their wages rise; but these lags can actually be in either direction—inflation first or wages first—and for moderate amounts of inflation they are clearly less harmful than the high rates of unemployment that we would get if we fought inflation more aggressively with monetary policy.
Economists are also notoriously vague about exactly how they expect the central bank to reduce inflation. They use complex jargon or broad euphemisms. But when they do actually come out and say they want to reduce wages, it tends to outrage people. Well, that’s one of three main ways that interest rates actually reduce inflation: They reduce wages, they cause unemployment, or they stop people from buying houses. That’s pretty much all that central banks can do.
There may be other ways to reduce inflation, like windfall profits taxes, antitrust action, or even price controls. The first two are basically no-brainers; we should always be taxing windfall profits (if the profits really are due to a windfall outside the corporation’s control, taxing them doesn’t distort any incentives), and we should absolutely be increasing antitrust action (why did we reduce it in the first place?). Price controls are riskier—they really do create shortages—but then again, is that really worse than lower wages or unemployment? Because the usual strategy involves lower wages and unemployment.
It’s a little ironic: The people who are usually all about laissez-faire are the ones who panic about inflation and want the government to take drastic action; meanwhile, I’m usually in favor of government intervention, but when it comes to moderate inflation, I think maybe we should just let it be.
While a return to double-digits remains possible, at this point it likely won’t happen, and if it does, it will occur only briefly.
This stability is no doubt a major reason why the dollar and the pound (especially the dollar) are so widely used as reserve currencies, and it is likely due to the fact that they are managed by the world’s most competent central banks. Brexit would almost have made sense if the UK had been pressured to join the Euro; but they weren’t, because everyone knew the pound was better managed.
The Euro also doesn’t have much inflation, but if anything they err on the side of too low, mainly because Germany appears to believe that inflation is literally Hitler. In fact, the rise of the Nazis didn’t have much to do with the Weimar hyperinflation. The Great Depression was by far a greater factor—unemployment is much, much worse than inflation. (By the way, it’s weird that that graph can be extended back to the 1980s. It, uh, wasn’t the Euro then; the euro didn’t even launch until 1999. Is that an aggregate of the franc and the deutsche mark and whatever else? The Euro itself has never had double-digit inflation—ever.)
But it’s always a little surreal for me to see how panicked people in the US and UK get when our inflation rises a couple of percentage points. There seems to be an entire subgenre of economics news that basically consists of rich people saying the sky is falling because inflation has risen—or will, or may rise—by two points. (Hey, anybody got any ideas how we can get them to panic like this over rises in sea level or aggregate temperature?)
Hyperinflation is a real problem—it isn’t what put Hitler into power, but it has led to real crises in Germany, Zimbabwe, and elsewhere. Once you start getting over 100% per year, and especially when it starts rapidly accelerating, that’s a genuine crisis. Moreover, even though they clearly don’t constitute hyperinflation, I can see why people might legitimately worry about price increases of 20% or 30% per year. (Let alone 60% like Argentina is dealing with right now.) But why is going from 2% to 6% any cause for alarm? Yet alarmed we seem to be.
I can even understand why rich people would be upset about inflation (though the magnitude of their concern does still seem disproportionate). Inflation erodes the value of financial assets, because most bonds, options, etc. are denominated in nominal, not inflation-adjusted terms. (Though there are such things as inflation-indexed bonds.) So high inflation can in fact make rich people slightly less rich.
But why in the world are so many poor people upset about inflation?
Inflation doesn’t just erode the value of financial assets; it also erodes the value of financial debts. And most poor people have more debts than they have assets—indeed, it’s not uncommon for poor people to have substantial debt and no financial assets to speak of (what little wealth they have being non-financial, e.g. a car or a home). Thus, their net wealth position improves as prices rise.
The interest rate response can compensate for this to some extent, but most people’s debts are fixed-rate. Moreover, if it’s the higher interest rates you’re worried about, you should want the Federal Reserve and the Bank of England not to fight inflation too hard, because the way they fight it is chiefly by raising interest rates.
I admit, I question the survey design here: I would answer ‘yes’ to both questions if we’re talking about a theoretical 10,000% hyperinflation, but ‘no’ if we’re talking about a realistic 10% inflation. So I would like to see, but could not find, a survey asking people what level of inflation is sufficient cause for concern. But since most of these people seemed concerned about actual, realistic inflation (85% reported anger at seeing actual, higher prices), it still suggests a lot of strong feelings that even mild inflation is bad.
So it does seem to be the case that a lot of poor and middle-class people really strongly dislike inflation even in the actual, mild levels in which it occurs in the US and UK.
The main fear seems to be that inflation will erode people’s purchasing power—that as the price of gasoline and groceries rise, people won’t be able to eat as well or drive as much. And that, indeed, would be a real loss of utility worth worrying about.
But in fact this makes very little sense: Most forms of income—particularly labor income, which is essentially the only income for some 80%-90% of the population—actually increase with inflation, more or less one-to-one. Yes, there’s some delay—you won’t get your annual cost-of-living raise immediately, but several months down the road. But this can have at most a small effect on your real consumption.
To see this, suppose that inflation has risen from 2% to 6%. (Really, you need not suppose; it has.) Now consider your cost-of-living raise, which nearly everyone gets. It will presumably rise the same way: So if it was 3% before, it will now be 7%. Now consider how much your purchasing power is affected over the course of the year.
For concreteness, let’s say your initial income was $3,000 per month at the start of the year (a fairly typical amount for a middle-class American, indeed almost exactly the median personal income). Let’s compare the case of no inflation with a 1% raise, 2% inflation with a 3% raise, and 6% inflation with a 7% raise.
If there was no inflation, your real income would remain simply $3,000 per month, until the end of the year when it would become $3,030 per month. That’s the baseline to compare against.
If inflation is 2%, your real income would gradually fall, by about 0.16% per month, before being bumped up 3% at the end of the year. So in January you’d have $3,000, in February $2,995, in March $2,990. Come December, your real income has fallen to $2,941. But then next January it will immediately be bumped up 3% to $3,029, almost the same as it would have been with no inflation at all. The total lost income over the entire year is about $380, or about 1% of your total income.
If inflation instead rises to 6%, your real income will fall by 0.49% per month, reaching a minimum of $2,830 in December before being bumped back up to $3,028 next January. Your total loss for the whole year will be about $1110, or about 3% of your total income.
Indeed, it’s a pretty good heuristic to say that for an inflation rate of x% with annual cost-of-living raises, your loss of real income relative to having no inflation at all is about (x/2)%. (This breaks down for really high levels of inflation, at which point it becomes a wild over-estimate, since even 200% inflation doesn’t make your real income go to zero.)
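If you want to check this arithmetic yourself, here’s a minimal Python sketch of the calculation (the helper function is just for illustration, and the exact totals depend on how you assume the compounding works):

```python
# Real income lost over a year when your nominal pay stays fixed until an
# annual cost-of-living raise, relative to a world with no inflation.
# Exact totals depend on compounding assumptions; this is a rough check.

def real_income_loss(monthly_income, inflation, months=12):
    monthly_factor = (1 + inflation) ** (1 / 12)  # monthly price growth
    return sum(monthly_income - monthly_income / monthly_factor ** m
               for m in range(months))

income = 3000  # dollars per month
for inflation in (0.02, 0.06):
    loss = real_income_loss(income, inflation)
    share = loss / (12 * income)
    print(f"{inflation:.0%} inflation: about ${loss:,.0f} lost, "
          f"{share:.1%} of annual income (heuristic says {inflation / 2:.1%})")
```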
This isn’t nothing, of course. You’d feel it. Going from 2% to 6% inflation at an income of $3000 per month is like losing $700 over the course of a year, which could be a month of groceries for a family of four. (Not that anyone can really raise a family of four on a single middle-class income these days. When did The Simpsons begin to seem aspirational?)
But this isn’t the whole story. Suppose that this same family of four had a mortgage payment of $1000 per month; that is also decreasing in real value by the same proportion. And let’s assume it’s a fixed-rate mortgage, as most are, so we don’t have to factor in any changes in interest rates.
With no inflation, their mortgage payment remains $1000. It’s 33.3% of their income this year, and it will be 33.0% of their income next year after they get that 1% raise.
With 2% inflation, their mortgage payment will also fall by 0.16% per month; $998 in February, $996 in March, and so on, down to $980 in December. This amounts to an increase in real income of about $130—taking away a third of the loss that was introduced by the inflation.
With 6% inflation, their mortgage payment will also fall by 0.49% per month; $995 in February, $990 in March, and so on, until it’s only $943 in December. This amounts to an increase in real income of over $370—again taking away a third of the loss.
Indeed, it’s no coincidence that it’s one third; the proportion of lost real income you’ll get back by cheaper mortgage payments is precisely the proportion of your income that was spent on mortgage payments at the start—so if, like too many Americans, they are paying more than a third of their income on mortgage, their real loss of income from inflation will be even lower.
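Here’s the same kind of back-of-the-envelope sketch for the mortgage side (again, illustrative numbers and a made-up helper function). Since the paycheck and the fixed payment erode in real value at exactly the same rate, the share of the loss you get back is simply the payment’s share of your income:

```python
# A fixed nominal payment (mortgage, annual lease, car loan) loses real value
# at the same rate as a fixed nominal paycheck, so the fraction of the loss it
# offsets equals its share of income. Rough illustrative numbers only.

def real_value_erosion(nominal_monthly, inflation, months=12):
    monthly_factor = (1 + inflation) ** (1 / 12)
    return sum(nominal_monthly - nominal_monthly / monthly_factor ** m
               for m in range(months))

income, mortgage = 3000, 1000
for inflation in (0.02, 0.06):
    pay_loss = real_value_erosion(income, inflation)     # weaker paycheck
    debt_gain = real_value_erosion(mortgage, inflation)  # cheaper mortgage
    print(f"{inflation:.0%} inflation: mortgage offsets {debt_gain / pay_loss:.0%} "
          f"of the loss (its share of income: {mortgage / income:.0%})")
```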
And what if they are renting instead? They’re probably on an annual lease, so that payment won’t increase in nominal terms either—and hence will decrease in real terms, in just the same way as a mortgage payment. Likewise car payments, credit card payments, any debt that has a fixed interest rate. If they’re still paying back student loans, their financial situation is almost certainly improved by inflation.
This means that the real loss from an increase of inflation from 2% to 6% is something like 1.5% of total income, or about $500 for a typical American adult. That’s clearly not nearly as bad as a similar increase in unemployment, which would translate one-to-one into lost income on average; moreover, this loss would be concentrated among people who lost their jobs, so it’s actually worse than that once you account for risk aversion. It’s clearly better to lose 1% of your income than to have a 1% chance of losing nearly all your income—and inflation is the former while unemployment is the latter.
Indeed, the only reason you lost purchasing power at all was that your cost-of-living increases didn’t occur often enough. If instead you had a labor contract that instituted cost-of-living raises every month, or even every paycheck, instead of every year, you would get all the benefits of a cheaper mortgage and virtually none of the costs of a weaker paycheck. Convince your employer to make this adjustment, and you will actually benefit from higher inflation.
So if poor and middle-class people are upset about eroding purchasing power, they should be mad at their employers for not implementing more frequent cost-of-living adjustments; the inflation itself really isn’t the problem.
(I couldn’t resist; for the uninitiated, my slightly off-color title is referencing this XKCD comic.)
When faced with a bad recession, Keynesian economics prescribes the following response: Expand the money supply. Cut interest rates. Increase government spending and decrease taxes. The bigger the recession, the more we should do all these things—especially increasing spending, because interest rates will often get pushed to zero, creating what’s called a liquidity trap.
Take a look at these two FRED graphs, both since the 1950s. The first is interest rates (specifically the Fed funds effective rate):
The second is the US federal deficit as a proportion of GDP:
Interest rates were pushed to zero right after the 2008 recession, and didn’t start coming back up until 2016. Then as soon as we hit the COVID recession, they were dropped back to zero.
The deficit looks even more remarkable. At the 2009 trough of the recession, the deficit was large, nearly 10% of GDP; but then it was quickly reduced back to normal, to between 2% and 4% of GDP. And that initial surge is as much explained by GDP and tax receipts falling as by spending increasing.
Yet in 2020 we saw something quite different: The deficit became huge. Literally off the chart, nearly 15% of GDP. A staggering $2.8 trillion. We’ve not had a deficit that large as a proportion of GDP since WW2. We’ve never had a deficit that large in real billions of dollars.
Deficit hawks came out of the woodwork to complain about this, and for once I was worried they might actually be right. Their most credible complaint was that it would trigger inflation, and they weren’t wrong about that: Inflation became a serious concern for the first time in decades.
But these recessions were very large, and when you actually run the numbers, this deficit was the correct magnitude for what Keynesian models tell us to do. I wouldn’t have thought our government had the will and courage to actually do it, but I am very glad to have been wrong about that, for one very simple reason:
It worked.
In 2009, we didn’t actually fix the recession. We blunted it; we stopped it from getting worse. But we never really restored GDP, we just let it get back to its normal growth rate after it had plummeted, and eventually caught back up to where we had been.
2021 went completely differently. With a much larger deficit, we fixed this recession. We didn’t just stop the fall; we reversed it. We aren’t just back to normal growth rates—we are back to the same level of GDP, as if the recession had never happened.
This contrast is quite obvious from the graph of US GDP:
In 2008 and 2009, GDP slumps downward, and then just… resumes its previous trend. It’s like we didn’t do anything to fix the recession, and just allowed the overall strong growth of our economy to carry us through.
The pattern in 2020 is completely different. GDP plummets downward—much further, much faster than in the Great Recession. But then it immediately surges back upward. By the end of 2021, it was above its pre-recession level, and looks to be back on its growth trend. With a recession this deep, if we’d just waited like we did last time, it would have taken four or five years to reach this point—we actually did it in less than one.
Indeed, to go from unemployment almost 15% in April of 2020 to under 4% in December of 2021 is fast enough I feel like I’m getting whiplash. We have never seen unemployment drop that fast. Krugman is fond of comparing this to “morning in America”, but that’s really an understatement. Pitch black one moment, shining bright the next: this isn’t a sunrise, it’s pulling open a blackout curtain.
I’m not sure I have the words to express what a staggering achievement of economic policy it is to so rapidly and totally repair the economic damage caused by a pandemic while that pandemic is still happening. It’s the equivalent of repairing an airplane that is not only still in flight, but still taking anti-aircraft fire.
Why, it seems that Keynes fellow may have been onto something, eh?
Labor markets have been behaving quite strangely lately, due to COVID and its consequences. As I said in an earlier post, the COVID recession was the one recession I can think of that actually seemed to follow Real Business Cycle theory—where it was labor supply, not demand, that drove employment.
I dare say that for the first time in decades, the US government actually followed Keynesian policy. US federal government spending surged from $4.8 trillion to $6.8 trillion in a single year:
That is a staggering amount of additional spending; I don’t think any country in history has ever increased their spending by that large an amount in a single year, even inflation-adjusted. Yet in response to a recession that severe, this is exactly what Keynesian models prescribed—and for once, we listened. Instead of balking at the big numbers, we went ahead and spent the money.
And apparently it worked: unemployment spiked to the worst levels seen since the Great Depression, then suddenly plummeted back to normal almost immediately:
Nor was this just the result of people giving up on finding work. U-6, the broader unemployment measure that includes people who are underemployed or have given up looking for work, shows the same unprecedented pattern:
The oddest part is that people are now quitting their jobs at the highest rate seen in over 20 years:
This phenomenon has been dubbed the Great Resignation, and while its causes are still unclear, it is clearly the most important change in the labor market in decades.
In a previous post I hypothesized that this surge in strikes and quits was a coordination effect: The sudden, consistent shock to all labor markets at once gave people a focal point to coordinate their decision to strike.
But it’s also quite possible that it was the Keynesian stimulus that did it: The relief payments made it safe for people to leave jobs they had long hated, and they leapt at the opportunity.
When that huge surge in government spending was proposed, the usual voices came out of the woodwork to warn of terrible inflation. It’s true, inflation has been higher lately than usual, nearly 7% last year. But we still haven’t hit the double-digit inflation rates we had in the late 1970s and early 1980s:
Indeed, most of the inflation we’ve had can be explained by the shortages created by the supply chain crisis, along with a very interesting substitution effect created by the pandemic. As services shut down, people bought goods instead: Home gyms instead of gym memberships, wifi upgrades instead of restaurant meals.
As a result, the price of durable goods actually rose, when it had previously been falling for decades. That broader pattern is worth emphasizing: As technology advances, services like healthcare and education get more expensive, durable goods like phones and washing machines get cheaper, and nondurable goods like food and gasoline fluctuate but ultimately stay about the same. But in the last year or so, durable goods have gotten more expensive too, because people want to buy more while supply chains are able to deliver less.
This suggests that the inflation we are seeing is likely to go away in a few years, once the pandemic is better under control (or else reduced to something like a new influenza, where the virus is always there but we learn to live with it).
But I don’t think the effects on the labor market will be so transitory. The strikes and quits we’ve been seeing lately really are at a historic level, and they are likely to have a long-lasting effect on how work is organized. Employers are panicking about having to raise wages and whining about how “no one wants to work” (meaning, of course, no one wants to work at the current wage and conditions on offer). The correct response is the one from Goodfellas [language warning].
For the first time in decades, there are actually more job vacancies than unemployed workers:
This means that the tables have turned. The bargaining power is suddenly in the hands of workers again, after being in the hands of employers for as long as I’ve been alive. Of course it’s impossible to know whether some other shock could yield another reversal; but for now, it looks like we are finally on the verge of major changes in how labor markets operate—and I for one think it’s about time.
It seems like an egregious understatement to say that the last couple of years have been unusual. The COVID-19 pandemic was historic, comparable in threat—though not in outcome—to the 1918 influenza pandemic.
At this point it looks like we may not be able to fully eradicate COVID. And there are still many places around the world where variants of the virus continue to spread. I personally am a bit worried about the recent surge in the UK; it might add some obstacles (as if I needed any more) to my move to Edinburgh. Yet even in hard-hit places like India and Brazil things are starting to get better. Overall, it seems like the worst is over.
This pandemic disrupted our society in so many ways, great and small, and we are still figuring out what the long-term consequences will be.
But as an economist, one of the things I found most unusual is that this recession fit Real Business Cycle theory.
Real Business Cycle theory (henceforth RBC) posits that recessions are caused by negative technology shocks which result in a sudden drop in labor supply, reducing employment and output. This is generally combined with sophisticated mathematical modeling (DSGE or GTFO), and it typically leads to the conclusion that the recession is optimal and we should do nothing to correct it (which was after all the original motivation of the entire theory—they didn’t like the interventionist policy conclusions of Keynesian models). Alternatively it could suggest that, if we can, we should try to intervene to produce a positive technology shock (but nobody’s really sure how to do that).
For a typical recession, this is utter nonsense. It is obvious to anyone who cares to look that major recessions like the Great Depression and the Great Recession were caused by a lack of labor demand, not supply. There is no apparent technology shock to explain either recession. Instead, they seem to be precipitated by a financial crisis, which causes a liquidity crunch that leads to a downward spiral: layoffs reduce spending, which causes more layoffs. Millions of people lose their jobs and become desperate to find new ones, with hundreds of people applying to each opening. RBC predicts a shortage of labor where there is instead a glut. RBC predicts that wages should go up in recessions—but they almost always go down.
But for the COVID-19 recession, RBC actually had some truth to it. We had something very much like a negative technology shock—namely the pandemic. COVID-19 greatly increased the cost of working and the cost of shopping. This led to a reduction in labor demand as usual, but also a reduction in labor supply for once. And while we did go through a phase in which hundreds of people applied to each new opening, we then followed it up with a labor shortage and rising wages. A fall in labor supply should create inflation, and we now have the highest inflation we’ve had in decades—but there’s good reason to think it’s just a transitory spike that will soon settle back to normal.
The recovery from this recession was also much more rapid: Once vaccines started rolling out, the economy began to recover almost immediately. We recovered most of the employment losses in just the first six months, and we’re on track to recover completely in half the time it took after the Great Recession.
This makes it the exception that proves the rule: Now that you’ve seen a recession that actually resembles RBC, you can see just how radically different it was from a typical recession.
Moreover, even in this weird recession the usual policy conclusions from RBC are off-base. It would have been disastrous to withhold the economic relief payments—which I’m happy to say even most Republicans realized. The one thing that RBC got right as far as policy is that a positive technology shock was our salvation—vaccines.
Indeed, while the cause of this recession was very strange and not what Keynesian models were designed to handle, our government largely followed Keynesian policy advice—and it worked. We ran massive government deficits—over $3 trillion in 2020—and the result was rapid recovery in consumer spending and then employment. I honestly wouldn’t have thought our government had the political will to run a deficit like that, even when the economic models told them they should; but I’m very glad to be wrong. We ran the huge deficit just as the models said we should—and it worked. I wonder how the 2010s might have gone differently had we done the same after 2008.
I don’t think there are many people who would say that 2020 was their favorite year. Even if everything else had gone right, the 1.7 million deaths from the COVID pandemic would already make this a very bad year.
And this Christmas season certainly felt quite different, with most of us unable to safely travel and forced to interact with our families only via video calls. New Year’s this year won’t feel like a celebration of a successful year so much as relief that we finally made it through.
Many of us have lost loved ones. Fortunately none of my immediate friends and family have died of COVID, but I can now count half a dozen acquaintances, friends-of-friends or distant relatives who are no longer with us. And I’ve been relatively lucky overall; both I and my partner work in jobs that are easy to do remotely, so our lives haven’t had to change all that much.
Yet 2020 is nearly over, and already there are signs that things really will get better in 2021. There are many good reasons for hope.
Maybe the success of this vaccine will finally convince some of the folks who have been doubting the safety and effectiveness of vaccines in general. (Or maybe not; it’s too soon to tell.)
Those 1.7 million deaths need to be compared against the fact that global life expectancy has increased from 45 to 73 since 1950. The world population is 7.8 billion people. The global death rate has fallen from over 20 deaths per 1,000 people per year to only 7.6 deaths per 1,000 people per year. Multiplied over 7.8 billion people, that’s nearly 100 million lives saved every single year by advances in medicine and overall economic development. Indeed, if we were to sustain our current death rate indefinitely, our life expectancy would rise to over 130. There are various reasons to think that probably won’t happen, mostly related to age demographics, but in fact there are medical breakthroughs we might make that would make it possible. Even according to current forecasts, world life expectancy is expected to exceed 80 years by the end of the 21st century.
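If you want to check that arithmetic, here is the back-of-the-envelope version (all inputs are the approximate figures cited above):

```python
# Back-of-the-envelope check of the figures above (all inputs approximate).
population = 7.8e9            # world population
death_rate_1950 = 20 / 1000   # deaths per person per year, circa 1950
death_rate_now = 7.6 / 1000   # deaths per person per year, recent

lives_saved = (death_rate_1950 - death_rate_now) * population
implied_life_expectancy = 1 / death_rate_now  # if today's rate held forever

print(f"Lives saved per year vs. the 1950 death rate: {lives_saved / 1e6:.0f} million")
print(f"Life expectancy implied by sustaining today's death rate: "
      f"{implied_life_expectancy:.0f} years")
```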
The sheepskin effect is the observation that the increase in income from graduating from college after four years, relative to attending for only three years, is much higher than the increase in income from attending for three years instead of two.
It has been suggested that this provides strong evidence that the return to education is primarily signaling, and that college doesn’t provide much actual value. In this post I’m going to show why this view is mistaken. The sheepskin effect in fact tells us very little about the true value of college. (Noah Smith actually made a pretty decent argument that it provides evidence against signaling!)
To see this, consider two very simple models.
In both models, we’ll assume that markets are competitive but productivity is not directly observable, so employers sort you based on your education level and then pay a wage equal to the average productivity of people at your education level, compensated for the cost of getting that education.
Model 1:
In this model, people all start with the same productivity, and are randomly assigned by their life circumstances to go to either 0, 1, 2, 3, or 4 years of college. College itself has no long-term cost.
The first year of college you learn a lot, the next couple of years you don’t learn much because you’re trying to find your way, and then in the last year of college you learn a lot of specialized skills that directly increase your productivity.
So this is your productivity after x years of college:
Years of college | Productivity
0 | 10
1 | 17
2 | 22
3 | 25
4 | 31
We assumed that you’d get paid your productivity, so these are also your wages.
The increase in income each year goes from +7, to +5, to +3, then jumps up to +6. So if you compare the 4-year-minus-3-year gap (+6) with the 3-year-minus-2-year gap (+3), you get a sheepskin effect.
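For anyone who wants to see the arithmetic spelled out, here is a minimal sketch of Model 1, using just the numbers from the table above:

```python
# Model 1: productivity (= wage) after each year of college, from the table above.
productivity = {0: 10, 1: 17, 2: 22, 3: 25, 4: 31}

gains = {y: productivity[y] - productivity[y - 1] for y in range(1, 5)}
print("Wage gain per additional year:", gains)   # {1: 7, 2: 5, 3: 3, 4: 6}
# Sheepskin effect: the senior-year gain exceeds the junior-year gain,
# even though college here provides genuine productivity.
print("Sheepskin gap (year 4 gain minus year 3 gain):", gains[4] - gains[3])
```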
Model 2:
In this model, college is useless and provides no actual benefits. People vary in their intrinsic productivity, which is also directly correlated with the difficulty of making it through college.
In particular, there are five types of people:
Type | Productivity | Cost per year of college
0 | 10 | 8
1 | 11 | 6
2 | 14 | 4
3 | 19 | 3
4 | 31 | 0
The wages for different levels of college education are as follows:
Years of college | Wage
0 | 10
1 | 17
2 | 22
3 | 25
4 | 31
Notice that these are exactly the same wages as in Model 1. This is of course entirely intentional. In a moment I’ll show why this is a Nash equilibrium.
Consider the choice of how many years of college to attend. You know your type, so you know the cost of college to you. You want to maximize your net benefit, which is the wage you’ll get minus the total cost of going to college.
Let’s assume that if a given year of college isn’t worth it, you won’t try to continue past it and see if more would be.
For a type-0 person, they could get 10 by not going to college at all, or 17-(1)(8) = 9 by going for 1 year, so they stop.
For a type-1 person, they could get 10 by not going to college at all, or 17-(1)(6) = 11 by going for 1 year, or 22-(2)(6) = 10 by going for 2 years, so they stop.
Filling out all the possibilities yields this table:
Years \ Type | 0 | 1 | 2 | 3 | 4
0 | 10 | 10 | 10 | 10 | 10
1 | 9 | 11 | 13 | 14 | 17
2 | - | 10 | 14 | 16 | 22
3 | - | - | 13 | 19 | 25
4 | - | - | - | 19 | 30
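For readers who want to check the payoff arithmetic, here is a minimal sketch of the net-benefit calculation used above (the wage for y years minus y times your type’s per-year cost); which number of years each type actually chooses then follows from the myopic stopping rule and the tie-breaking assumptions discussed below:

```python
# Model 2: net benefit of y years of college for each type, computed as
# wage(y) - y * cost_per_year(type), as in the worked examples above.
# (The choice of how many years to attend then follows from the myopic
# stopping rule and tie-breaking assumptions described in the text.)
wage = {0: 10, 1: 17, 2: 22, 3: 25, 4: 31}   # wage by years of college
cost = {0: 8, 1: 6, 2: 4, 3: 3, 4: 0}        # cost per year of college, by type

for t in range(5):
    payoffs = [wage[y] - y * cost[t] for y in range(5)]
    print(f"type {t}: net benefit by years of college = {payoffs}")
```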
I’d actually like to point out that it was much harder to find numbers that allowed me to make the sheepskin effect work in the second model, where education was all signaling. In the model where education provides genuine benefit, all I need to do is posit that the last year of college is particularly valuable (perhaps because high-level specialized courses are more beneficial to productivity). I could pretty much vary that parameter however I wanted, and get whatever magnitude of sheepskin effect I chose.
For the signaling model, I had to carefully calibrate the parameters so that the costs and benefits lined up just right to make sure that each type chose exactly the amount of college I wanted them to choose while still getting the desired sheepskin effect. It took me about two hours of very frustrating fiddling just to get numbers that worked. And that’s with the assumption that someone who finds 2 years of college not worth it won’t consider trying for 4 years of college (which, given the numbers above, they actually might want to), as well as the assumption that when type-3 individuals are indifferent between staying and dropping out they drop out.
And yet the sheepskin effect is supposed to be evidence that the world works like the signaling model?
I’m sure a more sophisticated model could make the signaling explanation a little more robust. The biggest limitation of these models is that once you observe someone’s education level, you immediately know their true productivity, whether it came from college or not. Realistically we should be allowing for unobserved variation that can’t be sorted out by years of college.
Maybe it seems implausible that the last year of college is actually more beneficial to your productivity than the previous years. This is probably the intuition behind the idea that sheepskin effects are evidence of signaling rather than genuine learning.
So how about this model?
Model 3:
As in the second model, there are five types of people, types 0, 1, 2, 3, and 4. They all start with the same level of productivity, and they have the same cost of going to college; but they get different benefits from going to college.
The problem is, people don’t start out knowing what type they are. Nor can they observe their productivity directly. All they can do is observe their experience of going to college and then try to figure out what type they must be.
Type 0s don’t benefit from college at all, and they know they are type 0; so they don’t go to college.
Type 1s benefit a tiny amount from college (+1 productivity per year), but don’t realize they are type 1s until after one year of college.
Type 2s benefit a little from college (+2 productivity per year), but don’t realize they are type 2s until after two years of college.
Type 3s benefit a moderate amount from college (+3 productivity per year), but don’t realize they are type 3s until after three years of college.
Type 4s benefit a great deal from college (+5 productivity per year), but don’t realize they are type 4s until after three years of college.
What then will happen? Type 0s will not go to college. Type 1s will go one year and then drop out. Type 2s will go two years and then drop out. Type 3s will go three years and then drop out. And type 4s will actually graduate.
That results in the following before-and-after productivity:
Type | Productivity before college | Years of college | Productivity after college
0 | 10 | 0 | 10
1 | 10 | 1 | 11
2 | 10 | 2 | 14
3 | 10 | 3 | 19
4 | 10 | 4 | 30
If each person is paid a wage equal to their productivity, there will be a huge sheepskin effect; wages only go up +1 for 1 year, +3 for 2 years, +5 for 3 years, but then they jump up to +11 for graduation. It appears that the benefit of that last year of college is more than the other three combined. But in fact it’s not; for any given individual, the benefits of college are the same each year. It’s just that college is more beneficial to the people who decided to stay longer.
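Here is the same arithmetic as a quick sketch, using the numbers from the description above:

```python
# Model 3: everyone starts at productivity 10; a type-t person gains
# gain[t] per year of college and (as described above) attends t years.
gain = {0: 0, 1: 1, 2: 2, 3: 3, 4: 5}
productivity_after = {t: 10 + gain[t] * t for t in gain}
# -> {0: 10, 1: 11, 2: 14, 3: 19, 4: 30}

# Wages equal the productivity of whoever chooses that many years of college,
# so the observed wage gain from each additional year is:
wage = productivity_after
gains_by_year = {y: wage[y] - wage[y - 1] for y in range(1, 5)}
print(gains_by_year)  # {1: 1, 2: 3, 3: 5, 4: 11}: a big "sheepskin" jump at graduation
```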
And I could of course change that assumption too, making the early years more beneficial, or varying the distribution of types, or adding more uncertainty—and so on. But it’s really not hard at all to make a model where college is beneficial and you observe a large sheepskin effect.
Moreover, I agree that it’s worth looking at this: Insofar as college is about sorting or signaling, it’s wasteful from a societal perspective, and we should be trying to find more efficient sorting mechanisms.
But I highly doubt that all the benefits of college are due to sorting or signaling; there definitely are a lot of important things that people learn in college, not just conventional academic knowledge like how to do calculus, but also broader skills like how to manage time, how to work in groups, and how to present ideas to others. Colleges also cultivate friendships and provide opportunities for networking and exposure to a diverse community. Judging by voting patterns, I’m going to go out on a limb and say that college also makes you a better citizen, which would be well worth it by itself.
The truth is, we don’t know exactly why college is beneficial. We certainly know that it is beneficial: Unemployment rates and median earnings are directly sorted by education level. Yes, even PhDs in philosophy and sociology have lower unemployment and higher incomes (on average) than the general population. (And of course PhDs in economics do better still.)
I probably don’t need to tell you this, but getting a job is really hard. Indeed, much harder than it seems like it ought to be.
Having all but completed my PhD, I am now entering the job market. The job market for economists is quite different from the job market most people deal with, and these differences highlight some potential opportunities for improving job matching in our whole economy—which, since employment is such a large part of our lives, could have wide-ranging benefits for our society.
The most obvious difference is that the job market for economists is centralized: Job postings are made through the American Economic Association listing of Job Openings for Economists (often abbreviated AEA JOE); in a typical year about 4,000 jobs are posted there. All of them have approximately the same application deadline, near the end of the year. Then, after applying to various positions, applicants get interviewed in rapid succession, all at the annual AEA conference. Then there is a matching system, where applicants get to send two “signals” indicating their top choices and then offers are made.
This year of course is different, because of COVID-19. The conference has been canceled, with all of its presentations moved online; interviews will also be conducted online. Perhaps more worrying, the number of postings has been greatly reduced, and based on past trends may be less than half of the usual number. (The number of applicants may also be reduced, but it seems unlikely to drop as much as the number of postings does.)
There are a number of flaws in even this system. First, it’s too focused on academia; very few private-sector positions use the AEA JOE system, and almost no government positions do. So those of us who are not so sure we want to stay in academia forever end up needing to deal with both this system and the conventional system in parallel. Second, I don’t understand why they use this signaling system and not a deferred-acceptance matching algorithm. I should be able to indicate more about my preferences than simply what my top two choices are—particularly when most applicants apply to over 100 positions. Third, it isn’t quite standardized enough—some positions do have earlier deadlines or different application materials, so you can’t simply put together one application packet and send it to everyone at once.
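For the uninitiated, here is a minimal sketch of what an applicant-proposing deferred-acceptance (Gale-Shapley) match looks like, with one slot per position. This is a generic illustration with made-up names, not the AEA’s actual procedure:

```python
# A minimal sketch of applicant-proposing deferred acceptance (Gale-Shapley),
# one slot per position. A generic illustration, not the AEA's actual procedure.

def deferred_acceptance(applicant_prefs, position_prefs):
    """Rankings are lists ordered best-first. Returns {position: applicant}."""
    rank = {p: {a: i for i, a in enumerate(prefs)}   # each position's ranking
            for p, prefs in position_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}    # next position to propose to
    match = {}                                       # position -> applicant held
    free = list(applicant_prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                                 # applicant exhausted their list
        p = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        held = match.get(p)
        if held is None:
            match[p] = a                             # position tentatively holds the offer
        elif rank[p][a] < rank[p][held]:
            match[p] = a                             # position trades up
            free.append(held)                        # displaced applicant proposes again
        else:
            free.append(a)                           # rejected; will try the next choice
    return match

# Tiny made-up example:
applicants = {"Ann": ["Uni", "Bank", "Fed"],
              "Bob": ["Bank", "Uni", "Fed"],
              "Cat": ["Uni", "Fed", "Bank"]}
positions = {"Uni": ["Bob", "Ann", "Cat"],
             "Bank": ["Ann", "Cat", "Bob"],
             "Fed": ["Cat", "Bob", "Ann"]}
print(deferred_acceptance(applicants, positions))    # a stable matching
```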
Still, it’s quite obvious that this system is superior to the decentralized job market that most people deal with. Indeed, this becomes particularly obvious when one is participating in both markets at once, as I am. The decentralized market has a wide range of deadlines: upon seeing a posting, you may need to apply within the week, or you may have several months to respond. Nearly all applications require a resume, but different institutions will expect different content on it. Different applications may require different materials: Cover letters, references, writing samples, and transcripts are all things that some firms will want and others won’t.
Also, this is just my impression from a relatively small sample, but I feel like the AEA JOE listings are more realistic, in the following sense: They don’t all demand huge amounts of prior experience, and those that do ask for prior experience are either high-level positions where that’s totally reasonable, or are willing to substitute education for experience. For private-sector job openings you basically have to subtract three years from whatever amount of experience they say they require, because otherwise you’d never have anywhere you could apply to. (Federal government jobs are a weird case here; they all say they require a lot of experience at a specific government pay grade, but from talking with those who have dealt with the system before, they are apparently willing to make lots of substitutions—private-sector jobs, education, and even hobbies can sometimes substitute.)
I think this may be because the decentralized market has to some extent unraveled. The job market is the epitome of a matching market; unraveling in a matching market occurs when there is fierce competition for a small number of good candidates or, conversely, a small number of good openings. Each firm has the incentive to make a binding offer earlier than the others, with a short deadline so that candidates don’t have time to shop around. As firms compete with each other, they start making deadlines earlier and earlier until candidates feel like they are in a complete crapshoot: An offer made on Monday might be gone by Friday, and you have no way of knowing if you should accept it now or wait for a better one to come along. This is a Tragedy of the Commons: Given what other firms are doing, each firm benefits from making an earlier binding offer. But once they all make early offers, that benefit disappears and the result just makes the whole system less efficient.
The centralization of the AEA JOE market prevents this from happening: Everyone has common deadlines and does their interviews at the same time. Each institution may be tempted to try to break out of the constraints of the centralized market, but they know that if they do, they will be punished by receiving fewer applicants.
The fact that the centralized market is more efficient is likely a large part of why economics PhDs have the lowest unemployment rate of any PhD graduates and nearly the lowest unemployment rate of any job sector whatsoever. In some sense we should expect this: If anyone understands how to make employment work, it should be economists. Noah Smith wrote in 2013 (and I suppose I took it to heart): “If you get a PhD, get an economics PhD.” I think PhD graduates are the right comparison group here: If we looked at the population as a whole, employment rates and salaries for economists look amazing, but that isn’t really fair since it’s so much harder to become an economist than it is to get most other jobs. But I don’t think it’s particularly easier to get a PhD in physics or biochemistry than to get one in economics, and yet economists still have a lower unemployment rate than physicists or biochemists. (Though it’s worth noting that any PhD—yes, even in the humanities—will give you a far lower risk of unemployment than the general population.) The fact that we have AEA JOE and they don’t may be a major factor here.
So, here’s my question: Why don’t we do this in more job markets? It would be straightforward enough to do this for all PhD graduates, at least—actually my understanding is that some other disciplines do have centralized markets similar to the one in economics, but I’m not sure how common this is.
The federal government could relatively easily centralize its own job market as well; maybe not for positions that need to be urgently filled, but anything that can wait several months would be worth putting into a centralized system that has deadlines once or twice a year.
But what about the private sector, which after all is where most people work? Could we centralize that system as well?
Most people want a job near where they live, so part of the solution might be to centralize only jobs within a certain region, such as a particular metro area. But if we are limited to open positions of a particular type within a particular city, there might not be enough openings at any given time to be worth centralizing. And what about applicants who don’t care so much about geography? Should they be applying separately to each regional market?
Yet even with all this in mind, I think some degree of centralization would be feasible and worthwhile. If nothing else, I think standardizing deadlines and application materials could make a significant difference—it’s far easier to apply to many places if they all use the same application and accept them at the same time.
Such a change would make our labor markets more efficient, matching people to jobs that fit them better, increasing productivity and likely decreasing turnover. Wages probably wouldn’t change much, but working in a better job for the same wage is still a major improvement in your life. Indeed, job satisfaction is one of the strongest predictors of life satisfaction, which isn’t too surprising given how much of our lives we spend at work.
Some people have argued that lockdown measures were unnecessary, or ineffective. The data definitely leans the other direction, but there’s enough uncertainty in all this that I can at least consider that a serious possibility. That doesn’t mean we were wrong to use them; in the presence of high uncertainty, assuming the worst-case scenario is often the best strategy. Far better to overreact than underreact. And indeed, I’d say that right now we still can’t be confident enough that things are safe to really re-open most of the economy. Re-opening too early could make things far worse.
A common counterargument is that the unemployment caused by lockdowns kills people too. But in fact, unemployment does not kill. The evidence on this is quite clear. Even in the Great Depression, with massive unemployment, terrible monetary policy, and only the most minimal social welfare measures in place, death rates did not increase. In fact, for all causes except suicide, death rates decrease during recessions—probably because pollution, traffic accidents, and work-related injury and illness go down. And the suicide rate increase isn’t enough to increase the overall death rate.
Of course, dying by suicide is not the same thing as dying from cancer—and indeed, they are most likely different people being affected in each case. So in that sense unemployment can kill people; but it typically saves more people than it kills. Almost any policy choice will cause some deaths and prevent others, so really the best we can do is look at the overall aggregate and see whether our QALYs have gone up or down.
This doesn’t mean that we should go out of our way to have recessions in order to save lives; the number of lives saved is small and the loss in quality of life is probably large enough to compensate for it. (That’s why we use quality-adjusted life years after all.) But this recession isn’t arbitrary; it’s the result of trying to stop a global pandemic, so that we don’t have a repeat of what influenza did in 1918.
There is a significant chance, however, that this recession will end up being worse than it needs to be, if our policymakers fail to provide adequate and timely relief to those who become unemployed.
As Donald Marron of the Urban Institute explained quite succinctly in a Twitter thread, there are three types of economic losses we need to consider here: Losses necessary to protect health, losses caused by insufficient demand, and losses caused by lost productive capacity. The first kind of loss is what we are doing on purpose; the other two are losses we should be trying to avoid. Insufficient demand is fairly easy to fix: Hand out cash. But sustaining productive capacity can be trickier.
Given the track record of the Trump administration so far, I am not optimistic. First Trump denied the virus was even a threat. Then he blamed China (which, even if partly true, doesn’t solve anything). Then his response was delayed and inadequate. And now the relief money is taking weeks to get to people—while clearly being less than many people need.
I can’t tell you how long this is going to last. I can’t tell you just how bad it’s going to get. But I am confident of a few things:
It’ll be worse than it had to be, but not as bad as it could have been. Trump will continue making everything worse, but other, better leaders will make things better. Above all, we’ll make it through this, together.