I don’t think there are many people who would say that 2020 was their favorite year. Even if everything else had gone right, the 1.7 million deaths from the COVID pandemic would already make this a very bad year.
And this Christmas season certainly felt quite different, with most of us unable to safely travel and forced to interact with our families only via video calls. New Year’s this year won’t feel like a celebration of a successful year so much as relief that we finally made it through.
Many of us have lost loved ones. Fortunately none of my immediate friends and family have died of COVID, but I can now count half a dozen acquaintances, friends-of-friends or distant relatives who are no longer with us. And I’ve been relatively lucky overall; both I and my partner work in jobs that are easy to do remotely, so our lives haven’t had to change all that much.
Yet 2020 is nearly over, and already there are signs that things really will get better in 2021. There are many good reasons for hope.
The sheepskin effect is the observation that the increase in income from graduating from college after four years, relative to going through college for only three years, is much higher than the increase in income from going through college for three years rather than two.
In both models, we’ll assume that markets are competitive but productivity is not directly observable, so employers sort you based on your education level and then pay a wage equal to the average productivity of people at your education level, compensated for the cost of getting that education.
In this model, people all start with the same productivity, and are randomly assigned by their life circumstances to go to either 0, 1, 2, 3, or 4 years of college. College itself has no long-term cost.
The first year of college you learn a lot, the next couple of years you don’t learn much because you’re trying to find your way, and then in the last year of college you learn a lot of specialized skills that directly increase your productivity.
So this is your productivity after x years of college:
Years of college:  0    1    2    3    4
Productivity:      10   17   22   25   31
We assumed that you’d get paid your productivity, so these are also your wages.
The increase in income each year goes from +7, to +5, to +3, then jumps up to +6. So if you compare the 4-year-minus-3-year gap (+6) with the 3-year-minus-2-year gap (+3), you get a sheepskin effect.
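Under that wage schedule, the year-over-year gaps can be checked directly; a quick sketch in Python:

```python
# Wage (= productivity) after 0..4 years of college, from the first model.
wages = [10, 17, 22, 25, 31]

# Year-over-year increase in income.
gains = [later - earlier for earlier, later in zip(wages, wages[1:])]
print(gains)  # [7, 5, 3, 6]

# Sheepskin effect: the gain from the 4th (final) year exceeds the 3rd-year gain.
print(gains[3] > gains[2])  # True
```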
In this model, college is useless and provides no actual benefits. People vary in their intrinsic productivity, which is also directly correlated with the difficulty of making it through college.
In particular, there are five types of people:
Cost per year of college
The wages for different levels of college education are as follows:
Years of college:  0    1    2    3    4
Wage:              10   17   22   25   31
Notice that these are exactly the same wages as in scenario 1. This is of course entirely intentional. In a moment I’ll show why this is a Nash equilibrium.
Consider the choice of how many years of college to attend. You know your type, so you know the cost of college to you. You want to maximize your net benefit, which is the wage you’ll get minus the total cost of going to college.
Let’s assume that if a given year of college isn’t worth it, you won’t try to continue past it and see if more would be.
A type-0 person could get 10 by not going to college at all, or 17-(1)(8) = 9 by going for 1 year, so they stay at 0 years.
A type-1 person could get 10 by not going to college at all, 17-(1)(6) = 11 by going for 1 year, or 22-(2)(6) = 10 by going for 2 years, so they stop after 1 year.
Filling out all the possibilities yields this table:
Years \ Type
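The year-by-year stopping rule can be sketched directly. The wages and the per-year costs of 8 (type 0) and 6 (type 1) come from the worked examples above; as in the text, a student who is indifferent drops out:

```python
# Wages after 0..4 years of college (same schedule as scenario 1).
WAGES = [10, 17, 22, 25, 31]

def years_attended(cost_per_year):
    """Attend another year only while it strictly raises net benefit
    (wage minus total cost of college); stop at the first year that isn't worth it."""
    years = 0
    while years < 4:
        stop_now = WAGES[years] - cost_per_year * years
        one_more = WAGES[years + 1] - cost_per_year * (years + 1)
        if one_more <= stop_now:   # indifferent people drop out
            break
        years += 1
    return years

print(years_attended(8))  # 0 -- type 0: 17-8 = 9 < 10, so they never start
print(years_attended(6))  # 1 -- type 1: stops once 22-12 = 10 <= 11
```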
I’d actually like to point out that it was much harder to find numbers that allowed me to make the sheepskin effect work in the second model, where education was all signaling. In the model where education provides genuine benefit, all I need to do is posit that the last year of college is particularly valuable (perhaps because high-level specialized courses are more beneficial to productivity). I could pretty much vary that parameter however I wanted, and get whatever magnitude of sheepskin effect I chose.
For the signaling model, I had to carefully calibrate the parameters so that the costs and benefits lined up just right to make sure that each type chose exactly the amount of college I wanted them to choose while still getting the desired sheepskin effect. It took me about two hours of very frustrating fiddling just to get numbers that worked. And that’s with the assumption that someone who finds 2 years of college not worth it won’t consider trying for 4 years of college (which, given the numbers above, they actually might want to), as well as the assumption that when type-3 individuals are indifferent between staying and dropping out they drop out.
And yet the sheepskin effect is supposed to be evidence that the world works like the signaling model?
I’m sure a more sophisticated model could make the signaling explanation a little more robust. The biggest limitation of these models is that once you observe someone’s education level, you immediately know their true productivity, whether it came from college or not. Realistically we should be allowing for unobserved variation that can’t be sorted out by years of college.
Maybe it seems implausible that the last year of college is actually more beneficial to your productivity than the previous years. This is probably the intuition behind the idea that sheepskin effects are evidence of signaling rather than genuine learning.
So how about this model?
As in the second model, there are five types of people, types 0, 1, 2, 3, and 4. They all start with the same level of productivity, and they have the same cost of going to college; but they get different benefits from going to college.
The problem is, people don’t start out knowing what type they are. Nor can they observe their productivity directly. All they can do is observe their experience of going to college and then try to figure out what type they must be.
Type 0s don’t benefit from college at all, and they know they are type 0; so they don’t go to college.
Type 1s benefit a tiny amount from college (+1 productivity per year), but don’t realize they are type 1s until after one year of college.
Type 2s benefit a little from college (+2 productivity per year), but don’t realize they are type 2s until after two years of college.
Type 3s benefit a moderate amount from college (+3 productivity per year), but don’t realize they are type 3s until after three years of college.
Type 4s benefit a great deal from college (+5 productivity per year), but don’t realize they are type 4s until after three years of college.
What then will happen? Type 0s will not go to college. Type 1s will go one year and then drop out. Type 2s will go two years and then drop out. Type 3s will go three years and then drop out. And type 4s will actually graduate.
That results in the following before-and-after productivity (taking the common starting productivity to be 10, as in the earlier models):

Type:                         0    1    2    3    4
Productivity before college:  10   10   10   10   10
Years of college:             0    1    2    3    4
Productivity after college:   10   11   14   19   30
If each person is paid a wage equal to their productivity, there will be a huge sheepskin effect: wages go up by only +1 for the first year, +3 for the second, and +5 for the third, but then jump by +11 for the graduation year. It appears that the benefit of that last year of college is more than the other three combined. But in fact it’s not; for any given individual, the benefits of college are the same each year. It’s just that college is more beneficial to the people who decided to stay longer.
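The whole third model fits in a few lines, assuming a common starting productivity of 10 (the number used in the earlier models):

```python
# Third model: type t gains g(t) productivity per year of college and
# attends exactly t years (the point at which they learn their type).
gain_per_year = {0: 0, 1: 1, 2: 2, 3: 3, 4: 5}
base = 10  # assumed common starting productivity

wage_after = {t: base + g * t for t, g in gain_per_year.items()}
print(wage_after)  # {0: 10, 1: 11, 2: 14, 3: 19, 4: 30}

# Year-over-year wage gaps across education levels: a big jump at graduation.
gaps = [wage_after[t] - wage_after[t - 1] for t in range(1, 5)]
print(gaps)  # [1, 3, 5, 11]
```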
And I could of course change that assumption too, making the early years more beneficial, or varying the distribution of types, or adding more uncertainty—and so on. But it’s really not hard at all to make a model where college is beneficial and you observe a large sheepskin effect.
Moreover, I agree that it’s worth looking at this: Insofar as college is about sorting or signaling, it’s wasteful from a societal perspective, and we should be trying to find more efficient sorting mechanisms.
But I highly doubt that all the benefits of college are due to sorting or signaling; there definitely are a lot of important things that people learn in college, not just conventional academic knowledge like how to do calculus, but also broader skills like how to manage time, how to work in groups, and how to present ideas to others. Colleges also cultivate friendships and provide opportunities for networking and exposure to a diverse community. Judging by voting patterns, I’m going to go out on a limb and say that college also makes you a better citizen, which would be well worth it by itself.
I probably don’t need to tell you this, but getting a job is really hard. Indeed, much harder than it seems like it ought to be.
Having all but completed my PhD, I am now entering the job market. The job market for economists is quite different from the job market most people deal with, and these differences highlight some potential opportunities for improving job matching in our whole economy—which, since employment is such a large part of our lives, could have wide-ranging benefits for our society.
The most obvious difference is that the job market for economists is centralized: Job postings are made through the American Economic Association listing of Job Openings for Economists (often abbreviated AEA JOE); in a typical year about 4,000 jobs are posted there. All of them have approximately the same application deadline, near the end of the year. Then, after applying to various positions, applicants get interviewed in rapid succession, all at the annual AEA conference. Then there is a matching system, where applicants get to send two “signals” indicating their top choices and then offers are made.
This year of course is different, because of COVID-19. The conference has been canceled, with all of its presentations moved online; interviews will also be conducted online. Perhaps more worrying, the number of postings has been greatly reduced, and based on past trends may be less than half of the usual number. (The number of applicants may also be reduced, but it seems unlikely to drop as much as the number of postings does.)
There are a number of flaws in even this system. First, it’s too focused on academia; very few private-sector positions use the AEA JOE system, and almost no government positions do. So those of us who are not so sure we want to stay in academia forever end up needing to deal with both this system and the conventional system in parallel. Second, I don’t understand why they use this signaling system and not a deferred-acceptance matching algorithm. I should be able to indicate more about my preferences than simply what my top two choices are—particularly when most applicants apply to over 100 positions. Third, it isn’t quite standardized enough—some positions do have earlier deadlines or different application materials, so you can’t simply put together one application packet and send it to everyone at once.
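For illustration, here is a minimal deferred-acceptance (Gale-Shapley) sketch, the kind of algorithm a richer matching system could use in place of two signals. The applicant and employer names are hypothetical, and each employer is assumed to have a single opening:

```python
# Applicant-proposing deferred acceptance: applicants propose in order of
# preference; each employer tentatively holds its best proposal so far.
def deferred_acceptance(applicant_prefs, employer_prefs):
    """applicant_prefs / employer_prefs: dicts mapping each side to a
    best-first preference list over the other side. Returns employer -> applicant."""
    rank = {e: {a: i for i, a in enumerate(prefs)}
            for e, prefs in employer_prefs.items()}
    unmatched = list(applicant_prefs)       # applicants still proposing
    next_pick = {a: 0 for a in applicant_prefs}
    match = {}                              # employer -> applicant
    while unmatched:
        a = unmatched.pop()
        if next_pick[a] >= len(applicant_prefs[a]):
            continue                        # a has run out of places to propose
        e = applicant_prefs[a][next_pick[a]]
        next_pick[a] += 1
        held = match.get(e)
        if held is None:
            match[e] = a                    # e tentatively holds a's proposal
        elif rank[e][a] < rank[e][held]:
            match[e] = a                    # e trades up; held proposes elsewhere
            unmatched.append(held)
        else:
            unmatched.append(a)             # rejected; a tries the next employer
    return match

applicants = {"Ann": ["MIT", "Fed"], "Bob": ["MIT", "Fed"]}
employers = {"MIT": ["Bob", "Ann"], "Fed": ["Ann", "Bob"]}
print(deferred_acceptance(applicants, employers))  # {'MIT': 'Bob', 'Fed': 'Ann'}
```

The appeal of deferred acceptance is that the outcome is stable: no applicant-employer pair would both prefer each other to their assigned match, which removes the guesswork of the two-signal system.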
Still, it’s quite obvious that this system is superior to the decentralized job market that most people deal with. Indeed, this becomes particularly obvious when one is participating in both markets at once, as I am. The decentralized market has a wide range of deadlines, where upon seeing an application you may need to submit to it within that week, or you may have several months to respond. Nearly all applications require a resume, but different institutions will expect different content on it. Different applications may require different materials: Cover letters, references, writing samples, and transcripts are all things that some firms will want and others won’t.
Also, this is just my impression from a relatively small sample, but I feel like the AEA JOE listings are more realistic, in the following sense: They don’t all demand huge amounts of prior experience, and those that do ask for prior experience are either high-level positions where that’s totally reasonable, or are willing to substitute education for experience. For private-sector job openings you basically have to subtract three years from whatever amount of experience they say they require, because otherwise you’d never have anywhere you could apply to. (Federal government jobs are a weird case here; they all say they require a lot of experience at a specific government pay grade, but from talking with those who have dealt with the system before, they are apparently willing to make lots of substitutions—private-sector jobs, education, and even hobbies can sometimes substitute.)
I think this may be because the decentralized market has to some extent unraveled. The job market is the epitome of a matching market; unraveling in a matching market occurs when there is fierce competition for a small number of good candidates or, conversely, a small number of good openings. Each firm has the incentive to make a binding offer earlier than the others, with a short deadline so that candidates don’t have time to shop around. As firms compete with each other, they start making deadlines earlier and earlier until candidates feel like they are in a complete crapshoot: An offer made on Monday might be gone by Friday, and you have no way of knowing if you should accept it now or wait for a better one to come along. This is a Tragedy of the Commons: Given what other firms are doing, each firm benefits from making an earlier binding offer. But once they all make early offers, that benefit disappears and the result just makes the whole system less efficient.
The centralization of the AEA JOE market prevents this from happening: Everyone has common deadlines and does their interviews at the same time. Each institution may be tempted to try to break out of the constraints of the centralized market, but they know that if they do, they will be punished by receiving fewer applicants.
The fact that the centralized market is more efficient is likely a large part of why economics PhDs have the lowest unemployment rate of any PhD graduates and nearly the lowest unemployment rate of any job sector whatsoever. In some sense we should expect this: If anyone understands how to make employment work, it should be economists. Noah Smith wrote in 2013 (and I suppose I took it to heart): “If you get a PhD, get an economics PhD.” I think PhD graduates are the right comparison group here: If we looked at the population as a whole, employment rates and salaries for economists look amazing, but that isn’t really fair since it’s so much harder to become an economist than it is to get most other jobs. But I don’t think it’s particularly easier to get a PhD in physics or biochemistry than to get one in economics, and yet economists still have a lower unemployment rate than physicists or biochemists. (Though it’s worth noting that any PhD—yes, even in the humanities—will give you a far lower risk of unemployment than the general population.) The fact that we have AEA JOE and they don’t may be a major factor here.
So, here’s my question: Why don’t we do this in more job markets? It would be straightforward enough to do this for all PhD graduates, at least—actually my understanding is that some other disciplines do have centralized markets similar to the one in economics, but I’m not sure how common this is.
The federal government could relatively easily centralize its own job market as well; maybe not for positions that need to be urgently filled, but anything that can wait several months would be worth putting into a centralized system that has deadlines once or twice a year.
But what about the private sector, which after all is where most people work? Could we centralize that system as well?
Most people want a job near where they live, so part of the solution might be to centralize only jobs within a certain region, such as a particular metro area. But if we are limited to open positions of a particular type within a particular city, there might not be enough openings at any given time to be worth centralizing. And what about applicants who don’t care so much about geography? Should they be applying separately to each regional market?
Yet even with all this in mind, I think some degree of centralization would be feasible and worthwhile. If nothing else, I think standardizing deadlines and application materials could make a significant difference—it’s far easier to apply to many places if they all use the same application and accept them at the same time.
Such a change would make our labor markets more efficient, matching people to jobs that fit them better, increasing productivity and likely decreasing turnover. Wages probably wouldn’t change much, but working in a better job for the same wage is still a major improvement in your life. Indeed, job satisfaction is one of the strongest predictors of life satisfaction, which isn’t too surprising given how much of our lives we spend at work.
But in fact, unemployment does not kill. The evidence on this is quite clear. Even in the Great Depression, with massive unemployment, terrible monetary policy, and only the most minimal social welfare measures in place, death rates did not increase. In fact, for all causes except suicide, death rates decrease during recessions—probably because pollution, traffic accidents, and work-related injury and illness go down. And the suicide rate increase isn’t enough to increase the overall death rate.
Of course, dying by suicide is not the same thing as dying from cancer—and indeed, they are most likely different people being affected in each case. So in that sense unemployment can kill people; but it typically saves more people than it kills. Almost any policy choice will cause some deaths and prevent others, so really the best we can do is look at the overall aggregate and see whether our QALYs have gone up or down.
This doesn’t mean that we should go out of our way to have recessions in order to save lives; the number of lives saved is small, and the loss in quality of life is probably large enough to outweigh it. (That’s why we use quality-adjusted life years, after all.) But this recession isn’t arbitrary; it’s the result of trying to stop a global pandemic, so that we don’t have a repeat of what influenza did in 1918.
There is a significant chance, however, that this recession will end up being worse than it needs to be, if our policymakers fail to provide adequate and timely relief to those who become unemployed.
As Donald Marron of the Urban Institute explained quite succinctly in a Twitter thread, there are three types of economic losses we need to consider here: Losses necessary to protect health, losses caused by insufficient demand, and losses caused by lost productive capacity. The first kind of loss is what we are doing on purpose; the other two are losses we should be trying to avoid. Insufficient demand is fairly easy to fix: Hand out cash. But sustaining productive capacity can be trickier.
Given the track record of the Trump administration so far, I am not optimistic. First Trump denied the virus was even a threat. Then he blamed China (which, even if partly true, doesn’t solve anything). Then his response was delayed and inadequate. And now the relief money is taking weeks to get to people—while clearly being less than many people need.
I can’t tell you how long this is going to last. I can’t tell you just how bad it’s going to get. But I am confident of a few things:
It’ll be worse than it had to be, but not as bad as it could have been. Trump will continue making everything worse, but other, better leaders will make things better. Above all, we’ll make it through this, together.
For most of human history, technological advances have destroyed some jobs and created others, causing change, instability, conflict—but ultimately, not unemployment. Many economists believe that this trend will continue well into the 21st century.
Yet I am not so sure, ever since I read this chilling paragraph by Gregory Clark, which I first encountered in The Atlantic:
There was a type of employee at the beginning of the Industrial Revolution whose job and livelihood largely vanished in the early twentieth century. This was the horse. The population of working horses actually peaked in England long after the Industrial Revolution, in 1901, when 3.25 million were at work. Though they had been replaced by rail for long-distance haulage and by steam engines for driving machinery, they still plowed fields, hauled wagons and carriages short distances, pulled boats on the canals, toiled in the pits, and carried armies into battle. But the arrival of the internal combustion engine in the late nineteenth century rapidly displaced these workers, so that by 1924 there were fewer than two million. There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed.
Indeed, in some sense I think the replacement of most human labor with robots is inevitable. It’s not a question of “if”, but only a question of “when”. In a thousand years—if we survive at all, and if we remain recognizable as human—we’re not going to have employment in the same sense we do today. In the best-case scenario, we’ll live in the Culture, all playing games, making art, singing songs, and writing stories while the robots do all the hard labor.
But a thousand years is a very long time; we’ll be dead, and so will our children and our grandchildren. Most of us are thus understandably a lot more concerned about what happens in say 20 or 50 years.
Creative jobs are also quite safe; it’s going to be at least a century, maybe more, before robots can seriously compete with artists, authors, or musicians. (Robot Beethoven is a publicity stunt, not a serious business plan.) Indeed, by the time robots reach that level, I think we’ll have to start treating them as people—so in that sense, people will still be doing those jobs.
And yet, long-haul trucking is probably not going to exist in 20 years. Short-haul and delivery trucking will probably last a bit longer, since it’s helpful to have a human being to drive around complicated city streets and carry deliveries. Automated trucks are already here, and they are just… better. While human drivers need rest, sleep, food, and bathroom breaks, rarely exceeding 11 hours of actual driving per day (which still sounds exhausting!), an automated long-haul truck can stay on the road for over 22 hours per day, even including fuel and maintenance. The capital cost of an automated truck is currently much higher than an ordinary truck, but when that changes, trucking companies aren’t going to keep around a human driver when their robots can deliver twice as fast and don’t expect to be paid wages. Automated vehicles are also safer than human drivers, which will save several thousand lives per year. For this to happen, we don’t even need truly full automation; we just need to get past our current level 3 automation and reach level 4. Prototypes of this level of automation are already under development; in about 10 years they’ll start hitting the road. The shift won’t be instantaneous; once a company has already invested in a truck and a driver, they’ll keep them around for several years. But in 20 years from now, I don’t expect to see a lot of human-driven trucks left.
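A back-of-the-envelope comparison makes the "twice as fast" claim concrete. The daily driving hours are the figures from the paragraph above; the average speed is my own round-number assumption, purely for illustration:

```python
# Daily driving hours: from the text. Average speed: assumed for illustration.
human_hours, automated_hours = 11, 22
avg_speed_mph = 55  # assumption, not a figure from the text

human_miles_per_day = human_hours * avg_speed_mph          # 605
automated_miles_per_day = automated_hours * avg_speed_mph  # 1210
print(automated_miles_per_day / human_miles_per_day)  # 2.0 -- roughly twice the delivery speed
```

Note that the ratio doesn't depend on the assumed speed at all; doubling the hours on the road doubles the miles per day regardless.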
Some jobs that will be automated away deserve to be automated away. I can’t shed very many tears for the loss of fast-food workers and grocery cashiers (which we can already see happening around us—been to a Taco Bell lately?); those are terrible jobs that no human being should have to do. And my only concern about automated telemarketing is that it makes telemarketing cheaper and therefore more common; I certainly am not worried about the fact that people won’t be working as telemarketers anymore.
But a lot of good jobs, even white-collar jobs, are at risk of automation. Algorithms are already performing at about the same level as human radiologists, contract reviewers, and insurance underwriters, and once they get substantially better, companies are going to have trouble justifying why they would hire a human who costs more and performs worse. Indeed, the very first job to be automated by information technology was a white-collar job: computer used to be a profession, not a machine.
Technological advancement is inherently difficult to predict: If we knew how future technology would work, we’d build it now. So any such prediction should contain large error bars: “20 years away” could mean we make a breakthrough next year, or it could stay “20 years away” for the next 50 years.
If we had a robust social safety net—a basic income, perhaps?—this would be fine. But our culture decided somewhere along the way that people only deserve to live well if they are currently performing paid services for a corporation, and as robots get better, corporations will find they don’t need so many people performing services. We could face up to this fact and use it as an opportunity for deeper reforms; but I fear that instead we’ll wait to act until the crisis is already upon us.
Many of the common critiques of economics are actually somewhat misguided, or at least outdated: While there are still some neoclassical economists who think that markets are perfect and humans are completely rational, most economists these days would admit that there are at least some exceptions to this. But there’s at least one common critique that I think still has a good deal of merit: “Good for the economy” isn’t the same thing as good.
I’ve read literally dozens, if not hundreds, of articles on economics, in both popular press and peer-reviewed journals, that all defend their conclusions in the following way: “Intervention X would statistically be expected to increase GDP/raise total surplus/reduce unemployment. Therefore, policymakers should implement intervention X.” The fact that a policy would be “good for the economy” (in a very narrow sense) is taken as a completely compelling reason that this policy must be overall good.
The clearest examples of this always turn up during a recession, when inevitably people will start saying that cutting unemployment benefits will reduce unemployment. Sometimes it’s just right-wing pundits, but often it’s actually quite serious economists.
The usual left-wing response is to deny the claim, explain all the structural causes of unemployment in a recession and point out that unemployment benefits are not what caused the surge in unemployment. This is true; it is also utterly irrelevant. It can be simultaneously true that the unemployment was caused by bad monetary policy or a financial shock, and also true that cutting unemployment benefits would in fact reduce unemployment.
Indeed, I’m fairly certain that both of those propositions are true, to greater or lesser extent. Most people who are unemployed will remain unemployed regardless of how high or low unemployment benefits are; and likewise most people who are employed will remain so. But at the margin, I’m sure there’s someone who is on the fence about searching for a job, or who is trying to find a job but could try a little harder with some extra pressure, or who has a few lousy job offers they’re not taking because they hope to find a better offer later. That is, I have little doubt that the claim “Cutting unemployment benefits would reduce unemployment” is true.
The problem is that this is in no way a sufficient argument for cutting unemployment benefits. For while it might reduce unemployment per se, more importantly it would actually increase the harm of unemployment. Indeed, those two effects are in direct proportion: Cutting unemployment benefits only reduces unemployment insofar as it makes being unemployed a more painful and miserable experience for the unemployed.
Indeed, the very same (oversimplified) economic models that predict that cutting benefits would reduce unemployment use that precise mechanism, and thereby predict, necessarily, that cutting unemployment benefits will harm those who are unemployed. It has to. In some sense, it’s supposed to; otherwise it wouldn’t have any effect at all.
That is, if your goal is actually to help the people harmed by a recession, cutting unemployment benefits is absolutely not going to accomplish that. But if your goal is actually to reduce unemployment at any cost, I suppose it would in fact do that. (Also highly effective against unemployment: Mass military conscription. If everyone’s drafted, no one is unemployed!)
Similarly, I’ve read more than a few policy briefs written to the governments of poor countries telling them how some radical intervention into their society would (probably) increase their GDP, and then either subtly implying or outright stating that this means they are obliged to enact this intervention immediately.
Don’t get me wrong: Poor countries need to increase their GDP. Indeed, it’s probably the single most important thing they need to do. Providing better security, education, healthcare, and sanitation are all things that will increase GDP—but they’re also things that will be easier if you have more GDP.
(Rich countries, on the other hand? Maybe we don’t actually need to increase GDP. We may actually be better off focusing on things like reducing inequality and improving environmental sustainability, while keeping our level of GDP roughly the same—or maybe even reducing it somewhat. Stay inside the wedge.)
But the mere fact that a policy will increase GDP is not a sufficient reason to implement that policy. You also need to consider all sorts of other effects the policy will have: Poverty, inequality, social unrest, labor standards, pollution, and so on.
To be fair, sometimes these articles only say that the policy will increase GDP, and don’t actually assert that this is a sufficient reason to implement it, theoretically leaving open the possibility that other considerations will be overriding.
But that’s really not all that comforting. If the only thing you say about a policy is a major upside, like it or not, you are implicitly endorsing that policy. Framing is vital. Everything you say could be completely, objectively, factually true; but if you only tell one side of the story, you are presenting a biased view. There’s a reason the oath is “The truth, the whole truth, and nothing but the truth.” A partial view of the facts can be as bad as an outright lie.
Of course, it’s unreasonable to expect you to present every possible consideration that could become relevant. Rather, I expect you to do two things: First, if you include some positive aspects, also include some negative ones, and vice-versa; never let your argument sound completely one-sided. Second, clearly and explicitly acknowledge that there are other considerations you haven’t mentioned.
Moreover, if you are talking about something like increasing GDP or decreasing unemployment—something that has been, many times, by many sources, treated as though it were a completely compelling reason unto itself—you must be especially careful. In such a context, an article that would be otherwise quite balanced can still come off as an unqualified endorsement.
“Guaranteeing a job with a family-sustaining wage, adequate family and medical leave, paid vacations, and retirement security to all people of the United States.”
“Providing all people of the United States with – […] (ii) affordable, safe, and adequate housing; (iii) economic security; […].”
Let me start by giving you a sense of how difficult this is: No country on Earth has ever successfully guaranteed employment and housing. Even Scandinavia’s extensive social safety nets and active labor market programs are not sufficient to eliminate homelessness or unemployment (though they do dramatically reduce them).
The Soviet Union came close to guaranteed employment, but only as part of a labor system that was extremely inefficient and unproductive. Effectively, they guaranteed everyone a job by not even firing people who didn’t actually do the jobs they were given. This is clearly not a sustainable solution.
There are serious proposals on the table for a job guarantee program, but they are extremely ambitious.
The basic idea of such a program is that we can (hopefully) find various forms of public service that need to be done, and pay people to do that public service at a certain minimum level of pay and benefits. These jobs would be available to anyone who wanted them, and any time you lost a private-sector job you could always take the guaranteed job. This would effectively create a floor on wages and benefits; any job that offered a worse deal than the government job would be competed out of existence.
Maybe there is a way to solve these problems. Maybe I’m underestimating the public goods that could be produced by people with low levels of skill. But at the very least we need to face up to the fact that it is a problem. We need to actually find work that it makes sense to guarantee—we can’t just wave our hands and say that “obviously” there is plenty of valuable work to be done that will happen to line up exactly with the skills of the people who are currently unemployed.
And then we need to think about the fact that we can’t really guarantee it, not the way the Soviet Union did. We do need to be able to fire people. We need to be able to fire them for not showing up to work, for being drunk at work, for sexually harassing co-workers, or simply for being incompetent. We need to have some sort of policy in place for what happens to people who get fired: How long before they can get another guaranteed job? And being fired should hurt: It’s supposed to be an incentive to do your job correctly. We don’t need to punish laziness or incompetence with homelessness—but we do need to punish it with something.
Ultimately what I would like to see is not guaranteed jobs but guaranteed income: A basic income that everyone gets, no questions asked. And then I would hope that our norms about work would change, and people would stop defining themselves by their paid employment and start defining themselves by other things, like creating art, supporting their family, or contributing to their community.
What about guaranteed housing? On that front I am more optimistic.
Housing is quite expensive, particularly in major cities. But homelessness is also very expensive from a societal perspective. In the long run, free housing might actually pay for itself.
One of the most successful programs at reducing homelessness is called Housing First. Rather than going through the usual machinations of shelters and transitional housing, the program just takes people off the streets and gives them homes. Like a basic income, it sounds ludicrously simple; it’s the sort of thing a five-year-old would suggest. Surely it can’t be that easy?
There is an additional population of about 500,000 transient homeless—people who are homeless for a short period after an adverse life event (such as losing a job, going through a divorce, or having their mortgage foreclosed) but will find housing within a few weeks or months. Their situation is not as dire, and the costs they impose on society are not as large. But standard estimates are still generally over $10,000 per person per year—which, if given to them in cash, would probably be enough to get most of these people into homes.
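The scale here is easy to check with back-of-the-envelope arithmetic, using the two figures just quoted (both of which are rough estimates):

```python
# Rough scale of the social cost of transient homelessness.
# Both figures are the rough estimates quoted in the text.
transient_population = 500_000  # transiently homeless people in the US
cost_per_person = 10_000        # social cost per person per year (USD)

total_cost = transient_population * cost_per_person
print(f"Total annual social cost: ${total_cost:,}")  # $5,000,000,000
```

Five billion dollars a year is a lot of money—but it is small next to what we already spend on the chronically homeless.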
So this is not a question of affordability: We are already paying these costs, but doing so in a way that doesn’t actually solve homelessness.
The real challenge is subtler than that: How do we make this fair and politically feasible?
When we’re talking about chronic homelessness, I think we can make a pretty strong case: These people are in a really bad way and they need our help. Since we’re already spending all this money anyway, we may as well spend it in a way that would actually help them.
But transient homelessness gets a bit more complicated. Many people who are transiently homeless are not all that poor. They may be college students, or recent divorcees, or failed entrepreneurs, or people who could afford a home but not the expensive home they actually tried to buy. Once they get back on their feet, they will probably go on to maintain a middle-class standard of living. So it really does seem unfair to just hand these people free homes that other people would not get.
And making housing in general completely free is simply a pipe dream. No country has ever even gotten close to that. Housing is such a huge part of a country’s expenditures that even a country like Denmark where the government is half the economy still can’t afford to put everyone in public housing.
I think what I would do instead is provide guaranteed subsidized loans—much as we do for student loans. These loans could be used to pay rent, to pay a mortgage, or even to make a down payment. They would be available to any adult US citizen, regardless of credit history, in relatively large amounts (the average down payment in the US is about $14,000, but as high as $50,000 is not unusual), at very low interest rates (I’d say aim for 0% real interest, so target the nominal interest rate to inflation) and very generous repayment terms (like student loans, you would never be required to pay more than a certain percentage of your adjusted gross income on the loan). If someone did try to avoid paying, their wages could be garnished or their taxes could be increased—this would make the default rates very low.
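To make those repayment terms concrete, here is a minimal sketch in Python. The 10%-of-AGI cap is a hypothetical number I’m inserting for illustration (roughly analogous to income-based student-loan repayment); the proposal above doesn’t specify the percentage:

```python
def annual_loan_payment(balance: float, agi: float,
                        income_cap_rate: float = 0.10) -> float:
    """Payment due this year: never more than a fixed share of adjusted
    gross income, and never more than the remaining balance.

    income_cap_rate (10%) is a hypothetical illustrative figure. With 0%
    real interest, the balance does not grow in real terms while you pay.
    """
    return min(balance, income_cap_rate * agi)

# Someone who borrowed the average down payment (~$14,000) on a $40,000 AGI:
print(annual_loan_payment(14_000.0, 40_000.0))  # 4000.0 — capped at 10% of income
```

At that pace the average down payment is repaid in under four years, while someone with little income pays correspondingly little—that is the whole point of the income cap.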
This policy would allow people who are temporarily homeless to get back into a home immediately, rather than having to wait until they can get more income—which can become a catch-22, since most employers require a permanent address. But it wouldn’t be a free home; this policy would cost taxpayers next to nothing. The only costs would come from subsidizing interest rates and bearing defaults, which wouldn’t be more than about 5% of the outstanding balance—even if we loaned out as much as $100 billion, that still wouldn’t be more than what we’re currently losing in social costs of homelessness.
Had this policy been in place during the 2008 crash, people who lost their homes to foreclosure would have been able to immediately re-borrow and buy new homes. This would have blunted the financial crisis and maybe even done as much as the far more expensive stimulus package and quantitative easing programs.
These policies would not, unfortunately, eliminate unemployment and homelessness. Maybe that’s not even possible. But they would at least greatly reduce the harm caused by unemployment and homelessness, and that alone makes them worth doing.
I think it probably is as bad as it looks, but the truth is: I don’t care. This is a distraction.
If you think litigating the precise events of this video is important, you are suffering from a severe case of scope neglect. You are looking at a single event between a handful of people when you should be looking at the overall trends of a country of over 300 million people.
And if you want to talk about the racist, sexist, and authoritarian leanings of Trump supporters, that’s quite important too. But it doesn’t hinge upon one person or one confrontation. I’m sure there are Trump supporters who aren’t racist; and I’m sure there are Obama supporters who are. But the overall statistical trend there is extremely strong.
I understand that most people suffer from severe scope neglect, and we have to live in a world filled with such people; so maybe there’s some symbolic value in finding one particularly egregious case that you can put a face on and share with the world. But if you’re going to do that, there are two things I’d ask of you:
1. Make absolutely sure that this case is genuine. Nothing will destroy your persuasiveness faster than holding up an ambiguous case as if it were definitive.
2. After you’ve gotten their attention with the single example, show the statistics. There are truths, whole truths, and statistics. If you really want to know something, you use statistics.
The statistics are what this is really about. One person, even a hundred people—that really doesn’t matter. We need to keep our eyes on the millions of people, the directions of entire nations. For a lot of people, looking at numbers is boring; but there are people behind those numbers, and numbers are what tell us what’s really going on in the world.
Some indicators are more ambiguous: Corporate profits are near their all-time high, even in inflation-adjusted terms. That could be a sign of an overall good economy—but it also clearly has something to do with redistribution of income toward the wealthy.
Of course, all of those things were true yesterday, and will be true tomorrow. They were true last week, and will be true next week. They don’t lend themselves to a rapid-fire news cycle.
But maybe that means we don’t need a rapid-fire news cycle? Maybe that’s not the best way to understand what’s going on in the world?
Though central banks are a cornerstone of the modern financial system, I don’t think most people have a clear understanding of how they actually function. (I think this may be by design; there are many ways we could make central banking more transparent, but policymakers seem reluctant to show their hand.)
Central banks “print money” and “set interest rates”. But how exactly do they do these things, and what on Earth do they have to do with each other?
The first thing to understand is that most central banks don’t actually print money. In the US, cash is actually printed by the Department of the Treasury. But cash is only a small part of the money in circulation. The monetary base consists of cash in vaults and in circulation; the US monetary base is about $3.6 trillion. The money supply can be measured a few different ways, but the standard way is to include checking accounts, traveler’s checks, savings accounts, money market accounts, short-term certificates of deposit, and basically anything that can be easily withdrawn and spent as money. This is called the M2 money supply, and in the US it is currently over $14.1 trillion. That means that only 25% of our money supply is in actual, physical cash—the rest is all digital. This is actually a relatively high proportion of physical cash, as the monetary base was greatly increased in response to the Great Recession. When we say that the Fed “prints money”, what we really mean is that they are increasing the money supply—but typically they do so in a way that involves little if any actual printing of cash.
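That cash share is just the ratio of the two figures quoted above:

```python
monetary_base = 3.6e12  # US monetary base: ~$3.6 trillion (figure from the text)
m2 = 14.1e12            # M2 money supply: ~$14.1 trillion (figure from the text)

cash_share = monetary_base / m2
print(f"{cash_share:.1%}")  # 25.5%
```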
The second thing to understand is that central banks don’t exactly set interest rates either. They target interest rates. What’s the difference, you ask?
Well, setting interest rates would mean that they made a law or something saying you have to charge exactly 2.7%, and you get fined or something if you don’t do that.
Targeting interest rates is a subtler art. The Federal Reserve decides what interest rates they want banks to charge, and then they engage in what are called open-market operations to try to make that happen. Banks hold reserves—money that they are required to hold against their deposits. Since we are in a fractional-reserve system, they are required to hold only a certain fraction of their deposits as reserves (historically about 10%). In open-market operations, the Fed buys and sells assets (usually US Treasury bonds) in order to either increase or decrease the amount of reserves available to banks, to try to get them to lend to each other at the targeted interest rates.
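The fractional-reserve arithmetic puts a simple upper bound on how much deposit money a given amount of reserves can support—the textbook “money multiplier”. This is an idealized ceiling, not a description of how lending actually plays out, but it shows why injecting reserves is such a powerful lever:

```python
def max_deposits(new_reserves: float, reserve_ratio: float = 0.10) -> float:
    """Textbook money-multiplier upper bound: each dollar of reserves can
    support up to 1/reserve_ratio dollars of deposits, because the sum of
    the re-lending series r + r(1-rr) + r(1-rr)^2 + ... equals r / rr."""
    return new_reserves / reserve_ratio

# With a 10% reserve requirement, $1 million of new reserves can support
# up to $10 million of deposits:
print(max_deposits(1_000_000.0))  # prints 10000000.0
```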
Why not simply set the interest rate by law? Because then it wouldn’t be the market-clearing interest rate. There would be shortages or gluts of assets.
It might be easier to grasp this if we step away from money for a moment and just think about the market for some other good, like televisions.
Suppose that the government wants to set the price of a television in the market to a particular value, say $500. (Why? Who knows. Let’s just run with it for a minute.)
If they simply declared by law that the price of a television must be $500, here’s what would happen: Either that would be too low, in which case there would be a shortage of televisions as demand exceeded supply; or that would be too high, in which case there would be a glut of televisions as supply exceeded demand. Only if they got spectacularly lucky and the market price already was $500 per television would they not have to worry about such things (and then, why bother?).
But suppose the government had the power to create and destroy televisions virtually at will with minimal cost.
Now, they have a better way; they can target the price of a television, and buy and sell televisions as needed to bring the market price to that target. If the price is too low, the government can buy and destroy a lot of televisions, to bring the price up. If the price is too high, the government can make and sell a lot of televisions, to bring the price down.
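Here is a toy simulation of that buy-and-sell targeting. The linear demand curve and every number in it are invented purely for illustration; the point is only that repeatedly adjusting the quantity in the direction of the price gap converges on the target:

```python
def market_price(quantity: float) -> float:
    """Hypothetical linear inverse demand: the more televisions on the
    market, the lower the price. All coefficients are made up."""
    return 1000.0 - 0.5 * quantity

target_price = 500.0
quantity = 1200.0  # current stock -> market price of $400, below target

# Each round: if the price is above target, make and sell more units;
# if below, buy up and destroy units. Step size is arbitrary but stable.
for _ in range(100):
    gap = market_price(quantity) - target_price
    quantity += 0.5 * gap

print(round(market_price(quantity)))  # 500
```

Each iteration shrinks the gap by a constant factor, so the price converges smoothly to the target—no shortage, no glut, because the government itself absorbs the difference between supply and demand.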
Now, let’s go back to money. This power to create and destroy at will is hard to believe for televisions, but absolutely true for money. The government can create and destroy almost any amount of money at will—they are limited only by the very inflation and deflation the central bank is trying to affect.
This allows central banks to intervene in the market without creating shortages or gluts; even though they are effectively controlling the interest rate, they are doing so in a way that avoids having a lot of banks wanting to take loans they can’t get or wanting to give loans they can’t find anyone to take.
The goal of all this manipulation is ultimately to reduce inflation and unemployment. Unfortunately it’s basically impossible to eliminate both simultaneously; the Phillips curve describes the relationship generally found between them: decreased inflation usually comes with increased unemployment and vice-versa. But the basic idea is that we set reasonable targets for each (usually about 2% inflation and 5% unemployment; frankly I’d prefer we swap the two, which was more or less what we did in the 1950s), and then if inflation is too high we raise interest rate targets, while if unemployment is too high we lower interest rate targets.
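That decision rule can be written as an almost embarrassingly simple sketch. Real central banks use more sophisticated reaction functions (a Taylor rule, for instance); the 2% and 5% targets are the ones mentioned above, while the 25-basis-point step size is a hypothetical number of my own:

```python
def adjust_rate_target(rate: float, inflation: float, unemployment: float,
                       inflation_target: float = 0.02,
                       unemployment_target: float = 0.05,
                       step: float = 0.0025) -> float:
    """Nudge the interest-rate target: raise it when inflation is above
    target, lower it when unemployment is above target. The 25-basis-point
    step is a hypothetical illustrative choice."""
    if inflation > inflation_target:
        rate += step
    if unemployment > unemployment_target:
        rate -= step
    return rate

# Inflation at 3%, unemployment at 4%: raise the target by 25 basis points.
print(round(adjust_rate_target(0.02, 0.03, 0.04), 4))  # 0.0225
```

Notice what happens if inflation and unemployment are both too high: the two adjustments cancel and the rule does nothing—which is exactly the bind described next.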
What if they’re both too high? Then we’re in trouble. This has happened; it is called stagflation. The money supply isn’t the only thing affecting inflation and unemployment, and sometimes we get hit with a bad shock that makes both of them high at once. In that situation, there isn’t much that monetary policy can do; we need to find other solutions.
But how does targeting interest rates lead to inflation? To be quite honest, we don’t actually know.
The basic idea is that lower interest rates should lead to more borrowing, which leads to more spending, which leads to more inflation. But beyond that, we don’t actually understand how interest rates translate into prices—this is the so-called transmission mechanism, which remains an unsolved problem in macroeconomics. Based on the empirical data, I lean toward the view that the mechanism is primarily via housing prices; lower interest rates lead to more mortgages, which raises the price of real estate, which raises the price of everything else. This also makes sense theoretically, as real estate consists of large, illiquid assets for which the long-term interest rate is very important. Your decision to buy an apple or even a television is probably not greatly affected by interest rates—but your decision to buy a house surely is.
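The intuition in that last sentence—that interest rates matter enormously for houses but hardly at all for apples—can be checked with the standard fixed-rate amortization formula (the $300,000 loan and the two rates are illustrative numbers, not figures from the text):

```python
def monthly_mortgage_payment(principal: float, annual_rate: float,
                             years: int = 30) -> float:
    """Standard fixed-rate mortgage amortization formula."""
    r = annual_rate / 12  # monthly interest rate
    n = years * 12        # total number of monthly payments
    if r == 0:
        return principal / n
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# A $300,000 30-year mortgage at 3% vs. 5% annual interest:
low = monthly_mortgage_payment(300_000, 0.03)
high = monthly_mortgage_payment(300_000, 0.05)
print(round(low), round(high))  # about 1265 vs 1610
```

A two-point rate change moves the monthly payment by hundreds of dollars—over $120,000 across the life of the loan—whereas it changes the effective price of an apple by a rounding error.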
If that is indeed the case, it’s worth thinking about whether this is really the right way to intervene on inflation and unemployment. High housing prices are an international crisis; maybe we need to be looking at ways to decrease unemployment without affecting housing prices. But that is a tale for another time.