Is a job guarantee better than a basic income?

Aug 5 JDN 2458336

In previous posts I’ve written about both the possibilities and challenges involved in creating a universal basic income. Today I’d like to address what I consider the most serious counter-argument against a basic income, an alternative proposal known as a job guarantee.

Whereas a basic income is literally just giving everyone free money, a job guarantee entails offering everyone who wants to work a job paid by the government. They’re not necessarily contradictory, but I’ve noticed a clear pattern: While basic income proponents are generally open to the idea of a job guarantee on the side, job guarantee proponents are often vociferously opposed to a basic income—even calling it “sinister”. I think the reason for this is that we see jobs as irrelevant, so we’re okay with throwing them in if you feel you must, while they see jobs as essential, so they meet any attempt to remove them with overwhelming resistance.

Where a basic income is extremely simple and could be implemented by a single act of the legislature, a job guarantee is considerably more complicated. The usual proposal for a job guarantee involves federal funding but local implementation, which is how most of our social welfare system is implemented—and why social welfare programs are so much better in liberal states like California than in conservative states like Mississippi: California actually believes in what it’s implementing and Mississippi doesn’t. Anyone who wants a job guarantee needs to take that aspect seriously: In the places where poverty is worst, you’re offering control over the policy to the very governments that made poverty that bad—and whether it is by malice or incompetence, what makes you think that won’t continue?

Another argument that I think job guarantee proponents don’t take seriously enough is the concern about “make-work”. They insist that a job guarantee is not “make-work”, but real work that’s just somehow not being done. They seem to think that there are a huge number of jobs that we could just create at the snap of a finger, which would be both necessary and useful on the one hand, and a perfect match for the existing skills of the unemployed population on the other hand. If that were the case, we would already be creating those jobs. It doesn’t even require a particularly strong faith in capitalism to understand this: If there is a profit to be made at hiring people to do something, there is probably already a business hiring people to do that. I don’t think of myself as someone with an overriding faith in capitalism, but a lot of the socialist arguments for job guarantees make me feel that way by comparison: They seem to think that there’s this huge untapped reserve of necessary work that the market is somehow failing to provide, and I’m just not seeing it.

There are public goods projects which aren’t profitable but would still be socially beneficial, like building rail lines and cleaning up rivers. But proponents of a job guarantee don’t seem to understand that these are almost all highly specialized jobs at our level of technology. We don’t need a bunch of people with shovels. We need engineers and welders and ecologists.

If you propose using people with shovels where engineers would be more efficient, that is make-work, whether you admit it or not. If you’re making people work in a less efficient way in order to create jobs, then the jobs you are creating are fake jobs that aren’t worth creating. The line is often credited to Milton Friedman, but it was actually first said by William Aberhart in 1935:

Taking up the policy of a public works program as a solution for unemployment, it was criticized as a plan that took no account of the part that machinery played in modern construction, with a road-making machine instanced as an example. He saw, said Mr. Aberhart, work in progress at an airport and was told that the men were given picks and shovels in order to lengthen the work, to which he replied why not give them spoons and forks instead of picks and shovels if the object was to lengthen out the task.

I’m all for spending more on building rail lines and cleaning up rivers, but that’s not an anti-poverty program. The people who need the most help are precisely the ones who are least qualified to work on these projects: Children, old people, people with severe disabilities. Job guarantee proponents either don’t understand this fact or intentionally ignore it. If you aren’t finding jobs for 7-year-olds with autism and 70-year-olds with Parkinson’s disease, this program will not end poverty. And if you are, I find it really hard to believe that these are real, productive jobs and not useless “make-work”. A basic income would let the 7-year-olds stay in school and the 70-year-olds live in retirement homes—and keep them both out of poverty.

Another really baffling argument for a job guarantee over basic income is that a basic income would act as a wage subsidy, encouraging employers to reduce wages. That’s not how a basic income works. Not at all. A basic income would provide a pure income effect, necessarily increasing wage demands. People would not be as desperate for work, so they’d be more comfortable turning down unreasonable wage offers. A basic income would also incentivize some people to leave the labor force by retiring or going back to school; the reduction in labor supply would further increase wages. The Earned Income Tax Credit, by contrast, really is similar in many respects to a wage subsidy; a basic income might look superficially similar, but it would have the exact opposite effect.

One reasonable argument against a basic income is the possibility that it could cause inflation. This is something that can’t really be tested with small-scale experiments, so we really won’t know for sure until we try it. But there is reason to think that any inflation would be small: the people removed from the labor force would largely be the ones who are least productive to begin with, and a growing body of empirical evidence points the same way. Data on cash transfer programs in Mexico, for example, show only a small inflationary effect despite large reductions in poverty. The whole reason a basic income looks attractive is that automation technology is now so advanced that we really don’t need everyone to be working anymore. Productivity is so high now that a policy of universal 40-hour work weeks just doesn’t make sense in the 21st century.

Probably the best argument for a job guarantee over a basic income concerns cost. A basic income is very expensive, there’s no doubt about that; and a job guarantee could be much cheaper. That is something I take very seriously: Saving $1.5 trillion a year is absolutely a good reason. Indeed, I don’t really object to this argument; the calculations are correct. I merely think that a basic income is enough better that its higher cost is justifiable. A job guarantee can eliminate unemployment, but not poverty.

But the argument for a job guarantee that most people seem to find most compelling concerns meaning. The philosopher John Danaher expressed this one most cogently. Unemployment is an extremely painful experience for most people, far beyond what could be explained simply by their financial circumstances. Most people who win large sums of money in the lottery cut back their hours, but continue working—so work itself seems to have some value. What seems to happen is that when people lose the chance to work, they feel that they have lost a vital source of meaning in their lives.

Yet this raises two more questions:

First, would a job guarantee actually solve that problem?
Second, are there ways we could solve it under a basic income?

With regard to the first question, I want to re-emphasize the fact that a large proportion of these guaranteed jobs necessarily cannot be genuinely efficient production. If efficient production would have created these jobs, we would most likely already have created them. Our society does not suffer from an enormous quantity of necessary work that could be done with the skills already possessed by the unemployed population, which is somehow not getting done—indeed, it is essentially impossible for a capitalist economy with a highly-liquid financial system to suffer such a malady. If the work is so valuable, someone will probably take out a loan to hire someone to do it. If that’s not happening, either the unemployed people don’t have the necessary skills, or the work really can’t be all that productive. There are some public goods projects that would be beneficial but aren’t being done, but that’s a different problem, and the match between the public goods projects that need to be done and the skills of the unemployed population is extremely poor. Displaced coal miners aren’t useful for maintaining automated photovoltaic factories. Truckers who get replaced by robot trucks won’t be much good for building maglev rails.

With this in mind, it’s not clear to me that people would really be able to find much meaning in a guaranteed job. You can’t be fired, so the fact that you have the job doesn’t mean anyone is impressed by the quality of your work. Your work wasn’t actually necessary, or the private sector would already have hired someone to do it. The government went out of its way to find a job that precisely matched what you happen to be good at, regardless of whether that job was actually accomplishing anything to benefit society. How is that any better than not working at all? You are spending hours of drudgery to accomplish… what, exactly? If our goal was simply to occupy people’s time, we could do that with Netflix or video games.

With regard to the second question, note that a basic income is quite different from other social welfare programs in that everyone gets it. So it’s very difficult to attach a social stigma to receiving basic income payments—it would require attaching the stigma to literally everyone. And much of the meaning lost in unemployment, I suspect, comes from the social stigma attached to it.

Now, it’s still possible to attach social stigma to people who only get the basic income—there isn’t much we can do to prevent that. But in the worst-case scenario, this means unemployed people get the same stigma as before but more money. Moreover, it’s much harder to detect a basic income recipient than, say, someone who eats at a soup kitchen or buys food using EBT; since it goes in your checking account, all everyone else sees is you spending money from your debit card, just like everyone else. People who know you personally would probably know; but people who know you personally are also less likely to destroy your well-being by imposing a high stigma. Maybe they’ll pressure you to get off the couch and get a job, but they’ll do so because they genuinely want to help you, not because they think you are “one of those lazy freeloaders”.

And, as BIEN points out, think about retired people: They don’t seem to be so unhappy. Being on basic income is more like being retired than like being unemployed. It’s something everyone gets, not some special handout for “those people”. It’s permanent, so it’s not like you need to scramble to get a job before it goes away. You just get money automatically, so you don’t have to navigate a complex bureaucracy to get it. Controlling for income, retired people don’t seem to be any less happy than working people—so maybe work doesn’t actually provide all that much meaning after all.

I guess I can’t rule out the possibility that people need jobs to find meaning in their lives, but I both hope and believe that this is not generally the case. You can find meaning in your family, your friends, your community, your hobbies. You can still work even if you don’t need to work for a living: Build a shed, mow your lawn, tune up your car, upgrade your computer, write a story, learn a musical instrument, or try your hand at painting.

If you need to be taking orders from a corporation five days a week in order to have meaning in your life, you have bigger problems. I think what has happened to many people is that employment has so drained their lives of the real sources of meaning that they cling to it as the only thing they have left. But in fact work is not the cure to your ennui—it is the cause of it. Finally being free of the endless toil that has plagued humanity since the dawn of our species will give you the chance to reconnect with what really matters in life. Show your children that you love them in person, to their faces, instead of in this painfully indirect way of “providing for” them by going to work every day. Find ways to apply your skills in volunteering or creating works of art, instead of in endless drudgery for the profit of some faceless corporation.

Is grade inflation a real problem?

Mar 4 JDN 2458182

You can’t spend much time teaching at the university level and not hear someone complain about “grade inflation”. Almost every professor seems to believe in it, and yet they must all be participating in it, if it’s really such a widespread problem.

This could be explained as a collective action problem, a Tragedy of the Commons: If the incentives are always to have the students with the highest grades—perhaps because of administrative pressure, or in order to get better reviews from students—then even if all professors would prefer a harsher grading scheme, no individual professor can afford to deviate from the prevailing norms.

But in fact I think there is a much simpler explanation: Grade inflation doesn’t exist.

In economic growth theory, economists make a sharp distinction between inflation—increase in prices without change in underlying fundamentals—and growth—increase in the real value of output. I contend that there is no such thing as grade inflation—what we are in fact observing is grade growth.
Am I saying that students are actually smarter now than they were 30 years ago?

Yes. That’s exactly what I’m saying.

But don’t take it from me. Take it from the decades of research on the Flynn Effect: IQ scores have been rising worldwide at a rate of about 0.3 IQ points per year for as long as we’ve been keeping good records. Students today are about 10 IQ points smarter than students 30 years ago—a 2018 IQ score of 95 is equivalent to a 1988 score of 105, which is equivalent to a 1958 score of 115. There is reason to think this trend won’t continue indefinitely, since the effect is mainly concentrated at the bottom end of the distribution; but it has continued for quite some time already.
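For concreteness, here is that conversion as a minimal Python sketch; the constant 0.3-points-per-year rate is a simplifying assumption for illustration, since the real Flynn effect is not perfectly uniform across decades or tests.

```python
# Convert an IQ score between norm years, assuming the Flynn effect
# raises raw performance by a constant 0.3 IQ points per year
# (a simplifying assumption; the real effect varies by era and test).

FLYNN_RATE = 0.3  # IQ points per year

def equivalent_score(score, from_year, to_year, rate=FLYNN_RATE):
    """Re-express a score normed in `from_year` against `to_year` norms.

    Each renorming resets the mean to 100 while raw performance rises,
    so the same raw performance earns a lower score against newer
    (harder) norms and a higher score against older (easier) norms.
    """
    return score + rate * (from_year - to_year)

# A 2018 score of 95 is roughly 104 on 1988 norms and roughly 113 on
# 1958 norms (the round figures in the text are 105 and 115).
print(equivalent_score(95, from_year=2018, to_year=1988))
print(equivalent_score(95, from_year=2018, to_year=1958))
```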

This by itself would probably be enough to explain the observed increase in grades, but there’s more: College students are also a self-selected sample, admitted precisely because they were believed to be the smartest individuals in the application pool. Rising grades at top institutions are easily explained by rising selectivity: Harvard now accepts 5.6% of applicants. In 1942, Harvard accepted 92% of applicants. The odds of getting in have fallen from more than 11:1 in favor to about 17:1 against. Today, you need a 4.0 GPA, a 36 ACT in every category, glowing letters of recommendation, and hundreds of hours of extracurricular activities (or a family member who donated millions of dollars, of course) to get into Harvard. In the 1940s, you needed a high school diploma and a B average.

In fact, when educational researchers have tried to quantitatively study the phenomenon of “grade inflation”, they usually come back with the result that they simply can’t find it. The US Department of Education conducted a study in 1995 showing that average university grades had declined since 1965. Given that the Flynn effect raised IQ by almost 10 points during that time, maybe we should be panicking about grade deflation.

It really wouldn’t be hard to make that case: “Back in my day, you could get an A just by knowing basic algebra! Now they want these kids to take partial derivatives?” “We used to just memorize facts to ace the exam; but now teachers keep asking for reasoning and critical thinking?”

More recently, a study in 2013 found that grades rose at the high school level, but fell at the college level, and found no evidence that grades have lost any informativeness as a signaling mechanism. The only recent study I could find showing genuinely compelling evidence for grade inflation was a 2017 study of UK students estimating that grades are growing about twice as fast as the Flynn effect alone would predict. Most studies don’t even consider the possibility that students are smarter than they used to be—they just take it for granted that any increase in average grades constitutes grade inflation. Many of them don’t even control for the increase in selectivity—here’s one using the fact that Harvard’s average GPA rose from 2.7 to 3.4 from 1960 to 2000 as evidence of “grade inflation” when Harvard’s acceptance rate fell from almost 30% to only 10% during that period.

Indeed, the real mystery is why so many professors believe in grade inflation, when the evidence for it is so astonishingly weak.

I think it’s the availability heuristic. Who are professors? They are the cream of the crop. They aced their way through high school, college, and graduate school, then got hired and earned tenure—at each stage they were one of a handful of individuals who won a fierce competition with hundreds of competitors. There are over 320 million people in the US, and only 1.3 million college faculty. This means that college professors represent roughly the top 0.4% of the population in academic performance.

Combine that with the fact that human beings assort positively (we like to spend time with people who are similar to us) and use the availability heuristic (we judge how likely something is based on how many times we have seen it).

Thus, when a professor compares her students to her own experience of college, she is remembering her fellow top-scoring students at elite educational institutions. She is recalling the extreme intellectual demands she had to meet to get where she is today, and erroneously assuming that these are representative of most of the population of her generation. She probably went to school at one of a handful of elite institutions, even if she now teaches at a mid-level community college: three quarters of college faculty come from the top one quarter of graduate schools.

And now she compares that experience to the students she has to teach, most of whom would not be able to meet such demands—but of course most people in her generation couldn’t either. She frets for the future of humanity only because not everyone is a genius like her.

Throw in the Curse of Knowledge: The professor doesn’t remember how hard it was to learn what she has learned so far, and so the fact that it seems easy now makes her think it was easy all along. “How can they not know how to take partial derivatives!?” Well, let’s see… were you born knowing how to take partial derivatives?

Giving a student an A for work far inferior to what you’d have done in their place isn’t unfair. Indeed, it would clearly be unfair to do anything less. You have years if not decades of additional education ahead of them, and you come from a self-selected elite sample of highly intelligent individuals. Expecting everyone to perform as well as you would is simply setting up most of the population for failure.

There are potential incentives for grade inflation that do concern me: In particular, a lot of international student visas and scholarship programs insist upon maintaining a B or even A- average to continue. Professors are understandably loath to condemn a student to having to drop out or return to their home country just because they scored 81% instead of 84% on the final exam. If we really intend to make C the average score, then students shouldn’t lose funding or visas just for scoring a B-. Indeed, I have trouble defending any threshold above outright failing—which is to say, a minimum score of D-. If you pass your classes, that should be good enough to keep your funding.

Yet apparently even this isn’t creating too much upward bias, as students who are 10 IQ points smarter are still getting about the same scores as their forebears. We should be celebrating that our population is getting smarter, but instead we’re panicking over “easy grading”.

But kids these days, am I right?

The “productivity paradox”


Dec 10, JDN 2458098

Take a look at this graph of manufacturing output per worker-hour:

[Figure: Manufacturing output per worker-hour]

From 1988 to 2008, it was growing at a steady pace. In 2008 and 2009 it took a dip due to the Great Recession; no big surprise there. But then since 2012 it has been… completely flat. If we take this graph at face value, it would imply that manufacturing workers today can produce no more output than workers five years ago, and indeed only about 10% more than workers a decade ago. Whereas, a worker in 2008 was producing over 60% more than a worker in 1998, who was producing over 40% more than a worker in 1988.

Many economists call this the “productivity paradox”, and use it to argue that we don’t really need to worry about robots taking all our jobs any time soon. I think this view is mistaken.

The way we measure productivity is fundamentally wrongheaded, and is probably the sole cause of this “paradox”.

First of all, we use total hours scheduled to work, not total hours actually doing productive work. This is obviously much, much easier to measure, which is why we do it. But if you think for a moment about how the 40-hour workweek norm is going to clash with rapidly rising real productivity, it becomes apparent why this isn’t going to be a good measure.
When a worker finds a way to get done in 10 hours what used to take 40 hours, what does that worker’s boss do? Send them home after 10 hours because the job is done? Give them a bonus for their creativity? Hardly. That would be far too rational. They assign them more work, while paying them exactly the same. Recognizing this, what is such a worker to do? The obvious answer is to pretend to work the other 30 hours, while in fact doing something more pleasant than working.
And indeed, so-called “worker distraction” has been rapidly increasing. People are right to blame smartphones, I suppose, but not for the reasons they think. It’s not that smartphones are inherently distracting devices. It’s that smartphones are the cutting edge of a technological revolution that has made most of our work time unnecessary, so due to our fundamentally defective management norms they create overwhelming incentives to waste time at work to avoid getting drenched in extra tasks for no money.

That would probably be enough to explain the “paradox” by itself, but there is a deeper reason that in the long run is even stronger. It has to do with the way we measure “output”.

It might surprise you to learn that economists almost never consider output in terms of the actual number of cars produced, buildings constructed, songs written, or software packages developed. The standard measures of output are all in the form of so-called “real GDP”; that is, the dollar value of output produced.

They do adjust for inflation using price indexes, but as I’ll show in a moment this still creates a fundamentally biased picture of the productivity dynamics.

Consider a world with only three industries: Housing, Food, and Music.

Productivity in Housing doesn’t change at all. Producing a house cost 10,000 worker-hours in 1950, and cost 10,000 worker-hours in 2000. Nominal price of houses has rapidly increased, from $10,000 in 1950 to $200,000 in 2000.

Productivity in Food rises moderately fast. Producing 1,000 meals cost 1,000 worker-hours in 1950, and cost 100 worker-hours in 2000. Nominal price of food has increased slowly, from $1,000 per 1,000 meals in 1950 to $5,000 per 1,000 meals in 2000.

Productivity in Music rises extremely fast. Producing 1,000 performances cost 10,000 worker-hours in 1950, and cost 1 worker-hour in 2000. Nominal price of music has collapsed, from $100,000 per 1,000 performances in 1950 to $1,000 per 1,000 performances in 2000.

This is of course an extremely stylized version of what has actually happened: Housing has gotten way more expensive, food has stayed about the same in price while farm employment has plummeted, and the rise of digital music has brought about a new Renaissance in actual music production and listening while revenue for the music industry has collapsed. There is a very nice Vox article on the “productivity paradox” showing a graph of how prices have changed in different industries.

How would productivity appear in the world I’ve just described, by standard measures? Well, to say that, I actually need to say something about how consumers substitute across industries. But I think I’ll be forgiven in this case for saying that there is no substitution whatsoever; you can’t eat music or live in a burrito. There’s also a clear Maslow hierarchy here: They say that man cannot live by bread alone, but I think living by Led Zeppelin alone is even harder.

Consumers will therefore choose like this: Over 10 years, buy 1 house, 10,000 meals, and as many performances as you can afford after that. Further suppose that each person had $2,100 per year to spend in 1940-1950, and $50,000 per year to spend in 1990-2000. (This is approximately true for actual nominal US GDP per capita.)

1940-1950:

Total funds: $21,000

1 house = $10,000

10,000 meals = $10,000

Remaining funds: $1,000

Performances purchased: 10

1990-2000:

Total funds: $500,000

1 house = $200,000

10,000 meals = $50,000

Remaining funds: $250,000

Performances purchased: 250,000

(Do you really listen to this much music? 250,000 performances over 10 years is about 70 songs per day. If each song is 3 minutes, that’s only about 3.5 hours per day. If you listen to music while you work or watch a couple of movies with musical scores, yes, you really do listen to this much music! The unrealistic part is assuming that people in 1950 listen to so little, given that radio was already widespread. But if you think of music as standing in for all media, the general trend of being able to consume vastly more media in the digital age is clearly correct.)
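For the curious, here is that purchase rule as a minimal Python sketch, using the stylized prices above; it reproduces the decade totals just listed.

```python
# A minimal sketch of the stylized purchase rule described above:
# each decade, buy 1 house and 10,000 meals first, then spend whatever
# is left on music performances at the going price.
# The prices are the stylized figures from the text, not real data.

def decade_purchases(decade_income, house_price, meal_price, performance_price):
    music_budget = decade_income - house_price - 10_000 * meal_price
    return {"houses": 1, "meals": 10_000,
            "performances": int(music_budget // performance_price)}

# 1940-1950: $21,000 per decade; house $10,000; meals $1; performances $100
print(decade_purchases(21_000, 10_000, 1, 100))    # 10 performances

# 1990-2000: $500,000 per decade; house $200,000; meals $5; performances $1
print(decade_purchases(500_000, 200_000, 5, 1))    # 250,000 performances
```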

Now consider how we would compute a price index for each time period. We would construct a basket of goods and determine the price of that basket in each time period, then adjust prices until that basket has a constant price.
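Here is a minimal sketch of that fixed-basket adjustment; the two goods and their prices below are toy placeholders for checking the mechanics, not the official CPI methodology.

```python
# A minimal fixed-basket (Laspeyres-style) price adjustment: price the
# same basket of goods in the base period and in the later period, and
# use the ratio of the two costs as the inflation adjustment factor.

def basket_cost(basket, prices):
    """Total cost of a basket {good: quantity} at the given {good: price}."""
    return sum(qty * prices[good] for good, qty in basket.items())

def inflation_factor(basket, base_prices, later_prices):
    """Ratio of the basket's later-period cost to its base-period cost."""
    return basket_cost(basket, later_prices) / basket_cost(basket, base_prices)

# Toy check: one good triples in price, the other doubles, and base-period
# spending is split 50/50 between them, so the factor comes out to 2.5.
basket = {"apples": 10, "haircuts": 5}
prices_base = {"apples": 1.0, "haircuts": 2.0}
prices_later = {"apples": 3.0, "haircuts": 4.0}
print(inflation_factor(basket, prices_base, prices_later))  # 2.5
```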

Here, the basket would probably be what people bought in 1940-1950: 1 house, 10,000 meals, and 400 music performances.

In 1950, this basket cost $10,000+$10,000+$100 = $21,000.

In 2000, this basket cost $200,000+$50,000+$400 = $150,400.

This means that our inflation adjustment is $150,400/$21,000 = 7 to 1. So we would estimate the real per-capita GDP in 1950 at about $14,700. And indeed, that’s about the actual estimate of real per-capita GDP in 1950.

So, what would we say about productivity?

Sales of houses in 1950 were 1 per person, costing 10,000 worker hours.

Sales of food in 1950 were 10,000 per person, costing 10,000 worker hours.

Sales of music in 1950 were 400 per person, costing 4,000 worker hours.

Worker hours per person are therefore 24,000.

Sales of houses in 2000 were 1 per person, costing 10,000 worker hours.

Sales of food in 2000 were 10,000 per person, costing 1,000 worker hours.

Sales of music in 2000 were 250,000 per person, costing 25,000 worker hours.

Worker hours per person are therefore 36,000.

Therefore we would estimate that productivity rose from $14,700/24,000 = $0.61 per worker-hour to $50,000/36,000 = $1.40 per worker-hour. This is an annual growth rate of about 1.7%, which is again, pretty close to the actual estimate of productivity growth. For such a highly stylized model, my figures are doing remarkably well. (Honestly, better than I thought they would!)

But think about how much actual productivity rose, at least in the industries where it did.

We produce 10 times as much food per worker hour after 50 years, which is an annual growth rate of 4.7%, or three times the estimated growth rate.

We produce 10,000 times as much music per worker hour after 50 years, which is an annual growth rate of over 20%, or almost twelve times the estimated growth rate.
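If you want to check the annualized figures, they follow from compound growth over 50 years; here is the arithmetic as a quick Python sketch.

```python
# Annualized growth rate implied by a total growth factor over n years:
# solve (1 + g)^n = factor for g.

def annual_rate(total_factor, years):
    return total_factor ** (1 / years) - 1

measured = annual_rate(1.40 / 0.61, 50)  # ~1.7%: the standard measure above
food = annual_rate(10, 50)               # ~4.7%: true food productivity growth
music = annual_rate(10_000, 50)          # ~20%: true music productivity growth

print(f"measured: {measured:.1%}, food: {food:.1%}, music: {music:.1%}")
print(f"food grew {food / measured:.1f}x as fast as the standard measure,"
      f" music {music / measured:.1f}x as fast")
```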

Moreover, should music producers be worried about losing their jobs to automation? Absolutely! People simply won’t be able to listen to much more music than they already are, so any continued increases in music productivity are going to make musicians lose jobs. And that was already allowing for music consumption to increase by a factor of over 600.

Of course, the real world has a lot more industries than this, and everything is a lot more complicated. We do actually substitute across some of those industries, unlike in this model.

But I hope I’ve gotten at least the basic point across that when things become drastically cheaper as technological progress often does, simply adjusting for inflation doesn’t do the job. One dollar of music today isn’t the same thing as one dollar of music a century ago, even if you inflation-adjust their dollars to match ours. We ought to be measuring in hours of music; an hour of music is much the same thing as an hour of music a century ago.

And likewise, that secretary/weather forecaster/news reporter/accountant/musician/filmmaker in your pocket that you call a “smartphone” really ought to be counted as more than just a simple inflation adjustment on its market price. The fact that it is mind-bogglingly cheaper to get these services than it used to be is the technological progress we care about; it’s not some statistical artifact to be removed by proper measurement.

Combine that with actually measuring the hours of real, productive work, and I think you’ll find that productivity is still rising quite rapidly, and that we should still be worried about what automation is going to do to our jobs.

Why are movies so expensive? Did they used to be? Do they need to be?

August 10, JDN 2457611

One of the better arguments in favor of copyright involves film production. Films are extraordinarily expensive to produce; without copyright, how would they recover their costs? $100 million is a common budget these days.

It is commonly thought that film budgets used to be much smaller, so I looked at some data from The Numbers on over 5,000 films going back to 1915, and inflation-adjusted the budgets using the CPI. (I learned some interesting LibreOffice Calc functions in the process of merging the data; also LibreOffice crashed a few times trying to make the graphs, so that’s fun. I finally realized it had copied over all the 10,000 hyperlinks from the HTML data set.)

If you just look at the nominal figures, there does seem to be some sort of upward trend:

[Figure: Movie budgets over time, nominal dollars]

But once you do the proper inflation adjustment, this trend basically disappears:

[Figure: Movie budgets over time, inflation-adjusted]

In real terms, the grosses of some early movies are quite large. Adjusted to 2015 dollars, Gone with the Wind grossed $6.659 billion—still the highest ever. In 1937, Snow White and the Seven Dwarfs grossed over $3.043 billion in 2015 dollars. In 1950, Cinderella made it to $2.592 billion in today’s money. (Horrifyingly, The Birth of a Nation grossed $258 million in today’s money.)

Nor is there any evidence that movie production has gotten more expensive. The linear trend is actually negative, though with a very small slope that is not statistically significant. On average, the real budget of a movie falls by $1,752 per year.

[Figure: Inflation-adjusted movie budgets with linear trend]

While the two most expensive movies came out recently (Pirates of the Caribbean: At World’s End and Avatar), the third most expensive was released in 1963 (Cleopatra). The really hugely expensive movies do seem to cluster relatively recently—but then so do the really cheap films, some of which have budgets under $10,000. It may just be that more movies are produced in general, and overall the cost of producing a film doesn’t seem to have changed in real terms. The best return on investment is My Date with Drew, released in 2005, which had a budget of $1,100 but grossed $181,000, giving it an ROI of 16,358%. The highest real profit was of course Gone with the Wind, which made an astonishing $6.592 billion, though Titanic, Avatar, Aliens and Terminator 2 combined actually beat it with a total profit of $6.651 billion, which may explain why James Cameron can now basically make any movie he wants and already has four sequels lined up for Avatar.

The biggest real loss was 1970’s Waterloo, which made back only $18 million of its $153 million budget, losing $135 million and having an ROI of -87.7%. This was not quite as bad an ROI as 2002’s The Adventures of Pluto Nash, which had an ROI of -92.91%.
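For reference, return on investment here is just profit divided by budget. A quick sketch using the rounded figures quoted above, which is why the percentages come out slightly different from the exact ones computed from the unrounded data:

```python
# Return on investment: profit relative to budget.
# The inputs are the rounded figures quoted in the text, so the outputs
# differ slightly from the percentages computed from the exact data.

def roi(budget, gross):
    return (gross - budget) / budget

print(f"My Date with Drew: {roi(1_100, 181_000):.0%}")  # roughly 16,355%
print(f"Waterloo: {roi(153e6, 18e6):.1%}")              # roughly -88%
```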

But making movies has always been expensive, at least for big blockbusters. (The $8,900 budget of Primer is something I could probably put on credit cards if I had to.) It’s nothing new to spend $100 million in today’s money.

When considering the ethics and economics of copyright, it’s useful to think about what Michele Boldrin calls “pizzaright”: you can’t copy my pizza, or you are guilty of pizzaright infringement. Many of the arguments for copyright are so general—this is a valuable service, it carries some risk of failure, it wouldn’t be as profitable without the monopoly, so fewer companies might enter the business—that they would also apply to pizza. Yet somehow nobody thinks that pizzaright should be a thing. If there is a justification for copyrights, it must come from the special circumstances of works of art (broadly conceived, including writing, film, music, etc.), and the only one that really seems strong enough is the high upfront cost of certain types of art—and indeed, the only ones that really seem to fit that are films and video games.

Painting, writing, and music just aren’t that expensive. People are willing to create these things for very little money, and can do so more or less on their own, especially nowadays. If the prices are reasonable, people will still want to buy from the creators directly—and sure enough, widespread music piracy hasn’t killed music, it has only killed the corporate record industry. But movies and video games really can easily cost $100 million to make, so there’s a serious concern of what might happen if they couldn’t use copyright to recover their costs.

The question for me is, did we really need copyright to fund these budgets?

Let’s take a look at how Star Wars made its money. $6.249 billion came from box office revenue, while $873 million came from VHS and DVD sales; those would probably be substantially reduced if not for copyright. But even before The Force Awakens was released, the Star Wars franchise had already made some $12 billion in toy sales alone. “Merchandizing, merchandizing, where the real money from the movie is made!”

Did they need intellectual property to do that? Well, yes—but all they needed was trademark. Defenders of “intellectual property” like to use that term because it elides fundamental distinctions between the three types: trademark, copyright, and patent.
Trademark is unproblematic. You can’t lie about who you are or where your products came from when you’re selling something. So if you are claiming to sell official Star Wars merchandise, you’d better be selling official Star Wars merchandise, and trademark protects that.

Copyright is problematic, but may be necessary in some cases. Copyright protects the content of the movies from being copied or modified without Lucasfilm’s permission. So now rather than simply protecting against the claim that you represent Lucasfilm, we are protecting against people buying the movie, copying it, and reselling the copies—even though that is a real economic service they are providing, and is in no way fraudulent as long as they are clear about the fact that they made the copies.

Patent is, frankly, ridiculous. The concept of “owning” ideas is absurd. You came up with a good way to do something? Great! Go do it then. But don’t expect other people to pay you simply for the privilege of hearing your good idea. Of course I want to financially support researchers, but there are much, much better ways of doing that, like government grants and universities. Patents only raise revenue for research that sells, first of all—so vaccines and basic research can’t be funded that way, even though they are the most important research by far. Furthermore, there’s nothing to guarantee that the person who actually invented the idea is the one who makes the profit from it—and in our current system where corporations can own patents (and do own almost 90% of patents), it typically isn’t. Even if it were, the whole concept of owning ideas is nonsensical, and it has driven us to the insane extremes of corporations owning patents on human DNA. The best argument I’ve heard for patents is that they are a second-best solution that incentivizes transparency and keeps trade secrets from becoming commonplace; but in that case they should definitely be short, and we should never extend them. Companies should not be able to make basically cosmetic modifications and renew the patent, and expiring patents should be a cause for celebration.

Hollywood actually formed in Los Angeles precisely to escape patents, but of course the studios love copyright and trademark. So do they like “intellectual property”? The question has no coherent answer, which is exactly the problem with lumping the three together.

Could blockbuster films be produced profitably using only trademark, in the absence of copyright?

Clearly Star Wars would have still turned a profit. But not every movie can do such merchandizing, and when movies start getting written purely for merchandizing it can be painful to watch.

The real question is whether a film like Gone with the Wind or Avatar could still be made, and make a reasonable profit (if a much smaller one).

Well, there’s always porn. Porn raises over $400 million per year in revenue, despite having essentially unenforceable copyright. They too are outraged over piracy, yet somehow I don’t think porn will ever cease to exist. A top porn star can make over $200,000 per year. Then there are of course independent films that never turn a profit at all, yet people keep making them.

So clearly it is possible to make some films without copyright protection, and something like Gone with the Wind needn’t cost $100 million to make. The only reason it cost as much as it did (about $66 million in today’s money) was that movie stars could command huge winner-takes-all salaries, which would no longer be true if copyright went away. And don’t tell me people wouldn’t be willing to be movie stars for $200,000 a year instead of $1.8 million (what Clark Gable made for Gone with the Wind, adjusted for inflation).

Yet some Hollywood blockbuster budgets are genuinely necessary. The real question is whether we could have Avatar without copyright. Not having films like Avatar is something I would count as a substantial loss to our society; we would lose important pieces of our art and culture.

So, where did all that money go? I don’t have a breakdown for Avatar in particular, but I do have a full budget breakdown for The Village. Of its $71.7 million, $33.5 million was “above the line”, which basically means the winner-takes-all superstar salaries for the director, producer, and cast. That amount could be dramatically reduced with no real cost to society—let’s drop it to, say, $3 million. Shooting costs were $28.8 million, post-production was $8.4 million, and miscellaneous expenses added about $1 million; all of those would be much harder to reduce (they mainly go to technical staff who make reasonable salaries, not to superstars), so let’s assume the full amount is necessary. That’s about $38 million in real cost to produce. Avatar had a lot more (and better) post-production, so let’s go ahead and multiply the post-production budget by an order of magnitude to $84 million. Our new total budget is $113.8 million.
That sounds like a lot, and it is; but this could be made back without copyright. Avatar sold over 14.5 million DVDs and over 8 million Blu-Rays. Conservatively assuming that the price elasticity of demand is zero (which is ridiculous—assuming the monopoly pricing is optimal it should be -1), if those DVDs were sold for $2 each and the Blu-Rays were sold for $5 each, with 50% of those prices being profit, this would yield a total profit of $14.5 million from DVDs and $20 million from Blu-Rays. That’s already $34.5 million. With realistic assumptions about elasticity of demand, cutting the prices this much (DVDs down from an average of $16, Blu-Rays down from an average of $20) would multiply the number of DVDs sold by at least 5 and the number of Blu-Rays sold by at least 3, which would get us all the way up to $132 million—enough to cover our new budget. (Of course this is much less than they actually made, which is why they set the prices they did—but that doesn’t mean it’s optimal from society’s perspective.)
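Here is that back-of-envelope calculation in code form, using the sales figures quoted above; the $2 and $5 prices, the 50% profit margin, and the 5x and 3x sales multipliers are the rough assumptions stated in the text, not estimates from data.

```python
# Back-of-envelope Avatar finances at much lower disc prices, using the
# figures and assumptions from the text above.

# Revised budget: keep shooting ($28.8M) and miscellaneous ($1M) from
# The Village's breakdown, and scale post-production up tenfold to $84M.
revised_budget = 28.8e6 + 84e6 + 1e6
print(f"revised budget: ${revised_budget / 1e6:.1f}M")  # $113.8M

dvd_units, bluray_units = 14.5e6, 8e6  # approximate units actually sold
dvd_price, bluray_price = 2.0, 5.0     # assumed lower prices
margin = 0.5                           # assume half of each sale is profit

# Zero-elasticity case: the same number of discs sold despite lower prices.
flat = margin * (dvd_units * dvd_price + bluray_units * bluray_price)
print(f"profit, no extra sales: ${flat / 1e6:.1f}M")     # $34.5M

# With the rough elasticity assumptions above: cutting prices this much
# multiplies DVD sales by about 5 and Blu-Ray sales by about 3.
elastic = margin * (5 * dvd_units * dvd_price + 3 * bluray_units * bluray_price)
print(f"profit, with extra sales: ${elastic / 1e6:.1f}M")  # $132.5M
```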

But okay, suppose I’m wrong about the elasticity, and dropping the price from $16 to $2 for a DVD somehow wouldn’t actually increase the number purchased. What other sources of revenue would they have? Well, box office tickets would still be a thing. They’d have to come down in price, but given the high-quality high-fidelity versions that cinemas require—making them quite hard to pirate—they would still get decent money from each cinema. Let’s say the price drops by 90%—all cinemas are now $1 cinemas!—and the sales again somehow remain exactly the same (rather than dramatically increasing as they actually would). What would Avatar’s worldwide box office gross be then? $278 million. They could give the DVDs away for free and still turn a profit.

And that’s Avatar, one of the most expensive movies ever made. By cutting out the winner-takes-all salaries and huge corporate profits, the budget can be substantially reduced, and then what real costs remain can be quite well covered by box office and DVD sales at reasonable prices. If you imagine that piracy somehow undercuts everything until you have to give away things for free, you might think this is impossible; but in reality pirated versions are of unreliable quality, people do want to support artists and they are willing to pay something for their entertainment. They’re just tired of paying monopoly prices to benefit the shareholders of Viacom.

Would this end the era of the multi-millionaire movie star? Yes, I suppose it might. But it would also put about $10 billion per year back in the pockets of American consumers—and there’s little reason to think it would take away future Avatars, much less future Gone with the Winds.

The credit rating agencies to be worried about aren’t the ones you think

JDN 2457499

John Oliver is probably the best investigative journalist in America today, despite being neither American nor officially a journalist; last week he took on the subject of credit rating agencies, a classic example of his mantra “If you want to do something evil, put it inside something boring.” (Note that the segment aired on HBO, so there is foul language.)

As ever, his analysis of the subject is quite good—it’s absurd how much power these agencies have over our lives, and how little accountability they have for even assuring accuracy.

But I couldn’t help but feel that he was kind of missing the point. The credit rating agencies to really be worried about aren’t Equifax, Experian, and Transunion, the ones that assess credit ratings on individuals. They are Standard & Poor’s, Moody’s, and Fitch (which would have been even easier to skewer the way John Oliver did—perhaps we can get them confused with Standardly Poor, Moody, and Filch), the agencies which assess credit ratings on institutions.

These credit rating agencies have almost unimaginable power over our society. They are responsible for rating the risk of corporate bonds, certificates of deposit, stocks, derivatives such as mortgage-backed securities and collateralized debt obligations, and even municipal and government bonds.

S&P, Moody’s, and Fitch don’t just rate the creditworthiness of Goldman Sachs and J.P. Morgan Chase; they rate the creditworthiness of Detroit and Greece. (Indeed, they played an important role in the debt crisis of Greece, which I’ll talk about more in a later post.)

Moreover, they have been proven corrupt. It’s a matter of public record.

Standard and Poor’s is the worst; they have been successfully sued for fraud by small banks in Pennsylvania and by the State of New Jersey; they have also settled fraud cases with the Securities and Exchange Commission and the Department of Justice.

Moody’s has also been sued for fraud by the Department of Justice, and all three have been prosecuted for fraud by the State of New York.

But in fact this underestimates the corruption, because the worst conflicts of interest aren’t even illegal, or weren’t until Dodd-Frank was passed in 2010. The basic structure of this credit rating system is fundamentally broken; the agencies are private, for-profit corporations, and they get their revenue entirely from the banks that pay them to assess their risk. If they rate a bank’s asset as too risky, the bank stops paying them, and instead goes to another agency that will offer a higher rating—and simply the threat of doing so keeps them in line. As a result their ratings are basically uncorrelated with real risk—they failed to predict the collapse of Lehman Brothers or the failure of mortgage-backed CDOs, and they didn’t “predict” the European debt crisis so much as cause it by their panic.

Then of course there’s the fact that they are obviously an oligopoly, and furthermore one that is explicitly protected under US law. But then it dawns on you: Wait… US law? US law decides the structure of credit rating agencies that set the bond rates of entire nations? Yes, that’s right. You’d think that such ratings would be set by the World Bank or something, but they’re not; in fact here’s a paper published by the World Bank in 2004 about how, rather than reform our credit rating system, we should instead tell poor countries to reform themselves so they can better impress the private credit rating agencies.

In fact the whole concept of “sovereign debt risk” is fundamentally defective; a country that borrows in its own currency should never have to default on its debt under any circumstances. National debt is almost nothing like personal or corporate debt. A government’s fears should be inflation and unemployment—its monetary policy should be set to minimize the harm of these two basic macroeconomic problems, understanding that policies which mitigate one may inflame the other. There is such a thing as bad fiscal policy, but it has nothing to do with “running out of money to pay your debt” unless you are forced to borrow in a currency you can’t control (as Greece is, because it is on the Euro—its debt is less like the US national debt and more like the debt of Puerto Rico, which is suffering an ongoing debt crisis you may not have heard about).

If you borrow in your own currency, you should be worried about excessive borrowing creating inflation and devaluing your currency—but not about suddenly being unable to repay your creditors. The whole concept of giving a sovereign nation a credit rating makes no sense. You will be repaid on time and in full, in nominal terms; if inflation or currency depreciation has devalued the currency you are repaid in, that’s sort of like a partial default, but it’s a fundamentally different kind of “default” than simply not paying back the money—and credit ratings have no way of capturing that difference.

In particular, it makes no sense for interest rates on government bonds to go up when a country is suffering some kind of macroeconomic problem.

The basic argument for why interest rates go up when risk is higher is that lenders expect to be paid more by those who do pay to compensate for what they lose from those who don’t pay. This is already much more problematic than most economists appreciate; I’ve been meaning to write a paper on how this system creates self-fulfilling prophecies of default and moral hazard from people who pay their debts being forced to subsidize those who don’t. But it at least makes some sense.
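The textbook version of that argument is a simple break-even condition: if some fraction of borrowers default and repay nothing, the rate charged to everyone else has to rise until expected repayment matches the risk-free return. A minimal sketch under those simplifying assumptions (total default, risk-neutral lenders), not a claim about how the rating agencies actually model risk:

```python
# Break-even lending rate when a fraction p of borrowers default and
# repay nothing, so that expected repayment equals the risk-free return:
#   (1 - p) * (1 + r) = 1 + r_free  =>  r = (1 + r_free) / (1 - p) - 1
# A textbook simplification, not how rating agencies actually price risk.

def breakeven_rate(risk_free_rate, default_prob):
    return (1 + risk_free_rate) / (1 - default_prob) - 1

# With a 2% risk-free rate, a 5% chance of total default pushes the
# break-even rate to about 7.4%; everyone who does repay pays the extra.
print(f"{breakeven_rate(0.02, 0.05):.1%}")
```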

But if a country is “high risk” in the sense that macroeconomic instability is undermining the real value of its debt, what we want is to ensure that it can restore macroeconomic stability. Yet we know that when there is a surge in interest rates on government bonds, instability gets worse, not better. Fiscal policy is suddenly shifted away from real production into higher debt payments, and this creates unemployment and makes the economic crisis worse. As Paul Krugman writes about frequently, these policies of “austerity” cause enormous damage to national economies and ultimately benefit no one, because they destroy the source of wealth that would have been used to repay the debt.

By letting credit rating agencies decide the rates at which governments must borrow, we are effectively treating national governments as a special case of corporations. But corporations, by design, act for profit and can go bankrupt. National governments are supposed to act for the public good and persist indefinitely. We can’t simply let Greece fail as we might let a bank fail (and of course we’ve seen that there are serious downsides even to that). We have to restructure the sovereign debt system so that it benefits the development of nations rather than detracting from it. The first step is removing the power of private for-profit corporations in the US to decide the “creditworthiness” of entire countries. If we need to assess such risks at all, they should be done by international institutions like the UN or the World Bank.

But right now people are so stuck in the idea that national debt is basically the same as personal or corporate debt that they can’t even understand the problem. For after all, one must repay one’s debts.

Why is it so hard to get a job?

JDN 2457411

The United States is slowly dragging itself out of the Second Depression.

Unemployment fell from almost 10% to about 5%.

Core inflation has been kept between 0% and 2% most of the time.

Overall inflation has been within a reasonable range:

[Figure: US inflation rate]

Real GDP has returned to its normal growth trend, though with a permanent loss of output relative to what would have happened without the Great Recession.

[Figure: US real GDP growth]

Consumption spending is also back on trend, tracking GDP quite precisely.

The Federal Reserve even raised the federal funds interest rate above the zero lower bound, signaling a return to normal monetary policy. (As I argued previously, I’m pretty sure that was their main goal actually.)

Employment remains well below the pre-recession peak, but is now beginning to trend upward once more.

The only thing that hasn’t recovered is labor force participation, which continues to decline. This is how we can have unemployment go back to normal while employment remains depressed; people leave the labor force by retiring, going back to school, or simply giving up looking for work. By the formal definition, someone is only unemployed if they are actively seeking work. No, this is not new, and it is certainly not Obama rigging the numbers. This is how we have measured unemployment for decades.

Actually, it’s kind of the opposite: Since the Clinton administration we’ve also kept track of “broad unemployment”, which includes people who’ve given up looking for work as well as people who have some work but are trying to find more. But we can’t directly compare it to anything that happened before 1994, because the BLS didn’t keep track of it before then. All we can do is estimate based on what we did measure. Based on such estimation, broad unemployment in the Great Depression may have gotten as high as 50%. (I’ve found that one of the best-fitting models is actually one of the simplest: assume that broad unemployment is 1.8 times narrow unemployment. This fits much better than you might think.)
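That rule of thumb is literally a one-line model; applied to the standard estimate that narrow unemployment peaked around 25% in 1933, it gives a figure in the same ballpark as the 50% mentioned above.

```python
# The rough rule of thumb described above: broad unemployment (the U-6
# style measure) tends to run about 1.8 times the narrow headline rate.
# Applying it to the ~25% peak of 1933 is an extrapolation, not data.

BROAD_MULTIPLIER = 1.8

def broad_unemployment(narrow_rate, multiplier=BROAD_MULTIPLIER):
    return multiplier * narrow_rate

print(f"{broad_unemployment(0.25):.0%}")  # about 45%
```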

So, yes, we muddle our way through, and the economy eventually heals itself. We could have brought the economy back much sooner if we had better fiscal policy, but at least our monetary policy was good enough that we were spared the worst.

But I think most of us—especially in my generation—recognize that it is still really hard to get a job. Overall GDP is back to normal, and even unemployment looks all right; but why are so many people still out of work?

I have a hypothesis about this: I think a major part of why it is so hard to recover from recessions is that our system of hiring is terrible.

Contrary to popular belief, layoffs do not actually substantially increase during recessions. Quits are substantially reduced, because people are afraid to leave current jobs when they aren’t sure of getting new ones. As a result, rates of job separation actually go down in a recession. Job separation does predict recessions, but not in the way most people think. One of the things that made the Great Recession different from other recessions is that most layoffs were permanent, instead of temporary—but we’re still not sure exactly why.

Here, let me show you some graphs from the BLS.

This graph shows job openings from 2005 to 2015:

[Figure: Job openings, 2005–2015]

This graph shows hires from 2005 to 2015:

[Figure: Hires, 2005–2015]

Both of those show the pattern you’d expect, with openings and hires plummeting in the Great Recession.

But check out this graph, of job separations from 2005 to 2015:

[Figure: Job separations, 2005–2015]

Same pattern!

Unemployment in the Second Depression wasn’t caused by a lot of people losing jobs. It was caused by a lot of people not getting jobs—either after losing previous ones, or after graduating from school. There weren’t enough openings, and even when there were openings there weren’t enough hires.

Part of the problem is obviously just the business cycle itself. Spending drops because of a financial crisis, then businesses stop hiring people because they don’t project enough sales to justify it; then spending drops even further because people don’t have jobs, and we get caught in a vicious cycle.

But we are now recovering from the cyclical downturn; spending and GDP are back to their normal trend. Yet the jobs never came back. Something is wrong with our hiring system.

So what’s wrong with our hiring system? Probably a lot of things, but here’s one that’s been particularly bothering me for a long time.
As any job search advisor will tell you, networking is essential for career success.

There are so many different places you can hear this advice, it honestly gets tiring.

But stop and think for a moment about what that means. One of the most important determinants of what job you will get is… what people you know?

It’s not what you are best at doing, as it would be if the economy were optimally efficient.
It’s not even what you have credentials for, as we might expect as a second-best solution.

It’s not even how much money you already have, though that certainly is a major factor as well.

It’s what people you know.

Now, I realize, this is not entirely beyond your control. If you actively participate in your community, attend conferences in your field, and so on, you can establish new contacts and expand your network. A major part of the benefit of going to a good college is actually the people you meet there.

But a good portion of your social network is more or less beyond your control, and above all, says almost nothing about your actual qualifications for any particular job.

There are certain jobs, such as marketing, that actually directly relate to your ability to establish rapport and build weak relationships rapidly. These are a tiny minority. (Actually, most of them are the sort of job that I’m not even sure needs to exist.)

For the vast majority of jobs, your social skills are a tiny, almost irrelevant part of the actual skill set needed to do the job well. This is true of jobs from writing science fiction to teaching calculus, from diagnosing cancer to flying airliners, from cleaning up garbage to designing spacecraft. Social skills are rarely harmful, and even often provide some benefit, but if you need a quantum physicist, you should choose the recluse who can write down the Dirac equation by heart over the well-connected community leader who doesn’t know what an integral is.

At the very least, it strains credibility to suggest that social skills are so important for every job in the world that they should be one of the defining factors in who gets hired. And make no mistake: Networking is as beneficial for landing a job at a local bowling alley as it is for becoming Chair of the Federal Reserve. Indeed, for many entry-level positions networking is literally all that matters, while advanced positions at least exclude candidates who don’t have certain necessary credentials, and then make the decision based upon who knows whom.

Yet, if networking is so inefficient, why do we keep using it?

I can think of a couple reasons.

The first reason is that this is how we’ve always done it. Indeed, networking strongly pre-dates capitalism or even money; in ancient tribal societies there were certainly jobs to assign people to: who will gather berries, who will build the huts, who will lead the hunt. But there were no colleges, no certifications, no resumes—there was only your position in the social structure of the tribe. I think most people simply automatically default to a networking-based system without even thinking about it; it’s just the instinctual System 1 heuristic.

One of the few things I really liked about Debt: The First 5000 Years was the discussion of how similar the behavior of modern CEOs is to that of ancient tribal chieftains, for reasons that make absolutely no sense in terms of neoclassical economic efficiency—but perfect sense in light of human evolution. I wish Graeber had spent more time on that, instead of on the many long digressions about international debt policy that he clearly does not understand.

But there is a second reason as well, a better reason, a reason that we can’t simply give up on networking entirely.

The problem is that many important skills are very difficult to measure.

College degrees do a decent job of assessing our raw IQ, our willingness to persevere on difficult tasks, and our knowledge of the basic facts of a discipline (as well as a fantastic job of assessing our ability to pass standardized tests!). But when you think about the skills that really make a good physicist, a good economist, a good anthropologist, a good lawyer, or a good doctor—they really aren’t captured by any of the quantitative metrics that a college degree provides. Your capacity for creative problem-solving, your willingness to treat others with respect and dignity; these things don’t appear in a GPA.

This is especially true in research: The degree tells how good you are at doing the parts of the discipline that have already been done—but what we really want to know is how good you’ll be at doing the parts that haven’t been done yet.

Nor are skills precisely aligned with the content of a resume; the best predictor of doing something well may in fact be whether you have done so in the past—but how can you get experience if you can’t get a job without experience?

These so-called “soft skills” are difficult to measure—but not impossible. Basically the only reliable measurement mechanisms we have require knowing and working with someone for a long span of time. You can’t read it off a resume, and you can’t see it in an interview (interviews are actually a horribly biased hiring mechanism, particularly biased against women). In effect, the only way to really know whether someone will be good at a job is to work with them at that job for a while.

There’s a fundamental information problem here that I’ve never quite been able to resolve. It pops up in a few other contexts as well: How do you know whether a novel is worth reading without reading the novel? How do you know whether a film is worth watching without watching the film? When the quality of something can only be learned by paying the cost of buying it, there is basically no way to assess that quality before the purchase.

Networking is an attempt to get around this problem. To decide whether to read a novel, ask someone who has read it. To decide whether to watch a film, ask someone who has watched it. To decide whether to hire someone, ask someone who has worked with them.

The problem is that this is such a weak measure that it’s not much better than no measure at all. I often wonder what would happen if businesses were required to hire people based entirely on resumes, with no interviews, no recommendation letters, and any personal contacts treated as conflicts of interest rather than useful networking opportunities—a world where the only thing we use to decide whether to hire someone is their documented qualifications. Could it herald a golden age of new economic efficiency and job fulfillment? Or would it result in widespread incompetence and catastrophic collapse? I honestly cannot say.

Thus ends our zero-lower-bound interest rate policy

JDN 2457383

Not with a bang, but with a whimper.

If you are reading the blogs as they are officially published, it will have been over a week since the Federal Reserve ended its policy of zero interest rates. (If you are reading this as a Patreon Blog from the Future, it will only have been a few days.)

The official announcement was made on December 16. The Federal Funds Target Rate will be raised from 0%-0.25% to 0.25%-0.5%. That one-quarter percentage point—itself no larger than the margin of error the Fed allots itself—will make all the difference.

As pointed out in the New York Times, this is the first time nominal interest rates have been raised in almost a decade. But the Fed had been promising it for some time, and thus a major reason they did it was to preserve their own credibility. They also say they think inflation is about to hit the 2% target, though it hasn’t yet (and I was never clear on why 2% was the target in the first place).

Actually, overall inflation is currently near zero. What is at 2% is what’s called “core inflation”, which excludes particularly volatile products such as oil and food. The idea is that we want to set monetary policy based upon long-run trends in the economy as a whole, not based upon sudden dips and surges in oil prices. But right now we are in the very odd scenario of the Fed raising interest rates in order to stop inflation even as the total amount most people need to spend to maintain their standard of living is the same as it was a year ago.
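To make the distinction concrete, here is a toy calculation of how a “core” measure works: you just recompute the weighted average price change after dropping the volatile categories. The expenditure weights and price changes below are invented for illustration; they are not actual CPI figures.

```python
# Toy illustration of headline vs. core inflation.
# The weights and price changes are invented, not actual CPI data.

categories = {
    # name: (expenditure weight, year-over-year price change)
    "food":    (0.15, -0.04),
    "energy":  (0.10, -0.20),
    "shelter": (0.35,  0.03),
    "other":   (0.40,  0.02),
}

def weighted_inflation(items):
    """Weighted average price change, renormalizing the weights."""
    total_weight = sum(w for w, _ in items.values())
    return sum(w * change for w, change in items.values()) / total_weight

headline = weighted_inflation(categories)
core = weighted_inflation({k: v for k, v in categories.items()
                           if k not in ("food", "energy")})
print(f"headline: {headline:.1%}, core: {core:.1%}")  # roughly -0.8% vs. 2.5% here
```

With these made-up numbers, cheap oil drags the headline figure below zero even though the core figure sits above 2%, which is qualitatively the situation the Fed was looking at.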

As MSNBC argues, it is essentially an announcement that the Second Depression is over and the economy has now returned to normal. Of course, simply announcing such a thing does not make it true.

Personally, I think this move is largely symbolic. The difference between 0% and 0.25% is unimportant for most practical purposes.

If you owe $100,000 over 30 years at 0% interest, you will pay $277.78 per month, totaling of course $100,000. If your interest rate were raised to 0.25% interest, you would instead owe $288.35 per month, totaling $103,807.28. Even over 30 years, that 0.25% interest raises your total expenditure by less than 4%.

Over shorter terms it’s even less important. If you owe $20,000 over 5 years at 0% interest, you will pay $333.33 per month totaling $20,000. At 0.25%, you would pay $335.46 per month totaling $20,127.34, a mere 0.6% more.
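If you want to check those figures yourself, here is a minimal sketch of the standard fixed-payment amortization formula, assuming the annual rate is compounded monthly (as in the examples above):

```python
# Minimal sketch of the fixed-payment amortization formula:
# M = P * r / (1 - (1 + r)**-n), with the 0% case handled separately.

def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment on a fully amortizing loan (monthly compounding)."""
    n = years * 12            # number of monthly payments
    r = annual_rate / 12      # monthly interest rate
    if r == 0:
        return principal / n  # at 0%, just divide the principal evenly
    return principal * r / (1 - (1 + r) ** -n)

for principal, rate, years in [(100_000, 0.0, 30), (100_000, 0.0025, 30),
                               (20_000, 0.0, 5), (20_000, 0.0025, 5)]:
    m = monthly_payment(principal, rate, years)
    print(f"${principal:,} at {rate:.2%} over {years} years: "
          f"${m:,.2f}/month, ${m * 12 * years:,.2f} total")
```

Run it and you get the same $277.78 versus $288.35 and $333.33 versus $335.46 monthly payments quoted above.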

Moreover, a bank that was willing to take out a loan at 0% will probably still be willing at 0.25%.

Where it would have the largest impact is in more exotic financial instruments, like zero-amortization or negative-amortization bonds. A zero-amortization bond at 0% is literally free money forever (assuming you can keep rolling it over). A zero-amortization bond at 0.25% means you must at least pay 0.25% of the money back each year. A negative-amortization bond at 0% makes no sense mathematically (somehow you pay back less than 0% at each payment?), while a negative-amortization bond at 0.25% only doesn’t make sense practically. If both zero and negative-amortization seem really bizarre and impossible to justify, that’s because they are. They should not exist. Most exotic financial instruments have no reason to exist, aside from the fact that they can be used to bamboozle people into giving money to the financial corporations that create them. (Which reminds me, I need to see The Big Short. But of course I have to see Star Wars: The Force Awakens first; one must have priorities.)
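To see why these instruments are strange, here is a hypothetical sketch of what happens to the outstanding balance when the payments only cover the interest, or less than the interest. The 5% rate and the payment sizes are made up purely for illustration.

```python
# Hypothetical illustration of zero vs. negative amortization.
# The 5% rate and payment sizes are made up for the example.

def balance_after(principal, annual_rate, annual_payment, years):
    """Outstanding balance after making the same payment every year."""
    balance = principal
    for _ in range(years):
        balance += balance * annual_rate   # interest accrues
        balance -= annual_payment          # then the payment is applied
    return balance

P, r = 100_000, 0.05
interest_only = P * r  # zero amortization: pay exactly the interest, never any principal

print(balance_after(P, r, interest_only, 10))        # ~100,000: the debt never shrinks
print(balance_after(P, r, 0.5 * interest_only, 10))  # ~131,445: the debt actually grows
```

Zero amortization means you can pay forever without ever owing less; negative amortization means you pay and still end up owing more.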

So, what will happen as a result of this change in interest rates? Probably not much. Inflation might go down a little—which means we might have overall deflation, and that would be bad—and the rate of increase in credit might drop slightly. In the worst-case scenario, unemployment starts to rise again, the Fed realizes their mistake, and interest rates get dropped back to zero.

I think it’s more instructive to look at why they did this—the symbolic significance behind it.

The zero lower bound is weird. It makes a lot of economists very uncomfortable. The usual rules for how monetary and fiscal policy work break down, because the equations run up against a constraint—a corner solution, more technically. Krugman often talks about how many of the usual ideas about how interest rates and government spending work collapse at the zero lower bound. We have models of this sort of thing that are pretty good, but they’re weird and counter-intuitive, so policymakers never seem to actually use them.

What is the zero lower bound, you ask? Exactly what it says on the tin. There is a lower bound on how low you can set an interest rate, and for all practical purposes that limit is zero. If you start trying to set an interest rate of -5%, people won’t be willing to loan out money and will instead hoard cash. (Interestingly, a central bank with a strong currency, such as that of the US, UK, or EU, can actually set small negative nominal interest rates—because people consider their bonds safer than cash, so they’ll pay for the safety. The ECB, Europe’s Fed, actually did so for a while.)

The zero lower bound actually applies to prices in general, not just interest rates. If a product is so worthless to you that you don’t even want it if it’s free, it’s very rare for anyone to actually pay you to take it—partly because there might be nothing to stop you from taking a huge amount of it and forcing them to pay you ridiculous amounts of money. “How much is this paperclip?” “-$0.75.” “I’ll have 50 billion, please.” In a few rare cases, they might pay you to take it, if the payment is less than what it would cost them to store and transport it themselves. Also, companies will give you things for free if they benefit from giving them to you—think ads and free samples. But basically, if people won’t even take something for free, that thing simply doesn’t get sold.

But if we are in a recession, we really don’t want loans to stop being made altogether. So if people are unwilling to take out loans at 0% interest, we’re in trouble. Generally what we have to do is rely on inflation to reduce the real value of money over time, thus creating a real interest rate that’s negative even though the nominal interest rate remains stuck at 0%. But what if inflation is very low? Then there’s nothing you can do except find a way to raise inflation or increase demand for credit. This means relying upon unconventional methods like quantitative easing (trying to cause inflation), or preferably using fiscal policy to spend a bunch of money and thereby increase demand for credit.
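The mechanism relied on here is just the Fisher relation between nominal rates, real rates, and inflation: roughly, the real rate is the nominal rate minus inflation. A tiny sketch, with illustrative numbers rather than actual Fed or CPI figures:

```python
# Fisher relation: (1 + real) = (1 + nominal) / (1 + inflation),
# which is approximately real = nominal - inflation for small rates.
# The rates below are illustrative, not actual data.

def real_rate(nominal, inflation):
    """Exact real interest rate implied by a nominal rate and inflation."""
    return (1 + nominal) / (1 + inflation) - 1

print(f"{real_rate(0.00, 0.02):+.2%}")  # 0% nominal, 2% inflation -> about -1.96% real
print(f"{real_rate(0.00, 0.00):+.2%}")  # 0% nominal, no inflation -> 0% real: truly stuck
```

With 2% inflation, a nominal rate stuck at zero still delivers a mildly negative real rate; with inflation near zero, even that escape hatch disappears.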

What the Fed is basically trying to do here is say that we are no longer in that bad situation. We can now set interest rates where they actually belong, rather than forcing them as low as they’ll go and hoping inflation will make up the difference.

It’s actually similar to how if you take a test and score 100%, there’s no way of knowing whether you just barely got 100%, or if you would have still done as well if the test were twice as hard—but if you score 99%, you actually scored 99% and would have done worse if the test were harder. In the former case you were up against a constraint; in the latter it’s your actual value. The Fed is essentially announcing that we really want interest rates near 0%, as opposed to being bound at 0%—and the way they do that is by setting a target just slightly above 0%.

So far, there doesn’t seem to have been much effect on markets. And frankly, that’s just what I’d expect.