The “productivity paradox”


Dec 10, JDN 2458098

Take a look at this graph of manufacturing output per worker-hour:

[Figure: Manufacturing output per worker-hour]

From 1988 to 2008, it was growing at a steady pace. In 2008 and 2009 it took a dip due to the Great Recession; no big surprise there. But then since 2012 it has been… completely flat. If we take this graph at face value, it would imply that manufacturing workers today can produce no more output than workers five years ago, and indeed only about 10% more than workers a decade ago. Whereas, a worker in 2008 was producing over 60% more than a worker in 1998, who was producing over 40% more than a worker in 1988.

Many economists call this the “productivity paradox”, and use it to argue that we don’t really need to worry about robots taking all our jobs any time soon. I think this view is mistaken.

The way we measure productivity is fundamentally wrongheaded, and is probably the sole cause of this “paradox”.

First of all, we use total hours scheduled to work, not total hours actually doing productive work. This is obviously much, much easier to measure, which is why we do it. But if you think for a moment about how the 40-hour workweek norm is going to clash with rapidly rising real productivity, it becomes apparent why this isn’t going to be a good measure.
When a worker finds a way to get done in 10 hours what used to take 40 hours, what does that worker’s boss do? Send them home after 10 hours because the job is done? Give them a bonus for their creativity? Hardly. That would be far too rational. They assign them more work, while paying them exactly the same. Recognizing this, what is such a worker to do? The obvious answer is to pretend to work the other 30 hours, while in fact doing something more pleasant than working.
And indeed, so-called “worker distraction” has been rapidly increasing. People are right to blame smartphones, I suppose, but not for the reasons they think. It’s not that smartphones are inherently distracting devices. It’s that smartphones are the cutting edge of a technological revolution that has made most of our work time unnecessary, so, thanks to our fundamentally defective management norms, they create overwhelming incentives to waste time at work rather than get drenched in extra tasks for no extra pay.
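
To see how this breaks the official numbers, here is a toy illustration (with made-up figures, not real data): measured productivity divides output by hours scheduled, so it can look flat even while output per hour actually worked quadruples.

```python
# Toy illustration with made-up numbers: measured productivity divides output
# by hours *scheduled*, so it looks flat even when output per hour actually
# worked rises dramatically.

def measured_productivity(output, hours_scheduled):
    return output / hours_scheduled

def true_productivity(output, hours_actually_worked):
    return output / hours_actually_worked

# A worker produces 40 "units" either way, but it takes 40 real hours of work
# in 2012 and only 10 real hours in 2017; the other 30 hours are spent
# pretending to work.
output = 40
print(measured_productivity(output, 40))  # 2012: 1.0 unit per scheduled hour
print(measured_productivity(output, 40))  # 2017: still 1.0 -- looks like stagnation
print(true_productivity(output, 40))      # 2012: 1.0 unit per hour actually worked
print(true_productivity(output, 10))      # 2017: 4.0 -- the real gain is hidden
```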

That would probably be enough to explain the “paradox” by itself, but there is a deeper reason that in the long run is even stronger. It has to do with the way we measure “output”.

It might surprise you to learn that economists almost never consider output in terms of the actual number of cars produced, buildings constructed, songs written, or software packages developed. The standard measures of output are all in the form of so-called “real GDP”; that is, the dollar value of output produced.

They do adjust for inflation using price indexes, but as I’ll show in a moment, this still creates a fundamentally biased picture of the productivity dynamics.

Consider a world with only three industries: Housing, Food, and Music.

Productivity in Housing doesn’t change at all. Producing a house cost 10,000 worker-hours in 1950, and cost 10,000 worker-hours in 2000. Nominal price of houses has rapidly increased, from $10,000 in 1950 to $200,000 in 2000.

Productivity in Food rises moderately fast. Producing 1,000 meals cost 1,000 worker-hours in 1950, and cost 100 worker-hours in 2000. Nominal price of food has increased slowly, from $1,000 per 1,000 meals in 1950 to $5,000 per 1,000 meals in 2000.

Productivity in Music rises extremely fast. Producing 1,000 performances cost 10,000 worker-hours in 1950, and cost 1 worker-hour in 2000. Nominal price of music has collapsed, from $100,000 per 1,000 performances in 1950 to $1,000 per 1,000 performances in 2000.

This is of course an extremely stylized version of what has actually happened: Housing has gotten way more expensive, food has stayed about the same in price while farm employment has plummeted, and the rise of digital music has brought about a new Renaissance in actual music production and listening while revenue for the music industry has collapsed. There is a very nice Vox article on the “productivity paradox” showing a graph of how prices have changed in different industries.

How would productivity appear in the world I’ve just described, by standard measures? Well, to answer that I actually need to say something about how consumers substitute across industries. But I think I’ll be forgiven in this case for assuming that there is no substitution whatsoever; you can’t eat music or live in a burrito. There’s also a clear Maslow hierarchy here: They say that man cannot live by bread alone, but I think living by Led Zeppelin alone is even harder.

Consumers will therefore choose like this: Over 10 years, buy 1 house, 10,000 meals, and as many performances as you can afford after that. Further suppose that each person had $2,100 per year to spend in 1940-1950, and $50,000 per year to spend in 1990-2000. (This is approximately true for actual nominal US GDP per capita.)

1940-1950:

Total funds: $21,000

1 house = $10,000

10,000 meals = $10,000

Remaining funds: $1,000

Performances purchased: 10

1990-2000:

Total funds: $500,000

1 house = $200,000

10,000 meals = $50,000

Remaining funds: $250,000

Performances purchased: 250,000

(Do you really listen to this much music? 250,000 performances over 10 years is about 70 songs per day. If each song is 3 minutes, that’s only about 3.5 hours per day. If you listen to music while you work or watch a couple of movies with musical scores, yes, you really do listen to this much music! The unrealistic part is assuming that people in 1950 listen to so little, given that radio was already widespread. But if you think of music as standing in for all media, the general trend of being able to consume vastly more media in the digital age is clearly correct.)
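
For anyone who wants to check the arithmetic, here is a minimal sketch of the spending rule above, using the stylized decade budgets and per-unit prices from this example (house first, then 10,000 meals, then music with whatever is left):

```python
# A minimal sketch of the lexicographic spending rule described above, using
# the stylized decade budgets and per-unit prices from this example.

def decade_purchases(budget, house_price, meal_price, performance_price):
    budget -= house_price               # buy 1 house first
    budget -= 10_000 * meal_price       # then 10,000 meals
    return budget // performance_price  # spend whatever is left on music

# 1940-1950: $21,000 budget; house $10,000; meals $1 each; performances $100 each
print(decade_purchases(21_000, 10_000, 1, 100))   # -> 10 performances

# 1990-2000: $500,000 budget; house $200,000; meals $5 each; performances $1 each
print(decade_purchases(500_000, 200_000, 5, 1))   # -> 250,000 performances
```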

Now consider how we would compute a price index for each time period. We would construct a basket of goods and determine the price of that basket in each time period, then adjust prices until that basket has a constant price.
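
Before plugging in the numbers, here is a minimal sketch of the mechanics of that kind of fixed-basket (Laspeyres-style) index; the goods, quantities, and prices below are purely hypothetical, just to show how the calculation works:

```python
# A minimal sketch of a fixed-basket (Laspeyres-style) price index: price the
# same base-period basket in both periods and take the ratio. The goods,
# quantities, and prices here are hypothetical, not the ones from this post.

def basket_cost(basket, prices):
    return sum(quantity * prices[good] for good, quantity in basket.items())

basket = {"bread": 100, "rent": 12}         # base-period quantities
prices_then = {"bread": 0.15, "rent": 50}   # base-period prices
prices_now = {"bread": 2.00, "rent": 600}   # later prices

index = basket_cost(basket, prices_now) / basket_cost(basket, prices_then)
print(round(index, 2))  # ~12: divide later nominal figures by this to get "real" ones
```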

Here, the basket would probably be what people bought in 1940-1950: 1 house, 10,000 meals, and 400 music performances.

In 1950, this basket cost $10,000+$10,000+$100 = $21,000.

In 2000, this basket cost $200,000+$50,000+$400 = $150,400.

This means that our inflation adjustment is $150,400/$21,000 = 7 to 1. This means that we would estimate the real per-capita GDP in 1950 at about $14,700. And indeed, that’s about the actual estimate of real per-capita GDP in 1950.

So, what would we say about productivity?

Sales of houses in 1950 were 1 per person, costing 10,000 worker hours.

Sales of food in 1950 were 10,000 per person, costing 10,000 worker hours.

Sales of music in 1950 were 400 per person, costing 4,000 worker hours.

Worker hours per person are therefore 24,000.

Sales of houses in 2000 were 1 per person, costing 10,000 worker hours.

Sales of food in 2000 were 10,000 per person, costing 1,000 worker hours.

Sales of music in 2000 were 250,000 per person, costing 25,000 worker hours.

Worker hours per person are therefore 36,000.

Therefore we would estimate that productivity rose from $14,700/24,000 = $0.61 per worker-hour to $50,000/36,000 = $1.40 per worker-hour. This is an annual growth rate of about 1.7%, which is, again, pretty close to the actual estimate of productivity growth. For such a highly stylized model, my figures are doing remarkably well. (Honestly, better than I thought they would!)

But think about how much actual productivity rose, at least in the industries where it did.

We produce 10 times as much food per worker hour after 50 years, which is an annual growth rate of 4.7%, or three times the estimated growth rate.

We produce 10,000 times as much music per worker hour after 50 years, which is an annual growth rate of over 20%, or almost twelve times the estimated growth rate.
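
Those annualized figures are just the 50th root of the total factor; a quick check:

```python
# Quick check of the growth-rate arithmetic above: convert a total
# productivity factor over 50 years into an annualized growth rate.

def annual_growth_rate(total_factor, years):
    return total_factor ** (1 / years) - 1

print(f"{annual_growth_rate(10, 50):.1%}")      # food: 10x over 50 years -> ~4.7%
print(f"{annual_growth_rate(10_000, 50):.1%}")  # music: 10,000x over 50 years -> ~20.2%
```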

Moreover, should music producers be worried about losing their jobs to automation? Absolutely! People simply won’t be able to listen to much more music than they already are, so any continued increases in music productivity are going to make musicians lose jobs. And that was already allowing for music consumption to increase by a factor of over 600.

Of course, the real world has a lot more industries than this, and everything is a lot more complicated. We do actually substitute across some of those industries, unlike in this model.

But I hope I’ve gotten at least the basic point across: when technological progress makes things drastically cheaper, as it often does, simply adjusting for inflation doesn’t do the job. One dollar of music today isn’t the same thing as one dollar of music a century ago, even if you inflation-adjust their dollars to match ours. We ought to be measuring in hours of music; an hour of music is much the same thing as an hour of music a century ago.

And likewise, that secretary/weather forecaster/news reporter/accountant/musician/filmmaker in your pocket that you call a “smartphone” really ought to be counted as more than just a simple inflation adjustment on its market price. The fact that it is mind-bogglingly cheaper to get these services than it used to be is the technological progress we care about; it’s not some statistical artifact to be removed by proper measurement.

Combine that with actually measuring the hours of real, productive work, and I think you’ll find that productivity is still rising quite rapidly, and that we should still be worried about what automation is going to do to our jobs.

Why are movies so expensive? Did they used to be? Do they need to be?

August 10, JDN 2457611

One of the better arguments in favor of copyright involves film production. Films are extraordinarily expensive to produce; without copyright, how would they recover their costs? $100 million is a common budget these days.

It is commonly thought that film budgets used to be much smaller, so I looked at some data from The Numbers on over 5,000 films going back to 1915, and inflation-adjusted the budgets using the CPI. (I learned some interesting LibreOffice Calc functions in the process of merging the data; also LibreOffice crashed a few times trying to make the graphs, so that’s fun. I finally realized it had copied over all 10,000 hyperlinks from the HTML data set.)
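
The CPI adjustment itself is simple; here is a minimal sketch, using approximate CPI-U annual averages and, as an illustration, the commonly cited $3.85 million budget of Gone with the Wind (which lands near the ~$66 million figure quoted later):

```python
# A minimal sketch of the CPI adjustment: scale each nominal budget by
# (CPI in the target year / CPI in the release year). The CPI levels below
# are approximate CPI-U annual averages.

def to_real_dollars(nominal, cpi_release_year, cpi_target_year):
    return nominal * cpi_target_year / cpi_release_year

cpi = {1939: 13.9, 2015: 237.0}
gwtw_budget_1939 = 3_850_000  # commonly cited Gone with the Wind budget
print(round(to_real_dollars(gwtw_budget_1939, cpi[1939], cpi[2015])))  # ~ $66 million
```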

If you just look at the nominal figures, there does seem to be some sort of upward trend:

[Figure: Movie budgets, nominal dollars]

But once you do the proper inflation adjustment, this trend basically disappears:

[Figure: Movie budgets, inflation-adjusted]

In real terms, the grosses of some early movies are quite large. Adjusted to 2015 dollars, Gone with the Wind grossed $6.659 billion—still the highest ever. In 1937, Snow White and the Seven Dwarfs grossed over $3.043 billion in 2015 dollars. In 1950, Cinderella made it to $2.592 billion in today’s money. (Horrifyingly, The Birth of a Nation grossed $258 million in today’s money.)

Nor is there any evidence that movie production has gotten more expensive. The linear trend is actually negative, though with a very small slope that is not statistically significant. On average, the real budget of a movie falls by $1752 per year.

[Figure: Movie budgets with linear trend]

While the two most expensive movies came out recently (Pirates of the Caribbean: At World’s End and Avatar), the third most expensive was released in 1963 (Cleopatra). The really hugely expensive movies do seem to cluster relatively recently—but then so do the really cheap films, some of which have budgets under $10,000. It may just be that more movies are produced in general, and overall the cost of producing a film doesn’t seem to have changed in real terms. The best return on investment is My Date with Drew, released in 2005, which had a budget of $1,100 but grossed $181,000, giving it an ROI of 16,358%. The highest real profit was of course Gone with the Wind, which made an astonishing $6.592 billion, though Titanic, Avatar, Aliens and Terminator 2 combined actually beat it with a total profit of $6.651 billion, which may explain why James Cameron can now basically make any movie he wants and already has four sequels lined up for Avatar.

The biggest real loss was 1970’s Waterloo, which made back only $18 million of its $153 million budget, losing $135 million and having an ROI of -87.7%. This was not quite as bad an ROI as 2002’s The Adventures of Pluto Nash, which had an ROI of -92.91%.
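
For clarity, ROI here is just profit relative to budget; a sketch using the rounded figures quoted above (so the results differ slightly from the exact ones in the data):

```python
# ROI as used in these comparisons: (gross - budget) / budget, computed with
# the rounded figures quoted above, so results differ slightly from the exact data.

def roi(gross, budget):
    return (gross - budget) / budget

print(f"{roi(181_000, 1_100):.0%}")           # My Date with Drew: ~16,355%
print(f"{roi(18_000_000, 153_000_000):.1%}")  # Waterloo, in real dollars: ~ -88%
```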

But making movies has always been expensive, at least for big blockbusters. (The $8,900 budget of Primer is something I could probably put on credit cards if I had to.) It’s nothing new to spend $100 million in today’s money.

When considering the ethics and economics of copyright, it’s useful to think about what Michele Boldrin calls “pizzaright”: you can’t copy my pizza, or you are guilty of pizzaright infringement. Many of the arguments for copyright are so general—this is a valuable service, it carries some risk of failure, it wouldn’t be as profitable without the monopoly, so fewer companies might enter the business—that they would also apply to pizza. Yet somehow nobody thinks that pizzaright should be a thing. If there is a justification for copyrights, it must come from the special circumstances of works of art (broadly conceived, including writing, film, music, etc.), and the only one that really seems strong enough is the high upfront cost of certain types of art—and indeed, the only ones that really seem to fit that are films and video games.

Painting, writing, and music just aren’t that expensive. People are willing to create these things for very little money, and can do so more or less on their own, especially nowadays. If the prices are reasonable, people will still want to buy from the creators directly—and sure enough, widespread music piracy hasn’t killed music, it has only killed the corporate record industry. But movies and video games really can easily cost $100 million to make, so there’s a serious concern of what might happen if they couldn’t use copyright to recover their costs.

The question for me is, did we really need copyright to fund these budgets?

Let’s take a look at how Star Wars made its money. $6.249 billion came from box office revenue, while $873 million came from VHS and DVD sales; those would probably be substantially reduced if not for copyright. But even before The Force Awakens was released, the Star Wars franchise had already made some $12 billion in toy sales alone. “Merchandizing, merchandizing, where the real money from the movie is made!”

Did they need intellectual property to do that? Well, yes—but all they needed was trademark. Defenders of “intellectual property” like to use that term because it elides fundamental distinctions between the three types: trademark, copyright, and patent.
Trademark is unproblematic. You can’t lie about who you are or where your products came from when you’re selling something. So if you are claiming to sell official Star Wars merchandise, you’d better be selling official Star Wars merchandise, and trademark protects that.

Copyright is problematic, but may be necessary in some cases. Copyright protects the content of the movies from being copied or modified without Lucasfilm’s permission. So now rather than simply protecting against the claim that you represent Lucasfilm, we are protecting against people buying the movie, copying it, and reselling the copies—even though that is a real economic service they are providing, and is in no way fraudulent as long as they are clear about the fact that they made the copies.

Patent is, frankly, ridiculous. The concept of “owning” ideas is absurd. You came up with a good way to do something? Great! Go do it then. But don’t expect other people to pay you simply for the privilege of hearing your good idea. Of course I want to financially support researchers, but there are much, much better ways of doing that, like government grants and universities. Patents only raise revenue for research that sells, first of all—so vaccines and basic research can’t be funded that way, even though they are the most important research by far. Furthermore, there’s nothing to guarantee that the person who actually invented the idea is the one who makes the profit from it—and in our current system where corporations can own patents (and do own almost 90% of patents), it typically isn’t. Even if it were, the whole concept of owning ideas is nonsensical, and it has driven us to the insane extremes of corporations owning patents on human DNA. The best argument I’ve heard for patents is that they are a second-best solution that incentivizes transparency and keeps trade secrets from becoming commonplace; but in that case they should definitely be short, and we should never extend them. Companies should not be able to make basically cosmetic modifications and renew the patent, and expiring patents should be a cause for celebration.

Hollywood actually formed in Los Angeles precisely to escape patents, but of course they love copyright and trademark. So, do they like “intellectual property”? There’s no real answer to that, because the term lumps together things they feel very differently about.

Could blockbuster films be produced profitably using only trademark, in the absence of copyright?

Clearly Star Wars would have still turned a profit. But not every movie can do such merchandizing, and when movies start getting written purely for merchandizing it can be painful to watch.

The real question is whether a film like Gone with the Wind or Avatar could still be made, and make a reasonable profit (if a much smaller one).

Well, there’s always porn. Porn raises over $400 million per year in revenue, despite having essentially unenforceable copyright. They too are outraged over piracy, yet somehow I don’t think porn will ever cease to exist. A top porn star can make over $200,000 per year. Then there are, of course, independent films that never turn a profit at all, yet people keep making them.

So clearly it is possible to make some films without copyright protection, and something like Gone with the Wind needn’t cost $100 million to make. The only reason it cost as much as it did (about $66 million in today’s money) was that movie stars could command huge winner-takes-all salaries, which would no longer be true if copyright went away. And don’t tell me people wouldn’t be willing to be movie stars for $200,000 a year instead of $1.8 million (what Clark Gable made for Gone with the Wind, adjusted for inflation).

Yet some Hollywood blockbuster budgets are genuinely necessary. The real question is whether we could have Avatar without copyright. Not having films like Avatar is something I would count as a substantial loss to our society; we would lose important pieces of our art and culture.

So, where did all that money go? I don’t have a breakdown for Avatar in particular, but I do have a full budget breakdown for The Village. Of its $71.7 million, $33.5 million was “above the line”, which basically means the winner-takes-all superstar salaries for the director, producer, and cast. That amount could be dramatically reduced with no real cost to society—let’s drop it to, say, $3 million. Shooting costs were $28.8 million, post-production was $8.4 million, and miscellaneous expenses added about $1 million; all of those would be much harder to reduce (they mainly go to technical staff who make reasonable salaries, not to superstars), so let’s assume the full amount is necessary. That’s about $38 million in real cost to produce. Avatar had a lot more (and better) post-production, so let’s go ahead and multiply the post-production budget by an order of magnitude to $84 million. Our new total budget is $116.8 million.
That sounds like a lot, and it is; but this could be made back without copyright. Avatar sold over 14.5 million DVDs and over 8 million Blu-Rays. Conservatively assuming that the price elasticity of demand is zero (which is ridiculous—assuming the monopoly pricing is optimal it should be -1), if those DVDs were sold for $2 each and the Blu-Rays were sold for $5 each, with 50% of those prices being profit, this would yield a total profit of $14.5 million from DVDs and $20 million from Blu-Rays. That’s already $34.5 million. With realistic assumptions about elasticity of demand, cutting the prices this much (DVDs down from an average of $16, Blu-Rays down from an average of $20) would multiply the number of DVDs sold by at least 5 and the number of Blu-Rays sold by at least 3, which would get us all the way up to $132 million—enough to cover our new budget. (Of course this is much less than they actually made, which is why they set the prices they did—but that doesn’t mean it’s optimal from society’s perspective.)
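
Here is the back-of-the-envelope arithmetic from the last two paragraphs in one place; every quantity, price, and margin is an assumption stated in the text, not real accounting data:

```python
# The back-of-the-envelope arithmetic from the last two paragraphs. Every
# quantity, price, and margin here is an assumption stated in the text.

# Hypothetical no-copyright budget: cut "above the line" to $3M, keep shooting
# and miscellaneous costs from The Village, scale post-production up 10x for Avatar.
budget = 3.0 + 28.8 + 8.4 * 10 + 1.0    # millions of dollars
print(budget)                           # ~116.8

# Disc profits at $2 DVDs / $5 Blu-rays with a 50% margin: first assuming
# sales stay flat, then assuming they rise 5x and 3x respectively.
dvd_units, bluray_units = 14.5, 8.0     # millions of discs actually sold
flat = dvd_units * 2 * 0.5 + bluray_units * 5 * 0.5
elastic = (dvd_units * 5) * 2 * 0.5 + (bluray_units * 3) * 5 * 0.5
print(flat)     # ~34.5 million dollars of profit
print(elastic)  # ~132.5 million -- enough to cover the budget above
```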

But okay, suppose I’m wrong about the elasticity, and dropping the price from $16 to $2 for a DVD somehow wouldn’t actually increase the number purchased. What other sources of revenue would they have? Well, box office tickets would still be a thing. They’d have to come down in price, but given the high-quality high-fidelity versions that cinemas require—making them quite hard to pirate—they would still get decent money from each cinema. Let’s say the price drops by 90%—all cinemas are now $1 cinemas!—and the sales again somehow remain exactly the same (rather than dramatically increasing as they actually would). What would Avatar’s worldwide box office gross be then? $278 million. They could give the DVDs away for free and still turn a profit.

And that’s Avatar, one of the most expensive movies ever made. By cutting out the winner-takes-all salaries and huge corporate profits, the budget can be substantially reduced, and then what real costs remain can be quite well covered by box office and DVD sales at reasonable prices. If you imagine that piracy somehow undercuts everything until you have to give away things for free, you might think this is impossible; but in reality pirated versions are of unreliable quality, people do want to support artists, and they are willing to pay something for their entertainment. They’re just tired of paying monopoly prices to benefit the shareholders of Viacom.

Would this end the era of the multi-millionaire movie star? Yes, I suppose it might. But it would also put about $10 billion per year back in the pockets of American consumers—and there’s little reason to think it would take away future Avatars, much less future Gone with the Winds.

The credit rating agencies to be worried about aren’t the ones you think

JDN 2457499

John Oliver is probably the best investigative journalist in America today, despite being neither American nor officially a journalist; last week he took on the subject of credit rating agencies, a classic example of his mantra “If you want to do something evil, put it inside something boring.” (note that it’s on HBO, so there is foul language):

As ever, his analysis of the subject is quite good—it’s absurd how much power these agencies have over our lives, and how little accountability they have for even assuring accuracy.

But I couldn’t help but feel that he was kind of missing the point. The credit rating agencies to really be worried about aren’t Equifax, Experian, and Transunion, the ones that assess credit ratings on individuals. They are Standard & Poor’s, Moody’s, and Fitch (which would have been even easier to skewer the way John Oliver did—perhaps we can get them confused with Standardly Poor, Moody, and Filch), the agencies which assess credit ratings on institutions.

These credit rating agencies have almost unimaginable power over our society. They are responsible for rating the risk of corporate bonds, certificates of deposit, stocks, derivatives such as mortgage-backed securities and collateralized debt obligations, and even municipal and government bonds.

S&P, Moody’s, and Fitch don’t just rate the creditworthiness of Goldman Sachs and J.P. Morgan Chase; they rate the creditworthiness of Detroit and Greece. (Indeed, they played an important role in the debt crisis of Greece, which I’ll talk about more in a later post.)

Moreover, they have been proven corrupt. It’s a matter of public record.

Standard and Poor’s is the worst; they have been successfully sued for fraud by small banks in Pennsylvania and by the State of New Jersey; they have also settled fraud cases with the Securities and Exchange Commission and the Department of Justice.

Moody’s has also been sued for fraud by the Department of Justice, and all three have been prosecuted for fraud by the State of New York.

But in fact this underestimates the corruption, because the worst conflicts of interest aren’t even illegal, or weren’t until Dodd-Frank was passed in 2010. The basic structure of this credit rating system is fundamentally broken; the agencies are private, for-profit corporations, and they get their revenue entirely from the banks that pay them to assess their risk. If they rate a bank’s asset as too risky, the bank stops paying them, and instead goes to another agency that will offer a higher rating—and simply the threat of doing so keeps them in line. As a result their ratings are basically uncorrelated with real risk—they failed to predict the collapse of Lehman Brothers or the failure of mortgage-backed CDOs, and they didn’t “predict” the European debt crisis so much as cause it by their panic.

Then of course there’s the fact that they are obviously an oligopoly, and furthermore one that is explicitly protected under US law. But then it dawns upon you: Wait… US law? US law decides the structure of credit rating agencies that set the bond rates of entire nations? Yes, that’s right. You’d think that such ratings would be set by the World Bank or something, but they’re not; in fact here’s a paper published by the World Bank in 2004 about how rather than reform our credit rating system, we should instead tell poor countries to reform themselves so they can better impress the private credit rating agencies.

In fact the whole concept of “sovereign debt risk” is fundamentally defective; a country that borrows in its own currency should never have to default on debt under any circumstances. National debt is almost nothing like personal or corporate debt. Their fears should be inflation and unemployment—their monetary policy should be set to minimize the harm of these two basic macroeconomic problems, understanding that policies which mitigate one may enflame the other. There is such a thing as bad fiscal policy, but it has nothing to do with “running out of money to pay your debt” unless you are forced to borrow in a currency you can’t control (as Greece is, because they are on the Euro—their debt is less like the US national debt and more like the debt of Puerto Rico, which is suffering an ongoing debt crisis you may not have heard about). If you borrow in your own currency, you should be worried about excessive borrowing creating inflation and devaluing your currency—but not about suddenly being unable to repay your creditors. The whole concept of giving a sovereign nation a credit rating makes no sense. You will be repaid on time and in full, in nominal terms; if inflation or currency exchange has devalued the currency you are repaid in, that’s sort of like a partial default, but it’s a fundamentally different kind of “default” than simply not paying back the money—and credit ratings have no way of capturing that difference.

In particular, it makes no sense for interest rates on government bonds to go up when a country is suffering some kind of macroeconomic problem.

The basic argument for why interest rates go up when risk is higher is that lenders expect to be paid more by those who do pay to compensate for what they lose from those who don’t pay. This is already much more problematic than most economists appreciate; I’ve been meaning to write a paper on how this system creates self-fulfilling prophecies of default and moral hazard from people who pay their debts being forced to subsidize those who don’t. But it at least makes some sense.
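
The textbook logic, as a sketch: set the interest rate so that the expected repayment, assuming a total loss on default, matches the risk-free return.

```python
# The standard textbook logic behind risk premia, as a sketch: choose r so that
# the expected repayment (1 - p) * (1 + r) equals the risk-free return (1 + r_f),
# assuming a total loss on default. Borrowers who repay subsidize those who don't.

def required_rate(default_probability, risk_free_rate):
    return (1 + risk_free_rate) / (1 - default_probability) - 1

print(f"{required_rate(0.00, 0.02):.1%}")  # no default risk: 2.0%
print(f"{required_rate(0.05, 0.02):.1%}")  # 5% default risk: ~7.4%
print(f"{required_rate(0.20, 0.02):.1%}")  # 20% default risk: ~27.5%
```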

But if a country is a “high risk” in the sense of macroeconomic instability undermining the real value of their debt, we want to ensure that they can restore macroeconomic stability. But we know that when there is a surge in interest rates on government bonds, instability gets worse, not better. Fiscal policy is suddenly shifted away from real production into higher debt payments, and this creates unemployment and makes the economic crisis worse. As Paul Krugman writes about frequently, these policies of “austerity” cause enormous damage to national economies and ultimately benefit no one because they destroy the source of wealth that would have been used to repay the debt.

By letting credit rating agencies decide the rates at which governments must borrow, we are effectively treating national governments as a special case of corporations. But corporations, by design, act for profit and can go bankrupt. National governments are supposed to act for the public good and persist indefinitely. We can’t simply let Greece fail as we might let a bank fail (and of course we’ve seen that there are serious downsides even to that). We have to restructure the sovereign debt system so that it benefits the development of nations rather than detracting from it. The first step is removing the power of private for-profit corporations in the US to decide the “creditworthiness” of entire countries. If we need to assess such risks at all, they should be done by international institutions like the UN or the World Bank.

But right now people are so stuck in the idea that national debt is basically the same as personal or corporate debt that they can’t even understand the problem. For after all, one must repay one’s debts.

Why is it so hard to get a job?

JDN 2457411

The United States is slowly dragging itself out of the Second Depression.

Unemployment fell from almost 10% to about 5%.

Core inflation has been kept between 0% and 2% most of the time.

Overall inflation has been within a reasonable range:

[Figure: US inflation rate]

Real GDP has returned to its normal growth trend, though with a permanent loss of output relative to what would have happened without the Great Recession.

[Figure: US real GDP growth]

Consumption spending is also back on trend, tracking GDP quite precisely.

The Federal Reserve even raised the federal funds interest rate above the zero lower bound, signaling a return to normal monetary policy. (As I argued previously, I’m pretty sure that was their main goal actually.)

Employment remains well below the pre-recession peak, but is now beginning to trend upward once more.

The only thing that hasn’t recovered is labor force participation, which continues to decline. This is how we can have unemployment go back to normal while employment remains depressed; people leave the labor force by retiring, going back to school, or simply giving up looking for work. By the formal definition, someone is only unemployed if they are actively seeking work. No, this is not new, and it is certainly not Obama rigging the numbers. This is how we have measured unemployment for decades.

Actually, it’s kind of the opposite: Since the Clinton administration we’ve also kept track of “broad unemployment”, which includes people who’ve given up looking for work or people who have some work but are trying to find more. But we can’t directly compare it to anything that happened before 1994, because the BLS didn’t keep track of it before then. All we can do is estimate based on what we did measure. Based on such estimation, it is likely that broad unemployment in the Great Depression may have gotten as high as 50%. (I’ve found that one of the best-fitting models is actually one of the simplest; assume that broad unemployment is 1.8 times narrow unemployment. This fits much better than you might think.)
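
As a sketch of that rule of thumb (the 25% Great Depression peak is the standard estimate of narrow unemployment, not something measured directly at the time):

```python
# The rule of thumb mentioned above: broad unemployment is roughly 1.8 times
# narrow unemployment. The 25% figure is the standard estimate of the Great
# Depression's narrow unemployment peak.

def broad_unemployment(narrow_rate, multiplier=1.8):
    return narrow_rate * multiplier

print(f"{broad_unemployment(0.25):.0%}")   # Great Depression peak: ~45%, near the 50% cited
print(f"{broad_unemployment(0.099):.0%}")  # 2009's 9.9% narrow unemployment: ~18%
```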

So, yes, we muddle our way through, and the economy eventually heals itself. We could have brought the economy back much sooner if we had better fiscal policy, but at least our monetary policy was good enough that we were spared the worst.

But I think most of us—especially in my generation—recognize that it is still really hard to get a job. Overall GDP is back to normal, and even unemployment looks all right; but why are so many people still out of work?

I have a hypothesis about this: I think a major part of why it is so hard to recover from recessions is that our system of hiring is terrible.

Contrary to popular belief, layoffs do not actually substantially increase during recessions. Quits are substantially reduced, because people are afraid to leave current jobs when they aren’t sure of getting new ones. As a result, rates of job separation actually go down in a recession. Job separation does predict recessions, but not in the way most people think. One of the things that made the Great Recession different from other recessions is that most layoffs were permanent, instead of temporary—but we’re still not sure exactly why.

Here, let me show you some graphs from the BLS.

This graph shows job openings from 2005 to 2015:

[Figure: Job openings, 2005 to 2015]

This graph shows hires from 2005 to 2015:

[Figure: Hires, 2005 to 2015]

Both of those show the pattern you’d expect, with openings and hires plummeting in the Great Recession.

But check out this graph, of job separations from 2005 to 2015:

[Figure: Job separations, 2005 to 2015]

Same pattern!

Unemployment in the Second Depression wasn’t caused by a lot of people losing jobs. It was caused by a lot of people not getting jobs—either after losing previous ones, or after graduating from school. There weren’t enough openings, and even when there were openings there weren’t enough hires.

Part of the problem is obviously just the business cycle itself. Spending drops because of a financial crisis, then businesses stop hiring people because they don’t project enough sales to justify it; then spending drops even further because people don’t have jobs, and we get caught in a vicious cycle.

But we are now recovering from the cyclical downturn; spending and GDP are back to their normal trend. Yet the jobs never came back. Something is wrong with our hiring system.

So what’s wrong with our hiring system? Probably a lot of things, but here’s one that’s been particularly bothering me for a long time.
As any job search advisor will tell you, networking is essential for career success.

There are so many different places you can hear this advice, it honestly gets tiring.

But stop and think for a moment about what that means. One of the most important determinants of what job you will get is… what people you know?

It’s not what you are best at doing, as it would be if the economy were optimally efficient.
It’s not even what you have credentials for, as we might expect as a second-best solution.

It’s not even how much money you already have, though that certainly is a major factor as well.

It’s what people you know.

Now, I realize, this is not entirely beyond your control. If you actively participate in your community, attend conferences in your field, and so on, you can establish new contacts and expand your network. A major part of the benefit of going to a good college is actually the people you meet there.

But a good portion of your social network is more or less beyond your control, and above all, says almost nothing about your actual qualifications for any particular job.

There are certain jobs, such as marketing, that actually directly relate to your ability to establish rapport and build weak relationships rapidly. These are a tiny minority. (Actually, most of them are the sort of job that I’m not even sure needs to exist.)

For the vast majority of jobs, your social skills are a tiny, almost irrelevant part of the actual skill set needed to do the job well. This is true of jobs from writing science fiction to teaching calculus, from diagnosing cancer to flying airliners, from cleaning up garbage to designing spacecraft. Social skills are rarely harmful, and even often provide some benefit, but if you need a quantum physicist, you should choose the recluse who can write down the Dirac equation by heart over the well-connected community leader who doesn’t know what an integral is.

At the very least, it strains credibility to suggest that social skills are so important for every job in the world that they should be one of the defining factors in who gets hired. And make no mistake: Networking is as beneficial for landing a job at a local bowling alley as it is for becoming Chair of the Federal Reserve. Indeed, for many entry-level positions networking is literally all that matters, while advanced positions at least exclude candidates who don’t have certain necessary credentials, and then make the decision based upon who knows whom.

Yet, if networking is so inefficient, why do we keep using it?

I can think of a couple reasons.

The first reason is that this is how we’ve always done it. Indeed, networking strongly pre-dates capitalism or even money; in ancient tribal societies there were certainly jobs to assign people to: who will gather berries, who will build the huts, who will lead the hunt. But there were no colleges, no certifications, no resumes—there was only your position in the social structure of the tribe. I think most people simply automatically default to a networking-based system without even thinking about it; it’s just the instinctual System 1 heuristic.

One of the few things I really liked about Debt: The First 5000 Years was the discussion of how similar the behavior of modern CEOs is to that of ancient tribal chieftains, for reasons that make absolutely no sense in terms of neoclassical economic efficiency—but perfect sense in light of human evolution. I wish Graeber had spent more time on that, instead of many of these long digressions about international debt policy that he clearly does not understand.

But there is a second reason as well, a better reason, a reason that we can’t simply give up on networking entirely.

The problem is that many important skills are very difficult to measure.

College degrees do a decent job of assessing our raw IQ, our willingness to persevere on difficult tasks, and our knowledge of the basic facts of a discipline (as well as a fantastic job of assessing our ability to pass standardized tests!). But when you think about the skills that really make a good physicist, a good economist, a good anthropologist, a good lawyer, or a good doctor—they really aren’t captured by any of the quantitative metrics that a college degree provides. Your capacity for creative problem-solving, your willingness to treat others with respect and dignity; these things don’t appear in a GPA.

This is especially true in research: The degree tells how good you are at doing the parts of the discipline that have already been done—but what we really want to know is how good you’ll be at doing the parts that haven’t been done yet.

Nor are skills precisely aligned with the content of a resume; the best predictor of doing something well may in fact be whether you have done so in the past—but how can you get experience if you can’t get a job without experience?

These so-called “soft skills” are difficult to measure—but not impossible. Basically the only reliable measurement mechanisms we have require knowing and working with someone for a long span of time. You can’t read it off a resume, you can’t see it in an interview (interviews are actually a horribly biased hiring mechanism, particularly biased against women). In effect, the only way to really know if someone will be good at a job is to work with them at that job for awhile.

There’s a fundamental information problem here I’ve never quite been able to resolve. It pops up in a few other contexts as well: How do you know whether a novel is worth reading without reading the novel? How do you know whether a film is worth watching without watching the film? When the information about the quality of something can only be determined by paying the cost of purchasing it, there is basically no way of assessing the quality of things before we purchase them.

Networking is an attempt to get around this problem. To decide whether to read a novel, ask someone who has read it. To decide whether to watch a film, ask someone who has watched it. To decide whether to hire someone, ask someone who has worked with them.

The problem is that this is such a weak measure that it’s not much better than no measure at all. I often wonder what would happen if businesses were required to hire people based entirely on resumes, with no interviews, no recommendation letters, and any personal contacts treated as conflicts of interest rather than useful networking opportunities—a world where the only thing we use to decide whether to hire someone is their documented qualifications. Could it herald a golden age of new economic efficiency and job fulfillment? Or would it result in widespread incompetence and catastrophic collapse? I honestly cannot say.

Thus ends our zero-lower-bound interest rate policy

JDN 2457383

Not with a bang, but with a whimper.

If you are reading the blogs as they are officially published, it will have been over a week since the Federal Reserve ended its policy of zero interest rates. (If you are reading this as a Patreon Blog from the Future, it will only have been a few days.)

The official announcement was made on December 16. The Federal Funds Target Rate will be raised from 0%-0.25% to 0.25%-0.5%. That one-quarter percentage point—itself no larger than the margin of error the Fed allots itself—will make all the difference.

As pointed out in the New York Times, this is the first time nominal interest rates have been raised in almost a decade. But the Fed had been promising it for some time, and thus a major reason they did it was to preserve their own credibility. They also say they think inflation is about to hit the 2% target, though it hasn’t yet (and I was never clear on why 2% was the target in the first place).

Actually, overall inflation is currently near zero. What is at 2% is what’s called “core inflation”, which excludes particularly volatile products such as oil and food. The idea is that we want to set monetary policy based upon long-run trends in the economy as a whole, not based upon sudden dips and surges in oil prices. But right now we are in the very odd scenario of the Fed raising interest rates in order to stop inflation even as the total amount most people need to spend to maintain their standard of living is the same as it was a year ago.

As MSNBC argues, it is essentially an announcement that the Second Depression is over and the economy has now returned to normal. Of course, simply announcing such a thing does not make it true.

Personally, I think this move is largely symbolic. The difference between 0% and 0.25% is unimportant for most practical purposes.

If you owe $100,000 over 30 years at 0% interest, you will pay $277.78 per month, totaling of course $100,000. If your interest rate were raised to 0.25% interest, you would instead owe $288.35 per month, totaling $103,807.28. Even over 30 years, that 0.25% interest raises your total expenditure by less than 4%.

Over shorter terms it’s even less important. If you owe $20,000 over 5 years at 0% interest, you will pay $333.33 per month totaling $20,000. At 0.25%, you would pay $335.46 per month totaling $20,127.34, a mere 0.6% more.
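
Those figures come from the standard fixed-payment amortization formula; this sketch reproduces them (up to a cent or two of rounding):

```python
# The standard fixed-payment amortization formula behind those figures:
# payment = P * r / (1 - (1 + r)^-n), where r is the monthly rate and n the
# number of monthly payments, with the 0% case handled separately.

def monthly_payment(principal, annual_rate, years):
    n = years * 12
    if annual_rate == 0:
        return principal / n
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -n)

for principal, rate, years in [(100_000, 0.0, 30), (100_000, 0.0025, 30),
                               (20_000, 0.0, 5), (20_000, 0.0025, 5)]:
    pay = monthly_payment(principal, rate, years)
    print(f"${pay:,.2f}/month, ${pay * years * 12:,.2f} total")
```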

Moreover, if a bank was willing to take out a loan at 0%, it will probably still be willing at 0.25%.

Where it would have the largest impact is in more exotic financial instruments, like zero-amortization or negative-amortization bonds. A zero-amortization bond at 0% is literally free money forever (assuming you can keep rolling it over). A zero-amortization bond at 0.25% means you must at least pay 0.25% of the money back each year. A negative-amortization bond at 0% makes no sense mathematically (somehow you pay back less than 0% at each payment?), while a negative-amortization bond at 0.25% only doesn’t make sense practically. If both zero and negative-amortization seem really bizarre and impossible to justify, that’s because they are. They should not exist. Most exotic financial instruments have no reason to exist, aside from the fact that they can be used to bamboozle people into giving money to the financial corporations that create them. (Which reminds me, I need to see The Big Short. But of course I have to see Star Wars: The Force Awakens first; one must have priorities.)

So, what will happen as a result of this change in interest rates? Probably not much. Inflation might go down a little—which means we might have overall deflation, and that would be bad—and the rate of increase in credit might drop slightly. In the worst-case scenario, unemployment starts to rise again, the Fed realizes their mistake, and interest rates will be dropped back to zero.

I think it’s more instructive to look at why they did this—the symbolic significance behind it.

The zero lower bound is weird. It makes a lot of economists very uncomfortable. The usual rules for how monetary and fiscal policy work break down, because the equation hits up against a constraint—a corner solution, more technically. Krugman often talks about how many of the usual ideas about how interest rates and government spending work collapse at the zero-lower-bound. We have models of this sort of thing that are pretty good, but they’re weird and counter-intuitive, so policymakers never seem to actually use them.

What is the zero lower bound, you ask? Exactly what it says on the tin. There is a lower bound on how low you can set an interest rate, and for all practical purposes that limit is zero. If you start trying to set an interest rate of -5%, people won’t be willing to loan out money and will instead hoard cash. (Interestingly, a central bank with a strong currency, such as that of the US, UK, or EU, can actually set small negative nominal interest rates—because people consider their bonds safer than cash, so they’ll pay for the safety. The ECB, Europe’s Fed, actually did so for awhile.)

The zero-lower-bound actually applies to prices in general, not just interest rates. If a product is so worthless to you that you don’t even want it if it’s free, it’s very rare for anyone to actually pay you to take it—partly because there might be nothing to stop you from taking a huge amount of it and forcing them to pay you ridiculous amounts of money. “How much is this paperclip?” “-$0.75.” “I’ll have 50 billion, please.” In a few rare cases, they might be able to pay you to take it—an amount that’s less than what it would cost to store and transport it. Also, if they benefit from giving it to you, companies will give you things for free—think ads and free samples. But basically, if people won’t even take something for free, that thing simply doesn’t get sold.

But if we are in a recession, we really don’t want loans to stop being made altogether. So if people are unwilling to take out loans at 0% interest, we’re in trouble. Generally what we have to do is rely on inflation to reduce the real value of money over time, thus creating a real interest rate that’s negative even though the nominal interest rate remains stuck at 0%. But what if inflation is very low? Then there’s nothing you can do except find a way to raise inflation or increase demand for credit. This means relying upon unconventional methods like quantitative easing (trying to cause inflation), or preferably using fiscal policy to spend a bunch of money and thereby increase demand for credit.

What the Fed is basically trying to do here is say that we are no longer in that bad situation. We can now set interest rates where they actually belong, rather than forcing them as low as they’ll go and hoping inflation will make up the difference.

It’s actually similar to how if you take a test and score 100%, there’s no way of knowing whether you just barely got 100%, or if you would have still done as well if the test were twice as hard—but if you score 99%, you actually scored 99% and would have done worse if the test were harder. In the former case you were up against a constraint; in the latter it’s your actual value. The Fed is essentially announcing that we really want interest rates near 0%, as opposed to being bound at 0%—and the way they do that is by setting a target just slightly above 0%.

So far, there doesn’t seem to have been much effect on markets. And frankly, that’s just what I’d expect.

What do we do about unemployment?

JDN 2457188 EDT 11:21.

Macroeconomics, particularly monetary policy, is primarily concerned with controlling two variables.

The first is inflation: We don’t want prices to rise too fast, or markets will become unstable. This is something we have managed fairly well; other than food and energy prices which are known to be more volatile, prices have grown at a rate between 1.5% and 2.5% per year for the last 10 years; even with food and energy included, inflation has stayed between -1.5% and +5.0%. After recovering from its peak near 15% in 1980, US inflation has stayed between -1.5% and +6.0% ever since. While the optimal rate of inflation is probably between 2.0% and 4.0%, anything above 0.0% and below 10.0% is probably fine, so the only significant failure of US inflation policy was the deflation in 2009.

The second is unemployment: We want enough jobs for everyone who wants to work, and preferably we also wouldn’t have underemployment (people who are only working part-time even though they’d prefer full-time) or discouraged workers (people who give up looking for jobs because they can’t find any, and aren’t counted as unemployed because they’re no longer looking for work). There’s also a tendency among economists to want “work incentives” that maximize the number of people who want to work, but I think these are wildly overrated. Work isn’t an end in itself; work is supposed to be creating products and providing services that make human lives better. The benefits of production have to be weighed against the costs of stress, exhaustion, and lost leisure time from working. Given that stress-related illnesses are some of the leading causes of death and disability in the United States, I don’t think that our problem is insufficient work incentives.

Unemployment is a problem that we have definitely not solved. Unemployment has bounced up and down between peaks and valleys, dropping as low as 4.0% and rising as high as 11.0% over the last 60 years. If 2009’s -1.5% deflation concerns you, then its 9.9% unemployment should concern you far more. Indeed, I’m not convinced that 5.0% is an acceptable “natural” rate of unemployment—that’s still millions of people who want work and can’t find it—but most economists would say that it is.

In fact, matters are worse than most people realize. Our unemployment rate has fallen back to a relatively normal 5.5%, as you can see in this graph (the blue line is unemployment, the red line is underemployment):

[Figure: Unemployment (blue) and underemployment (red) rates]

However, our employment rate never recovered from the Second Depression. As you can see in this graph, it fell from 63% to 58%, and has now only risen back to 59%:

[Figure: US employment rate]

How can unemployment fall without employment rising? The key is understanding how unemployment is calculated: It only counts people in the labor force. If people leave the labor force entirely, by retiring, going back to school, or simply giving up on finding work, they will no longer be counted as unemployed. The unemployment rate only counts people who want work but don’t have it, so as far as I’m concerned that figure should always be nearly zero. (Not quite zero since it takes some time to find a good fit; but maybe 1% at most. Any more than that and there is something wrong with our economic system.)
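
Here is how the two headline rates are defined, with illustrative numbers (not the real data) chosen to show how the unemployment rate can fall while the employment rate stays flat:

```python
# How the two headline rates are defined. The numbers are illustrative, chosen
# to show how unemployment can fall while the employment rate stays flat:
# job-seekers leave the labor force instead of finding work.

def rates(employed, unemployed, working_age_population):
    labor_force = employed + unemployed
    return unemployed / labor_force, employed / working_age_population

# Year 1: 100M working-age people, 58M employed, 6.4M actively looking for work.
print(rates(58, 6.4, 100))  # ~9.9% unemployment, 58% employment
# Year 2: still 58M employed, but 3.4M of the job-seekers have given up looking.
print(rates(58, 3.0, 100))  # ~4.9% unemployment, still 58% employment
```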

The optimal employment rate is not as obvious; it certainly isn’t 100%, as some people are too young, too old, or too disabled to be spending their time working. As automation improves, the number of workers necessary to produce any given product decreases, and eventually we may decide as a society that we are making enough products and most of us should be spending more of our time on other things, like spending time with family, creating works of art, or simply having fun. Maybe only a handful of people, the most driven or the most brilliant, will actually decide to work—and they will do because they want to, not because they have to. Indeed, the truly optimal employment rate might well be zero; think of The Culture, where there is no such concept as a “job”; there are things you do because you want to do them, or because they seem worthwhile, but there is none of this “working for pay” nonsense. We are not yet at the level of automation where this would be possible, but we are much closer than I think most people realize. Think about all of the various administrative and bureaucratic tasks that most people do the majority of the time, all the reports, all the meetings; why do they do that? Is it actually because the work is necessary, that the many levels of bureaucracy actually increase efficiency through specialization? Or is it simply because we’ve become so accustomed to the idea that people have to be working all the time in order to justify their existence? Is David Graeber (I reviewed one of his books previously) right that most jobs are actually (and this is a technical term), “bullshit jobs”? Once again, the problem doesn’t seem to be too few work incentives, but if anything too many.

Indeed, there is a basic fact about unemployment that has been hidden from most people. I’d normally say that this is accidental, that it’s too technical or obscure for most people to understand, but no, I think it has been actively concealed, or, since I guess the information has been publicly available, at least discussion of it has been actively avoided. It’s really not at all difficult to understand, yet it will fundamentally change the way you think about our unemployment problem. Here goes:

Since at least 2000 and probably since 1980 there have been more people looking for jobs than there have been jobs available.

The entire narrative of “people are lazy and don’t want to work” or “we need more work incentives” is just totally, totally wrong; people are desperate to find work, and there hasn’t been enough work for them to find for longer than I’ve been alive.

You can see this on the following graph, which is of what’s called the “Beveridge curve”; the horizontal axis is the unemployment rate, while the vertical axis is the rate of job vacancies. The red line across the diagonal is the point at which the two are even, and there are as many people looking for jobs as there are jobs to fill. Notice how the graph is always below the line. There have always been more unemployed people than jobs for them to fill, and at the worst of the Second Depression the ratio was 5 to 1.

[Figure: Beveridge curve]

Personally I believe that we should be substantially above the line, and in a truly thriving economy there should be employers desperately trying to find employees and willing to pay them whatever it takes. You shouldn’t have to send out 20 job applications to get hired; 20 companies should have to send offers to you. For the economy does not exist to serve corporations; it exists to serve people.

I can see two basic ways to solve this problem: You can either create more jobs, or you can get people to stop looking for work. That may be sort of obvious, but I think people usually forget the second option.

We definitely do talk a lot about “job creation”, though usually in a totally nonsensical way—somehow “Job Creator” has come to be a euphemism for “rich person”. In fact the best way to create jobs is to put money into the hands of people who will spend it. The more people spend their money, the more it flows through the economy and the more wealth we end up with overall. High rates of spending—a high marginal propensity to consume—can multiply the value of a dollar many times over.
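
This is the textbook Keynesian spending multiplier; the sketch below shows the formula, not an estimate of the actual US multiplier:

```python
# The textbook Keynesian spending multiplier, as a sketch: if people spend a
# fraction MPC of each extra dollar they receive, a dollar of new spending
# ultimately generates about 1 / (1 - MPC) dollars of total spending.

def spending_multiplier(mpc):
    return 1 / (1 - mpc)

print(spending_multiplier(0.5))  # 2.0
print(spending_multiplier(0.9))  # 10.0 -- money given to people who will spend it goes much further
```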

But there’s also something to be said for getting people to stop looking for work—the key is do it in the right way. They shouldn’t stop looking because they give up; they should stop looking because they don’t need to work. People should have their basic needs met even if they aren’t working for an employer; human beings have rights and dignity beyond their productivity in the market. Employers should have to make you a better offer than “you’ll be homeless if you don’t do this”.

Both of these goals can be accomplished simultaneously by one simple policy: Basic income.

It’s really amazing how many problems can be solved by a basic income; it’s more or less the amazing wonder policy that solves all the world’s economic problems simultaneously. Poverty? Gone. Unemployment? Decimated. Inequality? Contained. (The pilot studies of basic income in India have been successful beyond all but the wildest dreams; they eliminate poverty, improve health, increase entrepreneurial activity, even reduce gender inequality.) The one major problem basic income doesn’t solve is government debt (indeed it likely increases it, at least in the short run), but as I’ve already talked about, that problem is not nearly as bad as most people fear.

And once again I think I should head off accusations that advocating a basic income makes me some sort of far-left Communist radical; Friedrich Hayek supported a basic income.

Basic income would help with unemployment in a third way as well; one of the major reasons unemployment is so harmful is that people who are unemployed can’t provide for themselves or their families. So a basic income would reduce the number of people looking for jobs, increase the number of jobs available, and also make being unemployed less painful, all in one fell swoop. I doubt it would solve the problem of unemployment entirely, but I think it would make an enormous difference.

How following the crowd can doom us all

JDN 2457110 EDT 21:30

Humans are nothing if not social animals. We like to follow the crowd, do what everyone else is doing—and many of us will continue to do so even if our own behavior doesn’t make sense to us. There is a very famous experiment in cognitive science that demonstrates this vividly.

People are given a very simple task to perform several times: We show you line X and lines A, B, and C. Now tell us which of A, B or C is the same length as X. Couldn’t be easier, right? But there’s a trick: seven other people are in the same room performing the same experiment, and they all say that B is the same length as X, even though you can clearly see that A is the correct answer. Do you stick with what you know, or say what everyone else is saying? Typically, you say what everyone else is saying. Over 18 trials, 75% of people followed the crowd at least once, and some people followed the crowd every single time. Some people even began to doubt their own perception, wondering if B really was the right answer—there are four lights, anyone?

Given that our behavior can be distorted by others in such simple and obvious tasks, it should be no surprise that it can be distorted even more in complex and ambiguous tasks—like those involved in finance. If everyone is buying up Beanie Babies or Tweeter stock, maybe you should too, right? Can all those people be wrong?

In fact, matters are even worse with the stock market, because it is in a sense rational to buy into a bubble if you know that other people will as well. As long as you aren't the last to buy in, you can make a lot of money that way. In speculation, you try to predict the way that other people will cause prices to move and base your decisions around that—but then everyone else is doing the same thing. Keynes called it a “beauty contest”; apparently in his day it was common to have contests for picking the most beautiful photo—but how is beauty assessed? By how many people pick it! So you actually don't want to choose the one you think is most beautiful, you want to choose the one you think most people will think is the most beautiful—or the one you think most people will think most people will think….
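
The standard way game theorists formalize this regress is the “p-beauty contest”: guess some fraction p of the average guess. The little simulation below is my own illustration, not anything from Keynes, but it shows how each extra level of “what others think others think” pushes the consensus further from where anyone started.

```python
# In the p-beauty contest, everyone tries to guess p times the average guess.
# If everyone reasons one level deeper each round, the consensus shrinks by p each time.
def iterate_beauty_contest(initial_guess: float, p: float = 2 / 3, rounds: int = 10) -> list:
    guesses = [initial_guess]
    for _ in range(rounds):
        guesses.append(p * guesses[-1])
    return guesses

print([round(g, 2) for g in iterate_beauty_contest(50.0)])
# 50.0, 33.33, 22.22, ... -> the fully "rational" fixed point is zero,
# even though almost no real group of players ever lands there.
```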

Our herd behavior probably made a lot more sense when we evolved it millennia ago; when most threats are external and human beings don't have that much influence over their environment, the majority opinion is quite likely to be right, and can often give you an answer much faster than you could figure it out on your own. (If everyone else thinks a lion is hiding in the bushes, there's probably a lion hiding in the bushes—and if there is, the last thing you want is to be the only one who didn't run.) The problem arises when this tendency to follow the crowd feeds back on itself, and our behavior becomes driven not by the external reality but by an attempt to predict each other's predictions of each other's predictions. Yet this is exactly how financial markets are structured.

With this in mind, the surprise is not why markets are unstable—the surprise is why markets are ever stable. I think the main reason markets ever manage price stability is actually something most economists think of as a failure of markets: Price rigidity and so-called “menu costs”. If it's costly to change your price, you won't be constantly trying to adjust it to the mood of the hour (or the minute, or the microsecond), but instead trying to tie it to the fundamental value of what you're selling so that the price will stay close to that value for a long time to come. You may get shortages in times of high demand and gluts in times of low demand, but as long as those two things roughly balance out you'll leave the price where it is. But if you can instantly and costlessly change the price however you want, you can raise it when people seem particularly interested in buying and lower it when they don't, and then people can start trying to buy when your price is low and sell when it is high. If people were completely rational and had perfect information, this arbitrage would stabilize prices—but since they're not, arbitrage attempts can over- or under-compensate, and thus result in cyclical or even chaotic changes in prices.

Our herd behavior then makes this worse, as more people buying leads to, well, more people buying, and more people selling leads to more people selling. If there were no other causes of behavior, the result would be prices that explode outward exponentially; but even with other forces trying to counteract them, prices can move suddenly and unpredictably.
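
Here is a toy model of that feedback—entirely my own illustration, with made-up parameters: the price gets a slow pull back toward a fundamental value, plus a herd term that chases the most recent move. Turn the herd term up and the same market goes from stable, to oscillating, to exploding.

```python
# Toy price dynamics: a correction toward fundamental value plus a momentum/herd term
# that buys what just went up and sells what just went down. Parameters are made up.
def simulate_prices(momentum: float, reversion: float = 0.2,
                    fundamental: float = 100.0, steps: int = 50) -> list:
    prices = [fundamental, fundamental + 1.0]  # start with a small disturbance
    for _ in range(steps):
        last_move = prices[-1] - prices[-2]
        correction = reversion * (fundamental - prices[-1])
        prices.append(prices[-1] + correction + momentum * last_move)
    return prices

for m in (0.0, 0.7, 1.2):
    path = simulate_prices(momentum=m)
    print(f"momentum {m:.1f}: final price {path[-1]:7.1f}, "
          f"widest swing {max(path) - min(path):7.1f}")
```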

If most traders are irrational or under-informed while a handful are rational and well-informed, the latter can exploit the former for enormous amounts of money; this fact is often used to argue that irrational or under-informed traders will simply drop out, but it should only take you a few moments of thought to see why that isn't necessarily true. The incentive isn't just to be well-informed, but also to keep others from being well-informed. If everyone were rational and had perfect information, stock trading would be the most boring job in the world, because the prices would never change except perhaps to grow with the growth rate of the overall economy. Wall Street therefore has every incentive in the world not to let that happen. And now perhaps you can see why they are so opposed to regulations that would require them to improve transparency or slow down market changes. Without the ability to deceive people about the real value of assets or trigger irrational bouts of mass buying or selling, Wall Street would make little or no money at all. Not only are markets inherently unstable by themselves; we also have extremely powerful individuals and institutions who are driven to ensure that this instability is never corrected.

This is why as our markets have become ever more streamlined and interconnected, instead of becoming more efficient as expected, they have actually become more unstable. They were never stable—and the gold standard made that instability worse—but despite monetary policy that has provided us with very stable inflation in the prices of real goods, the prices of assets such as stocks and real estate have continued to fluctuate wildly. Real estate isn’t as bad as stocks, again because of price rigidity—houses rarely have their values re-assessed multiple times per year, let alone multiple times per second. But real estate markets are still unstable, because of so many people trying to speculate on them. We think of real estate as a good way to make money fast—and if you’re lucky, it can be. But in a rational and efficient market, real estate would be almost as boring as stock trading; your profits would be driven entirely by population growth (increasing the demand for land without changing the supply) and the value added in construction of buildings. In fact, the population growth effect should be sapped by a land tax, and then you should only make a profit if you actually build things. Simply owning land shouldn’t be a way of making money—and the reason for this should be obvious: You’re not actually doing anything. I don’t like patent rents very much, but at least inventing new technologies is actually beneficial for society. Owning land contributes absolutely nothing, and yet it has been one of the primary means of amassing wealth for centuries and continues to be today.

But (so-called) investors and the banks and hedge funds they control have little reason to change their ways, as long as the system is set up so that they can keep profiting from the instability that they foster. Particularly when we let them keep the profits when things go well, but immediately rush to bail them out when things go badly, they have basically no incentive at all not to take maximum risk and seek maximum instability. We need a fundamentally different outlook on the proper role and structure of finance in our economy.

Fortunately one is emerging, summarized in a slogan among economically-savvy liberals: Banking should be boring. (Elizabeth Warren has said this, as have Joseph Stiglitz and Paul Krugman.) And indeed it should, for all banks are supposed to be doing is lending money from people who have it and don’t need it to people who need it but don’t have it. They aren’t supposed to be making large profits of their own, because they aren’t the ones actually adding value to the economy. Indeed it was never quite clear to me why banks should be privatized in the first place, though I guess it makes more sense than, oh, say, prisons.

Unfortunately, the majority opinion right now, at least among those who make policy, seems to be that banks don’t need to be restructured or even placed on a tighter leash; no, they need to be set free so they can work their magic again. Even otherwise reasonable, intelligent people quickly become unshakeable ideologues when it comes to the idea of raising taxes or tightening regulations. And as much as I’d like to think that it’s just a small but powerful minority of people who thinks this way, I know full well that a large proportion of Americans believe in these views and intentionally elect politicians who will act upon them.

All the more reason to break from the crowd, don’t you think?

How is the economy doing?

JDN 2457033 EST 12:22.

Whenever you introduce yourself to someone as an economist, you will typically be asked a single question: “How is the economy doing?” I’ve already experienced this myself, and I don’t have very many dinner parties under my belt.

It’s an odd question, for a couple of reasons: First, I didn’t say I was a macroeconomic forecaster. That’s a very small branch of economics—even a small branch of macroeconomics. Second, it is widely recognized among economists that our forecasters just aren’t very good at what they do. But it is the sort of thing that pops into people’s minds when they hear the word “economist”, so we get asked it a lot.

Why are our forecasts so bad? Some argue that the task is just inherently too difficult due to the chaotic system involved; but they used to say that about weather forecasts, and yet with satellites and computer models our forecasts are now far more accurate than they were 20 years ago. Others have argued that “politics always dominates over economics”, as though politics were somehow a fundamentally separate thing, forever exogenous, a parameter in our models that cannot be predicted. I have a number of economic aphorisms I’m trying to popularize; the one for this occasion is: “Nothing is exogenous.” (Maybe fundamental constants of physics? But actually many physicists think that those constants can be derived from even more fundamental laws.) My most common is “It’s the externalities, stupid.”; next is “It’s not the incentives, it’s the opportunities.”; and the last is “Human beings are 90% rational. But woe betide that other 10%.” In fact, it’s not quite true that all our macroeconomic forecasters are bad; a few, such as Krugman, are actually quite good. The Klein Award is given each year to the best macroeconomic forecasters, and the same names pop up too often for it to be completely random. (Sadly, one of the most common is Citigroup, meaning that our banksters know perfectly well what they’re doing when they destroy our economy—they just don’t care.) So in fact I think our failures of forecasting are not inevitable or permanent.

And of course that’s not what I do at all. I am a cognitive economist; I study how economic systems behave when they are run by actual human beings, rather than by infinite identical psychopaths. I’m particularly interested in what I call the tribal paradigm, the way that people identify with groups and act in the interests of those groups, how much solidarity people feel for each other and why, and what role ideology plays in that identification. I’m hoping to one day formally model solidarity and make directly testable predictions about things like charitable donations, immigration policies and disaster responses.

I do have a more macroeconomic bent than most other cognitive economists; I'm not just interested in how human irrationality affects individuals or corporations, I'm also interested in how it affects society as a whole. But unlike most macroeconomists I care more about inequality than unemployment, and hardly at all about inflation. Unless you start getting 40% inflation per year, inflation really isn't that harmful—and can you imagine what 40% unemployment would be like? (Also, while 100% inflation is awful, 100% unemployment would be no economy at all.) If we're going to have a “misery index”, it should weight unemployment at least 10 times as much as inflation—and it should also include terms for poverty and inequality. Frankly maybe we should just use poverty, since I'd be prepared to accept just about any level of inflation, unemployment, or even inequality if it meant eliminating poverty. This of course is yet another reason why a basic income is so great! An anti-poverty measure can really only be called a failure if it doesn't actually reduce poverty; the only way that could happen with a basic income is if it somehow completely destabilized the economy, which is extremely unlikely as long as the basic income isn't something ridiculous like $100,000 per year.
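
For what it's worth, here is what that re-weighted index would look like; the 10-to-1 weight on unemployment is the one I just argued for, while the weights on poverty and inequality are placeholders I made up for illustration.

```python
# A re-weighted "misery index": unemployment counts ten times as much as inflation,
# plus terms for poverty and inequality (those two weights are arbitrary placeholders).
def misery_index(unemployment: float, inflation: float,
                 poverty: float = 0.0, inequality: float = 0.0,
                 w_u: float = 10.0, w_p: float = 10.0, w_g: float = 5.0) -> float:
    return w_u * unemployment + inflation + w_p * poverty + w_g * inequality

# 10% unemployment with 2% inflation scores far worse than the reverse:
print(misery_index(unemployment=0.10, inflation=0.02))  # 1.02
print(misery_index(unemployment=0.02, inflation=0.10))  # 0.30
```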

I could probably talk about my master’s thesis; the econometric models are relatively arcane, but the basic idea of correlating the income concentration of the top 1% of 1% and the level of corruption is something most people can grasp easily enough.

Of course, that wouldn’t be much of an answer to “How is the economy doing?”; usually my answer is to repeat what I’ve last read from mainstream macroeconomic forecasts, which is usually rather banal—but maybe that’s the idea? Most small talk is pretty banal I suppose (I never was very good at that sort of thing). It sounds a bit like this: No, we’re not on the verge of horrible inflation—actually inflation is currently too low. (At this point someone will probably bring up the gold standard, and I’ll have to explain that the gold standard is an unequivocally terrible idea on so, so many levels. The gold standard caused the Great Depression.) Unemployment is gradually improving, and actually job growth is looking pretty good right now; but wages are still stagnant, which is probably what’s holding down inflation. We could have prevented the Second Depression entirely, but we didn’t because Republicans are terrible at managing the economy—all of the 10 most recent recessions and almost 80% of the recessions in the last century were under Republican presidents. Instead the Democrats did their best to implement basic principles of Keynesian macroeconomics despite Republican intransigence, and we muddled through. In another year or two we will actually be back at an unemployment rate of 5%, which the Federal Reserve considers “full employment”. That’s already problematic—what about that other 5%?—but there’s another problem as well: Much of our reduction in unemployment has come not from more people being employed but instead by more people dropping out of the labor force. Our labor force participation rate is the lowest it’s been since 1978, and is still trending downward. Most of these people aren’t getting jobs; they’re giving up. At best we may hope that they are people like me, who gave up on finding work in order to invest in their own education, and will return to the labor force more knowledgeable and productive one day—and indeed, college participation rates are also rising rapidly. And no, that doesn’t mean we’re becoming “overeducated”; investment in education, so-called “human capital”, is literally the single most important factor in long-term economic output, by far. Education is why we’re not still in the Stone Age. Physical capital can be replaced, and educated people will do so efficiently. But all the physical capital in the world will do you no good if nobody knows how to use it. When everyone in the world is a millionaire with two PhDs and all our work is done by robots, maybe then you can say we’re “overeducated”—and maybe then you’d still be wrong. Being “too educated” is like being “too rich” or “too happy”.

That’s usually enough to placate my interlocutor. I should probably count my blessings, for I imagine that the first confrontation you get at a dinner party if you say you are a biologist involves a Creationist demanding that you “prove evolution”. I like to think that some mathematical biologists—yes, that’s a thing—take their request literally and set out to mathematically prove that if allele distributions in a population change according to a stochastic trend then the alleles with highest expected fitness have, on average, the highest fitness—which is what we really mean by “survival of the fittest”. The more formal, the better; the goal is to glaze some Creationist eyes. Of course that’s a tautology—but so is literally anything that you can actually prove. Cosmologists probably get similar demands to “prove the Big Bang”, which sounds about as annoying. I may have to deal with gold bugs, but I’ll take them over Creationists any day.

What do other scientists get? When I tell people I am a cognitive scientist (as a cognitive economist I am sort of both an economist and a cognitive scientist after all), they usually just respond with something like “Wow, you must be really smart.”; which I suppose is true enough, but always strikes me as an odd response. I think they just didn’t know enough about the field to even generate a reasonable-sounding question, whereas with economists they always have “How is the economy doing?” handy. Political scientists probably get “Who is going to win the election?” for the same reason. People have opinions about economics, but they don’t have opinions about cognitive science—or rather, they don’t think they do. Actually most people have an opinion about cognitive science that is totally and utterly ridiculous, more on a par with Creationists than gold bugs: That is, most people believe in a soul that survives after death. This is rather like believing that after your computer has been smashed to pieces and ground back into the sand from whence it came, all the files you had on it are still out there somewhere, waiting to be retrieved. No, they’re long gone—and likewise your memories and your personality will be long gone once your brain has rotted away. Yes, we have a soul, but it’s made of lots of tiny robots; when the tiny robots stop working the soul is no more. Everything you are is a result of the functioning of your brain. This does not mean that your feelings are not real or do not matter; they are just as real and important as you thought they were. What it means is that when a person’s brain is destroyed, that person is destroyed, permanently and irrevocably. This is terrifying and difficult to accept; but it is also most definitely true. It is as solid a fact as any in modern science. Many people see a conflict between evolution and religion; but the Pope has long since rendered that one inert. No, the real conflict, the basic fact that undermines everything religion is based upon, is not in biology but in cognitive science. It is indeed the Basic Fact of Cognitive Science: We are our brains, no more and no less. (But I suppose it wouldn’t be polite to bring that up at dinner parties.)

The “You must be really smart.” response is probably what happens to physicists and mathematicians. Quantum mechanics confuses basically everyone, so few dare go near it. The truly bold might try to bring up Schrodinger’s Cat, but are unlikely to understand the explanation of why it doesn’t work. General relativity requires thinking in tensors and four-dimensional spaces—perhaps they’ll be asked the question “What’s inside a black hole?”, which of course no physicist can really answer; the best answer may actually be, “What do you mean, inside?” And if a mathematician tries to explain their work in lay terms, it usually comes off as either incomprehensible or ridiculous: Stokes’ Theorem would be either “the integral of a differential form over the boundary of some orientable manifold is equal to the integral of its exterior derivative over the whole manifold” or else something like “The swirliness added up inside an object is equal to the swirliness added up around the edges.”
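
(For the curious, both of those phrasings are the generalized Stokes' theorem, whose symbolic form is mercifully short:)

```latex
% Generalized Stokes' theorem: M a compact oriented n-manifold with boundary,
% \omega a smooth (n-1)-form on M.
\int_{\partial M} \omega = \int_{M} \mathrm{d}\omega
```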

Economists, however, always seem to get this one: “How is the economy doing?”

Right now, the answer is this: “It’s still pretty bad, but it’s getting a lot better. Hopefully the new Congress won’t screw that up.”

Should we raise the minimum wage?

JDN 2456949 PDT 10:22.

The minimum wage is an economic issue that most people are familiar with; a large portion of the population has worked for minimum wage at some point in their lives, and those who haven’t generally know someone who has. As Chris Rock famously remarked (in the recording, Chris Rock, as usual, uses some foul language), “You know what that means when they pay you minimum wage? You know what they’re trying to tell you? It’s like, ‘Hey, if I could pay you less, I would; but it’s against the law.’ ”

The minimum wage was last raised in 2009, but adjusted for inflation its real value has been trending downward since 1968. The dollar values are going up, but not fast enough to keep up with inflation.
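
The adjustment itself is just a ratio of price indexes; here is a minimal sketch using the 1968 minimum wage of $1.60 and rough placeholder index values (not official CPI figures) to show the mechanics.

```python
# Deflating a past nominal wage into today's dollars with a price index.
def real_wage(nominal_wage: float, cpi_then: float, cpi_now: float) -> float:
    return nominal_wage * (cpi_now / cpi_then)

# Placeholder index values, roughly in the range of a late-1960s-to-today comparison:
print(f"${real_wage(1.60, cpi_then=35.0, cpi_now=235.0):.2f} in today's dollars")
# A 1968 wage worth roughly $11 in today's dollars, versus today's $7.25,
# is the downward trend in question.
```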

So, should we raise it again? How much? Should we just match it to inflation, or actually raise it higher in real terms? Productivity (in terms of GDP per worker) has more than doubled since 1968, so perhaps the minimum wage should double as well?

There are two major sides in this debate, and I basically disagree with both of them.

The first is the right-wing view (here espoused by the self-avowed “Objectivist” Don Watkins) that the minimum wage should be abolished entirely because it is an arbitrary price floor that prevents workers from selling their labor at whatever wage the market will bear. He argues that the free market is the only way the value of labor should be assessed and the government has no business getting involved.

On the other end of the spectrum we have Robert Reich, who thinks we should definitely raise the minimum wage and that it would be the best way to lift workers out of poverty. He argues that by providing minimum-wage workers with welfare and Medicaid, we are effectively subsidizing employers to pay lower wages. While I sympathize a good deal more with this view, I still don't think it's quite right.

Why not? Because Watkins is right about one thing: The minimum wage is, in fact, an arbitrary price floor. Out of all the possible wages that an employer could pay, how did we decide that this one should be the lowest? And why should the same floor apply to everyone, no matter who they are or what sort of work they do?

What Watkins gets wrong—and Reich gets right—is that wages are not actually set in a free and competitive market. Large corporations have market power; they can influence wages and prices to their own advantage. They use monopoly power to raise prices, and its inverse, monopsony power, to lower wages. The workers who are making a minimum wage of $7.25 wouldn’t necessarily make $7.25 in a competitive market; they could make more than that. All we know, actually, is that they would make at least this much, because if a worker’s marginal productivity is below the minimum wage the corporation simply wouldn’t have hired them.

Monopsony power doesn’t just lower wages; it also reduces employment. One of the ways that corporations can control wages is by controlling hiring; if they tried to hire more people, they’d have to offer a higher wage, so instead they hire fewer people. Under these circumstances, a higher minimum wage can actually create jobs, as Reich argues it will. And in this particular case I think he’s right about that, because corporations have enormous market power to hold wages down and in the Second Depression we have a huge amount of unused productive capacity. But this isn’t true in general. If markets are competitive, then raising minimum wage just causes unemployment. Even when corporations have market power, if there isn’t much unused capacity then raising minimum wage will just lead them to raise prices instead of hiring more workers.
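
To see the mechanism, here is the textbook monopsony model in a few lines of Python; the supply and productivity numbers are toy values I picked for illustration, not estimates of any real labor market.

```python
# Toy monopsony model (illustrative numbers only):
# labor supply w = A + B*L, marginal revenue product MRP = C - D*L.
A, B = 5.0, 1.0     # inverse labor supply
C, D = 25.0, 1.0    # marginal revenue product of labor

# Competitive benchmark: wage equals marginal product where supply meets demand.
L_comp = (C - A) / (B + D)
w_comp = A + B * L_comp

# Monopsony: the firm faces marginal labor cost A + 2*B*L and sets it equal to MRP.
L_mono = (C - A) / (2 * B + D)
w_mono = A + B * L_mono

def employment_with_floor(w_min):
    """Employment under a binding minimum wage: the lesser of labor supplied
    and labor demanded at that wage."""
    return min((w_min - A) / B, (C - w_min) / D)

print(f"competitive: wage {w_comp:.2f}, employment {L_comp:.2f}")
print(f"monopsony:   wage {w_mono:.2f}, employment {L_mono:.2f}")
print(f"floor at 13: wage 13.00, employment {employment_with_floor(13.0):.2f}")
```

With these toy numbers, a wage floor set between the monopsony wage and the competitive wage raises both pay and employment, which is exactly the case where Reich's argument goes through.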

Reich is also wrong about this idea that welfare payments subsidize low wages. On the contrary, the stronger your welfare system, the higher your wages will be. The reason is quite simple: A stronger welfare system gives workers more bargaining power. If not getting this job means you turn to prostitution or starve to death, then you're going to take just about any wage they offer you. (I don't entirely agree with Krugman's defense of sweatshops—I believe there are ways to increase trade without allowing oppressive working conditions—but he makes this point quite vividly.) On the other hand, if you live in the US with a moderate welfare system, you can sometimes afford to say no; you might end up broke or, worse, homeless, but you're unlikely to starve to death because at least you have food stamps. And in a nation with a really robust welfare system like Sweden, you can walk away from any employer who offers to pay you less than your labor is worth, because you know that even if you can't find a job for a while your basic livelihood will be protected. As a result, stronger welfare programs make labor markets more competitive and raise wages. Welfare and Medicaid do not subsidize low-wage employers; they exert pressure on employers to raise their low wages. Indeed, a sufficiently strong welfare system could render minimum wage redundant, as I'll get back to at the end of this post.

Of course, I am above all an empiricist; all theory must bow down before the data. So what does the data say? Does raising the minimum wage create jobs or destroy jobs? Our best answer from compiling various studies is… neither. Moderate increases in the minimum wage have no discernible effect on employment. In some studies we’ve found increases, in others decreases, but the overall average effect across many studies is indistinguishable from zero.

Of course, a sufficiently large increase is going to decrease employment; a Fox News reporter once famously asked: “Why not raise the minimum wage to $100,000 an hour!?” (which Jon Stewart aptly satirized as “Why not pay people in cocaine and unicorns!?”) Yes, raising the minimum wage to $100,000 an hour would create massive inflation and unemployment. But that really says nothing about whether raising the minimum wage to $10 or $20 would be a good idea. Covering your car with 4000 gallons of gasoline is a bad idea, but filling it with 10 gallons is generally necessary for its proper functioning.

This kind of argument is actually pretty common among Republicans, come to think of it. Take the Laffer Curve, for instance; it’s basically saying that since a 99% tax on everyone would damage the economy (which is obviously true) then a 40% tax specifically on millionaires must have the same effect. Another good one is Rush Limbaugh’s argument that if unemployment benefits are good, why not just put everyone on unemployment benefits? Well, again, because there’s a difference between doing something for some people sometimes and doing it for everyone all the time. There are these things called numbers; they measure whether something is bigger or smaller instead of just “there” or “not there”. You might want to learn about that.

Since moderate increases in minimum wage have no effect on unemployment, and we are currently under conditions of extremely low—in fact, dangerously low—inflation, then I think on balance we should go with Reich: Raising the minimum wage would do more good than harm.

But in general, is minimum wage the best way to help workers out of poverty? No, I don’t think it is. It’s awkward and heavy-handed; it involves trying to figure out what the optimal wage should be and writing it down in legislation, instead of regulating markets so that they will naturally seek that optimal level and respond to changes in circumstances. It only helps workers at the very bottom: Someone making $12 an hour is hardly rich, but they won’t benefit from increasing minimum wage to $10; in fact they might be worse off, if that increase triggers inflation that lowers the real value of their $12 wage.

What do I propose instead? A basic income. There should be a cash payment that every adult citizen receives, once a month, directly from the government—no questions asked. You don’t have to be unemployed, you don’t have to be disabled, you don’t have to be looking for work. You don’t have to spend it on anything in particular; you can use it for food, for housing, for transportation; or if you like you can use it for entertainment or save it for a rainy day. We don’t keep track of what you do with it, because it’s your own freedom and none of our business. We just give you this money as your dividends for being a shareholder in the United States of America.

This would be extremely easy to implement—the IRS already has all the necessary infrastructure, they just need to turn some minus signs into plus signs. We could remove all the bureaucracy involved in administering TANF and SNAP and Medicaid, because there’s no longer any reason to keep track of who is in poverty since nobody is. We could in fact fold the $500 billion a year we currently spend on means-tested programs into the basic income itself. We could pull another $300 billion from defense spending while still solidly retaining the world’s most powerful military.

Which brings me to the next point: How much would this cost? Probably less than you think. I propose indexing the basic income to the poverty line for households of 2 or more; since currently a household of 2 or more at the poverty line makes $15,730 per year, the basic income would be $7,865 per person per year. The total cost of giving that amount to each of the 243 million adults in the United States would be $1.9 trillion, or about 12% of our GDP. If we fold in the means-tested programs, that lowers the net cost to $1.4 trillion, 9% of GDP. This means that an additional flat tax of 9% would be enough to cover the entire amount, even if we don’t cut any other government spending.
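
Here is that arithmetic spelled out; the GDP figure is the rough one implied by the percentages above, so treat it as approximate.

```python
# Basic income cost arithmetic, using the figures given in the text.
poverty_line_2person = 15_730
basic_income = poverty_line_2person / 2      # $7,865 per adult per year
adults = 243_000_000
gdp = 16.5e12                                # rough US GDP implied by the 12%/9% figures
means_tested_spending = 500e9                # existing means-tested programs to fold in

gross_cost = basic_income * adults
net_cost = gross_cost - means_tested_spending

print(f"gross cost: ${gross_cost / 1e12:.2f} trillion ({gross_cost / gdp:.0%} of GDP)")
print(f"net cost:   ${net_cost / 1e12:.2f} trillion ({net_cost / gdp:.0%} of GDP)")
```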

If you use a progressive tax system like I recommended a couple of posts ago, you could raise this much with a tax on less than 5% of utility, which means that someone making the median income of $30,000 would only pay 5.3% more than they presently do. At the mean income of $50,000, you’d only pay 7.7%. And keep in mind that you are also receiving the additional $7,865; so in fact in both cases you actually end up with more than you had before the basic income was implemented. The break-even point is at about $80,000, where you pay an extra 9.9% ($7,920) and receive $7,865, so your after-tax income is now $79,945. Anyone making less than $80,000 per year actually gains from this deal; the only people who pay more than they receive are those who make more than $80,000. This is about the average income of someone in the fourth quintile (the range where 60% to 80% of the population is below you), so this means that roughly 70% of Americans would benefit from this program.
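
Plugging the quoted rates back in confirms the break-even point; this is just the text's own numbers, rearranged.

```python
# Net effect at the incomes quoted above: extra tax paid versus basic income received.
basic_income = 7_865
extra_tax_rates = {30_000: 0.053, 50_000: 0.077, 80_000: 0.099}  # rates from the text

for income, rate in extra_tax_rates.items():
    extra_tax = income * rate
    net = basic_income - extra_tax
    print(f"${income:>6,}: pays ${extra_tax:,.0f} more in tax, net change {net:+,.0f}")
# Below $80,000 the net change is positive; at $80,000 it is essentially zero.
```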

With this system in place, we wouldn't need a minimum wage. Working full-time at our current minimum wage makes you $7.25*40*52 = $15,080 per year. If you are a single person, you're getting $7,865 from the basic income, which means that you'll still have more than you presently do as long as your employer pays you at least $3.47 per hour. And if they don't? Well then you can just quit, knowing that at least you have that $7,865. If you're married, it's even better; the two of you already get $15,730 from the basic income. If you were previously raising a family working full-time on minimum wage while your spouse was unemployed, guess what: You actually will make more money after the policy no matter what wage your employer pays you.
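
And here is where the $3.47 comes from:

```python
# The wage at which a single full-time worker plus the basic income
# matches today's full-time minimum-wage earnings.
hours_per_year = 40 * 52
current_fulltime_pay = 7.25 * hours_per_year   # $15,080
basic_income = 7_865

breakeven_wage = (current_fulltime_pay - basic_income) / hours_per_year
print(f"break-even hourly wage: ${breakeven_wage:.2f}")   # about $3.47
```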

This system can adapt to changes in the market, because it is indexed to the poverty level (which is indexed to inflation), and also because it doesn’t say anything about what wage an employer pays. They can pay as little or as much as the market will bear; but the market is going to bear more, because workers can afford to quit. Billionaires are going to hate this plan, because it raises their taxes (by about 40%) and makes it harder for them to exploit workers. But for 70% of Americans, this plan is a pretty good deal.

Schools of Thought

If you’re at all familiar with the schools of thought in economics, you may wonder where I stand. Am I a Keynesian? Or perhaps a post-Keynesian? A New Keynesian? A neo-Keynesian (not to be confused)? A neo-paleo-Keynesian? Or am I a Monetarist? Or a Modern Monetary Theorist? Or perhaps something more heterodox, like an Austrian or a Sraffian or a Marxist?

No, I am none of those things. I guess if you insist on labeling, you could call me a “cognitivist”; and in terms of policy I tend to agree with the Keynesians, but I also like the Modern Monetary Theorists.

But really I think this sort of labeling of ‘schools of thought’ is exactly the problem. There shouldn’t be schools of thought; the universe only works one way. When you don’t know the answer, you should have the courage to admit you don’t know. And once we actually have enough evidence to know something, people need to stop disagreeing about it. If you continue to disagree with what the evidence has shown, you’re not a ‘school of thought’; you’re just wrong.

The whole notion of ‘schools of thought’ smacks of cultural relativism; asking what the ‘Keynesian’ answer to a question is (and if you take enough economics classes I guarantee you will be asked exactly that) is rather like asking what religious beliefs prevail in a particular part of the world. It might be worth asking for some historical reason, but it's not a question about economics; it's a question about economic beliefs. This is the difference between asking how people believe the universe was created, and actually being a cosmologist. True, schools of thought aren't as geographically localized as religions; but economists do speak of ‘saltwater’ and ‘freshwater’ schools for a reason. I'm not all that interested in the Shinto myths versus the Hindu myths; I want to be a cosmologist.

At best, schools of thought are a sign of a field that hasn’t fully matured. Perhaps there were Newtonians and Einsteinians in 1910; but by 1930 there were just Einsteinians and bad physicists. Are there ‘schools of thought’ in physics today? Well, there are string theorists. But string theory hasn’t been a glorious success of physics advancement; on the contrary, it’s been a dead end from which the field has somehow failed to extricate itself for almost 50 years.

So where does that put us in economics? Well, some of the schools of thought are clearly dead ends, every bit as unfounded as string theory but far worse because they have direct influences on policy. String theory hasn’t ever killed anyone; bad economics definitely has. (How, you ask? Exposure to hazardous chemicals that were deregulated; poverty and starvation due to cuts to social welfare programs; and of course the Second Depression. I could go on.)

The worst offender is surely Austrian economics and its crazy cousin Randian libertarianism. Ayn Rand literally ruled a cult; Friedrich Hayek never took it quite that far, but there is certainly something cultish about Austrian economists. They insist that economics must be derived a priori, without recourse to empirical evidence (or at least that’s what they say when you point out that all the empirical evidence is against them). They are fond of ridiculous hyperbole about an inevitable slippery slope between raising taxes on capital gains and turning into Stalin’s Soviet Union, as well as rhetorical questions I find myself answering opposite to how they want (like “For are taxes not simply another form of robbery?” and “Once we allow the government to regulate what man can do, will they not continue until they control all aspects of our lives?”). They even co-opt and distort cognitivist concepts like herd instinct and asymmetric information; somehow Austrians think that asymmetric information is an argument for why markets are more efficient than government, even though Akerlof’s point was that asymmetric information is why we need regulations.

Marxists are on the opposite end of the political spectrum, but their ideas are equally nonsensical. (Marx himself was a bit more reasonable, but even he recognized they were going too far: “All I know is that I am not a Marxist.”) They have this whole “labor theory of value” thing where the value of something is the amount of work you have to put into it. This would mean that labor-saving innovations are pointless, because they devalue everything; it would also mean that putting an awful lot of work into something useless would nevertheless somehow make it enormously valuable. Really, it would never be worth doing much of anything, because the value you get out of something is exactly equal to the work you put in. Marxists also tend to think that what the world needs is a violent revolution to overthrow the bondage of capitalism; this is an absolutely terrible idea. During the transition it would be one of the bloodiest conflicts in history; afterward you'd probably get something like the Soviet Union or modern-day Venezuela. Even if you did somehow establish your glorious Communist utopia, you'd have destroyed so much productive capacity in the process that you'd make everyone poor. Socialist reforms make sense—and have worked well in Europe, particularly Scandinavia. But socialist revolution is a good way to get millions of innocent people killed.

Sraffians are also quite silly; they have this bizarre notion that capital must be valued as “dated labor”, basically a formalized Marxism. I’ll admit, it’s weird how neoclassicists try to value labor as “human capital”; frankly it’s a bit disturbing how it echoes slavery. (And if you think slavery is dead, think again; it’s dead in the First World, but very much alive elsewhere.) But the solution to that problem is not to pretend that capital is a form of labor; it’s to recognize that capital and labor are different. Capital can be owned, sold, and redistributed; labor cannot. Labor is done by human beings, who have intrinsic value and rights; capital is made of inanimate matter, which does not. (This is what makes Citizens United so outrageous; “corporations are people” and “money is speech” are such fundamental distortions of democratic principles that they are literally Orwellian. We’re not that far from “freedom is slavery” and “war is peace”.)

Neoclassical economists do better, at least. They do respond to empirical data, albeit slowly. Their models are mathematically consistent. They rarely take account of human irrationality or asymmetric information, but when they do they rightfully recognize them as obstacles to efficient markets. But they still model people as infinite identical psychopaths, and they still divide themselves into schools of thought. Keynesians and Monetarists are particularly prominent, and Modern Monetary Theorists seem to be the next rising star. Each of these schools gets some things right and other things wrong, and that’s exactly why we shouldn’t make ourselves beholden to a particular tribe.

Monetarists follow Friedman, who said, “inflation is always and everywhere a monetary phenomenon.” This is wrong. You can definitely cause inflation without expanding your money supply; just ramp up government spending as in World War 2 or suffer a supply shock like we did when OPEC cut the oil supply. (In both cases, the US money supply was still tied to gold by the Bretton Woods system.) But they are right about one thing: To really have hyperinflation à la Weimar or Zimbabwe, you probably have to be printing money. If that were all there is to Monetarism, I could invert another Friedmanism: We're all Monetarists now.

Keynesians are basically right about most things; in particular, they are the only branch of neoclassicists who understand recessions and know how to deal with them. The world’s most famous Keynesian is probably Krugman, who has the best track record of economic predictions in the popular media today. Keynesians much better appreciate the fact that humans are irrational; in fact, cognitivism can be partly traced to Keynes, who spoke often of the “animal spirits” that drive human behavior (Akerlof’s most recent book is called Animal Spirits). But even Keynesians have their sacred cows, like the Phillips Curve, the alleged inverse correlation between inflation and unemployment. This is fairly empirically accurate if you look just at First World economies after World War 2 and exclude major recessions. But Keynes himself said, “Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again.” The Phillips Curve “shifts” sometimes, and it’s not always clear why—and empirically it’s not easy to tell the difference between a curve that shifts a lot and a relationship that just isn’t there. There is very little evidence for a “natural rate of unemployment”. Worst of all, it’s pretty clear that the original policy implications of the Phillips Curve are all wrong; you can’t get rid of unemployment just by ramping up inflation, and that way really does lie Zimbabwe.
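
To see how hard that distinction is, here is a little simulation—entirely made-up data, not an empirical claim. Within each “regime” inflation falls cleanly with unemployment, but once the curve's intercept drifts between regimes, the pooled correlation comes out far weaker than the within-regime ones.

```python
# Simulated data only: each "regime" has a clean downward Phillips relation
# pi = intercept - 0.5 * u + noise, but the intercept shifts between regimes.
import random
random.seed(0)

def correlation(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / (vx * vy) ** 0.5

def regime(intercept, n=25, slope=-0.5):
    points = []
    for _ in range(n):
        u = random.uniform(3.0, 10.0)                     # unemployment, %
        points.append((u, intercept + slope * u + random.gauss(0, 0.3)))
    return points

regimes = [regime(b) for b in (2.0, 6.0, 10.0, 14.0, 18.0)]  # the curve "shifts"
pooled = [point for r in regimes for point in r]

print("within-regime correlations:", [round(correlation(r), 2) for r in regimes])
print("pooled correlation:        ", round(correlation(pooled), 2))
```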

Finally, Modern Monetary Theorists understand money better than everyone else. They recognize that a sovereign government doesn’t have to get its money “from somewhere”; it can create however much money it needs. The whole narrative that the US is “out of money” isn’t just wrong, it’s incoherent; if there is one entity in the world that can never be out of money, it’s the US government, who print the world’s reserve currency. The panicked fears of quantitative easing causing hyperinflation aren’t quite as crazy; if the economy were at full capacity, printing $4 trillion over 5 years (yes, we did that) would absolutely cause some inflation. Since that’s only about 6% of US GDP, we might be back to 8% or even 10% inflation like the 1970s, but we certainly would not be in Zimbabwe. Moreover, we aren’t at full capacity; we needed to expand the money supply that much just to maintain prices where they are. The Second Depression is the Red Queen: It took all the running we could do to stay in one place. Modern Monetary Theorists also have some very good ideas about taxation; they point out that since the government only takes out the same thing it puts in—its own currency—it doesn’t make sense to say they are “taking” something (let alone “confiscating” it as Austrians would have you believe). Instead, it’s more like they are pumping it, taking money in and forcing it back out continuously. And just as pumping doesn’t take away water but rather makes it flow, taxation and spending doesn’t remove money from the economy but rather maintains its circulation. Now that I’ve said what they get right, what do they get wrong? Basically they focus too much on money, ignoring the real economy. They like to use double-entry accounting models, perfectly sensible for money, but absolutely nonsensical for real value. The whole point of an economy is that you can get more value out than you put in. From the Homo erectus who pulls apples from the trees to the software developer who buys a mansion, the reason they do it is that the value they get out (the gatherer gets to eat, the programmer gets to live in a mansion) is higher than the value they put in (the effort to climb the tree, the skill to write the code). If, as Modern Monetary Theorists are wont to do, you calculated a value for the human capital of the gatherer and the programmer equal to the value of the goods they purchase, you’d be missing the entire point.