The idiocy of the debt ceiling

Apr 23 JDN 2460058

I thought we had put this behind us. I guess I didn’t think the Republicans would stop using the tactic once they saw it worked, but I had hoped that the Democrats would come up with a better permanent solution so that it couldn’t be used again. But they did not, and here we are again: Republicans are refusing to raise the debt ceiling, we have now hit that ceiling, and we are running out of time before we have to start shutting down services or defaulting on debt. There are talks ongoing that may yet get the ceiling raised in time, but we’re now cutting it very close. Already the risk that we might default or do something crazy is causing turmoil in financial markets.

Because US Treasury bonds are widely regarded as one of the world’s most secure assets, and the US dollar is the most important global reserve currency, the entire world’s financial markets get disrupted every time there is an issue with the US national debt, and the debt ceiling creates such disruptions on the regular for no good reason.

I will try to offer some of my own suggestions for what to do here, but first, I want to make something very clear: The debt ceiling should not exist. I don’t think most people understand just how truly idiotic the entire concept of a debt ceiling is. It seems practically designed to make our government dysfunctional.

This is not like a credit card limit, where your bank imposes a limit on how much you can borrow based on how much they think you are likely to be able to repay. A lot of people have been making that analogy, and I can see why it’s tempting; but as usual, it’s important to remember that government debt is not like personal debt.

As I said some years ago, US government debt is about as close as the world is ever likely to come to a perfect credit market: with no effort at all, borrow as much as you want at low, steady interest rates, and everyone will always be sure that you will pay it back on time. The debt ceiling is a limit imposed by the government itself—it is not imposed by our creditors, who would be more than happy to lend us more.

Also, I’d like to remind you that some of the US national debt is owned by the US government itself (is that really even “debt”?) and most of what’s left is owned by US individuals or corporations—only about a third is owed to foreign powers. Here is a detailed breakdown of who owns US national debt.

There is no reason to put an arbitrary cap on the amount the US government can borrow. The only reason anyone is at all worried about a default on the US national debt is because of this stupid arbitrary cap. If it didn’t exist, they would simply roll over more Treasury bonds to make the payments and everything would run smoothly. And this is normally what happens, when the Republicans aren’t playing ridiculous brinkmanship games.

As it is, they could simply print money to pay it—and at this point, maybe that’s what needs to happen. Mint the Coin already: Mint a $1 trillion platinum coin and deposit it in the Federal Reserve, and there you go, you’ve paid off a chunk of the debt. Sometimes stupid problems require stupid solutions.

Aren’t there reasons to be worried about the government borrowing too much? Yes, a little. The amount of concern most people have about this is wildly disproportionate to the actual problem, but yes, there are legitimate concerns about high national debt resulting in high interest rates and eventually forcing us to raise taxes or cut services. This is a slow-burn, long-term problem that by its very nature would never require a sudden, immediate solution; but it is a genuine concern we should be aware of.

But here’s the thing: That’s a conversation we should be having when we vote on the budget. Whenever we pass a government budget, it already includes detailed projections of tax revenue and spending that yield precise, accurate forecasts of the deficit and the debt. If Republicans are genuinely concerned that we are overspending on certain programs, they should propose budget cuts to those programs and get those cuts passed as part of the budget.

Once a budget is already passed, we have committed to spend that money. It has literally been signed into law that $X will be spent on program Y. At that point, you can’t simply cut the spending. If you think we’re spending too much, you needed to say that before we signed it into law. It’s too late now.

I’m always dubious of analogies between household spending and government spending, but if you really want one, think of it this way: Say your credit card company is offering to raise your credit limit, and you just signed a contract for some home improvements that would force you to run up your credit card past your current limit. Do you call the credit card company and accept the higher limit, or not? If you don’t, why don’t you? And what’s your plan for paying those home contractors? Even if you later decide that the home improvements were a bad idea, you already signed the contract! You can’t just back out!

This is why the debt ceiling is so absurd: It is a self-imposed limit on what you’re allowed to spend after you have already committed to spending it. The only sensible thing to do is to raise the debt ceiling high enough to account for the spending you’ve already committed to—or better yet, eliminate the ceiling entirely.

I think that when they last had a majority in both houses, the Democrats should have voted to make the debt ceiling ludicrously high—say $100 trillion. Then, at least for the foreseeable future, we wouldn’t have to worry about raising it, and could just pass budgets normally like a sane government. But they didn’t do that; they only raised it as much as was strictly necessary, thus giving the Republicans an opening now to refuse to raise it again.

And that is what the debt ceiling actually seems to accomplish: It gives whichever political party is least concerned about the public welfare a lever they can pull to disrupt the entire system whenever they don’t get things the way they want. If you absolutely do not care about the public good—and it’s quite clear at this point that most of the Republican leadership does not—then whenever you don’t get your way, you can throw a tantrum that threatens to destabilize the entire global financial system.

We need to stop playing their game. Do what you have to do to keep things running for now—but then get rid of the damn debt ceiling before they can use it to do even more damage.

What happens when a bank fails

Mar 19 JDN 2460023

As of March 9, Silicon Valley Bank (SVB) has failed and officially been put into receivership under the FDIC. A bank that held $209 billion in assets has suddenly become insolvent.

This is the second-largest bank failure in US history, after Washington Mutual (WaMu) in 2008. In fact it will probably have more serious consequences than WaMu, for two reasons:

1. WaMu collapsed as part of the Great Recession, so there were already a lot of other things going on and a lot of policy responses already in place.

2. WaMu was mostly a conventional commercial bank that held deposits and loans for consumers, so its deposits were largely protected by the FDIC, and thus its bankruptcy didn’t cause contagion that spread out to the rest of the system. (Other banks—shadow banks—did during the crash, but not so much WaMu.) SVB mostly served tech startups, so a whopping 89% of its deposits were not protected by FDIC insurance.

You’ve likely heard of many of the companies that had accounts at SVB: Roku, Roblox, Vimeo, even Vox. Stocks of the US financial industry lost $100 billion in value in two days.

The good news is that this will not be catastrophic. It probably won’t even trigger a recession (though the high interest rates we’ve been having lately potentially could drive us over that edge). Because this is commercial banking, it’s done out in the open, with transparency and reasonably good regulation. The FDIC knows what they are doing, and even though they aren’t covering all those deposits directly, they intend to find a buyer for the bank who will, and odds are good that they’ll be able to cover at least 80% of the lost funds.

In fact, while this one is exceptionally large, bank failures are not really all that uncommon. There have been nearly 100 failures of banks with assets over $1 billion in the US alone just since the 1970s. The FDIC exists to handle bank failures, and generally does the job well.

Then again, it’s worth asking whether we should really have a banking system in which failures are so routine.

The reason banks fail is kind of a dark open secret: They don’t actually have enough money to cover their deposits.

Banks loan away most of their cash, and rely upon the fact that most of their depositors will not want to withdraw their money at the same time. They are required to keep a certain ratio in reserves, but it’s usually fairly small, like 10%. This is called fractional-reserve banking.

As long as less than 10% of deposits get withdrawn at any given time, this works. But if a bunch of depositors suddenly decide to take out their money, the bank may not have enough to cover it all, and suddenly become insolvent.

In fact, the fear that a bank might become insolvent can actually cause it to become insolvent, in a self-fulfilling prophecy. Once depositors get word that the bank is about to fail, they rush to be the first to get their money out before it disappears. This is a bank run, and it’s basically what happened to SVB.
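To make the arithmetic of that fragility concrete, here is a minimal sketch (in Python, with purely hypothetical numbers) of a bank holding 10% reserves; it stays liquid only as long as withdrawals stay below its cash on hand:

```python
# Minimal illustration (hypothetical numbers): a bank with a 10% reserve ratio
# stays liquid only as long as withdrawals don't exceed its cash on hand.

deposits = 100_000_000               # total customer deposits ($)
reserve_ratio = 0.10                 # fraction of deposits held as cash reserves
reserves = deposits * reserve_ratio  # $10 million in the vault
loans = deposits - reserves          # the rest is loaned out (illiquid)

def can_cover(withdrawals: float) -> bool:
    """True if the bank can pay these withdrawals out of its reserves."""
    return withdrawals <= reserves

print(can_cover(0.05 * deposits))  # 5% withdrawn -> True, business as usual
print(can_cover(0.25 * deposits))  # 25% withdrawn -> False: a bank run
```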

The FDIC was originally created to prevent or mitigate bank runs. Not only did they provide insurance that reduced the damage in the event of a bank failure; by assuring depositors that their money would be recovered even if the bank failed, they also reduced the chances of a bank run becoming a self-fulfilling prophecy.


Indeed, SVB is the exception that proves the rule, as they failed largely because their deposits were mainly not FDIC insured.

Fractional-reserve banking effectively allows banks to create money, in the form of credit that they offer to borrowers. That credit gets deposited in other banks, which then go on to loan it out to still others; the result is that there is more money in the system than was ever actually printed by the central bank.

In most economies this commercial bank money is a far larger quantity than the central bank money actually printed by the central bank—often nearly 10 to 1. This ratio is called the money multiplier.

Indeed, it’s not a coincidence that the reserve ratio is 10% and the multiplier is 10; the theoretical maximum multiplier is always the inverse of the reserve ratio, so if you require reserves of 10%, the highest multiplier you can get is 10. Had we required 20% reserves, the multiplier would drop to 5.

Most countries have fractional-reserve banking, and have for centuries; but it’s actually a pretty weird system if you think about it.

Back when we were on the gold standard, fractional-reserve banking was a way of cheating, getting our money supply to be larger than the supply of gold would actually allow.

But now that we are on a pure fiat money system, it’s worth asking what fractional-reserve banking actually accomplishes. If we need more money, the central bank could just print more. Why do we delegate that task to commercial banks?

David Friedman of the Cato Institute had some especially harsh words on this, but honestly I find them hard to disagree with:

Before leaving the subject of fractional reserve systems, I should mention one particularly bizarre variant — a fractional reserve system based on fiat money. I call it bizarre because the essential function of a fractional reserve system is to reduce the resource cost of producing money, by allowing an ounce of reserves to replace, say, five ounces of currency. The resource cost of producing fiat money is zero; more precisely, it costs no more to print a five-dollar bill than a one-dollar bill, so the cost of having a larger number of dollars in circulation is zero. The cost of having more bills in circulation is not zero but small. A fractional reserve system based on fiat money thus economizes on the cost of producing something that costs nothing to produce; it adds the disadvantages of a fractional reserve system to the disadvantages of a fiat system without adding any corresponding advantages. It makes sense only as a discreet way of transferring some of the income that the government receives from producing money to the banking system, and is worth mentioning at all only because it is the system presently in use in this country.

Our banking system evolved gradually over time, and seems to have held onto many features that made more sense in an earlier era. Back when we had arbitrarily tied our central bank money supply to gold, creating a new money supply that was larger may have been a reasonable solution. But today, it just seems to be handing the reins over to private corporations, giving them more profits while forcing the rest of society to bear more risk.

The obvious alternative is full-reserve banking, where banks are simply required to hold 100% of their deposits in reserve and the multiplier drops to 1. This idea has been supported by a number of quite prominent economists, including Milton Friedman.

It’s not just a right-wing idea: The left-wing organization Positive Money is dedicated to advocating for a full-reserve banking system in the UK and EU. (The ECB VP’s criticism of the proposal is utterly baffling to me: it “would not create enough funding for investment and growth.” Um, you do know you can print more money, right? Hm, come to think of it, maybe the ECB doesn’t know that, because they think inflation is literally Hitler. There are legitimate criticisms to be had of Positive Money’s proposal, but “There won’t be enough money under this fiat money system” is a really weird take.)

There’s a relatively simple way to gradually transition from our current system to a full-reserve system: Simply increase the reserve ratio over time, and print more central bank money to keep the total money supply constant. If we find that it seems to be causing more problems than it solves, we could stop or reverse the trend.
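As a rough illustration of that transition, here is a toy sketch under the simplifying assumption that the money supply is just the monetary base divided by the reserve ratio; holding the money supply fixed while the ratio rises tells you how much base money would need to be printed at each step:

```python
# Toy model of a gradual transition to full-reserve banking (simplified:
# money supply M = base B / reserve ratio r, so holding M fixed requires B = r * M).

M_target = 14e12   # hypothetical money supply to hold constant (~$14 trillion)

for r in [0.10, 0.25, 0.50, 0.75, 1.00]:
    base_needed = r * M_target
    print(f"reserve ratio {r:.0%}: central-bank base required = ${base_needed/1e12:.2f} trillion")
# At a 100% reserve ratio the base equals the money supply and the multiplier is 1.
```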

Krugman has pointed out that this wouldn’t really fix the problems in the banking system, which actually seem to be much worse in the shadow banking sector than in conventional commercial banking. This is clearly right, but it isn’t really an argument against trying to improve conventional banking. I guess if stricter regulations on conventional banking push more money into the shadow banking system, that’s bad; but really that just means we should be imposing stricter regulations on the shadow banking system first (or simultaneously).

We don’t need to accept bank runs as a routine part of the financial system. There are other ways of doing things.

Is the cure for inflation worse than the disease?

Nov 13 JDN 2459897

A lot of people seem really upset about inflation. I’ve previously discussed why this is a bit weird; inflation really just isn’t that bad. In fact, I am increasingly concerned that the usual methods for fixing inflation are considerably worse than inflation itself.

To be clear, I’m not talking about hyperinflation: if you are getting triple-digit inflation or more, you are clearly printing too much money and you need to stop. And there are places in the world where this happens.

But what about just regular, ordinary inflation, even when it’s fairly high? Prices rising at 8% or 9% or even 11% per year? What catastrophe befalls our society when this happens?

Okay, sure, if we could snap our fingers and make prices all stable without cost, that would be worth doing. But we can’t. All of our mechanisms for reducing inflation come with costs—and often very high costs.

The chief mechanism by which inflation is currently controlled is open-market operations by central banks such as the Federal Reserve, the Bank of England, and the European Central Bank. These central banks try to reduce inflation by selling bonds, which lowers the price of bonds and reduces capital available to banks, and thereby increases interest rates. This also effectively removes money from the economy, as banks are using that money to buy bonds instead of lending it out. (It is chiefly in this odd indirect sense that the central bank manages the “money supply”.)
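The link between bond sales and interest rates is just the mechanics of bond pricing; a minimal sketch (with made-up numbers, using a simple one-year zero-coupon bond) shows that a lower bond price is the same thing as a higher interest rate:

```python
# Sketch of why selling bonds raises interest rates: for a one-year zero-coupon
# bond with face value F, the yield is F / price - 1. When bond sales push the
# price down, the yield (the interest rate) goes up.

def one_year_yield(price: float, face_value: float = 1000.0) -> float:
    return face_value / price - 1

print(f"{one_year_yield(980):.2%}")  # price $980 -> ~2.04% yield
print(f"{one_year_yield(950):.2%}")  # price $950 -> ~5.26% yield (lower price, higher rate)
```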

But how does this actually reduce inflation? It’s remarkably indirect. It’s actually the higher interest rates which prevent people from buying houses and prevent companies from hiring workers which result in reduced economic growth—or even economic recession—which then is supposed to bring down prices. There’s actually a lot we still don’t know about how this works or how long it should be expected to take. What we do know is that the pain hits quickly and the benefits arise only months or even years later.

As Krugman has rightfully pointed out, the worst pain of the 1970s was not the double-digit inflation; it was the recessions that Paul Volcker’s economic policy triggered in response to that inflation. The inflation wasn’t exactly a good thing; but for most people, the cure was much worse than the disease.

Most laypeople seem to think that prices somehow go up without wages going up, but that simply isn’t how it works. Prices and wages rise at close to the same rate in most countries most of the time. In fact, inflation is often driven chiefly by rising wages rather than the other way around. There are often lags between when the inflation hits and when people see their wages rise; but these lags can actually be in either direction—inflation first or wages first—and for moderate amounts of inflation they are clearly less harmful than the high rates of unemployment that we would get if we fought inflation more aggressively with monetary policy.

Economists are also notoriously vague about exactly how they expect the central bank to reduce inflation. They use complex jargon or broad euphemisms. But when they do actually come out and say they want to reduce wages, it tends to outrage people. Well, that’s one of three main ways that interest rates actually reduce inflation: They reduce wages, they cause unemployment, or they stop people from buying houses. That’s pretty much all that central banks can do.

There may be other ways to reduce inflation, like windfall profits taxes, antitrust action, or even price controls. The first two are basically no-brainers; we should always be taxing windfall profits (if profits really are due to a windfall outside a corporation’s control, taxing them doesn’t distort any incentives), and we should absolutely be increasing antitrust action (why did we reduce it in the first place?). Price controls are riskier—they really do create shortages—but then again, is that really worse than lower wages or unemployment? Because the usual strategy involves lower wages and unemployment.

It’s a little ironic: The people who are usually all about laissez-faire are the ones who panic about inflation and want the government to take drastic action; meanwhile, I’m usually in favor of government intervention, but when it comes to moderate inflation, I think maybe we should just let it be.

Who still uses cash?

Feb 27 JDN 2459638

If you had to guess, what is the most common denomination of US dollar bills? You might check your wallet: $1? $20?

No, it’s actually $100. There are 13.1 billion $1 bills, 11.7 billion $20 bills, and 16.4 billion $100 bills. And since $100 bills are worth more, the vast majority of US dollar value in circulation is in those $100 bills—indeed, $1.64 trillion of the total $2.05 trillion cash supply.

This is… odd, to say the least. When’s the last time you spent a $100 bill? Then again, when’s the last time you spent… cash? In a typical week, 30% of Americans use no cash at all.

In the United States, cash is used for 26% of transactions, compared to 28% for debit cards and 23% for credit cards. The US is actually a relatively cash-heavy country by First World standards. In the Netherlands and Scandinavia, cash is almost unheard of. When I last visited Amsterdam a couple of months ago, businesses were more likely to take US credit cards than they were to take cash euros.

A list of countries most reliant on cash shows mostly very poor countries, like Chad, Angola, and Burkina Faso. But even in Sub-Saharan Africa, mobile money is dominant in Botswana, Kenya and Uganda.

And yet the cash money supply is still quite large: $2.05 trillion is only a third of the US monetary base, but it’s still a huge amount of money. If most people aren’t using it, who is? And why is so much of it in the form of $100 bills?

It turns out that the answer to the second question can provide an answer to the first. $100 bills are not widely used for consumer purchases—indeed, most businesses won’t even accept them. (Honestly that has always bothered me: What exactly does “legal tender” mean, if you’re allowed to categorically refuse $100 bills? It’d be one thing to say “we can’t accept payment when we can’t make change”, and obviously nobody seriously expects you to accept $10,000 bills; but what if you have a $97 purchase?) When people spend cash, it’s mainly ones, fives, and twenties.

Who uses $100 bills? People who want to store money in a way that is anonymous, easily transportable—including across borders—and stable against market fluctuations. Drug dealers leap to mind (and indeed the money-laundering that HSBC did for drug cartels was largely in the form of thick stacks of $100 bills). Of course it isn’t just drug dealers, or even just illegal transactions, but it is mostly people who want to cross borders. 80% of US $100 bills are in circulation outside the United States. Since 80% of US cash is in the form of $100 bills, this means that nearly two-thirds of all US dollars are outside the US.
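Spelling out that last step: roughly 0.80 × 0.80 = 0.64, so about 64% of the value of all US cash is held outside the US.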

Knowing this, I have to wonder: Why does the Federal Reserve continue printing so many $100 bills? Okay, once they’re out there, it may be hard to get them back. But they do wear out eventually. (In fact, US dollars wear out faster than most currencies, because they are made of linen instead of plastic. Surprisingly, this actually makes them less eco-friendly despite being more biodegradable. Of course, the most eco-friendly method of payment is mobile payments, since their marginal environmental impact is basically zero.) So they could simply stop printing them, and eventually the global supply would dwindle.

They clearly haven’t done this—indeed, there were more $100 bills printed last year than any previous year, increasing the global supply by 2 billion bills, or $200 billion. Why not? Are they trying to keep money flowing for drug dealers? Even if the goal is to substitute for failing currencies in other countries (a somewhat odd, if altruistic, objective), wouldn’t that be more effective with $1 and $5 bills? $100 is a lot of money for people in Chad or Angola! Chad’s per-capita GDP is a staggeringly low $600 per year; that means that a $100 bill to a typical person in Chad would be like me holding onto a $10,000 bill (those exist, technically). Surely they’d prefer $1 bills—which would still feel to them like $100 bills feel to me. Even in middle-income countries, $100 is quite a bit; Ecuador actually uses the US dollar as its main currency, but their per-capita GDP is only $5,600, so $100 to them feels like $1000 to us.

If you want to usefully increase the money supply to stimulate consumer spending, print $20 bills—or just increase some numbers in bank reserve accounts. Printing $100 bills is honestly baffling to me. It seems at best inept, and at worst possibly corrupt—maybe they do want to support drug cartels?

What does a central bank actually do?

Aug 26 JDN 2458357

Though central banks are a cornerstone of the modern financial system, I don’t think most people have a clear understanding of how they actually function. (I think this may be by design; there are many ways we could make central banking more transparent, but policymakers seem reluctant to show their hand.)

I’ve even seen famous economists make really severe errors in their understanding of monetary policy, as John Taylor did when he characterized low-interest-rate policy as a “price ceiling”.

Central banks “print money” and “set interest rates”. But how exactly do they do these things, and what on Earth do they have to do with each other?

The first thing to understand is that most central banks don’t actually print money. In the US, cash is actually printed by the Department of the Treasury. But cash is only a small part of the money in circulation. The monetary base consists of cash in vaults and in circulation; the US monetary base is about $3.6 trillion. The money supply can be measured a few different ways, but the standard way is to include checking accounts, traveler’s checks, savings accounts, money market accounts, short-term certificates of deposit, and basically anything that can be easily withdrawn and spent as money. This is called the M2 money supply, and in the US it is currently over $14.1 trillion. That means that only 25% of our money supply is in actual, physical cash—the rest is all digital. This is actually a relatively high proportion for actual cash, as the monetary base was greatly increased in response to the Great Recession. When we say that the Fed “prints money”, what we really mean is that they are increasing the money supply—but typically they do so in a way that involves little if any actual printing of cash.

The second thing to understand is that central banks don’t exactly set interest rates either. They target interest rates. What’s the difference, you ask?

Well, setting interest rates would mean that they made a law or something saying you have to charge exactly 2.7%, and you get fined or something if you don’t do that.

Targeting interest rates is a subtler art. The Federal Reserve decides what interest rates they want banks to charge, and then they engage in what are called open-market operations to try to make that happen. Banks hold reserves: money that they are required to keep on hand to back their deposits. Since we are in a fractional-reserve system, they are required to keep only a certain proportion (usually about 10%). In open-market operations, the Fed buys and sells assets (usually US Treasury bonds) in order to either increase or decrease the amount of reserves available to banks, to try to get them to lend to each other at the targeted interest rates.

Why not simply set the interest rate by law? Because then it wouldn’t be the market-clearing interest rate. There would be shortages or gluts of assets.

It might be easier to grasp this if we step away from money for a moment and just think about the market for some other good, like televisions.

Suppose that the government wants to set the price of a television in the market to a particular value, say $500. (Why? Who knows. Let’s just run with it for a minute.)

If they simply declared by law that the price of a television must be $500, here’s what would happen: Either that would be too low, in which case there would be a shortage of televisions as demand exceeded supply; or that would be too high, in which case there would be a glut of televisions as supply exceeded demand. Only if they got spectacularly lucky and the market price already was $500 per television would they not have to worry about such things (and then, why bother?).

But suppose the government had the power to create and destroy televisions virtually at will with minimal cost.
Now, they have a better way; they can target the price of a television, and buy and sell televisions as needed to bring the market price to that target. If the price is too low, the government can buy and destroy a lot of televisions, to bring the price up. If the price is too high, the government can make and sell a lot of televisions, to bring the price down.
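Here is a toy simulation (Python, with a made-up linear demand curve) of that targeting idea: instead of decreeing a price, the "government" creates or destroys televisions until the market price settles at the target, so there is never a shortage or a glut.

```python
# Toy sketch of price targeting with a hypothetical linear demand curve:
# adjust the quantity in circulation until the market price hits the target.

def market_price(quantity: float) -> float:
    return 1000 - 0.5 * quantity   # made-up demand curve: more TVs -> lower price

target = 500.0
quantity = 600.0                   # current stock of TVs; market price starts at $700

for _ in range(50):
    price = market_price(quantity)
    # If the price is above target, make and sell more TVs; if below, buy and destroy them.
    quantity += 0.5 * (price - target)

print(round(market_price(quantity)))   # converges to ~500 with no price decree needed
```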

Now, let’s go back to money. This power to create and destroy at will is hard to believe for televisions, but absolutely true for money. The government can create and destroy almost any amount of money at will—they are limited only by the very inflation and deflation the central bank is trying to affect.

This allows central banks to intervene in the market without creating shortages or gluts; even though they are effectively controlling the interest rate, they are doing so in a way that avoids having a lot of banks wanting to take loans they can’t get or wanting to give loans they can’t find anyone to take.

The goal of all this manipulation is ultimately to reduce inflation and unemployment. Unfortunately it’s basically impossible to eliminate both simultaneously; the Phillips curve describes the generally-observed relationship that decreased inflation usually comes with increased unemployment and vice-versa. But the basic idea is that we set reasonable targets for each (usually about 2% inflation and 5% unemployment; frankly I’d prefer we swap the two, which was more or less what we did in the 1950s), and then if inflation is too high we raise interest rate targets, while if unemployment is too high we lower interest rate targets.
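That reaction pattern is roughly what a Taylor Rule formalizes. Here is a sketch using the classic 1993 coefficients (a simplification, not the Fed's actual procedure, and using the output gap as a stand-in for the unemployment side):

```python
# Sketch of a Taylor-style rule (classic 1993 coefficients): raise the rate target
# when inflation runs above its target, lower it when output falls below potential.
# All inputs are in percentage points.

def taylor_rate(inflation: float, output_gap: float,
                natural_rate: float = 2.0, inflation_target: float = 2.0) -> float:
    return (natural_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

print(taylor_rate(inflation=2.0, output_gap=0.0))   # on target: 4.0% nominal rate
print(taylor_rate(inflation=5.0, output_gap=0.0))   # high inflation: 8.5%
print(taylor_rate(inflation=2.0, output_gap=-3.0))  # recession: 2.5%
```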

What if they’re both too high? Then we’re in trouble. This has happened; it is called stagflation. The money supply isn’t the only thing affecting inflation and unemployment, and sometimes we get hit with a bad shock that makes both of them high at once. In that situation, there isn’t much that monetary policy can do; we need to find other solutions.

But how does targeting interest rates lead to inflation? To be quite honest, we don’t actually know.

The basic idea is that lower interest rates should lead to more borrowing, which leads to more spending, which leads to more inflation. But beyond that, we don’t actually understand how interest rates translate into prices—this is the so-called transmission mechanism, which remains an unsolved problem in macroeconomics. Based on the empirical data, I lean toward the view that the mechanism is primarily via housing prices; lower interest rates lead to more mortgages, which raises the price of real estate, which raises the price of everything else. This also makes sense theoretically, as real estate consists of large, illiquid assets for which the long-term interest rate is very important. Your decision to buy an apple or even a television is probably not greatly affected by interest rates—but your decision to buy a house surely is.
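A quick way to see the size of that housing channel is the standard fixed-rate mortgage payment formula; the numbers below are hypothetical, but the sensitivity to the interest rate is the point:

```python
# The standard fixed-rate mortgage annuity formula, showing how strongly monthly
# payments (and hence what buyers can afford to bid) respond to interest rates.

def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

print(round(monthly_payment(400_000, 0.03)))  # ~$1686/month at 3%
print(round(monthly_payment(400_000, 0.07)))  # ~$2661/month at 7%
```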

If that is indeed the case, it’s worth thinking about whether this is really the right way to intervene on inflation and unemployment. High housing prices are an international crisis; maybe we need to be looking at ways to decrease unemployment without affecting housing prices. But that is a tale for another time.

What would a game with realistic markets look like?

Aug 12 JDN 2458343

From Pokemon to Dungeons & Dragons, Final Fantasy to Mass Effect, almost all role-playing games have some sort of market: Typically, you buy and sell equipment, and often can buy services such as sleeping at inns. Yet the way those markets work is extremely rigid and unrealistic.

(I’m of course excluding games like EVE Online that actually create real markets between players; those markets are so realistic I actually think they would provide a good opportunity for genuine controlled experiments in macroeconomics.)

The weirdest thing about in-game markets is the fact that items almost always come with a fixed price. Sometimes there is some opportunity for haggling, or some randomization between different merchants; but the notion always persists that the item has a “true price” that is being adjusted upward or downward. This is more or less the opposite of how prices actually work in real markets.

There is no “true price” of a car or a pizza. Prices are whatever buyers and sellers make them. There is a true value—the amount of real benefit that can be obtained from a good—but even this is something that varies between individuals and also changes based on the other goods being consumed. The value of a pizza is considerably higher for someone who hasn’t eaten in days than to someone who just finished eating another pizza.

There is also what is called “The Law of One Price”, but like all laws of economics, it’s like the Pirate Code, more what you’d call a “guideline”, and it only applies to a particular good in a particular market at a particular time. The Law of One Price doesn’t even say that a pizza should have the same price tomorrow as it does today, or that the same pizza can’t be sold to two different customers at two different prices; it only says that the same pizza shouldn’t have two different prices in the same place at the same time for the same customer. (It seems almost tautological, right? And yet it still fails empirically, and does so again and again. I have seen offers for the same book in the same condition posted on the same website that differed by as much as 50%.)

In well-developed capitalist markets in large First World countries, we can lull ourselves into the illusion that there is only one price for a good, because markets are highly liquid and either highly competitive or controlled by a strong and stable oligopoly that enforces a particular price across places and times. The McDonald’s Dollar Menu is a policy choice by a massive multinational corporation; it’s not what would occur naturally if those items were sold on a competitive market.

Even then, this illusion can be broken when we are faced with a large economic shock, such as the OPEC price shock in 1973 or a natural disaster like Hurricane Katrina. It also tends to be broken for illiquid goods such as real estate.

If we consider the environment in which most role-playing games take place, it’s usually a sort of quasi-medieval or quasi-Renaissance feudal society, where a given government controls only a small region and traveling between towns is difficult and dangerous. Not only should the prices of goods differ substantially between towns, the currency used should frequently differ as well. Yes, most places would accept gold and silver; but a kingdom with a stable government will generally have a currency of significant seignorage, with coins worth considerably more than the gold used to mint them—yet the value of that seignorage will drop off as you move further away from that kingdom and its sphere of influence.

Moreover, prices should be inconsistent even between traders in the same town, and extremely volatile. When a town is mostly self-sufficient and trade is only a small part of its economy, even a small shock such as a bad thunderstorm or a brief drought can yield massive shifts in prices. Shortages and gluts will be frequent, as both supply and demand are small and ever-changing.

This wouldn’t be that difficult to implement. The simplest way would just be to institute random shocks to prices that vary by place and time. A more sophisticated method would be to actually simulate supply and demand for different goods, and then have prices respond to realistic shocks (e.g. a drought makes wheat more expensive, and the price of swords suddenly skyrockets after news of an impending dragon attack). Experiments have shown that competitive market outcomes can be achieved by simulating even a dozen or so traders using very simple heuristics like “don’t pay more than you can afford” and “don’t charge less than it cost you”.
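For instance, here is a minimal sketch (Python, all numbers invented) of the simpler approach: one town, one good, daily supply and demand shocks, and a price that drifts toward clearing the local market.

```python
import random

def simulate_town_price(days: int = 30, price: float = 10.0) -> list:
    """One town, one good: random daily supply/demand shocks move the price around."""
    history = []
    for _ in range(days):
        supply = random.gauss(100, 15)                  # local harvest varies day to day
        demand = random.gauss(100, 15) * (12 / price)   # buyers want less when it's pricey
        price *= 1 + 0.05 * (demand - supply) / supply  # excess demand pushes the price up
        price = max(price, 0.5)                         # floor so the price never collapses to zero
        history.append(round(price, 2))
    return history

print(simulate_town_price())
```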

Why don’t game designers implement this? I think there are two reasons.

The first is simply that it would be more complicated. This is a legitimate concern in many cases; I particularly think Pokemon can justify using a simple economy, given its target audience. I also agree that having more than a handful of currencies would be too much for players to keep track of; though perhaps having two or three (one for each major faction?) would still be more interesting than only having one.

Also, tabletop games are inherently more limited in the amount of computation they can use, compared to video games. But for a game as complicated as say Skyrim, this really isn’t much of a defense. Skyrim actually simulated the daily routines of over a hundred different non-player characters; it could have been simulating markets in the background as well—in fact, it could have simply had those same non-player characters buy and sell goods with each other in a double-auction market that would automatically generate the prices that players face.

The more important reason, I think, is that game designers have a paralyzing fear of arbitrage.

I find it particularly aggravating how frequently games will set it up so that the price at which you buy and the price at which you sell are constrained so that the buying price is always higher, often as much as twice as high. This is not at all how markets work in the real world; frankly it’s only even close to true for goods like cars that rapidly depreciate. It makes sense that a given merchant will not sell you a good for less than what they would pay to buy it from you; but that only requires each individual merchant to have a well-defined willingness-to-pay and willingness-to-accept. It certainly does not require the arbitrary constraint that you can never sell something for more than what you bought it for.
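As a sketch of the alternative (all names and numbers invented): give each merchant their own valuation, with a personal buy price below their sell price, and let the spreads differ across merchants rather than imposing one global markup.

```python
import random

class Merchant:
    """Each merchant values the item differently (local supply, tastes, information)."""
    def __init__(self, name: str, base_value: float):
        self.name = name
        valuation = base_value * random.uniform(0.7, 1.3)
        self.buy_price = valuation * 0.9    # willingness-to-pay: what they'll pay you
        self.sell_price = valuation * 1.1   # willingness-to-accept: what they'll charge you

merchants = [Merchant(f"Merchant {i}", base_value=100) for i in range(4)]
for m in merchants:
    print(f"{m.name}: buys at {m.buy_price:.0f} gold, sells at {m.sell_price:.0f} gold")
# No merchant sells below what they'd pay, but a player can still profit by buying
# from a low-valuation merchant and selling to a high-valuation one.
```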

In fact, I would probably even allow players who specialize in social skills to short-change and bamboozle merchants for profit, as this is absolutely something that happens in the real world, and was likely especially common under the very low levels of literacy and numeracy that prevailed in the Middle Ages.

To many game designers (and gamers), the ability to buy a good in one place, travel to another place, and sell that good for a higher price seems like cheating. But this practice is called being a merchant. That is literally what the entire retail industry does. The rules of your game should allow you to profit from activities that are genuinely profitable economic services in the real world.

I remember a similar complaint being raised against Skyrim shortly after its release, that one could acquire a pickaxe, collect iron ore, smelt it into steel, forge weapons out of it, and then sell the weapons for a sizeable profit. To some people, this sounded like cheating. To me, it sounds like being a blacksmith. This is especially true because Skyrim’s skill system allowed you to improve the quality of your smithed items over time, just like learning a trade through practice (though it ramped up too fast, as it didn’t take long to make yourself clearly the best blacksmith in all of Skyrim). Frankly, this makes far more sense than being able to acquire gold by adventuring through the countryside and slaughtering monsters or collecting lost items from caves. Blacksmiths were a large part of the medieval economy; spelunking adventurers were not. Indeed, it bothers me that there weren’t more opportunities like this; you couldn’t make your wealth by being a farmer, a vintner, or a carpenter, for instance.

Even if you managed to pull off pure arbitrage, providing no real services, such as by buying and selling between two merchants in the same town, or the same merchant on two consecutive days, that is also a highly profitable industry. Most of our financial system is built around it, frankly. If you manage to make your wealth selling wheat futures instead of slaying dragons, I say more power to you. After all, there were an awful lot of wheat-future traders in the Middle Ages, and to my knowledge no actually successful dragon-slayers.

Of course, if your game is about slaying dragons, it should include some slaying of dragons. And if you really don’t care about making a realistic market in your game, so be it. But I think that more realistic markets could actually offer a great deal of richness and immersion into a world without greatly increasing the difficulty or complexity of the game. A world where prices change in response to the events of the story just feels more real, more alive.

The ability to profit without violence might actually draw whole new modes of play to the game (as has indeed occurred with Skyrim, where a small but significant proportion of players have chosen to live out peaceful lives as traders or blacksmiths). It would also enrich the experience of more conventional players and help them recover from setbacks (if the only way to make money is to fight monsters and you keep getting killed by monsters, there isn’t much you can do; but if you have the option of working as a trader or a carpenter for a while, you could save up for better equipment and try the fighting later).

And hey, game designers: If any of you are having trouble figuring out how to implement such a thing, my consulting fees are quite affordable.

The unending madness of the gold standard

JDN 2457545

If you work in economics in any capacity (much as with “How is the economy doing?”, you don’t even really need to be in macroeconomics), you will encounter many people who believe in the gold standard. Many of these people will be otherwise quite intelligent and educated; they often understand economics better than most people (not that this is saying a whole lot). Yet somehow they continue to hold—and fiercely defend—this incredibly bizarre and anachronistic view of macroeconomics.

They even bring it up at the oddest times; I recently encountered someone who wrote a long and rambling post arguing for drug legalization (which I largely agree with, by the way) and concluded it with #EndTheFed, not seeming to grasp the total and utter irrelevance of this juxtaposition. It seems like it was just a conditioned response, or maybe the sort of irrelevant but consistent coda originally perfected by Cato and his “Carthago delenda est.” “Foederale Reservatum delendum est.” Hey, maybe that’s why they’re called the Cato Institute.

So just how bizarre is the gold standard? Well, let’s look at what sort of arguments they use to defend it. I’ll use Charles Kadlic, prominent Libertarian blogger on Forbes, as an example, with his “Top Ten Reasons That You Should Support the ‘Gold Commission’”:

  1. A gold standard is key to achieving a period of sustained, 4% real economic growth.
  2. A gold standard reduces the risk of recessions and financial crises.
  3. A gold standard would restore rising living standards to the middle-class.
  4. A gold standard would restore long-term price stability.
  5. A gold standard would stop the rise in energy prices.
  6. A gold standard would be a powerful force for restoring fiscal balance to federal state and local governments.
  7. A gold standard would help save Medicare and Social Security.
  8. A gold standard would empower Main Street over Wall Street.
  9. A gold standard would increase the liberty of the American people.
  10. Creation of a gold commission will provide the forum to chart a prudent path toward a 21st century gold standard.

Number 10 can be safely ignored, as clearly Kadlic just ran out of reasons and, to make a round number, tacked on the implicit assumption of the entire article, namely that this ‘gold commission’ would actually realistically lead us toward a gold standard. (Without it, the other 9 reasons are just non sequiturs.)

So let’s look at the other 9, shall we? Literally none of them are true. Several are outright backward.

You know a policy is bad when one of its most prominent advocates can’t even think of a single real benefit it would have. A lot of quite bad policies do have perfectly real benefits; they’re just totally outweighed by their costs: For example, cutting the top income tax rate to 20% probably would actually contribute something to economic growth. Not a lot, and it would cut a swath through the federal budget and dramatically increase inequality—but it’s not all downside. Yet Kadlic couldn’t even think of one benefit of the gold standard that actually holds up. (I actually can do his work for him: I do know of one benefit of the gold standard, but as I’ll get to momentarily it’s quite small and can easily be achieved in better ways.)

First of all, it’s quite clear that the gold standard did not increase economic growth. If you cherry-pick your years properly, you can make it seem like Nixon leaving the gold standard hurt growth, but if you look at the real long-run trends in economic growth it’s clear that we had really erratic growth up until about the 1910s (the surge of government spending in WW1 and the establishment of the Federal Reserve); we then went through a temporary surge recovering from the Great Depression and then during WW2; and finally, if you smooth out the business cycle, our growth rates have slowly trended downward as growth in productivity has gradually slowed down.

Here’s GDP growth from 1800 to 1900, when we were on the classical gold standard:

[Figure: US_GDP_growth_1800s (US GDP growth, 1800–1900)]

Here’s GDP growth from 1929 to today, using data from the Bureau of Economic Analysis:

[Figure: US_GDP_growth_BEA (US GDP growth since 1929, BEA data)]

Also, both of these are total GDP growth (because that is what Kadlic said), which means that part of what you’re seeing here is population growth rather than growth in income per person. Here’s GDP per person in the 1800s:

[Figure: US_GDP_growth_1800s (US GDP per person, 1800s)]

If you didn’t already know, I bet you can’t guess where on those graphs we left the gold standard, which you’d clearly be able to do if the gold standard had this dramatic “double your GDP growth” kind of effect. I can’t immediately rule out some small benefit to the gold standard just from this data, but don’t worry; more thorough economic studies have done that. Indeed, it is the mainstream consensus among economists today that the gold standard is what caused the Great Depression.

Indeed, there’s a whole subfield of historical economics research that basically amounts to “What were they thinking?” trying to explain why countries stayed on the gold standard for so long when it clearly wasn’t working. Here’s a paper trying to argue it was a costly signal of your “rectitude” in global bond markets, but I find much more compelling the argument that it was psychological: Their belief in the gold standard was simply too strong, so confirmation bias kept holding them back from what needed to be done. They were like my aforementioned #EndTheFed acquaintance.

Then we get to Kadlic’s second point: Does the gold standard reduce the risk of financial crises? Let’s also address point 4, which is closely related: Does the gold standard improve price stability? Tell that to 1929.

In fact, financial crises were more common on the classical gold standard; the period of pure fiat monetary policy was so stable that it was called the Great Moderation, until the crash in 2008 screwed it all up—and that crash occurred essentially outside the standard monetary system, in the “shadow banking system” of unregulated and virtually unlimited derivatives. Had we actually forced banks to stay within the light of the standard banking system, the Great Moderation might have continued indefinitely.

As for “price stability”, that’s sort of true if you look at the long run, because prices were as likely to go down as they were to go up. But that isn’t what we mean by “price stability”. A system with good price stability will have a low but positive and steady level of inflation, and will therefore exhibit some long-run increases in price levels; it won’t have prices jump up and down erratically and end up on average the same.

For jump up and down is what prices did on the gold standard, as you can see from FRED:

[Figure: US_inflation_longrun (US inflation rate over the long run, FRED)]

This is something we could have predicted in advance; the price of any given product jumps up and down over time, and gold is just one product among many. Tying prices to gold makes no more sense than tying them to any other commodity.

As for stopping the rise in energy prices, energy prices aren’t rising. Even if they were (and they could at some point), the only way the gold standard would stop that is by triggering deflation (and therefore recession) in the rest of the economy.

Regarding number 6, I don’t see how the fiscal balance of federal and state governments is improved by periodic bouts of deflation that make their debt unpayable.

As for number 7, saving Medicare and Social Security, their payments out are tied to inflation and their payments in are tied to nominal GDP, so overall inflation has very little effect on their long-term stability. In any case, the problem with Medicare is spiraling medical costs (which Obamacare has done a lot to fix), and the problem with Social Security is just the stupid arbitrary cap on the income subject to payroll tax; the gold standard would do very little to solve either of those problems, though I guess it would make the nominal income cap less binding by triggering deflation, which is just about the worst way to avoid a price ceiling I’ve ever heard.

Regarding 8 and 9, I don’t even understand why Kadlic thinks that going to a gold standard would empower individuals over banks (does it seem like individuals were empowered over banks in the “Robber Baron Era”?), or what in the world it has to do with giving people more liberty (all that… freedom… you lose… when the Fed… stabilizes… prices?), so I don’t even know where to begin on those assertions. You know what empowers people over banks? The Consumer Financial Protection Bureau. You know what would enhance liberty? Ending mass incarceration. Libertarians fight tooth and nail against the former; sometimes they get behind the latter, but sometimes they don’t; Gary Johnson for some bizarre reason believes in privatization of prisons, which are directly linked to the surge in US incarceration.

The only benefit I’ve been able to come up with for the gold standard is as a commitment mechanism, something the Federal Reserve could do to guarantee its future behavior and thereby reduce the fear that it will suddenly change course on its past promises. This would make forward guidance a lot more effective at changing long-term interest rates, because people would have reason to believe that the Fed means what it says when it projects its decisions 30 years out.

But there are much simpler and better commitment mechanisms the Fed could use. They could commit to a Taylor Rule or nominal GDP targeting, both of which mainstream economists have been clamoring for for decades. There are some definite downsides to both proposals, but also some important upsides; and in any case they’re both obviously better than the gold standard and serve the same forward guidance function.

Indeed, it’s really quite baffling that so many people believe in the gold standard. It cries out for some sort of psychological explanation, as to just what cognitive heuristic is failing when otherwise-intelligent and highly-educated people get monetary policy so deeply, deeply wrong. A lot of them don’t even seem to grasp when or how we left the gold standard; it really happened when FDR suspended gold convertibility in 1933. After that, under the Bretton Woods system, only national governments could exchange money for gold, and the Nixon shock that people normally think of as “ending the gold standard” was just the final nail in the coffin, and clearly necessary since inflation was rapidly eating through our gold reserves.

A lot of it seems to come down to a deep distrust of government, especially federal government (I still do not grok why the likes of Ron Paul think state governments are so much more trustworthy than the federal government); the Federal Reserve is a government agency (sort of) and is therefore not to be trusted—and look, it has federal right there in the name.

But why do people hate government so much? Why do they think politicians are much less honest than they actually are? Part of it could have to do with the terrifying expansion of surveillance and weakening of civil liberties in the face of any perceived outside threat (Sedition Act, PATRIOT ACT, basically the same thing), but often the same people defending those programs are the ones who otherwise constantly complain about Big Government. Why do polls consistently show that people don’t trust the government, but want it to do more?

I think a lot of this comes down to the vague meaning of the word “government” and the associations we make with particular questions about it. When I ask “Do you trust the government?” you think of the NSA and the Vietnam War and Watergate, and you answer “No.” But when I ask “Do you want the government to do more?” you think of the failure at Katrina, the refusal to expand Medicaid, the pitiful attempts at reducing carbon emissions, and you answer “Yes.” When I ask if you like the military, your conditioned reaction is to say the patriotic thing, “Yes.” But if I ask whether you like the wars we’ve been fighting lately, you think about the hundreds of thousands of people killed and the wanton destruction to achieve no apparent actual objective, and you say “No.” Most people don’t come to these polls with thought-out opinions they want to express; the questions evoke emotional responses in them and they answer accordingly. You can also evoke different responses by asking “Should we cut government spending?” (People say “Yes.”) versus asking “Should we cut military spending, Social Security, or Medicare?” (People say “No.”) The former evokes a sense of abstract government taking your tax money; the latter evokes the realization that this money is used for public services you value.

So, the gold standard has acquired positive emotional vibes, and the Fed has acquired negative emotional vibes.

The former is fairly easy to explain: “good as gold” is an ancient saying, and “the gold standard” is even a saying we use in general to describe the right way of doing something (“the gold standard in prostate cancer treatment”). Humans have always had a weird relationship with gold; something about its timeless and noncorroding shine mesmerizes us. That’s why you occasionally get proposals for a silver standard, but no one ever seems to advocate an oil standard, an iron standard, or a lumber standard, which would make about as much sense.

The latter is a bit more difficult to explain: What did the Fed ever do to you? But I think it might have something to do with the complexity of sound monetary policy, and the resulting air of technocratic mystery surrounding it. Moreover, the Fed actively cultivates this image, by using “open-market operations” and “quantitative easing” to “target interest rates”, instead of just saying, “We’re printing money.” There may be some good reasons to do it this way, but a lot of it really does seem to be intended to obscure the truth from the uninitiated and perpetuate the myth that they are almost superhuman. “It’s all very complicated, you see; you wouldn’t understand.” People are hoarding their money, so there’s not enough money in circulation, so prices are falling, so you’re printing more money and trying to get it into circulation. That’s really not that complicated. Indeed, if it were, we wouldn’t be able to write a simple equation like a Taylor Rule or nominal GDP targeting in order to automate it!

The reason so many people become gold bugs after taking a couple of undergraduate courses in economics, then, is that this teaches them enough that they feel they have seen through the veil; the curtain has been pulled open and the all-powerful Wizard revealed to be an ordinary man at a control panel. (Spoilers? The movie came out in 1939. Actually, it was kind of about the gold standard.) “What? You’ve just been printing money all this time? But that is surely madness!” They don’t actually understand why printing money is actually a perfectly sensible thing to do on many occasions, and it feels to them a lot like what would happen if they just went around printing money (counterfeiting) or what a sufficiently corrupt government could do if they printed unlimited amounts (which is why they keep bringing up Zimbabwe). They now grasp what is happening, but not why. A little learning is a dangerous thing.

Now as for why Paul Volcker wants to go back to Bretton Woods? That, I cannot say. He’s definitely got more than a little learning. At least he doesn’t want to go back to the classical gold standard.

The credit rating agencies to be worried about aren’t the ones you think

JDN 2457499

John Oliver is probably the best investigative journalist in America today, despite being neither American nor officially a journalist; last week he took on the subject of credit rating agencies, a classic example of his mantra “If you want to do something evil, put it inside something boring.” (Note that the segment is on HBO, so there is foul language.)

As ever, his analysis of the subject is quite good—it’s absurd how much power these agencies have over our lives, and how little accountability they face for even ensuring accuracy.

But I couldn’t help but feel that he was kind of missing the point. The credit rating agencies to really be worried about aren’t Equifax, Experian, and TransUnion, the ones that assess credit ratings on individuals. They are Standard & Poor’s, Moody’s, and Fitch (which would have been even easier to skewer the way John Oliver did—perhaps we can get them confused with Standardly Poor, Moody, and Filch), the agencies which assess credit ratings on institutions.

These credit rating agencies have almost unimaginable power over our society. They are responsible for rating the risk of corporate bonds, certificates of deposit, stocks, derivatives such as mortgage-backed securities and collateralized debt obligations, and even municipal and government bonds.

S&P, Moody’s, and Fitch don’t just rate the creditworthiness of Goldman Sachs and J.P. Morgan Chase; they rate the creditworthiness of Detroit and Greece. (Indeed, they played an important role in the debt crisis of Greece, which I’ll talk about more in a later post.)

Moreover, they have been proven corrupt. It’s a matter of public record.

Standard & Poor’s is the worst; they have been successfully sued for fraud by small banks in Pennsylvania and by the State of New Jersey; they have also settled fraud cases with the Securities and Exchange Commission and the Department of Justice.

Moody’s has also been sued for fraud by the Department of Justice, and all three have been prosecuted for fraud by the State of New York.

But in fact this underestimates the corruption, because the worst conflicts of interest aren’t even illegal, or weren’t until Dodd-Frank was passed in 2010. The basic structure of this credit rating system is fundamentally broken; the agencies are private, for-profit corporations, and they get their revenue entirely from the banks that pay them to assess their risk. If they rate a bank’s asset as too risky, the bank stops paying them, and instead goes to another agency that will offer a higher rating—and simply the threat of doing so keeps them in line. As a result their ratings are basically uncorrelated with real risk—they failed to predict the collapse of Lehman Brothers or the failure of mortgage-backed CDOs, and they didn’t “predict” the European debt crisis so much as cause it by their panic.

Then of course there’s the fact that they are obviously an oligopoly, and furthermore one that is explicitly protected under US law. But then it dawns upon you: Wait… US law? US law decides the structure of credit rating agencies that set the bond rates of entire nations? Yes, that’s right. You’d think that such ratings would be set by the World Bank or something, but they’re not; in fact, here’s a paper published by the World Bank in 2004 arguing that, rather than reform our credit rating system, we should instead tell poor countries to reform themselves so they can better impress the private credit rating agencies.

In fact the whole concept of “sovereign debt risk” is fundamentally defective; a country that borrows in its own currency should never have to default on debt under any circumstances. National debt is almost nothing like personal or corporate debt. Such a country’s real fears should be inflation and unemployment—its monetary policy should be set to minimize the harm of these two basic macroeconomic problems, understanding that policies which mitigate one may inflame the other. There is such a thing as bad fiscal policy, but it has nothing to do with “running out of money to pay your debt” unless you are forced to borrow in a currency you can’t control (as Greece is, because it is on the Euro—its debt is less like the US national debt and more like the debt of Puerto Rico, which is suffering an ongoing debt crisis you may not have heard about). If you borrow in your own currency, you should be worried about excessive borrowing creating inflation and devaluing your currency—but not about suddenly being unable to repay your creditors. The whole concept of giving a sovereign nation a credit rating makes no sense. You will be repaid on time and in full, in nominal terms; if inflation or currency exchange has devalued the currency you are repaid in, that’s sort of like a partial default, but it’s a fundamentally different kind of “default” than simply not paying back the money—and credit ratings have no way of capturing that difference.

In particular, it makes no sense for interest rates on government bonds to go up when a country is suffering some kind of macroeconomic problem.

The basic argument for why interest rates go up when risk is higher is that lenders expect to be paid more by the borrowers who do repay, to compensate for what they lose from those who don’t. This is already much more problematic than most economists appreciate; I’ve been meaning to write a paper on how this system creates self-fulfilling prophecies of default, and moral hazard as people who pay their debts are forced to subsidize those who don’t. But it at least makes some sense.
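
For what it’s worth, here is a minimal sketch of that break-even logic, under the simplifying assumption that a defaulting borrower pays back nothing at all:

```python
# Break-even lending rate: charge r such that the expected repayment from
# risky borrowers matches the risk-free payoff.
#   (1 - p) * (1 + r) = 1 + r_f   =>   r = (1 + r_f) / (1 - p) - 1
# Assumes zero recovery on default, purely for illustration.

def break_even_rate(default_prob, risk_free_rate):
    return (1 + risk_free_rate) / (1 - default_prob) - 1

# A 5% perceived chance of default against a 2% risk-free rate already
# pushes the required rate to about 7.4%.
print(f"{break_even_rate(0.05, 0.02):.2%}")
```

You can see the self-fulfilling-prophecy problem right in the formula: the higher the perceived default probability, the higher the required rate, which itself makes default more likely.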

But if a country is “high risk” in the sense that macroeconomic instability undermines the real value of its debt, what we should want is to help it restore macroeconomic stability. Yet we know that when there is a surge in interest rates on government bonds, instability gets worse, not better. Fiscal policy is suddenly shifted away from real production into higher debt payments, and this creates unemployment and makes the economic crisis worse. As Paul Krugman writes about frequently, these policies of “austerity” cause enormous damage to national economies and ultimately benefit no one, because they destroy the source of wealth that would have been used to repay the debt.

By letting credit rating agencies decide the rates at which governments must borrow, we are effectively treating national governments as a special case of corporations. But corporations, by design, act for profit and can go bankrupt. National governments are supposed to act for the public good and persist indefinitely. We can’t simply let Greece fail as we might let a bank fail (and of course we’ve seen that there are serious downsides even to that). We have to restructure the sovereign debt system so that it benefits the development of nations rather than detracting from it. The first step is removing the power of private for-profit corporations in the US to decide the “creditworthiness” of entire countries. If we need to assess such risks at all, the assessments should be made by international institutions like the UN or the World Bank.

But right now people are so stuck in the idea that national debt is basically the same as personal or corporate debt that they can’t even understand the problem. For after all, one must repay one’s debts.

Why is it so hard to get a job?

JDN 2457411

The United States is slowly dragging itself out of the Second Depression.

Unemployment fell from almost 10% to about 5%.

Core inflation has been kept between 0% and 2% most of the time.

Overall inflation has been within a reasonable range:

[Figure: US inflation rate]

Real GDP has returned to its normal growth trend, though with a permanent loss of output relative to what would have happened without the Great Recession.

[Figure: US real GDP growth]

Consumption spending is also back on trend, tracking GDP quite precisely.

The Federal Reserve even raised the federal funds interest rate above the zero lower bound, signaling a return to normal monetary policy. (As I argued previously, I’m pretty sure that was their main goal actually.)

Employment remains well below the pre-recession peak, but is now beginning to trend upward once more.

The only thing that hasn’t recovered is labor force participation, which continues to decline. This is how we can have unemployment go back to normal while employment remains depressed; people leave the labor force by retiring, going back to school, or simply giving up looking for work. By the formal definition, someone is only unemployed if they are actively seeking work. No, this is not new, and it is certainly not Obama rigging the numbers. This is how we have measured unemployment for decades.

Actually, it’s kind of the opposite: Since the Clinton administration we’ve also kept track of “broad unemployment”, which includes people who’ve given up looking for work and people who have some work but are trying to find more. But we can’t directly compare it to anything that happened before 1994, because the BLS didn’t keep track of it before then. All we can do is estimate based on what we did measure. Based on such estimation, broad unemployment in the Great Depression may have gotten as high as 50%. (I’ve found that one of the best-fitting models is actually one of the simplest: assume that broad unemployment is 1.8 times narrow unemployment. This fits much better than you might think.)
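
If you want to play with that rule of thumb, here is a minimal sketch; the 1.8 multiplier is my own rough estimate from above, and the sample rates are purely illustrative, not official BLS figures:

```python
# Rough rule of thumb: broad unemployment is about 1.8 times narrow
# (headline) unemployment. The multiplier is an estimate, not BLS data.

def estimate_broad_unemployment(narrow_rate, multiplier=1.8):
    return multiplier * narrow_rate

for narrow in (0.05, 0.10, 0.25):
    print(f"narrow {narrow:.0%} -> estimated broad "
          f"{estimate_broad_unemployment(narrow):.0%}")
# A Depression-era narrow rate around 25% would imply broad
# unemployment in the neighborhood of 45%.
```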

So, yes, we muddle our way through, and the economy eventually heals itself. We could have brought the economy back much sooner if we had better fiscal policy, but at least our monetary policy was good enough that we were spared the worst.

But I think most of us—especially in my generation—recognize that it is still really hard to get a job. Overall GDP is back to normal, and even unemployment looks all right; but why are so many people still out of work?

I have a hypothesis about this: I think a major part of why it is so hard to recover from recessions is that our system of hiring is terrible.

Contrary to popular belief, layoffs do not actually substantially increase during recessions. Quits are substantially reduced, because people are afraid to leave current jobs when they aren’t sure of getting new ones. As a result, rates of job separation actually go down in a recession. Job separation does predict recessions, but not in the way most people think. One of the things that made the Great Recession different from other recessions is that most layoffs were permanent, instead of temporary—but we’re still not sure exactly why.

Here, let me show you some graphs from the BLS.

This graph shows job openings from 2005 to 2015:

[Figure: Job openings, 2005–2015]

This graph shows hires from 2005 to 2015:

[Figure: Hires, 2005–2015]

Both of those show the pattern you’d expect, with openings and hires plummeting in the Great Recession.

But check out this graph, of job separations from 2005 to 2015:

[Figure: Job separations, 2005–2015]

Same pattern!

Unemployment in the Second Depression wasn’t caused by a lot of people losing jobs. It was caused by a lot of people not getting jobs—either after losing previous ones, or after graduating from school. There weren’t enough openings, and even when there were openings there weren’t enough hires.

Part of the problem is obviously just the business cycle itself. Spending drops because of a financial crisis, then businesses stop hiring people because they don’t project enough sales to justify it; then spending drops even further because people don’t have jobs, and we get caught in a vicious cycle.

But we are now recovering from the cyclical downturn; spending and GDP are back to their normal trend. Yet the jobs never came back. Something is wrong with our hiring system.

So what’s wrong with our hiring system? Probably a lot of things, but here’s one that’s been particularly bothering me for a long time.

As any job search advisor will tell you, networking is essential for career success.

There are so many different places you can hear this advice, it honestly gets tiring.

But stop and think for a moment about what that means. One of the most important determinants of what job you will get is… what people you know?

It’s not what you are best at doing, as it would be if the economy were optimally efficient.

It’s not even what you have credentials for, as we might expect as a second-best solution.

It’s not even how much money you already have, though that certainly is a major factor as well.

It’s what people you know.

Now, I realize, this is not entirely beyond your control. If you actively participate in your community, attend conferences in your field, and so on, you can establish new contacts and expand your network. A major part of the benefit of going to a good college is actually the people you meet there.

But a good portion of your social network is more or less beyond your control, and above all, says almost nothing about your actual qualifications for any particular job.

There are certain jobs, such as marketing, that actually directly relate to your ability to establish rapport and build weak relationships rapidly. These are a tiny minority. (Actually, most of them are the sort of job that I’m not even sure needs to exist.)

For the vast majority of jobs, your social skills are a tiny, almost irrelevant part of the actual skill set needed to do the job well. This is true of jobs from writing science fiction to teaching calculus, from diagnosing cancer to flying airliners, from cleaning up garbage to designing spacecraft. Social skills are rarely harmful, and even often provide some benefit, but if you need a quantum physicist, you should choose the recluse who can write down the Dirac equation by heart over the well-connected community leader who doesn’t know what an integral is.

At the very least, it strains credibility to suggest that social skills are so important for every job in the world that they should be one of the defining factors in who gets hired. And make no mistake: Networking is as beneficial for landing a job at a local bowling alley as it is for becoming Chair of the Federal Reserve. Indeed, for many entry-level positions networking is literally all that matters, while advanced positions at least exclude candidates who don’t have certain necessary credentials, and then make the decision based upon who knows whom.

Yet, if networking is so inefficient, why do we keep using it?

I can think of a couple reasons.

The first reason is that this is how we’ve always done it. Indeed, networking strongly pre-dates capitalism or even money; in ancient tribal societies there were certainly jobs to assign people to: who will gather berries, who will build the huts, who will lead the hunt. But there were no colleges, no certifications, no resumes—there was only your position in the social structure of the tribe. I think most people simply automatically default to a networking-based system without even thinking about it; it’s just the instinctual System 1 heuristic.

One of the few things I really liked about Debt: The First 5000 Years was the discussion of how similar the behavior of modern CEOs is to that of ancient tribal chieftains, for reasons that make absolutely no sense in terms of neoclassical economic efficiency—but perfect sense in light of human evolution. I wish Graeber had spent more time on that, instead of on the many long digressions about international debt policy that he clearly does not understand.

But there is a second reason as well, a better reason, a reason that we can’t simply give up on networking entirely.

The problem is that many important skills are very difficult to measure.

College degrees do a decent job of assessing our raw IQ, our willingness to persevere on difficult tasks, and our knowledge of the basic facts of a discipline (as well as a fantastic job of assessing our ability to pass standardized tests!). But when you think about the skills that really make a good physicist, a good economist, a good anthropologist, a good lawyer, or a good doctor—they really aren’t captured by any of the quantitative metrics that a college degree provides. Your capacity for creative problem-solving, your willingness to treat others with respect and dignity; these things don’t appear in a GPA.

This is especially true in research: The degree tells how good you are at doing the parts of the discipline that have already been done—but what we really want to know is how good you’ll be at doing the parts that haven’t been done yet.

Nor are skills precisely aligned with the content of a resume; the best predictor of doing something well may in fact be whether you have done so in the past—but how can you get experience if you can’t get a job without experience?

These so-called “soft skills” are difficult to measure—but not impossible. Basically the only reliable measurement mechanisms we have require knowing and working with someone for a long span of time. You can’t read them off a resume, and you can’t see them in an interview (interviews are actually a horribly biased hiring mechanism, particularly biased against women). In effect, the only way to really know if someone will be good at a job is to work with them at that job for a while.

There’s a fundamental information problem here I’ve never quite been able to resolve. It pops up in a few other contexts as well: How do you know whether a novel is worth reading without reading the novel? How do you know whether a film is worth watching without watching the film? When the quality of something can only be determined by paying the cost of acquiring it, there is basically no way of assessing that quality before we buy.

Networking is an attempt to get around this problem. To decide whether to read a novel, ask someone who has read it. To decide whether to watch a film, ask someone who has watched it. To decide whether to hire someone, ask someone who has worked with them.

The problem is that this is such a weak measure that it’s not much better than no measure at all. I often wonder what would happen if businesses were required to hire people based entirely on resumes, with no interviews, no recommendation letters, and any personal contacts treated as conflicts of interest rather than useful networking opportunities—a world where the only thing we use to decide whether to hire someone is their documented qualifications. Could it herald a golden age of new economic efficiency and job fulfillment? Or would it result in widespread incompetence and catastrophic collapse? I honestly cannot say.

Thus ends our zero-lower-bound interest rate policy

JDN 2457383

Not with a bang, but with a whimper.

If you are reading the blogs as they are officially published, it will have been over a week since the Federal Reserve ended its policy of zero interest rates. (If you are reading this as a Patreon Blog from the Future, it will only have been a few days.)

The official announcement was made on December 16. The Federal Funds Target Rate will be raised from 0%-0.25% to 0.25%-0.5%. That one-quarter percentage point—itself no larger than the margin of error the Fed allots itself—will make all the difference.

As pointed out in the New York Times, this is the first time nominal interest rates have been raised in almost a decade. But the Fed had been promising it for some time, and thus a major reason they did it was to preserve their own credibility. They also say they think inflation is about to hit the 2% target, though it hasn’t yet (and I was never clear on why 2% was the target in the first place).

Actually, overall inflation is currently near zero. What is at 2% is what’s called “core inflation”, which excludes particularly volatile products such as oil and food. The idea is that we want to set monetary policy based upon long-run trends in the economy as a whole, not based upon sudden dips and surges in oil prices. But right now we are in the very odd scenario of the Fed raising interest rates in order to stop inflation even as the total amount most people need to spend to maintain their standard of living is the same as it was a year ago.

As MSNBC argues, it is essentially an announcement that the Second Depression is over and the economy has now returned to normal. Of course, simply announcing such a thing does not make it true.

Personally, I think this move is largely symbolic. The difference between 0% and 0.25% is unimportant for most practical purposes.

If you owe $100,000 over 30 years at 0% interest, you will pay $277.78 per month, totaling of course $100,000. If your interest rate were raised to 0.25% interest, you would instead owe $288.35 per month, totaling $103,807.28. Even over 30 years, that 0.25% interest raises your total expenditure by less than 4%.

Over shorter terms it’s even less important. If you owe $20,000 over 5 years at 0% interest, you will pay $333.33 per month totaling $20,000. At 0.25%, you would pay $335.46 per month totaling $20,127.34, a mere 0.6% more.
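
If you want to check those figures, the standard fixed-payment amortization formula reproduces them; here is a quick sketch:

```python
# Fixed-payment loan amortization: payment = P * r / (1 - (1 + r)^(-n)),
# where r is the monthly rate and n the number of monthly payments.
# (The 0% case is just principal divided by the number of payments.)

def monthly_payment(principal, annual_rate, years):
    n = years * 12
    if annual_rate == 0:
        return principal / n
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -n)

for principal, rate, years in [(100_000, 0.0025, 30), (20_000, 0.0025, 5)]:
    pay = monthly_payment(principal, rate, years)
    print(f"${principal:,} at {rate:.2%} over {years} years: "
          f"${pay:,.2f}/month, ${pay * years * 12:,.2f} total")
# -> about $288.35/month ($103,807 total) and $335.46/month ($20,127 total),
#    matching the figures above.
```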

Moreover, if a bank was willing to take out a loan at 0%, they’ll probably still be willing at 0.25%.

Where it would have the largest impact is in more exotic financial instruments, like zero-amortization or negative-amortization bonds. A zero-amortization bond at 0% is literally free money forever (assuming you can keep rolling it over). A zero-amortization bond at 0.25% means you must at least pay 0.25% of the money back each year. A negative-amortization bond at 0% makes no sense mathematically (somehow you pay back less than 0% at each payment?), while a negative-amortization bond at 0.25% only doesn’t make sense practically. If both zero and negative-amortization seem really bizarre and impossible to justify, that’s because they are. They should not exist. Most exotic financial instruments have no reason to exist, aside from the fact that they can be used to bamboozle people into giving money to the financial corporations that create them. (Which reminds me, I need to see The Big Short. But of course I have to see Star Wars: The Force Awakens first; one must have priorities.)

So, what will happen as a result of this change in interest rates? Probably not much. Inflation might go down a little—which means we might have overall deflation, and that would be bad—and the rate of increase in credit might drop slightly. In the worst-case scenario, unemployment starts to rise again, the Fed realizes their mistake, and interest rates will be dropped back to zero.

I think it’s more instructive to look at why they did this—the symbolic significance behind it.

The zero lower bound is weird. It makes a lot of economists very uncomfortable. The usual rules for how monetary and fiscal policy work break down, because the equation hits up against a constraint—a corner solution, more technically. Krugman often talks about how many of the usual ideas about how interest rates and government spending work collapse at the zero-lower-bound. We have models of this sort of thing that are pretty good, but they’re weird and counter-intuitive, so policymakers never seem to actually use them.

What is the zero lower bound, you ask? Exactly what it says on the tin. There is a lower bound on how low you can set an interest rate, and for all practical purposes that limit is zero. If you start trying to set an interest rate of -5%, people won’t be willing to loan out money and will instead hoard cash. (Interestingly, a central bank with a strong currency, such as that of the US, UK, or EU, can actually set small negative nominal interest rates—because people consider their bonds safer than cash, so they’ll pay for the safety. The ECB, Europe’s Fed, did so for a while.)

The zero-lower-bound actually applies to prices in general, not just interest rates. If a product is so worthless to you that you don’t even want it if it’s free, it’s very rare for anyone to actually pay you to take it—partly because there might be nothing to stop you from taking a huge amount of it and forcing them to pay you ridiculous amounts of money. “How much is this paperclip?” “-$0.75.” “I’ll have 50 billion, please.” In a few rare cases, they might be able to pay you to take it an amount that’s less than what it costs you to store and transport. Also, if they benefit from giving it to you, companies will give you things for free—think ads and free samples. But basically, if people won’t even take something for free, that thing simply doesn’t get sold.

But if we are in a recession, we really don’t want loans to stop being made altogether. So if people are unwilling to take out loans at 0% interest, we’re in trouble. Generally what we have to do is rely on inflation to reduce the real value of money over time, thus creating a real interest rate that’s negative even though the nominal interest rate remains stuck at 0%. But what if inflation is very low? Then there’s nothing you can do except find a way to raise inflation or increase demand for credit. This means relying upon unconventional methods like quantitative easing (trying to cause inflation), or preferably using fiscal policy to spend a bunch of money and thereby increase demand for credit.
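
The real-rate arithmetic here is just the Fisher relation between nominal rates, inflation, and real rates; here is a minimal sketch with made-up inflation numbers:

```python
# Fisher relation: 1 + r_real = (1 + r_nominal) / (1 + inflation).
# With the nominal rate stuck at 0%, any positive inflation makes the
# real rate negative, which is what keeps borrowing attractive.

def real_rate(nominal_rate, inflation):
    return (1 + nominal_rate) / (1 + inflation) - 1

print(f"{real_rate(0.00, 0.02):.2%}")  # about -1.96%: 2% inflation at the ZLB
print(f"{real_rate(0.00, 0.00):.2%}")  # 0.00%: no inflation, no relief
```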

What the Fed is basically trying to do here is say that we are no longer in that bad situation. We can now set interest rates where they actually belong, rather than forcing them as low as they’ll go and hoping inflation will make up the difference.

It’s actually similar to how if you take a test and score 100%, there’s no way of knowing whether you just barely got 100%, or if you would have still done as well if the test were twice as hard—but if you score 99%, you actually scored 99% and would have done worse if the test were harder. In the former case you were up against a constraint; in the latter it’s your actual value. The Fed is essentially announcing that we really want interest rates near 0%, as opposed to being bound at 0%—and the way they do that is by setting a target just slightly above 0%.

So far, there doesn’t seem to have been much effect on markets. And frankly, that’s just what I’d expect.