What does a central bank actually do?

Aug 26 JDN 2458357

Though central banks are a cornerstone of the modern financial system, I don’t think most people have a clear understanding of how they actually function. (I think this may be by design; there are many ways we could make central banking more transparent, but policymakers seem reluctant to show their hand.)

I’ve even seen famous economists make really severe errors in their understanding of monetary policy, as John Taylor did when he characterized low-interest-rate policy as a “price ceiling”.

Central banks “print money” and “set interest rates”. But how exactly do they do these things, and what on Earth do they have to do with each other?

The first thing to understand is that most central banks don’t actually print money. In the US, cash is actually printed by the Department of the Treasury. But cash is only a small part of the money in circulation. The monetary base consists of cash in vaults and in circulation; the US monetary base is about $3.6 trillion. The money supply can be measured a few different ways, but the standard way is to include checking accounts, traveler’s checks, savings accounts, money market accounts, short-term certified deposits, and basically anything that can be easily withdrawn and spent as money. This is called the M2 money supply, and in the US it is currently over $14.1 trillion. That means that only 25% of our money supply is in actual, physical cash—the rest is all digital. This is actually a relatively high proportion for actual cash, as the monetary base was greatly increased in response to the Great Recession. When we say that the Fed “prints money”, what we really mean is that they are increasing the money supply—but typically they do so in a way that involves little if any actual printing of cash.
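
Just to make that arithmetic explicit, here's the back-of-the-envelope version in Python, treating the whole monetary base as "cash" the way the paragraph does (the figures are the rough 2018 values quoted above, not precise data):

    monetary_base = 3.6e12      # rough US monetary base, in dollars
    m2_money_supply = 14.1e12   # rough US M2 money supply, in dollars

    cash_share = monetary_base / m2_money_supply
    print(f"{cash_share:.1%}")  # about 25% of M2, matching the figure above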

The second thing to understand is that central banks don’t exactly set interest rates either. They target interest rates. What’s the difference, you ask?

Well, setting interest rates would mean that they made a law or something saying you have to charge exactly 2.7%, and you get fined or something if you don’t do that.

Targeting interest rates is a subtler art. The Federal Reserve decides what interest rates they want banks to charge, and then they engage in what are called open-market operations to try to make that happen. Banks hold reserves—money that they are required to keep as collateral for their loans. Since we are in a fractional-reserve system, they are required to hold only a certain proportion of their deposits as reserves (usually about 10%). In open-market operations, the Fed buys and sells assets (usually US Treasury bonds) in order to either increase or decrease the amount of reserves available to banks, to try to get them to lend to each other at the targeted interest rates.
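
To make the open-market story concrete, here is a deliberately stylized sketch of the textbook money-multiplier logic (the $1 million purchase and the clean 10% requirement are illustrative assumptions; real Fed accounting is messier than this):

    reserve_requirement = 0.10    # banks must hold roughly 10% of deposits as reserves
    bond_purchase = 1_000_000     # suppose the Fed buys $1 million of Treasury bonds

    # The purchase credits the selling bank's reserve account with newly created money.
    new_reserves = bond_purchase

    # In the textbook model, repeated rounds of lending and redepositing let those
    # reserves support up to 1 / reserve_requirement dollars of new deposits.
    max_new_deposits = new_reserves / reserve_requirement
    print(f"${max_new_deposits:,.0f}")   # $10,000,000 of potential new deposits

Selling bonds works the same way in reverse: reserves are drained, lending tightens, and the rate banks charge each other drifts up toward the target.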

Why not simply set the interest rate by law? Because then it wouldn’t be the market-clearing interest rate. There would be shortages or gluts of assets.

It might be easier to grasp this if we step away from money for a moment and just think about the market for some other good, like televisions.

Suppose that the government wants to set the price of a television in the market to a particular value, say $500. (Why? Who knows. Let’s just run with it for a minute.)

If they simply declared by law that the price of a television must be $500, here’s what would happen: Either that would be too low, in which case there would be a shortage of televisions as demand exceeded supply; or that would be too high, in which case there would be a glut of televisions as supply exceeded demand. Only if they got spectacularly lucky and the market price already was $500 per television would they not have to worry about such things (and then, why bother?).

But suppose the government had the power to create and destroy televisions virtually at will with minimal cost.
Now, they have a better way; they can target the price of a television, and buy and sell televisions as needed to bring the market price to that target. If the price is too low, the government can buy and destroy a lot of televisions, to bring the price up. If the price is too high, the government can make and sell a lot of televisions, to bring the price down.

Now, let’s go back to money. This power to create and destroy at will is hard to believe for televisions, but absolutely true for money. The government can create and destroy almost any amount of money at will—they are limited only by the very inflation and deflation the central bank is trying to affect.

This allows central banks to intervene in the market without creating shortages or gluts; even though they are effectively controlling the interest rate, they are doing so in a way that avoids having a lot of banks wanting to take loans they can’t get or wanting to give loans they can’t find anyone to take.

The goal of all this manipulation is ultimately to reduce inflation and unemployment. Unfortunately it’s basically impossible to eliminate both simultaneously; the Phillips curve describes the relationship generally found, in which decreased inflation usually comes with increased unemployment and vice versa. But the basic idea is that we set reasonable targets for each (usually about 2% inflation and 5% unemployment; frankly I’d prefer we swap the two, which was more or less what we did in the 1950s), and then if inflation is too high we raise interest rate targets, while if unemployment is too high we lower interest rate targets.
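
In pseudo-policy terms, that reaction function is about this simple (the targets, the quarter-point step, and the rule of checking inflation first are my own illustrative assumptions, not the Fed's actual procedure):

    INFLATION_TARGET = 0.02      # roughly 2% inflation
    UNEMPLOYMENT_TARGET = 0.05   # roughly 5% unemployment

    def adjust_rate_target(current_rate, inflation, unemployment, step=0.0025):
        """Toy central-bank reaction: tighten when inflation is above target,
        loosen when unemployment is above target. Purely illustrative."""
        if inflation > INFLATION_TARGET:
            return current_rate + step               # raise the interest rate target
        if unemployment > UNEMPLOYMENT_TARGET:
            return max(0.0, current_rate - step)     # lower it, but not below zero
        return current_rate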

What if they’re both too high? Then we’re in trouble. This has happened; it is called stagflation. The money supply isn’t the only thing affecting inflation and unemployment, and sometimes we get hit with a bad shock that makes both of them high at once. In that situation, there isn’t much that monetary policy can do; we need to find other solutions.

But how does targeting interest rates lead to inflation? To be quite honest, we don’t actually know.

The basic idea is that lower interest rates should lead to more borrowing, which leads to more spending, which leads to more inflation. But beyond that, we don’t actually understand how interest rates translate into prices—this is the so-called transmission mechanism, which remains an unsolved problem in macroeconomics. Based on the empirical data, I lean toward the view that the mechanism is primarily via housing prices; lower interest rates lead to more mortgages, which raises the price of real estate, which raises the price of everything else. This also makes sense theoretically, as real estate consists of large, illiquid assets for which the long-term interest rate is very important. Your decision to buy an apple or even a television is probably not greatly affected by interest rates—but your decision to buy a house surely is.
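
To see why housing is the natural suspect, consider how sharply the monthly payment on a 30-year fixed-rate mortgage moves with the interest rate, using the standard amortization formula (the $300,000 principal and the particular rates are arbitrary illustrative choices):

    def monthly_payment(principal, annual_rate, years):
        """Standard fixed-rate amortization: M = P*r / (1 - (1+r)**-n)."""
        n = years * 12
        r = annual_rate / 12
        return principal / n if r == 0 else principal * r / (1 - (1 + r) ** -n)

    for rate in (0.03, 0.05, 0.07):
        print(f"{rate:.0%}: ${monthly_payment(300_000, rate, 30):,.2f} per month")
    # 3%: about $1,265   5%: about $1,610   7%: about $1,996

A couple of percentage points on the mortgage rate changes the monthly cost of the same house by hundreds of dollars, which is exactly the kind of change that feeds into real estate prices; no plausible interest rate change does anything comparable to the price of an apple.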

If that is indeed the case, it’s worth thinking about whether this is really the right way to intervene on inflation and unemployment. High housing prices are an international crisis; maybe we need to be looking at ways to decrease unemployment without affecting housing prices. But that is a tale for another time.

What would a game with realistic markets look like?

Aug 12 JDN 2458343

From Pokemon to Dungeons & Dragons, Final Fantasy to Mass Effect, almost all role-playing games have some sort of market: Typically, you buy and sell equipment, and often can buy services such as sleeping at inns. Yet the way those markets work is extremely rigid and unrealistic.

(I’m of course excluding games like EVE Online that actually create real markets between players; those markets are so realistic I actually think they would provide a good opportunity for genuine controlled experiments in macroeconomics.)

The weirdest thing about in-game markets is the fact that items almost always come with a fixed price. Sometimes there is some opportunity for haggling, or some randomization between different merchants; but the notion always persists that the item has a “true price” that is being adjusted upward or downward. This is more or less the opposite of how prices actually work in real markets.

There is no “true price” of a car or a pizza. Prices are whatever buyers and sellers make them. There is a true value—the amount of real benefit that can be obtained from a good—but even this is something that varies between individuals and also changes based on the other goods being consumed. The value of a pizza is considerably higher for someone who hasn’t eaten in days than to someone who just finished eating another pizza.

There is also what is called “The Law of One Price”, but like all laws of economics, it’s like the Pirate Code, more what you’d call a “guideline”, and it only applies to a particular good in a particular market at a particular time. The Law of One Price doesn’t even say that a pizza should have the same price tomorrow as it does today, or that the same pizza can’t be sold to two different customers at two different prices; it only says that the same pizza shouldn’t have two different prices in the same place at the same time for the same customer. (It seems almost tautological, right? And yet it still fails empirically, and does so again and again. I have seen offers for the same book in the same condition posted on the same website that differed by as much as 50%.)

In well-developed capitalist markets in large First World countries, we can lull ourselves into the illusion that there is only one price for a good, because markets are highly liquid and either highly competitive or controlled by a strong and stable oligopoly that enforces a particular price across places and times. The McDonald’s Dollar Menu is a policy choice by a massive multinational corporation; it’s not what would occur naturally if those items were sold on a competitive market.

Even then, this illusion can be broken when we are faced with a large economic shock, such as the OPEC price shock in 1973 or a natural disaster like Hurricane Katrina. It also tends to be broken for illiquid goods such as real estate.

If we consider the environment in which most role-playing games take place, it’s usually a sort of quasi-medieval or quasi-Renaissance feudal society, where a given government controls only a small region and traveling between towns is difficult and dangerous. Not only should the prices of goods differ substantially between towns, the currency used should frequently differ as well. Yes, most places would accept gold and silver; but a kingdom with a stable government will generally have a currency of significant seignorage, with coins worth considerably more than the gold used to mint them—yet the value of that seignorage will drop off as you move further away from that kingdom and its sphere of influence.

Moreover, prices should be inconsistent even between traders in the same town, and extremely volatile. When a town is mostly self-sufficient and trade is only a small part of its economy, even a small shock such as a bad thunderstorm or a brief drought can yield massive shifts in prices. Shortages and gluts will be frequent, as both supply and demand are small and ever-changing.

This wouldn’t be that difficult to implement. The simplest way would just be to institute random shocks to prices that vary by place and time. A more sophisticated method would be to actually simulate supply and demand for different goods, and then have prices respond to realistic shocks (e.g. a drought makes wheat more expensive, and the price of swords suddenly skyrockets after news of an impending dragon attack). Experiments have shown that competitive market outcomes can be achieved by simulating even a dozen or so traders using very simple heuristics like “don’t pay more than you can afford” and “don’t charge less than it cost you”.
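
Here is a minimal sketch of what that kind of simulation could look like, loosely in the spirit of the classic "zero-intelligence trader" experiments by Gode and Sunder (the valuations, costs, 200-coin price ceiling, and matching rule are all made-up assumptions for illustration, not a finished game system):

    import random

    def simulate_market(buyer_values, seller_costs, rounds=1000):
        """Each round a random buyer bids below their valuation and a random
        seller asks above their cost ("don't pay more than it's worth,
        don't charge less than it cost you"); a trade happens when the
        bid meets the ask, at a price between the two."""
        prices = []
        for _ in range(rounds):
            value = random.choice(buyer_values)
            cost = random.choice(seller_costs)
            bid = random.uniform(0, value)     # never bid above your valuation
            ask = random.uniform(cost, 200)    # never ask below your cost (200 is an arbitrary ceiling)
            if bid >= ask:
                prices.append((bid + ask) / 2)
        return prices

    # A dozen traders in a small town. A drought or a dragon scare could be
    # modeled simply by shifting every seller's cost or every buyer's valuation.
    buyers = [90, 100, 110, 120, 130, 140]
    sellers = [60, 70, 80, 90, 100, 110]
    prices = simulate_market(buyers, sellers)
    if prices:
        print(f"{len(prices)} trades at an average price of {sum(prices) / len(prices):.1f}")

Even this crude version produces prices that jump around from trade to trade and shift when costs or valuations shift, which is most of what a game would need.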

Why don’t game designers implement this? I think there are two reasons.

The first is simply that it would be more complicated. This is a legitimate concern in many cases; I particularly think Pokemon can justify using a simple economy, given its target audience. In particular, I agree that having more than a handful of currencies would be too much for players to keep track of; though perhaps having two or three (one for each major faction?) is still more interesting than only having one.

Also, tabletop games are inherently more limited in the amount of computation they can use, compared to video games. But for a game as complicated as say Skyrim, this really isn’t much of a defense. Skyrim actually simulated the daily routines of over a hundred different non-player characters; it could have been simulating markets in the background as well—in fact, it could have simply had those same non-player characters buy and sell goods with each other in a double-auction market that would automatically generate the prices that players face.

The more important reason, I think, is that game designers have a paralyzing fear of arbitrage.

I find it particularly aggravating how frequently games will set it up so that the price at which you buy and the price at which you sell are constrained so that the buying price is always higher, often as much as twice as high. This is not at all how markets work in the real world; frankly it’s only even close to true for goods like cars that rapidly depreciate. It makes sense that a given merchant will not sell you a good for less than what they would pay to buy it from you; but that only requires each individual merchant to have a well-defined willingness-to-pay and willingness-to-accept. It certainly does not require the arbitrary constraint that you can never sell something for more than what you bought it for.

In fact, I would probably even allow players who specialize in social skills to short-change and bamboozle merchants for profit, as this is absolutely something that happens in the real world, and was likely especially common under the very low levels of literacy and numeracy that prevailed in the Middle Ages.

To many game designers (and gamers), the ability to buy a good in one place, travel to another place, and sell that good for a higher price seems like cheating. But this practice is called being a merchant. That is literally what the entire retail industry does. The rules of your game should allow you to profit from activities that are in fact genuinely profitable real economic services in the real world.

I remember a similar complaint being raised against Skyrim shortly after its release, that one could acquire a pickaxe, collect iron ore, smelt it into steel, forge weapons out of it, and then sell the weapons for a sizeable profit. To some people, this sounded like cheating. To me, it sounds like being a blacksmith. This is especially true because Skyrim’s skill system allowed you to improve the quality of your smithed items over time, just like learning a trade through practice (though it ramped up too fast, as it didn’t take long to make yourself clearly the best blacksmith in all of Skyrim). Frankly, this makes far more sense than being able to acquire gold by adventuring through the countryside and slaughtering monsters or collecting lost items from caves. Blacksmiths were a large part of the medieval economy; spelunking adventurers were not. Indeed, it bothers me that there weren’t more opportunities like this; you couldn’t make your wealth by being a farmer, a vintner, or a carpenter, for instance.

Even if you managed to pull off pure arbitrage, providing no real services, such as by buying and selling between two merchants in the same town, or the same merchant on two consecutive days, that is also a highly profitable industry. Most of our financial system is built around it, frankly. If you manage to make your wealth selling wheat futures instead of slaying dragons, I say more power to you. After all, there were an awful lot of wheat-future traders in the Middle Ages, and to my knowledge no actually successful dragon-slayers.

Of course, if your game is about slaying dragons, it should include some slaying of dragons. And if you really don’t care about making a realistic market in your game, so be it. But I think that more realistic markets could actually offer a great deal of richness and immersion into a world without greatly increasing the difficulty or complexity of the game. A world where prices change in response to the events of the story just feels more real, more alive.

The ability to profit without violence might actually draw whole new modes of play to the game (as has indeed occurred with Skyrim, where a small but significant proportion of players have chosen to live out peaceful lives as traders or blacksmiths). It would also enrich the experience of more conventional players and help them recover from setbacks (if the only way to make money is to fight monsters and you keep getting killed by monsters, there isn’t much you can do; but if you have the option of working as a trader or a carpenter for a while, you could save up for better equipment and try the fighting later).

And hey, game designers: If any of you are having trouble figuring out how to implement such a thing, my consulting fees are quite affordable.

The unending madness of the gold standard

JDN 2457545

If you work in economics in any capacity (much like with “How is the economy doing?”, you don’t even really need to be in macroeconomics), you will encounter many people who believe in the gold standard. Many of these people will be otherwise quite intelligent and educated; they often understand economics better than most people (not that this is saying a whole lot). Yet somehow they continue to hold—and fiercely defend—this incredibly bizarre and anachronistic view of macroeconomics.

They even bring it up at the oddest times; I recently encountered someone who wrote a long and rambling post arguing for drug legalization (which I largely agree with, by the way) and concluded it with #EndTheFed, not seeming to grasp the total and utter irrelevance of this juxtaposition. It seems like it was just a conditioned response, or maybe the sort of irrelevant but consistent coda originally perfected by Cato and his “Carthago delenda est.” “Foederale Reservatum delendum est.” Hey, maybe that’s why they’re called the Cato Institute.

So just how bizarre is the gold standard? Well, let’s look at what sort of arguments they use to defend it. I’ll use Charles Kadlic, prominent Libertarian blogger on Forbes, as an example, with his “Top Ten Reasons That You Should Support the ‘Gold Commission’”:

  1. A gold standard is key to achieving a period of sustained, 4% real economic growth.
  2. A gold standard reduces the risk of recessions and financial crises.
  3. A gold standard would restore rising living standards to the middle-class.
  4. A gold standard would restore long-term price stability.
  5. A gold standard would stop the rise in energy prices.
  6. A gold standard would be a powerful force for restoring fiscal balance to federal state and local governments.
  7. A gold standard would help save Medicare and Social Security.
  8. A gold standard would empower Main Street over Wall Street.
  9. A gold standard would increase the liberty of the American people.
  10. Creation of a gold commission will provide the forum to chart a prudent path toward a 21st century gold standard.

Number 10 can be safely ignored, as clearly Kadlic just ran out of reasons and, to make a round number, tacked on the implicit assumption of the entire article, namely that this ‘gold commission’ would actually realistically lead us toward a gold standard. (Without it, the other 9 reasons are just non sequiturs.)

So let’s look at the other 9, shall we? Literally none of them are true. Several are outright backward.

You know a policy is bad when even one of its most prominent advocates can’t even think of a single real benefit it would have. A lot of quite bad policies do have perfectly real benefits, they’re just totally outweighed by their costs: For example, cutting the top income tax rate to 20% probably would actually contribute something to economic growth. Not a lot, and it would cut a swath through the federal budget and dramatically increase inequality—but it’s not all downside. Yet Kadlic couldn’t actually even think of one benefit of the gold standard that actually holds up. (I actually can do his work for him: I do know of one benefit of the gold standard, but as I’ll get to momentarily it’s quite small and can easily be achieved in better ways.)

First of all, it’s quite clear that the gold standard did not increase economic growth. If you cherry-pick your years properly, you can make it seem like Nixon leaving the gold standard hurt growth, but if you look at the real long-run trends in economic growth it’s clear that we had really erratic growth up until about the 1910s (the surge of government spending in WW1 and the establishment of the Federal Reserve), at which point we went through a temporary surge recovering from the Great Depression and then during WW2, and finally, if you smooth out the business cycle, our growth rates have slowly trended downward as growth in productivity has gradually slowed down.

Here’s GDP growth from 1800 to 1900, when we were on the classical gold standard:

[Figure: US GDP growth, 1800–1900]

Here’s GDP growth from 1929 to today, using data from the Bureau of Economic Analysis:

[Figure: US GDP growth, 1929–present (BEA data)]

Also, both of these are total GDP growth (because that is what Kadlic said), which means that part of what you’re seeing here is population growth rather than growth in income per person. Here’s GDP per person in the 1800s:

[Figure: US GDP per person in the 1800s]

If you didn’t already know, I bet you can’t guess where on those graphs we left the gold standard, which you’d clearly be able to do if the gold standard had this dramatic “double your GDP growth” kind of effect. I can’t immediately rule out some small benefit to the gold standard just from this data, but don’t worry; more thorough economic studies have done that. Indeed, it is the mainstream consensus among economists today that the gold standard is what caused the Great Depression.

Indeed, there’s a whole subfield of historical economics research that basically amounts to “What were they thinking?” trying to explain why countries stayed on the gold standard for so long when it clearly wasn’t working. Here’s a paper trying to argue it was a costly signal of your “rectitude” in global bond markets, but I find much more compelling the argument that it was psychological: Their belief in the gold standard was simply too strong, so confirmation bias kept holding them back from what needed to be done. They were like my aforementioned #EndTheFed acquaintance.

Then we get to Kadlic’s second point: Does the gold standard reduce the risk of financial crises? Let’s also address point 4, which is closely related: Does the gold standard improve price stability? Tell that to 1929.

In fact, financial crises were more common on the classical gold standard; the period of pure fiat monetary policy was so stable that it was called the Great Moderation, until the crash in 2008 screwed it all up—and that crash occurred essentially outside the standard monetary system, in the “shadow banking system” of unregulated and virtually unlimited derivatives. Had we actually forced banks to stay within the light of the standard banking system, the Great Moderation might have continued indefinitely.

As for “price stability”, that’s sort of true if you look at the long run, because prices were as likely to go down as they were to go up. But that isn’t what we mean by “price stability”. A system with good price stability will have a low but positive and steady level of inflation, and will therefore exhibit some long-run increases in price levels; it won’t have prices jump up and down erratically and end up on average the same.

For jump up and down is what prices did on the gold standard, as you can see from FRED:

[Figure: US inflation rate over the long run (FRED)]

This is something we could have predicted in advance; the price of any given product jumps up and down over time, and gold is just one product among many. Tying prices to gold makes no more sense than tying them to any other commodity.

As for stopping the rise in energy prices, energy prices aren’t rising. Even if they were (and they could at some point), the only way the gold standard would stop that is by triggering deflation (and therefore recession) in the rest of the economy.

Regarding number 6, I don’t see how the fiscal balance of federal and state governments is improved by periodic bouts of deflation that make their debt unpayable.

As for number 7, saving Medicare and Social Security, their payments out are tied to inflation and their payments in are tied to nominal GDP, so overall inflation has very little effect on their long-term stability. In any case, the problem with Medicare is spiraling medical costs (which Obamacare has done a lot to fix), and the problem with Social Security is just the stupid arbitrary cap on the income subject to payroll tax; the gold standard would do very little to solve either of those problems, though I guess it would make the nominal income cap less binding by triggering deflation, which is just about the worst way to avoid a price ceiling I’ve ever heard.

Regarding 8 and 9, I don’t even understand why Kadlic thinks that going to a gold standard would empower individuals over banks (does it seem like individuals were empowered over banks in the “Robber Baron Era”?), or what in the world it has to do with giving people more liberty (all that… freedom… you lose… when the Fed… stabilizes… prices?), so I don’t even know where to begin on those assertions. You know what empowers people over banks? The Consumer Financial Protection Bureau. You know what would enhance liberty? Ending mass incarceration. Libertarians fight tooth and nail against the former; sometimes they get behind the latter, but sometimes they don’t; Gary Johnson for some bizarre reason believes in privatization of prisons, which are directly linked to the surge in US incarceration.

The only benefit I’ve been able to come up with for the gold standard is as a commitment mechanism, something the Federal Reserve could do to guarantee its future behavior and thereby reduce the fear that it will suddenly change course on its past promises. This would make forward guidance a lot more effective at changing long-term interest rates, because people would have reason to believe that the Fed means what it says when it projects its decisions 30 years out.

But there are much simpler and better commitment mechanisms the Fed could use. They could commit to a Taylor Rule or nominal GDP targeting, both of which mainstream economists have been clamoring for for decades. There are some definite downsides to both proposals, but also some important upsides; and in any case they’re both obviously better than the gold standard and serve the same forward guidance function.
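
For reference, the Taylor rule really is just a one-line formula; here it is with Taylor's original 1993 coefficients (the hard part in practice is measuring the inputs, especially the output gap, which is part of why I say there are downsides):

    def taylor_rule(inflation, output_gap, natural_rate=0.02, inflation_target=0.02):
        """Taylor (1993): nominal policy rate = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap)."""
        return (natural_rate + inflation
                + 0.5 * (inflation - inflation_target)
                + 0.5 * output_gap)

    print(f"{taylor_rule(0.03, 0.01):.2%}")   # 3% inflation, output 1% above potential -> 6.00%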

Indeed, it’s really quite baffling that so many people believe in the gold standard. It cries out for some sort of psychological explanation, as to just what cognitive heuristic is failing when otherwise-intelligent and highly-educated people get monetary policy so deeply, deeply wrong. A lot of them don’t even seem to grasp when or how we left the gold standard; it really happened when FDR suspended gold convertibility in 1933. After that, under the Bretton Woods system, only national governments could exchange money for gold, and the Nixon shock that people normally think of as “ending the gold standard” was just the final nail in the coffin, and clearly necessary since inflation was rapidly eating through our gold reserves.

A lot of it seems to come down to a deep distrust of government, especially federal government (I still do not grok why the likes of Ron Paul think state governments are so much more trustworthy than the federal government); the Federal Reserve is a government agency (sort of) and is therefore not to be trusted—and look, it has federal right there in the name.

But why do people hate government so much? Why do they think politicians are much less honest than they actually are? Part of it could have to do with the terrifying expansion of surveillance and weakening of civil liberties in the face of any perceived outside threat (Sedition Act, PATRIOT ACT, basically the same thing), but often the same people defending those programs are the ones who otherwise constantly complain about Big Government. Why do polls consistently show that people don’t trust the government, but want it to do more?

I think a lot of this comes down to the vague meaning of the word “government” and the associations we make with particular questions about it. When I ask “Do you trust the government?” you think of the NSA and the Vietnam War and Watergate, and you answer “No.” But when I ask “Do you want the government to do more?” you think of the failure at Katrina, the refusal to expand Medicaid, the pitiful attempts at reducing carbon emissions, and you answer “Yes.” When I ask if you like the military, your conditioned reaction is to say the patriotic thing, “Yes.” But if I ask whether you like the wars we’ve been fighting lately, you think about the hundreds of thousands of people killed and the wanton destruction to achieve no apparent actual objective, and you say “No.” Most people don’t come to these polls with thought-out opinions they want to express; the questions evoke emotional responses in them and they answer accordingly. You can also evoke different responses by asking “Should we cut government spending?” (People say “Yes.”) versus asking “Should we cut military spending, Social Security, or Medicare?” (People say “No.”) The former evokes a sense of abstract government taking your tax money; the latter evokes the realization that this money is used for public services you value.

So, the gold standard has acquired positive emotional vibes, and the Fed has acquired negative emotional vibes.

The former is fairly easy to explain: “good as gold” is an ancient saying, and “the gold standard” is even a saying we use in general to describe the right way of doing something (“the gold standard in prostate cancer treatment”). Humans have always had a weird relationship with gold; something about its timeless and noncorroding shine mesmerizes us. That’s why you occasionally get proposals for a silver standard, but no one ever seems to advocate an oil standard, an iron standard, or a lumber standard, which would make about as much sense.

The latter is a bit more difficult to explain: What did the Fed ever do to you? But I think it might have something to do with the complexity of sound monetary policy, and the resulting air of technocratic mystery surrounding it. Moreover, the Fed actively cultivates this image, by using “open-market operations” and “quantitative easing” to “target interest rates”, instead of just saying, “We’re printing money.” There may be some good reasons to do it this way, but a lot of it really does seem to be intended to obscure the truth from the uninitiated and perpetuate the myth that they are almost superhuman. “It’s all very complicated, you see; you wouldn’t understand.” People are hoarding their money, so there’s not enough money in circulation, so prices are falling, so you’re printing more money and trying to get it into circulation. That’s really not that complicated. Indeed, if it were, we wouldn’t be able to write a simple equation like a Taylor Rule or nominal GDP targeting in order to automate it!

The reason so many people become gold bugs after taking a couple of undergraduate courses in economics, then, is that this teaches them enough that they feel they have seen through the veil; the curtain has been pulled open and the all-powerful Wizard revealed to be an ordinary man at a control panel. (Spoilers? The movie came out in 1939. Actually, it was kind of about the gold standard.) “What? You’ve just been printing money all this time? But that is surely madness!” They don’t actually understand why printing money is actually a perfectly sensible thing to do on many occasions, and it feels to them a lot like what would happen if they just went around printing money (counterfeiting) or what a sufficiently corrupt government could do if they printed unlimited amounts (which is why they keep bringing up Zimbabwe). They now grasp what is happening, but not why. A little learning is a dangerous thing.

Now as for why Paul Volcker wants to go back to Bretton Woods? That, I cannot say. He’s definitely got more than a little learning. At least he doesn’t want to go back to the classical gold standard.

The credit rating agencies to be worried about aren’t the ones you think

JDN 2457499

John Oliver is probably the best investigative journalist in America today, despite being neither American nor officially a journalist; last week he took on the subject of credit rating agencies, a classic example of his mantra “If you want to do something evil, put it inside something boring.” (Note that it’s on HBO, so there is foul language.)

As ever, his analysis of the subject is quite good—it’s absurd how much power these agencies have over our lives, and how little accountability they have for even assuring accuracy.

But I couldn’t help but feel that he was kind of missing the point. The credit rating agencies to really be worried about aren’t Equifax, Experian, and TransUnion, the ones that assess credit ratings on individuals. They are Standard & Poor’s, Moody’s, and Fitch (which would have been even easier to skewer the way John Oliver did—perhaps we can get them confused with Standardly Poor, Moody, and Filch), the agencies which assess credit ratings on institutions.

These credit rating agencies have almost unimaginable power over our society. They are responsible for rating the risk of corporate bonds, certificates of deposit, stocks, derivatives such as mortgage-backed securities and collateralized debt obligations, and even municipal and government bonds.

S&P, Moody’s, and Fitch don’t just rate the creditworthiness of Goldman Sachs and J.P. Morgan Chase; they rate the creditworthiness of Detroit and Greece. (Indeed, they played an important role in the debt crisis of Greece, which I’ll talk about more in a later post.)

Moreover, they are proven corrupt. It’s a matter of public record.

Standard and Poor’s is the worst; they have been successfully sued for fraud by small banks in Pennsylvania and by the State of New Jersey; they have also settled fraud cases with the Securities and Exchange Commission and the Department of Justice.

Moody’s has also been sued for fraud by the Department of Justice, and all three have been prosecuted for fraud by the State of New York.

But in fact this underestimates the corruption, because the worst conflicts of interest aren’t even illegal, or weren’t until Dodd-Frank was passed in 2010. The basic structure of this credit rating system is fundamentally broken; the agencies are private, for-profit corporations, and they get their revenue entirely from the banks that pay them to assess their risk. If they rate a bank’s asset as too risky, the bank stops paying them, and instead goes to another agency that will offer a higher rating—and simply the threat of doing so keeps them in line. As a result their ratings are basically uncorrelated with real risk—they failed to predict the collapse of Lehman Brothers or the failure of mortgage-backed CDOs, and they didn’t “predict” the European debt crisis so much as cause it by their panic.

Then of course there’s the fact that they are obviously an oligopoly, and furthermore one that is explicitly protected under US law. But then it dawns upon you: Wait… US law? US law decides the structure of credit rating agencies that set the bond rates of entire nations? Yes, that’s right. You’d think that such ratings would be set by the World Bank or something, but they’re not; in fact here’s a paper published by the World Bank in 2004 about how rather than reform our credit rating system, we should instead tell poor countries to reform themselves so they can better impress the private credit rating agencies.

In fact the whole concept of “sovereign debt risk” is fundamentally defective; a country that borrows in its own currency should never have to default on debt under any circumstances. National debt is almost nothing like personal or corporate debt. Their fears should be inflation and unemployment—their monetary policy should be set to minimize the harm of these two basic macroeconomic problems, understanding that policies which mitigate one may enflame the other. There is such a thing as bad fiscal policy, but it has nothing to do with “running out of money to pay your debt” unless you are forced to borrow in a currency you can’t control (as Greece is, because they are on the Euro—their debt is less like the US national debt and more like the debt of Puerto Rico, which is suffering an ongoing debt crisis you may not have heard about). If you borrow in your own currency, you should be worried about excessive borrowing creating inflation and devaluing your currency—but not about suddenly being unable to repay your creditors. The whole concept of giving a sovereign nation a credit rating makes no sense. You will be repaid on time and in full, in nominal terms; if inflation or currency exchange has devalued the currency you are repaid in, that’s sort of like a partial default, but it’s a fundamentally different kind of “default” than simply not paying back the money—and credit ratings have no way of capturing that difference.

In particular, it makes no sense for interest rates on government bonds to go up when a country is suffering some kind of macroeconomic problem.

The basic argument for why interest rates go up when risk is higher is that lenders expect to be paid more by those who do pay to compensate for what they lose from those who don’t pay. This is already much more problematic than most economists appreciate; I’ve been meaning to write a paper on how this system creates self-fulfilling prophecies of default and moral hazard from people who pay their debts being forced to subsidize those who don’t. But it at least makes some sense.
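
The textbook version of that argument fits in one line: a risk-neutral lender charges a rate i on risky debt such that (1 + i)(1 - p) = (1 + r), where r is the safe rate and p the probability of default (assuming, for simplicity, that default means losing everything). A quick sketch with made-up numbers:

    def required_rate(safe_rate, default_prob):
        """Rate at which the expected return on risky debt matches the safe return:
        (1 + i) * (1 - p) = (1 + r)  =>  i = (1 + r) / (1 - p) - 1."""
        return (1 + safe_rate) / (1 - default_prob) - 1

    print(f"{required_rate(0.02, 0.00):.2%}")   # 2.00% when default is impossible
    print(f"{required_rate(0.02, 0.10):.2%}")   # 13.33% when default risk is 10%

You can already see the self-fulfilling prophecy lurking in that formula: a higher perceived p raises i, which makes the debt harder to service, which raises p.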

But if a country is a “high risk” in the sense of macroeconomic instability undermining the real value of their debt, we want to ensure that they can restore macroeconomic stability. But we know that when there is a surge in interest rates on government bonds, instability gets worse, not better. Fiscal policy is suddenly shifted away from real production into higher debt payments, and this creates unemployment and makes the economic crisis worse. As Paul Krugman writes about frequently, these policies of “austerity” cause enormous damage to national economies and ultimately benefit no one because they destroy the source of wealth that would have been used to repay the debt.

By letting credit rating agencies decide the rates at which governments must borrow, we are effectively treating national governments as a special case of corporations. But corporations, by design, act for profit and can go bankrupt. National governments are supposed to act for the public good and persist indefinitely. We can’t simply let Greece fail as we might let a bank fail (and of course we’ve seen that there are serious downsides even to that). We have to restructure the sovereign debt system so that it benefits the development of nations rather than detracting from it. The first step is removing the power of private for-profit corporations in the US to decide the “creditworthiness” of entire countries. If we need to assess such risks at all, they should be done by international institutions like the UN or the World Bank.

But right now people are so stuck in the idea that national debt is basically the same as personal or corporate debt that they can’t even understand the problem. For after all, one must repay one’s debts.

Why is it so hard to get a job?

JDN 2457411

The United States is slowly dragging itself out of the Second Depression.

Unemployment fell from almost 10% to about 5%.

Core inflation has been kept between 0% and 2% most of the time.

Overall inflation has been within a reasonable range:

[Figure: US overall inflation rate]

Real GDP has returned to its normal growth trend, though with a permanent loss of output relative to what would have happened without the Great Recession.

[Figure: US real GDP growth trend]

Consumption spending is also back on trend, tracking GDP quite precisely.

The Federal Reserve even raised the federal funds interest rate above the zero lower bound, signaling a return to normal monetary policy. (As I argued previously, I’m pretty sure that was their main goal actually.)

Employment remains well below the pre-recession peak, but is now beginning to trend upward once more.

The only thing that hasn’t recovered is labor force participation, which continues to decline. This is how we can have unemployment go back to normal while employment remains depressed; people leave the labor force by retiring, going back to school, or simply giving up looking for work. By the formal definition, someone is only unemployed if they are actively seeking work. No, this is not new, and it is certainly not Obama rigging the numbers. This is how we have measured unemployment for decades.

Actually, it’s kind of the opposite: Since the Clinton administration we’ve also kept track of “broad unemployment”, which includes people who’ve given up looking for work or people who have some work but are trying to find more. But we can’t directly compare it to anything that happened before 1994, because the BLS didn’t keep track of it before then. All we can do is estimate based on what we did measure. Based on such estimation, broad unemployment in the Great Depression may have gotten as high as 50%. (I’ve found that one of the best-fitting models is actually one of the simplest; assume that broad unemployment is 1.8 times narrow unemployment. This fits much better than you might think.)
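
In code, that heuristic is about as simple as models get (the 1.8 ratio is my fitted rule of thumb from the post-1994 data, and the 25% input is the conventional peak estimate of narrow unemployment in the Great Depression, used purely as an illustration):

    BROAD_TO_NARROW = 1.8   # rough fitted ratio of broad to narrow unemployment

    def estimate_broad_unemployment(narrow_unemployment):
        return BROAD_TO_NARROW * narrow_unemployment

    print(f"{estimate_broad_unemployment(0.25):.0%}")   # 45%, near the ~50% Depression estimate
    print(f"{estimate_broad_unemployment(0.10):.0%}")   # 18%, close to broad unemployment at the 2009 peak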

So, yes, we muddle our way through, and the economy eventually heals itself. We could have brought the economy back much sooner if we had better fiscal policy, but at least our monetary policy was good enough that we were spared the worst.

But I think most of us—especially in my generation—recognize that it is still really hard to get a job. Overall GDP is back to normal, and even unemployment looks all right; but why are so many people still out of work?

I have a hypothesis about this: I think a major part of why it is so hard to recover from recessions is that our system of hiring is terrible.

Contrary to popular belief, layoffs do not actually substantially increase during recessions. Quits are substantially reduced, because people are afraid to leave current jobs when they aren’t sure of getting new ones. As a result, rates of job separation actually go down in a recession. Job separation does predict recessions, but not in the way most people think. One of the things that made the Great Recession different from other recessions is that most layoffs were permanent, instead of temporary—but we’re still not sure exactly why.

Here, let me show you some graphs from the BLS.

This graph shows job openings from 2005 to 2015:

[Figure: US job openings, 2005–2015 (BLS)]

This graph shows hires from 2005 to 2015:

[Figure: US hires, 2005–2015 (BLS)]

Both of those show the pattern you’d expect, with openings and hires plummeting in the Great Recession.

But check out this graph, of job separations from 2005 to 2015:

[Figure: US job separations, 2005–2015 (BLS)]

Same pattern!

Unemployment in the Second Depression wasn’t caused by a lot of people losing jobs. It was caused by a lot of people not getting jobs—either after losing previous ones, or after graduating from school. There weren’t enough openings, and even when there were openings there weren’t enough hires.

Part of the problem is obviously just the business cycle itself. Spending drops because of a financial crisis, then businesses stop hiring people because they don’t project enough sales to justify it; then spending drops even further because people don’t have jobs, and we get caught in a vicious cycle.

But we are now recovering from the cyclical downturn; spending and GDP are back to their normal trend. Yet the jobs never came back. Something is wrong with our hiring system.

So what’s wrong with our hiring system? Probably a lot of things, but here’s one that’s been particularly bothering me for a long time.

As any job search advisor will tell you, networking is essential for career success.

There are so many different places you can hear this advice, it honestly gets tiring.

But stop and think for a moment about what that means. One of the most important determinants of what job you will get is… what people you know?

It’s not what you are best at doing, as it would be if the economy were optimally efficient.

It’s not even what you have credentials for, as we might expect as a second-best solution.

It’s not even how much money you already have, though that certainly is a major factor as well.

It’s what people you know.

Now, I realize, this is not entirely beyond your control. If you actively participate in your community, attend conferences in your field, and so on, you can establish new contacts and expand your network. A major part of the benefit of going to a good college is actually the people you meet there.

But a good portion of your social network is more or less beyond your control, and above all, says almost nothing about your actual qualifications for any particular job.

There are certain jobs, such as marketing, that actually directly relate to your ability to establish rapport and build weak relationships rapidly. These are a tiny minority. (Actually, most of them are the sort of job that I’m not even sure needs to exist.)

For the vast majority of jobs, your social skills are a tiny, almost irrelevant part of the actual skill set needed to do the job well. This is true of jobs from writing science fiction to teaching calculus, from diagnosing cancer to flying airliners, from cleaning up garbage to designing spacecraft. Social skills are rarely harmful, and even often provide some benefit, but if you need a quantum physicist, you should choose the recluse who can write down the Dirac equation by heart over the well-connected community leader who doesn’t know what an integral is.

At the very least, it strains credibility to suggest that social skills are so important for every job in the world that they should be one of the defining factors in who gets hired. And make no mistake: Networking is as beneficial for landing a job at a local bowling alley as it is for becoming Chair of the Federal Reserve. Indeed, for many entry-level positions networking is literally all that matters, while advanced positions at least exclude candidates who don’t have certain necessary credentials, and then make the decision based upon who knows whom.

Yet, if networking is so inefficient, why do we keep using it?

I can think of a couple reasons.

The first reason is that this is how we’ve always done it. Indeed, networking strongly pre-dates capitalism or even money; in ancient tribal societies there were certainly jobs to assign people to: who will gather berries, who will build the huts, who will lead the hunt. But there were no colleges, no certifications, no resumes—there was only your position in the social structure of the tribe. I think most people simply automatically default to a networking-based system without even thinking about it; it’s just the instinctual System 1 heuristic.

One of the few things I really liked about Debt: The First 5000 Years was the discussion of how similar the behavior of modern CEOs is to that of ancient tribal chieftains, for reasons that make absolutely no sense in terms of neoclassical economic efficiency—but perfect sense in light of human evolution. I wish Graeber had spent more time on that, instead of many of these long digressions about international debt policy that he clearly does not understand.

But there is a second reason as well, a better reason, a reason that we can’t simply give up on networking entirely.

The problem is that many important skills are very difficult to measure.

College degrees do a decent job of assessing our raw IQ, our willingness to persevere on difficult tasks, and our knowledge of the basic facts of a discipline (as well as a fantastic job of assessing our ability to pass standardized tests!). But when you think about the skills that really make a good physicist, a good economist, a good anthropologist, a good lawyer, or a good doctor—they really aren’t captured by any of the quantitative metrics that a college degree provides. Your capacity for creative problem-solving, your willingness to treat others with respect and dignity; these things don’t appear in a GPA.

This is especially true in research: The degree tells how good you are at doing the parts of the discipline that have already been done—but what we really want to know is how good you’ll be at doing the parts that haven’t been done yet.

Nor are skills precisely aligned with the content of a resume; the best predictor of doing something well may in fact be whether you have done so in the past—but how can you get experience if you can’t get a job without experience?

These so-called “soft skills” are difficult to measure—but not impossible. Basically the only reliable measurement mechanisms we have require knowing and working with someone for a long span of time. You can’t read it off a resume, you can’t see it in an interview (interviews are actually a horribly biased hiring mechanism, particularly biased against women). In effect, the only way to really know if someone will be good at a job is to work with them at that job for a while.

There’s a fundamental information problem here I’ve never quite been able to resolve. It pops up in a few other contexts as well: How do you know whether a novel is worth reading without reading the novel? How do you know whether a film is worth watching without watching the film? When the information about the quality of something can only be determined by paying the cost of purchasing it, there is basically no way of assessing the quality of things before we purchase them.

Networking is an attempt to get around this problem. To decide whether to read a novel, ask someone who has read it. To decide whether to watch a film, ask someone who has watched it. To decide whether to hire someone, ask someone who has worked with them.

The problem is that this is such a weak measure that it’s not much better than no measure at all. I often wonder what would happen if businesses were required to hire people based entirely on resumes, with no interviews, no recommendation letters, and any personal contacts treated as conflicts of interest rather than useful networking opportunities—a world where the only thing we use to decide whether to hire someone is their documented qualifications. Could it herald a golden age of new economic efficiency and job fulfillment? Or would it result in widespread incompetence and catastrophic collapse? I honestly cannot say.

Thus ends our zero-lower-bound interest rate policy

JDN 2457383

Not with a bang, but with a whimper.

If you are reading the blogs as they are officially published, it will have been over a week since the Federal Reserve ended its policy of zero interest rates. (If you are reading this as a Patreon Blog from the Future, it will only have been a few days.)

The official announcement was made on December 16. The Federal Funds Target Rate will be raised from 0%-0.25% to 0.25%-0.5%. That one-quarter percentage point—itself no larger than the margin of error the Fed allots itself—will make all the difference.

As pointed out in the New York Times, this is the first time nominal interest rates have been raised in almost a decade. But the Fed had been promising it for some time, and thus a major reason they did it was to preserve their own credibility. They also say they think inflation is about to hit the 2% target, though it hasn’t yet (and I was never clear on why 2% was the target in the first place).

Actually, overall inflation is currently near zero. What is at 2% is what’s called “core inflation”, which excludes particularly volatile products such as oil and food. The idea is that we want to set monetary policy based upon long-run trends in the economy as a whole, not based upon sudden dips and surges in oil prices. But right now we are in the very odd scenario of the Fed raising interest rates in order to stop inflation even as the total amount most people need to spend to maintain their standard of living is the same as it was a year ago.

As MSNBC argues, it is essentially an announcement that the Second Depression is over and the economy has now returned to normal. Of course, simply announcing such a thing does not make it true.

Personally, I think this move is largely symbolic. The difference between 0% and 0.25% is unimportant for most practical purposes.

If you owe $100,000 over 30 years at 0% interest, you will pay $277.78 per month, totaling of course $100,000. If your interest rate were raised to 0.25%, you would instead owe $288.35 per month, totaling $103,807.28. Even over 30 years, that 0.25% interest raises your total expenditure by less than 4%.

Over shorter terms it’s even less important. If you owe $20,000 over 5 years at 0% interest, you will pay $333.33 per month totaling $20,000. At 0.25%, you would pay $335.46 per month totaling $20,127.34, a mere 0.6% more.
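
For anyone who wants to check those numbers, they come straight from the standard amortization formula, M = P * r / (1 - (1 + r)^(-n)), where r is the monthly interest rate and n the number of monthly payments:

    def monthly_payment(principal, annual_rate, years):
        """Standard fixed-rate amortization; at 0% it is just principal / payments."""
        n = years * 12
        r = annual_rate / 12
        return principal / n if r == 0 else principal * r / (1 - (1 + r) ** -n)

    for principal, years in ((100_000, 30), (20_000, 5)):
        at_zero = monthly_payment(principal, 0.0, years)
        at_quarter = monthly_payment(principal, 0.0025, years)
        extra = at_quarter * years * 12 / principal - 1
        print(f"${principal:,} over {years} years: "
              f"${at_zero:.2f} vs. ${at_quarter:.2f} per month ({extra:.1%} more in total)")
    # roughly 3.8% more for the 30-year loan, 0.6% more for the 5-year loan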

Moreover, if a bank was willing to take out a loan at 0%, they’ll probably still be willing at 0.25%.

Where it would have the largest impact is in more exotic financial instruments, like zero-amortization or negative-amortization bonds. A zero-amortization bond at 0% is literally free money forever (assuming you can keep rolling it over). A zero-amortization bond at 0.25% means you must at least pay 0.25% of the money back each year. A negative-amortization bond at 0% makes no sense mathematically (somehow you pay back less than 0% at each payment?), while a negative-amortization bond at 0.25% only doesn’t make sense practically. If both zero and negative-amortization seem really bizarre and impossible to justify, that’s because they are. They should not exist. Most exotic financial instruments have no reason to exist, aside from the fact that they can be used to bamboozle people into giving money to the financial corporations that create them. (Which reminds me, I need to see The Big Short. But of course I have to see Star Wars: The Force Awakens first; one must have priorities.)

So, what will happen as a result of this change in interest rates? Probably not much. Inflation might go down a little—which means we might have overall deflation, and that would be bad—and the rate of increase in credit might drop slightly. In the worst-case scenario, unemployment starts to rise again, the Fed realizes their mistake, and interest rates will be dropped back to zero.

I think it’s more instructive to look at why they did this—the symbolic significance behind it.

The zero lower bound is weird. It makes a lot of economists very uncomfortable. The usual rules for how monetary and fiscal policy work break down, because the equation hits up against a constraint—a corner solution, more technically. Krugman often talks about how many of the usual ideas about how interest rates and government spending work collapse at the zero-lower-bound. We have models of this sort of thing that are pretty good, but they’re weird and counter-intuitive, so policymakers never seem to actually use them.

What is the zero lower bound, you ask? Exactly what it says on the tin. There is a lower bound on how low you can set an interest rate, and for all practical purposes that limit is zero. If you start trying to set an interest rate of -5%, people won’t be willing to loan out money and will instead hoard cash. (Interestingly, a central bank with a strong currency, such as that of the US, UK, or EU, can actually set small negative nominal interest rates—because people consider their bonds safer than cash, so they’ll pay for the safety. The ECB, Europe’s Fed, actually did so for a while.)

The zero-lower-bound actually applies to prices in general, not just interest rates. If a product is so worthless to you that you don’t even want it if it’s free, it’s very rare for anyone to actually pay you to take it—partly because there might be nothing to stop you from taking a huge amount of it and forcing them to pay you ridiculous amounts of money. “How much is this paperclip?” “-$0.75.” “I’ll have 50 billion, please.” In a few rare cases, they might be able to pay you to take it, in an amount that’s less than what it would cost you to store and transport it. Also, if they benefit from giving it to you, companies will give you things for free—think ads and free samples. But basically, if people won’t even take something for free, that thing simply doesn’t get sold.

But if we are in a recession, we really don’t want loans to stop being made altogether. So if people are unwilling to take out loans at 0% interest, we’re in trouble. Generally what we have to do is rely on inflation to reduce the real value of money over time, thus creating a real interest rate that’s negative even though the nominal interest rate remains stuck at 0%. But what if inflation is very low? Then there’s nothing you can do except find a way to raise inflation or increase demand for credit. This means relying upon unconventional methods like quantitative easing (trying to cause inflation), or preferably using fiscal policy to spend a bunch of money and thereby increase demand for credit.
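
(To put rough numbers on “relying on inflation”: the real interest rate is approximately the nominal rate minus inflation, so a nominal rate stuck at zero with 2% inflation gives a real rate of roughly -2%. Here is a quick Python sketch of that arithmetic; the function name and the inflation figures are just illustrative assumptions.)

```python
# Rough arithmetic behind "inflation does the work": with the nominal rate
# stuck at zero, the real rate is roughly the negative of inflation.

def real_rate(nominal, inflation):
    """Exact real interest rate implied by a nominal rate and inflation."""
    return (1 + nominal) / (1 + inflation) - 1

print(f"{real_rate(0.00, 0.02):.2%}")   # about -1.96%: 2% inflation at the zero lower bound
print(f"{real_rate(0.00, 0.002):.2%}")  # about -0.20%: very low inflation barely helps
```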

What the Fed is basically trying to do here is say that we are no longer in that bad situation. We can now set interest rates where they actually belong, rather than forcing them as low as they’ll go and hoping inflation will make up the difference.

It’s actually similar to how if you take a test and score 100%, there’s no way of knowing whether you just barely got 100%, or if you would have still done as well if the test were twice as hard—but if you score 99%, you actually scored 99% and would have done worse if the test were harder. In the former case you were up against a constraint; in the latter it’s your actual value. The Fed is essentially announcing that we really want interest rates near 0%, as opposed to being bound at 0%—and the way they do that is by setting a target just slightly above 0%.

So far, there doesn’t seem to have been much effect on markets. And frankly, that’s just what I’d expect.

Tax incidence revisited, part 3: Taxation and the value of money

JDN 2457352

Our journey through the world of taxes continues. I’ve already talked about how taxes have upsides and downsides, as well as how taxes directly affect prices, and why “before-tax” prices are almost meaningless.

Now it’s time to get into something that even a lot of economists don’t quite seem to grasp, yet which turns out to be fundamental to what taxes truly are.

In the usual way of thinking, it works something like this: We have an economy, through which a bunch of money flows, and then the government comes in and takes some of that money in the form of taxes. They do this because they want to spend money on a variety of services, from military defense to public schools, and in order to afford doing that they need money, so they take in taxes.

This view is not simply wrong—it’s almost literally backwards. Money is not something the economy had that the government comes in and takes. Money is something that the government creates and then adds to the economy to make it function more efficiently. Taxes are not the government taking out money that they need to use; taxes are the government regulating the quantity of money in the system in order to stabilize its value. The government could spend as much money as they wanted without collecting a cent in taxes (not should, but could—it would be a bad idea, but definitely possible); taxes do not exist to fund the government, but to regulate the money supply.

Indeed—and this is the really vital and counter-intuitive point—without taxes, money would have no value.

There is an old myth of how money came into existence that involves bartering: People used to trade goods for other goods, and then people found that gold was particularly good for trading, and started using it for everything, and then eventually people started making paper notes to trade for gold, and voila, money was born.

In fact, such a “barter economy” has never been documented to exist. It probably did once or twice, just given the enormous variety of human cultures; but it was never widespread. Ancient economies were based on family sharing, gifts, and debts of honor.

It is true that gold and silver emerged as the first forms of money, “commodity money”, but they did not emerge endogenously out of trading that was already happening—they were created by the actions of governments. The real value of the gold or silver may have helped things along, but it was not the primary reason why people wanted to hold the money. Money has been based upon government for over 3000 years—the history of money and civilization as we know it. “Fiat money” is basically a redundancy; almost all money, even in a gold standard system, is ultimately fiat money.

The primary reason why people wanted the money was so that they could use it to pay taxes.

It’s really quite simple, actually.

When there is a rule imposed by the government that you will be punished if you don’t turn up on April 15 with at least 4,287 pieces of green paper marked “US Dollar”, you will try to acquire 4,287 pieces of green paper marked “US Dollar”. You will not care whether those notes are exchangeable for gold or silver; you will not care that they were printed by the government originally. Because you will be punished if you don’t come up with those pieces of paper, you will try to get some.

If someone else has some pieces of green paper marked “US Dollar”, and knows that you need them to avoid being punished on April 15, they will offer them to you—provided that you give them something they want in return. Perhaps it’s a favor you could do for them, or something you own that they’d like to have. You will be willing to make this exchange, in order to avoid being punished on April 15.
Thus, taxation gives money value, and allows purchases to occur.

Once you establish a monetary system, it becomes self-sustaining. If you know other people will accept money as payment, you are more willing to accept money as payment because you know that you can go spend it with those people. “Legal tender” also helps this process along—the government threatens to punish people who refuse to accept money as payment. In practice, however, this sort of law is rarely enforced, and doesn’t need to be, because taxation by itself is sufficient to form the basis of the monetary system.

It’s deeply ironic that people who complain about printing money often say we are “debasing” the currency; if you think carefully about what debasement actually was, it becomes clear that the value of money never really resided in the gold or silver itself. If a government can successfully extract revenue from its monetary system by changing the amount of gold or silver in each coin, then the value of those coins can’t be in the gold and silver—it has to be in the power of the government. You can’t make a profit by dividing a commodity into smaller pieces and then selling the pieces. (Okay, you sort of can, by buying in bulk and selling at retail. But that’s not what we’re talking about. You can’t make money by buying 100 50-gallon barrels of oil and then selling them as 125 40-gallon barrels of oil; it’s the same amount of oil.)

Similarly, the fact that there is such a thing as seigniorage (the value of currency in excess of its cost to create) shows that governments impart value to their money. Indeed, one of the reasons for debasement was to realign the value of coins with the value of the metals in the coins, which wouldn’t be necessary if those were simply by definition the same thing.

Taxation serves another important function in the monetary system, which is to regulate the supply of money. The government adds money to the economy by spending, and removes it by taxing; if they add more than they remove—a deficit—the money supply increases, while if they remove more than they add—a surplus—the money supply decreases. In order to maintain stable prices, you want the money supply to increase at approximately the rate of economic growth; for moderate inflation (which is probably better than actual price stability), you want the money supply to increase slightly faster than that. In general, then, we want the government deficit as a portion of GDP to be slightly larger than the growth rate of the economy. Our current deficit of 2.8% of GDP is therefore actually about where it should be, and we have no particular reason to want to decrease it. (This is somewhat oversimplified, because it ignores the contribution of the Federal Reserve, interest rates, and bank-created money. Most of the money in the world is actually not created by the government, but by banks, which are restrained to a greater or lesser extent by the government.)
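
(Here is a toy bookkeeping sketch of that mechanism in Python, with the same deliberate simplification as the paragraph above: it ignores banks and the central bank entirely, and the function name and all figures are invented for illustration.)

```python
# Toy bookkeeping sketch of the mechanism above: government spending adds money,
# taxation removes it, so the deficit (spending minus taxes) is the net change
# in the money supply. Deliberately ignores banks and the central bank.

def money_supply_change(money_supply, spending, taxes):
    """New money supply after one year of spending and taxing."""
    deficit = spending - taxes   # a negative value means a surplus
    return money_supply + deficit

M = 10_000                       # hypothetical starting money supply
M_deficit = money_supply_change(M, spending=2_300, taxes=2_000)   # deficit: supply grows
M_surplus = money_supply_change(M, spending=2_000, taxes=2_300)   # surplus: supply shrinks
print(M_deficit, M_surplus)      # 10300 9700
```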

Even a lot of people who try to explain modern monetary theory mistakenly speak as though there was a fundamental shift when we fully abandoned the gold standard in the 1970s. (This is a good explanation overall, but it makes this very error.) But in fact a gold standard really isn’t money “backed” by anything—gold is not what gives the money value; gold is almost worthless by itself. It’s pretty and it doesn’t corrode, but otherwise, what exactly can you do with it? Being tied to money is what made gold valuable, not the other way around.

To see this, imagine a world where you have 20,000 tons of gold, but you know that you can never sell it. No one will ever purchase a single ounce. Would you feel particularly rich in that scenario? I think not. Now suppose you have a virtually limitless quantity of pieces of paper that you know people will accept for anything you would ever wish to buy. They are backed by nothing, they are just pieces of paper—but you are now rich, by the standard definition of the word. I can even modify the analogy to remove the exchange value of money and rely on taxation alone: if you know that in two days you will be imprisoned unless you have this particular piece of paper, then for the next two days you will guard that piece of paper with your life. It won’t bother you that you can’t exchange it for anything else—you wouldn’t even want to. If instead someone else has it, you’ll be willing to do some rather large favors for them in order to get it.

Whenever people try to tell me that our money is “worthless” because it’s based on fiat instead of backed by gold (this happens surprisingly often), I always make them an offer: If you truly believe that our money is worthless, I’ll gladly take any you have off of your hands. I will even provide you with something of real value in return, such as an empty aluminum can or a pair of socks. If they truly believe that fiat money is worthless, they should eagerly accept my offer—yet oddly, nobody ever does.

This does actually create a rather interesting argument against progressive taxation: If the goal of taxation is simply to control inflation, shouldn’t we tax people based only on their spending? Well, if that were the only goal, maybe. But we also have other goals, such as maintaining employment and controlling inequality. Progressive taxation may actually take a larger amount of money out of the system than would be necessary simply to control inflation; but it does so in order to ensure that the super-rich do not become even more rich and powerful.

Governments are limited by real constraints of power and resources, but they have no monetary constraints other than those they impose themselves. There is definitely something strongly coercive about taxation, and therefore about a monetary system which is built upon taxation. Unfortunately, I don’t know of any good alternatives. We might be able to come up with one: Perhaps people could donate to public goods in a mutually-enforced way similar to Kickstarter, but nobody has yet made that practical; or maybe the government could restructure itself to make a profit by selling private goods at the same time as it provides public goods, but then we have all the downsides of nationalized businesses. For the time being, the only system which has been shown to work to provide public goods and maintain long-term monetary stability is a system in which the government taxes and spends.

A gold standard is just a fiat monetary system in which the central bank arbitrarily decides that their money supply will be directly linked to the supply of an arbitrarily chosen commodity. At best, this could be some sort of commitment strategy to ensure that they don’t create vastly too much or too little money; but at worst, it prevents them from actually creating the right amount of money—and the gold standard was basically what caused the Great Depression. A gold standard is no more sensible a means of backing your currency than would be a standard requiring only prime-numbered interest rates, or one which requires you to print exactly as much money per minute as the price of a Ferrari.

No, the real thing that backs our money is the existence of the tax system. Far from taxation being “taking your hard-earned money”, without taxes money itself could not exist.