What does a central bank actually do?

Aug 26 JDN 2458357

Though central banks are a cornerstone of the modern financial system, I don’t think most people have a clear understanding of how they actually function. (I think this may be by design; there are many ways we could make central banking more transparent, but policymakers seem reluctant to show their hand.)

I’ve even seen famous economists make really severe errors in their understanding of monetary policy, as John Taylor did when he characterized low-interest-rate policy as a “price ceiling”.

Central banks “print money” and “set interest rates”. But how exactly do they do these things, and what on Earth do they have to do with each other?

The first thing to understand is that most central banks don’t actually print money. In the US, cash is printed by the Department of the Treasury. But cash is only a small part of the money in circulation. The monetary base consists of cash in vaults and in circulation; the US monetary base is about $3.6 trillion. The money supply can be measured a few different ways, but the standard way is to include checking accounts, traveler’s checks, savings accounts, money market accounts, short-term certificates of deposit, and basically anything that can be easily withdrawn and spent as money. This is called the M2 money supply, and in the US it is currently over $14.1 trillion. That means that only about 25% of our money supply is in actual, physical cash—the rest is all digital. This is a relatively high proportion for cash, historically speaking, as the monetary base was greatly increased in response to the Great Recession. When we say that the Fed “prints money”, what we really mean is that they are increasing the money supply—but typically they do so in a way that involves little if any actual printing of cash.

The second thing to understand is that central banks don’t exactly set interest rates either. They target interest rates. What’s the difference, you ask?

Well, setting interest rates would mean that they made a law or something saying you have to charge exactly 2.7%, and you get fined or something if you don’t do that.

Targeting interest rates is a subtler art. The Federal Reserve decides what interest rates they want banks to charge, and then they engage in what are called open-market operations to try to make that happen. Banks hold reserves—money that they are required to keep on hand to back their deposits. Since we are in a fractional-reserve system, they need to hold only a certain fraction of their deposits as reserves (usually about 10%). In open-market operations, the Fed buys and sells assets (usually US Treasury bonds) in order to either increase or decrease the amount of reserves available to banks, to try to get them to lend to each other at the targeted interest rates.

Why not simply set the interest rate by law? Because then it wouldn’t be the market-clearing interest rate. There would be shortages or gluts of assets.

It might be easier to grasp this if we step away from money for a moment and just think about the market for some other good, like televisions.

Suppose that the government wants to set the price of a television in the market to a particular value, say $500. (Why? Who knows. Let’s just run with it for a minute.)

If they simply declared by law that the price of a television must be $500, here’s what would happen: Either that would be too low, in which case there would be a shortage of televisions as demand exceeded supply; or that would be too high, in which case there would be a glut of televisions as supply exceeded demand. Only if they got spectacularly lucky and the market price already was $500 per television would they not have to worry about such things (and then, why bother?).

But suppose the government had the power to create and destroy televisions virtually at will with minimal cost.
Now, they have a better way; they can target the price of a television, and buy and sell televisions as needed to bring the market price to that target. If the price is too low, the government can buy and destroy a lot of televisions, to bring the price up. If the price is too high, the government can make and sell a lot of televisions, to bring the price down.
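
To make the mechanism concrete, here is a toy sketch of that buy-and-sell logic in Python. The linear demand curve and the adjustment constant are invented for illustration; real open-market operations are of course far messier.

```python
def market_price(quantity, a=1000.0, b=0.5):
    """Hypothetical linear demand curve: the price falls as quantity rises."""
    return a - b * quantity

def steer_to_target(target, quantity, k=0.5, iters=200):
    """Make and sell when the price is above target; buy back when below."""
    for _ in range(iters):
        gap = market_price(quantity) - target
        quantity += k * gap   # too pricey -> sell more; too cheap -> buy back
    return quantity

q = steer_to_target(target=500.0, quantity=800.0)
print(q, market_price(q))   # converges to ~1000 units at ~$500 each
```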

Now, let’s go back to money. This power to create and destroy at will is hard to believe for televisions, but absolutely true for money. The government can create and destroy almost any amount of money at will—they are limited only by the very inflation and deflation the central bank is trying to affect.

This allows central banks to intervene in the market without creating shortages or gluts; even though they are effectively controlling the interest rate, they are doing so in a way that avoids having a lot of banks wanting to take loans they can’t get or wanting to give loans they can’t find anyone to take.

The goal of all this manipulation is ultimately to reduce inflation and unemployment. Unfortunately it’s basically impossible to eliminate both simultaneously; the Phillips curve describes the relationship generally found in the data: decreased inflation usually comes with increased unemployment, and vice-versa. But the basic idea is that we set reasonable targets for each (usually about 2% inflation and 5% unemployment; frankly I’d prefer we swap the two, which was more or less what we did in the 1950s), and then if inflation is too high we raise interest rate targets, while if unemployment is too high we lower interest rate targets.

What if they’re both too high? Then we’re in trouble. This has happened; it is called stagflation. The money supply isn’t the only thing affecting inflation and unemployment, and sometimes we get hit with a bad shock that makes both of them high at once. In that situation, there isn’t much that monetary policy can do; we need to find other solutions.

But how does targeting interest rates lead to inflation? To be quite honest, we don’t actually know.

The basic idea is that lower interest rates should lead to more borrowing, which leads to more spending, which leads to more inflation. But beyond that, we don’t actually understand how interest rates translate into prices—this is the so-called transmission mechanism, which remains an unsolved problem in macroeconomics. Based on the empirical data, I lean toward the view that the mechanism is primarily via housing prices; lower interest rates lead to more mortgages, which raises the price of real estate, which raises the price of everything else. This also makes sense theoretically, as real estate consists of large, illiquid assets for which the long-term interest rate is very important. Your decision to buy an apple or even a television is probably not greatly affected by interest rates—but your decision to buy a house surely is.
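
A quick back-of-the-envelope calculation shows just how rate-sensitive housing is. This is a minimal sketch using the standard amortization formula and a hypothetical $300,000 mortgage; the numbers are purely illustrative.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization formula; rate as a decimal."""
    r, n = annual_rate / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n) if r else principal / n

print(monthly_payment(300_000, 0.04, 30))  # ~$1,432/month at 4%
print(monthly_payment(300_000, 0.06, 30))  # ~$1,799/month at 6%
```

A two-percentage-point change in the long-term rate moves the monthly payment by roughly a quarter, which is exactly the kind of difference that changes whether people buy houses at all.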

If that is indeed the case, it’s worth thinking about whether this is really the right way to intervene on inflation and unemployment. High housing prices are an international crisis; maybe we need to be looking at ways to decrease unemployment without affecting housing prices. But that is a tale for another time.

What would a game with realistic markets look like?

Aug 12 JDN 2458343

From Pokemon to Dungeons & Dragons, Final Fantasy to Mass Effect, almost all role-playing games have some sort of market: Typically, you buy and sell equipment, and often can buy services such as sleeping at inns. Yet the way those markets work is extremely rigid and unrealistic.

(I’m of course excluding games like EVE Online that actually create real markets between players; those markets are so realistic I actually think they would provide a good opportunity for genuine controlled experiments in macroeconomics.)

The weirdest thing about in-game markets is the fact that items almost always come with a fixed price. Sometimes there is some opportunity for haggling, or some randomization between different merchants; but the notion always persists that the item has a “true price” that is being adjusted upward or downward. This is more or less the opposite of how prices actually work in real markets.

There is no “true price” of a car or a pizza. Prices are whatever buyers and sellers make them. There is a true value—the amount of real benefit that can be obtained from a good—but even this is something that varies between individuals and also changes based on the other goods being consumed. The value of a pizza is considerably higher for someone who hasn’t eaten in days than for someone who just finished eating another pizza.

There is also what is called “The Law of One Price”, but like all laws of economics, it’s like the Pirate Code, more what you’d call a “guideline”, and it only applies to a particular good in a particular market at a particular time. The Law of One Price doesn’t even say that a pizza should have the same price tomorrow as it does today, or that the same pizza can’t be sold to two different customers at two different prices; it only says that the same pizza shouldn’t have two different prices in the same place at the same time for the same customer. (It seems almost tautological, right? And yet it still fails empirically, and does so again and again. I have seen offers for the same book in the same condition posted on the same website that differed by as much as 50%.)

In well-developed capitalist markets in large First World countries, we can lull ourselves into the illusion that there is only one price for a good, because markets are highly liquid and either highly competitive or controlled by a strong and stable oligopoly that enforces a particular price across places and times. The McDonald’s Dollar Menu is a policy choice by a massive multinational corporation; it’s not what would occur naturally if those items were sold on a competitive market.

Even then, this illusion can be broken when we are faced with a large economic shock, such as the OPEC price shock in 1973 or a natural disaster like Hurricane Katrina. It also tends to be broken for illiquid goods such as real estate.

If we consider the environment in which most role-playing games take place, it’s usually a sort of quasi-medieval or quasi-Renaissance feudal society, where a given government controls only a small region and traveling between towns is difficult and dangerous. Not only should the prices of goods differ substantially between towns, the currency used should frequently differ as well. Yes, most places would accept gold and silver; but a kingdom with a stable government will generally have a currency of significant seignorage, with coins worth considerably more than the gold used to mint them—yet the value of that seignorage will drop off as you move further away from that kingdom and its sphere of influence.

Moreover, prices should be inconsistent even between traders in the same town, and extremely volatile. When a town is mostly self-sufficient and trade is only a small part of its economy, even a small shock such as a bad thunderstorm or a brief drought can yield massive shifts in prices. Shortages and gluts will be frequent, as both supply and demand are small and ever-changing.

This wouldn’t be that difficult to implement. The simplest way would just be to institute random shocks to prices that vary by place and time. A more sophisticated method would be to actually simulate supply and demand for different goods, and then have prices respond to realistic shocks (e.g. a drought makes wheat more expensive, and the price of swords suddenly skyrockets after news of an impending dragon attack). Experiments have shown that competitive market outcomes can be achieved by simulating even a dozen or so traders using very simple heuristics like “don’t pay more than you can afford” and “don’t charge less than it cost you”.
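
For the curious, here is a deliberately crude sketch of that kind of experiment, in the spirit of “zero-intelligence” trader simulations. All the valuations are invented, and a real implementation would use a continuous double auction rather than random matching.

```python
import random

random.seed(42)
buyer_values = [random.uniform(50, 150) for _ in range(12)]   # willingness-to-pay
seller_costs = [random.uniform(50, 150) for _ in range(12)]   # willingness-to-accept

def run_market(values, costs, rounds=500):
    """Randomly match buyers and sellers; trade whenever bid meets ask."""
    values, costs = list(values), list(costs)
    trades = []
    for _ in range(rounds):
        if not values or not costs:
            break
        b = random.randrange(len(values))
        s = random.randrange(len(costs))
        bid = random.uniform(0, values[b])    # don't pay more than you can afford
        ask = random.uniform(costs[s], 200)   # don't charge less than it cost you
        if bid >= ask:
            trades.append((bid + ask) / 2)    # split the difference
            values.pop(b)                     # each trader trades at most once
            costs.pop(s)
    return trades

prices = run_market(buyer_values, seller_costs)
if prices:
    print(f"{len(prices)} trades at an average price of {sum(prices)/len(prices):.2f}")
```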

Why don’t game designers implement this? I think there are two reasons.

The first is simply that it would be more complicated. This is a legitimate concern in many cases; I particularly think Pokemon can justify using a simple economy, given its target audience. I also agree that having more than a handful of currencies would be too much for players to keep track of; though perhaps having two or three (one for each major faction?) is still more interesting than only having one.

Also, tabletop games are inherently more limited in the amount of computation they can use, compared to video games. But for a game as complicated as, say, Skyrim, this really isn’t much of a defense. Skyrim actually simulated the daily routines of over a hundred different non-player characters; it could have been simulating markets in the background as well—in fact, it could have simply had those same non-player characters buy and sell goods with each other in a double-auction market that would automatically generate the prices that players face.

The more important reason, I think, is that game designers have a paralyzing fear of arbitrage.

I find it particularly aggravating how frequently games will set it up so that the price at which you buy and the price at which you sell are constrained so that the buying price is always higher, often as much as twice as high. This is not at all how markets work in the real world; frankly it’s only even close to true for goods like cars that rapidly depreciate. It makes sense that a given merchant will not sell you a good for less than what they would pay to buy it from you; but that only requires each individual merchant to have a well-defined willingness-to-pay and willingness-to-accept. It certainly does not require the arbitrary constraint that you can never sell something for more than what you bought it for.

In fact, I would probably even allow players who specialize in social skills to short-change and bamboozle merchants for profit, as this is absolutely something that happens in the real world, and was likely especially common under the very low levels of literacy and numeracy that prevailed in the Middle Ages.

To many game designers (and gamers), the ability to buy a good in one place, travel to another place, and sell that good for a higher price seems like cheating. But this practice is called being a merchant. That is literally what the entire retail industry does. The rules of your game should allow you to profit from activities that are genuinely profitable economic services in the real world.

I remember a similar complaint being raised against Skyrim shortly after its release, that one could acquire a pickaxe, collect iron ore, smelt it into steel, forge weapons out of it, and then sell the weapons for a sizeable profit. To some people, this sounded like cheating. To me, it sounds like being a blacksmith. This is especially true because Skyrim’s skill system allowed you to improve the quality of your smithed items over time, just like learning a trade through practice (though it ramped up too fast, as it didn’t take long to make yourself clearly the best blacksmith in all of Skyrim). Frankly, this makes far more sense than being able to acquire gold by adventuring through the countryside and slaughtering monsters or collecting lost items from caves. Blacksmiths were a large part of the medieval economy; spelunking adventurers were not. Indeed, it bothers me that there weren’t more opportunities like this; you couldn’t make your wealth by being a farmer, a vintner, or a carpenter, for instance.

Even if you managed to pull off pure arbitrage, providing no real services, such as by buying and selling between two merchants in the same town, or the same merchant on two consecutive days, that is also a highly profitable industry. Most of our financial system is built around it, frankly. If you manage to make your wealth selling wheat futures instead of slaying dragons, I say more power to you. After all, there were an awful lot of wheat-futures traders in the Middle Ages, and to my knowledge no actually successful dragon-slayers.

Of course, if your game is about slaying dragons, it should include some slaying of dragons. And if you really don’t care about making a realistic market in your game, so be it. But I think that more realistic markets could actually offer a great deal of richness and immersion into a world without greatly increasing the difficulty or complexity of the game. A world where prices change in response to the events of the story just feels more real, more alive.

The ability to profit without violence might actually draw whole new modes of play to the game (as has indeed occurred with Skyrim, where a small but significant proportion of players have chosen to live out peaceful lives as traders or blacksmiths). It would also enrich the experience of more conventional players and help them recover from setbacks (if the only way to make money is to fight monsters and you keep getting killed by monsters, there isn’t much you can do; but if you have the option of working as a trader or a carpenter for a while, you could save up for better equipment and try the fighting later).

And hey, game designers: If any of you are having trouble figuring out how to implement such a thing, my consulting fees are quite affordable.

The unending madness of the gold standard

JDN 2457545

If you work in economics in any capacity (much as with “How is the economy doing?”, you don’t even really need to be in macroeconomics), you will encounter many people who believe in the gold standard. Many of these people will be otherwise quite intelligent and educated; they often understand economics better than most people (not that this is saying a whole lot). Yet somehow they continue to hold—and fiercely defend—this incredibly bizarre and anachronistic view of macroeconomics.

They even bring it up at the oddest times; I recently encountered someone who wrote a long and rambling post arguing for drug legalization (which I largely agree with, by the way) and concluded it with #EndTheFed, not seeming to grasp the total and utter irrelevance of this juxtaposition. It seems like it was just a conditioned response, or maybe the sort of irrelevant but consistent coda originally perfected by Cato and his “Carthago delenda est.” “Foederale Reservatum delendum est.” Hey, maybe that’s why they’re called the Cato Institute.

So just how bizarre is the gold standard? Well, let’s look at what sort of arguments they use to defend it. I’ll use Charles Kadlic, a prominent Libertarian blogger on Forbes, as an example, with his “Top Ten Reasons That You Should Support the ‘Gold Commission’”:

  1. A gold standard is key to achieving a period of sustained, 4% real economic growth.
  2. A gold standard reduces the risk of recessions and financial crises.
  3. A gold standard would restore rising living standards to the middle-class.
  4. A gold standard would restore long-term price stability.
  5. A gold standard would stop the rise in energy prices.
  6. A gold standard would be a powerful force for restoring fiscal balance to federal state and local governments.
  7. A gold standard would help save Medicare and Social Security.
  8. A gold standard would empower Main Street over Wall Street.
  9. A gold standard would increase the liberty of the American people.
  10. Creation of a gold commission will provide the forum to chart a prudent path toward a 21st century gold standard.

Number 10 can be safely ignored, as clearly Kadlic just ran out of reasons and, to make a round number, tacked on the implicit assumption of the entire article, namely that this ‘gold commission’ would actually realistically lead us toward a gold standard. (Without it, the other 9 reasons are just non sequiturs.)

So let’s look at the other 9, shall we? Literally none of them are true. Several are outright backward.

You know a policy is bad when even one of its most prominent advocates can’t think of a single real benefit it would have. A lot of quite bad policies do have perfectly real benefits; they’re just totally outweighed by their costs: For example, cutting the top income tax rate to 20% probably would actually contribute something to economic growth. Not a lot, and it would cut a swath through the federal budget and dramatically increase inequality—but it’s not all downside. Yet Kadlic couldn’t think of even one benefit of the gold standard that actually holds up. (I can do his work for him: I do know of one benefit of the gold standard, but as I’ll get to momentarily it’s quite small and can easily be achieved in better ways.)

First of all, it’s quite clear that the gold standard did not increase economic growth. If you cherry-pick your years properly, you can make it seem like Nixon leaving the gold standard hurt growth, but if you look at the real long-run trends in economic growth it’s clear that we had really erratic growth up until about the 1910s (when government spending surged in WW1 and the Federal Reserve was established); we then went through a temporary surge recovering from the Great Depression and again during WW2; and finally, if you smooth out the business cycle, our growth rates have slowly trended downward as growth in productivity has gradually slowed down.

Here’s GDP growth from 1800 to 1900, when we were on the classical gold standard:

[Figure: US GDP growth, 1800-1900]

Here’s GDP growth from 1929 to today, using data from the Bureau of Economic Analysis:

[Figure: US GDP growth, 1929-present (BEA data)]

Also, both of these are total GDP growth (because that is what Kadlic said), which means that part of what you’re seeing here is population growth rather than growth in income per person. Here’s GDP per person in the 1800s:

[Figure: US GDP per person growth in the 1800s]

If you didn’t already know, I bet you can’t guess where on those graphs we left the gold standard, which you’d clearly be able to do if the gold standard had this dramatic “double your GDP growth” kind of effect. I can’t immediately rule out some small benefit to the gold standard just from this data, but don’t worry; more thorough economic studies have done that. Indeed, it is the mainstream consensus among economists today that the gold standard is what caused the Great Depression.

Indeed, there’s a whole subfield of historical economics research that basically amounts to “What were they thinking?” trying to explain why countries stayed on the gold standard for so long when it clearly wasn’t working. Here’s a paper trying to argue it was a costly signal of your “rectitude” in global bond markets, but I find much more compelling the argument that it was psychological: Their belief in the gold standard was simply too strong, so confirmation bias kept holding them back from what needed to be done. They were like my aforementioned #EndTheFed acquaintance.

Then we get to Kadlic’s second point: Does the gold standard reduce the risk of financial crises? Let’s also address point 4, which is closely related: Does the gold standard improve price stability? Tell that to 1929.

In fact, financial crises were more common on the classical gold standard; the period of pure fiat monetary policy was so stable that it was called the Great Moderation, until the crash in 2008 screwed it all up—and that crash occurred essentially outside the standard monetary system, in the “shadow banking system” of unregulated and virtually unlimited derivatives. Had we actually forced banks to stay within the light of the standard banking system, the Great Moderation might have continued indefinitely.

As for “price stability”, that’s sort of true if you look at the long run, because prices were as likely to go down as they were to go up. But that isn’t what we mean by “price stability”. A system with good price stability will have a low but positive and steady level of inflation, and will therefore exhibit some long-run increases in price levels; it won’t have prices jump up and down erratically and end up on average the same.

For jump up and down is what prices did on the gold standard, as you can see from FRED:

[Figure: US inflation over the long run (FRED)]

This is something we could have predicted in advance; the price of any given product jumps up and down over time, and gold is just one product among many. Tying prices to gold makes no more sense than tying them to any other commodity.

As for stopping the rise in energy prices, energy prices aren’t rising. Even if they were (and they could at some point), the only way the gold standard would stop that is by triggering deflation (and therefore recession) in the rest of the economy.

Regarding number 6, I don’t see how the fiscal balance of federal and state governments is improved by periodic bouts of deflation that make their debt unpayable.

As for number 7, saving Medicare and Social Security, their payments out are tied to inflation and their payments in are tied to nominal GDP, so overall inflation has very little effect on their long-term stability. In any case, the problem with Medicare is spiraling medical costs (which Obamacare has done a lot to fix), and the problem with Social Security is just the stupid arbitrary cap on the income subject to payroll tax; the gold standard would do very little to solve either of those problems, though I guess it would make the nominal income cap less binding by triggering deflation, which is just about the worst way to avoid a price ceiling I’ve ever heard.

Regarding 8 and 9, I don’t even understand why Kadlic thinks that going to a gold standard would empower individuals over banks (does it seem like individuals were empowered over banks in the “Robber Baron Era”?), or what in the world it has to do with giving people more liberty (all that… freedom… you lose… when the Fed… stabilizes… prices?), so I don’t even know where to begin on those assertions. You know what empowers people over banks? The Consumer Financial Protection Bureau. You know what would enhance liberty? Ending mass incarceration. Libertarians fight tooth and nail against the former; sometimes they get behind the latter, but sometimes they don’t; Gary Johnson for some bizarre reason believes in privatization of prisons, which are directly linked to the surge in US incarceration.

The only benefit I’ve been able to come up with for the gold standard is as a commitment mechanism, something the Federal Reserve could do to guarantee its future behavior and thereby reduce the fear that it will suddenly change course on its past promises. This would make forward guidance a lot more effective at changing long-term interest rates, because people would have reason to believe that the Fed means what it says when it projects its decisions 30 years out.

But there are much simpler and better commitment mechanisms the Fed could use. They could commit to a Taylor Rule or nominal GDP targeting, both of which mainstream economists have been clamoring for for decades. There are some definite downsides to both proposals, but also some important upsides; and in any case they’re both obviously better than the gold standard and serve the same forward guidance function.
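
For concreteness, the original Taylor Rule really is simple enough to fit in a few lines. These are Taylor’s 1993 coefficients; actual proposals tweak them in various ways.

```python
def taylor_rule(inflation, output_gap, natural_rate=2.0, target_inflation=2.0):
    """Taylor's original 1993 rule; all values in percentage points."""
    return (inflation + natural_rate
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# With 3% inflation and output 1% above potential:
# taylor_rule(3, 1) -> 6.0, i.e., a 6% federal funds rate target
```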

Indeed, it’s really quite baffling that so many people believe in the gold standard. It cries out for some sort of psychological explanation, as to just what cognitive heuristic is failing when otherwise-intelligent and highly-educated people get monetary policy so deeply, deeply wrong. A lot of them don’t even seem to grasp when or how we left the gold standard; it really happened when FDR suspended gold convertibility in 1933. After that, under the Bretton Woods system, only national governments could exchange money for gold, and the Nixon shock that people normally think of as “ending the gold standard” was just the final nail in the coffin, and it was clearly necessary, since inflation was rapidly eating through our gold reserves.

A lot of it seems to come down to a deep distrust of government, especially federal government (I still do not grok why the likes of Ron Paul think state governments are so much more trustworthy than the federal government); the Federal Reserve is a government agency (sort of) and is therefore not to be trusted—and look, it has federal right there in the name.

But why do people hate government so much? Why do they think politicians are much less honest than they actually are? Part of it could have to do with the terrifying expansion of surveillance and weakening of civil liberties in the face of any perceived outside threat (Sedition Act, PATRIOT Act, basically the same thing), but often the same people defending those programs are the ones who otherwise constantly complain about Big Government. Why do polls consistently show that people don’t trust the government, but want it to do more?

I think a lot of this comes down to the vague meaning of the word “government” and the associations we make with particular questions about it. When I ask “Do you trust the government?” you think of the NSA and the Vietnam War and Watergate, and you answer “No.” But when I ask “Do you want the government to do more?” you think of the failure at Katrina, the refusal to expand Medicaid, the pitiful attempts at reducing carbon emissions, and you answer “Yes.” When I ask if you like the military, your conditioned reaction is to say the patriotic thing, “Yes.” But if I ask whether you like the wars we’ve been fighting lately, you think about the hundreds of thousands of people killed and the wanton destruction to achieve no apparent actual objective, and you say “No.” Most people don’t come to these polls with thought-out opinions they want to express; the questions evoke emotional responses in them and they answer accordingly. You can also evoke different responses by asking “Should we cut government spending?” (People say “Yes.”) versus asking “Should we cut military spending, Social Security, or Medicare?” (People say “No.”) The former evokes a sense of abstract government taking your tax money; the latter evokes the realization that this money is used for public services you value.

So, the gold standard has acquired positive emotional vibes, and the Fed has acquired negative emotional vibes.

The former is fairly easy to explain: “good as gold” is an ancient saying, and “the gold standard” is even a saying we use in general to describe the right way of doing something (“the gold standard in prostate cancer treatment”). Humans have always had a weird relationship with gold; something about its timeless and noncorroding shine mesmerizes us. That’s why you occasionally get proposals for a silver standard, but no one ever seems to advocate an oil standard, an iron standard, or a lumber standard, which would make about as much sense.

The latter is a bit more difficult to explain: What did the Fed ever do to you? But I think it might have something to do with the complexity of sound monetary policy, and the resulting air of technocratic mystery surrounding it. Moreover, the Fed actively cultivates this image, by using “open-market operations” and “quantitative easing” to “target interest rates”, instead of just saying, “We’re printing money.” There may be some good reasons to do it this way, but a lot of it really does seem to be intended to obscure the truth from the uninitiated and perpetuate the myth that they are almost superhuman. “It’s all very complicated, you see; you wouldn’t understand.” People are hoarding their money, so there’s not enough money in circulation, so prices are falling, so you’re printing more money and trying to get it into circulation. That’s really not that complicated. Indeed, if it were, we wouldn’t be able to write a simple equation like a Taylor Rule or nominal GDP targeting in order to automate it!

The reason so many people become gold bugs after taking a couple of undergraduate courses in economics, then, is that this teaches them enough that they feel they have seen through the veil; the curtain has been pulled open and the all-powerful Wizard revealed to be an ordinary man at a control panel. (Spoilers? The movie came out in 1939. Actually, it was kind of about the gold standard.) “What? You’ve just been printing money all this time? But that is surely madness!” They don’t actually understand why printing money is actually a perfectly sensible thing to do on many occasions, and it feels to them a lot like what would happen if they just went around printing money (counterfeiting) or what a sufficiently corrupt government could do if they printed unlimited amounts (which is why they keep bringing up Zimbabwe). They now grasp what is happening, but not why. A little learning is a dangerous thing.

Now as for why Paul Volcker wants to go back to Bretton Woods? That, I cannot say. He’s definitely got more than a little learning. At least he doesn’t want to go back to the classical gold standard.

The credit rating agencies to be worried about aren’t the ones you think

JDN 2457499

John Oliver is probably the best investigative journalist in America today, despite being neither American nor officially a journalist; last week he took on the subject of credit rating agencies, a classic example of his mantra “If you want to do something evil, put it inside something boring.” (Note that the segment is on HBO, so there is foul language.)

As ever, his analysis of the subject is quite good—it’s absurd how much power these agencies have over our lives, and how little accountability they have for even assuring accuracy.

But I couldn’t help but feel that he was kind of missing the point. The credit rating agencies to really be worried about aren’t Equifax, Experian, and TransUnion, the ones that assess credit ratings on individuals. They are Standard & Poor’s, Moody’s, and Fitch (which would have been even easier to skewer the way John Oliver did—perhaps we can get them confused with Standardly Poor, Moody, and Filch), the agencies which assess credit ratings on institutions.

These credit rating agencies have almost unimaginable power over our society. They are responsible for rating the risk of corporate bonds, certificates of deposit, stocks, derivatives such as mortgage-backed securities and collateralized debt obligations, and even municipal and government bonds.

S&P, Moody’s, and Fitch don’t just rate the creditworthiness of Goldman Sachs and J.P. Morgan Chase; they rate the creditworthiness of Detroit and Greece. (Indeed, they played an important role in the debt crisis of Greece, which I’ll talk about more in a later post.)

Moreover, they are proven corrupt. It’s a matter of public record.

Standard and Poor’s is the worst; they have been successfully sued for fraud by small banks in Pennsylvania and by the State of New Jersey; they have also settled fraud cases with the Securities and Exchange Commission and the Department of Justice.

Moody’s has also been sued for fraud by the Department of Justice, and all three have been prosecuted for fraud by the State of New York.

But in fact this underestimates the corruption, because the worst conflicts of interest aren’t even illegal, or weren’t until Dodd-Frank was passed in 2010. The basic structure of this credit rating system is fundamentally broken; the agencies are private, for-profit corporations, and they get their revenue entirely from the banks that pay them to assess their risk. If they rate a bank’s asset as too risky, the bank stops paying them, and instead goes to another agency that will offer a higher rating—and simply the threat of doing so keeps them in line. As a result their ratings are basically uncorrelated with real risk—they failed to predict the collapse of Lehman Brothers or the failure of mortgage-backed CDOs, and they didn’t “predict” the European debt crisis so much as cause it by their panic.

Then of course there’s the fact that they are obviously an oligopoly, and furthermore one that is explicitly protected under US law. But then it dawns upon you: Wait… US law? US law decides the structure of credit rating agencies that set the bond rates of entire nations? Yes, that’s right. You’d think that such ratings would be set by the World Bank or something, but they’re not; in fact here’s a paper published by the World Bank in 2004 about how rather than reform our credit rating system, we should instead tell poor countries to reform themselves so they can better impress the private credit rating agencies.

In fact the whole concept of “sovereign debt risk” is fundamentally defective; a country that borrows in its own currency should never have to default on debt under any circumstances. National debt is almost nothing like personal or corporate debt. A nation’s real fears should be inflation and unemployment—its monetary policy should be set to minimize the harm of these two basic macroeconomic problems, understanding that policies which mitigate one may inflame the other. There is such a thing as bad fiscal policy, but it has nothing to do with “running out of money to pay your debt” unless you are forced to borrow in a currency you can’t control (as Greece is, because they are on the Euro—their debt is less like the US national debt and more like the debt of Puerto Rico, which is suffering an ongoing debt crisis you may not have heard about). If you borrow in your own currency, you should be worried about excessive borrowing creating inflation and devaluing your currency—but not about suddenly being unable to repay your creditors. The whole concept of giving a sovereign nation a credit rating makes no sense. You will be repaid on time and in full, in nominal terms; if inflation or currency exchange has devalued the currency you are repaid in, that’s sort of like a partial default, but it’s a fundamentally different kind of “default” than simply not paying back the money—and credit ratings have no way of capturing that difference.

In particular, it makes no sense for interest rates on government bonds to go up when a country is suffering some kind of macroeconomic problem.

The basic argument for why interest rates go up when risk is higher is that lenders expect to be paid more by those who do pay to compensate for what they lose from those who don’t pay. This is already much more problematic than most economists appreciate; I’ve been meaning to write a paper on how this system creates self-fulfilling prophecies of default and moral hazard from people who pay their debts being forced to subsidize those who don’t. But it at least makes some sense.
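
The standard logic is just a break-even condition. Here is a minimal sketch, assuming lenders recover nothing in a default:

```python
def breakeven_rate(risk_free, default_prob):
    """Rate at which expected repayment matches the risk-free return:
    (1 + i) * (1 - p) = 1 + r  =>  i = (r + p) / (1 - p)."""
    return (risk_free + default_prob) / (1 - default_prob)

# breakeven_rate(0.02, 0.05) -> ~0.074: a 5% chance of default pushes
# a 2% risk-free rate up to about 7.4%.
```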

But if a country is “high risk” in the sense of macroeconomic instability undermining the real value of its debt, we want to ensure that it can restore macroeconomic stability. Yet we know that when there is a surge in interest rates on government bonds, instability gets worse, not better. Fiscal policy is suddenly shifted away from real production into higher debt payments, and this creates unemployment and makes the economic crisis worse. As Paul Krugman writes about frequently, these policies of “austerity” cause enormous damage to national economies and ultimately benefit no one, because they destroy the source of wealth that would have been used to repay the debt.

By letting credit rating agencies decide the rates at which governments must borrow, we are effectively treating national governments as a special case of corporations. But corporations, by design, act for profit and can go bankrupt. National governments are supposed to act for the public good and persist indefinitely. We can’t simply let Greece fail as we might let a bank fail (and of course we’ve seen that there are serious downsides even to that). We have to restructure the sovereign debt system so that it benefits the development of nations rather than detracting from it. The first step is removing the power of private for-profit corporations in the US to decide the “creditworthiness” of entire countries. If we need to assess such risks at all, they should be done by international institutions like the UN or the World Bank.

But right now people are so stuck in the idea that national debt is basically the same as personal or corporate debt that they can’t even understand the problem. For after all, one must repay one’s debts.

Why is it so hard to get a job?

JDN 2457411

The United States is slowly dragging itself out of the Second Depression.

Unemployment fell from almost 10% to about 5%.

Core inflation has been kept between 0% and 2% most of the time.

Overall inflation has been within a reasonable range:

[Figure: US inflation]

Real GDP has returned to its normal growth trend, though with a permanent loss of output relative to what would have happened without the Great Recession.

[Figure: US real GDP growth]

Consumption spending is also back on trend, tracking GDP quite precisely.

The Federal Reserve even raised the federal funds interest rate above the zero lower bound, signaling a return to normal monetary policy. (As I argued previously, I’m pretty sure that was their main goal actually.)

Employment remains well below the pre-recession peak, but is now beginning to trend upward once more.

The only thing that hasn’t recovered is labor force participation, which continues to decline. This is how we can have unemployment go back to normal while employment remains depressed; people leave the labor force by retiring, going back to school, or simply giving up looking for work. By the formal definition, someone is only unemployed if they are actively seeking work. No, this is not new, and it is certainly not Obama rigging the numbers. This is how we have measured unemployment for decades.

Actually, it’s kind of the opposite: Since the Clinton administration we’ve also kept track of “broad unemployment”, which includes people who’ve given up looking for work or people who have some work but are trying to find more. But we can’t directly compare it to anything that happened before 1994, because the BLS didn’t keep track of it before then. All we can do is estimate based on what we did measure. Based on such estimation, it is likely that broad unemployment in the Great Depression may have gotten as high as 50%. (I’ve found that one of the best-fitting models is actually one of the simplest; assume that broad unemployment is 1.8 times narrow unemployment. This fits much better than you might think.)
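
The rule of thumb is almost embarrassingly simple. Both example figures below are rough historical values, not official BLS series:

```python
def broad_unemployment(narrow):
    """Crude rule of thumb from the text: broad ~ 1.8 x narrow unemployment."""
    return 1.8 * narrow

# Narrow unemployment peaked around 25% in the Great Depression:
# broad_unemployment(25) -> 45, in the ballpark of the ~50% estimate above.
# Narrow peaked around 10% in the Great Recession:
# broad_unemployment(10) -> 18, close to the ~17% that U-6 actually hit.
```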

So, yes, we muddle our way through, and the economy eventually heals itself. We could have brought the economy back much sooner if we had better fiscal policy, but at least our monetary policy was good enough that we were spared the worst.

But I think most of us—especially in my generation—recognize that it is still really hard to get a job. Overall GDP is back to normal, and even unemployment looks all right; but why are so many people still out of work?

I have a hypothesis about this: I think a major part of why it is so hard to recover from recessions is that our system of hiring is terrible.

Contrary to popular belief, layoffs do not actually substantially increase during recessions. Quits are substantially reduced, because people are afraid to leave current jobs when they aren’t sure of getting new ones. As a result, rates of job separation actually go down in a recession. Job separation does predict recessions, but not in the way most people think. One of the things that made the Great Recession different from other recessions is that most layoffs were permanent, instead of temporary—but we’re still not sure exactly why.

Here, let me show you some graphs from the BLS.

This graph shows job openings from 2005 to 2015:

[Figure: Job openings, 2005-2015 (BLS)]

This graph shows hires from 2005 to 2015:

[Figure: Hires, 2005-2015 (BLS)]

Both of those show the pattern you’d expect, with openings and hires plummeting in the Great Recession.

But check out this graph, of job separations from 2005 to 2015:

[Figure: Job separations, 2005-2015 (BLS)]

Same pattern!

Unemployment in the Second Depression wasn’t caused by a lot of people losing jobs. It was caused by a lot of people not getting jobs—either after losing previous ones, or after graduating from school. There weren’t enough openings, and even when there were openings there weren’t enough hires.

Part of the problem is obviously just the business cycle itself. Spending drops because of a financial crisis, then businesses stop hiring people because they don’t project enough sales to justify it; then spending drops even further because people don’t have jobs, and we get caught in a vicious cycle.

But we are now recovering from the cyclical downturn; spending and GDP are back to their normal trend. Yet the jobs never came back. Something is wrong with our hiring system.

So what’s wrong with our hiring system? Probably a lot of things, but here’s one that’s been particularly bothering me for a long time.

As any job search advisor will tell you, networking is essential for career success.

There are so many different places you can hear this advice, it honestly gets tiring.

But stop and think for a moment about what that means. One of the most important determinants of what job you will get is… what people you know?

It’s not what you are best at doing, as it would be if the economy were optimally efficient.

It’s not even what you have credentials for, as we might expect as a second-best solution.

It’s not even how much money you already have, though that certainly is a major factor as well.

It’s what people you know.

Now, I realize, this is not entirely beyond your control. If you actively participate in your community, attend conferences in your field, and so on, you can establish new contacts and expand your network. A major part of the benefit of going to a good college is actually the people you meet there.

But a good portion of your social network is more or less beyond your control, and above all, says almost nothing about your actual qualifications for any particular job.

There are certain jobs, such as marketing, that actually directly relate to your ability to establish rapport and build weak relationships rapidly. These are a tiny minority. (Actually, most of them are the sort of job that I’m not even sure needs to exist.)

For the vast majority of jobs, your social skills are a tiny, almost irrelevant part of the actual skill set needed to do the job well. This is true of jobs from writing science fiction to teaching calculus, from diagnosing cancer to flying airliners, from cleaning up garbage to designing spacecraft. Social skills are rarely harmful, and even often provide some benefit, but if you need a quantum physicist, you should choose the recluse who can write down the Dirac equation by heart over the well-connected community leader who doesn’t know what an integral is.

At the very least, it strains credibility to suggest that social skills are so important for every job in the world that they should be one of the defining factors in who gets hired. And make no mistake: Networking is as beneficial for landing a job at a local bowling alley as it is for becoming Chair of the Federal Reserve. Indeed, for many entry-level positions networking is literally all that matters, while advanced positions at least exclude candidates who don’t have certain necessary credentials, and then make the decision based upon who knows whom.

Yet, if networking is so inefficient, why do we keep using it?

I can think of a couple reasons.

The first reason is that this is how we’ve always done it. Indeed, networking strongly pre-dates capitalism or even money; in ancient tribal societies there were certainly jobs to assign people to: who will gather berries, who will build the huts, who will lead the hunt. But there were no colleges, no certifications, no resumes—there was only your position in the social structure of the tribe. I think most people simply automatically default to a networking-based system without even thinking about it; it’s just the instinctual System 1 heuristic.

One of the few things I really liked about Debt: The First 5000 Years was the discussion of how similar the behavior of modern CEOs is to that of ancient tribal chieftains, for reasons that make absolutely no sense in terms of neoclassical economic efficiency—but perfect sense in light of human evolution. I wish Graeber had spent more time on that, instead of on the long digressions about international debt policy that he clearly does not understand.

But there is a second reason as well, a better reason, a reason that we can’t simply give up on networking entirely.

The problem is that many important skills are very difficult to measure.

College degrees do a decent job of assessing our raw IQ, our willingness to persevere on difficult tasks, and our knowledge of the basic facts of a discipline (as well as a fantastic job of assessing our ability to pass standardized tests!). But when you think about the skills that really make a good physicist, a good economist, a good anthropologist, a good lawyer, or a good doctor—they really aren’t captured by any of the quantitative metrics that a college degree provides. Your capacity for creative problem-solving, your willingness to treat others with respect and dignity; these things don’t appear in a GPA.

This is especially true in research: The degree tells how good you are at doing the parts of the discipline that have already been done—but what we really want to know is how good you’ll be at doing the parts that haven’t been done yet.

Nor are skills precisely aligned with the content of a resume; the best predictor of doing something well may in fact be whether you have done so in the past—but how can you get experience if you can’t get a job without experience?

These so-called “soft skills” are difficult to measure—but not impossible. Basically the only reliable measurement mechanisms we have require knowing and working with someone for a long span of time. You can’t read it off a resume, you can’t see it in an interview (interviews are actually a horribly biased hiring mechanism, particularly biased against women). In effect, the only way to really know if someone will be good at a job is to work with them at that job for awhile.

There’s a fundamental information problem here I’ve never quite been able to resolve. It pops up in a few other contexts as well: How do you know whether a novel is worth reading without reading the novel? How do you know whether a film is worth watching without watching the film? When the quality of something can only be learned by paying the cost of acquiring it, there is basically no way to assess it before purchase.

Networking is an attempt to get around this problem. To decide whether to read a novel, ask someone who has read it. To decide whether to watch a film, ask someone who has watched it. To decide whether to hire someone, ask someone who has worked with them.

The problem is that this is such a weak measure that it’s not much better than no measure at all. I often wonder what would happen if businesses were required to hire people based entirely on resumes, with no interviews, no recommendation letters, and any personal contacts treated as conflicts of interest rather than useful networking opportunities—a world where the only thing we use to decide whether to hire someone is their documented qualifications. Could it herald a golden age of new economic efficiency and job fulfillment? Or would it result in widespread incompetence and catastrophic collapse? I honestly cannot say.

Thus ends our zero-lower-bound interest rate policy

JDN 2457383

Not with a bang, but with a whimper.

If you are reading the blogs as they are officially published, it will have been over a week since the Federal Reserve ended its policy of zero interest rates. (If you are reading this as a Patreon Blog from the Future, it will only have been a few days.)

The official announcement was made on December 16. The Federal Funds Target Rate will be raised from 0%-0.25% to 0.25%-0.5%. That one-quarter percentage point—itself no larger than the margin of error the Fed allots itself—will make all the difference.

As pointed out in the New York Times, this is the first time nominal interest rates have been raised in almost a decade. But the Fed had been promising it for some time, and thus a major reason they did it was to preserve their own credibility. They also say they think inflation is about to hit the 2% target, though it hasn’t yet (and I was never clear on why 2% was the target in the first place).

Actually, overall inflation is currently near zero. What is at 2% is what’s called “core inflation”, which excludes particularly volatile products such as oil and food. The idea is that we want to set monetary policy based upon long-run trends in the economy as a whole, not based upon sudden dips and surges in oil prices. But right now we are in the very odd scenario of the Fed raising interest rates in order to stop inflation even as the total amount most people need to spend to maintain their standard of living is the same as it was a year ago.

As MSNBC argues, it is essentially an announcement that the Second Depression is over and the economy has now returned to normal. Of course, simply announcing such a thing does not make it true.

Personally, I think this move is largely symbolic. The difference between 0% and 0.25% is unimportant for most practical purposes.

If you owe $100,000 over 30 years at 0% interest, you will pay $277.78 per month, totaling of course $100,000. If your interest rate were raised to 0.25% interest, you would instead owe $288.35 per month, totaling $103,807.28. Even over 30 years, that 0.25% interest raises your total expenditure by less than 4%.

Over shorter terms it’s even less important. If you owe $20,000 over 5 years at 0% interest, you will pay $333.33 per month totaling $20,000. At 0.25%, you would pay $335.46 per month totaling $20,127.34, a mere 0.6% more.
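
Both of those figures come straight out of the standard amortization formula; here it is as a sketch, reproducing the numbers above:

```python
def monthly_payment(principal, annual_rate, years):
    """Fixed-rate amortization; annual_rate as a decimal (0.0025 = 0.25%)."""
    n = years * 12
    if annual_rate == 0:
        return principal / n
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -n)

# monthly_payment(100_000, 0.0, 30)    -> 277.78
# monthly_payment(100_000, 0.0025, 30) -> ~288.35
# monthly_payment(20_000, 0.0025, 5)   -> ~335.46
```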

Moreover, if a bank was willing to take out a loan at 0%, they’ll probably still be willing at 0.25%.

Where it would have the largest impact is in more exotic financial instruments, like zero-amortization or negative-amortization bonds. A zero-amortization bond at 0% is literally free money forever (assuming you can keep rolling it over). A zero-amortization bond at 0.25% means you must at least pay 0.25% of the money back each year. A negative-amortization bond at 0% makes no sense mathematically (somehow you pay back less than 0% at each payment?), while a negative-amortization bond at 0.25% only doesn’t make sense practically. If both zero and negative-amortization seem really bizarre and impossible to justify, that’s because they are. They should not exist. Most exotic financial instruments have no reason to exist, aside from the fact that they can be used to bamboozle people into giving money to the financial corporations that create them. (Which reminds me, I need to see The Big Short. But of course I have to see Star Wars: The Force Awakens first; one must have priorities.)

So, what will happen as a result of this change in interest rates? Probably not much. Inflation might go down a little—which means we might have overall deflation, and that would be bad—and the rate of increase in credit might drop slightly. In the worst-case scenario, unemployment starts to rise again, the Fed realizes their mistake, and interest rates will be dropped back to zero.

I think it’s more instructive to look at why they did this—the symbolic significance behind it.

The zero lower bound is weird. It makes a lot of economists very uncomfortable. The usual rules for how monetary and fiscal policy work break down, because the equation hits up against a constraint—a corner solution, more technically. Krugman often talks about how many of the usual ideas about how interest rates and government spending work collapse at the zero-lower-bound. We have models of this sort of thing that are pretty good, but they’re weird and counter-intuitive, so policymakers never seem to actually use them.

What is the zero lower bound, you ask? Exactly what it says on the tin. There is a lower bound on how low you can set an interest rate, and for all practical purposes that limit is zero. If you start trying to set an interest rate of -5%, people won’t be willing to loan out money and will instead hoard cash. (Interestingly, a central bank with a strong currency, such as that of the US, UK, or EU, can actually set small negative nominal interest rates—because people consider their bonds safer than cash, so they’ll pay for the safety. The ECB, Europe’s Fed, actually did so for awhile.)

The zero-lower-bound actually applies to prices in general, not just interest rates. If a product is so worthless to you that you don’t even want it if it’s free, it’s very rare for anyone to actually pay you to take it—partly because there might be nothing to stop you from taking a huge amount of it and forcing them to pay you ridiculous amounts of money. “How much is this paperclip?” “-$0.75.” “I’ll have 50 billion, please.” In a few rare cases, they might pay you to take it, if that payment is less than what it would cost them to store or dispose of it. Also, if they benefit from giving it to you, companies will give you things for free—think ads and free samples. But basically, if people won’t even take something for free, that thing simply doesn’t get sold.

But if we are in a recession, we really don’t want loans to stop being made altogether. So if people are unwilling to take out loans at 0% interest, we’re in trouble. Generally what we have to do is rely on inflation to reduce the real value of money over time, thus creating a real interest rate that’s negative even though the nominal interest rate remains stuck at 0%. But what if inflation is very low? Then there’s nothing you can do except find a way to raise inflation or increase demand for credit. This means relying upon unconventional methods like quantitative easing (trying to cause inflation), or preferably using fiscal policy to spend a bunch of money and thereby increase demand for credit.

What the Fed is basically trying to do here is say that we are no longer in that bad situation. We can now set interest rates where they actually belong, rather than forcing them as low as they’ll go and hoping inflation will make up the difference.

It's a bit like scoring 100% on a test: there's no way of knowing whether you just barely managed it, or whether you would have done just as well if the test were twice as hard—but if you score 99%, that really is your level, and you would have done worse if the test were harder. In the former case you were up against a constraint; in the latter it's your actual value. The Fed is essentially announcing that we really want interest rates near 0%, as opposed to being bound at 0%—and the way they do that is by setting a target just slightly above 0%.

So far, there doesn’t seem to have been much effect on markets. And frankly, that’s just what I’d expect.

Tax incidence revisited, part 3: Taxation and the value of money

JDN 2457352

Our journey through the world of taxes continues. I’ve already talked about how taxes have upsides and downsides, as well as how taxes directly affect prices and “before-tax” prices are almost meaningless.

Now it’s time to get into something that even a lot of economists don’t quite seem to grasp, yet which turns out to be fundamental to what taxes truly are.

In the usual way of thinking, it works something like this: We have an economy, through which a bunch of money flows, and then the government comes in and takes some of that money in the form of taxes. They do this because they want to spend money on a variety of services, from military defense to public schools, and in order to afford doing that they need money, so they take in taxes.

This view is not simply wrong—it’s almost literally backwards. Money is not something the economy had that the government comes in and takes. Money is something that the government creates and then adds to the economy to make it function more efficiently. Taxes are not the government taking out money that they need to use; taxes are the government regulating the quantity of money in the system in order to stabilize its value. The government could spend as much money as they wanted without collecting a cent in taxes (not should, but could—it would be a bad idea, but definitely possible); taxes do not exist to fund the government, but to regulate the money supply.

Indeed—and this is the really vital and counter-intuitive point—without taxes, money would have no value.

There is an old myth of how money came into existence that involves bartering: People used to trade goods for other goods, and then people found that gold was particularly good for trading, and started using it for everything, and then eventually people started making paper notes to trade for gold, and voila, money was born.

In fact, such a “barter economy” has never been documented to exist. It probably did once or twice, just given the enormous variety of human cultures; but it was never widespread. Ancient economies were based on family sharing, gifts, and debts of honor.

It is true that gold and silver emerged as the first forms of money, “commodity money”, but they did not emerge endogenously out of trading that was already happening—they were created by the actions of governments. The real value of the gold or silver may have helped things along, but it was not the primary reason why people wanted to hold the money. Money has been based upon government for over 3000 years—the history of money and civilization as we know it. “Fiat money” is basically a redundancy; almost all money, even in a gold standard system, is ultimately fiat money.

The primary reason why people wanted the money was so that they could use it to pay taxes.

It’s really quite simple, actually.

When there is a rule imposed by the government that you will be punished if you don't turn up on April 15 with at least $4,287 in pieces of green paper marked "US Dollar", you will try to acquire $4,287 in pieces of green paper marked "US Dollar". You will not care whether those notes are exchangeable for gold or silver; you will not care that they were printed by the government originally. Because you will be punished if you don't come up with those pieces of paper, you will try to get some.

If someone else has some pieces of green paper marked “US Dollar”, and knows that you need them to avoid being punished on April 15, they will offer them to you—provided that you give them something they want in return. Perhaps it’s a favor you could do for them, or something you own that they’d like to have. You will be willing to make this exchange, in order to avoid being punished on April 15.
Thus, taxation gives money value, and allows purchases to occur.

Once you establish a monetary system, it becomes self-sustaining. If you know other people will accept money as payment, you are more willing to accept money as payment because you know that you can go spend it with those people. “Legal tender” also helps this process along—the government threatens to punish people who refuse to accept money as payment. In practice, however, this sort of law is rarely enforced, and doesn’t need to be, because taxation by itself is sufficient to form the basis of the monetary system.

It’s deeply ironic that people who complain about printing money often say we are “debasing” the currency; when you think carefully about what debasement was, it clearly shows that the value of money never really resided in the gold or silver itself. If a government can successfully extract revenue from its monetary system by changing the amount of gold or silver in each coin, then the value of those coins can’t be in the gold and silver—it has to be in the power of the government. You can’t make a profit by dividing a commodity into smaller pieces and then selling the pieces. (Okay, you sort of can, by buying in bulk and selling at retail. But that’s not what we’re talking about. You can’t make money by buying 100 50-gallon barrels of oil and then selling them as 125 40-gallon barrels of oil; it’s the same amount of oil.)

Similarly, the fact that there is such a thing as seigniorage, the value of currency in excess of its cost to create, shows that governments impart value to their money. Indeed, one of the reasons for debasement was to realign the value of coins with the value of the metals in the coins, which wouldn't be necessary if those were simply by definition the same thing.

Taxation serves another important function in the monetary system, which is to regulate the supply of money. The government adds money to the economy by spending, and removes it by taxing; if they add more than they remove—a deficit—the money supply increases, while if they remove more than they add—a surplus—the money supply decreases. In order to maintain stable prices, you want the money supply to increase at approximately the rate of growth; for moderate inflation (which is probably better than actual price stability), you want the money supply to increase slightly faster than the rate of growth. Thus, in general we want the government deficit as a portion of GDP to be slightly larger than the growth rate of the economy. By that standard, our current deficit of 2.8% of GDP is actually about where it should be, and we have no particular reason to want to decrease it. (This is somewhat oversimplified, because it ignores the contribution of the Federal Reserve, interest rates, and bank-created money. Most of the money in the world is actually not created by the government, but by banks, which are restrained to a greater or lesser extent by the government.)
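If you like, here's the bookkeeping in miniature (a toy Python sketch; all the figures are invented round numbers for illustration, not actual budget data):

# Toy bookkeeping: spending adds money to the economy, taxes remove it,
# and the deficit is the net amount added. All figures are made up.
gdp = 18e12
spending = 4.0e12
taxes = 3.5e12
deficit = spending - taxes                     # net money added: $0.5 trillion
print(f"Deficit: {deficit / gdp:.1%} of GDP")  # Deficit: 2.8% of GDP
# If real growth is about 2% and we want a bit of inflation on top,
# a deficit a little above 2% of GDP keeps the money supply growing
# slightly faster than output, which is the rule of thumb above.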

Even a lot of people who try to explain modern monetary theory mistakenly speak as though there was a fundamental shift when we fully abandoned the gold standard in the 1970s. (This is a good explanation overall, but it makes this very error.) But in fact a gold standard really isn't money "backed" by anything—gold is not what gives the money value, gold is almost worthless by itself. It's pretty and it doesn't corrode, but otherwise, what exactly can you do with it? Being tied to money is what made gold valuable, not the other way around. To see this, imagine a world where you have 20,000 tons of gold, but you know that you can never sell it. No one will ever purchase a single ounce. Would you feel particularly rich in that scenario? I think not. Now suppose you have a virtually limitless quantity of pieces of paper that you know people will accept for anything you would ever wish to buy. They are backed by nothing, they are just pieces of paper—but you are now rich, by the standard definition of the word. I can even strip the exchange value of money out of the analogy and rely on taxation alone: if you know that in two days you will be imprisoned if you don't have this particular piece of paper, for the next two days you will guard that piece of paper with your life. It won't bother you that you can't exchange that piece of paper for anything else—you wouldn't even want to. If instead someone else has it, you'll be willing to do some rather large favors for them in order to get it.

Whenever people try to tell me that our money is “worthless” because it’s based on fiat instead of backed by gold (this happens surprisingly often), I always make them an offer: If you truly believe that our money is worthless, I’ll gladly take any you have off of your hands. I will even provide you with something of real value in return, such as an empty aluminum can or a pair of socks. If they truly believe that fiat money is worthless, they should eagerly accept my offer—yet oddly, nobody ever does.

This does actually create a rather interesting argument against progressive taxation: If the goal of taxation is simply to control inflation, shouldn’t we tax people based only on their spending? Well, if that were the only goal, maybe. But we also have other goals, such as maintaining employment and controlling inequality. Progressive taxation may actually take a larger amount of money out of the system than would be necessary simply to control inflation; but it does so in order to ensure that the super-rich do not become even more rich and powerful.

Governments are limited by real constraints of power and resources, but they have no monetary constraints other than those they impose themselves. There is definitely something strongly coercive about taxation, and therefore about a monetary system which is built upon taxation. Unfortunately, I don't know of any good alternatives. We might be able to come up with one: Perhaps people could donate to public goods in a mutually-enforced way similar to Kickstarter, but nobody has yet made that practical; or maybe the government could restructure itself to make a profit by selling private goods at the same time as it provides public goods, but then we have all the downsides of nationalized businesses. For the time being, the only system which has been shown to work to provide public goods and maintain long-term monetary stability is a system in which the government taxes and spends.

A gold standard is just a fiat monetary system in which the central bank arbitrarily decides that their money supply will be directly linked to the supply of an arbitrarily chosen commodity. At best, this could be some sort of commitment strategy to ensure that they don’t create vastly too much or too little money; but at worst, it prevents them from actually creating the right amount of money—and the gold standard was basically what caused the Great Depression. A gold standard is no more sensible a means of backing your currency than would be a standard requiring only prime-numbered interest rates, or one which requires you to print exactly as much money per minute as the price of a Ferrari.

No, the real thing that backs our money is the existence of the tax system. Far from taxation being “taking your hard-earned money”, without taxes money itself could not exist.

How much should we save?

JDN 2457215 EDT 15:43.

One of the most basic questions in macroeconomics has oddly enough received very little attention: How much should we save? What is the optimal level of saving?

At the microeconomic level, how much you should save basically depends on what you think your income will be in the future. If you have more income now than you think you’ll have later, you should save now to spend later. If you have less income now than you think you’ll have later, you should spend now and dissave—save negatively, otherwise known as borrowing—and pay it back later. The life-cycle hypothesis says that people save when they are young in order to retire when they are old—in its strongest form, it says that we keep our level of spending constant across our lifetime at a value equal to our average income. The strongest form is utterly ridiculous and disproven by even the most basic empirical evidence, so usually the hypothesis is studied in a weaker form that basically just says that people save when they are young and spend when they are old—and even that runs into some serious problems.

The biggest problem, I think, is that the interest rate you receive on savings is always vastly less than the interest rate you pay on borrowing, which in turn is related to the fact that people are credit-constrained: they generally would like to borrow more than they actually can. It also has a lot to do with the fact that our financial system is an oligopoly; banks make more profits if they can pay savers less and charge borrowers more, and by colluding with each other they can control enough of the market that no major competitors can seriously undercut them. (There is some competition, however, particularly from credit unions—and if you compare these two credit card offers from University of Michigan Credit Union at 8.99%/12.99% and Bank of America at 12.99%/22.99% respectively, you can see the oligopoly in action as the tiny competitor charges you a much fairer price than the oligopoly beast. 9% means doubling in just under eight years, 13% means doubling in a little over five years, and 23% means doubling in three years.) Another very big problem with the life-cycle theory is that human beings are astonishingly bad at predicting the future, and thus our expectations about our future income can vary wildly from the actual future income we end up receiving. People who are wise enough to know that they do not know generally save more than they think they'll need, which is called precautionary saving. Combine that with our limited capacity for self-control, and I'm honestly not sure the life-cycle hypothesis is doing any work for us at all.
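Those doubling times come straight from the compound-interest formula; here's a quick check in Python, assuming monthly compounding as credit cards typically use:

import math

# Years to double a balance at a given APR with monthly compounding:
# solve (1 + apr/12)**(12*t) = 2 for t.
def years_to_double(apr):
    return math.log(2) / (12 * math.log(1 + apr / 12))

for apr in (0.09, 0.13, 0.23):
    print(f"{apr:.0%}: {years_to_double(apr):.1f} years")
# 9%: 7.7 years
# 13%: 5.4 years
# 23%: 3.0 years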

But okay, let’s suppose we had a theory of optimal individual saving. That would still leave open a much larger question, namely optimal aggregate saving. The amount of saving that is best for each individual may not be best for society as a whole, and it becomes a difficult policy challenge to provide incentives to make people save the amount that is best for society.

Or it would be, if we had the faintest idea what the optimal amount of saving for society is. There's a very simple rule-of-thumb that a lot of economists use, often called the golden rule (not to be confused with the actual Golden Rule, though I guess the idea is that a social optimum is a moral optimum), which is that we should save exactly the same amount as the share of capital in income. If capital receives one third of income, then one third of income should be saved to make more capital for next year. (This figure of one third has been called a "law", but as with most "laws" in economics it's really more like the Pirate Code; labor's share of income varies across countries and years. I doubt you'll be surprised to learn that it is falling around the world, meaning more income is going to capital owners and less is going to workers.)
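For the mathematically inclined, here is a sketch of where that rule of thumb comes from, under the standard assumptions of the Solow growth model with Cobb-Douglas production (this is my notation, not anything official): y = k^\alpha is output per worker, k is capital per worker, s is the saving rate, n is population growth, \delta is depreciation, and \alpha is capital's share of income. In steady state, saving just covers dilution and depreciation, and choosing the saving rate to maximize steady-state consumption c^* gives:

s \, (k^*)^{\alpha} = (n + \delta) \, k^*, \qquad c^* = (k^*)^{\alpha} - (n + \delta) \, k^*

\frac{d c^*}{d k^*} = \alpha (k^*)^{\alpha - 1} - (n + \delta) = 0 \quad \Rightarrow \quad s^* = \frac{(n + \delta) \, k^*}{(k^*)^{\alpha}} = \alpha

That is, the saving rate that maximizes long-run consumption equals the capital share.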

When you hear that, you should be thinking: “Wait. Saved to make more capital? You mean invested to make more capital.” And this is the great sleight of hand in the neoclassical theory of economic growth: Saving and investment are made to be the same by definition. It’s called the savings-investment identity. As I talked about in an earlier post, the model seems to be that there is only one kind of good in the world, and you either use it up or save it to make more.

But of course that's not actually how the world works; there are different kinds of goods, and if people stop buying tennis shoes that doesn't automatically lead to more factories built to make tennis shoes—indeed, quite the opposite. If people reduce their spending, the products they no longer buy will now accumulate on shelves and the businesses that make those products will start downsizing their production. If people increase their spending, the products they now buy will fly off the shelves and the businesses that make them will expand their production to keep up.

In order to make the savings-investment identity true by definition, the definition of investment has to be changed. Inventory accumulation, products building up on shelves, is counted as “investment” when of course it is nothing of the sort. Inventory accumulation is a bad sign for an economy; indeed the time when we see the most inventory accumulation is right at the beginning of a recession.

As a result of this bizarre definition of “investment” and its equation with saving, we get the famous Paradox of Thrift, which does indeed sound paradoxical in its usual formulation: “A global increase in marginal propensity to save can result in a reduction in aggregate saving.” But if you strip out the jargon, it makes a lot more sense: “If people suddenly stop spending money, companies will stop investing, and the economy will grind to a halt.” There’s still a bit of feeling of paradox from the fact that we tried to save more money and ended up with less money, but that isn’t too hard to understand once you consider that if everyone else stops spending, where are you going to get your money from?

So what if something like this happens, we all try to save more and end up having no money? The government could print a bunch of money and give it to people to spend, and then we'd have money, right? Right. Exactly right, in fact. You now understand monetary policy better than most policymakers. Like a basic income, for many people it seems too simple to be true; but in a nutshell, that is Keynesian monetary policy. When spending falls and the economy slows down as a result, the government should respond by expanding the money supply so that people start spending again. In practice they usually expand the money supply in a really bizarre, roundabout way: buying and selling bonds in open market operations in order to change the interest rate that banks charge each other for loans of reserves, the Fed funds rate, in the hopes that banks will change their actual lending interest rates and more people will be able to borrow, thus, ultimately, increasing the money supply (because, remember, banks don't have the money they lend you—they create it).

We could actually just print some money and give it to people (or rather, change a bunch of numbers in an IRS database), but this is very unpopular, particularly among people like Ron Paul and other gold-bug Republicans who don’t understand how monetary policy works. So instead we try to obscure the printing of money behind a bizarre chain of activities, opening many more opportunities for failure: Chiefly, we can hit the zero lower bound where interest rates are zero and can’t go any lower (or can they?), or banks can be too stingy and decide not to lend, or people can be too risk-averse and decide not to borrow; and that’s not even to mention the redistribution of wealth that happens when all the money you print is given to banks. When that happens we turn to “unconventional monetary policy”, which basically just means that we get a little bit more honest about the fact that we’re printing money. (Even then you get articles like this one insisting that quantitative easing isn’t really printing money.)

I don't know, maybe there's actually some legitimate reason to do it this way—I do have to admit that when governments start openly printing money it often doesn't end well. But really the question is why you're printing money, whom you're giving it to, and above all how much you are printing. Weimar Germany printed money to pay off odious war debts (because it totally makes sense to force a newly-established democratic government to pay the debts incurred by belligerent actions of the monarchy they replaced; surely one must repay one's debts). Hungary printed money to pay for rebuilding after the devastation of World War 2. Zimbabwe printed money to pay for a war (I'm sensing a pattern here) and compensate for failed land reform policies. In all three cases the amount of money they printed was literally billions of times their original money supply. Yes, billions. They found their inflation cascading out of control and instead of stopping the printing, they printed even more. The United States has so far printed only about three times our original monetary base, still only about a third of our total money supply. (Monetary base is the part that the Federal Reserve controls; the rest is created by banks. Typically 90% of our money is not monetary base.) Moreover, we did it for the right reasons—in response to deflation and depression. That is why, as Matthew O'Brien of The Atlantic put it so well, the US can never be Weimar.

I was supposed to be talking about saving and investment; why am I talking about money supply? Because investment is driven by the money supply. It’s not driven by saving, it’s driven by lending.

Now, part of the underlying theory was that lending and saving are supposed to be tied together, with money lent coming out of money saved; this is true if you assume that things are in a nice tidy equilibrium. But we never are, and frankly I’m not sure we’d want to be. In order to reach that equilibrium, we’d either need to have full-reserve banking, or banks would have to otherwise have their lending constrained by insufficient reserves; either way, we’d need to have a constant money supply. Any dollar that could be lent, would have to be lent, and the whole debt market would have to be entirely constrained by the availability of savings. You wouldn’t get denied for a loan because your credit rating is too low; you’d get denied for a loan because the bank would literally not have enough money available to lend you. Banking would have to be perfectly competitive, so if one bank can’t do it, no bank can. Interest rates would have to precisely match the supply and demand of money in the same way that prices are supposed to precisely match the supply and demand of products (and I think we all know how well that works out). This is why it’s such a big problem that most macroeconomic models literally do not include a financial sector. They simply assume that the financial sector is operating at such perfect efficiency that money in equals money out always and everywhere.

So, recognizing that saving and investment are in fact not equal, we now have two separate questions: What is the optimal rate of saving, and what is the optimal rate of investment? For saving, I think the question is almost meaningless; individuals should save according to their future income (since they’re so bad at predicting it, we might want to encourage people to save extra, as in programs like Save More Tomorrow), but the aggregate level of saving isn’t an important question. The important question is the aggregate level of investment, and for that, I think there are two ways of looking at it.

The first way is to go back to that original neoclassical growth model and realize it makes a lot more sense when the s term we called “saving” actually is a funny way of writing “investment”; in that case, perhaps we should indeed invest the same proportion of income as the income that goes to capital. An interesting, if draconian, way to do so would be to actually require this—all and only capital income may be used for business investment. Labor income must be used for other things, and capital income can’t be used for anything else. The days of yachts bought on stock options would be over forever—though so would the days of striking it rich by putting your paycheck into a tech stock. Due to the extreme restrictions on individual freedom, I don’t think we should actually do such a thing; but it’s an interesting thought that might lead to an actual policy worth considering.

But a second way that might actually be better—since even though the model makes more sense this way, it still has a number of serious flaws—is to think about what we might actually do in order to increase or decrease investment, and then consider the costs and benefits of each of those policies. The simplest case to analyze is if the government invests directly—and since the most important investments like infrastructure, education, and basic research are usually done this way, it's definitely a useful example. How is the government going to fund this investment in, say, a nuclear fusion project? They have four basic ways: Cut spending somewhere else, raise taxes, print money, or issue debt. If you cut spending, the question is whether the spending you cut is more or less important than the investment you're making. If you raise taxes, the question is whether the harm done by the tax (which generally comes in two flavors: first there's the direct effect of taking someone's money so they can't use it now, and second there are the distortions created in the market that may make it less efficient) is outweighed by the new project. If you print money or issue debt, it's a subtler question, since you are no longer pulling from any individual person or project but rather from the economy as a whole. Actually, if your economy has unused capacity as in a depression, you aren't pulling from anywhere—you're simply adding new value basically from thin air, which is why deficit spending in depressions is such a good idea. (More precisely, you're putting resources to use that were otherwise going to lie fallow—to go back to my earlier example, the tennis shoes will no longer rest on the shelves.) But if you do not have sufficient unused capacity, you will get crowding-out; new debt will raise interest rates and make other investments more expensive, while printing money will cause inflation and make everything more expensive. So you need to weigh that cost against the benefit of your new investment and decide whether it's worth it.

This second way is of course a lot more complicated, a lot messier, a lot more controversial. It would be a lot easier if we could just say: "The target investment rate should be 33% of GDP." But even then the question would remain as to which investments to fund, and which consumption to pull from. The abstraction of simply dividing the economy into "consumption" versus "investment" leaves out matters of the utmost importance; Paul Allen's 400-foot yacht and food stamps for children are both "consumption", but taxing the former to pay for the latter seems not only justified but outright obligatory. The Bridge to Nowhere and the Human Genome Project are both "investment", but I think we all know which one had a higher return for human society. The neoclassical model basically assumes that the optimal choices for consumption and investment are decided automatically (automagically?) by the inscrutable churnings of the free market, but clearly that simply isn't true.

In fact, it's not always clear what exactly constitutes "consumption" versus "investment", and the particulars of answering that question may distract us from answering the questions that actually matter. Is a refrigerator investment because it's a machine you buy that sticks around and does useful things for you? Or is it consumption because consumers buy it and you use it for food? Is a car an investment because it's vital to getting a job? Or is it consumption because you enjoy driving it? Someone could probably argue that the appreciation on Paul Allen's yacht makes it an investment, for instance. Feeding children really is an investment, in their so-called "human capital" that will make them more productive for the rest of their lives. Part of the money that went to the Human Genome Project surely paid some graduate student who then spent part of his paycheck on a keg of beer, which would make it consumption. And so on. The important question really isn't "is this consumption or investment?" but "Is this worth doing?" And thus, the best answer to the question, "How much should we save?" may be: "Who cares?"

Why the Republican candidates like flat income tax—and we really, really don’t

JDN 2457160 EDT 13:55.

The Republican Party is scrambling to find viable Presidential candidates for next year’s election. The Democrats only have two major contenders: Hillary Clinton looks like the front-runner (and will obviously have the most funding), but Bernie Sanders is doing surprisingly well, and is particularly refreshing because he is running purely on his principles and ideas. He has no significant connections, no family dynasty (unlike Jeb Bush and, again, Hillary Clinton) and not a huge amount of wealth (Bernie’s net wealth is about $500,000, making him comfortably upper-middle class; compare to Hillary’s $21.5 million and her husband’s $80 million); but he has ideas that resonate with people. Bernie Sanders is what politics is supposed to be. Clinton’s campaign will certainly raise more than his; but he has already raised over $4 million, and if he makes it to about $10 million studies suggest that additional spending above that point is largely negligible. He actually has a decent chance of winning, and if he did it would be a very good sign for the future of America.

But the Republican field is a good deal more contentious, and the 19 candidates currently running have been scrambling to prove that they are the most right-wing in order to impress far-right primary voters. (When the general election comes around, whoever wins will of course pivot back toward the center, changing from, say, outright fascism to something more like reactionism or neo-feudalism. If you were hoping they’d pivot so far back as to actually be sensible center-right capitalists, think again; Hillary Clinton is the only one who will take that role, and they’ll go out of their way to disagree with her in every way they possibly can, much as they’ve done with Obama.) One of the ways that Republicans are hoping to prove their right-wing credentials is by proposing a flat income tax and eliminating the IRS.

Unlike most of their proposals, I can see why many people think this actually sounds like a good idea. It would certainly dramatically reduce bureaucracy, and that’s obviously worthwhile since excess bureaucracy is pure deadweight loss. (A surprising number of economists seem to forget that government does other things besides create excess bureaucracy, but I must admit it does in fact create excess bureaucracy.)

Though if they actually made the flat tax rate 20% or even—I can't believe this is seriously being proposed—10%, there is no way the federal government would have enough revenue. The only options would be (1) massive increases in national debt, (2) total collapse of government services (including their beloved military, mind you), or (3) directly linking the Federal Reserve quantitative easing program to fiscal policy and funding the deficit with printed money. Of these, (3) might not actually be that bad (it would probably trigger some inflation, but actually we could use that right now), but it's extremely unlikely to happen, particularly under Republicans. In reality, after getting a taste of (2), we'd clearly end up with (1). And then they'd complain about the debt and clamor for more spending cuts, more spending cuts, ever more spending cuts, but there would simply be no way to run a functioning government on 10% of GDP in anything like our current system. Maybe you could do it on 20%—maybe—but we currently spend more like 35%, and that's already a very low amount of spending for a First World country. The UK is more typical at 47%, while Germany is a bit low at 44%; Sweden spends 52% and France spends a whopping 57%. Anyone who suggests we cut government spending from 35% to 20% needs to explain which 3/7 of government services are going to immediately disappear—not to mention which 3/7 of government employees are going to be immediately laid off.

And then they want to add investment deductions; in general investment deductions are a good thing, as long as you tie them to actual investments in genuinely useful things like factories and computer servers. (Or better yet, schools, research labs, or maglev lines, but private companies almost never invest in that sort of thing, so the deduction wouldn’t apply.) The kernel of truth in the otherwise ridiculous argument that we should never tax capital is that taxing real investment would definitely be harmful in the long run. As I discussed with Miles Kimball (a cognitive economist at Michigan and fellow econ-blogger I hope to work with at some point), we could minimize the distortionary effects of corporate taxes by establishing a strong deduction for real investment, and this would allow us to redistribute some of this enormous wealth inequality without dramatically harming economic growth.

But if you deduct things that aren’t actually investments—like stock speculation and derivatives arbitrage—then you reduce your revenue dramatically and don’t actually incentivize genuinely useful investments. This is the problem with our current system, in which GE can pay no corporate income tax on $108 billion in annual profit—and you know they weren’t using all that for genuinely productive investment activities. But then, if you create a strong enforcement system for ensuring it is real investment, you need bureaucracy—which is exactly what the flat tax was claimed to remove. At the very least, the idea of eliminating the IRS remains ridiculous if you have any significant deductions.

Thus, the benefits of a flat income tax are minimal if not outright illusory; and the costs, oh, the costs are horrible. In order to have remotely reasonable amounts of revenue, you’d need to dramatically raise taxes on the majority of people, while significantly lowering them on the rich. You would create a direct transfer of wealth from the poor to the rich, increasing our already enormous income inequality and driving millions of people into poverty.

Thus, it would be difficult to more clearly demonstrate that you care only about the interests of the top 1% than to propose a flat income tax. I guess Mitt Romney’s 47% rant actually takes the cake on that one though (Yes, all those freeloading… soldiers… and children… and old people?).

Many Republicans are insisting that a flat tax would create a surge of economic growth, but that’s simply not how macroeconomics works. If you steeply raise taxes on the majority of people while cutting them on the rich, you’ll see consumer spending plummet and the entire economy will be driven into recession. Rich people simply don’t spend their money in the same way as the rest of us, and the functioning of the economy depends upon a continuous flow of spending. There is a standard neoclassical economic argument about how reducing spending and increasing saving would lead to increased investment and greater prosperity—but that model basically assumes that we have a fixed amount of stuff we’re either using up or making more stuff with, which is simply not how money works; as James Kroeger cogently explains on his blog “Nontrivial Pursuits”, money is created as it is needed; investment isn’t determined by people saving what they don’t spend. Indeed, increased consumption generally leads to increased investment, because our economy is currently limited by demand, not supply. We could build a lot more stuff, if only people could afford to buy it.

And that’s not even considering the labor incentives; as I already talked about in my previous post on progressive taxation, there are two incentives involved when you increase someone’s hourly wage. On the one hand, they get paid more for each hour, which is a reason to work; that’s the substitution effect. But on the other hand, they have more money in general, which is a reason they don’t need to work; that’s the income effect. Broadly speaking, the substitution effect dominates at low incomes (about $20,000 or less), the income effect dominates at high incomes (about $100,000 or more), and the two effects cancel out at moderate incomes. Since a tax on your income hits you in much the same way as a reduction in your wage, this means that raising taxes on the poor makes them work less, while raising taxes on the rich makes them work more. But if you go from our currently slightly-progressive system to a flat system, you raise taxes on the poor and cut them on the rich, which would mean that the poor would work less, and the rich would also work less! This would reduce economic output even further. If you want to maximize the incentive to work, you want progressive taxes, not flat taxes.

Flat taxes sound appealing because they are so simple; even the basic formula for our current tax rates is complicated, and we combine it with hundreds of pages of deductions and credits—not to mention tens of thousands of pages of case law!—making it a huge morass of bureaucracy that barely anyone really understands and corporate lawyers can easily exploit. I'm all in favor of getting rid of that; but you don't need a flat tax to do that. You can fit the formula for a progressive tax on a single page—indeed, on a single line: r = 1 - I^-p

That’s it. It’s simple enough to be plugged into any calculator that is capable of exponents, not to mention efficiently implemented in Microsoft Excel (more efficiently than our current system in fact).
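To make that concrete, here's the whole thing in a few lines of Python. The progressivity parameter p = 0.1 and the convention of measuring income I in thousands of dollars are illustrative choices of mine, not part of any actual proposal:

# Average tax rate r = 1 - I**(-p), with income I in thousands of
# dollars and progressivity p = 0.1; both the units and the value
# of p here are my own illustrative assumptions.
def tax_rate(income_thousands, p=0.1):
    return 1 - income_thousands ** -p

for inc in (20, 50, 100, 1000):
    r = tax_rate(inc)
    print(f"${inc:,}k: rate {r:.1%}, tax ${inc * r:,.1f}k")
# $20k: rate 25.9%, tax $5.2k
# $50k: rate 32.4%, tax $16.2k
# $100k: rate 36.9%, tax $36.9k
# $1,000k: rate 49.9%, tax $498.8k

Notice how the average rate rises smoothly with income, with no brackets needed.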

Combined with that simple formula, you could list all of the sensible deductions on a couple of additional pages (business investments and educational expenses, mostly—poverty should be addressed by a basic income, not by tax deductions on things like heating and housing, which are actually indirect corporate subsidies), along with a land tax (one line: $3000 per hectare), a basic income (one more line: $8,000 per adult and $4,000 per child), and some additional excise taxes on goods with negative externalities (like alcohol, tobacco, oil, coal, and lead), with a line for each; then you can provide a supplementary manual of maybe 50 pages explaining the detailed rules for applying each of those deductions in unusual cases. The entire tax code should be readable by an ordinary person in a single sitting no longer than a few hours. That means no more than 100 pages and no more than a 7th-grade reading level.

Why do I say this? Isn’t that a ridiculous standard? No, it is a Constitutional imperative. It is a fundamental violation of your liberty to tax you according to rules you cannot reasonably understand—indeed, bordering on Kafkaesque. While this isn’t taxation without representation—we do vote for representatives, after all—it is something very much like it; what good is the ability to change rules if you don’t even understand the rules in the first place? Nor would it be all that difficult: You first deduct these things from your income, then plug the result into this formula.

So yes, I absolutely agree with the basic principle of tax reform. The tax code should be scrapped and recreated from scratch, and the final product should be a primary form of only a few pages combined with a supplementary manual of no more than 100 pages. But you don’t need a flat tax to do that, and indeed for many other reasons a flat tax is a terrible idea, particularly if the suggested rate is 10% or 15%, less than half what we actually spend. The real question is why so many Republican candidates think that this will appeal to their voter base—and why they could actually be right about that.

Part of it is the entirely justified outrage at the complexity of our current tax system, and the appealing simplicity of a flat tax. Part of it is the long history of American hatred of taxes; we were founded upon resisting taxes, and we’ve been resisting taxes ever since. In some ways this is healthy; taxes per se are not a good thing, they are a bad thing, a necessary evil.

But those two things alone cannot explain why anyone would advocate raising taxes on the poorest half of the population while dramatically cutting them on the top 1%. If you are opposed to taxes in general, you’d cut them on everyone; and if you recognize the necessity of taxation, you’d be trying to find ways to minimize the harm while ensuring sufficient tax revenue, which in general means progressive taxation.

To understand why they would be pushing so hard for flat taxes, I think we need to say that many Republicans, particularly those in positions of power, honestly do think that rich people are better than poor people and we should always give more to the rich and less to the poor. (Maybe it's partly the halo effect, in which good begets good and bad begets bad? Or maybe just-world theory, the ingrained belief that the world is as it ought to be?)

Romney’s 47% rant wasn’t an exception; it was what he honestly believes, what he says when he doesn’t know he’s on camera. He thinks that he earned every penny of his $250 million net wealth; yes, even the part he got from marrying his wife and the part he got from abusing tax laws, arbitraging assets and liquidating companies. He thinks that people who live on $4,000 or even $400 a year are simply lazy freeloaders, who could easily work harder, perhaps do some arbitrage and liquidation of their own (check out these alleged “rags to riches” stories including the line “tried his hand at mortgage brokering”), but choose not to, and as a result deserve what they get. (It’s important to realize just how bizarre this moral attitude truly is; even if I thought you were the laziest person on Earth, I wouldn’t let you starve to death.) He thinks that the social welfare programs which have reduced poverty but never managed to eliminate it are too generous—if he even thinks they should exist at all. And in thinking these things, he is not some bizarre aberration; he is representing an entire class of people, nearly all of whom vote Republican.

The good news is, these people are still in the minority. They hold significant sway over the Republican primary, but will not have nearly as much impact in the general election. And right now, the Republican candidates are so numerous and so awful that I have trouble seeing how the Democrats could possibly lose. (But please, don’t take that as a challenge, you guys.)

The terrible, horrible, no-good very-bad budget bill

JDN 2457005 PST 11:52.

I would have preferred to write about something a bit cheerier (like the fact that by the time I write my next post I expect to be finished with my master's degree!), but this is obviously the big news in economic policy today. The new House budget bill was unveiled Tuesday, and then passed in the House on Thursday by a narrow vote. Thanks in part to fierce—and entirely justified—opposition by Elizabeth Warren, it has stalled and been delayed in the Senate. Obama has actually urged his fellow Democrats to pass it, in order to avoid another government shutdown. Here's why Warren is right and Obama is wrong.

You know the saying “You can’t negotiate with terrorists!”? Well, in practice that’s not actually true—we negotiate with terrorists all the time; the FBI has special hostage negotiators for this purpose, because sometimes it really is the best option. But the saying has an underlying kernel of truth, which is that once someone is willing to hold hostages and commit murder, they have crossed a line, a Rubicon from which it is impossible to return; negotiations with them can never again be good-faith honest argumentation, but must always be a strategic action to minimize collateral damage. Everyone knows that if you had the chance you’d just as soon put bullets through all their heads—because everyone knows they’d do the same to you.

Well, right now, the Republicans are acting like terrorists. Emotionally a fair comparison would be with two-year-olds throwing tantrums, but two-year-olds do not control policy on which thousands of lives hang in the balance. This budget bill is designed—quite intentionally, I’m sure—in order to ensure that Democrats are left with only two options: Give up on every major policy issue and abandon all the principles they stand for, or fail to pass a budget and allow the government to shut down, canceling vital services and costing billions of dollars. They are holding the American people hostage.

But here is why you must not give in: They’re going to shoot the hostages anyway. This so-called “compromise” would not only add $479 million in spending on fighter jets that don’t work and the Pentagon hasn’t even asked for, not only cut $93 million from WIC, a 3.5% budget cut adjusted for inflation—literally denying food to starving mothers and children—and dramatically increase the amount of money that can be given by individuals in campaign donations (because apparently the unlimited corporate money of Citizens United wasn’t enough!), but would also remove two of the central provisions of Dodd-Frank financial regulation that are the only thing that stands between us and a full reprise of the Great Recession. And even if the Democrats in the Senate cave to the demands just as the spineless cowards in the House already did, there is nothing to stop Republicans from using the same scorched-earth tactics next year.

I wouldn’t literally say we should put bullets through their heads, but we definitely need to get these Republicans out of office immediately at the next election—and that means that all the left-wing people who insist they don’t vote “on principle” need to grow some spines of their own and vote. Vote Green if you want—the benefits of having a substantial Green coalition in Congress would be enormous, because the Greens favor three really good things in particular: Stricter regulation of carbon emissions, nationalization of the financial system, and a basic income. Or vote for some other obscure party that you like even better. But for the love of all that is good in the world, vote.

The two most obscure—and yet most important—measures in the bill are the elimination of the swaps pushout rule and the margin requirements on derivatives. Compared to these, the cuts in WIC are small potatoes (literally, they include a stupid provision about potatoes). They also really aren’t that complicated, once you boil them down to their core principles. This is however something Wall Street desperately wants you to never, ever do, for otherwise their global crime syndicate will be exposed.

The swaps pushout rule says quite simply that if you’re going to place bets on the failure of other companies—these are called credit default swaps, but they are really quite literally a bet that a given company will go bankrupt—you can’t do so with deposits that are insured by the FDIC. This is the absolute bare minimum regulatory standard that any reasonable economist (or for that matter sane human being!) would demand. Honestly I think credit default swaps should be banned outright. If you want insurance, you should have to buy insurance—and yes, deal with the regulations involved in buying insurance, because those regulations are there for a reason. There’s a reason you can’t buy fire insurance on other people’s houses, and that exact same reason applies a thousandfold for why you shouldn’t be able to buy credit default swaps on other people’s companies. Most people are not psychopaths who would burn down their neighbor’s house for the insurance money—but even when their executives aren’t psychopaths (as many are), most companies are specifically structured so as to behave as if they were psychopaths, as if no interests in the world mattered but their own profit.

But the swaps pushout rule does not by any means ban credit default swaps. Honestly, it doesn’t even really regulate them in any real sense. All it does is require that these bets have to be made with the banks’ own money and not with everyone else’s. You see, bank deposits—the regular kind, “commercial banking”, where you have your checking and savings accounts—are secured by government funds in the event a bank should fail. This makes sense, at least insofar as it makes sense to have private banks in the first place (if we’re going to insure with government funds, why not just use government funds?). But if you allow banks to place whatever bets they feel like using that money, they have basically no downside; heads they win, tails we lose. That’s why the swaps pushout rule is absolutely indispensable; without it, you are allowing banks to gamble with other people’s money.

What about margin requirements? This one is even worse. Margin requirements are literally the only thing that keeps banks from printing unlimited money. If there was one single cause of the Great Recession, it was the fact that there were no margin requirements on over-the-counter derivatives. Because there were no margin requirements, there was no limit to how much money banks could print, and so print they did; the result was a still mind-blowing quadrillion dollars in nominal value of outstanding derivatives. Not million, not billion, not even trillion; quadrillion. $1e15. $1,000,000,000,000,000. That’s how much money they printed. The total world money supply is about $70 trillion, which is 1/14 of that. (If you read that blog post, he makes a rather telling statement: “They demonstrate quite clearly that those who have been lending the money that we owe can’t possibly have had the money they lent.” No, of course they didn’t! They created it by lending it. That is what our system allows them to do.)

And yes, at its core, it was printing money. A lot of economists will tell you otherwise, about how that’s not really what’s happening, because it’s only “nominal” value, and nobody ever expects to cash them in—yeah, but what if they do? (These are largely the same people who will tell you that quantitative easing isn’t printing money, because, uh… er… squirrel!) A tiny fraction of these derivatives were cashed in in 2007, and I think you know what happened next. They printed this money and now they are holding onto it; but woe betide us all if they ever decide to spend it. Honestly we should invalidate all of these derivatives and force them to start over with strict margin requirements, but short of that we must at least, again at the bare minimum, have margin requirements.

Why are margin requirements so important? There's actually a very simple equation that explains it. If the margin requirement is m, meaning that you must keep a portion m (between 0 and 1) of the deposits you take in as reserves rather than lending them out, the total amount of money supply that can be created from the current amount of money M is just M/m. So if margin requirements were 100%—full-reserve banking—then the total money supply is M, and therefore in full control of the central bank. This is how it should be, in my opinion. But usually m is set around 10%, so the total money supply is 10M, meaning that 90% of the money in the system was created by banks. But if you ever let that margin requirement go to zero, you end up dividing by zero—and the total amount of money that can be created is infinite.

To see how this works, suppose we start with $1000 and put it in bank A. Bank A then creates a loan; how big they can make the loan depends on the margin requirement. Let’s say it’s 10%. They can make a loan of $900, because they must keep $100 (10% of $1000) in reserve. So they do that, and then it gets placed in bank B. Then bank B can make a loan of $810, keeping $90. The $810 gets deposited in bank C, which can make a loan of $729, and so on. The total amount of money in the system is the sum of all these: $1000 in bank A (remember, that deposit doesn’t disappear when it’s loaned out!), plus the $900 in bank B, plus $810 in bank C, plus $729 in bank D. After 4 steps we are at $3,439. As we go through more and more steps, the money supply gets larger at an exponentially decaying rate and we converge toward the maximum at $10,000.

The original amount is M, and then we add M(1-m), M(1-m)^2, M(1-m)^3, and so on. That produces the following sum up to n terms (below is LaTeX, which I can’t render for you without a plugin, which requires me to pay for a WordPress subscription I cannot presently afford; you can copy-paste and render it yourself here):

\sum_{k=0}^{n} M (1-m)^k = M \frac{1 - (1-m)^{n+1}}{m}

And then as you let the number of terms grow arbitrarily large, it converges toward a limit at infinity:

\sum_{k=0}^{\infty} M (1-m)^k = \frac{M}{m}

To be fair, we never actually go through infinitely many steps, so even with a margin requirement of zero we don’t literally end up with infinite money. Instead, we just end up with n M, the number of steps times the initial money supply. Start with $1000 and go through 4 steps: $4000. Go through 10 steps: $10,000. Go through 100 steps: $100,000. It just keeps getting bigger and bigger, until that money has nowhere to go and the whole house of cards falls down.
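If you'd rather see it as a simulation than a sum, here's the same deposit-loan-redeposit chain in a few lines of Python:

# Simulate the deposit -> loan -> redeposit chain described above.
def total_money(initial, margin, steps):
    total, deposit = 0.0, initial
    for _ in range(steps):
        total += deposit             # every deposit stays on the books
        deposit *= (1 - margin)      # next loan is the deposit minus reserves
    return total

print(total_money(1000, 0.10, 4))     # 3439.0, matching the example above
print(total_money(1000, 0.10, 1000))  # ~10000.0, the M/m limit
print(total_money(1000, 0.00, 100))   # 100000.0, no limit at all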

Honestly, I’m not even sure why Wall Street banks would want to get rid of margin requirements. It’s basically putting your entire economy on the counterfeiting standard. Fiat money is often accused of this, but the government has both (a) the legitimate authority empowered by the electorate and (b) incentives to maintain macroeconomic stability, neither of which private banks have. There is no reason other than altruism (and we all know how much altruism Citibank and HSBC have—it is approximately equal to the margin requirement they are trying to get passed—and yes, they wrote the bill) that would prevent them from simply printing as much money as they possibly can, thus maximizing their profits; and they can even excuse the behavior by saying that everyone else is doing it, so it’s not like they could prevent the collapse all by themselves. But by lobbying for a regulation to specifically allow this, they no longer have that excuse; no, everyone won’t be doing it, not unless you pass this law to let them. Despite the global economic collapse that was just caused by this sort of behavior only seven years ago, they now want to return to doing it. At this point I’m beginning to wonder if calling them an international crime syndicate is actually unfair to international crime syndicates. These guys are so totally evil it actually goes beyond the bounds of rational behavior; they’re turning into cartoon supervillains. I would honestly not be that surprised if there were a video of one of these CEOs caught on camera cackling maniacally, “Muahahahaha! The world shall burn!” (Then again, I was pleasantly surprised to see the CEO of Goldman Sachs talking about the harms of income inequality, though it’s not clear he appreciated his own contribution to that inequality.)

And that is why Democrats must not give in. The Senate should vote it down. Failing that, Obama should veto. I wish he still had the line-item veto so he could just remove the egregious riders without allowing a government shutdown, but no, the Senate blocked it. And honestly their reasoning makes sense; there is supposed to be a balance of power between Congress and the President. I just wish we had a Congress that would use its power responsibly, instead of holding the American people hostage to the villainous whims of Wall Street banks.