What happens when a bank fails

Mar 19 JDN 2460023

As of March 9, Silicon Valley Bank (SVB) has failed and officially been put into receivership under the FDIC. A bank that held $209 billion in assets has suddenly become insolvent.

This is the second-largest bank failure in US history, after Washington Mutual (WaMu) in 2008. In fact it will probably have more serious consequences than WaMu, for two reasons:

1. WaMu collapsed as part of the Great Recession, so there were already a lot of other things going on and a lot of policy responses already in place.

2. WaMu was mostly a conventional commercial bank that held deposits and loans for consumers, so its deposits were largely protected by the FDIC, and thus its failure didn’t cause contagion that spread out to the rest of the system. (Other banks—shadow banks—did during the crash, but not so much WaMu.) SVB mostly served tech startups, whose accounts were generally far larger than the $250,000 FDIC insurance limit, so a whopping 89% of its deposits were not protected by FDIC insurance.

You’ve likely heard of many of the companies that had accounts at SVB: Roku, Roblox, Vimeo, even Vox. Stocks of the US financial industry lost $100 billion in value in two days.

The good news is that this will not be catastrophic. It probably won’t even trigger a recession (though the high interest rates we’ve been having lately could still drive us over that edge). Because this is commercial banking, it’s done out in the open, with transparency and reasonably good regulation. The FDIC knows what they are doing, and even though they aren’t covering all those deposits directly, they intend to find a buyer for the bank who will, and odds are good that they’ll be able to cover at least 80% of the lost funds.

In fact, while this one is exceptionally large, bank failures are not really all that uncommon. There have been nearly 100 failures of banks with assets over $1 billion in the US alone just since the 1970s. The FDIC exists to handle bank failures, and generally does the job well.

Then again, it’s worth asking whether we should really have a banking system in which failures are so routine.

The reason banks fail is kind of a dark open secret: They don’t actually have enough money to cover their deposits.

Banks loan away most of their cash, and rely upon the fact that most of their depositors will not want to withdraw their money at the same time. They are required to keep a certain ratio in reserves, but it’s usually fairly small, like 10%. This is called fractional-reserve banking.

As long as less than 10% of deposits get withdrawn at any given time, this works. But if a bunch of depositors suddenly decide to take out their money, the bank may not have enough to cover it all, and suddenly become insolvent.
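To see just how thin that cushion is, here is a minimal sketch in Python with made-up numbers: a bank holding 10% reserves is fine on a normal day, but even a modest surge of withdrawals leaves it unable to pay.

```python
# Toy fractional-reserve balance sheet (illustrative numbers only).
deposits = 100_000_000                   # what the bank owes its depositors
reserve_ratio = 0.10
reserves = deposits * reserve_ratio      # cash on hand: $10 million
loans = deposits - reserves              # lent out, and hard to call back quickly

withdrawals = 0.15 * deposits            # suppose 15% of deposits leave at once
shortfall = withdrawals - reserves
print(f"reserves ${reserves:,.0f}, demanded ${withdrawals:,.0f}, short ${shortfall:,.0f}")
# reserves $10,000,000, demanded $15,000,000, short $5,000,000
```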

In fact, the fear that a bank might become insolvent can actually cause it to become insolvent, in a self-fulfilling prophecy. Once depositors get word that the bank is about to fail, they rush to be the first to get their money out before it disappears. This is a bank run, and it’s basically what happened to SVB.

The FDIC was originally created to prevent or mitigate bank runs. Not only did they provide insurance that reduced the damage in the event of a bank failure; by assuring depositors that their money would be recovered even if the bank failed, they also reduced the chances of a bank run becoming a self-fulfilling prophecy.

Indeed, SVB is the exception that proves the rule, as they failed largely because their deposits were mainly not FDIC-insured.

Fractional-reserve banking effectively allows banks to create money, in the form of credit that they offer to borrowers. That credit gets deposited in other banks, which then go on to loan it out to still others; the result is that there is more money in the system than was ever actually printed by the central bank.

In most economies this commercial bank money is a far larger quantity than the base money actually printed by the central bank. The ratio of the total money supply to that base money, often nearly 10 to 1, is called the money multiplier.

Indeed, it’s not a coincidence that the reserve ratio is 10% and the multiplier is 10; the theoretical maximum multiplier is always the inverse of the reserve ratio, so if you require reserves of 10%, the highest multiplier you can get is 10. Had we required 20% reserves, the multiplier would drop to 5.
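Here is a quick sketch of where that inverse relationship comes from, assuming (as described above) that what one bank lends out gets deposited at another bank: the deposits form a geometric series that sums to the initial deposit divided by the reserve ratio.

```python
def total_money(initial_deposit, reserve_ratio, rounds=1_000):
    """Sum the deposits created as banks repeatedly lend out (1 - r) of what they take in."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)   # the lent-out portion gets re-deposited at another bank
    return total

print(total_money(100, 0.10))  # ~1000: a multiplier of 10
print(total_money(100, 0.20))  # ~500:  a multiplier of 5
```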

Most countries have fractional-reserve banking, and have for centuries; but it’s actually a pretty weird system if you think about it.

Back when we were on the gold standard, fractional-reserve banking was a way of cheating, getting our money supply to be larger than the supply of gold would actually allow.

But now that we are on a pure fiat money system, it’s worth asking what fractional-reserve banking actually accomplishes. If we need more money, the central bank could just print more. Why do we delegate that task to commercial banks?

David Friedman of the Cato Institute had some especially harsh words on this, but honestly I find them hard to disagree with:

Before leaving the subject of fractional reserve systems, I should mention one particularly bizarre variant — a fractional reserve system based on fiat money. I call it bizarre because the essential function of a fractional reserve system is to reduce the resource cost of producing money, by allowing an ounce of reserves to replace, say, five ounces of currency. The resource cost of producing fiat money is zero; more precisely, it costs no more to print a five-dollar bill than a one-dollar bill, so the cost of having a larger number of dollars in circulation is zero. The cost of having more bills in circulation is not zero but small. A fractional reserve system based on fiat money thus economizes on the cost of producing something that costs nothing to produce; it adds the disadvantages of a fractional reserve system to the disadvantages of a fiat system without adding any corresponding advantages. It makes sense only as a discreet way of transferring some of the income that the government receives from producing money to the banking system, and is worth mentioning at all only because it is the system presently in use in this country.

Our banking system evolved gradually over time, and seems to have held onto many features that made more sense in an earlier era. Back when we had arbitrarily tied our central bank money supply to gold, letting banks expand the effective money supply beyond that limit may have been a reasonable workaround. But today, it just seems to be handing the reins over to private corporations, giving them more profits while forcing the rest of society to bear more risk.

The obvious alternative is full-reserve banking, where banks are simply required to hold 100% of their deposits in reserve and the multiplier drops to 1. This idea has been supported by a number of quite prominent economists, including Milton Friedman.

It’s not just a right-wing idea: The left-wing organization Positive Money is dedicated to advocating for a full-reserve banking system in the UK and EU. (The ECB VP’s criticism of the proposal is utterly baffling to me: it “would not create enough funding for investment and growth.” Um, you do know you can print more money, right? Hm, come to think of it, maybe the ECB doesn’t know that, because they think inflation is literally Hitler. There are legitimate criticisms to be had of Positive Money’s proposal, but “There won’t be enough money under this fiat money system” is a really weird take.)

There’s a relatively simple way to gradually transition from our current system to a full-reserve system: Simply increase the reserve ratio over time, and print more central bank money to keep the total money supply constant. If we find that it seems to be causing more problems than it solves, we could stop or reverse the trend.
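As a rough sketch of what that transition might look like, assuming the theoretical-maximum multiplier of 1/r and a made-up $20 trillion total money supply, the central bank would have to swap in base money exactly as fast as the multiplier shrinks:

```python
total_money_supply = 20e12   # hypothetical total money supply (~$20 trillion), held constant

for r in [0.10, 0.25, 0.50, 0.75, 1.00]:
    base_needed = total_money_supply * r   # base money required when the multiplier is 1/r
    print(f"reserve ratio {r:4.0%}: multiplier {1/r:4.1f}, "
          f"base money ${base_needed / 1e12:.1f} trillion")
# At 100% reserves the multiplier is 1 and the entire money supply is central bank money.
```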

Krugman has pointed out that this wouldn’t really fix the problems in the banking system, which actually seem to be much worse in the shadow banking sector than in conventional commercial banking. This is clearly right, but it isn’t really an argument against trying to improve conventional banking. I guess if stricter regulations on conventional banking push more money into the shadow banking system, that’s bad; but really that just means we should be imposing stricter regulations on the shadow banking system first (or simultaneously).

We don’t need to accept bank runs as a routine part of the financial system. There are other ways of doing things.

What I think “gentrification” ought to mean

Mar 7 JDN 2459281

A few years back I asked the question: “What is gentrification?”

The term evokes the notion of a gentry: a landed upper class who hoards wealth and keeps the rest of the population in penury and de facto servitude. Yet the usual meaning of the term really just seems to mean “rich people buying houses in poor areas”. Where did we get the idea that rich people buying houses in poor areas constitutes the formation of a landed gentry?

In that previous post I argued that the concept of “gentrification” as usually applied is not a useful one, and we should instead be focusing directly on the issues of poverty and housing affordability. I still think that’s right.

But it occurs to me that there is something “gentrification” could be used to mean that would capture some of the term’s original intent. It doesn’t seem to be used this way often, but unlike the usual meaning, this one has a genuine connection to the original concept of a gentry.

Here goes: Gentrification is the purchasing of housing for the purpose of renting it out.

Why this definition in particular? Well, it actually does have an effect similar in direction (though hardly in magnitude) to the formation of a landed gentry: It concentrates land ownership and makes people into tenants instead of homeowners. It converts what should have been a one-time transfer of wealth from one owner to another into a permanent passive income stream, one in which the poor typically pay the rich indefinitely.

Because houses aren’t very fungible, the housing market is one of monopolistic competition: Each house is its own unique commodity, only partially substitutable with others, and this gives market power to the owners of houses. When it’s a permanent sale, that market power will be reflected in the price, but it will also effectively transfer to the new owner. When it’s a rental, that market power remains firmly in the hands of the landlord. The more a landlord owns, the more market power they can amass: A large landholding corporation like the Irvine Company can amass an enormous amount of market power, effectively monopolizing an entire city. (Now that feels like a landed gentry! Bend the knee before the great and noble House Irvine.)

Compare this to two other activities that are often called “gentrification”: Rich people buying houses in poor areas for the purpose of living in them, and developers building apartment buildings and renting them out.

When rich people buy houses for the purpose of living in them, they are not concentrating land ownership. They aren’t generating a passive income stream. They are simply doing the same thing that other people do—buying houses to live in them—but they have more money with which to do so. This is utterly unproblematic, and I think people need to stop complaining about it. There is absolutely nothing wrong with buying a house because you want to live in it, and if it’s a really expensive house—like Jeff Bezos’ $165 million mansion—then the problem isn’t rich people buying houses, it’s the massive concentration of wealth that made anyone that rich in the first place. No one should be made to feel guilty for spending their own money on their own house. Every time “gentrification” is used to describe this process, it just makes it seem like “gentrification” is nothing to worry about—or maybe even something to celebrate.

What about developers who build apartments to rent them out? Aren’t they setting up a passive income stream from the poor to the rich? Don’t they have monopolistic market power? Yes, that’s all true. But they’re also doing something else that buying houses in order to rent them doesn’t: They are increasing the supply of housing.

What are the two most important factors determining the price of housing? The same two factors as anything else: Supply and demand. If prices are too high, the best way to fix that is to increase supply. Developers do that.

Conversely, buying up a house in order to rent it is actually reducing the supply of housing—or at least the supply of permanent owner-occupied housing. Whereas developers buy land that has less housing and build more housing on it, gentrifiers (as I’m defining them) buy housing that already exists and rent it out to others.

Indeed, it’s really not clear to me that rent is a thing that needs to exist. Obviously people need housing. And it certainly makes sense to have things like hotels for very short-term stays and dorms for students who are living in an area for a fixed number of years.

But it’s not clear to me that we really needed to have a system where people would own other people’s houses and charge them for the privilege of living in them. I think the best argument for it is a libertarian one: If people want to do that, why not let them?

Yet I think the downsides of renting are clear enough: People get evicted and displaced, and in many cases landlords consistently fail to provide the additional services they are supposed to. (I wasn’t able to quickly find good statistics on how common it is for landlords to evade their responsibilities like this, but anecdotal evidence would suggest that it’s not uncommon.)

The clearest upside is that security deposits are generally cheaper than down payments, so it’s generally easier to rent a home than to buy one. But why does this have to be the case? Indeed, why do banks insist on such large down payments in the first place? It seems to be only social norms that set the rate of down payments; I’m not aware of any actual economic arguments for why a particular percentage of the home’s value needs to be paid in cash up front. It’s commonly thought that large down payments somehow reduce the risk of defaulting on a mortgage; but I’m not aware of much actual evidence of this. Here’s a theoretical model saying that down payments should matter, but it’s purely theoretical. Here’s an empirical paper showing that lower down payments are associated with higher interest rates—but it may be the higher interest rates that account for the higher defaults, not the lower down payments. There is also a selection bias, where buyers with worse credit get worse loan terms (which can be a self-fulfilling prophecy).

The best empirical work I could find on the subject was a HUD study suggesting that yes, lower down payments are associated with higher default risk—but their effect is much smaller than lots of other things. In particular, one percentage point of down payment was equivalent to about 5 points of credit score. So someone with a credit score of 750 and a down payment of 0% is no more likely to default than someone with a credit score of 650 and a down payment of 20%. Or, to use an example they specifically state in the paper: “For example, to have the same probability of default as a prime loan, a B or C [subprime] loan needs to have a CLTV [combined loan-to-value ratio] that is 11.9 percentage points lower than the CLTV of an otherwise identical prime loan.” A combined loan-to-value ratio 12 percentage points lower is essentially the same thing as a down payment that is 12 percentage points larger—and 12% of the median US home price of $300,000 is $36,000, not an amount of money most middle-class families can easily come up with.
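Just to make that arithmetic explicit (the five-points-per-percentage-point equivalence is the study’s rough rule of thumb, and the $300,000 median price is the figure quoted above):

```python
points_per_pp = 5                   # ~5 credit-score points per 1 percentage point of down payment
down_payment_gap_pp = 20 - 0        # comparing 20% down with 0% down
print(down_payment_gap_pp * points_per_pp)   # 100: why 750 with 0% down ≈ 650 with 20% down

median_price = 300_000
cltv_gap_pp = 12                    # the ~12-point CLTV difference from the HUD example
print(median_price * cltv_gap_pp // 100)     # 36000: the extra cash a buyer would need up front
```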

I also found a quasi-experimental study showing that support from nonprofit housing organizations was much more effective at reducing default rates than higher down payments. So even if larger down payments do reduce defaults, there are better ways of doing so.

The biggest determinant of whether you will default on your mortgage is the obvious one: whether you have steady income large enough to afford the mortgage payment. Typically when people default it’s because their adjustable interest rate surged or they lost their job. When housing prices decline and you end up “underwater” (owing more than the house’s current price), strategic default can theoretically increase your wealth; but in fact it’s relatively rare to take advantage of this, because it’s devastating to your credit rating. Only about 20% of all mortgage defaults in the crisis were strategic—the other 80% were people who actually couldn’t afford to pay.

Another potential upside is that it may be easier to move from one place to another if you rent your home, since selling a home can take a considerable amount of time. But I think this benefit is overstated: Most home leases are 12 months long, while selling a house generally takes 60-90 days. So unless you are already near the end of your lease term when you decide to move, you may actually find that you could move faster if you sold your home than if you waited for your lease to end—and if you end your lease early, the penalties are often substantial. Your best-case scenario is a flat early termination fee; your worst-case scenario is being on the hook for all the remaining rent (at which point, why bother?). Some landlords instead require you to cover rent until a new tenant is found—which you may recognize as almost exactly equivalent to selling your own home.

I think the main reason that people rent instead of buying is simply that they can’t come up with a down payment. If it seems too heavy-handed or risky to simply cap down payments, how about we offer government-subsidized loans (or even grants!) to first-time home buyers to cover their down payments? This would be expensive, but no more so than the mortgage interest deduction—and far less regressive.

For now, we can continue to let people rent out homes. When developers do this, I think the benefits generally outweigh the harms: Above all, they are increasing the supply of housing. A case could be made for policies that incentivize the construction of condos rather than rentals, but above all, policy should be focusing on incentivizing construction.

However, when someone buys an existing house and then rents it out, they are doing something harmful. It probably shouldn’t be illegal, and in some cases there may be no good alternatives to simply letting people do it. But it’s a harmful activity nonetheless, and where legal enforcement would be too blunt an instrument, social stigma can be useful. And for that reason, I think it might actually be fair to call them gentrifiers.

What if employees were considered assets?

JDN 2457308 EDT 15:31

Robert Reich has an interesting proposal to change the way we think about labor and capital:
“First, are workers assets to be developed or are they costs to be cut?” “Employers treat replaceable workers as costs to be cut, not as assets to be developed.”

This ultimately comes down to a fundamental question of how our accounting rules work: Workers are not counted as assets, but the wages paid to them are counted as expenses (and wages owed as liabilities).

I don’t want to bore you with the details of accounting (accounting is often thought of as similar to economics, but really it’s the opposite of economics: Whereas economics is empirical, interesting, and fundamentally nonzero-sum, accounting is arbitrary, tedious, and zero-sum by construction), but I think it’s worth discussing the difference between how capital and labor are accounted.

By construction, every credit must come with a debit, no matter how arbitrary this may seem.

We start with an equation:

Assets + Expenses = Equity + Liabilities + Income

When purchasing a piece of capital, you debit an asset account for the equipment you just bought and credit cash (or a payable). Because the capital is valued at the price you paid for it, the two entries exactly balance; the purchase sits on your balance sheet as an asset, and its cost only reaches the expense account gradually, as depreciation.

But when hiring a worker, there is no asset to record: you debit the wage expense account and credit cash (or wages payable). So instead of adding to your assets, which is a good thing, you add to your expenses and liabilities, which is a bad thing.
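Here is a minimal double-entry sketch in Python (hypothetical numbers, with signed amounts standing in for proper debit and credit columns) showing the asymmetry: the machine stays on the books as an asset, while the wage shows up only as an expense.

```python
from collections import defaultdict

ledger = defaultdict(float)

def post(debit_account, credit_account, amount):
    # Every debit comes with an equal credit, so the ledger always sums to zero.
    ledger[debit_account] += amount
    ledger[credit_account] -= amount

post("assets:equipment", "assets:cash", 50_000)  # buy a machine: the cash becomes an asset you still hold
post("expenses:wages", "assets:cash", 50_000)    # pay wages: the cash becomes an expense, with no asset recorded

print(dict(ledger))
# {'assets:equipment': 50000.0, 'assets:cash': -100000.0, 'expenses:wages': 50000.0}
```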

This is why corporate executives are always on the lookout for ways to “cut labor costs”; they conceive of wages as simply outgoing money that doesn’t do anything useful, and therefore something to cut in order to increase profits.

Reich is basically suggesting that we start carrying workers on the books as assets, the same as we do with capital; then corporate executives would be thinking in terms of making a “capital gain” by investing in their workers to increase their “value”.

The problem with this scheme is that it would really only make sense if corporations owned their workers—and I think we all know why that is not a good idea. The reason capital can be counted as an asset is that capital can be sold off as a source of income; you don’t need to think of yourself as making a sort of “capital gain”; you can make, you know, actual capital gains.

I think actually the deeper problem here is that there is something wrong with accounting in general.

By its very nature, accounting is zero-sum. At best, this allows an error-checking mechanism wherein we can see if the two sides of the equation balance. But at worst, it makes us forget the point of economics.

While an individual may buy a capital asset on speculation, hoping to sell it for a higher price later, that isn’t what capital is for. At an aggregate level, speculation and arbitrage cannot increase real wealth; all they can do is move it around.

The reason we want to have capital is that it makes things—that the value of goods produced by a machine can far exceed the cost to produce that machine. It is in this way that capital—and indeed capitalism—creates real wealth.

Likewise, that is why we use labor—to make things. Labor is worthwhile because—and insofar as—the cost of the effort is less than the benefit of the outcome. Whether you are a baker, an author, a neurosurgeon, or an auto worker, the reason your job is worth doing is that the harm to you from doing it is smaller than the benefit to others from having it done. Indeed, the market mechanism is supposed to be structured so that by transferring wealth to you (i.e., paying you money), we make it so that both you and the people who buy your services are better off.

But accounting methods as we know them make no allowance for this; no matter what you do, the figures always balance. If you end up with more, someone else ends up with less. Since a worker is better off with a wage than they were before, we infer that a corporation must be worse off because it paid that wage. Since a corporation makes a profit selling a good, we infer that a consumer must be worse off because they paid for that purchase. We track the price of everything and understand the value of nothing.

There are two ways of pricing a capital asset: The cost to make it, or the value you get from it. Those two prices are only equal if markets are perfectly efficient, and even then they are only equal at the margin—the last factory built is worth what it can make, but every other factory built before that is worth more. It is that difference which creates real wealth—so assuming that they are the same basically defeats the purpose.
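A toy example with hypothetical numbers: suppose every factory costs $1 million to build, but the best sites get built first. Only the marginal factory is worth roughly what it cost; the surplus on all the others is the real wealth created.

```python
cost = 1_000_000                        # cost to build each factory
values = [3_000_000, 2_200_000, 1_700_000, 1_300_000, 1_000_000]  # value each factory will produce

surplus = [v - cost for v in values]
print(surplus)       # [2000000, 1200000, 700000, 300000, 0]: only the marginal factory just breaks even
print(sum(surplus))  # 4200000: real wealth created, invisible if every factory is priced at its cost
```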

I don’t think we can do away with accounting; we need some way to keep track of where money goes, and we want that system to have built-in mechanisms to reduce rates of error and fraud. Double-entry bookkeeping certainly doesn’t make error and fraud disappear, but it at least does provide some protection against them, which we would lose if we removed the requirement that accounts must balance.

But somehow we need to restructure our metrics so that they give some sense of what economics is really about—not moving around a fixed amount of wealth, but making more wealth. Accounting for employees as assets wouldn’t solve that problem—but it might be a start, I guess?