What I think “gentrification” ought to mean

Mar 7 JDN 2459281

A few years back I asked the question: “What is gentrification?”

The term evokes the notion of a gentry: a landed upper class who hoards wealth and keeps the rest of the population in penury and de facto servitude. Yet in practice the term seems to mean little more than “rich people buying houses in poor areas”. Where did we get the idea that rich people buying houses in poor areas constitutes the formation of a landed gentry?

In that previous post I argued that the concept of “gentrification” as usually applied is not a useful one, and we should instead be focusing directly on the issues of poverty and housing affordability. I still think that’s right.

But it occurs to me that there is something “gentrification” could be used to mean that would actually capture some of the original intent. It doesn’t seem to be used this way often, but unlike the usual meaning, this one has a genuine connection to the original concept of a gentry.

Here goes: Gentrification is the purchasing of housing for the purpose of renting it out.

Why this definition in particular? Well, it actually does have an effect similar in direction (though hardly in magnitude) to the formation of a landed gentry: It concentrates land ownership and makes people into tenants instead of homeowners. It converts what would otherwise have been a one-time transfer of wealth from one owner to another into a permanent passive income stream, one that typically flows indefinitely from poorer tenants to richer owners.
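
To make that contrast concrete, here is a minimal sketch comparing a one-time sale price with the discounted value of an open-ended rent stream; every figure in it (price, rent, discount rate) is an assumption of mine, not a number from this post:

```python
# Illustrative only: assumed figures, not data.
# Compare a one-time sale price with the discounted value of an
# open-ended rental stream on the same house.

def present_value_of_rent(annual_rent, discount_rate, years):
    """Discounted value of collecting a fixed rent for a given number of years."""
    return sum(annual_rent / (1 + discount_rate) ** t for t in range(1, years + 1))

sale_price = 300_000     # hypothetical one-time transfer between owners
annual_rent = 21_600     # hypothetical $1,800/month
discount_rate = 0.03     # hypothetical real discount rate

for years in (10, 20, 30, 50):
    pv = present_value_of_rent(annual_rent, discount_rate, years)
    print(f"{years:>2} years of rent ≈ ${pv:,.0f} (vs. ${sale_price:,} sale)")
```

With these particular assumed numbers, a couple of decades of rent already rivals the sale price, and unlike a sale, the stream never has to end.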

Because houses aren’t very fungible, the housing market is one of monopolistic competition: Each house is its own unique commodity, only partially substitutable with others, and this gives market power to the owners of houses. When it’s a permanent sale, that market power will be reflected in the price, but it will also effectively transfer to the new owner. When it’s a rental, that market power remains firmly in the hands of the landlord. The more a landlord owns, the more market power they can amass: A large landholding corporation like the Irvine Company can amass an enormous amount of market power, effectively monopolizing an entire city. (Now that feels like a landed gentry! Bend the knee before the great and noble House Irvine.)

Compare this to two other activities that are often called “gentrification”: Rich people buying houses in poor areas for the purpose of living in them, and developers building apartment buildings and renting them out.

When rich people buy houses for the purpose of living in them, they are not concentrating land ownership. They aren’t generating a passive income stream. They are simply doing the same thing that other people do—buying houses to live in them—but they have more money with which to do so. This is utterly unproblematic, and I think people need to stop complaining about it. There is absolutely nothing wrong with buying a house because you want to live in it, and if it’s a really expensive house—like Jeff Bezos’ $165 million mansion—then the problem isn’t rich people buying houses, it’s the massive concentration of wealth that made anyone that rich in the first place. No one should be made to feel guilty for spending their own money on their own house. Every time “gentrification” is used to describe this process, it just makes it seem like “gentrification” is nothing to worry about—or maybe even something to celebrate.

What about developers who build apartments to rent them out? Aren’t they setting up a passive income stream from the poor to the rich? Don’t they have monopolistic market power? Yes, that’s all true. But they are also doing something that merely buying up existing houses to rent out does not: They are increasing the supply of housing.

What are the two most important factors determining the price of housing? The same two factors as anything else: Supply and demand. If prices are too high, the best way to fix that is to increase supply. Developers do that.

Conversely, buying up a house in order to rent it out actually reduces the supply of housing—or at least the supply of permanent owner-occupied housing. Whereas developers buy land with little or no housing on it and build more, gentrifiers (as I’m defining them) buy housing that already exists and rent it out to others.

Indeed, it’s really not clear to me that rent is a thing that needs to exist. Obviously people need housing. And it certainly makes sense to have things like hotels for very short-term stays and dorms for students who are living in an area for a fixed number of years.

But it’s not clear to me that we really needed to have a system where people would own other people’s houses and charge them for the privilege of living in them. I think the best argument for it is a libertarian one: If people want to do that, why not let them?

Yet I think the downsides of renting are clear enough: People get evicted and displaced, and many landlords fail to provide the maintenance and services they are supposed to. (I wasn’t able to quickly find good statistics on how common it is for landlords to evade their responsibilities like this, but anecdotal evidence suggests it’s not uncommon.)

The clearest upside is that security deposits are much smaller than down payments, so it is generally easier to rent a home than to buy one. But why does this have to be the case? Indeed, why do banks insist on such large down payments in the first place? It seems to be only social norms that set the rate of down payments; I’m not aware of any actual economic argument for why a particular percentage of the home’s value needs to be paid in cash up front. It’s commonly thought that large down payments somehow reduce the risk of defaulting on a mortgage, but I’m not aware of much actual evidence for this. Here’s a theoretical model saying that down payments should matter, but it’s purely theoretical. Here’s an empirical paper showing that lower down payments are associated with higher interest rates—but it may be the higher interest rates that account for the higher defaults, not the lower down payments. There is also a selection bias: buyers with worse credit get worse loan terms, which can become a self-fulfilling prophecy.

The best empirical work I could find on the subject was a HUD study suggesting that yes, lower down payments are associated with higher default risk—but their effect is much smaller than lots of other things. In particular, one percentage point of down payment was equivalent to about 5 points of credit score. So someone with a credit score of 750 and a down payment of 0% is no more likely to default than someone with a credit score of 650 and a down payment of 20%. Or, to use an example they specifically state in the paper: “For example, to have the same probability of default as a prime loan, a B or C [subprime] loan needs to have a CLTV [combined loan-to-value ratio] that is 11.9 percentage points lower than the CLTV of an otherwise identical prime loan.” A combined loan-to-value ratio 12 percentage points lower is essentially the same thing as a down payment that is 12 percentage points larger—and 12% of the median US home price of $300,000 is $36,000, not an amount of money most middle-class families can easily come up with.
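
To put that rule of thumb in code (this is just a back-of-the-envelope restatement of the figures above, not the HUD study’s actual model):

```python
# Rough translation of the trade-off described above.
# "5 credit-score points per percentage point of down payment" is the
# post's summary of the HUD study, not an exact statistical model.

CREDIT_POINTS_PER_PCT_DOWN = 5     # ~5 credit-score points per 1 pp of down payment
MEDIAN_HOME_PRICE = 300_000        # median US home price used in the post

def equivalent_credit_boost(down_payment_pct):
    """Credit-score points 'equivalent' to a given down payment, per the rule of thumb."""
    return down_payment_pct * CREDIT_POINTS_PER_PCT_DOWN

# A 20% down payment is 'worth' about 100 credit-score points...
print(equivalent_credit_boost(20))   # -> 100 (750 with 0% down ~ 650 with 20% down)

# ...and the ~12-point CLTV gap from the quoted example is a lot of cash up front:
print(0.12 * MEDIAN_HOME_PRICE)      # -> 36000.0
```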

I also found a quasi-experimental study showing that support from nonprofit housing organizations was much more effective at reducing default rates than higher down payments. So even if larger down payments do reduce defaults, there are better ways of doing so.

The biggest determinant of whether you will default on your mortgage is the obvious one: whether you have a steady income large enough to afford the mortgage payment. Typically, when people default, it’s because their adjustable interest rate surged or they lost their job. When housing prices decline and you end up “underwater” (owing more than the house is currently worth), strategic default can theoretically increase your wealth; but in practice it’s relatively rare for people to take advantage of this, because it’s devastating to your credit rating. Only about 20% of mortgage defaults in the 2008 crisis were strategic—the other 80% were people who genuinely couldn’t afford to pay.
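
For concreteness, a toy example of what being “underwater” means; the numbers are mine and purely illustrative:

```python
# Illustrative numbers, not data from the post.
home_value = 250_000        # what the house would sell for today
mortgage_balance = 300_000  # what is still owed on it

equity = home_value - mortgage_balance
print(equity)  # -> -50000: selling the house can't cover the debt

# Walking away (strategic default) erases that negative equity, which is why
# it can "theoretically increase your wealth" -- but at the cost of a wrecked
# credit rating, which is why only a minority of defaults are strategic.
```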

Another potential upside is that it may be easier to move from one place to another if you rent your home, since selling a home can take a considerable amount of time. But I think this benefit is overstated: Most home leases are 12 months long, while selling a house generally takes 60-90 days. So unless you are already near the end of your lease term when you decide to move, you may actually find that you could move faster if you sold your home than if you waited for your lease to end—and if you end your lease early, the penalties are often substantial. Your best-case scenario is a flat early termination fee; your worst-case scenario is being on the hook for all the remaining rent (at which point, why bother?). Some landlords instead require you to cover rent until a new tenant is found—which you may recognize as almost exactly equivalent to selling your own home.
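
As a rough illustration of that trade-off (the rent, the termination fee, and the months remaining are all assumptions of mine, not figures from the post):

```python
# Rough comparison of "I need to move now" costs, using assumed figures.
monthly_rent = 1_800
months_left_on_lease = 7

flat_termination_fee = 2 * monthly_rent                    # an assumed best case
remaining_rent_owed = months_left_on_lease * monthly_rent  # the worst case

print(f"Flat termination fee: ${flat_termination_fee:,}")  # -> $3,600
print(f"All remaining rent:   ${remaining_rent_owed:,}")   # -> $12,600

# Selling instead: listing-to-close commonly runs on the order of 60-90 days,
# so unless the lease is nearly up, breaking it isn't obviously faster or cheaper.
```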

I think the main reason that people rent instead of buying is simply that they can’t come up with a down payment. If it seems too heavy-handed or risky to simply cap down payments, how about we offer government-subsidized loans (or even grants!) to first-time home buyers to cover their down payments? This would be expensive, but no more so than the mortgage interest deduction—and far less regressive.

For now, we can continue to let people rent out homes. When developers do this, I think the benefits generally outweigh the harms: Above all, they are increasing the supply of housing. A case could be made for policies that incentivize building condos rather than rentals, but the overriding priority should be incentivizing construction of any kind.

However, when someone buys an existing house and then rents it out, they are doing something harmful. It probably shouldn’t be illegal, and in some cases there may be no good alternative to simply letting people do it. But it is a harmful activity nonetheless, and where legal enforcement would be too blunt an instrument, social stigma can be useful. And for that reason, I think it might actually be fair to call such people gentrifiers.

What if employees were considered assets?

JDN 2457308 EDT 15:31

Robert Reich has an interesting proposal to change the way we think about labor and capital. He asks: “First, are workers assets to be developed or are they costs to be cut?” And he observes: “Employers treat replaceable workers as costs to be cut, not as assets to be developed.”

This ultimately comes down to a fundamental question of how our accounting rules work: Capital is counted as an asset, but workers are not; wages are simply counted as expenses (and wages owed but not yet paid, as liabilities).

I don’t want to bore you with the details of accounting (accounting is often thought of as similar to economics, but really it’s the opposite of economics: Whereas economics is empirical, interesting, and fundamentally nonzero-sum, accounting is arbitrary, tedious, and zero-sum by construction), but I think it’s worth discussing the difference between how capital and labor are accounted.

By construction, every credit must come with a debit, no matter how arbitrary this may seem.

We start with an equation:

Assets + Expenses = Equity + Liabilities + Income

Debits increase the accounts on the left side of this equation (assets and expenses); credits increase the accounts on the right side (equity, liabilities, and income). When you purchase a piece of capital, you debit an asset account, recording the machine as something the firm now owns, and credit cash to balance the entry. Because the capital is carried at the price you paid for it, your total assets, liabilities, and income do not change: you have simply traded one asset for another, and the machine stays on the balance sheet.

But when you hire a worker, the wages you pay are debited to an expense account and credited against cash (or, if not yet paid, to a wages-payable liability). The worker never appears on the balance sheet at all. So instead of acquiring an asset, which is a good thing, you simply incur an expense (and perhaps a liability), which is a bad thing.
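
Here is a minimal sketch of those two kinds of entries as a toy ledger; it illustrates the double-entry convention described above, not any real accounting software or standard:

```python
from collections import defaultdict

# Toy double-entry ledger (a sketch, not a real accounting system).
# Convention here: debits are stored as positive, credits as negative,
# so the whole ledger always sums to zero -- the books must balance.
balances = defaultdict(float)

def post(debit_account, credit_account, amount):
    """Record one balanced journal entry."""
    balances[debit_account] += amount
    balances[credit_account] -= amount

post("Assets:Cash", "Equity:OwnersCapital", 200_000)  # owners fund the firm
post("Assets:Equipment", "Assets:Cash", 50_000)       # buy a $50k machine
post("Expenses:Wages", "Assets:Cash", 50_000)         # pay $50k in wages

for account, balance in sorted(balances.items()):
    print(f"{account:22s} {balance:>12,.0f}")
print("Total:", sum(balances.values()))  # -> 0.0, by construction

# The machine shows up as an asset the firm owns; the wages show up only
# as an expense. The worker never appears on the books at all.
```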

This is why corporate executives are always on the lookout for ways to “cut labor costs”; they conceive of wages as simply outgoing money that doesn’t do anything useful, and therefore something to cut in order to increase profits.

Reich is basically suggesting that we start treating workers as assets, the same as we do with capital; then corporate executives would be thinking in terms of making a “capital gain” by investing in their workers to increase their “value”.

The problem with this scheme is that it would really only make sense if corporations owned their workers—and I think we all know why that is not a good idea. The reason capital can be counted as an asset is that capital can be sold off as a source of income; you don’t need to think of yourself as making a sort of “capital gain”; you can make, you know, actual capital gains.

I think actually the deeper problem here is that there is something wrong with accounting in general.

By its very nature, accounting is zero-sum. At best, this allows an error-checking mechanism wherein we can see if the two sides of the equation balance. But at worst, it makes us forget the point of economics.

While an individual may buy a capital asset on speculation, hoping to sell it for a higher price later, that isn’t what capital is for. At an aggregate level, speculation and arbitrage cannot increase real wealth; all they can do is move it around.

The reason we want to have capital is that it makes things—that the value of goods produced by a machine can far exceed the cost to produce that machine. It is in this way that capital—and indeed capitalism—creates real wealth.

Likewise, that is why we use labor—to make things. Labor is worthwhile because—and insofar as—the cost of the effort is less than the benefit of the outcome. Whether you are a baker, an author, a neurosurgeon, or an auto worker, the reason your job is worth doing is that the harm to you from doing it is smaller than the benefit to others from having it done. Indeed, the market mechanism is supposed to be structured so that by transferring wealth to you (i.e., paying you money), we make it so that both you and the people who buy your services are better off.

But accounting methods as we know them make no allowance for this; no matter what you do, the figures always balance. If you end up with more, someone else ends up with less. Since a worker is better off with a wage than they were before, we infer that a corporation must be worse off because it paid that wage. Since a corporation makes a profit selling a good, we infer that a consumer must be worse off because they paid for that purchase. We track the price of everything and understand the value of nothing.

There are two ways of pricing a capital asset: The cost to make it, or the value you get from it. Those two prices are only equal if markets are perfectly efficient, and even then they are only equal at the margin—the last factory built is worth what it can make, but every other factory built before that is worth more. It is that difference which creates real wealth—so assuming that they are the same basically defeats the purpose.
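
A made-up numerical example of that “equal only at the margin” point (all the numbers here are arbitrary, chosen only to show the shape of the argument):

```python
# Each successive factory is less valuable than the last (diminishing returns),
# while the cost of building one is the same for all of them.
cost_to_build = 50
value_produced = [100, 90, 80, 70, 60, 50]   # factory 1 is the most valuable

built = [v for v in value_produced if v >= cost_to_build]
surplus = sum(v - cost_to_build for v in built)

print(len(built))   # -> 6: you keep building until value falls to cost
print(surplus)      # -> 150: the last factory merely breaks even, but every
                    #    earlier one is worth more than it cost to build.
                    #    That gap is the real wealth that cost-based
                    #    accounting never records.
```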

I don’t think we can do away with accounting; we need some way to keep track of where money goes, and we want that system to have built-in mechanisms to reduce rates of error and fraud. Double-entry bookkeeping certainly doesn’t make error and fraud disappear, but it at least does provide some protection against them, which we would lose if we removed the requirement that accounts must balance.

But somehow we need to restructure our metrics so that they give some sense of what economics is really about—not moving around a fixed amount of wealth, but making more wealth. Accounting for employees as assets wouldn’t solve that problem—but it might be a start, I guess?