How we can actually solve the housing shortage

Sep 16 JDN 2458378

In previous posts I’ve talked about the housing crisis facing most of the world’s major cities. (Even many cities in Africa are now facing a housing crisis!) In this post, I’m going to look at the empirical data to see if we can find a way to solve this crisis.

Most of the answer, it turns out, is really not that complicated: Build more housing.

There is a little bit more to it than that, but only a little bit. The basic problem is simply that there are more households than there are houses to hold them.

One of the biggest hurdles to fixing the housing crisis comes ironically from the left, in resistance to so-called “gentrification”. Local resistance to new construction is one of the greatest obstacles to keeping housing affordable. State and federal regulations are generally quite sensible: No industrial waste near the playgrounds. It’s the local regulations that make new housing so difficult.

I can understand why people fight “gentrification”: They see new housing going in as housing prices increase, and naturally assume that new houses cause higher prices. But it’s really the other way around: High prices cause new construction, which brings prices down. By its nature, new housing is almost always more expensive than existing housing. Building new housing still brings down the overall price of housing, even when the new housing is expensive. Building luxury condos does make existing apartments more affordable—and not building anything most certainly does not.

California’s housing crisis is particularly severe: since the crash in 2008, California has been building less than half the units needed to sustain its current population trend. It’s worst of all in the Bay Area, which has added 500,000 jobs since 2009—and only 50,000 homes. California also has a big problem with delays in the permit process: it typically takes three to four years between approval and actually breaking ground.

We are seeing this in Oakland currently: The government has for once approved a reasonable amount of housing (vastly more than usual), and as a result the city may have a chance at staying affordable even as it grows its population and economy. And yet we still get serious journalists saying utter nonsense like “The building boom and resulting gentrification are squeezing the city’s most vulnerable.” Building booms don’t cause gentrification. Building booms are the best response to gentrification. When you say things like that, you sound to an economist like you’re saying “Pizza is so expensive; we need to stop people from making pizza!”

Homeowners who want to increase their property values may actually be rational—if incredibly selfish and monopolistic—in trying to block new construction. But activists who oppose “gentrification” need to stop shooting themselves in the foot by fighting the very same development that would have made housing cheaper.

The simplest thing we can do is make it easier to build housing. Streamline the permit process, provide subsidies, remove unnecessary regulations. Housing is one of the few markets where I can actually see a lot of unnecessary regulations. We don’t need to require parking; we should provide better public transit instead. And while requiring solar panels (as the whole state is now doing) sounds nice, it makes everything a lot more expensive—and by only requiring it on new housing, you are effectively saying you don’t want any new housing. I love solar panels, but what you should be doing is subsidizing solar panels, not requiring them. Does that cost the state budget more? Yes. Raise taxes on something else (a particularly good idea: electricity consumption) if you have to. But by mandating solar panels without any subsidies to support them, you are effectively putting a tax on new housing—which is exactly what California does not need.

It’s still a good idea to create incentives to build not simply housing, but affordable housing. There are ways to do this as well. Denver did an excellent job in creating an Affordable Housing Fund that it immediately spent on converting vacant apartments into affordable housing units.

There are also good reasons to try to fight foreign ownership of housing (and really, speculative ownership of housing in general). There is a strong correlation between current account deficits and housing appreciation, which makes sense if foreign investors are buying up our housing and making it more expensive. If Trump could actually reduce our trade deficit, that would drive down our current account deficit and quite likely make our housing more affordable. Of course, he has absolutely no idea how to do that.

Victor Duggan has a pretty good plan for lowering housing prices in Ireland which includes a land tax (as I’ve discussed previously) and a tax on foreign ownership of real estate. I disagree with him about the “Help-to-Buy” program, however; I actually think that was a fine idea, since the goal is not simply to keep housing cheap but to get people into houses. That wealth transfer is going to raise prices at the producer side—increasing production—but not at the consumer side—because people get compensated by the tax rebate. The net result should be more housing without more cost for buyers. You could have done the same thing by subsidizing construction, but I actually like the idea of putting the money directly in the pockets of homeowners. The tax incidence shouldn’t be much different in the long run, but it makes for a much more appealing and popular program.

We must stop Kavanaugh now!

Post 257: Sep 16 JDN 2458378

I realized that this post can’t afford to wait a week. It’s too urgent.

It’s the best news I’ve heard in a long time: Paul Manafort has pled guilty and is cooperating with the investigation. This is a good day for Mueller, a bad day for Trump—and a great day for America.

Manafort himself has been involved in international corruption for decades. It’s a shame that he will now be getting off light on some of his crimes. But prosecutors would only do that if he had information to share with them that was of commensurate value—and I’m willing to bet that means he has information to implicate the Donald himself. Trump is right to be afraid.

Of course, we are still a long way from impeaching Trump, let alone removing him from office, much less actually restoring normalcy and legitimacy to our executive branch. We are still in a long, dark tunnel—but perhaps at last we are beginning to glimpse the light at the other end.

We should let Mueller and the federal prosecutors do their jobs; so far, they’ve done them quite well. In the meantime, instead of speculating about just how deep this rabbit hole of corruption goes (come on, we know Trump is corrupt; the only question is how much and with whom), it would be better to focus our attention on ensuring that Trump cannot leave a lasting legacy of destruction in his wake.

Priority number one is stopping Brett Kavanaugh. Kavanaugh may seem like just another right-wing justice (after Scalia, how much worse can it get, really?), but no, he really is worse than that. He barely even pretends to respect the Constitution or past jurisprudence, and has done an astonishingly poor job of hiding his political agenda or his personal devotion to Trump. The most fundamental flaw of the US Supreme Court is the near-impossibility of removing a justice once appointed; that makes it absolutely vital that we stop his appointment from being confirmed.

It isn’t just Roe v. Wade that will be overturned if he gets on the court (that one, at least, I can understand why a substantial proportion of Americans would approve of—abortion is a much more complicated issue than either pro-life or pro-choice demagogues would have you believe, as the Stanford Encyclopedia of Philosophy agrees). Kavanaugh looks poised to tear apart a wide variety of protections for civil rights, environmental sustainability, and labor. Sadly, our current Republican Party has become so craven, so beholden to party above country and all else, that they will most likely vote to advance, and ultimately confirm, his nomination. And America, and all the world, will suffer for it, for decades to come.

If this happens, whom should we blame? Well, first of all, Trump and Kavanaugh themselves, of course. Second, the Republicans who confirmed Kavanaugh. Third, everyone who voted for Trump. But fourth? Everyone who didn’t vote for Clinton. Everyone who said, “She’s just as bad”, or “The two parties are the same”, or “He can’t possibly win”, or “We need real change”, and either sat home or voted for a third party—every one of those people has a little bit of blood on their hands. If the US Supreme Court spends the next 30 years tearing away the rights of women, racial minorities, LGBT people, and the working class, it will be at least a little bit their fault. When the asbestos returns to our buildings, the ozone layer resumes its decay, and all the world’s coastlines flood ever higher, they will bear at least some responsibility. All their claimed devotion to a morally purer “true” left wing will mean absolutely nothing—for it was only our “cynical” “corrupt” “neoliberal” pragmatism that even tried to hold the line. It is not enough to deserve to win—you must actually win.

But it’s not too late. Not yet. We can still make our voices heard. If you have any doubt about whether your Senator will vote against Kavanaugh (living in California, I frankly don’t—say what you will about Dianne Feinstein and Kamala Harris, they have made their opposition to Kavanaugh abundantly clear at every opportunity), write or call that Senator and tell them why they must.

The confirmation vote is this Thursday, September 20. Make your voice heard by then, or it may be too late.

For labor day, thoughts on socialism

Planned Post 255: Sep 9 JDN 2458371

This week includes Labor Day, the holiday where we are perhaps best justified in taking the whole day off from work and doing nothing. Labor Day is sort of the moderate social democratic counterpart to the explicitly socialist holiday May Day.

The right wing in this country has done everything in their power to expand the definition of “socialism”, which is probably why most young people now have positive views of socialism. There was a time when FDR was seen as an alternative to socialism; but now I’m pretty sure he’d just be called a socialist.

Because of this, I am honestly not sure whether I should be considered a socialist. I definitely believe in the social democratic welfare state epitomized by Scandinavia, but I definitely don’t believe in total collectivization of all means of production.

I am increasingly convinced that shareholder capitalism is a terrible system (the renowned science fiction author Charles Stross actually gave an excellent talk on this subject), but I would not want to abandon free markets.

The best answer might be worker-owned cooperatives. The empirical data is actually quite consistent in showing worker co-ops to be as efficient as, if not more efficient than, conventional corporations, and by construction their pay systems produce less inequality than corporations.

Indeed, I think there is reason to believe that a worker co-op is a much more natural outcome for free markets under a level playing field than a conventional corporation, and the main reason we have corporations is actually that capitalism arose out of (and in response to) feudalism.

Think about it: Why should most things be owned by the top 1%? (Okay, not quite “most”: to be fair, the top 1% only owns 40% of all US net wealth.) Why is 80% of the value of the stock market held by the top 10% of the population?

Most things aren’t done by the top 1%. There are a handful of individuals (namely, scientists who make seminal breakthroughs: Charles Darwin, Marie Curie, Albert Einstein, Rosalind Franklin, Alan Turing, Jonas Salk) who are so super-productive that they might conceivably deserve billionaire-level compensation—but they are almost never the ones who are actually billionaires. If markets were really distributing capital to those who would use it most productively, there’s no reason to think that inequality would be so self-sustaining—much less self-enhancing as it currently seems to be.

But when you realize that capitalism emerged out of a system where the top 1% (or less) already owned most things, and did so by a combination of “divine right” ideology and direct, explicit violence, this inequality becomes a lot less baffling. We never had a free market on a level playing field. The closest we’ve ever gotten has always been through social-democratic reforms (like the New Deal and Scandinavia).

How does this result in corporations? Well, when all the wealth is held by a small fraction of individuals, how do you start a business? You have to borrow money from the people who have it. Borrowing makes you beholden to your creditors, and puts you at great risk if your venture fails (especially back in the days when there were debtors’ prisons—and we’re starting to go back that direction!). Equity provides an alternative: In exchange for giving them the downside risk if your venture fails, you also give your creditors—now shareholders—the upside risk if your venture succeeds. But at the end of the day when your business has succeeded, where did most of the profits go? Into the hands of the people who already had money to begin with, who did nothing to actually contribute to society. The world would be better off if those people had never existed and their wealth had simply been shared with everyone else.

Compare this to what would happen if we all started with similar levels of wealth. (How much would each of us have? Total US wealth of about $44 trillion, spread among a population of 328 million, is about $130,000 each. I don’t know about you, but I think I could do quite a bit with that.) When starting a business, you wouldn’t go heavily into debt or sign away ownership of your company to some billionaire; you’d gather a group of dedicated partners, each of whom would contribute money and effort into building the business. As you added on new workers, it would make sense to pool their assets, and give them a share of the company as well. The natural structure for your business would be not a shareholder corporation, but a worker-owned cooperative.
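That back-of-the-envelope division checks out, using the figures quoted in the parenthetical:

```python
total_wealth = 44e12  # total US wealth, as quoted above (USD)
population = 328e6    # US population

per_person = total_wealth / population
# Rounded to the nearest $10,000:
print(round(per_person, -4))  # → 130000.0
```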

I think on some level the super-rich actually understand this. If you look closely at the sort of policies they fight for, they really aren’t capitalist. They don’t believe in free, unfettered markets where competition reigns. They believe in monopoly, lobbying, corruption, nepotism, and above all, low taxes. (There’s actually nothing in the basic principles of capitalism that says taxes should be low. Taxes should be as high as they need to be to cover public goods—no higher, and no lower.) They don’t want to provide nationalized healthcare, not because they believe that private healthcare competition is more efficient (no one who looks at the data for even a few minutes can honestly believe that—US healthcare is by far the most expensive in the world), but because they know that it would give their employees too much freedom to quit and work elsewhere. Donald Trump doesn’t want a world where any college kid with a brilliant idea and a lot of luck can overthrow his empire; he wants a world where everyone owes him and his family personal favors that he can call in to humiliate them and exert his power. That’s not capitalism—it’s feudalism.

Crowdfunding also provides an interesting alternative; we might even call it the customer-owned cooperative. Kickstarter and Patreon provide a very interesting new economic model—still entirely within the realm of free markets—where customers directly fund production and interact with producers to decide what will be produced. This might turn out to be even more efficient—and notice that it would run a lot more smoothly if we had all started with a level playing field.

Establishing such a playing field, of course, requires a large amount of redistribution of wealth. Is this socialism? If you insist. But I think it’s more accurate to describe it as reparations for feudalism (not to mention colonialism). We aren’t redistributing what was fairly earned in free markets; we are redistributing what was stolen, so that from now on, wealth can be fairly earned in free markets.

We are in a golden age of corporate profits

Sep 2 JDN 2458364

Take a good look at this graph, from the Federal Reserve Economic Database:

[Figure: corporate_profits]
The red line is corporate profits before tax. It is, unsurprisingly, the largest. The purple line is corporate profits after tax, with the standard adjustments for inventory depletion and capital costs. The green line is revenue from the federal corporate tax. Finally, I added a dashed blue line which multiplies before-tax profits by 30% to compare more directly with tax revenues. All these figures are annual, inflation-adjusted using the GDP deflator. The units are hundreds of billions of 2012 dollars.

The first thing you should notice is that the red and purple lines are near the highest they have ever been. Before-tax profits are over $2 trillion. After-tax profits are over $1.6 trillion.

Yet, corporate tax revenues are not the highest they have ever been. In 2006, they were over $400 billion; yet this year they don’t even reach $300 billion. The obvious reason for this is that we have been cutting corporate taxes. The more important reason is that corporations have gotten very good at avoiding whatever corporate taxes we charge.

On the books, we used to have a corporate tax rate of about 35%, which Trump just cut to 21%. But if you look at my dashed line, you can see that corporations haven’t actually paid more than 30% of their profits in taxes since 1970—and back then, the rate on the books was almost 50%.

Corporations have always avoided taxes. The effective tax rate—tax revenue divided by profits—is always much lower than the rate on the books. In 1951, the statutory tax rate was 50.75%; the effective rate was 47%. In 1970, the statutory rate was 49.2%; the effective rate was 31%. In 1993, the statutory rate was 35%; the effective rate was 26%. On average, corporations paid about 2/3 to 3/4 of what the statutory rate said.
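With the effective rate defined as revenue divided by profits, the pattern is easy to check from the figures above (1951 is an outlier; the later years land squarely in the 2/3-to-3/4 range):

```python
# Statutory vs. effective corporate tax rates cited above
rates = {1951: (0.5075, 0.47), 1970: (0.492, 0.31), 1993: (0.35, 0.26)}

for year, (statutory, effective) in rates.items():
    share = effective / statutory
    print(f"{year}: paid {share:.0%} of the statutory rate")
# 1951: paid 93% of the statutory rate
# 1970: paid 63% of the statutory rate
# 1993: paid 74% of the statutory rate
```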

[Figure: corporate_tax_rate]

You can even see how the effective rate trended steadily downward, much faster than the statutory rate. Corporations got better and better at finding and creating loopholes to let them avoid taxes. In 1950, the statutory rate was 38%—and sure enough, the effective rate was… 38%. Under Truman, corporations actually paid what they said they paid. Compare that to 1987, under Reagan, when the statutory rate was 40%—but the effective rate was only 26%.

Yet even with that downward trend, something happened under George W. Bush that widened the gap even further. While the statutory rate remained fixed at 35%, the effective rate plummeted from 26% in 2000 to 16% in 2002. The effective rate never again rose above 19%, and in 2009 it hit a minimum of just over 10%—less than one-third the statutory tax rate. It was trending upward, making it as “high” as 15%, until Trump’s tax cuts hit; in 2017 it was 13%, and it is projected to be even lower this year.

This is why it has always been disingenuous to compare our corporate tax rates with other countries and complain that they are too high. Our effective corporate tax rates have been in line with most other highly-developed countries for a long time now. The idea of “cutting rates and removing loopholes” sounds good in principle—but never actually seems to happen. George W. Bush’s “tax reforms” which were supposed to do this added so many loopholes that the effective tax rate plummeted.

I’m actually fairly ambivalent about corporate taxes in general. Their incidence really isn’t well-understood, though as Krugman has pointed out, so much of corporate profit is now monopoly rent that we can reasonably expect most of the incidence to fall on shareholders. What I’d really like to see happen is a repeal of the corporate tax combined with an increase in capital gains taxes. But we haven’t been increasing capital gains taxes; we’ve just been cutting corporate taxes.

The result has been a golden age for corporate profits. Make higher profits than ever before, and keep almost all of them without paying taxes! Never mind that the deficit is exploding and our infrastructure is falling apart. America was founded in part on a hatred of taxes, so I guess we’re still carrying on that proud tradition.

What does a central bank actually do?

Aug 26 JDN 2458357

Though central banks are a cornerstone of the modern financial system, I don’t think most people have a clear understanding of how they actually function. (I think this may be by design; there are many ways we could make central banking more transparent, but policymakers seem reluctant to show their hand.)

I’ve even seen famous economists make really severe errors in their understanding of monetary policy, as John Taylor did when he characterized low-interest-rate policy as a “price ceiling”.

Central banks “print money” and “set interest rates”. But how exactly do they do these things, and what on Earth do they have to do with each other?

The first thing to understand is that most central banks don’t actually print money. In the US, cash is actually printed by the Department of the Treasury. But cash is only a small part of the money in circulation. The monetary base consists of physical cash plus bank reserves; the US monetary base is about $3.6 trillion. The money supply can be measured a few different ways, but the standard way is to include checking accounts, traveler’s checks, savings accounts, money market accounts, short-term certificates of deposit, and basically anything that can be easily withdrawn and spent as money. This is called the M2 money supply, and in the US it is currently over $14.1 trillion. That means only about 25% of our money supply is base money—the rest is all digital. This is actually a relatively high proportion, because the monetary base was greatly expanded in response to the Great Recession. When we say that the Fed “prints money”, what we really mean is that they are increasing the money supply—but typically they do so in a way that involves little if any actual printing of cash.
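Using the figures just quoted, the arithmetic is a one-liner:

```python
# Figures quoted above (trillions of USD, approximate)
monetary_base = 3.6  # physical cash plus bank reserves
m2 = 14.1            # M2 money supply

print(f"{monetary_base / m2:.1%} of M2 is base money")  # → 25.5% of M2 is base money
```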

The second thing to understand is that central banks don’t exactly set interest rates either. They target interest rates. What’s the difference, you ask?

Well, setting interest rates would mean that they made a law or something saying you have to charge exactly 2.7%, and you get fined or something if you don’t do that.

Targeting interest rates is a subtler art. The Federal Reserve decides what interest rates it wants banks to charge, and then engages in what are called open-market operations to try to make that happen. Banks hold reserves: money that they are required to keep on hand against their deposits. Since we are in a fractional-reserve system, they are required to hold only a certain fraction of their deposits as reserves (historically about 10%). In open-market operations, the Fed buys and sells assets (usually US Treasury bonds) in order to either increase or decrease the amount of reserves available to banks, to try to get them to lend to each other at the targeted interest rate.

Why not simply set the interest rate by law? Because then it wouldn’t be the market-clearing interest rate. There would be shortages or gluts of assets.

It might be easier to grasp this if we step away from money for a moment and just think about the market for some other good, like televisions.

Suppose that the government wants to set the price of a television in the market to a particular value, say $500. (Why? Who knows. Let’s just run with it for a minute.)

If they simply declared by law that the price of a television must be $500, here’s what would happen: Either that would be too low, in which case there would be a shortage of televisions as demand exceeded supply; or that would be too high, in which case there would be a glut of televisions as supply exceeded demand. Only if they got spectacularly lucky and the market price already was $500 per television would they not have to worry about such things (and then, why bother?).

But suppose the government had the power to create and destroy televisions virtually at will, with minimal cost. Now they have a better way: they can target the price of a television, buying and selling televisions as needed to bring the market price to that target. If the price is too low, the government can buy and destroy a lot of televisions, to bring the price up. If the price is too high, the government can make and sell a lot of televisions, to bring the price down.
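Here is a toy sketch of that targeting loop, with entirely made-up numbers and a stylized demand curve—an illustration of the idea, not a model of any real market:

```python
def market_price(supply, demand=1000):
    """Stylized inverse-demand curve: price falls as supply rises."""
    return demand / supply

target = 500.0
supply = 2.5  # current stock of televisions (arbitrary units)

for _ in range(30):
    price = market_price(supply)
    # Below target: buy up (destroy) units to raise the price.
    # Above target: make (sell) more units to lower it.
    supply *= 1 + 0.5 * (price - target) / target

print(round(market_price(supply)))  # → 500 (converged to the target)
```

Because the targeter adjusts quantity rather than decreeing a price, the market always clears along the way—no shortages, no gluts.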

Now, let’s go back to money. This power to create and destroy at will is hard to believe for televisions, but absolutely true for money. The government can create and destroy almost any amount of money at will—they are limited only by the very inflation and deflation the central bank is trying to affect.

This allows central banks to intervene in the market without creating shortages or gluts; even though they are effectively controlling the interest rate, they are doing so in a way that avoids having a lot of banks wanting to take loans they can’t get or wanting to give loans they can’t find anyone to take.

The goal of all this manipulation is ultimately to reduce inflation and unemployment. Unfortunately it’s basically impossible to minimize both simultaneously; the Phillips curve describes the relationship generally found between them: decreased inflation usually comes with increased unemployment, and vice-versa. But the basic idea is that we set reasonable targets for each (usually about 2% inflation and 5% unemployment; frankly I’d prefer we swap the two, which is more or less what we did in the 1950s), and then if inflation is too high we raise interest rate targets, while if unemployment is too high we lower interest rate targets.
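That adjustment logic can be sketched as a toy policy rule; the coefficients here are illustrative assumptions, nothing like the Fed’s actual procedure:

```python
def rate_adjustment(inflation, unemployment,
                    inflation_target=0.02, unemployment_target=0.05):
    """Toy rule: positive output means raise the interest-rate target,
    negative means lower it. Coefficients are made up for illustration."""
    return (1.5 * (inflation - inflation_target)
            - 0.5 * (unemployment - unemployment_target))

print(rate_adjustment(0.04, 0.05) > 0)  # high inflation → raise rates: True
print(rate_adjustment(0.02, 0.08) < 0)  # high unemployment → lower rates: True
```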

What if they’re both too high? Then we’re in trouble. This has happened; it is called stagflation. The money supply isn’t the only thing affecting inflation and unemployment, and sometimes we get hit with a bad shock that makes both of them high at once. In that situation, there isn’t much that monetary policy can do; we need to find other solutions.

But how does targeting interest rates lead to inflation? To be quite honest, we don’t actually know.

The basic idea is that lower interest rates should lead to more borrowing, which leads to more spending, which leads to more inflation. But beyond that, we don’t actually understand how interest rates translate into prices—this is the so-called transmission mechanism, which remains an unsolved problem in macroeconomics. Based on the empirical data, I lean toward the view that the mechanism is primarily via housing prices; lower interest rates lead to more mortgages, which raises the price of real estate, which raises the price of everything else. This also makes sense theoretically, as real estate consists of large, illiquid assets for which the long-term interest rate is very important. Your decision to buy an apple or even a television is probably not greatly affected by interest rates—but your decision to buy a house surely is.

If that is indeed the case, it’s worth thinking about whether this is really the right way to intervene on inflation and unemployment. High housing prices are an international crisis; maybe we need to be looking at ways to decrease unemployment without affecting housing prices. But that is a tale for another time.

Slides from my presentation at Worldcon

Whether you are a regular reader curious about my Worldcon talk, or a Worldcon visitor interested in seeing them again, the slides from my presentation, “How do we get to the Federation from here?”, can be found here.

I will be presenting at Worldcon this year!

I interrupt my usual broadcast for this special report. I will be speaking at Worldcon 76 in San Jose this year. My talk, “How do we get to the Federation from here?”, is on world government, and will be held in room 212C of the convention center at 5:00 PM on Sunday, August 19. (Here is Worldcon’s complete program guide.)

In lieu of my regular blog post next week, I’ll be posting the slides from my talk.

What would a game with realistic markets look like?

Aug 12 JDN 2458343

From Pokemon to Dungeons & Dragons, Final Fantasy to Mass Effect, almost all role-playing games have some sort of market: Typically, you buy and sell equipment, and often can buy services such as sleeping at inns. Yet the way those markets work is extremely rigid and unrealistic.

(I’m of course excluding games like EVE Online that actually create real markets between players; those markets are so realistic I actually think they would provide a good opportunity for genuine controlled experiments in macroeconomics.)

The weirdest thing about in-game markets is the fact that items almost always come with a fixed price. Sometimes there is some opportunity for haggling, or some randomization between different merchants; but the notion always persists that the item has a “true price” that is being adjusted upward or downward. This is more or less the opposite of how prices actually work in real markets.

There is no “true price” of a car or a pizza. Prices are whatever buyers and sellers make them. There is a true value—the amount of real benefit that can be obtained from a good—but even this is something that varies between individuals and also changes based on the other goods being consumed. The value of a pizza is considerably higher for someone who hasn’t eaten in days than to someone who just finished eating another pizza.

There is also what is called “The Law of One Price”, but like all laws of economics, it’s like the Pirate Code, more what you’d call a “guideline”, and it only applies to a particular good in a particular market at a particular time. The Law of One Price doesn’t even say that a pizza should have the same price tomorrow as it does today, or that the same pizza can’t be sold to two different customers at two different prices; it only says that the same pizza shouldn’t have two different prices in the same place at the same time for the same customer. (It seems almost tautological, right? And yet it still fails empirically, and does so again and again. I have seen offers for the same book in the same condition posted on the same website that differed by as much as 50%.)

In well-developed capitalist markets in large First World countries, we can lull ourselves into the illusion that there is only one price for a good, because markets are highly liquid and either highly competitive or controlled by a strong and stable oligopoly that enforces a particular price across places and times. The McDonald’s Dollar Menu is a policy choice by a massive multinational corporation; it’s not what would occur naturally if those items were sold on a competitive market.

Even then, this illusion can be broken when we are faced with a large economic shock, such as the OPEC price shock in 1973 or a natural disaster like Hurricane Katrina. It also tends to be broken for illiquid goods such as real estate.

If we consider the environment in which most role-playing games take place, it’s usually a sort of quasi-medieval or quasi-Renaissance feudal society, where a given government controls only a small region and traveling between towns is difficult and dangerous. Not only should the prices of goods differ substantially between towns, the currency used should frequently differ as well. Yes, most places would accept gold and silver; but a kingdom with a stable government will generally have a currency carrying significant seigniorage, with coins worth considerably more than the gold used to mint them—yet the value of that seigniorage will drop off as you move further away from the kingdom and its sphere of influence.

Moreover, prices should be inconsistent even between traders in the same town, and extremely volatile. When a town is mostly self-sufficient and trade is only a small part of its economy, even a small shock such as a bad thunderstorm or a brief drought can yield massive shifts in prices. Shortages and gluts will be frequent, as both supply and demand are small and ever-changing.

This wouldn’t be that difficult to implement. The simplest way would just be to institute random shocks to prices that vary by place and time. A more sophisticated method would be to actually simulate supply and demand for different goods, and then have prices respond to realistic shocks (e.g. a drought makes wheat more expensive, and the price of swords suddenly skyrockets after news of an impending dragon attack). Experiments have shown that competitive market outcomes can be achieved by simulating even a dozen or so traders using very simple heuristics like “don’t pay more than you can afford” and “don’t charge less than it cost you”.
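
The simplest version, random shocks by place and time, could look something like this (the goods, base prices, and shock sizes are all hypothetical):

```python
import random

BASE_PRICES = {"wheat": 2, "sword": 50, "ale": 1}  # hypothetical base prices in gold

def town_prices(base_prices, volatility=0.3, shocks=None, rng=None):
    """Generate prices for one town at one moment in time.

    Each price gets an independent random multiplier (the ordinary noise
    of a small, illiquid market) plus any event-driven shock multipliers,
    e.g. {"wheat": 2.0} after a drought.
    """
    rng = rng or random.Random()
    shocks = shocks or {}
    prices = {}
    for good, base in base_prices.items():
        noise = rng.uniform(1 - volatility, 1 + volatility)
        prices[good] = round(base * noise * shocks.get(good, 1.0), 2)
    return prices

# A drought doubles wheat; news of an impending dragon triples sword prices:
print(town_prices(BASE_PRICES, shocks={"wheat": 2.0, "sword": 3.0},
                  rng=random.Random(42)))
```

Rolling fresh prices whenever the player arrives in a town, and feeding in shocks from story events, already gets you most of the volatility described above.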

Why don’t game designers implement this? I think there are two reasons.

The first is simply that it would be more complicated. This is a legitimate concern in many cases; I think Pokemon in particular can justify using a simple economy, given its target audience. I also agree that having more than a handful of currencies would be too much for players to keep track of; though perhaps having two or three (one for each major faction?) would still be more interesting than only having one.

Also, tabletop games are inherently more limited than video games in the amount of computation they can use. But for a game as complicated as, say, Skyrim, this really isn’t much of a defense. Skyrim actually simulated the daily routines of over a hundred different non-player characters; it could have been simulating markets in the background as well—in fact, it could simply have had those same non-player characters buy and sell goods with each other in a double-auction market that would automatically generate the prices that players face.
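
A minimal version of such a double auction fits in a few lines. This is a sketch in the spirit of the “zero-intelligence trader” experiments mentioned above, with made-up values and costs, not code from any actual game:

```python
import random

def double_auction(values, costs, rounds=10_000, rng=None):
    """Minimal double-auction market with 'zero-intelligence' traders.

    Each buyer has a private value; each seller has a private cost. Every
    round, a random buyer bids uniformly below their value ("don't pay
    more than it's worth to you") and a random seller asks uniformly above
    their cost ("don't charge less than it cost you"). If the bid meets
    the ask, they trade at the midpoint and both leave the market.
    """
    rng = rng or random.Random()
    top = max(values)  # highest imaginable price in this market
    buyers, sellers = list(values), list(costs)
    prices = []
    for _ in range(rounds):
        if not buyers or not sellers or max(buyers) < min(sellers):
            break  # no mutually beneficial trade remains
        b, s = rng.choice(buyers), rng.choice(sellers)
        bid, ask = rng.uniform(0, b), rng.uniform(s, top)
        if bid >= ask:
            prices.append(round((bid + ask) / 2, 2))
            buyers.remove(b)
            sellers.remove(s)
    return prices

# Ten NPC buyers and ten NPC sellers of, say, iron swords (values in gold):
trades = double_auction([30, 28, 25, 22, 20, 18, 15, 12, 10, 8],
                        [5, 7, 9, 12, 14, 17, 20, 24, 27, 30],
                        rng=random.Random(7))
print(trades)  # a handful of trades at prices between cost and value
```

Even with traders this dumb, transaction prices tend to cluster near the competitive equilibrium, which is exactly the point of those experiments: realistic prices don’t require clever agents, just a market mechanism.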

The more important reason, I think, is that game designers have a paralyzing fear of arbitrage.

I find it particularly aggravating how frequently games will set it up so that the price at which you buy and the price at which you sell are constrained so that the buying price is always higher, often as much as twice as high. This is not at all how markets work in the real world; frankly, it’s only even close to true for goods like cars that rapidly depreciate. It makes sense that a given merchant will not sell you a good for less than what they would pay to buy it from you; but that only requires each individual merchant to have a well-defined willingness-to-pay and willingness-to-accept. It certainly does not require the arbitrary constraint that you can never sell something for more than what you bought it for.

In fact, I would probably even allow players who specialize in social skills to short-change and bamboozle merchants for profit, as this is absolutely something that happens in the real world, and was likely especially common under the very low levels of literacy and numeracy that prevailed in the Middle Ages.

To many game designers (and gamers), the ability to buy a good in one place, travel to another place, and sell that good for a higher price seems like cheating. But this practice is called being a merchant. That is literally what the entire retail industry does. The rules of your game should allow you to profit from activities that provide genuinely profitable economic services in the real world.

I remember a similar complaint being raised against Skyrim shortly after its release, that one could acquire a pickaxe, collect iron ore, smelt it into steel, forge weapons out of it, and then sell the weapons for a sizeable profit. To some people, this sounded like cheating. To me, it sounds like being a blacksmith. This is especially true because Skyrim’s skill system allowed you to improve the quality of your smithed items over time, just like learning a trade through practice (though it ramped up too fast, as it didn’t take long to make yourself clearly the best blacksmith in all of Skyrim). Frankly, this makes far more sense than being able to acquire gold by adventuring through the countryside and slaughtering monsters or collecting lost items from caves. Blacksmiths were a large part of the medieval economy; spelunking adventurers were not. Indeed, it bothers me that there weren’t more opportunities like this; you couldn’t make your wealth by being a farmer, a vintner, or a carpenter, for instance.

Even if you managed to pull off pure arbitrage, providing no real services, such as by buying and selling between two merchants in the same town, or the same merchant on two consecutive days, that is also a highly profitable industry. Most of our financial system is built around it, frankly. If you manage to make your wealth selling wheat futures instead of slaying dragons, I say more power to you. After all, there were an awful lot of wheat-future traders in the Middle Ages, and to my knowledge no actually successful dragon-slayers.

Of course, if your game is about slaying dragons, it should include some slaying of dragons. And if you really don’t care about making a realistic market in your game, so be it. But I think that more realistic markets could actually offer a great deal of richness and immersion into a world without greatly increasing the difficulty or complexity of the game. A world where prices change in response to the events of the story just feels more real, more alive.

The ability to profit without violence might actually draw whole new modes of play to the game (as has indeed occurred with Skyrim, where a small but significant proportion of players have chosen to live out peaceful lives as traders or blacksmiths). It would also enrich the experience of more conventional players and help them recover from setbacks (if the only way to make money is to fight monsters and you keep getting killed by monsters, there isn’t much you can do; but if you have the option of working as a trader or a carpenter for a while, you could save up for better equipment and try the fighting later).

And hey, game designers: If any of you are having trouble figuring out how to implement such a thing, my consulting fees are quite affordable.

Is a job guarantee better than a basic income?

Aug 5 JDN 2458336

In previous posts I’ve written about both the possibilities and challenges involved in creating a universal basic income. Today I’d like to address what I consider the most serious counter-argument against a basic income, an alternative proposal known as a job guarantee.

Whereas a basic income is literally just giving everyone free money, a job guarantee entails offering everyone who wants to work a job paid by the government. They’re not necessarily contradictory, but I’ve noticed a clear pattern: While basic income proponents are generally open to the idea of a job guarantee on the side, job guarantee proponents are often vociferously opposed to a basic income—even calling it “sinister”. I think the reason for this is that we see jobs as irrelevant, so we’re okay with throwing them in if you feel you must, while they see jobs as essential, so they meet any attempt to remove them with overwhelming resistance.

Where a basic income is extremely simple and could be implemented by a single act of the legislature, a job guarantee is considerably more complicated. The usual proposal for a job guarantee involves federal funding but local implementation, which is how most of our social welfare system is implemented—and why social welfare programs are so much better in liberal states like California than in conservative states like Mississippi, because California actually believes in what it’s implementing and Mississippi doesn’t. Anyone who wants a job guarantee needs to take that aspect seriously: In the places where poverty is worst, you’re offering control over the policy to the very governments that made poverty so bad in the first place—and whether it is by malice or incompetence, what makes you think that won’t continue?

Another argument that I think job guarantee proponents don’t take seriously enough is the concern about “make-work”. They insist that a job guarantee is not “make-work”, but real work that’s just somehow not being done. They seem to think that there are a huge number of jobs that we could just create at the snap of a finger, which would be both necessary and useful on the one hand, and a perfect match for the existing skills of the unemployed population on the other hand. If that were the case, we would already be creating those jobs. It doesn’t even require a particularly strong faith in capitalism to understand this: If there is a profit to be made at hiring people to do something, there is probably already a business hiring people to do that. I don’t think of myself as someone with an overriding faith in capitalism, but a lot of the socialist arguments for job guarantees make me feel that way by comparison: They seem to think that there’s this huge untapped reserve of necessary work that the market is somehow failing to provide, and I’m just not seeing it.

There are public goods projects which aren’t profitable but would still be socially beneficial, like building rail lines and cleaning up rivers. But proponents of a job guarantee don’t seem to understand that these are almost all highly specialized jobs at our level of technology. We don’t need a bunch of people with shovels. We need engineers and welders and ecologists.

If you propose using people with shovels where engineers would be more efficient, that is make-work, whether you admit it or not. If you’re making people work in a less efficient way in order to create jobs, then the jobs you are creating are fake jobs that aren’t worth creating. The line is often credited to Milton Friedman, but it was actually first said by William Aberhart in 1935:

Taking up the policy of a public works program as a solution for unemployment, it was criticized as a plan that took no account of the part that machinery played in modern construction, with a road-making machine instanced as an example. He saw, said Mr. Aberhart, work in progress at an airport and was told that the men were given picks and shovels in order to lengthen the work, to which he replied why not give them spoons and forks instead of picks and shovels if the object was to lengthen out the task.

I’m all for spending more on building rail lines and cleaning up rivers, but that’s not an anti-poverty program. The people who need the most help are precisely the ones who are least qualified to work on these projects: Children, old people, people with severe disabilities. Job guarantee proponents either don’t understand this fact or intentionally ignore it. If you aren’t finding jobs for 7-year-olds with autism and 70-year-olds with Parkinson’s disease, this program will not end poverty. And if you are, I find it really hard to believe that these are real, productive jobs and not useless “make-work”. A basic income would let the 7-year-olds stay in school and the 70-year-olds live in retirement homes—and keep them both out of poverty.

Another really baffling argument for a job guarantee over a basic income is that a basic income would act as a wage subsidy, encouraging employers to reduce wages. That’s not how a basic income works. Not at all. A basic income would provide a pure income effect, necessarily increasing wage demands: people would not be as desperate for work, so they’d be more comfortable turning down unreasonable wage offers. A basic income would also incentivize some people to leave the labor force by retiring or going back to school; the reduction in labor supply would further increase wages. The Earned Income Tax Credit really is in many respects similar to a wage subsidy; a basic income might superficially seem similar, but it would have the exact opposite effect.
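
To see the income effect concretely, here is a toy labor-supply model of my own (the functional form and all the numbers are hypothetical, chosen only for illustration):

```python
def reservation_wage(unearned_income, work_disutility):
    """Lowest wage at which working beats not working, in a toy model.

    Assume utility = log(consumption) - d * hours, where consumption =
    unearned_income + wage * hours. The first hour of work is worthwhile
    only if its marginal utility, wage / unearned_income, exceeds d; so
    the reservation wage is d * unearned_income. More unearned income
    means holding out for a HIGHER wage, not a lower one.
    """
    return round(work_disutility * unearned_income, 2)

# Hypothetical disutility of work: d = 0.002 per hour.
print(reservation_wage(5_000, 0.002))   # scraping by on $5k/yr: accepts $10/hr
print(reservation_wage(17_000, 0.002))  # add a $12k basic income: holds out for $34/hr
```

The specific numbers don’t matter; the direction of the effect does. Unconditional income raises the wage a worker is willing to accept, which is the opposite of what a wage subsidy does.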

One reasonable argument against a basic income is the possibility that it could cause inflation. This is something that can’t really be tested with small-scale experiments, so we really won’t know for sure until we try it. But there is reason to think that any inflation would be small, as the people removed from the labor force would largely be the ones who are least productive to begin with. There is also a growing body of empirical evidence suggesting that the inflationary effects of a basic income would be small; for example, data on cash transfer programs in Mexico show only a small inflationary effect despite large reductions in poverty. The whole reason a basic income looks attractive is that automation technology is now so advanced that we really don’t need everyone to be working anymore. Productivity is so high now that a policy of universal 40-hour work weeks just doesn’t make sense in the 21st century.

Probably the best argument for a job guarantee over a basic income concerns cost. A basic income is very expensive, there’s no doubt about that; and a job guarantee could be much cheaper. That is something I take very seriously: Saving $1.5 trillion a year is absolutely a good reason. Indeed, I don’t really object to this argument; the calculations are correct. I merely think that a basic income is enough of an improvement that its higher cost is justified. A job guarantee can eliminate unemployment, but not poverty.

But the argument for a job guarantee that most people seem to find most compelling concerns meaning. The philosopher John Danaher expressed this one most cogently. Unemployment is an extremely painful experience for most people, far beyond what could be explained simply by their financial circumstances. Most people who win large sums of money in the lottery cut back their hours, but continue working—so work itself seems to have some value. What seems to happen is that when people lose the chance to work, they feel that they have lost a vital source of meaning in their lives.

Yet this raises two more questions:

First, would a job guarantee actually solve that problem?
Second, are there ways we could solve it under a basic income?

With regard to the first question, I want to re-emphasize the fact that a large proportion of these guaranteed jobs necessarily cannot be genuinely efficient production. If efficient production would have created these jobs, we would most likely already have created them. Our society does not suffer from an enormous quantity of necessary work that could be done with the skills already possessed by the unemployed population, which is somehow not getting done—indeed, it is essentially impossible for a capitalist economy with a highly-liquid financial system to suffer such a malady. If the work is so valuable, someone will probably take out a loan to hire someone to do it. If that’s not happening, either the unemployed people don’t have the necessary skills, or the work really can’t be all that productive. There are some public goods projects that would be beneficial but aren’t being done, but that’s a different problem, and the match between the public goods projects that need to be done and the skills of the unemployed population is extremely poor. Displaced coal miners aren’t useful for maintaining automated photovoltaic factories. Truckers who get replaced by robot trucks won’t be much good for building maglev rails.

With this in mind, it’s not clear to me that people would really be able to find much meaning in a guaranteed job. You can’t be fired, so the fact that you have the job doesn’t mean anyone is impressed by the quality of your work. Your work wasn’t actually necessary, or the private sector would already have hired someone to do it. The government went out of its way to find a job that precisely matched what you happen to be good at, regardless of whether that job was actually accomplishing anything to benefit society. How is that any better than not working at all? You are spending hours of drudgery to accomplish… what, exactly? If our goal was simply to occupy people’s time, we could do that with Netflix or video games.

With regard to the second question, note that a basic income is quite different from other social welfare programs in that everyone gets it. So it’s very difficult to attach a social stigma to receiving basic income payments—it would require attaching the stigma to literally everyone. And much of the lost meaning from being unemployed, I suspect, comes from the social stigma attached to it.

Now, it’s still possible to attach social stigma to people who only get the basic income—there isn’t much we can do to prevent that. But in the worst-case scenario, this means unemployed people get the same stigma as before but more money. Moreover, it’s much harder to detect a basic income recipient than, say, someone who eats at a soup kitchen or buys food using EBT; since it goes in your checking account, all everyone else sees is you spending money from your debit card, just like everyone else. People who know you personally would probably know; but people who know you personally are also less likely to destroy your well-being by imposing a high stigma. Maybe they’ll pressure you to get off the couch and get a job, but they’ll do so because they genuinely want to help you, not because they think you are “one of those lazy freeloaders”.

And, as BIEN points out, think about retired people: They don’t seem to be so unhappy. Being on basic income is more like being retired than like being unemployed. It’s something everyone gets, not some special handout for “those people”. It’s permanent, so it’s not like you need to scramble to get a job before it goes away. You just get money automatically, so you don’t have to navigate a complex bureaucracy to get it. Controlling for income, retired people don’t seem to be any less happy than working people—so maybe work doesn’t actually provide all that much meaning after all.

I guess I can’t rule out the possibility that people need jobs to find meaning in their lives, but I both hope and believe that this is not generally the case. You can find meaning in your family, your friends, your community, your hobbies. You can still work even if you don’t need to work for a living: Build a shed, mow your lawn, tune up your car, upgrade your computer, write a story, learn a musical instrument, or try your hand at painting.

If you need to be taking orders from a corporation five days a week in order to have meaning in your life, you have bigger problems. I think what has happened to many people is that employment has so drained their lives of the real sources of meaning that they cling to it as the only thing they have left. But in fact work is not the cure to your ennui—it is the cause of it. Finally being free of the endless toil that has plagued humanity since the dawn of our species will give you the chance to reconnect with what really matters in life. Show your children that you love them in person, to their faces, instead of in this painfully indirect way of “providing for” them by going to work every day. Find ways to apply your skills in volunteering or creating works of art, instead of in endless drudgery for the profit of some faceless corporation.

How (not) to destroy an immoral market

Jul 29 JDN 2458329

In this world there are people of primitive cultures, with a population that is slowly declining, trying to survive a constant threat of violence in the aftermath of colonialism. But you already knew that, of course.

What you may not have realized is that some of these people are actively hunted by other people, slaughtered so that their remains can be sold on the black market.

I am referring of course to elephants. Maybe those weren’t the people you first had in mind?

Elephants are not human in the sense of being Homo sapiens; but as far as I am concerned, they are people in a moral sense.

Elephants take as long to mature as humans, and spend most of their childhood learning. They are born with brains only 35% of the size of their adult brains, much as we are born with brains 28% the size of our adult brains. Their encephalization quotients range from about 1.5 to 2.4, comparable to chimpanzees.

Elephants have problem-solving intelligence comparable to chimpanzees, cetaceans, and corvids. Elephants can pass the “mirror test” of self-identification and self-awareness. Individual elephants exhibit clearly distinguishable personalities. They exhibit empathy toward humans and other elephants. They can think creatively and develop new tools.

Elephants distinguish individual humans or elephants by sight or by voice, comfort each other when distressed, and above all mourn their dead. The kind of mourning behaviors elephants exhibit toward the remains of their dead family members have only been observed in humans and chimpanzees.

On a darker note, elephants also seek revenge. In response to losing loved ones to poaching or collisions with trains, elephants have orchestrated organized counter-attacks against human towns. This is not a single animal defending itself, as almost any will do; this is a coordinated act of vengeance after the fact. Once again, we have only observed similar behaviors in humans, great apes, and cetaceans.

Huffington Post backed off and said “just kidding” after asserting that elephants are people—but I won’t. Elephants are people. They do not have an advanced civilization, to be sure. But as far as I am concerned they display all the necessary minimal conditions to be granted the fundamental rights of personhood. Killing an elephant is murder.

And yet, the ivory trade continues to be profitable. Most of this is black-market activity, though it was legal in some places until very recently; China only restored their ivory trade ban this year, and Hong Kong’s ban will not take full effect until 2021. Some places are backsliding: A proposal (currently on hold) by the US Fish and Wildlife Service under the Trump administration would also legalize some limited forms of ivory trade.
With this in mind, I can understand why people would support the practice of ivory-burning, symbolically and publicly destroying ivory by fire so that no one can buy it. Two years ago, Kenya organized a particularly large ivory-burning that set ablaze 105 tons of elephant tusk and 1.35 tons of rhino horn.

But as an economist, when I first learned about ivory-burning, I thought it was a really, really bad idea.

Why? Supply and demand. By destroying supply, you have just raised the market price of ivory. You have therefore increased the market incentives for poaching elephants and rhinos.
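
That naive static logic can be written down in a toy linear model (all the numbers here are made up purely for illustration):

```python
def equilibrium(a, b, c, d):
    """Equilibrium of linear demand P = a - b*Q against supply P = c + d*Q."""
    q = (a - c) / (b + d)
    return q, a - b * q  # (quantity, price)

# Hypothetical ivory market: demand P = 100 - 2Q, supply P = 10 + Q.
q0, p0 = equilibrium(100, 2, 10, 1)

# Burning a stockpile removes 10 units of supply at every price,
# i.e. the supply curve shifts left to P = 10 + (Q + 10) = 20 + Q.
q1, p1 = equilibrium(100, 2, 20, 1)

print(round(p0, 1), round(p1, 1))  # price rises from 40.0 to 46.7
```

Less ivory on the market, higher price, stronger incentive to poach. That was the reasoning, anyway; as the next paragraphs explain, the empirical data didn’t cooperate with it.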

Yet it turns out I was wrong about this, as were many other economists. I looked at the empirical research, and changed my mind substantially. Ivory-burning is not such a bad idea after all.

Here was my reasoning before: If I want to reduce the incentives to produce something, what do I need to do? Lower the price. How do I do that? I need to increase the supply. Economists have made several proposals for how to do that, and until I looked at the data I would have expected them to work; but they haven’t.

The best way to increase supply is to create synthetic ivory that is cheap and very difficult to tell apart from the real thing. This has been done, but it didn’t work. For some reason, sellers try to hide the expensive real ivory in with the cheap synthetic ivory. I admit I actually have trouble understanding this; if you can’t sell it at full price, why even bother with the illegal real ivory? Maybe their customers have methods of distinguishing the two that the regulators don’t? If so, why aren’t the regulators using those methods? Another concern with increasing the supply of ivory is that it might reduce the stigma of consuming ivory, thereby also increasing the demand.

A similar problem has arisen with so-called “ghost ivory”; for obvious reasons, existing ivory products were excluded from the ban imposed in 1947, lest the government be forced to confiscate millions of billiard balls and thousands of pianos. Yet poachers have learned ways to hide new, illegal ivory and sell it as old, legal ivory.

Another proposal was to organize “sustainable ivory harvesting”, which, based on past experience with similar regulations, is unlikely to be enforceable. Moreover, this is not like sustainable wood harvesting, where our only concern is environmental. I for one care about the welfare of individual elephants, and I don’t think they would want to be “harvested”, sustainably or otherwise.
There is one way of doing “sustainable harvesting” that might not be so bad for the elephants, which would be to set up a protected colony of elephants, help them to increase their population, and then, when elephants die of natural causes, take only the tusks and sell those as ivory, stamped with an official seal as “humanely and sustainably produced”. Even then, elephants are among a handful of species that would be offended by us taking their ancestors’ remains. But if it worked, it could save many elephant lives. The bigger problem is how expensive such a project would be, and how long it would take to show any benefit; elephant lifespans are about half as long as ours (except in zoos, where their mortality rate is much higher!), so a policy that might conceivably solve a problem in 30 to 40 years doesn’t really sound so great. More detailed theoretical and empirical analysis has made this clear: you just can’t get ivory fast enough to meet existing demand this way.

In any case, China’s ban on all ivory trade had an immediate effect in dropping the price of ivory, something synthetic ivory never accomplished. Before that, strengthened regulations in the US (particularly in New York and California) had been effective at reducing ivory sales. The CITES treaty in 1989 that banned most international ivory trade was followed by an immediate increase in elephant populations.

The most effective response to ivory trade is an absolutely categorical ban with no loopholes. To fight “ghost ivory”, we should remove exceptions for old ivory, offering buybacks for any antiques with a verifiable pedigree and a brief period of no-penalty surrender for anything with no such records. The only legal ivory must be for medical and scientific purposes, and its sourcing records must be absolutely impeccable—just as we do with human remains.

Even synthetic ivory must be banned, at least if it’s convincing enough that real ivory could be hidden in it. You can make something you call “synthetic ivory” that serves a similar consumer function, but it must be different enough that it can be easily verified at customs inspections.

We must give no quarter to poachers; Kenya was right to impose a life sentence for aggravated poaching. The Tanzanian proposal to “shoot to kill” was too extreme; summary execution is never acceptable. But if someone currently has a weapon pointed at an elephant and refuses to drop it, I consider it justifiable to shoot them, just as I would if that weapon were aimed at a human.

The need for a categorical ban is what makes the current US proposal dangerous. The particular exceptions it carves out are not all that large, but the fact that it carves out exceptions at all makes enforcement much more difficult. To his credit, Trump himself doesn’t seem very keen on the proposal, which may mean that it is dead in the water. I don’t get to say this often, but so far Trump seems to be making the right choice on this one.

Though the economic theory predicted otherwise, the empirical data is actually quite clear: The most effective way to save elephants from poaching is an absolutely categorical ban on ivory.

Ivory-burning is a signal of commitment to such a ban. Any ivory we find being sold, we will burn. Whoever was trying to sell it will lose their entire investment. Find more, and we will burn that too.