Subsidies almost never create jobs

May 5 JDN 2458609

The most extreme examples of harmful subsidies are fossil fuel industries and stadiums.

Fossil fuels are obviously the worst: The $1 trillion per year in direct and indirect government subsidies to fossil fuel corporations is second only to the $4 trillion in climate change externalities produced by the fossil fuel industry. Instead of subsidizing these corporations $1 trillion, the world should be taxing them $4 trillion—so we are off by $5 trillion, every single year. This is about 6.5% of the world’s GDP. In the United States, our largest oil subsidy is called the Interstate Highway System, but other countries have us beat: Iran, Uzbekistan, and Libya each give more than 10% of their GDP in subsidies to the fossil fuel industry.

Most stadiums receive some kind of subsidy, and many are outright publicly funded—and yet banks still get the naming rights. The largest teams with the most wealth of their own are typically also the most subsidized. The benefits of building a new stadium are not even particularly large, which is probably why over 85% of US economists agree that these subsidies don’t make sense.

But subsidies are all over the place, and one of the most common reasons given for them, if not the most common reason, is that they will “create jobs”.

This is the reason Trump gave for trying to subsidize coal (which isn’t even working). It’s the reason people give for subsidizing huge filmmaking conglomerates (which costs the government about $90,000 per job created). Why are we handing money to rich people? It will create jobs!

This is almost never actually true. We have known that this kind of subsidy doesn’t work since at least the 1980s.

The states and cities that create the most jobs aren’t the ones that offer the most generous handouts to corporations. They are the ones that have the cleanest air, the best infrastructure, and above all the most educated population. This is why there have been months when the majority of US jobs were created in California. California is the largest state, but it’s not that large—it’s only about 12% of the US population. If as many as 70% of the new jobs are being created there, it’s because California is doing something right that most other states are doing very, very wrong.

And then there is the rent-seeking competition that megacorporations like Amazon engage in, getting cities to bid higher and higher subsidies, then locating where they probably would have anyway but with billions of dollars in free money. This is a trick we need to stop falling for: The federal government should outright ban any attempt to use subsidies to get an existing corporation to locate in a specific state or city. That’s not contributing to American society; it’s just moving things around.

There are a few kinds of industries it makes sense to subsidize, because they have high up-front costs and large public benefits. Examples include research and development and renewable energy. But here the goal is not to create jobs. It’s to create wealth, typically in the form of scientific knowledge. We aren’t trying to get them to hire people; we’re trying to get them to accomplish something that’s difficult and important.

Why don’t subsidies create jobs? It’s really quite simple: You need to pay for those subsidies.

The federal government doesn’t face a hard budget constraint like businesses do; they can print money. But state and municipal governments don’t have that power, and so their subsidies need to be made up in either taxes or debt—which means either taxes now, or taxes later. Or they could cut spending elsewhere, which means losing whatever benefits they were getting from that spending. This means that any jobs you created with the subsidies are just going to be destroyed somewhere else, by higher taxes or lower government spending.

Most state and local governments have really tight budgets. Allen, Texas was running a $30 million budget deficit and cutting salaries for public school teachers, but still somehow found $60 million to subsidize building a stadium. The stadium might “create jobs” by moving some economic activity from one place to another, but the actual real economic benefit of a stadium is very small. Public schools are the foundation of a highly-developed economy. Without widespread education, this high a standard of living is simply impossible to sustain. Cutting public education is one of the last things you should be willing to do to balance a government budget—and yet somehow it seems to be one of the first we actually do.

It is true that we spend a great deal on education, and that spending could be made a lot more cost-effective (we can start by cutting athletic coaches and administrators); but every $1 spent on education yields between $4 and $6 in additional long-run wealth for our society. This means that at quite reasonable tax rates (17% to 25%) a public education system can directly pay for itself. Compare this to subsidizing a stadium, which gets back less than $1 of benefit per $1 spent, or subsidizing oil companies, which actively harms the world.
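The break-even arithmetic behind that claim is worth spelling out. A quick sketch, using only the payoff figures cited above (the loop and the break-even condition are mine, not from any formal model):

```python
# If $1 of education spending yields between $4 and $6 in additional
# long-run wealth, the government breaks even when tax revenue on that
# new wealth covers the original dollar: tax_rate * payoff >= 1.
for payoff in (4.0, 6.0):
    breakeven = 1.0 / payoff
    print(f"${payoff:.0f} of new wealth per $1 spent -> break-even tax rate {breakeven:.0%}")
```

A $4 payoff requires a 25% effective tax rate to break even; a $6 payoff requires only about 17%. Both are well within the range of actual tax burdens in developed countries.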

People don’t seem to understand that a capitalist economy basically just creates as many jobs as it needs. In a financial crisis, that mechanism falters; that’s when the federal government should step in and print money to get it running again. But when the economy is running smoothly, trying to “create jobs” is just not a useful thing to do. Jobs will be created and destroyed by the market. Policy should be trying to increase welfare. Educate your population. Improve your healthcare system. Build more public transit. Invest in fighting poverty and homelessness. And if you don’t think you can afford those things, then you definitely can’t afford handouts to megacorporations that won’t even make back what you paid.

If you stop destroying jobs, you will stop economic growth

Dec 30 JDN 2458483

One thing that endlessly frustrates me (and probably most economists) about the public conversation on economics is the fact that people seem to think “destroying jobs” is bad. Indeed, not simply a downside to be weighed, but a knock-down argument: If something “destroys jobs”, that’s a sufficient reason to oppose it, whether it be a new technology, an environmental regulation, or a trade agreement. So then we tie ourselves up in knots trying to argue that the policy won’t really destroy jobs, or it will create more than it destroys—but it will destroy jobs, and we don’t actually know how many it will create.

Destroying jobs is good. Destroying jobs is the only way that economic growth ever happens.

I realize I’m probably fighting an uphill battle here, so let me start at the beginning: What do I mean when I say “destroying jobs”? What exactly is a “job”, anyway?
At its most basic level, a job is something that needs doing. It’s a task that someone wants performed, but is unwilling or unable to perform on their own, and is therefore willing to give up some of what they have in order to get someone else to do it for them.

Capitalism has blinded us to this basic reality. We have become so accustomed to getting the vast majority of our goods via jobs that we come to think of having a job as something intrinsically valuable. It is not. Working at a job is a downside. It is something to be minimized.

There is a kind of work that is valuable: Creative, fulfilling work that you do for the joy of it. This is what we are talking about when we refer to something as a “vocation” or even a “hobby”. Whether it’s building ships in bottles, molding things from polymer clay, or coding video games for your friends, there is a lot of work in the world that has intrinsic value. But these things aren’t jobs. No one will pay you to do these things, nor do they need to; you’ll do them anyway.

The value we get from jobs is actually obtained from goods: Everything from houses to underwear to televisions to antibiotics. The reason you want to have a job is that you want the money from that job to give you access to markets for all the goods that are actually valuable to you.

Jobs are the input—the cost—of producing all of those goods. The more jobs it takes to make a good, the more expensive that good is. This is not a rule-of-thumb statement of what usually or typically occurs. This is the most fundamental definition of cost. The more people you have to pay to do something, the harder it was to do that thing. If you can do it with fewer people (or the same people working with less effort), you should. Money is the approximation; money is the rule-of-thumb. We use money as an accounting mechanism to keep track of how much effort was put into accomplishing something. But what really matters is the “sweat of our laborers, the genius of our scientists, the hopes of our children”.

Economic growth means that we produce more goods at less cost.

That is, we produce more goods with fewer jobs.

All new technologies destroy jobs—if they are worth anything at all. The entire purpose of a new technology is to let us do things faster, better, easier—to let us have more things with less work.

This has been true since at least the dawn of the Industrial Revolution.

The Luddites weren’t wrong that automated looms would destroy weaver jobs. They were wrong to think that this was a bad thing. Of course, they weren’t crazy. Their livelihoods were genuinely in jeopardy. And this brings me to what the conversation should be about when we instead waste time talking about “destroying jobs”.

Here’s a slogan for you: Kill the jobs. Save the workers.

We shouldn’t be disappointed to lose a job; we should think of that as an opportunity to give a worker a better life. For however many years, you’ve been toiling to do this thing; well, now it’s done. As a civilization, we have finally accomplished the task that you and so many others set out to do. We have not “replaced you with a machine”; we have built a machine that now frees you from your toil and allows you to do something better with your life. Your purpose in life wasn’t to be a weaver or a coal miner or a steelworker; it was to be a friend and a lover and a parent. You now have more of a chance to do the things that really matter, because you won’t have to spend all your time working some job.

When we replaced weavers with looms, plows with combine harvesters, computers-the-people with computers-the-machines (a transformation now so complete most people don’t even seem to know that the word used to refer to a person—the award-winning film Hidden Figures is about computers-the-people), tollbooth operators with automated transponders—all these things meant that the job was now done. For the first time in the history of human civilization, nobody had to do that job anymore. Think of how miserable life is for someone pushing a plow or sitting in a tollbooth for 10 hours a day; aren’t you glad we don’t have to do that anymore (in this country, anyway)?

And the same will be true if we replace radiologists with AI diagnostic algorithms (we will; it’s probably not even 10 years away), or truckers with automated trucks (we will; I give it 20 years), or cognitive therapists with conversational AI (we might, but I’m more skeptical), or construction workers with building-printers (we probably won’t anytime soon, but it would be nice), the same principle applies: This is something we’ve finally accomplished as a civilization. We can check off the box on our to-do list and move on to the next thing.

But we shouldn’t simply throw away the people who were working on that noble task as if they were garbage. Their job is done—they did it well, and they should be rewarded. Yes, of course, the people responsible for performing the automation should be rewarded: The engineers, programmers, technicians. But also the people who were doing the task in the meantime, making sure that the work got done while those other people were spending all that time getting the machine to work: They should be rewarded too.

Losing your job to a machine should be the best thing that ever happened to you. You should still get to receive most of your income, and also get the chance to find a new job or retire.

How can such a thing be economically feasible? That’s the whole point: The machines are more efficient. We have more stuff now. That’s what economic growth is. So there’s literally no reason we can’t give every single person in the world at least as much wealth as we did before—there is now more wealth.

There’s a subtler argument against this, which is that diverting some of the surplus of automation to the workers who get displaced would reduce the incentives to create automation. This is true, so far as it goes. But you know what else reduces the incentives to create automation? Political opposition. Luddism. Naive populism. Trade protectionism.

Moreover, these forces are clearly more powerful, because they attack the opportunity to innovate: Trade protection can make it illegal to share knowledge with other countries. Luddist policies can make it impossible to automate a factory.

Whereas, sharing the wealth would only reduce the incentive to create automation; it would still be possible, simply less lucrative. Instead of making $40 billion, you’d only make $10 billion—you poor thing. I sincerely doubt there is a single human being on Earth with a meaningful contribution to make to humanity who would make that contribution if they were paid $40 billion but not if they were only paid $10 billion.

This is something that could be required by regulation, or negotiated into labor contracts. If your job is eliminated by automation, for the next year you get laid off but still paid your full salary. Then, your salary is converted into shares in the company that are projected to provide at least 50% of your previous salary in dividends—forever. By that time, you should be able to find another job, and as long as it pays at least half of what your old job did, you will be better off. Or, you can retire, and live off that 50% plus whatever else you were getting as a pension.
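To make the proposal concrete, here is a sketch with entirely hypothetical numbers (the salary and the dividend yield are my illustrative assumptions, not figures from any actual plan):

```python
# Hypothetical worker: $50,000 salary, job eliminated by automation.
salary = 50_000
dividend_yield = 0.04  # assumed yield on the company's shares

# Year 1: laid off but still paid in full.
year_one_pay = salary

# Afterward: a share grant whose dividends cover 50% of the old salary.
dividend_floor = 0.5 * salary
share_grant = dividend_floor / dividend_yield

print(f"Year-one severance: ${year_one_pay:,}")
print(f"Annual dividend floor: ${dividend_floor:,.0f}")
print(f"Share grant required: ${share_grant:,.0f}")
```

Under these assumed numbers the share grant would need to be worth $625,000 to throw off $25,000 a year in dividends; a higher assumed yield shrinks that figure proportionally.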

From the perspective of the employer, this does make automation a bit less attractive: The up-front cost in the first year has been increased by everyone’s salary, and the long-term cost has been increased by all those dividends. Would this reduce the number of jobs that get automated, relative to some imaginary ideal? Sure. But we don’t live in that ideal world anyway; plenty of other obstacles to innovation were in the way, and by solving the political conflict, this will remove as many as it adds. We might actually end up with more automation this way; and even if we don’t, we will certainly end up with less political conflict, as well as less inequality of wealth and income.

What would a new macroeconomics look like?

Dec 9 JDN 2458462

In previous posts I have extensively criticized the current paradigm of macroeconomics. But it’s always easier to tear the old edifice down than to build a better one in its place. So in this post I thought I’d try to be more constructive: What sort of new directions could macroeconomics take?

The most important change we need to make is to abandon the assumption of dynamic optimization. This will be a very hard sell, as most macroeconomists have become convinced that the Lucas Critique means we need to always base everything on the dynamic optimization of a single representative agent. I don’t think this was actually what Lucas meant (though maybe we should ask him; he’s still at Chicago), and I certainly don’t think it is what he should have meant. He had a legitimate point about the way macroeconomics was operating at that time: It was ignoring the feedback loops that occur when we start trying to change policies.

Goodhart’s Law is probably a better formulation: Once you make an indicator into a target, you make it less effective as an indicator. So while inflation does seem to be negatively correlated with unemployment, that doesn’t mean we should try to increase inflation to extreme levels in order to get rid of unemployment; sooner or later the economy is going to adapt and we’ll just have both inflation and unemployment at the same time. (Campbell’s Law provides a specific example that I wish more people in the US understood: Test scores would be a good measure of education if we didn’t use them to target educational resources.)

The reason we must get rid of dynamic optimization is quite simple: No one behaves that way.

It’s often computationally intractable even in our wildly oversimplified models that experts spend years working on; now you’re imagining that everyone does this constantly?

The most fundamental part of almost every DSGE model is the Euler equation; this equation comes directly from the dynamic optimization. It’s supposed to predict how people will choose to spend and save based upon their plans for an infinite sequence of future income and spending—and if this sounds utterly impossible, that’s because it is. Euler equations don’t fit the data at all, and even extreme attempts to save them by adding a proliferation of additional terms have failed. (It reminds me very much of the epicycles that astronomers used to add to the geocentric model of the universe to try to squeeze in weird results like the retrograde motion of Mars, before they had the heliocentric model.)

We should instead start over: How do people actually choose their spending? Well, first of all, it’s not completely rational. But it’s also not totally random. People spend on necessities before luxuries; they try to live within their means; they shop for bargains. There is a great deal of data from behavioral economics that could be brought to bear on understanding the actual heuristics people use in deciding how to spend and save. There have already been successful policy interventions using this knowledge, like Save More Tomorrow.

The best thing about this is that it should make our models simpler. We’re no longer asking each agent in the model to solve an impossible problem. However people actually make these decisions, we know it can be done, because it is being done. Most people don’t really think that hard, even when they probably should; so the heuristics really can’t be that complicated. My guess is that you can get a good fit—certainly better than an Euler equation—just by assuming that people set a target for how much they’re going to save (which is also probably pretty small for most people), and then spend the rest.
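A minimal sketch of that heuristic (the saving rate and the necessities floor are purely illustrative assumptions on my part):

```python
def household_spending(income, necessities=1500.0, saving_target=0.05):
    """Heuristic consumption rule: cover necessities first, then save a
    small fixed fraction of whatever is left, and spend the rest.
    No Euler equation, no infinite-horizon optimization."""
    discretionary = max(income - necessities, 0.0)
    saving = saving_target * discretionary
    return min(income, necessities) + (discretionary - saving)

print(household_spending(2000.0))  # 1500 + 0.95 * 500 = 1975.0
```

Note how the rule degrades gracefully: a household earning less than its necessities floor simply spends everything, which is roughly what we observe empirically among low-income households.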

The second most important thing we need to add is inequality. Some people are much richer than others; this is a very important fact about economics that we need to understand. Yet it has taken the economics profession decades to figure this out, and even now I’m only aware of one class of macroeconomic models that seriously involves inequality, the Heterogeneous Agent New Keynesian (HANK) models which didn’t emerge until the last few years (the earliest publication I can find is 2016!). And these models are monsters; they are almost always computationally intractable and have a huge number of parameters to estimate.

Understanding inequality will require more parameters, that much is true. But if we abandon dynamic optimization, we won’t need as many as the HANK models have, and most of the new parameters are actually things we can observe, like the distribution of wages and years of schooling.

Observability of parameters is a big deal. Another problem with the way the Lucas Critique has been used is that we’ve been told we need to be using “deep structural parameters” like the intertemporal elasticity of substitution and the coefficient of relative risk aversion—but we have no idea what those actually are. We can’t observe them, and all of our attempts to measure them indirectly have yielded inconclusive or even inconsistent results. This is probably because these parameters are based on assumptions about human rationality that are simply not realistic. Most people probably don’t have a well-defined intertemporal elasticity of substitution, because their day-to-day decisions simply aren’t consistent enough over time for that to make sense. Sometimes they eat salad and exercise; sometimes they loaf on the couch and drink milkshakes. Likewise with risk aversion: many moons ago I wrote about how people will buy both insurance and lottery tickets, which no one with a consistent coefficient of relative risk aversion would ever do.

So if we are interested in deep structural parameters, we need to base those parameters on behavioral experiments so that we can understand actual human behavior. And frankly I don’t think we need deep structural parameters; I think this is a form of greedy reductionism, where we assume that the way to understand something is always to look at smaller pieces. Sometimes the whole is more than the sum of its parts. Economists obviously feel a lot of envy for physics; but they don’t seem to understand that aerodynamics would never have (ahem) gotten off the ground if we had first waited for an exact quantum mechanical solution of the oxygen atom (which we still don’t have, by the way). Macroeconomics may not actually need “microfoundations” in the strong sense that most economists intend; it needs to be consistent with small-scale behavior, but it doesn’t need to be derived from small-scale behavior.

This means that the new paradigm in macroeconomics does not need to be computationally intractable. Using heuristics instead of dynamic optimization and worrying less about microfoundations will make the models simpler; adding inequality need not make them so much more complicated.

What does a central bank actually do?

Aug 26 JDN 2458357

Though central banks are a cornerstone of the modern financial system, I don’t think most people have a clear understanding of how they actually function. (I think this may be by design; there are many ways we could make central banking more transparent, but policymakers seem reluctant to show their hand.)

I’ve even seen famous economists make really severe errors in their understanding of monetary policy, as John Taylor did when he characterized low-interest-rate policy as a “price ceiling”.

Central banks “print money” and “set interest rates”. But how exactly do they do these things, and what on Earth do they have to do with each other?

The first thing to understand is that most central banks don’t actually print money. In the US, cash is actually printed by the Department of the Treasury. But cash is only a small part of the money in circulation. The monetary base consists of physical currency plus the reserves banks hold at the Fed; the US monetary base is about $3.6 trillion. The money supply can be measured a few different ways, but the standard way is to include checking accounts, traveler’s checks, savings accounts, money market accounts, small-denomination certificates of deposit, and basically anything that can be easily withdrawn and spent as money. This is called the M2 money supply, and in the US it is currently over $14.1 trillion. That means the monetary base accounts for only about 25% of our money supply, and actual physical cash for considerably less than that—the rest is all digital. Even that proportion is unusually high right now, because the monetary base was greatly expanded in response to the Great Recession. When we say that the Fed “prints money”, what we really mean is that they are increasing the money supply—but typically they do so in a way that involves little if any actual printing of cash.

The second thing to understand is that central banks don’t exactly set interest rates either. They target interest rates. What’s the difference, you ask?

Well, setting interest rates would mean that they made a law or something saying you have to charge exactly 2.7%, and you get fined or something if you don’t do that.

Targeting interest rates is a subtler art. The Federal Reserve decides what interest rates they want banks to charge, and then they engage in what are called open-market operations to try to make that happen. Banks hold reserves: money that they are required to keep on hand (or on deposit at the Fed) to back their deposit liabilities. Since we are in a fractional-reserve system, they need only hold a certain proportion of their deposits as reserves (historically about 10%) and may lend out the rest. In open-market operations, the Fed buys and sells assets (usually US Treasury bonds) in order to either increase or decrease the amount of reserves available to banks, to try to get them to lend to each other at the targeted interest rates.

Why not simply set the interest rate by law? Because then it wouldn’t be the market-clearing interest rate. There would be shortages or gluts of assets.

It might be easier to grasp this if we step away from money for a moment and just think about the market for some other good, like televisions.

Suppose that the government wants to set the price of a television in the market to a particular value, say $500. (Why? Who knows. Let’s just run with it for a minute.)

If they simply declared by law that the price of a television must be $500, here’s what would happen: Either that would be too low, in which case there would be a shortage of televisions as demand exceeded supply; or that would be too high, in which case there would be a glut of televisions as supply exceeded demand. Only if they got spectacularly lucky and the market price already was $500 per television would they not have to worry about such things (and then, why bother?).

But suppose the government had the power to create and destroy televisions virtually at will with minimal cost.
Now, they have a better way; they can target the price of a television, and buy and sell televisions as needed to bring the market price to that target. If the price is too low, the government can buy and destroy a lot of televisions, to bring the price up. If the price is too high, the government can make and sell a lot of televisions, to bring the price down.
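The buy-and-sell loop in that analogy can be sketched directly. The demand curve, step size, and starting quantity below are all made-up numbers for illustration:

```python
def market_price(quantity):
    """Toy inverse demand curve: the more televisions available, the lower the price."""
    return 900.0 - 0.004 * quantity

def target_the_price(target, quantity, step=1_000, tol=2.0):
    """Create and sell (or buy and destroy) televisions until the
    market price lands within `tol` of the target."""
    while abs(market_price(quantity) - target) > tol:
        if market_price(quantity) > target:
            quantity += step  # price too high: make and sell more
        else:
            quantity -= step  # price too low: buy up and destroy some
    return quantity, market_price(quantity)

print(target_the_price(500.0, 80_000))  # converges to (100000, 500.0)
```

Starting at 80,000 televisions the price is $580, so the government sells until the stock reaches 100,000 and the price hits the $500 target. Open-market operations work the same way, with bonds and reserves in place of televisions.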

Now, let’s go back to money. This power to create and destroy at will is hard to believe for televisions, but absolutely true for money. The government can create and destroy almost any amount of money at will—they are limited only by the very inflation and deflation the central bank is trying to affect.

This allows central banks to intervene in the market without creating shortages or gluts; even though they are effectively controlling the interest rate, they are doing so in a way that avoids having a lot of banks wanting to take loans they can’t get or wanting to give loans they can’t find anyone to take.

The goal of all this manipulation is ultimately to reduce inflation and unemployment. Unfortunately it’s basically impossible to eliminate both simultaneously; the Phillips curve describes the relationship generally found, in which decreased inflation usually comes with increased unemployment and vice-versa. But the basic idea is that we set reasonable targets for each (usually about 2% inflation and 5% unemployment; frankly I’d prefer we swap the two, which was more or less what we did in the 1950s), and then if inflation is too high we raise interest rate targets, while if unemployment is too high we lower interest rate targets.

What if they’re both too high? Then we’re in trouble. This has happened; it is called stagflation. The money supply isn’t the only thing affecting inflation and unemployment, and sometimes we get hit with a bad shock that makes both of them high at once. In that situation, there isn’t much that monetary policy can do; we need to find other solutions.

But how does targeting interest rates lead to inflation? To be quite honest, we don’t actually know.

The basic idea is that lower interest rates should lead to more borrowing, which leads to more spending, which leads to more inflation. But beyond that, we don’t actually understand how interest rates translate into prices—this is the so-called transmission mechanism, which remains an unsolved problem in macroeconomics. Based on the empirical data, I lean toward the view that the mechanism is primarily via housing prices; lower interest rates lead to more mortgages, which raises the price of real estate, which raises the price of everything else. This also makes sense theoretically, as real estate consists of large, illiquid assets for which the long-term interest rate is very important. Your decision to buy an apple or even a television is probably not greatly affected by interest rates—but your decision to buy a house surely is.

If that is indeed the case, it’s worth thinking about whether this is really the right way to intervene on inflation and unemployment. High housing prices are an international crisis; maybe we need to be looking at ways to decrease unemployment without affecting housing prices. But that is a tale for another time.

What would a game with realistic markets look like?

Aug 12 JDN 2458343

From Pokemon to Dungeons & Dragons, Final Fantasy to Mass Effect, almost all role-playing games have some sort of market: Typically, you buy and sell equipment, and often can buy services such as sleeping at inns. Yet the way those markets work is extremely rigid and unrealistic.

(I’m of course excluding games like EVE Online that actually create real markets between players; those markets are so realistic I actually think they would provide a good opportunity for genuine controlled experiments in macroeconomics.)

The weirdest thing about in-game markets is the fact that items almost always come with a fixed price. Sometimes there is some opportunity for haggling, or some randomization between different merchants; but the notion always persists that the item has a “true price” that is being adjusted upward or downward. This is more or less the opposite of how prices actually work in real markets.

There is no “true price” of a car or a pizza. Prices are whatever buyers and sellers make them. There is a true value—the amount of real benefit that can be obtained from a good—but even this is something that varies between individuals and also changes based on the other goods being consumed. The value of a pizza is considerably higher for someone who hasn’t eaten in days than for someone who just finished eating another pizza.

There is also what is called “The Law of One Price”, but like all laws of economics, it’s like the Pirate Code, more what you’d call a “guideline”, and it only applies to a particular good in a particular market at a particular time. The Law of One Price doesn’t even say that a pizza should have the same price tomorrow as it does today, or that the same pizza can’t be sold to two different customers at two different prices; it only says that the same pizza shouldn’t have two different prices in the same place at the same time for the same customer. (It seems almost tautological, right? And yet it still fails empirically, and does so again and again. I have seen offers for the same book in the same condition posted on the same website that differed by as much as 50%.)

In well-developed capitalist markets in large First World countries, we can lull ourselves into the illusion that there is only one price for a good, because markets are highly liquid and either highly competitive or controlled by a strong and stable oligopoly that enforces a particular price across places and times. The McDonald’s Dollar Menu is a policy choice by a massive multinational corporation; it’s not what would occur naturally if those items were sold on a competitive market.

Even then, this illusion can be broken when we are faced with a large economic shock, such as the OPEC price shock in 1973 or a natural disaster like Hurricane Katrina. It also tends to be broken for illiquid goods such as real estate.

If we consider the environment in which most role-playing games take place, it’s usually a sort of quasi-medieval or quasi-Renaissance feudal society, where a given government controls only a small region and traveling between towns is difficult and dangerous. Not only should the prices of goods differ substantially between towns, the currency used should frequently differ as well. Yes, most places would accept gold and silver; but a kingdom with a stable government will generally have a currency with significant seigniorage, with coins worth considerably more than the gold used to mint them—yet the value of that seigniorage will drop off as you move further away from that kingdom and its sphere of influence.

Moreover, prices should be inconsistent even between traders in the same town, and extremely volatile. When a town is mostly self-sufficient and trade is only a small part of its economy, even a small shock such as a bad thunderstorm or a brief drought can yield massive shifts in prices. Shortages and gluts will be frequent, as both supply and demand are small and ever-changing.

This wouldn’t be that difficult to implement. The simplest way would just be to institute random shocks to prices that vary by place and time. A more sophisticated method would be to actually simulate supply and demand for different goods, and then have prices respond to realistic shocks (e.g. a drought makes wheat more expensive, and the price of swords suddenly skyrockets after news of an impending dragon attack). Experiments have shown that competitive market outcomes can be achieved by simulating even a dozen or so traders using very simple heuristics like “don’t pay more than you can afford” and “don’t charge less than it cost you”.
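As a sketch of how little machinery this takes, here is a toy continuous double auction in the spirit of those zero-intelligence-trader experiments: each trader follows only the two heuristics above. All the values and costs are invented for illustration.

```python
import random

def double_auction(buyer_values, seller_costs, rounds=200, seed=0):
    """Toy continuous double auction with zero-intelligence traders.

    Buyers bid a random price no higher than their private value
    ("don't pay more than you can afford"); sellers ask a random price
    no lower than their cost ("don't charge less than it cost you").
    When a bid meets an ask, the pair trades at the midpoint and exits.
    """
    rng = random.Random(seed)
    buyers, sellers = list(buyer_values), list(seller_costs)
    ceiling = max(buyer_values)  # highest price anyone could rationally pay
    prices = []
    for _ in range(rounds):
        if not buyers or not sellers:
            break
        value = rng.choice(buyers)
        cost = rng.choice(sellers)
        bid = rng.uniform(0, value)
        ask = rng.uniform(cost, ceiling)
        if bid >= ask:  # crossing quotes: a trade happens
            prices.append((bid + ask) / 2)
            buyers.remove(value)
            sellers.remove(cost)
    return prices

# A dozen traders with dispersed values and costs; trade prices land
# between marginal cost and marginal value, roughly where the supply
# and demand curves cross.
trades = double_auction([90, 80, 70, 60, 50, 40], [10, 20, 30, 40, 50, 60])
```

A game could run a small auction like this once per town per in-game day, with droughts or dragon rumors implemented simply as shifts to the underlying values and costs.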

Why don’t game designers implement this? I think there are two reasons.

The first is simply that it would be more complicated. This is a legitimate concern in many cases; I think Pokemon in particular can justify using a simple economy, given its target audience. I also agree that having more than a handful of currencies would be too much for players to keep track of; though perhaps having two or three (one for each major faction?) is still more interesting than only having one.

Also, tabletop games are inherently more limited in the amount of computation they can use, compared to video games. But for a game as complicated as, say, Skyrim, this really isn’t much of a defense. Skyrim actually simulated the daily routines of over a hundred different non-player characters; it could have been simulating markets in the background as well—in fact, it could simply have had those same non-player characters buy and sell goods with each other in a double-auction market that would automatically generate the prices that players face.

The more important reason, I think, is that game designers have a paralyzing fear of arbitrage.

I find it particularly aggravating how frequently games will set it up so that the price at which you buy and the price at which you sell are constrained so that the buying price is always higher, often as much as twice as high. This is not at all how markets work in the real world; frankly it’s only even close to true for goods like cars that rapidly depreciate. It makes sense that a given merchant will not sell you a good for less than what they would pay to buy it from you; but that only requires each individual merchant to have a well-defined willingness-to-pay and willingness-to-accept. It certainly does not require the arbitrary constraint that you can never sell something for more than what you bought it for.
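To make the point concrete, here is a minimal sketch (town names and prices invented) in which every merchant individually sells above what they buy for, and yet profitable trade routes still emerge naturally:

```python
def best_arbitrage(towns):
    """Find the most profitable buy-low, sell-high route, if any.

    Each town quotes its own ask (the price you buy at) and bid (the
    price you sell at), and every town's ask exceeds its own bid; yet
    a route between towns can still be profitable.
    """
    best = None
    for src, src_quote in towns.items():
        for dst, dst_quote in towns.items():
            if src == dst:
                continue
            profit = dst_quote["bid"] - src_quote["ask"]
            if best is None or profit > best[2]:
                best = (src, dst, profit)
    return best

# Invented quotes for iron in three towns:
towns = {
    "Riverside": {"ask": 14, "bid": 12},  # iron is plentiful here
    "Hilltop":   {"ask": 22, "bid": 19},  # iron is scarce here
    "Midvale":   {"ask": 17, "bid": 15},
}
route = best_arbitrage(towns)  # ('Riverside', 'Hilltop', 5): buy at 14, sell at 19
```

No merchant here is being cheated; the profit of 5 per unit of iron is exactly the service a traveling trader provides by moving goods from where they are plentiful to where they are scarce.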

In fact, I would probably even allow players who specialize in social skills to short-change and bamboozle merchants for profit, as this is absolutely something that happens in the real world, and was likely especially common under the very low levels of literacy and numeracy that prevailed in the Middle Ages.

To many game designers (and gamers), the ability to buy a good in one place, travel to another place, and sell that good for a higher price seems like cheating. But this practice is called being a merchant. That is literally what the entire retail industry does. The rules of your game should allow you to profit from activities that are in fact genuinely profitable economic services in the real world.

I remember a similar complaint being raised against Skyrim shortly after its release, that one could acquire a pickaxe, collect iron ore, smelt it into steel, forge weapons out of it, and then sell the weapons for a sizeable profit. To some people, this sounded like cheating. To me, it sounds like being a blacksmith. This is especially true because Skyrim’s skill system allowed you to improve the quality of your smithed items over time, just like learning a trade through practice (though it ramped up too fast, as it didn’t take long to make yourself clearly the best blacksmith in all of Skyrim). Frankly, this makes far more sense than being able to acquire gold by adventuring through the countryside and slaughtering monsters or collecting lost items from caves. Blacksmiths were a large part of the medieval economy; spelunking adventurers were not. Indeed, it bothers me that there weren’t more opportunities like this; you couldn’t make your wealth by being a farmer, a vintner, or a carpenter, for instance.

Even pure arbitrage that provides no real service, such as buying and selling between two merchants in the same town or from the same merchant on two consecutive days, is the basis of a highly profitable industry. Most of our financial system is built around it, frankly. If you manage to make your wealth selling wheat futures instead of slaying dragons, I say more power to you. After all, there were an awful lot of wheat-futures traders in the Middle Ages, and to my knowledge no actually successful dragon-slayers.

Of course, if your game is about slaying dragons, it should include some slaying of dragons. And if you really don’t care about making a realistic market in your game, so be it. But I think that more realistic markets could actually offer a great deal of richness and immersion into a world without greatly increasing the difficulty or complexity of the game. A world where prices change in response to the events of the story just feels more real, more alive.

The ability to profit without violence might actually draw whole new modes of play to the game (as has indeed occurred with Skyrim, where a small but significant proportion of players have chosen to live out peaceful lives as traders or blacksmiths). It would also enrich the experience of more conventional players and help them recover from setbacks (if the only way to make money is to fight monsters and you keep getting killed by monsters, there isn’t much you can do; but if you have the option of working as a trader or a carpenter for a while, you could save up for better equipment and try the fighting later).

And hey, game designers: If any of you are having trouble figuring out how to implement such a thing, my consulting fees are quite affordable.

Is a job guarantee better than a basic income?

Aug 5 JDN 2458336

In previous posts I’ve written about both the possibilities and challenges involved in creating a universal basic income. Today I’d like to address what I consider the most serious counter-argument against a basic income, an alternative proposal known as a job guarantee.

Whereas a basic income is literally just giving everyone free money, a job guarantee entails offering everyone who wants to work a job paid by the government. They’re not necessarily contradictory, but I’ve noticed a clear pattern: While basic income proponents are generally open to the idea of a job guarantee on the side, job guarantee proponents are often vociferously opposed to a basic income—even calling it “sinister”. I think the reason for this is that we see jobs as irrelevant, so we’re okay with throwing them in if you feel you must, while they see jobs as essential, so they meet any attempt to remove them with overwhelming resistance.

Where a basic income is extremely simple and could be implemented by a single act of the legislature, a job guarantee is considerably more complicated. The usual proposal for a job guarantee involves federal funding but local implementation, which is how most of our social welfare system is implemented—and why social welfare programs are so much better in liberal states like California than in conservative states like Mississippi: California actually believes in what it’s implementing and Mississippi doesn’t. Anyone who wants a job guarantee needs to take that aspect seriously: In the places where poverty is worst, you’re offering control over the policy to the very governments that made poverty worse in the first place—and whether by malice or incompetence, what makes you think that won’t continue?

Another argument that I think job guarantee proponents don’t take seriously enough is the concern about “make-work”. They insist that a job guarantee is not “make-work”, but real work that’s just somehow not being done. They seem to think that there are a huge number of jobs that we could just create at the snap of a finger, which would be both necessary and useful on the one hand, and a perfect match for the existing skills of the unemployed population on the other hand. If that were the case, we would already be creating those jobs. It doesn’t even require a particularly strong faith in capitalism to understand this: If there is a profit to be made by hiring people to do something, there is probably already a business hiring people to do it. I don’t think of myself as someone with an overriding faith in capitalism, but a lot of the socialist arguments for job guarantees make me feel that way by comparison: They seem to think that there’s this huge untapped reserve of necessary work that the market is somehow failing to provide, and I’m just not seeing it.

There are public goods projects which aren’t profitable but would still be socially beneficial, like building rail lines and cleaning up rivers. But proponents of a job guarantee don’t seem to understand that these are almost all highly specialized jobs at our level of technology. We don’t need a bunch of people with shovels. We need engineers and welders and ecologists.

If you propose using people with shovels where engineers would be more efficient, that is make-work, whether you admit it or not. If you’re making people work in a less-efficient way in order to create jobs, then the jobs you are creating are fake jobs that aren’t worth creating. The line is often credited to Milton Friedman, but it was actually first said by William Aberhart in 1935:

Taking up the policy of a public works program as a solution for unemployment, it was criticized as a plan that took no account of the part that machinery played in modern construction, with a road-making machine instanced as an example. He saw, said Mr. Aberhart, work in progress at an airport and was told that the men were given picks and shovels in order to lengthen the work, to which he replied why not give them spoons and forks instead of picks and shovels if the object was to lengthen out the task.

I’m all for spending more on building rail lines and cleaning up rivers, but that’s not an anti-poverty program. The people who need the most help are precisely the ones who are least qualified to work on these projects: Children, old people, people with severe disabilities. Job guarantee proponents either don’t understand this fact or intentionally ignore it. If you aren’t finding jobs for 7-year-olds with autism and 70-year-olds with Parkinson’s disease, this program will not end poverty. And if you are, I find it really hard to believe that these are real, productive jobs and not useless “make-work”. A basic income would let the 7-year-olds stay in school and the 70-year-olds live in retirement homes—and keep them both out of poverty.

Another really baffling argument for a job guarantee over basic income is that a basic income would act as a wage subsidy, encouraging employers to reduce wages. That’s not how a basic income works. Not at all. A basic income would provide a pure income effect, necessarily increasing wage demands. People would not be as desperate for work, so they’d be more comfortable turning down unreasonable wage offers. A basic income would also incentivize some people to leave the labor force by retiring or going back to school; the reduction in labor supply would further increase wages. The Earned Income Tax Credit, by contrast, really is in many respects similar to a wage subsidy; a basic income might superficially seem similar, but it would have the exact opposite effect.
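The income effect is easy to see in a toy model. Suppose a worker has log utility over income and a fixed disutility of working (all numbers here are invented for illustration): the lowest wage they will accept rises with their unearned income, so a basic income pushes wage demands up, not down.

```python
import math

def reservation_wage(unearned_income, effort_cost=1.0):
    """Lowest acceptable wage for a worker with log utility over income.

    The worker takes the job iff
        log(unearned_income + w) - log(unearned_income) >= effort_cost,
    which solves to w >= unearned_income * (exp(effort_cost) - 1).
    """
    return unearned_income * (math.exp(effort_cost) - 1)

# Doubling unearned income doubles the wage floor in this model:
low = reservation_wage(10.0)   # about 17.2
high = reservation_wage(20.0)  # about 34.4
```

A wage subsidy paid to employers works the other way around: it lowers the wage an employer must offer to deliver the same take-home pay, which is exactly the effect critics wrongly attribute to a basic income.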

One reasonable argument against a basic income is the possibility that it could cause inflation. This is something that can’t really be tested with small-scale experiments, so we won’t know for sure until we try it. But there is reason to think that the inflation would be small, as the people removed from the labor force will largely be the ones who were least productive to begin with. There is also a growing body of empirical evidence suggesting that the inflationary effects of a basic income would be small; for example, data on cash transfer programs in Mexico show only a small inflationary effect despite large reductions in poverty. The whole reason a basic income looks attractive is that automation technology is now so advanced that we really don’t need everyone to be working anymore. Productivity is now so high that a policy of universal 40-hour work weeks just doesn’t make sense in the 21st century.

Probably the best argument for a job guarantee over a basic income concerns cost. There is no doubt that a basic income is very expensive, and a job guarantee could be much cheaper. That is something I take very seriously: Saving $1.5 trillion a year is absolutely a good reason. Indeed, I don’t really object to this argument; the calculations are correct. I merely think that a basic income is enough better that its higher cost is justifiable. A job guarantee can eliminate unemployment, but not poverty.

But the argument for a job guarantee that most people seem to find most compelling concerns meaning. The philosopher John Danaher has expressed this one most cogently. Unemployment is an extremely painful experience for most people, far beyond what could be explained simply by their financial circumstances. Most people who win large sums of money in the lottery cut back their hours, but continue working—so work itself seems to have some value. What seems to happen is that when people lose the chance to work, they feel that they have lost a vital source of meaning in their lives.

Yet this raises two more questions:

First, would a job guarantee actually solve that problem?
Second, are there ways we could solve it under a basic income?

With regard to the first question, I want to re-emphasize the fact that a large proportion of these guaranteed jobs necessarily cannot be genuinely efficient production. If efficient production would have created these jobs, we would most likely already have created them. Our society does not suffer from an enormous quantity of necessary work that could be done with the skills already possessed by the unemployed population, which is somehow not getting done—indeed, it is essentially impossible for a capitalist economy with a highly-liquid financial system to suffer such a malady. If the work is so valuable, someone will probably take out a loan to hire someone to do it. If that’s not happening, either the unemployed people don’t have the necessary skills, or the work really can’t be all that productive. There are some public goods projects that would be beneficial but aren’t being done, but that’s a different problem, and the match between the public goods projects that need to be done and the skills of the unemployed population is extremely poor. Displaced coal miners aren’t useful for maintaining automated photovoltaic factories. Truckers who get replaced by robot trucks won’t be much good for building maglev rails.

With this in mind, it’s not clear to me that people would really be able to find much meaning in a guaranteed job. You can’t be fired, so the fact that you have the job doesn’t mean anyone is impressed by the quality of your work. Your work wasn’t actually necessary, or the private sector would already have hired someone to do it. The government went out of its way to find a job that precisely matched what you happen to be good at, regardless of whether that job was actually accomplishing anything to benefit society. How is that any better than not working at all? You are spending hours of drudgery to accomplish… what, exactly? If our goal was simply to occupy people’s time, we could do that with Netflix or video games.

With regard to the second question, note that a basic income is quite different from other social welfare programs in that everyone gets it. So it’s very difficult to attach a social stigma to receiving basic income payments—it would require attaching the stigma to literally everyone. And much of the meaning lost in unemployment, I suspect, comes from the social stigma attached to it.

Now, it’s still possible to attach social stigma to people who only get the basic income—there isn’t much we can do to prevent that. But in the worst-case scenario, this means unemployed people get the same stigma as before but more money. Moreover, it’s much harder to detect a basic income recipient than, say, someone who eats at a soup kitchen or buys food using EBT; since it goes in your checking account, all anyone else sees is you spending money from your debit card, just like everyone else. People who know you personally would probably know; but people who know you personally are also less likely to destroy your well-being by imposing a high stigma. Maybe they’ll pressure you to get off the couch and get a job, but they’ll do so because they genuinely want to help you, not because they think you are “one of those lazy freeloaders”.

And, as BIEN points out, think about retired people: They don’t seem to be so unhappy. Being on basic income is more like being retired than like being unemployed. It’s something everyone gets, not some special handout for “those people”. It’s permanent, so it’s not like you need to scramble to get a job before it goes away. You just get money automatically, so you don’t have to navigate a complex bureaucracy to get it. Controlling for income, retired people don’t seem to be any less happy than working people—so maybe work doesn’t actually provide all that much meaning after all.

I guess I can’t rule out the possibility that people need jobs to find meaning in their lives, but I both hope and believe that this is not generally the case. You can find meaning in your family, your friends, your community, your hobbies. You can still work even if you don’t need to work for a living: Build a shed, mow your lawn, tune up your car, upgrade your computer, write a story, learn a musical instrument, or try your hand at painting.

If you need to be taking orders from a corporation five days a week in order to have meaning in your life, you have bigger problems. I think what has happened to many people is that employment has so drained their lives of the real sources of meaning that they cling to it as the only thing they have left. But in fact work is not the cure to your ennui—it is the cause of it. Finally being free of the endless toil that has plagued humanity since the dawn of our species will give you the chance to reconnect with what really matters in life. Show your children that you love them in person, to their faces, instead of in this painfully indirect way of “providing for” them by going to work every day. Find ways to apply your skills in volunteering or creating works of art, instead of in endless drudgery for the profit of some faceless corporation.

Most trade barriers are not tariffs

Jul 8 JDN 2458309

When we talk about “protectionism” or “trade barriers”, what usually comes to mind is tariffs: taxes imposed on imports or exports. But especially now that international trade organizations have successfully reduced tariffs around the world, most trade barriers are not of this form at all.

Especially in highly-developed countries, but really almost everywhere, the most common trade barriers are what are simply but inelegantly called non-tariff barriers to trade: these include licenses, quotas, subsidies, bailout guarantees, labeling requirements, and even some environmental regulations.

Non-tariff barriers are much more complicated to deal with, for at least three reasons.

First, with the exception of quotas and subsidies, non-tariff barriers are not easily quantifiable. We can easily put a number on the value of a tariff (though its impact is somewhat subtler than that), but this is not so easy for the effect of a bailout guarantee or a labeling requirement.
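Even the “easy” quantification is subtler than the sticker rate suggests. In a toy linear model of an import market (all parameters invented for illustration), part of the tariff is absorbed by foreign sellers rather than showing up in the consumer price:

```python
def import_market_price(a, b, c, d, tariff):
    """Equilibrium consumer price in a toy linear import market.

    Demand:  Q = a - b * P              (domestic buyers)
    Supply:  Q = c + d * (P - tariff)   (foreign sellers keep P minus the tariff)
    Setting them equal gives P = (a - c + d * tariff) / (b + d).
    """
    return (a - c + d * tariff) / (b + d)

p_free   = import_market_price(100, 1, 10, 2, tariff=0)  # 30.0
p_tariff = import_market_price(100, 1, 10, 2, tariff=9)  # 36.0
# A $9 tariff raises the consumer price by only $6; the other $3 comes
# out of what the foreign sellers receive (27 instead of 30).
```

With steeper or flatter curves the split changes, which is exactly why the same tariff rate can have very different real impacts in different markets.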

Second, non-tariff barriers are often much harder to detect. It’s obvious enough that imposing a tax on imported steel will reduce our imports of steel; but it requires a deeper understanding of the trade system to understand why bailing out domestic banks would distort financial flows, interest rates, and exchange rates (even though the impact of this may actually be larger—the effect on global trade of US bank bailouts was between $35 billion and $110 billion).

Third, some trade barriers are either justifiable or simply inevitable. Simply having customs screening at the border is a non-tariff barrier, but it is widely regarded as a justifiable security measure (and I agree, by the way, even though I am generally in favor of much more open borders). Requiring strict labor and environmental standards on the production of products both domestic and imported is highly beneficial, but also imposes a trade barrier. In a broader sense, differences in language and culture could even be regarded as trade barriers (they certainly increase the real cost of trade), but it’s not clear that we could eliminate such things even if we wanted to.

This requires us to look very closely at almost every major government policy, to see how it might be distorting world trade. Some policies won’t meaningfully distort trade at all; these are not trade barriers. Others will distort trade, but are beneficial enough in other ways that they are still worth it; these are justifiable trade barriers. Still others will distort trade so much that they cannot be justified despite their other benefits. Finally, some policies will be put in place more or less explicitly to distort trade, usually in the form of protectionism to prop up domestic industries.

Protectionist policies are of course the first things to get rid of. Honestly, it baffles me that people even want to impose them in the first place. For some reason they think of exports as the benefit and imports as the cost, when it’s really the other way around; when we impose protectionism, we go out of our way to make it harder to get cars and iPhones so that we can stop other countries from taking our green paper. This seems to be tied to the fact that people think of jobs as something desirable, when really it’s wealth that’s desirable, and jobs are just one way of getting wealth—in some sense the most expensive way. Our macroeconomic policy obsesses over inflation, which is almost literally meaningless (as long as it is not too unpredictable, really nothing would change if inflation were raised from 2% to 4% or even 10%) and unemployment, which is at best an imperfect indicator of what we really should care about, namely the welfare of our people. A world of full employment with poverty wages is much worse than a world of high unemployment where a basic income provides for everyone’s needs. It is true that in our current system, unemployment is closely tied to a lot of very bad outcomes—but I maintain that this is largely because unemployment entails losing your income and your healthcare.

Some regulations that appear benign may actually be harmful because of their effects on trade. Yet I should also point out that it’s possible to go too far in the other direction, and start tearing down all regulations in the name of reducing trade barriers. We particularly seem to do this in the financial industry, where “deregulation” seems to be on everyone’s lips until it causes a crisis; then we impose some regulations that fix the worst problems, things look good for a while—and then we go back around and everyone starts talking about “deregulation” again. Meanwhile, the same people who talk about “freedom” as an excuse for removing financial safeguards are the ones who lock up children at the border. I think this is something that needs to be reframed: Which regulations are you removing? Just what, exactly, are you making legal that wasn’t before? Legalizing murder would be “deregulation”.

Trade policy, therefore, is a very delicate balance, between removing distortions and protecting legitimate public interests, between the needs of your own country and the world as a whole. This is why we need this whole apparatus of international trade institutions; it’s not a simple matter.

But I will say this: It would probably help if people educated themselves a bit more about how trade actually works before voting in politicians who promise to “save their jobs” from foreign competition.