What would a game with realistic markets look like?

Aug 12 JDN 2458343

From Pokemon to Dungeons & Dragons, Final Fantasy to Mass Effect, almost all role-playing games have some sort of market: Typically, you buy and sell equipment, and often can buy services such as sleeping at inns. Yet the way those markets work is extremely rigid and unrealistic.

(I’m of course excluding games like EVE Online that actually create real markets between players; those markets are so realistic I actually think they would provide a good opportunity for genuine controlled experiments in macroeconomics.)

The weirdest thing about in-game markets is the fact that items almost always come with a fixed price. Sometimes there is some opportunity for haggling, or some randomization between different merchants; but the notion always persists that the item has a “true price” that is being adjusted upward or downward. This is more or less the opposite of how prices actually work in real markets.

There is no “true price” of a car or a pizza. Prices are whatever buyers and sellers make them. There is a true value—the amount of real benefit that can be obtained from a good—but even this is something that varies between individuals and also changes based on the other goods being consumed. The value of a pizza is considerably higher for someone who hasn’t eaten in days than for someone who just finished eating another pizza.

There is also what is called “The Law of One Price”, but like all laws of economics, it’s like the Pirate Code, more what you’d call a “guideline”, and it only applies to a particular good in a particular market at a particular time. The Law of One Price doesn’t even say that a pizza should have the same price tomorrow as it does today, or that the same pizza can’t be sold to two different customers at two different prices; it only says that the same pizza shouldn’t have two different prices in the same place at the same time for the same customer. (It seems almost tautological, right? And yet it still fails empirically, and does so again and again. I have seen offers for the same book in the same condition posted on the same website that differed by as much as 50%.)

In well-developed capitalist markets in large First World countries, we can lull ourselves into the illusion that there is only one price for a good, because markets are highly liquid and either highly competitive or controlled by a strong and stable oligopoly that enforces a particular price across places and times. The McDonald’s Dollar Menu is a policy choice by a massive multinational corporation; it’s not what would occur naturally if those items were sold on a competitive market.

Even then, this illusion can be broken when we are faced with a large economic shock, such as the OPEC price shock in 1973 or a natural disaster like Hurricane Katrina. It also tends to be broken for illiquid goods such as real estate.

If we consider the environment in which most role-playing games take place, it’s usually a sort of quasi-medieval or quasi-Renaissance feudal society, where a given government controls only a small region and traveling between towns is difficult and dangerous. Not only should the prices of goods differ substantially between towns, the currency used should frequently differ as well. Yes, most places would accept gold and silver; but a kingdom with a stable government will generally have a currency of significant seignorage, with coins worth considerably more than the gold used to mint them—yet the value of that seignorage will drop off as you move further away from that kingdom and its sphere of influence.

Moreover, prices should be inconsistent even between traders in the same town, and extremely volatile. When a town is mostly self-sufficient and trade is only a small part of its economy, even a small shock such as a bad thunderstorm or a brief drought can yield massive shifts in prices. Shortages and gluts will be frequent, as both supply and demand are small and ever-changing.

This wouldn’t be that difficult to implement. The simplest way would just be to institute random shocks to prices that vary by place and time. A more sophisticated method would be to actually simulate supply and demand for different goods, and then have prices respond to realistic shocks (e.g. a drought makes wheat more expensive, and the price of swords suddenly skyrockets after news of an impending dragon attack). Experiments have shown that competitive market outcomes can be achieved by simulating even a dozen or so traders using very simple heuristics like “don’t pay more than you can afford” and “don’t charge less than it cost you”.
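Those heuristics are simple enough to sketch in a few lines. Here's a minimal Python simulation of "zero-intelligence" traders in the spirit of those experiments—the trader counts, private values, and costs are made-up numbers for illustration, not anything from a real game:

```python
import random

def simulate_market(n_traders=12, n_rounds=1000, seed=42):
    """Zero-intelligence traders: buyers never bid above their private
    value, sellers never ask below their private cost; trades happen
    whenever a random bid crosses a random ask."""
    rng = random.Random(seed)
    # Hypothetical private values and costs, say for swords in one town.
    buyer_values = [rng.uniform(50, 150) for _ in range(n_traders)]
    seller_costs = [rng.uniform(50, 150) for _ in range(n_traders)]
    prices = []
    for _ in range(n_rounds):
        value = rng.choice(buyer_values)
        cost = rng.choice(seller_costs)
        bid = rng.uniform(0, value)    # "don't pay more than you can afford"
        ask = rng.uniform(cost, 200)   # "don't charge less than it cost you"
        if bid >= ask:
            prices.append((bid + ask) / 2)  # trade at the midpoint
    return prices

prices = simulate_market()
if prices:
    print(f"{len(prices)} trades, average price {sum(prices) / len(prices):.1f}")
```

A game could run something like this in the background for each town, and let story events shift the value and cost distributions (a dragon rumor raises every buyer's value for swords, a drought raises every seller's cost for bread).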

Why don’t game designers implement this? I think there are two reasons.

The first is simply that it would be more complicated. This is a legitimate concern in many cases; I think Pokemon, for instance, can justify using a simple economy, given its target audience. I also agree that having more than a handful of currencies would be too much for players to keep track of; though perhaps having two or three (one for each major faction?) is still more interesting than only having one.

Also, tabletop games are inherently more limited in the amount of computation they can use, compared to video games. But for a game as complicated as, say, Skyrim, this really isn’t much of a defense. Skyrim actually simulated the daily routines of over a hundred different non-player characters; it could have been simulating markets in the background as well—in fact, it could have simply had those same non-player characters buy and sell goods with each other in a double-auction market that would automatically generate the prices that players face.

The more important reason, I think, is that game designers have a paralyzing fear of arbitrage.

I find it particularly aggravating how frequently games will set it up so that the price at which you buy and the price at which you sell are constrained so that the buying price is always higher, often as much as twice as high. This is not at all how markets work in the real world; frankly it’s only even close to true for goods like cars that rapidly depreciate. It makes sense that a given merchant will not sell you a good for less than what they would pay to buy it from you; but that only requires each individual merchant to have a well-defined willingness-to-pay and willingness-to-accept. It certainly does not require the arbitrary constraint that you can never sell something for more than what you bought it for.

In fact, I would probably even allow players who specialize in social skills to short-change and bamboozle merchants for profit, as this is absolutely something that happens in the real world, and was likely especially common under the very low levels of literacy and numeracy that prevailed in the Middle Ages.

To many game designers (and gamers), the ability to buy a good in one place, travel to another place, and sell that good for a higher price seems like cheating. But this practice is called being a merchant. That is literally what the entire retail industry does. The rules of your game should allow you to profit from activities that are in fact genuinely profitable real economic services in the real world.

I remember a similar complaint being raised against Skyrim shortly after its release, that one could acquire a pickaxe, collect iron ore, smelt it into steel, forge weapons out of it, and then sell the weapons for a sizeable profit. To some people, this sounded like cheating. To me, it sounds like being a blacksmith. This is especially true because Skyrim’s skill system allowed you to improve the quality of your smithed items over time, just like learning a trade through practice (though it ramped up too fast, as it didn’t take long to make yourself clearly the best blacksmith in all of Skyrim). Frankly, this makes far more sense than being able to acquire gold by adventuring through the countryside and slaughtering monsters or collecting lost items from caves. Blacksmiths were a large part of the medieval economy; spelunking adventurers were not. Indeed, it bothers me that there weren’t more opportunities like this; you couldn’t make your wealth by being a farmer, a vintner, or a carpenter, for instance.

Even if you managed to pull off pure arbitrage, providing no real services, such as by buying and selling between two merchants in the same town, or the same merchant on two consecutive days, that is also a highly profitable industry. Most of our financial system is built around it, frankly. If you manage to make your wealth selling wheat futures instead of slaying dragons, I say more power to you. After all, there were an awful lot of wheat-future traders in the Middle Ages, and to my knowledge no actually successful dragon-slayers.

Of course, if your game is about slaying dragons, it should include some slaying of dragons. And if you really don’t care about making a realistic market in your game, so be it. But I think that more realistic markets could actually offer a great deal of richness and immersion into a world without greatly increasing the difficulty or complexity of the game. A world where prices change in response to the events of the story just feels more real, more alive.

The ability to profit without violence might actually draw whole new modes of play to the game (as has indeed occurred with Skyrim, where a small but significant proportion of players have chosen to live out peaceful lives as traders or blacksmiths). It would also enrich the experience of more conventional players and help them recover from setbacks (if the only way to make money is to fight monsters and you keep getting killed by monsters, there isn’t much you can do; but if you have the option of working as a trader or a carpenter for a while, you could save up for better equipment and try the fighting later).

And hey, game designers: If any of you are having trouble figuring out how to implement such a thing, my consulting fees are quite affordable.

Elasticity and the Law of Demand

JDN 2457289 EDT 21:04

This will be the second post in my new bite-size format, the first one that’s in the middle of the week.

I’ve alluded previously to the subject of demand elasticity, but I think it’s worth explaining in a little more detail. The basic concept is fairly straightforward: Demand is more elastic when the amount that people want to buy changes a large amount for a small change in price. The opposite is inelastic.

Apples are a relatively elastic good. If the price of apples goes up, people buy fewer apples. Maybe they buy other fruit instead, such as oranges or bananas; or maybe they give up on fruit and eat something else, like rice.

Salt is an extremely inelastic good. No matter what the price of salt is, at least within the range it has been for the last few centuries, people are going to continue to buy pretty much the same amount of salt. (In ancient times salt was actually expensive enough that people couldn’t afford enough of it, which was particularly harmful in desert regions. Mark Kurlansky’s book Salt on this subject is surprisingly compelling, given the topic.)

Specifically, the elasticity is equal to the proportional change in quantity demanded, divided by the proportional change in price.

For example, if the price of gas rises from $2 per gallon to $3 per gallon, that’s a 50% increase. If the quantity of gas purchased then falls from 100 billion gallons to 90 billion gallons, that’s a 10% decrease. If increasing the price by 50% decreased the quantity demanded by 10%, that would be a demand elasticity of -10%/50% = -1/5 = -0.2.
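That arithmetic is trivial to code up. Here's the same gas example as a one-liner in Python:

```python
def demand_elasticity(p0, p1, q0, q1):
    """Elasticity: proportional change in quantity demanded
    divided by proportional change in price."""
    return ((q1 - q0) / q0) / ((p1 - p0) / p0)

# Gas example: price $2 -> $3, quantity 100 billion -> 90 billion gallons.
print(demand_elasticity(2, 3, 100, 90))  # prints -0.2
```

(In practice economists often use the midpoint or "arc" version, which averages the two endpoints so the answer doesn't depend on which direction the price moved; the simple version above matches the example in the text.)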

In practice, measuring elasticity is more complicated than that, because supply and demand are both changing at the same time; so when we see a price change and a quantity change, it isn’t always clear how much of each change is due to supply and how much is due to demand. Sophisticated econometric techniques have been developed to try to separate these two effects (in future posts I plan to explain the basics of some of these techniques), but it’s difficult and not always successful.

In general, markets function better when supply and demand are more elastic. When shifts in price trigger large shifts in quantity, this creates pressure on the price to remain at a fixed level rather than jumping up and down. This in turn means that the market will generally be predictable and stable.

It’s also much harder to make monopoly profits in a market with elastic demand; even if you do have a monopoly, if demand is highly elastic then raising the price won’t make you any money, because whatever you gain in selling each gizmo for more, you’ll lose in selling fewer gizmos. In fact, the profit margin for a monopoly is inversely proportional to the elasticity of demand.
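That inverse relationship is known as the Lerner index: a profit-maximizing monopolist sets its margin (P - MC)/P equal to -1/elasticity, which is only well-defined when demand is elastic (|elasticity| > 1). A quick sketch, with illustrative elasticities of my own choosing:

```python
def monopoly_markup(elasticity):
    """Lerner index: a profit-maximizing monopolist sets
    (P - MC) / P = -1 / elasticity, so the profit margin is
    inversely proportional to the elasticity of demand.
    Only meaningful for elastic demand (elasticity < -1)."""
    return -1 / elasticity

# Highly elastic demand: thin margins. Barely elastic demand: fat ones.
print(monopoly_markup(-5))     # prints 0.2, a 20% margin
print(monopoly_markup(-1.25))  # prints 0.8, an 80% margin
```

As demand gets closer to perfectly inelastic (elasticity approaching zero from below), the formula's margin heads toward infinity—which is exactly the Daraprim story below.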

Markets do not function well when supply and demand are highly inelastic. Monopolies can become very powerful and result in very large losses of human welfare. A particularly vivid example of this was in the news recently, when a company named Turing purchased the rights to a drug called Daraprim used primarily by AIDS patients, then hiked the price from $13.50 to $750. This made enough people mad that the CEO has since promised to bring it back down, though he hasn’t said how far.

That price change was only possible because Daraprim has highly inelastic demand—if you’ve got AIDS, you’re going to take AIDS medicine, as much as prescribed, provided only that it doesn’t drive you completely bankrupt. (Not an unreasonable fear, as medical costs are the leading cause of bankruptcy in the United States.) This higher price probably would bankrupt a few people, but for the most part it wouldn’t affect the amount of drug sold; it would just funnel a huge amount of money from AIDS patients to the company. This is probably part of why it made people so mad; that and there would probably be a few people who died because they couldn’t afford this new expensive medication.

Imagine if a company had tried to pull the same stunt for a more elastic good, like apples. “CEO buys up all apple farms, raises price of apples from $2 per pound to $100 per pound.” What’s going to happen then? People are not going to buy any apples. Perhaps a handful of the most die-hard apple lovers still would, but the rest of us are going to meet our fruit needs elsewhere.

For most goods most of the time, elasticity of demand is negative, meaning that as price increases, quantity demanded decreases. This is in fact called the Law of Demand; but as I’ve said, “laws” in economics are like the Pirate Code: They’re really more what you’d call “guidelines”.

There are three major exceptions to the Law of Demand. The first one is the one most economists talk about, and it almost never happens. The second one is talked about occasionally, and it’s quite common. The third one is almost never talked about, and yet it is by far the most common and one of the central driving forces in modern capitalism.

The exception that we usually talk about in economics is called the Giffen Effect. A Giffen good is a good that’s so cheap and such a bare necessity that when it becomes more expensive, you won’t be able to buy less of it; instead you’ll buy more of it, and buy less of other things with your reduced income.

It’s very hard to come up with empirical examples of Giffen goods, but it’s an easy theoretical argument to make. Suppose you’re buying grapes for a party, and you know you need 4 bags of grapes. You have $10 to spend. Suppose there are green grapes selling for $1 per bag and red grapes selling for $4 per bag, and suppose you like red grapes better. With your $10, you can buy 2 bags of green grapes and 2 bags of red grapes, and that’s the 4 bags you need. But now suppose that the price of green grapes rises to $2 per bag. In order to afford 4 bags of grapes, you now need to buy 3 bags of green grapes and only 1 bag of red grapes. Even though it was the price of green grapes that rose, you ended up buying more green grapes. In this scenario, green grapes are a Giffen good.
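The arithmetic of that scenario is easy to check in code. This little sketch, using the budget and prices from the example, buys as many bags of the preferred red grapes as the budget allows while still covering the four bags needed:

```python
def grape_purchases(green_price, red_price=4, budget=10, bags_needed=4):
    """Buy as many red bags as affordable while still getting
    bags_needed total bags within the budget (illustrative only)."""
    for red in range(bags_needed, -1, -1):
        green = bags_needed - red
        if red * red_price + green * green_price <= budget:
            return green, red  # (green bags, red bags)
    return None  # can't afford 4 bags at all

print(grape_purchases(green_price=1))  # prints (2, 2): 2 green + 2 red for $10
print(grape_purchases(green_price=2))  # prints (3, 1): green got pricier, yet we buy MORE green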

The exception that is talked about occasionally and occurs a lot in real life is the Veblen Effect. Whereas a Giffen good is a very cheap bare necessity, a Veblen good is a very expensive pure luxury.

The whole point of buying a Veblen good is to prove that you can. You don’t buy a Ferrari because a Ferrari is a particularly nice automobile (a Prius is probably better, and a Tesla certainly is); you buy a Ferrari to show off that you’re so rich you can buy a Ferrari.

On my previous post, jenszorn asked: “Much of consumer behavior is irrational by your standards. But people often like to spend money just for the sake of spending and for showing off. Why else does a Rolex carry a price tag for $10,000 for a Rolex watch when a $100 Seiko keeps better time and requires far less maintenance?” Veblen goods! It’s not strictly true that buying Veblen goods is irrational; any particular individual’s best interest can be served by buying Veblen goods in order to signal their status and reap the benefits of that higher status. However, it’s definitely true that Veblen goods are inefficient; because ostentatious displays of wealth are a zero-sum game (it’s not about what you have, it’s about what you have that others don’t), any resources spent on rich people proving how rich they are are resources that society could otherwise have used for, say, feeding the poor, curing diseases, building infrastructure, or colonizing other planets.

Veblen goods can also result in a violation of the Law of Demand, because raising the price of a Veblen good like Ferraris or Rolexes can make them even better at showing off how rich you are, and therefore more appealing to the kind of person who buys them. Conversely, lowering the price might not result in any more being purchased, because they wouldn’t seem as impressive anymore. Currently a Ferrari costs about $250,000; if they reduced that figure to $100,000, there aren’t a lot of people who would suddenly find it affordable, but many people who currently buy Ferraris might switch to Bugattis or Lamborghinis instead. There are limits to this, of course: If the price of a Ferrari dropped to $2,000, people wouldn’t buy them to show off anymore; but the far larger effect would be the millions of people buying them because you can now get a perfectly good car for $2,000. Yes, I would sell my dear little Smart if it meant I could buy a Ferrari instead and save $8,000 at the same time.

But the third major exception to the Law of Demand is actually the most important one, yet it’s the one that economists hardly ever talk about: Speculation.

The most common reason why people would buy more of something that has gotten more expensive is that they expect it to continue getting more expensive, and then they will be able to sell what they bought at an even higher price and make a profit.

When the price of Apple stock goes up, do people stop buying Apple stock? On the contrary, they almost certainly start buying more—and then the price goes up even further still. If rising prices get self-fulfilling enough, you get an asset bubble; it grows and grows until one day it can’t, and then the bubble bursts and prices collapse again. This has happened hundreds of times in history, from the Tulip Mania to the Beanie Baby Bubble to the Dotcom Boom to the US Housing Crisis.

It isn’t necessarily irrational to participate in a bubble; some people must be irrational, but most people who buy above what they would otherwise be willing to pay do so by accurately predicting that they’ll find someone else who is willing to pay an even higher price later. It’s called Greater Fool Theory: The price I paid may be foolish, but I’ll find someone who is even more foolish to take it off my hands. But like Veblen goods, speculation goods are most definitely inefficient; nothing good comes from prices that rise and fall wildly out of sync with the real value of goods.

Speculation goods are all around us, from stocks to gold to real estate. Most speculation goods also serve some other function (though some, like gold, are really mostly just Veblen goods otherwise; actual useful applications of gold are extremely rare), but their speculative function often controls their price in a way that dominates all other considerations. There’s no real limit to how high or low the price can go for a speculation good; no longer tied to the real value of the good, it simply becomes a question of how much people decide to pay.

Indeed, speculation bubbles are one of the fundamental problems with capitalism as we know it; they are one of the chief causes of the boom-and-bust business cycle that has cost the world trillions of dollars and thousands of lives. Most of our financial industry is now dedicated to the trading of speculation goods, and finance is taking over a larger and larger section of our economy all the time. Many of the world’s best and brightest are being funneled into finance instead of genuinely productive industries; 15% of Harvard grads take a job in finance, and almost half did just before the crash. The vast majority of what goes on in our financial system is simply elaborations on speculation; very little real productivity ever enters into the equation.

In fact, as a general rule I think when we see a violation of the Law of Demand, we know that something is wrong in the economy. If there are Giffen goods, some people are too poor to buy what they really need. If there are Veblen goods, inequality is too large and people are wasting resources competing for status. And since there are always speculation goods, the history of capitalism has been a history of market instability.

Fortunately, elasticity of demand is usually negative: As the price goes up, people want to buy less. How much less is the elasticity.

How much should we save?

JDN 2457215 EDT 15:43.

One of the most basic questions in macroeconomics has oddly enough received very little attention: How much should we save? What is the optimal level of saving?

At the microeconomic level, how much you should save basically depends on what you think your income will be in the future. If you have more income now than you think you’ll have later, you should save now to spend later. If you have less income now than you think you’ll have later, you should spend now and dissave—save negatively, otherwise known as borrowing—and pay it back later. The life-cycle hypothesis says that people save when they are young in order to retire when they are old—in its strongest form, it says that we keep our level of spending constant across our lifetime at a value equal to our average income. The strongest form is utterly ridiculous and disproven by even the most basic empirical evidence, so usually the hypothesis is studied in a weaker form that basically just says that people save when they are young and spend when they are old—and even that runs into some serious problems.

The biggest problem, I think, is that the interest rate you receive on savings is always vastly less than the interest rate you pay on borrowing, which in turn is related to the fact that people are credit-constrained: they generally would like to borrow more than they actually can. It also has a lot to do with the fact that our financial system is an oligopoly; banks make more profits if they can pay savers less and charge borrowers more, and by colluding with each other they can control enough of the market that no major competitors can seriously undercut them. (There is some competition, however, particularly from credit unions—and if you compare these two credit card offers from University of Michigan Credit Union at 8.99%/12.99% and Bank of America at 12.99%/22.99% respectively, you can see the oligopoly in action as the tiny competitor charges you a much fairer price than the oligopoly beast. 9% means doubling in just under eight years, 13% means doubling in a little over five years, and 23% means doubling in three years.) Another very big problem with the life-cycle theory is that human beings are astonishingly bad at predicting the future, and thus our expectations about our future income can vary wildly from the actual future income we end up receiving. People who are wise enough to know that they do not know generally save more than they think they’ll need, which is called precautionary saving. Combine that with our limited capacity for self-control, and I’m honestly not sure the life-cycle hypothesis is doing any work for us at all.
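Those doubling times follow from compound interest. A quick sketch, assuming monthly compounding as credit cards typically use:

```python
import math

def doubling_years(apr, compounds_per_year=12):
    """Years for a balance to double at a given annual percentage rate,
    compounded monthly (the usual credit card convention)."""
    period_rate = apr / compounds_per_year
    return math.log(2) / (compounds_per_year * math.log(1 + period_rate))

for apr in (0.09, 0.13, 0.23):
    print(f"{apr:.0%}: doubles in {doubling_years(apr):.1f} years")
```

This gives roughly 7.7, 5.4, and 3.0 years respectively, matching the figures in the parenthetical above.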

But okay, let’s suppose we had a theory of optimal individual saving. That would still leave open a much larger question, namely optimal aggregate saving. The amount of saving that is best for each individual may not be best for society as a whole, and it becomes a difficult policy challenge to provide incentives to make people save the amount that is best for society.

Or it would be, if we had the faintest idea what the optimal amount of saving for society is. There’s a very simple rule-of-thumb that a lot of economists use, often called the golden rule (not to be confused with the actual Golden Rule, though I guess the idea is that a social optimum is a moral optimum), which is that we should save exactly the same amount as the share of capital in income. If capital receives one third of income (This figure of one third has been called a “law”, but as with most “laws” in economics it’s really more like the Pirate Code; labor’s share of income varies across countries and years. I doubt you’ll be surprised to learn that it is falling around the world, meaning more income is going to capital owners and less is going to workers.), then one third of income should be saved to make more capital for next year.
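The golden rule can be checked numerically in the standard Solow growth model. Here's a bare-bones sketch—no population or technology growth, and a made-up 5% depreciation rate—showing that the saving rate maximizing steady-state consumption is exactly capital's share:

```python
def steady_state_consumption(s, alpha=1/3, delta=0.05):
    """Solow model without growth: steady-state capital k solves
    s * k**alpha = delta * k; consumption is output not saved."""
    k = (s / delta) ** (1 / (1 - alpha))
    return (1 - s) * k ** alpha

# Grid-search for the saving rate that maximizes long-run consumption.
rates = [i / 100 for i in range(1, 100)]
best = max(rates, key=steady_state_consumption)
print(best)  # prints 0.33: the golden-rule saving rate equals capital's share
```

The calculus version of the same result: maximizing (1 - s)(s/δ)^(α/(1-α)) over s gives s = α, so if capital's share is one third, save one third.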

When you hear that, you should be thinking: “Wait. Saved to make more capital? You mean invested to make more capital.” And this is the great sleight of hand in the neoclassical theory of economic growth: Saving and investment are made to be the same by definition. It’s called the savings-investment identity. As I talked about in an earlier post, the model seems to be that there is only one kind of good in the world, and you either use it up or save it to make more.

But of course that’s not actually how the world works; there are different kinds of goods, and if people stop buying tennis shoes that doesn’t automatically lead to more factories built to make tennis shoes—indeed, quite the opposite. If people reduce their spending, the products they no longer buy will now accumulate on shelves and the businesses that make those products will start downsizing their production. If people increase their spending, the products they now buy will fly off the shelves and the businesses that make them will expand their production to keep up.

In order to make the savings-investment identity true by definition, the definition of investment has to be changed. Inventory accumulation, products building up on shelves, is counted as “investment” when of course it is nothing of the sort. Inventory accumulation is a bad sign for an economy; indeed the time when we see the most inventory accumulation is right at the beginning of a recession.

As a result of this bizarre definition of “investment” and its equation with saving, we get the famous Paradox of Thrift, which does indeed sound paradoxical in its usual formulation: “A global increase in marginal propensity to save can result in a reduction in aggregate saving.” But if you strip out the jargon, it makes a lot more sense: “If people suddenly stop spending money, companies will stop investing, and the economy will grind to a halt.” There’s still a bit of feeling of paradox from the fact that we tried to save more money and ended up with less money, but that isn’t too hard to understand once you consider that if everyone else stops spending, where are you going to get your money from?
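The stripped-of-jargon version shows up even in the simplest Keynesian cross model. In this toy sketch (with made-up numbers), when everyone tries to save a larger fraction of their income, total income falls until aggregate saving is right back where it started—and it would actually fall if investment also responded to income:

```python
def equilibrium(mpc, autonomous=50, investment=100):
    """Keynesian cross: Y = C + I, with consumption C = autonomous + mpc * Y.
    Solving for equilibrium income, then saving S = Y - C."""
    income = (autonomous + investment) / (1 - mpc)
    saving = income - (autonomous + mpc * income)
    return income, saving

# Raising everyone's saving rate means lowering the marginal propensity to consume.
for mpc in (0.9, 0.8):
    income, saving = equilibrium(mpc)
    print(f"MPC {mpc}: income {income:.0f}, aggregate saving {saving:.0f}")
```

Cutting the marginal propensity to consume from 0.9 to 0.8 halves equilibrium income (1500 to 750), yet aggregate saving stays pinned at 100, because in this model saving must equal the fixed level of investment. Everyone tried to save more; nobody ended up with more savings, just less income.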

So what if something like this happens, we all try to save more and end up having no money? The government could print a bunch of money and give it to people to spend, and then we’d have money, right? Right. Exactly right, in fact. You now understand monetary policy better than most policymakers. Like a basic income, for many people it seems too simple to be true; but in a nutshell, that is Keynesian monetary policy. When spending falls and the economy slows down as a result, the government should respond by expanding the money supply so that people start spending again. In practice they usually expand the money supply by a really bizarre roundabout way, buying and selling bonds in open market operations in order to change the interest rate that banks charge each other for loans of reserves, the Fed funds rate, in the hopes that banks will change their actual lending interest rates and more people will be able to borrow, thus, ultimately, increasing the money supply (because, remember, banks don’t have the money they lend you—they create it).

We could actually just print some money and give it to people (or rather, change a bunch of numbers in an IRS database), but this is very unpopular, particularly among people like Ron Paul and other gold-bug Republicans who don’t understand how monetary policy works. So instead we try to obscure the printing of money behind a bizarre chain of activities, opening many more opportunities for failure: Chiefly, we can hit the zero lower bound where interest rates are zero and can’t go any lower (or can they?), or banks can be too stingy and decide not to lend, or people can be too risk-averse and decide not to borrow; and that’s not even to mention the redistribution of wealth that happens when all the money you print is given to banks. When that happens we turn to “unconventional monetary policy”, which basically just means that we get a little bit more honest about the fact that we’re printing money. (Even then you get articles like this one insisting that quantitative easing isn’t really printing money.)

I don’t know, maybe there’s actually some legitimate reason to do it this way—I do have to admit that when governments start openly printing money it often doesn’t end well. But really the question is why you’re printing money, whom you’re giving it to, and above all how much you are printing. Weimar Germany printed money to pay off odious war debts (because it totally makes sense to force a newly-established democratic government to pay the debts incurred by belligerent actions of the monarchy they replaced; surely one must repay one’s debts). Hungary printed money to pay for rebuilding after the devastation of World War 2. Zimbabwe printed money to pay for a war (I’m sensing a pattern here) and compensate for failed land reform policies. In all three cases the amount of money they printed was literally billions of times their original money supply. Yes, billions. They found their inflation cascading out of control and instead of stopping the printing, they printed even more. The United States has so far printed only about three times our original monetary base, still only about a third of our total money supply. (Monetary base is the part that the Federal Reserve controls; the rest is created by banks. Typically 90% of our money is not monetary base.) Moreover, we did it for the right reasons—in response to deflation and depression. That is why, as Matthew O’Brien of The Atlantic put it so well, the US can never be Weimar.

I was supposed to be talking about saving and investment; why am I talking about money supply? Because investment is driven by the money supply. It’s not driven by saving, it’s driven by lending.

Now, part of the underlying theory was that lending and saving are supposed to be tied together, with money lent coming out of money saved; this is true if you assume that things are in a nice tidy equilibrium. But we never are, and frankly I’m not sure we’d want to be. In order to reach that equilibrium, we’d either need to have full-reserve banking, or banks would have to otherwise have their lending constrained by insufficient reserves; either way, we’d need to have a constant money supply. Any dollar that could be lent, would have to be lent, and the whole debt market would have to be entirely constrained by the availability of savings. You wouldn’t get denied for a loan because your credit rating is too low; you’d get denied for a loan because the bank would literally not have enough money available to lend you. Banking would have to be perfectly competitive, so if one bank can’t do it, no bank can. Interest rates would have to precisely match the supply and demand of money in the same way that prices are supposed to precisely match the supply and demand of products (and I think we all know how well that works out). This is why it’s such a big problem that most macroeconomic models literally do not include a financial sector. They simply assume that the financial sector is operating at such perfect efficiency that money in equals money out always and everywhere.

So, recognizing that saving and investment are in fact not equal, we now have two separate questions: What is the optimal rate of saving, and what is the optimal rate of investment? For saving, I think the question is almost meaningless; individuals should save according to their future income (since they’re so bad at predicting it, we might want to encourage people to save extra, as in programs like Save More Tomorrow), but the aggregate level of saving isn’t an important question. The important question is the aggregate level of investment, and for that, I think there are two ways of looking at it.

The first way is to go back to that original neoclassical growth model and realize that it makes a lot more sense if the s term we called “saving” is actually read as “investment”; in that case, perhaps we should indeed invest the same proportion of income as the share of income that goes to capital. An interesting, if draconian, way to do so would be to actually require this: all and only capital income may be used for business investment. Labor income must be used for other things, and capital income can’t be used for anything else. The days of yachts bought on stock options would be over forever—though so would the days of striking it rich by putting your paycheck into a tech stock. Due to the extreme restrictions on individual freedom, I don’t think we should actually do such a thing; but it’s an interesting thought that might lead to an actual policy worth considering.
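The claim that we should invest the same share of income as goes to capital is the golden-rule result from the Solow model. A quick sketch, assuming Cobb-Douglas production in per-worker terms, $f(k) = k^\alpha$, so that $\alpha$ is capital’s share of income:

```latex
% Steady state: investment just covers depreciation and population growth
s\,f(k^{*}) = (n+\delta)\,k^{*}
% Steady-state consumption per worker
c^{*} = f(k^{*}) - (n+\delta)\,k^{*}
% Maximizing c^{*} over k^{*} gives the golden-rule condition
f'(k_{\mathrm{gold}}) = n+\delta
% With f(k)=k^{\alpha}, we have f'(k) = \alpha\,f(k)/k, while the
% steady-state condition gives n+\delta = s\,f(k)/k; equating them:
s_{\mathrm{gold}}\,\frac{f(k_{\mathrm{gold}})}{k_{\mathrm{gold}}}
  = \alpha\,\frac{f(k_{\mathrm{gold}})}{k_{\mathrm{gold}}}
  \quad\Longrightarrow\quad s_{\mathrm{gold}} = \alpha
```

So if capital’s share of income is, say, one third, the golden-rule prescription is to invest one third of output—exactly the “invest capital’s share” rule described above.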

But a second way that might actually be better—since even though the model makes more sense this way, it still has a number of serious flaws—is to think about what we might actually do in order to increase or decrease investment, and then consider the costs and benefits of each of those policies. The simplest case to analyze is if the government invests directly—and since the most important investments like infrastructure, education, and basic research are usually done this way, it’s definitely a useful example. How is the government going to fund this investment in, say, a nuclear fusion project? They have four basic ways: Cut spending somewhere else, raise taxes, print money, or issue debt. If you cut spending, the question is whether the spending you cut is more or less important than the investment you’re making. If you raise taxes, the question is whether the harm done by the tax (which generally comes in two flavors: first there’s the direct effect of taking someone’s money so they can’t use it now, and second there’s the distortions created in the market that may make it less efficient) is outweighed by the new project. If you print money or issue debt, it’s a subtler question, since you are no longer pulling from any individual person or project but rather from the economy as a whole. Actually, if your economy has unused capacity as in a depression, you aren’t pulling from anywhere—you’re simply adding new value basically from thin air, which is why deficit spending in depressions is such a good idea. (More precisely, you’re putting resources to use that were otherwise going to lie fallow—to go back to my earlier example, the tennis shoes will no longer rest on the shelves.) But if you do not have sufficient unused capacity, you will get crowding-out; new debt will raise interest rates and make other investments more expensive, while printing money will cause inflation and make everything more expensive. So you need to weigh that cost against the benefit of your new investment and decide whether it’s worth it.

This second way is of course a lot more complicated, a lot messier, a lot more controversial. It would be a lot easier if we could just say: “The target investment rate should be 33% of GDP.” But even then the question would remain as to which investments to fund, and which consumption to pull from. The abstraction of simply dividing the economy into “consumption” versus “investment” leaves out matters of the utmost importance; Paul Allen’s 400-foot yacht and food stamps for children are both “consumption”, but taxing the former to pay for the latter seems not only justified but outright obligatory. The Bridge to Nowhere and the Human Genome Project are both “investment”, but I think we all know which one had a higher return for human society. The neoclassical model basically assumes that the optimal choices for consumption and investment are decided automatically (automagically?) by the inscrutable churnings of the free market, but clearly that simply isn’t true.

In fact, it’s not always clear what exactly constitutes “consumption” versus “investment”, and the particulars of answering that question may distract us from answering the questions that actually matter. Is a refrigerator investment because it’s a machine you buy that sticks around and does useful things for you? Or is it consumption because consumers buy it and you use it for food? Is a car an investment because it’s vital to getting a job? Or is it consumption because you enjoy driving it? Someone could probably argue that the appreciation on Paul Allen’s yacht makes it an investment, for instance. Feeding children really is an investment, in their so-called “human capital” that will make them more productive for the rest of their lives. Part of the money that went to the Human Genome Project surely paid some graduate student who then spent part of his paycheck on a keg of beer, which would make it consumption. And so on. The important question really isn’t “Is this consumption or investment?” but “Is this worth doing?” And thus, the best answer to the question, “How much should we save?” may be: “Who cares?”