We do not benefit from economic injustice.

JDN 2457461

Recently I think I figured out why so many middle-class White Americans express so much guilt about global injustice: A lot of people seem to think that we actually benefit from it. Thus, they feel caught between a rock and a hard place; conquering injustice would mean undermining their own already precarious standard of living, while leaving it in place is unconscionable.

The compromise, apparently, is to feel really, really guilty about it, constantly tell people to “check their privilege” in this bizarre form of trendy autoflagellation, and then… never really get around to doing anything about the injustice.

(I guess that’s better than the conservative interpretation, which seems to be that since we benefit from this, we should keep doing it, and make sure we elect big, strong leaders who will make that happen.)

So let me tell you in no uncertain words: You do not benefit from this.

If anyone does—and as I’ll get to in a moment, that is not even necessarily true—then it is the billionaires who own the multinational corporations that orchestrate these abuses. Billionaires and billionaires only stand to gain from the exploitation of workers in the US, China, and everywhere else.

How do I know this with such certainty? Allow me to explain.

First of all, it is a common perception that prices of goods would be unattainably high if they were not produced on the backs of sweatshop workers. This perception is mistaken. The primary effect of the exploitation is simply to raise the profits of the corporation; there is a secondary effect of raising the price a moderate amount; and even this would be overwhelmed by the long-run dynamic effect of the increased consumer spending if workers were paid fairly.

Let’s take an iPad, for example. The price of iPads varies around the world in a combination of purchasing power parity and outright price discrimination; but the top model almost never sells for less than $500. The raw material expenditure involved in producing one is about $370—and the labor expenditure? Just $11. Not $110; $11. If it had been $110, the price could still be kept under $500 and turn a profit; the profit would simply be much smaller. That is, even if demand were really so elastic that Americans would refuse to buy an iPad at any price above $500, Apple could still afford to raise the wages they pay (or rather, their subcontractors pay) workers by an order of magnitude. A worker who currently works 50 hours a week for $10 per day could instead make $10 per hour. And the price would not have to change; Apple would simply lose profit, which is why they don’t do this. In the absence of pressure to the contrary, corporations will do whatever they can to maximize profits.

Now, in fact, the price probably would go up, because Apple fans are among the most inelastic technology consumers in the world. But suppose it went up to $600, which would mean a 1:1 absorption of these higher labor expenditures into price. Does that really sound like “Americans could never afford this”? A few people right on the edge might decide they couldn’t buy it at that price, but it wouldn’t be very many—indeed, like any well-managed monopoly, Apple knows to stop raising the price at the point where they start losing more revenue than they gain.

Similarly, half the price of an iPhone is pure profit for Apple, and only 2% goes into labor. Once again, wages could be raised by an order of magnitude and the price would not need to change.
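The arithmetic here is simple enough to check directly. Here is a minimal sketch in Python using the rough iPad figures quoted above (the $500 price, $370 materials, and $11 labor are the text’s estimates; treating everything left over as a single “profit” residual is my simplification):

```python
# Rough iPad unit economics from the text: $500 price, $370 materials,
# $11 labor. Treating everything else as profit is a simplification.
price, materials, labor = 500.0, 370.0, 11.0
profit = price - materials - labor            # $119 per unit

# Multiply labor costs by ten while holding the price fixed:
labor_fair = 10 * labor                       # $110
profit_fair = price - materials - labor_fair  # $20 per unit: smaller, but still positive
```

Even under this crude accounting, a tenfold wage increase leaves a positive margin at an unchanged price; the entire adjustment comes out of profit.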

Apple is a particularly obvious example, but it’s quite simple to see why exploitative labor cannot be the source of improved economic efficiency. Paying workers less does not make them do better work. Treating people more harshly does not improve their performance. Quite the opposite: People work much harder when they are treated well. In addition, at the levels of income we’re talking about, small improvements in wages would result in substantial improvements in worker health, further improving performance. Finally, substitution effect dominates income effect at low incomes. At very high incomes, income effect can dominate substitution effect, so higher wages might result in less work—but it is precisely when we’re talking about poor people that it makes the least sense to say they would work less if you paid them more and treated them better.

At most, paying higher wages can redistribute existing wealth, if we assume that the total amount of wealth does not increase. So it’s theoretically possible that paying higher wages to sweatshop workers would result in them getting some of the stuff that we currently have (essentially by a price mechanism where the things we want get more expensive, but our own wages don’t go up). But in fact our wages are most likely too low as well—wages in the US have become unlinked from productivity, around the time of Reagan—so there’s reason to think that a more just system would improve our standard of living also. Where would all the extra wealth come from? Well, there’s an awful lot of room at the top.

The top 1% in the US own 35% of net wealth, about as much as the bottom 95%. The 400 billionaires of the Forbes list have more wealth than the entire African-American population combined. (We’re double-counting Oprah—but that’s it, she’s the only African-American billionaire in the US.) So even assuming that the total amount of wealth remains constant (which is too conservative, as I’ll get to in a moment), improving global labor standards wouldn’t need to pull any wealth from the middle class; it could get plenty just from the top 0.01%.

In surveys, most Americans are willing to pay more for goods in order to improve labor standards—and the amounts that people are willing to pay, while they may seem small (on the order of 10% to 20% more), are in fact clearly enough that they could substantially increase the wages of sweatshop workers. The biggest problem is that corporations are so good at covering their tracks that it’s difficult to know whether you are really supporting higher labor standards. The multiple layers of international subcontractors make things even more complicated; the people who directly decide the wages are not the people who ultimately profit from them, because subcontractors are competitive while the multinationals that control them are monopsonists.

But for now I’m not going to deal with the thorny question of how we can actually regulate multinational corporations to stop them from using sweatshops. Right now, I just really want to get everyone on the same page and be absolutely clear about cui bono. If there is a benefit at all, it’s not going to you and me.

Why do I keep saying “if”? As so many people will ask me: “Isn’t it obvious that if one person gets less money, someone else must get more?” If you’ve been following my blog at all, you know that the answer is no.

On a single transaction, with everything else held constant, that is true. But we’re not talking about a single transaction. We’re talking about a system of global markets. Indeed, we’re not really talking about money at all; we’re talking about wealth.

By paying their workers so little that those workers can barely survive, corporations are making it impossible for those workers to go out and buy things of their own. Since the costs of higher wages are concentrated in one corporation while the benefits of higher wages are spread out across society, there is a Tragedy of the Commons where each corporation acting in its own self-interest undermines the consumer base that would have benefited all corporations (not to mention people who don’t own corporations). It does depend on some parameters we haven’t measured very precisely, but under a wide range of plausible values, it works out that literally everyone is worse off under this system than they would have been under a system of fair wages.

This is not simply theoretical. We have empirical data about what happened when companies (in the US at least) stopped using an even more extreme form of labor exploitation: slavery.

Because we were on the classical gold standard, GDP growth in the US in the 19th century was extremely erratic, jumping up and down as high as 10 lp and as low as -5 lp. But if you try to smooth out this roller-coaster business cycle, you can see that our growth rate did not appear to be slowed by the ending of slavery:

US_GDP_growth_1800s


Looking at the level of real per capita GDP (on a log scale) shows a continuous growth trend as if nothing had changed at all:

US_GDP_per_capita_1800s

In fact, if you average the growth rates (in log points, averaging makes sense) from 1800 to 1860 as antebellum and from 1865 to 1900 as postbellum, you find that the antebellum growth rate averaged 1.04 lp, while the postbellum growth rate averaged 1.77 lp. Over a period of 50 years, that’s the difference between growing by a factor of 1.7 and growing by a factor of 2.4. Of course, there were a lot of other factors involved besides the end of slavery—but at the very least it seems clear that ending slavery did not reduce economic growth, which it would have if slavery were actually an efficient economic system.
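Since the post works in log points throughout, that averaging is easy to reproduce. Here is a quick sketch in Python (the 1.04 lp and 1.77 lp averages are the figures computed in the text; `growth_factor` is just a helper name of my own):

```python
import math

def growth_factor(lp_per_year, years):
    """Cumulative growth factor implied by an average growth rate in log points per year."""
    return math.exp(lp_per_year / 100.0 * years)

# Average 19th-century US growth rates from the text, compounded over 50 years:
antebellum = growth_factor(1.04, 50)   # roughly 1.7
postbellum = growth_factor(1.77, 50)   # roughly 2.4
```

This is the convenience of log points: averages compound by simple multiplication inside the exponent, which percentage growth rates do not.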

This is a different question from whether slaveowners were irrational in continuing to own slaves. Purely on the basis of individual profit, it was most likely rational to own slaves. But the broader effects on the economic system as a whole were strongly negative. I think that part of why the debate on whether slavery is economically inefficient has never been settled is a confusion between these two questions. One side says “Slavery damaged overall economic growth.” The other says “But owning slaves produced a rate of return for investors as high as manufacturing!” Yeah, those… aren’t answering the same question. They are in fact probably both true. Something can be highly profitable for individuals while still being tremendously damaging to society.

I don’t mean to imply that sweatshops are as bad as slavery; they are not. (Though there is still slavery in the world, and some sweatshops tread a fine line.) What I’m saying is that showing that sweatshops are profitable (no doubt there) or even that they are better than most of the alternatives for their workers (probably true in most cases) does not show that they are economically efficient. Sweatshops are beneficent exploitation: they make workers better off, but in an obviously unjust way. And they only make workers better off compared to the current alternatives; if they were replaced with industries paying fair wages, workers would obviously be much better off still.

And my point is, so would we. While the prices of goods would increase slightly in the short run, in the long run the increased consumer spending by people in Third World countries—which soon would cease to be Third World countries, as happened in Korea and Japan—would result in additional trade with us that would raise our standard of living, not lower it. The only people it is even plausible to think would be harmed are the billionaires who own our multinational corporations; and yet even they might stand to benefit from the improved efficiency of the global economy.

No, you do not benefit from sweatshops. So stop feeling guilty, stop worrying so much about “checking your privilege”—and let’s get out there and do something about it.

Will robots take our jobs?

JDN 2457451
I briefly discussed this topic before, but I thought it deserved a little more depth. Also, the SF author in me really likes writing this sort of post where I get to speculate about futures that are utopian, dystopian, or (most likely) somewhere in between.

The fear is quite widespread, but how realistic is it? Will robots in fact take all our jobs?

Most economists do not think so. Robert Solow famously quipped, “You can see the computer age everywhere but in the productivity statistics.” (It never quite seemed to occur to him that this might be a flaw in the way we measure productivity statistics.)

By the usual measure of labor productivity, robots do not appear to have had a large impact. Indeed, their impact appears to have been smaller than almost any other major technological innovation.

Using BLS data (which was formatted badly and thus a pain to clean, by the way—albeit not as bad as the World Bank data I used on my master’s thesis, which was awful), I made this graph of the growth rate of labor productivity as usually measured:

Productivity_growth

The fluctuations are really jagged due to measurement errors, so I also made an annually smoothed version:

Productivity_growth_smooth

Based on this standard measure, productivity has grown more or less steadily during my lifetime, fluctuating with the business cycle around a value of about 3.5% per year (3.4 log points). If anything, the growth rate seems to be slowing down; in recent years it’s been around 1.5% (1.5 lp).

This was clearly the time during which robots became ubiquitous—autonomous robots did not emerge until the 1970s and 1980s, and robots became widespread in factories in the 1980s. Then there’s the fact that computing power has been doubling every 1.5 years during this period, which is an annual growth rate of 59% (46 lp). So why hasn’t productivity grown at anywhere near that rate?
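The conversions between doubling times, percentage rates, and log points used above are straightforward to verify (the helper below is my own, not a standard library function):

```python
import math

def annual_rate(doubling_years):
    """Annual growth rate (as a fraction) of a quantity that doubles every `doubling_years` years."""
    return 2 ** (1.0 / doubling_years) - 1

# Computing power doubling every 1.5 years:
rate = annual_rate(1.5)          # about 0.59, i.e. roughly 59% per year
lp = 100 * math.log(1 + rate)    # the same rate in log points, about 46
```

Contrast 46 lp per year for computing power with the 3.4 lp measured for labor productivity, and the puzzle in the next paragraph becomes stark.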

I think the main problem is that we’re measuring productivity all wrong. We measure it in terms of money instead of in terms of services. Yes, we try to correct for inflation; but we fail to account for the fact that computers have allowed us to perform literally billions of services every day that could not have been performed without them. You can’t adjust that away by plugging into the CPI or the GDP deflator.

Think about it: Your computer provides you the services of all the following:

  1. A decent typesetter and layout artist
  2. A truly spectacular computer (remember, that used to be a profession!)
  3. A highly skilled statistician (who takes no initiative—you must tell her what calculations to do)
  4. A painting studio
  5. A photographer
  6. A video camera operator
  7. A professional orchestra of the highest quality
  8. A decent audio recording studio
  9. Thousands of books, articles, and textbooks
  10. Ideal seats at every sports stadium in the world

And that’s not even counting things like social media and video games that can’t even be readily compared to services that were provided before computers.

If you added up the value of all of those jobs, the amount you would have had to pay in order to hire all those people to do all those things for you before computers existed, your computer easily provides you with at least $1 million in professional services every year. Put another way, your computer has taken jobs that would have provided $1 million in wages. You do the work of a hundred people with the help of your computer.

This isn’t counted in our productivity statistics precisely because it’s so efficient. If we still had to pay that much for all these services, it would be included in our GDP and then our GDP per worker would properly reflect all this work that is getting done. But then… whom would we be paying? And how would we have enough to pay that? Capitalism isn’t actually set up to handle this sort of dramatic increase in productivity—no system is, really—and thus the market price for work has almost no real relation to the productive capacity of the technology that makes that work possible.

Instead it has to do with scarcity of work—if you are the only one in the world who can do something (e.g. write Harry Potter books), you can make an awful lot of money doing that thing, while something that is far more important but can be done by almost anyone (e.g. feed babies) will pay nothing or next to nothing. At best we could say it has to do with marginal productivity, but marginal in the sense of your additional contribution over and above what everyone else could already do—not in the sense of the value actually provided by the work that you are doing. Anyone who thinks that markets automatically reward hard work or “pay you what you’re worth” clearly does not understand how markets function in the real world.

So, let’s ask again: Will robots take our jobs?

Well, they’ve already taken many jobs. There isn’t even a clear high-skill/low-skill dichotomy here; robots are just as likely to make pharmacists obsolete as they are truck drivers, just as likely to replace surgeons as they are cashiers.

Labor force participation is declining, though slowly:

Labor_force_participation

Yet I think this also underestimates the effect of technology. As David Graeber points out, most of the new jobs we’ve been creating seem to be, for lack of a better term, bullshit jobs—jobs that really don’t seem like they need to be done, other than to provide people with something to do so that we can justify paying them salaries.

As he puts it:

Again, an objective measure is hard to find, but one easy way to get a sense is to ask: what would happen were this entire class of people to simply disappear? Say what you like about nurses, garbage collectors, or mechanics, it’s obvious that were they to vanish in a puff of smoke, the results would be immediate and catastrophic. A world without teachers or dock-workers would soon be in trouble, and even one without science fiction writers or ska musicians would clearly be a lesser place. It’s not entirely clear how humanity would suffer were all private equity CEOs, lobbyists, PR researchers, actuaries, telemarketers, bailiffs or legal consultants to similarly vanish. (Many suspect it might markedly improve.)

The paragon of all bullshit jobs is sales. Sales is a job that simply should not exist. If something is worth buying, you should be able to present it to the market and people should choose to buy it. If there are many choices for a given product, maybe we could have some sort of independent product rating agencies that decide which ones are the best. But sales means trying to convince people to buy your product—you have an absolutely overwhelming conflict of interest that makes your statements to customers so utterly unreliable that they are literally not even information anymore. The vast majority of advertising, marketing, and sales is thus, in a fundamental sense, literally noise. Sales contributes absolutely nothing to our economy; indeed, because we spend so much effort on it and advertising occupies so much of our time and attention, it takes a great deal away. Yet sales is one of our most steadily growing labor sectors; once we figure out how to make things without people, we employ the people in trying to convince customers to buy the new things we’ve made. Sales is also absolutely miserable for many of the people who do it, as I know from personal experience in two different sales jobs that I had to quit before the end of the first week.

Fortunately we have not yet reached the point where sales is the fastest growing labor sector. Currently the fastest-growing jobs fall into three categories: Medicine, green energy, and of course computers—but actually mostly medicine. Yet even this is unlikely to last; one of the easiest ways to reduce medical costs would be to replace more and more medical staff with automated systems. A nursing robot may not be quite as pleasant as a real professional nurse—but if by switching to robots the hospital can save several million dollars a year, they’re quite likely to do so.

Certain tasks are harder to automate than others—particularly anything requiring creativity and originality is very hard to replace, which is why I believe that in the 2050s or so there will be a Revenge of the Humanities Majors as all the supposedly so stable and forward-thinking STEM jobs disappear and the only jobs that are left are for artists, authors, musicians, game designers and graphic designers. (Also, by that point, very likely holographic designers, VR game designers, and perhaps even neurostim artists.) Being good at math won’t mean anything anymore—frankly it probably shouldn’t right now. No human being, not even great mathematical savants, is anywhere near as good at arithmetic as a pocket calculator. There will still be a place for scientists and mathematicians, but it will be the creative aspects of science and math that persist—design of experiments, development of new theories, mathematical intuition to develop new concepts. The grunt work of cleaning data and churning through statistical models will be fully automated.

Most economists appear to believe that we will continue to find tasks for human beings to perform, and this improved productivity will simply raise our overall standard of living. As any ECON 101 textbook will tell you, “scarcity is a fundamental fact of the universe, because human needs are unlimited and resources are finite.”

In fact, neither of those claims is true. Human needs are not unlimited; indeed, on Maslow’s hierarchy of needs First World countries have essentially reached the point where we could provide the entire population with the whole pyramid, guaranteed, all the time—if we were willing and able to fundamentally reform our economic system.

Resources are not even finite; what constitutes a “resource” depends on technology, as does how accessible or available any given source of resources will be. When we were hunter-gatherers, our only resources were the plants and animals around us. Agriculture turned seeds and arable land into a vital resource. Whale oil used to be a major scarce resource, until we found ways to use petroleum. Petroleum in turn is becoming increasingly irrelevant (and cheap) as solar and wind power mature. Soon the waters of the oceans themselves will be our power source as we refine the deuterium for fusion. Eventually we’ll find we need something for interstellar travel that we used to throw away as garbage (perhaps it will in fact be dilithium!). I suppose that if the universe is finite or if FTL is impossible, we will be bound by what is available in the cosmic horizon… but even that is not finite, as the universe continues to expand! If the universe is open (as it probably is) and one day we can harness the dark energy that seethes through the ever-expanding vacuum, our total energy consumption can grow without bound just as the universe does. Perhaps we could even stave off the heat death of the universe this way—we after all have billions of years to figure out how.

If scarcity were indeed this fundamental law that we could rely on, then more jobs would always continue to emerge, producing whatever is next on the list of needs ordered by marginal utility. Life would always get better, but there would always be more work to be done. But in fact, we are basically already at the point where our needs are satiated; we continue to try to make more not because there isn’t enough stuff, but because nobody will let us have it unless we do enough work to convince them that we deserve it.

We could continue on this route, making more and more bullshit jobs, pretending that this is work that needs to be done so that we don’t have to adjust our moral framework which requires that people be constantly working for money in order to deserve to live. It’s quite likely in fact that we will, at least for the foreseeable future. In this future, robots will not take our jobs, because we’ll make up excuses to create more.

But that future is more on the dystopian end, in my opinion; there is another way, a better way, the world could be. As technology makes it ever easier to produce as much wealth as we need, we could learn to share that wealth. As robots take our jobs, we could get rid of the idea of jobs as something people must have in order to live. We could build a new economic system: One where we don’t ask ourselves whether children deserve to eat before we feed them, where we don’t expect adults to spend most of their waking hours pushing papers around in order to justify letting them have homes, where we don’t require students to take out loans they’ll need decades to repay before we teach them history and calculus.

This second vision is admittedly utopian, and perhaps in the worst way—perhaps there’s simply no way to make human beings actually live like this. Perhaps our brains, evolved for the all-too-real scarcity of the ancient savannah, simply are not plastic enough to live without that scarcity, and so create imaginary scarcity by whatever means they can. It is indeed hard to believe that we can make so fundamental a shift. But for a Homo erectus in 500,000 BP, the idea that our descendants would one day turn rocks into thinking machines that travel to other worlds would be pretty hard to believe too.

Will robots take our jobs? Let’s hope so.

Tax incidence revisited, part 2: How taxes affect prices

JDN 2457341

One of the most important aspects of taxation is also one of the most counter-intuitive and (relatedly) least-understood: Taxes are not externally applied to pre-existing exchanges of money. Taxes endogenously interact with the system of prices, changing what the prices will be and then taking a portion of the money exchanged.

The price of something “before taxes” is not actually the price you would pay for it if there had been no taxes on it. Your “pre-tax income” is not actually the income you would have had if there were no income or payroll taxes.

The most obvious case to consider is that of government employees: If there were no taxes, public school teachers could not exist, so the “pre-tax income” of a public school teacher is a meaningless quantity. You don’t “take taxes out” of a government salary; you decide how much money the government employee will actually receive, and then at the same time allocate a certain amount into other budgets based on the tax code—a certain amount into the state general fund, a certain amount into the Social Security Trust Fund, and so on. These two actions could in principle be done completely separately; instead of saying that a teacher has a “pre-tax salary” of $50,000 and is taxed 20%, you could simply say that the teacher receives $40,000 and pay $10,000 into the appropriate other budgets.

In fact, when there is a conflict of international jurisdiction this is sometimes literally what we do. Employees of the World Bank are given immunity from all income and payroll taxes (effectively, diplomatic immunity, though this is not usually how we use the term) based on international law, except for US citizens, who have their taxes paid for them by the World Bank. As a result, all World Bank salaries are quoted “after-tax”, that is, the actual amount of money employees will receive in their paychecks. Consequently, a $120,000 salary at the World Bank is considerably higher than a $120,000 salary at Goldman Sachs; the latter would amount to only (“only”) about $96,000 after taxes.

For private-sector salaries, it’s not as obvious, but it’s still true. There is actually someone who pays that “before-tax” salary—namely, the employer. “Pre-tax” salaries are actually a measure of labor expenditure (sometimes erroneously called “labor costs”, even by economists—but a true labor cost is the amount of effort, discomfort, stress, and opportunity cost involved in doing labor; it’s an amount of utility, not an amount of money). The salary “before tax” is the amount of money that the employer has to come up with in order to pay their payroll. It is a real amount of money being exchanged, divided between the employee and the government.

The key thing to realize is that salaries are not set in a vacuum. There are various economic (and political) pressures which drive employers to set different salaries. In the real world, there are all sorts of pressures that affect salaries: labor unions, regulations, racist and sexist biases, nepotism, psychological heuristics, employees with different levels of bargaining skill, employers with different concepts of fairness or levels of generosity, corporate boards concerned about public relations, shareholder activism, and so on.

But even if we abstract away from all that for a moment and just look at the fundamental economics, assuming that salaries are set at the price the market will bear, that price depends upon the tax system.

This is because taxes effectively drive a wedge between supply and demand.

Indeed, on a graph, it actually looks like a wedge, as you’ll see in a moment.

Let’s pretend that we’re in a perfectly competitive market. Everyone is completely rational, we all have perfect information, and nobody has any power to manipulate the market. We’ll even assume that we are dealing with hourly wages and we can freely choose the number of hours worked. (This is silly, of course; but removing this complexity helps to clarify the concept and doesn’t change the basic result that prices depend upon taxes.)

We’ll have a supply curve, which is a graph of the minimum price the worker is willing to accept for each hour in order to work a given number of hours. We generally assume that the supply curve slopes upward, meaning that people are willing to work more hours if you offer them a higher wage for each hour. The idea is that it gets progressively harder to find the time—it eats into more and more important alternative activities. (This is in fact a gross oversimplification, but it’ll do for now. In the real world, labor is the one thing for which the supply curve frequently bends backward.)

supply_curve

We’ll also have a demand curve, which is a graph of the maximum price the employer is willing to pay for each hour, if the employee works that many hours. We generally assume that the demand curve slopes downward, meaning that the employer is willing to pay less for each hour if the employee works more hours. The reason is that most activities have diminishing marginal returns, so each extra hour of work generally produces less output than the previous hour, and is therefore not worth paying as much for. (This too is an oversimplification, as I discussed previously in my post on the Law of Demand.)

demand_curve

Put these two together, and in a competitive market the price will be set at the point at which supply is equal to demand, so that the very last hour of work was worth exactly what the employer paid for it. That last hour is just barely worth it to the employer, and just barely worth it to the worker; any additional time would either be too expensive for the employer or not lucrative enough for the worker. But for all the previous hours, the value to the employer is higher than the wage, and the cost to the worker is lower than the wage. As a result, both the employer and the worker benefit.

equilibrium_notax

But now, suppose we implement a tax. For concreteness, suppose the previous market-clearing wage was $20 per hour, the worker was working 40 hours, and the tax is 20%. If the employer still offers a wage of $20 for 40 hours of work, the worker is no longer going to accept it, because they will only receive $16 per hour after taxes, and $16 isn’t enough for them to be willing to work 40 hours. The worker could ask for a pre-tax wage of $25 so that the after-tax wage would be $20, but then the employer will balk, because $25 per hour is too expensive for 40 hours of work.

In order to restore the balance (and when we say “equilibrium”, that’s really all we mean—balance), the employer will need to offer a higher pre-tax wage, which means they will demand fewer hours of work. The worker will then be willing to accept a lower after-tax wage for those reduced hours.

In effect, there are now two prices at work: A supply price, the after-tax wage that the worker receives, which must be at or above the supply curve; and a demand price, the pre-tax wage that the employer pays, which must be at or below the demand curve. The difference between those two prices is the tax.

[Figure: labor market equilibrium with tax]

In this case, I’ve set it up so that the pre-tax wage is $22.50, the after-tax wage is $18, and the amount of the tax is $4.50 or 20% of $22.50. In order for both the employer and the worker to accept those prices, the number of hours worked has been reduced to 35.

As a result of the tax, the wage that we’ve been calling “pre-tax” is actually higher than the wage that the worker would have received if the tax had not existed. This is a general phenomenon; it’s almost always true that your “pre-tax” wage or salary overestimates what you would have actually gotten if the tax had not existed. In one extreme case (perfectly inelastic labor supply), the pre-tax wage really would be the same as the no-tax wage, and the worker bears the whole tax; in the opposite extreme (perfectly inelastic labor demand), your after-tax wage is what you would have received anyway, and the “pre-tax” wage rises high enough to account for the entirety of the tax revenue. It’s not really “pre-tax” at all; it’s the after-tax demand price.

Because of this, it’s fundamentally wrongheaded for people to complain that taxes are “taking your hard-earned money”. In all but the most exceptional cases, that “pre-tax” salary that’s being deducted from would never have existed. It’s more of an accounting construct than anything else, or like I said before a measure of labor expenditure. It is generally true that your after-tax salary is lower than the salary you would have gotten without the tax, but the difference is generally much smaller than the amount of the tax that you see deducted. In this case, the worker would see $4.50 per hour deducted from their wage, but in fact they are only down $2 per hour from where they would have been without the tax. And of course, none of this includes the benefits of the tax, which in many cases actually far exceed the costs; if we extended the example, it wouldn’t be hard to devise a scenario in which the worker who had their wage income reduced received an even larger benefit in the form of some public good such as national defense or infrastructure.
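To make the arithmetic concrete, here’s a minimal sketch in Python (the linear supply and demand curves are hypothetical, chosen only to reproduce the numbers above) that solves for the taxed equilibrium:

```python
# Hypothetical linear curves calibrated so that, with no tax,
# supply equals demand at 40 hours and $20 per hour.
def demand_price(h):
    """Maximum wage the employer will pay for the h-th hour."""
    return 40.0 - 0.5 * h

def supply_price(h):
    """Minimum after-tax wage the worker requires for the h-th hour."""
    return 4.0 + 0.4 * h

t = 0.20  # 20% tax on wages

# With the tax, the worker keeps (1 - t) of the demand price, so we solve
#   (1 - t) * (40 - 0.5*h) = 4 + 0.4*h
# for the hours worked h:
h = ((1 - t) * 40.0 - 4.0) / ((1 - t) * 0.5 + 0.4)

pre_tax = demand_price(h)        # the wage the employer pays
after_tax = (1 - t) * pre_tax    # the wage the worker keeps

print(round(h, 2), round(pre_tax, 2), round(after_tax, 2))
# → 35.0 22.5 18.0
```

The $4.50 wedge between the two prices is the tax, but the worker ends up only $2 below the no-tax wage of $20, which is the point: the deduction you see overstates what the tax actually costs you.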

What if employees were considered assets?

JDN 2457308 EDT 15:31

Robert Reich has an interesting proposal to change the way we think about labor and capital:
“First, are workers assets to be developed or are they costs to be cut?” “Employers treat replaceable workers as costs to be cut, not as assets to be developed.”

This ultimately comes down to a fundamental question of how our accounting rules work: Workers are not considered assets, but wages are considered liabilities.

I don’t want to bore you with the details of accounting (accounting is often thought of as similar to economics, but really it’s the opposite of economics: Whereas economics is empirical, interesting, and fundamentally nonzero-sum, accounting is arbitrary, tedious, and zero-sum by construction), but I think it’s worth discussing the difference between how capital and labor are accounted.

By construction, every credit must come with a debit, no matter how arbitrary this may seem.

We start with an equation:

Assets + Expenses = Equity + Liabilities + Income

When purchasing a piece of capital, you debit an asset account (say, Equipment) for the price you paid, and credit another asset account (Cash) by the same amount. Your total assets don’t change; you’ve simply exchanged one asset for another, and the machine stays on your balance sheet, only turning into an expense gradually, as depreciation. Your equity, liabilities, and income do not change.

But when hiring a worker, you debit the expense account and credit either Cash or a liability account (Wages Payable). So instead of gaining an asset that stays on your books, which is a good thing, you gain an expense, and possibly a liability as well: pure cost, with nothing on the asset side to show for it.
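As a toy illustration of the asymmetry (hypothetical amounts, standard debit-normal/credit-normal conventions), here is a minimal double-entry ledger in Python:

```python
# Each account is "debit-normal" (asset and expense accounts) or
# "credit-normal" (equity, liability, and income accounts).
NORMAL = {
    "Cash": "debit", "Equipment": "debit", "Wage Expense": "debit",
    "Equity": "credit", "Wages Payable": "credit", "Income": "credit",
}
balances = {name: 0 for name in NORMAL}

def post(debit_account, credit_account, amount):
    # A debit increases a debit-normal account and decreases a credit-normal
    # one; a credit does the reverse. Every entry debits and credits equally.
    balances[debit_account] += amount if NORMAL[debit_account] == "debit" else -amount
    balances[credit_account] += amount if NORMAL[credit_account] == "credit" else -amount

post("Cash", "Equity", 10_000)              # owner puts in cash
post("Equipment", "Cash", 1_000)            # buy a machine: one asset swapped for another
post("Wage Expense", "Wages Payable", 500)  # hire a worker: expense plus liability

assets = balances["Cash"] + balances["Equipment"]
expenses = balances["Wage Expense"]
# The books always balance: Assets + Expenses = Equity + Liabilities + Income
assert assets + expenses == balances["Equity"] + balances["Wages Payable"] + balances["Income"]
print(balances["Equipment"], balances["Wage Expense"])  # → 1000 500
```

The machine persists on the books as a $1,000 asset; the $500 of wages shows up only as an expense and a liability, which is exactly the asymmetry Reich is pointing at.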

This is why corporate executives are always on the lookout for ways to “cut labor costs”; they conceive of wages as simply outgoing money that doesn’t do anything useful, and therefore something to cut in order to increase profits.

Reich is basically suggesting that we start treating workers as assets, the same as we do with capital; and then corporate executives would be thinking in terms of making a “capital gain” by investing in their workers to increase their “value”.

The problem with this scheme is that it would really only make sense if corporations owned their workers—and I think we all know why that is not a good idea. The reason capital can be counted as an asset is that capital can be sold off as a source of income; you don’t need to think of yourself as making a sort of “capital gain”; you can make, you know, actual capital gains.

I think actually the deeper problem here is that there is something wrong with accounting in general.

By its very nature, accounting is zero-sum. At best, this allows an error-checking mechanism wherein we can see if the two sides of the equation balance. But at worst, it makes us forget the point of economics.

While an individual may buy a capital asset on speculation, hoping to sell it for a higher price later, that isn’t what capital is for. At an aggregate level, speculation and arbitrage cannot increase real wealth; all they can do is move it around.

The reason we want to have capital is that it makes things—that the value of goods produced by a machine can far exceed the cost to produce that machine. It is in this way that capital—and indeed capitalism—creates real wealth.

Likewise, that is why we use labor—to make things. Labor is worthwhile because—and insofar as—the cost of the effort is less than the benefit of the outcome. Whether you are a baker, an author, a neurosurgeon, or an auto worker, the reason your job is worth doing is that the harm to you from doing it is smaller than the benefit to others from having it done. Indeed, the market mechanism is supposed to be structured so that by transferring wealth to you (i.e., paying you money), we make it so that both you and the people who buy your services are better off.

But accounting methods as we know them make no allowance for this; no matter what you do, the figures always balance. If you end up with more, someone else ends up with less. Since a worker is better off with a wage than they were before, we infer that a corporation must be worse off because it paid that wage. Since a corporation makes a profit selling a good, we infer that a consumer must be worse off because they paid for that purchase. We track the price of everything and understand the value of nothing.

There are two ways of pricing a capital asset: The cost to make it, or the value you get from it. Those two prices are only equal if markets are perfectly efficient, and even then they are only equal at the margin—the last factory built is worth what it can make, but every other factory built before that is worth more. It is that difference which creates real wealth—so assuming that they are the same basically defeats the purpose.

I don’t think we can do away with accounting; we need some way to keep track of where money goes, and we want that system to have built-in mechanisms to reduce rates of error and fraud. Double-entry bookkeeping certainly doesn’t make error and fraud disappear, but it at least does provide some protection against them, which we would lose if we removed the requirement that accounts must balance.

But somehow we need to restructure our metrics so that they give some sense of what economics is really about—not moving around a fixed amount of wealth, but making more wealth. Accounting for employees as assets wouldn’t solve that problem—but it might be a start, I guess?

How much should we save?

JDN 2457215 EDT 15:43.

One of the most basic questions in macroeconomics has oddly enough received very little attention: How much should we save? What is the optimal level of saving?

At the microeconomic level, how much you should save basically depends on what you think your income will be in the future. If you have more income now than you think you’ll have later, you should save now to spend later. If you have less income now than you think you’ll have later, you should spend now and dissave—save negatively, otherwise known as borrowing—and pay it back later. The life-cycle hypothesis says that people save when they are young in order to retire when they are old—in its strongest form, it says that we keep our level of spending constant across our lifetime at a value equal to our average income. The strongest form is utterly ridiculous and disproven by even the most basic empirical evidence, so usually the hypothesis is studied in a weaker form that basically just says that people save when they are young and spend when they are old—and even that runs into some serious problems.

The biggest problem, I think, is that the interest rate you receive on savings is always vastly less than the interest rate you pay on borrowing, which in turn is related to the fact that people are credit-constrained: they generally would like to borrow more than they actually can. It also has a lot to do with the fact that our financial system is an oligopoly; banks make more profits if they can pay savers less and charge borrowers more, and by colluding with each other they can control enough of the market that no major competitors can seriously undercut them. (There is some competition, however, particularly from credit unions—and if you compare these two credit card offers from University of Michigan Credit Union at 8.99%/12.99% and Bank of America at 12.99%/22.99% respectively, you can see the oligopoly in action as the tiny competitor charges you a much fairer price than the oligopoly beast. 9% means doubling in about eight years, 13% means doubling in a little under six years, and 23% means doubling in a bit over three years.) Another very big problem with the life-cycle theory is that human beings are astonishingly bad at predicting the future, and thus our expectations about our future income can vary wildly from the actual future income we end up receiving. People who are wise enough to know that they do not know generally save more than they think they’ll need, which is called precautionary saving. Combine that with our limited capacity for self-control, and I’m honestly not sure the life-cycle hypothesis is doing any work for us at all.
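Those doubling times come straight from compound interest; a quick sketch, using the rates from the card offers above:

```python
from math import log

def doubling_time(annual_rate):
    # Years for a balance to double under annual compounding:
    # (1 + r)**T = 2  =>  T = ln(2) / ln(1 + r)
    return log(2) / log(1 + annual_rate)

for rate in (0.09, 0.13, 0.23):
    print(f"{rate:.0%}: {doubling_time(rate):.1f} years")
# → 9%: 8.0 years / 13%: 5.7 years / 23%: 3.3 years
```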

But okay, let’s suppose we had a theory of optimal individual saving. That would still leave open a much larger question, namely optimal aggregate saving. The amount of saving that is best for each individual may not be best for society as a whole, and it becomes a difficult policy challenge to provide incentives to make people save the amount that is best for society.

Or it would be, if we had the faintest idea what the optimal amount of saving for society is. There’s a very simple rule-of-thumb that a lot of economists use, often called the golden rule (not to be confused with the actual Golden Rule, though I guess the idea is that a social optimum is a moral optimum), which is that we should save exactly the same amount as the share of capital in income. If capital receives one third of income (This figure of one third has been called a “law”, but as with most “laws” in economics it’s really more like the Pirate Code; labor’s share of income varies across countries and years. I doubt you’ll be surprised to learn that it is falling around the world, meaning more income is going to capital owners and less is going to workers.), then one third of income should be saved to make more capital for next year.
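The golden rule can be checked numerically in a toy Solow model (Cobb-Douglas output y = k^α, with a hypothetical depreciation rate δ = 0.1): long-run consumption turns out to be maximized when the saving rate equals capital's share α.

```python
# Toy Solow model: output y = k**alpha, capital depreciates at rate delta.
# In steady state, saving just replaces depreciation:
#   s * k**alpha = delta * k  =>  k* = (s / delta)**(1 / (1 - alpha))
alpha, delta = 1 / 3, 0.1  # capital's share one third; delta is illustrative

def steady_state_consumption(s):
    k_star = (s / delta) ** (1 / (1 - alpha))
    return (1 - s) * k_star ** alpha  # consume whatever isn't saved

# Sweep saving rates and find the one that maximizes long-run consumption
rates = [i / 100 for i in range(1, 100)]
best = max(rates, key=steady_state_consumption)
print(best)  # → 0.33, i.e. the golden-rule saving rate is capital's share
```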

When you hear that, you should be thinking: “Wait. Saved to make more capital? You mean invested to make more capital.” And this is the great sleight of hand in the neoclassical theory of economic growth: Saving and investment are made to be the same by definition. It’s called the savings-investment identity. As I talked about in an earlier post, the model seems to be that there is only one kind of good in the world, and you either use it up or save it to make more.

But of course that’s not actually how the world works; there are different kinds of goods, and if people stop buying tennis shoes that doesn’t automatically lead to more factories built to make tennis shoes—indeed, quite the opposite. If people reduce their spending, the products they no longer buy will now accumulate on shelves and the businesses that make those products will start downsizing their production. If people increase their spending, the products they now buy will fly off the shelves and the businesses that make them will expand their production to keep up.

In order to make the savings-investment identity true by definition, the definition of investment has to be changed. Inventory accumulation, products building up on shelves, is counted as “investment” when of course it is nothing of the sort. Inventory accumulation is a bad sign for an economy; indeed the time when we see the most inventory accumulation is right at the beginning of a recession.

As a result of this bizarre definition of “investment” and its equation with saving, we get the famous Paradox of Thrift, which does indeed sound paradoxical in its usual formulation: “A global increase in marginal propensity to save can result in a reduction in aggregate saving.” But if you strip out the jargon, it makes a lot more sense: “If people suddenly stop spending money, companies will stop investing, and the economy will grind to a halt.” There’s still a bit of feeling of paradox from the fact that we tried to save more money and ended up with less money, but that isn’t too hard to understand once you consider that if everyone else stops spending, where are you going to get your money from?

So what if something like this happens, we all try to save more and end up having no money? The government could print a bunch of money and give it to people to spend, and then we’d have money, right? Right. Exactly right, in fact. You now understand monetary policy better than most policymakers. Like a basic income, for many people it seems too simple to be true; but in a nutshell, that is Keynesian monetary policy. When spending falls and the economy slows down as a result, the government should respond by expanding the money supply so that people start spending again. In practice they usually expand the money supply by a really bizarre roundabout way, buying and selling bonds in open market operations in order to change the interest rate that banks charge each other for loans of reserves, the Fed funds rate, in the hopes that banks will change their actual lending interest rates and more people will be able to borrow, thus, ultimately, increasing the money supply (because, remember, banks don’t have the money they lend you—they create it).

We could actually just print some money and give it to people (or rather, change a bunch of numbers in an IRS database), but this is very unpopular, particularly among people like Ron Paul and other gold-bug Republicans who don’t understand how monetary policy works. So instead we try to obscure the printing of money behind a bizarre chain of activities, opening many more opportunities for failure: Chiefly, we can hit the zero lower bound where interest rates are zero and can’t go any lower (or can they?), or banks can be too stingy and decide not to lend, or people can be too risk-averse and decide not to borrow; and that’s not even to mention the redistribution of wealth that happens when all the money you print is given to banks. When that happens we turn to “unconventional monetary policy”, which basically just means that we get a little bit more honest about the fact that we’re printing money. (Even then you get articles like this one insisting that quantitative easing isn’t really printing money.)

I don’t know, maybe there’s actually some legitimate reason to do it this way—I do have to admit that when governments start openly printing money it often doesn’t end well. But really the question is why you’re printing money, whom you’re giving it to, and above all how much you are printing. Weimar Germany printed money to pay off odious war debts (because it totally makes sense to force a newly-established democratic government to pay the debts incurred by belligerent actions of the monarchy they replaced; surely one must repay one’s debts). Hungary printed money to pay for rebuilding after the devastation of World War 2. Zimbabwe printed money to pay for a war (I’m sensing a pattern here) and compensate for failed land reform policies. In all three cases the amount of money they printed was literally billions of times their original money supply. Yes, billions. They found their inflation cascading out of control and instead of stopping the printing, they printed even more. The United States has so far printed only about three times our original monetary base, still only about a third of our total money supply. (Monetary base is the part that the Federal Reserve controls; the rest is created by banks. Typically 90% of our money is not monetary base.) Moreover, we did it for the right reasons—in response to deflation and depression. That is why, as Matthew O’Brien of The Atlantic put it so well, the US can never be Weimar.

I was supposed to be talking about saving and investment; why am I talking about money supply? Because investment is driven by the money supply. It’s not driven by saving, it’s driven by lending.

Now, part of the underlying theory was that lending and saving are supposed to be tied together, with money lent coming out of money saved; this is true if you assume that things are in a nice tidy equilibrium. But we never are, and frankly I’m not sure we’d want to be. In order to reach that equilibrium, we’d either need to have full-reserve banking, or banks would have to otherwise have their lending constrained by insufficient reserves; either way, we’d need to have a constant money supply. Any dollar that could be lent, would have to be lent, and the whole debt market would have to be entirely constrained by the availability of savings. You wouldn’t get denied for a loan because your credit rating is too low; you’d get denied for a loan because the bank would literally not have enough money available to lend you. Banking would have to be perfectly competitive, so if one bank can’t do it, no bank can. Interest rates would have to precisely match the supply and demand of money in the same way that prices are supposed to precisely match the supply and demand of products (and I think we all know how well that works out). This is why it’s such a big problem that most macroeconomic models literally do not include a financial sector. They simply assume that the financial sector is operating at such perfect efficiency that money in equals money out always and everywhere.

So, recognizing that saving and investment are in fact not equal, we now have two separate questions: What is the optimal rate of saving, and what is the optimal rate of investment? For saving, I think the question is almost meaningless; individuals should save according to their future income (since they’re so bad at predicting it, we might want to encourage people to save extra, as in programs like Save More Tomorrow), but the aggregate level of saving isn’t an important question. The important question is the aggregate level of investment, and for that, I think there are two ways of looking at it.

The first way is to go back to that original neoclassical growth model and realize it makes a lot more sense when the s term we called “saving” actually is a funny way of writing “investment”; in that case, perhaps we should indeed invest the same proportion of income as the income that goes to capital. An interesting, if draconian, way to do so would be to actually require this—all and only capital income may be used for business investment. Labor income must be used for other things, and capital income can’t be used for anything else. The days of yachts bought on stock options would be over forever—though so would the days of striking it rich by putting your paycheck into a tech stock. Due to the extreme restrictions on individual freedom, I don’t think we should actually do such a thing; but it’s an interesting thought that might lead to an actual policy worth considering.

But a second way that might actually be better—since even though the model makes more sense this way, it still has a number of serious flaws—is to think about what we might actually do in order to increase or decrease investment, and then consider the costs and benefits of each of those policies. The simplest case to analyze is if the government invests directly—and since the most important investments like infrastructure, education, and basic research are usually done this way, it’s definitely a useful example. How is the government going to fund this investment in, say, a nuclear fusion project? They have four basic ways: Cut spending somewhere else, raise taxes, print money, or issue debt. If you cut spending, the question is whether the spending you cut is more or less important than the investment you’re making. If you raise taxes, the question is whether the harm done by the tax (which is generally of two flavors; first there’s the direct effect of taking someone’s money so they can’t use it now, and second there’s the distortions created in the market that may make it less efficient) is outweighed by the new project. If you print money or issue debt, it’s a subtler question, since you are no longer pulling from any individual person or project but rather from the economy as a whole. Actually, if your economy has unused capacity as in a depression, you aren’t pulling from anywhere—you’re simply adding new value basically from thin air, which is why deficit spending in depressions is such a good idea. (More precisely, you’re putting resources to use that were otherwise going to lay fallow—to go back to my earlier example, the tennis shoes will no longer rest on the shelves.) But if you do not have sufficient unused capacity, you will get crowding-out; new debt will raise interest rates and make other investments more expensive, while printing money will cause inflation and make everything more expensive. 
So you need to weigh that cost against the benefit of your new investment and decide whether it’s worth it.

This second way is of course a lot more complicated, a lot messier, a lot more controversial. It would be a lot easier if we could just say: “The target investment rate should be 33% of GDP.” But even then the question would remain as to which investments to fund, and which consumption to pull from. The abstraction of simply dividing the economy into “consumption” versus “investment” leaves out matters of the utmost importance; Paul Allen’s 400-foot yacht and food stamps for children are both “consumption”, but taxing the former to pay for the latter seems not only justified but outright obligatory. The Bridge to Nowhere and the Human Genome Project are both “investment”, but I think we all know which one had a higher return for human society. The neoclassical model basically assumes that the optimal choices for consumption and investment are decided automatically (automagically?) by the inscrutable churnings of the free market, but clearly that simply isn’t true.

In fact, it’s not always clear what exactly constitutes “consumption” versus “investment”, and the particulars of answering that question may distract us from answering the questions that actually matter. Is a refrigerator investment because it’s a machine you buy that sticks around and does useful things for you? Or is it consumption because consumers buy it and you use it for food? Is a car an investment because it’s vital to getting a job? Or is it consumption because you enjoy driving it? Someone could probably argue that the appreciation on Paul Allen’s yacht makes it an investment, for instance. Feeding children really is an investment, in their so-called “human capital” that will make them more productive for the rest of their lives. Part of the money that went to the Human Genome Project surely paid some graduate student who then spent part of his paycheck on a keg of beer, which would make it consumption. And so on. The important question really isn’t “is this consumption or investment?” but “Is this worth doing?” And thus, the best answer to the question, “How much should we save?” may be: “Who cares?”

The sunk-cost fallacy

JDN 2457075 EST 14:46.

I am back on Eastern Time once again, because we just finished our 3600-km road trek from Long Beach to Ann Arbor. I seem to move an awful lot; this makes me a bit like Schumpeter, who moved an average of every two years his whole adult life. Schumpeter and I have much in common, in fact, though I have no particular interest in horses.

Today’s topic is the sunk-cost fallacy, which was particularly salient as I had to box up all my things for the move. There were many items that I ended up having to throw away because it wasn’t worth moving them—but this was always painful, because I couldn’t help but think of all the work or money I had put into them. I threw away craft projects I had spent hours working on and collections of bottlecaps I had gathered over years—because I couldn’t think of when I’d use them, and ultimately the question isn’t how hard they were to make in the past, it’s what they’ll be useful for in the future. But each time it hurt, like I was giving up a little part of myself.

That’s the sunk-cost fallacy in a nutshell: Instead of considering whether it will be useful to us later and thus worth having around, we naturally tend to consider the effort that went into getting it. Instead of making our decisions based on the future, we make them based on the past.

Come to think of it, the entire Marxist labor theory of value is basically one gigantic sunk-cost fallacy: Instead of caring about the usefulness of a product—the mainstream utility theory of value—we are supposed to care about the labor that went into making it. To see why this is wrong, imagine someone spends 10,000 hours carving meaningless symbols into a rock, and someone else spends 10 minutes working with chemicals but somehow figures out how to cure pancreatic cancer. Which one would you pay more for—particularly if you had pancreatic cancer?

This is one of the most common irrational behaviors humans engage in, and it’s worth considering why that might be. Most people commit the sunk-cost fallacy on a daily basis, and even those of us who are aware of it will still fall into it if we aren’t careful.

This often seems to come from a fear of being wasteful; I don’t know of any data on this, but my hunch is that the more environmentalist you are, the more often you tend to run into the sunk-cost fallacy. You feel particularly bad wasting things when you are conscious of the damage that waste does to our planetary ecosystem. (Which is not to say that you should not be environmentalist; on the contrary, most of us should be a great deal more environmentalist than we are. The negative externalities of environmental degradation are almost unimaginably enormous—climate change already kills 150,000 people every year and is projected to kill tens if not hundreds of millions of people over the 21st century.)

I think sunk-cost fallacy is involved in a lot of labor regulations as well. Most countries have employment protection legislation that makes it difficult to fire people, with restrictions ranging from the basically reasonable (you can’t fire someone for being a woman or a racial minority) to the totally absurd (in some countries you can’t even fire people for being incompetent). These sorts of regulations are often quite popular, because people really don’t like the idea of losing their jobs. When faced with the possibility of losing your job, you should be thinking about what your future options are; but many people spend a lot of time thinking about the past effort they put into this one. I think there is also some endowment effect and loss aversion at work as well: You value your job more simply because you already have it, so you don’t want to lose it even for something better.

Yet these regulations are widely regarded by economists as inefficient; and for once I am inclined to agree. While I certainly don’t want people being fired frivolously or for discriminatory reasons, sometimes companies really do need to lay off workers because there simply isn’t enough demand for their products. When a factory closes down, we think about the jobs that are lost—but we don’t think about the better jobs they can now do instead.

I favor a system like what they have in Denmark (I’m popularizing a hashtag about this sort of thing: #Scandinaviaisbetter): We don’t try to protect your job, we try to protect you. Instead of regulations that make it hard to fire people, Denmark has a generous unemployment insurance system, strong social welfare policies, and active labor market policies that help people retrain and find new and better jobs. One thing I think Denmark might want to consider is restrictions on cyclical layoffs—in a recession there is pressure to lay off workers, but that can create a vicious cycle that makes recessions worse. Denmark was hit considerably harder by the Great Recession than France, for example; where France’s unemployment rose from 7.5% to 9.6%, Denmark’s rose from an astonishing 3.1% all the way up to 7.6%.

Then again, sometimes what looks like a sunk-cost fallacy actually isn’t—and I think this gives us insight into how we might have evolved such an apparently silly heuristic in the first place.

Why would you care about what you did in the past when deciding what to do in the future? Well there’s one reason in particular: Credible commitment. There are many cases in life where you’d like to be able to plan to do something in the future, but when the time comes to actually do it you’ll be tempted not to follow through.

This sort of thing happens all the time: When you take out a loan, you plan to pay it back—but when you need to actually make payments it sure would be nice if you didn’t have to. If you’re trying to slim down, you go on a diet—but doesn’t that cookie look delicious? You know you should quit smoking for your health—but what’s one more cigarette, really? When you get married, you promise to be faithful—but then sometimes someone else comes along who seems so enticing! Your term paper is due in two weeks, so you really should get working on it—but your friends are going out for drinks tonight, why not start the paper tomorrow?

Our true long-term interests are often misaligned with our short-term temptations. This often happens because of hyperbolic discounting, which is a bit technical; but the basic idea is that you tend to rate the importance of an event in inverse proportion to its distance in time. That turns out to be irrational, because as you get closer to the event, your valuations will change disproportionately. The optimal rational choice would be exponential discounting, where you value each successive moment a fixed percentage less than the last—since that percentage doesn’t change, your valuations will always stay in line with one another. But basically nobody really uses exponential discounting in real life.

We can see this vividly in experiments: If we ask people whether they would rather receive $100 today, or $110 a week from now, they often go with $100 today. But if you ask them whether they would rather receive $100 in 52 weeks or $110 in 53 weeks, almost everyone chooses the $110. The value of a week apparently depends on how far away it is! (The $110 is clearly the rational choice by the way. Discounting 10% per week makes no sense at all—unless you literally believe that $1,000 today is as good as $140,000 a year from now.)
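That preference reversal falls right out of a hyperbolic discount function V = A/(1 + kD), where D is the delay in weeks and k is a sensitivity parameter (the value 0.15 here is purely illustrative); exponential discounting, by contrast, can never produce a reversal:

```python
def hyperbolic(amount, delay_weeks, k=0.15):
    # Present value under hyperbolic discounting: V = A / (1 + k * D)
    return amount / (1 + k * delay_weeks)

def exponential(amount, delay_weeks, weekly_factor=0.9):
    # Present value under exponential discounting: V = A * f**D
    return amount * weekly_factor ** delay_weeks

# Hyperbolic: $100 now beats $110 next week...
assert hyperbolic(100, 0) > hyperbolic(110, 1)
# ...but $110 in week 53 beats $100 in week 52: a preference reversal.
assert hyperbolic(110, 53) > hyperbolic(100, 52)

# Exponential: whichever option wins now still wins a year out; the ratio
# of the two values never changes, so there is no reversal.
assert exponential(100, 0) > exponential(110, 1)
assert exponential(100, 52) > exponential(110, 53)
```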

To solve this problem, it can be advantageous to make commitments—either enforced by direct measures such as legal penalties, or even simply by making promises that we feel guilty breaking. That’s why cold turkey is often the most effective way to quit a drug. Physiologically that makes no sense, because gradual cessation clearly does reduce withdrawal symptoms. But psychologically it does, because cold turkey allows you to make a hardline commitment to never again touch the stuff. The majority of smokers who successfully quit report doing so cold turkey, though there is still ongoing research on whether properly-orchestrated gradual reduction can be more effective. Likewise, vague notions like “I’ll eat better and exercise more” are virtually useless, while specific prescriptions like “I will do 20 minutes of exercise every day and stop eating red meat” are much more effective—the latter allows you to make a promise to yourself that can be broken, and since you feel bad breaking it you are motivated to keep it.

In the presence of such commitments, the past does matter, at least insofar as you made commitments to yourself or others in the past. If you promised never to smoke another cigarette, or never to cheat on your wife, or never to eat meat again, you actually have a good reason—and a good chance—to never do those things. This is easy to confuse with a sunk cost; when you think about the 20 years you’ve been married or the 10 years you’ve been vegetarian, you might be thinking of the sunk cost you’ve incurred over that time, or you might be thinking of the promises you’ve made and kept to yourself and others. In the former case you are irrationally committing a sunk-cost fallacy; in the latter you are rationally upholding a credible commitment.

This is most likely why we evolved in such a way as to commit sunk-cost fallacies. The ability to enforce commitments on ourselves and others was so important that it was worth it to overcompensate and sometimes let us care about sunk costs. Because commitments and sunk costs are often difficult to distinguish, it would have been more costly to evolve better ways of distinguishing them than it was to simply make the mistake.

Perhaps people who are outraged by being laid off aren’t actually committing a sunk-cost fallacy at all; perhaps they are instead assuming the existence of a commitment where none exists. “I gave this company 20 good years, and now they’re getting rid of me?” But the truth is, you gave the company nothing. They never committed to keeping you (unless they signed a contract, but that’s different; if they are violating a contract, of course they should be penalized for that). They made you a trade, and when that trade ceases to be advantageous they will stop making it. Corporations don’t think of themselves as having any moral obligations whatsoever; they exist only to make profit. It is certainly debatable whether it was a good idea to set up corporations in this way; but unless and until we change that system it is important to keep it in mind. You will almost never see a corporation do something out of kindness or moral obligation; that’s simply not how corporations work. At best, they do nice things to enhance their brand reputation (Starbucks, Whole Foods, Microsoft, Disney, Costco). Some don’t even bother doing that, letting people hate as long as they continue to buy (Walmart, BP, DeBeers). Actually the former model seems to be more successful lately, which bodes well for the future; but be careful to recognize that few if any of these corporations are genuinely doing it out of the goodness of their hearts. Human beings are often altruistic; corporations are specifically designed not to be.

And there were some things I did promise myself I would keep—like old photos and notebooks that I want to keep as memories—so those went in boxes. Other things were obviously still useful—clothes, furniture, books. But for the rest? It was painful, but I thought about what I could realistically use them for, and if I couldn’t think of anything, they went into the trash.

No, capital taxes should not be zero

JDN 2456998 PST 11:38.

It’s an astonishingly common notion among neoclassical economists that we should never tax capital gains, and that all taxes should fall upon labor income. Here Scott Sumner, writing for The Economist, has the audacity to declare this a ‘basic principle of economics’. Many of the arguments are based on rather esoteric theorems like the Atkinson-Stiglitz Theorem (I thought you were better than that, Stiglitz!) and the Chamley-Judd Theorem.

All of these theorems rest upon two very important assumptions, which many economists take for granted—yet which are utterly and totally untrue. For once it’s not assumed that we are infinite identical psychopaths; actually psychopaths might not give wealth to their children in inheritance, which would undermine the argument in a different way, by making each individual have a finite time horizon. No, the assumptions are that saving is the source of investment, and investment is the source of capital income.

Investment is the source of capital, that’s definitely true—the total amount of wealth in society is determined by investment. You do have to account for the fact that real investment isn’t just factories and machines, it’s also education, healthcare, infrastructure. With that in mind, yes, absolutely, the total amount of wealth is a function of the investment rate.

But that doesn’t mean that investment is the source of capital income—because in our present system the distribution of capital income is in no way determined by real investment or the actual production of goods. Virtually all capital income comes from financial markets, which are rife with corruption—they are indeed the main source of corruption that remains in First World nations—and driven primarily by arbitrage and speculation, not real investment. Contrary to popular belief and economic theory, the stock market does not fund corporations; corporations fund the stock market. It’s this bizarre game our society plays, in which a certain portion of the real output of our productive industries is siphoned off so that people who are already rich can gamble over it. Any theory of capital income which fails to take these facts into account is going to be fundamentally distorted.

The other assumption is that investment is savings, that the way capital increases is by labor income that isn’t spent on consumption. This isn’t even close to true, and I never understood why so many economists think it is. The notion seems to be that there is a certain amount of money in the world, and what you don’t spend on consumption goods you can instead spend on investment. But this is just flatly not true; the money supply is dynamically flexible, and the primary means by which money is created is through banks creating loans for the purpose of investment. It’s that “I” term I talked about in my post on the deficit; it seems to come out of nowhere, because that’s literally what happens.

If savings really were just labor income that you don’t spend on consumption, then the figure W – C—wages and salaries minus consumption—should be savings, and it should be equal to investment. Well, that figure is negative—for reasons I gave in that post. Total employee compensation in the US in 2014 was $9.2 trillion, while total personal consumption expenditure was $11.4 trillion. The reason we are able to save at all is because of government transfers, which account for $2.5 trillion. To fill up our GDP to its total of $16.8 trillion, you need to add capital income: proprietor income ($1.4 trillion) and receipts on assets ($2.1 trillion); then you need to add in the part of government spending that isn’t transfers ($1.4 trillion).
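Plugging in those figures makes the sign problem obvious. A quick back-of-the-envelope check, using only the 2014 numbers quoted above (in trillions of dollars):

```python
# 2014 US figures quoted in the text, in trillions of dollars.
wages = 9.2          # total employee compensation (W)
consumption = 11.4   # personal consumption expenditure (C)
transfers = 2.5      # government transfers

# If savings were just unspent wages, W - C should be savings:
print(round(wages - consumption, 1))              # -2.2: "savings" is negative

# Only after adding transfers does household income exceed consumption:
print(round(wages + transfers - consumption, 1))  # 0.3
```

Wages alone fall $2.2 trillion short of consumption; whatever is funding investment, it is not households thriftily setting aside their paychecks.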

If you start with the fanciful assumption that the way capital increases is by people being “thrifty” and choosing to save a larger proportion of their income, then it makes some sense not to tax capital income. (Scott Sumner makes exactly that argument, having us compare two brothers with equal income, one of whom chooses to save more.) But this is so fundamentally removed from how capital—and for that matter capitalism—actually operates that I have difficulty understanding why anyone could think that it is true.

The best I can come up with is something like this: They model the world by imagining that there is only one good, peanuts, and everyone starts with the same number of peanuts, and everyone has a choice to either eat their peanuts or save and replant them. Then, the total production of peanuts in the future will be due to the proportion of peanuts that were replanted today, and the amount of peanuts each person has will be due to their past decisions to save rather than consume. Therefore savings will be equal to investment and investment will be the source of capital income.

I bet you can already see the problem even in this simple model, if we just relax the assumption of equal wealth endowments: Some people have a lot more peanuts than others. Why do some people eat all their peanuts? Well, it probably has something to do with the fact that they’d starve if they didn’t. Reducing your consumption below the level at which you can survive isn’t “thrifty”, it’s suicidal. (And if you think this is a strawman, the IMF has literally told Third World countries that their problem is they need to save more. Here they are arguing that in Ghana.) In fact, economic growth leads to saving, not the other way around. Most Americans aren’t starving, and could probably stand to save more than we do, but honestly it might not be good if we did—everyone trying to save more can lead to the Paradox of Thrift and cause a recession.
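The point that the savings rate is endogenous to wealth shows up even inside the peanut model itself. Here is a toy simulation of it—the yield, subsistence level, and endowments are all made-up numbers, purely for illustration:

```python
# Toy version of the one-good "peanut" model sketched above.
# YIELD, SUBSIST, and the endowments are illustrative assumptions.

YIELD = 1.2      # each replanted peanut grows into 1.2 peanuts next year
SUBSIST = 100    # peanuts a person must eat each year to survive

def simulate(endowment, years=20):
    """Eat what you must, replant the rest; return final wealth."""
    wealth = endowment
    for _ in range(years):
        replanted = max(wealth - SUBSIST, 0)  # can't save below subsistence
        wealth = replanted * YIELD
    return wealth

print(round(simulate(105)))   # near subsistence: savings collapse to zero
print(round(simulate(1000)))  # well above it: wealth compounds year after year
```

Identical preferences, identical “thrift”: the only difference between the pauper and the compounding fortune is the starting endowment.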

Even worse, in that model world, there is only capital income. There is no such thing as labor income, only the number of peanuts you grow from last year’s planting. If we now add in labor income, what happens? Well, peanuts don’t work anymore… let’s try robots. You have a certain number of robots, and you can either use the robots to do things you need (including somehow feeding you, I guess), or you can use them to build more robots to use later. You can also build more robots yourself. Then the “zero capital tax” argument amounts to saying that the government should take some of your robots for public use if you made them yourself, but not if they were made by other robots you already had.

In order for that argument to carry through, you need to say that there was no such thing as an initial capital endowment; all robots that exist were either made by their owners or saved from previous construction. If there is anyone who simply happened to be born with more robots, or has more because they stole them from someone else (or, more likely, both, they inherited from someone who stole), the argument falls apart.

And even then you need to think about the incentives: If capital income is really all from savings, then taxing capital income provides an incentive to spend. Is that a bad thing? I feel like it isn’t; the economy needs spending. In the robot toy model, we’re giving people a reason to use their robots to do actual stuff, instead of just leaving them to make more robots. That actually seems like it might be a good thing, doesn’t it? More stuff gets done that helps people, instead of just having vast warehouses full of robots building other robots in the hopes that someday we can finally use them for something. Whereas, taxing labor income may give people an incentive not to work, which is definitely going to reduce economic output. More precisely, higher taxes on labor would give low-wage workers an incentive to work less, and give high-wage workers an incentive to work more, which is a major part of the justification of progressive income taxes. A lot of the models intended to illustrate the Chamley-Judd Theorem assume that taxes have an effect on capital but no effect on labor, which is kind of begging the question.

Another thought that occurred to me is: What if the robots in the warehouse are all destroyed by a war or an earthquake? And indeed the possibility of sudden capital destruction would be a good reason not to put everything into investment. This is generally modeled as “uninsurable depreciation risk”, but come on; of course it’s uninsurable. All real risk is uninsurable in the aggregate. Insurance redistributes resources from those who have them but don’t need them to those who suddenly find they need them but don’t have them. This actually does reduce the real risk in utility, but it certainly doesn’t reduce the real risk in terms of goods. Stephen Colbert made this point very well: “Obamacare needs the premiums of healthier people to cover the costs of sicker people. It’s a devious con that can only be described as—insurance.” (This suggests that Stephen Colbert understands insurance better than many economists.) Someone has to make that new car that you bought using your insurance when you totaled the last one. Insurance companies cannot create cars or houses—or robots—out of thin air. And as Piketty and Saez point out, uninsurable risk undermines the Chamley-Judd Theorem. Unlike all these other economists, Piketty and Saez actually understand capital and inequality.
Sumner hand-waves that point away by saying we should just institute a one-time transfer of wealth to equalize the initial distribution, as though this were somehow a practically (not to mention politically) feasible alternative. Ultimately, yes, I’d like to see something like that happen; restore the balance and then begin anew with a just system. But that’s exceedingly difficult to do, while raising the tax rate on capital gains is very easy—and furthermore if we leave the current stock market and derivatives market in place, we will not have a just system by any stretch of the imagination. Perhaps if we can actually create a system where new wealth is really due to your own efforts, where there is no such thing as inheritance of riches (say a 100% estate tax above $1 million), no such thing as poverty (a basic income), no speculation or arbitrage, and financial markets that actually have a single real interest rate and offer all the credit that everyone needs, maybe then you can say that we should not tax capital income.

Until then, we should tax capital income, probably at least as much as we tax labor income.

Why immigration is good

JDN 2456977 PST 12:31.

The big topic in policy news today is immigration. After years of getting nothing done on the issue, Obama has finally decided to bypass Congress and reform our immigration system by executive order. Republicans are threatening to impeach him if he does. His decision to go forward without Congressional approval may have something to do with the fact that Republicans just took control of both houses of Congress. Naturally, Fox News is predicting economic disaster due to the expansion of the welfare state. (When is that not true?) A more legitimate critique comes from the New York Times, who point out how this sudden shift demonstrates a number of serious problems in our political system and how it is financed.

So let’s talk about immigration, and why it is almost always a good thing for a society and its economy. There are a couple of downsides, but they are far outweighed by the upsides.

I’ll start with the obvious: Immigration is good for the immigrants. That’s why they’re doing it. Uprooting yourself from your home and moving thousands of miles isn’t easy under the best circumstances (like when I moved from Michigan to California for grad school); now imagine doing it when you are in crushing poverty and you have to learn a whole new language and culture once you arrive. People are only willing to do this when the stakes are high. The most extreme example is of course the child refugees from Latin America, who are finally getting some of the asylum they so greatly deserve, but even the “ordinary” immigrants coming from Mexico are leaving a society racked by poverty, rife with corruption, and bathed in violence—most recently erupting in riots that have set fire to government buildings. These people are desperate; they are crossing our border despite the fences and guns because they feel they have no other choice. As a fundamental question of human rights, it is not clear to me that we even have the right to turn these people away. Forget the effect on our economy; forget the rate of assimilation; what right do we have to say to these people that their suffering should go on because they were born on the wrong side of an arbitrary line?

There are wealthier immigrants—many of them here, in fact, for grad school—whose circumstances are not so desperate; but hardly anyone even considers turning them away, because we want their money and their skills in our society. Americans who fear brain drain have it all backwards; the United States is where the brains drain to. This trend may be reversing more recently as our right-wing economic policy pulls funding away from education and science, but it would likely only reach the point where we export as many intelligent people as we import; we’re not talking about creating a deficit here, only reducing our world-dominating surplus. And anyway I’m not so concerned about those people; yes, the world needs them, but they don’t need much help from the world.

My concern is for our tired, our poor, our huddled masses yearning to breathe free. These are the people we are thinking about turning away—and these are the people who most desperately need us to take them in. That alone should be enough reason to open our borders, but apparently it isn’t for most people, so let’s talk about some of the ways that America stands to gain from such a decision.

First of all, immigration increases economic growth. Immigrants don’t just take in money; they also spend it back out, which further increases output and creates jobs. Immigrants are more likely than native citizens to be entrepreneurs, perhaps because taking the chance to start a business isn’t so scary after you’ve already taken the chance to travel thousands of miles to a new country. Our farming system is highly dependent upon cheap immigrant labor (that’s a little disturbing, but as far as the US economy is concerned, we get cheap food by hiring immigrants on farms). On average, immigrants are younger than our current population, so they are more likely to work and less likely to retire, which has helped save the US from the economic malaise that afflicts nations like Japan, where the aging population is straining the retirement system. More open immigration wouldn’t just increase the number of immigrants coming here to do these things; it would also make the immigrants who are already here more productive by opening up opportunities for education and entrepreneurship. Immigration could speed the recovery from the Second Depression and maybe even revitalize our dying Rust Belt cities.

Now, what about the downsides? By increasing the supply of labor faster than they increase the demand for labor, immigrants could reduce wages. There is some evidence that immigrants reduce wages, particularly for low-skill workers. This effect is rather small, however; in many studies it’s not even statistically significant (PDF link). A 10% increase in low-skill immigrants leads to about a 3% decrease in low-skill wages (PDF link). The total economy grows, but wages decrease at the bottom, so there is a net redistribution of wealth upward.

Immigration is one of the ways that globalization increases within-nation inequality even as it decreases between-nation inequality; you move the poor people to rich countries, and they become less poor than they were, but still poorer than most of the people in those rich countries, which increases the inequality there. On average the world becomes better off, but it can seem bad for the rich countries, especially the people in rich countries who were already relatively poor. Because they distribute wealth by birthright, national borders actually create something analogous to the privilege of feudal lords, albeit to a much larger segment of the population. (Much larger: Here’s a right-wing site trying to argue that the median American is in the top 1% of income by world standards; neat trick, because Americans comprise 4% of the world population—so our top half makes up 2% of the world’s population by themselves. Yet somehow apparently that 2% of the population is the top 1%? Also, the US isn’t the only rich country; have you heard of, say, Europe?)
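The arithmetic in that parenthetical is easy to check. A rough sanity check, assuming a US population of about 320 million and a world population of about 7.2 billion (approximate mid-2010s figures, not census data):

```python
# Rough mid-2010s population figures (assumptions, not official data).
us_pop = 320e6
world_pop = 7.2e9

us_share = us_pop / world_pop
print(round(us_share * 100, 1))      # about 4.4: Americans as a % of the world
# The richer half of Americans alone:
print(round(us_share / 2 * 100, 1))  # about 2.2: already over twice the top 1%
```

So even if every above-median American out-earned everyone else on Earth, they could not all fit into the global top 1%—the richer half of the US by itself is more than 2% of humanity.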

There’s also a lot of variation in the literature as to the size—or even direction—of the effect of immigration on low-skill wages. But since the theory makes sense and the preponderance of the evidence is toward a moderate reduction in wages for low-skill native workers, let’s assume that this is indeed the case.

First of all I have to go back to my original point: These immigrants are getting higher wages than they would have in the countries they left. (That part is usually even true of the high-skill immigrants.) So if you’re worried about low wages for low-skill workers, why are you only worried about that for workers who were born on this side of the fence? There’s something deeply nationalistic—if not outright racist—inherent in the complaint that Americans will have lower pay or lose their jobs when Mexicans come here. Don’t Mexicans also deserve jobs and higher pay?

Aside from that, do we really want to preserve higher wages at the cost of economic efficiency? Are high wages an end in themselves? It seems to me that what we’re really concerned about is welfare—we want the people of our society to live better lives. High wages are one way to do that, but not the only way; a basic income could reverse that upward redistribution of wealth, taking the economic benefits of immigration that normally accrue toward the top and giving them to the bottom. As I already talked about in an earlier post, a basic income is a lot more efficient than trying to mess around with wages. Markets are very powerful; we shouldn’t always accept what they do, but we should also be careful when we interfere with them. If the market is trying to drive certain wages down, that means there is more desire to do that kind of work than there is work of that kind that needs doing. The wage change creates a market incentive for people to switch to more productive kinds of work. We should also be working to create opportunities to make that switch—funding free education, for instance—because an incentive without an opportunity is a bit like pointing a gun at someone’s head and ordering them to give birth to a unicorn.

So on the one hand we have the increase in local inequality and the potential reduction in low-skill wages; those are basically the only downsides. On the other hand, we have increases in short-term and long-term economic growth, lower global inequality, more spending, more jobs, a younger population with less strain on the retirement system, more entrepreneurship, and above all, the enormous lifelong benefits to the immigrants themselves that motivated them to move in the first place. It seems pretty obvious to me: we can enact policies to reduce the downsides, but above all we must open our borders.