Our government just voted to let thousands of people die for no reason

May 14, JDN 2457888

The US House of Representatives just voted to pass a bill that will let thousands of Americans die for no reason. At the time of writing it hasn’t yet passed the Senate, but it may yet do so. And if it does, there can be little doubt that President Trump (a phrase I still feel nauseous saying) will sign it.

Some already call it Trumpcare (or “Trump-doesn’t-care”); but officially it is called the American Health Care Act. I think we should use the formal name, because it is already beginning to take on a dark irony: yes, only in America would such a terrible health care act be considered. Every other highly-developed country has a universal healthcare system; most of them have single-payer systems, and have had for over two decades.

The Congressional Budget Office estimates that the AHCA will increase the number of uninsured Americans by 24 million. Of these, 14 million will be people near the poverty line who lose access to Medicaid.

In 2009, a Harvard study estimated that 45,000 Americans die each year because they don’t have health insurance. This is on the higher end; other studies have estimated more like 20,000. But based on the increases in health insurance rates under Obamacare, somewhere between 5,000 and 10,000 American lives have been saved each year since it was enacted. That reduction came from insuring about 10 million people who weren’t insured before.

Making a linear projection, we can roughly estimate the number of additional Americans who will die every year if this American Health Care Act is implemented. (24 million/10 million)(5,000 to 10,000) = 12,000 to 24,000 deaths per year. For comparison, there are about 14,000 total homicides in the United States each year (and we have an exceptionally high homicide rate for a highly-developed country).

Indeed, morally, it might make sense to count these deaths as homicides (by the principle of “depraved indifference”); Trump therefore intends to double our homicide rate.
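For transparency, the linear projection can be reproduced in a few lines; this simply scales the post’s estimates and is not a demographic model:

```python
# Linear projection of additional annual deaths under the AHCA,
# scaling the post's estimates (not a demographic model).
newly_uninsured = 24_000_000        # CBO projection of coverage losses
aca_newly_insured = 10_000_000      # approximate coverage gains under Obamacare
lives_saved_low, lives_saved_high = 5_000, 10_000  # estimated lives saved per year

scale = newly_uninsured / aca_newly_insured
deaths_low = scale * lives_saved_low
deaths_high = scale * lives_saved_high
print(f"{deaths_low:,.0f} to {deaths_high:,.0f} additional deaths per year")
```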

Of course, it will not be prosecuted this way. And one can even make an ethical case for why it shouldn’t be, why it would be impossible to make policy if every lawmaker had to face the consequences of every policy choice. (Start a war? A hundred thousand deaths. Fail to start a war in response to a genocide? A different hundred thousand deaths.)

But for once, I might want to make an exception. Because these deaths will not be the result of a complex policy trade-off with merits and demerits on both sides. They will not be the result of honest mistakes or unforeseen disasters. These people will die out of pure depraved indifference.

We had a healthcare bill that was working. Indeed, Obamacare was remarkably successful. It increased insurance rates and reduced mortality rates while still managing to slow the growth in healthcare expenditure.

The only real cost was an increase in taxes on the top 5% (and particularly the top 1%) of the income distribution. But the Republican Party—and make no mistake, the vote was on almost completely partisan lines, and not a single Democrat supported it—has now made it a matter of official policy that they care more about cutting taxes on millionaires than they do about poor people dying from lack of healthcare.

Yet there may be a silver lining in all of this: Once people saw that Obamacare could work, the idea of universal healthcare in the United States began to seem like a serious political position. The Overton Window has grown. Indeed, it may even have shifted to the left for once; the responses to the American Health Care Act have been almost uniformly composed of shock and outrage, when really what the bill does is go back to the same awful system we had before.

Going backward and letting thousands of people die for no reason should appall people—but I feared that it might not, because it would seem “normal”. We in America have grown very accustomed to letting poor people die in order to slightly increase the profits of billionaires, and I thought this time might be no different—but it was different. Once Obamacare actually passed and began to work, people really saw what was happening—that all this suffering and death wasn’t necessary, that it wasn’t an inextricable part of having a functioning economy. And now that they see that, they aren’t willing to go back.

Selling debt goes against everything the free market stands for

JDN 2457555

I don’t think most people—or even most economists—have any concept of just how fundamentally perverse and destructive our financial system has become, and a large chunk of it ultimately boils down to one thing: Selling debt.

Certainly collateralized debt obligations (CDOs), and their meta-form, CDO2s (pronounced “see-dee-oh squareds”), are nothing more than selling debt, and along with credit default swaps (CDS; they are basically insurance, but without those pesky regulations against things like fraud and conflicts of interest) they were directly responsible for the 2008 financial crisis and the ensuing Great Recession and Second Depression.

But selling debt continues in a more insidious way, underpinning the entire debt collection industry, which collects tens of billions of dollars per year through harassment, intimidation, and extortion, especially of the poor and helpless. Frankly, I think what’s most shocking is how little money they make, given the huge number of people they harass and intimidate.

John Oliver did a great segment on debt collections (with a very nice surprise at the end).

But perhaps most baffling to me is the number of people who defend the selling of debt on the grounds that it is a “free market” activity which must be protected from government “interference in personal liberty”. To show this is not a strawman, here’s the American Enterprise Institute saying exactly that.

So let me say this in no uncertain terms: Selling debt goes against everything the free market stands for.

One of the most basic principles of free markets, one of the founding precepts of capitalism laid down by no less than Adam Smith (and before him by great political philosophers like John Locke), is the freedom of contract. This is the good part of capitalism, the part that makes sense, the reason we shouldn’t tear it all down but should instead try to reform it around the edges.

Indeed, the freedom of contract is so fundamental to human liberty that laws can only be considered legitimate insofar as they do not infringe upon it without a compelling public interest. Freedom of contract is right up there with freedom of speech, freedom of the press, freedom of religion, and the right of due process.

The freedom of contract is the right to make agreements, including financial agreements, with anyone you please, and under conditions that you freely and rationally impose in a state of good faith and transparent discussion. Conversely, it is the right not to make agreements with those you choose not to, and to not be forced into agreements under conditions of fraud, intimidation, or impaired judgment.

Freedom of contract is the basis of my right to take on debt, provided that I am honest about my circumstances and I can find a lender who is willing to lend to me. So taking on debt is a fundamental part of freedom of contract.

But selling debt is something else entirely. Far from exercising the freedom of contract, it violates it. When I take out a loan from bank A, and then they turn around and sell that loan to bank B, I suddenly owe money to bank B, but I never agreed to do that. I had nothing to do with their decision to work with bank B as opposed to keeping the loan or selling it to bank C.

Current regulations prohibit banks from “changing the terms of the loan”, but in practice they change them all the time—they can’t change the principal balance, the loan term, or the interest rate, but they can change the late fees, the payment schedule, and lots of subtler things about the loan that can still make a very big difference. Indeed, as far as I’m concerned they have changed the terms of the loan—one of the terms of the loan was that I was to pay X amount to bank A, not that I was to pay X amount to bank B. I may or may not have good reasons not to want to pay bank B—they might be far less trustworthy than bank A, for instance, or have a far worse social responsibility record—and in any case it doesn’t matter; it is my choice whether or not I want anything to do with bank B, whatever my reasons might be.

I take this matter quite personally, for it is by the selling of debt that, in moral (albeit not legal) terms, a British bank stole my parents’ house. Indeed, not just any British bank; it was none other than HSBC, the money launderers for terrorists.

When they first obtained their mortgage, my parents did not actually know that HSBC was quite so evil as to literally launder money for terrorists, but they did already know that they were involved in a great many shady dealings, and even specifically told their lender that they did not want the loan sold, and if it was to be sold, it was absolutely never to be sold to HSBC in particular. Their mistake (which was rather like the “mistake” of someone who leaves their car unlocked and has it stolen, or forgets to arm the home alarm system and suffers a burglary) was failing to get this promise written into the formal contract, instead of leaving it as a verbal agreement with the bankers. Such verbal contracts are enforceable under the law, at least in theory; but that would require proof of the verbal contract (and what proof could we provide?), and would also probably have cost as much as the house in litigation fees.

Oh, by the way, they were given a subprime interest rate of 8% despite being middle-class professionals with good credit, no doubt to maximize the broker’s closing commission. Most banks reserved such behavior for racial minorities, but apparently this one was equal-opportunity in the worst way. Perhaps my parents were naive to trust bankers any further than they could throw them.

As a result, I think you know what happened next: They sold the loan to HSBC.

Now, had it ended there, with my parents unwittingly forced into supporting a bank that launders money for terrorists, that would have been bad enough. But it assuredly did not.

By a series of subtle and manipulative practices that poked through one loophole after another, HSBC proceeded to raise my parents’ payments higher and higher. One particularly insidious tactic they used was to sit on the checks until just after the due date passed, so that they could charge late fees on the payments, and then recapitalize those late fees. My parents caught on to this particular trick after a few months, and started mailing the checks certified so they would be date-stamped; and lo and behold, all the payments were suddenly on time! By several other similarly devious tactics, all of which were technically legal or at least not provable, they managed to raise my parents’ monthly mortgage payments by over 50%.

Note that it was a fixed-rate, fixed-term mortgage. The initial payments—what should have always been the payments; that’s the point of a fixed-rate, fixed-term mortgage—were under $2,000 per month. By the end they were paying over $3,000 per month. HSBC forced my parents to overpay on their mortgage, every year, by an amount equal to the US individual poverty line, or the per-capita GDP of Peru.

They tried to make the payments, but after being wildly over budget and hit by other unexpected expenses (including defects in the house’s foundation that they had to pay to fix, but because of the “small” amount at stake and the overwhelming legal might of the construction company, no lawyer was willing to sue over), they simply couldn’t do it anymore, and gave up. They gave the house to the bank with a deed in lieu of foreclosure.

And that is the story of how a bank that my parents never agreed to work with, never would have agreed to work with, indeed specifically said they would not work with, still ended up claiming their house—our house, the house I grew up in from the age of 12. Legally, I cannot prove they did anything against the law. (I mean, other than laundered money for terrorists.) But morally, how is this any less than theft? Would we not be victimized less had a burglar broken into our home, vandalized the walls and stolen our furniture?

Indeed, that would probably be covered under our insurance! Where can I buy insurance against the corrupt and predatory financial system? Where are my credit default swaps to pay me when everything goes wrong?

And all of this could have been prevented, if banks simply weren’t allowed to violate our freedom of contract by selling their loans to other banks.

Indeed, the Second Depression could probably have been likewise prevented. Without selling debt, there is no securitization. Without securitization, there is far less leverage. Without leverage, there are no bank failures. Without bank failures, there is no depression. A decade of global economic growth was lost because we allowed banks to sell debt whenever they please.

I have heard the counter-arguments many times:

“But what if banks need the liquidity?” Easy. They can take out their own loans with those other banks. If bank A finds they need more cashflow, they should absolutely feel free to take out a loan from bank B. They can even point to their projected revenues from the mortgage payments we owe them, as a means of repaying that loan. But they should not be able to involve us in that transaction. If you want to trust HSBC, that’s your business (you’re an idiot, but it’s a free country). But you have no right to force me to trust HSBC.

“But banks might not be willing to make those loans, if they knew they couldn’t sell or securitize them!” THAT’S THE POINT. Banks wouldn’t take on all these ridiculous risks in their lending practices that they did (“NINJA loans” and mortgages with payments larger than their buyers’ annual incomes), if they knew they couldn’t just foist the debt off on some Greater Fool later on. They would only make loans they actually expect to be repaid. Obviously any loan carries some risk, but banks would only take on risks they thought they could bear, as opposed to risks they thought they could convince someone else to bear—which is the definition of moral hazard.

“Homes would be unaffordable if people couldn’t take out large loans!” First of all, I’m not against mortgages—I’m against securitization of mortgages. Yes, of course, people need to be able to take out loans. But they shouldn’t be forced to pay those loans to whoever their bank sees fit. If indeed the loss of subprime securitized mortgages made it harder for people to get homes, that’s a problem; but the solution to that problem was never to make it easier for people to get loans they can’t afford—it is clearly either to reduce the price of homes or increase the incomes of buyers. Subsidized housing construction, public housing, changes in zoning regulation, a basic income, lower property taxes, an expanded earned-income tax credit—these are the sort of policies that one implements to make housing more affordable, not “go ahead and let banks exploit people however they want”.

Remember, a regulation against selling debt would protect the freedom of contract. It would remove a way for private individuals and corporations to violate that freedom, like regulations against fraud, intimidation, and coercion. It should be uncontroversial that no one has any right to force you to do business with someone you would not voluntarily do business with, certainly not in a private transaction between for-profit corporations. Maybe that sort of mandate makes sense in rare circumstances by the government, but even then it should really be implemented as a tax, not a mandate to do business with a particular entity. The right to buy what you choose is the foundation of a free market—and implicit in it is the right not to buy what you do not choose.

There are many regulations on debt that would impose upon freedom of contract: As horrific as payday loans are, if someone really, honestly, knowingly wants to take on short-term debt at 400% APR, I’m not sure it’s my business to stop them. And some people may really be in such dire circumstances that they need money that urgently and no one else will lend to them. Insofar as I want payday loans regulated, it is to ensure that they are really lending in good faith—as many surely are not—and ultimately I want to outcompete them by providing desperate people with more reasonable loan terms. But a ban on securitization is like a ban on fraud; it is the sort of law that protects our rights.

What do we mean by “risk”?

JDN 2457118 EDT 20:50.

In an earlier post I talked about how, empirically, expected utility theory can’t explain the fact that we buy both insurance and lottery tickets, and how, normatively, it really doesn’t make a lot of sense to buy lottery tickets, precisely because of what expected utility theory says about them.

But today I’d like to talk about one of the major problems with expected utility theory, which I consider one of the major unexplored frontiers of economics: Expected utility theory treats all kinds of risk exactly the same.

In reality there are three kinds of risk: The first is what I’ll call classical risk, which is like the game of roulette; the odds are well-defined and known in advance, and you can play the game a large number of times and average out the results. This is where expected utility theory really shines; if you are dealing with classical risk, expected utility is obviously the way to go and Von Neumann and Morgenstern quite literally proved mathematically that anything else is irrational.

The second is uncertainty, a distinction most famously expounded by Frank Knight, an economist at the University of Chicago. (Chicago is a funny place; on the one hand they are a haven for the madness that is Austrian economics; on the other hand they have led the charge in behavioral and cognitive economics. Knight was a perfect fit, because he was a little of both.) Uncertainty is risk under ill-defined or unknown probabilities, where there is no way to play the game twice. Most real-world “risk” is actually uncertainty: Will the People’s Republic of China collapse in the 21st century? How many deaths will global warming cause? Will human beings ever colonize Mars? Is P = NP? None of those questions have known answers, but nor can we clearly assign probabilities either; either P = NP or not, as a mathematical theorem (or, like the continuum hypothesis, it’s independent of ZFC, the most bizarre possibility of all), and it’s not as if someone is rolling dice to decide how many people global warming will kill.

You can think of this in terms of “possible worlds”, though actually most modal theorists would tell you that we can’t even say that P = NP is possible (nor can we say it isn’t possible!), because, as a necessary statement, it can only be possible if it is actually true; this follows from the S5 axiom of modal logic, and you know what, even I am already bored with that sentence. Clearly there is some sense in which P = NP is possible, and if that’s not what modal logic says, then so much the worse for modal logic. I am not a modal realist (not to be confused with a moral realist, which I am); I don’t think that possible worlds are real things out there somewhere. I think possibility is ultimately a statement about ignorance, and since we don’t know that P = NP is false, I contend that it is possible that it is true.
Put another way, it would not be obviously irrational to place a bet that P=NP will be proved true by 2100; but if we can’t even say that it is possible, how can that be?

Anyway, that’s the mess that uncertainty puts us in, and almost everything is made of uncertainty. Expected utility theory basically falls apart under uncertainty; it doesn’t even know how to give an answer, let alone one that is correct. In reality what we usually end up doing is waving our hands and trying to assign a probability anyway—because we simply don’t know what else to do.

The third one is not one that’s usually talked about, yet I think it’s quite important; I will call it one-shot risk. The probabilities are known or at least reasonably well approximated, but you only get to play the game once. You can also generalize to few-shot risk, where you can play a small number of times, where “small” is defined relative to the probabilities involved; this is a little vaguer, but basically what I have in mind is that even though you can play more than once, you can’t play enough times to realistically expect the rarest outcomes to occur. Expected utility theory almost works on one-shot and few-shot risk, but you have to be very careful about taking it literally.

I think an example makes things clearer: Playing the lottery is a few-shot risk. You can play the lottery multiple times, yes; potentially hundreds of times in fact. But hundreds of times is nothing compared to the 1 in 400 million chance you have of actually winning. You know that probability; it can be computed exactly from the rules of the game. But nonetheless expected utility theory runs into some serious problems here.

If we were playing a classical risk game, expected utility would obviously be right. So suppose you know that you will live one billion years, and you are offered the chance to play a game (somehow compensating for the mind-boggling levels of inflation, economic growth, transhuman transcendence, and/or total extinction that will occur during that vast expanse of time) in which each year you can either have a guaranteed $40,000 of inflation-adjusted income, or a 99.999,999,75% chance of $39,999 of inflation-adjusted income and a 0.000,000,25% chance of $100 million in inflation-adjusted income—which will disappear at the end of the year, along with everything you bought with it, so that each year you start afresh. Should you take the second option? Absolutely not, and expected utility theory explains why; that one or two years where you’ll experience 8 QALY per year isn’t worth dropping from 4.602060 QALY per year to 4.602049 QALY per year for the other nine hundred and ninety-eight million years. (Can you even fathom how long that is? From here, one billion years is all the way back to the Mesoproterozoic Era, which we think is when single-celled organisms first began to reproduce sexually. The gain is to be Mitt Romney for a year or two; the loss is the value of a dollar each year, over and over again, for the entire time that has elapsed since the existence of gamete meiosis.) I think it goes without saying that this whole situation is almost unimaginably bizarre. Yet that is implicitly what we’re assuming when we use expected utility theory to assess whether you should buy lottery tickets.
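The QALY figures in this example track the base-10 logarithm of income (log10(40,000) ≈ 4.60206, and log10($100 million) = 8 exactly), so the comparison can be checked directly; a minimal sketch, assuming that logarithmic utility:

```python
import math

# Utility as log10(income), matching the QALY-per-year figures above:
# log10(40_000) ≈ 4.602060 and log10(100_000_000) = 8 exactly.
def u(income):
    return math.log10(income)

p_win = 2.5e-9  # the 0.000,000,25% chance, i.e. 1 in 400 million

eu_safe = u(40_000)
eu_gamble = (1 - p_win) * u(39_999) + p_win * u(100_000_000)

# The guaranteed income wins: the tiny chance at 8 QALY cannot make up
# for losing a dollar of income every single year.
print(eu_safe > eu_gamble)  # True
```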

The real situation is more like this: There’s one world you can end up in, and almost certainly will, in which you buy lottery tickets every year and end up with an income of $39,999 instead of $40,000. There is another world, so unlikely as to be barely worth considering, yet not totally impossible, in which you get $100 million and you are completely set for life and able to live however you want for the rest of your life. Averaging over those two worlds is a really weird thing to do; what do we even mean by doing that? You don’t experience one world 0.000,000,25% as much as the other (whereas in the billion-year scenario, that is exactly what you do); you only experience one world or the other.

In fact, it’s worse than this, because if a classical risk game is such that you can play it as many times as you want as quickly as you want, we don’t even need expected utility theory—expected money theory will do. If you can play a game where you have a 50% chance of winning $200,000 and a 50% chance of losing $50,000, which you can play up to once an hour for the next 48 hours, and you will be extended any credit necessary to cover any losses, you’d be insane not to play; your 99.9% confidence interval for wealth at the end of the two days runs from $850,000 to $6,180,000. While you may lose money for a while, it is vanishingly unlikely that you will end up losing more than you gain.
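Since each play is an independent fair coin flip, the two-day game follows a binomial distribution and can be computed exactly; a sketch, assuming you play all 48 rounds:

```python
from math import comb

N = 48                      # up to one play per hour for 48 hours
WIN, LOSS = 200_000, -50_000

def payoff(k):
    """Net winnings after k wins in N fair-coin plays."""
    return k * WIN + (N - k) * LOSS

def cdf(k):
    """P(at most k wins) under Binomial(N, 1/2)."""
    return sum(comb(N, j) for j in range(k + 1)) / 2**N

mean = sum(payoff(k) * comb(N, k) for k in range(N + 1)) / 2**N
print(f"expected winnings: ${mean:,.0f}")   # $3,600,000
print(f"P(net loss) = {cdf(9):.1e}")        # you lose money only with 9 or fewer wins
```

With 13 wins you end with exactly $850,000, the lower endpoint quoted above.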

Yet if you are offered the chance to play this game only once, you probably should not take it, and the reason why then comes back to expected utility. If you have good access to credit you might consider it, because going $50,000 into debt is bad but not unbearably so (I did, going to college) and gaining $200,000 might actually be enough better to justify the risk. Then the effect can be averaged over your lifetime; let’s say you make $50,000 per year over 40 years. Losing $50,000 means making your average income $48,750, while gaining $200,000 means making your average income $55,000; so your QALY per year go from a guaranteed 4.70 to a 50% chance of 4.69 and a 50% chance of 4.74; that raises your expected utility from 4.70 to 4.715.

But if you don’t have good access to credit and your income for this year is $50,000, then losing $50,000 means losing everything you have and living in poverty or even starving to death. The benefits of raising your income by $200,000 this year aren’t nearly great enough to take that chance. Your expected utility goes from 4.70 to a 50% chance of 5.30 and a 50% chance of zero.
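Both cases can be reproduced with the log10-of-income scale these paragraphs use (log10(50,000) ≈ 4.70); treating the $200,000 win as this year’s entire income and assigning zero utility to total ruin are modeling assumptions consistent with the figures above:

```python
import math

def u(income):
    """QALY per year as log10(income), the scale used in these paragraphs
    (log10(50_000) ~ 4.70); zero income is assigned zero utility,
    a modeling assumption for "losing everything"."""
    return math.log10(income) if income > 0 else 0.0

YEARS, BASE = 40, 50_000

# With good credit: the one-time gain or loss is spread over a 40-year career.
eu_spread = 0.5 * u(BASE - 50_000 / YEARS) + 0.5 * u(BASE + 200_000 / YEARS)

# Without credit: this year ends with either $200,000 or nothing.
eu_one_shot = 0.5 * u(200_000) + 0.5 * u(0)

print(round(eu_spread, 3))    # above the guaranteed log10(50_000) ~ 4.699
print(round(eu_one_shot, 2))  # 2.65, far below it
```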

So expected utility theory only seems to properly apply if we can play the game enough times that the improbable events are likely to happen a few times, but not so many times that we can be sure our money will approach the average. And that’s assuming we know the odds and we aren’t just stuck with uncertainty.

Unfortunately, I don’t have a good alternative; so far expected utility theory may actually be the best we have. But it remains deeply unsatisfying, and I like to think we’ll one day come up with something better.

Prospect Theory: Why we buy insurance and lottery tickets

JDN 2457061 PST 14:18.

Today’s topic is called prospect theory. Prospect theory is basically what put cognitive economics on the map; it was the knock-down argument that Kahneman used to show that human beings are not completely rational in their economic decisions. It all goes back to a 1979 paper by Kahneman and Tversky that now has 34,000 citations (yes, we’ve been having this argument for a rather long time now). In the 1990s it was refined into cumulative prospect theory, which is more mathematically precise but basically the same idea.

What was that argument? People buy both insurance and lottery tickets.

The “both” is very important. Buying insurance can definitely be rational—indeed, typically is. Buying lottery tickets could theoretically be rational, under very particular circumstances. But they cannot both be rational at the same time.

To see why, let’s talk some more about marginal utility of wealth. Recall that a dollar is not worth the same to everyone; to a billionaire a dollar is a rounding error, to most of us it is a bottle of Coke, but to a starving child in Ghana it could be life itself. We typically observe diminishing marginal utility of wealth—the more money you have, the less another dollar is worth to you.

If we sketch a graph of your utility versus wealth it would look something like this:


Notice how it increases as your wealth increases, but at a rapidly diminishing rate.

If you have diminishing marginal utility of wealth, you are what we call risk-averse. If you are risk-averse, you’ll (sometimes) want to buy insurance. Let’s suppose the units on that graph are tens of thousands of dollars. Suppose you currently have an income of $50,000. You are offered the chance to pay $10,000 a year to buy unemployment insurance, so that if you lose your job, instead of making $10,000 on welfare you’ll make $30,000 on unemployment. You think you have about a 20% chance of losing your job.

If you had constant marginal utility of wealth, this would not be a good deal for you. Your expected value of money would be reduced if you buy the insurance: Before you had an 80% chance of $50,000 and a 20% chance of $10,000 so your expected amount of money is $42,000. With the insurance you have an 80% chance of $40,000 and a 20% chance of $30,000 so your expected amount of money is $38,000. Why would you take such a deal? That’s like giving up $4,000 isn’t it?

Well, let’s look back at that utility graph. At $50,000 your utility is 1.80, uh… units, er… let’s say QALY. 1.80 QALY per year, meaning you live 80% better than the average human. Maybe, I guess? Doesn’t seem too far off. In any case, the units of measurement aren’t that important.


By buying insurance your effective income goes down to $40,000 per year, which lowers your utility to 1.70 QALY. That’s a fairly significant hit, but it’s not unbearable. If you lose your job (20% chance), you’ll fall down to $30,000 and have a utility of 1.55 QALY. Again, noticeable, but bearable. Your overall expected utility with insurance is therefore 1.67 QALY.

But what if you don’t buy insurance? Well then you have a 20% chance of taking a big hit and falling all the way down to $10,000 where your utility is only 1.00 QALY. Your expected utility is therefore only 1.64 QALY. You’re better off going with the insurance.
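The numbers in this example are consistent with a utility curve of roughly u(x) = 0.5·ln(x/10,000) + 1, which reproduces the graph’s values (1.00 at $10,000, about 1.80 at $50,000, about 3.30 at $1 million); that exact formula is my fit, not necessarily the author’s. A sketch of the calculation:

```python
import math

def u(income):
    """My fit to the post's graph: u(10_000) = 1.00, u(50_000) ~ 1.80,
    u(1_000_000) ~ 3.30. An assumption, not the author's exact curve."""
    return 0.5 * math.log(income / 10_000) + 1

p_fired = 0.2

eu_insured = (1 - p_fired) * u(40_000) + p_fired * u(30_000)
eu_uninsured = (1 - p_fired) * u(50_000) + p_fired * u(10_000)

print(round(eu_insured, 2), round(eu_uninsured, 2))  # the insured case comes out ahead
print(eu_insured > eu_uninsured)  # True: buy the insurance

# In expected *money* terms, though, the insurance "loses" $4,000:
print(0.8 * 50_000 + 0.2 * 10_000 - (0.8 * 40_000 + 0.2 * 30_000))  # 4000.0
```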

And this is how insurance companies make a profit (well, the legitimate way anyway; they also like to gouge people and deny coverage to cancer patients, of course): on average, they make more from each customer than they pay out, but customers are still better off because they are protected against big losses. In this case, the insurance company profits $4,000 per customer per year, customers each get 30 milliQALY per year (about the same utility as an extra $2,000, more or less), and everyone is happy.

But if this is your marginal utility of wealth—and it most likely is, approximately—then you would never want to buy a lottery ticket. Let’s suppose you actually have pretty good odds; it’s a 1 in 1 million chance of $1 million for a ticket that costs $2. This means that the state is going to take in about $2 million for every $1 million they pay out to a winner.

That’s about as good as your odds for a lottery are ever going to get; usually it’s more like a 1 in 400 million chance of $150 million for $1, which is an even bigger difference than it sounds, because $150 million is nowhere near 150 times as good as $1 million. It’s a bit better from the state’s perspective though, because they get to receive $400 million for every $150 million they pay out.

For your convenience I have zoomed out the graph so that you can see 100, which is an income of $1 million (which you’ll have this year if you win; to get it next year, you’ll have to play again). You’ll notice I did not have to zoom out the vertical axis, because 20 times as much money only ends up being about 2 times as much utility. I’ve marked with lines the utility of $50,000 (1.80, as we said before) versus $1 million (3.30).


What about the utility of $49,998 which is what you’ll have if you buy the ticket and lose? At this number of decimal places you can’t see the difference, so I’ll need to go out a few more. At $50,000 you have 1.80472 QALY. At $49,998 you have 1.80470 QALY. That $2 only costs you 0.00002 QALY, 20 microQALY. Not much, really; but of course not, it’s only $2.

How much does the 1 in 1 million chance of $1 million give you? Even less than that. Remember, the utility gain for going from $50,000 to $1 million is only 1.50 QALY. So you’re adding one one-millionth of that in expected utility, which is of course 1.5 microQALY, or 0.0000015 QALY.

That $2 may not seem like it’s worth much, but that 1 in 1 million chance of $1 million is worth less than one tenth as much. Again, I’ve tried to make these figures fairly realistic; they are by no means exact (I don’t actually think $49,998 corresponds to exactly 1.804699 QALY), but the order-of-magnitude difference is right. You gain about ten times as much utility from spending that $2 on something you want as you do from taking the chance at $1 million.
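Using a concave curve fitted to the graph’s values (u(x) ≈ 0.5·ln(x/10,000) + 1, an assumption of mine chosen so that u($10,000) = 1 and u($1 million) ≈ 3.30), the microQALY arithmetic works out as stated:

```python
import math

def u(income):
    # A curve fitted to the post's graph (u(10_000) = 1); an assumption,
    # not the author's exact function.
    return 0.5 * math.log(income / 10_000) + 1

# Cost of the ticket: the utility drop from $50,000 to $49,998.
cost = u(50_000) - u(49_998)              # about 2.0e-5, i.e. 20 microQALY

# Expected gain: a 1-in-1-million shot at the jump from $50,000 to $1 million.
gain = 1e-6 * (u(1_000_000) - u(50_000))  # about 1.5e-6, i.e. 1.5 microQALY

print(cost > 10 * gain)  # True: the ticket costs over ten times what it's worth
```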

I said before that it is theoretically possible for you to have a utility function for which the lottery would be rational. For that you’d need to have increasing marginal utility of wealth, so that you could be what we call risk-seeking. Your utility function would have to look like this:


There’s no way marginal utility of wealth looks like that. This would be saying that it would hurt Bill Gates more to lose $1 than it would hurt a starving child in Ghana, which makes no sense at all. (It certainly would make you wonder why he’s so willing to give it to them.) So frankly, even if we didn’t buy insurance, the fact that we buy lottery tickets would already look pretty irrational.

But in order for it to be rational to buy both lottery tickets and insurance, our utility function would have to be totally nonsensical. Maybe it could look like this or something; marginal utility decreases normally for a while, and then suddenly starts going upward again for no apparent reason:


Clearly it does not actually look like that. Not only would this mean that Bill Gates is hurt more by losing $1 than the child in Ghana; we would also have the bizarre situation where the middle class are the people who have the lowest marginal utility of wealth in the world. Both the rich and the poor would need to have higher marginal utility of wealth than we do. This would mean that apparently yachts are just amazing and we have no idea. Riding a yacht is the pinnacle of human experience, a transcendence beyond our wildest imaginings; and riding a slightly bigger yacht is even more amazing and transcendent. Love and the joy of a life well-lived pale in comparison to the ecstasy of adding just one more layer of gold plate to your Ferrari collection.

Whereas increasing marginal utility is merely ridiculous, this is outright special pleading. You’re just making up bizarre utility functions that perfectly line up with whatever behavior people happen to have so that you can still call it rational. It’s like saying, “It could be perfectly rational! Maybe he enjoys banging his head against the wall!”

Kahneman and Tversky had a better idea. They realized that human beings aren’t so great at assessing probability, and furthermore tend not to think in terms of total amounts of wealth or annual income at all, but in terms of losses and gains. Through a series of clever experiments they showed that we are not so much risk-averse as we are loss-averse; we are actually willing to take more risk if it means that we will be able to avoid a loss.

In effect, we seem to be acting as if our utility function looks like this, where the zero no longer means “zero income”, it means “whatever we have right now”:


We tend to weight losses about twice as much as gains, and we tend to assume that losses also diminish in their marginal effect the same way that gains do. That is, we would only take a 50% chance to lose $1000 if it meant a 50% chance to gain $2000; but we’d take a 10% chance at losing $10,000 to save ourselves from a guaranteed loss of $1000.
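In code, a loss-averse value function of the standard Kahneman-Tversky form makes both observations concrete. The parameters (exponent 0.88, loss weight 2.0) are conventional estimates from the literature, not figures from this post:

```python
def value(x, alpha=0.88, lam=2.0):
    # Prospect-theory value function: gains diminish in marginal impact,
    # and losses loom about twice as large as equal-sized gains.
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# Gain frame: 50% chance to gain $2000 vs. 50% chance to lose $1000
gain_gamble = 0.5 * value(2000) + 0.5 * value(-1000)
print(gain_gamble)  # close to zero: a 2:1 gain is roughly the break-even point

# Loss frame: sure loss of $1000 vs. 10% chance of losing $10,000
sure_loss = value(-1000)
risky_loss = 0.1 * value(-10000)
print(sure_loss, risky_loss)  # the gamble "hurts" less, so we take the risk
```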

This can explain why we buy insurance, provided that you frame it correctly. One of the things about prospect theory—and about human behavior in general—is that it exhibits framing effects: The answer we give depends upon the way you ask the question. That’s so totally obviously irrational it’s honestly hard to believe that we do it; but we do, and sometimes in really important situations. Doctors—doctors—will decide a moral dilemma differently based on whether you describe it as “saving 400 out of 600 patients” or “letting 200 out of 600 patients die”.

In this case, you need to frame insurance as the default option, and not buying insurance as an extra risk you are taking. Then saving money by not buying insurance is a gain, and therefore less important, while a higher risk of a bad outcome is a loss, and therefore important.

If you frame it the other way, with not buying insurance as the default option, then buying insurance is taking a loss by making insurance payments, only to get a gain if the insurance pays out. Suddenly the exact same insurance policy looks less attractive. This is a big part of why Obamacare has been effective but unpopular. It was set up as a fine—a loss—if you don’t buy insurance, rather than as a bonus—a gain—if you do buy insurance. The latter would be more expensive, but we could just make it up by taxing something else; and it might have made Obamacare more popular, because people would see the government as giving them something instead of taking something away. But the fine does a better job of framing insurance as the default option, so it motivates more people to actually buy insurance.

But even that would still not be enough to explain how it is rational to buy lottery tickets (Have I mentioned how it’s really not a good idea to buy lottery tickets?), because buying a ticket is a loss and winning the lottery is a gain. You actually have to get people to somehow frame not winning the lottery as a loss, making winning the default option despite the fact that it is absurdly unlikely. But I have definitely heard people say things like this: “Well if my numbers come up and I didn’t play that week, how would I feel then?” Pretty bad, I’ll grant you. But how much you wanna bet that never happens? (They’ll bet… the price of the ticket, apparently.)

In order for that to work, people either need to dramatically overestimate the probability of winning, or else ignore it entirely. Both of those things totally happen.

First, we overestimate the probability of rare events and underestimate the probability of common events—this is actually the part that makes it cumulative prospect theory instead of just regular prospect theory. If you make a graph of perceived probability versus actual probability, it looks like this:


We don’t make much distinction between 40% and 60%, even though that’s actually pretty big; but we make a huge distinction between 0% and 0.00001% even though that’s actually really tiny. I think we basically have categories in our heads: “Never, almost never, rarely, sometimes, often, usually, almost always, always.” Moving from 0% to 0.00001% is going from “never” to “almost never”, but going from 40% to 60% is still in “often”. (And that for some reason reminded me of “Well, hardly ever!”)
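A common way to formalize this curve is the Tversky-Kahneman weighting function; the parameter γ = 0.61 is their published estimate for gains, assumed here for illustration:

```python
def w(p, gamma=0.61):
    # Cumulative-prospect-theory probability weighting: overweights the
    # tails, compresses the middle of the probability range.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(w(0.4), w(0.6))   # the 20-point gap in the middle gets compressed
print(w(1e-5) / 1e-5)   # a 1-in-100,000 chance is overweighted enormously
```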

But that’s not even the worst of it. After all that work to explain how we can make sense of people’s behavior in terms of something like a utility function (albeit a distorted one), I think there’s often a simpler explanation still: Regret aversion under total neglect of probability.

Neglect of probability is self-explanatory: You totally ignore the probability. But what’s regret aversion, exactly? Unfortunately I’ve had trouble finding any good popular sources on the topic; it’s all scholarly stuff. (Maybe I’m more cutting-edge than I thought!)

The basic idea is that you minimize regret, where regret can be formalized as the difference in utility between the outcome you got and the best outcome you could have gotten. In effect, it doesn’t matter whether something is likely or unlikely; you only care how bad it is.

This explains insurance and lottery tickets in one fell swoop: With insurance, you have the choice of risking a big loss (big regret) which you can avoid by paying a small amount (small regret). You take the small regret, and buy insurance. With lottery tickets, you have the chance of a large gain (big regret if you miss it) which you can pursue by paying a small amount (small regret). You take the small regret, and buy the ticket.
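A minimal sketch of minimax regret, with made-up payoffs: compute each action’s regret in each state, then pick the action whose worst-case regret is smallest. Notice that probabilities never enter the calculation:

```python
def minimax_regret(payoffs):
    # payoffs[action][state]; regret = best payoff in that state minus yours.
    # Choose the action whose worst-case regret is smallest.
    states = range(len(next(iter(payoffs.values()))))
    best = [max(p[s] for p in payoffs.values()) for s in states]
    worst_regret = {a: max(best[s] - p[s] for s in states) for a, p in payoffs.items()}
    return min(worst_regret, key=worst_regret.get)

# Insurance: $100 premium vs. a possible $10,000 loss (hypothetical numbers)
print(minimax_regret({"insure": [-100, -100], "don't": [-10000, 0]}))  # insure

# Lottery: $2 ticket, $1 million prize (hypothetical numbers)
print(minimax_regret({"buy": [999998, -2], "don't": [0, 0]}))          # buy
```

The same rule buys the insurance and the lottery ticket, no matter how unlikely the disaster or the jackpot.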

This can also explain why a typical American’s fears go in the order terrorists > Ebola > sharks >> cars > cheeseburgers, while the actual risk of dying goes in almost the opposite order, cheeseburgers > cars >> terrorists > sharks > Ebola. (Terrorists are scarier than sharks and Ebola and actually do kill more Americans! Yay, we got something right! Other than that it is literally reversed.)

Dying from a terrorist attack would be horrible; in addition to your own death you have all the other likely deaths and injuries, and the sheer horror and evil of the terrorist attack itself. Dying from Ebola would be almost as bad, with gruesome and agonizing symptoms. Dying of a shark attack would be still pretty awful, as you get dismembered alive. But dying in a car accident isn’t so bad; it’s usually over pretty quick and the event seems tragic but ordinary. And dying of heart disease and diabetes from your cheeseburger overdose will happen slowly over many years, you’ll barely even notice it coming and probably die rapidly from a heart attack or comfortably in your sleep. (Wasn’t that a pleasant paragraph? But there’s really no other way to make the point.)

If we try to estimate the probability at all—and I don’t think most people even bother—it isn’t by rigorous scientific research; it’s usually by availability heuristic: How many examples can you think of in which that event happened? If you can think of a lot, you assume that it happens a lot.

And that might even be reasonable, if we still lived in hunter-gatherer tribes or small farming villages and the 150 or so people you knew were the only people you ever heard about. But now that we have live TV and the Internet, news can get to us from all around the world, and the news isn’t trying to give us an accurate assessment of risk, it’s trying to get our attention by talking about the biggest, scariest, most exciting things that are happening around the world. The amount of news attention an item receives is in fact in inverse proportion to the probability of its occurrence, because things are more exciting if they are rare and unusual. Which means that if we are estimating how likely something is based on how many times we heard about it on the news, our estimates are going to be almost exactly reversed from reality. Ironically it is the very fact that we have more information that makes our estimates less accurate, because of the way that information is presented.

It would be a pretty boring news channel that spent all day saying things like this: “82 people died in car accidents today, and 1657 people had fatal heart attacks, 11.8 million had migraines, and 127 million played the lottery and lost; in world news, 214 countries did not go to war, and 6,147 children starved to death in Africa…” This would, however, be vastly more informative.

In the meantime, here are a couple of counter-heuristics I recommend to you: Don’t think about losses and gains, think about where you are and where you might be. Don’t say, “I’ll gain $1,000”; say “I’ll raise my income this year to $41,000.” Definitely do not think in terms of the percentage price of things; think in terms of absolute amounts of money. Cheap expensive things, expensive cheap things is a motto of mine; go ahead and buy the $5 toothbrush instead of the $1, because that’s only $4. But be very hesitant to buy the $22,000 car instead of the $21,000, because that’s $1,000. If you need to estimate the probability of something, actually look it up; don’t try to guess based on what it feels like the probability should be. Make this unprecedented access to information work for you instead of against you. If you want to know how many people die in car accidents each year, you can literally ask Google and it will tell you that (I tried it—it’s 1.3 million worldwide). The fatality rate of a given disease versus the risk of its vaccine, the safety rating of a particular brand of car, the number of airplane crash deaths last month, the total number of terrorist attacks, the probability of becoming a university professor, the average functional lifespan of a new television—all these things and more await you at the click of a button. Even if you think you’re pretty sure, why not look it up anyway?

Perhaps then we can make prospect theory wrong by making ourselves more rational.

How do we measure happiness?

JDN 2457028 EST 20:33.

No, really, I’m asking. I strongly encourage my readers to offer in the comments any ideas they have about the measurement of happiness in the real world; this has been a stumbling block in one of my ongoing research projects.

In one sense the measurement of happiness—or more formally utility—is absolutely fundamental to economics; in another it’s something most economists are astonishingly afraid of even trying to do.

The basic question of economics has nothing to do with money, and is really only incidentally related to “scarce resources” or “the production of goods” (though many textbooks will define economics in this way—apparently implying that a post-scarcity economy is not an economy). The basic question of economics is really this: How do we make people happy?

This must always be the goal in any economic decision, and if we lose sight of that fact we can make some truly awful decisions. Other goals may work sometimes, but they inevitably fail: If you conceive of the goal as “maximize GDP”, then you’ll try to do any policy that will increase the amount of production, even if that production comes at the expense of stress, injury, disease, or pollution. (And doesn’t that sound awfully familiar, particularly here in the US? 40% of Americans report their jobs as “very stressful” or “extremely stressful”.) If you were to conceive of the goal as “maximize the amount of money”, you’d print money as fast as possible and end up with hyperinflation and total economic collapse à la Zimbabwe. If you were to conceive of the goal as “maximize human life”, you’d support methods of increasing population to the point where we had a hundred billion people whose lives were barely worth living. Even if you were to conceive of the goal as “save as many lives as possible”, you’d find yourself investing in whatever would extend lifespan even if it meant enormous pain and suffering—which is a major problem in end-of-life care around the world. No, there is one goal and one goal only: Maximize happiness.

I suppose technically it should be “maximize utility”, but those are in fact basically the same thing as long as “happiness” is broadly conceived as eudaimonia, the joy of a life well-lived, and not a narrow concept of just adding up pleasure and subtracting out pain. The goal is not to maximize the quantity of dopamine and endorphins in your brain; the goal is to achieve a world where people are safe from danger, free to express themselves, with friends and family who love them, who participate in a world that is just and peaceful. We do not want merely the illusion of these things—we want to actually have them. So let me be clear that this is what I mean when I say “maximize happiness”.

The challenge, therefore, is how we figure out if we are doing that. Things like money and GDP are easy to measure; but how do you measure happiness?

Early economists like Adam Smith and John Stuart Mill tried to deal with this question, and while they were not very successful I think they deserve credit for recognizing its importance and trying to resolve it. But sometime around the rise of modern neoclassical economics, economists gave up on the project and instead sought a narrower task, to measure preferences.

This is technically called ordinal utility, as opposed to cardinal utility; but this terminology obscures the fundamental distinction. Cardinal utility is actual utility; ordinal utility is just preferences.

(The notion that cardinal utility is defined “up to a linear transformation” is really an eminently trivial observation, and it shows just how little physics the physics-envious economists really understand. All we’re talking about here is units of measurement—the same distance is 10.0 inches or 25.4 centimeters, so is distance only defined “up to a linear transformation”? It’s sometimes argued that there is no clear zero—like Fahrenheit and Celsius—but actually it’s pretty clear to me that there is: Zero utility is not existing. So there you go, now you have Kelvin.)

Preferences are a bit easier to measure than happiness, but not by as much as most economists seem to think. If you imagine a small number of options, you can just put them in order from most to least preferred and there you go; and we could imagine asking someone to do that, or, using the technique of revealed preference, infer their preferences from the choices they make, assuming that when given the choice of X and Y, choosing X means you prefer X to Y.

Like much of neoclassical theory, this sounds good in principle and utterly collapses when applied to the real world. Above all: How many options do you have? It’s not easy to say, but the number is definitely huge—and both of those facts pose serious problems for a theory of preferences.

The fact that it’s not easy to say means that we don’t have a well-defined set of choices; even if Y is theoretically on the table, people might not realize it, or they might not see that it’s better even though it actually is. Much of our cognitive effort in any decision is actually spent narrowing the decision space—when deciding who to date or where to go to college or even what groceries to buy, simply generating a list of viable options involves a great deal of effort and extremely complex computation. If you have a true utility function, you can satisfice, choosing the first option that is above a certain threshold, or engage in constrained optimization, choosing whether to continue searching or accept your current choice based on how good it is. Under preference theory, there is no such “how good it is” and no such thresholds. You either search forever or choose a cutoff arbitrarily.
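As a sketch, with invented utility numbers, satisficing is a one-liner once “how good it is” is an actual number:

```python
def satisfice(options, utility, threshold):
    # Take the first option whose utility clears the threshold; with only a
    # preference ordering, no such threshold can even be defined.
    for option in options:
        if utility(option) >= threshold:
            return option
    return None  # nothing good enough yet: keep searching

# Hypothetical apartment hunt; the utilities are invented for illustration
apartments = {"A": 3.0, "B": 6.5, "C": 9.0, "D": 2.0}
print(satisfice(apartments, apartments.get, 6.0))  # "B": good enough, stop here
```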

Even if we could decide how many options there are in any given choice, in order for this to form a complete guide for human behavior we would need an enormous amount of information. Suppose there are 10 different items I could have or not have; then there are 10! = 3.6 million possible preference orderings. If there were 100 items, there would be 100! = 9e157 possible orderings. It won’t do simply to decide on each item whether I’d like to have it or not. Some things are complements: I prefer to have shoes, but I probably prefer to have $100 and no shoes at all rather than $50 and just a left shoe. Other things are substitutes: I generally prefer eating either a bowl of spaghetti or a pizza, rather than both at the same time. No, the combinations matter, and that means that we have an exponentially increasing decision space every time we add a new option. If there really is no more structure to preferences than this, we have an absurd computational task to make even the most basic decisions.
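The counts are easy to verify, and the contrast with a utility-based representation is stark: n items admit n! orderings but need only n utility numbers:

```python
import math

print(math.factorial(10))   # 3,628,800 possible preference orderings
print(math.factorial(100))  # ~9.3e157: more than atoms in the observable universe

# With a utility function, any ordering falls out by sorting, and a new
# item just gets assigned a value and slotted in.
utilities = {"massage": 9.0, "coke": 6.1, "pepsi": 6.0, "stabbing": -50.0}
ranking = sorted(utilities, key=utilities.get, reverse=True)
print(ranking)
```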

This is in fact most likely why we have happiness in the first place. Happiness did not emerge from a vacuum; it evolved by natural selection. Why make an organism have feelings? Why make it care about things? Wouldn’t it be easier to just hard-code a list of decisions it should make? No, on the contrary, it would be exponentially more complex. Utility exists precisely because it is more efficient for an organism to like or dislike things by certain amounts rather than trying to define arbitrary preference orderings. Adding a new item means assigning it an emotional value and then slotting it in, instead of comparing it to every single other possibility.

To illustrate this: I like Coke more than I like Pepsi. (Let the flame wars begin?) I also like getting massages more than I like being stabbed. (I imagine less controversy on this point.) But the difference in my mind between massages and stabbings is an awful lot larger than the difference between Coke and Pepsi. Yet according to preference theory (“ordinal utility”), that difference is not meaningful; instead I have to say that I prefer the pair “drink Pepsi and get a massage” to the pair “drink Coke and get stabbed”. There’s no such thing as “a little better” or “a lot worse”; there is only what I prefer over what I do not prefer, and since these can be assigned arbitrarily there is an impossible computational task before me to make even the most basic decisions.

Real utility also allows you to make decisions under risk, to decide when it’s worth taking a chance. Is a 50% chance of $100 worth giving up a guaranteed $50? Probably. Is a 50% chance of $10 million worth giving up a guaranteed $5 million? Not for me. Maybe for Bill Gates. How do I make that decision? It’s not about what I prefer—I do in fact prefer $10 million to $5 million. It’s about how much difference there is in terms of my real happiness—$5 million is almost as good as $10 million, but $100 is a lot better than $50. My marginal utility of wealth—as I discussed in my post on progressive taxation—is a lot steeper at $50 than it is at $5 million. There’s actually a way to use revealed preferences under risk to estimate true (“cardinal”) utility, developed by Von Neumann and Morgenstern. In fact they proved a remarkably strong theorem: If you don’t have a cardinal utility function that you’re maximizing, you can’t make rational decisions under risk. (In fact many of our risk decisions clearly aren’t rational, because we aren’t actually maximizing an expected utility; what we’re actually doing is something more like cumulative prospect theory, the leading cognitive economic theory of risk decisions. We overrespond to extreme but improbable events—like lightning strikes and terrorist attacks—and underrespond to moderate but probable events—like heart attacks and car crashes. We play the lottery but still buy health insurance. We fear Ebola—which has never killed a single American—but not influenza—which kills 10,000 Americans every year.)
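Here is a sketch of those two decisions under an assumed logarithmic utility and an assumed baseline wealth of $50,000 (both choices are mine, for illustration):

```python
import math

WEALTH = 50_000  # assumed baseline wealth; the exact figure barely matters

def eu_sure(amount):
    # Utility of taking the guaranteed amount
    return math.log(WEALTH + amount)

def eu_coinflip(prize):
    # Expected utility of a 50% chance at the prize
    return 0.5 * math.log(WEALTH + prize) + 0.5 * math.log(WEALTH)

# Small stakes: $50 sure vs. 50% of $100 -- virtually indistinguishable
print(eu_sure(50) - eu_coinflip(100))

# Large stakes: $5M sure vs. 50% of $10M -- the sure thing wins by a mile
print(eu_sure(5_000_000) - eu_coinflip(10_000_000))
```

At small stakes the curvature of the utility function is negligible, so we’re nearly risk-neutral; at life-changing stakes it dominates, and the guaranteed $5 million is clearly better.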

A lot of economists would argue that it’s “unscientific”—Kenneth Arrow said “impossible”—to assign this sort of cardinal distance between our choices. But assigning distances between preferences is something we do all the time. Amazon.com lets us vote on a 5-star scale, and very few people send in error reports saying that cardinal utility is meaningless and only preference orderings exist. In 2000 I would have said “I like Gore best, Nader is almost as good, and Bush is pretty awful; but of course they’re all a lot better than the Fascist Party.” If we had simply been able to express those feelings on the 2000 ballot according to a range vote, either Nader would have won and the United States would now have a three-party system (and possibly a nationalized banking system!), or Gore would have won and we would be a decade ahead of where we currently are in preventing and mitigating global warming. Either one of these things would benefit millions of people.

This is extremely important because of another thing that Arrow said was “impossible”—namely, “Arrow’s Impossibility Theorem”. It should be called Arrow’s Range Voting Theorem, because simply by restricting preferences to a well-defined utility and allowing people to make range votes according to that utility, we can fulfill all the requirements that are supposedly “impossible”. The theorem doesn’t say—as it is commonly paraphrased—that there is no fair voting system; it says that range voting is the only fair voting system. A better claim is that there is no perfect voting system, which is true if you mean that there is no voting system in which the strategic vote always accurately reflects your true beliefs. The Myerson-Satterthwaite Theorem is then the proper theorem to use; if you could design a voting system that would force you to reveal your beliefs, you could design a market auction that would force you to reveal your optimal price. But the least expressive way to vote in a range vote is to pick your favorite and give them 100% while giving everyone else 0%—which is identical to our current plurality vote system. The worst-case scenario in range voting is our current system.
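A range-voting tally is almost trivially simple. The ballots below are hypothetical, constructed so that Nader is nearly everyone’s strong second choice:

```python
def range_vote(ballots):
    # Each ballot scores every candidate 0-100; highest total score wins.
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get)

# Hypothetical 2000-style electorate (48 Gore-first, 49 Bush-first, 3 Nader-first)
ballots = (
    [{"Gore": 100, "Nader": 90, "Bush": 10}] * 48
    + [{"Bush": 100, "Nader": 40, "Gore": 5}] * 49
    + [{"Nader": 100, "Gore": 80, "Bush": 0}] * 3
)
print(range_vote(ballots))  # Nader
```

Note that if each of these voters instead gave 100 to their favorite and 0 to everyone else, the count would reduce to a plurality vote and Bush would win it 49-48-3.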

But the fact that utility exists and matters, unfortunately doesn’t tell us how to measure it. The current state-of-the-art in economics is what’s called “willingness-to-pay”, where we arrange (or observe) decisions people make involving money and try to assign dollar values to each of their choices. This is how you get disturbing calculations like “the lives lost due to air pollution are worth $10.2 billion.”

Why are these calculations disturbing? Because they have the whole thing backwards—people aren’t valuable because they are worth money; money is valuable because it helps people. It’s also really bizarre because it has to be adjusted for inflation. Finally—and this is the point that far too few people appreciate—the value of a dollar is not constant across people. Because different people have different marginal utilities of wealth, something that I would only be willing to pay $1000 for, Bill Gates might be willing to pay $1 million for—and a child in Africa might only be willing to pay $10, because that is all he has to spend. This makes “willingness-to-pay” basically meaningless unless we specify whose wealth is being spent.

Utility, on the other hand, might differ between people—but, at least in principle, it can still be added up between them on the same scale. The problem is that “in principle” part: How do we actually measure it?

So far, the best I’ve come up with is to borrow from public health policy and use the QALY, or quality-adjusted life year. By asking people macabre questions like “What is the maximum number of years of your life you would give up to not have a severe migraine every day?” (I’d say about 20—that’s where I feel ambivalent. At 10 I definitely would; at 30 I definitely wouldn’t.) or “What chance of total paralysis would you take in order to avoid being paralyzed from the waist down?” (I’d say about 20%.) we assign utility values: 80 years of migraines is worth giving up 20 years to avoid, so chronic migraine is a quality of life factor of 0.75. Total paralysis is 5 times as bad as paralysis from the waist down, so if waist-down paralysis is a quality of life factor of 0.90 then total paralysis is 0.50.
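Those two questions correspond to the standard time trade-off and standard gamble elicitation methods, and the arithmetic from the examples above checks out:

```python
# Time trade-off: 60 migraine-free years ~ 80 years with chronic migraine
def time_tradeoff(years_given_up, horizon=80):
    return (horizon - years_given_up) / horizon

# Standard gamble: indifferent between waist-down paralysis for sure and a
# p chance of total paralysis (otherwise full health) => solve for q_total
def standard_gamble(q_certain, p, q_healthy=1.0):
    return (q_certain - (1 - p) * q_healthy) / p

print(time_tradeoff(20))            # 0.75: quality-of-life factor for migraine
print(standard_gamble(0.90, 0.20))  # 0.50: quality-of-life factor for total paralysis
```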

You can probably already see that there are lots of problems: What if people don’t agree? What if due to framing effects the same person gives different answers to slightly different phrasing? Some conditions will directly bias our judgments—depression being the obvious example. How many years of your life would you give up to not be depressed? Suicide means some people say all of them. How well do we really know our preferences on these sorts of decisions, given that most of them are decisions we will never have to make? It’s difficult enough to make the actual decisions in our lives, let alone hypothetical decisions we’ve never encountered.

Another problem is often suggested as well: How do we apply this methodology outside questions of health? Does it really make sense to ask you how many years of your life drinking Coke or driving your car is worth?

Well, actually… it better, because you make that sort of decision all the time. You drive instead of staying home, because you value where you’re going more than the risk of dying in a car accident. You drive instead of walking because getting there on time is worth that additional risk as well. You eat foods you know aren’t good for you because you think the taste is worth the cost. Indeed, most of us aren’t making most of these decisions very well—maybe you shouldn’t actually drive or drink that Coke. But in order to know that, we need to know how many years of your life a Coke is worth.

As a very rough estimate, I figure you can convert from willingness-to-pay to QALY by dividing by your annual consumption spending. Say you spend annually about $20,000—pretty typical for a First World individual. Then $1 is worth about 50 microQALY, or about 26 quality-adjusted life-minutes. Now suppose you are in Third World poverty; your consumption might be only $200 a year, so $1 becomes worth 5 milliQALY, or 1.8 quality-adjusted life-days. The very richest individuals might spend as much as $10 million on consumption, so $1 to them is only worth 100 nanoQALY, or 3 quality-adjusted life-seconds.
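These conversions are easy to verify:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60

def dollar_in_qaly(annual_consumption):
    # Rough conversion: one dollar ~ 1/consumption of a quality-adjusted year.
    return 1 / annual_consumption

for spending, label in [(20_000, "First World"), (200, "Third World"), (10_000_000, "very rich")]:
    q = dollar_in_qaly(spending)
    print(f"{label}: $1 ~ {q:.2e} QALY ~ {q * MINUTES_PER_YEAR:.1f} minutes")
```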

That’s an extremely rough estimate, of course; it assumes you are in perfect health, all your time is equally valuable, and all your purchasing decisions are optimized at the margin. Don’t take it too literally; based on the above estimate, an hour to you is worth about $2.30, so it would be worth your while to work for even $3 an hour. Here’s a simple correction we should probably make: if only a third of your time is really usable for work, you should expect at least $6.90 an hour—and hey, that’s a little less than the US minimum wage. So I think we’re in the right order of magnitude, but the details have a long way to go.

So let’s hear it, readers: How do you think we can best measure happiness?

No, capital taxes should not be zero

JDN 2456998 PST 11:38.

It’s an astonishingly common notion among neoclassical economists that we should never tax capital gains, and all taxes should fall upon labor income. Here Scott Sumner, writing for The Economist, has the audacity to declare this a ‘basic principle of economics’. Many of the arguments are based on rather esoteric theorems like the Atkinson-Stiglitz Theorem (I thought you were better than that, Stiglitz!) and the Chamley-Judd Theorem.

All of these theorems rest upon two very important assumptions, which many economists take for granted—yet which are utterly and totally untrue. For once it’s not assumed that we are infinite identical psychopaths; actually psychopaths might not give wealth to their children in inheritance, which would undermine the argument in a different way, by making each individual have a finite time horizon. No, the assumptions are that saving is the source of investment, and investment is the source of capital income.

Investment is the source of capital, that’s definitely true—the total amount of wealth in society is determined by investment. You do have to account for the fact that real investment isn’t just factories and machines, it’s also education, healthcare, infrastructure. With that in mind, yes, absolutely, the total amount of wealth is a function of the investment rate.

But that doesn’t mean that investment is the source of capital income—because in our present system the distribution of capital income is in no way determined by real investment or the actual production of goods. Virtually all capital income comes from financial markets, which are rife with corruption—they are indeed the main source of corruption that remains in First World nations—and driven primarily by arbitrage and speculation, not real investment. Contrary to popular belief and economic theory, the stock market does not fund corporations; corporations fund the stock market. It’s this bizarre game our society plays, in which a certain portion of the real output of our productive industries is siphoned off so that people who are already rich can gamble over it. Any theory of capital income which fails to take these facts into account is going to be fundamentally distorted.

The other assumption is that investment is savings, that the way capital increases is by labor income that isn’t spent on consumption. This isn’t even close to true, and I never understood why so many economists think it is. The notion seems to be that there is a certain amount of money in the world, and what you don’t spend on consumption goods you can instead spend on investment. But this is just flatly not true; the money supply is dynamically flexible, and the primary means by which money is created is through banks creating loans for the purpose of investment. It’s that I (investment) term I talked about in my post on the deficit; it seems to come out of nowhere, because that’s literally what happens.

If savings were just labor income that you don’t spend on consumption, then the figure W – C, wages and salaries minus consumption, should equal savings, and savings should equal investment. Well, that figure is negative—for reasons I gave in that post. Total employee compensation in the US in 2014 was $9.2 trillion, while total personal consumption expenditure was $11.4 trillion. The reason we are able to save at all is because of government transfers, which account for $2.5 trillion. To fill up our GDP to its total of $16.8 trillion, you need to add capital income: proprietor income ($1.4 trillion) and receipts on assets ($2.1 trillion); then you need to add in the part of government spending that isn’t transfers ($1.4 trillion).
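The sign of that figure follows directly from the cited numbers:

```python
# Figures cited above (US, 2014, in trillions of dollars)
wages = 9.2          # total employee compensation
consumption = 11.4   # personal consumption expenditure
transfers = 2.5      # government transfers

naive_savings = wages - consumption
print(naive_savings)              # -2.2: labor income alone can't cover consumption
print(naive_savings + transfers)  # positive only once transfers are added in
```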

If you start with the fanciful assumption that the way capital increases is by people being “thrifty” and choosing to save a larger proportion of their income, then it makes some sense not to tax capital income. (Scott Sumner makes exactly that argument, having us compare two brothers with equal income, one of whom chooses to save more.) But this is so fundamentally removed from how capital—and for that matter capitalism—actually operates that I have difficulty understanding why anyone could think that it is true.
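For concreteness, here is Sumner’s two-brothers comparison as a quick sketch; the wage, tax rates, and interest rate are hypothetical numbers I’ve chosen for illustration, not figures from his argument:

```python
# Two brothers earn the same wage; one consumes it all immediately,
# the other saves it for a year and earns interest. All numbers
# are hypothetical.
wage = 100.0
labor_tax = 0.30
capital_tax = 0.30
interest_rate = 0.05

after_tax_wage = wage * (1 - labor_tax)   # identical for both brothers

# The spender consumes everything now and pays no further tax.
spender_total_tax = wage * labor_tax

# The saver earns interest on the after-tax wage, then pays
# capital income tax on that interest.
interest_income = after_tax_wage * interest_rate
saver_total_tax = wage * labor_tax + interest_income * capital_tax

print(round(spender_total_tax, 2), round(saver_total_tax, 2))  # 30.0 31.05
```

The saver pays more total tax on the same earned income, which is the core of the “don’t tax capital” intuition; the objection in this post is to the premise that this is where capital income actually comes from.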

The best I can come up with is something like this: They model the world by imagining that there is only one good, peanuts; everyone starts with the same number of peanuts, and everyone chooses either to eat their peanuts or to save and replant them. Then the total production of peanuts in the future will be determined by the proportion of peanuts that were replanted today, and the number of peanuts each person has will reflect their past decisions to save rather than consume. Therefore savings will equal investment, and investment will be the source of capital income.

I bet you can already see the problem even in this simple model, if we just relax the assumption of equal wealth endowments: some people have a lot more peanuts than others. Why do some people eat all their peanuts? It probably has something to do with the fact that they’d starve if they didn’t. Reducing your consumption below the level at which you can survive isn’t “thrifty”; it’s suicidal. (And if you think this is a strawman, the IMF has literally told Third World countries that their problem is that they need to save more. Here they are arguing that in Ghana.) In fact, economic growth leads to saving, not the other way around. Most Americans aren’t starving, and could probably stand to save more than we do, but honestly it might not be good if we did: everyone trying to save more at once can lead to the Paradox of Thrift and cause a recession.
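The point about unequal endowments can be shown in a tiny simulation of the peanut economy. The subsistence level, crop yield, and endowments below are all made-up numbers; the qualitative result doesn’t depend on them:

```python
# Toy peanut economy with a subsistence requirement.
# All numbers are hypothetical.
SUBSISTENCE = 10   # peanuts each farmer must eat this period to survive
YIELD = 1.5        # peanuts harvested next period per peanut replanted

def next_period_capital(endowment, saving_rate):
    """Replant a fraction of whatever is left after eating enough to survive."""
    replantable = max(endowment - SUBSISTENCE, 0)
    replanted = replantable * saving_rate
    return replanted * YIELD

# A farmer at subsistence can save nothing, no matter how "thrifty";
# a richer farmer saving the same fraction compounds their advantage.
poor_capital = next_period_capital(10, 0.5)
rich_capital = next_period_capital(100, 0.5)
print(poor_capital, rich_capital)  # 0.0 67.5
```

Identical “thriftiness,” wildly different capital income, and the gap only grows with each replanting.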

Even worse, in that model world, there is only capital income. There is no such thing as labor income, only the number of peanuts you grow from last year’s planting. If we now add in labor income, what happens? Well, peanuts don’t work anymore… let’s try robots. You have a certain number of robots, and you can either use the robots to do things you need (including somehow feeding you, I guess), or you can use them to build more robots to use later. You can also build more robots yourself. Then the “zero capital tax” argument amounts to saying that the government should take some of your robots for public use if you made them yourself, but not if they were made by other robots you already had.

In order for that argument to carry through, you need to say that there was no such thing as an initial capital endowment; all robots that exist were either made by their owners or saved from previous construction. If there is anyone who simply happened to be born with more robots, or has more because they stole them from someone else (or, more likely, both: they inherited them from someone who stole them), the argument falls apart.

And even then you need to think about the incentives: If capital income is really all from savings, then taxing capital income provides an incentive to spend. Is that a bad thing? I feel like it isn’t; the economy needs spending. In the robot toy model, we’re giving people a reason to use their robots to do actual stuff, instead of just leaving them to make more robots. That actually seems like it might be a good thing, doesn’t it? More stuff gets done that helps people, instead of just having vast warehouses full of robots building other robots in the hopes that someday we can finally use them for something. Whereas, taxing labor income may give people an incentive not to work, which is definitely going to reduce economic output. More precisely, higher taxes on labor would give low-wage workers an incentive to work less, and give high-wage workers an incentive to work more, which is a major part of the justification of progressive income taxes. A lot of the models intended to illustrate the Chamley-Judd Theorem assume that taxes have an effect on capital but no effect on labor, which is kind of begging the question.

Another thought that occurred to me is: What if the robots in the warehouse are all destroyed by a war or an earthquake? And indeed the possibility of sudden capital destruction would be a good reason not to put everything into investment. This is generally modeled as “uninsurable depreciation risk”, but come on; of course it’s uninsurable. All real risk is uninsurable in the aggregate. Insurance redistributes resources from those who have them but don’t need them to those who suddenly find they need them but don’t have them. This actually does reduce risk in terms of utility, but it certainly doesn’t reduce risk in terms of goods. Stephen Colbert made this point very well: “Obamacare needs the premiums of healthier people to cover the costs of sicker people. It’s a devious con that can only be described as—insurance.” (This suggests that Stephen Colbert understands insurance better than many economists.) Someone has to make that new car that you bought using your insurance when you totaled the last one. Insurance companies cannot create cars or houses—or robots—out of thin air. And as Piketty and Saez point out, uninsurable risk undermines the Chamley-Judd Theorem. Unlike all these other economists, Piketty and Saez actually understand capital and inequality.
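The aggregate-risk point can be illustrated with a quick simulation (the loss size and accident probability are hypothetical):

```python
import random

# Insurance redistributes losses; it cannot reduce the total number
# of destroyed cars. All numbers are hypothetical.
random.seed(0)
N = 1000          # people in the insurance pool
LOSS = 20000      # cost of a totaled car
PROB = 0.05       # chance each person totals their car this year

losses = [LOSS if random.random() < PROB else 0 for _ in range(N)]
total_loss = sum(losses)

# Without insurance, a few unlucky people each bear a 20,000 loss.
# With full insurance, everyone pays the same premium instead...
premium = total_loss / N

# ...but the aggregate loss in goods is identical either way:
print(total_loss, premium)
```

The pool smooths each individual’s outcome (that’s the gain in utility), while the total number of cars that must be rebuilt is exactly the same.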

Sumner hand-waves the initial-endowment point away by saying we should just institute a one-time transfer of wealth to equalize the initial distribution, as though this were somehow a practically (not to mention politically) feasible alternative. Ultimately, yes, I’d like to see something like that happen: restore the balance and then begin anew with a just system. But that is exceedingly difficult to do, while raising the tax rate on capital gains is very easy; and furthermore, if we leave the current stock market and derivatives market in place, we will not have a just system by any stretch of the imagination. Perhaps if we can actually create a system where new wealth is really due to your own efforts, where there is no such thing as inheritance of riches (say a 100% estate tax above $1 million), no such thing as poverty (a basic income), no speculation or arbitrage, and financial markets that actually have a single real interest rate and offer all the credit that everyone needs, maybe then you can say that we should not tax capital income.

Until then, we should tax capital income, probably at least as much as we tax labor income.