# Fear not to “overreact”

Mar 29 JDN 2458938

It could be given as a story problem in an algebra class, if you didn’t mind terrifying your students:

A virus spreads exponentially, so that the population infected doubles every two days. Currently 10,000 people are infected. How long will it be until 300,000 are infected? Until 10,000,000 are infected? Until 600,000,000 are infected?

300,000/10,000 = 30, which is about 32 = 2^5, so it will take about 5 doublings, or 10 days.

10,000,000/10,000 = 1,000, which is about 1024 = 2^10, so it will take about 10 doublings, or 20 days.

600,000,000/10,000 = 60,000, which is about 64*1024 = 2^16, so it will take about 16 doublings, or 32 days.
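A few lines of Python make the story-problem arithmetic exact (the rounding to powers of two above is what makes the answers come out to whole doublings):

```python
import math

def days_until(target, current=10_000, doubling_days=2):
    """Days until case counts reach `target`, doubling every `doubling_days` days."""
    doublings = math.log2(target / current)
    return doublings * doubling_days

# Prints roughly 9.8, 19.9, and 31.7 days for the three targets
for target in (300_000, 10_000_000, 600_000_000):
    print(f"{target:>11,}: {days_until(target):.1f} days")
```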

This is the approximate rate at which COVID-19 spreads if uncontrolled.

Fortunately it is not completely uncontrolled; there were about 10,000 confirmed infections on January 30, and there are now about 300,000 as of March 22. This is about 50 days, so the daily growth rate has averaged about 7%. On the other hand, this is probably a substantial underestimate, because testing remains very poor, particularly here in the US.
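The same arithmetic can be run in reverse to recover the average daily growth rate from two case counts; this quick sketch (the function name is mine) uses the figures quoted above:

```python
def daily_growth_rate(start, end, days):
    """Average daily growth rate implied by going from `start` to `end` cases."""
    return (end / start) ** (1 / days) - 1

# ~10,000 confirmed infections on January 30, ~300,000 on March 22: about 50 days
rate = daily_growth_rate(10_000, 300_000, 50)
print(f"{rate:.1%}")  # about 7% per day
```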

Yet the truth is, we don’t know how bad COVID-19 is going to get. Some estimates suggest it may be nearly as bad as the 1918 flu pandemic; others say it may not be much worse than H1N1. Perhaps all this social distancing and quarantine is an overreaction? Perhaps the damage from closing all the schools and restaurants will actually be worse than the damage from the virus itself?

Perhaps. We can't rule that out yet. But even so, we should err on the side of overreaction, because the costs here are highly asymmetric. Overreaction has a moderate, fairly predictable cost. Underreaction could be utterly catastrophic. If we overreact, we waste a quarter or two of productivity, and then everything returns to normal. If we underreact, millions of people die.

This is what it means to err on the side of caution: If we are not 90% sure that we are overreacting, then we should be doing more. We should be fed up with the quarantine procedures and nearly certain that they are not all necessary. That means we are doing the right thing.

Indeed, the really terrifying thing is that we may already have underreacted. Projections of what will happen under various scenarios really don't look good.

But there may still be a chance to react adequately. The advice for most of us seems almost too simple: Stay home. Wash your hands.

# Ancient plagues, modern pandemics

Mar 1 JDN 2458917

The coronavirus epidemic continues; though it originated in the city of Wuhan, the virus has now been confirmed in places as far-flung as Italy, Brazil, and Mexico. So far, about 90,000 people have caught it, and about 3,000 have died, mostly in China.

There are legitimate reasons to be concerned about this epidemic: Like influenza, coronavirus spreads quickly, and can be carried without symptoms, yet unlike influenza, it has a very high rate of complications, causing hospitalization in up to 10% of cases and death in up to 2%. There’s a lot of uncertainty about these numbers, because it’s difficult to know exactly how many people are infected but either have no symptoms or have symptoms that can be confused with other diseases. But we do have reason to believe that coronavirus is much deadlier for those infected than influenza: Influenza spreads so widely that it kills about 300,000 people every year, but this is only 0.1% of the people infected.

And yet, despite our complex interwoven network of international trade that sends people and goods all around the world, our era is probably the safest in history in terms of the risk of infectious disease.

Partly this is technology: Especially for bacterial infections, we have highly effective treatments that our forebears lacked. But for most viral infections we actually don’t have very effective treatments—which means that technology per se is not the real hero here.

Vaccination is a major part of the answer: Vaccines have effectively eradicated polio and smallpox, and would probably be on track to eliminate measles and rubella if not for dangerous anti-vaccination ideology. But even with no vaccine against coronavirus (yet) and not very effective vaccines against influenza, still the death rates from these viruses are nowhere near those of ancient plagues.

The Black Death killed something like 40% of Europe’s entire population. The Plague of Justinian killed as many as 20% of the entire world’s population. This is a staggeringly large death rate compared to a modern pandemic, in which even a 2% death rate would be considered a total catastrophe.

Even the 1918 influenza pandemic, which killed more than all the battle deaths in World War I combined, wasn’t as terrible as an ancient plague; it killed about 2% of the infected population. And when a very similar influenza virus appeared in 2009, how many people did it kill? About 400,000 people, roughly 0.1% of those infected, slightly worse than the average flu season. That’s how much better our public health has gotten in the last century alone.

Remember SARS, a previous viral pandemic that also emerged in China? It only killed 774 people, in a year in which over 300,000 died of influenza.

Sanitation is probably the most important factor: Certainly sanitation was far worse in ancient times. Today almost everyone routinely showers and washes their hands, which makes a big difference—but it’s notable that widespread bathing didn’t save the Romans from the Plague of Justinian.

I think it’s underappreciated just how much better our communication and quarantine procedures are today than they once were. In ancient times, the only way you heard about a plague was a live messenger carrying the news—and that messenger might well be already carrying the virus. Today, an epidemic in China becomes immediate news around the world. This means that people prepare—they avoid travel, they stock up on food, they become more diligent about keeping clean. And perhaps even more important than the preparation by individual people is the preparation by institutions: Governments, hospitals, research labs. We can see the pandemic coming and be ready to respond weeks or even months before it hits us.

So yes, do wash your hands regularly. Wash for at least 20 seconds, which will definitely feel like a long time if you haven’t made it a habit—but it does make a difference. Try to avoid travel for a while. Stock up on food and water in case you need to be quarantined. Follow whatever instructions public health officials give as the pandemic progresses. But you don’t need to panic: We’ve got this under control. That Horseman of the Apocalypse is dead; and fear not, Famine and War are next. I’m afraid Death himself will probably take a while, though.

# The cost of illness

Feb 2 JDN 2458882

As I write this I am suffering from some sort of sinus infection, most likely some strain of rhinovirus. So far it has just been basically a bad cold, so there isn’t much to do aside from resting and waiting it out. But it did get me thinking about healthcare—we’re so focused on the costs of providing it that we often forget the costs of not providing it.

The United States is the only First World country without a universal healthcare system. It is not a coincidence that we also have some of the highest rates of preventable mortality and burden of disease.

We in the United States spend about \$3.5 trillion per year on healthcare, the most of any country in the world, even as a proportion of GDP. Yet this is not the cost of disease; this is how much we are willing to pay to avoid the cost of disease. Whatever harm would have been caused without all that treatment must be worth more than \$3.5 trillion to us—because we paid that much to avoid it.

Globally, the disease burden is about 30,000 disability-adjusted life-years (DALY) per 100,000 people per year—that is to say, the average person is about 30% disabled by disease. I’ve spoken previously about quality-adjusted life years (QALY); the two measures take slightly different approaches to the same overall goal, and are largely interchangeable for most purposes.

Of course this result relies upon the disability weights; it’s not so obvious how we should be comparing across different conditions. How many years would you be willing to trade of normal life to avoid ten years of Alzheimer’s? But it’s probably not too far off to say that if we could somehow wave a magic wand and cure all disease, we would really increase our GDP by something like 30%. This would be over \$6 trillion in the US, and over \$26 trillion worldwide.
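As a back-of-envelope sketch of that 30% figure, with GDP totals that are my own rough assumptions (about \$21 trillion for the US and \$88 trillion for the world, circa 2019), not numbers from the text:

```python
# If the average person loses ~30% of healthy life to disease, curing all
# disease would plausibly raise effective output by ~30%.
disease_burden = 30_000 / 100_000    # DALYs per person per year = 0.30

# GDP figures are rough outside assumptions (circa 2019)
us_gdp, world_gdp = 21e12, 88e12

print(f"US:    ${disease_burden * us_gdp / 1e12:.1f} trillion per year")
print(f"World: ${disease_burden * world_gdp / 1e12:.1f} trillion per year")
```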

Of course, we can’t actually do that. But we can ask what kinds of policies are most likely to promote health in a cost-effective way.

Unsurprisingly, the biggest improvements to be made are in the poorest countries, where it can be astonishingly cheap to improve health. Malaria prevention has a cost of around \$30 per DALY—by donating to the Against Malaria Foundation you can buy a year of life for less than the price of a new video game. Compare this to the standard threshold in the US of \$50,000 per QALY: Targeting healthcare in the poorest countries can increase cost-effectiveness a thousandfold. In humanitarian terms, it would be well worth diverting spending from our own healthcare to provide public health interventions in poor countries. (Fortunately, we have even better options than that, like raising taxes on billionaires or diverting military spending instead.)
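The "thousandfold" claim is just the ratio of the two numbers quoted above:

```python
malaria_cost_per_daly = 30        # Against Malaria Foundation, ~$30 per DALY
us_threshold_per_qaly = 50_000    # standard US cost-effectiveness threshold

ratio = us_threshold_per_qaly / malaria_cost_per_daly
print(f"about {ratio:,.0f}x more cost-effective")  # well over a thousandfold
```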

We in the United States spend about twice as much (per person per year) on healthcare as other First World countries. Are our health outcomes twice as good? Clearly not. Are they any better at all? That really isn’t clear. We certainly don’t have a particularly high life expectancy. We spend more on administrative costs than we do on preventative care—unlike every other First World country except Australia. Almost all of our drugs and therapies are more expensive here than they are everywhere else in the world.

The obvious answer here is to make our own healthcare system more like those of other First World countries. There are a variety of universal health care systems in the world that we could model ourselves on, ranging from the single-payer government-run system in the UK to the universal mandate system of Switzerland. The amazing thing is that it almost doesn’t matter which one we choose: We could copy basically any other First World country and get better healthcare for less spending. Obamacare was in many ways similar to the Swiss system, but we never fully implemented it and the Republicans have been undermining it every way they can. Under President Trump, they have made significant progress in undermining it, and as a result, there are now 3 million more Americans without health insurance than there were before Trump took office. The Republican Party is intentionally increasing the harm of disease.

# What do we mean by “obesity”?

Nov 25 JDN 2458448

I thought this topic would be particularly appropriate for the week of Thanksgiving, since as a matter of public ritual, this time every year, we eat too much and don’t get enough exercise.

No doubt you have heard the term “obesity epidemic”: It’s not just used by WebMD or mainstream news; it’s also used by the American Heart Association, the Centers for Disease Control, the World Health Organization, and sometimes even published in peer-reviewed journal articles.

This is kind of weird, because the formal meaning of the term “epidemic” clearly does not apply here. I feel uncomfortable going against public health officials in what is clearly their area of expertise rather than my own, but everything I’ve ever read about the official definition of the word “epidemic” requires it to be an infectious disease. You can’t “catch” obesity. Hanging out with people who are obese may slightly raise your risk of obesity, but not in the way that hanging out with people with influenza gives you influenza. It’s not caused by bacteria or viruses. Eating food touched by a fat person won’t cause you to catch the fat. Therefore, whatever else it is, this is not an epidemic. (I guess sometimes we use the term more metaphorically, “an epidemic of bankruptcies” or an “epidemic of video game consumption”; but I feel like the WHO and CDC of all people should be more careful.)

Indeed, before we decide what exactly this is, I think we should first ask ourselves a deeper question: What do we mean by “obesity”?

The standard definition of “obesity” relies upon the body mass index (BMI), a very crude measure that simply takes your body mass and divides by the square of your height. It’s easy to measure, but that’s basically its only redeeming quality.

Anyone who has studied dimensional analysis should immediately see a problem here: That isn’t a unit of density. It’s a unit of… density-length? If you take the exact same individual and scale them up by 10%, their BMI will increase by 10%. Do we really intend to say that simply being larger makes you obese, for the exact same ratios of muscle, fat, and bone?
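To see the scaling problem concretely, here is a small sketch (the 70 kg, 1.75 m person is hypothetical):

```python
def bmi(mass_kg, height_m):
    """Body mass index: mass divided by height squared."""
    return mass_kg / height_m ** 2

# Hypothetical person: 70 kg, 1.75 m => BMI ~22.9 ("normal weight")
base = bmi(70.0, 1.75)

# Scale the same body up uniformly by 10%: mass grows with volume (1.1^3),
# height grows linearly (1.1), so BMI rises by exactly 10%.
scaled = bmi(70.0 * 1.1**3, 1.75 * 1.1)

print(base, scaled)  # ~22.9 vs ~25.1: identical proportions, now "overweight"
```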

Because of this, the taller you are, the more likely your BMI is going to register as “obese”, holding constant your actual level of health and fitness. And worldwide, average height has been increasing. This isn’t enough to account for the entire trend in rising BMI, but it reduces it substantially; average height has increased by about 10% since the 1950s, which is enough to raise our average BMI by about 2 points of the 5-point observed increase.

And of course BMI doesn’t say anything about your actual ratios of fat and muscle; all it says is how many total kilograms are in your body. As a result, there is a systematic bias against athletes in the calculation of BMI—and any health measure that is biased against athletes is clearly doing something wrong. All those doctors telling us to exercise more may not realize it, but if we actually took their advice, our BMIs would very likely get higher, not lower—especially for men, especially for strength-building exercise.

It’s also quite clear that our standards for “healthy weight” are distorted by social norms. Feminists have been talking about this for years; most women will never look like supermodels no matter how much weight they lose—and eating disorders are much more dangerous than being even 50 pounds overweight. We’re starting to figure out that similar principles hold for men: A six-pack of abs doesn’t actually mean you’re healthy; it means you are dangerously depleted of fatty acids.

To compensate for this, it seems like the most sensible methodology would be to figure out empirically what sort of weight is most strongly correlated with good health and long lifespan—what BMI maximizes your expected QALY.

You might think that this is what public health officials did when defining what is currently categorized as “normal weight”—but you would be wrong. They used social norms and general intuition, and as a result, our standards for “normal weight” are systematically miscalibrated.

In fact, the empirical evidence is quite clear: The people with the highest expected QALY are those who are classified as “overweight”, with BMI between 25 and 30. Those of “normal weight” (20 to 25) fare slightly worse, followed by those classified as “obese class I” (30 to 35)—but we don’t actually see large effects until either “underweight” (18.5-20) or “obese class II” (35 to 40). And the really severe drops in life and health expectancy don’t happen until “obese class III” (>40); and we see the same severe drops at “very underweight” (<18.5).
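The cutoffs just described are easy to write down directly; this sketch merely restates the categories as listed:

```python
def bmi_category(bmi):
    """BMI categories using the cutoffs given in the text."""
    if bmi < 18.5:
        return "very underweight"
    if bmi < 20:
        return "underweight"
    if bmi < 25:
        return "normal weight"
    if bmi < 30:
        return "overweight"      # empirically, the highest expected QALY
    if bmi < 35:
        return "obese class I"
    if bmi < 40:
        return "obese class II"
    return "obese class III"

print(bmi_category(27))  # overweight
```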

With that in mind, consider that the global average BMI increased from 21.7 in men and 21.4 in women in 1975 to 24.2 in men and 24.4 in women in 2014. That is, the world average increased from the low end of “normal weight”, which is actually too light, to the high end of “normal weight”, which is probably optimal. The global prevalence of “morbid obesity”, the kind that actually has severely detrimental effects on health, is only 0.64% in men and 1.6% in women. Even including “severe obesity”, the kind that has a noticeable but not dramatic effect on health, the total is only 2.3% in men and 5.0% in women. That’s your epidemic?

Reporting often says things like “2/3 of American adults are overweight or obese”; but the “overweight” proportion should be utterly disregarded, since that range is beneficial to health. The actual prevalence of obesity in the US—even including class I obesity, which is not very harmful—is less than 40%.

If obesity were the health crisis it is made out to be, we should expect global life expectancy to be decreasing, or at the very least not increasing. On the contrary, it is rapidly increasing: In 1955, global life expectancy was only 55 years, while it is now over 70.

Worldwide, the countries with the highest obesity rates are those with the longest life expectancy, because both of these things are strongly correlated with high levels of economic development. But it may not just be that: Smoking reduces obesity while also reducing lifespan, and a lot of those countries with very high obesity (including the US) have very low rates of smoking.

There’s some evidence that within the set of rich, highly-developed countries, higher obesity rates are correlated with lower life expectancy, but these effects are much smaller than the effects of high development itself. Going from the highest obesity rate in the world (the US, of course) to the lowest among all highly-developed countries (Japan) requires reducing the obesity rate by 34 percentage points but only increases life expectancy by about 5 years. You’d get the same increase by raising overall economic development from the level of Turkey to the level of Greece, about 10 points on the 100-point HDI scale.

Now, am I saying that we should all be 400 pounds? No, there does come a point where excess weight is clearly detrimental to health. But this threshold is considerably higher than you have probably been led to believe. If you are 15 or 20 pounds “overweight” by what our society (or even your doctor!) tells you, you are probably actually at the optimal weight for your body type. If you are 30 or 40 pounds “overweight”, you may want to try to lose some weight, but don’t make yourself suffer to achieve it. Only if you are 50 pounds or more “overweight” should you really be considering drastic action. If you do try to lose weight, be realistic about your goal: Losing 5% to 10% of your initial weight is a roaring success.

There are also reasons to be particularly concerned about obesity and lack of exercise in children, which is why Michelle Obama’s “Let’s Move!” campaign was a good thing.

And yes, exercise more! Don’t do it to try to lose weight (exercise does not actually cause much weight loss). Just do it. Exercise has so many health benefits it’s honestly kind of ridiculous.

But why am I complaining about this, anyway? Even if we cause some people to worry more about eating less than is strictly necessary, what’s the harm in that? At least we’re getting people to exercise, and Thanksgiving was already ruined by politics anyway.

Well, here’s the thing: I don’t think this obesity panic is actually making us any less obese.

The United States is the most obese country in the world—and you can’t so much as call up Facebook or step into a subway car in the US without someone telling you that you’re too fat and you need to lose weight. The people who really are obese and may need medical help losing weight are the ones most likely to be publicly shamed and harassed for their weight—and there’s no evidence that this actually does anything to reduce their weight. People who experience shaming and harassment for their weight are actually less likely to achieve sustained weight loss.

Teenagers—both boys and girls—who are perceived to be “overweight” are at substantially elevated risk of depression and suicide. People who more fully internalize feelings of shame about their weight have higher blood pressure and higher triglycerides, though once you control for other factors the effect is not huge. There’s even evidence that fat shaming by medical professionals leads to worse treatment outcomes among obese patients.

If we want to actually reduce obesity—and this makes sense, at least for the upper-tail obesity of BMI above 35—then we should be looking at what sort of interventions are actually effective at doing that. Medicine has an important role to play of course, but I actually think economics might be stronger here (though I suppose I would, wouldn’t I?).

Number 1: Stop subsidizing meat and feed grains. There is now quite clear evidence that direct and indirect government subsidies for meat production are a contributing factor in our high fat consumption and thus high obesity rate, though obviously other factors matter too. If you’re worried about farmers, subsidize vegetables instead, or pay for active labor market programs that will train those farmers to work in new industries. This thing we do where we try to save the job instead of the worker is fundamentally idiotic and destructive. Jobs are supposed to be destroyed; that’s what technological improvement is. If you stop destroying jobs, you will stop economic growth.

Number 2: Restrict advertising of high-sugar, high-fat foods, especially to children. Food advertising is particularly effective, because it draws on such primal impulses, and children are particularly vulnerable (as the APA has publicly reported on, including specifically for food advertising). Corporations like McDonald’s and Kellogg’s know quite well what they’re doing when they advertise high-fat, high-sugar foods to kids and get them into the habit of eating them early.

Number 3: Find policies to promote exercise. Despite its small effects on weight loss, exercise has enormous effects on health. Indeed, the fact that people who successfully lose weight show long-term benefits even if they put the weight back on suggests to me that what they really gained was a habit of exercise. We need to find ways to integrate exercise into our daily lives more. The one big thing that our ancestors did do better than we do is constantly exercise—be it hunting, gathering, or farming.

Standing desks and treadmill desks may seem weird, but there is evidence that they actually improve health. Right now they are quite expensive, so most people don’t buy them. If we subsidized them, they would be cheaper; if they were cheaper, more people would buy them; if more people bought them, they would seem less weird. Eventually, it could become normative to walk on a treadmill while you work, and sitting might seem weird. Even a quite large subsidy could be worthwhile: Say we had to spend \$500 per person per year to buy every working adult a treadmill desk each year. That comes to about \$80 billion per year, which is less than one fourth what we’re currently spending on diabetes or heart disease, so we’d break even if we simply managed to reduce those two conditions by 13%. Add in all the other benefits for depression, chronic pain, sleep, sexual function, and so on, and the quality of life improvement could be quite substantial.
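A quick check on the subsidy arithmetic. The worker count and disease-spending totals below are rough outside assumptions of mine, not figures from the text (roughly 160 million US workers; on the order of \$330 billion per year spent on diabetes and \$290 billion on heart disease):

```python
# Back-of-envelope: subsidize a treadmill desk for every working adult.
subsidy_per_person = 500        # dollars per person per year
workers = 160e6                 # assumed: ~160 million US workers

program_cost = subsidy_per_person * workers     # ~$80 billion per year

# Assumed annual US spending (rough estimates, not from the text)
diabetes, heart = 330e9, 290e9
break_even = program_cost / (diabetes + heart)  # reduction needed to break even

print(f"${program_cost / 1e9:.0f}B/yr; break even at {break_even:.0%} reduction")
```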

# Medicaid expansion and the human cost of political polarization

JDN 2457422

As of this writing, there are still 22 of our 50 US states that have refused to expand Medicaid under the Affordable Care Act. Several other states (including Michigan) expanded Medicaid, but on an intentionally slowed timetable. The way the law was written, these people are not eligible for subsidized private insurance (because it was assumed they’d be on Medicaid!), so there are almost 3 million people without health insurance because of the refused expansions.

Why? Would expanding Medicaid on the original timetable be too arduous to accomplish? If so, explain why 13 states managed to do it on time.

Would expanding Medicaid be expensive, and put a strain on state budgets? No, the federal government will pay 90% of the cost until 2020. Some states claim that even the 10% is unbearable, but when you figure in the reduced strain on emergency rooms and public health, expanding Medicaid would most likely save state money, especially with the 90% federal funding.

To really understand why so many states are digging in their heels, I’ve made you a little table. It includes three pieces of information about each state: The first column is whether it accepted Medicaid immediately (“Yes”), accepted it with delays or conditions, or hasn’t officially accepted it yet but is negotiating to do so (“Maybe”), or refused it completely (“No”). The second column is the political party of the state governor. The third column is the majority political party of the state legislatures (“D” for Democrat, “R” for Republican, “I” for Independent, or “M” for mixed if one house has one majority and the other house has the other).

| State | Medicaid? | Governor | Legislature |
|-------|-----------|----------|-------------|
| Alabama | No | R | R |
| Alaska | Maybe | I | R |
| Arizona | Yes | R | R |
| Arkansas | Maybe | R | R |
| California | Yes | D | D |
| Colorado | Yes | D | M |
| Connecticut | Yes | D | D |
| Delaware | Yes | D | D |
| Florida | No | R | R |
| Georgia | No | R | R |
| Hawaii | Yes | D | D |
| Idaho | No | R | R |
| Illinois | Yes | R | D |
| Indiana | Maybe | R | R |
| Iowa | Maybe | R | M |
| Kansas | No | R | R |
| Kentucky | Yes | R | M |
| Louisiana | Maybe | D | R |
| Maine | No | R | M |
| Maryland | Yes | R | D |
| Massachusetts | Yes | R | D |
| Michigan | Maybe | R | R |
| Minnesota | No | D | M |
| Mississippi | No | R | R |
| Missouri | No | D | M |
| Montana | Maybe | D | M |
| Nebraska | No | R | R |
| Nevada | Yes | R | R |
| New Hampshire | Maybe | D | R |
| New Jersey | Yes | R | D |
| New Mexico | Yes | R | M |
| New York | Yes | D | D |
| North Carolina | No | R | R |
| North Dakota | Yes | R | R |
| Ohio | Yes | R | R |
| Oklahoma | No | R | R |
| Oregon | Yes | D | D |
| Pennsylvania | Maybe | D | R |
| Rhode Island | Yes | D | D |
| South Carolina | No | R | R |
| South Dakota | Maybe | R | R |
| Tennessee | No | R | R |
| Texas | No | R | R |
| Utah | No | R | R |
| Vermont | Yes | D | D |
| Virginia | Maybe | D | R |
| Washington | Yes | D | D |
| West Virginia | Yes | D | R |
| Wisconsin | No | R | R |
| Wyoming | Maybe | R | R |

I have taken the liberty of some color-coding.

The states highlighted in red are states that refused the Medicaid expansion which have Republican governors and Republican majorities in both legislatures; that’s Alabama, Florida, Georgia, Idaho, Kansas, Mississippi, Nebraska, North Carolina, Oklahoma, South Carolina, Tennessee, Texas, Utah, and Wisconsin.

The states highlighted in purple are states that refused the Medicaid expansion which have mixed party representation between Democrats and Republicans; that’s Maine, Minnesota, and Missouri.

And I would have highlighted in blue the states that refused the Medicaid expansion which have Democrat governors and Democrat majorities in both legislatures—but there aren’t any.

There were Republican-led states which said “Yes” (Arizona, Nevada, North Dakota, and Ohio). There were Republican-led states which said “Maybe” (Arkansas, Indiana, Michigan, South Dakota, and Wyoming).

Mixed states were across the board, some saying “Yes” (Colorado, Illinois, Kentucky, Maryland, Massachusetts, New Jersey, New Mexico, and West Virginia), some saying “Maybe” (Alaska, Iowa, Louisiana, Montana, New Hampshire, Pennsylvania, and Virginia), and a few saying “No” (Maine, Minnesota, and Missouri).

But every single Democrat-led state said “Yes”. California, Connecticut, Delaware, Hawaii, New York, Oregon, Rhode Island, Vermont, and Washington. There aren’t even any Democrat-led states that said “Maybe”.

Perhaps it is simplest to summarize this in another table. Each row is a party configuration (“Democrat, Republican”, or “mixed”); the column is a Medicaid decision (“Yes”, “Maybe”, or “No”); in each cell is the count of how many states that fit that description:

| | Yes | Maybe | No |
|---|---|---|---|
| Democrat | 9 | 0 | 0 |
| Republican | 4 | 5 | 14 |
| Mixed | 8 | 7 | 3 |

Shall I do a chi-square test? Sure, why not? A chi-square test of independence on this table produces a p-value below 0.0001. This is not a coincidence. Being a Republican-led state is strongly correlated with rejecting the Medicaid expansion.
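The test is easy to reproduce. This is a minimal pure-Python sketch (`scipy.stats.chi2_contingency` would do the same in one call); it uses the closed-form survival function for a chi-square distribution with 4 degrees of freedom:

```python
import math

# Rows: Democrat, Republican, Mixed; columns: Yes, Maybe, No
observed = [
    [9, 0, 0],
    [4, 5, 14],
    [8, 7, 3],
]

n = sum(map(sum, observed))
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected
chi2 = sum(
    (obs - rt * ct / n) ** 2 / (rt * ct / n)
    for row, rt in zip(observed, row_totals)
    for obs, ct in zip(row, col_totals)
)

# Survival function of chi-square with df = (3-1)*(3-1) = 4
p = math.exp(-chi2 / 2) * (1 + chi2 / 2)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # p comes out below 0.0001
```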

Indeed, because the elected officials were there first, I can say that there is Granger causality from being a Republican-led state to rejecting the Medicaid expansion. Based on the fact that mixed states were much less likely to reject Medicaid than Republican states, I could even estimate a dose-response curve on how having more Republicans makes you more likely to reject Medicaid.

Republicans did this, is basically what I’m getting at here.

Obamacare itself was legitimately controversial (though the Republicans never quite seemed to grasp that they needed a counterproposal for their argument to make sense), but once it was passed, accepting the Medicaid expansion should have been a no-brainer. The federal government is giving you money in order to give healthcare to poor people. It will not be expensive for your state budget; in fact it will probably save you money in the long run. It will help thousands or millions of your constituents. Its impact on the federal budget is negligible.

But no, 14 Republican-led states couldn’t let themselves get caught implementing a Democrat’s policy, especially if it would actually work. If it failed catastrophically, they could say “See? We told you so.” But if it succeeded, they’d have to admit that their opponents sometimes have good ideas. (You know, just like the Democrats did, when they copied most of Mitt Romney’s healthcare system.)

As a result of their stubbornness, almost 3 million Americans don’t have healthcare. Some of those people will die as a result—economists estimate about 7,000 people. Hundreds of thousands more will suffer. All needlessly.

When 3,000 people are killed in a terrorist attack, Republicans clamor to kill millions in response with carpet bombing and nuclear weapons.

But when 7,000 people will die without healthcare, Republicans say we can’t afford it.

# Christmas and the economy

JDN 2457380 (Dec 23, 2015)

By the time this post officially goes live, it will be two days before Christmas. (As I actually write, the Federal Reserve just ended our zero-lower-bound interest rate policy. I’ll talk about that more in a later post.)

Christmas is one of the most economically significant holidays. Partly this is because there are more Christians than people of any other religion, but mostly it is because Christmas is the most capitalist of holidays, the one that is by now defined primarily by the surge it creates in consumer spending. Yet even this surge is often wildly overstated.

Total Christmas-related spending is over \$600 billion per year, almost exactly equal to the US military budget. (Good news, by the way; the US military budget is declining under the Obama administration, approaching—though not yet reaching—a more sensible and sustainable peacetime level.) This is mostly gifts, but cards, decorations and travel are also important parts.

This is a lot of money, but not so much compared to total US consumer spending, which is \$6.7 trillion per year. (The Consumer Expenditure Survey tracks this sort of thing with an obsessive level of detail; if you’ve ever wanted to know how much the average 45-54 year-old American spends on eggs each year, now you can.) Thus, about 9% of our spending is Christmas-related, which honestly seems kind of low given that the season now covers approximately 20% of the year.

The best I can figure, the reason Christmas keeps moving back is a competitive pressure: There’s some sort of advantage to being the first business to start your Christmas sales, so each business tries to be earlier than everyone else was last year—with the result that they all keep moving further and further back in the year. Eventually we’ll just start our Christmas shopping on December 26.

The money supply fluctuates seasonally, and often peaks in December; but it also often peaks in March (and I’m honestly not sure why). So once again, Christmas isn’t as important for the economy as many would have you believe. While it may provide some macroeconomic boost, it provides the largest boost when people have lots of extra money to spend, which is when we need it the least.

As I wrote about in last year’s Christmas post, many economists believe that much of this spending is inefficient, because they don’t actually understand what gifts are for. Fortunately economists seem to be coming around and seeing why gifts are actually beneficial, though their reasons for this are sometimes dry enough that they don’t make great Christmas cards. (That doesn’t stop some people from saying that you shouldn’t give gifts, and if you give anything you should give cash.)

So no, the economy will not live or die depending on how much people buy at Christmas. While it is the most economically significant holiday, it is still not really all that economically significant.

What I’m more concerned about is the stress that the Christmas season creates in a lot of people. WebMD, the Cleveland Clinic, the Mayo Clinic, and MedicineNet all have articles about the public health damage caused by holiday stress. Death rates actually spike during the holiday season, though the precise reason is unclear—and contrary to rumor it is definitely not suicide. Deaths by heart attack and stroke spike during the holidays, possibly due to lack of medical care.

There are many causes of this stress; not least, I’m sure, is the increased pressure on retail workers. But a lot of it may just be the increased pressure people put on themselves to buy the perfect gift, have the perfect Christmas dinner, not get into a political argument with their racist family members, and so on.

But when we push ourselves so hard to have a perfect holiday, we end up making ourselves miserable. It’s like constantly saying in your head, “Have fun! Why aren’t you having fun!?”

So what I’d like to say to you all is really quite simple: Try to relax. It’s okay if everything doesn’t go perfectly. Happiness is not found in pressuring ourselves to live a perfect life. It is found in appreciating how good our lives already are.

# How do we measure happiness?

JDN 2457028 EST 20:33.

No, really, I’m asking. I strongly encourage my readers to offer in the comments any ideas they have about the measurement of happiness in the real world; this has been a stumbling block in one of my ongoing research projects.

In one sense the measurement of happiness—or more formally utility—is absolutely fundamental to economics; in another it’s something most economists are astonishingly afraid of even trying to do.

The basic question of economics has nothing to do with money, and is really only incidentally related to “scarce resources” or “the production of goods” (though many textbooks will define economics in this way—apparently implying that a post-scarcity economy is not an economy). The basic question of economics is really this: How do we make people happy?

This must always be the goal in any economic decision, and if we lose sight of that fact we can make some truly awful decisions. Other goals may work sometimes, but they inevitably fail: If you conceive of the goal as “maximize GDP”, then you’ll try to do any policy that will increase the amount of production, even if that production comes at the expense of stress, injury, disease, or pollution. (And doesn’t that sound awfully familiar, particularly here in the US? 40% of Americans report their jobs as “very stressful” or “extremely stressful”.) If you were to conceive of the goal as “maximize the amount of money”, you’d print money as fast as possible and end up with hyperinflation and total economic collapse à la Zimbabwe. If you were to conceive of the goal as “maximize human life”, you’d support methods of increasing population to the point where we had a hundred billion people whose lives were barely worth living. Even if you were to conceive of the goal as “save as many lives as possible”, you’d find yourself investing in whatever would extend lifespan even if it meant enormous pain and suffering—which is a major problem in end-of-life care around the world. No, there is one goal and one goal only: Maximize happiness.

I suppose technically it should be “maximize utility”, but those are in fact basically the same thing as long as “happiness” is broadly conceived as eudaimonia—the joy of a life well-lived—and not a narrow concept of just adding up pleasure and subtracting out pain. The goal is not to maximize the quantity of dopamine and endorphins in your brain; the goal is to achieve a world where people are safe from danger, free to express themselves, with friends and family who love them, who participate in a world that is just and peaceful. We do not want merely the illusion of these things—we want to actually have them. So let me be clear that this is what I mean when I say “maximize happiness”.

The challenge, therefore, is how we figure out if we are doing that. Things like money and GDP are easy to measure; but how do you measure happiness?

Early economists like Adam Smith and John Stuart Mill tried to deal with this question, and while they were not very successful I think they deserve credit for recognizing its importance and trying to resolve it. But sometime around the rise of modern neoclassical economics, economists gave up on the project and instead sought a narrower task, to measure preferences.

Technically, this is called ordinal utility, as opposed to cardinal utility; but that terminology obscures the fundamental distinction. Cardinal utility is actual utility; ordinal utility is just preferences.

(The notion that cardinal utility is defined “up to a linear transformation” is really an eminently trivial observation, and it shows just how little physics the physics-envious economists really understand. All we’re talking about here is units of measurement—the same distance is 10.0 inches or 25.4 centimeters, so is distance only defined “up to a linear transformation”? It’s sometimes argued that there is no clear zero—like Fahrenheit and Celsius—but actually it’s pretty clear to me that there is: Zero utility is not existing. So there you go, now you have Kelvin.)

Preferences are a bit easier to measure than happiness, but not by as much as most economists seem to think. If you imagine a small number of options, you can just put them in order from most to least preferred and there you go; and we could imagine asking someone to do that, or—the technique of revealed preference—use the choices they make to infer their preferences by assuming that when given the choice of X and Y, choosing X means you prefer X to Y.

Like much of neoclassical theory, this sounds good in principle and utterly collapses when applied to the real world. Above all: How many options do you have? It’s not easy to say, but the number is definitely huge—and both of those facts pose serious problems for a theory of preferences.

The fact that it’s not easy to say means that we don’t have a well-defined set of choices; even if Y is theoretically on the table, people might not realize it, or they might not see that it’s better even though it actually is. Much of our cognitive effort in any decision is actually spent narrowing the decision space—when deciding who to date or where to go to college or even what groceries to buy, simply generating a list of viable options involves a great deal of effort and extremely complex computation. If you have a true utility function, you can satisfice—choosing the first option that is above a certain threshold—or engage in constrained optimization—choosing whether to continue searching or accept your current choice based on how good it is. Under preference theory, there is no such “how good it is” and no such thresholds. You either search forever or choose a cutoff arbitrarily.

Even if we could decide how many options there are in any given choice, in order for this to form a complete guide for human behavior we would need an enormous amount of information. Suppose there are 10 different items I could have or not have; then there are 10! = 3.6 million possible preference orderings. If there were 100 items, there would be 100! = 9e157 possible orderings. It won’t do simply to decide on each item whether I’d like to have it or not. Some things are complements: I prefer to have shoes, but I probably prefer to have \$100 and no shoes at all rather than \$50 and just a left shoe. Other things are substitutes: I generally prefer eating either a bowl of spaghetti or a pizza, rather than both at the same time. No, the combinations matter, and that means that we have an exponentially increasing decision space every time we add a new option. If there really is no more structure to preferences than this, we have an absurd computational task to make even the most basic decisions.
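The arithmetic here is easy to check directly. The bundle count at the end is my extrapolation of the “exponentially increasing decision space” point, not a figure from the text:

```python
import math

# The post's figures: n distinct items admit n! strict preference orderings.
for n in (10, 100):
    print(f"{n} items: {math.factorial(n):.3g} orderings")
# 10 items: 3.63e+06 orderings   (the post's "3.6 million")
# 100 items: 9.33e+157 orderings (the post's "9e157")

# Because complements and substitutes matter, the real objects of choice are
# bundles: 2**n subsets of the items, hence (2**n)! orderings over bundles.
n = 10
orderings_over_bundles = math.factorial(2 ** n)  # 1024! -- a ~2,640-digit number
print(f"(2**{n})! has {len(str(orderings_over_bundles))} digits")
```

Even ten items thus generate a bundle-ordering space far beyond anything a brain (or computer) could enumerate, which is the computational point the paragraph is making.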

This is in fact most likely why we have happiness in the first place. Happiness did not emerge from a vacuum; it evolved by natural selection. Why make an organism have feelings? Why make it care about things? Wouldn’t it be easier to just hard-code a list of decisions it should make? No, on the contrary, it would be exponentially more complex. Utility exists precisely because it is more efficient for an organism to like or dislike things by certain amounts rather than trying to define arbitrary preference orderings. Adding a new item means assigning it an emotional value and then slotting it in, instead of comparing it to every single other possibility.

To illustrate this: I like Coke more than I like Pepsi. (Let the flame wars begin?) I also like getting massages more than I like being stabbed. (I imagine less controversy on this point.) But the difference in my mind between massages and stabbings is an awful lot larger than the difference between Coke and Pepsi. Yet according to preference theory (“ordinal utility”), that difference is not meaningful; instead I have to say that I prefer the pair “drink Pepsi and get a massage” to the pair “drink Coke and get stabbed”. There’s no such thing as “a little better” or “a lot worse”; there is only what I prefer over what I do not prefer, and since these can be assigned arbitrarily there is an impossible computational task before me to make even the most basic decisions.

Real utility also allows you to make decisions under risk, to decide when it’s worth taking a chance. Is a 50% chance of \$100 worth giving up a guaranteed \$50? Probably. Is a 50% chance of \$10 million worth giving up a guaranteed \$5 million? Not for me. Maybe for Bill Gates. How do I make that decision? It’s not about what I prefer—I do in fact prefer \$10 million to \$5 million. It’s about how much difference there is in terms of my real happiness—\$5 million is almost as good as \$10 million, but \$100 is a lot better than \$50. My marginal utility of wealth—as I discussed in my post on progressive taxation—is a lot steeper at \$50 than it is at \$5 million. There’s actually a way to use revealed preferences under risk to estimate true (“cardinal”) utility, developed by Von Neumann and Morgenstern. In fact they proved a remarkably strong theorem: If you don’t have a cardinal utility function that you’re maximizing, you can’t make rational decisions under risk. (In fact many of our risk decisions clearly aren’t rational, because we aren’t actually maximizing an expected utility; what we’re actually doing is something more like cumulative prospect theory, the leading cognitive economic theory of risk decisions. We overrespond to extreme but improbable events—like lightning strikes and terrorist attacks—and underrespond to moderate but probable events—like heart attacks and car crashes. We play the lottery but still buy health insurance. We fear Ebola—which has never killed a single American—but not influenza—which kills 10,000 Americans every year.)
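A minimal sketch of this asymmetry, assuming (my assumption, purely illustrative) logarithmic utility of total wealth and a \$20,000 baseline: the certainty equivalent of a gamble is the sure amount that yields the same expected utility.

```python
import math

def certainty_equivalent(outcomes, wealth):
    """Sure payoff with the same expected log-utility as the gamble.
    outcomes: list of (probability, payoff) pairs."""
    eu = sum(p * math.log(wealth + payoff) for p, payoff in outcomes)
    return math.exp(eu) - wealth

WEALTH = 20_000  # illustrative baseline, not from the post
small = certainty_equivalent([(0.5, 100), (0.5, 0)], WEALTH)
big = certainty_equivalent([(0.5, 10_000_000), (0.5, 0)], WEALTH)
print(f"50% chance of $100 ~ a sure ${small:.2f}")  # ~$49.94: nearly indifferent
print(f"50% chance of $10M ~ a sure ${big:,.0f}")   # ~$427,661: keep the sure $5M
```

With steeply diminishing marginal utility, the small gamble is worth almost exactly its expected value, while the big one is worth less than a tenth of the guaranteed \$5 million, matching the intuition in the paragraph.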

A lot of economists would argue that it’s “unscientific”—Kenneth Arrow said “impossible”—to assign this sort of cardinal distance between our choices. But assigning distances between preferences is something we do all the time. Amazon.com lets us vote on a 5-star scale, and very few people send in error reports saying that cardinal utility is meaningless and only preference orderings exist. In 2000 I would have said “I like Gore best, Nader is almost as good, and Bush is pretty awful; but of course they’re all a lot better than the Fascist Party.” If we had simply been able to express those feelings on the 2000 ballot according to a range vote, either Nader would have won and the United States would now have a three-party system (and possibly a nationalized banking system!), or Gore would have won and we would be a decade ahead of where we currently are in preventing and mitigating global warming. Either one of these things would benefit millions of people.

This is extremely important because of another thing that Arrow said was “impossible”—namely, “Arrow’s Impossibility Theorem”. It should be called Arrow’s Range Voting Theorem, because simply by restricting preferences to a well-defined utility and allowing people to make range votes according to that utility, we can fulfill all the requirements that are supposedly “impossible”. The theorem doesn’t say—as it is commonly paraphrased—that there is no fair voting system; it says that range voting is the only fair voting system. A better claim is that there is no perfect voting system, which is true if you mean that there is no voting system in which honestly reporting your true beliefs is always the best strategy. The Myerson-Satterthwaite Theorem is then the proper theorem to use; if you could design a voting system that would force you to reveal your beliefs, you could design a market auction that would force you to reveal your optimal price. But the least expressive way to vote in a range vote is to pick your favorite and give them 100% while giving everyone else 0%—which is identical to our current plurality vote system. The worst-case scenario in range voting is our current system.
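A range-voting tally is simple to sketch. The ballots below are hypothetical, loosely modeled on the 2000 example above; the scores are illustrative, not data.

```python
def range_vote(ballots):
    """Sum each candidate's scores (0-100) across ballots; highest total wins."""
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get), totals

ballots = [
    {"Gore": 100, "Nader": 90, "Bush": 20},  # an expressive ballot
    {"Gore": 100, "Nader": 0, "Bush": 0},    # least expressive: plurality-style
    {"Bush": 100, "Nader": 30, "Gore": 10},
]
winner, totals = range_vote(ballots)
print(winner, totals)  # Gore {'Gore': 210, 'Nader': 120, 'Bush': 120}
```

Note that the second ballot is exactly a plurality vote expressed as a range vote, which is the sense in which plurality is range voting's worst case.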

But the fact that utility exists and matters, unfortunately doesn’t tell us how to measure it. The current state-of-the-art in economics is what’s called “willingness-to-pay”, where we arrange (or observe) decisions people make involving money and try to assign dollar values to each of their choices. This is how you get disturbing calculations like “the lives lost due to air pollution are worth \$10.2 billion.”

Why are these calculations disturbing? Because they have the whole thing backwards—people aren’t valuable because they are worth money; money is valuable because it helps people. It’s also really bizarre because it has to be adjusted for inflation. Finally—and this is the point that far too few people appreciate—the value of a dollar is not constant across people. Because different people have different marginal utilities of wealth, something that I would only be willing to pay \$1000 for, Bill Gates might be willing to pay \$1 million for—and a child in Africa might only be willing to pay \$10, because that is all he has to spend. This makes “willingness-to-pay” a basically meaningless concept unless we also specify whose wealth is being spent.

Utility, on the other hand, might differ between people—but, at least in principle, it can still be added up between them on the same scale. The problem is that “in principle” part: How do we actually measure it?

So far, the best I’ve come up with is to borrow from public health policy and use the QALY, or quality-adjusted life year. By asking people macabre questions like “What is the maximum number of years of your life you would give up to not have a severe migraine every day?” (I’d say about 20—that’s where I feel ambivalent. At 10 I definitely would; at 30 I definitely wouldn’t.) or “What chance of total paralysis would you take in order to avoid being paralyzed from the waist down?” (I’d say about 20%.) we assign utility values: 80 years of migraines is worth giving up 20 years to avoid, so chronic migraine is a quality of life factor of 0.75. Total paralysis is 5 times as bad as paralysis from the waist down, so if waist-down paralysis is a quality of life factor of 0.90 then total paralysis is 0.50.
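The two calculations in this paragraph can be written out explicitly, using the numbers from the text and an assumed 80-year horizon:

```python
# Time trade-off: giving up 20 of 80 years to avoid daily migraines means
# 80 migraine-years are worth 60 healthy years.
migraine_factor = (80 - 20) / 80
print(migraine_factor)  # 0.75

# Standard gamble: indifference at a 20% risk of total paralysis means
# U(waist-down) = 0.2 * U(total) + 0.8 * U(healthy).
# With U(waist-down) = 0.90 and U(healthy) = 1.0, solve for U(total):
U_waist_down, U_healthy, p = 0.90, 1.0, 0.2
U_total = (U_waist_down - (1 - p) * U_healthy) / p
print(round(U_total, 2))  # 0.5
```

These are the two standard QALY elicitation methods in health economics: the first is a time trade-off question, the second a standard gamble.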

You can probably already see that there are lots of problems: What if people don’t agree? What if due to framing effects the same person gives different answers to slightly different phrasing? Some conditions will directly bias our judgments—depression being the obvious example. How many years of your life would you give up to not be depressed? Suicide means some people say all of them. How well do we really know our preferences on these sorts of decisions, given that most of them are decisions we will never have to make? It’s difficult enough to make the actual decisions in our lives, let alone hypothetical decisions we’ve never encountered.

Another problem is often suggested as well: How do we apply this methodology outside questions of health? Does it really make sense to ask you how many years of your life drinking Coke or driving your car is worth?

Well, actually… it better, because you make that sort of decision all the time. You drive instead of staying home, because you value where you’re going more than the risk of dying in a car accident. You drive instead of walking because getting there on time is worth that additional risk as well. You eat foods you know aren’t good for you because you think the taste is worth the cost. Indeed, most of us aren’t making most of these decisions very well—maybe you shouldn’t actually drive or drink that Coke. But in order to know that, we need to know how many years of your life a Coke is worth.

As a very rough estimate, I figure you can convert from willingness-to-pay to QALY by dividing by your annual consumption spending. Say you spend annually about \$20,000—pretty typical for a First World individual. Then \$1 is worth about 50 microQALY, or about 26 quality-adjusted life-minutes. Now suppose you are in Third World poverty; your consumption might be only \$200 a year, so \$1 becomes worth 5 milliQALY, or 1.8 quality-adjusted life-days. The very richest individuals might spend as much as \$10 million on consumption, so \$1 to them is only worth 100 nanoQALY, or 3 quality-adjusted life-seconds.

That’s an extremely rough estimate, of course; it assumes you are in perfect health, all your time is equally valuable, and all your purchasing decisions are optimized at the margin. Don’t take it too literally; based on the above estimate, an hour to you is worth about \$2.30, so it would be worth your while to work for even \$3 an hour. Here’s a simple correction we should probably make: if only a third of your time is really usable for work, you should expect at least \$6.90 an hour—and hey, that’s a little less than the US minimum wage. So I think we’re in the right order of magnitude, but the details have a long way to go.
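The conversion sketched above, using the same numbers (these are order-of-magnitude estimates, as the text stresses):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def dollar_in_qaly_seconds(annual_consumption):
    """If a QALY-year is 'worth' your annual consumption spending,
    then $1 buys this many quality-adjusted seconds."""
    return SECONDS_PER_YEAR / annual_consumption

for spending in (20_000, 200, 10_000_000):
    print(f"${spending:,}/yr: $1 = {dollar_in_qaly_seconds(spending):,.1f} s")
# $20,000/yr: $1 = 1,576.8 s      (~26 minutes)
# $200/yr: $1 = 157,680.0 s      (~1.8 days)
# $10,000,000/yr: $1 = 3.2 s

# The same estimate prices an hour of your time:
print(f"${20_000 / (365 * 24):.2f}/hour")  # $2.28, the "about $2.30" above
```

The tripling to \$6.90 is then just the correction for only a third of your time being usable for work.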

So let’s hear it, readers: How do you think we can best measure happiness?