Valuing harm without devaluing the harmed

June 9 JDN 2458644

In last week’s post I talked about the matter of “putting a value on a human life”. I explained how we don’t actually need to make a transparently absurd statement like “a human life is worth $5 million” to do cost-benefit analysis; we simply need to ask ourselves what else we could do with any given amount of money. We don’t actually need to put a dollar value on human lives; we need only value them in terms of other lives.

But there is a deeper problem to face here, which is how we ought to value not simply life, but quality of life. The notion is built into the concept of quality-adjusted life-years (QALY), but how exactly do we make such a quality adjustment?

Indeed, much like cost-benefit analysis in general or the value of a statistical life, the very concept of QALY can be repugnant to many people. The problem seems to be that it violates our deeply-held belief that all lives are of equal value: If I say that saving one person adds 2.5 QALY and saving another adds 68 QALY, I seem to be saying that the second person is worth more than the first.

But this is not really true. QALY aren’t associated with a particular individual. They are associated with the duration and quality of life.

It should be fairly easy to convince yourself that duration matters: Saving a newborn baby who will go on to live to be 84 years old adds an awful lot more in terms of human happiness than extending the life of a dying person by a single hour. To call each of these things “saving a life” is actually very unequal: It’s implying that 1 hour for the second person is worth 84 years for the first.

Quality, on the other hand, poses much thornier problems. Presumably, we’d like to be able to say that being wheelchair-bound is a bad thing, and if we can make people able to walk we should want to do that. But this means that we need to assign some sort of QALY cost to being in a wheelchair, which then seems to imply that people in wheelchairs are worth less than people who can walk.

And the same goes for any disability or disorder: Assigning a QALY cost to depression, or migraine, or cystic fibrosis, or diabetes, or blindness, or pneumonia, always seems to imply that people with the condition are worth less than people without. This is a deeply unsettling result.

Yet I think the mistake is in how we are using the concept of “worth”. We are not saying that the happiness of someone with depression is less important than the happiness of someone without; we are saying that the person with depression experiences less happiness—which, in the case of depression especially, is basically true by construction.

Does this imply, however, that if we are given the choice between saving two people, one of whom has a disability, we should save the one without?

Well, here’s an extreme example: Suppose there is a plague which kills 50% of its victims within one year. There are two people in a burning building. One of them has the plague, the other does not. You only have time to save one: Which do you save? I think it’s quite obvious you save the person who doesn’t have the plague.

But that only relies upon duration, which wasn’t so difficult. All right, fine; say the plague doesn’t kill you. Instead, it renders you paralyzed and in constant pain for the rest of your life. Is it really that far-fetched to say that we should save the person who won’t have that experience?

We really shouldn’t think of it as valuing people; we should think of it as valuing actions. QALY are a way of deciding which actions we should take, not which people are more important or more worthy. “Is a person who can walk worth more than a person who needs a wheelchair?” is a fundamentally bizarre and ultimately useless question. ‘Worth more’ in what sense? “Should we spend $100 million developing this technology that will allow people who use wheelchairs to walk?” is the question we should be asking. The QALY cost we assign to a condition isn’t about how much people with that condition are worth; it’s about what resources we should be willing to commit in order to treat that condition. If you have a given condition, you should want us to assign a high QALY cost to it, to motivate us to find better treatments.

I think it’s also important to consider which individuals are having QALY added or subtracted. In last week’s post I talked about how some people read “the value of a statistical life is $5 million” to mean “it’s okay to kill someone as long as you profit at least $5 million”; but this doesn’t follow at all. We don’t say that it’s all right to steal $1,000 from someone just because they lose $1,000 and you gain $1,000. We wouldn’t say it was all right if you had a better investment strategy and would end up with $1,100 afterward. We probably wouldn’t even say it was all right if you were much poorer and desperate for the money (though then we might at least be tempted). If a billionaire kills people to make $10 million each (sadly I’m quite sure that oil executives have killed for far less), that’s still killing people. And in fact since he is a billionaire, his marginal utility of wealth is so low that his value of a statistical life isn’t $5 million; it’s got to be in the billions. So the net happiness of the world has not increased, in fact.

Above all, it’s vital to appreciate the benefits of doing good cost-benefit analysis. Cost-benefit analysis tells us to stop fighting wars. It tells us to focus our spending on medical research and foreign aid instead of yet more corporate subsidies or aircraft carriers. It tells us how to allocate our public health resources so as to save the most lives. It emphasizes how vital our environmental regulations are in making our lives better and longer.

Could we do all these things without QALY? Maybe—but I suspect we would not do them as well, and when millions of lives are on the line, “not as well” is thousands of innocent people dead. Sometimes we really are faced with two choices for a public health intervention, and we need to decide which one will help the most people. Sometimes we really do have to set a pollution target, and decide just what amount of risk is worth accepting for the economic benefits of industry. These are very difficult questions, and without good cost-benefit analysis we could get the answers dangerously wrong.

How much should we value statistical lives?

June 9 JDN 2458644

The very concept of putting a dollar value on a human life offends most people. I understand why: It suggests that human lives are fungible, and also seems to imply that killing people is just fine as long as it produces sufficient profit.

In next week’s post I’ll try to assuage some of those fears: Saying that a life is worth, say, $5 million doesn’t actually mean that it’s justifiable to kill someone as long as it pays you $5 million.

But for now let me say that we really have no choice but to do this. There are a huge number of interventions we could make in the world that all have the same basic form: They could save lives, but they cost money. We need to be able to say when we are justified in spending more money to save more lives, and when we are not.

No, it simply won’t do to say that “money is no object”. Because money isn’t just money—money is human happiness. A willingness to spend unlimited amounts to save even a single life, if it could be coherently implemented at all, would result in, if not complete chaos or deadlock, a joyless, empty world where we all live to be 100 by being contained in protective foam and fed by machines. It may be uncomfortable to ask a question like “How many people should we be willing to let die to let ourselves have Disneyland?”; but if that answer were zero, we should not have Disneyland. The same is true for almost everything in our lives: From automobiles to chocolate, almost any product you buy, any service you consume, has resulted in some person’s death at some point.

And there is an even more urgent reason, in fact: There are many things we are currently not doing that could save many lives for very little money. Targeted foreign aid or donations to top charities could save lives for as little as $1000 each. Foreign aid is so cost-effective that even if the only thing foreign aid had ever accomplished was curing smallpox, it would be twice as cost-effective as the UK National Health Service (which is one of the best healthcare systems in the world). Tighter environmental regulations save an additional life for about $200,000 in compliance cost, which is less than we would have spent in health care costs; the Clean Air Act added about $12 trillion to the US economy over the last 30 years.

Reduced military spending could literally pay us money to save people’s lives—based on the cost of the Afghanistan War, we are currently paying as much as $1 million per person to kill people that we really have very little reason to kill.

Most of the lives we could save are statistical lives: We can’t point to a particular individual who will or will not die because of the decision, but we can do the math and say approximately how many people will or will not die. We know that approximately 11,000 people will die each year if we loosen regulations on mercury pollution; we can’t say who they are, but they’re out there. Human beings have a lot of trouble thinking this way; it’s just not how our brains evolved to work. But when we’re talking about policy on a national or global scale, it’s quite simply the only way to do things. Anything else is talking nonsense.

Standard estimates of the value of a statistical life range from about $4 million to $9 million. These estimates are based on how much people are willing to pay for reductions in risk. So for instance if people would pay $100 to reduce their chances of dying by 0.01%, we divide the former by the latter to say that a life is worth about $1 million.

It’s a weird question: You clearly can’t just multiply like that. How much would you be willing to accept for a 100% chance of death? Presumably there isn’t really such an amount, because you would be dead. So your willingness-to-accept is undefined. And there’s no particular reason for it to be linear below that: Since marginal utility of wealth is decreasing, the amount you would demand for a 50% chance of death is a lot more than 50 times as much as what you would demand for a 1% chance of death.

Say for instance that utility of wealth is logarithmic. Say your current lifetime wealth is $1 million, and your current utility is about 70 QALY. Then if we measure wealth in thousands of dollars, we have W = 1000 and U = 10 ln W.

How much would you be willing to accept for a 1% chance of death? Your utility when dead is presumably zero, so we are asking for an amount m such that 0.99 U(W+m) = U(W). 0.99 (10 ln (W+m)) = 10 ln (W) means (W+m)^0.99 = W, so m = W^(1/0.99) – W. We started with W = 1000, so m = 72. You would be willing to accept $72,000 for a 1% chance of death. So we would estimate the value of a statistical life at $7.2 million.

How much for a 0.0001% chance of death? W^(1/0.999999)-W = 0.0069. So you would demand $6.90 for such a risk, and we’d estimate your value of a statistical life at $6.9 million. Pretty close, though not the same.

But how much would you be willing to accept for a 50% chance of death? W^(1/0.5) – W = 999,000. That is, $999 million. So if we multiplied that out, we’d say that your value of a statistical life has now risen to a staggering (and ridiculous) $2 billion.
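
Since this arithmetic is easy to get wrong, here is a minimal sketch in Python of the same calculation, assuming the logarithmic utility U = 10 ln W from the example (with W in thousands of dollars); the function name is mine, purely illustrative. It reproduces the $6.90, $72,000, and $999 million figures and the implied values of a statistical life.

```python
W = 1000  # lifetime wealth in thousands of dollars, as in the example above

def willingness_to_accept(p, wealth=W):
    """Payment m (in thousands) such that (1 - p) * 10 * ln(wealth + m) = 10 * ln(wealth)."""
    return wealth ** (1 / (1 - p)) - wealth

for p in (0.000001, 0.01, 0.5):
    m = willingness_to_accept(p)
    vsl = m * 1000 / p  # convert m to dollars, then divide by the probability of death
    print(f"p = {p}: accept ${m * 1000:,.2f}, implied VSL ${vsl:,.0f}")
```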

Mathematically, the estimates are more consistent if we use small probabilities—but all this assumes that people actually know their own utility of wealth and calculate it correctly, which is a very unreasonable assumption.

The much bigger problem with this method is that human beings are terrible at dealing with small probabilities. When asked how much they’d be willing to pay to reduce their chances of dying by 0.01%, most people probably have absolutely no idea and may literally just say a random number.

We need to rethink our entire approach for judging such numbers. Honestly we shouldn’t be trying to put a dollar value on a human life; we should be asking about the dollar cost of saving a human life. We should be asking what else we could do with that money. Indeed, for the time being, I think the best thing to do is actually to compare lives to lives: How many lives could we save for this amount of money?

Thus, if we’re considering starting a war that will cost $1 trillion, we need to ask ourselves: How many innocent people would die if we don’t do that? How many will die if we do? And what else could we do with a trillion dollars? If the war is against Nazi Germany, okay, sure; we’re talking about killing millions to save tens of millions. But if it’s against ISIS, or Iran, those numbers don’t come out so great.

If we have a choice between two policies, each of which will cost $10 billion, and one of them will save 1,000 lives while the other will save 100,000, the obvious answer is to pick the second one. Yet this is exactly the world we live in, and we’re not doing that. We are throwing money at military spending and tax cuts (things that may not save any lives at all) and denying it from climate change adaptation, foreign aid, and poverty relief.

Instead of asking whether a given intervention is cost-effective based upon some notion of a dollar value of a human life, we should be asking what the current cost of saving a human life is, and we should devote all available resources into whatever means saves the most lives for the least money. Most likely that means some sort of foreign aid, public health intervention, or poverty relief in Third World countries. It clearly does not mean cutting taxes on billionaires or starting another war in the Middle East.

The upsides of life extension

Dec 16 JDN 2458469

If living is good, then living longer is better.

This may seem rather obvious, but it’s something we often lose sight of when discussing the consequences of medical technology for extending life. It’s almost like it seems too obvious that living longer must be better, and so we go out of our way to find ways that it is actually worse.

Even from a quick search I was able to find half a dozen popular media articles about life extension, and not one of them focused primarily on the benefits. The empirical literature is better, asking specific, empirically testable questions like “How does life expectancy relate to retirement age?” and “How is lifespan related to population and income growth?” and “What effect will longer lifespans have on pension systems?” Though even there I found essays in medical journals complaining that we have extended “quantity” of life without “quality” (yet by definition, if you are using QALY to assess the cost-effectiveness of a medical intervention, that’s already taken into account).

But still I think somewhere along the way we have forgotten just how good this is. We may not even be able to imagine the benefits of extending people’s lives to 200 or 500 or 1000 years.

To really get some perspective on this, I want you to imagine what a similar conversation must have looked like in roughly the year 1800, at the dawn of the Industrial Revolution, when industrial capitalism came along and finally made babies stop dying.

There was no mass media back then (not enough literacy), but imagine what it would have been like if there had been, or imagine what conversations about the future between elites must have been like.

And we do actually have at least one example of an elite author lamenting the increase in lifespan: His name was Thomas Malthus.

The Malthusian argument was seductive then, and it remains seductive today: If you improve medicine and food production, you will increase population. But if you increase population, you will eventually outstrip those gains in medicine and food and return once more to disease and starvation, only now with more mouths to feed.

Basically any modern discussion of “overpopulation” has this same flavor (by the way, serious environmentalists don’t use that concept; they’re focused on reducing pollution and carbon emissions, not people). Why bother helping poor countries, when they’re just going to double their population and need twice the help?

Well, as a matter of fact, Malthus was wrong. In fact, he was not just wrong: He was backwards. Increased population has come with increased standard of living around the world, as it allowed for more trade, greater specialization, and the application of economies of scale. You can’t build a retail market with a hunter-gatherer tribe. You can’t build an auto industry with a single city-state. You can’t build a space program with a population of 1 million. Having more people has allowed each person to do and have more than they could before.

Current population projections suggest world population will stabilize between 11 and 12 billion. Crucially, this does not factor in any kind of radical life extension technology. The projections allow for moderate increases in lifespan, but not people living much past 100.

Would increased lifespan lead to increased population? Probably, yes. I can’t be certain, because I can very easily imagine people deciding to put off having kids if they can reasonably expect to live 200 years and never become infertile.

I’m actually more worried about the unequal distribution of offspring: People who don’t believe in contraception will be able to have an awful lot of kids during that time, which could be bad for both the kids and society as a whole. We may need to impose regulations on reproduction similar to (but hopefully less draconian than) the One-Child policy imposed in China.

I think the most sensible way to impose the right incentives while still preserving civil liberties is to make it a tax: The first kid gets a subsidy, to help care for them. The second kid is revenue-neutral; we tax you but you get it back as benefits for the child. (Why not just let them keep the money? One of the few places where I think government paternalism is justifiable is protection against abusive or neglectful parents.) The third and later kids result in progressively higher taxes. We always feed the kids on government money, but their parents are going to end up quite poor if they don’t learn how to use contraceptives. (And of course, contraceptives will be made available for free without a prescription.)

But suppose that, yes, population does greatly increase as a result of longer lifespans. This is not a doomsday scenario. In fact, in itself, this is a good thing. If life is worth living, more lives are better.

The question becomes how we ensure that all these people live good lives; but technology will make that easier too. There seems to be an underlying assumption that increased lifespan won’t come with improved health and vitality; but this is already not true. 60 is the new 50: People who are 60 years old today live as well as people who were 50 years old just a generation ago.

And in fact, radical life extension will be an entirely different mechanism. We’re not talking about replacing a hip here, a kidney there; we’re talking about replenishing your chromosomal telomeres, repairing your cells at the molecular level, and revitalizing the content of your blood. The goal of life extension technology isn’t to make you technically alive but hooked up to machines for 200 years; it’s to make you young again for 200 years. The goal is a world where centenarians are playing tennis with young adults fresh out of college and you have trouble telling which is which.

There is another inequality concern here as well, which is cost. Especially in the US—actually almost only in the US, since most of the world has socialized medicine—where medicine is privatized and depends on your personal budget, I can easily imagine a world where the rich live to 200 and the poor die at 60. (The forgettable Justin Timberlake film In Time started with this excellent premise and then went precisely nowhere with it. Oddly, the Deus Ex games seem to have considered every consequence of mixing capitalism with human augmentation except this one.) We should be proactively taking steps to prevent this nightmare scenario by focusing on making healthcare provision equitable and universal. Even if this slows down the development of the technology a little bit, it’ll be worth it to make sure that when it does arrive, it will arrive for everyone.

We really don’t know what the world will look like when people can live 200 years or more. Yes, there will be challenges that come from the transition; honestly I’m most worried about people keeping alive the ideas they grew up with two centuries earlier. Imagine talking politics with Abraham Lincoln: He was viewed as extremely progressive for his time, even radical—but he was still a big-time racist.

The good news there is that people are not actually as set in their ways as many believe: While the huge surge in pro-LGBT attitudes did come from younger generations, support for LGBT rights has been gradually creeping up among older generations too. Perhaps if Abraham Lincoln had lived through the Great Depression, the World Wars, and the Civil Rights Movement he’d be a very different person than he was in 1865. Longer lifespans will mean people live through more social change; that’s something we’re going to need to cope with.

And of course violent death becomes even more terrifying when aging is out of the picture: It’s tragic enough when a 20-year-old dies in a car accident today and we imagine the 60 years they lost—but what if it was 180 years or 480 years instead? But violent death in basically all its forms is declining around the world.

But again, I really want to emphasize this: Think about how good this is. Imagine meeting your great-grandmother—and not just meeting her, not just having some fleeting contact you half-remember from when you were four years old or something, but getting to know her, talking with her as an adult, going to the same movies, reading the same books. Imagine the converse: Knowing your great-grandchildren, watching them grow up and have kids of their own, your great-great-grandchildren. Imagine the world that we could build if people stopped dying all the time.

And if that doesn’t convince you, I highly recommend Nick Bostrom’s “Fable of the Dragon-Tyrant”.

Stop making excuses for the dragon.

What do we mean by “obesity”?

Nov 25 JDN 2458448

I thought this topic would be particularly appropriate for the week of Thanksgiving, since as a matter of public ritual, this time every year, we eat too much and don’t get enough exercise.

No doubt you have heard the term “obesity epidemic”: It’s not just used by WebMD or mainstream news; it’s also used by the American Heart Association, the Centers for Disease Control and Prevention, the World Health Organization, and sometimes even published in peer-reviewed journal articles.

This is kind of weird, because the formal meaning of the term “epidemic” clearly does not apply here. I feel uncomfortable going against public health officials in what is clearly their area of expertise rather than my own, but everything I’ve ever read about the official definition of the word “epidemic” requires it to be an infectious disease. You can’t “catch” obesity. Hanging out with people who are obese may slightly raise your risk of obesity, but not in the way that hanging out with people with influenza gives you influenza. It’s not caused by bacteria or viruses. Eating food touched by a fat person won’t cause you to catch the fat. Therefore, whatever else it is, this is not an epidemic. (I guess sometimes we use the term more metaphorically, “an epidemic of bankruptcies” or an “epidemic of video game consumption”; but I feel like the WHO and CDC of all people should be more careful.)

Indeed, before we decide what exactly this is, I think we should first ask ourselves a deeper question: What do we mean by “obesity”?

The standard definition of “obesity” relies upon the body mass index (BMI), a very crude measure that simply takes your body mass and divides by the square of your height. It’s easy to measure, but that’s basically its only redeeming quality.

Anyone who has studied dimensional analysis should immediately see a problem here: That isn’t a unit of density. It’s a unit of… density-length? If you take the exact same individual and scale them up by 10%, their BMI will increase by 10%. Do we really intend to say that simply being larger makes you obese, for the exact same ratios of muscle, fat, and bone?

Because of this, the taller you are, the more likely your BMI is to register as “obese”, holding constant your actual level of health and fitness. And worldwide, average height has been increasing. This isn’t enough to account for the entire trend in rising BMI, but it accounts for a substantial share of it: average height has increased by about 10% since the 1950s, which is enough to raise our average BMI by about 2 points of the 5-point observed increase.
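
To see the scaling problem concretely, here is a tiny sketch (my own illustration, using nothing beyond mass over height squared): scale the same body shape up by 10%, so mass grows with the cube of the scale factor, and BMI comes out 10% higher even though nothing about the person’s composition has changed.

```python
def bmi(mass_kg, height_m):
    return mass_kg / height_m ** 2

mass, height = 70.0, 1.70          # a reference person
k = 1.10                           # scale the same proportions up by 10%
scaled_mass, scaled_height = mass * k ** 3, height * k

print(round(bmi(mass, height), 1))                 # 24.2
print(round(bmi(scaled_mass, scaled_height), 1))   # 26.6, i.e. 10% higher for the same shape
```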

And of course BMI doesn’t say anything about your actual ratios of fat and muscle; all it says is how many total kilograms are in your body. As a result, there is a systematic bias against athletes in the calculation of BMI—and any health measure that is biased against athletes is clearly doing something wrong. All those doctors telling us to exercise more may not realize it, but if we actually took their advice, our BMIs would very likely get higher, not lower—especially for men, especially for strength-building exercise.

It’s also quite clear that our standards for “healthy weight” are distorted by social norms. Feminists have been talking about this for years; most women will never look like supermodels no matter how much weight they lose—and eating disorders are much more dangerous than being even 50 pounds overweight. We’re starting to figure out that similar principles hold for men: A six-pack of abs doesn’t actually mean you’re healthy; it means you are dangerously depleted of fatty acids.

To compensate for this, it seems like the most sensible methodology would be to figure out empirically what sort of weight is most strongly correlated with good health and long lifespan—what BMI maximizes your expected QALY.

You might think that this is what public health officials did when defining what is currently categorized as “normal weight”—but you would be wrong. They used social norms and general intuition, and as a result, our standards for “normal weight” are systematically miscalibrated.

In fact, the empirical evidence is quite clear: The people with the highest expected QALY are those who are classified as “overweight”, with BMI between 25 and 30. Those of “normal weight” (20 to 25) fare slightly worse, followed by those classified as “obese class I” (30 to 35)—but we don’t actually see large effects until either “underweight” (18.5-20) or “obese class II” (35 to 40). And the really severe drops in life and health expectancy don’t happen until “obese class III” (>40); and we see the same severe drops at “very underweight” (<18.5).

With that in mind, consider that the global average BMI increased from 21.7 in men and 21.4 in women in 1975 to 24.2 in men and 24.4 in women in 2014. That is, the world average increased from the low end of “normal weight”, which is actually too light, to the high end of “normal weight”, which is probably optimal. The global prevalence of “morbid obesity”, the kind that actually has severely detrimental effects on health, is only 0.64% in men and 1.6% in women. Even including “severe obesity”, the kind that has a noticeable but not dramatic effect on health, the prevalence is only 2.3% in men and 5.0% in women. That’s your epidemic? Reporting often says things like “2/3 of American adults are overweight or obese”; but the “overweight” portion of that figure should be disregarded entirely, since it is beneficial to health. The actual prevalence of obesity in the US—even including class I obesity, which is not very harmful—is less than 40%.

If obesity were the health crisis it is made out to be, we should expect global life expectancy to be decreasing, or at the very least not increasing. On the contrary, it is rapidly increasing: In 1955, global life expectancy was only 55 years, while it is now over 70.

Worldwide, the countries with the highest obesity rates are those with the longest life expectancy, because both of these things are strongly correlated with high levels of economic development. But it may not just be that: Smoking reduces obesity while also reducing lifespan, and a lot of those countries with very high obesity (including the US) have very low rates of smoking.

There’s some evidence that within the set of rich, highly-developed countries, obesity rates are associated with lower life expectancy, but these effects are much smaller than the effects of high development itself. Going from the highest obesity rate among highly-developed countries (the US, of course) to the lowest (Japan) requires reducing the obesity rate by 34 percentage points but only increases life expectancy by about 5 years. You’d get the same increase by raising overall economic development from the level of Turkey to the level of Greece, about 10 points on the 100-point HDI scale.

Now, am I saying that we should all be 400 pounds? No, there does come a point where excess weight is clearly detrimental to health. But this threshold is considerably higher than you have probably been led to believe. If you are 15 or 20 pounds “overweight” by what our society (or even your doctor!) tells you, you are probably actually at the optimal weight for your body type. If you are 30 or 40 pounds “overweight”, you may want to try to lose some weight, but don’t make yourself suffer to achieve it. Only if you are 50 pounds or more “overweight” should you really be considering drastic action. If you do try to lose weight, be realistic about your goal: Losing 5% to 10% of your initial weight is a roaring success.

There are also reasons to be particularly concerned about obesity and lack of exercise in children, which is why Michelle Obama’s “Let’s Move!” campaign was a good thing.

And yes, exercise more! Don’t do it to try to lose weight (exercise does not actually cause much weight loss). Just do it. Exercise has so many health benefits it’s honestly kind of ridiculous.

But why am I complaining about this, anyway? Even if we cause some people to worry about their eating a bit more than is strictly necessary, what’s the harm in that? At least we’re getting people to exercise, and Thanksgiving was already ruined by politics anyway.

Well, here’s the thing: I don’t think this obesity panic is actually making us any less obese.

The United States has the highest obesity rate of any large, highly-developed country—and you can’t so much as call up Facebook or step into a subway car in the US without someone telling you that you’re too fat and you need to lose weight. The people who really are obese and may need medical help losing weight are the ones most likely to be publicly shamed and harassed for their weight—and there’s no evidence that this actually does anything to reduce their weight. People who experience shaming and harassment for their weight are actually less likely to achieve sustained weight loss.

Teenagers—both boys and girls—who are perceived to be “overweight” are at substantially elevated risk of depression and suicide. People who more fully internalize feelings of shame about their weight have higher blood pressure and higher triglycerides, though once you control for other factors the effect is not huge. There’s even evidence that fat shaming by medical professionals leads to worse treatment outcomes among obese patients.

If we want to actually reduce obesity—and this makes sense, at least for the upper-tail obesity of BMI above 35—then we should be looking at what sort of interventions are actually effective at doing that. Medicine has an important role to play of course, but I actually think economics might be stronger here (though I suppose I would, wouldn’t I?).

Number 1: Stop subsidizing meat and feed grains. There is now quite clear evidence that direct and indirect government subsidies for meat production are a contributing factor in our high fat consumption and thus high obesity rate, though obviously other factors matter too. If you’re worried about farmers, subsidize vegetables instead, or pay for active labor market programs that will train those farmers to work in new industries. This thing we do where we try to save the job instead of the worker is fundamentally idiotic and destructive. Jobs are supposed to be destroyed; that’s what technological improvement is. If you stop destroying jobs, you will stop economic growth.

Number 2: Restrict advertising of high-sugar, high-fat foods, especially to children. Food advertising is particularly effective, because it draws on such primal impulses, and children are particularly vulnerable (as the APA has publicly reported on, including specifically for food advertising). Corporations like McDonald’s and Kellogg’s know quite well what they’re doing when they advertise high-fat, high-sugar foods to kids and get them into the habit of eating them early.

Number 3: Find policies to promote exercise. Despite its small effects on weight loss, exercise has enormous effects on health. Indeed, the fact that people who successfully lose weight show long-term benefits even if they put the weight back on suggests to me that really what they gained was a habit of exercise. We need to find ways to integrate exercise into our daily lives more. The one big thing that our ancestors did do better than we do is constantly exercise—be it hunting, gathering, or farming. Standing desks and treadmill desks may seem weird, but there is evidence that they actually improve health. Right now they are quite expensive, so most people don’t buy them. If we subsidized them, they would be cheaper; if they were cheaper, more people would buy them; if more people bought them, they would seem less weird. Eventually, it could become normative to walk on a treadmill while you work and sitting might seem weird. Even a quite large subsidy could be worthwhile: say we had to spend $500 per person per year to buy every single adult a treadmill desk each year. That comes to about $80 billion per year, which is less than one fourth what we’re currently spending on diabetes or heart disease, so we’d break even if we simply managed to reduce those two conditions by 13%. Add in all the other benefits for depression, chronic pain, sleep, sexual function, and so on, and the quality of life improvement could be quite substantial.
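
As a rough back-of-the-envelope check on that break-even claim (a sketch only; the subsidy figure is from the paragraph above, while the annual US cost figures for diabetes and heart disease are my own ballpark assumptions, not numbers from this post):

```python
subsidy_total = 80e9         # ~$80 billion per year in treadmill desk subsidies, as estimated above
diabetes_cost = 330e9        # assumed annual US cost of diabetes (rough ballpark)
heart_disease_cost = 350e9   # assumed annual US cost of heart disease (rough ballpark)

break_even = subsidy_total / (diabetes_cost + heart_disease_cost)
print(f"Break-even reduction in those two conditions: {break_even:.0%}")  # roughly 12%
```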

The extreme efficiency of environmental regulation—and the extreme inefficiency of war

Apr 8 JDN 2458217

Insofar as there has been any coherent policy strategy for the Trump administration, it has largely involved three things:

  1. Increase investment in military, incarceration, and immigration enforcement
  2. Redistribute wealth from the poor and middle class to the rich
  3. Remove regulations that affect business, particularly environmental regulations

The human cost of such a policy strategy is difficult to overstate. Literally millions of people will die around the world if such policies continue. This is almost the exact opposite of what our government should be doing.

This is because military spending is one of the most wasteful and destructive forms of government investment, while environmental regulation is one of the most efficient and beneficial. The magnitude of these differences is staggering.

First of all, it is not clear that the majority of US military spending provides any marginal benefit. It could quite literally be zero. The US spends more on military than the next ten countries combined.

I think it’s quite reasonable to say that the additional defense benefit becomes negligible once you exceed the sum of spending from all plausible enemies. China, Russia, and Saudi Arabia together add up to about $350 billion per year. Current US spending is $610 billion per year. (And this calculation, by the way, requires them all to band together, while simultaneously all our NATO allies completely abandon us.) That means we could probably cut $260 billion per year without losing anything.

What about the remaining $350 billion? I could be extremely generous here, and assume that nuclear weapons, alliances, economic ties, and diplomacy all have absolutely no effect, so that without our military spending we would be invaded and immediately lose, and that if we did lose a war with China or Russia it would be utterly catastrophic and result in the deaths of 10% of the US population. Since in this hypothetical scenario we are only preventing the war by the barest margin, each year of spending only adds 1 year to the lives of the war’s potential victims. That means we are paying some $350 billion per year to add 1 year to the lives of 32 million people. That is a cost of about $11,000 per QALY. If it really is saving us from being invaded, that doesn’t sound all that unreasonable. And indeed, I don’t favor eliminating all military spending.

Of course, the marginal benefit of additional spending is still negligible—and UN peacekeeping is about twice as cost-effective as US military action, even if we had to foot the entire bill ourselves.

Alternatively, I could consider only the actual, documented results of our recent military action, which has resulted in over 280,000 deaths in Iraq and 110,000 in Afghanistan, all for little or no apparent gain. Life expectancy in these countries is about 70 in Iraq and 60 in Afghanistan. Quality of life there is pretty awful, but people are also greatly harmed by war without actually dying in it, so I think a fair conversion factor is about 60 QALY per death. That’s a loss of 23.4 MQALY. The cost of the Iraq War was about $1.1 trillion, while the cost of the Afghanistan War was about a further $1.1 trillion. This means that we paid $94,000 per lost QALY. If this is right, we paid enormous amounts to destroy lives and accomplished nothing at all.

Somewhere in between, we could assume that cutting the military budget greatly would result in the US being harmed in a manner similar to World War 2, which killed about 500,000 Americans. Paying $350 billion per year to gain 500,000 QALY per year is a price of $700,000 per QALY. I think this is about right; we are getting some benefit, but we are spending an enormous amount to get it.
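
Here is a minimal sketch restating those three cost-per-QALY estimates; all of the inputs are the figures given above.

```python
excess_spending = 350e9  # annual US military spending beyond the sum of plausible enemies' budgets

# Generous scenario: spending barely prevents a war that would kill 10% of the US population,
# so each year of spending adds one year of life for each of ~32 million potential victims.
generous = excess_spending / 32e6

# Actual record: Iraq and Afghanistan, ~390,000 deaths at ~60 QALY each, ~$2.2 trillion total.
actual = (1.1e12 + 1.1e12) / ((280_000 + 110_000) * 60)

# Intermediate scenario: preventing harm on the scale of World War 2 (~500,000 deaths per year of spending).
intermediate = excess_spending / 500_000

print(f"${generous:,.0f} / ${actual:,.0f} / ${intermediate:,.0f} per QALY")
# roughly $11,000, $94,000, and $700,000 per QALY
```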

Now let’s compare that to the cost-effectiveness of environmental regulation.

Since 1990, the total cost of implementing the regulations in the Clean Air Act was about $65 billion. That’s over 28 years, so less than $2.5 billion per year. Compare that to the $610 billion per year we spend on the military.

Yet the Clean Air Act saves over 160,000 lives every single year. And these aren’t lives extended one more year as they were in the hypothetical scenario where we are just barely preventing a catastrophic war; most of these people are old, but go on to live another 20 years or more. That means we are gaining 3.2 MQALY for a price of $2.5 billion. This is a price of only $800 per QALY.
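
And the corresponding sketch for the Clean Air Act figures just given:

```python
annual_cost = 2.5e9          # just under $2.5 billion per year in compliance costs since 1990
annual_qaly = 160_000 * 20   # lives saved per year, times roughly 20 remaining years each
print(f"${annual_cost / annual_qaly:,.0f} per QALY")  # about $780, i.e. roughly $800 per QALY
```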

From 1970 to 1990, the Clean Air Act cost more to implement: about $520 billion (so, you know, less than one year of military spending). But its estimated benefit was to save over 180,000 lives per year, and its estimated economic benefit was $22 trillion.

Look at those figures again, please. Even under very pessimistic assumptions where we would be on the verge of war if not for our enormous spending, we’re spending at least $11,000 and probably more like $700,000 on the military for each QALY gained. But environmental regulation only costs us about $800 per QALY. That’s a factor of at least 14 and more likely 1000. Environmental regulation is probably about one thousand times as cost-effective as military spending.

And I haven’t even included the fact that there is a direct substitution here: Climate change is predicted to trigger thousands if not millions of deaths due to military conflict. Even if national security were literally the only thing we cared about, it would probably still be more cost-effective to invest in carbon emission reduction rather than building yet another aircraft carrier. And if, like me, you think that a child who dies from asthma is just as important as one who gets bombed by China, then the cost-benefit analysis is absolutely overwhelming; every $60,000 spent on war instead of environmental protection is a statistical murder.

This is not even particularly controversial among economists. There is disagreement about specific environmental regulations, but the general benefits of fighting climate change and keeping air and water clean are universally acknowledged. There is disagreement about exactly how much military spending is necessary, but you’d be hard-pressed to find an economist who doesn’t think we could cut our military substantially with little or no risk to security.

In defense of slacktivism

Jan 22, JDN 2457776

It’s one of those awkward portmanteaus that people often make to try to express a concept in fewer syllables, while also implicitly saying that the phenomenon is specific enough to deserve its own word: “Slacktivism”, made of “slacker” and “activism”, not unlike “mansplain” is made of “man” and “explain” or “edutainment” was made of “education” and “entertainment”—or indeed “gerrymander” was made of “Elbridge Gerry” and “salamander”. The term seems to be particularly popular on Huffington Post, which has a whole category on slacktivism. There is a particular subcategory of slacktivism that is ironically against other slacktivism, which has been dubbed “snarktivism”.

It’s almost always used as a pejorative; very few people self-identify as “slacktivists” (though once I get through this post, you may see why I’m considering it myself). “Slacktivism” is activism that “isn’t real” somehow, activism that “doesn’t count”.

Of course, that raises the question: What “counts” as legitimate activism? Is it only protest marches and sit-ins? Then very few people have ever been or will ever be activists. Surely donations should count, at least? Those have a direct, measurable impact. What about calling your Congressman, or letter-writing campaigns? These have been staples of activism for decades.

If the term “slacktivism” means anything at all, it seems to point to activities surrounding raising awareness, where the goal is not to enact a particular policy or support a particular NGO but to simply get as much public attention to a topic as possible. It seems to be particularly targeted at blogging and social media—and that’s important, for reasons I’ll get to shortly. If you gather a group of people in your community and give a speech about LGBT rights, you’re an activist. If you send out the exact same speech on Facebook, you’re a slacktivist.

One of the arguments against “slacktivism” is that it can be used to funnel resources at the wrong things; this blog post makes a good point that the Kony 2012 campaign doesn’t appear to have actually accomplished anything except profits for the filmmakers behind it. (Then again: A blog post against slacktivism? Are you sure you’re not doing right now the thing you think you are against?) But is this problem unique to slacktivism, or is it a more general phenomenon that people simply aren’t all that informed about how to have the most impact? There are an awful lot of inefficient charities out there, and in fact the most important waste of charitable funds involves people giving to their local churches. Fortunately, this is changing, as people become more secularized; churches used to account for over half of US donations, and now they only account for less than a third. (Naturally, Christian organizations are pulling out their hair over this.) The 60 million Americans who voted for Trump made a horrible mistake and will cause enormous global damage; but they weren’t slacktivists, were they?

Studies do suggest that traditionally “slacktivist” activities like Facebook likes aren’t a very strong predictor of future, larger actions, and more private modes of support (like donations and calling your Congressman) tend to be stronger predictors. But so what? In order for slacktivism to be a bad thing, they would have to be a negative predictor. They would have to substitute for more effective activism, and there’s no evidence that this happens.

In fact, there’s even some evidence that slacktivism has a positive effect (normally I wouldn’t cite Fox News, but I think in this case we should expect a bias in the opposite direction, and you can read the full Georgetown study if you want):

A study from Georgetown University in November entitled “Dynamics of Cause Engagement” looked at how Americans learned about and interacted with causes and other social issues, and discovered some surprising findings on Slacktivism.

While the traditional forms of activism like donating money or volunteering far outpaces slacktivism, those who engage in social issues online are twice as likely as their traditional counterparts to volunteer and participate in events. In other words, slacktivists often graduate to full-blown activism.

At worst, most slacktivists are doing nothing for positive social change, and that’s what the vast majority of people have been doing for the entirety of human history. We can bemoan this fact, but that won’t change it. Most people are simply too uninformed to know what’s going on in the world, and too broke and too busy to do anything about it.

Indeed, slacktivism may be the one thing they can do—which is why I think it’s worth defending.

From an economist’s perspective, there’s something quite odd about how people’s objections to slacktivism are almost always formulated. The rational, sensible objection would be to their small benefits—this isn’t accomplishing enough, you should do something more effective. But in fact, almost all the objections to slacktivism I have ever read focus on their small costs—you’re not a “real activist” because you don’t make sacrifices like I do.

Yet it is a basic principle of economic rationality that, all other things equal, lower cost is better. Indeed, this is one of the few principles of economic rationality that I really do think is unassailable; perfect information is unrealistic and total selfishness makes no sense at all. But cost minimization is really very hard to argue with—why pay more, when you can pay less and get the same benefit?

From an economist’s perspective, the most important thing about an activity is its cost-effectiveness, measured either by net benefit—benefit minus cost—or rate of return—benefit divided by cost. But in both cases, a lower cost is always better; and in fact slacktivism has an astonishing rate of return, precisely because its cost is so small.

Suppose that a campaign of 10 million Facebook likes actually does have a 1% chance of changing a policy in a way that would save 10,000 lives, with a life expectancy of 50 years each. Surely this is conservative, right? I’m only giving it a 1% chance of success, on a policy with a relatively small impact (10,000 lives could be a single clause in an EPA regulatory standard), with a large number of slacktivist participants (10 million is more people than the entire population of Switzerland). Yet because clicking “like” and “share” only costs you maybe 10 seconds, we’re talking about an expected cost of (10 million)(10/86,400/365) = 3.2 QALY for an expected benefit of (10,000)(0.01)(50) = 5,000 QALY. That is a rate of return of roughly 160,000%—that’s a hundred and sixty thousand percent.

Let’s compare this to the rate of return on donating to a top charity like UNICEF, Oxfam, the Against Malaria Foundation, or the Schistosomiasis Control Initiative, for which donating about $300 would save the life of 1 child, adding about 50 QALY. That $300 most likely cost you about 0.01 QALY (assuming an annual income of $30,000), so we’re looking at a return of 500,000%. Now, keep in mind that this is a huge rate of return, far beyond what you can ordinarily achieve, that donating $300 to UNICEF is probably one of the best things you could possibly be doing with that money—and yet slacktivism still comes within a factor of a few of that efficiency. Maybe slacktivism doesn’t sound so bad after all?
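
Here is a minimal sketch of both of those rate-of-return calculations, under the same assumptions as above (ten seconds per like and share, and a QALY of cost equated with a year of time or a year’s income):

```python
SECONDS_PER_YEAR = 86_400 * 365

# Slacktivism: 10 million likes at ~10 seconds each, a 1% chance of saving 10,000 lives x 50 years.
slack_cost = 10_000_000 * 10 / SECONDS_PER_YEAR   # ~3.2 QALY of collective time
slack_benefit = 10_000 * 0.01 * 50                # 5,000 expected QALY
print(f"Slacktivism: {100 * slack_benefit / slack_cost:,.0f}%")   # roughly 160,000%

# Donation: $300 saves a child (~50 QALY), costing ~0.01 QALY at a $30,000 annual income.
donation_cost = 300 / 30_000
donation_benefit = 50
print(f"Donation: {100 * donation_benefit / donation_cost:,.0f}%")  # 500,000%
```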

Of course, the net benefit of your participation is higher in the case of donation; you yourself contribute 50 QALY instead of only contributing 0.0005 QALY. Ultimately net benefit is what matters; rate of return is a way of estimating what the net benefit would be when comparing different ways of spending the same amount of time or money. But from the figures I just calculated, it begins to seem like clicking “like” and “share” on Facebook posts that will raise awareness of policies of global importance is among the most cost-effective things you could do with your time. Now, you have to include all that extra time spent poring through other Facebook posts, and consider that you may not be qualified to assess the most important issues, and there’s a lot of uncertainty involved in what sort of impact you yourself will have… but it’s almost certainly not the worst thing you could be doing with your time, and frankly running these numbers has made me feel a lot better about all the hours I have actually spent doing this sort of thing. It’s a small benefit, yes—but it’s an even smaller cost.

Indeed, the fact that so many people treat low cost as bad, when it is almost by definition good, and the fact that they also target their ire so heavily at blogging and social media, says to me that what they are really trying to accomplish here has nothing to do with actually helping people in the most efficient way possible.

Rather, it’s two things.

The obvious one is generational—it’s yet another chorus in the unending refrain that is “kids these days”. Facebook is new, therefore it is suspicious. Adults have been complaining about their descendants since time immemorial; some of the oldest written works we have are of ancient Babylonians complaining that their kids are lazy and selfish. Either human beings have been getting lazier and more selfish for thousands of years, or, you know, kids are always a bit more lazy and selfish than their parents or at least seem so from afar.

The one that’s more interesting for an economist is signaling. By complaining that other people aren’t paying enough cost for something, what you’re really doing is complaining that they aren’t signaling like you are. The costly signal has been made too cheap, so now it’s no good as a signal anymore.

“Anyone can click a button!” you say. Yes, and? Isn’t it wonderful that now anyone with a smartphone (and there are more people with access to smartphones than toilets, because #WeLiveInTheFuture) can contribute, at least in some small way, to improving the world? But if anyone can do it, then you can’t signal your status by doing it. If your goal was to make yourself look better, I can see why this would bother you; all these other people doing things that look just as good as what you do! How will you ever distinguish yourself from the riffraff now?

This is also likely what’s going on as people fret that “a college degree’s not worth anything anymore” because so many people are getting them now; well, as a signal, maybe not. But if it’s just a signal, why are we spending so much money on it? Surely we can find a more efficient way to rank people by their intellect. I thought it was supposed to be an education—in which case the meteoric rise in global college enrollments should be cause for celebration. (In reality of course a college degree can serve both roles, and it remains an open question among labor economists as to which effect is stronger and by how much. But the signaling role is almost pure waste from the perspective of social welfare; we should be trying to maximize the proportion of real value added.)

For this reason, I think I’m actually prepared to call myself a slacktivist. I aim for cost-effective awareness-raising; I want to spread the best ideas to the most people for the lowest cost. Why, would you prefer I waste more effort, to signal my own righteousness?

Experimentally testing categorical prospect theory

Dec 4, JDN 2457727

In last week’s post I presented a new theory of probability judgments, which doesn’t rely upon people performing complicated math even subconsciously. Instead, I hypothesize that people try to assign categories to their subjective probabilities, and throw away all the information that wasn’t used to assign that category.

The way to most clearly distinguish this from cumulative prospect theory is to show discontinuity. Kahneman’s smooth, continuous function places fairly strong bounds on just how much a shift from 0% to 0.000001% can really affect your behavior. In particular, if you want to explain the fact that people do seem to behave differently around 10% compared to 1% probabilities, you can’t allow the slope of the smooth function to get much higher than 10 at any point, even near 0 and 1. (It does depend on the precise form of the function, but the more complicated you make it, the more free parameters you add to the model. In the most parsimonious form, which is a cubic polynomial, the maximum slope is actually much smaller than this—only 2.)

If that’s the case, then switching from 0% to 0.0001% should have no more effect in reality than a switch from 0% to 0.001% would to a rational expected utility optimizer. But in fact I think I can set up scenarios where it would have a larger effect than a switch from 0.001% to 0.01%.

Indeed, games that exploit exactly this difference are already quite profitable for the majority of US states, and they are called lotteries.

Rationally, it should make very little difference to you whether your odds of winning the Powerball are 0 (you bought no ticket) or 0.0000001% (you bought a ticket), even when the prize is $100 million. This is because your utility of $100 million is nowhere near 100 million times as large as your marginal utility of $1. A good guess would be that your lifetime income is about $2 million, your utility is logarithmic, the units of utility are hectoQALY, and the baseline level is about $100,000.

I apologize for the extremely large number of decimals, but I had to do that in order to show any difference at all. I have bolded where the decimals first deviate from the baseline.

Your utility if you don’t have a ticket is ln(20) = 2.9957322736 hQALY.

Your utility if you have a ticket is (1-10^-9) ln(20) + 10^-9 ln(1020) = 2.9957322775 hQALY.

You gain a whopping 0.4 microQALY over your whole lifetime. I highly doubt you could even perceive such a difference.

And yet, people are willing to pay nontrivial sums for the chance to play such lotteries. Powerball tickets sell for about $2 each, and some people buy tickets every week. If you do that and live to be 80, you will spend some $8,000 on lottery tickets during your lifetime, which results in this expected utility: (1-4*10^-6) ln(20-0.08) + 4*10^-6 ln(1020) = 2.9917399955 hQALY.
You have now sacrificed 0.004 hectoQALY, which is to say 0.4 QALY—that’s months of happiness you’ve given up to play this stupid pointless game.

Which shouldn’t be surprising, as (with 99.9996% probability) you have given up four months of your lifetime income with nothing to show for it. Lifetime income of $2 million / lifespan of 80 years = $25,000 per year; $8,000 / $25,000 = 0.32. You’ve actually sacrificed slightly more than this, which comes from your risk aversion.
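
Here is a small sketch reproducing those utility numbers under the stated assumptions (logarithmic utility in hectoQALY, $2 million lifetime wealth, a $100,000 baseline, and the 10^-9 per-ticket odds used above):

```python
import math

def utility(wealth):
    """Lifetime utility in hectoQALY: log of wealth relative to a $100,000 baseline."""
    return math.log(wealth / 100_000)

wealth, jackpot, p_win = 2e6, 100e6, 1e-9

no_ticket = utility(wealth)
one_ticket = (1 - p_win) * utility(wealth) + p_win * utility(wealth + jackpot)

# $2 tickets weekly for 80 years: ~$8,000 spent, ~4e-6 chance of winning at least once.
lifetime_player = (1 - 4e-6) * utility(wealth - 8_000) + 4e-6 * utility(wealth + jackpot)

print(no_ticket, one_ticket)                                   # matches 2.9957322736 vs 2.9957322775 above
print(f"{(no_ticket - lifetime_player) * 100:.2f} QALY lost")  # about 0.40 QALY
```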

Why would anyone do such a thing? Because while the difference between 0 and 10^-9 may be trivial, the difference between “impossible” and “almost impossible” feels enormous. “You can’t win if you don’t play!” they say, but they might as well say “You can’t win if you do play either.” Indeed, the probability of winning without playing isn’t zero; you could find a winning ticket lying on the ground, or win due to an error that is then upheld in court, or be given the winnings bequeathed by a dying family member or gifted by an anonymous donor. These are of course vanishingly unlikely—but so was winning in the first place. You’re talking about the difference between 10^-9 and 10^-12, which in proportional terms sounds like a lot—but in absolute terms is nothing. If you drive to a drug store every week to buy a ticket, you are more likely to die in a car accident on the way to the drug store than you are to win the lottery.

Of course, these are not experimental conditions. So I need to devise a similar game, with smaller stakes but still large enough for people’s brains to care about the “almost impossible” category; maybe thousands? It’s not uncommon for an economics experiment to cost thousands, it’s just usually paid out to many people instead of randomly to one person or nobody. Conducting the experiment in an underdeveloped country like India would also effectively amplify the amounts paid, but at the fixed cost of transporting the research team to India.

But I think in general terms the experiment could look something like this. You are given $20 for participating in the experiment (we treat it as already given to you, to maximize your loss aversion and endowment effect and thereby give us more bang for our buck). You then have a chance to play a game, where you pay $X to get a probability P of winning $Y*X, and we vary these numbers.

The actual participants wouldn’t see the variables, just the numbers and possibly the rules: “You can pay $2 for a 1% chance of winning $200. You can also play multiple times if you wish.” “You can pay $10 for a 5% chance of winning $250. You can only play once or not at all.”
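
To make the parameterization concrete, here is a trivial sketch of how one such trial could be represented in code; the class and field names are mine, purely illustrative.

```python
import random
from dataclasses import dataclass

@dataclass
class Gamble:
    price: float        # $X paid to play
    probability: float  # P, the chance of winning
    multiplier: float   # Y, so the prize is $Y * X

    def play(self, rng=random.random):
        """Simulate one play; returns the net change in the participant's money."""
        prize = self.multiplier * self.price if rng() < self.probability else 0.0
        return prize - self.price

# "Pay $2 for a 1% chance of winning $200" from the example above.
print(Gamble(price=2, probability=0.01, multiplier=100).play())
```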

So I think the first step is to find some dilemmas, cases where people feel ambivalent, and different people differ in their choices. That’s a good role for a pilot study.

Then we take these dilemmas and start varying their probabilities slightly.

In particular, we try to vary them at the edge of where people have mental categories. If subjective probability is continuous, a slight change in actual probability should never result in a large change in behavior, and furthermore the effect of a change shouldn’t vary too much depending on where the change starts.

But if subjective probability is categorical, these categories should have edges. Then, when I present you with two dilemmas that are on opposite sides of one of the edges, your behavior should radically shift; while if I change it in a different way, I can make a large change without changing the result.

Based solely on my own intuition, I guessed that the categories roughly follow this pattern:

Impossible: 0%

Almost impossible: 0.1%

Very unlikely: 1%

Unlikely: 10%

Fairly unlikely: 20%

Roughly even odds: 50%

Fairly likely: 80%

Likely: 90%

Very likely: 99%

Almost certain: 99.9%

Certain: 100%

So for example, if I switch from 0% to 0.01%, it should have a very large effect, because I’ve moved you out of your “impossible” category (indeed, I think the “impossible” category is almost completely sharp; literally anything above zero seems to be enough for most people, even 10^-9 or 10^-10). But if I move from 1% to 2%, it should have a small effect, because I’m still well within the “very unlikely” category. Yet the latter change is literally one hundred times larger than the former. It is possible to define continuous functions that would behave this way to an arbitrary level of approximation—but they get a lot less parsimonious very fast.
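
As a purely illustrative sketch of what such categorical judgment would mean computationally, here is the hypothesized mapping as a step function. The anchor values are just the guesses listed above, and the snapping rule (nearest anchor on a log-odds scale, with exactly 0 and exactly 1 treated as sharp) is my own assumption; pinning down the real edges is exactly what the proposed experiments are for.

```python
import math

# Hypothesized anchor points for each category, from the guesses listed above.
ANCHORS = {
    "almost impossible": 0.001,
    "very unlikely":     0.01,
    "unlikely":          0.10,
    "fairly unlikely":   0.20,
    "roughly even odds": 0.50,
    "fairly likely":     0.80,
    "likely":            0.90,
    "very likely":       0.99,
    "almost certain":    0.999,
}

def categorize(p):
    """Map an objective probability to a hypothesized category label.

    Only the endpoints are sharp: exactly 0 is "impossible" and exactly 1 is
    "certain"; anything in between snaps to the nearest anchor in log-odds.
    """
    if p <= 0:
        return "impossible"
    if p >= 1:
        return "certain"
    logit = lambda q: math.log(q / (1 - q))
    return min(ANCHORS, key=lambda name: abs(logit(ANCHORS[name]) - logit(p)))

print(categorize(0.0))    # impossible
print(categorize(1e-6))   # almost impossible: even one-in-a-million escapes "impossible"
print(categorize(0.01))   # very unlikely
print(categorize(0.02))   # very unlikely: doubling the probability changes nothing
```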

Now, immediately I run into a problem, because I’m not even sure those are my categories, much less that they are everyone else’s. If I knew precisely which categories to look for, I could tell whether or not I had found it. But the process of both finding the categories and determining if their edges are truly sharp is much more complicated, and requires a lot more statistical degrees of freedom to get beyond the noise.

One thing I’m considering is assigning these values as a prior, and then conducting a series of experiments which would adjust that prior. In effect I would be using optimal Bayesian probability reasoning to show that human beings do not use optimal Bayesian probability reasoning. Still, I think that actually pinning down the categories would require a large number of participants or a long series of experiments (in frequentist statistics this distinction is vital; in Bayesian statistics it is basically irrelevant—one of the simplest reasons to be Bayesian is that it no longer bothers you whether someone did 2 experiments of 100 people or 1 experiment of 200 people, provided they were the same experiment of course). And of course there’s always the possibility that my theory is totally off-base, and I find nothing; a dissertation replicating cumulative prospect theory is a lot less exciting (and, sadly, less publishable) than one refuting it.

Still, I think something like this is worth exploring. I highly doubt that people are doing very much math when they make most probabilistic judgments, and using categories would provide a very good way for people to make judgments usefully with no math at all.