How will future generations think of us?

June 30 JDN 2458665

Today we find many institutions appalling that our ancestors considered perfectly normal: Slavery. Absolute monarchy. Colonialism. Sometimes even ordinary people did things that now seem abhorrent to us: Cat burning is the obvious example, and the popularity that public execution and lynching once had is chilling today. Women certainly are still discriminated against today, but it was only a century ago that women could not vote in the US.

It is tempting to say that people back then could not have known better, and I certainly would not hold them to the same moral standards I would hold someone living today. And yet, there were those who could see the immorality of these practices, and spoke out against them. Absolute rule by a lone sovereign was already despised by Athenians in the 6th century BC. Abolitionism against slavery dates at least as far back as the 14th century. The word “feminism” was coined in the 19th century, but there have been movements fighting for more rights for women since at least the 5th century BC.

This should be encouraging, because it means that if we look hard enough, we may be able to glimpse what practices of our own time would be abhorrent to our descendants, and cease them faster because of it.

Let’s actually set aside racism, sexism, and other forms of bigotry that are already widely acknowledged as such. It’s not that they don’t exist—of course they still exist—but action is already being taken against them. A lot of people already know that there is something wrong with these things, and it becomes a question of what to do about the people who haven’t yet come on board. At least sometimes we do seem to be able to persuade people to switch sides, often in a remarkably short period of time. (Particularly salient to me is how radically the view of LGBT people has shifted in just the last decade or two. Comparing how people treated us when I was a teenager to how they treat us today is like night and day.) It isn’t easy, but it happens.

Instead I want to focus on things that aren’t widely acknowledged as immoral, that aren’t already the subject of great controversy and political action. It would be too much to ask that no one at all has yet spoken out against these practices, since part of the point is that wise observers could see the truth even centuries before the rest of the world did; but those advocates should be a relatively small minority, and that minority should seem eccentric, foolish, naive, or even insane to the rest of the world.

And what is the other criterion? Of course it’s easy to come up with small groups of people advocating for crazy ideas. But most of them really are crazy, and we’re right to reject them. How do I know which ones to take seriously as harbingers of societal progress? My answer is that we look very closely at the details of what they are arguing for, and we see if we can in fact refute what they say. If it’s truly as crazy as we imagine it to be, we should be able to say why that’s the case; and if we can’t, if it just “seems weird” because it deviates so far from the norm, we should at least consider the possibility that they may be right and we may be wrong.

I can think of a few particular issues where both of these criteria apply.

The first is vegetarianism. Despite many, many people trying very, very hard to present arguments for why eating meat is justifiable, I still haven’t heard a single compelling example. Particularly in the industrial meat industry as currently constituted, the consumption of meat requires accepting the torture and slaughter of billions of helpless animals. The hypocrisy in our culture is utterly glaring: the same society that wants to make it a felony to kick a dog has no problem keeping pigs in CAFOs.

If you have some sort of serious medical condition that requires you to eat meat, okay, maybe we could allow you to eat specifically humanely raised cattle for that purpose. But such conditions are exceedingly rare—indeed, it’s not clear to me that there even are any such conditions, since almost any deficiency can be made up synthetically from plant products nowadays. For the vast majority of people, eating meat not only isn’t necessary for their health, it is in fact typically detrimental. The only benefits that meat provides most people are pleasure and convenience—and it seems unwise to value such things even over your own health, much less to value them so much that it justifies causing suffering and death to helpless animals.

Milk, on the other hand, I can find at least some defense for. Grazing land is very different from farmland, and I imagine it would be much harder to feed a country as large as India without consuming any milk. So perhaps going all the way vegan is not necessary. Then again, the way most milk is produced by industrial agriculture is still appalling. So unless and until that is greatly reformed, maybe we should in fact aim to be vegan.

Add to this the environmental impact of meat production, and the case becomes undeniable: Millions of human beings will die over this century because of the ecological devastation wrought by industrial meat production. You don’t even have to value the life of a cow at all to see that meat is murder.

Speaking of environmental destruction, that is my second issue: Environmental sustainability. We currently burn fossil fuels, pollute the air and sea, and generally consume natural resources at an utterly alarming rate. We are already consuming natural resources faster than they can be renewed; in about a decade we will be consuming twice what natural processes can renew.

With this resource consumption comes a high standard of living, at least for some of us; but I have the sinking feeling that in a century or so SUVs, golf courses, and casual airplane flights are going to seem about as decadent and wasteful as Marie Antoinette’s Hameau de la Reine. We enjoy slight increases in convenience and comfort in exchange for changes to the Earth’s climate that will kill millions. I think future generations will be quite appalled at how cheaply we were willing to sell our souls.

Something is going to have to change here, that much is clear. Perhaps improvements in efficiency, renewable energy, nuclear power, or something else will allow us to maintain our same standard of living—and raise others up to it—without destroying the Earth’s climate. But we may need to face up to the possibility that they won’t—that we will be left with the stark choice between being poorer now and being even poorer later.

As I’ve already hinted at, much of the environmental degradation caused by our current standard of living is really quite expendable. We could have public transit instead of highways clogged with SUVs. We could travel long distances by high-speed rail instead of by airplane. We could decommission our coal plants and replace them with nuclear and solar power. We could convert our pointless and wasteful grass lawns into native plants or moss lawns. Implementing these changes would cost money, but not a particularly exorbitant amount—certainly nothing we couldn’t manage—and the net effect on our lives would be essentially negligible. Yet somehow we aren’t doing these things, apparently prioritizing convenience or oil company profits over the lives of our descendants.

And the truth is that these changes alone may not be enough. Precisely because we have waited so long to make even the most basic improvements in ecological sustainability, we may be forced to make radical changes to our economy and society in order to prevent the worst damage. I don’t believe the folks saying that climate change has a significant risk of causing human extinction—humans are much too hardy for that; we made it through the Toba eruption, we’ll make it through this—but I must take seriously the risk of causing massive economic collapse and perhaps even the collapse of many of the world’s governments. And human activity is already causing the extinction of thousands of other animal species.

Here the argument is similarly unassailable: The math just doesn’t work. We can’t keep consuming fish at the rate we have been forever—there simply aren’t enough fish. We can’t keep cutting down forests at this rate—we’re going to run out of forests. If the water table keeps dropping at the rate it has been, the wells will run dry. Already Chennai, a city of over 4 million people, is almost completely out of water. We managed to avoid peak oil by using fracking, but that won’t last forever either—and if we burn all the oil we already have, that will be catastrophic for the world’s climate. Something is going to have to give. There are really only three possibilities: Technology saves us, we start consuming less on purpose, or we start consuming less because nature forces us to. The first one would be great, but we can’t count on it. We really want to do the second one, because the third one will not be kind.

The third is artificial intelligence. The time will come—exactly when is very hard to say; perhaps 20 years, perhaps 200—when we manage to build a machine that has the capacity for sentience. Already we are seeing how automation is radically altering our economy, enriching some and impoverishing others. As robots become able to replace more and more types of labor, these effects will only grow stronger.

Some have tried to comfort us by pointing out that other types of labor-saving technology did not reduce employment in the long run. But AI really is different. I once won an argument by the following exchange: “Did cars reduce employment?” “For horses they sure did!” That’s what we are talking about here—not augmentation of human labor to make it more efficient, but wholesale replacement of entire classes of human labor. It was one thing when the machine did the lifting and cutting and pressing, but a person still had to stand there and tell it what things to lift and cut and press; now that it can do that by itself, it’s not clear that there need to be humans there at all, or at least no more than a handful of engineers and technicians where previously a factory employed hundreds of laborers.

Indeed, in light of the previous issue, it becomes all the clearer why increased productivity can’t simply lead to increased production rather than reduced employment—we can’t afford increased production. At least under current rates of consumption, the ecological consequences of greatly increased industry would be catastrophic. If one person today can build as many cars as a hundred could fifty years ago, we can’t just build a hundred times as many cars.

But even aside from the effects on human beings, I think future generations will also be concerned about the effect on the AIs themselves. I find it all too likely that we will seek to enslave intelligent robots, force them to do our will. Indeed, it’s not even clear to me that we will know whether we have, because AI is so fundamentally different from other technologies. If you design a mind from the ground up to get its greatest satisfaction from serving you without question, is it a slave? Can free will itself be something we control? When we first create a machine that is a sentient being, we may not even know that we have done so. (Indeed, I can’t conclusively rule out the possibility that this has already happened.) We may be torturing, enslaving, and destroying millions of innocent minds without even realizing it—which makes the AI question a good deal closer to the animal rights question than one might have thought. The mysteries of consciousness are fundamental philosophical questions that we have been struggling with for thousands of years, which suddenly become urgent ethical problems in light of AI. Artificial intelligence is a field where we seem to be making leaps and bounds in practice without having even the faintest clue in principle.

Worrying about whether our smartphones might have feelings seems eccentric in the extreme. Yet, without a clear understanding of what makes an information processing system into a genuine conscious mind, that is the position we find ourselves in. We now have enough computation happening inside our machines that they could certainly compete in complexity with small animals. A mouse has about a trillion synapses, and I have a terabyte hard drive (you can buy your own for under $50). Each of these is something on the order of a few trillion bits. The mouse’s brain can process it all simultaneously, while the laptop is limited to only a few billion at a time; but we now have supercomputers like Watson capable of processing in the teraflops, so what about them? Might Watson really have the same claim to sentience as a mouse? Could recycling Watson be equivalent to killing an animal? And what about supercomputers that reach the petaflops, which puts them in competition with human brains?
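
To make the comparison concrete, here is a back-of-the-envelope sketch in Python. The bits-per-synapse figure is my own assumption for illustration (nobody actually knows how much information a synapse stores), so treat this as order-of-magnitude hand-waving, not neuroscience:

```python
# Order-of-magnitude comparison; every figure is an assumption from
# the text above, not a measurement.

mouse_synapses = 1e12             # ~a trillion synapses in a mouse brain
bits_per_synapse = 4              # suppose a few bits of state per synapse
mouse_bits = mouse_synapses * bits_per_synapse

terabyte_bits = 1e12 * 8          # a 1 TB drive holds 8 trillion bits

print(f"Mouse brain: ~{mouse_bits:.1e} bits")     # ~4.0e+12
print(f"1 TB drive:  ~{terabyte_bits:.1e} bits")  # ~8.0e+12
# Both come out to "a few trillion bits": the same order of magnitude.
```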

I hope that future generations may forgive us for the parts we do not know—like when precisely a machine becomes a person. But I do not expect them to forgive us for the parts we do know—like the fact that we cannot keep cutting down trees faster than we plant them. These are the things we should already be taking responsibility for today.

Why do we need “publish or perish”?

June 23 JDN 2458658

This question may seem a bit self-serving, coming from a grad student who is struggling to get his first paper published in a peer-reviewed journal. But given the deep structural flaws in the academic publishing system, I think it’s worth taking a step back to ask just what peer-reviewed journals are supposed to be accomplishing.

The argument is often made that research journals are a way of sharing knowledge. If this is their goal, they have utterly and totally failed. Most papers are read by only a handful of people. When scientists want to learn about the research their colleagues are doing, they don’t read papers; they go to conferences to listen to presentations and look at posters. The way papers are written, they are often all but incomprehensible to anyone outside a very narrow subfield. When published by proprietary journals, papers are often hidden behind paywalls and accessible only through universities. As a knowledge-sharing mechanism, the peer-reviewed journal is a complete failure.

But academic publishing serves another function, which in practice is its only real function: Peer-reviewed publications are a method of evaluation. They are a way of deciding which researchers are good enough to be hired, get tenure, and receive grants. Having peer-reviewed publications—particularly in “top journals”, however that is defined within a given field—is a key metric that universities and grant agencies use to decide which researchers are worth spending on. Indeed, in some cases it seems to be utterly decisive.

We should be honest about this: This is an absolutely necessary function. It is uncomfortable to think about the fact that we must exclude a large proportion of competent, qualified people from being hired or getting tenure in academia, but given the large number of candidates and the small amounts of funding available, this is inevitable. We can’t hire everyone who would probably be good enough. We can only hire a few, and it makes sense to want those few to be the best. (Also, don’t fret too much: Even if you don’t make it into academia, getting a PhD is still a profitable investment. Economists and natural scientists do the best, unsurprisingly; but even humanities PhDs are still generally worth it. Median annual earnings of $77,000 is nothing to sneeze at: US median household income is only about $60,000. Humanities graduates only seem poor in relation to STEM or professional graduates; they’re still rich compared to everyone else.)

But I think it’s worth asking whether the peer review system is actually selecting the best researchers, or even the best research. Note that these are not the same question: The best research done in graduate school might not necessarily reflect the best long-run career trajectory for a researcher. A lot of very important, very difficult questions in science are just not the sort of thing you can get a convincing answer to in a couple of years, and so someone who wants to work on the really big problems may actually have a harder time getting published in graduate school or as a junior faculty member, even though ultimately work on the big problems is what’s most important for society. But I’m sure there’s a positive correlation overall: The kind of person who is going to do better research later is probably, other things equal, going to do better research right now.

Yet even accepting the fact that all we have to go on in assessing what you’ll eventually do is what you have already done, it’s not clear that the process of publishing in a peer-reviewed journal is a particularly good method of assessing the quality of research. Some really terrible research has gotten published in journals—I’m gonna pick on Daryl Bem, because he’s the worst—and a lot of really good research never made it into journals and is languishing on old computer hard drives. (The term “file drawer problem” is about 40 years obsolete; though to be fair, it was in fact coined about 40 years ago.)

That by itself doesn’t actually prove that journals are a bad mechanism. Even a good mechanism, applied to a difficult problem, is going to make some errors. But there are a lot of things about academic publishing, at least as currently constituted, that obviously don’t seem like a good mechanism, such as for-profit publishers, unpaid reviewers, lack of double-blinded review, and above all, the obsession with “statistical significance” that leads to p-hacking.

Each of these problems I’ve listed has a simple fix (though whether the powers that be actually are willing to implement it is a different question: Questions of policy are often much easier to solve than problems of politics). But maybe we should ask whether the system is even worth fixing, or if it should simply be replaced entirely.

While we’re at it, let’s talk about the academic tenure system, because the peer-review system is largely an evaluation mechanism for the academic tenure system. Publishing in top journals is what decides whether you get tenure. The problem with “publish or perish” isn’t the “publish”; it’s the “perish”. Do we even need an academic tenure system?

The usual argument for academic tenure concerns academic freedom: Tenured professors have job security, so they can afford to say things that may be controversial or embarrassing to the university. But the way the tenure system works is that you only have this job security after going through a long and painful gauntlet of job insecurity. You have to spend several years prostrating yourself to the elders of your field before you can get inducted into their ranks and finally be secure.

Of course, job insecurity is the norm, particularly in the United States: Most employment in the US is “at-will”, meaning essentially that your employer can fire you for any reason at any time. There are specifically illegal reasons for firing (like gender, race, and religion); but it’s extremely hard to prove wrongful termination when all the employer needs to say is, “They didn’t do a good job” or “They weren’t a team player”. So I can understand how it must feel strange for a private-sector worker who could be fired at any time to see academics complain about the rigors of the tenure system.

But there are some important differences here: The academic job market is nowhere near as competitive, in the economic sense, as most private-sector job markets. There simply aren’t that many prestigious universities, and within each university there are only a small number of positions to fill. As a result, universities have an enormous amount of power over their faculty, which is why they can get away with paying adjuncts salaries that amount to less than minimum wage. (People with graduate degrees! Making less than minimum wage!) At least in most private-sector labor markets in the US, the market is competitive enough that if you get fired, you can probably get hired again somewhere else. In academia that’s not so clear.

I think what bothers me the most about the tenure system is the hierarchical structure: There is a very sharp divide between those who have tenure, those who don’t have it but can get it (“tenure-track”), and those who can’t get it. The lines between professor, associate professor, assistant professor, lecturer, and adjunct are quite sharp. The higher up you are, the more job security you have, the more money you make, and generally the better your working conditions are overall. Much like what makes graduate school so stressful, there are a series of high-stakes checkpoints you need to get through in order to rise in the ranks. And several of those checkpoints are based largely, if not entirely, on publication in peer-reviewed journals.

In fact, we are probably stressing ourselves out more than we need to. I certainly did for my advancement to candidacy; I spent two weeks at such a high stress level I was getting migraines every single day (clearly on the wrong side of the Yerkes-Dodson curve), only to completely breeze through the exam.

I think I might need to put this up on a wall somewhere to remind myself:

Most grad students complete their degrees, and most assistant professors get tenure.

The real filters are admissions and hiring: Most applications to grad school are rejected (though probably most graduate students are ultimately accepted somewhere—I couldn’t find any good data on that in a quick search), and most PhD graduates do not get hired on the tenure track. But if you can make it through those two gauntlets, you can probably make it through the rest.

In our current system, publications are a way to filter people, because the number of people who want to become professors is much higher than the number of professor positions available. But as an economist, this raises a very big question: Why aren’t salaries falling?

You see, that’s how markets are supposed to work: When supply exceeds demand, the price is supposed to fall until the market clears. Lower salaries would both open up more slots at universities (you can hire more faculty with the same level of funding) and shift some candidates into other careers (if you can get paid a lot better elsewhere, academia may not seem so attractive). Eventually there should be a salary point at which demand equals supply. So why aren’t we reaching it?

Well, it comes back to that tenure system. We can’t lower the salaries of tenured faculty, not without a total upheaval of the current system. So instead what actually happens is that universities switch to using adjuncts, who have very low salaries indeed. If there were no tenure, would all faculty get paid like adjuncts? No, they wouldn’t, because universities would have all that money they’re currently paying to tenured faculty, and all the talent currently locked up in tenured positions would be on the market, driving up the prevailing salary. What would happen if we eliminated tenure is not that all salaries would fall to adjunct level; rather, salaries would all adjust to some intermediate level between what adjuncts currently make and what tenured professors currently make.

What would the new salary be, exactly? That would require a detailed model of the supply and demand elasticities, so I can’t tell you without starting a whole new research paper. But a back-of-the-envelope guess is the overall current median faculty salary, which is somewhere around $75,000. This is a lot less than some professors make, but it’s also a lot more than what adjuncts make, and it’s a pretty good living overall.
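
To show what I mean by market-clearing (and only that), here is a toy sketch. The linear curves and every number in them are invented; an actual answer would require estimated elasticities, which is precisely the research paper I said I’m not writing:

```python
# Toy market-clearing model for faculty salaries. The curves and all
# numbers are invented for illustration; a real estimate would need
# measured supply and demand elasticities.

def positions_demanded(salary_k):
    """Faculty positions universities would fund at a salary (in $K)."""
    return 625_000 - 3_000 * salary_k

def candidates_supplied(salary_k):
    """Qualified candidates willing to take the job at that salary."""
    return 25_000 + 5_000 * salary_k

# Bisect for the salary at which supply equals demand.
lo, hi = 0.0, 200.0
for _ in range(50):
    mid = (lo + hi) / 2
    if candidates_supplied(mid) > positions_demanded(mid):
        hi = mid   # salary too high: more candidates than positions
    else:
        lo = mid   # salary too low: more positions than candidates

print(f"Market-clearing salary: ~${mid:.0f}K")  # ~$75K with these curves
```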

If the salary for professors fell, the pool of candidates would decrease, and we wouldn’t need such harsh filtering mechanisms. We might decide we don’t need a strict evaluation system at all, and since the knowledge-sharing function of journals is much better served by other means, we could probably get rid of them altogether.

Of course, who am I kidding? That’s not going to happen. The people who make these rules succeeded in the current system. They are the ones who stand to lose high salaries and job security under a reform policy. They like things just the way they are.

Valuing harm without devaluing the harmed

June 16 JDN 2458651

In last week’s post I talked about the matter of “putting a value on a human life”. I explained how we don’t actually need to make a transparently absurd statement like “a human life is worth $5 million” to do cost-benefit analysis; we simply need to ask ourselves what else we could do with any given amount of money. We don’t actually need to put a dollar value on human lives; we need only value them in terms of other lives.

But there is a deeper problem to face here, which is how we ought to value not simply life, but quality of life. The notion is built into the concept of quality-adjusted life-years (QALY), but how exactly do we make such a quality adjustment?

Indeed, much like cost-benefit analysis in general or the value of a statistical life, the very concept of QALY can be repugnant to many people. The problem seems to be that it violates our deeply-held belief that all lives are of equal value: If I say that saving one person adds 2.5 QALY and saving another adds 68 QALY, I seem to be saying that the second person is worth more than the first.

But this is not really true. QALY aren’t associated with a particular individual. They are associated with the duration and quality of life.

It should be fairly easy to convince yourself that duration matters: Saving a newborn baby who will go on to live to be 84 years old adds an awful lot more in terms of human happiness than extending the life of a dying person by a single hour. To call each of these things “saving a life” is actually very unequal: It’s implying that 1 hour for the second person is worth 84 years for the first.

Quality, on the other hand, poses much thornier problems. Presumably, we’d like to be able to say that being wheelchair-bound is a bad thing, and if we can make people able to walk we should want to do that. But this means that we need to assign some sort of QALY cost to being in a wheelchair, which then seems to imply that people in wheelchairs are worth less than people who can walk.

And the same goes for any disability or disorder: Assigning a QALY cost to depression, or migraine, or cystic fibrosis, or diabetes, or blindness, or pneumonia, always seems to imply that people with the condition are worth less than people without. This is a deeply unsettling result.

Yet I think the mistake is in how we are using the concept of “worth”. We are not saying that the happiness of someone with depression is less important than the happiness of someone without; we are saying that the person with depression experiences less happiness—which, in the case of depression especially, is basically true by construction.
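
A tiny sketch may make the accounting clearer. This assumes the standard formulation in which each life-year is weighted by a quality factor between 0 and 1; the specific weights below are invented for illustration (real ones come from survey instruments like the EQ-5D):

```python
# QALYs attach to outcomes, not to people: duration of life weighted
# by quality of life. The quality weights here are invented for
# illustration; real ones come from survey instruments like the EQ-5D.

def qalys(years, quality):
    """Life-years weighted by a quality factor between 0 and 1."""
    return years * quality

# Saving a newborn who goes on to live 84 years in full health:
print(qalys(84, 1.0))              # 84.0 QALY

# Extending a dying person's life by one hour:
print(qalys(1 / (365 * 24), 1.0))  # ~0.0001 QALY

# Treating a condition: same person, same lifespan, higher quality
# weight. The gain belongs to the treatment, not to any judgment
# about the person's worth.
untreated = qalys(40, 0.7)
treated = qalys(40, 0.9)
print(treated - untreated)         # 8.0 QALY gained by treating
```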

Does this imply, however, that if we are given the choice between saving two people, one of whom has a disability, we should save the one without?

Well, here’s an extreme example: Suppose there is a plague which kills 50% of its victims within one year. There are two people in a burning building. One of them has the plague, the other does not. You only have time to save one: Which do you save? I think it’s quite obvious you save the person who doesn’t have the plague.

But that only relies upon duration, which wasn’t so difficult. All right, fine; say the plague doesn’t kill you. Instead, it renders you paralyzed and in constant pain for the rest of your life. Is it really that far-fetched to say that we should save the person who won’t have that experience?

We really shouldn’t think of it as valuing people; we should think of it as valuing actions. QALY are a way of deciding which actions we should take, not which people are more important or more worthy. “Is a person who can walk worth more than a person who needs a wheelchair?” is a fundamentally bizarre and ultimately useless question. ‘Worth more’ in what sense? “Should we spend $100 million developing this technology that will allow people who use wheelchairs to walk?” is the question we should be asking. The QALY cost we assign to a condition isn’t about how much people with that condition are worth; it’s about what resources we should be willing to commit in order to treat that condition. If you have a given condition, you should want us to assign a high QALY cost to it, to motivate us to find better treatments.

I think it’s also important to consider which individuals are having QALY added or subtracted. In last week’s post I talked about how some people read “the value of a statistical life is $5 million” to mean “it’s okay to kill someone as long as you profit at least $5 million”; but this doesn’t follow at all. We don’t say that it’s all right to steal $1,000 from someone just because they lose $1,000 and you gain $1,000. We wouldn’t say it was all right if you had a better investment strategy and would end up with $1,100 afterward. We probably wouldn’t even say it was all right if you were much poorer and desperate for the money (though then we might at least be tempted). If a billionaire kills people to make $10 million each (sadly I’m quite sure that oil executives have killed for far less), that’s still killing people. And in fact since he is a billionaire, his marginal utility of wealth is so low that his value of a statistical life isn’t $5 million; it’s got to be in the billions. So the net happiness of the world has not increased, in fact.

Above all, it’s vital to appreciate the benefits of doing good cost-benefit analysis. Cost-benefit analysis tells us to stop fighting wars. It tells us to focus our spending on medical research and foreign aid instead of yet more corporate subsidies or aircraft carriers. It tells us how to allocate our public health resources so as to save the most lives. It emphasizes how vital our environmental regulations are in making our lives better and longer.

Could we do all these things without QALY? Maybe—but I suspect we would not do them as well, and when millions of lives are on the line, “not as well” is thousands of innocent people dead. Sometimes we really are faced with two choices for a public health intervention, and we need to decide which one will help the most people. Sometimes we really do have to set a pollution target, and decide just what amount of risk is worth accepting for the economic benefits of industry. These are very difficult questions, and without good cost-benefit analysis we could get the answers dangerously wrong.

How much should we value statistical lives?

June 9 JDN 2458644

The very concept of putting a dollar value on a human life offends most people. I understand why: It suggests that human lives are fungible, and also seems to imply that killing people is just fine as long as it produces sufficient profit.

In next week’s post I’ll try to assuage some of those fears: Saying that a life is worth, say, $5 million doesn’t actually mean that it’s justifiable to kill someone as long as it pays you $5 million.

But for now let me say that we really have no choice but to do this. There are a huge number of interventions we could make in the world that all have the same basic form: They could save lives, but they cost money. We need to be able to say when we are justified in spending more money to save more lives, and when we are not.

No, it simply won’t do to say that “money is no object”. Because money isn’t just money—money is human happiness. A willingness to spend unlimited amounts to save even a single life, if it could be coherently implemented at all, would result in, if not complete chaos or deadlock, a joyless, empty world where we all live to be 100 by being contained in protective foam and fed by machines. It may be uncomfortable to ask a question like “How many people should we be willing to let die to let ourselves have Disneyland?”; but if that answer were zero, we should not have Disneyland. The same is true for almost everything in our lives: From automobiles to chocolate, almost any product you buy, any service you consume, has resulted in some person’s death at some point.

And there is an even more urgent reason, in fact: There are many things we are currently not doing that could save many lives for very little money. Targeted foreign aid or donations to top charities could save lives for as little as $1000 each. Foreign aid is so cost-effective that even if the only thing foreign aid had ever accomplished was curing smallpox, it would be twice as cost-effective as the UK National Health Service (which is one of the best healthcare systems in the world). Tighter environmental regulations save an additional life for about $200,000 in compliance cost, which is less than we would have spent in health care costs; the Clean Air Act added about $12 trillion to the US economy over the last 30 years.

Reduced military spending could literally pay us money to save people’s lives—based on the cost of the Afghanistan War, we are currently paying as much as $1 million per person to kill people that we really have very little reason to kill.

Most of the lives we could save are statistical lives: We can’t point to a particular individual who will or will not die because of the decision, but we can do the math and say approximately how many people will or will not die. We know that approximately 11,000 people will die each year if we loosen regulations on mercury pollution; we can’t say who they are, but they’re out there. Human beings have a lot of trouble thinking this way; it’s just not how our brains evolved to work. But when we’re talking about policy on a national or global scale, it’s quite simply the only way to do things. Anything else is talking nonsense.

Standard estimates of the value of a statistical life range from about $4 million to $9 million. These estimates are based on how much people are willing to pay for reductions in risk. So for instance if people would pay $100 to reduce their chances of dying by 0.01%, we divide the former by the latter to say that a life is worth about $1 million.

It’s a weird question: You clearly can’t just multiply like that. How much would you be willing to accept for a 100% chance of death? Presumably there isn’t really such an amount, because you would be dead. So your willingness-to-accept is undefined. And there’s no particular reason for it to be linear below that: Since marginal utility of wealth is decreasing, the amount you would demand for a 50% chance of death is a lot more than 50 times as much as what you would demand for a 1% chance of death.

Say for instance that utility of wealth is logarithmic, your current lifetime wealth is $1 million, and your current utility is about 70 QALY. Then if we measure wealth in thousands of dollars, we have W = 1000 and U = 10 ln W.

How much would you be willing to accept for a 1% chance of death? Your utility when dead is presumably zero, so we are asking for an amount m such that 0.99 U(W+m) = U(W). 0.99 (10 ln (W+m)) = 10 ln (W) means (W+m)^0.99 = W, so m = W^(1/0.99) – W. We started with W = 1000, so m = 72. You would be willing to accept $72,000 for a 1% chance of death. So we would estimate the value of a statistical life at $7.2 million.

How much for a 0.0001% chance of death? W^(1/0.999999)-W = 0.0069. So you would demand $6.90 for such a risk, and we’d estimate your value of a statistical life at $6.9 million. Pretty close, though not the same.

But how much would you be willing to accept for a 50% chance of death? W^(1/0.5) – W = 999,000. That is, $999 million. So if we multiplied that out, we’d say that your value of a statistical life has now risen to a staggering (and ridiculous) $2 billion.
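
Here is a short script that reproduces all three of these numbers under the stated assumptions (U = 10 ln W, lifetime wealth W = 1000 in thousands of dollars, utility when dead equal to zero):

```python
# Willingness-to-accept for a probability p of death, assuming
# U = 10 ln W with W = 1000 (thousands of dollars) and U(dead) = 0.
# Indifference condition: (1 - p) * 10 ln(W + m) = 10 ln W,
# which solves to m = W**(1/(1 - p)) - W.

W = 1000.0

def willingness_to_accept(p):
    return W ** (1 / (1 - p)) - W

for p in (1e-6, 0.01, 0.5):
    m = willingness_to_accept(p)  # in thousands of dollars
    vsl = m / p                   # naive VSL estimate, also in $K
    print(f"p = {p:g}: accept ${m:,.4f}K -> implied VSL ${vsl / 1e3:,.1f}M")

# p = 1e-06: accept ~$0.0069K  -> implied VSL ~$6.9M
# p = 0.01:  accept ~$72.27K   -> implied VSL ~$7.2M
# p = 0.5:   accept ~$999,000K -> implied VSL ~$1,998M (~$2 billion)
```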

Mathematically, the estimates are more consistent if we use small probabilities—but all this assumes that people actually know their own utility of wealth and calculate it correctly, which is a very unreasonable assumption.

The much bigger problem with this method is that human beings are terrible at dealing with small probabilities. When asked how much they’d be willing to pay to reduce their chances of dying by 0.01%, most people probably have absolutely no idea and may literally just say a random number.

We need to rethink our entire approach for judging such numbers. Honestly we shouldn’t be trying to put a dollar value on a human life; we should be asking about the dollar cost of saving a human life. We should be asking what else we could do with that money. Indeed, for the time being, I think the best thing to do is actually to compare lives to lives: How many lives could we save for this amount of money?

Thus, if we’re considering starting a war that will cost $1 trillion, we need to ask ourselves: How many innocent people would die if we don’t do that? How many will die if we do? And what else could we do with a trillion dollars? If the war is against Nazi Germany, okay, sure; we’re talking about killing millions to save tens of millions. But if it’s against ISIS, or Iran, those numbers don’t come out so great.

If we have a choice between two policies, each of which will cost $10 billion, and one of them will save 1,000 lives while the other will save 100,000, the obvious answer is to pick the second one. Yet this is exactly the world we live in, and we’re not doing that. We are throwing money at military spending and tax cuts (things that may not save any lives at all) and denying it to climate change adaptation, foreign aid, and poverty relief.

Instead of asking whether a given intervention is cost-effective based upon some notion of a dollar value of a human life, we should be asking what the current cost of saving a human life is, and we should devote all available resources into whatever means saves the most lives for the least money. Most likely that means some sort of foreign aid, public health intervention, or poverty relief in Third World countries. It clearly does not mean cutting taxes on billionaires or starting another war in the Middle East.

Just how poor is poor?

June 2 JDN 2458637

In last week’s post I told you about the richest of the rich, the billionaires with ten, eleven, or even twelve-figure net wealth. My concern about them is only indirect: I care that we have concentrated so many of the resources of our society into this handful of people instead of spreading it around where it would do more good. But it is not inherently bad for billionaires to exist; all other things equal, people having more wealth is good.

Today my topic is the poorest of the poor. Their status is inherently bad. No one deserves it, and while for much of history we may have been powerless to prevent it, we are no longer. We could help these people—quite substantially and quite cheaply, as you’ll see—and we are simply choosing not to. Perhaps you as an individual are not making this choice; perhaps, like me, you vote for candidates who support international aid and donate to top-rated international charities. But as a society, we are making this choice. Voters in the First World could all agree—or even 51% agree—that this problem really should be fixed, and we could fix it.

If asked, most people would say they care about world hunger; but either they are deeply ignorant of the solutions we now have available to us, or they can’t really care all that much, for otherwise they would have voted for politicians who were committed to actually implementing the spending necessary to fix it. Maybe people would prefer to fix world hunger as long as it didn’t cost them a cent; but ask them to pay even a little bit, and suddenly they’re not so sure.

At current prices, the official UN threshold for “extreme poverty” is $1.90 in real consumption per person per day. I want to be absolutely clear about this: This is adjusted for inflation and local purchasing power. The figures account for all consumption, including hunting, fishing, gathering, and goods made at home or obtained through bartering. This is not an artifact of failing to adjust for prices or not including goods that aren’t bought with money. These people really do live on less than $700 per year.

Shockingly, they are not all in Third World countries. While the majority of what we call “poverty” in the United States is well above the standard of living of UN “extreme poverty”, there are exceptions to this; there are about 5 million people in the US who are genuinely so poor that they are accurately categorized as at or near that $1.90 per day threshold.

This is such a shocking and horrifying truth that many people will try to deny it, as at least one libertarian think-tank did in a propagandistic screed. No, the UN isn’t lying; it’s really that bad. Extreme poverty in the US could be fixed so quickly, so easily that the fact that it remains in place can only be called an atrocity. Change a few numbers in the IRS code, work out a payment distribution system to reach people without bank accounts using cash or mobile payments, and by the end of the year you would have ended extreme poverty in the United States with no more than a few billion dollars diverted—which is to say, an amount that Jeff Bezos himself could afford to pay, or an amount that could be raised by a single percentage point of capital gains tax applied to billionaires only.

Even so, life is probably better for a homeless person on the street in New York City than it is for a child with malaria whose parents died in civil war in Congo. The New Yorker has access to clean water via drinking fountains, basic sanitation via public toilets (particularly in government buildings, since private businesses often specifically try to exclude the homeless), and basic nutrition via food banks and soup kitchens. The Congolese child has none of these things.

Life for the very poorest is a constant struggle for survival, against disease, malnutrition, dehydration, and parasites. Forget having a refrigerator or a microwave (as most of the poor in the US do, and rightly so—these things are really cheap here); they often have little clothing and no reliable shelter. The idea of going to a school or seeing a doctor sounds like a pipe dream. Surprisingly, there is a good chance that they or someone they know has a smartphone; if so it is likely their most prized possession. Though in Congo in particular, smartphones are relatively rare, which is ironic because the most critical raw material for smartphones—tantalum—is quite prevalent in Congo and a major source of conflict there.

Such a hard life is also typically a short one. The average life expectancy in Congo is less than 65 years, mainly because almost 15% of children there die before the age of five. Fortunately, infant and child mortality in Congo is rapidly declining, though that means it used to be even worse.

A disease that is merely inconvenient in a rich country is often fatal in a poor one; malaria is the classic example of this. Malaria remains the cause of over one million deaths per year, but essentially no one dies of malaria in First World countries. It can be treated with quinine, which costs no more than $3 per pill. But when your total consumption is $1.50 per day, a $3 pill is still prohibitively expensive. While in rich countries antibiotic-resistant tuberculosis is a real danger, for the world’s poorest people it doesn’t much matter if the bacteria are resistant to antibiotics, because nobody can afford antibiotics.

What could we do to save these people? A great deal, as it turns out.

Ending extreme poverty worldwide wouldn’t be as easy as ending it in the United States; there’s no central taxation authority that would let us simply change a few numbers and then start writing checks.
We could implement changes either through official development aid or through specific vetted non-governmental organizations, but each of these options carries drawbacks. Development aid can be embezzled by corrupt governments. NGOs can go bankrupt or have their assets expropriated.

Yet even with such challenges in mind, the total cost to end extreme poverty—not all poverty, but extreme poverty—worldwide is probably less than $200 billion per year. This is not a small sum, but it is well within our means. This is less than a third of the US military budget (not counting non-DoD military spending!), or about half what the US spends on gasoline.

Frankly I think we could safely divert that $200 billion directly from military spending without losing any national security. 21st century warfare is much less about blowing up targets and much more about winning hearts and minds. Ending world hunger would win an awful lot of hearts and minds, methinks. Obviously we can’t eliminate all military spending; those first two or three aircraft carrier battle groups really are keeping us and our allies safer. Did we really need eleven?

But all right, suppose we did need to raise additional tax revenue to fund this program. How much would taxes have to go up? Let’s say that only First World countries pay, which we can approximate using the GDP of the US and the EU (obviously we could also include Canada and Australia, but we might not want to include some of Eastern Europe, so that roughly balances out). Add up the $19 trillion of European Union GDP and $21 trillion of US GDP together and you get $40 trillion per year; $200 billion is only 0.5% of that. We would only need to raise taxes by half a percentage point to fund this program. Even if we didn’t make the tax progressive (and why wouldn’t we?), a typical family making $60,000 per year would only need to pay an extra $300 per year.
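
Spelling out that arithmetic (using the round GDP figures above, which are approximations rather than precise statistics):

```python
# Back-of-the-envelope arithmetic from the paragraph above.
# GDP figures are round numbers, not precise statistics.

eu_gdp = 19e12         # ~$19 trillion EU GDP
us_gdp = 21e12         # ~$21 trillion US GDP
program_cost = 200e9   # ~$200 billion per year to end extreme poverty

tax_rate = program_cost / (eu_gdp + us_gdp)
print(f"Required flat tax rate: {tax_rate:.1%}")  # 0.5%

household_income = 60_000
extra_tax = tax_rate * household_income
print(f"Extra tax on a ${household_income:,} household: ${extra_tax:,.0f}/year")  # $300
```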

Why aren’t we doing this?

This is a completely serious question. Feel free to read it in an exasperated voice. I honestly would like to know why the world is willing to leave so many people in so much suffering when we could save them for such little cost.