The only thing necessary for the triumph of evil is that good people refuse to do cost-benefit analysis

July 27, JDN 2457597

My title is based on a famous quote often attributed to Edmund Burke, but which we have no record of him actually saying:

The only thing necessary for the triumph of evil is that good men do nothing.

The closest he actually appears to have written is this:

When bad men combine, the good must associate; else they will fall one by one, an unpitied sacrifice in a contemptible struggle.

Burke’s intended message was about the need for cooperation and avoiding diffusion of responsibility; then his words were distorted into a duty to act against evil in general.

But my point today is going to be a little bit more specific: A great deal of real-world evils would be eliminated if good people were more willing to engage in cost-benefit analysis.

As discussed on Less Wrong a while back, there is a common “moral” saying which comes from the Talmud (if not earlier; and of course it’s hardly unique to Judaism), which gives people a great warm and fuzzy glow whenever they say it:

Whoever saves a single life, it is as if he had saved the whole world.

Yet this is in fact the exact opposite of moral. It is a fundamental, insane perversion of morality. It amounts to saying that “saving a life” is just a binary activity, either done or not, and once you’ve done it once, congratulations, you’re off the hook for the other 7 billion. All those other lives mean literally nothing, once you’ve “done your duty”.

Indeed, it would seem to imply that you can be a mass murderer, as long as you save someone else somewhere along the line. If Mao Tse-tung at some point stopped someone from being run over by a car, it’s okay that his policies killed more people than the population of Greater Los Angeles.

Conversely, if anything you have ever done has resulted in someone’s death, you’re just as bad as Mao; in fact if you haven’t also saved someone somewhere along the line and he has, you’re worse.

Maybe this is how you get otherwise-intelligent people saying such insanely ridiculous things as “George W. Bush’s crimes are uncontroversially worse than Osama bin Laden’s.” (No, probably not, since Chomsky at least feigns something like cost-benefit analysis. I’m not sure what his failure mode is, but it’s probably not this one in particular. “Uncontroversially”… you keep using that word…)

Cost-benefit analysis is actually a very simple concept (though applying it in practice can be mind-bogglingly difficult): Try to maximize the good things minus the bad things. If an action would increase good things more than bad things, do it; if it would increase bad things more than good things, don’t do it.
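The decision rule above is simple enough to state in a few lines of code. This is just an illustrative sketch of the idea; the numbers are made up, and of course the hard part in practice is estimating them.

```python
def net_benefit(benefits, costs):
    """Net benefit of an action: total good things minus total bad things."""
    return sum(benefits) - sum(costs)

def should_act(benefits, costs):
    """Do the action if and only if it increases good things more than bad things."""
    return net_benefit(benefits, costs) > 0

# Illustrative only: an action with benefits worth 100 and 40,
# and a cost worth 90 -- worth doing, despite the very real downside.
print(should_act(benefits=[100, 40], costs=[90]))  # True
```

Note that the rule never demands the downside be zero; it only demands that the upside be larger.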

What it replaces is simplistic deontological reasoning about “X is always bad” or “Y is always good”; that’s almost never true. Even great evils can be justified by greater goods, and many goods are not worth having because of the evils they would require to achieve. We seem to want all our decisions to have no downside, perhaps because that would resolve our cognitive dissonance most easily; but in the real world, most decisions have an upside and a downside, and it’s a question of which is larger.

Why is it that so many people—especially good people—have such an aversion to cost-benefit analysis?

I gained some insight into this by watching a video discussion from an online Harvard course taught by Michael Sandel (which is free, by the way, if you’d like to try it out). He was leading the discussion Socratically, which is in general a good method of teaching—but like anything else can be used to teach things that are wrong, and is in some ways more effective at doing so because it has a way of making students think they came up with the answers on their own. He says something like, “Do we really want our moral judgments to be based on cost-benefit analysis?” and gives some examples where people made judgments using cost-benefit analysis to support his suggestion that this is something bad.

But of course his examples are very specific: They all involve corporations using cost-benefit analysis to maximize profits. One of them is the Ford Pinto case, where Ford estimated the cost to them of a successful lawsuit, multiplied by the probability of such lawsuits, and then compared that with the cost of a total recall. Finding that the lawsuits were projected to be cheaper, they opted for that result, and thereby allowed several people to be killed by their known defective product.
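The calculation Ford is described as making is a simple expected-value comparison. The figures below are hypothetical placeholders, not the actual numbers from Ford’s internal memo:

```python
# Hypothetical figures for illustration -- not Ford's actual memo numbers.
cost_per_lawsuit = 200_000      # expected payout of one successful lawsuit
expected_lawsuits = 180         # projected number of successful lawsuits
recall_cost = 137_000_000       # cost of a total recall

# Expected cost of doing nothing: probability-weighted lawsuit payouts.
expected_lawsuit_cost = cost_per_lawsuit * expected_lawsuits

if expected_lawsuit_cost < recall_cost:
    decision = "pay the lawsuits"
else:
    decision = "do the recall"
```

With these placeholder numbers the expected lawsuit cost comes to $36 million against a $137 million recall, so the profit-maximizing choice is to pay the lawsuits, which is the structure of the decision people found so outrageous.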

Now, it later emerged that Ford Pintos were not actually especially dangerous, and in fact Ford didn’t just include lawsuits but also a standard estimate of the “value of a statistical human life”, and as a result of that their refusal to do the recall was probably the completely correct decision—but why let facts get in the way of a good argument?

But let’s suppose that all the facts had been as people thought they were—the product was unsafe and the company was only interested in their own profits. We don’t need to imagine this hypothetically; this is clearly what actually happened with the tobacco industry, and indeed with the oil industry. Is that evil? Of course it is. But not because it’s cost-benefit analysis.

Indeed, the reason this is evil is the same reason most things are evil: They are psychopathically selfish. They advance the interests of those who do them, while causing egregious harms to others.

Exxon is apparently prepared to sacrifice millions of lives to further their own interests, which makes them literally no better than Mao, as opposed to this bizarre “no better than Mao” that we would all be if the number of lives saved versus killed didn’t matter. Let me be absolutely clear: I am not speaking in hyperbole when I say that the board of directors of Exxon is morally no better than Mao. No, I mean they literally are willing to murder 20 million people to serve their own interests—more precisely 10 to 100 million, by WHO estimates. Maybe it matters a little bit that these people will be killed by droughts and hurricanes rather than by knives and guns; but then, most of the people Mao killed died of starvation, and plenty of the people killed by Exxon will too. But this statement wouldn’t have the force it does if I could not speak in terms of quantitative cost-benefit analysis. Killing people is one thing, and most industries would have to own up to it; being literally willing to kill as many people as history’s greatest mass murderers is quite another; and yet it is true of Exxon.

But I can understand why people would tend to associate cost-benefit analysis with psychopaths maximizing their profits; there are two reasons for this.

First, most neoclassical economists appear to believe in both cost-benefit analysis and psychopathic profit maximization. They don’t even clearly distinguish their concept of “rational” from the concept of total psychopathic selfishness—hence why I originally titled this blog “infinite identical psychopaths”. The people arguing for cost-benefit analysis are usually economists, and economists are usually neoclassical, so most of the time you hear arguments for cost-benefit analysis they are also linked with arguments for horrifically extreme levels of selfishness.

Second, most people are uncomfortable with cost-benefit analysis, and as a result don’t use it. So, most of the cost-benefit analysis you’re likely to hear is done by terrible human beings, typically at the reins of multinational corporations. This becomes self-reinforcing, as all the good people don’t do cost-benefit analysis, so they don’t see good people doing it, so they don’t do it, and so on.

Therefore, let me present you with some clear-cut cases where cost-benefit analysis can save millions of lives, and perhaps even save the world.

Imagine if our terrorism policy used cost-benefit analysis; we wouldn’t kill 100,000 innocent people and sacrifice 4,400 soldiers fighting a war that didn’t have any appreciable benefit as a bizarre form of vengeance for 3,000 innocent people being killed. Moreover, we wouldn’t sacrifice core civil liberties to prevent a cause of death that’s 300 times rarer than car accidents.

Imagine if our healthcare policy used cost-benefit analysis; we would direct research funding to maximize our chances of saving lives, not toward the form of cancer that is quite literally the sexiest. We would go to a universal healthcare system like the rest of the First World, and thereby save thousands of additional lives while spending less on healthcare.

With cost-benefit analysis, we would reform our system of taxes and subsidies to internalize the cost of carbon emissions, most likely resulting in a precipitous decline of the oil and coal industries and the rapid rise of solar and nuclear power, and thereby save millions of lives. Without cost-benefit analysis, we instead get unemployed coal miners appearing on TV to grill politicians about how awful it is to lose your job even though that job is decades obsolete and poisoning our entire planet. Would eliminating coal hurt coal miners? Yes, it would, at least in the short run. It’s also completely, totally worth it, by at least a thousandfold.

We would invest heavily in improving our transit systems, with automated cars or expanded rail networks, thereby preventing thousands of deaths per year—instead of being shocked and outraged when an automated car finally kills one person, while manual vehicles in their place would have killed half a dozen by now.

We would disarm all of our nuclear weapons, because the risk of a total nuclear apocalypse is not worth it to provide some small increment in national security above our already overwhelming conventional military. While we’re at it, we would downsize that military in order to save enough money to end world hunger.

And oh by the way, we would end world hunger. The benefits of doing so are enormous; the costs are remarkably small. We’ve actually been making a great deal of progress lately—largely due to the work of development economists, and lots and lots of cost-benefit analysis. This process involves causing a lot of economic disruption, making people unemployed, taking riches away from some people and giving them to others; if we weren’t prepared to bear those costs, we would never get these benefits.

Could we do all these things without cost-benefit analysis? I suppose so, if we go through the usual process of covering our ears whenever a downside is presented and amplifying whenever an upside is presented, until we can more or less convince ourselves that there is no downside even though there always is. We can continue having arguments where one side presents only downsides, the other side presents only upsides, and then eventually one side prevails by sheer numbers, and it could turn out to be the upside team (or should I say “tribe”?).

But I think we’d progress a lot faster if we were honest about upsides and downsides, and had the courage to stand up and say, “Yes, that downside is real; but it’s worth it.” I realize it’s not easy to tell a coal miner to his face that his job is obsolete and killing people, and I don’t really blame Hillary Clinton for being wishy-washy about it; but the truth is, we need to start doing that. If we accept that costs are real, we may be able to mitigate them (as Hillary plans to do with a $30 billion investment in coal mining communities, by the way); if we pretend they don’t exist, people will still get hurt but we will be blind to their suffering. Or worse, we will do nothing—and evil will triumph.

Why it matters that torture is ineffective

JDN 2457531

Like “longest-ever-serving Speaker of the House sexually abuses teenagers” and “NSA spy program is trying to monitor the entire telephone and email system”, the news that the US government systematically tortures suspects is an egregious violation that goes to the highest levels of our government—that for some reason most Americans don’t particularly seem to care about.

The good news is that President Obama signed an executive order in 2009 banning torture domestically, reversing official policy under the Bush Administration, and then better yet in 2014 expanded the order to apply to all US interests worldwide. If this is properly enforced, perhaps our history of hypocrisy will finally be at its end. (Well, not if Trump wins…)

Yet as often seems to happen, there are two extremes in this debate and I think they’re both wrong.

The really disturbing side is “Torture works and we have to use it!” The preferred mode of argumentation for this is the “ticking time bomb scenario”, in which we have some urgent disaster to prevent (such as a nuclear bomb about to go off) and torture is the only way to stop it from happening. Surely then torture is justified? This argument may sound plausible, but as I’ll get to below, this is a lot like saying, “If aliens were attacking from outer space trying to wipe out humanity, nuclear bombs would probably be justified against them; therefore nuclear bombs are always justified and we can use them whenever we want.” If you can’t wait for my explanation, The Atlantic skewers the argument nicely.

Yet the opponents of torture have brought this sort of argument on themselves, by staking out a position so extreme as “It doesn’t matter if torture works! It’s wrong, wrong, wrong!” This kind of simplistic deontological reasoning is very appealing and intuitive to humans, because it casts the world into simple black-and-white categories. To show that this is not a strawman, here are several different people all making this same basic argument, that since torture is illegal and wrong it doesn’t matter if it works and there should be no further debate.

But the truth is, if it really were true that the only way to stop a nuclear bomb from leveling Los Angeles was to torture someone, it would be entirely justified—indeed obligatory—to torture that suspect and stop that nuclear bomb.

The problem with that argument is not just that this is not our usual scenario (though it certainly isn’t); it goes much deeper than that:

That scenario makes no sense. It wouldn’t happen.

To use the example the late Antonin Scalia used from an episode of 24 (perhaps the most egregious Fictional Evidence Fallacy ever committed), if there ever is a nuclear bomb planted in Los Angeles, that would literally be one of the worst things that ever happened in the history of the human race—literally a Holocaust in the blink of an eye. We should be prepared to cause extreme suffering and death in order to prevent it. But not only is that event (fortunately) very unlikely, torture would not help us.

Why? Because torture just doesn’t work that well.

It would be too strong to say that it doesn’t work at all; it’s possible that it could produce some valuable intelligence—though clear examples of such results are amazingly hard to come by. Some social scientists have, however, found empirical results showing some effectiveness of torture. We can’t say with any certainty that it is completely useless. (For obvious reasons, a randomized controlled experiment in torture would be wildly unethical, so none have ever been attempted.) But to justify torture it isn’t enough that it could work sometimes; it has to work vastly better than any other method we have.

And our empirical data is in fact reliable enough to show that that is not the case. Torture often produces unreliable information, as we would expect from the game theory involved—your incentive is to stop the pain, not provide accurate intel; the psychological trauma that torture causes actually distorts memory and reasoning; and as a matter of fact basically all the useful intelligence obtained in the War on Terror was obtained through humane interrogation methods. As interrogation experts agree, torture just isn’t that effective.

In principle, there are four basic cases to consider:

1. Torture is vastly more effective than the best humane interrogation methods.

2. Torture is slightly more effective than the best humane interrogation methods.

3. Torture is as effective as the best humane interrogation methods.

4. Torture is less effective than the best humane interrogation methods.

The evidence points most strongly to case 4, which would make rejecting torture a no-brainer; if it doesn’t even work as well as other methods, it’s absurd to use it. You’re basically kicking puppies at that point—purely sadistic violence that accomplishes nothing. But the data isn’t clear enough for us to rule out case 3 or even case 2. There is only one case we can strictly rule out, and that is case 1.

But it was only in case 1 that torture could ever be justified!

If you’re trying to justify doing something intrinsically horrible, it’s not enough that it has some slight benefit.

People seem to have this bizarre notion that we have only two choices in morality:

Either we are strict deontologists, and wrong actions can never be justified by good outcomes, in which case apparently vaccines are morally wrong, because stabbing children with needles is wrong. To be fair, some people seem to actually believe this; but then, some people believe the Earth is less than 10,000 years old.

Or alternatively we are the bizarre strawman concept most people seem to have of utilitarianism, under which any wrong action can be justified by even the slightest good outcome, in which case all you need to do to justify slavery is show that it would lead to a 1% increase in per-capita GDP. Sadly, there honestly do seem to be economists who believe this sort of thing. Here’s one arguing that US chattel slavery was economically efficient, and some of the more extreme arguments for why sweatshops are good can take on this character. Sweatshops may be a necessary evil for the time being, but they are still an evil.

But what utilitarianism actually says (and I consider myself some form of nuanced rule-utilitarian, though actually I sometimes call it “deontological consequentialism” to emphasize that I mean to synthesize the best parts of the two extremes) is not that the ends always justify the means, but that the ends can justify the means—that it can be morally good or even obligatory to do something intrinsically bad (like stabbing children with needles) if it is the best way to accomplish some greater good (like saving them from measles and polio). But the good actually has to be greater, and it has to be the best way to accomplish that good.

To see why this latter proviso is important, consider the real-world ethical issues involved in psychology experiments. The benefits of psychology experiments are already quite large, and poised to grow as the science improves; one day the benefits of cognitive science to humanity may be even larger than the benefits of physics and biology are today. Imagine a world without mood disorders or mental illness of any kind; a world without psychopathy, where everyone is compassionate; a world where everyone is achieving their full potential for happiness and self-actualization. Cognitive science may yet make that world possible—and I haven’t even gotten into its applications in artificial intelligence.

To achieve that world, we will need a great many psychology experiments. But does that mean we can just corral people off the street and throw them into psychology experiments without their consent—or perhaps even their knowledge? That we can do whatever we want in those experiments, as long as it’s scientifically useful? No, it does not. We have ethical standards in psychology experiments for a very good reason, and while those ethical standards do slightly reduce the efficiency of the research process, the reduction is small enough that the moral choice is obviously to retain the ethics committees and accept the slight reduction in research efficiency. Yes, randomly throwing people into psychology experiments might actually be slightly better in purely scientific terms (larger and more random samples)—but it would be terrible in moral terms.

Along similar lines, even if torture works about as well or even slightly better than other methods, that’s simply not enough to justify it morally. Making a successful interrogation take 16 days instead of 17 simply wouldn’t be enough benefit to justify the psychological trauma to the suspect (and perhaps the interrogator!), the risk of harm to the falsely accused, or the violation of international human rights law. And in fact a number of terrorism suspects were waterboarded for months, so even the idea that it could shorten the interrogation is pretty implausible. If anything, torture seems to make interrogations take longer and give less reliable information—case 4.

A lot of people seem to have this impression that torture is amazingly, wildly effective, that a suspect who won’t crack after hours of humane interrogation can be tortured for just a few minutes and give you all the information you need. This is exactly what we do not find empirically; if he didn’t crack after hours of talk, he won’t crack after hours of torture. If you literally only have 30 minutes to find the nuke in Los Angeles, I’m sorry; you’re not going to find the nuke in Los Angeles. No adversarial interrogation is ever going to be completed that quickly, no matter what technique you use. Evacuate as many people to safe distances or underground shelters as you can in the time you have left.

This is why the “ticking time-bomb” scenario is so ridiculous (and so insidious); that’s simply not how interrogation works. The best methods we have for “rapid” interrogation of hostile suspects take hours or even days, and they are humane—building trust and rapport is the most important step. The goal is to get the suspect to want to give you accurate information.

For the purposes of the thought experiment, okay, you can stipulate that it would work (this is what the Stanford Encyclopedia of Philosophy does). But now all you’ve done is made the thought experiment more distant from the real-world moral question. The closest real-world examples we’ve ever had involved individual crimes, probably too small to justify the torture (as bad as a murdered child is, think about what you’re doing if you let the police torture people). But by the time the terrorism to be prevented is large enough to really be sufficient justification, it (1) hasn’t happened in the real world and (2) surely involves terrorists who are sufficiently ideologically committed that they’ll be able to resist the torture. If such a situation arises, of course we should try to get information from the suspects—but what we try should be our best methods, the ones that work most consistently, not the ones that “feel right” and maybe happen to work on occasion.

Indeed, the best explanation I have for why people use torture at all, given its horrible effects and, at best, mediocre effectiveness, is that it feels right.

When someone does something terrible (such as an act of terrorism), we rightfully reduce our moral valuation of them relative to everyone else. If you are even tempted to deny this, suppose a terrorist and a random civilian are both inside a burning building and you only have time to save one. Of course you save the civilian and not the terrorist. And that’s still true even if you know that once the terrorist was rescued he’d go to prison and never be a threat to anyone else. He’s just not worth as much.

In the most extreme circumstances, a person can be so terrible that their moral valuation should be effectively zero: If the only person in a burning building is Stalin, I’m not sure you should save him even if you easily could. But it is a grave moral mistake to think that a person’s moral valuation should ever go negative, yet I think this is something that people do when confronted with someone they truly hate. The federal agents torturing those terrorists didn’t merely think of them as worthless—they thought of them as having negative worth. They felt it was a positive good to harm them. But this is fundamentally wrong; no sentient being has negative worth. Some may be so terrible as to have essentially zero worth; and we are often justified in causing harm to some in order to save others. It would have been entirely justified to kill Stalin (as a matter of fact he died of a stroke in old age), to remove the continued threat he posed; but to torture him would not have made the world a better place, and actually might well have made it worse.

Yet I can see how psychologically it could be useful to have a mechanism in our brains that makes us hate someone so much we view them as having negative worth. It makes it a lot easier to harm them when necessary, makes us feel a lot better about ourselves when we do. The idea that any act of homicide is a tragedy but some of them are necessary tragedies is a lot harder to deal with than the idea that some people are just so evil that killing or even torturing them is intrinsically good. But some of the worst things human beings have ever done ultimately came from that place in our brains—and torture is one of them.