Hypocrisy is underrated

Sep 12 JDN 2459470

Hypocrisy isn’t a good thing, but it isn’t nearly as bad as most people seem to think. An accusation of hypocrisy is often treated as a knock-down argument against everything the accused is saying, and this is just utterly wrong. Someone can be a hypocrite and still be mostly right.

Often people are accused of hypocrisy when they are not being hypocritical; for instance, the right wing seems to think that “They want higher taxes on the rich, but they are rich!” is hypocrisy, when in fact it’s simply altruism. (If they had wanted the rich guillotined, that would be hypocrisy. Maybe the problem is that the right wing can’t tell the difference?) Even worse, “They live under capitalism but they want to overthrow capitalism!” is not even close to hypocrisy—after all, how would someone overthrow a system they weren’t living under? (There are many things wrong with Marxists, but that is not one of them.)

But in fact I intend something stronger: Hypocrisy itself just isn’t that bad.


There are currently two classes of Republican politicians with regard to the COVID vaccines: Those who are consistent in their principles and don’t get the vaccines, and those who are hypocrites and get the vaccines while telling their constituents not to. Of the two, who is better? The hypocrites. At least they are doing the right thing even as they say things that are very, very wrong.

There are really four cases to consider. The principles you believe in could be right, or they could be wrong. And you could follow those principles, or you could be a hypocrite. These two factors are independent of each other.

If your principles are right and you are consistent, that’s the best case; if your principles are right and you are a hypocrite, that’s worse.

But if your principles are wrong and you are consistent, that’s the worst case; if your principles are wrong and you are a hypocrite, that’s better.

In fact I think for most things the ordering goes like this: Consistent Right > Hypocritical Wrong > Hypocritical Right > Consistent Wrong. Your behavior counts for more than your principles—so if you’re going to be a hypocrite, it’s better for your good actions to not match your bad principles.

Obviously if we could get people to believe good moral principles and then follow them, that would be best. And we should in fact be working to achieve that.

But if you know that someone’s moral principles are wrong, it doesn’t accomplish anything to accuse them of being a hypocrite. If the accusation is true, their hypocrisy is actually a good thing.

Here’s a pretty clear example for you: Anyone who says that the Bible is infallible but doesn’t want gay people stoned to death is a hypocrite. The Bible is quite clear on this matter; Leviticus 20:13 really doesn’t leave much room for interpretation. By this standard, most Christians are hypocrites—and thank goodness for that. I owe my life to the hypocrisy of millions.

Of course if I could convince them that the Bible isn’t infallible—perhaps by pointing out all the things it says that contradict their most deeply-held moral and factual beliefs—that would be even better. But the last thing I want to do is make their behavior more consistent with their belief that the Bible is infallible; that would turn them into fanatical monsters. The Spanish Inquisition was very consistent in behaving according to the belief that the Bible is infallible.

Here’s another example: Anyone who thinks that cruelty to cats and dogs is wrong but is willing to buy factory-farmed beef and ham is a hypocrite. Any principle that would tell you that it’s wrong to kick a dog or cat would tell you that the way cows and pigs are treated in CAFOs is utterly unconscionable. But if you are really unwilling to give up eating meat and you can’t find or afford free-range beef, it still would be bad for you to start kicking dogs in a display of your moral consistency.

And one more example for good measure: The leaders of any country who resist human rights violations abroad but tolerate them at home are hypocrites. Obviously the best thing to do would be to fight human rights violations everywhere. But perhaps for whatever reason you are unwilling or unable to do this—one disturbing truth is that many human rights violations at home (such as draconian border policies) are often popular with your local constituents. Human-rights violations abroad are also often more severe—detaining children at the border is one thing, a full-scale genocide is quite another. So, for good reasons or bad, you may decide to focus your efforts on resisting human rights violations abroad rather than at home; this would make you a hypocrite. But it would still make you much better than a more consistent leader who simply ignores all human rights violations wherever they may occur.

In fact, there are cases in which it may be optimal for you to knowingly be a hypocrite. If you have two sets of competing moral beliefs, and you don’t know which is true but you know that as a whole they are inconsistent, your best option is to apply each set of beliefs in the domain for which you are most confident that it is correct, while searching for more information that might allow you to correct your beliefs and reconcile the inconsistency. If you are self-aware about this, you will know that you are behaving in a hypocritical way—but you will still behave better than you would if you picked the wrong beliefs and stuck to them dogmatically. In fact, given a reasonable level of risk aversion, you’ll be better off being a hypocrite than you would by picking one set of beliefs arbitrarily (say, at the flip of a coin). At least then you avoid the worst-case scenario of being the most wrong.
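To make the risk-aversion point concrete, here is a minimal sketch in Python of a toy model, with every number invented purely for illustration: two moral domains, two mutually inconsistent belief systems, and a concave utility function standing in for risk aversion.

```python
import math

# Toy model (illustrative numbers only): two domains, two inconsistent
# belief systems A and B. Exactly one system is correct, each with
# probability 0.5. You are more confident in A for domain 1 and in B for
# domain 2, so the "self-aware hypocrite" applies A in domain 1 and B in
# domain 2; the "dogmatist" flips a coin and applies one system everywhere.

def utility(right_answers):
    # Concave utility = risk aversion: each additional domain you get
    # right is worth a bit less than the one before.
    return math.sqrt(right_answers)

# If A is the true system, the hypocrite gets domain 1 right and domain 2
# wrong (1 correct); if B is true, the reverse (also 1 correct).
hypocrite_eu = 0.5 * utility(1) + 0.5 * utility(1)

# The dogmatist gets both domains right if the coin matched the true
# system (probability 0.5) and both wrong otherwise.
dogmatist_eu = 0.5 * utility(2) + 0.5 * utility(0)

print(f"Self-aware hypocrite: expected utility {hypocrite_eu:.3f}")
print(f"Coin-flip dogmatist:  expected utility {dogmatist_eu:.3f}")
# The expected *number* of correct domains is 1 either way, but the
# hypocrite avoids the worst case (0 correct), so any concave utility
# function ranks hypocrisy higher.
```

The point of the sketch is only that hedging across belief systems trims the downside; with any risk-averse (concave) utility, that is enough to beat picking one system at random.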

There is yet another factor to take into consideration. Sometimes following your own principles is hard.

Considerable ink has been spilled on the concept of akrasia, or “weakness of will”, in which we judge that A is better than B yet still find ourselves doing B. Philosophers continue to debate to this day whether this really happens. As a behavioral economist, I observe it routinely, perhaps even daily. In fact, I observe it in myself.

I think the philosophers’ mistake is to presume that there is one simple, well-defined “you” that makes all observations and judgments and takes actions. Our brains are much more complicated than that. There are many “you”s inside your brain, each with its own capacities, desires, and judgments. Yes, there is some important sense in which they are all somehow unified into a single consciousness—by a mechanism which still eludes our understanding. But it doesn’t take esoteric cognitive science to see that there are many minds inside you: Haven’t you ever felt an urge to do something you knew you shouldn’t do? Haven’t you ever succumbed to such an urge—drank the drink, eaten the dessert, bought the shoes, slept with the stranger—when it seemed so enticing but you knew it wasn’t really the right choice?

We even speak of being “of two minds” when we are ambivalent about something, and I think there is literal truth in this. The neural networks in your brain are forming coalitions, and arguing between them over which course of action you ought to take. Eventually one coalition will prevail, and your action will be taken; but afterward your reflective mind need not always agree that the coalition which won the vote was the one that deserved to.

The evolutionary reason for this is simple: We’re a kludge. We weren’t designed from the top down for optimal efficiency. We were the product of hundreds of millions of years of subtle tinkering, adding a bit here, removing a bit there, layering the mammalian, reflective cerebral cortex over the reptilian, emotional limbic system over the ancient, involuntary autonomic system. Combine this with the fact that we are built in pairs, with left and right halves of each kind of brain (and yes, they are independently functional when their connection is severed), and the wonder is that we ever agree with our own decisions.

Thus, there is a kind of hypocrisy that is not a moral indictment at all: You may genuinely and honestly agree that it is morally better to do something and still not be able to bring yourself to do it. You may know full well that it would be better to donate that money to malaria treatment rather than buy yourself that tub of ice cream—you may be on a diet and know full well that the ice cream won’t even benefit you in the long run—and still not be able to stop yourself from buying the ice cream.

Sometimes your feeling of hesitation at an altruistic act may be a useful insight; I certainly don’t think we should feel obliged to give all our income, or even all of our discretionary income, to high-impact charities. (For most people I encourage 5%. I personally try to aim for 10%. If everyone middle-class and above in the First World gave even 1%, we could definitely end world hunger.) But other times it may lead you astray, making you unable to resist the temptation of a delicious treat or a shiny new toy when even you know the world would be better off if you did otherwise.

Yet when following our own principles is so difficult, it’s not really much of a criticism to point out that someone has failed to do so, particularly when they themselves already recognize that they failed. The inconsistency between behavior and belief indicates that something is wrong, but it may not be any dishonesty or even anything wrong with their beliefs.

I wouldn’t go so far as to say you should never call out hypocrisy. Sometimes it is clearly useful to do so. But while hypocrisy is often the sign of a moral failing, it isn’t always—and even when it is, as often as not the problem is the bad principles, not the behavior inconsistent with them.

Good news for a change

Mar 28 JDN 2459302

When President Biden made his promise to deliver 100 million vaccine doses to Americans within his first 100 days, many were skeptical. Perhaps we had grown accustomed to the anti-scientific attitudes and utter incompetence of Trump’s administration, and no longer believed that the US federal government could do anything right.

The skeptics were wrong. For the promise has not only been kept, it has been greatly exceeded. As of this writing, Biden has been President for 60 days and we have already administered 121 million vaccine doses. If we continue at the current rate, it is likely that we will have administered over 200 million vaccine doses and fully vaccinated over 100 million Americans by Biden’s promised 100-day timeline—twice as fast as what was originally promised. Biden has made another bold promise: Every adult in the United States vaccinated by the end of May. I admit I’m not confident it can be done—but I wasn’t confident we’d hit 100 million by now either.
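The arithmetic behind that extrapolation is simple enough to check. Here is a quick sketch using the figures quoted in this paragraph, with the daily rate assumed constant (if anything a conservative assumption, since the rollout was still accelerating):

```python
# Back-of-the-envelope check of the extrapolation above, using the
# figures quoted in this post (rate assumed constant for simplicity).
doses_so_far = 121_000_000    # doses administered by day 60
days_elapsed = 60
promise_days = 100
promised_doses = 100_000_000  # the original 100-day promise

daily_rate = doses_so_far / days_elapsed      # about 2.0 million doses/day
projected = daily_rate * promise_days         # about 202 million by day 100

print(f"Average daily rate: {daily_rate/1e6:.2f} million doses/day")
print(f"Projected by day {promise_days}: {projected/1e6:.0f} million doses")
print(f"Multiple of the original promise: {projected/promised_doses:.1f}x")
```

That simple linear projection is where the “twice as fast as what was originally promised” figure comes from.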

In fact, the US now has one of the best rates of COVID vaccination in the world, with the proportion of our population vaccinated far above the world average and below only Israel, UAE, Chile, the UK, and Bahrain (plus some tiny countries like Monaco). Indeed, we have the largest absolute number of vaccinated individuals in the world, surpassing even China and India.

It turns out that the now-infamous map saying that the US and UK were among the countries best-prepared for a pandemic wasn’t so wrong after all; it’s just that four years of such an awful administration made our otherwise excellent preparedness fail. Put someone good in charge, and yes, indeed, it turns out that the US can deal with pandemics quite well.

The overall rate of new COVID cases in the US began to plummet right around the time the vaccination program gained steam, and has plateaued around 50,000 per day for the past few weeks. This is still much too high, but it is a vast improvement over the 200,000 cases per day we had in early January. Our death rate due to COVID now hovers around 1,500 people per day—that’s still a 9/11 every two days. But this is half what our death rate was at its worst. And since our baseline death rate is 7,500 deaths per day, 1,800 of them from heart disease, this now means that COVID is no longer the leading cause of death in the United States; heart disease has once again reclaimed its throne. Of course, people dying from heart disease is still a bad thing; but it’s at least a sign of returning to normalcy.

Worldwide, the pandemic is slowing down, but still by no means defeated, with over 400,000 new cases and 7,500 deaths every day. The US rate of 17 new cases per 100,000 people per day is about 3 times the world average, but comparable to Germany (17) and Norway (18), and nowhere near as bad as Chile (30), Brazil (35), France (37), or Sweden (45), let alone the very hardest-hit places like Serbia (71), Hungary (78), Jordan (83), Czechia (90), and Estonia (110). (That big gap between Norway and Sweden? It’s because Sweden resisted using lockdowns.) And there is cause for optimism even in these places, as vaccine doses administered already exceed total confirmed COVID cases.
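For the curious, here is a rough version of that per-capita comparison, using the case counts quoted above together with round population figures that are my own assumptions (about 330 million for the US and 7.8 billion for the world); the quoted figure of 17 presumably comes from a multi-day average on a tracking site, so small discrepancies are expected.

```python
# Rough check of the per-capita comparison, using the case counts quoted
# in the post and assumed round population figures.
def rate_per_100k(daily_cases, population):
    return daily_cases / population * 100_000

us_rate = rate_per_100k(50_000, 330_000_000)        # roughly 15 per 100k per day
world_rate = rate_per_100k(400_000, 7_800_000_000)  # roughly 5 per 100k per day

print(f"US: {us_rate:.0f} new cases per 100,000 per day")
print(f"World: {world_rate:.1f} new cases per 100,000 per day")
print(f"Ratio: {us_rate/world_rate:.1f}x the world average")
```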

I can see a few patterns in the rate of vaccination by state: very isolated states have managed to vaccinate their population fastest—Hawaii and Alaska have done very well, and even most of the territories have done quite well (though notably not Puerto Rico). The south has done poorly (for obvious reasons), but not as poorly as I might have feared; even Texas and Mississippi have given at least one dose to 21% of their population. New England has been prioritizing getting as many people with at least one dose as possible, rather than trying to fully vaccinate each person; I think this is the right strategy.

We must continue to stay home when we can and wear masks when we go out. This will definitely continue for at least a few more months, and the vaccine rollout may not even be finished in many countries by the end of the year. In the worst-case scenario, COVID may become an endemic virus that we can’t fully eradicate and we’ll have to keep getting vaccinated every year like we do for influenza (though the good news there is that it likely wouldn’t be much more dangerous than influenza at that point either—though another influenza is nothing to, er, sneeze at).

Yet there is hope at last. Things are finally getting better.

Ancient plagues, modern pandemics

Mar 1 JDN 2458917

The coronavirus epidemic continues; though it originated in the city of Wuhan, China, the virus has now been confirmed in places as far-flung as Italy, Brazil, and Mexico. So far, about 90,000 people have caught it, and about 3,000 have died, mostly in China.

There are legitimate reasons to be concerned about this epidemic: Like influenza, coronavirus spreads quickly, and can be carried without symptoms, yet unlike influenza, it has a very high rate of complications, causing hospitalization as often as 10% of the time and death as often as 2%. There’s a lot of uncertainty about these numbers, because it’s difficult to know exactly how many people are infected but either have no symptoms or have symptoms that can be confused with other diseases. But we do have reason to believe that coronavirus is much deadlier for those infected than influenza: Influenza spreads so widely that it kills about 300,000 people every year, but this is only 0.1% of the people infected.
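Using the rough figures in this paragraph, and keeping in mind how uncertain they are, the implied comparison works out like this:

```python
# Rough comparison using the figures quoted above (highly uncertain,
# especially this early in an outbreak).
flu_deaths_per_year = 300_000
flu_fatality_rate = 0.001             # about 0.1% of those infected

# Deaths divided by the fatality rate gives the implied number infected.
implied_flu_infections = flu_deaths_per_year / flu_fatality_rate  # ~300 million

covid_fatality_rate = 0.02            # "as often as 2%"
ratio = covid_fatality_rate / flu_fatality_rate

print(f"Implied flu infections per year: ~{implied_flu_infections/1e6:.0f} million")
print(f"Fatality rate ratio (coronavirus vs. flu): ~{ratio:.0f}x deadlier per infection")
```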

And yet, despite our complex interwoven network of international trade that sends people and goods all around the world, our era is probably the safest in history in terms of the risk of infectious disease.

Partly this is technology: Especially for bacterial infections, we have highly effective treatments that our forebears lacked. But for most viral infections we actually don’t have very effective treatments—which means that technology per se is not the real hero here.

Vaccination is a major part of the answer: Vaccines have effectively eradicated polio and smallpox, and would probably be on track to eliminate measles and rubella if not for dangerous anti-vaccination ideology. But even with no vaccine against coronavirus (yet) and not very effective vaccines against influenza, still the death rates from these viruses are nowhere near those of ancient plagues.

The Black Death killed something like 40% of Europe’s entire population. The Plague of Justinian killed as many as 20% of the entire world’s population. This is a staggeringly large death rate compared to a modern pandemic, in which even a 2% death rate would be considered a total catastrophe.

Even the 1918 influenza pandemic, which killed more than all the battle deaths in World War I combined, wasn’t as terrible as an ancient plague; it killed about 2% of the infected population. And when a very similar influenza virus appeared in 2009, how many people did it kill? About 400,000 people, roughly 0.1% of those infected, only slightly worse than the average flu season. That’s how much better our public health has gotten in the last century alone.

Remember SARS, a previous viral pandemic that also emerged in China? It only killed 774 people, in a year in which over 300,000 died of influenza.

Sanitation is probably the most important factor: Certainly sanitation was far worse in ancient times. Today almost everyone routinely showers and washes their hands, which makes a big difference—but it’s notable that widespread bathing didn’t save the Romans from the Plague of Justinian.

I think it’s underappreciated just how much better our communication and quarantine procedures are today than they once were. In ancient times, the only way you heard about a plague was a live messenger carrying the news—and that messenger might well be already carrying the virus. Today, an epidemic in China becomes immediate news around the world. This means that people prepare—they avoid travel, they stock up on food, they become more diligent about keeping clean. And perhaps even more important than the preparation by individual people is the preparation by institutions: Governments, hospitals, research labs. We can see the pandemic coming and be ready to respond weeks or even months before it hits us.

So yes, do wash your hands regularly. Wash for at least 20 seconds, which will definitely feel like a long time if you haven’t made it a habit—but it does make a difference. Try to avoid travel for a while. Stock up on food and water in case you need to be quarantined. Follow whatever instructions public health officials give as the pandemic progresses. But you don’t need to panic: We’ve got this under control. That Horseman of the Apocalypse is dead; and fear not, Famine and War are next. I’m afraid Death himself will probably be a while, though.

Influenza vaccination, herd immunity, and the Tragedy of the Commons

Dec 24 JDN 2458112

Usually around this time of year I do a sort of “Christmas special” blog post, something about holidays or gifts. But this year I have a rather different seasonal idea in mind. It’s not just the holiday season; it’s also flu season.

Each year, influenza kills over 56,000 people in the US, and between 300,000 and 600,000 people worldwide, mostly in the winter months. And yet, in any given year, only about 40% of adults and 60% of children get the flu vaccine.

The reason for this should be obvious to any student of economics: It’s a Tragedy of the Commons. If enough people got vaccinated that we attained reliable herd immunity (which would take about 90%), then almost nobody would get influenza, and the death rate would plummet. But for any given individual, the vaccine is actually not all that effective. Your risk of getting the flu only drops by about half if you receive the vaccine. The effectiveness is particularly low among the elderly, who are also at the highest risk for serious complications due to influenza.
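Where does a figure like 90% come from? A textbook back-of-the-envelope goes as follows: the herd immunity threshold is 1 − 1/R0, and with an imperfect vaccine you divide that by the vaccine’s effectiveness to get the coverage actually required. In the sketch below, the R0 value (1.8) is my own assumption, chosen to be consistent with the figures in this post; published estimates for influenza vary.

```python
# Back-of-the-envelope herd immunity calculation.
# R0 = 1.8 is an assumption chosen here to match the post's figures;
# published estimates for influenza vary.
R0 = 1.8
vaccine_effectiveness = 0.5   # "your risk... only drops by about half"

threshold = 1 - 1 / R0                                # fraction who must be immune (~44%)
coverage_needed = threshold / vaccine_effectiveness   # fraction who must be vaccinated (~89%)

print(f"Immunity threshold: {threshold:.0%}")
print(f"Coverage needed with a {vaccine_effectiveness:.0%}-effective vaccine: {coverage_needed:.0%}")
```

The key point is that a half-effective vaccine roughly doubles the coverage you need, which is why the required figure is so much higher than the threshold itself.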

Thus, for any given individual, the incentive to get vaccinated isn’t all that strong, even though society as a whole would be much better off if we all got vaccinated. Your probability of suffering serious complications from influenza is quite low, and wouldn’t be reduced all that much if you got the vaccine; so even though flu vaccines aren’t that costly in terms of time, money, discomfort, and inconvenience, the cost is just high enough that a lot of us don’t bother to get the shot each year.
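To see the Tragedy of the Commons in numbers, here is a toy expected-value calculation; every number in it is invented purely for illustration, not an estimate.

```python
# Toy expected-value calculation of the private incentive to vaccinate.
# All numbers here are invented for illustration, not estimates.
p_flu = 0.10          # chance of catching the flu this season, unvaccinated
effectiveness = 0.5   # the vaccine cuts that risk roughly in half
cost_of_flu = 300     # personal cost of a bout of flu, in dollars (lost work, misery)
cost_of_shot = 40     # money, time, and inconvenience of getting the shot

private_benefit = p_flu * effectiveness * cost_of_flu   # expected $15 saved
print(f"Expected private benefit: ${private_benefit:.0f} vs. private cost: ${cost_of_shot}")
# The private benefit falls short of the private cost, so a narrowly
# self-interested person skips the shot. But each vaccination also reduces
# transmission to everyone else, a benefit the individual doesn't capture;
# that uncompensated external benefit is what makes this a Tragedy of the
# Commons.
```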

On an individual level, my advice is simple: Go get a flu shot. Don’t do it just for yourself; do it for everyone around you. You are protecting the most vulnerable people in our society.

But if we really want everyone to get vaccinated, we need a policy response. I can think of two policies that might work, which can be broadly called a “stick” and a “carrot”.

The “stick” approach would be to make vaccination mandatory, as it already is for many childhood vaccines. Some sort of penalty would have to be introduced, but that’s not the real challenge. The real challenge would be how to actually enforce that penalty: How do we tell who is vaccinated and who isn’t?

When schools make vaccination mandatory, they require vaccination records for admission. It would be simple enough to add annual flu vaccines to the list of required shots for high schools and colleges (though no doubt the anti-vax crowd would make a ruckus). But can you make vaccination mandatory for work? That seems like a much larger violation of civil liberties. Alternatively, we could require that people submit medical records with their tax returns to avoid a tax penalty—but the privacy violations there are quite substantial as well.

Hence, I would favor the “carrot” approach: Use government subsidies to provide a positive incentive for vaccination. Don’t simply make vaccination free; actually pay people to get vaccinated. Make the subsidy larger than the actual cost of the shots, and require that the doctors and pharmacies administering them remit the extra to the customers. Something like $20 per shot ought to do it; since the cost of the shots is also around $20, vaccinating the full 300 million people of the United States every year would cost about $12 billion; this is less than the estimated economic cost of influenza, so it would essentially pay for itself.
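The cost figure is easy to verify with the round numbers used in this paragraph:

```python
# Checking the cost of the "carrot" proposal, using the round numbers above.
population = 300_000_000
shot_cost = 20     # dollars paid to the provider per shot
subsidy = 20       # dollars remitted to the patient per shot

total_cost = population * (shot_cost + subsidy)
print(f"Total annual cost: ${total_cost/1e9:.0f} billion")   # about $12 billion
# The post's claim is that this is less than the estimated annual economic
# cost of influenza in the US, so the program would roughly pay for itself.
```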

$20 isn’t a lot of money for most people; but then, like I said, the time and inconvenience of a flu shot aren’t that large either. There have been moderately successful (but expensive) programs incentivizing doctors to perform vaccinations, but that’s stupid; frankly I’m amazed it worked at all. It’s patients who need to be incentivized. Doctors will give you a flu shot if you ask them. The problem is that most people don’t ask.

Do this, and we could potentially save tens of thousands of lives every year, for essentially zero net cost. And that sounds to me like a Christmas wish worth making.

Why it matters that torture is ineffective

JDN 2457531

Like “longest-ever-serving Speaker of the House sexually abuses teenagers” and “NSA spy program is trying to monitor the entire telephone and email system”, the news that the US government systematically tortures suspects is an egregious violation that goes to the highest levels of our government—that for some reason most Americans don’t particularly seem to care about.

The good news is that President Obama signed an executive order in 2009 banning torture domestically, reversing official policy under the Bush Administration, and then better yet in 2014 expanded the order to apply to all US interests worldwide. If this is properly enforced, perhaps our history of hypocrisy will finally be at its end. (Well, not if Trump wins…)

Yet as often seems to happen, there are two extremes in this debate and I think they’re both wrong.

The really disturbing side is “Torture works and we have to use it!” The preferred mode of argumentation for this is the “ticking time bomb scenario”, in which we have some urgent disaster to prevent (such as a nuclear bomb about to go off) and torture is the only way to stop it from happening. Surely then torture is justified? This argument may sound plausible, but as I’ll get to below, this is a lot like saying, “If aliens were attacking from outer space trying to wipe out humanity, nuclear bombs would probably be justified against them; therefore nuclear bombs are always justified and we can use them whenever we want.” If you can’t wait for my explanation, The Atlantic skewers the argument nicely.

Yet the opponents of torture have brought this sort of argument on themselves, by staking out a position so extreme as “It doesn’t matter if torture works! It’s wrong, wrong, wrong!” This kind of simplistic deontological reasoning is very appealing and intuitive to humans, because it casts the world into simple black-and-white categories. To show that this is not a strawman, here are several different people all making this same basic argument, that since torture is illegal and wrong it doesn’t matter if it works and there should be no further debate.

But the truth is, if it really were true that the only way to stop a nuclear bomb from leveling Los Angeles was to torture someone, it would be entirely justified—indeed obligatory—to torture that suspect and stop that nuclear bomb.

The problem with that argument is not just that this is not our usual scenario (though it certainly isn’t); it goes much deeper than that:

That scenario makes no sense. It wouldn’t happen.

To use the example the late Antonin Scalia used from an episode of 24 (perhaps the most egregious Fictional Evidence Fallacy ever committed), if there ever is a nuclear bomb planted in Los Angeles, that would literally be one of the worst things that ever happened in the history of the human race—literally a Holocaust in the blink of an eye. We should be prepared to cause extreme suffering and death in order to prevent it. But not only is that event (fortunately) very unlikely, torture would not help us.

Why? Because torture just doesn’t work that well.

It would be too strong to say that it doesn’t work at all; it’s possible that it could produce some valuable intelligence—though clear examples of such results are amazingly hard to come by. Some social scientists have, however, found empirical results showing some effectiveness of torture, so we can’t say with any certainty that it is completely useless. (For obvious reasons, a randomized controlled experiment in torture would be wildly unethical, so none has ever been attempted.) But to justify torture it isn’t enough that it could work sometimes; it has to work vastly better than any other method we have.

And our empirical data is in fact reliable enough to show that that is not the case. Torture often produces unreliable information, as we would expect from the game theory involved—your incentive is to stop the pain, not provide accurate intel; the psychological trauma that torture causes actually distorts memory and reasoning; and as a matter of fact basically all the useful intelligence obtained in the War on Terror was obtained through humane interrogation methods. As interrogation experts agree, torture just isn’t that effective.

In principle, there are four basic cases to consider:

1. Torture is vastly more effective than the best humane interrogation methods.

2. Torture is slightly more effective than the best humane interrogation methods.

3. Torture is as effective as the best humane interrogation methods.

4. Torture is less effective than the best humane interrogation methods.

The evidence points most strongly to case 4, which would make rejecting torture a no-brainer; if it doesn’t even work as well as other methods, it’s absurd to use it. You’re basically kicking puppies at that point—purely sadistic violence that accomplishes nothing. But the data isn’t clear enough for us to rule out case 3 or even case 2. There is only one case we can strictly rule out, and that is case 1.

But it was only in case 1 that torture could ever be justified!

If you’re trying to justify doing something intrinsically horrible, it’s not enough that it has some slight benefit.

People seem to have this bizarre notion that we have only two choices in morality:

Either we are strict deontologists, and wrong actions can never be justified by good outcomes ever, in which case apparently vaccines are morally wrong, because stabbing children with needles is wrong. To be fair, some people seem to actually believe this; but then, some people believe the Earth is less than 10,000 years old.

Or alternatively we adopt the bizarre strawman version of utilitarianism that most people seem to have in mind, under which any wrong action can be justified by even the slightest good outcome, in which case all you need to do to justify slavery is show that it would lead to a 1% increase in per-capita GDP. Sadly, there honestly do seem to be economists who believe this sort of thing. Here’s one arguing that US chattel slavery was economically efficient, and some of the more extreme arguments for why sweatshops are good can take on this character. Sweatshops may be a necessary evil for the time being, but they are still an evil.

But what utilitarianism actually says (and I consider myself some form of nuanced rule-utilitarian, though actually I sometimes call it “deontological consequentialism” to emphasize that I mean to synthesize the best parts of the two extremes) is not that the ends always justify the means, but that the ends can justify the means—that it can be morally good or even obligatory to do something intrinsically bad (like stabbing children with needles) if it is the best way to accomplish some greater good (like saving them from measles and polio). But the good actually has to be greater, and it has to be the best way to accomplish that good.

To see why this latter proviso is important, consider the real-world ethical issues involved in psychology experiments. The benefits of psychology experiments are already quite large, and poised to grow as the science improves; one day the benefits of cognitive science to humanity may be even larger than the benefits of physics and biology are today. Imagine a world without mood disorders or mental illness of any kind; a world without psychopathy, where everyone is compassionate; a world where everyone is achieving their full potential for happiness and self-actualization. Cognitive science may yet make that world possible—and I haven’t even gotten into its applications in artificial intelligence.

To achieve that world, we will need a great many psychology experiments. But does that mean we can just corral people off the street and throw them into psychology experiments without their consent—or perhaps even their knowledge? That we can do whatever we want in those experiments, as long as it’s scientifically useful? No, it does not. We have ethical standards in psychology experiments for a very good reason, and while those ethical standards do slightly reduce the efficiency of the research process, the reduction is small enough that the moral choice is obviously to retain the ethics committees and accept the slight reduction in research efficiency. Yes, randomly throwing people into psychology experiments might actually be slightly better in purely scientific terms (larger and more random samples)—but it would be terrible in moral terms.

Along similar lines, even if torture works about as well or even slightly better than other methods, that’s simply not enough to justify it morally. Making a successful interrogation take 16 days instead of 17 simply wouldn’t be enough benefit to justify the psychological trauma to the suspect (and perhaps the interrogator!), the risk of harm to the falsely accused, or the violation of international human rights law. And in fact a number of terrorism suspects were waterboarded for months, so even the idea that it could shorten the interrogation is pretty implausible. If anything, torture seems to make interrogations take longer and give less reliable information—case 4.

A lot of people seem to have this impression that torture is amazingly, wildly effective, that a suspect who won’t crack after hours of humane interrogation can be tortured for just a few minutes and give you all the information you need. This is exactly what we do not find empirically; if he didn’t crack after hours of talk, he won’t crack after hours of torture. If you literally only have 30 minutes to find the nuke in Los Angeles, I’m sorry; you’re not going to find the nuke in Los Angeles. No adversarial interrogation is ever going to be completed that quickly, no matter what technique you use. Evacuate as many people to safe distances or underground shelters as you can in the time you have left.

This is why the “ticking time-bomb” scenario is so ridiculous (and so insidious); that’s simply not how interrogation works. The best methods we have for “rapid” interrogation of hostile suspects take hours or even days, and they are humane—building trust and rapport is the most important step. The goal is to get the suspect to want to give you accurate information.

For the purposes of the thought experiment, okay, you can stipulate that it would work (this is what the Stanford Encyclopedia of Philosophy does). But now all you’ve done is made the thought experiment more distant from the real-world moral question. The closest real-world examples we’ve ever had involved individual crimes, probably too small to justify the torture (as bad as a murdered child is, think about what you’re doing if you let the police torture people). But by the time the terrorism to be prevented is large enough to really be sufficient justification, it (1) hasn’t happened in the real world and (2) surely involves terrorists who are sufficiently ideologically committed that they’ll be able to resist the torture. If such a situation arises, of course we should try to get information from the suspects—but what we try should be our best methods, the ones that work most consistently, not the ones that “feel right” and maybe happen to work on occasion.

Indeed, the best explanation I have for why people use torture at all, given its horrible effects and mediocre effectiveness at best, is that it feels right.

When someone does something terrible (such as an act of terrorism), we rightfully reduce our moral valuation of them relative to everyone else. If you are even tempted to deny this, suppose a terrorist and a random civilian are both inside a burning building and you only have time to save one. Of course you save the civilian and not the terrorist. And that’s still true even if you know that once the terrorist was rescued he’d go to prison and never be a threat to anyone else. He’s just not worth as much.

In the most extreme circumstances, a person can be so terrible that their moral valuation should be effectively zero: If the only person in a burning building is Stalin, I’m not sure you should save him even if you easily could. But it is a grave moral mistake to think that a person’s moral valuation should ever go negative; yet I think this is exactly what people do when confronted with someone they truly hate. The federal agents torturing those terrorists didn’t merely think of them as worthless—they thought of them as having negative worth. They felt it was a positive good to harm them. But this is fundamentally wrong; no sentient being has negative worth. Some may be so terrible as to have essentially zero worth; and we are often justified in causing harm to some in order to save others. It would have been entirely justified to kill Stalin (as a matter of fact he died of a stroke in old age), to remove the continued threat he posed; but to torture him would not have made the world a better place, and actually might well have made it worse.

Yet I can see how psychologically it could be useful to have a mechanism in our brains that makes us hate someone so much we view them as having negative worth. It makes it a lot easier to harm them when necessary, makes us feel a lot better about ourselves when we do. The idea that any act of homicide is a tragedy but some of them are necessary tragedies is a lot harder to deal with than the idea that some people are just so evil that killing or even torturing them is intrinsically good. But some of the worst things human beings have ever done ultimately came from that place in our brains—and torture is one of them.