Good news for a change

Mar 28 JDN 2459302

When President Biden made his promise to deliver 100 million vaccine doses to Americans within his first 100 days, many were skeptical. Perhaps we had grown accustomed to the anti-scientific attitudes and utter incompetence of Trump’s administration, and no longer believed that the US federal government could do anything right.

The skeptics were wrong. For the promise has not only been kept; it has been greatly exceeded. As of this writing, Biden has been President for 60 days and we have already administered 121 million vaccine doses. If we continue at the current rate, it is likely that we will have administered over 200 million vaccine doses and fully vaccinated over 100 million Americans by Biden’s promised 100-day timeline—twice as fast as what was originally promised. Biden has made another bold promise: enough vaccine doses for every adult in the United States by the end of May. I admit I’m not confident it can be done—but I wasn’t confident we’d hit 100 million by now either.
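The extrapolation is simple enough to check by hand. Here is the back-of-the-envelope version in Python, using only the figures quoted above and assuming the pace simply holds steady:

```python
# Back-of-the-envelope projection of the US vaccination pace,
# using the figures quoted above: 121 million doses in 60 days.
doses_so_far = 121_000_000
days_elapsed = 60
daily_rate = doses_so_far / days_elapsed   # roughly 2 million doses per day

days_remaining = 100 - days_elapsed
projected_total = doses_so_far + daily_rate * days_remaining

print(f"{daily_rate:,.0f} doses per day")        # 2,016,667 doses per day
print(f"{projected_total:,.0f} by day 100")      # 201,666,667 by day 100
```

Over 200 million doses by the 100-day mark, with no acceleration assumed at all.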

In fact, the US now has one of the best rates of COVID vaccination in the world, with the proportion of our population vaccinated far above the world average and below only Israel, UAE, Chile, the UK, and Bahrain (plus some tiny countries like Monaco). Indeed, we have the largest absolute number of vaccinated individuals in the world, surpassing even China and India.

It turns out that the now-infamous map saying that the US and UK were among the countries best-prepared for a pandemic wasn’t so wrong after all; it’s just that having such awful administration for four years made our otherwise excellent preparedness fail. Put someone good in charge, and yes, indeed, it turns out that the US can deal with pandemics quite well.

The overall rate of new COVID cases in the US began to plummet right around the time the vaccination program gained steam, and has plateaued around 50,000 per day for the past few weeks. This is still much too high, but it is a vast improvement over the 200,000 cases per day we had in early January. Our death rate due to COVID now hovers around 1,500 people per day—that’s still a 9/11 every two days. But this is half what our death rate was at its worst. And since our baseline death rate is 7,500 deaths per day, 1,800 of them by heart disease, this now means that COVID is no longer the leading cause of death in the United States; heart disease has once again reclaimed its throne. Of course, people dying from heart disease is still a bad thing; but it’s at least a sign of returning to normalcy.

Worldwide, the pandemic is slowing down, but still by no means defeated, with over 400,000 new cases and 7,500 deaths every day. The US rate of 17 new cases per 100,000 people per day is about 3 times the world average, but comparable to Germany (17) and Norway (18), and nowhere near as bad as Chile (30), Brazil (35), France (37), or Sweden (45), let alone the very hardest-hit places like Serbia (71), Hungary (78), Jordan (83), Czechia (90), and Estonia (110). (That big gap between Norway and Sweden? It’s because Sweden resisted using lockdowns.) And there is cause for optimism even in these places, as vaccination rates already exceed total COVID cases.

I can see a few patterns in the rate of vaccination by state: very isolated states have managed to vaccinate their population fastest—Hawaii and Alaska have done very well, and even most of the territories have done quite well (though notably not Puerto Rico). The south has done poorly (for obvious reasons), but not as poorly as I might have feared; even Texas and Mississippi have given at least one dose to 21% of their population. New England has been prioritizing getting as many people with at least one dose as possible, rather than trying to fully vaccinate each person; I think this is the right strategy.

We must continue to stay home when we can and wear masks when we go out. This will definitely continue for at least a few more months, and the vaccine rollout may not even be finished in many countries by the end of the year. In the worst-case scenario, COVID may become an endemic virus that we can’t fully eradicate and we’ll have to keep getting vaccinated every year like we do for influenza (though the good news there is that it likely wouldn’t be much more dangerous than influenza at that point either—though another influenza is nothing to, er, sneeze at).

Yet there is hope at last. Things are finally getting better.

Ancient plagues, modern pandemics

Mar 1 JDN 2458917

The coronavirus epidemic continues; though it originated in the city of Wuhan, the virus has now been confirmed in places as far-flung as Italy, Brazil, and Mexico. So far, about 90,000 people have caught it, and about 3,000 have died, mostly in China.

There are legitimate reasons to be concerned about this epidemic: Like influenza, coronavirus spreads quickly, and can be carried without symptoms, yet unlike influenza, it has a very high rate of complications, causing hospitalization as often as 10% of the time and death as often as 2%. There’s a lot of uncertainty about these numbers, because it’s difficult to know exactly how many people are infected but either have no symptoms or have symptoms that can be confused with other diseases. But we do have reason to believe that coronavirus is much deadlier for those infected than influenza: Influenza spreads so widely that it kills about 300,000 people every year, but this is only 0.1% of the people infected.
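To see where those numbers come from (and why they are so uncertain), here is the naive arithmetic on the figures just quoted. Note that a raw deaths-over-confirmed-cases ratio is only a crude high-end estimate, since it misses undetected mild infections:

```python
# Naive case-fatality arithmetic from the figures quoted above.
confirmed_cases = 90_000
deaths = 3_000
crude_cfr = deaths / confirmed_cases
print(f"crude coronavirus CFR: {crude_cfr:.1%}")   # 3.3%, before correcting for undetected cases

# The influenza comparison: ~300,000 deaths per year at a ~0.1% fatality rate.
flu_deaths = 300_000
flu_fatality_rate = 0.001
implied_flu_infections = flu_deaths / flu_fatality_rate
print(f"implied flu infections per year: {implied_flu_infections:,.0f}")   # 300,000,000
```

The crude ratio comes out above the ~2% figure quoted above precisely because confirmed cases undercount mild and asymptomatic infections; correcting for that pushes the estimate down.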

And yet, despite our complex interwoven network of international trade that sends people and goods all around the world, our era is probably the safest in history in terms of the risk of infectious disease.

Partly this is technology: Especially for bacterial infections, we have highly effective treatments that our forebears lacked. But for most viral infections we actually don’t have very effective treatments—which means that technology per se is not the real hero here.

Vaccination is a major part of the answer: Vaccines have effectively eradicated polio and smallpox, and would probably be on track to eliminate measles and rubella if not for dangerous anti-vaccination ideology. But even with no vaccine against coronavirus (yet) and not very effective vaccines against influenza, still the death rates from these viruses are nowhere near those of ancient plagues.

The Black Death killed something like 40% of Europe’s entire population. The Plague of Justinian killed as many as 20% of the entire world’s population. This is a staggeringly large death rate compared to a modern pandemic, in which even a 2% death rate would be considered a total catastrophe.

Even the 1918 influenza pandemic, which killed more than all the battle deaths in World War I combined, wasn’t as terrible as an ancient plague; it killed about 2% of the infected population. And when a very similar influenza virus appeared in 2009, how many people did it kill? About 400,000 people, roughly 0.1% of those infected, only slightly worse than the average flu season. That’s how much better our public health has gotten in the last century alone.

Remember SARS, a previous viral pandemic that also emerged in China? It only killed 774 people, in a year in which over 300,000 died of influenza.

Sanitation is probably the most important factor: Certainly sanitation was far worse in ancient times. Today almost everyone routinely showers and washes their hands, which makes a big difference—but it’s notable that widespread bathing didn’t save the Romans from the Plague of Justinian.

I think it’s underappreciated just how much better our communication and quarantine procedures are today than they once were. In ancient times, the only way you heard about a plague was a live messenger carrying the news—and that messenger might well be already carrying the virus. Today, an epidemic in China becomes immediate news around the world. This means that people prepare—they avoid travel, they stock up on food, they become more diligent about keeping clean. And perhaps even more important than the preparation by individual people is the preparation by institutions: Governments, hospitals, research labs. We can see the pandemic coming and be ready to respond weeks or even months before it hits us.

So yes, do wash your hands regularly. Wash for at least 20 seconds, which will definitely feel like a long time if you haven’t made it a habit—but it does make a difference. Try to avoid travel for a while. Stock up on food and water in case you need to be quarantined. Follow whatever instructions public health officials give as the pandemic progresses. But you don’t need to panic: We’ve got this under control. That Horseman of the Apocalypse is dead; and fear not, Famine and War are next. I’m afraid Death himself will probably take a while, though.

Influenza vaccination, herd immunity, and the Tragedy of the Commons

Dec 24, JDN 2458112

Usually around this time of year I do a sort of “Christmas special” blog post, something about holidays or gifts. But this year I have a rather different seasonal idea in mind. It’s not just the holiday season; it’s also flu season.

Each year, influenza kills up to 56,000 people in the US, and between 300,000 and 600,000 people worldwide, mostly in the winter months. And yet, in any given year, only about 40% of adults and 60% of children get the flu vaccine.

The reason for this should be obvious to any student of economics: It’s a Tragedy of the Commons. If enough people got vaccinated that we attained reliable herd immunity (which would take about 90%), then almost nobody would get influenza, and the death rate would plummet. But for any given individual, the vaccine is actually not all that effective. Your risk of getting the flu only drops by about half if you receive the vaccine. The effectiveness is particularly low among the elderly, who are also at the highest risk for serious complications due to influenza.
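That ~90% figure follows from the standard herd-immunity arithmetic: the critical vaccination coverage is V_c = (1 − 1/R0) / E, where R0 is the basic reproduction number and E is vaccine effectiveness. Here is a quick sketch in Python; the specific R0 values are my illustrative assumptions, not measured figures:

```python
# Critical vaccination coverage for herd immunity: V_c = (1 - 1/R0) / E,
# where R0 is the basic reproduction number and E is vaccine effectiveness.
def critical_coverage(r0: float, effectiveness: float) -> float:
    return (1 - 1 / r0) / effectiveness

# A mild flu season (R0 around 1.3) with a 50%-effective vaccine:
print(f"{critical_coverage(1.3, 0.5):.0%}")   # 46%

# A bad flu season (R0 around 1.8) with the same vaccine:
print(f"{critical_coverage(1.8, 0.5):.0%}")   # 89% -- roughly the ~90% cited above
```

Notice how a mediocre vaccine inflates the threshold: with 50% effectiveness, any strain with R0 above 2 couldn’t be controlled by vaccination alone, since the formula would demand more than 100% coverage.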

Thus, for any given individual, the incentive to get vaccinated isn’t all that strong, even though society as a whole would be much better off if we all got vaccinated. Your probability of suffering serious complications from influenza is quite low, and wouldn’t be reduced all that much if you got the vaccine; so even though flu vaccines aren’t that costly in terms of time, money, discomfort, and inconvenience, the cost is just high enough that a lot of us don’t bother to get the shot each year.

On an individual level, my advice is simple: Go get a flu shot. Don’t do it just for yourself; do it for everyone around you. You are protecting the most vulnerable people in our society.

But if we really want everyone to get vaccinated, we need a policy response. I can think of two policies that might work, which can be broadly called a “stick” and a “carrot”.

The “stick” approach would be to make vaccination mandatory, as it already is for many childhood vaccines. Some sort of penalty would have to be introduced, but that’s not the real challenge. The real challenge would be how to actually enforce that penalty: How do we tell who is vaccinated and who isn’t?

When schools make vaccination mandatory, they require vaccination records for admission. It would be simple enough to add annual flu vaccines to the list of required shots for high schools and colleges (though no doubt the anti-vax crowd would make a ruckus). But can you make vaccination mandatory for work? That seems like a much larger violation of civil liberties. Alternatively, we could require that people submit medical records with their tax returns to avoid a tax penalty—but the privacy violations there are quite substantial as well.

Hence, I would favor the “carrot” approach: Use government subsidies to provide a positive incentive for vaccination. Don’t simply make vaccination free; actually pay people to get vaccinated. Make the subsidy larger than the actual cost of the shots, and require that the doctors and pharmacies administering them remit the extra to the customers. Something like $20 per shot ought to do it; since the cost of the shots is also around $20, then vaccinating the full 300 million people of the United States every year would cost about $12 billion; this is less than the estimated economic cost of influenza, so it would essentially pay for itself.
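For the skeptical, the arithmetic behind that $12 billion figure is just the round numbers from the paragraph above:

```python
# Cost of the "carrot" proposal: the subsidy plus the shot itself, for everyone.
subsidy_per_person = 20        # paid out to the patient
shot_cost = 20                 # cost of the vaccine dose itself
population = 300_000_000       # round-number US population

total_cost = (subsidy_per_person + shot_cost) * population
print(f"${total_cost / 1e9:.0f} billion per year")   # $12 billion per year
```

Which, as noted above, is less than the estimated annual economic cost of influenza itself—so the program would essentially pay for itself.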

$20 isn’t a lot of money for most people; but then, like I said, the time and inconvenience of a flu shot aren’t that large either. There have been moderately successful (but expensive) programs incentivizing doctors to perform vaccinations, but that’s stupid; frankly I’m amazed it worked at all. It’s patients who need to be incentivized. Doctors will give you a flu shot if you ask them. The problem is that most people don’t ask.

Do this, and we could potentially save tens of thousands of lives every year, for essentially zero net cost. And that sounds to me like a Christmas wish worth making.

Why it matters that torture is ineffective

JDN 2457531

Like “longest-ever-serving Speaker of the House sexually abuses teenagers” and “NSA spy program is trying to monitor the entire telephone and email system”, the news that the US government systematically tortures suspects is an egregious violation that goes to the highest levels of our government—that for some reason most Americans don’t particularly seem to care about.

The good news is that President Obama signed an executive order in 2009 banning torture domestically, reversing official policy under the Bush Administration, and then better yet in 2014 expanded the order to apply to all US interests worldwide. If this is properly enforced, perhaps our history of hypocrisy will finally be at its end. (Well, not if Trump wins…)

Yet as often seems to happen, there are two extremes in this debate and I think they’re both wrong.

The really disturbing side is “Torture works and we have to use it!” The preferred mode of argumentation for this is the “ticking time bomb scenario”, in which we have some urgent disaster to prevent (such as a nuclear bomb about to go off) and torture is the only way to stop it from happening. Surely then torture is justified? This argument may sound plausible, but as I’ll get to below, this is a lot like saying, “If aliens were attacking from outer space trying to wipe out humanity, nuclear bombs would probably be justified against them; therefore nuclear bombs are always justified and we can use them whenever we want.” If you can’t wait for my explanation, The Atlantic skewers the argument nicely.

Yet the opponents of torture have brought this sort of argument on themselves, by staking out a position so extreme as “It doesn’t matter if torture works! It’s wrong, wrong, wrong!” This kind of simplistic deontological reasoning is very appealing and intuitive to humans, because it casts the world into simple black-and-white categories. To show that this is not a strawman, here are several different people all making this same basic argument, that since torture is illegal and wrong it doesn’t matter if it works and there should be no further debate.

But the truth is, if it really were true that the only way to stop a nuclear bomb from leveling Los Angeles was to torture someone, it would be entirely justified—indeed obligatory—to torture that suspect and stop that nuclear bomb.

The problem with that argument is not just that this is not our usual scenario (though it certainly isn’t); it goes much deeper than that:

That scenario makes no sense. It wouldn’t happen.

To use the example the late Antonin Scalia used from an episode of 24 (perhaps the most egregious Fictional Evidence Fallacy ever committed), if there ever is a nuclear bomb planted in Los Angeles, that would literally be one of the worst things that ever happened in the history of the human race—literally a Holocaust in the blink of an eye. We should be prepared to cause extreme suffering and death in order to prevent it. But not only is that event (fortunately) very unlikely, torture would not help us.

Why? Because torture just doesn’t work that well.

It would be too strong to say that torture doesn’t work at all; it’s possible that it could produce some valuable intelligence, though clear examples of such results are amazingly hard to come by, and a few social scientists have published empirical results suggesting some effectiveness. We can’t say with any certainty that it is completely useless. (For obvious reasons, a randomized controlled experiment in torture would be wildly unethical, so none has ever been attempted.) But to justify torture it isn’t enough that it could work sometimes; it has to work vastly better than any other method we have.

And our empirical data is in fact reliable enough to show that that is not the case. Torture often produces unreliable information, as we would expect from the game theory involved—your incentive is to stop the pain, not provide accurate intel; the psychological trauma that torture causes actually distorts memory and reasoning; and as a matter of fact basically all the useful intelligence obtained in the War on Terror was obtained through humane interrogation methods. As interrogation experts agree, torture just isn’t that effective.

In principle, there are four basic cases to consider:

1. Torture is vastly more effective than the best humane interrogation methods.

2. Torture is slightly more effective than the best humane interrogation methods.

3. Torture is as effective as the best humane interrogation methods.

4. Torture is less effective than the best humane interrogation methods.

The evidence points most strongly to case 4, in which case rejecting torture is a no-brainer; if it doesn’t even work as well as other methods, it’s absurd to use it. You’re basically kicking puppies at that point—purely sadistic violence that accomplishes nothing. But the data isn’t clear enough for us to rule out case 3 or even case 2. There is only one case we can strictly rule out, and that is case 1.

But it was only in case 1 that torture could ever be justified!

If you’re trying to justify doing something intrinsically horrible, it’s not enough that it has some slight benefit.

People seem to have this bizarre notion that we have only two choices in morality:

Either we are strict deontologists, and wrong actions can never be justified by good outcomes ever, in which case apparently vaccines are morally wrong, because stabbing children with needles is wrong. To be fair, some people seem to actually believe this; but then, some people believe the Earth is less than 10,000 years old.

Or alternatively we are the bizarre strawman concept most people seem to have of utilitarianism, under which any wrong action can be justified by even the slightest good outcome, in which case all you need to do to justify slavery is show that it would lead to a 1% increase in per-capita GDP. Sadly, there honestly do seem to be economists who believe this sort of thing. Here’s one arguing that US chattel slavery was economically efficient, and some of the more extreme arguments for why sweatshops are good can take on this character. Sweatshops may be a necessary evil for the time being, but they are still an evil.

But what utilitarianism actually says (and I consider myself some form of nuanced rule-utilitarian, though actually I sometimes call it “deontological consequentialism” to emphasize that I mean to synthesize the best parts of the two extremes) is not that the ends always justify the means, but that the ends can justify the means—that it can be morally good or even obligatory to do something intrinsically bad (like stabbing children with needles) if it is the best way to accomplish some greater good (like saving them from measles and polio). But the good actually has to be greater, and it has to be the best way to accomplish that good.

To see why this latter proviso is important, consider the real-world ethical issues involved in psychology experiments. The benefits of psychology experiments are already quite large, and poised to grow as the science improves; one day the benefits of cognitive science to humanity may be even larger than the benefits of physics and biology are today. Imagine a world without mood disorders or mental illness of any kind; a world without psychopathy, where everyone is compassionate; a world where everyone is achieving their full potential for happiness and self-actualization. Cognitive science may yet make that world possible—and I haven’t even gotten into its applications in artificial intelligence.

To achieve that world, we will need a great many psychology experiments. But does that mean we can just corral people off the street and throw them into psychology experiments without their consent—or perhaps even their knowledge? That we can do whatever we want in those experiments, as long as it’s scientifically useful? No, it does not. We have ethical standards in psychology experiments for a very good reason, and while those ethical standards do slightly reduce the efficiency of the research process, the reduction is small enough that the moral choice is obviously to retain the ethics committees and accept the slight reduction in research efficiency. Yes, randomly throwing people into psychology experiments might actually be slightly better in purely scientific terms (larger and more random samples)—but it would be terrible in moral terms.

Along similar lines, even if torture works about as well or even slightly better than other methods, that’s simply not enough to justify it morally. Making a successful interrogation take 16 days instead of 17 simply wouldn’t be enough benefit to justify the psychological trauma to the suspect (and perhaps the interrogator!), the risk of harm to the falsely accused, or the violation of international human rights law. And in fact a number of terrorism suspects were waterboarded for months, so even the idea that it could shorten the interrogation is pretty implausible. If anything, torture seems to make interrogations take longer and give less reliable information—case 4.

A lot of people seem to have this impression that torture is amazingly, wildly effective, that a suspect who won’t crack after hours of humane interrogation can be tortured for just a few minutes and give you all the information you need. This is exactly what we do not find empirically; if he didn’t crack after hours of talk, he won’t crack after hours of torture. If you literally only have 30 minutes to find the nuke in Los Angeles, I’m sorry; you’re not going to find the nuke in Los Angeles. No adversarial interrogation is ever going to be completed that quickly, no matter what technique you use. Evacuate as many people to safe distances or underground shelters as you can in the time you have left.

This is why the “ticking time-bomb” scenario is so ridiculous (and so insidious); that’s simply not how interrogation works. The best methods we have for “rapid” interrogation of hostile suspects take hours or even days, and they are humane—building trust and rapport is the most important step. The goal is to get the suspect to want to give you accurate information.

For the purposes of the thought experiment, okay, you can stipulate that it would work (this is what the Stanford Encyclopedia of Philosophy does). But now all you’ve done is made the thought experiment more distant from the real-world moral question. The closest real-world examples we’ve ever had involved individual crimes, probably too small to justify the torture (as bad as a murdered child is, think about what you’re doing if you let the police torture people). But by the time the terrorism to be prevented is large enough to really be sufficient justification, it (1) hasn’t happened in the real world and (2) surely involves terrorists who are sufficiently ideologically committed that they’ll be able to resist the torture. If such a situation arises, of course we should try to get information from the suspects—but what we try should be our best methods, the ones that work most consistently, not the ones that “feel right” and maybe happen to work on occasion.

Indeed, the best explanation I have for why people use torture at all, given its horrible effects and, at best, mediocre effectiveness, is that it feels right.

When someone does something terrible (such as an act of terrorism), we rightfully reduce our moral valuation of them relative to everyone else. If you are even tempted to deny this, suppose a terrorist and a random civilian are both inside a burning building and you only have time to save one. Of course you save the civilian and not the terrorist. And that’s still true even if you know that once the terrorist was rescued he’d go to prison and never be a threat to anyone else. He’s just not worth as much.

In the most extreme circumstances, a person can be so terrible that their moral valuation should be effectively zero: If the only person in a burning building is Stalin, I’m not sure you should save him even if you easily could. But it is a grave moral mistake to think that a person’s moral valuation should ever go negative, yet I think this is something that people do when confronted with someone they truly hate. The federal agents torturing those terrorists didn’t merely think of them as worthless—they thought of them as having negative worth. They felt it was a positive good to harm them. But this is fundamentally wrong; no sentient being has negative worth. Some may be so terrible as to have essentially zero worth; and we are often justified in causing harm to some in order to save others. It would have been entirely justified to kill Stalin (as a matter of fact he died of a stroke in old age), to remove the continued threat he posed; but to torture him would not have made the world a better place, and actually might well have made it worse.

Yet I can see how psychologically it could be useful to have a mechanism in our brains that makes us hate someone so much we view them as having negative worth. It makes it a lot easier to harm them when necessary, makes us feel a lot better about ourselves when we do. The idea that any act of homicide is a tragedy but some of them are necessary tragedies is a lot harder to deal with than the idea that some people are just so evil that killing or even torturing them is intrinsically good. But some of the worst things human beings have ever done ultimately came from that place in our brains—and torture is one of them.