How (not) to destroy an immoral market

Jul 29 JDN 2458329

In this world there are people of primitive cultures, with a population that is slowly declining, trying to survive a constant threat of violence in the aftermath of colonialism. But you already knew that, of course.

What you may not have realized is that some of these people are actively hunted by other people, slaughtered so that their remains can be sold on the black market.

I am referring of course to elephants. Maybe those weren’t the people you first had in mind?

Elephants are not human in the sense of being Homo sapiens; but as far as I am concerned, they are people in a moral sense.

Elephants take as long to mature as humans, and spend most of their childhood learning. They are born with brains only 35% of the size of their adult brains, much as we are born with brains 28% the size of our adult brains. Their encephalization quotients range from about 1.5 to 2.4, comparable to chimpanzees.

Elephants have problem-solving intelligence comparable to chimpanzees, cetaceans, and corvids. Elephants can pass the “mirror test” of self-identification and self-awareness. Individual elephants exhibit clearly distinguishable personalities. They exhibit empathy toward humans and other elephants. They can think creatively and develop new tools.

Elephants distinguish individual humans or elephants by sight or by voice, comfort each other when distressed, and above all mourn their dead. The kind of mourning behaviors elephants exhibit toward the remains of their dead family members have only been observed in humans and chimpanzees.

On a darker note, elephants also seek revenge. In response to losing loved ones to poaching or collisions with trains, elephants have orchestrated organized counter-attacks against human towns. This is not a single animal defending itself, as almost any will do; this is a coordinated act of vengeance after the fact. Once again, we have only observed similar behaviors in humans, great apes, and cetaceans.

Huffington Post backed off and said “just kidding” after asserting that elephants are people—but I won’t. Elephants are people. They do not have an advanced civilization, to be sure. But as far as I am concerned they display all the necessary minimal conditions to be granted the fundamental rights of personhood. Killing an elephant is murder.

And yet, the ivory trade continues to be profitable. Most of this is black-market activity, though it was legal in some places until very recently; China only instituted its ivory trade ban this year, and Hong Kong’s ban will not take full effect until 2021. Some places are backsliding: a proposal (currently on hold) by the US Fish and Wildlife Service under the Trump administration would legalize some limited forms of ivory trade.

With this in mind, I can understand why people would support the practice of ivory-burning, symbolically and publicly destroying ivory by fire so that no one can buy it. Two years ago, Kenya organized a particularly large ivory-burning that set ablaze 105 tons of elephant tusk and 1.35 tons of rhino horn.

But when I, as an economist, first learned about ivory-burning, it seemed like a really, really bad idea.

Why? Supply and demand. By destroying supply, you have just raised the market price of ivory. You have therefore increased the market incentives for poaching elephants and rhinos.

Yet it turns out I was wrong about this, as were many other economists. I looked at the empirical research, and changed my mind substantially. Ivory-burning is not such a bad idea after all.

Here was my reasoning before: If I want to reduce the incentives to produce something, what do I need to do? Lower the price. How do I do that? I need to increase the supply. Economists have made several proposals for how to do that, and until I looked at the data I would have expected them to work; but they haven’t.

The best way to increase supply is to create synthetic ivory that is cheap and very difficult to tell apart from the real thing. This has been done, but it didn’t work. For some reason, sellers try to hide the expensive real ivory in with the cheap synthetic ivory. I admit I actually have trouble understanding this; if you can’t sell it at full price, why even bother with the illegal real ivory? Maybe their customers have methods of distinguishing the two that the regulators don’t? If so, why aren’t the regulators using those methods? Another concern with increasing the supply of ivory is that it might reduce the stigma of consuming ivory, thereby also increasing the demand.

A similar problem has arisen with so-called “ghost ivory”; for obvious reasons, existing ivory products were excluded from the ban imposed in 1947, lest the government be forced to confiscate millions of billiard balls and thousands of pianos. Yet poachers have learned ways to hide new, illegal ivory and sell it as old, legal ivory.

Another proposal was to organize “sustainable ivory harvesting”, which based on past experience with similar regulations is unlikely to be enforceable. Moreover, this is not like sustainable wood harvesting, where our only concern is environmental. I for one care about the welfare of individual elephants, and I don’t think they would want to be “harvested”, sustainably or otherwise.

There is one way of doing “sustainable harvesting” that might not be so bad for the elephants: set up a protected colony of elephants, help them increase their population, and then, when elephants die of natural causes, take only the tusks and sell those as ivory, stamped with an official seal as “humanely and sustainably produced”. Even then, elephants are among a handful of species that would be offended by us taking their ancestors’ remains. But if it worked, it could save many elephant lives. The bigger problem is how expensive such a project would be, and how long it would take to show any benefit; elephant lifespans are about half as long as ours (except in zoos, where their mortality rate is much higher!), so a policy that might conceivably solve the problem in 30 to 40 years doesn’t really sound so great. More detailed theoretical and empirical analysis has made this clear: you just can’t get ivory fast enough to meet existing demand this way.

In any case, China’s ban on all ivory trade immediately dropped the price of ivory, something synthetic ivory never accomplished. Before that, strengthened regulations in the US (particularly in New York and California) had been effective at reducing ivory sales. The CITES treaty in 1989 that banned most international ivory trade was followed by an immediate increase in elephant populations.

The most effective response to ivory trade is an absolutely categorical ban with no loopholes. To fight “ghost ivory”, we should remove exceptions for old ivory, offering buybacks for any antiques with a verifiable pedigree and a brief period of no-penalty surrender for anything with no such records. The only legal ivory must be for medical and scientific purposes, and its sourcing records must be absolutely impeccable—just as we do with human remains.

Even synthetic ivory must be banned, at least if it’s convincing enough that real ivory could be hidden in it. You can make something you call “synthetic ivory” that serves a similar consumer function, but it must be different enough that it can be easily verified at customs inspections.

We must give no quarter to poachers; Kenya was right to impose a life sentence for aggravated poaching. The Tanzanian proposal to “shoot to kill” was too extreme; summary execution is never acceptable. But if someone currently has a weapon pointed at an elephant and refuses to drop it, I consider it justifiable to shoot them, just as I would if that weapon were aimed at a human.

The need for a categorical ban is what makes the current US proposal dangerous. The particular exceptions it carves out are not all that large, but the fact that it carves out exceptions at all makes enforcement much more difficult. To his credit, Trump himself doesn’t seem very keen on the proposal, which may mean that it is dead in the water. I don’t get to say this often, but so far Trump seems to be making the right choice on this one.

Though the economic theory predicted otherwise, the empirical data is actually quite clear: The most effective way to save elephants from poaching is an absolutely categorical ban on ivory.

Ivory-burning is a signal of commitment to such a ban. Any ivory we find being sold, we will burn. Whoever was trying to sell it will lose their entire investment. Find more, and we will burn that too.

The inherent atrocity of “border security”

Jun 24 JDN 2458294

By now you are probably aware of the fact that a new “zero tolerance” border security policy under the Trump administration has resulted in 2,000 children being forcibly separated from their parents by US government agents. If you weren’t, here are a variety of different sources all telling the same basic story of large-scale state violence and terror.

Make no mistake: This is an atrocity. The United Nations has explicitly condemned this human rights violation—to which Trump responded by making an unprecedented threat of withdrawing unilaterally from the UN Human Rights Council.

#ThisIsNotNormal, and Trump was everything we feared—everything we warned—he would be: Corrupt, incompetent, cruel, and authoritarian.

Yet Trump’s border policy differs mainly in degree, not kind, from existing US border policy. There is much more continuity here than most of us would like to admit.

The Trump administration has dramatically increased “interior removals”, the most obviously cruel acts, where ICE agents break into the houses of people living in the US and take them away. Don’t let the cold language fool you; this is literally people with guns breaking into your home and kidnapping members of your family. This is characteristic of totalitarian governments, not liberal democracies.

And yet, the Obama administration actually holds the record for most deportations (though only because they included “at-border deportations” which other administrations did not). A major policy change by George W. Bush started this whole process of detaining people at the border instead of releasing them and requiring them to return for later court dates.

I could keep going back; US border enforcement has gotten more and more aggressive as time goes on. US border security staffing has quintupled since just 1990. There was a time when the United States was a land of opportunity that welcomed “your tired, your poor, your huddled masses”; but that time is long past.

And this, in itself, is a human rights violation. Indeed, I am convinced that border security itself is inherently a human rights violation, always and everywhere; future generations will not praise us for being more restrained than Trump’s abject and intentional cruelty, but condemn us for acting under the same basic moral framework that justified it.

There is an imaginary line in the sand just a hundred miles south of where I sit now. On one side of the line, a typical family makes $66,000 per year. On the other side, a typical family makes only $20,000. On one side of the line, life expectancy is 81 years; on the other, 77. This means that over their lifetime, someone on this side of the line can expect to make over one million dollars more than they would if they had lived on the other side. Step across this line, get a million dollars; it sounds ridiculous, but it’s an empirical fact.
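The arithmetic behind that claim can be sketched in a few lines; the two incomes are the figures quoted above, while the 40-year earning span is my own illustrative assumption:

```python
# Back-of-envelope version of the "step across the line" claim.
# Incomes are the typical family figures quoted in the text;
# the 40-year earning span is an illustrative assumption.
income_north = 66_000   # typical family income north of the line ($/yr)
income_south = 20_000   # typical family income south of the line ($/yr)
earning_years = 40      # assumed span of earning years

lifetime_gap = (income_north - income_south) * earning_years
print(f"${lifetime_gap:,}")  # → $1,840,000
```

Even with a substantially shorter earning span, the gap stays comfortably above a million dollars.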

This would be bizarre enough by itself; but now consider that on that line there are fences, guard towers, and soldiers who will keep you from crossing it. If you have appropriate papers, you can cross; but if you don’t, they will arrest and detain you, potentially for months. This is not how we treat you if you are carrying contraband or have a criminal record. This is how we treat you if you don’t have a passport.

How can we possibly reconcile this with the principles of liberal democracy? Philosophers have tried, to be sure. Yet they invariably rely upon some notion that the people who want to cross our border are coming from another country where they were already granted basic human rights and democratic representation—which is almost never the case. People who come here from the UK or the Netherlands generally have the proper visas. Even people who come here from China usually have visas—though China is by no means a liberal democracy. It’s people who come here from Haiti and Nicaragua who don’t—and these are some of the most corrupt and impoverished nations in the world.

As I said in an earlier post, I was not offended that Trump characterized countries like Haiti and Syria as “shitholes”. By any objective standard, that is accurate; these countries are terrible, terrible places to live. No, what offends me is that he thinks this gives us a right to turn these people away, as though the horrible conditions of their country somehow “rub off” on them and make them less worthy as human beings. On the contrary, we have a word for people who come from “shithole” countries seeking help, and that word is “refugee”.

Under international law, “refugee” has a very specific legal meaning, under which most immigrants do not qualify. But in a broader moral sense, almost every immigrant is a refugee. People don’t uproot themselves and travel thousands of miles on a whim. They are coming here because conditions in their home country are so bad that they simply cannot tolerate them anymore, and they come to us desperately seeking our help. They aren’t asking for handouts of free money—illegal immigrants are a net gain for our fiscal system, paying more in taxes than they receive in benefits. They are looking for jobs, and willing to accept much lower wages than the workers already here—because those wages are still dramatically higher than what they had where they came from.

Of course, that does potentially mean they are competing with local low-wage workers, doesn’t it? Yes—but not as much as you might think. There is only a very weak relationship between higher immigration and lower wages (some studies find none at all!); even at the largest plausible estimates, the gain in welfare for the immigrants is dramatically higher than the loss in welfare for the low-wage workers who are already here. It’s not even a question of valuing them equally; as long as you value an immigrant at least one tenth as much as a native-born citizen, the equation comes out favoring more immigration.
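To see how lopsided that comparison is, here is a toy version of the welfare calculation. The income gain, the wage loss, and the one-tenth weight are all hypothetical numbers chosen for illustration, not estimates from the studies mentioned above:

```python
# Toy welfare comparison (all numbers are hypothetical illustrations).
immigrant_gain = 20_000 - 2_000  # assumed income gain from migrating ($/yr)
native_loss = 500                # assumed wage loss to one native worker ($/yr)
weight = 0.1                     # value an immigrant at one tenth of a citizen

net_welfare = weight * immigrant_gain - native_loss
print(net_welfare)  # → 1300.0
```

Even after discounting the immigrant’s gain by a factor of ten, the net effect is still positive; under these assumed numbers the inequality would flip only if the weight fell below native_loss / immigrant_gain, roughly 0.03.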

This is for two reasons: One, most native-born workers already are unwilling to do the jobs that most immigrants do, such as picking fruit and laying masonry; and two, increased spending by immigrants boosts the local economy enough to compensate for any job losses.


But even aside from the economic impacts, what is the moral case for border security?

I have heard many people argue that “It’s our home, we should be able to decide who lives here.” First of all, there are some major differences between letting someone live in your home and letting someone come into your country. I’m not saying we should allow immigrants to force themselves into people’s homes, only that we shouldn’t arrest them when they try to cross the border.

But even if I were to accept the analogy, if someone were fleeing oppression by an authoritarian government and asked to live in my home, I would let them. I would help hide them from the government if they were trying to escape persecution. I would even be willing to house people simply trying to escape poverty, as long as it were part of a well-organized program designed to ensure that everyone actually gets helped and the burden on homeowners and renters was not too great. I wouldn’t simply let homeless people come live here, because that creates all sorts of coordination problems (I can only fit so many, and how do I prioritize which ones?); but I’d absolutely participate in a program that coordinates placement of homeless families in apartments provided by volunteers. (In fact, maybe I should try to petition for such a program, as Southern California has a huge homelessness rate due to our ridiculous housing prices.)

Many people seem to fear that immigrants will bring crime, but actually they reduce crime rates. It’s really kind of astonishing how much less crime immigrants commit than locals. My hypothesis is that immigrants are a self-selected sample; the kind of person willing to move thousands of miles isn’t the kind of person who commits a lot of crimes.

I understand wanting to keep out terrorists and drug smugglers, but there are already plenty of terrorists and drug smugglers here in the US; if we are unwilling to set up border security between California and Nevada, I don’t see why we should be setting it up between California and Baja California. But okay, fine, we can keep the customs agents who inspect your belongings when you cross the border. If someone doesn’t have proper documentation, we can even detain and interrogate them—for a few hours, not a few months. The goal should be to detect dangerous criminals and nothing else. Once we are confident that you have not committed any felonies, we should let you through—frankly, we should give you a green card. We should only be willing to detain someone at the border for the same reasons we would be willing to detain a citizen who already lives here—that is, probable cause for an actual crime. (And no, you don’t get to count “illegal border crossing” as a crime, because that’s begging the question. By the same logic I could justify detaining people for jaywalking.)

A lot of people argue that restricting immigration is necessary to “preserve local culture”; but I’m not even sure that this is a goal sufficiently important to justify arresting and detaining people, and in any case, that’s really not how culture works. Culture is not advanced by purism and stagnation, but by openness and cross-pollination. From anime to pizza, many of our most valued cultural traditions would not exist without interaction across cultural boundaries. Introducing more Spanish speakers into the US may make us start saying no problemo and vamonos, but it’s not going to destroy liberal democracy. If you value culture, you should value interactions across different societies.

Most importantly, think about what you are trying to justify. Even if we stop doing Trump’s most extreme acts of cruelty, we are still talking about using military force to stop people from crossing an imaginary line. ICE basically treats people the same way the SS did. “Papers, please” isn’t something we associate with free societies—it’s characteristic of totalitarianism. We are so accustomed to border security (or so ignorant of its details) that we don’t see it for the atrocity it so obviously is.

National borders function very much like feudal privilege. We have our “birthright”, which grants us all sorts of benefits and special privileges—literally tripling our incomes and extending our lives. We did nothing to earn this privilege. If anything, we show ourselves to be less deserving (e.g. by committing more crimes). And we use the government to defend our privilege by force.

Are people born on the other side of the line less human? Are they less morally worthy? On what grounds do we point guns at them and lock them away for the “crime” of wanting to live here?

What Trump is doing right now is horrific. But it is not that much more horrific than what we were already doing. My hope is that this will finally open our eyes to the horrors that we had been participating in all along.

What we could, what we should, and what we must

May 27 JDN 2458266

In one of the most famous essays in all of ethical philosophy, Peter Singer argued that we are morally obligated to give so much to charity that we would effectively reduce ourselves to poverty only slightly better than what our donations sought to prevent. His argument is a surprisingly convincing one, especially for such a radical proposition. Indeed, one of the core activities of the Effective Altruism movement has basically been finding ways to moderate Singer’s argument without giving up on its core principles, because it’s so obvious both that we ought to do much more to help people around the world and that there’s no way we’re ever going to do what that argument actually asks of us.

The most cost-effective charities in the world can save a human life for an average cost of under $4,000. The maneuver that Singer basically makes is quite simple: If you know that you could save someone’s life for $4,000, you have $4,000 to spend, and instead you spend that $4,000 on something else, aren’t you saying that whatever you did spend it on was more important than saving that person’s life? And is that really something you believe?

But if you think a little more carefully, it becomes clear that things are not quite so simple. You aren’t being paid $4,000 to kill someone, first of all. If you were willing to accept $4,000 as sufficient payment to commit a murder, you would be, quite simply, a monster. Implicitly the “infinite identical psychopath” of neoclassical rational agent models would be willing to do such a thing, but very few actual human beings—even actual psychopaths—are that callous.

Obviously, we must refrain from murdering people, even for amounts far in excess of $4,000. If you were offered the chance to murder someone for $4 billion, I can understand why you would be tempted to do such a thing. Think of what you could do with all that money! Not only would you and everyone in your immediate family be independently wealthy for life, you could donate billions of dollars to charity and save as many as a million lives. What’s one life for a million? Even then, I have a strong intuition that you shouldn’t commit this murder—but I have never been able to find a compelling moral argument for why. The best I’ve been able to come up with is a sort of Kantian notion: What if everyone did this?

Since the most plausible scenario is that the $4 billion comes from existing wealth, all those murders would simply be transferring wealth around, from unknown sources. If you stipulate where the wealth comes from, the dilemma can change quite a bit.

Suppose for example the $4 billion is confiscated from Bashar Al-Assad. That would be in itself a good thing, lessening the power of a genocidal tyrant. So we need to add that to the positive side of the ledger. It is probably worth killing one innocent person just to undermine Al-Assad’s power; indeed, the US Air Force certainly seems to think so, as they average more than one civilian fatality every day in airstrikes.

Now suppose the wealth was extracted by clever financial machinations that took just a few dollars out of every bank account in America. This would be in itself a bad thing, but perhaps not a terrible thing, especially since we’re planning on giving most of it to UNICEF. Those people should have given it anyway, right? This sounds like a pretty good movie, actually; a cyberpunk Robin Hood basically.

Next, suppose it was obtained by stealing the life savings of a million poor people in Africa. Now the method of obtaining the money is so terrible that it’s not clear that funneling it through UNICEF would compensate, even if you didn’t have to murder someone to get it.

Finally, suppose that the wealth is actually created anew—not printed money from the Federal Reserve, but some new technology that will increase the world’s wealth by billions of dollars yet requires the death of an innocent person to create. In this scenario, the murder has become something more like the inherent risk in human subjects biomedical research, and actually seems justifiable. And indeed, that fits with the Kantian answer, for if we all had the chance to kill one person in order to create something that would increase the wealth of the world by $4 billion, we could turn this planet into a post-scarcity utopia within a generation for fewer deaths than are currently caused by diabetes.

Anyway, my point here is that the detailed context of a decision actually matters a great deal. We can’t simply abstract away from everything else in the world and ask whether the money is worth the life.

When we consider this broader context with regard to the world’s most cost-effective charities, it becomes apparent that a small proportion of very dedicated people giving huge proportions of their income to charity is not the kind of world we want to see.

If I actually gave so much that I equalized my marginal utility of wealth to that of a child dying of malaria in Ghana, I would have to donate over 95% of my income—and well before that point, I would be homeless and impoverished. This actually seems penny-wise and pound-foolish even from the perspective of total altruism: If I stop paying rent, it gets a lot harder for me to finish my doctorate and become a development economist. And even if I never donated another dollar, the world would be much better off with one more good development economist than with even another $23,000 to the Against Malaria Foundation. Once you factor in the higher income I’ll have (and proportionately higher donations I’ll make), it’s obviously the wrong decision for me to give 95% of $25,000 today rather than 10% of $70,000 every year for the next 20 years after I graduate.
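The comparison in that last sentence is easy to make explicit. The stipend, the salary, and the 20-year horizon are the figures from the text; the 3% discount rate is my own assumption:

```python
# Give a lot now, or give steadily from a higher income later?
give_now = 0.95 * 25_000          # one-time gift on a grad stipend
give_later = 0.10 * 70_000 * 20   # 10% of a higher salary for 20 years

# Present value of the later stream at an assumed 3% discount rate.
r = 0.03
pv_later = sum(0.10 * 70_000 / (1 + r) ** t for t in range(1, 21))

print(round(give_now), round(give_later), round(pv_later))
```

Even discounted at 3%, the later stream is worth roughly $104,000 in present value, more than four times the immediate gift—and that is before counting the donations I could keep making after year 20.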

But the optimal amount for me to donate from that perspective is whatever the maximum would be that I could give without jeopardizing my education and career prospects. This is almost certainly more than I am presently giving. Exactly how much more is actually not all that apparent: It’s not enough to say that I need to be able to pay rent, eat three meals a day, and own a laptop that’s good enough for programming and statistical analysis. There’s also a certain amount that I need for leisure, to keep myself at optimal cognitive functioning for the next several years. Do I need that specific video game, that specific movie? Surely not—but if I go the next ten years without ever watching another movie or playing another video game, I’m probably going to be in trouble psychologically. But what exactly is the minimum amount to keep me functioning well? And how much should I be willing to spend attending conferences? Those can be important career-building activities, but they can also be expensive wastes of time.

Singer acts as though jeopardizing your career prospects is no big deal, but this is clearly wrong: The harm isn’t just to your own well-being, but also to your productivity and earning power that could have allowed you to donate more later. You are a human capital asset, and you are right to invest in yourself. Exactly how much you should invest in yourself is a much harder question.

Such calculations are extremely difficult to do. There are all sorts of variables I simply don’t know, and don’t have any clear way of finding out. It’s not a good sign for an ethical theory when even someone with years of education and expertise on specifically that topic still can’t figure out the answer. Ethics is supposed to be something we can apply to everyone.

So I think it’s most helpful to think in those terms: What could we apply to everyone? What standard of donation would be high enough if we could get everyone on board?

World poverty is rapidly declining. The direct poverty gap at the UN poverty line of $1.90 per day is now only $80 billion. Realistically, we couldn’t simply close that gap precisely (there would also be all sorts of perverse incentives if we tried to do it that way). But the standard estimate that it would take about $300 billion per year in well-targeted spending to eliminate world hunger is looking very good.

How much would each person, just those in the middle class or above within the US or the EU, have to give in order to raise this much?

89% of US income is received by the top 60% of households (who I would say are unambiguously “middle class or above”). Income inequality is not as extreme within the EU, so the proportion of income received by the top 60% seems to be more like 75%.

89% of US GDP plus 75% of EU GDP comes to about $29 trillion per year altogether. This means that in order to raise $300 billion, each person in the middle class or above would need to donate just over one percent of their income.
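That “just over one percent” follows directly from the numbers above. The GDP figures here are my own rough 2017-era approximations; the income shares and the $300 billion target are the figures quoted in the text:

```python
# How much of middle-class income would ending extreme poverty take?
us_gdp = 19e12   # rough US GDP, $/yr (approximate 2017-era figure)
eu_gdp = 16e12   # rough EU GDP, $/yr (approximate 2017-era figure)

middle_class_income = 0.89 * us_gdp + 0.75 * eu_gdp  # ≈ $29 trillion
target = 300e9   # well-targeted annual spending to end world hunger

share = target / middle_class_income
print(f"{share:.2%}")  # → 1.04%
```

So even with fairly conservative assumptions about how much income the “middle class or above” actually receive, the required donation rounds to one penny per dollar.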

Not 95%. Not 25%. Not even 10%. Just 1%. That would be enough.

Of course, more is generally better—at least until you start jeopardizing your career prospects. So by all means, give 2% or 5% or even 10%. But I really don’t think it’s helpful to make people feel guilty about not giving 95% when all we really needed was for everyone to give 1%.

There is an important difference between what we could do, what we should do, and what we must do.

What we must do are moral obligations so strong they are essentially inviolable: We must not murder people. There may be extreme circumstances where exceptions can be made (such as collateral damage in war), and we can always come up with hypothetical scenarios that would justify almost anything, but for the vast majority of people the vast majority of time, these ethical rules are absolutely binding.

What we should do are moral obligations that are strong enough to be marks against your character if you break them, but not so absolutely binding that you have to be a monster not to follow them. This is where I put donating at least 1% of your income. (This is also where I put being vegetarian, but perhaps that is a topic for another time.) You really ought to do it, and you are doing something wrongful if you don’t—but most people don’t, and you are not a terrible person if you don’t.

This latter category is in part socially constructed, based on the norms people actually follow. Today, slavery is obviously a grave crime, and to be a human trafficker who participates in it you must be a psychopath. But two hundred years ago, things were somewhat different: Slavery was still wrong, yes, but it was quite possible to be an ordinary person who was generally an upstanding citizen in most respects and yet still own slaves. I would still condemn people who owned slaves back then, but not nearly as forcefully as I would condemn someone who owned slaves today. Two hundred years from now, perhaps vegetarianism will move up a category: The norm will be that everyone eats only plants, and someone who went out of their way to kill and eat a pig would have to be a psychopath. Eating meat is already wrong today—but it will be more wrong in the future. I’d say the same about donating 1% of your income, but actually I’m hoping that by two hundred years from now there will be no more poverty left to eradicate, and donation will no longer be necessary.

Finally, there is what we could do—supererogatory, even heroic actions of self-sacrifice that would make the world a better place, but cannot be reasonably expected of us. This is where donating 95% or even 25% of your income would fall. Yes, absolutely, that would help more people than donating 1%; but you don’t owe the world that much. It’s not wrong for you to contribute less than this. You don’t need to feel guilty for not giving this much.

But I do want to make you feel guilty if you don’t give at least 1%. Don’t tell me you can’t. You can. If your income is $30,000 per year, that’s $300 per year. If you needed that much for a car repair, or dental work, or fixing your roof, you’d find a way to come up with it. No one in the First World middle class is that liquidity-constrained. It is true that half of Americans say they couldn’t come up with $400 in an emergency, but I frankly don’t believe it. (I believe it for the bottom 25% or so, who are actually poor; but not half of Americans.) If you have even one credit card that’s not maxed out, you can do this—and frankly even if a card is maxed out, you can probably call them and get them to raise your limit. There is something you could cut out of your spending that would allow you to get back 1% of your annual income. I don’t know what it is, necessarily: Restaurants? Entertainment? Clothes? But I’m not asking you to give a third of your income—I’m asking you to give one penny out of every dollar.

I give considerably more than that; my current donation target is 8% and I’m planning on raising it to 10% or more once I get a high-paying job. I live on a grad student salary which is less than the median personal income in the US. So I know it can be done. But I am very intentionally not asking you to give this much; that would be above and beyond the call of duty. I’m only asking you to give 1%.

No, this isn’t like Watergate. It’s worse.

May 21, JDN 2457895

Make no mistake: This is a historic moment. This may be the greatest corruption scandal in the history of the United States. Donald Trump has fired the director of the FBI in order to block an investigation—and he said so himself.

It has become cliche to compare scandals to Watergate—to the point where we even stick the suffix “-gate” on things to indicate scandals. “Gamergate”, “Climategate”, and so on. So any comparison to Watergate is bound to draw some raised eyebrows.

But just as it’s not Godwin’s Law when you’re really talking about fascism and genocide, it’s not the “-gate” cliche when we are talking about a corruption scandal that goes all the way up to the President of the United States. And The Atlantic is right: this isn’t Watergate; it’s worse.

First of all, let’s talk about the crime of which Trump is accused. Nixon was accused of orchestrating burglary and fraud. These are not minor offenses, to be sure. But they are ordinary criminal offenses, felonies at worst. Trump is accused of fundamental Constitutional violations (particularly the First Amendment and the Emoluments Clause), and above all, Trump is accused of treason. This is the highest crime recognized by the Constitution of the United States. It is the only crime with a specifically listed Constitutional punishment—and that punishment is execution.

Donald Trump is being investigated not for stealing something or concealing information, but for colluding with foreign powers in the attempt to undermine American democracy. Is he guilty? I don’t know; that’s why we’re investigating. But let me say this: If he isn’t guilty of something, it’s quite baffling that he would fight so hard to stop the investigation.

Speaking of which: Trump’s intervention to stop Comey is much more direct, and much more sudden, than anything Nixon did to stop the Watergate investigations. Nixon of course tried to stonewall the investigations, but he did so subtly, cautiously, always trying to at least appear like he valued due process and rule of law. Trump made no such efforts, openly threatening Comey personally on Twitter and publicly declaring on national television that he had fired him to block the investigation.

But perhaps what makes the Trump-Comey affair most terrifying is how the supposedly “mainstream” Republican Party has reacted. The Republicans of Nixon had some honor left in them; several resigned rather than follow Nixon’s illegal orders, and dozens of Republicans in Congress supported the investigations and called for Nixon’s impeachment. Apparently that honor is gone now, as GOP leaders like Mitch McConnell and Lindsey Graham have expressed support for the President’s corrupt and illegal actions, citing no principle other than party loyalty. If we needed any more proof that the Republican Party of the United States is no longer a mainstream political party, this is it. They don’t believe in democracy or rule of law anymore. They believe in winning at any cost, loyalty at any price. They have become a radical far-right organization—indeed, if they continue down this road of supporting the President in undermining the freedom of the press and consolidating his own power, I think it is fair to call them literally neo-fascist.

We are about to see whether American institutions can withstand such an onslaught, whether liberty and justice can prevail against corruption and tyranny. So far, there have been reasons to be optimistic: In particular, the judicial branch has proudly and bravely held the line, blocking Trump’s travel ban (multiple times), resisting his order to undermine sanctuary cities, and standing up to direct criticisms and even threats from the President himself. Our system of checks and balances is being challenged, but so far it is holding up against that challenge. We will find out soon enough whether the American system truly is robust enough to survive.

Belief in belief, and why it’s important

Oct 30, JDN 2457692

In my previous post on ridiculous beliefs, I passed briefly over this sentence:

“People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.”

Today I’d like to talk about the fact that “to profess” is a very important phrase in that sentence. Part of understanding ridiculous beliefs, I think, is understanding that many, if not most, of them are not actually proper beliefs. They are what Daniel Dennett calls “belief in belief”, and what has elsewhere been referred to as “anomalous belief”. They are not beliefs in the ordinary sense that we would line up with the other beliefs in our worldview and use them to anticipate experiences and motivate actions. They are something else, lone islands of belief that are not woven into our worldview. But all the same they are invested with importance, often moral or even ultimate importance; this one belief may not make any sense with everything else, but you must believe it, because it is a vital part of your identity and your tribe. To abandon it would not simply be mistaken; it would be heresy, it would be treason.

How do I know this? Mainly because nobody has tried to stone me to death lately.

The Bible is quite explicit about at least a dozen reasons I am supposed to be executed forthwith; you likely share many of them: Heresy, apostasy, blasphemy, nonbelief, sodomy, fornication, covetousness, taking God’s name in vain, eating shellfish (though I don’t anymore!), wearing mixed fiber, shaving, working on the Sabbath, making images of things, and my personal favorite, not stoning other people for committing such crimes (as we call it in game theory, a second-order punishment).

Yet I have met many people who profess to be “Bible-believing Christians”, and even may oppose some of these activities (chiefly sodomy, blasphemy, and nonbelief) on the grounds that they are against what the Bible says—and yet not one has tried to arrange my execution, nor have I ever seriously feared that they might.

Is this because we live in a secular society? Well, yes—but not simply that. It isn’t just that these people are afraid of being punished by our secular government should they murder me for my sins; they believe that it is morally wrong to murder me, and would rarely even consider the option. Someone could point them to the passage in Leviticus (20:16, as it turns out) that explicitly says I should be executed, and it would not change their behavior toward me.

On first glance this is quite baffling. If I thought you were about to drink a glass of water that contained cyanide, I would stop you, by force if necessary. So if they truly believe that I am going to be sent to Hell—infinitely worse than cyanide—then shouldn’t they be willing to use any means necessary to stop that from happening? And wouldn’t this be all the more true if they believe that they themselves will go to Hell should they fail to punish me?

If these “Bible-believing Christians” truly believed in Hell the way that I believe in cyanide—that is, as proper beliefs which anticipate experience and motivate action—then they would in fact try to force my conversion or execute me, and in doing so would believe that they are doing right. This used to be quite common in many Christian societies (most infamously in the Salem Witch Trials), and still is disturbingly common in many Muslim societies—ISIS doesn’t just throw gay men off rooftops and stone them as a weird idiosyncrasy; it is written in the Hadith that they’re supposed to. Nor is this sort of thing confined to terrorist groups; the “legitimate” government of Saudi Arabia routinely beheads atheists or imprisons homosexuals (though it has a very capricious enforcement system, likely so that the monarchy can trump up charges to justify executing whomever they choose). Beheading people because the book said so is what your behavior would look like if you honestly believed, as a proper belief, that the Qur’an or the Bible or whatever holy book actually contained the ultimate truth of the universe. The great irony of calling religion people’s “deeply-held belief” is that it is in almost all circumstances the exact opposite—it is their most weakly held belief, the one that they could most easily sacrifice without changing their behavior.

Yet perhaps we can’t even say that to people, because they will get equally defensive and insist that they really do hold this very important anomalous belief, and how dare you accuse them otherwise. Because one of the beliefs they really do hold, as a proper belief, and a rather deeply-held one, is that you must always profess to believe your religion and defend your belief in it, and if anyone catches you not believing it that’s a horrible, horrible thing. So even though it’s obvious to everyone—probably even to you—that your behavior looks nothing like what it would if you actually believed in this book, you must say that you do, scream that you do if necessary, for no one must ever, ever find out that it is not a proper belief.

Another common trick is to try to convince people that their beliefs do affect their behavior, even when they plainly don’t. We typically use the words “religious” and “moral” almost interchangeably, when they are at best orthogonal and arguably even opposed. Part of why so many people seem to hold so rigidly to their belief-in-belief is that they think that morality cannot be justified without recourse to religion; so even though on some level they know religion doesn’t make sense, they are afraid to admit it, because they think that means admitting that morality doesn’t make sense. If you are even tempted by this inference, I present to you the entire history of ethical philosophy. Divine Command theory has been a minority view among philosophers for centuries.

Indeed, it is precisely because your moral beliefs are not based on your religion that you feel a need to resort to that defense of your religion. If you simply believed religion as a proper belief, you would base your moral beliefs on your religion, sure enough; but you’d also defend your religion in a fundamentally different way, not as something you’re supposed to believe, not as a belief that makes you a good person, but as something that is just actually true. (And indeed, many fanatics actually do defend their beliefs in those terms.) No one ever uses the argument that if we stop believing in chairs we’ll all become murderers, because chairs are actually there. We don’t believe in belief in chairs; we believe in chairs.

And really, if such a belief were completely isolated, it would not be a problem; it would just be this weird thing you say you believe that everyone really knows you don’t and it doesn’t affect how you behave, but okay, whatever. The problem is that it’s never quite isolated from your proper beliefs; it does affect some things—and in particular it can offer a kind of “support” for other real, proper beliefs that you do have, support which is now immune to rational criticism.

For example, as I already mentioned: Most of these “Bible-believing Christians” do, in fact, morally oppose homosexuality, and say that their reason for doing so is based on the Bible. This cannot literally be true, because if they actually believed the Bible they wouldn’t want gay marriage taken off the books, they’d want a mass pogrom of 4-10% of the population (depending how you count), on a par with the Holocaust. Fortunately their proper belief that genocide is wrong is overriding. But they have no such overriding belief supporting the moral permissibility of homosexuality or the personal liberty of marriage rights, so the very tenuous link to their belief-in-belief in the Bible is sufficient to tilt their actual behavior.

Similarly, if the people I meet who say they think maybe 9/11 was an inside job by our government really believed that, they would most likely be trying to organize a violent revolution; any government willing to murder 3,000 of its own citizens in a false flag operation is one that must be overturned and can probably only be overturned by force. At the very least, they would flee the country. If they lived in a country where the government is actually like that, like Zimbabwe or North Korea, they wouldn’t fear being dismissed as conspiracy theorists, they’d fear being captured and executed. The very fact that you live within the United States and exercise your free speech rights here says pretty strongly that you don’t actually believe our government is that evil. But they wouldn’t be so outspoken about their conspiracy theories if they didn’t at least believe in believing them.

I also have to wonder how many of our politicians who lean on the Constitution as their source of authority have actually read the Constitution, as it says a number of rather explicit things against, oh, say, the establishment of religion (First Amendment) or searches and arrests without warrants (Fourth Amendment) that they don’t much seem to care about. Some are better about this than others; Rand Paul, for instance, actually takes the Constitution pretty seriously (and is frequently found arguing against things like warrantless searches as a result!), but Ted Cruz for example says he has spent decades “defending the Constitution”, despite saying things like “America is a Christian nation” that directly violate the First Amendment. Cruz doesn’t really seem to believe in the Constitution; but maybe he believes in believing the Constitution. (It’s also quite possible he’s just lying to manipulate voters.)


The many varieties of argument “men”

JDN 2457552

After several long, intense, and very likely controversial posts in a row, I decided to take a break with a post that is short and fun.

You have probably already heard of a “strawman” argument, but I think there are many more “materials” an argument can be made of which would be useful terms to have, so I have proposed a taxonomy of similar argument “men”. Perhaps this will help others in the future to more precisely characterize where arguments have gone wrong and how they should have gone differently.

For examples of each, I’m using a hypothetical argument about the gold standard, based on the actual arguments I refute in my previous post on the subject.

This is an argument actually given by a proponent of the gold standard, upon which my “men” shall be built:

1) A gold standard is key to achieving a period of sustained, 4% real economic growth.

The U.S. dollar was created as a defined weight of gold and silver in 1792. As detailed in the booklet, The 21st Century Gold Standard (available free at http://agoldenage.com), I co-authored with fellow Forbes.com columnist Ralph Benko, a dollar as good as gold endured until 1971 with the relatively brief exceptions of the War of 1812, the Civil War and Reconstruction, and 1933, the year President Franklin Roosevelt suspended dollar/gold convertibility until January 31, 1934 when the dollar/gold link was re-established at $35 an ounce, a 40% devaluation from the prior $20.67 an ounce. Over that entire 179 years, the U.S. economy grew at a 3.9% average annual rate, including all of the panics, wars, industrialization and a myriad other events. During the post World War II Bretton Woods gold standard, the U.S. economy also grew on average 4% a year.

By contrast, during the 40-years since going off gold, U.S. economic growth has averaged an anemic 2.8% a year. The only 40-year periods in which the economic growth was slower were those ending in the Great Depression, from 1930 to 1940.

2) A gold standard reduces the risk of recessions and financial crises.

Critics of the gold standard point out, correctly, that it would prohibit the Federal Reserve from manipulating interest rates and the value of the dollar in hopes of stimulating demand. In fact, the idea that a paper dollar would lead to a more stable economy was one of the key selling points for abandoning the gold standard in 1971.

However, this power has done far more harm than good. Under the paper dollar, recessions have become more severe and financial crises more frequent. During the post World War II gold standard, unemployment averaged less than 5% and never rose above 7% during a calendar year. Since going off gold, unemployment has averaged more than 6%, and has been above 8% now for nearly 3.5 years.
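As an aside, growth claims like “grew at a 3.9% average annual rate” over long periods are compound (geometric) rates, not arithmetic averages, and they are easy to check mechanically. A minimal sketch, using an illustrative placeholder rather than actual GDP data:

```python
def avg_annual_growth(start_value, end_value, years):
    """Compound average annual growth rate implied by start and end values."""
    return (end_value / start_value) ** (1 / years) - 1

# Illustrative only: a quantity growing 3.9% per year for 179 years
# multiplies by roughly 940x overall; the function recovers the rate.
factor = 1.039 ** 179
rate = avg_annual_growth(1.0, factor, 179)
```

This also illustrates why long-period averages can hide a lot: the same compound rate is consistent with wildly different year-to-year volatility, which is exactly the issue raised below.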

And now, the argument men:

Fallacious (Bad) Argument Men

These argument “men” are harmful and irrational; they are to be avoided, and destroyed wherever they are found. Maybe in some very extreme circumstances they would be justifiable—but only in circumstances where it is justifiable to be dishonest and manipulative. You can use a strawman argument to convince a terrorist to let the hostages go; you can’t use one to convince your uncle not to vote Republican.

Strawman: The familiar fallacy in which instead of trying to address someone else’s argument, you make up your own fake version of that argument which is easier to defeat. The image is of making an effigy of your opponent out of straw and beating on the effigy to avoid confronting the actual opponent.

You can’t possibly think that going to the gold standard would make the financial system perfect! There will still be corrupt bankers, a banking oligopoly, and an unpredictable future. The gold standard would do nothing to remove these deep flaws in the system.

Hitman: An even worse form of the strawman, in which you misrepresent not only your opponent’s argument, but your opponent themselves, using your distortion of their view as an excuse for personal attacks against their character.

Oh, you would favor the gold standard, wouldn’t you? A rich, middle-aged White man, presumably straight and nominally Christian? You have all the privileges in life, so you don’t care if you take away the protections that less-fortunate people depend upon. You don’t care if other people become unemployed, so long as you don’t have to bear inflation reducing the real value of your precious capital assets.

Conman: An argument for your own view which you don’t actually believe, but expect to be easier to explain or more persuasive to this particular audience than the true reasons for your beliefs.

Back when we were on the gold standard, it was the era of “Robber Barons”. Poverty was rampant. If we go back to that system, it will just mean handing over all the hard-earned money of working people to billionaire capitalists.

Vaporman: Not even an argument, just a forceful assertion of your view that takes the place or shape of an argument.

The gold standard is madness! It makes no sense at all! How can you even think of going back to such a ridiculous monetary system?

Honest (Acceptable) Argument Men

These argument “men” are perfectly acceptable, and should be the normal expectation in honest discourse.

Woodman: The actual argument your opponent made, addressed and refuted honestly using sound evidence.

There is very little evidence that going back to the gold standard would in any way improve the stability of the currency or the financial system. While long-run inflation was very low under the gold standard, this fact obscures the volatility of inflation, which was extremely high; bouts of inflation were followed by bouts of deflation, swinging the value of the dollar up or down as much as 15% in a single year. Nor is there any evidence that the gold standard prevented financial crises, as dozens of financial crises occurred under the gold standard, if anything more often than they have since the full-fiat monetary system was established in 1971.

Bananaman: An actual argument your opponent made that you honestly refute, which nonetheless is so ridiculous that it seems like a strawman, even though it isn’t. Named in “honor” of Ray Comfort’s Banana Argument. Of course, some bananas are squishier than others, and the only one I could find here was at least relatively woody—though still recognizable as a banana:

You said “A gold standard is key to achieving a period of sustained, 4% real economic growth.” based on several distorted, misunderstood, or outright false historical examples. The 4% annual growth in total GDP during the early history of the United States was due primarily to population growth, not a rise in real standard of living, while the rapid growth during WW2 was obviously due to the enormous and unprecedented surge in government spending (and by the way, we weren’t even really on the gold standard during that period). In a blatant No True Scotsman fallacy, you specifically exclude the Great Depression from the “true gold standard” so that you don’t have to admit that the gold standard contributed significantly to the severity of the depression.

Middleman: An argument that synthesizes your view and your opponent’s view, in an attempt to find a compromise position that may be acceptable, if not preferred, by all.

Unlike the classical gold standard, the Bretton Woods gold standard in place from 1945 to 1971 was not obviously disastrous. If you want to go back to a system of international exchange rates fixed by gold similar to Bretton Woods, I would consider that a reasonable position to take.

Virtuous (Good) Argument Men

These argument “men” go above and beyond the call of duty; rather than simply seek to win arguments honestly, they actively seek the truth behind the veil of opposing arguments. These cannot be expected in all circumstances, but they are to be aspired to, and commended when found.

Ironman: Your opponent’s actual argument, but improved, with some of its flaws shored up. The same basic thinking as your opponent, but done more carefully, filling in the proper gaps.

The gold standard might not reduce short-run inflation, but it would reduce long-run inflation, making our currency more stable over long periods of time. We would be able to track long-term price trends in goods such as housing and technology much more easily, and people would have an easier time psychologically grasping the real prices of goods as they change during their lifetime. No longer would we hear people complain, “How can you want a minimum wage of $15? As a teenager in 1955, I got paid $3 an hour and I was happy with that!” when that $3 in 1955, adjusted for inflation, is $26.78 in today’s money.
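Incidentally, the inflation adjustment in that example is just a ratio of price indices. A minimal sketch; the CPI levels here are rough approximations I am assuming for illustration, not official figures:

```python
def adjust_for_inflation(amount, cpi_then, cpi_now):
    """Convert a nominal dollar amount into another year's dollars via a CPI ratio."""
    return amount * cpi_now / cpi_then

# Assumed approximate CPI-U levels: ~26.8 in 1955, ~240 in 2016.
real_value = adjust_for_inflation(3.00, 26.8, 240.0)
# Yields roughly $26.9, in line with the $26.78 figure above.
```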

Steelman: Not the argument your opponent made, but the one they should have made. The best possible argument you are aware of that would militate in favor of their view, the one that sometimes gives you pause about your own opinions, the real and tangible downside of what you believe in.

Tying currency to gold or any other commodity may not be very useful directly, but it could serve one potentially vital function, which is as a commitment mechanism to prevent the central bank from manipulating the currency to enrich themselves or special interests. It may not be the optimal commitment mechanism, but it is a psychologically appealing one for many people, and is also relatively easy to define and keep track of. It is also not subject to as much manipulation as something like nominal GDP targeting or a Taylor Rule, which could be fudged by corrupt statisticians. And while it might cause moderate volatility, it can also protect against the most extreme forms of volatility such as hyperinflation. In countries with very corrupt governments, a gold standard might actually be a good idea, if you could actually enforce it, because it would at least limit the damage that can be done by corrupt central bank officials. Had such a system been in place in Zimbabwe in the 1990s, the hyperinflation might have been prevented. The US is not nearly as corrupt as Zimbabwe, so we probably do not need a gold standard; but it may be wise to recommend the use of gold standards or similar fixed-exchange currencies in Third World countries so that corrupt leaders cannot abuse the monetary system to gain at the expense of their people.

Why it matters that torture is ineffective

JDN 2457531

Like “longest-ever-serving Speaker of the House sexually abuses teenagers” and “NSA spy program is trying to monitor the entire telephone and email system”, the news that the US government systematically tortures suspects is an egregious violation that goes to the highest levels of our government—that for some reason most Americans don’t particularly seem to care about.

The good news is that President Obama signed an executive order in 2009 banning torture domestically, reversing official policy under the Bush Administration, and then better yet in 2014 expanded the order to apply to all US interests worldwide. If this is properly enforced, perhaps our history of hypocrisy will finally be at its end. (Well, not if Trump wins…)

Yet as often seems to happen, there are two extremes in this debate and I think they’re both wrong.

The really disturbing side is “Torture works and we have to use it!” The preferred mode of argumentation for this is the “ticking time bomb scenario”, in which we have some urgent disaster to prevent (such as a nuclear bomb about to go off) and torture is the only way to stop it from happening. Surely then torture is justified? This argument may sound plausible, but as I’ll get to below, this is a lot like saying, “If aliens were attacking from outer space trying to wipe out humanity, nuclear bombs would probably be justified against them; therefore nuclear bombs are always justified and we can use them whenever we want.” If you can’t wait for my explanation, The Atlantic skewers the argument nicely.

Yet the opponents of torture have brought this sort of argument on themselves, by staking out a position so extreme as “It doesn’t matter if torture works! It’s wrong, wrong, wrong!” This kind of simplistic deontological reasoning is very appealing and intuitive to humans, because it casts the world into simple black-and-white categories. To show that this is not a strawman, here are several different people all making this same basic argument, that since torture is illegal and wrong it doesn’t matter if it works and there should be no further debate.

But the truth is, if it really were true that the only way to stop a nuclear bomb from leveling Los Angeles was to torture someone, it would be entirely justified—indeed obligatory—to torture that suspect and stop that nuclear bomb.

The problem with that argument is not just that this is not our usual scenario (though it certainly isn’t); it goes much deeper than that:

That scenario makes no sense. It wouldn’t happen.

To use the example the late Antonin Scalia used from an episode of 24 (perhaps the most egregious Fictional Evidence Fallacy ever committed), if there ever is a nuclear bomb planted in Los Angeles, that would literally be one of the worst things that ever happened in the history of the human race—literally a Holocaust in the blink of an eye. We should be prepared to cause extreme suffering and death in order to prevent it. But not only is that event (fortunately) very unlikely, torture would not help us.

Why? Because torture just doesn’t work that well.

It would be too strong to say that it doesn’t work at all; it’s possible that it could produce some valuable intelligence—though clear examples of such results are amazingly hard to come by. There are some social scientists who have found empirical results showing some effectiveness of torture, however. We can’t say with any certainty that it is completely useless. (For obvious reasons, a randomized controlled experiment in torture is wildly unethical, so none have ever been attempted.) But to justify torture it isn’t enough that it could work sometimes; it has to work vastly better than any other method we have.

And our empirical data is in fact reliable enough to show that that is not the case. Torture often produces unreliable information, as we would expect from the game theory involved—your incentive is to stop the pain, not provide accurate intel; the psychological trauma that torture causes actually distorts memory and reasoning; and as a matter of fact basically all the useful intelligence obtained in the War on Terror was obtained through humane interrogation methods. As interrogation experts agree, torture just isn’t that effective.

In principle, there are four basic cases to consider:

1. Torture is vastly more effective than the best humane interrogation methods.

2. Torture is slightly more effective than the best humane interrogation methods.

3. Torture is as effective as the best humane interrogation methods.

4. Torture is less effective than the best humane interrogation methods.

The evidence points most strongly to case 4, which would make rejecting torture a no-brainer; if it doesn’t even work as well as other methods, it’s absurd to use it. You’re basically kicking puppies at that point—purely sadistic violence that accomplishes nothing. But the data isn’t clear enough for us to rule out case 3 or even case 2. There is only one case we can strictly rule out, and that is case 1.

But it was only in case 1 that torture could ever be justified!

If you’re trying to justify doing something intrinsically horrible, it’s not enough that it has some slight benefit.

People seem to have this bizarre notion that we have only two choices in morality:

Either we are strict deontologists, and wrong actions can never be justified by good outcomes ever, in which case apparently vaccines are morally wrong, because stabbing children with needles is wrong. To be fair, some people seem to actually believe this; but then, some people believe the Earth is less than 10,000 years old.

Or alternatively we are the bizarre strawman concept most people seem to have of utilitarianism, under which any wrong action can be justified by even the slightest good outcome, in which case all you need to do to justify slavery is show that it would lead to a 1% increase in per-capita GDP. Sadly, there honestly do seem to be economists who believe this sort of thing. Here’s one arguing that US chattel slavery was economically efficient, and some of the more extreme arguments for why sweatshops are good can take on this character. Sweatshops may be a necessary evil for the time being, but they are still an evil.

But what utilitarianism actually says (and I consider myself some form of nuanced rule-utilitarian, though actually I sometimes call it “deontological consequentialism” to emphasize that I mean to synthesize the best parts of the two extremes) is not that the ends always justify the means, but that the ends can justify the means—that it can be morally good or even obligatory to do something intrinsically bad (like stabbing children with needles) if it is the best way to accomplish some greater good (like saving them from measles and polio). But the good actually has to be greater, and it has to be the best way to accomplish that good.

To see why this latter proviso is important, consider the real-world ethical issues involved in psychology experiments. The benefits of psychology experiments are already quite large, and poised to grow as the science improves; one day the benefits of cognitive science to humanity may be even larger than the benefits of physics and biology are today. Imagine a world without mood disorders or mental illness of any kind; a world without psychopathy, where everyone is compassionate; a world where everyone is achieving their full potential for happiness and self-actualization. Cognitive science may yet make that world possible—and I haven’t even gotten into its applications in artificial intelligence.

To achieve that world, we will need a great many psychology experiments. But does that mean we can just corral people off the street and throw them into psychology experiments without their consent—or perhaps even their knowledge? That we can do whatever we want in those experiments, as long as it’s scientifically useful? No, it does not. We have ethical standards in psychology experiments for a very good reason, and while those ethical standards do slightly reduce the efficiency of the research process, the reduction is small enough that the moral choice is obviously to retain the ethics committees and accept the slight reduction in research efficiency. Yes, randomly throwing people into psychology experiments might actually be slightly better in purely scientific terms (larger and more random samples)—but it would be terrible in moral terms.

Along similar lines, even if torture works about as well as or even slightly better than other methods, that’s simply not enough to justify it morally. Making a successful interrogation take 16 days instead of 17 simply wouldn’t be enough benefit to justify the psychological trauma to the suspect (and perhaps the interrogator!), the risk of harm to the falsely accused, or the violation of international human rights law. And in fact a number of terrorism suspects were waterboarded for months, so even the idea that it could shorten the interrogation is pretty implausible. If anything, torture seems to make interrogations take longer and give less reliable information.

A lot of people seem to have this impression that torture is amazingly, wildly effective, that a suspect who won’t crack after hours of humane interrogation can be tortured for just a few minutes and give you all the information you need. This is exactly what we do not find empirically; if he didn’t crack after hours of talk, he won’t crack after hours of torture. If you literally only have 30 minutes to find the nuke in Los Angeles, I’m sorry; you’re not going to find the nuke in Los Angeles. No adversarial interrogation is ever going to be completed that quickly, no matter what technique you use. Evacuate as many people to safe distances or underground shelters as you can in the time you have left.

This is why the “ticking time-bomb” scenario is so ridiculous (and so insidious); that’s simply not how interrogation works. The best methods we have for “rapid” interrogation of hostile suspects take hours or even days, and they are humane—building trust and rapport is the most important step. The goal is to get the suspect to want to give you accurate information.

For the purposes of the thought experiment, okay, you can stipulate that it would work (this is what the Stanford Encyclopedia of Philosophy does). But now all you’ve done is made the thought experiment more distant from the real-world moral question. The closest real-world examples we’ve ever had involved individual crimes, probably too small to justify the torture (as bad as a murdered child is, think about what you’re doing if you let the police torture people). But by the time the terrorism to be prevented is large enough to really be sufficient justification, it (1) hasn’t happened in the real world and (2) surely involves terrorists who are sufficiently ideologically committed that they’ll be able to resist the torture. If such a situation arises, of course we should try to get information from the suspects—but what we try should be our best methods, the ones that work most consistently, not the ones that “feel right” and maybe happen to work on occasion.

Indeed, the best explanation I have for why people use torture at all, given its horrible effects and (at best) mediocre effectiveness, is that it feels right.

When someone does something terrible (such as an act of terrorism), we rightfully reduce our moral valuation of them relative to everyone else. If you are even tempted to deny this, suppose a terrorist and a random civilian are both inside a burning building and you only have time to save one. Of course you save the civilian and not the terrorist. And that’s still true even if you know that once the terrorist was rescued he’d go to prison and never be a threat to anyone else. He’s just not worth as much.

In the most extreme circumstances, a person can be so terrible that their moral valuation should be effectively zero: If the only person in a burning building is Stalin, I’m not sure you should save him even if you easily could. But it is a grave moral mistake to think that a person’s moral valuation should ever go negative, yet I think this is something that people do when confronted with someone they truly hate. The federal agents torturing those terrorists didn’t merely think of them as worthless—they thought of them as having negative worth. They felt it was a positive good to harm them. But this is fundamentally wrong; no sentient being has negative worth. Some may be so terrible as to have essentially zero worth; and we are often justified in causing harm to some in order to save others. It would have been entirely justified to kill Stalin (as a matter of fact he died of heart disease and old age), to remove the continued threat he posed; but to torture him would not have made the world a better place, and actually might well have made it worse.

Yet I can see how, psychologically, it could be useful to have a mechanism in our brains that makes us hate someone so much we view them as having negative worth. It makes it a lot easier to harm them when necessary, and makes us feel a lot better about ourselves when we do. The idea that every act of homicide is a tragedy, but some are necessary tragedies, is a lot harder to live with than the idea that some people are just so evil that killing or even torturing them is intrinsically good. But some of the worst things human beings have ever done ultimately came from that place in our brains—and torture is one of them.