The cognitive science of morality part I: Joshua Greene

JDN 2457124 EDT 15:33.

Thursday and Friday of this past week there was a short symposium at the University of Michigan called “The Cognitive Science of Moral Minds”, sponsored by the Weinberg Cognitive Science Institute, a new research institute at Michigan. It was founded by a former investment banker, because those are the only people who actually have money these days—and Michigan, like most universities, will pretty much take money from whoever offers it. That includes naming buildings after donors and not even changing the name after it’s revealed that the money was obtained in a $550-million fraud scheme, for which the donor was fined $200 million, because that’s apparently how our so-called “justice” system so-called “works”. A hint for the SEC: if the fine paid divided by the amount defrauded (here, $200 million on $550 million, or about 36%) would be a sensible rate for a marginal income tax, that’s not a punishment. So far as I know Weinberg isn’t a white-collar criminal the way Wyly is, so that’s good at least. Still, why are we relying upon investment bankers to decide what science institutes we’ll found?

The Weinberg Institute was founded just last year. Yes, four years after I got my bachelor’s degree in cognitive science from Michigan, they decided to actually make it a full institute instead of an awkward submajor of the psychology department. Oh, and did I mention that neither the psychology department nor the economics department would support my thesis research in behavioral economics, but they then brought in Daniel Kahneman as the keynote speaker at my graduation? Yeah, sometimes I think I’m a little too cutting-edge for my own good.

The symposium had Joshua Greene of Harvard and Molly Crockett of Oxford, both of whom I’d been hoping to meet for a few years now. I finally got the chance! (It also had Peter Railton—likely not hard to get, seeing as he works in our own philosophy department, but he still has some fairly interesting ideas—and some law professor I’d never heard of named John Mikhail, whose talk was really boring.) I asked Greene how I could get in on his research, and he said I should do a PhD at Harvard… which is something I’ve been trying to convince Harvard to let me do for three years now—they keep not letting me in.

Anyway… the symposium was actually quite good, and the topic of moral cognition is incredibly fascinating and of course incredibly relevant to Infinite Identical Psychopaths.

Let’s start with Greene’s work. His basic research program is studying what our brains are doing when we try to resolve moral dilemmas. Normally I’m not a huge fan of fMRI research, because it’s just so damn coarse; I like to point out that it is basically equivalent to trying to understand how your computer works by running a voltmeter over the motherboard. But Greene does a good job of not over-interpreting his results and of combining the imaging with careful experimental methods to get a better sense of what’s going on.

There are basically two standard moral dilemmas people like to use in moral cognition research, and frankly I think this is a problem, because they don’t differ only in the intended way but also in many other ways; also, once you’ve heard them, they no longer surprise you, so if you are ever a subject in one moral cognition experiment, it’s going to color your responses in any others from then on. I think we should come up with a much more extensive list of dilemmas that differ along various dimensions; this would also make it much less likely for someone to have already seen them all before. A few weeks ago I made a Facebook post proposing a new dilemma of this sort, and the response, while an entirely unscientific poll, at least vaguely suggested that something may be wrong with the way Greene and others interpret the two standard dilemmas.

What are the standard dilemmas? They are called the trolley dilemma and the footbridge dilemma respectively; collectively they are trolley problems, of which there are several—but most aren’t actually used in moral cognition research for some reason.

In the trolley dilemma, there is, well, a trolley, hurtling down a track on which, for whatever reason, five people are trapped. There is another track, and you can flip a switch to divert the trolley onto that track, which will save those five people; but alas there is one other person trapped on that other track, who will now die. Do you flip the switch? Like most people, I say “Yes”.

In the footbridge dilemma, the trolley is still hurtling toward five people, but now you are above the track, standing on a footbridge beside an extremely fat man. The man is so fat, in fact, that if you push him in front of the trolley he will cause it to derail before it hits the five other people. You yourself are not fat enough to achieve this. Do you push the fat man? Like most people, I say “No.”

I actually hope you weren’t familiar with those dilemmas before, because your first impression is really useful to what I’m about to say next: Aren’t those really weird?

I mean, really weird, particularly the second one—what sort of man is fat enough to stop a trolley, yet nonetheless light enough or precariously balanced enough that I can reliably push him off a footbridge? These sorts of dilemmas are shades of the plugged-in-violinist; well, if the Society of Violin Enthusiasts ever does that, I suppose you can unplug the violinist—but what the hell does that have to do with abortion? (At the end of this post I’ve made a little appendix about the plugged-in-violinist and why it fails so miserably as an argument, but since it’s tangential I’ll move on for now.)

Even the first trolley problem, which seems a paragon of logical causality by comparison, is actually pretty bizarre. What are these people doing on the tracks? Why can’t they get off the tracks? Why is the trolley careening toward them? Why can’t the trolley be stopped some other way? Why is nobody on the trolley? What is this switch doing here, and why am I able to switch tracks despite having no knowledge, expertise or authority in trolley traffic control? Where are the proper traffic controllers? (There’s actually a pretty great sequence in Stargate: Atlantis where they have exactly this conversation.)

Now, if your goal is only to understand the core processes of human moral reasoning, using bizarre scenarios actually makes some sense; you can precisely control the variables—though, as I already said, the standard dilemmas usually don’t control them very well—and see what exactly it is that makes us decide right from wrong. Would you do it for five? No? What about ten? What about fifty? Just what is the marginal utility of pushing a fat man off a footbridge? What if you could flip a switch to drop him through a trapdoor instead of pushing him? (Actually Greene did do that one, and the result is that more people do it than would push him, but not as many as would flip the switch to shift the track.) You’d probably do it if he willingly agreed, right? What if you had to pay his family $100,000 in life insurance as part of the deal? Does it matter if it’s your money or someone else’s? Does it matter how much you have to pay his family? $1,000,000? $1,000? Only $10? If he only needs $1 of enticement, is that as good as giving free consent?

You can go the other way as well: So you’d flip the switch for five? What about three? What about two? Okay, you strict act-utilitarian you: Would you do it for only one? Would you flip a coin because the expected marginal utility of two random strangers is equal? You wouldn’t, would you? So now your intervention does mean something, even if you think it’s less important than maximizing the number of lives saved. What if it were 10,000,001 lives versus 10,000,000 lives? Would you nuke a slightly smaller city to save a slightly larger one? Does it matter to you which country the cities are in? Should it matter?

Greene’s account is basically the standard one, which is that the reason we won’t push the fat man off the footbridge is that we have an intense emotional reaction to physically manhandling someone, but in the case of flipping the switch we don’t have that reaction, so our minds are clearer and we can simply rationally assess that five lives matter more than one. Greene maintains that this emotional response is irrational, an atavistic holdover from our evolutionary history, and we would make society better by suppressing it and going with the “rational”, (act-)utilitarian response. (I know he knows the difference between act-utilitarian and rule-utilitarian, because he has a PhD in philosophy. Why he didn’t mention it in the lecture, I cannot say.)

He does make a pretty good case for that, including the fMRIs showing that emotion centers light up a lot more for the footbridge dilemma than for the trolley dilemma; but I must say, I’m really not quite convinced.

Does flipping the switch to drop him through a trapdoor yield more support because it’s emotionally more distant? Or because it makes a bit more sense? We’ve solved the “Why can I push him hard enough?” problem, albeit not the “How is he heavy enough to stop a trolley?” problem.

I’ve also thought about ways to make the gruesome manhandling happen but nonetheless make more logical sense, and the best I’ve come up with is what we might call the lion dilemma: There is a hungry lion about to attack a group of five children and eat them all. You are standing on a ridge above, where the lion can’t easily get to you; if he eats the kids you’ll easily escape. Beside you is a fat man who weighs as much as the five children combined. If you push him off the ridge, he’ll be injured and unable to run, so the lion will attack him first, and after eating him the lion will no longer be hungry and will leave the children alone. You yourself aren’t fat enough to make this work, however; you only weigh as much as two of the kids, not all five. You have no weapons to kill the lion and no one you could call for help, but you are sure you can push the fat man off the ridge quickly enough. Do you push the fat man off the ridge? I think I do—as did most of my friends in my aforementioned totally unscientific Facebook poll—though I’m not as sure of that as I was about flipping the switch. Yet nobody can deny the physicality of my action; not only am I pushing him just as before, he’s not going to be merely run over by a trolley, he’s going to be mauled and eaten by a lion. Of course, I might actually try something else, like yelling, “Run, kids!” and sliding down with the fat man to try to wrestle the lion together; and again we can certainly ask what the seven of us are doing out here unarmed and alone with lions about. But given the choice between the five kids being eaten, myself and three of the kids being eaten, or the fat man being eaten, the last one does actually seem like the least-bad option.

Another good one, actually by the same Judith Jarvis Thomson of plugged-in-violinist fame (for once her dilemma actually makes some sense; seriously, read A Defense of Abortion and you’ll swear she was writing it on psilocybin), is the transplant dilemma: You’re a doctor in a hospital where there are five patients dying of different organ failures—two kidneys, one liver, one heart, and one lung, let’s say. You are one of the greatest transplant surgeons of all time, and there is no doubt in your mind that if you had a viable organ for each of them, you could save their lives—but you don’t. Yet as it so happens, a young man is visiting town and came to the hospital after severely breaking his leg in a skateboarding accident. He is otherwise in perfect health, and what’s more, he’s an organ donor and actually a match for all five of your dying patients. You could quietly take him into the surgical wing, give him a little too much anesthesia “by accident” as you operate on his leg, and then take his organs and save all five other patients. Nobody would ever know. Do you do it? Of course you don’t; you’re not a monster. But… you could save five by killing one, right? Is it just your irrational emotional aversion to cutting people open? No, you’re a surgeon—and I think you’ll be happy to know that actual surgeons agree that this is not the sort of thing they should be doing, despite the fact that they obviously have no problem cutting people open for the greater good all the time. The aversion to harming your own patient may come from (or be the source of) the Hippocratic Oath—are we prepared to say that the Hippocratic Oath is irrational?

I also came up with another really interesting one I’ll call the philanthropist assassin dilemma. One day, as you are walking past a dark alley, a shady figure pops out and makes you an offer: If you take this little vial of cyanide and pour it into the coffee of that man across the street while he’s in the bathroom, a donation of $100,000 will be made to UNICEF. If you refuse, the shady character will keep the $100,000 for himself. Never mind the weirdness—they’re all weird, and unlike the footbridge dilemma this one actually could happen, even though it probably won’t. Assume that despite being a murderous assassin this fellow really intends to make the donation if you help him carry out this murder. $100,000 to UNICEF would probably save the lives of over a hundred children. Furthermore, you can give the empty vial back to the philanthropist assassin, and since there’s no logical connection between you and the victim, there’s basically no chance you’d ever be caught even if he is. (Also, how can you care more about your own freedom than about the lives of a hundred children?) How can you justify not doing it? It’s just one man you don’t know, who apparently did something bad enough to draw the ire of philanthropist assassins, against the lives of a hundred innocent children! Yet I’m sure you share my strong intuition that you should not take the offer. It doesn’t require manhandling anybody—just a quick little pour into a cup of coffee—so that can’t be it. A hundred children! And yet I still don’t see how I could carry out this murder. Is that irrational, as Greene claims? Should we be prepared to carry out such a murder if the opportunity ever arises?
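To spell out the arithmetic implicit in that “over a hundred children” claim (the cost-per-life figure below is just the assumption needed to make the number come out; it is not an established figure, and published estimates of the cost of saving a child’s life vary, often running into the low thousands of dollars):

\[
\frac{\$100{,}000}{\approx \$1{,}000 \text{ per child's life saved}} \approx 100 \text{ lives saved.}
\]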

Okay, how about this one then, the white-collar criminal dilemma? You are a highly skilled hacker, and you could hack into the accounts of a major bank and steal a few dollars from each account, gathering a total of $1 billion that you can then immediately donate to UNICEF, covering their entire operating budget for this year and possibly next year as well, saving the lives of countless children—perhaps literally millions of children. Should you do it? Honestly, in this case I think maybe you should! (Maybe Sam Wyly isn’t so bad after all? He donated his stolen money to a university, which isn’t nearly as good as UNICEF… also he stole $550 million and donated $10 million, so there’s that.) But now suppose that you can only get into the system if you physically break into the bank and kill several of the guards. What are a handful of guards against millions of children? Yet you sound like a Well-Intentioned Extremist in a Hollywood blockbuster (seriously, someone should make this movie), and your action certainly doesn’t seem as unambiguously heroic as one might expect of an act that saves the lives of a million children and kills only a handful of people. Why is it that I think we should lobby governments and corporations to make these donations voluntarily, even if it takes a decade longer, rather than finding someone who can steal the money by force? Children will die in the meantime! Don’t those children matter?

I don’t have a good answer, actually. Maybe Greene is right and it’s just this atavistic emotional response that prevents me from seeing that these acts would be justified. But then again, maybe it’s not—maybe there’s something more here that Greene is missing.

And that brings me back to the act-utilitarian versus rule-utilitarian distinction, which Greene ignored in his lecture. In act-utilitarian terms, obviously you save the children; it’s a no-brainer, 100 children > 1 hapless coffee-drinker and 1,000,000 children >> 10 guards. But in rule-utilitarian terms, things come out a bit different. What kind of society would we live in, if at any moment we could fear the wrath of philanthropist assassins? Right now, there’s plenty of money in the bank for anyone to steal, but what would happen to our financial system if we didn’t punish bank robbers so long as they spent the money on the right charities? All of it, or just most of it? And which charities are the right charities? What would our medical system be like if we knew that our organs might be harvested at any time so long as there were two or more available recipients? Despite these dilemmas actually being a good deal more realistic than the standard trolley problems, the act-utilitarian response still relies upon assuming that this is an exceptional circumstance which will never be heard about or occur again. Yet those are by definition precisely the sort of moral principles we can’t live our lives by.
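One rough way to put that contrast in symbols (this is my own back-of-the-envelope notation, not anything Greene presented): the act-utilitarian tallies only the case in front of you, while the rule-utilitarian asks what happens when the rule is adopted and everyone knows it is in force.

\[
U_{\text{act}} = 100\,v_{\text{child}} - v_{\text{victim}},
\qquad
U_{\text{rule}} = N\,(100\,v_{\text{child}} - v_{\text{victim}}) - C(N),
\]

where \(N\) is the number of times the rule would actually be invoked and \(C(N)\) is the cost of living in a society where organ-harvesting doctors, bank-robbing hackers, and philanthropist assassins are known to operate with impunity. \(U_{\text{act}}\) can be hugely positive even while \(U_{\text{rule}}\) is negative, because \(C(N)\) plausibly grows much faster than the direct gains.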

This post has already gotten really long, so I won’t even get into Molly Crockett’s talk until a later post. I probably won’t do it as the next post either, but the one after that, because next Friday is Capybara Day (“What?” you say? Stay tuned).

Appendix: The plugged-in-violinist

In the original dilemma, the Society of Violin Enthusiasts kidnaps you in the night (you couldn’t possibly have seen this coming), and they plug you into a violinist you’ve never met before. You know he’ll die if you unplug him, but you don’t know how long he’s going to be plugged in, and as long as he is plugged in you will be completely bedridden. As terrible as it feels to kill this poor violinist, most people (including myself) agree that it is probably justifiable to unplug him. Hence, Thomson claims, it doesn’t matter whether a fetus is a person; nobody disputes that the violinist is a person, but you can still unplug him. So, the argument goes, abortion is still permissible even if fetuses are persons.

Well, here are some key differences between abortion and the plugged-in-violinist:

1. Pregnancy is natural and foreseeable.

2. Pregnancy is quite strictly time-limited.

3. Most pregnancies are the result of consensual sex—for those that aren’t, maybe the argument works better.

4. The fetus inside you, in case you’ve forgotten, is your child—to whom you clearly have fundamental responsibilities far in excess of what you would have to a random stranger.

5. Pregnancy is rarely debilitating, mostly just inconvenient.

6. Abortion typically doesn’t just remove the fetus; it brutally dismembers it, usually cutting it into tiny pieces and sucking it up through a tube. If you are imagining a humane “deliver by Caesarean and then suffocate”, think again. Here, I’ll let even Planned Parenthood admit that, albeit in very euphemistic terms.

Here, try adjusting the analogy accordingly, at least as much as we can: You are a member of the Society of Violin Enthusiasts, and you have signed up for a program in which you receive a weekly $50 payment in exchange for being entered into a lottery with a 5% chance per year of being plugged into an ailing violinist for nine months, during which time you will still be able to work and engage in most ordinary activities, but cannot exercise intensely or drink alcohol, and you may experience odd food cravings and occasional nausea. Only women can be plugged into violinists, for deep biological reasons we can’t do anything about. If you unplug the violinist early he will die, though he may have a chance of surviving near the end of the term. Well, your ticket was drawn today, and guess what: the violinist you’re plugged into just so happens to be your estranged son, whom you’ve not seen since he was born. Here he is, plugged in. Nine months of inconvenience await you, and you’ll probably end up having to deal with your estranged son again once he wakes up. Remember, you can’t just unplug him; you’ll need to brutally dismember him or hire someone to do it. Now, tell me, what kind of monster are you if you unplug and dismember your own son?

Of course, if the fetus isn’t morally a person—and for at least the first few months, I’m entirely prepared to say it’s not—then this new analogy fails as well, and it’s more like cutting up, I don’t know, a sea sponge, or maybe a fish. That doesn’t seem nearly so terrible, and indeed might well be morally justified in various circumstances. But Thomson’s point was supposed to be that it doesn’t matter whether the fetus is a person—yet clearly it does.
