The Cognitive Science of Morality Part II: Molly Crockett

JDN 2457140 EDT 20:16.

This weekend has been very busy for me, so this post is going to be shorter than most—which is probably a good thing anyway, since my posts tend to run a bit long.

In an earlier post I discussed the Weinberg Cognitive Science Conference and my favorite speaker in the lineup, Joshua Greene. After a brief interlude from Capybara Day, it’s now time to talk about my second-favorite speaker, Molly Crockett. (Is it just me, or does the name “Molly” somehow seem incongruous with a person of such prestige?)

Molly Crockett is a neuroeconomist, though you’d never hear her say that. She doesn’t think of herself as an economist at all, but purely as a neuroscientist. I suspect this is because when she hears the word “economist” she thinks only of mainstream neoclassical economists, and she doesn’t want to be associated with such things.

Still, what she studies is clearly neuroeconomics—in fact I first learned of her work by reading the textbook Neuroeconomics, though I really got interested after watching her TED Talk. It’s one of the better TED talks (they put out so many of them now that the quality is mixed at best); in it she talks about news reporting on neuroscience, and how it is invariably ridiculous and sensationalist. This is particularly frustrating because of how amazing and important neuroscience actually is.

I could almost forgive the sensationalism if they were talking about something that’s actually fantastically boring, like, say, tax codes, or financial regulations. Of course, even then there is the Oliver Effect: You can hide a lot of evil by putting it in something boring. But Dodd-Frank is 2300 pages long; I read an earlier draft that was only (“only”) 600 pages, and it literally contained a three-page section explaining how to define the word “bank”. (Assuming direct proportionality, I would infer that there is now a twelve-page section defining the word “bank”. Hopefully not?) It doesn’t get a whole lot more snoozeworthy than that. So if you must be a bit sensationalist in order to get people to see why eliminating margin requirements and the swaps pushout rule are terrible, terrible ideas, so be it.

But neuroscience is not boring, and so sensationalism only means that news outlets are making up exciting things that aren’t true instead of saying the actually true things that are incredibly exciting.

Here, let me express without sensationalism what Molly Crockett does for a living: Molly Crockett experimentally determines how psychoactive drugs modulate moral judgments. The effects she observes are small, but they are real; and since these experiments are done using small doses for a short period of time, if these effects scale up they could be profound. This is the basic research component—when it comes to technological fruition it will be literally A Clockwork Orange. But it may be A Clockwork Orange in the best possible way: It could be, at last, a medical cure for psychopathy, a pill to make us not just happier or healthier, but better. We are not there yet by any means, but this is clearly the first step: Molly Crockett is to A Clockwork Orange roughly as Michael Faraday is to the Internet.

In one of the experiments she talked about at the conference, Crockett found that serotonin reuptake inhibitors enhance harm aversion. Serotonin reuptake inhibitors are very commonly used drugs—you are likely familiar with one called Prozac. So basically what this study means is that Prozac makes people more averse to causing pain in themselves or others. It doesn’t necessarily make them more altruistic, let alone more ethical; but it does make them more averse to causing pain. (To see the difference, imagine a 19th-century field surgeon dealing with a wounded soldier; there is no anesthetic, but an amputation must be made. Sometimes being ethical requires causing pain.)

The experiment is actually what Crockett calls “the honest Milgram Experiment”; under Milgram, the experimenters told their subjects they would be causing shocks, but no actual shocks were administered. Under Crockett, the shocks are absolutely 100% real (though they are restricted to a much lower voltage, of course). People are given competing offers, each containing an amount of money and a number of shocks to be delivered, either to themselves or to the other subject. They decide how much it’s worth to them to bear the shocks—or to make someone else bear them. It’s a classic willingness-to-pay paradigm, applied to the Milgram Experiment.

What Crockett found did not surprise me, nor do I expect it will surprise you if you imagine yourself in the same place; but it would totally knock the socks off of any neoclassical economist. People are much more willing to bear shocks for money than they are to give shocks for money. They are what Crockett terms hyper-altruistic; I would say that they are exhibiting an apparent solidarity coefficient greater than 1. They seem to be valuing others more than they value themselves.
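
To make the idea concrete, here is a minimal sketch of the kind of choice rule such an experiment estimates. This is not Crockett’s actual model, just a toy version in which each person has a dollars-per-shock exchange rate; “hyper-altruism” then simply means that the rate you demand to shock someone else is higher than the rate you demand to shock yourself. The numbers are made up:

```python
# Toy choice rule, not Crockett's actual model. Each offer is "take M
# dollars in exchange for S shocks"; kappa_self and kappa_other are
# hypothetical harm-aversion weights in dollars per shock.
# "Hyper-altruism" here just means kappa_other > kappa_self.

def accepts(money, shocks, kappa):
    """Accept the offer if the money outweighs the weighted pain."""
    return money > kappa * shocks

def minimum_price(shocks, kappa):
    """Smallest payment that makes delivering these shocks acceptable."""
    return kappa * shocks

kappa_self = 0.20    # made-up: $0.20 per shock delivered to myself
kappa_other = 0.40   # made-up: $0.40 per shock delivered to someone else

print(accepts(3.00, 10, kappa_self), accepts(3.00, 10, kappa_other))  # True, False

for shocks in (5, 10, 20):
    print("%2d shocks: need $%.2f to shock myself, $%.2f to shock the other person"
          % (shocks, minimum_price(shocks, kappa_self), minimum_price(shocks, kappa_other)))
```

With kappa_other bigger than kappa_self, this toy agent behaves as if the other person’s pain matters more than its own, which is exactly the puzzling pattern in the data.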

Normally I’d say that this makes no sense at all—why would you value some random stranger more than yourself? Valuing them equally, perhaps; and obviously only a psychopath would value them not at all; but valuing them more? And there’s no way you can actually live this way in your daily life; you’d give away all your possessions and perhaps even starve yourself to death. (I guess maybe Jesus lived that way.) But Crockett came up with a model that explains it pretty well: We are morally risk-averse. If we knew we were dealing with someone very strong who had no trouble dealing with shocks, we’d be willing to shock them a fairly large amount. But we might actually be dealing with someone very vulnerable who would suffer greatly, and we don’t want to take that chance.

I think there’s some truth to that. But her model leaves something else out that I think is quite important: We are also averse to unfairness. We don’t like the idea of raising one person while lowering another. (Obviously not so averse as to never do it—we do it all the time—but without a compelling reason we consider it morally unjustified.) So if the two subjects are in roughly the same condition (being two undergrads at Oxford, they probably are), then helping one while hurting the other is likely to create inequality where none previously existed. But if you bear the shocks yourself in order to earn the money yourself, no such inequality is created; all you do is raise yourself up, provided that you believe the money is good enough to be worth the shocks. It’s actually quite Rawlsian; lifting one person up while not affecting the other is exactly the sort of inequality you’re allowed to create according to the Difference Principle.

There’s also the fact that the subjects can’t communicate; I think if I could make a deal to share the money afterward, I’d feel better about shocking someone more in order to get us both more money. So perhaps with communication people would actually be willing to shock others more. (And the sensationalist headline would of course be: “Talking makes people hurt each other.”)

But all of these ideas are things that could be tested in future experiments! And maybe I’ll do those experiments someday, or Crockett, or one of her students. And with clever experimental paradigms we might find out all sorts of things about how the human mind works, how moral intuitions are structured, and ultimately how chemical interventions can actually change human moral behavior. The potential for both good and evil is so huge, it’s both wondrous and terrifying—but can you deny that it is exciting?

And that’s not even getting into the Basic Fact of Cognitive Science, which undermines all concepts of afterlife and theistic religion. I already talked about it before—as the sort of thing that I sort of wish I could say when I introduce myself as a cognitive scientist—but I think it bears repeating.

As Patricia Churchland said on the Colbert Report: Colbert asked, “Are you saying I have no soul?” and she answered, “Yes.” I actually prefer Daniel Dennett’s formulation: “Yes, we have a soul, but it’s made of lots of tiny robots.”

We don’t have a magical, supernatural soul (whatever that means); we don’t have an immortal soul that will rise into Heaven or be reincarnated in someone else. But we do have something worth preserving: We have minds that are capable of consciousness. We love and hate, exalt and suffer, remember and imagine, understand and wonder. And yes, we are born and we die. Once the unique electrochemical pattern that defines your consciousness is sufficiently degraded, you are gone. Nothing remains of what you were—except perhaps the memories of others, or things you have created. But even this legacy is unlikely to last forever. One day it is likely that all of us—and everything we know, and everything we have built, from the Great Pyramids to Hamlet to Beethoven’s Ninth to Principia Mathematica to the US Interstate Highway System—will be gone. I don’t have any consolation to offer you on that point; I can’t promise you that anything will survive a thousand years, much less a million. There is a chance—even a chance that at some point in the distant future, whatever humanity has become will find a way to reverse the entropic decay of the universe itself—but nothing remotely like a guarantee. In all probability you, and I, and all of this will be gone someday, and that is absolutely terrifying.

But it is also undeniably true. The fundamental link between the mind and the brain is one of the basic facts of cognitive science; indeed I like to call it The Basic Fact of Cognitive Science. We know specifically which kinds of brain damage will make you unable to form memories, comprehend language, speak language (a totally different area), see, hear, smell, feel anger, integrate emotions with logic… do I need to go on? Everything that you are is done by your brain—because you are your brain.

Now why can’t the science journalists write about that? Instead we get “The Simple Trick That Can Boost Your Confidence Immediately” and “When it Comes to Picking Art, Men & Women Just Don’t See Eye to Eye.” HuffPo is particularly awful of course; the New York Times is better, but still hardly as good as one might like. They keep trying to find ways to make it exciting—but so rarely seem to grasp how exciting it already is.

Happy Capybara Day! Or the power of culture

JDN 2457131 EDT 14:33.

Did you celebrate Capybara Day yesterday? You didn’t? Why not? We weren’t able to find any actual capybaras this year, but maybe next year we’ll be able to plan better and find a capybara at a zoo; unfortunately the nearest zoo with a capybara appears to be in Maryland. But where would we be without a capybara to consult annually on the stock market?

Right now you are probably rather confused, perhaps wondering if I’ve gone completely insane. This is because Capybara Day is a holiday of my own invention, one which only a handful of people have even heard about.

But if you think we’d never have a holiday so bizarre, think again: For all I did was make some slight modifications to Groundhog Day. Instead of consulting a groundhog about the weather every February 2, I proposed that we consult a capybara about the stock market every April 17. And if you think you have some reason why groundhogs are better at predicting the weather (perhaps because they at least have some vague notion of what weather is) than capybaras are at predicting the stock market (since they have no concept of money or numbers), think about this: Capybara Day could produce extremely accurate predictions, provided only that people actually believed it. The prophecy of rising or falling stock prices could very easily become self-fulfilling. If it were a cultural habit of ours to consult capybaras about the stock market, capybaras would become good predictors of the stock market.

That might seem a bit far-fetched, but think about this: Why is there a January Effect? (To be fair, some researchers argue that there isn’t, and the apparent correlation between higher stock prices and the month of January is simply an illusion, perhaps the result of data overfitting.)

But I think it probably is real, and moreover has some very obvious reasons behind it. In this I’m in agreement with Richard Thaler, a founder of cognitive economics who wrote about such anomalies in the 1980s. December is a time when two very culturally-important events occur: The end of the year, during which many contracts end, profits are assessed, and tax liabilities are determined; and Christmas, the greatest surge of consumer spending and consumer debt.

The first effect means that corporations are very likely to liquidate assets—particularly assets that are running at a loss—in order to minimize their tax liabilities for the year, which will drive down prices. The second effect means that consumers are in search of financing for extravagant gift purchases, and those who don’t run up credit cards may instead sell off stocks. This is if anything a more rational way of dealing with the credit constraint, since interest rates on credit cards are typically far in excess of stock returns. But this surge of selling due to credit constraints further depresses prices.

In January, things return to normal; assets are repurchased, debt is repaid. This brings prices back up to where they were, which results in a higher than normal return for January.

Neoclassical economists are loath to admit that such a seasonal effect could exist, because it violates their concept of how markets work—and to be fair, the January Effect is actually weak enough to be somewhat ambiguous. But actually it doesn’t take much deviation from neoclassical models to explain the effect: Tax policies and credit constraints are basically enough to do it, so you don’t even need to go that far into understanding human behavior. It’s perfectly rational to behave this way given the distortions that are created by taxes and credit limits, and the arbitrage opportunity is one that you can only take advantage of if you have large amounts of credit and aren’t worried about minimizing your tax liabilities. It’s important to remember just how strong the assumptions of models like CAPM truly are; in addition to the usual infinite identical psychopaths, CAPM assumes there are no taxes, no transaction costs, and unlimited access to credit. I’d say it’s amazing that it works at all, but actually, it doesn’t—check out this graph of risk versus return and tell me if you think CAPM is actually giving us any information at all about how stock markets behave. It frankly looks like you could have drawn a random line through a scatter plot and gotten just as good a fit. Knowing how strong its assumptions are, we would not expect CAPM to work—and sure enough, it doesn’t.
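
For reference, the prediction CAPM actually makes is simple enough to fit in a few lines: the expected return on an asset is the risk-free rate plus the asset’s beta times the market risk premium, so a plot of average return against beta should fall on a straight line. Here is a minimal sketch with made-up numbers:

```python
# CAPM's core prediction: expected return = risk-free rate plus beta
# times the market risk premium. All numbers are purely illustrative.

def capm_expected_return(risk_free, beta, market_return):
    return risk_free + beta * (market_return - risk_free)

risk_free = 0.02      # hypothetical 2% risk-free rate
market_return = 0.08  # hypothetical 8% expected market return

for beta in (0.5, 1.0, 1.5):
    r = capm_expected_return(risk_free, beta, market_return)
    print("beta = %.1f -> predicted return = %.1f%%" % (beta, 100 * r))
```

The empirical complaint is that when you plot actual average returns against beta, they don’t line up on anything like this line.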

Of course, that leaves the question of why our tax policy would be structured in this way—why make the year end on December 31 instead of some other date? And for that, you need to go back through hundreds of years of history, the Gregorian calendar, which in turn was influenced by Christianity, and before that the Julian calendar—in other words, culture.

Culture is one of the most powerful forces that influences human behavior—and also one of the strangest and least-understood. Economic theory is basically silent on the matter of culture. Typically it is ignored entirely, assumed to be irrelevant against the economic incentives that are the true drivers of human action. (There’s a peculiar emotion many neoclassical economists express that I can best describe as self-righteous cynicism, the attitude that we alone—i.e., economists—understand that human beings are not the noble and altruistic creatures many imagine us to be, nor beings of art and culture, but simply cold, calculating machines whose true motives are reducible to profit incentives—and all who think otherwise are being foolish and naïve; true enlightenment is understanding that human beings are infinite identical psychopaths. This is the attitude epitomized by the economist who once sent me an email with “altruism” written in scare quotes.)

Occasionally culture will be invoked as an external (in jargon, exogenous) force, to explain some aspect of human behavior that is otherwise so totally irrational that even invoking nonsensical preferences won’t make it go away. When a suicide bomber blows himself up in a crowd of people, it’s really pretty hard to explain that in terms of rational profit incentives—though I have seen it tried. (It could be self-interest at a larger scale, like families or nations—but then, isn’t that just the tribal paradigm I’ve been arguing for all along?)

But culture doesn’t just motivate us to do extreme or wildly irrational things. It motivates us all the time, often in quite beneficial ways; we wait in line, hold doors for people walking behind us, tip waiters who serve us, and vote in elections, not because anyone pressures us directly to do so (unlike, say, Australia, we do not have compulsory voting) but because it’s what we feel we ought to do. There is a sense of altruism—and altruism provides the ultimate justification for why it is right to do these things—but the primary motivator in most cases is culture: that’s what people do, and are expected to do, around here.

Indeed, even when there is a direct incentive against behaving a certain way—like criminal penalties against theft—the probability of actually suffering a direct penalty is generally so low that it really can’t be our primary motivation. Instead, the reason we don’t cheat and steal is that we think we shouldn’t, and a major part of why we think we shouldn’t is that we have cultural norms against it.

We can actually observe differences in cultural norms across countries in the laboratory. In this 2008 study by Massimo Castro (PDF), British and Italian subjects played an economic game called the public goods game, in which you can pay a cost yourself to benefit the group as a whole; the study found not only that people were less willing to benefit groups of foreigners than groups of compatriots, but also that British people were overall more generous than Italian people. This 2010 study by Gachter et al. (actually Joshua Greene talked about it last week) compared how people play the game in various cities, and found three basic patterns: In Western European and American cities such as Zurich, Copenhagen and Boston, cooperation started out high and remained high throughout; people were just cooperative in general. In Asian cities such as Chengdu and Seoul, cooperation started out low, but if people were punished for not cooperating, cooperation would improve over time, eventually reaching about the same place as in the highly cooperative cities. And in Mediterranean and Middle Eastern cities such as Istanbul, Athens, and Riyadh, cooperation started low and stayed low—even when people could be punished for not cooperating, nobody actually punished them. (These patterns are broadly consistent with the World Bank corruption ratings of these regions, by the way; Western Europe shows very low corruption, while Asia and the Mediterranean show high corruption. Of course this isn’t all that’s going on—and Asia isn’t much less corrupt than the Middle East, though this experiment might make you think so.)

Interestingly, these cultural patterns showed Melbourne as behaving more like an Asian city than a Western European one—perhaps being in the Asia-Pacific has rubbed off on Australia more than they realize.
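
For anyone unfamiliar with the game itself, here is a minimal sketch of the standard linear public goods game; the endowment and multiplier are illustrative rather than the parameters used in the studies above:

```python
# A minimal sketch of the standard linear public goods game (the
# endowment and multiplier here are illustrative). Each player keeps
# whatever they don't contribute; contributions are multiplied and
# split evenly, so contributing helps the group but costs the
# contributor.

def payoffs(contributions, endowment=20, multiplier=1.6):
    pot = multiplier * sum(contributions)
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

print(payoffs([20, 20, 20, 20]))  # everyone cooperates: 32 each
print(payoffs([0, 0, 0, 0]))      # nobody cooperates: 20 each
print(payoffs([0, 20, 20, 20]))   # the lone free-rider does best: 44 vs 24
```

Because each dollar you contribute returns less than a dollar to you personally but more than a dollar to the group as a whole, free-riding is individually tempting even though universal cooperation makes everyone better off; that tension is what the studies above are measuring.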

This is very preliminary, cutting-edge research I’m talking about, so be careful about drawing too many conclusions. But in general we’ve begun to find some fairly clear cultural differences in economic behavior across different societies. While this would not be at all surprising to a sociologist or anthropologist, it’s the sort of thing that economists have insisted for years is impossible.

This is the frontier of cognitive economics, in my opinion. We know that culture is a very powerful motivator of our behavior, and it is time for us to understand how it works—and then, how it can be changed. We know that culture can be changed—cultural norms do change over time, sometimes remarkably rapidly; but we have only a faint notion of how or why they change. Changing culture has the power to do things that simply changing policy cannot, however; policy requires enforcement, and when the enforcement is removed the behavior will often disappear. But if a cultural norm can be imparted, it could sustain itself for a thousand years without any government action at all.

The cognitive science of morality part I: Joshua Greene

JDN 2457124 EDT 15:33.

Thursday and Friday of this past week there was a short symposium at the University of Michigan called “The Cognitive Science of Moral Minds“, sponsored by the Weinberg Cognitive Science Institute, a new research institute at Michigan. It was founded by a former investment banker, because those are the only people who actually have money these days—and Michigan, like most universities, will pretty much take money from whoever offers it (including naming buildings after those people and not even changing the name after it’s revealed that the money was obtained in a $550-million fraud scheme, for which he was fined $200 million, because that’s apparently how our so-called “justice” system so-called “works”. A hint for the SEC: If the fine paid divided by the amount defrauded would be a sensible rate for a marginal income tax, that’s not a punishment). So far as I know Weinberg isn’t a white-collar criminal the way Wyly is, so that’s good at least. Still, why are we relying upon investment bankers to decide what science institutes we’ll found?

The Weinberg Institute was founded just last year. Yes, four years after I got my bachelor’s degree in cognitive science from Michigan, they decided to actually make that a full institute instead of an awkward submajor of the psychology department. Oh, and did I mention how neither the psychology nor the economics department would support my thesis research in behavioral economics, but they then brought in Daniel Kahneman as the keynote speaker at my graduation? Yeah, sometimes I think I’m a little too cutting-edge for my own good.

The symposium had Joshua Greene of Harvard and Molly Crockett of Oxford, both of whom I’d been hoping to meet for a few years now. I finally got the chance! (It also had Peter Railton—likely not hard to get, seeing as he works in our own philosophy department, but he still has some fairly interesting ideas—and some law professor I’d never heard of named John Mikhail, whose talk was really boring.) I asked Greene how I could get in on his research, and he said I should do a PhD at Harvard… which is something I’ve been trying to convince Harvard to let me do for three years now—they keep not letting me in.

Anyway… the symposium was actually quite good, and the topic of moral cognition is incredibly fascinating and of course incredibly relevant to Infinite Identical Psychopaths.

Let’s start with Greene’s work. His basic research program is studying what our brains are doing when we try to resolve moral dilemmas. Normally I’m not a huge fan of fMRI research, because it’s just so damn coarse; I like to point out that it is basically equivalent to trying to understand how your computer works by running a voltmeter over the motherboard. But Greene does a good job of not over-interpreting his results and of combining fMRI with careful experimental methods to really get a better sense of what’s going on.

There are basically two standard moral dilemmas people like to use in moral cognition research, and frankly I think this is a problem, because they differ not only in the intended way but also in many other ways; also, once you’ve heard them, they no longer surprise you, so if you are ever a subject in one moral cognition experiment, it’s going to color your responses in any others from then on. I think we should come up with a much more extensive list of dilemmas that differ along various dimensions; this would also make it much less likely that someone would already have seen them all before. A few weeks ago I made a Facebook post proposing a new dilemma of this sort, and the response, while an entirely unscientific poll, at least vaguely suggested that something may be wrong with the way Greene and others interpret the two standard dilemmas.

What are the standard dilemmas? They are called the trolley dilemma and the footbridge dilemma respectively; collectively they are trolley problems, of which there are several—but most aren’t actually used in moral cognition research for some reason.

In the trolley dilemma, there is, well, a trolley, hurtling down a track on which, for whatever reason, five people are trapped. There is another track, and you can flip a switch to divert the trolley onto that track, which will save those five people; but alas there is one other person trapped on that other track, who will now die. Do you flip the switch? Like most people, I say “Yes”.

In the footbridge dilemma, the trolley is still hurtling toward five people, but now you are above the track, standing on a footbridge beside an extremely fat man. The man is so fat, in fact, that if you push him in front of the trolley he will cause it to derail before it hits the five other people. You yourself are not fat enough to achieve this. Do you push the fat man? Like most people, I say “No.”

I actually hope you weren’t familiar with those dilemmas before, because your first impression is really useful to what I’m about to say next: Aren’t those really weird?

I mean, really weird, particularly the second one—what sort of man is fat enough to stop a trolley, yet nonetheless light enough or precariously balanced enough that I can reliably push him off a footbridge? These sorts of dilemmas are shades of the plugged-in-violinist; well, if the Society of Violin Enthusiasts ever does that, I suppose you can unplug the violinist—but what the hell does that have to do with abortion? (At the end of this post I’ve made a little appendix about the plugged-in-violinist and why it fails so miserably as an argument, but since it’s tangential I’ll move on for now.)

Even the first trolley problem, which seems a paragon of logical causality by comparison, is actually pretty bizarre. What are these people doing on the tracks? Why can’t they get off the tracks? Why is the trolley careening toward them? Why can’t the trolley be stopped some other way? Why is nobody on the trolley? What is this switch doing here, and why am I able to switch tracks despite having no knowledge, expertise or authority in trolley traffic control? Where are the proper traffic controllers? (There’s actually a pretty great sequence in Stargate: Atlantis where they have exactly this conversation.)

Now, if your goal is only to understand the core processes of human moral reasoning, using bizarre scenarios actually makes some sense; you can precisely control the variables—though, as I already said, they really don’t usually—and see what exactly it is that makes us decide right from wrong. Would you do it for five? No? What about ten? What about fifty? Just what is the marginal utility of pushing a fat man off a footbridge? What if you could flip a switch to drop him through a trapdoor instead of pushing him? (Actually Greene did do that one, and the result is that more people do it than would push him, but not as many as would flip the switch to shift the track.) You’d probably do it if he willingly agreed, right? What if you had to pay his family $100,000 in life insurance as part of the deal? Does it matter if it’s your money or someone else’s? Does it matter how much you have to pay his family? $1,000,000? $1,000? Only $10? If he only needs $1 of enticement, is that as good as giving free consent?

You can go the other way as well: So you’d flip the switch for five? What about three? What about two? Okay, you strict act-utilitarian you: Would you do it for only one? Would you flip a coin because the expected marginal utility of two random strangers is equal? You wouldn’t, would you? So now your intervention does mean something, even if you think it’s less important than maximizing the number of lives saved. What if it were 10,000,001 lives versus 10,000,000 lives? Would you nuke a slightly smaller city to save a slightly larger one? Does it matter to you which country the cities are in? Should it matter?

Greene’s account is basically the standard one, which is that the reason we won’t push the fat man off the footbridge is that we have an intense emotional reaction to physically manhandling someone, but in the case of flipping the switch we don’t have that reaction, so our minds are clearer and we can simply rationally assess that five lives matter more than one. Greene maintains that this emotional response is irrational, an atavistic holdover from our evolutionary history, and we would make society better by suppressing it and going with the “rational”, (act-)utilitarian response. (I know he knows the difference between act-utilitarian and rule-utilitarian, because he has a PhD in philosophy. Why he didn’t mention it in the lecture, I cannot say.)

He does make a pretty good case for that, including the fMRIs showing that emotion centers light up a lot more for the footbridge dilemma than for the trolley dilemma; but I must say, I’m really not quite convinced.

Does flipping the switch to drop him through a trapdoor yield more support because it’s emotionally more distant? Or because it makes a bit more sense? We’ve solved the “Why can I push him hard enough?” problem, albeit not the “How is he heavy enough to stop a trolley?” problem.

I’ve also thought about ways to make the gruesome manhandling happen but nonetheless make more logical sense, and the best I’ve come up with is what we might call the lion dilemma: There is a hungry lion about to attack a group of five children and eat them all. You are standing on a ridge above, where the lion can’t easily get to you; if he eats the kids you’ll easily escape. Beside you is a fat man who weighs as much as the five children combined. If you push him off the ridge, he’ll be injured and unable to run, so the lion will attack him first, and then after eating him the lion will no longer be hungry and will leave the children alone. You yourself aren’t fat enough to make this work, however; you only weigh as much as two of the kids, not all five. You don’t have any weapons to kill the lion or anyone you could call for help, but you are sure you can push the fat man off the ridge quickly enough. Do you push the fat man off the ridge? I think I do—as did most of my friends in my aforementioned totally unscientific Facebook poll—though I’m not as sure of that as I was about flipping the switch. Yet nobody can deny the physicality of my action; not only am I pushing him just as before, he’s not going to be merely run over by a trolley, he’s going to be mauled and eaten by a lion. Of course, I might actually try something else, like yelling, “Run, kids!” and sliding down with the fat man to try to wrestle the lion together; and again we can certainly ask what the seven of us are doing out here unarmed and alone with lions about. But given the choice between the kids being eaten, myself and three of the kids being eaten, or the fat man being eaten, the last one does actually seem like the least-bad option.

Another good one, actually by the same Judith Thomson of plugged-in-violinist fame (for once her dilemma actually makes some sense; seriously, read A Defense of Abortion and you’ll swear she was writing it on psilocybin), is the transplant dilemma: You’re a doctor in a hospital where there are five patients dying of different organ failures—two kidneys, one liver, one heart, and one lung, let’s say. You are one of the greatest transplant surgeons of all time, and there is no doubt in your mind that if you had a viable organ for each of them, you could save their lives—but you don’t. Yet as it so happens, a young man is visiting town and came to the hospital after severely breaking his leg in a skateboarding accident. He is otherwise in perfect health, and what’s more, he’s an organ donor and actually a match for all five of your dying patients. You could quietly take him into the surgical wing, give him a little too much anesthesia “by accident” as you operate on his leg, and then take his organs and save all five other patients. Nobody would ever know. Do you do it? Of course you don’t, you’re not a monster. But… you could save five by killing one, right? Is it just your irrational emotional aversion to cutting people open? No, you’re a surgeon—and I think you’ll be happy to know that actual surgeons agree that this is not the sort of thing they should be doing, despite the fact that they obviously have no problem cutting people open for the greater good all the time. The aversion to harming your own patient may come from (or be the source of) the Hippocratic Oath—are we prepared to say that the Hippocratic Oath is irrational?

I also came up with another really interesting one I’ll call the philanthropist assassin dilemma. One day, as you are walking past a dark alley, a shady figure pops out and makes you an offer: If you take this little vial of cyanide and pour it into the coffee of that man across the street while he’s in the bathroom, a donation of $100,000 will be made to UNICEF. If you refuse, the shady character will keep the $100,000 for himself. Never mind the weirdness—they’re all weird, and unlike the footbridge dilemma this one actually could happen even though it probably won’t. Assume that despite being a murderous assassin this fellow really intends to make the donation if you help him carry out this murder. $100,000 to UNICEF would probably save the lives of over a hundred children. Furthermore, you can give the empty vial back to the philanthropist assassin, and since there’s no logical connection between you and the victim, there’s basically no chance you’d ever be caught even if he is. (Also, how can you care more about your own freedom than the lives of a hundred children?) How can you justify not doing it? It’s just one man you don’t know, who apparently did something bad enough to draw the ire of philanthropist assassins, against the lives of a hundred innocent children! Yet I’m sure you share my strong intuition that you should not take the offer. It doesn’t require manhandling anybody—just a quick little pour into a cup of coffee—so that can’t be it. A hundred children! And yet I still don’t see how I could carry out this murder. Is that irrational, as Greene claims? Should we be prepared to carry out such a murder if the opportunity ever arises?

Okay, how about this one then, the white-collar criminal dilemma? You are a highly-skilled hacker, and you could hack into the accounts of a major bank and steal a few dollars from each account, gathering a total of $1 billion that you can then immediately donate to UNICEF, covering their entire operating budget for this year and possibly next year as well, saving the lives of countless children—perhaps literally millions of children. Should you do it? Honestly in this case I think maybe you should! (Maybe Sam Wyly isn’t so bad after all? He donated his stolen money to a university, which isn’t nearly as good as UNICEF… also he stole $550 million and donated $10 million, so there’s that.) But now suppose that you can only get into the system if you physically break into the bank and kill several of the guards. What are a handful of guards against millions of children? Yet you sound like a Well-Intentioned Extremist in a Hollywood blockbuster (seriously, someone should make this movie), and your action certainly doesn’t seem as unambiguously heroic as one might expect of an act that saves the lives of a million children and kills only a handful of people. Why is it that I think we should lobby governments and corporations to make these donations voluntarily, even if it takes a decade longer, rather than finding someone who can steal the money by force? Children will die in the meantime! Don’t those children matter?

I don’t have a good answer, actually. Maybe Greene is right and it’s just this atavistic emotional response that prevents me from seeing that these acts would be justified. But then again, maybe it’s not—maybe there’s something more here that Greene is missing.

And that brings me back to the act-utilitarian versus rule-utilitarian distinction, which Greene ignored in his lecture. In act-utilitarian terms, obviously you save the children; it’s a no-brainer, 100 children > 1 hapless coffee-drinker and 1,000,000 children >> 10 guards. But in rule-utilitarian terms, things come out a bit different. What kind of society would we live in, if at any moment we could fear the wrath of philanthropist assassins? Right now, there’s plenty of money in the bank for anyone to steal, but what would happen to our financial system if we didn’t punish bank robbers so long as they spent the money on the right charities? All of it, or just most of it? And which charities are the right charities? What would our medical system be like if we knew that our organs might be harvested at any time so long as there were two or more available recipients? Despite these dilemmas actually being a good deal more realistic than the standard trolley problems, the act-utilitarian response still relies upon assuming that this is an exceptional circumstance which will never be heard about or occur again. Yet those are by definition precisely the sort of moral principles we can’t live our lives by.

This post has already gotten really long, so I won’t even get into Molly Crockett’s talk until a later post. I probably won’t do it as the next post either, but the one after that, because next Friday is Capybara Day (“What?” you say? Stay tuned).

Appendix: The plugged-in-violinist


What do we mean by “risk”?

JDN 2457118 EDT 20:50.

In an earlier post I talked about how, empirically, expected utility theory can’t explain the fact that we buy both insurance and lottery tickets, and how, normatively, it really doesn’t make a lot of sense to buy lottery tickets, precisely because of what expected utility theory says about them.

But today I’d like to talk about one of the major problems with expected utility theory, which I consider one of the major unexplored frontiers of economics: Expected utility theory treats all kinds of risk exactly the same.

In reality there are three kinds of risk: The first is what I’ll call classical risk, which is like the game of roulette; the odds are well-defined and known in advance, and you can play the game a large number of times and average out the results. This is where expected utility theory really shines; if you are dealing with classical risk, expected utility is obviously the way to go, and von Neumann and Morgenstern quite literally proved mathematically that (given a few plausible axioms about preferences) anything else is irrational.

The second is uncertainty; the distinction between risk and uncertainty was most famously expounded by Frank Knight, an economist at the University of Chicago. (Chicago is a funny place; on the one hand they are a haven for the madness that is Austrian economics; on the other hand they have led the charge in behavioral and cognitive economics. Knight was a perfect fit, because he was a little of both.) Uncertainty is risk under ill-defined or unknown probabilities, where there is no way to play the game twice. Most real-world “risk” is actually uncertainty: Will the People’s Republic of China collapse in the 21st century? How many deaths will global warming cause? Will human beings ever colonize Mars? Is P = NP? None of those questions have known answers, nor can we clearly assign probabilities to them; either P = NP or it isn’t, as a matter of mathematical fact (or, like the continuum hypothesis, the question is independent of ZFC, which would be the most bizarre possibility of all), and it’s not as if someone is rolling dice to decide how many people global warming will kill. You can think of this in terms of “possible worlds”, though actually most modal theorists would tell you that we can’t even say that P=NP is possible (nor can we say it isn’t possible!) because, as a necessary statement, it can only be possible if it is actually true; this follows from the S5 axiom of modal logic, and you know what, even I am already bored with that sentence. Clearly there is some sense in which P=NP is possible, and if that’s not what modal logic says then so much the worse for modal logic. I am not a modal realist (not to be confused with a moral realist, which I am); I don’t think that possible worlds are real things out there somewhere. I think possibility is ultimately a statement about ignorance, and since we don’t know that P=NP is false, I contend that it is possible that it is true. Put another way, it would not be obviously irrational to place a bet that P=NP will be proved true by 2100; but if we can’t even say that it is possible, how can that be?

Anyway, that’s the mess that uncertainty puts us in, and almost everything is made of uncertainty. Expected utility theory basically falls apart under uncertainty; it doesn’t even know how to give an answer, let alone one that is correct. In reality what we usually end up doing is waving our hands and trying to assign a probability anyway—because we simply don’t know what else to do.

The third one is not one that’s usually talked about, yet I think it’s quite important; I will call it one-shot risk. The probabilities are known or at least reasonably well approximated, but you only get to play the game once. You can also generalize to few-shot risk, where you can play a small number of times, where “small” is defined relative to the probabilities involved; this is a little vaguer, but basically what I have in mind is that even though you can play more than once, you can’t play enough times to realistically expect the rarest outcomes to occur. Expected utility theory almost works on one-shot and few-shot risk, but you have to be very careful about taking it literally.

I think an example makes things clearer: Playing the lottery is a few-shot risk. You can play the lottery multiple times, yes; potentially hundreds of times in fact. But hundreds of times is nothing compared to the 1 in 400 million chance you have of actually winning. You know that probability; it can be computed exactly from the rules of the game. But nonetheless expected utility theory runs into some serious problems here.

If we were playing a classical risk game, expected utility would obviously be right. So suppose, for example, that you know you will live one billion years, and you are offered the chance to play a game (somehow compensating for the mind-boggling levels of inflation, economic growth, transhuman transcendence, and/or total extinction that will occur during that vast expanse of time) in which each year you can either have a guaranteed $40,000 of inflation-adjusted income, or else a 99.999,999,75% chance of $39,999 of inflation-adjusted income and a 0.000,000,25% chance of $100 million in inflation-adjusted income—which will disappear at the end of the year, along with everything you bought with it, so that each year you start afresh. Should you take the second option? Absolutely not, and expected utility theory explains why; that one or two years where you’ll experience 8 QALY per year isn’t worth dropping from 4.602060 QALY per year to 4.602049 QALY per year for the other 999,999,998 years. (Can you even fathom how long that is? From here, one billion years is all the way back to the Mesoproterozoic Era, which we think is when single-celled organisms first began to reproduce sexually. The gain is to be Mitt Romney for a year or two; the loss is the value of a dollar each year over and over again for the entire time that has elapsed since the existence of gamete meiosis.) I think it goes without saying that this whole situation is almost unimaginably bizarre. Yet that is implicitly what we’re assuming when we use expected utility theory to assess whether you should buy lottery tickets.
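
In case the QALY figures seem to come out of nowhere: treating utility as the base-10 logarithm of annual income reproduces them. A quick check of the arithmetic under that assumption:

```python
from math import log10

# Sketch of the arithmetic behind the billion-year example, assuming
# utility (QALY per year) is the base-10 logarithm of annual income,
# which is the convention that reproduces the numbers in the text.

p_win = 0.0000000025          # 0.000,000,25%
u_safe = log10(40_000)        # about 4.602060 QALY/year, guaranteed
u_lose = log10(39_999)        # about 4.602049 QALY/year
u_win = log10(100_000_000)    # 8 QALY/year, for that one year only

u_gamble = (1 - p_win) * u_lose + p_win * u_win
print("guaranteed:", round(u_safe, 6))    # 4.60206
print("gamble:    ", round(u_gamble, 6))  # 4.602049 -- lower, so don't take it
```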

The real situation is more like this: There’s one world you can end up in, and almost certainly will, in which you buy lottery tickets every year and end up with an income of $39,999 instead of $40,000. There is another world, so unlikely as to be barely worth considering, yet not totally impossible, in which you get $100 million and you are completely set for life and able to live however you want for the rest of your life. Averaging over those two worlds is a really weird thing to do; what do we even mean by doing that? You don’t experience one world 0.000,000,25% as much as the other (whereas in the billion-year scenario, that is exactly what you do); you only experience one world or the other.

In fact, it’s worse than this, because if a classical risk game is such that you can play it as many times as you want as quickly as you want, we don’t even need expected utility theory—expected money theory will do. Suppose you are offered a game in which you have a 50% chance of winning $200,000 and a 50% chance of losing $50,000, which you can play up to once an hour for the next 48 hours, and you will be extended any credit necessary to cover any losses; you’d be insane not to play. 99.9% of the time, your wealth at the end of the two days will be somewhere between $850,000 and $6,180,000. While you may lose money for a while, it is vanishingly unlikely that you will end up losing more than you gain.
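
Here is a quick sketch of that calculation; the exact endpoints of a 99.9% interval depend on how you cut the tails, so they may come out slightly different from the figures above, but the point about losses being vanishingly unlikely is easy to verify:

```python
from math import comb

# Over 48 plays, final wealth is $200,000 per win minus $50,000 per
# loss, and the number of wins is Binomial(48, 0.5).

n = 48

def wealth(wins):
    return 200_000 * wins - 50_000 * (n - wins)

def prob_at_most(k):
    return sum(comb(n, i) for i in range(k + 1)) / 2 ** n

# Probability of ending the two days with a net loss (9 or fewer wins):
print("P(net loss) =", prob_at_most(9))   # about 8e-6, i.e. less than one in 100,000

# A central interval covering at least 99.9% of outcomes:
lo = next(k for k in range(n + 1) if prob_at_most(k) > 0.0005)
hi = n - lo
print("99.9%% of the time you end with between $%d and $%d"
      % (wealth(lo), wealth(hi)))
```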

Yet if you are offered the chance to play this game only once, you probably should not take it, and the reason why then comes back to expected utility. If you have good access to credit you might consider it, because going $50,000 into debt is bad but not unbearably so (I did, going to college) and gaining $200,000 might actually be enough better to justify the risk. Then the effect can be averaged over your lifetime; let’s say you make $50,000 per year over 40 years. Losing $50,000 means making your average income $48,750, while gaining $200,000 means making your average income $55,000; so your QALY per year go from a guaranteed 4.70 to a 50% chance of 4.69 and a 50% chance of 4.74; that raises your expected utility from 4.70 to 4.715.

But if you don’t have good access to credit and your income for this year is $50,000, then losing $50,000 means losing everything you have and living in poverty or even starving to death. The benefits of raising your income by $200,000 this year aren’t nearly great enough to take that chance. Your expected utility goes from a guaranteed 4.70 to a 50% chance of 5.30 and a 50% chance of zero, for an expected utility of only 2.65.
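
And here is the same base-10 log utility of income applied to the one-shot game, under the reading the numbers above suggest: with credit the gain or loss gets averaged over a 40-year working life, while without credit everything rides on this year, with the winning year counted as $200,000 of income and total ruin counted as zero utility:

```python
from math import log10

# Same base-10 log utility of income, applied to the one-shot game.
# With credit, the win or loss is averaged over a 40-year working life
# at $50,000 per year; without credit, everything rides on this year,
# with the winning year counted as $200,000 of income (as in the text)
# and losing everything counted as zero utility.

def u(income):
    return log10(income)

u_baseline = u(50_000)                    # about 4.70

# With access to credit:
u_lose = u((50_000 * 40 - 50_000) / 40)   # average income 48,750 -> about 4.69
u_win = u((50_000 * 40 + 200_000) / 40)   # average income 55,000 -> about 4.74
print("with credit:    %.3f vs %.3f" % (u_baseline, 0.5 * u_lose + 0.5 * u_win))

# Without access to credit:
u_win_now = u(200_000)                    # 5.30
u_lose_now = 0.0                          # ruin: utility taken as zero
print("without credit: %.3f vs %.3f" % (u_baseline, 0.5 * u_win_now + 0.5 * u_lose_now))
```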

So expected utility theory only seems to properly apply if we can play the game enough times that the improbable events are likely to happen a few times, but not so many times that we can be sure our money will approach the average. And that’s assuming we know the odds and we aren’t just stuck with uncertainty.

Unfortunately, I don’t have a good alternative; so far expected utility theory may actually be the best we have. But it remains deeply unsatisfying, and I like to think we’ll one day come up with something better.