The Cognitive Science of Morality Part II: Molly Crockett

JDN 2457140 EDT 20:16.

This weekend has been very busy for me, so this post is going to be shorter than most—which is probably a good thing anyway, since my posts tend to run a bit long.

In an earlier post I discussed the Weinberg Cognitive Science Conference and my favorite speaker in the lineup, Joshua Greene. After a brief interlude from Capybara Day, it’s now time to talk about my second-favorite speaker, Molly Crockett. (Is it just me, or does the name “Molly” somehow seem incongruous with a person of such prestige?)

Molly Crockett is a neuroeconomist, though you’d never hear her say that. She doesn’t think of herself as an economist at all, but purely as a neuroscientist. I suspect this is because when she hears the word “economist” she thinks of only mainstream neoclassical economists, and she doesn’t want to be associated with such things.

Still, what she studies is clearly neuroeconomics—in fact I first learned of her work by reading the textbook Neuroeconomics, though I really got interested after watching her TED Talk. It's one of the better TED Talks (they put out so many of them now that the quality is mixed at best); in it she talks about news reporting on neuroscience, and how it is invariably ridiculous and sensationalist. This is particularly frustrating because of how amazing and important neuroscience actually is.

I could almost forgive the sensationalism if they were talking about something that’s actually fantastically boring, like, say, tax codes, or financial regulations. Of course, even then there is the Oliver Effect: You can hide a lot of evil by putting it in something boring. But Dodd-Frank is 2300 pages long; I read an earlier draft that was only (“only”) 600 pages, and it literally contained a three-page section explaining how to define the word “bank”. (Assuming direct proportionality, I would infer that there is now a twelve-page section defining the word “bank”. Hopefully not?) It doesn’t get a whole lot more snoozeworthy than that. So if you must be a bit sensationalist in order to get people to see why eliminating margin requirements and the swaps pushout rule are terrible, terrible ideas, so be it.

But neuroscience is not boring, and so sensationalism only means that news outlets are making up exciting things that aren’t true instead of saying the actually true things that are incredibly exciting.

Here, let me express without sensationalism what Molly Crockett does for a living: Molly Crockett experimentally determines how psychoactive drugs modulate moral judgments. The effects she observes are small, but they are real; and since these experiments are done using small doses for a short period of time, if these effects scale up they could be profound. This is the basic research component—when it comes to technological fruition it will be literally A Clockwork Orange. But it may be A Clockwork Orange in the best possible way: It could be, at last, a medical cure for psychopathy, a pill to make us not just happier or healthier, but better. We are not there yet by any means, but this is clearly the first step: Molly Crockett is to A Clockwork Orange roughly as Michael Faraday is to the Internet.

In one of the experiments she talked about at the conference, Crockett found that serotonin reuptake inhibitors enhance harm aversion. Serotonin reuptake inhibitors are very commonly used drugs—you are likely familiar with one called Prozac. So basically what this study means is that Prozac makes people more averse to causing pain in themselves or others. It doesn't necessarily make them more altruistic, let alone more ethical; but it does make them more averse to causing pain. (To see the difference, imagine a 19th-century field surgeon dealing with a wounded soldier: there is no anesthetic, but an amputation must be performed. Sometimes being ethical requires causing pain.)

The experiment is actually what Crockett calls “the honest Milgram Experiment”; under Milgram, the experimenters told their subjects they would be causing shocks, but no actual shocks were administered. Under Crockett, the shocks are absolutely 100% real (though restricted to a much lower voltage, of course). Subjects are given competing offers, each pairing an amount of money with a number of shocks to be delivered either to themselves or to the other subject. They decide how much it's worth to them to bear the shocks—or to make someone else bear them. It's a classic willingness-to-pay paradigm, applied to the Milgram Experiment.
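To make the paradigm concrete, here is a minimal sketch of how one might estimate a subject's money-for-shocks exchange rate from a series of accept/reject decisions. This is my own illustration, not Crockett's actual analysis; the offer values, the softmax choice model, and the parameter names are all assumptions for the sake of the example.

```python
# Minimal sketch (illustration only, not Crockett's analysis code):
# estimate the "price per shock" at which a subject is indifferent,
# from a handful of accept/reject decisions. The offers are made up.
import numpy as np
from scipy.optimize import minimize

# Each trial: (money offered, number of shocks, 1 if accepted else 0)
trials = [
    (2.0, 1, 1), (4.0, 4, 1), (1.0, 3, 0),
    (6.0, 5, 1), (2.0, 4, 0), (3.0, 2, 1),
]

def neg_log_likelihood(params, data):
    """Softmax choice model: utility of accepting = money - k * shocks."""
    k, temp = params  # k = disutility per shock, temp = choice noise
    ll = 0.0
    for money, shocks, accepted in data:
        u = money - k * shocks
        p_accept = 1.0 / (1.0 + np.exp(-u / temp))
        p = p_accept if accepted else 1.0 - p_accept
        ll += np.log(max(p, 1e-12))
    return -ll

result = minimize(neg_log_likelihood, x0=[1.0, 1.0], args=(trials,),
                  bounds=[(0.01, 20.0), (0.05, 10.0)])
k_hat, temp_hat = result.x
print(f"Estimated disutility per shock: about ${k_hat:.2f}")
```

Fitting this separately to the "shocks to me" trials and the "shocks to the other person" trials, and comparing the two estimates of k, is essentially the comparison the result below rests on.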

What Crockett found did not surprise me, nor do I expect it will surprise you if you imagine yourself in the same place; but it would totally knock the socks off of any neoclassical economist. People are much more willing to bear shocks for money than they are to give shocks for money. They are what Crockett terms hyper-altruistic; I would say that they are exhibiting an apparent solidarity coefficient greater than 1. They seem to be valuing others more than they value themselves.
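To put that in symbols (my own notation, not Crockett's), write the tradeoff as a simple linear utility, with the solidarity coefficient defined as the ratio of the two harm weights:

$$U = m - \kappa_{\text{self}} S_{\text{self}} - \kappa_{\text{other}} S_{\text{other}}, \qquad s \equiv \frac{\kappa_{\text{other}}}{\kappa_{\text{self}}}.$$

Here $m$ is money received and $S$ is the number of shocks. Demanding more money per shock delivered to the other person than per shock delivered to yourself means $\kappa_{\text{other}} > \kappa_{\text{self}}$, that is, $s > 1$.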

Normally I'd say that this makes no sense at all—why would you value some random stranger more than yourself? Valuing them equally, perhaps; and obviously only a psychopath would value them not at all; but more than yourself? And there's no way you could actually live this way in your daily life; you'd give away all your possessions and perhaps even starve yourself to death. (I guess maybe Jesus lived that way.) But Crockett came up with a model that explains it pretty well: We are morally risk-averse. If we knew we were dealing with someone very strong who had no trouble bearing shocks, we'd be willing to shock them a fairly large amount. But we might actually be dealing with someone very vulnerable who would suffer greatly, and we don't want to take that chance.
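One way to formalize that idea (my gloss, not Crockett's exact model): let the harm the other person suffers per shock be an uncertain quantity $H$, and let the moral cost you assign to imposing harm be a convex function $g$. Then Jensen's inequality gives

$$\mathbb{E}[\,g(H)\,] \;\ge\; g(\mathbb{E}[H]),$$

so uncertainty about how vulnerable the other person is raises the expected moral cost of shocking them, even if their average vulnerability is no higher than your own, which you know precisely.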

I think there's some truth to that. But her model leaves something else out that I think is quite important: We are also averse to unfairness. We don't like the idea of raising one person while lowering another. (Obviously not so averse as to never do it—we do it all the time—but without a compelling reason we consider it morally unjustified.) So if the two subjects are in roughly the same condition (being two undergrads at Oxford, they probably are), then helping one while hurting the other is likely to create inequality where none previously existed. But if you hurt yourself in order to help yourself, no such inequality is created; all you do is raise yourself up, provided you believe the money is worth the shocks. It's actually quite Rawlsian; lifting one person up while not affecting the other is exactly the sort of inequality you're allowed to create according to the Difference Principle.
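The standard way to write that kind of fairness concern down is the Fehr–Schmidt inequality-aversion utility (not part of Crockett's model, but it captures the intuition):

$$U_i(x_i, x_j) = x_i - \alpha_i \max(x_j - x_i,\, 0) - \beta_i \max(x_i - x_j,\, 0),$$

where $x_i$ and $x_j$ are the two subjects' overall payoffs (money minus the pain of any shocks). The $\beta$ term penalizes coming out ahead of the other person, which is exactly what profiting from shocks delivered to them does; profiting from shocks delivered to yourself leaves $x_j$ untouched.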

There's also the fact that the subjects can't communicate; I think if I could make a deal to share the money afterward, I'd feel better about shocking someone more in order to get us both more money. So perhaps with communication people would actually be willing to shock others more. (And the sensationalist headline would of course be: “Talking makes people hurt each other.”)

But all of these ideas are things that could be tested in future experiments! And maybe I’ll do those experiments someday, or Crockett, or one of her students. And with clever experimental paradigms we might find out all sorts of things about how the human mind works, how moral intuitions are structured, and ultimately how chemical interventions can actually change human moral behavior. The potential for both good and evil is so huge, it’s both wondrous and terrifying—but can you deny that it is exciting?

And that's not even getting into the Basic Fact of Cognitive Science, which undermines all concepts of afterlife and theistic religion. I already talked about it before—as the kind of thing I sort of wish I could say when I introduce myself as a cognitive scientist—but I think it bears repeating.

As Patricia Churchland said on the Colbert Report: Colbert asked, “Are you saying I have no soul?” and she answered, “Yes.” I actually prefer Daniel Dennett’s formulation: “Yes, we have a soul, but it’s made of lots of tiny robots.”

We don’t have a magical, supernatural soul (whatever that means); we don’t have an immortal soul that will rise into Heaven or be reincarnated in someone else. But we do have something worth preserving: We have minds that are capable of consciousness. We love and hate, exalt and suffer, remember and imagine, understand and wonder. And yes, we are born and we die. Once the unique electrochemical pattern that defines your consciousness is sufficiently degraded, you are gone. Nothing remains of what you were—except perhaps the memories of others, or things you have created. But even this legacy is unlikely to last forever. One day it is likely that all of us—and everything we know, and everything we have built, from the Great Pyramids to Hamlet to Beethoven’s Ninth to Principia Mathematica to the US Interstate Highway System—will be gone. I don’t have any consolation to offer you on that point; I can’t promise you that anything will survive a thousand years, much less a million. There is a chance—even a chance that at some point in the distant future, whatever humanity has become will find a way to reverse the entropic decay of the universe itself—but nothing remotely like a guarantee. In all probability you, and I, and all of this will be gone someday, and that is absolutely terrifying.

But it is also undeniably true. The fundamental link between the mind and the brain is one of the basic facts of cognitive science; indeed I like to call it The Basic Fact of Cognitive Science. We know specifically which kinds of brain damage will make you unable to form memories, comprehend language, speak language (a totally different area), see, hear, smell, feel anger, integrate emotions with logic… do I need to go on? Everything that you are is done by your brain—because you are your brain.

Now why can’t the science journalists write about that? Instead we get “The Simple Trick That Can Boost Your Confidence Immediately” and “When it Comes to Picking Art, Men & Women Just Don’t See Eye to Eye.” HuffPo is particularly awful of course; the New York Times is better, but still hardly as good as one might like. They keep trying to find ways to make it exciting—but so rarely seem to grasp how exciting it already is.

Happy Capybara Day! Or the power of culture

JDN 2457131 EDT 14:33.

Did you celebrate Capybara Day yesterday? You didn’t? Why not? We weren’t able to find any actual capybaras this year, but maybe next year we’ll be able to plan better and find a capybara at a zoo; unfortunately the nearest zoo with a capybara appears to be in Maryland. But where would we be without a capybara to consult annually on the stock market?

Right now you are probably rather confused, perhaps wondering if I’ve gone completely insane. This is because Capybara Day is a holiday of my own invention, one which only a handful of people have even heard about.

But if you think we’d never have a holiday so bizarre, think again: For all I did was make some slight modifications to Groundhog Day. Instead of consulting a groundhog about the weather every February 2, I proposed that we consult a capybara about the stock market every April 17. And if you think you have some reason why groundhogs are better at predicting the weather (perhaps because they at least have some vague notion of what weather is) than capybaras are at predicting the stock market (since they have no concept of money or numbers), think about this: Capybara Day could produce extremely accurate predictions, provided only that people actually believed it. The prophecy of rising or falling stock prices could very easily become self-fulfilling. If it were a cultural habit of ours to consult capybaras about the stock market, capybaras would become good predictors of the stock market.

That might seem a bit far-fetched, but think about this: Why is there a January Effect? (To be fair, some researchers argue that there isn’t, and the apparent correlation between higher stock prices and the month of January is simply an illusion, perhaps the result of data overfitting.)

But I think it probably is real, and moreover has some very obvious reasons behind it. In this I'm in agreement with Richard Thaler, a founder of cognitive economics who wrote about such anomalies in the 1980s. December is a time when two very culturally important events occur: The end of the year, during which many contracts end, profits are assessed, and tax liabilities are determined; and Christmas, the greatest surge of consumer spending and consumer debt.

The first effect means that corporations are very likely to liquidate assets—particularly assets that are running at a loss—in order to minimize their tax liabilities for the year, which will drive down prices. The second effect means that consumers are in search of financing for extravagant gift purchases, and those who don’t run up credit cards may instead sell off stocks. This is if anything a more rational way of dealing with the credit constraint, since interest rates on credit cards are typically far in excess of stock returns. But this surge of selling due to credit constraints further depresses prices.

In January, things return to normal; assets are repurchased, debt is repaid. This brings prices back up to where they were, which results in a higher than normal return for January.
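A toy calculation, with made-up numbers, purely to illustrate the mechanism: if year-end selling pressure pushes prices a few percent below fundamentals in December and they revert in January, the measured January return is unusually high even though nothing real has changed.

```python
# Toy illustration with hypothetical numbers (not real market data):
# temporary December selling pressure followed by January reversion.
fundamental_value = 100.0
december_selling_pressure = 0.03  # assume a 3% temporary price depression

november_price = fundamental_value
december_price = fundamental_value * (1 - december_selling_pressure)
january_price = fundamental_value  # prices revert once the selling stops

december_return = december_price / november_price - 1
january_return = january_price / december_price - 1
print(f"December return: {december_return:+.1%}")  # about -3.0%
print(f"January return:  {january_return:+.1%}")   # about +3.1%
```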

Neoclassical economists are loath to admit that such a seasonal effect could exist, because it violates their concept of how markets work—and to be fair, the January Effect is actually weak enough to be somewhat ambiguous. But actually it doesn’t take much deviation from neoclassical models to explain the effect: Tax policies and credit constraints are basically enough to do it, so you don’t even need to go that far into understanding human behavior. It’s perfectly rational to behave this way given the distortions that are created by taxes and credit limits, and the arbitrage opportunity is one that you can only take advantage of if you have large amounts of credit and aren’t worried about minimizing your tax liabilities. It’s important to remember just how strong the assumptions of models like CAPM truly are; in addition to the usual infinite identical psychopaths, CAPM assumes there are no taxes, no transaction costs, and unlimited access to credit. I’d say it’s amazing that it works at all, but actually, it doesn’t—check out this graph of risk versus return and tell me if you think CAPM is actually giving us any information at all about how stock markets behave. It frankly looks like you could have drawn a random line through a scatter plot and gotten just as good a fit. Knowing how strong its assumptions are, we would not expect CAPM to work—and sure enough, it doesn’t.
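For reference, the relation that graph is testing is the textbook CAPM prediction that expected returns line up on a straight line in $\beta$:

$$\mathbb{E}[R_i] = R_f + \beta_i\left(\mathbb{E}[R_m] - R_f\right), \qquad \beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)},$$

where $R_f$ is the risk-free rate and $R_m$ is the market return. The scatter in that graph shows nothing like such a line.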

Of course, that leaves the question of why our tax policy would be structured in this way—why make the year end on December 31 instead of some other date? And for that, you need to go back through hundreds of years of history: the Gregorian calendar, which was shaped by Christianity, and before that the Julian calendar—in other words, culture.

Culture is one of the most powerful forces that influences human behavior—and also one of the strangest and least-understood. Economic theory is basically silent on the matter of culture. Typically it is ignored entirely, assumed to be irrelevant against the economic incentives that are the true drivers of human action. (There’s a peculiar emotion many neoclassical economists express that I can best describe as self-righteous cynicism, the attitude that we alone—i.e., economists—understand that human beings are not the noble and altruistic creatures many imagine us to be, nor beings of art and culture, but simply cold, calculating machines whose true motives are reducible to profit incentives—and all who think otherwise are being foolish and naïve; true enlightenment is understanding that human beings are infinite identical psychopaths. This is the attitude epitomized by the economist who once sent me an email with “altruism” written in scare quotes.)

Occasionally culture will be invoked as an external (in jargon, exogenous) force, to explain some aspect of human behavior that is otherwise so totally irrational that even invoking nonsensical preferences won’t make it go away. When a suicide bomber blows himself up in a crowd of people, it’s really pretty hard to explain that in terms of rational profit incentives—though I have seen it tried. (It could be self-interest at a larger scale, like families or nations—but then, isn’t that just the tribal paradigm I’ve been arguing for all along?)

But culture doesn't just motivate us to do extreme or wildly irrational things. It motivates us all the time, often in quite beneficial ways; we wait in line, hold doors for people walking behind us, tip waiters who serve us, and vote in elections, not because anyone pressures us directly to do so (unlike Australia, say, we do not have compulsory voting) but because it's what we feel we ought to do. There is a sense of altruism—and altruism provides the ultimate justification for why it is right to do these things—but the primary motivator in most cases is culture: that's what people do, and are expected to do, around here.

Indeed, even when there is a direct incentive against behaving a certain way—like criminal penalties against theft—the probability of actually suffering a direct penalty is generally so low that it really can’t be our primary motivation. Instead, the reason we don’t cheat and steal is that we think we shouldn’t, and a major part of why we think we shouldn’t is that we have cultural norms against it.

We can actually observe differences in cultural norms across countries in the laboratory. This 2008 study by Massimo Castro (PDF), comparing British and Italian people playing an economic game called the public goods game, in which you can pay a cost yourself to benefit the group as a whole, found not only that people were less willing to benefit groups of foreigners than groups of compatriots, but also that British people were overall more generous than Italian people. This 2010 study by Gachter et al. (which Joshua Greene actually talked about last week) compared how people play the game in various cities and found three basic patterns: In Western European and American cities such as Zurich, Copenhagen, and Boston, cooperation started out high and remained high throughout; people were just cooperative in general. In Asian cities such as Chengdu and Seoul, cooperation started out low, but if people were punished for not cooperating, cooperation would improve over time, eventually reaching about the same level as in the highly cooperative cities. And in Mediterranean and Middle Eastern cities such as Istanbul, Athens, and Riyadh, cooperation started low and stayed low—even when people could be punished for not cooperating, nobody actually punished them. (These patterns are broadly consistent with the World Bank corruption ratings of these regions, by the way: Western Europe shows very low corruption, while Asia, the Mediterranean, and the Middle East show high corruption. Of course this isn't all that's going on—and Asia isn't much less corrupt than the Middle East, though this experiment might make you think so.)
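For concreteness, here is the payoff structure of the public goods game these studies use. This is the standard textbook setup; the endowment and multiplier below are hypothetical numbers of my own, not the values used by Castro or by Gachter et al.

```python
# Standard public goods game payoffs (parameters are hypothetical).
def public_goods_payoffs(contributions, endowment=20.0, multiplier=1.6):
    """Each player keeps what they don't contribute, plus an equal share
    of the contributed pool after it is multiplied."""
    n = len(contributions)
    pool = multiplier * sum(contributions)
    return [endowment - c + pool / n for c in contributions]

# Contributing is individually costly (multiplier / n < 1) but collectively
# beneficial (multiplier > 1): if everyone contributes, everyone ends up richer.
print(public_goods_payoffs([0, 0, 0, 0]))      # nobody cooperates: 20.0 each
print(public_goods_payoffs([20, 20, 20, 20]))  # full cooperation: 32.0 each
print(public_goods_payoffs([0, 20, 20, 20]))   # free rider gets 44.0, rest 24.0
```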

Interestingly, the Gachter et al. study showed Melbourne behaving more like an Asian city than a Western European one—perhaps being in the Pacific has rubbed off on Australia more than they realize.

This is very preliminary, cutting-edge research I’m talking about, so be careful about drawing too many conclusions. But in general we’ve begun to find some fairly clear cultural differences in economic behavior across different societies. While this would not be at all surprising to a sociologist or anthropologist, it’s the sort of thing that economists have insisted for years is impossible.

This is the frontier of cognitive economics, in my opinion. We know that culture is a very powerful motivator of our behavior, and it is time for us to understand how it works—and then, how it can be changed. We know that culture can be changed—cultural norms do change over time, sometimes remarkably rapidly; but we have only a faint notion of how or why they change. Changing culture has the power to do things that simply changing policy cannot, however; policy requires enforcement, and when the enforcement is removed the behavior will often disappear. But if a cultural norm can be imparted, it could sustain itself for a thousand years without any government action at all.