The Cognitive Science of Morality Part II: Molly Crockett

JDN 2457140 EDT 20:16.

This weekend has been very busy for me, so this post is going to be shorter than most—which is probably a good thing anyway, since my posts tend to run a bit long.

In an earlier post I discussed the Weinberg Cognitive Science Conference and my favorite speaker in the lineup, Joshua Greene. After a brief interlude from Capybara Day, it’s now time to talk about my second-favorite speaker, Molly Crockett. (Is it just me, or does the name “Molly” somehow seem incongruous with a person of such prestige?)

Molly Crockett is a neuroeconomist, though you’d never hear her say that. She doesn’t think of herself as an economist at all, but purely as a neuroscientist. I suspect this is because when she hears the word “economist” she thinks only of mainstream neoclassical economists, and she doesn’t want to be associated with them.

Still, what she studies is clearly neuroeconomics—in fact, I first learned of her work by reading the textbook Neuroeconomics, though I only really got interested after watching her TED talk. It’s one of the better TED talks (they put out so many of them now that the quality is mixed at best); in it she talks about news reporting on neuroscience, and how it is invariably ridiculous and sensationalist. This is particularly frustrating because of how amazing and important neuroscience actually is.

I could almost forgive the sensationalism if they were talking about something that’s actually fantastically boring, like, say, tax codes, or financial regulations. Of course, even then there is the Oliver Effect: You can hide a lot of evil by putting it in something boring. But Dodd-Frank is 2300 pages long; I read an earlier draft that was only (“only”) 600 pages, and it literally contained a three-page section explaining how to define the word “bank”. (Assuming direct proportionality, I would infer that there is now a twelve-page section defining the word “bank”. Hopefully not?) It doesn’t get a whole lot more snoozeworthy than that. So if you must be a bit sensationalist in order to get people to see why eliminating margin requirements and the swaps pushout rule are terrible, terrible ideas, so be it.

But neuroscience is not boring, and so sensationalism only means that news outlets are making up exciting things that aren’t true instead of saying the actually true things that are incredibly exciting.

Here, let me express without sensationalism what Molly Crockett does for a living: Molly Crockett experimentally determines how psychoactive drugs modulate moral judgments. The effects she observes are small, but they are real; and since these experiments are done using small doses for a short period of time, if these effects scale up they could be profound. This is the basic research component—when it comes to technological fruition it will be literally A Clockwork Orange. But it may be A Clockwork Orange in the best possible way: It could be, at last, a medical cure for psychopathy, a pill to make us not just happier or healthier, but better. We are not there yet by any means, but this is clearly the first step: Molly Crockett is to A Clockwork Orange roughly as Michael Faraday is to the Internet.

In one of the experiments she talked about at the conference, Crockett found that serotonin reuptake inhibitors enhance harm aversion. Serotonin reuptake inhibitors are very commonly used drugs—you are likely familiar with one called Prozac. So basically what this study means is that Prozac makes people more averse to causing pain in themselves or others. It doesn’t necessarily make them more altruistic, let alone more ethical; but it does make them more averse to causing pain. (To see the difference, imagine a 19th-century field surgeon dealing with a wounded soldier; there is no anesthetic, but an amputation must be made. Sometimes being ethical requires causing pain.)

The experiment is actually what Crockett calls “the honest Milgram Experiment”: under Milgram, the experimenters told their subjects they would be causing shocks, but no actual shocks were administered. Under Crockett, the shocks are absolutely 100% real (though they are restricted to a much lower voltage, of course). People are given competing offers that contain an amount of money and a number of shocks to be delivered, either to themselves or to the other subject. They decide how much it’s worth to them to bear the shocks—or to make someone else bear them. It’s a classic willingness-to-pay paradigm, applied to the Milgram Experiment.

What Crockett found did not surprise me, nor do I expect it will surprise you if you imagine yourself in the same place; but it would totally knock the socks off of any neoclassical economist. People are much more willing to bear shocks for money than they are to give shocks for money. They are what Crockett terms hyper-altruistic; I would say that they are exhibiting an apparent solidarity coefficient greater than 1. They seem to be valuing others more than they value themselves.
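One minimal way to write down what I mean by a solidarity coefficient (just my own illustrative notation, not Crockett’s actual model) is sketched below.

```latex
% Illustrative sketch only, not Crockett's model.
% Subject i trades off money m against shocks x delivered to person j:
% u(m) is the utility of money, c(x) the disutility of shocks,
% and s_ij the solidarity coefficient, with s_ii = 1 by definition.
\[
U_i = u(m) - s_{ij}\, c(x), \qquad s_{ii} = 1
\]
% "Hyper-altruism" is then the finding that willingness-to-pay choices imply
% s_ij > 1 when j is the anonymous other subject: people demand more money to
% deliver a given shock to a stranger than to take the same shock themselves.
```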

Normally I’d say that this makes no sense at all—why would you value some random stranger more than yourself? Valuing them equally, perhaps; and obviously only a psychopath would value them not at all; but more? And there’s no way you can actually live this way in your daily life; you’d give away all your possessions and perhaps even starve yourself to death. (I guess maybe Jesus lived that way.) But Crockett came up with a model that explains it pretty well: We are morally risk-averse. If we knew we were dealing with someone very strong who had no trouble dealing with shocks, we’d be willing to shock them a fairly large amount. But we might actually be dealing with someone very vulnerable who would suffer greatly; and we don’t want to take that chance.
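Her moral risk-aversion story can also be sketched formally; this is my gloss on it, not her published model, and it only needs Jensen’s inequality.

```latex
% My gloss on moral risk aversion, not Crockett's published model.
% Let v be the other person's vulnerability, a random variable whose mean
% equals your own known vulnerability v_0. If the moral cost c(v) of imposing
% a shock is convex in v (worst cases weigh heavily), then Jensen's inequality gives
\[
E[c(v)] \;\geq\; c(E[v]) \;=\; c(v_0),
\]
% so shocking a stranger of uncertain vulnerability "costs" more than shocking
% yourself, even when the stranger is no more vulnerable on average.
```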

I think there’s some truth to that. But her model leaves something else out that I think is quite important: We are also averse to unfairness. We don’t like the idea of raising one person while lowering another. (Obviously not so averse as to never do it—we do it all the time—but without a compelling reason we consider it morally unjustified.) So if the two subjects are in roughly the same condition (being two undergrads at Oxford, they probably are), then helping one while hurting the other is likely to create inequality where none previously existed. But if you hurt yourself in order to help yourself, no such inequality is created; all you do is raise yourself up, provided you believe the money is worth the shocks. It’s actually quite Rawlsian; lifting one person up while not affecting the other is exactly the sort of inequality you’re allowed to create according to the Difference Principle.

There’s also the fact that the subjects can’t communicate; I think if I could make a deal to share the money afterward, I’d feel better about shocking someone more in order to get us both more money. So perhaps with communication people would actually be willing to shock others more. (And the sensationalist headline would of course be: “Talking makes people hurt each other.”)

But all of these ideas are things that could be tested in future experiments! And maybe I’ll do those experiments someday, or Crockett, or one of her students. And with clever experimental paradigms we might find out all sorts of things about how the human mind works, how moral intuitions are structured, and ultimately how chemical interventions can actually change human moral behavior. The potential for both good and evil is so huge, it’s both wondrous and terrifying—but can you deny that it is exciting?

And that’s not even getting into the Basic Fact of Cognitive Science, which undermines all concepts of afterlife and theistic religion. I already talked about it before—as the sort of thing that I sort of wish I could say when I introduce myself as a cognitive scientist—but I think it bears repeating.

As Patricia Churchland said on the Colbert Report: Colbert asked, “Are you saying I have no soul?” and she answered, “Yes.” I actually prefer Daniel Dennett’s formulation: “Yes, we have a soul, but it’s made of lots of tiny robots.”

We don’t have a magical, supernatural soul (whatever that means); we don’t have an immortal soul that will rise into Heaven or be reincarnated in someone else. But we do have something worth preserving: We have minds that are capable of consciousness. We love and hate, exalt and suffer, remember and imagine, understand and wonder. And yes, we are born and we die. Once the unique electrochemical pattern that defines your consciousness is sufficiently degraded, you are gone. Nothing remains of what you were—except perhaps the memories of others, or things you have created. But even this legacy is unlikely to last forever. One day it is likely that all of us—and everything we know, and everything we have built, from the Great Pyramids to Hamlet to Beethoven’s Ninth to Principia Mathematica to the US Interstate Highway System—will be gone. I don’t have any consolation to offer you on that point; I can’t promise you that anything will survive a thousand years, much less a million. There is a chance—even a chance that at some point in the distant future, whatever humanity has become will find a way to reverse the entropic decay of the universe itself—but nothing remotely like a guarantee. In all probability you, and I, and all of this will be gone someday, and that is absolutely terrifying.

But it is also undeniably true. The fundamental link between the mind and the brain is one of the basic facts of cognitive science; indeed I like to call it The Basic Fact of Cognitive Science. We know specifically which kinds of brain damage will make you unable to form memories, comprehend language, speak language (a totally different area), see, hear, smell, feel anger, integrate emotions with logic… do I need to go on? Everything that you are is done by your brain—because you are your brain.

Now why can’t the science journalists write about that? Instead we get “The Simple Trick That Can Boost Your Confidence Immediately” and “When it Comes to Picking Art, Men & Women Just Don’t See Eye to Eye.” HuffPo is particularly awful of course; the New York Times is better, but still hardly as good as one might like. They keep trying to find ways to make it exciting—but so rarely seem to grasp how exciting it already is.

How is the economy doing?

JDN 2457033 EST 12:22.

Whenever you introduce yourself to someone as an economist, you will typically be asked a single question: “How is the economy doing?” I’ve already experienced this myself, and I don’t have very many dinner parties under my belt.

It’s an odd question, for a couple of reasons: First, I didn’t say I was a macroeconomic forecaster. That’s a very small branch of economics—even a small branch of macroeconomics. Second, it is widely recognized among economists that our forecasters just aren’t very good at what they do. But it is the sort of thing that pops into people’s minds when they hear the word “economist”, so we get asked it a lot.

Why are our forecasts so bad? Some argue that the task is just inherently too difficult due to the chaotic system involved; but they used to say that about weather forecasts, and yet with satellites and computer models our forecasts are now far more accurate than they were 20 years ago. Others have argued that “politics always dominates over economics”, as though politics were somehow a fundamentally separate thing, forever exogenous, a parameter in our models that cannot be predicted. I have a number of economic aphorisms I’m trying to popularize; the one for this occasion is: “Nothing is exogenous.” (Maybe fundamental constants of physics? But actually many physicists think that those constants can be derived from even more fundamental laws.) My most common is “It’s the externalities, stupid.”; next is “It’s not the incentives, it’s the opportunities.”; and the last is “Human beings are 90% rational. But woe betide that other 10%.” In fact, it’s not quite true that all our macroeconomic forecasters are bad; a few, such as Krugman, are actually quite good. The Klein Award is given each year to the best macroeconomic forecasters, and the same names pop up too often for it to be completely random. (Sadly, one of the most common is Citigroup, meaning that our banksters know perfectly well what they’re doing when they destroy our economy—they just don’t care.) So in fact I think our failures of forecasting are not inevitable or permanent.

And of course that’s not what I do at all. I am a cognitive economist; I study how economic systems behave when they are run by actual human beings, rather than by infinite identical psychopaths. I’m particularly interested in what I call the tribal paradigm, the way that people identify with groups and act in the interests of those groups, how much solidarity people feel for each other and why, and what role ideology plays in that identification. I’m hoping to one day formally model solidarity and make directly testable predictions about things like charitable donations, immigration policies and disaster responses.

I do have a more macroeconomic bent than most other cognitive economists; I’m not just interested in how human irrationality affects individuals or corporations, I’m also interested in how it affects society as a whole. But unlike most macroeconomists I care more about inequality than unemployment, and hardly at all about inflation. Unless you start getting 40% inflation per year, inflation really isn’t that harmful—and can you imagine what 40% unemployment would be like? (Also, while 100% inflation is awful, 100% unemployment would be no economy at all.) If we’re going to have a “misery index”, it should weight unemployment at least 10 times as much as inflation—and it should also include terms for poverty and inequality. Frankly maybe we should just use poverty, since I’d be prepared to accept just about any level of inflation, unemployment, or even inequality if it meant eliminating poverty. This is of course yet another reason why a basic income is so great! An anti-poverty measure can really only be called a failure if it doesn’t actually reduce poverty; the only way that could happen with a basic income is if it somehow completely destabilized the economy, which is extremely unlikely as long as the basic income isn’t something ridiculous like $100,000 per year.
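For concreteness, here is roughly what such a reweighted misery index could look like. The classic version is just unemployment plus inflation; the weights below are purely illustrative assumptions, not estimates.

```python
def misery_index(unemployment, inflation, poverty, gini,
                 w_u=10.0, w_pi=1.0, w_pov=10.0, w_gini=5.0):
    """A reweighted 'misery index' (illustrative sketch only).

    The classic Okun index is simply unemployment + inflation. Here
    unemployment and poverty get roughly ten times the weight of inflation,
    as argued above; the specific weights are assumptions, not estimates.
    Inputs are in percentage points; gini is on a 0-100 scale.
    """
    return (w_u * unemployment
            + w_pi * inflation
            + w_pov * poverty
            + w_gini * gini)

# Purely illustrative inputs, not official statistics:
print(misery_index(unemployment=5.5, inflation=1.0, poverty=15.0, gini=40.0))
```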

I could probably talk about my master’s thesis; the econometric models are relatively arcane, but the basic idea of correlating the income concentration of the top 1% of 1% with the level of corruption is something most people can grasp easily enough.

Of course, that wouldn’t be much of an answer to “How is the economy doing?”; usually my answer is to repeat what I’ve last read from mainstream macroeconomic forecasts, which is usually rather banal—but maybe that’s the idea? Most small talk is pretty banal, I suppose (I never was very good at that sort of thing). It sounds a bit like this: No, we’re not on the verge of horrible inflation—actually inflation is currently too low. (At this point someone will probably bring up the gold standard, and I’ll have to explain that the gold standard is an unequivocally terrible idea on so, so many levels. The gold standard caused the Great Depression.) Unemployment is gradually improving, and actually job growth is looking pretty good right now; but wages are still stagnant, which is probably what’s holding down inflation. We could have prevented the Second Depression entirely, but we didn’t because Republicans are terrible at managing the economy—all of the 10 most recent recessions and almost 80% of the recessions in the last century were under Republican presidents. Instead the Democrats did their best to implement basic principles of Keynesian macroeconomics despite Republican intransigence, and we muddled through. In another year or two we will actually be back at an unemployment rate of 5%, which the Federal Reserve considers “full employment”. That’s already problematic—what about that other 5%?—but there’s another problem as well: Much of our reduction in unemployment has come not from more people being employed but instead from more people dropping out of the labor force. Our labor force participation rate is the lowest it’s been since 1978, and is still trending downward. Most of these people aren’t getting jobs; they’re giving up. At best we may hope that they are people like me, who gave up on finding work in order to invest in their own education, and will return to the labor force more knowledgeable and productive one day—and indeed, college participation rates are also rising rapidly. And no, that doesn’t mean we’re becoming “overeducated”; investment in education, so-called “human capital”, is literally the single most important factor in long-term economic output, by far. Education is why we’re not still in the Stone Age. Physical capital can be replaced, and educated people will replace it efficiently. But all the physical capital in the world will do you no good if nobody knows how to use it. When everyone in the world is a millionaire with two PhDs and all our work is done by robots, maybe then you can say we’re “overeducated”—and maybe then you’d still be wrong. Being “too educated” is like being “too rich” or “too happy”.

That’s usually enough to placate my interlocutor. I should probably count my blessings, for I imagine that the first confrontation you get at a dinner party if you say you are a biologist involves a Creationist demanding that you “prove evolution”. I like to think that some mathematical biologists—yes, that’s a thing—take their request literally and set out to mathematically prove that if allele distributions in a population change according to a stochastic trend then the alleles with highest expected fitness have, on average, the highest fitness—which is what we really mean by “survival of the fittest”. The more formal, the better; the goal is to glaze some Creationist eyes. Of course that’s a tautology—but so is literally anything that you can actually prove. Cosmologists probably get similar demands to “prove the Big Bang”, which sounds about as annoying. I may have to deal with gold bugs, but I’ll take them over Creationists any day.

What do other scientists get? When I tell people I am a cognitive scientist (as a cognitive economist I am sort of both an economist and a cognitive scientist after all), they usually just respond with something like “Wow, you must be really smart,” which I suppose is true enough, but always strikes me as an odd response. I think they just don’t know enough about the field to even generate a reasonable-sounding question, whereas with economists they always have “How is the economy doing?” handy. Political scientists probably get “Who is going to win the election?” for the same reason. People have opinions about economics, but they don’t have opinions about cognitive science—or rather, they don’t think they do. Actually most people have an opinion about cognitive science that is totally and utterly ridiculous, more on a par with Creationists than gold bugs: That is, most people believe in a soul that survives after death. This is rather like believing that after your computer has been smashed to pieces and ground back into the sand from whence it came, all the files you had on it are still out there somewhere, waiting to be retrieved. No, they’re long gone—and likewise your memories and your personality will be long gone once your brain has rotted away. Yes, we have a soul, but it’s made of lots of tiny robots; when the tiny robots stop working the soul is no more. Everything you are is a result of the functioning of your brain. This does not mean that your feelings are not real or do not matter; they are just as real and important as you thought they were. What it means is that when a person’s brain is destroyed, that person is destroyed, permanently and irrevocably. This is terrifying and difficult to accept; but it is also most definitely true. It is as solid a fact as any in modern science. Many people see a conflict between evolution and religion; but the Pope has long since rendered that one inert. No, the real conflict, the basic fact that undermines everything religion is based upon, is not in biology but in cognitive science. It is indeed the Basic Fact of Cognitive Science: We are our brains, no more and no less. (But I suppose it wouldn’t be polite to bring that up at dinner parties.)

The “You must be really smart” response is probably what happens to physicists and mathematicians. Quantum mechanics confuses basically everyone, so few dare go near it. The truly bold might try to bring up Schrödinger’s Cat, but are unlikely to understand the explanation of why it doesn’t work. General relativity requires thinking in tensors and four-dimensional spaces—perhaps they’ll be asked the question “What’s inside a black hole?”, which of course no physicist can really answer; the best answer may actually be, “What do you mean, inside?” And if a mathematician tries to explain their work in lay terms, it usually comes off as either incomprehensible or ridiculous: Stokes’ Theorem would be either “the integral of a differential form over the boundary of some orientable manifold is equal to the integral of its exterior derivative over the whole manifold” or else something like “The swirliness added up inside an object is equal to the swirliness added up around the edges.”
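For the record, the symbolic version of Stokes’ Theorem is mercifully compact:

```latex
% Stokes' Theorem: the integral of a differential form over the boundary of an
% orientable manifold equals the integral of its exterior derivative over the
% whole manifold.
\[
\int_{\partial M} \omega \;=\; \int_{M} d\omega
\]
```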

Economists, however, always seem to get this one: “How is the economy doing?”

Right now, the answer is this: “It’s still pretty bad, but it’s getting a lot better. Hopefully the new Congress won’t screw that up.”

Are humans rational?

JDN 2456928 PDT 11:21.

The central point of contention between cognitive economists and neoclassical economists hinges upon the word “rational”: Are humans rational? What do we mean by “rational”?

Neoclassicists are very keen to insist that they think humans are rational, and often characterize the cognitivist view as saying that humans are irrational. (Dan Ariely has a habit of feeding this view, titling books things like Predictably Irrational and The Upside of Irrationality.) But I really don’t think this is the right way to characterize the difference.

Daniel Kahneman has a somewhat better formulation (from Thinking, Fast and Slow): “I often cringe when my work is credited as demonstrating that human choices are irrational, when in fact our research only shows that Humans are not well described by the rational-agent model.” (Yes, he capitalizes the word “Humans” throughout, which is annoying; but in general it is a great book.)

The problem is that saying “humans are irrational” has the connotation of a universal statement; it seems to be saying that everything we do, all the time, is always and everywhere utterly irrational. And this of course could hardly be further from the truth; we would not have even survived in the savannah, let alone invented the Internet, if we were that irrational. If we simply lurched about randomly without any concept of goals or response to information in the environment, we would have starved to death millions of years ago.

But at the same time, the neoclassical definition of “rational” obviously does not describe human beings. We aren’t infinite identical psychopaths. Particularly bizarre (and frustrating) is the continued insistence that rationality entails selfishness; apparently economists are getting all their philosophy from Ayn Rand (who barely even qualifies as a philosopher), rather than the greats such as Immanuel Kant and John Stuart Mill or even the best contemporary philosophers such as Thomas Pogge and John Rawls. All of the latter would be baffled by the notion that selfless compassion is irrational.

Indeed, Kant argued that rationality implies altruism, that a truly coherent worldview requires assent to universal principles that are morally binding on yourself and every other rational being in the universe. (I am not entirely sure he is correct on this point, and in any case it is clear to me that neither you nor I are anywhere near advanced enough beings to seriously attempt such a worldview. Where neoclassicists envision infinite identical psychopaths, Kant envisions infinite identical altruists. In reality we are finite diverse tribalists.)

But even if you drop selfishness, the requirements of perfect information and expected utility maximization are still far too strong to apply to real human beings. If that’s your standard for rationality, then indeed humans—like all beings in the real world—are irrational.

The confusion, I think, comes from the huge gap between ideal rationality and total irrationality. Our behavior is neither perfectly optimal nor hopelessly random, but somewhere in between.

In fact, we are much closer to the side of perfect rationality! Our brains are limited, so they operate according to heuristics: simplified, approximate rules that are correct most of the time. Clever experiments—or complex environments very different from how we evolved—can cause those heuristics to fail, but we must not forget that the reason we have them is that they work extremely well in most cases in the environment in which we evolved. We are about 90% rational—but woe betide that other 10%.

The most obvious example is phobias: Why are people all over the world afraid of snakes, spiders, falling, and drowning? Because those used to be leading causes of death. In the African savannah 200,000 years ago, you weren’t going to be hit by a car, shot with a rifle bullet, or poisoned by carbon monoxide. (You’d probably die of malaria, actually; for that one, instead of evolving to be afraid of mosquitoes we evolved a biological defense mechanism—the sickle-cell trait.) Death in general was actually much more likely then, particularly for children.

A similar case can be made for other heuristics we use: We are tribal because the proper functioning of our 100-person tribe used to be the most important factor in our survival. We are racist because people physically different from us were usually part of rival tribes and hence potential enemies. We hoard resources even when our technology allows abundance, because a million years ago no such abundance was possible and every meal might be our last.

When asked how common something is, we don’t calculate a posterior probability based upon Bayesian inference—that’s hard. Instead we try to think of examples—that’s easy. That’s the availability heuristic. And if we didn’t have mass media constantly giving us examples of rare events we wouldn’t otherwise have known about, the availability heuristic would actually be quite accurate. Right now, people think of terrorism as common (even though it’s astoundingly rare) because it’s always all over the news; but if you imagine living in an ancient tribe—or even a medieval village!—anything you heard about that often would almost certainly be something actually worth worrying about. Our level of panic over Ebola is totally disproportionate; but in the 14th century that same level of panic about the Black Death would be entirely justified.

When we want to know whether something is a member of a category, again we don’t try to calculate the actual probability; instead we think about how well it seems to fit a model we have of the paradigmatic example of that category—the representativeness heuristic. You see a Black man on a street corner in New York City at night; how likely is it that he will mug you? Pretty small actually, because there were fewer than 200,000 crimes in all of New York City last year in a city of 8,000,000 people—meaning the probability that any given person committed a crime in the previous year was at most 2.5%; the probability on any given day would then be less than 0.01%. Maybe having those attributes raises the probability somewhat, but you can still be about 99% sure that this guy isn’t going to mug you tonight. But since he seemed representative of the category in your mind “criminals”, your mind didn’t bother asking how many criminals there are in the first place—an effect called base rate neglect. Even 200 years ago—let alone 1 million—you didn’t have these sorts of reliable statistics, so what else would you use? You basically had no choice but to assess based upon representative traits.
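The base-rate arithmetic there takes only a couple of lines, using the same figures quoted above:

```python
# Base-rate arithmetic for the mugging example, using the figures quoted above.
# (It treats each crime as committed by a distinct person, which if anything
# overstates the probability.)
crimes_per_year = 200_000   # reported crimes in New York City in a year (upper bound)
population = 8_000_000      # residents of New York City

p_year = crimes_per_year / population   # 0.025 -> at most 2.5% in a given year
p_day = p_year / 365                    # ~0.00007 -> well under 0.01% on a given day

print(f"per year: {p_year:.1%}, per day: {p_day:.3%}")
# per year: 2.5%, per day: 0.007%
```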

As you probably know, people have trouble dealing with big numbers, and this is a problem in our modern economy where we actually need to keep track of millions or billions or even trillions of dollars moving around. And really I shouldn’t say it that way, because $1 million ($1,000,000) is an amount of money an upper-middle-class person could have in a retirement fund, while $1 billion ($1,000,000,000) would put you among the 1,000 richest people in the world, and $1 trillion ($1,000,000,000,000) is enough to end world hunger for at least the next 15 years (it would only take about $1.5 trillion to do it forever, by paying only the interest on the endowment). It’s important to keep this in mind, because otherwise the natural tendency of the human mind is to say “big number” and ignore these enormous differences—it’s called scope neglect. But how often do you really deal with numbers that big? In ancient times, never. Even in the 21st century, not very often. You’ll probably never have $1 billion, and even $1 million is a stretch—so it seems a bit odd to say that you’re irrational if you can’t tell the difference. I guess technically you are, but it’s an error that is unlikely to come up in your daily life.
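If you want to check that endowment figure, the arithmetic only requires assuming a return of around 4 to 5 percent; the return rate is the implicit assumption in those numbers.

```python
# The endowment arithmetic implicit in the figures above.
total_15yr_cost = 1.0e12              # $1 trillion covers roughly 15 years (as stated)
annual_cost = total_15yr_cost / 15    # about $67 billion per year

endowment = 1.5e12                    # $1.5 trillion endowment (as stated)
implied_return = annual_cost / endowment   # about 0.044, i.e. a 4.4% annual return

print(f"annual cost: ${annual_cost/1e9:.0f}B, implied return: {implied_return:.1%}")
# annual cost: $67B, implied return: 4.4%
```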

Where scope neglect does come up, of course, is when we’re talking about national or global economic policy. Voters in the United States today have a level of power that for 99.99% of human existence no ordinary person has had. 2 million years ago you may have had a vote in your tribe, but your tribe was only 100 people. 2,000 years ago you may have had a vote in your village, but your village was only 1,000 people. Now you have a vote on the policies of a nation of 300 million people, and more than that really: As goes America, so goes the world. Our economic, cultural, and military hegemony is so total that decisions made by the United States reverberate through the entire human population. We have choices to make about war, trade, and ecology on a far larger scale than our ancestors could have imagined. As a result, the heuristics that served us well millennia ago are now beginning to cause serious problems.

[As an aside: This is why the “Downs Paradox” is so silly. If you’re calculating the marginal utility of your vote purely in terms of its effect on you—you are a psychopath—then yes, it would be irrational for you to vote. And really, by all means: psychopaths, feel free not to vote. But the effect of your vote is much larger than that; in a nation of N people, the decision will potentially affect N people. Your vote contributes 1/N to a decision that affects N people, making the marginal utility of your vote equal to N*1/N = 1. It’s constant. It doesn’t matter how big the nation is, the value of your vote will be exactly the same. The fact that your vote has a small impact on the decision is exactly balanced by the fact that the decision, once made, will have such a large effect on the world. Indeed, since larger nations also influence other nations, the marginal effect of your vote is probably larger in large elections, which means that people are being entirely rational when they go to greater lengths to elect the President of the United States (58% turnout) rather than the Wayne County Commission (18% turnout).]
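Spelled out, the back-of-the-envelope version of that argument (deliberately ignoring the subtleties of pivotal-voter probability models) looks like this:

```latex
% Back-of-the-envelope version of the argument above, ignoring pivotal-voter
% probability subtleties. Let N be the number of people affected and u the
% average stake each person has in the outcome. Your vote shifts the decision
% by roughly 1/N, but the decision affects all N people, so the marginal value
% of voting is
\[
\frac{1}{N} \times N u \;=\; u,
\]
% which does not shrink as N grows. If larger polities also affect outsiders,
% total stakes grow faster than N, and the value of a vote actually rises.
```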

So that’s the problem. That’s why we have economic crises, why climate change is getting so bad, why we haven’t ended world hunger. It’s not that we’re complete idiots bumbling around with no idea what we’re doing. We simply aren’t optimized for the new environment that has been recently thrust upon us. We are forced to deal with complex problems unlike anything our brains evolved to handle. The truly amazing part is actually that we can solve these problems at all; most lifeforms on Earth simply aren’t mentally flexible enough to do that. Humans found a really neat trick (actually, in a formal evolutionary sense, a good trick, which we know because it also evolved in cephalopods): Our brains have high plasticity, meaning they are capable of adapting themselves to their environment in real time. Unfortunately this process is difficult and costly; it’s much easier to fall back on our old heuristics. We ask ourselves: Why spend 10 times the effort to make it work 99% of the time when you can make it work 90% of the time with so much less effort?

Why? Because it’s so incredibly important that we get these things right.