Bet five dollars for maximum performance

JDN 2457433

One of the more surprising findings from the study of human behavior under stress is the Yerkes-Dodson curve:

[Figure: the original Yerkes-Dodson curve.]
This curve shows how well humans perform at a given task as a function of how high the stakes are for doing it properly.

For simple tasks, it says what most people intuitively expect—and what neoclassical economists appear to believe: The higher the stakes, the more strongly you are incentivized, and the better you perform.

But for complex tasks, it says something quite different: While increased stakes do raise performance up to a point—with nothing at stake, people hardly try—it is possible to become overincentivized. Formally, we say the curve is not monotonic; it has a local maximum.
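That formal claim is easy to illustrate. Here is a toy sketch—the quadratic form and every coefficient are arbitrary assumptions, not the empirical curve—showing a performance function with an interior maximum:

```python
def performance(arousal):
    """Toy inverted-U: performance rises with arousal, peaks, then falls."""
    return arousal * (2.0 - arousal)  # arbitrary form; peak at arousal = 1.0

# Sample arousal levels from 0 (nothing at stake) to 2 (overwhelming stakes).
levels = [i / 10 for i in range(21)]
scores = [performance(a) for a in levels]

best = levels[scores.index(max(scores))]
print(f"Performance peaks at arousal = {best}, not at either extreme")
```

The point is just the shape: performance is zero at both extremes and maximal somewhere in the middle.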

This is one of many reasons why it’s ridiculous to say that top CEOs should make tens of millions of dollars a year on the rise and fall of their company’s stock price (as a great many economists do in fact say). Even if I believed that stock prices accurately reflect the company’s viability (they do not), and believed that the CEO has a great deal to do with the company’s success, it would still be a case of overincentivizing. When a million dollars rides on a decision, that decision is going to be worse than if the stakes had only been $100. With this in mind, it’s really not surprising that higher CEO pay is correlated with worse company performance. Stock options are terrible motivators, but do offer a subtle way of making wages adjust to the business cycle.

The reason for this is that as the stakes get higher, we become stressed, and that stress response inhibits our ability to use higher cognitive functions. The sympathetic nervous system evolved to make us very good at fighting or running away in the face of danger, which works well should you ever be attacked by a tiger. It did not evolve to make us good at complex tasks under high stakes, the sort of skill we’d need when calculating the trajectory of an errant spacecraft or disarming a nuclear warhead.

To be fair, most of us never have to worry about piloting errant spacecraft or disarming nuclear warheads—indeed, even in today's world you're about as likely to be attacked by a tiger as to pilot a spacecraft. (The rate of tiger attacks in the US is just under 2 per year, and the rate of manned space launches in the US was about 5 per year until the Space Shuttle program ended.)

There are certain professions, such as pilots and surgeons, where performing complex tasks under life-or-death pressure is commonplace, but only a small fraction of people take such professions for precisely that reason. And if you’ve ever wondered why we use checklists for pilots and there is discussion of also using checklists for surgeons, this is why—checklists convert a single complex task into many simple tasks, allowing high performance even at extreme stakes.

But we do have to do a fair number of quite complex tasks with stakes that are, if not urgent life-or-death scenarios, then at least actions that affect our long-term life prospects substantially. In my tutoring business I encounter one in particular quite frequently: Standardized tests.

Tests like the SAT, ACT, GRE, LSAT, GMAT, and other assorted acronyms are not literally life-or-death, but they often feel that way to students because they really do have a powerful impact on where you’ll end up in life. Will you get into a good college? Will you get into grad school? Will you get the job you want? Even subtle deviations from the path of optimal academic success can make it much harder to achieve career success in the future.

Of course, these are hardly the only examples. Many jobs require us to complete tasks properly on tight deadlines, or else risk being fired. Working in academia infamously requires publishing in journals in time to rise up the tenure track, or else falling off the track entirely. (This incentivizes the production of huge numbers of papers, whether they’re worth writing or not; yes, the number of papers published goes down after tenure, but is that a bad thing? What we need to know is whether the number of good papers goes down. My suspicion is that most if not all of the reduction in publications is due to not publishing things that weren’t worth publishing.)

So if you are faced with this sort of task, what can you do? If you realize that you are faced with a high-stakes complex task, you know your performance will be bad—which only makes your stress worse!

My advice is to pretend you’re betting five dollars on the outcome.

Ignore all other stakes, and pretend you’re betting five dollars. $5.00 USD. Do it right and you get a Lincoln; do it wrong and you lose one.
What this does is ensure that you care enough (you don't want to lose $5 for no reason) but not too much (if you do lose $5, you won't feel like your life is ending). We want to put you near that peak of the Yerkes-Dodson curve.

The great irony here is that you most want to do this when it is most untrue. If you actually do have a task for which you’ve bet $5 and nothing else rides on it, you don’t need this technique, and any technique to improve your performance is not particularly worthwhile. It’s when you have a standardized test to pass that you really want to use this—and part of me even hopes that people know to do this whenever they have nuclear warheads to disarm. It is precisely when the stakes are highest that you must put those stakes out of your mind.

Why five dollars? Well, the exact amount is arbitrary, but this is at least about the right order of magnitude for most First World individuals. If you really want to get precise, I think the optimal stakes level for maximum performance is something like 100 microQALY per task, and assuming logarithmic utility of wealth, $5 at the US median household income of $53,600 is approximately 100 microQALY. If you have a particularly low or high income, feel free to adjust accordingly. Literally you should be prepared to bet about an hour of your life; but we are not accustomed to thinking that way, so use $5. (I think most people, if asked outright, would radically overestimate what an hour of life is worth to them. “I wouldn’t give up an hour of my life for $1,000!” Then why do you work at $20 an hour?)
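That arithmetic can be checked in a few lines (assuming logarithmic utility of income and treating a QALY as one year of life):

```python
import math

income = 53_600.0   # US median household income cited above
stake = 5.0

# With log utility, losing $5 costs ln(income) - ln(income - stake),
# which for a small stake is approximately stake / income, measured in
# "years of income"—i.e., QALY under the stated assumptions.
utility_cost = math.log(income) - math.log(income - stake)

print(f"{utility_cost * 1e6:.0f} microQALY")            # on the order of 100
print(f"about {utility_cost * 8766:.2f} hours")         # 8766 hours per average year
```

The result is roughly 93 microQALY, or a bit under an hour—consistent with "approximately 100 microQALY" and "about an hour of your life".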

It’s a simple heuristic, easy to remember, and sometimes effective. Give it a try.

The power of exponential growth

JDN 2457390

There’s a famous riddle: If the water in a lakebed doubles in volume every day, and the lakebed started filling on January 1, and is half full on June 17, when will it be full?

The answer is of course June 18—if it doubles every day, it will go from half full to full in a single day.

But most people assume that half the work takes about half the time, so they usually give answers in December. Others try to correct, but don’t go far enough, and say something like October.
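The gap between the two intuitions is easy to compute directly (the year is arbitrary; a leap year is used here so June 17 is day 169):

```python
from datetime import date, timedelta

start = date(2016, 1, 1)         # lakebed starts filling
half_full = date(2016, 6, 17)    # given: half full on June 17
elapsed = (half_full - start).days

# Linear intuition: the second half takes as long as the first half did.
linear_guess = half_full + timedelta(days=elapsed)

# Exponential reality: one more doubling finishes the job.
actual = half_full + timedelta(days=1)

print("linear guess:", linear_guess)  # early December
print("actual:", actual)              # June 18
```

The linear guess lands in early December, exactly the kind of answer most people give.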

Human brains are programmed to understand linear processes. We expect things to come in direct proportion: If you work twice as hard, you expect to get twice as much done. If you study twice as long, you expect to learn twice as much. If you pay twice as much, you expect to get twice as much stuff.

We tend to apply this same intuition to situations where it does not belong, processes that are not actually linear but exponential. As a result, when we extrapolate the slow growth early in the process, we wildly underestimate the total growth in the long run.

For example, suppose we have two countries. Arcadia has a GDP of $100 billion per year, and they grow at 4% per year. Berkland has a GDP of $200 billion, and they grow at 2% per year. Assuming that they maintain these growth rates, how long will it take for Arcadia’s GDP to exceed Berkland’s?

If we do this intuitively, we might sort of guess that at 4% you'd add 100% in 25 years, and at 2% you'd add 100% in 50 years; so it should be something like 75 years, because then Arcadia will have added $300 billion while Berkland added $200 billion. You might even just fudge the numbers in your head and say “about a century”.

In fact, it is only 35 years. You could solve this exactly by setting (100)(1.04^x) = (200)(1.02^x); but I have an intuitive method that I think may help you to estimate exponential processes in the future.

Divide the percentage into 69. (For some numbers it’s easier to use 70 or 72; remember, these are just approximations. The exact figure is 100*ln(2) = 69.3147…, and strictly you should divide not by the percentage p but by 100*ln(1+p/100); try plotting the two and you’ll see why using p works well for ordinary growth rates.) This is the time it will take to double.

So at 4%, Arcadia will double in about 17.5 years, quadrupling in 35 years. At 2%, Berkland will double in about 35 years. Thus, in 35 years, Arcadia will quadruple and Berkland will double, so their GDPs will be equal.
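Both the exact solution and the rule-of-69 shortcut fit in a few lines (a sketch assuming annual compounding):

```python
import math

# Exact crossing time: solve 100 * 1.04**x == 200 * 1.02**x for x.
exact = math.log(2) / (math.log(1.04) - math.log(1.02))

# Rule-of-69 shortcut: doubling time ~= 69 / (growth rate in percent).
arcadia_double = 69 / 4    # ~17 years, so Arcadia quadruples in ~35
berkland_double = 69 / 2   # ~35 years for Berkland to double

print(f"exact: {exact:.1f} years")  # about 35.7
print(f"shortcut: Arcadia x4 in {2 * arcadia_double}, Berkland x2 in {berkland_double}")
```

The shortcut and the exact answer agree to within about a year.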

Economics is full of exponential processes: Compound interest is exponential, and over moderately long periods GDP and population both tend to grow exponentially. (In fact they grow logistically, which is similar to exponential until it gets very large and begins to slow down. If you smooth out our recessions, you can get a sense that since the 1940s, US GDP growth has slowed down from about 4% per year to about 2% per year.) It is therefore quite important to understand how exponential growth works.

Let’s try another one. If one account has $1 million, growing at 5% per year, and another has $1,000, growing at 10% per year, how long will it take for the second account to have more money in it?

69/5 is about 14, so the first account doubles in 14 years. 69/10 is about 7, so the second account doubles in 7 years. A factor of 1000 is about 10 doublings (2^10 = 1024), so the second account needs to have doubled 10 times more than the first account. Since it doubles twice as often, this means that it must have doubled 20 times while the other doubled 10 times. Therefore, it will take about 140 years.

In fact, the exact answer comes out to about 149 years, so our quick approximation of 140 is still quite good.

This example is instructive in another way; 149 years is a pretty long time, isn’t it? You can’t just assume that exponential growth is “as fast as you want it to be”. Once people realize that exponential growth is very fast, they often overcorrect, assuming that exponential growth automatically means growth that is absurdly—or arbitrarily—fast. (XKCD made a similar point in this comic.)
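The exact figure can be checked in a couple of lines (again assuming annual compounding):

```python
import math

# Solve 1_000 * 1.10**t == 1_000_000 * 1.05**t for t.
exact = math.log(1_000_000 / 1_000) / (math.log(1.10) - math.log(1.05))

# Heuristic from above: the small account doubles every ~7 years, the big one
# every ~14, so it gains one net doubling every ~14 years and needs ~10 of them.
estimate = 10 * 14

print(f"exact: {exact:.1f} years, estimate: {estimate} years")
```

The heuristic is off by only a few percent, despite requiring no logarithms at all.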

I think the worst examples of this mistake are among Singularitarians. They—correctly—note that computing power has become exponentially greater and cheaper over time, doubling about every 18 months, which has been dubbed Moore’s Law. They assume that this will continue into the indefinite future (this is already problematic; the growth rate seems to be already slowing down). And therefore they conclude there will be a sudden moment, a technological singularity, at which computers will suddenly outstrip humans in every way and bring about a new world order of artificial intelligence basically overnight. They call it a “hard takeoff”; here’s a direct quote:

But many thinkers in this field including Nick Bostrom and Eliezer Yudkowsky worry that AI won’t work like this at all. Instead there could be a “hard takeoff”, a huge subjective discontinuity in the function mapping AI research progress to intelligence as measured in ability-to-get-things-done. If on January 1 you have a toy AI as smart as a cow, one which can identify certain objects in pictures and navigate a complex environment, and on February 1 it’s proved the Riemann hypothesis and started building a ring around the sun, that was a hard takeoff.

Wait… what? For someone like me who understands exponential growth, the last part is a baffling non sequitur. If computers start half as smart as us and double every 18 months, in 18 months, they will be as smart as us. In 36 months, they will be twice as smart as us. Twice as smart as us literally means that two people working together perfectly can match them—certainly a few dozen working realistically can. We’re not in danger of total AI domination from that. With millions of people working against the AI, we should be able to keep up with it for at least another 30 years. So are you assuming that this trend is continuing or not? (Oh, and by the way, we’ve had AIs that can identify objects and navigate complex environments for a couple years now, and so far, no ringworld around the Sun.)

That same essay makes a biological argument, which misunderstands human evolution in a way that is surprisingly subtle yet ultimately fundamental:

If you were to come up with a sort of objective zoological IQ based on amount of evolutionary work required to reach a certain level, complexity of brain structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100 is the difference between “frequently eaten by lions” and “has to pass anti-poaching laws to prevent all lions from being wiped out”.

No, actually, what makes humans what we are is not that we are 1% smarter than chimpanzees.

First of all, we’re actually more like 200% smarter than chimpanzees, measured by encephalization quotient; they clock in at 2.49 while we hit 7.44. If you simply measure by raw volume, they have about 400 mL to our 1300 mL, so again roughly 3 times as big. But that’s relatively unimportant; with Moore’s Law, tripling only takes about 2.5 years.
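The tripling-time figure follows directly from the doubling time (assuming a steady Moore's Law doubling of 18 months):

```python
import math

doubling_months = 18  # Moore's Law doubling period
# Time to grow by a factor of 3 = doubling time * log2(3).
tripling_months = doubling_months * math.log(3) / math.log(2)

print(f"tripling takes about {tripling_months / 12:.1f} years")
```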

But even having triple the brain power is not what makes humans different. It was a necessary condition, but not a sufficient one. Indeed, it was so insufficient that for about 200,000 years we had brains just as powerful as we do now and yet we did basically nothing in technological or economic terms—total, complete stagnation on a global scale. This is a conservative estimate of when we had brains of the same size and structure as we do today.

What makes humans what we are? Cooperation. We are what we are because we are together.

The capacity of human intelligence today is not 1300 mL of brain. It’s more like 1.3 gigaliters of brain, where a gigaliter, a billion liters, is about the volume of the Empire State Building. We have the intellectual capacity we do not because we are individually geniuses, but because we have built institutions of research and education that combine, synthesize, and share the knowledge of billions of people who came before us. Isaac Newton didn’t understand the world as well as the average third-grader does today. Does the third-grader have more brain? Of course not. But they absolutely do have more knowledge.

(I recently finished my first playthrough of Legacy of the Void, in which a central point concerns whether the Protoss should detach themselves from the Khala, a psychic union which combines all their knowledge and experience into one. I won’t spoil the ending, but let me say this: I can understand their hesitation, for it is basically our equivalent of the Khala—first literacy, and now the Internet—that has made us what we are. It would no doubt be the Khala that made them what they are as well.)

Is AI still dangerous? Absolutely. There are all sorts of damaging effects AI could have, culturally, economically, militarily—and some of them are already beginning to happen. I even agree with the basic conclusion of that essay that OpenAI is a bad idea because the cost of making AI available to people who will abuse it or create one that is dangerous is higher than the benefit of making AI available to everyone. But exponential growth not only isn’t the same thing as instantaneous takeoff, it isn’t even compatible with it.

The next time you encounter an example of exponential growth, try this. Don’t just fudge it in your head, don’t overcorrect and assume everything will be fast—just divide the percentage into 69 to see how long it will take to double.

How to change the world

JDN 2457166 EDT 17:53.

I just got back from watching Tomorrowland, which is oddly appropriate since I had already planned this topic in advance. How do we, as they say in the film, “fix the world”?

I can’t find it at the moment, but I vaguely remember some radio segment on which a couple of neoclassical economists were interviewed and asked what sort of career can change the world, and they answered something like, “Go into finance, make a lot of money, and then donate it to charity.”

In a slightly more nuanced form this strategy is called earning to give, and frankly I think it’s pretty awful. Most of the damage that is done to the world is done in the name of maximizing profits, and basically what you end up doing is stealing people’s money and then claiming you are a great altruist for giving some of it back. I guess if you can make enormous amounts of money doing something that isn’t inherently bad and then donate that—like what Bill Gates did—it seems better. But realistically your potential income is probably not actually raised that much by working in finance, sales, or oil production; you could have made the same income as a college professor or a software engineer and not be actively stripping the world of its prosperity.

If we actually had the sort of ideal policies that would internalize all externalities, this dilemma wouldn’t arise; but we’re nowhere near that, and if we did have that system, the only billionaires would be Nobel laureate scientists. Albert Einstein was a million times more productive than the average person. Steve Jobs was just a million times luckier. Even then, there is the very serious question of whether it makes sense to give all the fruits of genius to the geniuses themselves, who very quickly find they have all they need while others starve. It was certainly Jonas Salk’s view that his work should only profit him modestly and its benefits should be shared with as many people as possible. So really, in an ideal world there might be no billionaires at all.

Here I would like to present an alternative. If you are an intelligent, hard-working person with a lot of talent and the dream of changing the world, what should you be doing with your time? I’ve given this a great deal of thought in planning my own life, and here are the criteria I came up with:

  1. You must be willing and able to commit to doing it despite great obstacles. This is another reason why earning to give doesn’t actually make sense; your heart (or rather, limbic system) won’t be in it. You’ll be miserable, you’ll become discouraged and demoralized by obstacles, and others will surpass you. In principle Wall Street quantitative analysts who make $10 million a year could donate 90% to UNICEF, but they don’t, and you know why? Because the kind of person who is willing and able to exploit and backstab their way to that position is the kind of person who doesn’t give money to UNICEF.
  2. There must be important tasks to be achieved in that discipline. This one is relatively easy to satisfy; I’ll give you a list in a moment of things that could be contributed by a wide variety of fields. Still, it does place some limitations: For one, it rules out the simplest form of earning to give (a more nuanced form might cause you to choose quantum physics over social work because it pays better and is just as productive—but you’re not simply maximizing income to donate). For another, it rules out routine, ordinary jobs that the world needs but don’t make significant breakthroughs. The world needs truck drivers (until robot trucks take off), but there will never be a great world-changing truck driver, because even the world’s greatest truck driver can only carry so much stuff so fast. There are no world-famous secretaries or plumbers. People like to say that these sorts of jobs “change the world in their own way”, which is a nice sentiment, but ultimately it just doesn’t get things done. We didn’t lift ourselves into the Industrial Age by people being really fantastic blacksmiths; we did it by inventing machines that make blacksmiths obsolete. We didn’t rise to the Information Age by people being really good slide-rule calculators; we did it by inventing computers that work a million times as fast as any slide-rule. Maybe not everyone can have this kind of grand world-changing impact; and I certainly agree that you shouldn’t have to in order to live a good life in peace and happiness. But if that’s what you’re hoping to do with your life, there are certain professions that give you a chance of doing so—and certain professions that don’t.
  3. The important tasks must be currently underinvested. There are a lot of very big problems that many people are already working on. If you work on the problems that are trendy, the ones everyone is talking about, your marginal contribution may be very small. On the other hand, you can’t just pick problems at random; many problems are not invested in precisely because they aren’t that important. You need to find problems people aren’t working on but should be—problems that should be the focus of our attention but for one reason or another get ignored. A good example here is to work on pancreatic cancer instead of breast cancer; breast cancer research is drowning in money and really doesn’t need any more; pancreatic cancer kills 2/3 as many people but receives less than 1/6 as much funding. If you want to do cancer research, you should probably be doing pancreatic cancer.
  4. You must have something about you that gives you a comparative—and preferably, absolute—advantage in that field. This is the hardest one to achieve, and it is in fact the reason why most people can’t make world-changing breakthroughs. It is in fact so hard to achieve that it’s difficult to even say you have until you’ve already done something world-changing. You must have something special about you that lets you achieve what others have failed. You must be one of the best in the world. Even as you stand on the shoulders of giants, you must see further—for millions of others stand on those same shoulders and see nothing. If you believe that you have what it takes, you will be called arrogant and naïve; and in many cases you will be. But in a few cases—maybe 1 in 100, maybe even 1 in 1000, you’ll actually be right. Not everyone who believes they can change the world does so, but everyone who changes the world believed they could.

Now, what sort of careers might satisfy all these requirements?

Well, basically any kind of scientific research:

Mathematicians could work on network theory, or nonlinear dynamics (the first step: separating “nonlinear dynamics” into the dozen or so subfields it should actually comprise—as has been remarked, “nonlinear” is a bit like “non-elephant”), or data processing algorithms for our ever-growing morasses of unprocessed computer data.

Physicists could be working on fusion power, or ways to neutralize radioactive waste, or fundamental physics that could one day unlock technologies as exotic as teleportation and faster-than-light travel. They could work on quantum encryption and quantum computing. Or if those are still too applied for your taste, you could work in cosmology and seek to answer some of the deepest, most fundamental questions in human existence.

Chemists could be working on stronger or cheaper materials for infrastructure—the extreme example being space elevators—or technologies to clean up landfills and oceanic pollution. They could work on improved batteries for solar and wind power, or nanotechnology to revolutionize manufacturing.

Biologists could work on any number of diseases, from cancer and diabetes to malaria and antibiotic-resistant tuberculosis. They could work on stem-cell research and regenerative medicine, or genetic engineering and body enhancement, or on gerontology and age reversal. Biology is a field with so many important unsolved problems that if you have the stomach for it and the interest in some biological problem, you can’t really go wrong.

Electrical engineers can obviously work on improving the power and performance of computer systems, though I think over the last 20 years or so the marginal benefits of that kind of research have begun to wane. Efforts might be better spent in cybernetics, control systems, or network theory, where considerably more is left uncharted; or in artificial intelligence, where computing power is only the first step.

Mechanical engineers could work on making vehicles safer and cheaper, or building reusable spacecraft, or designing self-constructing or self-repairing infrastructure. They could work on 3D printing and just-in-time manufacturing, scaling it up for whole factories and down for home appliances.

Aerospace engineers could link the world with hypersonic travel, build satellites to provide Internet service to the farthest reaches of the globe, or create interplanetary rockets to colonize Mars and the moons of Jupiter and Saturn. They could mine asteroids and make previously rare metals ubiquitous. They could build aerial drones for delivery of goods and revolutionize logistics.

Agronomists could work on sustainable farming methods (hint: stop farming meat), invent new strains of crops that are hardier against pests, more nutritious, or higher-yielding; on the other hand a lot of this is already being done, so maybe it’s time to think outside the box and consider what we might do to make our food system more robust against climate change or other catastrophes.

Ecologists will obviously be working on predicting and mitigating the effects of global climate change, but there are a wide variety of ways of doing so. You could focus on ocean acidification, or on desertification, or on fishery depletion, or on carbon emissions. You could work on getting the climate models so precise that they become completely undeniable to anyone but the most dogmatically opposed. You could focus on endangered species and habitat disruption. Ecology is in general so underfunded and undersupported that basically anything you could do in ecology would be beneficial.

Neuroscientists have plenty of things to do as well: Understanding vision, memory, motor control, facial recognition, emotion, decision-making and so on. But one topic in particular is lacking in researchers, and that is the fundamental Hard Problem of consciousness. This one is going to be an uphill battle, and will require a special level of tenacity and perseverance. The problem is so poorly understood it’s difficult to even state clearly, let alone solve. But if you could do it—if you could even make a significant step toward it—it could literally be the greatest achievement in the history of humanity. It is one of the fundamental questions of our existence, the very thing that separates us from inanimate matter, the very thing that makes questions possible in the first place. Understand consciousness and you understand the very thing that makes us human. That achievement is so enormous that it seems almost petty to point out that the revolutionary effects of artificial intelligence would also fall into your lap.

The arts and humanities also have a great deal to contribute, and are woefully underappreciated.

Artists, authors, and musicians all have the potential to make us rethink our place in the world, reconsider and reimagine what we believe and strive for. If physics and engineering can make us better at winning wars, art and literature can remind us why we should never fight them in the first place. The greatest works of art can remind us of our shared humanity, link us all together in a grander civilization that transcends the petty boundaries of culture, geography, or religion. Art can also be timeless in a way nothing else can; most of Aristotle’s science is long-since refuted, but the Great Pyramid, built thousands of years before him, continues to awe us. (Aristotle is about equidistant chronologically between us and the Great Pyramid.)

Philosophers may not seem like they have much to add—and to be fair, a great deal of what goes on today in metaethics and epistemology doesn’t add much to civilization—but in fact it was Enlightenment philosophy that brought us democracy, the scientific method, and market economics. Today there are still major unsolved problems in ethics—particularly bioethics—that are in need of philosophical research. Technologies like nanotechnology and genetic engineering offer us the promise of enormous benefits, but also the risk of enormous harms; we need philosophers to help us decide how to use these technologies to make our lives better instead of worse. We need to know where to draw the lines between life and death, between justice and cruelty. Literally nothing could be more important than knowing right from wrong.

Now that I have sung the praises of the natural sciences and the humanities, let me now explain why I am a social scientist, and why you probably should be as well.

Psychologists and cognitive scientists obviously have a great deal to give us in the study of mental illness, but they may actually have more to contribute in the study of mental health—in understanding not just what makes us depressed or schizophrenic, but what makes us happy or intelligent. The 21st century may not simply see the end of mental illness, but the rise of a new level of mental prosperity, where being happy, focused, and motivated are matters of course. The revolution that biology has brought to our lives may pale in comparison to the revolution that psychology will bring. On the more social side of things, psychology may allow us to understand nationalism, sectarianism, and the tribal instinct in general, and allow us to finally learn to undermine fanaticism, encourage critical thought, and make people more rational. The benefits of this are almost impossible to overstate: It is our own limited, broken, 90%-or-so heuristic rationality that has brought us from simians to Shakespeare, from gorillas to Gödel. To raise that figure to 95% or 99% or 99.9% could be as revolutionary as was whatever evolutionary change first brought us out of the savannah as Australopithecus africanus.

Sociologists and anthropologists will also have a great deal to contribute to this process, as they approach the tribal instinct from the top down. They may be able to tell us how nations are formed and undermined, why some cultures assimilate and others collide. They can work to understand and combat bigotry in all its forms: racism, sexism, ethnocentrism. These could be the fields that finally end war, by understanding and correcting the imbalances in human societies that give rise to violent conflict.

Political scientists and public policy researchers can allow us to understand and restructure governments, undermining corruption, reducing inequality, making voting systems more expressive and more transparent. They can search for the keystones of different political systems, finding the weaknesses in democracy to shore up and the weaknesses in autocracy to exploit. They can work toward a true international government, representative of all the world’s people and with the authority and capability to enforce global peace. If the sociologists don’t end war and genocide, perhaps the political scientists can—or more likely they can do it together.

And then, at last, we come to economists. While I certainly work with a lot of ideas from psychology, sociology, and political science, I primarily consider myself an economist. Why is that? Why do I think the most important problems for me—and perhaps everyone—to be working on are fundamentally economic?

Because, above all, economics is broken. The other social sciences are basically on the right track; their theories are still very limited, their models are not very precise, and there are decades of work left to be done, but the core principles upon which they operate are correct. Economics is the field to work in because of criterion 3: Almost all the important problems in economics are underinvested.

Macroeconomics is where we are doing relatively well, and yet the Keynesian models that allowed us to reduce the damage of the Second Depression nonetheless had no power to predict its arrival. While inflation has been at least somewhat tamed, the far worse problem of unemployment has not been resolved or even really understood.

When we get to microeconomics, the neoclassical models are totally defective. Their core assumptions of total rationality and total selfishness are embarrassingly wrong. We have no idea what controls asset prices, or decides credit constraints, or motivates investment decisions. Our models of how people respond to risk are all wrong. We have no formal account of altruism or its limitations. As manufacturing is increasingly automated and work shifts into services, most economic models make no distinction between the two sectors. While finance takes over more and more of our society’s wealth, most formal models of the economy don’t even include a financial sector.

Economic forecasting is no better than chance. The most widely-used asset-pricing model, CAPM, fails completely in empirical tests; its defenders concede this and then have the audacity to declare that it doesn’t matter because the mathematics works. The Black-Scholes derivative-pricing model that caused the Second Depression could easily have been predicted to fail, because it contains a term that assumes normally distributed returns when we know for a fact that financial markets are fat-tailed; simply put, it claims that certain events will never happen which actually occur several times a year.
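
To see the scale of the error, here is the tail-probability arithmetic as a minimal sketch (this is just the normality claim, not the Black-Scholes model itself): under a normal distribution, a 5-sigma daily move should happen roughly once in several thousand years of trading, yet moves of that size show up in real markets every few years.

```python
from math import erfc, sqrt

def normal_tail(sigmas):
    """P(a standard normal draw exceeds `sigmas` standard deviations)."""
    return erfc(sigmas / sqrt(2)) / 2

# Under the normality assumption, a 5-sigma daily move is essentially impossible:
p = normal_tail(5)

# Expected waiting time for one such day, at 252 trading days per year:
trading_days_per_year = 252
years_between_events = 1 / (p * trading_days_per_year)
print(f"P(5-sigma day) = {p:.2e}, i.e. once every {years_between_events:,.0f} years")
```

A model that assigns a once-in-millennia probability to events we observe within living memory is not merely imprecise; it is wrong in exactly the region where derivative pricing matters most.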

Worst of all, economics is the field that people listen to. When a psychologist or sociologist says something on television, people say that it sounds interesting and basically ignore it. When an economist says something on television, national policies are shifted accordingly. Austerity exists as national policy in part due to a spreadsheet error by two famous economists.

Keynes already knew this in 1936: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

Meanwhile, the problems that economics deals with have a direct influence on the lives of millions of people. Bad economics gives us recessions and depressions; it cripples our industries and siphons off wealth to an increasingly corrupt elite. Bad economics literally starves people: It is because of bad economics that there is still such a thing as world hunger. We have enough food, we have the technology to distribute it—but we don’t have the economic policy to lift people out of poverty so that they can afford to buy it. Bad economics is why we don’t have the funding to cure diabetes or colonize Mars (but we have the funding for oil fracking and aircraft carriers, don’t we?). All of that other scientific research that needs to be done probably could be done, if the resources of our society were properly distributed and utilized.

This combination of overwhelming influence, overwhelming importance, and overwhelming error makes economics the low-hanging fruit; you don’t even have to be particularly brilliant to have better ideas than most economists (though no doubt it helps if you are). Economics is where we have a whole bunch of important questions that are unanswered—or the answers we have are wrong. (As Will Rogers said, “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”)

Thus, rather than tell you to go into finance and earn to give, those economists could simply have said: “You should become an economist. You could hardly do worse than we have.”

The Cognitive Science of Morality Part II: Molly Crockett

JDN 2457140 EDT 20:16.

This weekend has been very busy for me, so this post is going to be shorter than most—which is probably a good thing anyway, since my posts tend to run a bit long.

In an earlier post I discussed the Weinberg Cognitive Science Conference and my favorite speaker in the lineup, Joshua Greene. After a brief interlude from Capybara Day, it’s now time to talk about my second-favorite speaker, Molly Crockett. (Is it just me, or does the name “Molly” somehow seem incongruous with a person of such prestige?)

Molly Crockett is a neuroeconomist, though you’d never hear her say that. She doesn’t think of herself as an economist at all, but purely as a neuroscientist. I suspect this is because when she hears the word “economist” she thinks of only mainstream neoclassical economists, and she doesn’t want to be associated with such things.

Still, what she studies is clearly neuroeconomics—I in fact first learned of her work by reading the textbook Neuroeconomics, though I really got interested in her work after watching her TED Talk. It’s one of the better TED talks (they put out so many of them now that the quality is mixed at best); she talks about news reporting on neuroscience, how it is invariably ridiculous and sensationalist. This is particularly frustrating because of how amazing and important neuroscience actually is.

I could almost forgive the sensationalism if they were talking about something that’s actually fantastically boring, like, say, tax codes, or financial regulations. Of course, even then there is the Oliver Effect: You can hide a lot of evil by putting it in something boring. But Dodd-Frank is 2300 pages long; I read an earlier draft that was only (“only”) 600 pages, and it literally contained a three-page section explaining how to define the word “bank”. (Assuming direct proportionality, I would infer that there is now a twelve-page section defining the word “bank”. Hopefully not?) It doesn’t get a whole lot more snoozeworthy than that. So if you must be a bit sensationalist in order to get people to see why eliminating margin requirements and the swaps pushout rule are terrible, terrible ideas, so be it.

But neuroscience is not boring, and so sensationalism only means that news outlets are making up exciting things that aren’t true instead of saying the actually true things that are incredibly exciting.

Here, let me express without sensationalism what Molly Crockett does for a living: Molly Crockett experimentally determines how psychoactive drugs modulate moral judgments. The effects she observes are small, but they are real; and since these experiments are done using small doses for a short period of time, if these effects scale up they could be profound. This is the basic research component—when it comes to technological fruition it will be literally A Clockwork Orange. But it may be A Clockwork Orange in the best possible way: It could be, at last, a medical cure for psychopathy, a pill to make us not just happier or healthier, but better. We are not there yet by any means, but this is clearly the first step: Molly Crockett is to A Clockwork Orange roughly as Michael Faraday is to the Internet.

In one of the experiments she talked about at the conference, Crockett found that serotonin reuptake inhibitors enhance harm aversion. Serotonin reuptake inhibitors are very commonly used drugs—you are likely familiar with one called Prozac. So basically what this study means is that Prozac makes people more averse to causing pain in themselves or others. It doesn’t necessarily make them more altruistic, let alone more ethical; but it does make them more averse to causing pain. (To see the difference, imagine a 19th-century field surgeon dealing with a wounded soldier; there is no anesthetic, but an amputation must be made. Sometimes being ethical requires causing pain.)

The experiment is actually what Crockett calls “the honest Milgram Experiment”; under Milgram, the experimenters told their subjects they would be causing shocks, but no actual shocks were administered. Under Crockett, the shocks are absolutely 100% real (though they are restricted to a much lower voltage of course). People are given competing offers that contain an amount of money and a number of shocks to be delivered, either to themselves or to the other subject. They decide how much it’s worth to them to bear the shocks—or to make someone else bear them. It’s a classic willingness-to-pay paradigm, applied to the Milgram Experiment.

What Crockett found did not surprise me, nor do I expect it will surprise you if you imagine yourself in the same place; but it would totally knock the socks off of any neoclassical economist. People are much more willing to bear shocks for money than they are to give shocks for money. They are what Crockett terms hyper-altruistic; I would say that they are exhibiting an apparent solidarity coefficient greater than 1. They seem to be valuing others more than they value themselves.

Normally I’d say that this makes no sense at all—why would you value some random stranger more than yourself? Valuing them equally, perhaps; and obviously only a psychopath would value them not at all; but more? And there’s no way you can actually live this way in your daily life; you’d give away all your possessions and perhaps even starve yourself to death. (I guess maybe Jesus lived that way.) But Crockett came up with a model that explains it pretty well: We are morally risk-averse. If we knew we were dealing with someone very strong who had no trouble dealing with shocks, we’d be willing to shock them a fairly large amount. But we might actually be dealing with someone very vulnerable who would suffer greatly; and we don’t want to take that chance.
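
That risk-aversion story can be sketched numerically. Everything below is assumed for illustration—the quadratic pain function, the uniform uncertainty about the stranger’s sensitivity, and all the numbers—but it shows how uncertainty alone can generate an apparent solidarity coefficient above 1 without anyone actually valuing strangers more than themselves.

```python
import random

def pain(voltage, sensitivity):
    # Assumed convex in sensitivity: vulnerable people suffer disproportionately.
    return voltage * sensitivity ** 2

# You know your own sensitivity exactly; the stranger's is uncertain,
# but equal to yours on average.
my_sensitivity = 1.0
rng = random.Random(0)
stranger_sensitivities = [rng.uniform(0.5, 1.5) for _ in range(100_000)]

voltage = 10.0
harm_to_self = pain(voltage, my_sensitivity)
harm_to_stranger = sum(pain(voltage, s) for s in stranger_sensitivities) / 100_000

# By Jensen's inequality E[s^2] > (E[s])^2, so the stranger's *expected*
# pain exceeds your own even though their *average* sensitivity is the same.
apparent_solidarity = harm_to_stranger / harm_to_self
print(apparent_solidarity)  # a bit above 1
```

So a morally risk-averse subject would demand more money to shock the stranger than to shock themselves—exactly the “hyper-altruistic” pattern—purely because the stranger’s vulnerability is unknown.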

I think there’s some truth to that. But her model leaves something else out that I think is quite important: We are also averse to unfairness. We don’t like the idea of raising one person while lowering another. (Obviously not so averse as to never do it—we do it all the time—but without a compelling reason we consider it morally unjustified.) So if the two subjects are in roughly the same condition (being two undergrads at Oxford, they probably are), then helping one while hurting the other is likely to create inequality where none previously existed. But if you hurt yourself in order to help yourself, no such inequality is created; all you do is raise yourself up, provided that you do believe that the money is good enough to be worth the shocks. It’s actually quite Rawlsian; lifting one person up while not affecting the other is exactly the sort of inequality you’re allowed to create according to the Difference Principle.

There’s also the fact that the subjects can’t communicate; I think if I could make a deal to share the money afterward, I’d feel better about shocking someone more in order to get us both more money. So perhaps with communication people would actually be willing to shock others more. (And the sensationalist headline would of course be: “Talking makes people hurt each other.”)

But all of these ideas are things that could be tested in future experiments! And maybe I’ll do those experiments someday, or Crockett, or one of her students. And with clever experimental paradigms we might find out all sorts of things about how the human mind works, how moral intuitions are structured, and ultimately how chemical interventions can actually change human moral behavior. The potential for both good and evil is so huge, it’s both wondrous and terrifying—but can you deny that it is exciting?

And that’s not even getting into the Basic Fact of Cognitive Science, which undermines all concepts of afterlife and theistic religion. I already talked about it before—as the sort of thing that I sort of wish I could say when I introduce myself as a cognitive scientist—but I think it bears repeating.

As Patricia Churchland said on the Colbert Report: Colbert asked, “Are you saying I have no soul?” and she answered, “Yes.” I actually prefer Daniel Dennett’s formulation: “Yes, we have a soul, but it’s made of lots of tiny robots.”

We don’t have a magical, supernatural soul (whatever that means); we don’t have an immortal soul that will rise into Heaven or be reincarnated in someone else. But we do have something worth preserving: We have minds that are capable of consciousness. We love and hate, exalt and suffer, remember and imagine, understand and wonder. And yes, we are born and we die. Once the unique electrochemical pattern that defines your consciousness is sufficiently degraded, you are gone. Nothing remains of what you were—except perhaps the memories of others, or things you have created. But even this legacy is unlikely to last forever. One day it is likely that all of us—and everything we know, and everything we have built, from the Great Pyramids to Hamlet to Beethoven’s Ninth to Principia Mathematica to the US Interstate Highway System—will be gone. I don’t have any consolation to offer you on that point; I can’t promise you that anything will survive a thousand years, much less a million. There is a chance—even a chance that at some point in the distant future, whatever humanity has become will find a way to reverse the entropic decay of the universe itself—but nothing remotely like a guarantee. In all probability you, and I, and all of this will be gone someday, and that is absolutely terrifying.

But it is also undeniably true. The fundamental link between the mind and the brain is one of the basic facts of cognitive science; indeed I like to call it The Basic Fact of Cognitive Science. We know specifically which kinds of brain damage will make you unable to form memories, comprehend language, speak language (a totally different area), see, hear, smell, feel anger, integrate emotions with logic… do I need to go on? Everything that you are is done by your brain—because you are your brain.

Now why can’t the science journalists write about that? Instead we get “The Simple Trick That Can Boost Your Confidence Immediately” and “When it Comes to Picking Art, Men & Women Just Don’t See Eye to Eye.” HuffPo is particularly awful of course; the New York Times is better, but still hardly as good as one might like. They keep trying to find ways to make it exciting—but so rarely seem to grasp how exciting it already is.

Love is rational

JDN 2457066 PST 15:29.

Since I am writing this the weekend of Valentine’s Day (actually by the time it is published it will be Valentine’s Day) and sitting across from my boyfriend, it seems particularly appropriate that today’s topic should be love. As I am writing it is in fact Darwin Day, so it is fitting that evolution will be a major topic as well.

Usually we cognitive economists are the ones reminding neoclassical economists that human beings are not always rational. Today however I must correct a misconception in the opposite direction: Love is rational, or at least it can be, should be, and typically is.

Lately I’ve been reading The Logic of Life which actually makes much the same point, about love and many other things. I had expected it to be a dogmatic defense of economic rationality—published in 2008 no less, which would make it the scream of a dying paradigm as it carries us all down with it—but I was in fact quite pleasantly surprised. The book takes a nuanced position on rationality very similar to my own, and actually incorporates many of the insights from neuroeconomics and cognitive economics. I think Harford would basically agree with me that human beings are 90% rational (but woe betide the other 10%).

We have this romantic (Romantic?) notion in our society that love is not rational, it is “beyond” rationality somehow. “Love is blind”, they say; and this is often used as a smug reply to the notion that rationality is the proper guide to live our lives.

The argument would seem to follow: “Love is not rational, love is good, therefore rationality is not always good.”

But then… the argument would follow? What do you mean, follow? Follow logically? Follow rationally? Something is clearly wrong if we’ve constructed a rational argument intended to show that we should not live our lives by rational arguments.

And the problem of course is the premise that love is not rational. Whatever made you say that?

It’s true that love is not directly volitional, not in the way that it is volitional to move your arm upward or close your eyes or type the sentence “Jackdaws love my big sphinx of quartz.” You don’t exactly choose to love someone, weighing the pros and cons and making a decision the way you might choose which job offer to take or which university to attend.

But then, you don’t really choose which university you like either, now do you? You choose which to attend. But your enjoyment of that university is not a voluntary act. And similarly you do in fact choose whom to date, whom to marry. And you might well consider the pros and cons of such decisions. So the difference is not as large as it might at first seem.

More importantly, to say that our lives should be rational is not the same as saying they should be volitional. You can’t live your life completely volitionally, no matter how hard you try. You simply don’t have the cognitive resources to maintain constant awareness of every breath, every heartbeat. Yet there is nothing irrational about breathing or heartbeats—indeed they are necessary for survival and thus a precondition of anything rational you might ever do.

Indeed, in many ways it is our subconscious that is the most intelligent part of us. It is not as flexible as our conscious mind—that is why our conscious mind is there—but the human subconscious is unmatched in its efficiency and reliability among all known computational systems. Walk across a room and it will solve inverse kinematics in real time. Throw a ball and it will solve three-dimensional nonlinear differential equations as well. Look at a familiar face and it will immediately identify it among a set of hundreds of faces with near-perfect accuracy regardless of the angle, lighting conditions, or even hairstyle. To see that I am not exaggerating the immense difficulty of these tasks, look at how difficult it is to make robots that can walk on two legs or throw balls. Face recognition is so difficult that it is still an unsolved problem with an extensive body of ongoing research.

And love, of course, is the subconscious system that has been most directly optimized by natural selection. Our very survival has depended upon it for millions of years. Indeed, it’s amazing how often it does seem to fail given those tight optimization constraints; I think this is for two reasons. First, natural selection optimizes for inclusive fitness, which is not the same thing as optimizing for happiness—what’s good for your genes may not be good for you per se. Many of the ways that love hurts us seem to be based around behaviors that probably did on average spread more genes on the African savannah. Second, the task of selecting an optimal partner is so mind-bogglingly complex that even the most powerful computational system in the known universe still can only do it so well. Imagine trying to construct a formal decision model that would tell you whom you should marry—all the variables you’d need to consider, the cost of sampling each of those variables sufficiently, the proper weightings on all the different terms in the utility function. Perhaps the wonder is that love is as rational as it is.

Indeed, love is evidence-based—and when it isn’t, this is cause for concern. The evidence is most often presented in small ways over long periods of time—a glance, a kiss, a gift, a meeting canceled to stay home and comfort you. Some ways are larger—a career move postponed to keep the family together, a beautiful wedding, a new house. We aren’t formally calculating the Bayesian probability at each new piece of evidence—though our subconscious brains might be, and whatever they’re doing the results aren’t far off from that mathematical optimum.
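
For concreteness, here is what that updating would look like if it were done explicitly. The prior, the likelihood ratio, and the count of observations are invented illustrative values, not measured quantities; the point is only that many small pieces of weak evidence compound into near-certainty.

```python
# Bayesian updating on the hypothesis "this person loves me".
prior_odds = 1.0  # start from even odds

# Each small act of caring (a glance, a kiss, a canceled meeting) is weak
# evidence: assume it is twice as likely if they love you as if they don't.
likelihood_ratio = 2.0
observations = 20

posterior_odds = prior_odds * likelihood_ratio ** observations
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)
```

Twenty such observations multiply the odds by about a million; no single gesture proves anything, but the accumulation leaves essentially no room for doubt.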

The notion that you will never “truly know” if others love you is no more epistemically valid or interesting than the notion that you will never “truly know” if your shirt is grue instead of green or if you are a brain in a vat. Perhaps we’ve been wrong about gravity all these years, and on April 27, 2016 it will suddenly reverse direction! No, it won’t, and I’m prepared to literally bet the whole world on that (frankly I’m not sure I have a choice). To be fair, the proposition that your spouse of twenty years or your mother loves you is perhaps not that certain—but it’s pretty darn certain. Perhaps the proper comparison is the level of certainty that climate change is caused by human beings, or even less, the level of certainty that your car will not suddenly veer off the road and kill you. The latter is something that actually happens—but we all drive every day assuming it won’t. By the time you marry someone, you can and should be that certain that they love you.

Love without evidence is bad love. The sort of unrequited love that builds in secret based upon fleeting glimpses, hours of obsessive fantasy, and little or no interaction with its subject isn’t romantic—it’s creepy and psychologically unhealthy. The extreme of that sort of love is what drove John Hinckley Jr. to shoot Ronald Reagan in order to impress Jodie Foster.

I don’t mean to make you feel guilty if you have experienced such a love—most of us have at one point or another—but it disgusts me how much our society tries to elevate that sort of love as the “true love” to which we should all aspire. We encourage people—particularly teenagers—to conceal their feelings for a long time and then release them in one grand surprise gesture of affection, which is just about the opposite of what you should actually be doing. (Look at Love Actually, which is just about the opposite of what its title says.) I think a great deal of strife in our society would be eliminated if we taught our children how to build relationships gradually over time instead of constantly presenting them with absurd caricatures of love that no one can—or should—follow.

I am pleased to see that our cultural norms on that point seem to be changing. A corporation as absurdly powerful as Disney is both an influence upon and a barometer of our social norms, and the trope in the most recent Disney films (like Frozen and Maleficent) is that true love is not the fiery passion of love at first sight, but the deep bond between family members that builds over time. This is a much healthier concept of love, though I wouldn’t exclude romantic love entirely. Romantic love can be true love, but only by building over time through a similar process.

Perhaps there is another reason people are uncomfortable with the idea that love is rational; by definition, rational behaviors respond to incentives. And since we tend to conceive of incentives as a purely selfish endeavor, this would seem to imply that love is selfish, which seems somewhere between painfully cynical and outright oxymoronic.

But while love certainly does carry many benefits for its users—being in love will literally make you live longer, by quite a lot, an effect size comparable to quitting smoking or exercising twice a week—it also carries many benefits for its recipients. Love is in fact the primary means by which evolution has shaped us toward altruism; it is the love for our family and our tribe that makes us willing to sacrifice so much for them. Not all incentives are selfish; indeed, an incentive is really just something that motivates you to action. If you could truly convince me that a given action I took would have even a reasonable chance of ending world hunger, I would do almost anything to achieve it; I can scarcely imagine a greater incentive, even though I would be harmed and the benefits would accrue to people I have never met.

Love evolved because it advanced the fitness of our genes, of course. And this bothers many people; it seems to make our altruism ultimately just a different form of selfishness, I suppose: selfishness for our genes instead of ourselves. But this is a genetic fallacy, isn’t it? Yes, evolution by natural selection is a violent process, full of death and cruelty and suffering (as Tennyson put it, “red in tooth and claw”); but that doesn’t mean that its outcome—namely ourselves—is so irredeemable. We are, in fact, altruistic, regardless of where that altruism came from. The fact that it advanced our genes can actually be comforting in a way, because it reminds us that the universe is nonzero-sum and benefiting others does not have to mean harming ourselves.

One question I like to ask when people suggest that some scientific fact undermines our moral status in this way is: “Well, what would you prefer?” If the causal determinism of neural synapses undermines our free will, then what should we have been made of? Magical fairy dust? If we were, fairy dust would be a real phenomenon, and it would obey laws of nature, and you’d just say that the causal determinism of magical fairy dust undermines free will all over again. If the fact that our altruistic emotions evolved by natural selection to advance our inclusive fitness makes us not truly altruistic, then where should altruism have come from? A divine creator who made us to love one another? But then we’re just following our programming! You can always make this sort of argument, which either means that life is necessarily empty of meaning, that no possible universe could ever assuage our ennui—or, what I believe, that life’s meaning does not come from such ultimate causes. It is not what you are made of or where you come from that defines what you are. We are best defined by what we do.

It seems to depend how you look at it: Romantics are made of stardust and the fabric of the cosmos, while cynics are made of the nuclear waste expelled in the planet-destroying explosions of dying balls of fire. Romantics are the cousins of all living things in one grand family, while cynics are apex predators evolved from millions of years of rape and murder. Both of these views are in some sense correct—but I think the real mistake is in thinking that they are incompatible. Human beings are both those things, and more; we are capable of both great compassion and great cruelty—and also great indifference. It is a mistake to think that only the dark sides—or for that matter only the light sides—of us are truly real.

Love is rational; love responds to incentives; love is an evolutionary adaptation. Love binds us together; love makes us better; love leads us to sacrifice for one another.

Love is, above all, what makes us not infinite identical psychopaths.

How is the economy doing?

JDN 2457033 EST 12:22.

Whenever you introduce yourself to someone as an economist, you will typically be asked a single question: “How is the economy doing?” I’ve already experienced this myself, and I don’t have very many dinner parties under my belt.

It’s an odd question, for a couple of reasons: First, I didn’t say I was a macroeconomic forecaster. That’s a very small branch of economics—even a small branch of macroeconomics. Second, it is widely recognized among economists that our forecasters just aren’t very good at what they do. But it is the sort of thing that pops into people’s minds when they hear the word “economist”, so we get asked it a lot.

Why are our forecasts so bad? Some argue that the task is just inherently too difficult due to the chaotic system involved; but they used to say that about weather forecasts, and yet with satellites and computer models our forecasts are now far more accurate than they were 20 years ago. Others have argued that “politics always dominates over economics”, as though politics were somehow a fundamentally separate thing, forever exogenous, a parameter in our models that cannot be predicted. I have a number of economic aphorisms I’m trying to popularize; the one for this occasion is: “Nothing is exogenous.” (Maybe fundamental constants of physics? But actually many physicists think that those constants can be derived from even more fundamental laws.) My most common is “It’s the externalities, stupid.”; next is “It’s not the incentives, it’s the opportunities.”; and the last is “Human beings are 90% rational. But woe betide that other 10%.” In fact, it’s not quite true that all our macroeconomic forecasters are bad; a few, such as Krugman, are actually quite good. The Klein Award is given each year to the best macroeconomic forecasters, and the same names pop up too often for it to be completely random. (Sadly, one of the most common is Citigroup, meaning that our banksters know perfectly well what they’re doing when they destroy our economy—they just don’t care.) So in fact I think our failures of forecasting are not inevitable or permanent.

And of course that’s not what I do at all. I am a cognitive economist; I study how economic systems behave when they are run by actual human beings, rather than by infinite identical psychopaths. I’m particularly interested in what I call the tribal paradigm, the way that people identify with groups and act in the interests of those groups, how much solidarity people feel for each other and why, and what role ideology plays in that identification. I’m hoping to one day formally model solidarity and make directly testable predictions about things like charitable donations, immigration policies and disaster responses.

I do have a more macroeconomic bent than most other cognitive economists; I’m not just interested in how human irrationality affects individuals or corporations, I’m also interested in how it affects society as a whole. But unlike most macroeconomists I care more about inequality than unemployment, and hardly at all about inflation. Unless you start getting 40% inflation per year, inflation really isn’t that harmful—and can you imagine what 40% unemployment would be like? (Also, while 100% inflation is awful, 100% unemployment would be no economy at all.) If we’re going to have a “misery index”, it should weight unemployment at least 10 times as much as inflation—and it should also include terms for poverty and inequality. Frankly maybe we should just use poverty, since I’d be prepared to accept just about any level of inflation, unemployment, or even inequality if it meant eliminating poverty. This is of course yet another reason why a basic income is so great! An anti-poverty measure can really only be called a failure if it doesn’t actually reduce poverty; the only way that could happen with a basic income is if it somehow completely destabilized the economy, which is extremely unlikely as long as the basic income isn’t something ridiculous like $100,000 per year.
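
A re-weighted misery index along those lines might look like the sketch below. Okun’s original index simply adds the unemployment and inflation rates; the weights here are hypothetical stand-ins for “at least 10 times”, not estimated parameters.

```python
def misery_index(unemployment, inflation, poverty, inequality,
                 w_u=10.0, w_i=1.0, w_p=10.0, w_g=5.0):
    """Weighted misery index with poverty and inequality (Gini) terms.
    All weights are illustrative, not estimated."""
    return w_u * unemployment + w_i * inflation + w_p * poverty + w_g * inequality

# With these weights a high-unemployment economy scores worse than a
# high-inflation one, even though Okun's original index would rank them
# roughly equally miserable:
stagnant = misery_index(unemployment=0.10, inflation=0.01,
                        poverty=0.15, inequality=0.45)
inflationary = misery_index(unemployment=0.04, inflation=0.07,
                            poverty=0.15, inequality=0.45)
print(stagnant, inflationary)
```

The exact numbers matter less than the ordering: any weighting in this spirit will rank mass unemployment as a far worse outcome than moderate inflation.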

I could probably talk about my master’s thesis; the econometric models are relatively arcane, but the basic idea of correlating the income concentration of the top 1% of 1% and the level of corruption is something most people can grasp easily enough.

Of course, that wouldn’t be much of an answer to “How is the economy doing?”; usually my answer is to repeat what I’ve last read from mainstream macroeconomic forecasts, which is usually rather banal—but maybe that’s the idea? Most small talk is pretty banal I suppose (I never was very good at that sort of thing). It sounds a bit like this: No, we’re not on the verge of horrible inflation—actually inflation is currently too low. (At this point someone will probably bring up the gold standard, and I’ll have to explain that the gold standard is an unequivocally terrible idea on so, so many levels. The gold standard caused the Great Depression.) Unemployment is gradually improving, and actually job growth is looking pretty good right now; but wages are still stagnant, which is probably what’s holding down inflation. We could have prevented the Second Depression entirely, but we didn’t because Republicans are terrible at managing the economy—all of the 10 most recent recessions and almost 80% of the recessions in the last century were under Republican presidents. Instead the Democrats did their best to implement basic principles of Keynesian macroeconomics despite Republican intransigence, and we muddled through. In another year or two we will actually be back at an unemployment rate of 5%, which the Federal Reserve considers “full employment”. That’s already problematic—what about that other 5%?—but there’s another problem as well: Much of our reduction in unemployment has come not from more people being employed but instead by more people dropping out of the labor force. Our labor force participation rate is the lowest it’s been since 1978, and is still trending downward. Most of these people aren’t getting jobs; they’re giving up. 
At best we may hope that they are people like me, who gave up on finding work in order to invest in their own education, and will return to the labor force more knowledgeable and productive one day—and indeed, college participation rates are also rising rapidly. And no, that doesn’t mean we’re becoming “overeducated”; investment in education, so-called “human capital”, is literally the single most important factor in long-term economic output, by far. Education is why we’re not still in the Stone Age. Physical capital can be replaced, and educated people will replace it efficiently. But all the physical capital in the world will do you no good if nobody knows how to use it. When everyone in the world is a millionaire with two PhDs and all our work is done by robots, maybe then you can say we’re “overeducated”—and maybe then you’d still be wrong. Being “too educated” is like being “too rich” or “too happy”.

That’s usually enough to placate my interlocutor. I should probably count my blessings, for I imagine that the first confrontation you get at a dinner party if you say you are a biologist involves a Creationist demanding that you “prove evolution”. I like to think that some mathematical biologists—yes, that’s a thing—take the request literally and set out to mathematically prove that if allele distributions in a population change according to a stochastic trend, then the alleles with the highest expected fitness have, on average, the highest fitness—which is what we really mean by “survival of the fittest”. The more formal, the better; the goal is to glaze some Creationist eyes. Of course that’s a tautology—but so is literally anything that you can actually prove. Cosmologists probably get similar demands to “prove the Big Bang”, which sounds about as annoying. I may have to deal with gold bugs, but I’ll take them over Creationists any day.
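You can even watch that tautology play out numerically. Here is a toy Wright-Fisher simulation (the function name and all parameter values are mine, purely for illustration): two alleles compete under selection plus random drift, and although any single run can lose the fitter allele to chance, the fitter allele wins on average across many runs.

```python
import random

def wright_fisher(p0, w_a, w_b, pop_size, generations, rng):
    """Track the frequency of allele A under selection and drift."""
    p = p0
    for _ in range(generations):
        # Selection: allele A's expected share is weighted by its fitness.
        p_sel = (p * w_a) / (p * w_a + (1 - p) * w_b)
        # Drift: the next generation is a finite random sample.
        count = sum(1 for _ in range(pop_size) if rng.random() < p_sel)
        p = count / pop_size
    return p

rng = random.Random(42)
# Allele A has a 5% fitness edge (1.05 vs 1.00), starting at 50% frequency.
finals = [wright_fisher(0.5, 1.05, 1.00, pop_size=200, generations=100, rng=rng)
          for _ in range(500)]
mean_final = sum(finals) / len(finals)
print(round(mean_final, 3))
```

Individual replicates still go extinct now and then—that is the “stochastic trend”—but the average final frequency of the fitter allele sits well above its starting 50%.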

What do other scientists get? When I tell people I am a cognitive scientist (as a cognitive economist I am sort of both an economist and a cognitive scientist after all), they usually just respond with something like “Wow, you must be really smart,” which I suppose is true enough, but always strikes me as an odd response. I think they just didn’t know enough about the field to even generate a reasonable-sounding question, whereas with economists they always have “How is the economy doing?” handy. Political scientists probably get “Who is going to win the election?” for the same reason. People have opinions about economics, but they don’t have opinions about cognitive science—or rather, they don’t think they do. Actually most people have an opinion about cognitive science that is totally and utterly ridiculous, more on a par with Creationists than gold bugs: That is, most people believe in a soul that survives after death. This is rather like believing that after your computer has been smashed to pieces and ground back into the sand from whence it came, all the files you had on it are still out there somewhere, waiting to be retrieved. No, they’re long gone—and likewise your memories and your personality will be long gone once your brain has rotted away. Yes, we have a soul, but it’s made of lots of tiny robots; when the tiny robots stop working the soul is no more. Everything you are is a result of the functioning of your brain. This does not mean that your feelings are not real or do not matter; they are just as real and important as you thought they were. What it means is that when a person’s brain is destroyed, that person is destroyed, permanently and irrevocably. This is terrifying and difficult to accept; but it is also most definitely true. It is as solid a fact as any in modern science. Many people see a conflict between evolution and religion; but the Pope has long since rendered that one inert.
No, the real conflict, the basic fact that undermines everything religion is based upon, is not in biology but in cognitive science. It is indeed the Basic Fact of Cognitive Science: We are our brains, no more and no less. (But I suppose it wouldn’t be polite to bring that up at dinner parties.)

The “You must be really smart” response is probably what happens to physicists and mathematicians. Quantum mechanics confuses basically everyone, so few dare go near it. The truly bold might try to bring up Schrödinger’s Cat, but are unlikely to understand the explanation of why it doesn’t work. General relativity requires thinking in tensors and four-dimensional spaces—perhaps physicists get asked “What’s inside a black hole?”, which of course no physicist can really answer; the best answer may actually be, “What do you mean, inside?” And if a mathematician tries to explain their work in lay terms, it usually comes off as either incomprehensible or ridiculous: Stokes’ Theorem would come out as either “the integral of a differential form over the boundary of some orientable manifold is equal to the integral of its exterior derivative over the whole manifold” or else something like “The swirliness added up inside an object is equal to the swirliness added up around the edges.”
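(The irony being that the incomprehensible version compresses into a single line of notation—for an orientable manifold $M$ with boundary $\partial M$ and a differential form $\omega$:

$$\int_{\partial M} \omega \;=\; \int_{M} d\omega$$

—which is somehow both statements at once, and shorter than either.)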

Economists, however, always seem to get this one: “How is the economy doing?”

Right now, the answer is this: “It’s still pretty bad, but it’s getting a lot better. Hopefully the new Congress won’t screw that up.”