Nature via Nurture

JDN 2457222 EDT 16:33.

One of the most common “deep questions” human beings have asked ourselves over the centuries is also one of the most misguided: the question of “nature versus nurture”. Is it genetics or environment that makes us what we are?

Humans may be the one entity in the universe for which this question makes the least sense. Artificial constructs have no prior existence, so they are “all nurture”, made whatever we choose to make them. Most other organisms on Earth behave according to fixed instinctual programming, acting out a specific series of responses that have been honed over millions of years, doing only one thing, but doing it exceedingly well. They are in this sense “all nature”. As the saying goes, the fox knows many things, but the hedgehog knows one big thing. Most organisms on Earth are in this sense hedgehogs, but we Homo sapiens are the ultimate foxes. (Ironically, hedgehogs are not actually “hedgehogs” in this sense: being mammals, they have advanced brains capable of responding flexibly to environmental circumstances. Foxes are a good deal more intelligent still, however.)

But human beings are by far the most flexible, adaptable organism on Earth. We live on literally every continent; despite being savannah apes we even live deep underwater and in outer space. Unlike most other species, we do not fit into a well-defined ecological niche; instead, we carve our own. This certainly has downsides; human beings are ourselves a mass extinction event.

Does this mean, therefore, that we are tabula rasa, blank slates upon which anything can be written?

Hardly. We’re more like word processors. Staring (as I of course presently am) at the blinking cursor of a word processor on a computer screen, seeing that wide, open space where a virtual infinity of possible texts could be written, depending entirely upon a sequence of minuscule keystrokes, you could be forgiven for thinking that you are looking at a blank slate. But in fact you are looking at the pinnacle of thousands of years of technological advancement, a machine so advanced, so precisely engineered, that its individual components are one ten-thousandth the width of a human hair (Intel just announced that we can now do even better than that). At peak performance, it is capable of over 100 billion calculations per second. Its random-access memory stores as much information as all the books on a stacks floor of the Hatcher Graduate Library, and its hard drive stores as much as all the books in the US Library of Congress. (Of course, both libraries contain digital media as well, exceeding anything my humble hard drive could hold by a factor of a thousand.)

All of this, simply to process text? Of course not; word processing is an afterthought for a processor that is specifically designed for dealing with high-resolution 3D images. (Of course, nowadays even a low-end netbook that is designed only for word processing and web browsing can typically handle a billion calculations per second.) But here the analogy with humans holds quite well too: written language is only about 5,000 years old, while the human visual mind is at least 100,000 years old. We were 3D image analyzers long before we were word processors. This may be why we say “a picture is worth a thousand words”; we process each with about as much effort, even though the image necessarily contains thousands of times as many bits.
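That last claim is simple arithmetic. Here is a back-of-envelope sketch; the word length and image resolution are illustrative assumptions, not measurements:

```python
# Back-of-envelope: raw information in a thousand words vs. one image.
words = 1_000
bits_per_word = 5 * 8            # ~5 characters per word, 8 bits per character (rough)
text_bits = words * bits_per_word

width, height, bits_per_pixel = 1920, 1080, 24   # one uncompressed HD frame
image_bits = width * height * bits_per_pixel

print(text_bits)                  # 40000
print(image_bits)                 # 49766400
print(image_bits // text_bits)    # 1244: over a thousand times as many bits
```

Compression narrows the gap somewhat, but the picture still wins by orders of magnitude.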

Why is the computer capable of so many different things? Why is the human mind capable of so many more? Not because they are simple and impinged upon by their environments, but because they are complex and precision-engineered to nonlinearly amplify tiny inputs into vast outputs—but only certain tiny inputs.

That is, it is because of our nature that we are capable of being nurtured. It is precisely the millions of years of genetic programming that have optimized the human brain that allow us to learn and adapt so flexibly to new environments and form a vast multitude of languages and cultures. It is precisely the genetically-programmed humanity we all share that makes our environmentally-acquired diversity possible.

In fact, causality also runs in the other direction. Indeed, when I said other organisms were “all nature”, that wasn’t right either; even tightly-programmed instincts are evolved through millions of years of environmental pressure. Human beings have been engaged in cultural interaction long enough for it to begin affecting our genetic evolution: the reason I can digest lactose is that my ancestors about 10,000 years ago raised goats. We have our nature because of our ancestors’ nurture.

And then of course there’s the fact that we need a certain minimum level of environmental enrichment even to develop normally; a genetically-normal human raised in a deficient environment will suffer a kind of mental atrophy, as when children raised feral lose their ability to speak.

Thus, the question “nature or nurture?” seems a bit beside the point: We are extremely flexible and responsive to our environment, because of innate genetic hardware and software, which requires a certain environment to express itself, and which arose because of thousands of years of culture and millions of years of the struggle for survival—we are nurture because nature because nurture.

But perhaps we didn’t actually mean to ask about human traits in general; perhaps we meant to ask about some specific trait, like spatial intelligence, or eye color, or gender identity. This at least can be structured as a coherent question: How heritable is the trait? What proportion of the variance in this trait, in this population, is statistically attributable to genetic variation? Heritability analysis is a well-established methodology in behavioral genetics.

Yet that isn’t the same question at all. For while height is extremely heritable within a given population (usually about 80%), human height worldwide has been increasing dramatically over time due to environmental influences, and can actually be used as a measure of a nation’s economic development. (Look at what happened to the height of men in Japan.) How heritable is height? You have to be very careful what you mean.

Meanwhile, the heritability of neurofibromatosis is actually quite low—as many people acquire the disease by new mutations as inherit it from their parents—but we know for a fact it is a genetic disorder, because we can point to the specific genes that mutate to cause the disease.

Heritability also depends on the population under consideration; speaking English is more heritable within the United States than it is across the world as a whole, because a much larger proportion of people outside the United States are non-native English speakers. In general, a more diverse environment will lead to lower heritability, because there are simply more environmental influences that could affect the trait.
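This population-dependence is easy to see in a toy simulation. The model below is a deliberate caricature (one additive genetic value plus independent environmental noise; real traits are far messier), but it shows how widening the range of environments mechanically lowers heritability even when the gene pool is unchanged:

```python
import random

def sample_variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def heritability(n, env_sd):
    """Toy model: phenotype = additive genetic value + independent environmental noise.
    Heritability is the genetic share of phenotypic variance: Var(G) / Var(P)."""
    g = [random.gauss(0, 1.0) for _ in range(n)]     # genetic values, SD fixed at 1
    e = [random.gauss(0, env_sd) for _ in range(n)]  # environmental effects
    p = [gi + ei for gi, ei in zip(g, e)]            # observed phenotypes
    return sample_variance(g) / sample_variance(p)

random.seed(0)
# Identical gene pool, different environments: heritability changes anyway.
print(heritability(50_000, env_sd=0.5))  # fairly uniform environment: roughly 0.8
print(heritability(50_000, env_sd=2.0))  # highly diverse environment: roughly 0.2
```

The same genes account for 80% of the variance in the first population and only 20% in the second, purely because the second population’s environments vary more.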

As children get older, their behavior becomes more heritable, a result which probably seems completely baffling until you understand what heritability really means. Your genes become a more important factor in your behavior as you grow up, because you become separated from the environment of your birth and immersed in the general environment of your whole society. Lower environmental diversity means higher heritability, by definition. There’s also an effect of choosing your own environment: people who are intelligent and conscientious are likely to choose to go to college, where they will be further trained in knowledge and self-control. This latter effect is called niche-picking.

This is why saying something like “intelligence is 80% genetic” is basically meaningless, and “intelligence is 80% heritable” isn’t much better until you specify the reference population. The heritability of intelligence depends very much on what you mean by “intelligence” and what population you’re looking at for heritability. But even if you do find a high heritability (as we do for, say, Spearman’s g within the United States), this doesn’t mean that intelligence is fixed at birth; it simply means that parents with high intelligence are likely to have children with high intelligence. In evolutionary terms that’s all that matters—natural selection doesn’t care where you got your traits, only that you have them and pass them to your offspring—but many people do care, and IQ being heritable because rich, educated parents raise rich, educated children is very different from IQ being heritable because innately intelligent parents give birth to innately intelligent children. If genetic variation is systematically related to environmental variation, you can measure a high heritability even though the genes are not directly causing the outcome.

We do use twin studies to try to sort this out, but because identical twins raised apart are exceedingly rare, two very serious problems emerge: one, there usually isn’t a large enough sample size to say anything useful; and two, more importantly, this is actually an inaccurate measure in terms of natural selection. The evolutionary pressure is based on the correlation with the genes—it actually doesn’t matter whether the genes are directly causal. All that matters is that organisms with allele X survive and organisms with allele Y do not. Usually that’s because allele X does something useful, but even if it’s simply because people with allele X happen to mostly come from a culture that makes better guns, that will work just as well.

We can see this quite directly: White skin spread across the world not because it was useful (it’s actually terrible in any latitude other than subarctic), but because the cultures that conquered the world happened to be composed mostly of people with White skin. In the 15th century you’d find a very high heritability of “using gunpowder weapons”, and there was definitely a selection pressure in favor of that trait—but it obviously doesn’t take special genes to use a gun.

The kind of heritability you get from twin studies is answering a totally different, nonsensical question, something like: “If we reassigned all offspring to parents randomly, how much of the variation in this trait in the new population would be correlated with genetic variation?” And honestly, I think the only reason people think that this is the question to ask is precisely because even biologists don’t fully grasp the way that nature and nurture are fundamentally entwined. They are trying to answer the intuitive question, “How much of this trait is genetic?” rather than the biologically meaningful “How strongly could a selection pressure for this trait evolve this gene?”

And if right now you’re thinking, “I don’t care how strongly a selection pressure for the trait could evolve some particular gene”, that’s fine; there are plenty of meaningful scientific questions that I don’t find particularly interesting and are probably not particularly important. (I hesitate to provide a rigid ranking, but I think it’s safe to say that “How does consciousness arise?” is a more important question than “Why are male platypuses venomous?” and “How can poverty be eradicated?” is a more important question than “How did the aircraft manufacturing duopoly emerge?”) But that’s really the most meaningful question we can construct from the ill-formed question “How much of this trait is genetic?” The next step is to think about why you thought that you were asking something important.

What did you really mean to ask?

For a bald question like, “Is being gay genetic?” there is no meaningful answer. We could try to reformulate it as a meaningful biological question, like “What is the heritability of homosexual behavior among males in the United States?” or “Can we find genetic markers strongly linked to self-identification as ‘gay’?” but I don’t think those are the questions we really meant to ask. I think actually the question we meant to ask was more fundamental than that: Is it legitimate to discriminate against gay people? And here the answer is unequivocal: No, it isn’t. It is a grave mistake to think that this moral question has anything to do with genetics; discrimination is wrong even against traits that are totally environmental (like religion, for example), and there are morally legitimate actions to take based entirely on a person’s genes (the obvious examples all coming from medicine—you don’t treat someone for cystic fibrosis if they don’t actually have it).

Similarly, when we ask the question “Is intelligence genetic?” I don’t think most people are actually interested in the heritability of spatial working memory among young American males. I think the real question they want to ask is about equality of opportunity, and what it would look like if we had it. If success were entirely determined by intelligence and intelligence were entirely determined by genetics, then even a society with equality of opportunity would show significant inequality inherited across generations. Thus, inherited inequality is not necessarily evidence against equality of opportunity. But this is in fact a deeply disingenuous argument, used by people like Charles Murray to excuse systemic racism, sexism, and concentration of wealth.

We need not say that inherited inequality is necessarily or undeniably evidence against equality of opportunity—merely that it is, in fact, evidence of inequality of opportunity. Moreover, it is far from the only evidence against equality of opportunity; we can also observe the fact that college-educated Black people are no more likely to be employed than White people who didn’t even finish high school, for example, or the fact that otherwise identical resumes with predominantly Black names (like “Jamal”) are less likely to receive callbacks compared to predominantly White names (like “Greg”). We can observe that the same is true for resumes with obviously female names (like “Sarah”) versus obviously male names (like “David”), even when the hiring is done by social scientists. We can directly observe that one-third of the 400 richest Americans inherited their wealth (and if you look closer into the other two-thirds, all of them had some very unusual opportunities, usually due to their family connections—“self-made” is invariably a great exaggeration). The evidence for inequality of opportunity in our society is legion, regardless of how genetics and intelligence are related. In fact, I think that the high observed heritability of intelligence is largely due to the fact that educational opportunities are distributed in a genetically-biased fashion, but I could be wrong about that; maybe there really is a large genetic influence on human intelligence. Even so, that does not justify widespread and directly-measured discrimination. It does not justify a handful of billionaires luxuriating in almost unimaginable wealth as millions of people languish in poverty. Intelligence can be as heritable as you like and it is still wrong for Donald Trump to have billions of dollars while millions of children starve.

This is what I think we need to do when people try to bring up a “nature versus nurture” question. We can certainly talk about the real complexity of the relationship between genetics and environment, which I think are best summarized as “nature via nurture”; but in fact usually we should think about why we are asking that question, and try to find the real question we actually meant to ask.

Love is rational

JDN 2457066 PST 15:29.

Since I am writing this the weekend of Valentine’s Day (actually by the time it is published it will be Valentine’s Day) and sitting across from my boyfriend, it seems particularly appropriate that today’s topic should be love. As I am writing it is in fact Darwin Day, so it is fitting that evolution will be a major topic as well.

Usually we cognitive economists are the ones reminding neoclassical economists that human beings are not always rational. Today however I must correct a misconception in the opposite direction: Love is rational, or at least it can be, should be, and typically is.

Lately I’ve been reading The Logic of Life which actually makes much the same point, about love and many other things. I had expected it to be a dogmatic defense of economic rationality—published in 2008 no less, which would make it the scream of a dying paradigm as it carries us all down with it—but I was in fact quite pleasantly surprised. The book takes a nuanced position on rationality very similar to my own, and actually incorporates many of the insights from neuroeconomics and cognitive economics. I think Harford would basically agree with me that human beings are 90% rational (but woe betide the other 10%).

We have this romantic (Romantic?) notion in our society that love is not rational, it is “beyond” rationality somehow. “Love is blind”, they say; and this is often used as a smug reply to the notion that rationality is the proper guide to live our lives.

The argument would seem to follow: “Love is not rational, love is good, therefore rationality is not always good.”

But then… the argument would follow? What do you mean, follow? Follow logically? Follow rationally? Something is clearly wrong if we’ve constructed a rational argument intended to show that we should not live our lives by rational arguments.

And the problem of course is the premise that love is not rational. Whatever made you say that?

It’s true that love is not directly volitional, not in the way that it is volitional to move your arm upward or close your eyes or type the sentence “Jackdaws love my big sphinx of quartz.” You don’t exactly choose to love someone, weighing the pros and cons and making a decision the way you might choose which job offer to take or which university to attend.

But then, you don’t really choose which university you like either, now do you? You choose which to attend. But your enjoyment of that university is not a voluntary act. And similarly you do in fact choose whom to date, whom to marry. And you might well consider the pros and cons of such decisions. So the difference is not as large as it might at first seem.

More importantly, to say that our lives should be rational is not the same as saying they should be volitional. You simply can’t live your life in a completely volitional way, no matter how hard you try. You simply don’t have the cognitive resources to maintain constant awareness of every breath, every heartbeat. Yet there is nothing irrational about breathing or heartbeats—indeed they are necessary for survival and thus a precondition of anything rational you might ever do.

Indeed, in many ways it is our subconscious that is the most intelligent part of us. It is not as flexible as our conscious mind—that is why our conscious mind is there—but the human subconscious is unmatched in its efficiency and reliability among all known computational systems. Walk across a room and it will solve inverse kinematics in real time. Throw a ball and it will solve three-dimensional nonlinear differential equations as well. Look at a familiar face and it will immediately identify it among a set of hundreds of faces with near-perfect accuracy regardless of the angle, lighting conditions, or even hairstyle. To see that I am not exaggerating the immense difficulty of these tasks, look at how difficult it is to make robots that can walk on two legs or throw balls. Face recognition is so difficult that it is still an unsolved problem with an extensive body of ongoing research.

And love, of course, is the subconscious system that has been most directly optimized by natural selection. Our very survival has depended upon it for millions of years. Indeed, it’s amazing how often it does seem to fail given those tight optimization constraints; I think this is for two reasons. First, natural selection optimizes for inclusive fitness, which is not the same thing as optimizing for happiness—what’s good for your genes may not be good for you per se. Many of the ways that love hurts us seem to be based around behaviors that probably did on average spread more genes on the African savannah. Second, the task of selecting an optimal partner is so mind-bogglingly complex that even the most powerful computational system in the known universe still can only do it so well. Imagine trying to construct a formal decision model that would tell you whom you should marry—all the variables you’d need to consider, the cost of sampling each of those variables sufficiently, the proper weightings on all the different terms in the utility function. Perhaps the wonder is that love is as rational as it is.

Indeed, love is evidence-based—and when it isn’t, this is cause for concern. The evidence is most often presented in small ways over long periods of time—a glance, a kiss, a gift, a meeting canceled to stay home and comfort you. Some ways are larger—a career move postponed to keep the family together, a beautiful wedding, a new house. We aren’t formally calculating the Bayesian probability at each new piece of evidence—though our subconscious brains might be, and whatever they’re doing the results aren’t far off from that mathematical optimum.
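For the curious, that subconscious computation can be caricatured in a few lines. This is only a sketch: the likelihood ratios below are invented for illustration, and real evidence is nowhere near this tidy.

```python
def bayes_update(prior, likelihood_ratio):
    """One step of Bayes' rule in odds form.
    likelihood_ratio = P(evidence | they love you) / P(evidence | they don't)."""
    odds = prior / (1.0 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Start agnostic, then fold in small pieces of evidence one at a time.
p = 0.5
for lr in [2.0, 3.0, 1.5, 4.0]:  # a glance, a kiss, a gift, a canceled meeting
    p = bayes_update(p, lr)
print(round(p, 3))  # 0.973
```

No single observation is decisive, but the evidence compounds: four modestly informative signals take you from a coin flip to near-certainty.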

The notion that you will never “truly know” if others love you is no more epistemically valid or interesting than the notion that you will never “truly know” if your shirt is grue instead of green or if you are a brain in a vat. Perhaps we’ve been wrong about gravity all these years, and on April 27, 2016 it will suddenly reverse direction! No, it won’t, and I’m prepared to literally bet the whole world on that (frankly I’m not sure I have a choice). To be fair, the proposition that your spouse of twenty years or your mother loves you is perhaps not that certain—but it’s pretty darn certain. Perhaps the proper comparison is the level of certainty that climate change is caused by human beings, or even less, the level of certainty that your car will not suddenly veer off the road and kill you. The latter is something that actually happens—but we all drive every day assuming it won’t. By the time you marry someone, you can and should be that certain that they love you.

Love without evidence is bad love. The sort of unrequited love that builds in secret based upon fleeting glimpses, hours of obsessive fantasy, and little or no interaction with its subject isn’t romantic—it’s creepy and psychologically unhealthy. The extreme of that sort of love is what drove John Hinckley Jr. to shoot Ronald Reagan in order to impress Jodie Foster.

I don’t mean to make you feel guilty if you have experienced such a love—most of us have at one point or another—but it disgusts me how much our society tries to elevate that sort of love as the “true love” to which we should all aspire. We encourage people—particularly teenagers—to conceal their feelings for a long time and then release them in one grand surprise gesture of affection, which is just about the opposite of what you should actually be doing. (Look at Love Actually, which is just about the opposite of what its title says.) I think a great deal of strife in our society would be eliminated if we taught our children how to build relationships gradually over time instead of constantly presenting them with absurd caricatures of love that no one can—or should—follow.

I am pleased to see that our cultural norms on that point seem to be changing. A corporation as absurdly powerful as Disney is both an influence upon and a barometer of our social norms, and the trope in the most recent Disney films (like Frozen and Maleficent) is that true love is not the fiery passion of love at first sight, but the deep bond between family members that builds over time. This is a much healthier concept of love, though I wouldn’t exclude romantic love entirely. Romantic love can be true love, but only by building over time through a similar process.

Perhaps there is another reason people are uncomfortable with the idea that love is rational: by definition, rational behaviors respond to incentives. And since we tend to conceive of incentives as purely selfish, this would seem to imply that love is selfish, which seems somewhere between painfully cynical and outright oxymoronic.

But while love certainly does carry many benefits for those who feel it—being in love will literally make you live longer, by quite a lot, an effect size comparable to quitting smoking or exercising twice a week—it also carries many benefits for its recipients as well. Love is in fact the primary means by which evolution has shaped us toward altruism; it is the love for our family and our tribe that makes us willing to sacrifice so much for them. Not all incentives are selfish; indeed, an incentive is really just something that motivates you to action. If you could truly convince me that a given action I took would have even a reasonable chance of ending world hunger, I would do almost anything to achieve it; I can scarcely imagine a greater incentive, even though I would be harmed and the benefits would accrue to people I have never met.

Love evolved because it advanced the fitness of our genes, of course. And this bothers many people; it seems to make our altruism ultimately just a different form of selfishness—selfishness for our genes instead of ourselves. But this is a genetic fallacy, isn’t it? Yes, evolution by natural selection is a violent process, full of death and cruelty and suffering (as Tennyson put it, “red in tooth and claw”); but that doesn’t mean that its outcome—namely ourselves—is so irredeemable. We are, in fact, altruistic, regardless of where that altruism came from. The fact that it advanced our genes can actually be comforting in a way, because it reminds us that the universe is nonzero-sum and benefiting others does not have to mean harming ourselves.

One question I like to ask when people suggest that some scientific fact undermines our moral status in this way is: “Well, what would you prefer?” If the causal determinism of neural synapses undermines our free will, then what should we have been made of? Magical fairy dust? If we were, fairy dust would be a real phenomenon, and it would obey laws of nature, and you’d just say that the causal determinism of magical fairy dust undermines free will all over again. If the fact that our altruistic emotions evolved by natural selection to advance our inclusive fitness makes us not truly altruistic, then where should altruism have come from? A divine creator who made us to love one another? But then we’re just following our programming! You can always make this sort of argument, which either means that life is necessarily empty of meaning, that no possible universe could ever assuage our ennui—or, what I believe, that life’s meaning does not come from such ultimate causes. It is not what you are made of or where you come from that defines what you are. We are best defined by what we do.

It seems to depend how you look at it: Romantics are made of stardust and the fabric of the cosmos, while cynics are made of the nuclear waste expelled in the planet-destroying explosions of dying balls of fire. Romantics are the cousins of all living things in one grand family, while cynics are apex predators evolved from millions of years of rape and murder. Both of these views are in some sense correct—but I think the real mistake is in thinking that they are incompatible. Human beings are both those things, and more; we are capable of both great compassion and great cruelty—and also great indifference. It is a mistake to think that only the dark sides—or for that matter only the light sides—of us are truly real.

Love is rational; love responds to incentives; love is an evolutionary adaptation. Love binds us together; love makes us better; love leads us to sacrifice for one another.

Love is, above all, what makes us not infinite identical psychopaths.

Beware the false balance

JDN 2457046 PST 13:47.

I am now back in Long Beach, hence the return to Pacific Time. Today’s post is a little less economic than most, though it’s certainly still within the purview of social science and public policy. It concerns a question that many academic researchers and in general reasonable, thoughtful people have to deal with: How do we remain unbiased and nonpartisan?

This would not be so difficult if the world were as the most devoted “centrists” would have you believe, and it were actually the case that both sides have their good points and bad points, and both sides have their scandals, and both sides make mistakes or even lie, so you should never take the side of the Democrats or the Republicans but always present both views equally.

Sadly, this is not at all the world in which we live. While Democrats are far from perfect—they are human beings after all, not to mention politicians—Republicans have become completely detached from reality. As Stephen Colbert has said, “Reality has a well-known liberal bias.” You know it’s bad when our detractors call us the reality-based community. Treating both sides as equal isn’t being unbiased—it’s committing a balance fallacy.

Don’t believe me? Here is a list of objective, scientific facts that the Republican Party (and particularly its craziest subset, the Tea Party) has officially taken political stances against:

  1. Global warming is a real problem, and largely caused by human activity. (The Republican majority in the Senate voted down a resolution acknowledging this.)
  2. Human beings share a common ancestor with chimpanzees. (48% of Republicans think that we were created in our present form.)
  3. Animals evolve over time due to natural selection. (Only 43% of Republicans believe this.)
  4. The Earth is approximately 4.5 billion years old. (Marco Rubio said he thinks maybe the Earth was made in seven days a few thousand years ago.)
  5. Hydraulic fracturing can trigger earthquakes. (Republicans in Congress are trying to nullify local regulations on fracking because they insist it is so safe we don’t even need to keep track.)
  6. Income inequality in the United States is the worst it has been in decades and continues to rise. (Mitt Romney said that the concern about income inequality is just “envy”.)
  7. Progressive taxation reduces inequality without adversely affecting economic growth. (Here’s a Republican former New York Senator saying that the President “should be ashamed” for raising taxes on—you guessed it—”job creators”.)
  8. Moderate increases in the minimum wage do not yield significant losses in employment. (Republicans consistently vote against even small increases in the minimum wage, and Democrats consistently vote in favor.)
  9. The United States government has no reason to ever default on its debt. (John Boehner, now Speaker of the House, once said that “America is broke” and if we don’t stop spending we’ll never be able to pay the national debt.)
  10. Human embryos are not in any way sentient, and fetuses are not sentient until at least 17 weeks of gestation, probably more like 30 weeks. (Yet if I am to read it in a way that would make moral sense, “Life begins at conception”—which several Republicans explicitly endorsed at the National Right to Life Convention—would have to imply that even zygotes are sentient beings. If you really just meant “alive”, then that would equally well apply to plants or even bacteria. Sentience is the morally relevant category.)

And that’s not even counting the Republican Party’s association with Christianity and all of the objectively wrong scientific claims that necessarily entails—like the existence of an afterlife and the intervention of supernatural forces. Most Democrats also self-identify as Christian, though rarely with quite the same fervor (the last major Democrat I can think of who was a devout Christian was Jimmy Carter), probably because most Americans self-identify as Christian and are hesitant to elect an atheist President (despite the fact that 93% of the members of the National Academy of Sciences are atheists and the higher your IQ the more likely you are to be an atheist; we wouldn’t want to elect someone who agrees with smart people, now would we?).

It’s true, there are some other crazy ideas out there with a left-wing slant, like the anti-vaccination movement that has wrought epidemic measles upon us, the anti-GMO crowd that rejects basic scientific facts about genetics, and the 9/11 “truth” movement that refuses to believe that Al Qaeda actually caused the attacks. There are in fact far-left Marxists out there who want to tear down the whole capitalist system by glorious revolution and replace it with… er… something (they’re never quite clear on that last point). But none of these things are the official positions of standing members of Congress.

The craziest belief by a standing Democrat I can think of is Dennis Kucinich’s belief that he saw an alien spacecraft. And to be perfectly honest, alien spacecraft are about a thousand times more plausible than Christianity in general, let alone Creationism. There almost certainly are alien spacecraft somewhere in the universe—just most likely so far away we’ll need FTL to encounter them. Moreover, this is not Kucinich’s official position as a member of Congress and it’s not something he has ever made policy based upon.

Indeed, if you’re willing to include the craziest individuals with no real political power who identify with a particular side of the political spectrum, then we should include on the right-wing side people like the Bundy militia in Nevada, neo-Nazis in Detroit, and the dozens of KKK chapters across the US. Not to mention this pastor who wants to murder all gay people in the world (because he truly believes what Leviticus 20:13 actually and clearly says).

If you get to include Marxists on the left, then we get to include Nazis on the right. Or, we could be reasonable and say that only the official positions of elected officials or mainstream pundits actually count, in which case Democrats have views that are basically accurate and reasonable while the majority of Republicans have views that are still completely objectively wrong.

There’s no balance here. For every Democrat who is wrong, there is a Republican who is totally delusional. For every Democrat who distorts the truth, there is a Republican who blatantly lies about basic facts. Not to mention that for every Democrat who has had an ill-advised illicit affair there is a Republican who has committed war crimes.

Actually war crimes are something a fair number of Democrats have done as well, but the difference still stands out in high relief: Barack Obama has ordered double-tap drone strikes that are in violation of the Geneva Convention, but George W. Bush orchestrated a worldwide mass torture campaign and launched pointless wars that slaughtered hundreds of thousands of people. Bill Clinton ordered some questionable CIA operations, but George H.W. Bush was the director of the CIA.

I wish we had two parties that were equally reasonable. I wish there were two—or three, or four—proposals on the table in each discussion, all of which had merits and flaws worth considering. Maybe if we somehow manage to get the Green Party a significant seat in power, or the Social Democrat party, we can actually achieve that goal. But that is not where we are right now. Right now, we have the Democrats, who have some good ideas and some bad ideas; and then we have the Republicans, who are completely out of their minds.

There is an important concept in political science called the Overton window; it is the range of political ideas that are considered “reasonable” or “mainstream” within a society. Things near the middle of the Overton window are considered sensible, even “nonpartisan” ideas, while things near the edges are “partisan” or “political”, and things near but outside the window are seen as “extreme” and “radical”. Things far outside the window are seen as “absurd” or even “unthinkable”.

Right now, our Overton window is in the wrong place. Things like Paul Ryan’s plan to privatize Social Security and Medicare are seen as reasonable when they should be considered extreme. Progressive income taxes of the kind we had in the 1960s are seen as extreme when they should be considered reasonable. Cutting WIC and SNAP with nothing to replace them and letting people literally starve to death are considered at most partisan, when they should be outright unthinkable. Opposition to basic scientific facts like climate change and evolution is considered a mainstream political position—when in terms of empirical evidence Creationism should be more intellectually embarrassing than being a 9/11 truther or thinking you saw an alien spacecraft. And perhaps worst of all, military tactics like double-tap strikes that are literally war crimes are considered “liberal”, while the “conservative” position involves torture, worldwide surveillance and carpet bombing—if not outright full-scale nuclear devastation.

I want to restore reasonable conversation to our political system, I really do. But that really isn’t possible when half the politicians are totally delusional. We have but one choice: We must vote them out.

I say this particularly to people who say “Why bother? Both parties are the same.” No, they are not the same. They are deeply, deeply different, for all the reasons I just outlined above. And if you can’t bring yourself to vote for a Democrat, at least vote for someone! A Green, or a Social Democrat, or even a Libertarian or a Socialist if you must. It is only by the apathy of reasonable people that this insanity can propagate in the first place.

The irrationality of racism

JDN 2457039 EST 12:07.

I thought about making today’s post about the crazy currency crisis in Switzerland, but currency exchange rates aren’t really my area of expertise; this is much more in Krugman’s bailiwick, so you should probably read what Krugman says about the situation. There is one thing I’d like to say, however: I think there is a really easy way to create credible inflation and boost aggregate demand, but for some reason nobody is ever willing to do it: Give people money. Emphasis here on the people—not banks. Don’t adjust interest rates or currency pegs, don’t engage in quantitative easing. Give people money. Actually write a bunch of checks, presumably in the form of refundable tax rebates.

The only reason I can think of that economists don’t do this is they are afraid of helping poor people. They wouldn’t put it that way; maybe they’d say they want to avoid “moral hazard” or “perverse incentives”. But those fears didn’t stop them from loaning $2 trillion to banks or adding $4 trillion to the monetary base; they didn’t stop them from fighting for continued financial deregulation when what the world economy most desperately needs is stronger financial regulation. Our whole derivatives market practically oozes moral hazard and perverse incentives, but they aren’t willing to shut down that quadrillion-dollar con game. So that can’t be the actual fear. No, it has to be a fear of helping poor people instead of rich people, as though “capitalism” meant a system in which we squeeze the poor as tight as we can and heap all possible advantages upon those who are already wealthy. No, that’s called feudalism. Capitalism is supposed to be a system where markets are structured to provide free and fair competition, with everyone on a level playing field.

A basic income is a fundamentally capitalist policy, which maintains equal opportunity with a minimum of government intervention and allows the market to flourish. I suppose if you want to say that all taxation and government spending is “socialist”, fine; then every nation that has ever maintained stability for more than a decade has been in this sense “socialist”. Every soldier, firefighter and police officer paid by a government payroll is now part of a “socialist” system. Okay, as long as we’re consistent about that; but now you really can’t say that socialism is harmful; on the contrary, on this definition socialism is necessary for capitalism. In order to maintain security of property, enforcement of contracts, and equality of opportunity, you need government. Maybe we should just give up on the words entirely, and speak more clearly about what specific policies we want. If I don’t get to say that a basic income is “capitalist”, you don’t get to say financial deregulation is “capitalist”. Better yet, how about you can’t even call it “deregulation”? You have to actually argue in front of a crowd of people that it should be legal for banks to lie to them, and there should be no serious repercussions for any bank that cheats, steals, colludes, or even launders money for terrorists. That is, after all, what financial deregulation actually does in the real world.

Okay, that’s enough about that.

My birthday is coming up this Monday; thus completes my 27th revolution around the Sun. With birthdays come thoughts of ancestry: Though I appear White, I am legally one-quarter Native American, and my total ethnic mix includes English, German, Irish, Mohawk, and Chippewa.

Biologically, what exactly does that mean? Next to nothing.

Human genetic diversity is a real thing, and there are genetic links to not only dozens of genetic diseases and propensity toward certain types of cancer, but also personality and intelligence. There are also of course genes for skin pigmentation.

The human population does exhibit some genetic clustering, but the categories are not what you’re probably used to: Good examples of relatively well-defined genetic clusters include Ashkenazi, Papuan, and Mbuti. There are also many different haplogroups, such as mitochondrial haplogroups L3 and CZ.

Maybe you could even make a case for the “races” East Asian, South Asian, Pacific Islander, and Native American, since the indigenous populations of these geographic areas largely do come from the same genetic clusters. Or you could make a bigger category and call them all “Asian”—but if you include Papuan and Aborigine in “Asian” you’d pretty much have to include Chippewa and Navajo as well.

But I think it tells you a lot about what “race” really means when you realize that the two “race” categories which are most salient to Americans are in fact the categories that are genetically most meaningless. “White” and “Black” are totally nonsensical genetic categorizations.

Let’s start with “Black”; defining a “Black” race is like defining a category of animals by the fact that they are all tinted red—foxes yes, dogs no; robins yes, swallows no; ladybirds yes, cockroaches no. There is more genetic diversity within Africa than there is outside of it. There are African populations that are more closely related to European populations than they are to other African populations. The only thing “Black” people have in common is that their skin is dark, which is due to convergent evolution: It’s not due to common ancestry, but a common environment. Dark skin has a direct survival benefit in climates with intense sunlight. The similarity is literally skin deep.

What about “White”? Well, there are some fairly well-defined European genetic populations, so if we clustered those together we might be able to get something worth calling “White”. The problem is, that’s not how it happened. “White” is a club. The definition of who gets to be “White” has expanded over time, and even occasionally contracted. Originally Hebrew, Celtic, Hispanic, and Italian were not included (and Hebrew, for once, is actually a fairly sensible genetic category, as long as you restrict it to Ashkenazi), but then later they were. But now that we’ve got a lot of poor people coming in from Mexico, we don’t quite think of Hispanics as “White” anymore. We actually watched Arabs lose their “White” card in real-time in 2001; before 9/11, they were “White”; now, “Arab” is a separate thing. And “Muslim” is even treated like a race now, which is like making a racial category of “Keynesians”—never forget that Islam is above all a belief system.

Actually, “White privilege” is almost a tautology—the privilege isn’t given to people who were already defined as “White”, the privilege is to be called “White”. The privilege is to have your ancestors counted in the “White” category so that they can be given rights, while people who are not in the category are denied those rights. There does seem to be a certain degree of restriction by appearance—to my knowledge, no population with skin as dark as Kenyans has ever been considered “White”, and Anglo-Saxons and Nordics have always been included—but the category is flexible to political and social changes.

But really I hate that word “privilege”, because it gets the whole situation backwards. When you talk about “White privilege”, you make it sound as though the problem with racism is that it gives unfair advantages to White people (or to people arbitrarily defined as “White”). No, the problem is that people who are not White are denied rights. It isn’t what White people have that’s wrong; it’s what Black people don’t have. Equating those two things creates a vision of the world as zero-sum, in which each gain for me is a loss for you.

Here’s the thing about zero-sum games: All outcomes are Pareto-efficient. Remember when I talked about Pareto-efficiency? As a quick refresher, an outcome is Pareto-efficient if there is no way for one person to be made better off without making someone else worse off. In general, it’s pretty hard to disagree that, other things equal, Pareto-efficiency is a good thing, and Pareto-inefficiency is a bad thing. But if racism were about “White privilege” and the game were zero-sum, racism would have to be Pareto-efficient.
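The zero-sum claim can be checked mechanically. Below is a small sketch (the payoff values are arbitrary, chosen purely for illustration): in a two-player zero-sum game, any change that helps one player must hurt the other, so no outcome can be Pareto-improved upon.

```python
# Sketch: in a zero-sum game, every outcome is Pareto-efficient.
# Payoff values are made up for illustration only.

def pareto_efficient(outcome, outcomes):
    """An outcome is Pareto-efficient if no other outcome makes
    someone better off without making someone else worse off."""
    return not any(
        all(o[i] >= outcome[i] for i in range(len(outcome)))
        and any(o[i] > outcome[i] for i in range(len(outcome)))
        for o in outcomes
    )

# Zero-sum: player 2's payoff is always the negative of player 1's.
zero_sum_outcomes = [(x, -x) for x in (-10, -3, 0, 3, 10)]
print(all(pareto_efficient(o, zero_sum_outcomes) for o in zero_sum_outcomes))
# → True: every outcome is Pareto-efficient.

# A positive-sum game, by contrast, can have inefficient outcomes:
positive_sum_outcomes = [(1, 1), (3, 3)]  # (1, 1) is dominated by (3, 3)
print(pareto_efficient((1, 1), positive_sum_outcomes))
# → False: (3, 3) makes both players better off.
```

The point of the contrast is exactly the one in the text: only in a non-zero-sum world can an outcome (like racism) be bad for essentially everyone at once.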

In fact, racism is Pareto-inefficient, and that is part of why it is so obviously bad. It harms literally billions of people, and benefits basically no one. Maybe there are a few individuals who are actually, all things considered, better off than they would have been if racism had not existed. But there are certainly not very many such people, and in fact I’m not sure there are any at all. If there are any, it would mean that technically racism is not Pareto-inefficient—but it is definitely very close. At the very least, the damage caused by racism is several orders of magnitude larger than any benefits incurred.

That’s why the “privilege” language, while well-intentioned, is so insidious; it tells White people that racism means taking things away from them. Many of these people are already in dire straits—broke, unemployed, or even homeless—so taking away what they have sounds particularly awful. Of course they’d be hostile to or at least dubious of attempts to reduce racism. You just told them that racism is the only thing keeping them afloat! In fact, quite the opposite is the case: Poor White people are, second only to poor Black people, those who stand the most to gain from a more just society. David Koch and Donald Trump should be worried; we will probably have to take most of their money away in order to achieve social justice. (Bill Gates knows we’ll have to take most of his money away, but he’s okay with that; in fact he may end up giving it away before we get around to taking it.) But the average White person will almost certainly be better off than they were.

Why does it seem like there are benefits to racism? Again, because people are accustomed to thinking of the world as zero-sum. One person is denied a benefit, so that benefit must go somewhere else, right? Nope—it can just disappear entirely, and in this case typically does.

When a Black person is denied a job in favor of a White person who is less qualified, doesn’t that White person benefit? Uh, no, actually, not really. They have been hired for a job that isn’t an optimal fit for them; they aren’t working to their comparative advantage, and that Black person isn’t either and may not be working at all. The total output of the economy will be thereby reduced slightly. When this happens millions of times, the total reduction in output can be quite substantial, and as a result that White person was hired at $30,000 for an unsuitable job when in a racism-free world they’d have been hired at $40,000 for a suitable one. A similar argument holds for sexism; men don’t benefit from getting jobs women are denied if one of those women would have invented a cure for prostate cancer.
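The output loss from mismatched hiring can be sketched with toy numbers (all productivity figures below are hypothetical, invented only to illustrate the mechanism):

```python
# Hypothetical annual output ($) of two candidates in two jobs.
# Candidate A is the stronger engineer; candidate B the stronger manager.
productivity = {
    ("A", "engineer"): 90_000, ("A", "manager"): 60_000,
    ("B", "engineer"): 70_000, ("B", "manager"): 80_000,
}

def total_output(assignment):
    """Total economic output given a {person: job} assignment."""
    return sum(productivity[(person, job)] for person, job in assignment.items())

efficient = {"A": "engineer", "B": "manager"}       # each works to comparative advantage
discriminatory = {"A": "manager", "B": "engineer"}  # B denied the manager role

print(total_output(efficient))       # → 170000
print(total_output(discriminatory))  # → 130000
```

Nobody pockets the missing $40,000; it simply never gets produced, which is the sense in which discrimination is Pareto-inefficient rather than a transfer.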

Indeed, the empowerment of women and minorities is kind of the secret cheat code for creating a First World economy. The great successes of economic development—Korea, Japan, China, the US in WW2—came precisely when those countries suddenly started including women in manufacturing, effectively doubling their total labor capacity. Moreover, it’s pretty clear that the causation ran in this direction. Periods of economic growth are associated with increases in solidarity with other groups—and downturns with decreased solidarity—but the increase in women in the workforce was sudden and early while the increase in growth and total output was prolonged.

Racism is irrational. Indeed it is so obviously irrational that for decades now neoclassical economists have been insisting that there is no need for civil rights policy, affirmative action, etc. because the market will automatically eliminate racism by the rational profit motive. A more recent literature has attempted to show that, contrary to all appearances, racism actually is rational in some cases. Inevitably it relies upon either the background of a racist society (maybe Black people are, on average, genuinely less qualified, but it would only be because they’ve been given poorer opportunities), or an assumption of “discriminatory tastes”, which is basically giving up and redefining the utility function so that people simply get direct pleasure from being racists. Of course, on that sort of definition, you can basically justify any behavior as “rational”: Maybe he just enjoys banging his head against the wall! (A similar slipperiness is used by egoists to argue that caring for your children is actually “selfish”; well, it makes you happy, doesn’t it? Yes, but that’s not why we do it.)

There’s a much simpler way to understand this situation: Racism is irrational, and so is human behavior.

That isn’t a complete explanation, of course; and I think one major misunderstanding neoclassical economists have of cognitive economists is that they think this is what we do—we point out that something is irrational, and then high-five and go home. No, that’s not what we do. Finding the irrationality is just the start; next comes explaining the irrationality, understanding the irrationality, and finally—we haven’t reached this point in most cases—fixing the irrationality.

So what explains racism? In short, the tribal paradigm. Human beings evolved in an environment in which the most important factor in our survival and that of our offspring was not food supply or temperature or predators, it was tribal cohesion. With a cohesive tribe, we could find food, make clothes, fight off lions. Without one, we were helpless. Millions of years in this condition shaped our brains, programming them to treat threats to tribal cohesion as the greatest possible concern. We even reached the point where solidarity for the tribe actually began to dominate basic survival instincts: For a suicide bomber the unity of the tribe—be it Marxism for the Tamil Tigers or Islam for Al-Qaeda—is more important than his own life. We will do literally anything if we believe it is necessary to defend the identities we believe in.

And no, we rationalists are no exception here. We are indeed different from other groups; the beliefs that define us, unlike the beliefs of literally every other group that has ever existed, are actually rationally founded. The scientific method really isn’t just another religion, for unlike religion it actually works. But still, if push came to shove and we were forced to kill and die in order to defend rationality, we would; and maybe we’d even be right to do so. Maybe the French Revolution was, all things considered, a good thing—but it sure as hell wasn’t nonviolent.

This is the background we need to understand racism. It actually isn’t enough to show people that racism is harmful and irrational, because they are programmed not to care. As long as racial identification is the salient identity, the tribe by which we define ourselves, we will do anything to defend the cohesion of that tribe. It is not enough to show that racism is bad; we must in fact show that race doesn’t matter. Fortunately, this is easy, for as I explained above, race does not actually exist.

That makes racism in some sense easier to deal with than sexism, because the very categories of races upon which it is based are fundamentally faulty. Sexes, on the other hand, are definitely a real thing. Males and females actually are genetically different in important ways. Exactly how different in what ways is an open question, and what we do know is that for most of the really important traits like intelligence and personality the overlap outstrips the difference. (The really big, categorical differences all appear to be physical: Anatomy, size, testosterone.) But conquering sexism may always be a difficult balance, for there are certain differences we won’t be able to eliminate without altering DNA. That no more justifies sexism than the fact that height is partly genetic would justify denying rights to short people (which, actually, is something we do); but it does make matters complicated, because it’s difficult to know whether an observed difference (for instance, most pediatricians are female, while most neurosurgeons are male) is due to discrimination or innate differences.

Racism, on the other hand, is actually quite simple: Almost any statistically significant difference in behavior or outcome between races must be due to some form of discrimination somewhere down the line. Maybe it’s not discrimination right here, right now; maybe it’s discrimination years ago that denied opportunities, or discrimination against their ancestors that led them to inherit less generations later; but it almost has to be discrimination against someone somewhere, because it is only by social construction that races exist in the first place. I do say “almost” because I can think of a few exceptions: Black people are genuinely less likely to use tanning salons and genuinely more likely to need vitamin D supplements, but both of those things are directly due to skin pigmentation. They are also more likely to suffer from sickle-cell anemia, which is another convergent trait that evolved in tropical climates as a response to malaria. But unless you can think of a reason why employment outcomes would depend upon vitamin D, the huge difference in employment between Whites and Blacks really can’t be due to anything but discrimination.

I imagine most of my readers are more sophisticated than this, but just in case you’re wondering about the difference in IQ scores between Whites and Blacks, that is indeed a real observation, but IQ isn’t entirely genetic. The reason IQ scores are rising worldwide (the Flynn Effect) is due to improvements in environmental conditions: Fewer environmental pollutants—particularly lead and mercury, the removal of which is responsible for most of the reduction in crime in America over the last 20 years—better nutrition, better education, less stress. Being stupid does not make you poor (or how would we explain Donald Trump?), but being poor absolutely does make you stupid. Combine that with the challenges and inconsistencies in cross-national IQ comparisons, and it’s pretty clear that the higher IQ scores in rich nations are an effect, not a cause, of their affluence. Likewise, the lower IQ scores of Black people in the US are entirely explained by their poorer living conditions, with no need for any genetic hypothesis—which would also be very difficult in the first place precisely because “Black” is such a weird genetic category.

Unfortunately, I don’t yet know exactly what it takes to change people’s concept of group identification. Obviously it can be done, for group identities change all the time, sometimes quite rapidly; but we simply don’t have good research on what causes those changes or how they might be affected by policy. That’s actually a major part of the experiment I’ve been trying to get funding to run since 2009, which I hope can now become my PhD thesis. All I can say is this: I’m working on it.

Are humans rational?

JDN 2456928 PDT 11:21.

The central point of contention between cognitive economists and neoclassical economists hinges upon the word “rational”: Are humans rational? What do we mean by “rational”?

Neoclassicists are very keen to insist that they think humans are rational, and often characterize the cognitivist view as saying that humans are irrational. (Dan Ariely has a habit of feeding this view, titling books things like Predictably Irrational and The Upside of Irrationality.) But I really don’t think this is the right way to characterize the difference.

Daniel Kahneman has a somewhat better formulation (from Thinking, Fast and Slow): “I often cringe when my work is credited as demonstrating that human choices are irrational, when in fact our research only shows that Humans are not well described by the rational-agent model.” (Yes, he capitalizes the word “Humans” throughout, which is annoying; but in general it is a great book.)

The problem is that saying “humans are irrational” has the connotation of a universal statement; it seems to be saying that everything we do, all the time, is always and everywhere utterly irrational. And this of course could hardly be further from the truth; we would not have even survived in the savannah, let alone invented the Internet, if we were that irrational. If we simply lurched about randomly without any concept of goals or response to information in the environment, we would have starved to death millions of years ago.

But at the same time, the neoclassical definition of “rational” obviously does not describe human beings. We aren’t infinite identical psychopaths. Particularly bizarre (and frustrating) is the continued insistence that rationality entails selfishness; apparently economists are getting all their philosophy from Ayn Rand (who barely even qualifies as such), rather than the greats such as Immanuel Kant and John Stuart Mill or even the best contemporary philosophers such as Thomas Pogge and John Rawls. All of these latter would be baffled by the notion that selfless compassion is irrational.

Indeed, Kant argued that rationality implies altruism, that a truly coherent worldview requires assent to universal principles that are morally binding on yourself and every other rational being in the universe. (I am not entirely sure he is correct on this point, and in any case it is clear to me that neither you nor I are anywhere near advanced enough beings to seriously attempt such a worldview. Where neoclassicists envision infinite identical psychopaths, Kant envisions infinite identical altruists. In reality we are finite diverse tribalists.)

But even if you drop selfishness, the requirements of perfect information and expected utility maximization are still far too strong to apply to real human beings. If that’s your standard for rationality, then indeed humans—like all beings in the real world—are irrational.

The confusion, I think, comes from the huge gap between ideal rationality and total irrationality. Our behavior is neither perfectly optimal nor hopelessly random, but somewhere in between.

In fact, we are much closer to the side of perfect rationality! Our brains are limited, so they operate according to heuristics: simplified, approximate rules that are correct most of the time. Clever experiments—or complex environments very different from how we evolved—can cause those heuristics to fail, but we must not forget that the reason we have them is that they work extremely well in most cases in the environment in which we evolved. We are about 90% rational—but woe betide that other 10%.

The most obvious example is phobias: Why are people all over the world afraid of snakes, spiders, falling, and drowning? Because those used to be leading causes of death. In the African savannah 200,000 years ago, you weren’t going to be hit by a car, shot with a rifle bullet or poisoned by carbon monoxide. (You’d probably die of malaria, actually; for that one, instead of evolving to be afraid of mosquitoes we evolved a biological defense mechanism—sickle-cell red blood cells.) Death in general was actually much more likely then, particularly for children.

A similar case can be made for other heuristics we use: We are tribal because the proper functioning of our 100-person tribe used to be the most important factor in our survival. We are racist because people physically different from us were usually part of rival tribes and hence potential enemies. We hoard resources even when our technology allows abundance, because a million years ago no such abundance was possible and every meal might be our last.

When asked how common something is, we don’t calculate a posterior probability based upon Bayesian inference—that’s hard. Instead we try to think of examples—that’s easy. That’s the availability heuristic. And if we didn’t have mass media constantly giving us examples of rare events we wouldn’t otherwise have known about, the availability heuristic would actually be quite accurate. Right now, people think of terrorism as common (even though it’s astoundingly rare) because it’s always all over the news; but if you imagine living in an ancient tribe—or even a medieval village!—anything you heard about that often would almost certainly be something actually worth worrying about. Our level of panic over Ebola is totally disproportionate; but in the 14th century that same level of panic about the Black Death would be entirely justified.

When we want to know whether something is a member of a category, again we don’t try to calculate the actual probability; instead we think about how well it seems to fit a model we have of the paradigmatic example of that category—the representativeness heuristic. You see a Black man on a street corner in New York City at night; how likely is it that he will mug you? Pretty small actually, because there were fewer than 200,000 crimes in all of New York City last year in a city of 8,000,000 people—meaning the probability any given person committed a crime in the previous year was only 2.5%; the probability on any given day would then be less than 0.01%. Maybe having those attributes raises the probability somewhat, but you can still be about 99% sure that this guy isn’t going to mug you tonight. But since he seemed representative of the category in your mind “criminals”, your mind didn’t bother asking how many criminals there are in the first place—an effect called base rate neglect. Even 200 years ago—let alone 1 million—you didn’t have these sorts of reliable statistics, so what else would you use? You basically had no choice but to assess based upon representative traits.
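The base-rate arithmetic in that example can be checked directly, using the rough figures from the text (and the simplifying assumption that each crime is committed by a distinct person):

```python
# Rough figures from the text: ~200,000 crimes per year in a city of 8,000,000.
crimes_per_year = 200_000
population = 8_000_000

p_year = crimes_per_year / population  # chance a given person committed a crime this year
p_day = p_year / 365                   # naive per-day rate

print(f"Per year: {p_year:.1%}")  # → 2.5%
print(f"Per day:  {p_day:.4%}")   # → about 0.0068%, well under 0.01%
```

This is what base rate neglect ignores: before asking how "criminal-looking" someone is, you have to ask what fraction of people are criminals at all.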

As you probably know, people have trouble dealing with big numbers, and this is a problem in our modern economy where we actually need to keep track of millions or billions or even trillions of dollars moving around. And really I shouldn’t say it that way, because $1 million ($1,000,000) is an amount of money an upper-middle class person could have in a retirement fund, while $1 billion ($1,000,000,000) would put you among the 1,000 richest people in the world, and $1 trillion ($1,000,000,000,000) is enough to end world hunger for at least the next 15 years (it would only take about $1.5 trillion to do it forever, using only the interest on the endowment). It’s important to keep this in mind, because otherwise the natural tendency of the human mind is to say “big number” and ignore these enormous differences—it’s called scope neglect. But how often do you really deal with numbers that big? In ancient times, never. Even in the 21st century, not very often. You’ll probably never have $1 billion, and even $1 million is a stretch—so it seems a bit odd to say that you’re irrational if you can’t tell the difference. I guess technically you are, but it’s an error that is unlikely to come up in your daily life.
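The endowment arithmetic behind that parenthetical works out roughly as follows. The $1 trillion over 15 years figure is from the text; the 4.5% rate of return is my own assumption, inserted only to make the perpetuity calculation concrete:

```python
# From the text: ~$1 trillion ends world hunger for ~15 years.
annual_cost = 1e12 / 15   # about $67 billion per year
endowment = 1.5e12        # proposed permanent endowment
assumed_return = 0.045    # hypothetical real rate of return (my assumption)

annual_interest = endowment * assumed_return
print(f"Annual cost:     ${annual_cost / 1e9:.0f} billion")
print(f"Annual interest: ${annual_interest / 1e9:.0f} billion")
print(annual_interest >= annual_cost)  # the interest alone covers the cost
```

At any plausible long-run return above about 4.5%, the interest on $1.5 trillion covers the annual cost indefinitely without touching the principal, which is what a perpetual endowment means.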

Where it does come up, of course, is when we’re talking about national or global economic policy. Voters in the United States today have a level of power that for 99.99% of human existence no ordinary person has had. 2 million years ago you may have had a vote in your tribe, but your tribe was only 100 people. 2,000 years ago you may have had a vote in your village, but your village was only 1,000 people. Now you have a vote on the policies of a nation of 300 million people, and more than that really: As goes America, so goes the world. Our economic, cultural, and military hegemony is so total that decisions made by the United States reverberate through the entire human population. We have choices to make about war, trade, and ecology on a far larger scale than our ancestors could have imagined. As a result, the heuristics that served us well millennia ago are now beginning to cause serious problems.

[As an aside: This is why the “Downs Paradox” is so silly. If you’re calculating the marginal utility of your vote purely in terms of its effect on you—which, frankly, would make you a psychopath—then yes, it would be irrational for you to vote. And really, by all means: psychopaths, feel free not to vote. But the effect of your vote is much larger than that; in a nation of N people, the decision will potentially affect N people. Your vote contributes 1/N to a decision that affects N people, making the marginal utility of your vote equal to (1/N)*N = 1. It’s constant. It doesn’t matter how big the nation is; the value of your vote will be exactly the same. The fact that your vote has a small impact on the decision is exactly balanced by the fact that the decision, once made, will have such a large effect on the world. Indeed, since larger nations also influence other nations, the marginal effect of your vote is probably larger in large elections, which means that people are being entirely rational when they go to greater lengths to elect the President of the United States (58% turnout) than the Wayne County Commission (18% turnout).]
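The cancellation in that aside can be made concrete. A minimal sketch (the population sizes are just illustrative, echoing the tribe/village/nation progression above):

```python
# The Downs Paradox cancellation: your influence shrinks as 1/N,
# but the number of people the decision affects grows as N.
for n in [100, 1_000, 300_000_000]:   # tribe, village, nation (illustrative)
    influence_per_voter = 1 / n       # your share of the decision
    people_affected = n               # everyone the decision touches
    marginal_utility = influence_per_voter * people_affected
    print(f"N = {n:>11,}: marginal utility = {marginal_utility}")  # ~1.0 every time
```

The product stays at 1 no matter how large N gets, which is the whole point: a vote in a nation of 300 million is worth no less than a vote in a tribe of 100.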

So that’s the problem. That’s why we have economic crises, why climate change is getting so bad, why we haven’t ended world hunger. It’s not that we’re complete idiots bumbling around with no idea what we’re doing. We simply aren’t optimized for the new environment that has been so recently thrust upon us. We are forced to deal with complex problems unlike anything our brains evolved to handle. The truly amazing part is actually that we can solve these problems at all; most lifeforms on Earth simply aren’t mentally flexible enough to do that. Humans found a really neat trick (actually, in a formal evolutionary sense, a Good Trick—which we know because it also evolved in cephalopods): Our brains have high plasticity, meaning they are capable of adapting themselves to their environment in real-time. Unfortunately this process is difficult and costly; it’s much easier to fall back on our old heuristics. We ask ourselves: Why spend 10 times the effort to make something work 99% of the time, when it’s so much easier to make it work 90% of the time?

Why? Because it’s so incredibly important that we get these things right.