Good enough is perfect, perfect is bad

Jan 8 JDN 2459953

Not too long ago, I read the book How to Keep House While Drowning by KC Davis, which I highly recommend. It offers a great deal of useful and practical advice, especially for someone neurodivergent and depressed living through an interminable pandemic (which I am, but honestly, odds are, you may be too). And to say it is a quick and easy read is actually an unfair understatement; it is explicitly designed to be readable in short bursts by people with ADHD, and it has a level of accessibility that most other books don’t even aspire to and I honestly hadn’t realized was possible. (The extreme contrast between this and academic papers is particularly apparent to me.)

One piece of advice that really stuck with me was this: Good enough is perfect.

At first, it sounded like nonsense; no, perfect is perfect, good enough is just good enough. But in fact there is a deep sense in which it is absolutely true.

Indeed, let me make it a bit stronger: Good enough is perfect; perfect is bad.

I doubt Davis thought of it in these terms, but this is a concise, elegant statement of the principles of bounded rationality. Sometimes it can be optimal not to optimize.

Suppose that you are trying to optimize something, but you have limited computational resources in which to do so. This is actually not a lot for you to suppose—it’s literally true of basically everyone basically every moment of every day.

But let’s make it a bit more concrete, and say that you need to find the solution to the following math problem: “What is 2419 times 1137?” (Pretend you don’t have a calculator, as it would trivialize the exercise. I thought about using a problem you couldn’t do with a standard calculator, but I realized that would also make it much weirder and more obscure for my readers.)

Now, suppose that there are some quick, simple ways to get reasonably close to the correct answer, and some slow, difficult ways to actually get the answer precisely.

In this particular problem, the former is to approximate: What’s 2500 times 1000? 2,500,000. So it’s probably about 2,500,000.

Or we could approximate a bit more closely: Say 2400 times 1100. That’s 24 times 11, times 10,000. And 24 times 11 is 2 times 12 times 11; 12 times 11 is 110 plus 22, which is 132; and 2 times 132 is 264. So 264 times 10,000 is 2,640,000.

Or, we could actually go through all the steps to do the full multiplication (remember I’m assuming you have no calculator): multiply digit by digit, carry the 1s, add up all four partial products, re-check everything and probably fix it because you messed up somewhere; and then eventually you will get: 2,750,403.

So, our really fast method was only off by about 10%. Our moderately-fast method was only off by 4%. And both of them were a lot faster than getting the exact answer by hand.
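
If you want to see that comparison done mechanically, here is a tiny Python sketch of the same arithmetic; it contains nothing beyond the numbers already worked out above:

```python
# Compare the quick approximations of 2419 * 1137 against the exact product.

exact = 2419 * 1137        # the slow, by-hand answer: 2,750,403

rough = 2500 * 1000        # round to the easiest numbers: 2,500,000
closer = 2400 * 1100       # keep two significant digits: 2,640,000

for name, estimate in [("rough", rough), ("closer", closer)]:
    error = abs(exact - estimate) / exact
    print(f"{name}: {estimate:,} (off by {error:.1%})")

# rough: 2,500,000 (off by 9.1%)
# closer: 2,640,000 (off by 4.0%)
```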

Which of these methods you’d actually want to use depends on the context and the tools at hand. If you had a calculator, sure, get the exact answer. Even if you didn’t, but you were balancing the budget for a corporation, I’m pretty sure they’d care about that extra $110,403. (Then again, they might not care about the $403 or at least the $3.) But just as an intellectual exercise, you really didn’t need to do anything; the optimal choice may have been to take my word for it. Or, if you were at all curious, you might be better off choosing the quick approximation rather than the precise answer. Since nothing of any real significance hinged on getting that answer, it may be simply a waste of your time to bother finding it.

This is of course a contrived example. But it’s not so far from many choices we make in real life.

Yes, if you are making a big choice—which job to take, what city to move to, whether to get married, which car or house to buy—you should get a precise answer. In fact, I make spreadsheets with formal utility calculations whenever I make a big choice, and I haven’t regretted it yet. (Did I really make a spreadsheet for getting married? You’re damn right I did; there were a lot of big financial decisions to make there—taxes, insurance, the wedding itself! I didn’t decide whom to marry that way, of course; but we always had the option of staying unmarried.)

But most of the choices we make from day to day are small choices: What should I have for lunch today? Should I vacuum the carpet now? What time should I go to bed? In the aggregate they may all add up to important things—but each one of them really won’t matter that much. If you were to construct a formal model to optimize your decision of everything to do each day, you’d spend your whole day doing nothing but constructing formal models. Perfect is bad.

In fact, even for big decisions, you can’t really get a perfect answer. There are just too many unknowns. Sometimes you can spend more effort gathering additional information—but that’s costly too, and sometimes the information you would most want simply isn’t available. (You can look up the weather in a city, visit it, ask people about it—but you can’t really know what it’s like to live there until you do.) Even those spreadsheet models I use to make big decisions contain error bars and robustness checks, and if, even after investing a lot of effort trying to get precise results, I still find two or more choices just can’t be clearly distinguished to within a good margin of error, I go with my gut. And that seems to have been the best choice for me to make. Good enough is perfect.
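
For what it’s worth, here is a minimal sketch of the sort of thing those spreadsheets do, translated into Python; the options, criteria, weights, and scores are all invented for illustration, and the real models are more elaborate:

```python
import random

# Hypothetical example of a weighted-utility comparison between two options.
# Every name, weight, and score here is made up for illustration.
criteria_weights = {"salary": 0.4, "location": 0.3, "work itself": 0.3}
options = {
    "Job A": {"salary": 7, "location": 5, "work itself": 8},
    "Job B": {"salary": 6, "location": 8, "work itself": 7},
}

def utility(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

# Crude robustness check: jitter the weights and see how often each option wins.
def robustness_check(n_trials=10_000, noise=0.1):
    wins = {name: 0 for name in options}
    for _ in range(n_trials):
        jittered = {c: max(0.0, w + random.uniform(-noise, noise))
                    for c, w in criteria_weights.items()}
        best = max(options, key=lambda name: utility(options[name], jittered))
        wins[best] += 1
    return wins

print({name: round(utility(scores, criteria_weights), 2)
       for name, scores in options.items()})   # e.g. {'Job A': 6.7, 'Job B': 6.9}
print(robustness_check())
# If the options keep trading places under jittered weights, they cannot be
# distinguished within the margin of error, and it is time to go with your gut.
```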

I think that being gifted as a child trained me to be dangerously perfectionist as an adult. (Many of you may find this familiar.) When it came to solving math problems, or answering quizzes, perfection really was an attainable goal a lot of the time.

As I got older and progressed further in my education, maybe getting every answer right was no longer feasible; but I still could get the best possible grade, and did, in most of my undergraduate classes and all of my graduate classes. To be clear, I’m not trying to brag here; if anything, I’m a little embarrassed. What it mainly shows is that I had learned the wrong priorities. In fact, one of the main reasons why I didn’t get a 4.0 average in undergrad is that I spent a lot more time back then writing novels and nonfiction books, which to this day I still consider my most important accomplishments and grieve that I’ve not (yet?) been able to get them commercially published. I did my best work when I wasn’t trying to be perfect. Good enough is perfect; perfect is bad.

Now here I am on the other side of the academic system, trying to carve out a career, and suddenly, there is no perfection. When my exams were graded by someone else, there was a way to get the most points. Now that I’m the one grading the exams, there is no “correct answer” anymore. There is no one scoring me to see if I did the grading the “right way”—and so, no way to be sure I did it right.

Actually, here at Edinburgh, there are other instructors who moderate grades and often require me to revise them, which feels a bit like “getting it wrong”; but it’s really more like we had different ideas of what the grade curve should look like (not to mention US versus UK grading norms). There is no longer an objectively correct answer the way there is for, say, the derivative of x^3, the capital of France, or the definition of comparative advantage. (Or, one question I got wrong on an undergrad exam because I had zoned out of that lecture to write a book on my laptop: Whether cocaine is a dopamine reuptake inhibitor. It is. And the fact that I still remember that because I got it wrong over a decade ago tells you a lot about me.)

And then when it comes to research, it’s even worse: What even constitutes “good” research, let alone “perfect” research? What would be most scientifically rigorous isn’t what journals would be most likely to publish—and without much bigger grants, I can afford neither. I find myself longing for the research paper that will be so spectacular that top journals have to publish it, removing all risk of rejection and failure—in other words, perfect.

Yet such a paper plainly does not exist. Even if I were to do something that would win me a Nobel or a Fields Medal (this is, shall we say, unlikely), it probably wouldn’t be recognized as such immediately—a typical Nobel isn’t awarded until 20 or 30 years after the work that spawned it, and while Fields Medals are faster, they’re by no means instant or guaranteed. In fact, a lot of ground-breaking, paradigm-shifting research was originally relegated to minor journals because the top journals considered it too radical to publish.

Or I could try to do something trendy—feed into DSGE or GTFO—and try to get published that way. But I know my heart wouldn’t be in it, and so I’d be miserable the whole time. In fact, because it is neither my passion nor my expertise, I probably wouldn’t even do as good a job as someone who really buys into the core assumptions. I already have trouble speaking frequentist sometimes: Are we allowed to say “almost significant” for p = 0.06? Maximizing the likelihood is still kosher, right? Just so long as I don’t impose a prior? But speaking DSGE fluently and sincerely? I’d have an easier time speaking in Latin.

What I know—on some level at least—I ought to be doing is finding the research that I think is most worthwhile, given the resources I have available, and then getting it published wherever I can. Or, in fact, I should probably constrain a little by what I know about journals: I should do the most worthwhile research that is feasible for me and has a serious chance of getting published in a peer-reviewed journal. It’s sad that those two things aren’t the same, but they clearly aren’t. This constraint binds, and its Lagrange multiplier is measured in humanity’s future.

But one thing is very clear: By trying to find the perfect paper, I have floundered and, for the last year and a half, not written any papers at all. The right choice would surely have been to write something.

Because good enough is perfect, and perfect is bad.

Small deviations can have large consequences.

Jun 26 JDN 2459787

A common rejoinder that behavioral economists get from neoclassical economists is that most people are mostly rational most of the time, so what’s the big deal? If humans are 90% rational, why worry so much about the other 10%?

Well, it turns out that small deviations from rationality can have surprisingly large consequences. Let’s consider an example.

Suppose we have a market for some asset. Without even trying to veil my ulterior motive, let’s make that asset Bitcoin. Its fundamental value is of course $0; it’s not backed by anything (not even taxes or a central bank), it has no particular uses that aren’t already better served by existing methods, and it’s not even scalable.

Now, suppose that 99% of the population rationally recognizes that the fundamental value of the asset is indeed $0. But 1% of the population doesn’t; they irrationally believe that the asset is worth $20,000. What will the price of that asset be, in equilibrium?

If you assume that the majority will prevail, it should be $0. If you did some kind of weighted average, you’d think maybe its price will be something positive but relatively small, like $200. But is this actually the price it will take on?

Consider someone who currently owns 1 unit of the asset, and recognizes that it is fundamentally worthless. What should they do? Well, if they also know that there are people out there who believe it is worth $20,000, the answer is obvious: They should sell it to those people. Indeed, they should sell it for something quite close to $20,000 if they can.

Now, suppose they don’t already own the asset, but are considering whether or not to buy it. They know it’s worthless, but they also know that there are people who will buy it for close to $20,000. Here’s the kicker: This is a reason for them to buy it at anything meaningfully less than $20,000.

Suppose, for instance, they could buy it for $10,000. Spending $10,000 to buy something you know is worthless seems like a terribly irrational thing to do. But it isn’t irrational, if you also know that somewhere out there is someone who will pay $20,000 for that same asset and you have a reasonable chance of finding that person and selling it to them.

The equilibrium outcome, then, is that the price of the asset will be almost $20,000! Even though 99% of the population recognizes that this asset is worthless, the fact that 1% of people believe it’s worth as much as a car will result in it selling at that price. Thus, even a slight deviation from a perfectly-rational population can result in a market that is radically at odds with reality.
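
To put rough numbers on that intuition, here is a small back-of-the-envelope sketch; the random-matching setup and every parameter in it are my own simplifications for illustration, not a serious market model:

```python
# Back-of-the-envelope: what should a rational agent pay for a worthless asset,
# if 1% of potential trading partners will buy it for $20,000?
# The matching process and all parameters are invented for illustration.

believer_share = 0.01      # fraction of the population who think it is worth $20,000
believer_price = 20_000    # what such a believer will pay
rounds_of_search = 500     # how many potential buyers you can meet before giving up

# Probability of meeting at least one believer before giving up:
p_sell = 1 - (1 - believer_share) ** rounds_of_search

expected_resale = p_sell * believer_price
print(f"P(find a believer) = {p_sell:.3f}")                # ~0.993
print(f"Expected resale value = ${expected_resale:,.0f}")  # ~$19,869

# So paying $10,000 for an asset you know is fundamentally worthless is not
# irrational here: in expectation you nearly double your money. As long as
# resale is easy, the equilibrium price gets pulled up toward $20,000.
```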

And it gets worse! Suppose that in fact everyone knows that the asset is worthless, but most people think that there is some small portion of the population who believes the asset has value. Then, it will still be priced at that value in equilibrium, as people trade it back and forth searching in vain for the person who really wants it! (This is called the Greater Fool theory.)

That is, the price of an asset in a free market—even in a market where most people are mostly rational most of the time—will in fact be determined by the highest price anyone believes that anyone else thinks it has. And this is true of essentially any asset market—any market where people are buying something, not to use it, but to sell it to someone else.

Of course, beliefs—and particularly beliefs about beliefs—can very easily change, so that equilibrium price could move in any direction basically without warning.

Suddenly, the cycle of bubble and crash, boom and bust, doesn’t seem so surprising, does it? The wonder is that prices ever become stable at all.


Then again, do they? Last I checked, the only prices that were remotely stable were for goods like apples and cars and televisions, goods that are bought and sold to be consumed. (Or national currencies managed by competent central banks, whose entire job involves doing whatever it takes to keep those prices stable.) For pretty much everything else—and certainly any purely financial asset that isn’t a national currency—prices are indeed precisely as wildly unpredictable and utterly irrational as this model would predict.

So much for the Efficient Market Hypothesis? Sadly I doubt that the people who still believe this nonsense will be convinced.

Commitment and sophistication

Mar 13 JDN 2459652

One of the central insights of cognitive and behavioral economics is that understanding the limitations of our own rationality can help us devise mechanisms to overcome those limitations—that knowing we are not perfectly rational can make us more rational. The usual term for this is a somewhat vague one: behavioral economists generally call it simply sophistication.

For example, suppose that you are short-sighted and tend to underestimate the importance of the distant future. (This is true of most of us, to greater or lesser extent.)

It’s rational to consider the distant future less important than the present—things change in the meantime, and if we go far enough you may not even be around to see it. In fact, rationality alone doesn’t even say how much you should discount any given distance in the future. But most of us are inconsistent about our attitudes toward the future: We exhibit dynamic inconsistency.

For instance, suppose I ask you today whether you would like $100 today or $102 tomorrow. It is likely you’ll choose $100 today. But if I ask you whether you would like $100 365 days from now or $102 366 days from now, you’ll almost certainly choose the $102.


This means that if I asked you the second question first, then waited a year and asked you the first question, you’d change your mind—that’s inconsistent. Whichever choice is better shouldn’t systematically change over time. (It might happen to change, if your circumstances changed in some unexpected way. But on average it shouldn’t change.) Indeed, waiting a day for an extra $2 is typically going to be worth it; 2% daily interest is pretty hard to beat.
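
One standard way to formalize this kind of reversal is quasi-hyperbolic (“beta-delta”) discounting; here is a minimal sketch, with parameter values chosen arbitrarily just to reproduce the flip described above:

```python
# Quasi-hyperbolic (beta-delta) discounting: everything in the future gets an
# extra penalty beta on top of ordinary exponential discounting delta**t.
# The parameter values are arbitrary, chosen only to show the preference reversal.

beta = 0.7     # present bias: the future as a whole feels less real
delta = 0.999  # ordinary per-day discount factor

def value(amount, days_from_now):
    if days_from_now == 0:
        return amount
    return beta * (delta ** days_from_now) * amount

# Choice made today: $100 today vs. $102 tomorrow
print(value(100, 0), value(102, 1))      # 100 vs ~71.3: take the $100 now

# Same choice viewed a year in advance: $100 in 365 days vs. $102 in 366 days
print(value(100, 365), value(102, 366))  # ~48.6 vs ~49.5: wait for the $102

# The ranking flips as the earlier payoff gets close: dynamic inconsistency.
```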

Now, suppose you have some option to make a commitment, something that will bind you to your earlier decision. It could be some sort of punishment for deviating from your earlier choice, some sort of reward for keeping to the path, or, in the most extreme example, a mechanism that simply won’t let you change your mind. (The literally classic example of this is Odysseus having his crew tie him to the mast so he can listen to the Sirens.)

If you didn’t know that your behavior was inconsistent, you’d never want to make such a commitment. You don’t expect to change your mind, and if you do change your mind, it would be because your circumstances changed in some unexpected way—in which case changing your mind would be the right thing to do. And if your behavior wasn’t inconsistent, this reasoning would be quite correct: No point in committing when you have less information.

But if you know that your behavior is inconsistent, you can sometimes improve the outcome for yourself by making a commitment. You can force your own behavior into consistency, even though you will later be tempted to deviate from your plan.

Yet there is a piece missing from this account, often not clearly enough stated: Why should we trust the version of you that has a year to plan over the version of you that is making the decision today? What’s the difference between those two versions of you that makes them inconsistent, and why is one more trustworthy than the other?

The biggest difference is emotional. You don’t really feel $100 a year from now, so you can do the math and see that 2% daily interest is pretty darn good. But $100 today makes you feel something—excitement over what you might buy, or relief over a bill you can now pay. (Actually that’s one of the few times when it would be rational to take $100 today: If otherwise you’re going to miss a deadline and pay a late fee.) And that feeling about $102 tomorrow just isn’t as strong.

We tend to think that our emotional selves and our rational selves are in conflict, and so we expect to be more rational when we are less emotional. There is some truth to this—strong emotions can cloud our judgments and make us behave rashly.

Yet this is only one side of the story. We also need emotions to be rational. There is a condition known as flat affect, often a symptom of various neurological disorders, in which emotional reactions are greatly blunted or even non-existent. People with flat affect aren’t more rational—they just do less. In the worst cases, they completely lose their ability to be motivated to do things and become outright inert, known as abulia.

Emotional judgments are often less accurate than thoughtfully reasoned arguments, but they are also much faster—and that’s why we have them. In many contexts, particularly when survival is at stake, doing something pretty well right away is often far better than waiting long enough to be sure you’ll get the right answer. Running away from a loud sound that turns out to be nothing is a lot better than waiting to carefully determine whether that sound was really a tiger—and finding that it was.

With this in mind, the cases where we should expect commitment to be effective are those that are unfamiliar, not only on an individual level, but in an evolutionary sense. I have no doubt that experienced stock traders can develop certain intuitions that make them better at understanding financial markets than randomly chosen people—but they still systematically underperform simple mathematical models, likely because finance is just so weird from an evolutionary perspective. So when deciding whether to accept some amount of money m1 at time t1 and some other amount of money m2 at time t2, your best bet is really to just do the math.

But this may not be the case for many other types of decisions. Sometimes how you feel in the moment really is the right signal to follow. Committing to work at your job every day may seem responsible, ethical, rational—but if you hate your job when you’re actually doing it, maybe it really isn’t how you should be spending your life. Buying a long-term gym membership to pressure yourself to exercise may seem like a good idea, but if you’re miserable every time you actually go to the gym, maybe you really need to be finding a better way to integrate exercise into your lifestyle.

There are no easy answers here. We can think of ourselves as really being made of two (if not more) individuals: A cold, calculating planner who looks far into the future, and a heated, emotional experiencer who lives in the moment. There’s a tendency to assume that the planner is our “true self”, the one we should always listen to, but this is wrong; we are both of those people, and a life well-lived requires finding the right balance between their conflicting desires.

How to change minds

Aug 29 JDN 2459456

Think for a moment about the last time you changed your mind on something important. If you can’t think of any examples, that’s not a good sign. Think harder; look back further. If you still can’t find any examples, you need to take a deep, hard look at yourself and how you are forming your beliefs. The path to wisdom is not found by starting with the right beliefs, but by starting with the wrong ones and recognizing them as wrong. No one was born getting everything right.

If you remember changing your mind about something, but don’t remember exactly when, that’s not a problem. Indeed, this is the typical case, and I’ll get to why in a moment. Try to remember as much as you can about the whole process, however long it took.

If you still can’t specifically remember changing your mind, try to imagine a situation in which you would change your mind—and if you can’t do that, you should be deeply ashamed and I have nothing further to say to you.

Thinking back to that time: Why did you change your mind?

It’s possible that it was something you did entirely on your own, through diligent research of primary sources or even your own mathematical proofs or experimental studies. This is occasionally something that happens; as an active researcher, it has definitely happened to me. But it’s clearly not the typical case of what changes people’s minds, and it’s quite likely that you have never experienced it yourself.

The far more common scenario—even for active researchers—is far more mundane: You changed your mind because someone convinced you. You encountered a persuasive argument, and it changed the way you think about things.

In fact, it probably wasn’t just one persuasive argument; it was probably many arguments, from multiple sources, over some span of time. It could be as little as minutes or hours; it could be as long as years.

Probably the first time someone tried to change your mind on that issue, they failed. The argument may even have degenerated into shouting and name-calling. You both went away thinking that the other side was composed of complete idiots or heartless monsters. And then, a little later, thinking back on the whole thing, you remembered one thing they said that was actually a pretty good point.

This happened again with someone else, and again with yet another person. And each time your mind changed just a little bit—you became less certain of some things, or incorporated some new information you didn’t know before. The towering edifice of your worldview would not be toppled by a single conversation—but a few bricks here and there did get taken out and replaced.

Or perhaps you weren’t even the target of the conversation; you simply overheard it. This seems especially common in the age of social media, where public and private spaces become blurred and two family members arguing about politics can blow up into a viral post that is viewed by millions. Perhaps you changed your mind not because of what was said to you, but because of what two other people said to one another; perhaps the one you thought was on your side just wasn’t making as many good arguments as the one on the other side.

Now, you may be thinking: Yes, people like me change our minds, because we are intelligent and reasonable. But those people, on the other side, aren’t like that. They are stubborn and foolish and dogmatic and stupid.

And you know what? You probably are an especially intelligent and reasonable person. If you’re reading this blog, there’s a good chance that you are at least above-average in your level of education, rationality, and open-mindedness.

But no matter what beliefs you hold, I guarantee you there is someone out there who shares many of them and is stubborn and foolish and dogmatic and stupid. And furthermore, there is probably someone out there who disagrees with many of your beliefs and is intelligent and open-minded and reasonable.

This is not to say that there’s no correlation between your level of reasonableness and what you actually believe. Obviously some beliefs are more rational than others, and rational people are more likely to hold those beliefs. (If this weren’t the case, we’d be doomed.) Other things equal, an atheist is more reasonable than a member of the Taliban; a social democrat is more reasonable than a neo-Nazi; a feminist is more reasonable than a misogynist; a member of the Human Rights Campaign is more reasonable than a member of the Westboro Baptist Church. But reasonable people can be wrong, and unreasonable people can be right.

You should be trying to seek out the most reasonable people who disagree with you. And you should be trying to present yourself as the most reasonable person who expresses your own beliefs.

This can be difficult—especially that first part, as the world (or at least the world spanned by Facebook and Twitter) seems to be filled with people who are astonishingly dogmatic and unreasonable. Often you won’t be able to find any reasonable disagreement. Often you will find yourself in threads full of rage, hatred and name-calling, and you will come away disheartened, frustrated, or even despairing for humanity. The whole process can feel utterly futile.

And yet, somehow, minds change.

Support for same-sex marriage in the US rose from 27% to 70% just since 1997.

Read that date again: 1997. Less than 25 years ago.

The proportion of new marriages which were interracial has risen from 3% in 1967 to 19% today. Given the racial demographics of the US, this is almost at the level of random assortment.

Ironically I think that the biggest reason people underestimate the effectiveness of rational argument is the availability heuristic: We can’t call to mind any cases where we changed someone’s mind completely. We’ve never observed a pi-radian turnaround in someone’s whole worldview, and thus, we conclude that nobody ever changes their mind about anything important.

But in fact most people change their minds slowly and gradually, and are embarrassed to admit they were wrong in public, so they change their minds in private. (One of the best single changes we could make toward improving human civilization would be to make it socially rewarded to publicly admit you were wrong. Even the scientific community doesn’t do this nearly as well as it should.) Often changing your mind doesn’t even really feel like changing your mind; you just experience a bit more doubt, learn a bit more, and repeat the process over and over again until, years later, you believe something different than you did before. You moved 0.1 or even 0.01 radians at a time, until at last you came all the way around.

It may be in fact that some people’s minds cannot be changed—either on particular issues, or even on any issue at all. But it is so very, very easy to jump to that conclusion after a few bad interactions, that I think we should intentionally overcompensate in the opposite direction: Only give up on someone after you have utterly overwhelming evidence that their mind cannot ever be changed in any way.

I can’t guarantee that this will work. Perhaps too many people are too far gone.

But I also don’t see any alternative. If the truth is to prevail, it will be by rational argument. This is the only method that systematically favors the truth. All other methods give equal or greater power to lies.

To a first approximation, all human behavior is social norms

Dec 15 JDN 2458833

The language we speak, the food we eat, and the clothes we wear—indeed, the fact that we wear clothes at all—are all the direct result of social norms. But norms run much deeper than this: Almost everything we do is more norm than not.

Why do you sleep and wake up at a particular time of day? For most people, the answer is that they need to get up to go to work. Why do you need to go to work at that specific time? Why does almost everyone go to work at the same time? Social norms.

Even the most extreme human behaviors are often most comprehensible in terms of social norms. The most effective predictive models of terrorism are based on social networks: You are much more likely to be a terrorist if you know people who are terrorists, and much more likely to become a terrorist if you spend a lot of time talking with terrorists. Cultists and conspiracy theorists seem utterly baffling if you imagine that humans form their beliefs rationally—and totally unsurprising if you realize that humans mainly form their beliefs by matching those around them.

For a long time, economists have ignored social norms at our peril; we’ve assumed that financial incentives will be sufficient to motivate behavior, when social incentives can very easily override them. Indeed, it is entirely possible for a financial incentive to have a negative effect, when it crowds out a social incentive: A good example is a friend who would gladly come over to help you with something as a friend, but then becomes reluctant if you offer to pay him $25. I previously discussed another example, where taking a mentor out to dinner sounds good but paying him seems corrupt.

Why do you drive on the right side of the road (or the left, if you’re in Britain)? The law? Well, the law is already a social norm. But in fact, it’s hardly just that. You probably sometimes speed or run red lights, which are also in violation of traffic laws. Yet somehow driving on the right side seems to be different. Well, that’s because driving on the right has a much stronger norm—and in this case, that norm is self-enforcing, backed by the risk of severe bodily harm or death.

This is a good example of why it isn’t necessary for everyone to choose to follow a norm for that norm to have a great deal of power. As long as the norms include some mechanism for rewarding those who follow and punishing those who don’t, norms can become compelling even to those who would prefer not to obey. Sometimes it’s not even clear whether people are following a norm or following direct incentives, because the two are so closely aligned.

Humans are not the only social species, but we are by far the most social species. We form larger, more complex groups than any other animal; we form far more complex systems of social norms; and we follow those norms with slavish obedience. Indeed, I’m a little suspicious of some of the evolutionary models predicting the evolution of social norms, because they predict it too well; they seem to suggest that it should arise all the time, when in fact it’s only a handful of species who exhibit it at all and only we who build our whole existence around it.

Along with our extreme capacity for altruism, this is another way that human beings actually deviate more from the infinite identical psychopaths of neoclassical economics than most other animals. Yes, we’re smarter than other animals; other animals are more likely to make mistakes (though certainly we make plenty of our own). But most other animals aren’t motivated by entirely different goals than individual self-interest (or “evolutionary self-interest” in a Selfish Gene sort of sense) the way we typically are. Other animals try to be selfish and often fail; we try not to be selfish and usually succeed.

Economics experiments often go out of their way to exclude social motives as much as possible—anonymous random matching with no communication, for instance—and still end up failing to do so. Human behavior in experiments is consistent, systematic—and almost never completely selfish.

Once you start looking for norms, you see them everywhere. Indeed, it becomes hard to see anything else. To a first approximation, all human behavior is social norms.

The backfire effect has been greatly exaggerated

Sep 8 JDN 2458736

Do a search for “backfire effect” and you’re likely to get a large number of results, many of them from quite credible sources. The Oatmeal did an excellent comic on it. The basic notion is simple: “[…]some individuals when confronted with evidence that conflicts with their beliefs come to hold their original position even more strongly.”

The implications of this effect are terrifying: There’s no point in arguing with anyone about anything controversial, because once someone strongly holds a belief there is nothing you can do to ever change it. Beliefs are fixed and unchanging, stalwart cliffs against the petty tides of evidence and logic.

Fortunately, the backfire effect is not actually real—or if it is, it’s quite rare. Over many years those seemingly-ineffectual tides can erode those cliffs down and turn them into sandy beaches.

The most recent studies with larger samples and better statistical analysis suggest that the typical response to receiving evidence contradicting our beliefs is—lo and behold—to change our beliefs toward that evidence.

To be clear, very few people completely revise their worldview in response to a single argument. Instead, they try to make a few small changes and fit them in as best they can.

But would we really expect otherwise? Worldviews are holistic, interconnected systems. You’ve built up your worldview over many years of education, experience, and acculturation. Even when someone presents you with extremely compelling evidence that your view is wrong, you have to weigh that against everything else you have experienced prior to that point. It’s entirely reasonable—rational, even—for you to try to fit the new evidence in with a minimal overall change to your worldview. If it’s possible to make sense of the available evidence with only a small change in your beliefs, it makes perfect sense for you to do that.

What if your whole worldview is wrong? You might have based your view of the world on a religion that turns out not to be true. You might have been raised into a culture with a fundamentally incorrect concept of morality. What if you really do need a radical revision—what then?

Well, that can happen too. People change religions. They abandon their old cultures and adopt new ones. This is not a frequent occurrence, to be sure—but it does happen. It happens, I would posit, when someone has been bombarded with contrary evidence not once, not a few times, but hundreds or thousands of times, until they can no longer sustain the crumbling fortress of their beliefs against the overwhelming onslaught of argument.

I think the reason that the backfire effect feels true to us is that our life experience is largely that “argument doesn’t work”; we think back to all the times that we have tried to convince someone to change a belief that was important to them, and we can find so few examples of when it actually worked. But this is setting the bar much too high. You shouldn’t expect to change an entire worldview in a single conversation. Even if your worldview is correct and theirs is not, that one conversation can’t have provided sufficient evidence for them to rationally conclude that. One person could always be mistaken. One piece of evidence could always be misleading. Even a direct experience could be a delusion or a foggy memory.

You shouldn’t be trying to turn a Young-Earth Creationist into an evolutionary biologist, or a climate change denier into a Greenpeace member. You should be trying to make that Creationist question whether the Ussher chronology is really so reliable, or if perhaps the Earth might be a bit older than a 17th century theologian interpreted it to be. You should be getting the climate change denier to question whether scientists really have such a greater vested interest in this than oil company lobbyists. You can’t expect to make them tear down the entire wall—just get them to take out one brick today, and then another brick tomorrow, and perhaps another the day after that.

The proverb is of uncertain provenance, variously attributed, rarely verified, but it is still my favorite: No single raindrop feels responsible for the flood.

Do not seek to be a flood. Seek only to be a raindrop—for if we all do, the flood will happen sure enough. (There’s a version more specific to our times: So maybe we’re snowflakes. I believe there is a word for a lot of snowflakes together: Avalanche.)

And remember this also: When you argue in public (which includes social media), you aren’t just arguing for the person you’re directly engaged with; you are also arguing for everyone who is there to listen. Even if you can’t get the person you’re arguing with to concede even a single point, maybe there is someone else reading your post who now thinks a little differently because of something you said. In fact, maybe there are many people who think a little differently—the marginal impact of slacktivism can actually be staggeringly large if the audience is big enough.

This can be frustrating, thankless work, for few people will ever thank you for changing their mind, and many will condemn you even for trying. Finding out you were wrong about a deeply-held belief can be painful and humiliating, and most people will attribute that pain and humiliation to the person who called them out for being wrong—rather than placing the blame where it belongs, which is on whatever source or method made them wrong in the first place. Being wrong feels just like being right.

But this is important work, among the most important work that anyone can do. Philosophy, mathematics, science, technology—all of these things depend upon it. Changing people’s minds by evidence and rational argument is literally the foundation of civilization itself. Every real, enduring increment of progress humanity has ever made depends upon this basic process. Perhaps occasionally we have gotten lucky and made the right choice for the wrong reasons; but without the guiding light of reason, there is nothing to stop us from switching back and making the wrong choice again soon enough.

So I guess what I’m saying is: Don’t give up. Keep arguing. Keep presenting evidence. Don’t be afraid that your arguments will backfire—because in fact they probably won’t.

The “market for love” is a bad metaphor

Feb 14 JDN 2458529

Valentine’s Day was this past week, so let’s talk a bit about love.

Economists would never be accused of being excessively romantic. To most neoclassical economists, just about everything is a market transaction. Love is no exception.

There are all sorts of articles and books and an even larger number of research papers going back multiple decades and continuing all the way through until today using the metaphor of the “marriage market”.

In a few places, marriage does actually function something like a market: In China, there are places where your parents will hire brokers and matchmakers to select a spouse for you. But even this isn’t really a market for love or marriage. It’s a market for matchmaking services. The high-tech version of this is dating sites like OkCupid.

And of course sex work actually occurs on markets; there is buying and selling of services at monetary prices. There is of course a great deal worth saying on that subject, but it’s not my topic for today.

But in general, love is really nothing like a market. First of all, there is no price. This alone should be sufficient reason to say that we’re not actually dealing with a market. The whole mechanism that makes a market a market is the use of prices to achieve equilibrium between supply and demand.

A price doesn’t necessarily have to be monetary; you can barter apples for bananas, or trade in one used video game for another, and we can still legitimately call that a market transaction with a price.

But love isn’t like that either. If your relationship with someone is so transactional that you’re actually keeping a ledger of each thing they do for you and each thing you do for them so that you could compute a price for services, that isn’t love. It’s not even friendship. If you really care about someone, you set such calculations aside. You view their interests and yours as in some sense shared, aligned toward common goals. You stop thinking in terms of “me” and “you” and start thinking in terms of “us”. You don’t think “I’ll scratch your back if you scratch mine.” You think “We’re scratching each other’s backs today.”

This is of course not to say that love never involves conflict. On the contrary, love always involves conflict. Successful relationships aren’t those where conflict never happens, they are those where conflict is effectively and responsibly resolved. Your interests and your loved ones’ are never completely aligned; there will always be some residual disagreement. But the key is to realize that your interests are still mostly aligned; those small vectors of disagreement should be outweighed by the much larger vector of your relationship.

And of course, there can come a time when that is no longer the case. Obviously, there is domestic abuse, which should absolutely be a deal-breaker for anyone. But there are other reasons why you may find that a relationship ultimately isn’t working, that your interests just aren’t as aligned as you thought they were. Eventually those disagreement vectors just get too large to cancel out. This is painful, but unavoidable. But if you reach the point where you are keeping track of actions on a ledger, that relationship is already dead. Sooner or later, someone is going to have to pull the plug.

Very little of what I’ve said in the preceding paragraphs is likely to be controversial. Why, then, would economists think that it makes sense to treat love as a market?

I think this comes down to a motte and bailey doctrine. A more detailed explanation can be found at that link, but the basic idea of a motte and bailey is this: You have a core set of propositions that is highly defensible but not that interesting (the “motte”), and a broader set of propositions that are very interesting, but not as defensible (the “bailey”). The terms are related to a medieval defensive strategy, in which there was a small, heavily fortified tower called a motte, surrounded by fertile, useful land, the bailey. The bailey is where you actually want to live, but it’s hard to defend; so if the need arises, you can pull everyone back into the motte to fight off attacks. But nobody wants to live in the motte; it’s just a cramped stone tower. There’s nothing to eat or enjoy there.

The motte consists of ideas that almost everyone agrees with. The bailey is the real point of contention, the thing you are trying to argue for—which, by construction, other people must not already agree with.

Here are some examples, which I have intentionally chosen from groups I agree with:

Feminism can be a motte and bailey doctrine. The motte is “women are people”; the bailey is abortion rights, affirmative consent and equal pay legislation.

Rationalism can be a motte and bailey doctrine. The motte is “rationality is good”; the bailey is atheism, transhumanism, and Bayesian statistics.

Anti-fascism can be a motte and bailey doctrine. The motte is “fascists are bad”; the bailey is black bloc Antifa and punching Nazis.

Even democracy can be a motte and bailey doctrine. The motte is “people should vote for their leaders”; my personal bailey is abolition of the Electoral College, a younger voting age, and range voting.

Using a motte and bailey doctrine does not necessarily make you wrong. But it’s something to be careful about, because as a strategy it can be disingenuous. Even if you think that the propositions in the bailey all follow logically from the propositions in the motte, the people you’re talking to may not think so, and in fact you could simply be wrong. At the very least, you should be taking the time to explain how one follows from the other; and really, you should consider whether the connection is actually as tight as you thought, or if perhaps one can believe that rationality is good without being Bayesian or believe that women are people without supporting abortion rights.

I think when economists describe love or marriage as a “market”, they are applying a motte and bailey doctrine. They may actually be doing something even worse than that, by equivocating on the meaning of “market”. But even if any given economist uses the word “market” totally consistently, the fact that different economists of the same broad political alignment use the word differently adds up to a motte and bailey doctrine.

The doctrine is this: “There have always been markets.”

The motte is something like this: “Humans have always engaged in interaction for mutual benefit.”

This is undeniably true. In fact, it’s not even uninteresting. As mottes go, it’s a pretty nice one; it’s worth spending some time there. In the endless quest for an elusive “human nature”, I think you could do worse than to focus on our universal tendency to engage in interaction for mutual benefit. (Don’t other species do it too? Yes, but that’s just it—they are precisely the ones that seem most human.)

And if you want to define any mutually-beneficial interaction as a “market trade”, I guess it’s your right to do that. I think this is foolish and confusing, but legislating language has always been a fool’s errand.

But of course the more standard meaning of the word “market” implies buyers and sellers exchanging goods and services for monetary prices. You can extend it a little to include bartering, various forms of financial intermediation, and the like; but basically you’re still buying and selling.

That makes this the bailey: “Humans have always engaged in buying and selling of goods and services at prices.”

And that, dear readers, is ahistorical nonsense. We’ve only been using money for a few thousand years, and it wasn’t until the Industrial Revolution that we actually started getting the majority of our goods and services via market trades. Economists like to tell a story where bartering preceded the invention of money, but there’s basically no evidence of that. Bartering seems to be what people do when they know how money works but don’t have any money to work with.

Before there was money, there were fundamentally different modes of interaction: Sharing, ritual, debts of honor, common property, and, yes, love.

These were not markets. They perhaps shared some very broad features of markets—such as the interaction for mutual benefit—but they lacked the defining attributes that make a market a market.

Why is this important? Because this doctrine is used to transform more and more of our lives into actual markets, on the grounds that they were already “markets”, and we’re just using “more efficient” kinds of markets. But in fact what’s happening is we are trading one fundamental mode of human interaction for another: Where we used to rely upon norms or trust or mutual affection, we instead rely upon buying and selling at prices.

In some cases, this actually is a good thing: Markets can be very powerful, and are often our best tool when we really need something done. In particular, it’s clear at this point that norms and trust are not sufficient to protect us against climate change. All the “Reduce, Reuse, Recycle” PSAs in the world won’t do as much as a carbon tax. When millions of lives are at stake, we can’t trust people to do the right thing; we need to twist their arms however we can.

But markets are in some sense a brute-force last-resort solution; they commodify and alienate (Marx wasn’t wrong about that), and despite our greatly elevated standard of living, the alienation and competitive pressure of markets seem to be keeping most of us from really achieving happiness.

This is why it’s extremely dangerous to talk about a “market for love”. Love is perhaps the last bastion of our lives that has not been commodified into a true market, and if it goes, we’ll have nothing left. If sexual relationships built on mutual affection were to disappear in favor of apps that will summon a prostitute or a sex robot at the push of a button, I would count that as a great loss for human civilization. (How we should regulate prostitution or sex robots is a different question, which I said I’d leave aside for this post.) A “market for love” is in fact a world with no love at all.

Fake skepticism

Jun 3 JDN 2458273

“You trust the mainstream media?” “Wake up, sheeple!” “Don’t listen to what so-called scientists say; do your own research!”

These kinds of statements have become quite ubiquitous lately (though perhaps the attitudes were always there, and we only began to hear them because of the Internet and social media), and are often used to defend the most extreme and bizarre conspiracy theories, from moon-landing denial to flat Earth. The amazing thing about these kinds of statements is that they can be used to defend literally anything, as long as you can find some source with less than 100% credibility that disagrees with it. (And what source has 100% credibility?)

And that, I think, should tell you something. An argument that can prove anything is an argument that proves nothing.

Reversed stupidity is not intelligence. The fact that the mainstream media, or the government, or the pharmaceutical industry, or the oil industry, or even gangsters, fanatics, or terrorists believes something does not make it less likely to be true.

In fact, the vast majority of beliefs held by basically everyone—including the most fanatical extremists—are true. I could list such consensus true beliefs for hours: “The sky is blue.” “2+2=4.” “Ice is colder than fire.”

Even if a belief is characteristic of a specifically evil or corrupt organization, that does not necessarily make it false (though it usually is evidence of falsehood in a Bayesian sense). If only terrible people believe X, then maybe you shouldn’t believe X. But if both good and bad people believe X, the fact that bad people believe X really shouldn’t matter to you.
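
To spell out that parenthetical “in a Bayesian sense”: the endorsement of a bad source should move your credence only as far as the likelihood ratio warrants. Here is a tiny worked example, with entirely made-up numbers:

```python
# Worked Bayes update with made-up numbers, just to illustrate the parenthetical.
# Suppose a corrupt organization endorses claims it finds useful:
# it endorses 30% of true claims in its domain, but 60% of false ones.

prior_true = 0.5             # your credence in X before hearing the endorsement
p_endorse_given_true = 0.3
p_endorse_given_false = 0.6

posterior_true = (p_endorse_given_true * prior_true) / (
    p_endorse_given_true * prior_true
    + p_endorse_given_false * (1 - prior_true)
)
print(posterior_true)  # 0.333...: modest evidence against X, not proof of falsehood

# If good and bad sources alike endorse X, the likelihood ratio is near 1 and
# the endorsement should barely move your credence at all.
```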

People who use this kind of argument often present themselves as being “skeptics”. They imagine that they have seen through the veil of deception that blinds others.

In fact, quite the opposite is the case: This is fake skepticism. These people are not uniquely skeptical; they are uniquely credulous. If you think the Earth is flat because you don’t trust the mainstream scientific community, that means you do trust someone far less credible than the mainstream scientific community.

Real skepticism is difficult. It requires concerted effort and investigation, and typically takes years. To really seriously challenge the expert consensus in a field, you need to become an expert in that field. Ideally, you should get a graduate degree in that field and actually start publishing your heterodox views. Failing that, you should at least be spending hundreds or thousands of hours doing independent research. If you are unwilling or unable to do that, you are not qualified to assess the validity of the expert consensus.

This does not mean the expert consensus is always right—remarkably often, it isn’t. But it means you aren’t allowed to say it’s wrong, because you don’t know enough to assess that.

This is not elitism. This is not an argument from authority. This is a basic respect for the effort and knowledge that experts spend their lives acquiring.

People don’t like being told that they are not as smart as other people—even though, with any variation at all, that’s got to be true for a certain proportion of people. But I’m not even saying experts are smarter than you. I’m saying they know more about their particular field of expertise.

Do you walk up to construction workers on the street and critique how they lay concrete? When you step on an airplane, do you explain to the captain how to read an altimeter? When you hire a plumber, do you insist on using the snake yourself?

Probably not. And why not? Because you know these people have training; they do this for a living. Yeah, well, scientists do this for a living too—and our training is much longer. To be a plumber, you need a high school diploma and an apprenticeship that usually lasts about four years. To be a scientist, you need a PhD, which means four years of college plus an additional five or six years of graduate school.

To be clear, I’m not saying you should listen to experts speaking outside their expertise. Some of the most idiotic, arrogant things ever said by human beings have been said by physicists opining on biology or economists ranting about politics. Even within a field, some people have such narrow expertise that you can’t really trust them even on things that seem related—like macroeconomists with idiotic views on trade, or ecologists who clearly don’t understand evolution.

This is also why one of the great challenges of being a good interdisciplinary scientist is actually obtaining enough expertise in both fields you’re working in; it isn’t literally twice the work (since there is overlap—or you wouldn’t be doing it—and you do specialize in particular interdisciplinary subfields), but it’s definitely more work, and there are definitely a lot of people on each side of the fence who may never take you seriously no matter what you do.

How do you tell who to trust? This is why I keep coming back to the matter of expert consensus. The world is much too complicated for anyone, much less everyone, to understand it all. We must be willing to trust the work of others. The best way we have found to decide which work is trustworthy is by the norms and institutions of the scientific community itself. Since 97% of climatologists say that climate change is caused by humans, they’re probably right. Since 99% of biologists believe humans evolved by natural selection, that’s probably what happened. Since 87% of economists oppose tariffs, tariffs probably aren’t a good idea.

Can we be certain that the consensus is right? No. There is precious little in this universe that we can be certain about. But as in any game of chance, you need to play the best odds, and my money will always be on the scientific consensus.

I think I know what the Great Filter is now

Sep 3 JDN 2458000

One of the most plausible solutions to the Fermi Paradox of why we have not found any other intelligent life in the universe is called the Great Filter: Somewhere in the process of evolving from unicellular prokaryotes to becoming an interstellar civilization, there is some highly-probable event that breaks the process, a “filter” that screens out all but the luckiest species—or perhaps literally all of them.

I previously thought that this filter was the invention of nuclear weapons; I now realize that this theory is incomplete. Nuclear weapons by themselves are only an existential threat because they co-exist with widespread irrationality and bigotry. The Great Filter is the combination of the two.

Yet there is a deep reason why we would expect that this is precisely the combination that would emerge in most species (as it has certainly emerged in our own): The rationality of a species is not uniform. Some individuals in a species will always be more rational than others, so as a species increases its level of rationality, it does not do so all at once.

Indeed, the processes of economic development and scientific advancement that make a species more rational are unlikely to be spread evenly; some cultures will develop faster than others, and some individuals within a given culture will be further along than others. While the mean level of rationality increases, the variance will also tend to increase.

On some arbitrary and oversimplified scale where 1 is the level of rationality needed to maintain a hunter-gatherer tribe, and 20 is the level of rationality needed to invent nuclear weapons, the distribution of rationality in a population starts something like this:

[Figure: Great_Filter_1 (distribution of rationality in an early society, mostly between levels 1 and 3)]

Most of the population is between levels 1 and 3, which we might think of as lying between the bare minimum for a tribe to survive and the level at which one can start to make advances in knowledge and culture.

Then, as the society advances, it goes through a phase like this:

[Figure Great_Filter_2: the distribution after early advancement]

This is about where we were in Periclean Athens. Most of the population is between levels 2 and 8. Level 2 used to be the average level of rationality back when we were hunter-gatherers. Level 8 is the level of philosophers like Archimedes and Pythagoras.

Today, our society looks like this:
[Figure Great_Filter_3: the distribution today]

Most of the society is between levels 4 and 20. As I said, level 20 is the point at which it becomes feasible to develop nuclear weapons. Some of the world’s people are extremely intelligent and rational, and almost everyone is more rational than even the smartest people in hunter-gatherer times, but now there is enormous variation.

Where on this chart are racism and nationalism? Importantly, I think they are above the level of rationality that most people had in ancient times. Even Greek philosophers had attitudes toward slaves and other cultures that the modern KKK would find repulsive. I think on this scale racism is about a 10 and nationalism is about a 12.

If we had managed to uniformly increase the rationality of our society, with everyone gaining at the same rate, our distribution would instead look like this:
[Figure Great_Filter_4: the distribution if rationality had increased uniformly]

If that were the case, we’d be fine. The lowest level of rationality widespread in the population would be 14, which is already beyond racism and nationalism. (Maybe it’s about the level of humanities professors today? That makes them substantially below the quantum physicists who sit at 20 by construction… but hey, still almost twice as good as the Greek philosophers they revere.) We would have our nuclear technology, but it would not endanger our future—we wouldn’t even use it for weapons, we’d use it for power generation and space travel. Indeed, this lower-variance high-rationality state seems to be about what they have in the Star Trek universe.

But since we didn’t, a large chunk of our population is between 10 and 12—that is, still racist or nationalist. We have the nuclear weapons, and we have people who might actually be willing to use them.

[Figure Great_Filter_5: the actual high-variance distribution we have]
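
To make the variance point concrete, here is a minimal simulation sketch (in Python, with toy numbers I made up to loosely match the arbitrary scale above; nothing here comes from real data):

```python
# A toy sketch, not real data: model "rationality" as normally distributed,
# on the arbitrary scale above (racism ~10, nationalism ~12, nukes at 20),
# and compare a society that advanced uniformly with one that advanced unevenly.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Hypothetical parameters: same era, very different variance.
uniform_gain = rng.normal(loc=17, scale=1.5, size=N)  # everyone advanced together
uneven_gain = rng.normal(loc=12, scale=4.0, size=N)   # mean rose, but so did variance

def summarize(name, pop):
    frac_nuclear = np.mean(pop >= 20)                  # enough people to build the bomb
    frac_tribal = np.mean((pop >= 10) & (pop <= 12))   # stuck at racism/nationalism
    print(f"{name}: {frac_nuclear:.1%} at level 20+, {frac_tribal:.1%} in the 10-12 band")

summarize("Uniform advancement", uniform_gain)
summarize("Uneven advancement ", uneven_gain)
```

With these made-up parameters, both societies have roughly the same small fraction of people at level 20, so both get nuclear weapons; but only the high-variance one keeps about a fifth of its population stuck in the 10-12 band. That combination is the filter.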

I think this is what happens to most advanced civilizations around the galaxy. By the time they invent space travel, they have also invented nuclear weapons—but they still have their equivalent of racism and nationalism. And most of the time, the two combine into a volatile mix that results in the destruction or regression of their entire civilization.

If this is right, then we may be living at the most important moment in human history. It may be right here, right now, that we have the only chance we’ll ever get to turn the tide. We have to find a way to reduce the variance, to raise the rest of the world’s population past nationalism to a cosmopolitan morality. And we may have very little time.

Bigotry is more powerful than the market

Nov 20, JDN 2457683

If there’s one message we can take from the election of Donald Trump, it is that bigotry remains a powerful force in our society. A lot of self-flagellating liberals have been trying to explain how this election result really reflects our failure to help people displaced by technology and globalization (despite the fact that personal income and local unemployment had negligible correlation with voting for Trump), or Hillary Clinton’s “bad campaign” that nonetheless managed the same proportion of Democratic turnout that re-elected her husband in 1996.

No, overwhelmingly, the strongest predictor of voting for Trump was being White, and living in an area where most people are White. (Well, actually, that’s if you exclude authoritarianism as an explanatory variable—but really I think that’s part of what we’re trying to explain.) Trump voters were actually concentrated in areas less affected by immigration and globalization. Indeed, there is evidence that these people aren’t racist because they have anxiety about the economy—they are anxious about the economy because they are racist. How does that work? Obama. They can’t believe that the economy is doing well when a Black man is in charge. So all the statistics and even personal experiences mean nothing to them. They know in their hearts that unemployment is rising, even as the BLS data clearly shows it’s falling.

The wide prevalence and enormous power of bigotry should be obvious. But economists rarely talk about it, and I think I know why: Their models say it shouldn’t exist. The free market is supposed to automatically eliminate all forms of bigotry, because they are inefficient.

The argument for why this is supposed to happen actually makes a great deal of sense: If a company has the choice of hiring a White man or a Black woman to do the same job, but they know that the market wage for Black women is lower than the market wage for White men (which it most certainly is), and they will do the same quality and quantity of work, why wouldn’t they hire the Black woman? And indeed, if human beings were rational profit-maximizers, this is probably how they would think.

More recently some neoclassical models have been developed to try to “explain” this behavior, but always without daring to give up the precious assumption of perfect rationality. So instead we get the two leading neoclassical theories of discrimination, which are statistical discrimination and taste-based discrimination.

Statistical discrimination is the idea that under asymmetric information (and we surely have that), features such as race and gender can act as signals of quality because they are correlated with actual quality for various reasons (usually left unspecified), so it is not irrational after all to choose based upon them, since they’re the best you have.
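
For concreteness, here is what that model amounts to: a minimal Bayesian sketch (all numbers hypothetical, purely to show the mechanism), in which an employer combines a noisy individual signal with the group average, so that two applicants with identical signals get different estimates.

```python
# A minimal sketch of the statistical-discrimination story (hypothetical numbers):
# the employer sees a noisy productivity signal and shrinks it toward the
# applicant's group average, as a Bayesian with a normal prior would.
def posterior_mean(signal, group_mean, group_sd=1.0, noise_sd=1.0):
    """Posterior mean of productivity given a noisy signal and a normal
    prior centered on the applicant's group average."""
    w = group_sd**2 / (group_sd**2 + noise_sd**2)  # weight placed on the signal
    return w * signal + (1 - w) * group_mean

signal = 0.8  # the two applicants test identically
print(posterior_mean(signal, group_mean=1.0))  # applicant from the higher-average group
print(posterior_mean(signal, group_mean=0.5))  # applicant from the lower-average group
```

The two identical applicants get different estimates purely because of the group label; and notice that the model simply takes the group gap as given, which is exactly the hole in the theory I come back to below.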

Taste-based discrimination is the idea that people are rationally maximizing preferences that simply aren’t oriented toward maximizing profit or well-being. Instead, they have this extra term in their utility function that says they should also treat White men better than women or Black people. It’s just this extra thing they have.

A small number of studies have been done trying to discern which of these is at work.

The correct answer, of course, is neither.

Statistical discrimination, at least, could be part of what’s going on. Knowing that Black people are less likely to be highly educated than Asians (as they definitely are) might actually be useful information in some circumstances… then again, you list your degree on your resume, don’t you? Knowing that women are more likely to drop out of the workforce after having a child could rationally (if coldly) affect your assessment of future productivity. But shouldn’t the fact that women CEOs outperform men CEOs give shareholders an incentive to choose more women as CEOs? Yet that doesn’t seem to happen. Also, in general, people seem to be pretty bad at statistics.

The bigger problem with statistical discrimination as a theory is that it’s really only part of a theory. It explains why not all of the discrimination has to be irrational, but some of it still does have to be. You need to explain why there are these huge disparities between groups in the first place, and statistical discrimination can’t do that on its own. In order for the statistics to differ this much, you need a past history of discrimination that wasn’t purely statistical.

Taste-based discrimination, on the other hand, is not a theory at all. It’s special pleading. Rather than admit that people are failing to rationally maximize their utility, we just redefine their utility so that whatever they happen to be doing now “maximizes” it.

This is really what makes the Axiom of Revealed Preference so insidious; if you really take it seriously, it says that whatever you do, must by definition be what you preferred. You can’t possibly be irrational, you can’t possibly be making mistakes of judgment, because by definition whatever you did must be what you wanted. Maybe you enjoy bashing your head into a wall, who am I to judge?

I mean, on some level taste-based discrimination is what’s happening; people think that the world is a better place if they put women and Black people in their place. So in that sense, they are trying to “maximize” some “utility function”. (By the way, most human beings behave in ways that are provably inconsistent with maximizing any well-defined utility function—the Allais Paradox is a classic example.) But the whole framework of calling it “taste-based” is a way of running away from the real explanation. If it’s just “taste”, well, it’s an unexplainable brute fact of the universe, and we just need to accept it. If people are happier being racist, what can you do, eh?
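
For the curious, that Allais Paradox claim is easy to verify. Here is a quick sketch using the classic payoffs; the little grid search at the end is just a sanity check, since the algebra in the comments already settles it.

```python
# The Allais Paradox (classic payoffs, in millions of dollars):
#   Gamble A: $1 for certain          Gamble B: 10% $5, 89% $1, 1% $0
#   Gamble C: 11% $1, 89% $0          Gamble D: 10% $5, 90% $0
# Most people choose A over B and D over C. But for any utilities
# u0 = u($0), u1 = u($1M), u5 = u($5M):
#   A > B  =>  0.11*u1 - 0.10*u5 - 0.01*u0 > 0
#   D > C  =>  0.11*u1 - 0.10*u5 - 0.01*u0 < 0
# The same quantity cannot be both positive and negative, so no expected-utility
# maximizer can hold both preferences; the grid search below just confirms it.
import itertools
import numpy as np

def prefers_A_and_D(u0, u1, u5):
    a_over_b = u1 > 0.10 * u5 + 0.89 * u1 + 0.01 * u0
    d_over_c = 0.10 * u5 + 0.90 * u0 > 0.11 * u1 + 0.89 * u0
    return a_over_b and d_over_c

grid = np.linspace(-10, 10, 41)
count = sum(prefers_A_and_D(*u) for u in itertools.product(grid, repeat=3))
print(f"Utility assignments reproducing the common pattern: {count}")  # 0
```

So the failure is not a matter of having unusual tastes; no consistent set of tastes reproduces that pattern at all.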

So I think it’s high time to start calling it what it is. This is not a question of taste. This is a question of tribal instinct. This is the product of millions of years of evolution optimizing the human brain to act in the perceived interest of whatever it defines as its “tribe”. It could be yourself, your family, your village, your town, your religion, your nation, your race, your gender, or even the whole of humanity or beyond into all sentient beings. But whatever it is, the fundamental tribe is the one thing you care most about. It is what you would sacrifice anything else for.

And what we learned on November 9 this year is that an awful lot of Americans define their tribe in very narrow terms. Nationalistic and xenophobic at best, racist and misogynistic at worst.

But I suppose this really isn’t so surprising, if you look at the history of our nation and the world. Segregation was not outlawed in US schools until 1954, and there are women who voted in this election who were born before American women got the right to vote in 1920. The nationalistic backlash against sending jobs to China (which was one of the chief ways that we reduced global poverty to its lowest level ever, by the way) really shouldn’t seem so strange when we remember that over 100,000 Japanese-Americans were literally forcibly relocated into camps as recently as 1942. The fact that so many White Americans seem all right with the biases against Black people in our justice system may not seem so strange when we recall that systemic lynching of Black people in the US didn’t end until the 1960s.

The wonder, in fact, is that we have made as much progress as we have. Tribal instinct is not a strange aberration of human behavior; it is our evolutionary default setting.

Indeed, perhaps it is unreasonable of me to ask humanity to change its ways so fast! We had millions of years to learn how to live the wrong way, and I’m giving you only a few centuries to learn the right way?

The problem, of course, is that the pace of technological change leaves us with no choice. It might be better if we could wait a thousand years for people to gradually adjust to globalization and become cosmopolitan; but climate change won’t wait a hundred, and nuclear weapons won’t wait at all. We are thrust into a world that is changing very fast indeed, and I understand that it is hard to keep up; but there is no way to turn back that tide of change.

Yet “turn back the tide” does seem to be part of the core message of the Trump voter, once you get past the racial slurs and sexist slogans. People are afraid of what the world is becoming. They feel that it is leaving them behind. Coal miners fret that we are leaving them behind by cutting coal consumption. Factory workers fear that we are leaving them behind by moving the factory to China or inventing robots to do the work in half the time for half the price.

And truth be told, they are not wrong about this. We are leaving them behind. Because we have to. Because coal is polluting our air and destroying our climate, we must stop using it. Moving the factories to China has raised them out of the most dire poverty, and given us a fighting chance toward ending world hunger. Inventing the robots is only the next logical step in the process that has carried humanity forward from the squalor and suffering of primitive life to the security and prosperity of modern society—and it is a step we must take, for the progress of civilization is not yet complete.

They wouldn’t have to let themselves be left behind, if they were willing to accept our help and learn to adapt. That carbon tax that closes your coal mine could also pay for your basic income and your job-matching program. The increased efficiency from the automated factories could provide an abundance of wealth that we could redistribute and share with you.

But this would require them to rethink their view of the world. They would have to accept that climate change is a real threat, and not a hoax created by… uh… never was clear on that point actually… the Chinese maybe? But 45% of Trump supporters don’t believe in climate change (and that’s actually not as bad as I’d have thought). They would have to accept that what they call “socialism” (which really is more precisely described as social democracy, or tax-and-transfer redistribution of wealth) is actually something they themselves need, and will need even more in the future. But despite rising inequality, redistribution of wealth remains fairly unpopular in the US, especially among Republicans.

Above all, it would require them to redefine their tribe, and start listening to—and valuing the lives of—people that they currently do not.

Perhaps we need to redefine our tribe as well; many liberals have argued that we mistakenly—and dangerously—did not include people like Trump voters in our tribe. But to be honest, that rings a little hollow to me: We aren’t the ones threatening to deport people or ban them from entering our borders. We aren’t the ones who want to build a wall (some have in fact joked about building a wall to separate the West Coast from the rest of the country, though I don’t think many people really want to do that). Perhaps we live in a bubble of liberal media? But I make a point of reading outlets like The American Conservative and The National Review for other perspectives (I usually disagree, but I do at least read them); how many Trump voters do you think have ever read the New York Times, let alone the Huffington Post? Cosmopolitans almost by definition have the more inclusive tribe, the more open perspective on the world (in fact, do I even need the “almost”?).

Nor do I think we are actually ignoring their interests. We want to help them. We offer to help them. In fact, I want to give these people free money—that’s what a basic income would do, it would take money from people like me and give it to people like them—and they won’t let us, because that’s “socialism”! Rather, we are simply refusing to accept their offered solutions, because those so-called “solutions” are beyond unworkable; they are absurd, immoral and insane. We can’t bring back the coal mining jobs, unless we want Florida underwater in 50 years. We can’t reinstate the trade tariffs, unless we want millions of people in China to starve. We can’t tear down all the robots and force factories to use manual labor, unless we want to trigger a national—and then global—economic collapse. We can’t do it their way. So we’re trying to offer them another way, a better way, and they’re refusing to take it. So who here is ignoring the concerns of whom?

Of course, the fact that it’s really their fault doesn’t solve the problem. We do need to take it upon ourselves to do whatever we can, because, regardless of whose fault it is, the world will still suffer if we fail. And that presents us with our most difficult task of all, a task I fully expect to spend a career on and still probably fail at: We must understand the human tribal instinct well enough that we can finally begin to change it. We must know enough about how human beings form their mental tribes that we can actually begin to shift those parameters. We must, in other words, cure bigotry—and we must do it now, for we are running out of time.