The mythology mindset

Feb 5 JDN 2459981

I recently finished reading Steven Pinker’s latest book Rationality. It’s refreshing, well-written, enjoyable, and basically correct with some small but notable errors that seem sloppy—but then you could have guessed all that from the fact that it was written by Steven Pinker.

What really makes the book interesting is an insight Pinker presents near the end, regarding the difference between the “reality mindset” and the “mythology mindset”.

It’s a pretty simple notion, but a surprisingly powerful one.

In the reality mindset, a belief is a model of how the world actually functions. It must be linked to the available evidence and integrated into a coherent framework of other beliefs. You can logically infer from how some parts work to how other parts must work. You can predict the outcomes of various actions. You live your daily life in the reality mindset; you couldn’t function otherwise.

In the mythology mindset, a belief is a narrative that fulfills some moral, emotional, or social function. It’s almost certainly untrue or even incoherent, but that doesn’t matter. The important thing is that it sends the right messages. It has the right moral overtones. It shows you’re a member of the right tribe.

The idea is similar to Dennett’s “belief in belief”, which I’ve written about before; but I think this characterization may actually be a better one, not least because people would be more willing to use it as a self-description. If you tell someone “You don’t really believe in God, you believe in believing in God”, they will object vociferously (which is, admittedly, what the theory would predict). But if you tell them, “Your belief in God is a form of the mythology mindset”, I think they are at least less likely to immediately reject your claim out of hand. “You believe in God a different way than you believe in cyanide” isn’t as obviously threatening to their identity.

A similar notion came up in a Psychology of Religion course I took, in which the professor discussed “anomalous beliefs” linked to various world religions. He picked on a bunch of obscure religions, often held by various small tribes. He asked for more examples from the class. Knowing he was nominally Catholic and not wanting to let mainstream religion off the hook, I presented my example: “This bread and wine are the body and blood of Christ.” To his credit, he immediately acknowledged it as a very good example.

It’s also not quite the same thing as saying that religion is a “metaphor”; that’s not a good answer for a lot of reasons, but perhaps chief among them is that people don’t say they believe metaphors. If I say something metaphorical and then you ask me, “Hang on; is that really true?” I will immediately acknowledge that it is not, in fact, literally true. Love is a rose with all its sweetness and all its thorns—but no, love isn’t really a rose. And when it comes to religious belief, saying that you think it’s a metaphor is basically a roundabout way of saying you’re an atheist.

From all these different directions, we seem to be converging on a single deeper insight: when people say they believe something, quite often, they clearly mean something very different by “believe” than what I would ordinarily mean.

I’m tempted even to say that they don’t really believe it—but in common usage, the word “belief” is used at least as often to refer to the mythology mindset as the reality mindset. (In fact, it sounds less weird to say “I believe in transubstantiation” than to say “I believe in gravity”.) So if they don’t really believe it, then they at least mythologically believe it.

Both mindsets seem to come very naturally to human beings, in particular contexts. And not just modern people, either. Humans have always been like this.

Ask that psychology professor about Jesus, and he’ll tell you a tall tale of life, death, and resurrection by a demigod. But ask him about the Stroop effect, and he’ll provide a detailed explanation of rigorous experimental protocol. He believes something about God; but he knows something about psychology.

Ask a hunter-gatherer how the world began, and he’ll surely spin you a similarly tall tale about some combination of gods and spirits and whatever else, and it will all be peculiarly particular to his own tribe and no other. But ask him how to gut a fish, and he’ll explain every detail with meticulous accuracy, with almost the same rigor as that scientific experiment. He believes something about the sky-god; but he knows something about fish.

To be a rationalist, then, is to aspire to live your whole life in the reality mindset. To seek to know rather than believe.

This isn’t about certainty. A rationalist can be uncertain about many things—in fact, it’s rationalists of all people who are most willing to admit and quantify their uncertainty.

This is about whether you allow your beliefs to float free as bare, almost meaningless assertions that you profess to show you are a member of the tribe, or you make them pay rent, directly linked to other beliefs and your own experience.

As long as I can remember, I have always aspired to do this. But not everyone does. In fact, I dare say most people don’t. And that raises a very important question: Should they? Is it better to live the rationalist way?

I believe that it is. I suppose I would, temperamentally. But say what you will about the Enlightenment and the scientific revolution, they have clearly revolutionized human civilization and made life much better today than it was for most of human existence. We are peaceful, safe, and well-fed in a way that our not-so-distant ancestors could only dream of, and it’s largely thanks to systems built under the principles of reason and rationality—that is, the reality mindset.

We would never have industrialized agriculture if we still thought in terms of plant spirits and sky gods. We would never have invented vaccines and antibiotics if we still believed disease was caused by curses and witchcraft. We would never have built power grids and the Internet if we still saw energy as a mysterious force permeating the world and not as a measurable, manipulable quantity.

This doesn’t mean that ancient people who saw the world in a mythological way were stupid. In fact, it doesn’t even mean that people today who still think this way are stupid. This is not about some innate, immutable mental capacity. It’s about a technology—or perhaps the technology, the meta-technology that makes all other technology possible. It’s about learning to think the same way about the mysterious and the familiar, using the same kind of reasoning about energy and death and sunlight as we already did about rocks and trees and fish. When encountering something new and mysterious, someone in the mythology mindset quickly concocts a fanciful tale about magical beings that inevitably serves to reinforce their existing beliefs and attitudes, without the slightest shred of evidence for any of it. Faced with the same mystery, someone in the reality mindset looks closer and tries to figure it out.

Still, this gives me some compassion for people with weird, crazy ideas. I can better make sense of how someone living in the modern world could believe that the Earth is 6,000 years old or that the world is ruled by lizard-people. Because they probably don’t really believe it, they just mythologically believe it—and they don’t understand the difference.

Good enough is perfect, perfect is bad

Jan 8 JDN 2459953

Not too long ago, I read the book How to Keep House While Drowning by KC Davis, which I highly recommend. It offers a great deal of useful and practical advice, especially for someone neurodivergent and depressed living through an interminable pandemic (which I am, but honestly, odds are, you may be too). And to say it is a quick and easy read is actually an unfair understatement; it is explicitly designed to be readable in short bursts by people with ADHD, and it has a level of accessibility that most other books don’t even aspire to and I honestly hadn’t realized was possible. (The extreme contrast between this and academic papers is particularly apparent to me.)

One piece of advice that really stuck with me was this: Good enough is perfect.

At first, it sounded like nonsense; no, perfect is perfect, good enough is just good enough. But in fact there is a deep sense in which it is absolutely true.

Indeed, let me make it a bit stronger: Good enough is perfect; perfect is bad.

I doubt Davis thought of it in these terms, but this is a concise, elegant statement of the principles of bounded rationality. Sometimes it can be optimal not to optimize.

Suppose that you are trying to optimize something, but you have limited computational resources in which to do so. This is actually not a lot for you to suppose—it’s literally true of basically everyone basically every moment of every day.

But let’s make it a bit more concrete, and say that you need to find the solution to the following math problem: “What is the product of 2419 and 1137?” (Pretend you don’t have a calculator, as it would trivialize the exercise. I thought about using a problem you couldn’t do with a standard calculator, but I realized that would also make it much weirder and more obscure for my readers.)

Now, suppose that there are some quick, simple ways to get reasonably close to the correct answer, and some slow, difficult ways to actually get the answer precisely.

In this particular problem, the former is to approximate: What’s 2500 times 1000? 2,500,000. So it’s probably about 2,500,000.

Or we could approximate a bit more closely: Say 2400 times 1100. That’s 24 times 11, times 10,000; and 24 times 11 is 2 times 12 times 11, which is 2 times (110 plus 22), which is 2 times 132, or 264. So, about 2,640,000.

Or, we could actually go through all the steps to do the full multiplication (remember I’m assuming you have no calculator): multiply digit by digit, carry the 1s, add up all four partial products, re-check everything and probably fix it because you messed up somewhere; and then eventually you will get: 2,750,403.

So, our really fast method was only off by about 10%. Our moderately-fast method was only off by 4%. And both of them were a lot faster than getting the exact answer by hand.
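
If you want to check these numbers yourself, here’s a minimal sketch in Python (my own illustration, not part of the original exercise) comparing both shortcuts to the exact product:

```python
# Compare the two quick approximations with the exact product and report the error.
exact = 2419 * 1137    # 2,750,403

approximations = {
    "really fast (2500 x 1000)": 2500 * 1000,        # 2,500,000
    "moderately fast (2400 x 1100)": 2400 * 1100,    # 2,640,000
}

for label, approx in approximations.items():
    error = abs(approx - exact) / exact
    print(f"{label}: {approx:,} (off by {error:.1%})")

# really fast (2500 x 1000): 2,500,000 (off by 9.1%)
# moderately fast (2400 x 1100): 2,640,000 (off by 4.0%)
```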

Which of these methods you’d actually want to use depends on the context and the tools at hand. If you had a calculator, sure, get the exact answer. Even if you didn’t, but you were balancing the budget for a corporation, I’m pretty sure they’d care about that extra $110,403. (Then again, they might not care about the $403 or at least the $3.) But just as an intellectual exercise, you really didn’t need to do anything; the optimal choice may have been to take my word for it. Or, if you were at all curious, you might be better off choosing the quick approximation rather than the precise answer. Since nothing of any real significance hinged on getting that answer, it may be simply a waste of your time to bother finding it.

This is of course a contrived example. But it’s not so far from many choices we make in real life.

Yes, if you are making a big choice—which job to take, what city to move to, whether to get married, which car or house to buy—you should get a precise answer. In fact, I make spreadsheets with formal utility calculations whenever I make a big choice, and I haven’t regretted it yet. (Did I really make a spreadsheet for getting married? You’re damn right I did; there were a lot of big financial decisions to make there—taxes, insurance, the wedding itself! I didn’t decide whom to marry that way, of course; but we always had the option of staying unmarried.)

But most of the choices we make from day to day are small choices: What should I have for lunch today? Should I vacuum the carpet now? What time should I go to bed? In the aggregate they may all add up to important things—but each one of them really won’t matter that much. If you were to construct a formal model to optimize your decision of everything to do each day, you’d spend your whole day doing nothing but constructing formal models. Perfect is bad.

In fact, even for big decisions, you can’t really get a perfect answer. There are just too many unknowns. Sometimes you can spend more effort gathering additional information—but that’s costly too, and sometimes the information you would most want simply isn’t available. (You can look up the weather in a city, visit it, ask people about it—but you can’t really know what it’s like to live there until you do.) Even those spreadsheet models I use to make big decisions contain error bars and robustness checks, and if, even after investing a lot of effort trying to get precise results, I still find two or more choices just can’t be clearly distinguished to within a good margin of error, I go with my gut. And that seems to have been the best choice for me to make. Good enough is perfect.
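
For what it’s worth, here is a toy sketch of the kind of comparison such a spreadsheet makes; the option names, numbers, and utility formula below are entirely made up for illustration, not taken from my actual models:

```python
import random

random.seed(0)

def utility(salary, cost_of_living, quality_of_life):
    # Made-up utility function: net income plus an arbitrary weight on quality of life.
    return (salary - cost_of_living) + 20_000 * quality_of_life

# Each option gets some known numbers and a fuzzy (low, high) guess for the uncertain input.
options = {
    "Option A": (70_000, 30_000, (0.4, 0.8)),
    "Option B": (60_000, 22_000, (0.5, 0.9)),
}

# Crude robustness check: score each option under many random draws of the uncertain input.
for name, (salary, col, (low, high)) in options.items():
    draws = [utility(salary, col, random.uniform(low, high)) for _ in range(10_000)]
    mean = sum(draws) / len(draws)
    print(f"{name}: roughly {min(draws):,.0f} to {max(draws):,.0f} (mean {mean:,.0f})")

# If the two ranges overlap heavily, the model can't tell the options apart,
# and that's when you go with your gut.
```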

I think that being gifted as a child trained me to be dangerously perfectionist as an adult. (Many of you may find this familiar.) When it came to solving math problems, or answering quizzes, perfection really was an attainable goal a lot of the time.

As I got older and progressed further in my education, maybe getting every answer right was no longer feasible; but I still could get the best possible grade, and did, in most of my undergraduate classes and all of my graduate classes. To be clear, I’m not trying to brag here; if anything, I’m a little embarrassed. What it mainly shows is that I had learned the wrong priorities. In fact, one of the main reasons why I didn’t get a 4.0 average in undergrad is that I spent a lot more time back then writing novels and nonfiction books, which to this day I still consider my most important accomplishments and grieve that I’ve not (yet?) been able to get them commercially published. I did my best work when I wasn’t trying to be perfect. Good enough is perfect; perfect is bad.

Now here I am on the other side of the academic system, trying to carve out a career, and suddenly, there is no perfection. When my exams were graded by someone else, there was a way to get the most points. Now that I’m the one grading the exams, there is no “correct answer” anymore. There is no one scoring me to see if I did the grading the “right way”—and so, no way to be sure I did it right.

Actually, here at Edinburgh, there are other instructors who moderate grades and often require me to revise them, which feels a bit like “getting it wrong”; but it’s really more like we had different ideas of what the grade curve should look like (not to mention US versus UK grading norms). There is no longer an objectively correct answer the way there is for, say, the derivative of x^3, the capital of France, or the definition of comparative advantage. (Or, one question I got wrong on an undergrad exam because I had zoned out of that lecture to write a book on my laptop: Whether cocaine is a dopamine reuptake inhibitor. It is. And the fact that I still remember that because I got it wrong over a decade ago tells you a lot about me.)

And then when it comes to research, it’s even worse: What even constitutes “good” research, let alone “perfect” research? What would be most scientifically rigorous isn’t what journals would be most likely to publish—and without much bigger grants, I can afford neither. I find myself longing for the research paper that will be so spectacular that top journals have to publish it, removing all risk of rejection and failure—in other words, perfect.

Yet such a paper plainly does not exist. Even if I were to do something that would win me a Nobel or a Fields Medal (this is, shall we say, unlikely), it probably wouldn’t be recognized as such immediately—a typical Nobel isn’t awarded until 20 or 30 years after the work that spawned it, and while Fields Medals are faster, they’re by no means instant or guaranteed. In fact, a lot of ground-breaking, paradigm-shifting research was originally relegated to minor journals because the top journals considered it too radical to publish.

Or I could try to do something trendy—feed into DSGE or GTFO—and try to get published that way. But I know my heart wouldn’t be in it, and so I’d be miserable the whole time. In fact, because it is neither my passion nor my expertise, I probably wouldn’t even do as good a job as someone who really buys into the core assumptions. I already have trouble speaking frequentist sometimes: Are we allowed to say “almost significant” for p = 0.06? Maximizing the likelihood is still kosher, right? Just so long as I don’t impose a prior? But speaking DSGE fluently and sincerely? I’d have an easier time speaking in Latin.

What I know—on some level at least—I ought to be doing is finding the research that I think is most worthwhile, given the resources I have available, and then getting it published wherever I can. Or, in fact, I should probably constrain a little by what I know about journals: I should do the most worthwhile research that is feasible for me and has a serious chance of getting published in a peer-reviewed journal. It’s sad that those two things aren’t the same, but they clearly aren’t. This constraint binds, and its Lagrange multiplier is measured in humanity’s future.

But one thing is very clear: By trying to find the perfect paper, I have floundered and, for the last year and a half, not written any papers at all. The right choice would surely have been to write something.

Because good enough is perfect, and perfect is bad.

What is it with EA and AI?

Jan 1 JDN 2459946

Surprisingly, most Effective Altruism (EA) leaders don’t seem to think that poverty alleviation should be our top priority. Most of them seem especially concerned about long-term existential risk, such as artificial intelligence (AI) safety and biosecurity. I’m not going to say that these things aren’t important—they certainly are important—but here are a few reasons I’m skeptical that they are really the most important, in the way that so many EA leaders seem to think.

1. We don’t actually know how to make much progress at them, and there’s only so much we can learn by investing heavily in basic research on them. Whereas, with poverty, the easy, obvious answer turns out empirically to be extremely effective: Give them money.

2. While it’s easy to multiply out huge numbers of potential future people in your calculations of existential risk (and this is precisely what people do when arguing that AI safety should be a top priority), this clearly isn’t actually a good way to make real-world decisions. We simply don’t know enough about the distant future of humanity to be able to make any kind of good judgments about what will or won’t increase their odds of survival. You’re basically just making up numbers. You’re taking tiny probabilities of things you know nothing about and multiplying them by ludicrously huge payoffs; it’s basically the secular rationalist equivalent of Pascal’s Wager. (A toy version of this arithmetic appears just after this list.)

3. AI and biosecurity are high-tech, futuristic topics, which seem targeted to appeal to the sensibilities of a movement that is still very dominated by intelligent, nerdy, mildly autistic, rich young White men. (Note that I say this as someone who very much fits this stereotype. I’m queer, not extremely rich and not entirely White, but otherwise, yes.) Somehow I suspect that if we asked a lot of poor Black women how important it is to slightly improve our understanding of AI versus giving money to feed children in Africa, we might get a different answer.

4. Poverty eradication is often characterized as a “short term” project, contrasted with AI safety as a “long term” project. This is (ironically) very short-sighted. Eradication of poverty isn’t just about feeding children today. It’s about making a world where those children grow up to be leaders and entrepreneurs and researchers themselves. The positive externalities of economic development are staggering. It is really not much of an exaggeration to say that fascism is a consequence of poverty and unemployment.

5. Currently, the thing most Effective Altruism organizations say they need most is “talent”; how many millions of person-hours of talent are we leaving on the table by letting children starve or die of malaria?

6. Above all, existential risk can’t really be what’s motivating people here. The obvious solutions to AI safety and biosecurity are not being pursued, because they don’t fit with the vision that intelligent, nerdy, young White men have of how things should be. Namely: Ban them. If you truly believe that the most important thing to do right now is reduce the existential risk of AI and biotechnology, you should support a worldwide ban on research in artificial intelligence and biotechnology. You should want people to take all necessary action to attack and destroy institutions—especially for-profit corporations—that engage in this kind of research, because you believe that they are threatening to destroy the entire world and this is the most important thing, more important than saving people from starvation and disease. I think this is really the knock-down argument; when people say they think that AI safety is the most important thing but they don’t want Google and Facebook to be immediately shut down, they are either confused or lying. Honestly I think maybe Google and Facebook should be immediately shut down for AI safety reasons (as well as privacy and antitrust reasons!), and I don’t think AI safety is yet the most important thing.
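
To make the Pascal’s-Wager structure in point 2 concrete, here is a toy calculation; every number in it is invented for illustration, which is rather the point:

```python
# Every number here is made up; the argument "works" almost no matter what you pick.
future_lives = 1e15      # assumed number of potential future people
p_averted = 1e-7         # guessed probability that a $1,000 donation averts extinction
donation = 1_000         # dollars

lives_per_dollar_xrisk = future_lives * p_averted / donation
lives_per_dollar_aid = 1 / 5_000    # assumed cost of saving one life through direct aid

print(f"x-risk: {lives_per_dollar_xrisk:,.0f} lives per dollar")      # 100,000
print(f"direct aid: {lives_per_dollar_aid:.4f} lives per dollar")     # 0.0002

# Shrink p_averted by a factor of a million and the x-risk option still "wins",
# which is exactly why multiplying made-up probabilities by astronomical payoffs
# is not a sound way to make real-world decisions.
```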

Why aren’t people doing that? Because they aren’t actually trying to reduce existential risk. They just think AI and biotechnology are really interesting, fascinating topics and they want to do research on them. And I agree with that, actually—but then they need to stop telling people that they’re fighting to save the world, because they obviously aren’t. If the danger were anything like what they say it is, we should be halting all research on these topics immediately, except perhaps for a very select few people who are entrusted with keeping these forbidden secrets and trying to find ways to protect us from them. This may sound radical and extreme, but it is not unprecedented: This is how we handle nuclear weapons, which are universally recognized as a global existential risk. If AI is really as dangerous as nukes, we should be regulating it like nukes. I think that in principle it could be that dangerous, and may be that dangerous someday—but it isn’t yet. And if we don’t want it to get that dangerous, we don’t need more AI researchers, we need more regulations that stop people from doing harmful AI research! If you are doing AI research and it isn’t directly involved specifically in AI safety, you aren’t saving the world—you’re one of the people dragging us closer to the cliff! Anything that could make AI smarter but doesn’t also make it safer is dangerous. And this is clearly true of the vast majority of AI research, and frankly to me seems to also be true of the vast majority of research at AI safety institutes like the Machine Intelligence Research Institute.

Seriously, look through MIRI’s research agenda: It’s mostly incredibly abstract and seems completely beside the point when it comes to preventing AI from taking control of weapons or governments. It’s all about formalizing Bayesian induction. Thanks to you, Skynet can have a formally computable approximation to logical induction! Truly we are saved. Only two of their papers, on “Corrigibility” and “AI Ethics”, actually struck me as at all relevant to making AI safer. The rest is largely abstract mathematics that is almost literally navel-gazing—it’s all about self-reference. Eliezer Yudkowsky finds self-reference fascinating and has somehow convinced an entire community that it’s the most important thing in the world. (I actually find some of it fascinating too, especially the paper on “Functional Decision Theory”, which I think gets at some deep insights into things like why we have emotions. But I don’t see how it’s going to save the world from AI.)

Don’t get me wrong: AI also has enormous potential benefits, and this is a reason we may not want to ban it. But if you really believe that there is a 10% chance that AI will wipe out humanity by 2100, then get out your pitchforks and your EMP generators, because it’s time for the Butlerian Jihad. A 10% chance of destroying all humanity is an utterly unacceptable risk for any conceivable benefit. Better that we consign ourselves to living as we did in the Neolithic than risk something like that. (And a globally-enforced ban on AI isn’t even that; it’s more like “We must live as we did in the 1950s.” How would we survive!?) If you don’t want AI banned, maybe ask yourself whether you really believe the risk is that high—or are human brains just really bad at dealing with small probabilities?

I think what’s really happening here is that we have a bunch of guys (and yes, the EA community, and especially the EA-AI community, is overwhelmingly male) who are really good at math and want to save the world, and have thus convinced themselves that being really good at math is how you save the world. But it isn’t. The world is much messier than that. In fact, there may not be much that most of us can do to contribute to saving the world; our best options may in fact be to donate money, vote well, and advocate for good causes.

Let me speak Bayesian for a moment: The prior probability that you—yes, you, out of all the billions of people in the world—are uniquely positioned to save it by being so smart is extremely small. It’s far more likely that the world will be saved—or doomed—by people who have power. If you are not the head of state of a large country or the CEO of a major multinational corporation, I’m sorry; you probably just aren’t in a position to save the world from AI.

But you can give some money to GiveWell, so maybe do that instead?

In defense of civility

Dec 18 JDN 2459932

Civility is in short supply these days. Perhaps it has always been in short supply; certainly much of the nostalgia for past halcyon days of civility is ill-founded. Wikipedia has an entire article on hundreds of recorded incidents of violence in legislative assemblies, in dozens of countries, dating all the way from the Roman Senate in 44 BC to Bosnia in 2019. But the Internet seems to bring about its own special kind of incivility, one which exposes nearly everyone to some of the worst vitriol the entire world has to offer. I think it’s worth talking about why this is bad, and perhaps what we might do about it.

For some, the benefits of civility seem so self-evident that they don’t even bear mentioning. For others, the idea of defending civility may come across as tone-deaf or even offensive. I would like to speak to both of those camps today: If you think the benefits of civility are obvious, I assure you, they aren’t to everyone. And if you think that civility is just a tool of the oppressive status quo, I hope I can make you think again.

A lot of the argument against civility seems to be founded in the notion that these issues are important, lives are at stake, and so we shouldn’t waste time and effort being careful how we speak to each other. How dare you concern yourself with the formalities of argumentation when people are dying?

But this is totally wrongheaded. It is precisely because these issues are important that civility is vital. It is precisely because lives are at stake that we must make the right decisions. And shouting and name-calling (let alone actual fistfights or drawn daggers—which have happened!) are not conducive to good decision-making.

If you shout someone down when choosing what restaurant to have dinner at, you have been very rude and people may end up unhappy with their dining experience—but very little of real value has been lost. But if you shout someone down when making national legislation, you may cause the wrong policy to be enacted, and this could lead to the suffering or death of thousands of people.

Think about how court proceedings work. Why are they so rigid and formal, with rules upon rules upon rules? Because the alternative was capricious violence. In the absence of the formal structure of a court system, so-called ‘justice’ was handed out arbitrarily, by whoever was in power, or by mobs of vigilantes. All those seemingly-overcomplicated rules were made in order to resolve various conflicts of interest and hopefully lead toward more fair, consistent results in the justice system. (And don’t get me wrong; they still could stand to be greatly improved!)

Legislatures have complex rules of civility for the same reason: Because the outcome is so important, we need to make sure that the decision process is as reliable as possible. And as flawed as existing legislatures still are, and as silly as it may seem to insist upon addressing ‘the Honorable Representative from the Great State of Vermont’, it’s clearly a better system than simply letting them duke it out with their fists.

A related argument I would like to address is that of ‘tone policing’. If someone objects, not to the content of what you are saying, but to the tone in which you have delivered it, are they arguing in bad faith?

Well, possibly. Certainly, arguments about tone can be used that way. In particular I remember that this was basically the only coherent objection anyone could come up with against the New Atheism movement: “Well, sure, obviously, God isn’t real and religion is ridiculous; but why do you have to be so mean about it!?”

But it’s also quite possible for tone to be itself a problem. If your tone is overly aggressive and you don’t give people a chance to even seriously consider your ideas before you accuse them of being immoral for not agreeing with you—which happens all the time—then your tone really is the problem.

So, how can we tell which is which? I think a good way to reply to what you think might be bad-faith tone policing is this: “What sort of tone do you think would be better?”

I think there are basically three possible responses:

1. They can’t offer one, because there is actually no tone in which they would accept the substance of your argument. In that case, the tone policing really is in bad faith; they don’t want you to be nicer, they want you to shut up. This was clearly the case for New Atheism: As Daniel Dennett aptly remarked, “There’s simply no polite way to tell someone they have dedicated their lives to an illusion.” But sometimes, such things need to be said all the same.

2. They offer an alternative argument you could make, but it isn’t actually expressing your core message. Either they have misunderstood your core message, or they actually disagree with the substance of your argument and should be addressing it on those terms.

3. They offer an alternative way of expressing your core message in a milder, friendlier tone. This means that they are arguing in good faith and actually trying to help you be more persuasive!

I don’t know how common each of these three possibilities is; it could well be that the first one is the most frequent occurrence. That doesn’t change the fact that I have definitely been at the other end of the third one, where I absolutely agree with your core message and want your activism to succeed, but I can see that you’re acting like a jerk and nobody will want to listen to you.

Here, let me give some examples of the type of argument I’m talking about:

1. “Defund the police”: This slogan polls really badly. Probably because most people have genuine concerns about crime and want the police to protect them. Also, as more and more social services (like for mental health and homelessness) get co-opted into policing, this slogan makes it sound like you’re just going to abandon those people. But do we need serious, radical police reform? Absolutely. So how about “Reform the police”, “Put police money back into the community”, or even “Replace the police”?

2. “All Cops Are Bastards”: Speaking of police reform, did I mention we need it? A lot of it? Okay. Now, let me ask you: All cops? Every single one of them? There is not a single one out of the literally millions of police officers on this planet who is a good person? Not one who is fighting to take down police corruption from within? Not a single individual who is trying to fix the system while preserving public safety? Now, clearly, it’s worth pointing out, some cops are bastards—but hey, that even makes a better acronym: SCAB. In fact, it really is largely a few bad apples—the key point here is that you need to finish the aphorism: “A few bad apples spoil the whole barrel.” The number of police who are brutal and corrupt is relatively small, but as long as the other police continue to protect them, the system will be broken. Either you get those bad apples out pronto, or your whole barrel is bad. But demonizing the very people who are in the best position to implement those reforms—good police officers—is not helping.

3. “Be gay, do crime”: I know it’s tongue-in-cheek and ironic. I get that. It’s still a really dumb message. I am absolutely on board with LGBT rights. Even aside from being queer myself, I probably have more queer and trans friends than straight friends at this point. But why in the world would you want to associate us with petty crime? Why are you lumping us in with people who harm others at best out of desperation and at worst out of sheer greed? Even if you are literally an anarchist—which I absolutely am not—you’re really not selling anarchism well if the vision you present of it is a world of unfettered crime! There are dozens of better pro-LGBT slogans out there; pick one. Frankly even “do gay, be crime” is better, because it’s more clearly ironic. (Also, you can take it to mean something like this: Don’t just be gay, do gay—live your fullest gay life. And if you can be crime, that means that the system is fundamentally unjust: You can be criminalized just for who you are. And this is precisely what life is like for millions of LGBT people on this planet.)

A lot of people seem to think that if you aren’t immediately convinced by the most vitriolic, aggressive form of an argument, then you were never going to be convinced anyway and we should just write you off as a potential ally. This isn’t just obviously false; it’s incredibly dangerous.

The whole point of activism is that not everyone already agrees with you. You are trying to change minds. If it were really true that all reasonable, ethical people already agreed with your view, you wouldn’t need to be an activist. The whole point of making political arguments is that people can be reasonable and ethical and still be mistaken about things, and when we work hard to persuade them, we can eventually win them over. In fact, on some things we’ve actually done spectacularly well.

And what about the people who aren’t reasonable and ethical? They surely exist. But fortunately, they aren’t the majority. They don’t rule the whole world. If they did, we’d basically be screwed: If violence is really the only solution, then it’s basically a coin flip whether things get better or worse over time. But in fact, unreasonable people are outnumbered by reasonable people. Most of the things that are wrong with the world are mistakes, errors that can be fixed—not conflicts between irreconcilable factions. Our goal should be to fix those mistakes wherever we can, and that means being patient, compassionate educators—not angry, argumentative bullies.

Mind reading is not optional

Nov 20 JDN 2459904

I have great respect for cognitive-behavioral therapy (CBT), and it has done a lot of good for me. (It is also astonishingly cost-effective; its QALY per dollar rate compares favorably to almost any other First World treatment, and loses only to treating high-impact Third World diseases like malaria and schistosomiasis.)

But there are certain aspects of it that have always been frustrating to me. Standard CBT techniques often present as ‘cognitive distortions’ what are in fact clearly necessary heuristics without which it would be impossible to function.

Perhaps the worst of these is so-called ‘mind reading’. The very phrasing of it makes it sound ridiculous: Are you suggesting that you have some kind of extrasensory perception? Are you claiming to be a telepath?

But in fact ‘mind reading’ is simply the use of internal cognitive models to forecast the thoughts, behaviors, and expectations of other human beings. And without it, it would be completely impossible to function in human society.

For instance, I have had therapists tell me that it is ‘mind reading’ for me to anticipate that people will have tacit expectations for my behavior that they will judge me for failing to meet, and I should simply wait for people to express their expectations rather than assuming them. I admit, life would be much easier if I could do that. But I know for a fact that I can’t. Indeed, I used to do that, as a child, and it got me in trouble all the time. People were continually upset at me for not doing things they had expected me to do but never bothered to actually mention. They thought these expectations were “obvious”; they were not, at least not to me.

It was often little things, and in hindsight some of these things seem silly: I didn’t know what a ‘made bed’ was supposed to look like, so I put it in a state that was functional for me, but that was not considered ‘making the bed’. (I have since learned that my way was actually better: It’s good to let sheets air out before re-using them.) I was asked to ‘clear the sink’, so I moved the dishes out of the sink and left them on the counter, not realizing that the implicit command was for me to wash those dishes, dry them, and put them away. I was asked to ‘bring the dinner plates to the table’, so I did that, and left them in a stack there, not realizing that I should be setting them out in front of each person’s chair and also bringing flatware. Of course I know better now. But how was I supposed to know then? It seems like I was expected to, though.

Most people just really don’t seem to realize how many subtle, tacit expectations are baked into every single task. I think neurodivergence is quite relevant here; I have a mild autism spectrum disorder, and so I think rather differently than most people. If you are neurotypical, then you probably can forecast other people’s expectations fairly well automatically, and so they may seem obvious to you. In fact, they may seem so obvious that you don’t even realize you’re doing it. Then when someone like me comes along and is consciously, actively trying to forecast other people’s expectations, and sometimes doing it poorly, you go and tell them to stop trying to forecast. But if they were to do that, they’d end up even worse off than they are. What you really need to be telling them is how to forecast better—but that would require insight into your own forecasting methods which you aren’t even consciously aware of.

Seriously, stop and think for a moment all of the things other people expect you to do every day that are rarely if ever explicitly stated. How you are supposed to dress, how you are supposed to speak, how close you are supposed to stand to other people, how long you are supposed to hold eye contact—all of these are standards you will be expected to meet, whether or not any of them have ever been explicitly explained to you. You may do this automatically; or you may learn to do it consciously after being criticized for failing to do it. But one way or another, you must forecast what other people will expect you to do.

To my knowledge, no one has ever explicitly told me not to wear a Starfleet uniform to work. I am not aware of any part of the university dress code that explicitly forbids such attire. But I’m fairly sure it would not be a good idea. To my knowledge, no one has ever explicitly told me not to burst out into song in the middle of a meeting. But I’m still pretty sure I shouldn’t do that. To my knowledge, no one has ever explicitly told me what the ‘right of way’ rules are for walking down a crowded sidewalk, who should be expected to move out of the way of whom. But people still get mad if you mess up and bump into them.

Even when norms are stated explicitly, it is often as a kind of last resort, and the mere fact that you needed to have a norm stated is often taken as a mark against your character. I have been explicitly told in various contexts not to talk to myself or engage in stimming leg movements; but the way I was told has generally suggested that I would have been judged better if I hadn’t had to be told, if I had simply known the way that other people seem to know. (Or is it that they never felt any particular desire to stim?)

In fact, I think a major part of developing social skills and becoming more functional, to the point where a lot of people actually now seem a bit surprised to learn I have an autism spectrum disorder, has been improving my ability to forecast other people’s expectations for my behavior. There are dozens if not hundreds of norms that people expect you to follow at any given moment; most people seem to intuit them so easily that they don’t even realize they are there. But they are there all the same, and this is painfully evident to those of us who aren’t always able to immediately intuit them all.

Now, the fact remains that my current mental models are surely imperfect. I am often wrong about what other people expect of me. I’m even prepared to believe that some of my anxiety comes from believing that people have expectations more demanding than what they actually have. But I can’t simply abandon the idea of forecasting other people’s expectations. Don’t tell me to stop doing it; tell me how to do it better.

Moreover, there is a clear asymmetry here: If you think people want more from you than they actually do, you’ll be anxious, but people will like you and be impressed by you. If you think people want less from you than they actually do, people will be upset at you and look down on you. So, in the presence of uncertainty, there’s a lot of pressure to assume that the expectations are high. It would be best to get it right, of course; but when you aren’t sure you can get it right, you’re often better off erring on the side of caution—which is to say, the side of anxiety.

In short, mind reading isn’t optional. If you think it is, that’s only because you do it automatically.

Mindful of mindfulness

Sep 25 JDN 2459848

I have always had trouble with mindfulness meditation.

On the one hand, I find it extremely difficult to do: if there is one thing my mind is good at, it’s wandering. (I think in addition to my autism spectrum disorder, I may also have a smidgen of ADHD. I meet some of the criteria at least.) And it feels a little too close to a lot of practices that are obviously mumbo-jumbo nonsense, like reiki, qigong, and reflexology.

On the other hand, mindfulness meditation has been empirically shown to have large beneficial effects in study after study after study. It helps with not only depression, but also chronic pain. It even seems to improve immune function. The empirical data is really quite clear at this point. The real question is how it does all this.

And I am, above all, an empiricist. I bow before the data. So, when my new therapist directed me to an app that’s supposed to train me to do mindfulness meditation, I resolved that I would in fact give it a try.

Honestly, as of writing this, I’ve been using it less than a week; it’s probably too soon to make a good evaluation. But I did have some prior experience with mindfulness, so this was more like getting back into it rather than starting from scratch. And, well, I think it might actually be working. I feel a bit better than I did when I started.

If it is working, it doesn’t seem to me that the mechanism is greater focus or mental control. I don’t think I’ve really had time to meaningfully improve those skills, and to be honest, I have a long way to go there. The pre-recorded voice samples keep telling me it’s okay if my mind wanders, but I doubt the app developers planned for how much my mind can wander. When they suggest I try to notice each wandering thought, I feel like saying, “Do you want the complete stack trace, or just the final output? Because if I wrote down each terminal branch alone, my list would say something like ‘fusion reactors, ice skating, Napoleon’.”

I think some of the benefit is simply parasympathetic activation, that is, being more relaxed. I am, and have always been, astonishingly bad at relaxing. It’s not that I lack positive emotions: I can enjoy, I can be excited. Nor am I incapable of low-arousal emotions: I can get bored, I can be lethargic. I can also experience emotions that are negative and high-arousal: I can be despondent or outraged. But I have great difficulty reaching emotional states which are simultaneously positive and low-arousal, i.e. states of calm and relaxation. (See here for more on the valence/arousal model of emotional states.) To some extent I think this is due to innate personality: I am high in both Conscientiousness and Neuroticism, which basically amounts to being “high-strung”. But mindfulness has taught me that it’s also trainable, to some extent; I can get better at relaxing, and I already have.

And even more than that, I think the most important effect has been reminding and encouraging me to practice self-compassion. I am an intensely compassionate person, toward other people; but toward myself, I am brutal, demanding, unforgiving, even cruel. My internal monologue says terrible things to me that I would never say to anyone else. (Or at least, not to anyone else who wasn’t a mass murderer or something. I wouldn’t feel particularly bad about saying “You are a failure, you are broken, you are worthless, you are unworthy of love” to, say, Josef Stalin. And yes, these are in fact things my internal monologue has said to me.) Whenever I am unable to master a task I consider important, my automatic reaction is to denigrate myself for failing; I think the greatest benefit I am getting from practicing meditation is being encouraged to fight that impulse. That is, the most important value added by the meditation app has not been in telling me how to focus on my own breathing, but in reminding me to forgive myself when I do it poorly.

If this is right (as I said, it’s probably too soon to say), then we may at last be able to explain why meditation is simultaneously so weird and tied to obvious mumbo-jumbo on the one hand, and also so effective on the other. The actual function of meditation is to be a difficult cognitive task which doesn’t require outside support.

And then the benefit actually comes from doing this task, getting slowly better at it—feeling that sense of progress—and also from learning to forgive yourself when you do it badly. The task probably could have been anything: Find paths through mazes. Fill out Sudoku grids. Solve integrals. But these things are hard to do without outside resources: It’s basically impossible to draw a maze without solving it in the process. Generating a Sudoku grid with a unique solution is at least as hard as solving one (which is NP-complete). By the time you know whether a given function even has an elementary antiderivative, you’ve basically integrated it. But focusing on your breath? That you can do anywhere, anytime. And the difficulty of controlling all your wandering thoughts may be less a bug than a feature: It’s precisely because the task is so difficult that you will have reason to practice forgiving yourself for failure.

The arbitrariness of the task itself is how you can get a proliferation of different meditation techniques, and a wide variety of mythologies and superstitions surrounding them all, but still have them all be about equally effective in the end. Because it was never really about the task at all. It’s about getting better and failing gracefully.

It probably also helps that meditation is relaxing. Solving integrals might not actually work as well as focusing on your breath, even if you had a textbook handy full of integrals to solve. Breathing deeply is calming; integration by parts isn’t. But lots of things are calming, and some things may be calming to one person but not to another.

It is possible that there is yet some other benefit to be had directly via mindfulness itself. If there is, it will surely have more to do with anterior cingulate activation than realignment of qi. But such a particular benefit isn’t necessary to explain the effectiveness of meditation, and indeed would be hard-pressed to explain why so many different kinds of meditation all seem to work about as well.

Because it was never about what you’re doing—it was always about how.

Commitment and sophistication

Mar 13 JDN 2459652

One of the central insights of cognitive and behavioral economics is that understanding the limitations of our own rationality can help us devise mechanisms to overcome those limitations—that knowing we are not perfectly rational can make us more rational. The usual term for this is a somewhat vague one: behavioral economists generally call it simply sophistication.

For example, suppose that you are short-sighted and tend to underestimate the importance of the distant future. (This is true of most of us, to greater or lesser extent.)

It’s rational to consider the distant future less important than the present—things change in the meantime, and if we go far enough you may not even be around to see it. In fact, rationality alone doesn’t even say how much you should discount any given distance in the future. But most of us are inconsistent about our attitudes toward the future: We exhibit dynamic inconsistency.

For instance, suppose I ask you today whether you would like $100 today or $102 tomorrow. It is likely you’ll choose $100 today. But if I ask you whether you would like $100 365 days from now or $102 366 days from now, you’ll almost certainly choose the $102.


This means that if I asked you the second question first, then waited a year and asked you the first question, you’d change your mind—that’s inconsistent. Whichever choice is better shouldn’t systematically change over time. (It might happen to change, if your circumstances changed in some unexpected way. But on average it shouldn’t change.) Indeed, waiting a day for an extra $2 is typically going to be worth it; 2% daily interest is pretty hard to beat.
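
Behavioral economists usually capture this pattern with quasi-hyperbolic (“beta-delta”) discounting, in which everything beyond the present moment gets one extra penalty. Here is a minimal sketch, with parameter values chosen arbitrarily for illustration, showing how that reproduces the reversal:

```python
BETA, DELTA = 0.7, 0.999    # present-bias factor and per-day discount factor (assumed values)

def value_now(amount, days_from_now):
    # Quasi-hyperbolic discounting: anything in the future gets the extra BETA penalty.
    if days_from_now == 0:
        return amount
    return BETA * (DELTA ** days_from_now) * amount

# Choosing today: $100 now vs. $102 tomorrow
print(value_now(100, 0), value_now(102, 1))        # 100 vs ~71.3 -> take the $100 now

# The same choice seen a year in advance: $100 in 365 days vs. $102 in 366 days
print(value_now(100, 365), value_now(102, 366))    # ~48.6 vs ~49.5 -> wait for the $102
```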

Now, suppose you have some option to make a commitment, something that will bind you to your earlier decision. It could be some sort of punishment for deviating from your earlier choice, some sort of reward for keeping to the path, or, in the most extreme example, a mechanism that simply won’t let you change your mind. (The literally classic example of this is Odysseus having his crew tie him to the mast so he can listen to the Sirens.)

If you didn’t know that your behavior was inconsistent, you’d never want to make such a commitment. You don’t expect to change your mind, and if you do change your mind, it would be because your circumstances changed in some unexpected way—in which case changing your mind would be the right thing to do. And if your behavior wasn’t inconsistent, this reasoning would be quite correct: No point in committing when you have less information.

But if you know that your behavior is inconsistent, you can sometimes improve the outcome for yourself by making a commitment. You can force your own behavior into consistency, even though you will later be tempted to deviate from your plan.

Yet there is a piece missing from this account, often not clearly enough stated: Why should we trust the version of you that has a year to plan over the version of you that is making the decision today? What’s the difference between those two versions of you that makes them inconsistent, and why is one more trustworthy than the other?

The biggest difference is emotional. You don’t really feel $100 a year from now, so you can do the math and see that 2% daily interest is pretty darn good. But $100 today makes you feel something—excitement over what you might buy, or relief over a bill you can now pay. (Actually that’s one of the few times when it would be rational to take $100 today: If otherwise you’re going to miss a deadline and pay a late fee.) And that feeling about $102 tomorrow just isn’t as strong.

We tend to think that our emotional selves and our rational selves are in conflict, and so we expect to be more rational when we are less emotional. There is some truth to this—strong emotions can cloud our judgments and make us behave rashly.

Yet this is only one side of the story. We also need emotions to be rational. There is a condition known as flat affect, often a symptom of various neurological disorders, in which emotional reactions are greatly blunted or even non-existent. People with flat affect aren’t more rational—they just do less. In the worst cases, they completely lose their ability to be motivated to do things and become outright inert, a condition known as abulia.

Emotional judgments are often less accurate than thoughtfully reasoned arguments, but they are also much faster—and that’s why we have them. In many contexts, particularly when survival is at stake, doing something pretty well right away is often far better than waiting long enough to be sure you’ll get the right answer. Running away from a loud sound that turns out to be nothing is a lot better than waiting to carefully determine whether that sound was really a tiger—and finding that it was.

With this in mind, the cases where we should expect commitment to be effective are those that are unfamiliar, not only on an individual level, but in an evolutionary sense. I have no doubt that experienced stock traders can develop certain intuitions that make them better at understanding financial markets than randomly chosen people—but they still systematically underperform simple mathematical models, likely because finance is just so weird from an evolutionary perspective. So when deciding whether to accept some amount of money m1 at time t1 and some other amount of money m2 at time t2, your best bet is really to just do the math.
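
“Just do the math” here means applying one consistent discount rate, under which the ranking of the two offers never flips depending on when you ask. A minimal sketch, again with an arbitrary illustrative rate:

```python
import math

def present_value(amount, days_from_now, r=0.0002):    # r = 0.02% per day, an assumed rate
    # Plain exponential discounting: the same rate applies at every horizon.
    return amount * math.exp(-r * days_from_now)

print(present_value(100, 0), present_value(102, 1))        # 100.0 vs ~102.0 -> wait for the $102
print(present_value(100, 365), present_value(102, 366))    # ~93.0 vs ~94.8 -> wait for the $102
# The ranking is the same at both horizons; there is no preference reversal to exploit.
```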

But this may not be the case for many other types of decisions. Sometimes how you feel in the moment really is the right signal to follow. Committing to work at your job every day may seem responsible, ethical, rational—but if you hate your job when you’re actually doing it, maybe it really isn’t how you should be spending your life. Buying a long-term gym membership to pressure yourself to exercise may seem like a good idea, but if you’re miserable every time you actually go to the gym, maybe you really need to be finding a better way to integrate exercise into your lifestyle.

There are no easy answers here. We can think of ourselves as really being made of two (if not more) individuals: A cold, calculating planner who looks far into the future, and a heated, emotional experiencer who lives in the moment. There’s a tendency to assume that the planner is our “true self”, the one we should always listen to, but this is wrong; we are both of those people, and a life well-lived requires finding the right balance between their conflicting desires.

Hypocrisy is underrated

Sep 12 JDN 2459470

Hypocrisy isn’t a good thing, but it isn’t nearly as bad as most people seem to think. Often accusing someone of hypocrisy is taken as a knock-down argument for everything they are saying, and this is just utterly wrong. Someone can be a hypocrite and still be mostly right.

Often people are accused of hypocrisy when they are not being hypocritical; for instance the right wing seems to think that “They want higher taxes on the rich, but they are rich!” is hypocrisy, when in fact it’s simply altruism. (If they had wanted the rich guillotined, that would be hypocrisy. Maybe the problem is that the right wing can’t tell the difference?) Even worse, “They live under capitalism but they want to overthrow capitalism!” is not even close to hypocrisy—after all, how would someone overthrow a system they weren’t living under? (There are many things wrong with Marxists, but that is not one of them.)

But in fact I intend something stronger: Hypocrisy itself just isn’t that bad.


There are currently two classes of Republican politicians with regard to the COVID vaccines: Those who are consistent in their principles and don’t get the vaccines, and those who are hypocrites and get the vaccines while telling their constituents not to. Of the two, who is better? The hypocrites. At least they are doing the right thing even as they say things that are very, very wrong.

There are really four cases to consider. The principles you believe in could be right, or they could be wrong. And you could follow those principles, or you could be a hypocrite. These two factors are independent of each other.

If your principles are right and you are consistent, that’s the best case; if your principles are right and you are a hypocrite, that’s worse.

But if your principles are wrong and you are consistent, that’s the worst case; if your principles are wrong and you are a hypocrite, that’s better.

In fact I think for most things the ordering goes like this: Consistent Right > Hypocritical Wrong > Hypocritical Right > Consistent Wrong. Your behavior counts for more than your principles—so if you’re going to be a hypocrite, it’s better for your good actions to not match your bad principles.

Obviously if we could get people to believe good moral principles and then follow them, that would be best. And we should in fact be working to achieve that.

But if you know that someone's moral principles are wrong, it doesn't accomplish anything to accuse them of being a hypocrite. If the accusation is true, that's actually a good thing.

Here’s a pretty clear example for you: Anyone who says that the Bible is infallible but doesn’t want gay people stoned to death is a hypocrite. The Bible is quite clear on this matter; Leviticus 20:13 really doesn’t leave much room for interpretation. By this standard, most Christians are hypocrites—and thank goodness for that. I owe my life to the hypocrisy of millions.

Of course if I could convince them that the Bible isn’t infallible—perhaps by pointing out all the things it says that contradict their most deeply-held moral and factual beliefs—that would be even better. But the last thing I want to do is make their behavior more consistent with their belief that the Bible is infallible; that would turn them into fanatical monsters. The Spanish Inquisition was very consistent in behaving according to the belief that the Bible is infallible.

Here’s another example: Anyone who thinks that cruelty to cats and dogs is wrong but is willing to buy factory-farmed beef and ham is a hypocrite. Any principle that would tell you that it’s wrong to kick a dog or cat would tell you that the way cows and pigs are treated in CAFOs is utterly unconscionable. But if you are really unwilling to give up eating meat and you can’t find or afford free-range beef, it still would be bad for you to start kicking dogs in a display of your moral consistency.

And one more example for good measure: The leaders of any country who resist human rights violations abroad but tolerate them at home are hypocrites. Obviously the best thing to do would be to fight human rights violations everywhere. But perhaps for whatever reason you are unwilling or unable to do this—one disturbing truth is that many human rights violations at home (such as draconian border policies) are often popular with your local constituents. Human-rights violations abroad are also often more severe—detaining children at the border is one thing, a full-scale genocide is quite another. So, for good reasons or bad, you may decide to focus your efforts on resisting human rights violations abroad rather than at home; this would make you a hypocrite. But it would still make you much better than a more consistent leader who simply ignores all human rights violations wherever they may occur.

In fact, there are cases in which it may be optimal for you to knowingly be a hypocrite. If you have two sets of competing moral beliefs, and you don’t know which is true but you know that as a whole they are inconsistent, your best option is to apply each set of beliefs in the domain for which you are most confident that it is correct, while searching for more information that might allow you to correct your beliefs and reconcile the inconsistency. If you are self-aware about this, you will know that you are behaving in a hypocritical way—but you will still behave better than you would if you picked the wrong beliefs and stuck to them dogmatically. In fact, given a reasonable level of risk aversion, you’ll be better off being a hypocrite than you would by picking one set of beliefs arbitrarily (say, at the flip of a coin). At least then you avoid the worst-case scenario of being the most wrong.
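
To make that intuition concrete, here's a toy sketch; all the probabilities in it are invented purely for illustration. Suppose belief set A is more trustworthy about ethics and belief set B is more trustworthy about politics, and you apply each where you trust it most:

```python
# Toy illustration: probability that each belief set gives the right answer in each domain.
# These numbers are invented purely for illustration.
p_right = {"A": {"ethics": 0.8, "politics": 0.4},
           "B": {"ethics": 0.4, "politics": 0.8}}

def expected_correct(choice_by_domain):
    """Expected number of domains (out of 2) in which you act correctly."""
    return sum(p_right[belief][domain] for domain, belief in choice_by_domain.items())

consistent_A = {"ethics": "A", "politics": "A"}  # dogmatically follow A everywhere
consistent_B = {"ethics": "B", "politics": "B"}  # dogmatically follow B everywhere
hypocrite    = {"ethics": "A", "politics": "B"}  # follow each set where you trust it most

print(f"Always follow A: {expected_correct(consistent_A):.1f}")  # 1.2
print(f"Always follow B: {expected_correct(consistent_B):.1f}")  # 1.2
print(f"Hypocrite:       {expected_correct(hypocrite):.1f}")     # 1.6
```

The hypocrite does best in expectation, and also never ends up applying the less trustworthy belief set in either domain, which is exactly the worst-case protection described above.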

There is yet another factor to take into consideration. Sometimes following your own principles is hard.

Considerable ink has been spilled on the concept of akrasia, or “weakness of will”, in which we judge that A is better than B yet still find ourselves doing B. Philosophers continue to debate to this day whether this really happens. As a behavioral economist, I observe it routinely, perhaps even daily. In fact, I observe it in myself.

I think the philosophers’ mistake is to presume that there is one simple, well-defined “you” that makes all observations and judgments and takes actions. Our brains are much more complicated than that. There are many “you”s inside your brain, each with its own capacities, desires, and judgments. Yes, there is some important sense in which they are all somehow unified into a single consciousness—by a mechanism which still eludes our understanding. But it doesn’t take esoteric cognitive science to see that there are many minds inside you: Haven’t you ever felt an urge to do something you knew you shouldn’t do? Haven’t you ever succumbed to such an urge—drank the drink, eaten the dessert, bought the shoes, slept with the stranger—when it seemed so enticing but you knew it wasn’t really the right choice?

We even speak of being “of two minds” when we are ambivalent about something, and I think there is literal truth in this. The neural networks in your brain are forming coalitions, and arguing between them over which course of action you ought to take. Eventually one coalition will prevail, and your action will be taken; but afterward your reflective mind need not always agree that the coalition which won the vote was the one that deserved to.

The evolutionary reason for this is simple: We’re a kludge. We weren’t designed from the top down for optimal efficiency. We were the product of hundreds of millions of years of subtle tinkering, adding a bit here, removing a bit there, layering the mammalian, reflective cerebral cortex over the reptilian, emotional limbic system over the ancient, involuntary autonomic system. Combine this with the fact that we are built in pairs, with left and right halves of each kind of brain (and yes, they are independently functional when their connection is severed), and the wonder is that we ever agree with our own decisions.

Thus, there is a kind of hypocrisy that is not a moral indictment at all: You may genuinely and honestly agree that it is morally better to do something and still not be able to bring yourself to do it. You may know full well that it would be better to donate that money to malaria treatment rather than buy yourself that tub of ice cream—you may be on a diet and full well know that the ice cream won’t even benefit you in the long run—and still not be able to stop yourself from buying the ice cream.

Sometimes your feeling of hesitation at an altruistic act may be a useful insight; I certainly don’t think we should feel obliged to give all our income, or even all of our discretionary income, to high-impact charities. (For most people I encourage 5%. I personally try to aim for 10%. If all the middle-class and above in the First World gave even 1% we could definitely end world hunger.) But other times it may lead you astray, make you unable to resist the temptation of a delicious treat or a shiny new toy when even you know the world would be better off if you did otherwise.

Yet when following our own principles is so difficult, it’s not really much of a criticism to point out that someone has failed to do so, particularly when they themselves already recognize that they failed. The inconsistency between behavior and belief indicates that something is wrong, but it may not be any dishonesty or even anything wrong with their beliefs.

I wouldn’t go so far as to say you should stop ever calling out hypocrisy. Sometimes it is clearly useful to do so. But while hypocrisy is often the sign of a moral failing, it isn’t always—and even when it is, often as not the problem is the bad principles, not the behavior inconsistent with them.

Locked donation boxes and moral variation

Aug 8 JDN 2459435

I haven’t been able to find the quote, but I think it was Kahneman who once remarked: “Putting locks on donation boxes shows that you have the correct view of human nature.”

I consider this a deep insight. Allow me to explain.

Some people think that human beings are basically good. Rousseau is commonly associated with this view: the notion that, left to our own devices, human beings would naturally gravitate toward an anarchic but peaceful society.

The question for people who think this needs to be: Why haven’t we? If your answer is “government holds us back”, you still need to explain why we have government. Government was not imposed upon us from On High in time immemorial. We were fairly anarchic (though not especially peaceful) in hunter-gatherer tribes for nearly 200,000 years before we established governments. How did that happen?

And if your answer to that is “a small number of tyrannical psychopaths forced government on everyone else”, you may not be wrong about that—but it already breaks your original theory, because we’ve just shown that human society cannot maintain a peaceful anarchy indefinitely.

Other people think that human beings are basically evil. Hobbes is most commonly associated with this view, that humans are innately greedy, violent, and selfish, and only by the overwhelming force of a government can civilization be maintained.

This view more accurately predicts the level of violence and death that generally accompanies anarchy, and can at least explain why we'd want to establish government—but it still has trouble explaining how we would establish government. It's not as if we're ruled by a single ubermensch with superpowers, or an army of robots created by a mad scientist in a secret underground laboratory. Running a government involves cooperation on an absolutely massive scale—thousands or even millions of unrelated, largely anonymous individuals—and this cooperation is not maintained entirely by force: Yes, there is some force involved, but most of what a government does most of the time is mediated by norms and customs, and if a government did ever try to organize itself entirely by force—not paying any of the workers, not relying on any notion of patriotism or civic duty—it would immediately and catastrophically collapse.

What is the right answer? Humans aren’t basically good or basically evil. Humans are basically varied.

I would even go so far as to say that most human beings are basically good. They follow a moral code, they care about other people, they work hard to support others, they try not to break the rules. Nobody is perfect, and we all make various mistakes. We disagree about what is right and wrong, and sometimes we even engage in actions that we ourselves would recognize as morally wrong. But most people, most of the time, try to do the right thing.

But some people are better than others. There are great humanitarians, and then there are ordinary folks. There are people who are kind and compassionate, and people who are selfish jerks.

And at the very opposite extreme from the great humanitarians are the outright psychopaths. About 5-10% of people have significant psychopathic traits, but only about 1% are full-blown psychopaths.

I believe it is fair to say that psychopaths are in fact basically evil. They are incapable of empathy or compassion. Morality is meaningless to them—they literally cannot distinguish moral rules from other rules. Other people’s suffering—even their very lives—means nothing to them except insofar as it is instrumentally useful. To a psychopath, other people are nothing more than tools, resources to be exploited—or obstacles to be removed.

Some philosophers have argued that this means that psychopaths are incapable of moral responsibility. I think this is wrong. I think it relies on a naive, pre-scientific notion of what “moral responsibility” is supposed to mean—one that was inevitably going to be destroyed once we had a greater understanding of the brain. Do psychopaths understand the consequences of their actions? Yes. Do rewards motivate psychopaths to behave better? Yes. Does the threat of punishment motivate them? Not really, but it was never that effective on anyone else, either. What kind of “moral responsibility” are we still missing? And how would our optimal action change if we decided that they do or don’t have moral responsibility? Would you still imprison them for crimes either way? Maybe it doesn’t matter whether or not it’s really a blegg.

Psychopaths are a small portion of our population, but are responsible for a large proportion of violent crimes. They are also overrepresented in top government positions as well as police officers, and it’s pretty safe to say that nearly every murderous dictator was a psychopath of one shade or another.

The vast majority of people are not psychopaths, and most people don’t even have any significant psychopathic traits. Yet psychopaths have an enormously disproportionate impact on society—nearly all of it harmful. If psychopaths did not exist, Rousseau might be right after all; we wouldn’t need government. If most people were psychopaths, Hobbes would be right; we’d long for the stability and security of government, but we could never actually cooperate enough to create it.

This brings me back to the matter of locked donation boxes.

Having a donation box is only worthwhile if most people are basically good: Asking people to give money freely in order to achieve some good only makes any sense if people are capable of altruism, empathy, cooperation. And it can’t be just a few, because you’d never raise enough money to be useful that way. It doesn’t have to be everyone, or maybe even a majority; but it has to be a large fraction. 90% is more than enough.

But locking things is only worthwhile if some people are basically evil: For a lock to make sense, there must be at least a few people who would be willing to break in and steal the money, even if it was earmarked for a very worthy cause. It doesn’t take a huge fraction of people, but it must be more than a negligible one. 1% to 10% is just about the right sort of range.

Hence, locked donation boxes are a phenomenon that would only exist in a world where most people are basically good—but some people are basically evil.
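
Here's a toy simulation of that logic; every number in it is invented purely for illustration:

```python
import random

def box_net_haul(n_passersby, p_donor, p_thief, donation=5.0, locked=True, seed=None):
    """Simulate one day of foot traffic past a donation box.

    Donors each add a fixed donation; a thief empties the box unless it is locked.
    All the parameters here are invented for illustration.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_passersby):
        r = rng.random()
        if r < p_donor:
            total += donation
        elif r < p_donor + p_thief and not locked:
            total = 0.0  # an unlocked box gets emptied
    return total

# Mostly-good world: 30% of passersby donate, 1% would steal if they could.
print(box_net_haul(500, 0.30, 0.01, locked=True, seed=1))   # collects all the donations
print(box_net_haul(500, 0.30, 0.01, locked=False, seed=1))  # keeps only what came in after the last theft
```

If nobody ever donated, the box wouldn't be worth setting out; if nobody ever stole, the lock would be a pointless expense. The locked donation box earns its keep precisely in the mixed world.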

And this is in fact the world in which we live. It is a world where the Holocaust could happen but then be followed by the founding of the United Nations, a world where nuclear weapons would be invented and used to devastate cities, but then be followed by an era of nearly unprecedented peace. It is a world where governments are necessary to rein in violence, but also a world where governments can function (reasonably well) even in countries with hundreds of millions of people. It is a world with crushing poverty and people who work tirelessly to end it. It is a world where Exxon and BP despoil the planet for riches while WWF and Greenpeace fight back. It is a world where religions unite millions of people under a banner of peace and justice, and then go on crusades to murder thousands of other people who united under a different banner of peace and justice. It is a world of richness, complexity, uncertainty, conflict—variance.

It is not clear how much of this moral variance is innate versus acquired. If we somehow rewound the film of history and started it again with a few minor changes, it is not clear how many of us would end up the same and how many would be far better or far worse than we are. Maybe psychopaths were born the way they are, or maybe they were made that way by culture or trauma or lead poisoning. Maybe with the right upbringing or brain damage, we, too, could be axe murderers. Yet the fact remains—there are axe murderers, but we, and most people, are not like them.

So, are people good, or evil? Was Rousseau right, or Hobbes? Yes. Both. Neither. There is no one human nature; there are many human natures. We are capable of great good and great evil.

When we plan how to run a society, we must make it work the best we can with that in mind: We can assume that most people will be good most of the time—but we know that some people won’t, and we’d better be prepared for them as well.

Set out your donation boxes with confidence. But make sure they are locked.

Escaping the wrong side of the Yerkes-Dodson curve

Jul 25 JDN 2459421

I’ve been under a great deal of stress lately. Somehow I ended up needing to finish my dissertation, get married, and move overseas to start a new job all during the same few months—during a global pandemic.

A little bit of stress is useful, but too much can be very harmful. On complicated tasks (basically anything that involves planning or careful thought), increased stress will increase performance up to a point, and then decrease it after that point. This phenomenon is known as the Yerkes-Dodson law.

The Yerkes-Dodson curve closely resembles the Laffer curve: extremely low tax rates raise little revenue (obviously), and extremely high tax rates also raise very little revenue (because they do so much damage to the economy), so the tax rate that maximizes government revenue is somewhere in the middle, usually estimated to be about 70%.

Instead of a revenue-maximizing tax rate, the Yerkes-Dodson law says that there is a performance-maximizing stress level. You don’t want to have zero stress, because that means you don’t care and won’t put in any effort. But if your stress level gets too high, you lose your ability to focus and your performance suffers.

Since stress (like taxes) comes with a cost, you may not even want to be at the maximum point. Performance isn’t everything; you might be happier choosing a lower level of performance in order to reduce your own stress.

But one thing is certain: You do not want to be to the right of that maximum. Then you are paying the cost of not only increased stress, but also reduced performance.
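
To make the shape of that tradeoff concrete, here's a toy model; the functional forms and numbers are entirely made up, since the Yerkes-Dodson law only says the curve is an inverted U, not what its exact shape is:

```python
import numpy as np

stress = np.linspace(0, 10, 1001)

# Toy inverted-U: performance peaks at a moderate stress level (here, 5).
performance = 25 - (stress - 5) ** 2

# Stress itself is costly, so net wellbeing subtracts a per-unit cost of stress.
wellbeing = performance - 2 * stress

print("Performance-maximizing stress:", stress[np.argmax(performance)])  # 5.0
print("Wellbeing-maximizing stress:  ", stress[np.argmax(wellbeing)])    # 4.0, left of the peak
```

In this toy version the wellbeing optimum sits to the left of the performance peak, and anything to the right of the peak is strictly worse on both counts: more stress and less performance.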

And yet I think many of us spend a great deal of our time on the wrong side of the Yerkes-Dodson curve. I certainly feel like I've been there for quite a while now—most of grad school, really, and definitely this past month when suddenly I found out I'd gotten an offer to work in Edinburgh.

My current circumstances are rather exceptional, but I think the general pattern of being on the wrong side of the Yerkes-Dodson curve is not.

Over 80% of Americans report work-related stress, and the US economy loses about half a trillion dollars a year in costs related to stress.

The World Health Organization lists “work-related stress” as one of its top concerns. Over 70% of people in a cross-section of countries report physical symptoms related to stress, a rate which has significantly increased since before the pandemic.

The pandemic is clearly a contributing factor here, but even without it, there seems to be an awful lot of stress in the world. Even back in 2018, over half of Americans were reporting high levels of stress. Why?

For once, I think it’s actually fair to blame capitalism.

One thing capitalism is exceptionally good at is providing strong incentives for work. This is often a good thing: It means we get a lot of work done, so employment is high, productivity is high, GDP is high. But it comes with some important downsides, and an excessive level of stress is one of them.

But this can’t be the whole story, because if markets were incentivizing us to produce as much as possible, that ought to put us near the maximum of the Yerkes-Dodson curve—but it shouldn’t put us beyond it. Maximizing productivity might not be what makes us happiest—but many of us are currently so stressed that we aren’t even maximizing productivity.

I think the problem is that competition itself is stressful. In a capitalist economy, we aren’t simply incentivized to do things well—we are incentivized to do them better than everyone else. Often quite small differences in performance can lead to large differences in outcome, much like how a few seconds can make the difference between an Olympic gold medal and an Olympic “also ran”.

An optimally productive economy would be one that incentivizes you to perform at whatever level maximizes your own long-term capability. It wouldn’t be based on competition, because competition depends too much on what other people are capable of. If you are not especially talented, competition will cause you great stress as you try to compete with people more talented than you. If you happen to be exceptionally talented, competition won’t provide enough incentive!

Here’s a very simple model for you. Your total performance p is a function of two components, your innate ability a and your effort e. In fact let’s just say it’s a sum of the two: p = a + e

People are randomly assigned their level of capability from some probability distribution, and then they choose their effort. For the very simplest case, let’s just say there are two people, and it turns out that person 1 has less innate ability than person 2, so a1 < a2.

There is also a certain amount of inherent luck in any competition. As it says in Ecclesiastes (by far the best book of the Old Testament), “The race is not to the swift or the battle to the strong, nor does food come to the wise or wealth to the brilliant or favor to the learned; but time and chance happen to them all.” So as usual I’ll model this as a contest function, where your probability of winning depends on your total performance, but it’s not a sure thing.

Let’s assume that the value of winning and cost of effort are the same across different people. (It would be simple to remove this assumption, but it wouldn’t change much in the results.) The value of winning I’ll call V, and I will normalize the cost of effort to 1.


Then this is each person’s expected payoff ui:

ui = (ai + ei)/(a1 + e1 + a2 + e2) V – ei

Each person chooses their effort, not their ability, so maximize with respect to ei: take the derivative and set the marginal benefit of effort equal to its marginal cost of 1. Doing this for both people gives:

(a2 + e2) V = (a1 + e1 + a2 + e2)^2 = (a1 + e1) V

a1 + e1 = a2 + e2

p1 = p2

In equilibrium, both people will produce exactly the same level of performance—but one of them will be contributing more effort to compensate for their lesser innate ability.
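
If you'd rather check this numerically than trust my algebra, here's a quick best-response iteration; the prize value V and the ability levels are arbitrary numbers chosen for illustration:

```python
V = 20.0           # value of winning (arbitrary)
a1, a2 = 1.0, 3.0  # person 1 has less innate ability than person 2

e1, e2 = 1.0, 1.0  # initial guesses for effort

# Iterate best responses. Each person's first-order condition,
# (a_other + e_other) * V = (a1 + e1 + a2 + e2)^2,
# rearranges to: a_i + e_i = sqrt((a_other + e_other) * V) - (a_other + e_other).
for _ in range(100):
    e1 = max(0.0, ((a2 + e2) * V) ** 0.5 - (a2 + e2) - a1)
    e2 = max(0.0, ((a1 + e1) * V) ** 0.5 - (a1 + e1) - a2)

print(f"e1 = {e1:.2f}, e2 = {e2:.2f}")            # 4.00 and 2.00
print(f"p1 = {a1 + e1:.2f}, p2 = {a2 + e2:.2f}")  # 5.00 and 5.00: equal performance
```

With these particular numbers both players end up with the same total performance of 5, but the less able player has to put in twice the effort to get there.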

I’ve definitely had this experience in both directions: Effortlessly acing math tests that I knew other people barely passed despite hours of studying, and running until I could barely breathe to keep up with other people who barely seemed winded. Clearly I had too little incentive in math class and too much in gym class—and competition was obviously the culprit.

If you vary the cost of effort between people, or make it not linear, you can make the two not exactly equal; but the overall pattern will remain that the person who has more ability will put in less effort because they can win anyway.

Yet presumably the amount of effort we want to incentivize isn’t less for those who are more talented. If anything, it may be more: Since an hour of work produces more when done by the more talented person, if the cost to them is the same, then the net benefit of that hour of work is higher than the same hour of work by someone less talented.

In a large population, there are almost certainly many people whose talents are similar to your own—but there are also almost certainly many below you and many above you as well. Unless you are properly matched with those of similar talent, competition will systematically lead to some people being pressured to work too hard and others not pressured enough.

But if we’re all stressed, where are the people not pressured enough? We see them on TV. They are celebrities and athletes and billionaires—people who got lucky enough, either genetically (actors who were born pretty, athletes who were born with more efficient muscles) or environmentally (inherited wealth and prestige), to not have to work as hard as the rest of us in order to succeed. Indeed, we are constantly bombarded with images of these fantastically lucky people, and by the availability heuristic our brains come to assume that they are far more plentiful than they actually are.

This dramatically exacerbates the harms of competition, because we come to feel that we are competing specifically with the people who were handed the world on a silver platter. Born without the innate advantages of beauty or endurance or inheritance, there’s basically no chance we could ever measure up; and thus we feel utterly inadequate unless we are constantly working as hard as we possibly can, trying to catch up in a race in which we always fall further and further behind.

How can we break out of this terrible cycle? Well, we could try to replace capitalism with something like the automated luxury communism of Star Trek; but this seems like a very difficult and long-term solution. Indeed it might well take us a few hundred years as Roddenberry predicted.

In the shorter term, we may not be able to fix the economic problem, but there is much we can do to fix the psychological problem.

By reflecting on the full breadth of human experience, not only here and now, but throughout history and around the world, you can come to realize that you—yes, you, if you’re reading this—are in fact among the relatively fortunate. If you have a roof over your head, food on your table, clean water from your tap, and ibuprofen in your medicine cabinet, you are far more fortunate than the average person in Senegal today; your television, car, computer, and smartphone are things that would be the envy even of kings just a few centuries ago. (Though ironically enough that person in Senegal likely has a smartphone, or at least a cell phone!)

Likewise, you can reflect upon the fact that while you are likely not among the world’s most talented individuals in any particular field, there is probably something you are much better at than most people. (A Fermi estimate suggests I’m probably in the top 250 behavioral economists in the world. That’s probably not enough for a Nobel, but it does seem to be enough to get a job at the University of Edinburgh.) There are certainly many people who are less good at many things than you are, and if you must think of yourself as competing, consider that you’re also competing with them.

Yet perhaps the best psychological solution is to learn not to think of yourself as competing at all. So much as you can afford to do so, try to live your life as if you were already living in a world that rewards you for making the best of your own capabilities. Try to live your life doing what you really think is the best use of your time—not what your corporate overlords think. Yes, of course, we must do what we need to in order to survive, and not just survive, but indeed remain physically and mentally healthy—but this requires far less than most First World people realize. Though many may try to threaten you with homelessness or even starvation in order to exploit you and make you work harder, the truth is that very few people in First World countries actually end up that way (it could be brought to zero, if our public policy were better), and you’re not likely to be among them. “Starving artists” are typically a good deal happier than the general population—because they’re not actually starving, they’ve just removed themselves from the soul-crushing treadmill of trying to impress the neighbors with manicured lawns and fancy SUVs.