Mind reading is not optional

Nov 20 JDN 2459904

I have great respect for cognitive-behavioral therapy (CBT), and it has done a lot of good for me. (It is also astonishingly cost-effective; its QALYs per dollar compare favorably to almost any other First World treatment, and lose only to treating high-impact Third World diseases like malaria and schistosomiasis.)

But there are certain aspects of it that have always been frustrating to me. Standard CBT techniques often present as ‘cognitive distortions’ what are in fact clearly necessary heuristics without which it would be impossible to function.

Perhaps the worst of these is so-called ‘mind reading’. The very phrasing of it makes it sound ridiculous: Are you suggesting that you have some kind of extrasensory perception? Are you claiming to be a telepath?

But in fact ‘mind reading’ is simply the use of internal cognitive models to forecast the thoughts, behaviors, and expectations of other human beings. And without it, it would be completely impossible to function in human society.

For instance, I have had therapists tell me that it is ‘mind reading’ for me to anticipate that people will have tacit expectations for my behavior that they will judge me for failing to meet, and I should simply wait for people to express their expectations rather than assuming them. I admit, life would be much easier if I could do that. But I know for a fact that I can’t. Indeed, I used to do that, as a child, and it got me in trouble all the time. People were continually upset at me for not doing things they had expected me to do but never bothered to actually mention. They thought these expectations were “obvious”; they were not, at least not to me.

It was often little things, and in hindsight some of these things seem silly: I didn’t know what a ‘made bed’ was supposed to look like, so I put it in a state that was functional for me, but that was not considered ‘making the bed’. (I have since learned that my way was actually better: It’s good to let sheets air out before re-using them.) I was asked to ‘clear the sink’, so I moved the dishes out of the sink and left them on the counter, not realizing that the implicit command was for me to wash those dishes, dry them, and put them away. I was asked to ‘bring the dinner plates to the table’, so I did that, and left them in a stack there, not realizing that I should be setting them out in front of each person’s chair and also bringing flatware. Of course I know better now. But how was I supposed to know then? It seems like I was expected to, though.

Most people just really don’t seem to realize how many subtle, tacit expectations are baked into every single task. I think neurodivergence is quite relevant here; I have a mild autism spectrum disorder, and so I think rather differently than most people. If you are neurotypical, then you probably can forecast other people’s expectations fairly well automatically, and so they may seem obvious to you. In fact, they may seem so obvious that you don’t even realize you’re doing it. Then when someone like me comes along and is consciously, actively trying to forecast other people’s expectations, and sometimes doing it poorly, you go and tell them to stop trying to forecast. But if they were to do that, they’d end up even worse off than they are. What you really need to be telling them is how to forecast better—but that would require insight into your own forecasting methods which you aren’t even consciously aware of.

Seriously, stop and think for a moment about all of the things other people expect you to do every day that are rarely if ever explicitly stated. How you are supposed to dress, how you are supposed to speak, how close you are supposed to stand to other people, how long you are supposed to hold eye contact—all of these are standards you will be expected to meet, whether or not any of them have ever been explicitly explained to you. You may do this automatically; or you may learn to do it consciously after being criticized for failing to do it. But one way or another, you must forecast what other people will expect you to do.

To my knowledge, no one has ever explicitly told me not to wear a Starfleet uniform to work. I am not aware of any part of the university dress code that explicitly forbids such attire. But I’m fairly sure it would not be a good idea. To my knowledge, no one has ever explicitly told me not to burst out into song in the middle of a meeting. But I’m still pretty sure I shouldn’t do that. To my knowledge, no one has ever explicitly told me what the ‘right of way’ rules are for walking down a crowded sidewalk, who should be expected to move out of the way of whom. But people still get mad if you mess up and bump into them.

Even when norms are stated explicitly, it is often as a kind of last resort, and the mere fact that you needed to have a norm stated is often taken as a mark against your character. I have been explicitly told in various contexts not to talk to myself or engage in stimming leg movements; but the way I was told has generally suggested that I would have been judged better if I hadn’t had to be told, if I had simply known the way that other people seem to know. (Or is it that they never felt any particular desire to stim?)

In fact, I think a major part of developing social skills and becoming more functional, to the point where a lot of people actually now seem a bit surprised to learn I have an autism spectrum disorder, has been improving my ability to forecast other people’s expectations for my behavior. There are dozens if not hundreds of norms that people expect you to follow at any given moment; most people seem to intuit them so easily that they don’t even realize they are there. But they are there all the same, and this is painfully evident to those of us who aren’t always able to immediately intuit them all.

Now, the fact remains that my current mental models are surely imperfect. I am often wrong about what other people expect of me. I’m even prepared to believe that some of my anxiety comes from believing that people have expectations more demanding than what they actually have. But I can’t simply abandon the idea of forecasting other people’s expectations. Don’t tell me to stop doing it; tell me how to do it better.

Moreover, there is a clear asymmetry here: If you think people want more from you than they actually do, you’ll be anxious, but people will like you and be impressed by you. If you think people want less from you than they actually do, people will be upset at you and look down on you. So, in the presence of uncertainty, there’s a lot of pressure to assume that the expectations are high. It would be best to get it right, of course; but when you aren’t sure you can get it right, you’re often better off erring on the side of caution—which is to say, the side of anxiety.

In short, mind reading isn’t optional. If you think it is, that’s only because you do it automatically.

On the Overton Window

Jul 24 JDN 2459786

As you are no doubt aware, a lot of people on the Internet like to loudly proclaim support for really crazy, extreme ideas. Some of these people actually believe in those ideas, and if you challenge them, will do their best to defend them. Those people are wrong at the level of substantive policy, but there’s nothing wrong with their general approach: If you really think that anarchism or communism is a good thing, it only makes sense that you’d try to convince other people. You might have a hard time of it (in part because you are clearly wrong), but it makes sense that you’d try.

But there is another class of people who argue for crazy, extreme ideas. When pressed, they will admit they don’t really believe in abolishing the police or collectivizing all wealth, but they believe in something else that’s sort of vaguely in that direction, and they think that advocating for the extreme idea will make people more likely to accept what they actually want.

They often refer to this as “shifting the Overton Window”. As Matt Yglesias explained quite well a year ago, this is not actually what Overton was talking about.

But, in principle, it could still be a thing that works. There is a cognitive bias known as anchoring which is often used in marketing: If I only offered a $5 bottle of wine and a $20 bottle of wine, you might think the $20 bottle is too expensive. But if I also include a $50 bottle, that makes you adjust your perceptions of what constitutes a “reasonable” price for wine, and may make you more likely to buy the $20 bottle after all.

It could be, therefore, that an extreme policy demand makes people more willing to accept moderate views, as a sort of compromise. Maybe demanding the abolition of police is a way of making other kinds of police reform seem more reasonable. Maybe showing pictures of Marx and chanting “eat the rich” could make people more willing to accept higher capital gains taxes. Maybe declaring that we are on the verge of apocalyptic climate disaster will make people more willing to accept tighter regulations on carbon emissions and subsidies for solar energy.

Then again—does it actually seem to do that? I see very little evidence that it does. All those demands for police abolition haven’t changed the fact that defunding the police is unpopular. Raising taxes on the rich is popular, but it has been for a while now (and never was with, well, the rich). And decades of constantly shouting about imminent climate catastrophe are really starting to look like crying wolf.

To see why this strategy seems to be failing, I think it’s helpful to consider how it feels from the other side. Take a look at some issues where someone else is trying to get you to accept a particular view, and consider whether someone advocating a more extreme view would make you more likely to compromise.

Your particular opinions may vary, but here are some examples that would apply to me, and, I suspect, many of you.

If someone says they want tighter border security, I’m skeptical—it’s pretty tight already. But in and of itself, this would not be such a crazy idea. Certainly I agree that it is possible to have too little border security, and so maybe that turns out to be the state we’re in.

But then, suppose that same person, or someone closely allied to them, starts demanding the immediate deportation of everyone who was not born in the United States, even those who immigrated legally and are naturalized or here on green cards. This is a crazy, extreme idea that’s further in the same direction, so on this anchoring theory, it should make me more willing to accept the idea of tighter border security. And yet, I can say with some confidence that it has no such effect.

Indeed, if anything I think it would make me less likely to accept tighter border security, in proportion to how closely aligned those two arguments are. If they are coming from the same person, or the same political party, it would cause me to suspect that the crazy, extreme policy is the true objective, and the milder, compromise policy is just a means toward that end. It also suggests certain beliefs and attitudes about immigration in general—xenophobia, racism, ultranationalism—that I oppose even more strongly. If you’re talking about deporting all immigrants, you make me suspect that your reasons for wanting tighter border security are not good ones.

Let’s try another example. Suppose someone wants to cut taxes on upper income brackets. In our current state, I think that would be a bad idea. But there was a time not so long ago when I would have agreed with it: Even I have to admit that a top bracket of 94% (as we had in 1943) sounds a little ridiculous, and is surely on the wrong side of the Laffer curve. So the basic idea of cutting top tax rates is not inherently crazy or ridiculous.

Now, suppose that same idea came from the same person, or the same party, or the same political movement, as one that was arguing for the total abolition of all taxation. This is a crazy, extreme idea; it would amount to either total anarcho-capitalism with no government at all, or some sort of bizarre system where the government is funded entirely through voluntary contributions. I think it’s pretty obvious that such a system would be terrible, if not outright impossible; and anyone whose understanding of political economy is sufficiently poor that they would fail to see this is someone whose overall judgment on questions of policy I must consider dubious. Once again, the presence of the extreme view does nothing to make me want to consider the moderate view, and may even make me less willing to do so.

Perhaps I am an unusually rational person, not so greatly affected by anchoring biases? Perhaps. But whereas I do feel briefly tempted to buy the $20 wine bottle because of the $50 wine bottle, and must correct myself with what I know about anchoring bias, the presentation of an extreme political view never even makes me feel any temptation to accept some kind of compromise with it. Learning that someone supports something crazy or ridiculous—or is willing to say they do, even if deep down they don’t—makes me automatically lower my assessment of their overall credibility. If anything, I think I am tempted to overreact in that direction, and have to remind myself of the Stopped Clock Principle: reversed stupidity is not intelligence, and someone can have both bad ideas and good ones.

Moreover, the empirical data, while sketchy, doesn’t seem to support this either; where the Overton Window (in the originally intended sense) has shifted, as on LGBT rights, it was because people convincingly argued that the “extreme” position was in fact an entirely reasonable and correct view. There was a time not so long ago that same-sex marriage was deemed unthinkable, and the “moderate” view was merely decriminalizing sodomy; but we demanded, and got, same-sex marriage, not as a strategy to compromise on decriminalizing sodomy, but because we actually wanted same-sex marriage and had good arguments for it. I highly doubt we would have been any more successful if we had demanded something ridiculous and extreme, like banning opposite-sex marriage.

The resulting conclusion seems obvious and banal: Only argue for things you actually believe in.

Yet, somehow, that seems to be a controversial view these days.

Why do poor people dislike inflation?

Jun 5 JDN 2459736

The United States and United Kingdom are both very unaccustomed to inflation. Neither has seen double-digit inflation since the 1980s.

Here’s US inflation since 1990:

And here is the same graph for the UK:

While a return to double-digits remains possible, at this point it likely won’t happen, and if it does, it will occur only briefly.

This is no doubt a major reason why the dollar and the pound are widely used as reserve currencies (especially the dollar), and is likely due to the fact that they are managed by the world’s most competent central banks. Brexit would almost have made sense if the UK had been pressured to join the Euro; but they weren’t, because everyone knew the pound was better managed.

The Euro also doesn’t have much inflation, but if anything they err on the side of too low, mainly because Germany appears to believe that inflation is literally Hitler. In fact, the rise of the Nazis didn’t have much to do with the Weimar hyperinflation. The Great Depression was by far a greater factor—unemployment is much, much worse than inflation. (By the way, it’s weird that you can put that graph back to the 1980s. It, uh, wasn’t the Euro then. Euros didn’t start circulating until 1999. Is that an aggregate of the franc and the deutsche mark and whatever else? The Euro itself has never had double-digit inflation—ever.)

But it’s always a little surreal for me to see how panicked people in the US and UK get when our inflation rises a couple of percentage points. There seems to be an entire subgenre of economics news that basically consists of rich people saying the sky is falling because inflation has risen—or will, or may rise—by two points. (Hey, anybody got any ideas how we can get them to panic like this over rises in sea level or aggregate temperature?)

Compare this to some other countries that have real inflation: In Brazil, 10% inflation is a pretty typical year. In Argentina, 10% is a really good year—they’re currently pushing 60%. Kenya’s inflation is pretty well under control now, but it went over 30% during the crisis in 2008. Botswana was doing a nice job of bringing down their inflation until the COVID pandemic threw them out of whack, and now they’re hitting double-digits too. And of course there’s always Zimbabwe, which seemed to look at Weimar Germany and think, “We can beat that.” (80,000,000,000% in one month!? Any time you find yourself talking about billions of percent, something has gone terribly, terribly wrong.)

Hyperinflation is a real problem—it isn’t what put Hitler into power, but it has led to real crises in Germany, Zimbabwe, and elsewhere. Once you start getting over 100% per year, and especially when it starts rapidly accelerating, that’s a genuine crisis. Moreover, even though they clearly don’t constitute hyperinflation, I can see why people might legitimately worry about price increases of 20% or 30% per year. (Let alone 60% like Argentina is dealing with right now.) But why is going from 2% to 6% any cause for alarm? Yet alarmed we seem to be.

I can even understand why rich people would be upset about inflation (though the magnitude of their concern does still seem disproportionate). Inflation erodes the value of financial assets, because most bonds, options, etc. are denominated in nominal, not inflation-adjusted terms. (Though there are such things as inflation-indexed bonds.) So high inflation can in fact make rich people slightly less rich.

But why in the world are so many poor people upset about inflation?

Inflation doesn’t just erode the value of financial assets; it also erodes the value of financial debts. And most poor people have more debts than they have assets—indeed, it’s not uncommon for poor people to have substantial debt and no financial assets to speak of (what little wealth they have being non-financial, e.g. a car or a home). Thus, their net wealth position improves as prices rise.

The interest rate response can compensate for this to some extent, but most people’s debts are fixed-rate. Moreover, if it’s the higher interest rates you’re worried about, you should want the Federal Reserve and the Bank of England not to fight inflation too hard, because the way they fight it is chiefly by raising interest rates.

In surveys, almost everyone thinks that inflation is very bad: 92% think that controlling inflation should be a high priority, and 90% think that if inflation gets too high, something very bad will happen. This is greater agreement among Americans than is found for statements like “I like apple pie” or “kittens are nice”, and comparable to “fair elections are important”!

I admit, I question the survey design here: I would answer ‘yes’ to both questions if we’re talking about a theoretical 10,000% hyperinflation, but ‘no’ if we’re talking about a realistic 10% inflation. So I would like to see, but could not find, a survey asking people what level of inflation is sufficient cause for concern. But since most of these people seemed concerned about actual, realistic inflation (85% reported anger at seeing actual, higher prices), it still suggests that many people feel strongly that even mild inflation is bad.

So it does seem to be the case that a lot of poor and middle-class people really strongly dislike inflation even in the actual, mild levels in which it occurs in the US and UK.

The main fear seems to be that inflation will erode people’s purchasing power—that as the price of gasoline and groceries rise, people won’t be able to eat as well or drive as much. And that, indeed, would be a real loss of utility worth worrying about.

But in fact this makes very little sense: Most forms of income—particularly labor income, which is the only real income for some 80%-90% of the population—actually increase with inflation, more or less one-to-one. Yes, there’s some delay—you won’t get your annual cost-of-living raise immediately, but several months down the road. But this could have at most a small effect on your real consumption.

To see this, suppose that inflation has risen from 2% to 6%. (Really, you need not suppose; it has.) Now consider your cost-of-living raise, which nearly everyone gets. It will presumably rise the same way: So if it was 3% before, it will now be 7%. Now consider how much your purchasing power is affected over the course of the year.

For concreteness, let’s say your initial income was $3,000 per month at the start of the year (a fairly typical amount for a middle-class American, indeed almost exactly the median personal income). Let’s compare the case of no inflation with a 1% raise, 2% inflation with a 3% raise, and 6% inflation with a 7% raise.

If there was no inflation, your real income would remain simply $3,000 per month, until the end of the year when it would become $3,030 per month. That’s the baseline to compare against.

If inflation is 2%, your real income would gradually fall, by about 0.16% per month, before being bumped up 3% at the end of the year. So in January you’d have $3,000, in February $2,995, in March $2,990. Come December, your real income has fallen to $2,941. But then next January it will immediately be bumped up 3% to $3,029, almost the same as it would have been with no inflation at all. The total lost income over the entire year is about $380, or about 1% of your total income.

If inflation instead rises to 6%, your real income will fall by 0.49% per month, reaching a minimum of $2,830 in December before being bumped back up to $3,028 next January. Your total loss for the whole year will be about $1110, or about 3% of your total income.

Indeed, it’s a pretty good heuristic to say that for an inflation rate of x% with annual cost-of-living raises, your loss of real income relative to having no inflation at all is about (x/2)%. (This breaks down for really high levels of inflation, at which point it becomes a wild over-estimate, since even 200% inflation doesn’t make your real income go to zero.)
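
If you want to check this arithmetic yourself, here is a minimal Python sketch of my own (not part of the original argument): it erodes a $3,000 monthly paycheck at a constant annual inflation rate and sums the losses against a zero-inflation baseline. Because it assumes smooth monthly compounding, the figures land near, rather than exactly on, the numbers above.

MONTHLY_INCOME = 3000  # starting real income per month

def annual_real_income_loss(annual_inflation, months=12):
    # Total real income lost over one year, relative to a zero-inflation
    # baseline, when the nominal paycheck stays flat until the year-end raise.
    loss = 0.0
    for m in range(1, months + 1):
        deflator = (1 + annual_inflation) ** (m / 12)  # price level after m months
        loss += MONTHLY_INCOME * (1 - 1 / deflator)
    return loss

for pi in (0.02, 0.06):
    loss = annual_real_income_loss(pi)
    share = loss / (MONTHLY_INCOME * 12)
    print(f"{pi:.0%} inflation: lose about ${loss:,.0f} ({share:.1%} of income; the (x/2)% heuristic says {pi / 2:.1%})")

# Output: roughly $383 (1.1%) at 2% inflation and $1,113 (3.1%) at 6%,
# close to the ~$380 and ~$1,110 figures above, and close to the heuristic.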

This isn’t nothing, of course. You’d feel it. Going from 2% to 6% inflation at an income of $3000 per month is like losing $700 over the course of a year, which could be a month of groceries for a family of four. (Not that anyone can really raise a family of four on a single middle-class income these days. When did The Simpsons begin to seem aspirational?)

But this isn’t the whole story. Suppose that this same family of four had a mortgage payment of $1000 per month; that is also decreasing in real value by the same proportion. And let’s assume it’s a fixed-rate mortgage, as most are, so we don’t have to factor in any changes in interest rates.

With no inflation, their mortgage payment remains $1000. It’s 33.3% of their income this year, and it will be 33.0% of their income next year after they get that 1% raise.

With 2% inflation, their mortgage payment will also fall by 0.16% per month; $998 in February, $996 in March, and so on, down to $980 in December. This amounts to an increase in real income of about $130—taking away a third of the loss that was introduced by the inflation.

With 6% inflation, their mortgage payment will also fall by 0.49% per month; $995 in February, $990 in March, and so on, until it’s only $943 in December. This amounts to an increase in real income of over $370—again taking away a third of the loss.

Indeed, it’s no coincidence that it’s one third; the proportion of lost real income you’ll get back by cheaper mortgage payments is precisely the proportion of your income that was spent on mortgage payments at the start—so if, like too many Americans, they are paying more than a third of their income on mortgage, their real loss of income from inflation will be even lower.
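
And here is the companion sketch for the mortgage side, under the same smooth-compounding assumption (again my own illustration): a fixed $1,000 nominal payment loses real value at the same rate the paycheck does, so the real savings offset the loss in proportion to the payment’s share of income (one third here).

MONTHLY_INCOME = 3000
MONTHLY_MORTGAGE = 1000  # fixed nominal payment, one third of income

def real_change_over_year(nominal_monthly, annual_inflation, months=12):
    # Cumulative real-dollar change over one year, versus zero inflation,
    # for a payment or paycheck whose nominal value stays fixed all year.
    return sum(
        nominal_monthly * (1 - (1 + annual_inflation) ** (-m / 12))
        for m in range(1, months + 1)
    )

for pi in (0.02, 0.06):
    income_loss = real_change_over_year(MONTHLY_INCOME, pi)
    mortgage_savings = real_change_over_year(MONTHLY_MORTGAGE, pi)
    print(f"{pi:.0%}: lose about ${income_loss:,.0f} in pay, "
          f"save about ${mortgage_savings:,.0f} on the mortgage "
          f"({mortgage_savings / income_loss:.0%} of the loss)")

# Output: savings of about $128 and $371, exactly one third of the
# $383 and $1,113 losses, matching the ~$130 and ~$370 figures above.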

And what if they are renting instead? They’re probably on an annual lease, so that payment won’t increase in nominal terms either—and hence will decrease in real terms, in just the same way as a mortgage payment. Likewise car payments, credit card payments, any debt that has a fixed interest rate. If they’re still paying back student loans, their financial situation is almost certainly improved by inflation.

This means that the real loss from an increase of inflation from 2% to 6% is something like 1.5% of total income, or about $500 for a typical American adult. That’s clearly not nearly as bad as a similar increase in unemployment, which would translate one-to-one into lost income on average; moreover, this loss would be concentrated among people who lost their jobs, so it’s actually worse than that once you account for risk aversion. It’s clearly better to lose 1% of your income than to have a 1% chance of losing nearly all your income—and inflation is the former while unemployment is the latter.
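
To make that risk-aversion point concrete, here is a toy comparison using logarithmic utility (an assumption chosen purely for illustration): a certain 1% income loss versus a 1% chance of losing 90% of your income, which have roughly the same expected dollar cost.

from math import log

INCOME = 36_000  # annual income, as in the running example

# Certain 1% loss (the inflation-like case):
certain = log(0.99 * INCOME)

# 1% chance of losing 90% of income (the unemployment-like case):
risky = 0.99 * log(INCOME) + 0.01 * log(0.10 * INCOME)

print(round(certain, 3), round(risky, 3))
# ~10.481 vs ~10.468: the certain small loss gives higher expected utility,
# so the same average loss hurts more when it is concentrated on a few people.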

Indeed, the only reason you lost purchasing power at all was that your cost-of-living increases didn’t occur often enough. If instead you had a labor contract that instituted cost-of-living raises every month, or even every paycheck, instead of every year, you would get all the benefits of a cheaper mortgage and virtually none of the costs of a weaker paycheck. Convince your employer to make this adjustment, and you will actually benefit from higher inflation.

So if poor and middle-class people are upset about eroding purchasing power, they should be mad at their employers for not implementing more frequent cost-of-living adjustments; the inflation itself really isn’t the problem.

Commitment and sophistication

Mar 13 JDN 2459652

One of the central insights of cognitive and behavioral economics is that understanding the limitations of our own rationality can help us devise mechanisms to overcome those limitations—that knowing we are not perfectly rational can make us more rational. The usual term for this is a somewhat vague one: behavioral economists generally call it simply sophistication.

For example, suppose that you are short-sighted and tend to underestimate the importance of the distant future. (This is true of most of us, to a greater or lesser extent.)

It’s rational to consider the distant future less important than the present—things change in the meantime, and if we go far enough you may not even be around to see it. In fact, rationality alone doesn’t even say how much you should discount any given distance in the future. But most of us are inconsistent about our attitudes toward the future: We exhibit dynamic inconsistency.

For instance, suppose I ask you today whether you would like $100 today or $102 tomorrow. It is likely you’ll choose $100 today. But if I ask you whether you would like $100 365 days from now or $102 366 days from now, you’ll almost certainly choose the $102.


This means that if I asked you the second question first, then waited a year and asked you the first question, you’d change your mind—that’s inconsistent. Whichever choice is better shouldn’t systematically change over time. (It might happen to change, if your circumstances changed in some unexpected way. But on average it shouldn’t change.) Indeed, waiting a day for an extra $2 is typically going to be worth it; 2% daily interest is pretty hard to beat.
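
One standard way behavioral economists formalize this kind of preference reversal is quasi-hyperbolic (‘beta-delta’) discounting, in which everything that is not ‘right now’ gets an extra across-the-board discount. Here is a minimal sketch; the particular values of beta and delta are illustrative assumptions, not estimates.

BETA = 0.7      # extra discount on anything that isn't immediate (present bias)
DELTA = 0.9997  # ordinary per-day exponential discount factor

def present_value(amount, days_from_now, beta=BETA, delta=DELTA):
    # Discounted value today of receiving `amount` after `days_from_now` days.
    if days_from_now == 0:
        return amount
    return beta * (delta ** days_from_now) * amount

# Asked today: $100 now vs. $102 tomorrow
print(present_value(100, 0), present_value(102, 1))      # ~100.0 vs ~71.4: take the $100

# Asked today about next year: $100 in 365 days vs. $102 in 366 days
print(present_value(100, 365), present_value(102, 366))  # ~62.7 vs ~64.0: wait for the $102

The reversal happens because the extra beta penalty applies equally to both options when both are far away, so only the ordinary delta matters between them; but it penalizes only the later option when one of the two choices is ‘today’.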

Now, suppose you have some option to make a commitment, something that will bind you to your earlier decision. It could be some sort of punishment for deviating from your earlier choice, some sort of reward for keeping to the path, or, in the most extreme example, a mechanism that simply won’t let you change your mind. (The literally classic example of this is Odysseus having his crew tie him to the mast so he can listen to the Sirens.)

If you didn’t know that your behavior was inconsistent, you’d never want to make such a commitment. You don’t expect to change your mind, and if you do change your mind, it would be because your circumstances changed in some unexpected way—in which case changing your mind would be the right thing to do. And if your behavior wasn’t inconsistent, this reasoning would be quite correct: No point in committing when you have less information.

But if you know that your behavior is inconsistent, you can sometimes improve the outcome for yourself by making a commitment. You can force your own behavior into consistency, even though you will later be tempted to deviate from your plan.

Yet there is a piece missing from this account, often not clearly enough stated: Why should we trust the version of you that has a year to plan over the version of you that is making the decision today? What’s the difference between those two versions of you that makes them inconsistent, and why is one more trustworthy than the other?

The biggest difference is emotional. You don’t really feel $100 a year from now, so you can do the math and see that 2% daily interest is pretty darn good. But $100 today makes you feel something—excitement over what you might buy, or relief over a bill you can now pay. (Actually that’s one of the few times when it would be rational to take $100 today: If otherwise you’re going to miss a deadline and pay a late fee.) And that feeling about $102 tomorrow just isn’t as strong.

We tend to think that our emotional selves and our rational selves are in conflict, and so we expect to be more rational when we are less emotional. There is some truth to this—strong emotions can cloud our judgments and make us behave rashly.

Yet this is only one side of the story. We also need emotions to be rational. There is a condition known as flat affect, often a symptom of various neurological disorders, in which emotional reactions are greatly blunted or even non-existent. People with flat affect aren’t more rational—they just do less. In the worst cases, they completely lose their ability to be motivated to do things and become outright inert, a condition known as abulia.

Emotional judgments are often less accurate than thoughtfully reasoned arguments, but they are also much faster—and that’s why we have them. In many contexts, particularly when survival is at stake, doing something pretty well right away is often far better than waiting long enough to be sure you’ll get the right answer. Running away from a loud sound that turns out to be nothing is a lot better than waiting to carefully determine whether that sound was really a tiger—and finding that it was.

With this in mind, the cases where we should expect commitment to be effective are those that are unfamiliar, not only on an individual level, but in an evolutionary sense. I have no doubt that experienced stock traders can develop certain intuitions that make them better at understanding financial markets than randomly chosen people—but they still systematically underperform simple mathematical models, likely because finance is just so weird from an evolutionary perspective. So when deciding whether to accept some amount of money m1 at time t1 or some other amount of money m2 at time t2, your best bet is really to just do the math.

But this may not be the case for many other types of decisions. Sometimes how you feel in the moment really is the right signal to follow. Committing to work at your job every day may seem responsible, ethical, rational—but if you hate your job when you’re actually doing it, maybe it really isn’t how you should be spending your life. Buying a long-term gym membership to pressure yourself to exercise may seem like a good idea, but if you’re miserable every time you actually go to the gym, maybe you really need to be finding a better way to integrate exercise into your lifestyle.

There are no easy answers here. We can think of ourselves as really being made of two (if not more) individuals: A cold, calculating planner who looks far into the future, and a heated, emotional experiencer who lives in the moment. There’s a tendency to assume that the planner is our “true self”, the one we should always listen to, but this is wrong; we are both of those people, and a life well-lived requires finding the right balance between their conflicting desires.

Against “doing your best”

Oct 3 JDN 2459491

It’s an appealing sentiment: Since we all have different skill levels, rather than be held to some constant standard which may be easy for some but hard for others, we should each do our best. This will ensure that we achieve the best possible outcome.

Yet it turns out that this advice is not so easy to follow: What is “your best”?

Is your best the theoretical ideal of what your performance could be if all obstacles were removed and you worked at your greatest possible potential? Then no one in history has ever done their best, and when people get close, they usually end up winning Nobel Prizes.

Is your best the performance you could attain if you pushed yourself to your limit, ignored all pain and fatigue, and forced yourself to work at maximum effort until you literally can’t anymore? Then doing your best doesn’t sound like such a great thing anymore—and you’re certainly not going to be able to do it all the time.

Is your best the performance you would attain by continuing to work at your usual level of effort? Then how is that “your best”? Is it the best you could attain if you work at a level of effort that is considered standard or normative? Is it the best you could do under some constraint limiting the amount of pain or fatigue you are willing to bear? If so, what constraint?

How does “your best” change under different circumstances? Does it become less demanding when you are sick, or when you have a migraine? What if you’re depressed? What if you’re simply not feeling motivated? What if you can’t tell whether this demotivation is a special circumstance, a depression symptom, a random fluctuation, or a failure to motivate yourself?

There’s another problem: Sometimes you really aren’t good at something.

A certain fraction of performance in most tasks is attributable to something we might call “innate talent”; be it truly genetic or fixed by your early environment, it nevertheless is something that as an adult you are basically powerless to change. Yes, you could always train and practice more, and your performance would thereby improve. But it can only improve so much; you are constrained by your innate talent or lack thereof. No amount of training effort will ever allow me to reach the basketball performance of Michael Jordan, the painting skill of Leonardo Da Vinci, or the mathematical insight of Leonhard Euler. (Of the three, only the third is even visible from my current horizon. As someone with considerable talent and training in mathematics, I can at least imagine what it would be like to be as good as Euler—though I surely never will be. I can do most of the mathematical methods that Euler was famous for; but could I have invented them?)

In fact it’s worse than this; there are levels of performance that would be theoretically possible for someone of your level of talent, yet would be so costly to obtain as to be clearly not worth it. Maybe, after all, there is some way I could become as good a mathematician as Euler—but if it would require me to work 16-hour days doing nothing but studying mathematics for the rest of my life, I am quite unwilling to do so.

With this in mind, what would it mean for me to “do my best” in mathematics? To commit those 16-hour days for the next 30 years and win my Fields Medal—if it doesn’t kill me first? If that’s not what we mean by “my best”, then what do we mean, after all?

Perhaps we should simply abandon the concept, and ask instead what successful people actually do.

This will of course depend on what they were successful at; the behavior of basketball superstars is considerably different from the behavior of Nobel Laureate physicists, which is in turn considerably different from the behavior of billionaire CEOs. But in theory we could each decide for ourselves which kind of success we actually would desire to emulate.

Another pitfall to avoid is looking only at superstars and not comparing them with a suitable control group. Every Nobel Laureate physicist eats food and breathes oxygen, but eating food and breathing oxygen will not automatically give you good odds of winning a Nobel (though I guess your odds are in fact a lot better relative to not doing them!). It is likely that many of the things we observe successful people doing—even less trivial things, like working hard and taking big risks—are in fact the sort of thing that a great many people do with far less success.

Upon making such a comparison, one of the first things that we would notice is that the vast majority of highly-successful people were born with a great deal of privilege. Most of them were born rich or at least upper-middle-class; nearly all of them were born healthy without major disabilities. Yes, there are exceptions to any particular form of privilege, and even particularly exceptional individuals who attained superstar status with more headwinds than tailwinds; but the overwhelming pattern is that people who hit home runs in life tend to be people who started the game on third base.

But setting that aside, or recalibrating one’s expectations to try to attain a level of success often achieved by people with roughly the same level of privilege as oneself, we must ask: How often? Should you aspire to the median? The top 20%? The top 10%? The top 1%? And what is your proper comparison group? Should I be comparing against Americans, White male Americans, economists, queer economists, people with depression and chronic migraines, or White/Native American male queer economists with depression and chronic migraines who are American expatriates in Scotland? Make the criteria too narrow, and there won’t be many left in your sample. Make them instead too broad, and you’ll include people with very different circumstances who may not be a fair comparison. Perhaps some sort of weighted average of different groups could work—but with what weighting?

Or maybe it’s right to compare against a very broad group, since this is what ultimately decides our life prospects. What it would take to write the best novel you (or someone “like you” in whatever sense that means) can write may not be the relevant question: What you really needed to know was how likely it is that you could make a living as a novelist.


The depressing truth in such a broad comparison is that you may in fact find yourself faced with so many obstacles that there is no realistic path toward the level of success you were hoping for. If you are reading this, I doubt matters are so dire for you that you’re at serious risk of being homeless and starving—but there definitely are people in this world, millions of people, for whom that is not simply a risk but very likely the best they can hope for.

The question I think we are really trying to ask is this: What is the right standard to hold ourselves against?

Unfortunately, I don’t have a clear answer to this question. I have always been an extremely ambitious individual, and I have inclined toward comparisons with the whole world, or with the superstars of my own fields. It is perhaps not surprising, then, that I have consistently failed to live up to my own expectations for my own achievement—even as I surpass what many others expected for me, and have long since left behind what most people expect for themselves and each other.

I would thus not exactly recommend my own standards. Yet I also can’t quite bear to abandon them, out of a deep-seated fear that it is only by holding myself to the patently unreasonable standard of trying to be the next Einstein or Schrodinger or Keynes or Nash that I have even managed what meager achievements I have made thus far.

Of course this could be entirely wrong: Perhaps I’d have achieved just as much if I held myself to a lower standard—or I could even have achieved more, by avoiding the pain and stress of continually failing to achieve such unattainable heights. But I also can’t rule out the possibility that it is true. I have no control group.

In general, what I think I want to say is this: Don’t try to do your best. You have no idea what your best is. Instead, try to find the highest standard you can consistently meet.

Hypocrisy is underrated

Sep 12 JDN 2459470

Hypocrisy isn’t a good thing, but it isn’t nearly as bad as most people seem to think. Often accusing someone of hypocrisy is taken as a knock-down argument for everything they are saying, and this is just utterly wrong. Someone can be a hypocrite and still be mostly right.

Often people are accused of hypocrisy when they are not being hypocritical; for instance the right wing seems to think that “They want higher taxes on the rich, but they are rich!” is hypocrisy, when in fact it’s simply altruism. (If they had wanted the rich guillotined, that would be hypocrisy. Maybe the problem is that the right wing can’t tell the difference?) Even worse, “They live under capitalism but they want to overthrow capitalism!” is not even close to hypocrisy—after all, how would someone overthrow a system they weren’t living under? (There are many things wrong with Marxists, but that is not one of them.)

But in fact I intend something stronger: Hypocrisy itself just isn’t that bad.


There are currently two classes of Republican politicians with regard to the COVID vaccines: Those who are consistent in their principles and don’t get the vaccines, and those who are hypocrites and get the vaccines while telling their constituents not to. Of the two, who is better? The hypocrites. At least they are doing the right thing even as they say things that are very, very wrong.

There are really four cases to consider. The principles you believe in could be right, or they could be wrong. And you could follow those principles, or you could be a hypocrite. These two factors are independent of each other.

If your principles are right and you are consistent, that’s the best case; if your principles are right and you are a hypocrite, that’s worse.

But if your principles are wrong and you are consistent, that’s the worst case; if your principles are wrong and you are a hypocrite, that’s better.

In fact I think for most things the ordering goes like this: Consistent Right > Hypocritical Wrong > Hypocritical Right > Consistent Wrong. Your behavior counts for more than your principles—so if you’re going to be a hypocrite, it’s better for your good actions to not match your bad principles.

Obviously if we could get people to believe good moral principles and then follow them, that would be best. And we should in fact be working to achieve that.

But if you know that someone’s moral principles are wrong, it doesn’t accomplish anything to accuse them of being a hypocrite. If it’s true, that’s a good thing.

Here’s a pretty clear example for you: Anyone who says that the Bible is infallible but doesn’t want gay people stoned to death is a hypocrite. The Bible is quite clear on this matter; Leviticus 20:13 really doesn’t leave much room for interpretation. By this standard, most Christians are hypocrites—and thank goodness for that. I owe my life to the hypocrisy of millions.

Of course if I could convince them that the Bible isn’t infallible—perhaps by pointing out all the things it says that contradict their most deeply-held moral and factual beliefs—that would be even better. But the last thing I want to do is make their behavior more consistent with their belief that the Bible is infallible; that would turn them into fanatical monsters. The Spanish Inquisition was very consistent in behaving according to the belief that the Bible is infallible.

Here’s another example: Anyone who thinks that cruelty to cats and dogs is wrong but is willing to buy factory-farmed beef and ham is a hypocrite. Any principle that would tell you that it’s wrong to kick a dog or cat would tell you that the way cows and pigs are treated in CAFOs is utterly unconscionable. But if you are really unwilling to give up eating meat and you can’t find or afford free-range beef, it still would be bad for you to start kicking dogs in a display of your moral consistency.

And one more example for good measure: The leaders of any country who resist human rights violations abroad but tolerate them at home are hypocrites. Obviously the best thing to do would be to fight human rights violations everywhere. But perhaps for whatever reason you are unwilling or unable to do this—one disturbing truth is that many human rights violations at home (such as draconian border policies) are often popular with your local constituents. Human-rights violations abroad are also often more severe—detaining children at the border is one thing, a full-scale genocide is quite another. So, for good reasons or bad, you may decide to focus your efforts on resisting human rights violations abroad rather than at home; this would make you a hypocrite. But it would still make you much better than a more consistent leader who simply ignores all human rights violations wherever they may occur.

In fact, there are cases in which it may be optimal for you to knowingly be a hypocrite. If you have two sets of competing moral beliefs, and you don’t know which is true but you know that as a whole they are inconsistent, your best option is to apply each set of beliefs in the domain for which you are most confident that it is correct, while searching for more information that might allow you to correct your beliefs and reconcile the inconsistency. If you are self-aware about this, you will know that you are behaving in a hypocritical way—but you will still behave better than you would if you picked the wrong beliefs and stuck to them dogmatically. In fact, given a reasonable level of risk aversion, you’ll be better off being a hypocrite than you would by picking one set of beliefs arbitrarily (say, at the flip of a coin). At least then you avoid the worst-case scenario of being the most wrong.

There is yet another factor to take into consideration. Sometimes following your own principles is hard.

Considerable ink has been spilled on the concept of akrasia, or “weakness of will”, in which we judge that A is better than B yet still find ourselves doing B. Philosophers continue to debate to this day whether this really happens. As a behavioral economist, I observe it routinely, perhaps even daily. In fact, I observe it in myself.

I think the philosophers’ mistake is to presume that there is one simple, well-defined “you” that makes all observations and judgments and takes actions. Our brains are much more complicated than that. There are many “you”s inside your brain, each with its own capacities, desires, and judgments. Yes, there is some important sense in which they are all somehow unified into a single consciousness—by a mechanism which still eludes our understanding. But it doesn’t take esoteric cognitive science to see that there are many minds inside you: Haven’t you ever felt an urge to do something you knew you shouldn’t do? Haven’t you ever succumbed to such an urge—drank the drink, eaten the dessert, bought the shoes, slept with the stranger—when it seemed so enticing but you knew it wasn’t really the right choice?

We even speak of being “of two minds” when we are ambivalent about something, and I think there is literal truth in this. The neural networks in your brain are forming coalitions, and arguing between them over which course of action you ought to take. Eventually one coalition will prevail, and your action will be taken; but afterward your reflective mind need not always agree that the coalition which won the vote was the one that deserved to.

The evolutionary reason for this is simple: We’re a kludge. We weren’t designed from the top down for optimal efficiency. We were the product of hundreds of millions of years of subtle tinkering, adding a bit here, removing a bit there, layering the mammalian, reflective cerebral cortex over the reptilian, emotional limbic system over the ancient, involuntary autonomic system. Combine this with the fact that we are built in pairs, with left and right halves of each kind of brain (and yes, they are independently functional when their connection is severed), and the wonder is that we ever agree with our own decisions.

Thus, there is a kind of hypocrisy that is not a moral indictment at all: You may genuinely and honestly agree that it is morally better to do something and still not be able to bring yourself to do it. You may know full well that it would be better to donate that money to malaria treatment rather than buy yourself that tub of ice cream—you may be on a diet and full well know that the ice cream won’t even benefit you in the long run—and still not be able to stop yourself from buying the ice cream.

Sometimes your feeling of hesitation at an altruistic act may be a useful insight; I certainly don’t think we should feel obliged to give all our income, or even all of our discretionary income, to high-impact charities. (For most people I encourage 5%. I personally try to aim for 10%. If all the middle-class and above in the First World gave even 1% we could definitely end world hunger.) But other times it may lead you astray, make you unable to resist the temptation of a delicious treat or a shiny new toy when even you know the world would be better off if you did otherwise.

Yet when following our own principles is so difficult, it’s not really much of a criticism to point out that someone has failed to do so, particularly when they themselves already recognize that they failed. The inconsistency between behavior and belief indicates that something is wrong, but it may not be any dishonesty or even anything wrong with their beliefs.

I wouldn’t go so far as to say you should stop ever calling out hypocrisy. Sometimes it is clearly useful to do so. But while hypocrisy is often the sign of a moral failing, it isn’t always—and even when it is, often as not the problem is the bad principles, not the behavior inconsistent with them.

How to change minds

Aug 29 JDN 2459456

Think for a moment about the last time you changed your mind on something important. If you can’t think of any examples, that’s not a good sign. Think harder; look back further. If you still can’t find any examples, you need to take a deep, hard look at yourself and how you are forming your beliefs. The path to wisdom is not found by starting with the right beliefs, but by starting with the wrong ones and recognizing them as wrong. No one was born getting everything right.

If you remember changing your mind about something, but don’t remember exactly when, that’s not a problem. Indeed, this is the typical case, and I’ll get to why in a moment. Try to remember as much as you can about the whole process, however long it took.

If you still can’t specifically remember changing your mind, try to imagine a situation in which you would change your mind—and if you can’t do that, you should be deeply ashamed and I have nothing further to say to you.

Thinking back to that time: Why did you change your mind?

It’s possible that it was something you did entirely on your own, through diligent research of primary sources or even your own mathematical proofs or experimental studies. This is occasionally something that happens; as an active researcher, it has definitely happened to me. But it’s clearly not the typical case of what changes people’s minds, and it’s quite likely that you have never experienced it yourself.

The far more common scenario—even for active researchers—is far more mundane: You changed your mind because someone convinced you. You encountered a persuasive argument, and it changed the way you think about things.

In fact, it probably wasn’t just one persuasive argument; it was probably many arguments, from multiple sources, over some span of time. It could be as little as minutes or hours; it could be as long as years.

Probably the first time someone tried to change your mind on that issue, they failed. The argument may even have degenerated into shouting and name-calling. You both went away thinking that the other side was composed of complete idiots or heartless monsters. And then, a little later, thinking back on the whole thing, you remembered one thing they said that was actually a pretty good point.

This happened again with someone else, and again with yet another person. And each time your mind changed just a little bit—you became less certain of some things, or incorporated some new information you didn’t know before. The towering edifice of your worldview would not be toppled by a single conversation—but a few bricks here and there did get taken out and replaced.

Or perhaps you weren’t even the target of the conversation; you simply overheard it. This seems especially common in the age of social media, where public and private spaces become blurred and two family members arguing about politics can blow up into a viral post that is viewed by millions. Perhaps you changed your mind not because of what was said to you, but because of what two other people said to one another; perhaps the one you thought was on your side just wasn’t making as many good arguments as the one on the other side.

Now, you may be thinking: Yes, people like me change our minds, because we are intelligent and reasonable. But those people, on the other side, aren’t like that. They are stubborn and foolish and dogmatic and stupid.

And you know what? You probably are an especially intelligent and reasonable person. If you’re reading this blog, there’s a good chance that you are at least above-average in your level of education, rationality, and open-mindedness.

But no matter what beliefs you hold, I guarantee you there is someone out there who shares many of them and is stubborn and foolish and dogmatic and stupid. And furthermore, there is probably someone out there who disagrees with many of your beliefs and is intelligent and open-minded and reasonable.

This is not to say that there’s no correlation between your level of reasonableness and what you actually believe. Obviously some beliefs are more rational than others, and rational people are more likely to hold those beliefs. (If this weren’t the case, we’d be doomed.) Other things equal, an atheist is more reasonable than a member of the Taliban; a social democrat is more reasonable than a neo-Nazi; a feminist is more reasonable than a misogynist; a member of the Human Rights Campaign is more reasonable than a member of the Westboro Baptist Church. But reasonable people can be wrong, and unreasonable people can be right.

You should be trying to seek out the most reasonable people who disagree with you. And you should be trying to present yourself as the most reasonable person who expresses your own beliefs.

This can be difficult—especially that first part, as the world (or at least the world spanned by Facebook and Twitter) seems to be filled with people who are astonishingly dogmatic and unreasonable. Often you won’t be able to find any reasonable disagreement. Often you will find yourself in threads full of rage, hatred and name-calling, and you will come away disheartened, frustrated, or even despairing for humanity. The whole process can feel utterly futile.

And yet, somehow, minds change.

Support for same-sex marriage in the US rose from 27% to 70% just since 1997.

Read that date again: 1997. Less than 25 years ago.

The proportion of new marriages which were interracial has risen from 3% in 1967 to 19% today. Given the racial demographics of the US, this is almost at the level of random assortment.

Ironically I think that the biggest reason people underestimate the effectiveness of rational argument is the availability heuristic: We can’t call to mind any cases where we changed someone’s mind completely. We’ve never observed a pi-radian turnaround in someone’s whole worldview, and thus, we conclude that nobody ever changes their mind about anything important.

But in fact most people change their minds slowly and gradually, and are embarrassed to admit they were wrong in public, so they change their minds in private. (One of the best single changes we could make toward improving human civilization would be to make it socially rewarded to publicly admit you were wrong. Even the scientific community doesn’t do this nearly as well as it should.) Often changing your mind doesn’t even really feel like changing your mind; you just experience a bit more doubt, learn a bit more, and repeat the process over and over again until, years later, you believe something different than you did before. You moved 0.1 or even 0.01 radians at a time, until at last you came all the way around.

It may in fact be that some people’s minds cannot be changed—either on particular issues, or even on any issue at all. But it is so very, very easy to jump to that conclusion after a few bad interactions that I think we should intentionally overcompensate in the opposite direction: Only give up on someone after you have utterly overwhelming evidence that their mind cannot ever be changed in any way.

I can’t guarantee that this will work. Perhaps too many people are too far gone.

But I also don’t see any alternative. If the truth is to prevail, it will be by rational argument. This is the only method that systematically favors the truth. All other methods give equal or greater power to lies.

Locked donation boxes and moral variation

Aug 8 JDN 2459435

I haven’t been able to find the quote, but I think it was Kahneman who once remarked: “Putting locks on donation boxes shows that you have the correct view of human nature.”

I consider this a deep insight. Allow me to explain.

Some people think that human beings are basically good. Rousseau is commonly associated with this view: the notion that, left to our own devices, human beings would naturally gravitate toward an anarchic but peaceful society.

The question for people who hold this view has to be: Why haven’t we? If your answer is “government holds us back”, you still need to explain why we have government. Government was not imposed upon us from On High in time immemorial. We were fairly anarchic (though not especially peaceful) in hunter-gatherer tribes for nearly 200,000 years before we established governments. How did that happen?

And if your answer to that is “a small number of tyrannical psychopaths forced government on everyone else”, you may not be wrong about that—but it already breaks your original theory, because we’ve just shown that human society cannot maintain a peaceful anarchy indefinitely.

Other people think that human beings are basically evil. Hobbes is most commonly associated with this view, that humans are innately greedy, violent, and selfish, and only by the overwhelming force of a government can civilization be maintained.

This view more accurately predicts the level of violence and death that generally accompanies anarchy, and can at least explain why we’d want to establish government—but it still has trouble explaining how we would establish government. It’s not as if we’re ruled by a single ubermensch with superpowers, or an army of robots created by a mad scientist in a secret underground laboratory. Running a government involves cooperation on an absolutely massive scale—thousands or even millions of unrelated, largely anonymous individuals—and this cooperation is not maintained entirely by force. Yes, there is some force involved, but most of what a government does most of the time is mediated by norms and customs; if a government ever did try to organize itself entirely by force—not paying any of the workers, not relying on any notion of patriotism or civic duty—it would immediately and catastrophically collapse.

What is the right answer? Humans aren’t basically good or basically evil. Humans are basically varied.

I would even go so far as to say that most human beings are basically good. They follow a moral code, they care about other people, they work hard to support others, they try not to break the rules. Nobody is perfect, and we all make various mistakes. We disagree about what is right and wrong, and sometimes we even engage in actions that we ourselves would recognize as morally wrong. But most people, most of the time, try to do the right thing.

But some people are better than others. There are great humanitarians, and then there are ordinary folks. There are people who are kind and compassionate, and people who are selfish jerks.

And at the very opposite extreme from the great humanitarians is the roughly 1% of people who are outright psychopaths. About 5-10% of people have significant psychopathic traits, but about 1% are really full-blown psychopaths.

I believe it is fair to say that psychopaths are in fact basically evil. They are incapable of empathy or compassion. Morality is meaningless to them—they literally cannot distinguish moral rules from other rules. Other people’s suffering—even their very lives—means nothing to them except insofar as it is instrumentally useful. To a psychopath, other people are nothing more than tools, resources to be exploited—or obstacles to be removed.

Some philosophers have argued that this means that psychopaths are incapable of moral responsibility. I think this is wrong. I think it relies on a naive, pre-scientific notion of what “moral responsibility” is supposed to mean—one that was inevitably going to be destroyed once we had a greater understanding of the brain. Do psychopaths understand the consequences of their actions? Yes. Do rewards motivate psychopaths to behave better? Yes. Does the threat of punishment motivate them? Not really, but it was never that effective on anyone else, either. What kind of “moral responsibility” are we still missing? And how would our optimal action change if we decided that they do or don’t have moral responsibility? Would you still imprison them for crimes either way? Maybe it doesn’t matter whether or not it’s really a blegg.

Psychopaths are a small portion of our population, but are responsible for a large proportion of violent crimes. They are also overrepresented in top government positions as well as among police officers, and it’s pretty safe to say that nearly every murderous dictator was a psychopath of one shade or another.

The vast majority of people are not psychopaths, and most people don’t even have any significant psychopathic traits. Yet psychopaths have an enormously disproportionate impact on society—nearly all of it harmful. If psychopaths did not exist, Rousseau might be right after all; we wouldn’t need government. If most people were psychopaths, Hobbes would be right; we’d long for the stability and security of government, but we could never actually cooperate enough to create it.

This brings me back to the matter of locked donation boxes.

Having a donation box is only worthwhile if most people are basically good: Asking people to give money freely in order to achieve some good only makes any sense if people are capable of altruism, empathy, cooperation. And it can’t be just a few, because you’d never raise enough money to be useful that way. It doesn’t have to be everyone, or maybe even a majority; but it has to be a large fraction. 90% is more than enough.

But locking things is only worthwhile if some people are basically evil: For a lock to make sense, there must be at least a few people who would be willing to break in and steal the money, even if it was earmarked for a very worthy cause. It doesn’t take a huge fraction of people, but it must be more than a negligible one. 1% to 10% is just about the right sort of range.

Hence, locked donation boxes are a phenomenon that would only exist in a world where most people are basically good—but some people are basically evil.

And this is in fact the world in which we live. It is a world where the Holocaust could happen but then be followed by the founding of the United Nations, a world where nuclear weapons would be invented and used to devastate cities, but then be followed by an era of nearly unprecedented peace. It is a world where governments are necessary to rein in violence, but also a world where governments can function (reasonably well) even in countries with hundreds of millions of people. It is a world with crushing poverty and people who work tirelessly to end it. It is a world where Exxon and BP despoil the planet for riches while WWF and Greenpeace fight back. It is a world where religions unite millions of people under a banner of peace and justice, and then go on crusades to murder thousands of other people who united under a different banner of peace and justice. It is a world of richness, complexity, uncertainty, conflict—variance.

It is not clear how much of this moral variance is innate versus acquired. If we somehow rewound the film of history and started it again with a few minor changes, it is not clear how many of us would end up the same and how many would be far better or far worse than we are. Maybe psychopaths were born the way they are, or maybe they were made that way by culture or trauma or lead poisoning. Maybe with the right upbringing or brain damage, we, too, could be axe murderers. Yet the fact remains—there are axe murderers, but we, and most people, are not like them.

So, are people good, or evil? Was Rousseau right, or Hobbes? Yes. Both. Neither. There is no one human nature; there are many human natures. We are capable of great good and great evil.

When we plan how to run a society, we must make it work the best we can with that in mind: We can assume that most people will be good most of the time—but we know that some people won’t, and we’d better be prepared for them as well.

Set out your donation boxes with confidence. But make sure they are locked.

Love the disabled, hate the disability

Aug 1 JDN 2459428

There is a common phrase Christians like to say: “Love the sinner, hate the sin.” This seems to be honored more in the breach than the observance, and many of the things that most Christians consider “sins” are utterly harmless or even good; but the principle is actually quite sound. You can disagree with someone or even believe that what they are doing is wrong while still respecting them as a human being. Indeed, my attitude toward religion is very much “Love the believer, hate the belief.” (Though somehow they don’t seem to like that one so much….)

Yet while ethically this is often the correct attitude, psychologically it can be very difficult for people to maintain. The Halo Effect is a powerful bias, and most people recoil instinctively from saying anything good about someone bad or anything bad about someone good. This can make it uncomfortable to simply state objective facts like “Hitler was a charismatic leader” or “Stalin was a competent administrator”—how dare you say something good about someone so evil? Yet in fact Hitler and Stalin could never have accomplished so much evil if they didn’t have these positive attributes—if we want to understand how such atrocities can occur and prevent them in the future, we need to recognize that evil people can also be charismatic and competent.

The Halo Effect also makes it difficult for people to understand the complexities of historical figures who have facets of both great good and great evil: Thomas Jefferson led the charge on inventing modern democracy—but he also owned and raped slaves. Lately it seems like the left wants to deny the former and the right wants to deny the latter; but both are historical truths that are important to know.

Halo Effect is the best explanation I have for why so many disability activists want to deny that disabilities are inherently bad. They can’t keep in their head the basic principle of “Love the disabled, hate the disability.”

There is a large community of deaf people who say that being deaf isn’t bad. There are even some blind people who say that being blind isn’t bad—though they’re considerably rarer.

Is music valuable? Is art valuable? Is the world better off because Mozart’s symphonies and the Mona Lisa exist? Yes. It follows that being unable to experience these things is bad. Therefore blindness and deafness are bad. QED.


No human being is made better off by not being able to do something. More capability is better than less capability. More freedom is better than less freedom. Less pain is better than more pain.

(Actually there are a few exceptions to “less pain is better than more pain”: People with CIPA are incapable of feeling pain even when injured, which is very dangerous.)

From this, it follows immediately that disabilities are bad and we should be trying to fix them.

And frankly this seems so utterly obvious to me that it’s hard for me to understand why anyone could possibly disagree. Maybe people who are blind or deaf simply don’t know what they’re missing? Even that isn’t a complete explanation, because I don’t know what it would be like to experience four dimensions or see ultraviolet—yet I still think that I’d be better off if I could. If there were people who had these experiences telling me how great they are, I’d be certain of it.

Don’t get me wrong: A lot of ableist discrimination does exist, and much of it seems to come from the same psychological attitude: Since being disabled is bad, they think that disabled people must be bad and we shouldn’t do anything to make them better off because they are bad. Stated outright this sounds ludicrous; but most people who think this way don’t consciously reflect on it. They just have a general sense of badness related to disability which then rubs off on their attitudes toward disabled people as well.

Yet it makes hardly any more sense to go the other way: Disabled people are human beings of value, they are good; therefore their disabilities are good? Therefore this thing that harms and limits them is good?

It’s certainly true that most disabilities would be more manageable with better accommodations, and many of those accommodations would be astonishingly easy and cheap to implement. It’s terrible that we often fail to do this. Yet the fact remains: The best-case scenario would be not needing accommodations because we can simply cure the disability.

It never ceases to baffle me that disability activists will say things like this:

“A wheelchair user isn’t disabled because of the impairment that interferes with her ability to walk, but because society refuses to make spaces wheelchair-accessible.”

No, the problem is pretty clearly the fact that she can’t walk. There are various ways that we could make society more accessible to people in wheelchairs—and we should do those things—but there are inherently certain things you simply cannot do if you can’t walk, and that has nothing to do with anything society does. You would be better off if society were more accommodating, but you’d be better off still if you could simply walk again.

Perhaps my perspective on this is skewed, because my major disability—chronic migraine—involves agonizing, debilitating chronic pain. Perhaps people whose disabilities don’t cause them continual agony can convince themselves that there’s nothing wrong with them. But it seems pretty obvious to me that I would be better off without migraines.

Indeed, it’s utterly alien to my experience to hear people say things like this: “We’re not suffering. We’re just living our lives in a different way.” I’m definitely suffering, thank you very much. Maybe not everyone with disabilities is suffering—but a lot of us definitely are. Every single day I have to maintain specific habits and avoid triggers, and I still get severe headaches twice a week. I had a particularly nasty one just this morning.

There are some more ambiguous cases, to be sure: Neurodivergences like autism and ADHD that exist on a spectrum, where the most extreme forms are utterly debilitating but the mildest forms are simply ordinary variation. It can be difficult to draw the line at when we should be willing to treat and when we shouldn’t; but this isn’t fundamentally different from the sort of question psychiatrists deal with all the time, regarding the difference between normal sadness and nervousness versus pathological depression and anxiety disorders.

Of course there is natural variation in almost all human traits, and one can have less of something good without it being pathological. Some things we call disabilities could just be considered below-average capabilities within ordinary variation. Yet even then, if we could make everyone healthier, stronger, faster, tougher, and smarter than they currently are, I have trouble seeing why we wouldn’t want to do that. I don’t even see any particular reason to think that the current human average—or even the current human maximum—is in any way optimal. Better is better. If we have the option to become transhuman gods, why wouldn’t we?

Another way to see this is to think about how utterly insane it would be to actively try to create disabilities. If there’s nothing wrong with being deaf, why not intentionally deafen yourself? If being bound to a wheelchair is not a bad thing, why not go get your legs paralyzed? If being blind isn’t so bad, why not stare into a welding torch? In these cases you’d even have consented—which is absolutely not the case for an innate disability. I never consented to these migraines and never would have.

I respect individual autonomy, so I would never force someone to get treatment for their disability. I even recognize that society can pressure people to do things they wouldn’t want to, and so maybe occasionally people really are better off being unable to do something so that nobody can pressure them into it. But it still seems utterly baffling to me that there are people who argue that we’d be better off not even having the option to make our bodies work better.

I think this is actually a major reason why disability activism hasn’t been more effective; the most vocal activists are the ones saying ridiculous things like “the problem isn’t my disability, it’s your lack of accommodations” or “there’s nothing wrong with being unable to hear”. If there is anything you would be able to do without your disability that you cannot do even with accommodations—and there basically always is—then those claims simply aren’t true.

Escaping the wrong side of the Yerkes-Dodson curve

Jul 25 JDN 2459421

I’ve been under a great deal of stress lately. Somehow I ended up needing to finish my dissertation, get married, and move overseas to start a new job all during the same few months—during a global pandemic.

A little bit of stress is useful, but too much can be very harmful. On complicated tasks (basically anything that involves planning or careful thought), increased stress will increase performance up to a point, and then decrease it after that point. This phenomenon is known as the Yerkes-Dodson law.

The Yerkes-Dodson curve very closely resembles the Laffer curve, which shows that since extremely low tax rates raise little revenue (obviously), and extremely high tax rates also raise very little revenue (because they cause so much damage to the economy), the tax rate that maximizes government revenue is actually somewhere in the middle. There is a revenue-maximizing tax rate (usually estimated to be about 70%).

Instead of a revenue-maximizing tax rate, the Yerkes-Dodson law says that there is a performance-maximizing stress level. You don’t want to have zero stress, because that means you don’t care and won’t put in any effort. But if your stress level gets too high, you lose your ability to focus and your performance suffers.

Since stress (like taxes) comes with a cost, you may not even want to be at the maximum point. Performance isn’t everything; you might be happier choosing a lower level of performance in order to reduce your own stress.

But one thing is certain: You do not want to be to the right of that maximum. Then you are paying the cost of not only increased stress, but also reduced performance.
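If it helps to see that in numbers, here’s a minimal sketch in Python. The inverted-U performance function is a toy quadratic I made up for illustration, not an empirical calibration: any stress level to the right of the peak is matched by one on the left that yields the same performance at far less stress.

```python
import numpy as np

# Toy inverted-U performance curve (made-up units): performance peaks at stress = 5.
def performance(stress):
    return 25.0 - (stress - 5.0) ** 2

stress = np.linspace(0.0, 10.0, 1001)
peak = stress[np.argmax(performance(stress))]
print(f"Performance-maximizing stress level: {peak:.1f}")

# A point to the right of the peak is dominated by its mirror image on the left:
s_right = 8.0
s_left = 2 * peak - s_right
print(performance(s_right), performance(s_left))  # same performance: 16.0 and 16.0
print(s_right, s_left)                            # but much less stress: 8.0 vs. 2.0
```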

And yet I think many of us spend a great deal of our time on the wrong side of the Yerkes-Dodson curve. I certainly feel like I’ve been there for quite a while now—most of grad school, really, and definitely this past month, when I suddenly found out I’d gotten an offer to work in Edinburgh.

My current circumstances are rather exceptional, but I think the general pattern of being on the wrong side of the Yerkes-Dodson curve is not.

Over 80% of Americans report work-related stress, and the US economy loses about half a trillion dollars a year in costs related to stress.

The World Health Organization lists “work-related stress” as one of its top concerns. Over 70% of people in a cross-section of countries report physical symptoms related to stress, a rate which has significantly increased since before the pandemic.

The pandemic is clearly a contributing factor here, but even without it, there seems to be an awful lot of stress in the world. Even back in 2018, over half of Americans were reporting high levels of stress. Why?

For once, I think it’s actually fair to blame capitalism.

One thing capitalism is exceptionally good at is providing strong incentives for work. This is often a good thing: It means we get a lot of work done, so employment is high, productivity is high, GDP is high. But it comes with some important downsides, and an excessive level of stress is one of them.

But this can’t be the whole story, because if markets were incentivizing us to produce as much as possible, that ought to put us near the maximum of the Yerkes-Dodson curve—but it shouldn’t put us beyond it. Maximizing productivity might not be what makes us happiest—but many of us are currently so stressed that we aren’t even maximizing productivity.

I think the problem is that competition itself is stressful. In a capitalist economy, we aren’t simply incentivized to do things well—we are incentivized to do them better than everyone else. Often quite small differences in performance can lead to large differences in outcome, much like how a few seconds can make the difference between an Olympic gold medal and an Olympic “also ran”.

An optimally productive economy would be one that incentivizes you to perform at whatever level maximizes your own long-term capability. It wouldn’t be based on competition, because competition depends too much on what other people are capable of. If you are not especially talented, competition will cause you great stress as you try to compete with people more talented than you. If you happen to be exceptionally talented, competition won’t provide enough incentive!

Here’s a very simple model for you. Your total performance p is a function of two components, your innate ability a and your effort e. In fact, let’s just say it’s a sum of the two: p = a + e.

People are randomly assigned their level of capability from some probability distribution, and then they choose their effort. For the very simplest case, let’s just say there are two people, and it turns out that person 1 has less innate ability than person 2, so a_1 < a_2.

There is also a certain amount of inherent luck in any competition. As it says in Ecclesiastes (by far the best book of the Old Testament), “The race is not to the swift or the battle to the strong, nor does food come to the wise or wealth to the brilliant or favor to the learned; but time and chance happen to them all.” So as usual I’ll model this as a contest function, where your probability of winning depends on your total performance, but it’s not a sure thing.

Let’s assume that the value of winning and the cost of effort are the same across different people. (It would be simple to remove this assumption, but it wouldn’t change much in the results.) The value of winning I’ll call V, and I will normalize the cost of effort to 1.


Then each person’s expected payoff u_i is:

u_i = (a_i + e_i) / (a_1 + e_1 + a_2 + e_2) V - e_i

You choose effort, not ability, so each person maximizes with respect to their own e_i. Setting the derivative of u_i with respect to e_i equal to zero for each player gives:

(a_2 + e_2) V = (a_1 + e_1 + a_2 + e_2)^2 = (a_1 + e_1) V

Dividing through by V:

a_1 + e_1 = a_2 + e_2

p_1 = p_2

In equilibrium, both people will produce exactly the same level of performance—but one of them will be contributing more effort to compensate for their lesser innate ability.
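If you’d rather verify that numerically than algebraically, here’s a minimal sketch in Python. The particular numbers (a_1 = 1, a_2 = 3, V = 20) are just illustrative choices of mine; it finds the equilibrium by iterating best responses over a grid of effort levels.

```python
import numpy as np

# Illustrative parameters (my own choices): person 2 is more able, the prize is V.
a1, a2 = 1.0, 3.0
V = 20.0                                  # cost of effort is normalized to 1
efforts = np.linspace(0.0, 20.0, 20001)   # grid of candidate effort levels

def payoff(a_i, e_i, a_j, e_j):
    """Contest payoff: probability of winning times the prize, minus effort cost."""
    return (a_i + e_i) / (a_i + e_i + a_j + e_j) * V - e_i

def best_response(a_i, a_j, e_j):
    """Effort that maximizes player i's payoff, holding the opponent's effort fixed."""
    return efforts[np.argmax(payoff(a_i, efforts, a_j, e_j))]

# Iterate best responses until they settle down.
e1, e2 = 1.0, 1.0
for _ in range(50):
    e1 = best_response(a1, a2, e2)
    e2 = best_response(a2, a1, e1)

print(f"Efforts:      e1 = {e1:.2f}, e2 = {e2:.2f}")            # ~4.00 and ~2.00
print(f"Performance:  p1 = {a1 + e1:.2f}, p2 = {a2 + e2:.2f}")  # both ~5.00
```

With these particular numbers it converges to e_1 ≈ 4 and e_2 ≈ 2: identical performance of about 5, with the less able player putting in twice the effort.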

I’ve definitely had this experience in both directions: Effortlessly acing math tests that I knew other people barely passed despite hours of studying, and running until I could barely breathe to keep up with other people who barely seemed winded. Clearly I had too little incentive in math class and too much in gym class—and competition was obviously the culprit.

If you vary the cost of effort between people, or make it not linear, you can make the two not exactly equal; but the overall pattern will remain that the person who has more ability will put in less effort because they can win anyway.

Yet presumably the amount of effort we want to incentivize isn’t less for those who are more talented. If anything, it may be more: Since an hour of work produces more when done by the more talented person, if the cost to them is the same, then the net benefit of that hour of work is higher than the same hour of work by someone less talented.

In a large population, there are almost certainly many people whose talents are similar to your own—but there are also almost certainly many below you and many above you as well. Unless you are properly matched with those of similar talent, competition will systematically lead to some people being pressured to work too hard and others not pressured enough.

But if we’re all stressed, where are the people not pressured enough? We see them on TV. They are celebrities and athletes and billionaires—people who got lucky enough, either genetically (actors who were born pretty, athletes who were born with more efficient muscles) or environmentally (inherited wealth and prestige), to not have to work as hard as the rest of us in order to succeed. Indeed, we are constantly bombarded with images of these fantastically lucky people, and by the availability heuristic our brains come to assume that they are far more plentiful than they actually are.

This dramatically exacerbates the harms of competition, because we come to feel that we are competing specifically with the people who were handed the world on a silver platter. Born without the innate advantages of beauty or endurance or inheritance, there’s basically no chance we could ever measure up; and thus we feel utterly inadequate unless we are constantly working as hard as we possibly can, trying to catch up in a race in which we always fall further and further behind.

How can we break out of this terrible cycle? Well, we could try to replace capitalism with something like the automated luxury communism of Star Trek; but this seems like a very difficult and long-term solution. Indeed it might well take us a few hundred years as Roddenberry predicted.

In the shorter term, we may not be able to fix the economic problem, but there is much we can do to fix the psychological problem.

By reflecting on the full breadth of human experience, not only here and now, but throughout history and around the world, you can come to realize that you—yes, you, if you’re reading this—are in fact among the relatively fortunate. If you have a roof over your head, food on your table, clean water from your tap, and ibuprofen in your medicine cabinet, you are far more fortunate than the average person in Senegal today; your television, car, computer, and smartphone are things that would be the envy even of kings just a few centuries ago. (Though ironically enough that person in Senegal likely has a smartphone, or at least a cell phone!)

Likewise, you can reflect upon the fact that while you are likely not among the world’s very most talented individuals in any particular field, there is probably something you are much better at than most people. (A Fermi estimate suggests I’m probably in the top 250 behavioral economists in the world. That’s probably not enough for a Nobel, but it does seem to be enough to get a job at the University of Edinburgh.) There are certainly many people who are less good at many things than you are, and if you must think of yourself as competing, consider that you’re also competing with them.
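In case that parenthetical seems like a strange thing to estimate, here is roughly the shape such a Fermi estimate might take; every number below is a round assumption of mine for illustration, not a figure from any source.

```python
# A back-of-the-envelope Fermi sketch (all numbers are rough assumptions for illustration):
behavioral_economists_worldwide = 5_000   # assumed size of the active field
my_percentile_within_field = 0.95         # assumed: better than ~95% of that group

estimated_rank = behavioral_economists_worldwide * (1 - my_percentile_within_field)
print(f"Roughly top {estimated_rank:.0f}")  # ~250 under these assumptions
```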

Yet perhaps the best psychological solution is to learn not to think of yourself as competing at all. So much as you can afford to do so, try to live your life as if you were already living in a world that rewards you for making the best of your own capabilities. Try to live your life doing what you really think is the best use of your time—not what your corporate overlords think. Yes, of course, we must do what we need to in order to survive, and not just survive, but indeed remain physically and mentally healthy—but this is far less than most First World people realize. Though many may try to threaten you with homelessness or even starvation in order to exploit you and make you work harder, the truth is that very few people in First World countries actually end up that way (it could be brought to zero, if our public policy were better), and you’re not likely to be among them. “Starving artists” are typically a good deal happier than the general population—because they’re not actually starving; they’ve just removed themselves from the soul-crushing treadmill of trying to impress the neighbors with manicured lawns and fancy SUVs.