How to change minds

Aug 29 JDN 2459456

Think for a moment about the last time you changed your mind on something important. If you can’t think of any examples, that’s not a good sign. Think harder; look back further. If you still can’t find any examples, you need to take a deep, hard look at yourself and how you are forming your beliefs. The path to wisdom is not found by starting with the right beliefs, but by starting with the wrong ones and recognizing them as wrong. No one was born getting everything right.

If you remember changing your mind about something, but don’t remember exactly when, that’s not a problem. Indeed, this is the typical case, and I’ll get to why in a moment. Try to remember as much as you can about the whole process, however long it took.

If you still can’t specifically remember changing your mind, try to imagine a situation in which you would change your mind—and if you can’t do that, you should be deeply ashamed and I have nothing further to say to you.

Thinking back to that time: Why did you change your mind?

It’s possible that it was something you did entirely on your own, through diligent research of primary sources or even your own mathematical proofs or experimental studies. This does occasionally happen; as an active researcher, I’ve certainly experienced it. But it’s clearly not the typical way people’s minds get changed, and it’s quite likely that you have never experienced it yourself.

The far more common scenario—even for active researchers—is much more mundane: You changed your mind because someone convinced you. You encountered a persuasive argument, and it changed the way you think about things.

In fact, it probably wasn’t just one persuasive argument; it was probably many arguments, from multiple sources, over some span of time. It could be as little as minutes or hours; it could be as long as years.

Probably the first time someone tried to change your mind on that issue, they failed. The argument may even have degenerated into shouting and name-calling. You both went away thinking that the other side was composed of complete idiots or heartless monsters. And then, a little later, thinking back on the whole thing, you remembered one thing they said that was actually a pretty good point.

This happened again with someone else, and again with yet another person. And each time your mind changed just a little bit—you became less certain of some things, or incorporated some new information you didn’t know before. The towering edifice of your worldview would not be toppled by a single conversation—but a few bricks here and there did get taken out and replaced.

Or perhaps you weren’t even the target of the conversation; you simply overheard it. This seems especially common in the age of social media, where public and private spaces become blurred and two family members arguing about politics can blow up into a viral post that is viewed by millions. Perhaps you changed your mind not because of what was said to you, but because of what two other people said to one another; perhaps the one you thought was on your side just wasn’t making as many good arguments as the one on the other side.

Now, you may be thinking: Yes, people like me change our minds, because we are intelligent and reasonable. But those people, on the other side, aren’t like that. They are stubborn and foolish and dogmatic and stupid.

And you know what? You probably are an especially intelligent and reasonable person. If you’re reading this blog, there’s a good chance that you are at least above-average in your level of education, rationality, and open-mindedness.

But no matter what beliefs you hold, I guarantee you there is someone out there who shares many of them and is stubborn and foolish and dogmatic and stupid. And furthermore, there is probably someone out there who disagrees with many of your beliefs and is intelligent and open-minded and reasonable.

This is not to say that there’s no correlation between your level of reasonableness and what you actually believe. Obviously some beliefs are more rational than others, and rational people are more likely to hold those beliefs. (If this weren’t the case, we’d be doomed.) Other things equal, an atheist is more reasonable than a member of the Taliban; a social democrat is more reasonable than a neo-Nazi; a feminist is more reasonable than a misogynist; a member of the Human Rights Campaign is more reasonable than a member of the Westboro Baptist Church. But reasonable people can be wrong, and unreasonable people can be right.

You should be trying to seek out the most reasonable people who disagree with you. And you should be trying to present yourself as the most reasonable person who expresses your own beliefs.

This can be difficult—especially that first part, as the world (or at least the world spanned by Facebook and Twitter) seems to be filled with people who are astonishingly dogmatic and unreasonable. Often you won’t be able to find any reasonable disagreement. Often you will find yourself in threads full of rage, hatred and name-calling, and you will come away disheartened, frustrated, or even despairing for humanity. The whole process can feel utterly futile.

And yet, somehow, minds change.

Support for same-sex marriage in the US rose from 27% to 70% just since 1997.

Read that date again: 1997. Less than 25 years ago.

The proportion of new marriages which were interracial has risen from 3% in 1967 to 19% today. Given the racial demographics of the US, this is a large step toward what random assortment would predict.

Ironically I think that the biggest reason people underestimate the effectiveness of rational argument is the availability heuristic: We can’t call to mind any cases where we changed someone’s mind completely. We’ve never observed a pi-radian turnaround in someone’s whole worldview, and thus, we conclude that nobody ever changes their mind about anything important.

But in fact most people change their minds slowly and gradually, and are embarrassed to admit they were wrong in public, so they change their minds in private. (One of the best single changes we could make toward improving human civilization would be to make it socially rewarded to publicly admit you were wrong. Even the scientific community doesn’t do this nearly as well as it should.) Often changing your mind doesn’t even really feel like changing your mind; you just experience a bit more doubt, learn a bit more, and repeat the process over and over again until, years later, you believe something different than you did before. You moved 0.1 or even 0.01 radians at a time, until at last you came all the way around.

It may be in fact that some people’s minds cannot be changed—either on particular issues, or even on any issue at all. But it is so very, very easy to jump to that conclusion after a few bad interactions that I think we should intentionally overcompensate in the opposite direction: Only give up on someone after you have utterly overwhelming evidence that their mind cannot ever be changed in any way.

I can’t guarantee that this will work. Perhaps too many people are too far gone.

But I also don’t see any alternative. If the truth is to prevail, it will be by rational argument. This is the only method that systematically favors the truth. All other methods give equal or greater power to lies.

Escaping the wrong side of the Yerkes-Dodson curve

Jul 25 JDN 2459421

I’ve been under a great deal of stress lately. Somehow I ended up needing to finish my dissertation, get married, and move overseas to start a new job all during the same few months—during a global pandemic.

A little bit of stress is useful, but too much can be very harmful. On complicated tasks (basically anything that involves planning or careful thought), increased stress will increase performance up to a point, and then decrease it after that point. This phenomenon is known as the Yerkes-Dodson law.

The Yerkes-Dodson curve closely resembles the Laffer curve, which shows that because extremely low tax rates raise little revenue (obviously) and extremely high tax rates also raise very little revenue (because they do so much damage to the economy), the revenue-maximizing tax rate lies somewhere in the middle (usually estimated to be about 70%).

Instead of a revenue-maximizing tax rate, the Yerkes-Dodson law says that there is a performance-maximizing stress level. You don’t want to have zero stress, because that means you don’t care and won’t put in any effort. But if your stress level gets too high, you lose your ability to focus and your performance suffers.

Since stress (like taxes) comes with a cost, you may not even want to be at the maximum point. Performance isn’t everything; you might be happier choosing a lower level of performance in order to reduce your own stress.

But one thing is certain: You do not want to be to the right of that maximum. Then you are paying the cost of not only increased stress, but also reduced performance.
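
To make that picture concrete, here is a minimal sketch in Python. The functional forms are purely illustrative assumptions of mine (an inverted-U for performance, a linear cost of stress); nothing here is estimated from data.

```python
# Toy model of the Yerkes-Dodson curve; all functional forms and numbers
# below are illustrative assumptions, not the empirical law itself.
import numpy as np

stress = np.linspace(0, 10, 101)
performance = stress * np.exp(-stress / 3)  # rises, peaks, then falls
cost = 0.05 * stress                        # stress itself is always costly

best_perf = stress[np.argmax(performance)]
best_net = stress[np.argmax(performance - cost)]
print(f"Performance-maximizing stress level: {best_perf:.1f}")
print(f"Stress level maximizing performance minus cost: {best_net:.1f}")
# The second number is lower: you may not even want the performance maximum,
# and anything to the right of it is worse on both counts.
```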

And yet I think many of us spend a great deal of our time on the wrong side of the Yerkes-Dodson curve. I certainly feel like I’ve been there for quite a while now—most of grad school, really, and definitely this past month when suddenly I found out I’d gotten an offer to work in Edinburgh.

My current circumstances are rather exceptional, but I think the general pattern of being on the wrong side of the Yerkes-Dodson curve is not.

Over 80% of Americans report work-related stress, and the US economy loses about half a trillion dollars a year in costs related to stress.

The World Health Organization lists “work-related stress” as one of its top concerns. Over 70% of people in a cross-section of countries report physical symptoms related to stress, a rate which has significantly increased since before the pandemic.

The pandemic is clearly a contributing factor here, but even without it, there seems to be an awful lot of stress in the world. Even back in 2018, over half of Americans were reporting high levels of stress. Why?

For once, I think it’s actually fair to blame capitalism.

One thing capitalism is exceptionally good at is providing strong incentives for work. This is often a good thing: It means we get a lot of work done, so employment is high, productivity is high, GDP is high. But it comes with some important downsides, and an excessive level of stress is one of them.

But this can’t be the whole story, because if markets were incentivizing us to produce as much as possible, that ought to put us near the maximum of the Yerkes-Dodson curve—but it shouldn’t put us beyond it. Maximizing productivity might not be what makes us happiest—but many of us are currently so stressed that we aren’t even maximizing productivity.

I think the problem is that competition itself is stressful. In a capitalist economy, we aren’t simply incentivized to do things well—we are incentivized to do them better than everyone else. Often quite small differences in performance can lead to large differences in outcome, much like how a few seconds can make the difference between an Olympic gold medal and an Olympic “also ran”.

An optimally productive economy would be one that incentivizes you to perform at whatever level maximizes your own long-term capability. It wouldn’t be based on competition, because competition depends too much on what other people are capable of. If you are not especially talented, competition will cause you great stress as you try to compete with people more talented than you. If you happen to be exceptionally talented, competition won’t provide enough incentive!

Here’s a very simple model for you. Your total performance p is a function of two components, your innate ability a and your effort e. In fact let’s just say it’s a sum of the two: p = a + e.

People are randomly assigned their level of capability from some probability distribution, and then they choose their effort. For the very simplest case, let’s just say there are two people, and it turns out that person 1 has less innate ability than person 2, so a1 < a2.

There is also a certain amount of inherent luck in any competition. As it says in Ecclesiastes (by far the best book of the Old Testament), “The race is not to the swift or the battle to the strong, nor does food come to the wise or wealth to the brilliant or favor to the learned; but time and chance happen to them all.” So as usual I’ll model this as a contest function, where your probability of winning depends on your total performance, but it’s not a sure thing.

Let’s assume that the value of winning and cost of effort are the same across different people. (It would be simple to remove this assumption, but it wouldn’t change much in the results.) The value of winning I’ll call V, and I will normalize the cost of effort to 1.

Then this is each person’s expected payoff ui:

ui = (ai + ei)/(a1 + e1 + a2 + e2) V − ei

You choose effort, not ability, so each person maximizes with respect to their own ei. Setting the derivative of u1 with respect to e1 equal to zero (and likewise for u2 and e2) yields:

(a2 + e2) V = (a1 + e1 + a2 + e2)^2 = (a1 + e1) V

a1 + e1 = a2 + e2

p1 = p2

In equilibrium, both people will produce exactly the same level of performance—but one of them will be contributing more effort to compensate for their lesser innate ability.
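
If you’d like to check this result without redoing the algebra, you can iterate best responses numerically. The sketch below is mine, with arbitrary parameter values (V = 10, a1 = 0.5, a2 = 1.5 are assumptions chosen so both efforts stay positive); the closed-form best response comes straight from the first-order condition above.

```python
# Numerical check of the two-person contest model: u_i = (a_i + e_i)/S * V - e_i,
# where S = a1 + e1 + a2 + e2. Parameter values are arbitrary illustrations.
import math

V, a1, a2 = 10.0, 0.5, 1.5

def best_effort(a_own, opponent_total):
    # First-order condition: (p + t)^2 = t * V, where p = a_own + e_own
    # and t is the opponent's total performance. Solve for p, then e.
    p_target = math.sqrt(opponent_total * V) - opponent_total
    return max(0.0, p_target - a_own)  # effort can't be negative

e1, e2 = 1.0, 1.0
for _ in range(100):  # iterate best responses to a fixed point
    e1 = best_effort(a1, a2 + e2)
    e2 = best_effort(a2, a1 + e1)

print(f"e1 = {e1:.2f}, e2 = {e2:.2f}")            # the less able person works harder
print(f"p1 = {a1 + e1:.2f}, p2 = {a2 + e2:.2f}")  # but performance is identical
```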

I’ve definitely had this experience in both directions: Effortlessly acing math tests that I knew other people barely passed despite hours of studying, and running until I could barely breathe to keep up with other people who barely seemed winded. Clearly I had too little incentive in math class and too much in gym class—and competition was obviously the culprit.

If you vary the cost of effort between people, or make it nonlinear, the two will no longer be exactly equal; but the overall pattern will remain that the person who has more ability will put in less effort, because they can win anyway.

Yet presumably the amount of effort we want to incentivize isn’t less for those who are more talented. If anything, it may be more: Since an hour of work produces more when done by the more talented person, if the cost to them is the same, then the net benefit of that hour of work is higher than the same hour of work by someone less talented.

In a large population, there are almost certainly many people whose talents are similar to your own—but there are also almost certainly many below you and many above you as well. Unless you are properly matched with those of similar talent, competition will systematically lead to some people being pressured to work too hard and others not pressured enough.

But if we’re all stressed, where are the people not pressured enough? We see them on TV. They are celebrities and athletes and billionaires—people who got lucky enough, either genetically (actors who were born pretty, athletes who were born with more efficient muscles) or environmentally (inherited wealth and prestige), to not have to work as hard as the rest of us in order to succeed. Indeed, we are constantly bombarded with images of these fantastically lucky people, and by the availability heuristic our brains come to assume that they are far more plentiful than they actually are.

This dramatically exacerbates the harms of competition, because we come to feel that we are competing specifically with the people who were handed the world on a silver platter. Born without the innate advantages of beauty or endurance or inheritance, there’s basically no chance we could ever measure up; and thus we feel utterly inadequate unless we are constantly working as hard as we possibly can, trying to catch up in a race in which we always fall further and further behind.

How can we break out of this terrible cycle? Well, we could try to replace capitalism with something like the automated luxury communism of Star Trek; but this seems like a very difficult and long-term solution. Indeed it might well take us a few hundred years as Roddenberry predicted.

In the shorter term, we may not be able to fix the economic problem, but there is much we can do to fix the psychological problem.

By reflecting on the full breadth of human experience, not only here and now, but throughout history and around the world, you can come to realize that you—yes, you, if you’re reading this—are in fact among the relatively fortunate. If you have a roof over your head, food on your table, clean water from your tap, and ibuprofen in your medicine cabinet, you are far more fortunate than the average person in Senegal today; your television, car, computer, and smartphone are things that would be the envy even of kings just a few centuries ago. (Though ironically enough that person in Senegal likely has a smartphone, or at least a cell phone!)

Likewise, you can reflect upon the fact that while you are likely not among the world’s very most talented individuals in any particular field, there is probably something you are much better at than most people. (A Fermi estimate suggests I’m probably in the top 250 behavioral economists in the world. That’s probably not enough for a Nobel, but it does seem to be enough to get a job at the University of Edinburgh.) There are certainly many people who are less good at many things than you are, and if you must think of yourself as competing, consider that you’re also competing with them.

Yet perhaps the best psychological solution is to learn not to think of yourself as competing at all. So much as you can afford to do so, try to live your life as if you were already living in a world that rewards you for making the best of your own capabilities. Try to live your life doing what you really think is the best use of your time—not what your corporate overlords want. Yes, of course, we must do what we need to in order to survive, and not just survive, but indeed remain physically and mentally healthy—but this is far less than most First World people realize. Though many may try to threaten you with homelessness or even starvation in order to exploit you and make you work harder, the truth is that very few people in First World countries actually end up that way (it could be brought to zero, if our public policy were better), and you’re not likely to be among them. “Starving artists” are typically a good deal happier than the general population—because they’re not actually starving; they’ve just removed themselves from the soul-crushing treadmill of trying to impress the neighbors with manicured lawns and fancy SUVs.

Why are humans so bad with probability?

Apr 29 JDN 2458238

In previous posts on deviations from expected utility and cumulative prospect theory, I’ve detailed some of the myriad ways in which human beings deviate from optimal rational behavior when it comes to probability.

This post is going to be a bit different: Yes, we behave irrationally when it comes to probability. Why?

Why aren’t we optimal expected utility maximizers?

This question is not as simple as it sounds. Some of the ways that human beings deviate from neoclassical behavior are simply because neoclassical theory requires levels of knowledge and intelligence far beyond what human beings are capable of; basically anything requiring “perfect information” qualifies, as does any game theory prediction that involves solving extensive-form games with infinite strategy spaces by backward induction. (Don’t feel bad if you have no idea what that means; that’s kind of my point. Solving infinite extensive-form games by backward induction is an unsolved problem in game theory; just this past week I saw a new paper presented that offered a partial potential solution—and yet we expect people to do it optimally every time?)

I’m also not going to include questions of fundamental uncertainty, like “Will Apple stock rise or fall tomorrow?” or “Will the US go to war with North Korea in the next ten years?” where it isn’t even clear how we would assign a probability. (Though I will get back to them, for reasons that will become clear.)

No, let’s just look at the absolute simplest cases, where the probabilities are all well-defined and completely transparent: Lotteries and casino games. Why are we so bad at that?

Lotteries are not a computationally complex problem. You figure out how much the prize is worth to you, multiply it by the probability of winning—which is clearly spelled out for you—and compare that to how much the ticket price is worth to you. The most challenging part lies in specifying your marginal utility of wealth—the “how much it’s worth to you” part—but that’s something you basically had to do anyway, to make any kind of trade-offs on how to spend your time and money. Maybe you didn’t need to compute it quite so precisely over that particular range of parameters, but you need at least some idea how much $1 versus $10,000 is worth to you in order to get by in a market economy.
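
To make this concrete, here is that calculation in Python. The specific numbers (a $2 ticket, a $100 million jackpot, 1-in-300-million odds, $50,000 of existing wealth) and the logarithmic utility function are all illustrative assumptions of mine.

```python
# Expected-utility comparison for a lottery ticket. All numbers below are
# illustrative assumptions; log utility stands in for diminishing marginal
# utility of wealth.
import math

wealth = 50_000
ticket_price = 2
jackpot = 100_000_000
p_win = 1 / 300_000_000

def utility(w):
    return math.log(w)

eu_buy = (p_win * utility(wealth - ticket_price + jackpot)
          + (1 - p_win) * utility(wealth - ticket_price))
eu_skip = utility(wealth)

print(f"Expected utility if you buy:  {eu_buy:.8f}")
print(f"Expected utility if you skip: {eu_skip:.8f}")  # skipping wins
```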

Casino games are a bit more complicated, but not much, and most of the work has been done for you; you can look on the Internet and find tables of probability calculations for poker, blackjack, roulette, craps and more. Memorizing all those probabilities might take some doing, but human memory is astonishingly capacious, and part of being an expert card player, especially in blackjack, seems to involve memorizing a lot of those probabilities.

Furthermore, by any plausible expected utility calculation, lotteries and casino games are a bad deal. Unless you’re an expert poker player or blackjack card-counter, your expected income from playing at a casino is always negative—and the casino set it up that way on purpose.

Why, then, can lotteries and casinos stay in business? Why are we so bad at such a simple problem?

Clearly we are using some sort of heuristic judgment in order to save computing power, and the people who make lotteries and casinos have designed formal models that can exploit those heuristics to pump money from us. (Shame on them, really; I don’t fully understand why this sort of thing is legal.)

In another previous post I proposed what I call “categorical prospect theory”, which I think is a decently accurate description of the heuristics people use when assessing probability (though I’ve not yet had the chance to test it experimentally).

But why use this particular heuristic? Indeed, why use a heuristic at all for such a simple problem?

I think it’s helpful to keep in mind that these simple problems are weird; they are absolutely not the sort of thing a tribe of hunter-gatherers is likely to encounter on the savannah. It doesn’t make sense for our brains to be optimized to solve poker or roulette.

The sort of problems that our ancestors encountered—indeed, the sort of problems that we encounter, most of the time—were not problems of calculable probability risk; they were problems of fundamental uncertainty. And they were frequently matters of life or death (which is why we’d expect them to be highly evolutionarily optimized): “Was that sound a lion, or just the wind?” “Is this mushroom safe to eat?” “Is that meat spoiled?”

In fact, many of the uncertainties most important to our ancestors are still important today: “Will these new strangers be friendly, or dangerous?” “Is that person attracted to me, or am I just projecting my own feelings?” “Can I trust you to keep your promise?” These sorts of social uncertainties are even deeper; it’s not clear that any finite being could ever totally resolve its uncertainty surrounding the behavior of other beings with the same level of intelligence, as the cognitive arms race continues indefinitely. The better I understand you, the better you understand me—and if you’re trying to deceive me, as I get better at detecting deception, you’ll get better at deceiving.

Personally, I think that it was precisely this sort of feedback loop that resulted in human beings getting such ridiculously huge brains in the first place. Chimpanzees are pretty good at dealing with the natural environment, maybe even better than we are; but even young children can outsmart them in social tasks any day. And once you start evolving for social cognition, it’s very hard to stop; basically you need to be constrained by something very fundamental, like, say, maximum caloric intake or the shape of the birth canal. Whereas chimpanzee brains look like what we call an “interior solution”, where evolution optimized toward a particular balance between cost and benefit, human brains look more like a “corner solution”, where the evolutionary pressure was entirely in one direction until we hit up against a hard constraint. That’s exactly what one would expect to happen if we were caught in a cognitive arms race.

What sort of heuristic makes sense for dealing with fundamental uncertainty—as opposed to precisely calculable probability? Well, you don’t want to compute a utility function and multiply by it, because that adds all sorts of extra computation and you have no idea what probability to assign. But you’ve got to do something like that in some sense, because that really is the optimal way to respond.

So here’s a heuristic you might try: Separate events into some broad categories based on how frequently they seem to occur, and what sort of response would be necessary.

Some things, like the sun rising each morning, seem to always happen. So you should act as if those things are going to happen pretty much always, because they do happen… pretty much always.

Other things, like rain, seem to happen frequently but not always. So you should look for signs that those things might happen, and prepare for them when the signs point in that direction.

Still other things, like being attacked by lions, happen very rarely, but are a really big deal when they do. You can’t go around expecting those to happen all the time, that would be crazy; but you need to be vigilant, and if you see any sign that they might be happening, even if you’re pretty sure they’re not, you may need to respond as if they were actually happening, just in case. The cost of a false positive is much lower than the cost of a false negative.

And still other things, like people sprouting wings and flying, never seem to happen. So you should act as if those things are never going to happen, and you don’t have to worry about them.

This heuristic is quite simple to apply once set up: It can simply slot in memories of when things did and didn’t happen in order to decide which category they go in—i.e., the availability heuristic. If you can remember a lot of examples of something filed under “almost never”, maybe you should move it to “unlikely” instead. If you get a really big number of examples, you might even want to move it all the way to “likely”.

Another large advantage of this heuristic is that by combining utility and probability into one metric—we might call it “importance”, though Bayesian econometricians might complain about that—we can save on memory space and computing power. I don’t need to separately compute a utility and a probability; I just need to figure out how much effort I should put into dealing with this situation. A high probability of a small cost and a low probability of a large cost may be equally worth my time.
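
Here is a rough sketch of the kind of mechanism I have in mind, in Python. The category labels, the thresholds, and the “importance” formula are all invented for illustration; this is emphatically not the tested model.

```python
# Sketch of a category-based probability heuristic with availability-style
# updating. Categories, thresholds, and the importance formula are all
# illustrative assumptions.

CATEGORIES = [          # (label, assumed working frequency)
    ("never", 0.0),
    ("unlikely", 0.05),
    ("likely", 0.7),
    ("always", 1.0),
]

def categorize(occurred, total):
    """Slot an event into a category based on remembered occurrences."""
    rate = occurred / total if total else 0.0
    # choose the category whose working frequency is closest to the memory
    return min(CATEGORIES, key=lambda c: abs(c[1] - rate))

def importance(category, stakes):
    """Collapse probability and utility into one number: how much effort
    is this situation worth?"""
    label, freq = category
    return freq * stakes

# A rare catastrophe can be as "important" as a common nuisance:
lion = categorize(occurred=4, total=100)       # -> "unlikely"
rain = categorize(occurred=70, total=100)      # -> "likely"
print(lion[0], importance(lion, stakes=1000))  # unlikely 50.0
print(rain[0], importance(rain, stakes=80))    # likely 56.0
```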

How might these heuristics go wrong? Well, if your environment changes sufficiently, the probabilities could shift and what seemed certain no longer is. For most of human history, “people walking on the Moon” would seem about as plausible as sprouting wings and flying away, and yet it has happened. Being attacked by lions is now exceedingly rare except in very specific places, but we still harbor a certain awe and fear before lions. And of course availability heuristic can be greatly distorted by mass media, which makes people feel like terrorist attacks and nuclear meltdowns are common and deaths by car accidents and influenza are rare—when exactly the opposite is true.

How many categories should you set, and what frequencies should they be associated with? This part I’m still struggling with, and it’s an important piece of the puzzle I will need before I can take this theory to experiment. There is probably a trade-off: more categories give you more precision in tailoring your optimal behavior, but cost more cognitive resources to maintain. Is the optimal number 3? 4? 7? 10? I really don’t know. Even if I could specify the number of categories, I’d still need to figure out precisely what categories to assign.

Is grade inflation a real problem?

Mar 4 JDN 2458182

You can’t spend much time teaching at the university level and not hear someone complain about “grade inflation”. Almost every professor seems to believe in it, and yet they must all be participating in it, if it’s really such a widespread problem.

This could be explained as a collective action problem, a Tragedy of the Commons: If the incentives are always to have the students with the highest grades—perhaps because of administrative pressure, or in order to get better reviews from students—then even if all professors would prefer a harsher grading scheme, no individual professor can afford to deviate from the prevailing norms.

But in fact I think there is a much simpler explanation: Grade inflation doesn’t exist.

In economic growth theory, economists make a sharp distinction between inflation—increase in prices without change in underlying fundamentals—and growth—increase in the real value of output. I contend that there is no such thing as grade inflation—what we are in fact observing is grade growth.

Am I saying that students are actually smarter now than they were 30 years ago?

Yes. That’s exactly what I’m saying.

But don’t take it from me. Take it from the decades of research on the Flynn Effect: IQ scores have been rising worldwide at a rate of about 0.3 IQ points per year for as long as we’ve been keeping good records. Students today are about 10 IQ points smarter than students 30 years ago—a 2018 IQ score of 95 is equivalent to a 1988 score of 105, which is equivalent to a 1958 score of 115. There is reason to think this trend won’t continue indefinitely, since the effect is mainly concentrated at the bottom end of the distribution; but it has continued for quite some time already.

This by itself would probably be enough to explain the observed increase in grades, but there’s more: College students are also a self-selected sample, admitted precisely because they were believed to be the smartest individuals in the application pool. Rising grades at top institutions are easily explained by rising selectivity at top schools: Harvard now accepts 5.6% of applicants. In 1942, Harvard accepted 92% of applicants. The odds of getting in have fallen from roughly 11:1 in favor to 17:1 against. Today, you need a 4.0 GPA, a 36 ACT in every category, glowing letters of recommendation, and hundreds of hours of extracurricular activities (or a family member who donated millions of dollars, of course) to get into Harvard. In the 1940s, you needed a high school diploma and a B average.

In fact, when educational researchers have tried to quantitatively study the phenomenon of “grade inflation”, they usually come back with the result that they simply can’t find it. The US Department of Education conducted a study in 1995 showing that average university grades had declined since 1965. Given that the Flynn effect raised IQ by almost 10 points during that time, maybe we should be panicking about grade deflation.

It really wouldn’t be hard to make that case: “Back in my day, you could get an A just by knowing basic algebra! Now they want these kids to take partial derivatives?” “We used to just memorize facts to ace the exam; but now teachers keep asking for reasoning and critical thinking?”

More recently, a study in 2013 found that grades rose at the high school level, but fell at the college level, and showed no evidence of losing any informativeness as a signaling mechanism. The only recent study I could find showing genuinely compelling evidence for grade inflation was a 2017 study of UK students estimating that grades are growing about twice as fast as the Flynn effect alone would predict. Most studies don’t even consider the possibility that students are smarter than they used to be—they just take it for granted that any increase in average grades constitutes grade inflation. Many of them don’t even control for the increase in selectivity—here’s one using the fact that Harvard’s average rose from 2.7 to 3.4 from 1960 to 2000 as evidence of “grade inflation” when Harvard’s acceptance rate fell from almost 30% to only 10% during that period.

Indeed, the real mystery is why so many professors believe in grade inflation, when the evidence for it is so astonishingly weak.

I think it’s the availability heuristic. Who are professors? They are the cream of the crop. They aced their way through high school, college, and graduate school, then got hired and earned tenure—they were one of a handful of individuals who won a fierce competition with hundreds of competitors at each stage. There are over 320 million people in the US, and only 1.3 million college faculty. This means that college professors represent about the top 0.4% of high-scoring students.

Combine that with the fact that human beings assort positively (we like to spend time with people who are similar to us) and use the availability heuristic (we judge how likely something is based on how many times we have seen it).

Thus, when a professor thinks back to her own experience of college, she is remembering her fellow top-scoring students at elite educational institutions. She is recalling the extreme intellectual demands she had to meet to get where she is today, and erroneously assuming that these are representative of most of the population of her generation. She probably went to school at one of a handful of elite institutions, even if she now teaches at a mid-level community college: three quarters of college faculty come from the top one quarter of graduate schools.

And now she compares that memory to the students she has to teach, most of whom would not be able to meet such demands—but of course most people in her generation couldn’t either. She frets for the future of humanity only because not everyone is a genius like her.

Throw in the Curse of Knowledge: The professor doesn’t remember how hard it was to learn what she has learned so far, and so the fact that it seems easy now makes her think it was easy all along. “How can they not know how to take partial derivatives!?” Well, let’s see… were you born knowing how to take partial derivatives?

Giving a student an A for work far inferior to what you’d have done in their place isn’t unfair. Indeed, it would clearly be unfair to do anything less. You have years if not decades of additional education beyond theirs, and you are from a self-selected elite sample of highly intelligent individuals. Expecting everyone to perform as well as you would is simply setting up most of the population for failure.

There are potential incentives for grade inflation that do concern me: In particular, a lot of international student visas and scholarship programs insist upon maintaining a B or even A- average to continue. Professors are understandably loath to condemn a student to having to drop out or return to their home country just because they scored 81% instead of 84% on the final exam. If we really intend to make C the average score, then students shouldn’t lose funding or visas just for scoring a B-. Indeed, I have trouble defending any threshold above outright failing—which is to say, a minimum score of D-. If you pass your classes, that should be good enough to keep your funding.

Yet apparently even this isn’t creating too much upward bias, as students who are 10 IQ points smarter are still getting about the same scores as their forebears. We should be celebrating that our population is getting smarter, but instead we’re panicking over “easy grading”.

But kids these days, am I right?

Nuclear power is safe. Why don’t people like it?

Sep 24 JDN 2457656

This post will have two parts, corresponding to each sentence. First, I hope to convince you that nuclear power is safe. Second, I’ll try to analyze some of the reasons why people don’t like it and what we might be able to do about that.

Depending on how familiar you are with the statistics on nuclear power, the idea that nuclear power is safe may strike you as either a completely ridiculous claim or an egregious understatement. If your primary familiarity with nuclear power safety is via the widely-publicized examples of Chernobyl, Three Mile Island, and more recently Fukushima, you may have the impression that nuclear power carries huge, catastrophic risks. (You may also be confusing nuclear power with nuclear weapons—nuclear weapons are indeed the greatest catastrophic risk on Earth today, but equating the two is like equating automobiles and machine guns because both of them are made of metal and contain lubricant, flammable materials, and springs.)

But in fact nuclear energy is astonishingly safe. Indeed, even those examples aren’t nearly as bad as people have been led to believe. Guess how many people died as a result of Three Mile Island, including estimated increased cancer deaths from radiation exposure?

Zero. There are zero confirmed deaths, and the consensus estimate of excess deaths from the Three Mile Island incident, from all causes combined, is also zero.

What about Fukushima? Didn’t 10,000 people die there? From the tsunami, yes. But the nuclear accident resulted in zero fatalities. If anything, those 10,000 people were killed by coal—by climate change. They certainly weren’t killed by nuclear.

Chernobyl, on the other hand, did actually kill a lot of people. Chernobyl caused 31 confirmed direct deaths, as well as an estimated 4,000 excess deaths by all causes. On the one hand, that’s more than 9/11; on the other hand, it’s about a month of US car accidents. Imagine if people had the same level of panic and outrage at automobiles after a month of accidents that they did at nuclear power after Chernobyl.

The vast majority of nuclear accidents cause zero fatalities; other than Chernobyl, none have ever caused more than 10. Deepwater Horizon killed 11 people, and yet for some reason Americans did not unite in opposition against ever using oil (or even offshore drilling!) ever again.

In fact, even that isn’t fair to nuclear power, because we’re not including the thousands of lives saved every year by using nuclear instead of coal and oil.

Keep in mind, the WHO estimates 10 to 100 million excess deaths due to climate change over the 21st century. That’s an average of 100,000 to 1 million deaths every year. Nuclear power currently produces about 11% of the world’s energy, so let’s do a back-of-the-envelope calculation for how many lives that’s saving. Assuming that additional climate change would be worse in direct proportion to the additional carbon emissions (which is conservative), and assuming that half that energy would be replaced by coal or oil (also conservative, using Germany’s example), we’re looking at about a 6% increase in deaths due to climate change if all those nuclear power plants were closed. That’s 6,000 to 60,000 lives that nuclear power plants save every year.
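
Here is that back-of-the-envelope calculation written out explicitly. The inputs are the ones given above; the proportionality of deaths to emissions and the 50% fossil replacement rate are the stated conservative assumptions.

```python
# Back-of-the-envelope: lives saved per year by existing nuclear power.
# Inputs are from the text; the linear proportionality is the stated
# conservative assumption.
deaths_low, deaths_high = 100_000, 1_000_000  # WHO climate deaths per year

nuclear_share = 0.11       # fraction of world energy from nuclear
fossil_replacement = 0.5   # fraction replaced by coal/oil if plants closed

extra = nuclear_share * fossil_replacement  # ~6% more climate damage
print(f"{extra * deaths_low:,.0f} to {extra * deaths_high:,.0f} extra deaths/year")
# -> 5,500 to 55,000; the text rounds 5.5% up to 6%, giving 6,000 to 60,000
```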

I also haven’t included deaths due to pollution—note that nuclear power plants don’t pollute air or water whatsoever, and only produce very small amounts of waste that can be quite safely stored. Air pollution in all its forms is responsible for one in eight deaths worldwide. Let me say that again: One in eight of all deaths in the world is caused by air pollution—so this is on the order of 7 million deaths per year, every year. We burn our way to an annual Holocaust. Most of this pollution is actually caused by burning wood—fireplaces, wood stoves, and bonfires are terrible for the air—and many countries would actually see a substantial reduction in their toxic pollution if they switched from wood to oil or even coal. But a large part of that pollution is caused by coal, and a nontrivial amount is caused by oil. Coal-burning factories and power plants are responsible for about 1 million deaths per year in China alone. Most of that pollution could be prevented if those power plants were nuclear instead.

Factor all that in, and nuclear power currently saves tens if not hundreds of thousands of lives per year, and expanding it to replace all fossil fuels could save millions more. Indeed, a more precise estimate of the benefits of nuclear power published a few years ago in Environmental Science and Technology is that nuclear power plants have saved some 1.8 million human lives since their invention, putting them on a par with penicillin and the polio vaccine.

So, I hope I’ve convinced you of the first proposition: Nuclear power plants are safe—and not just safe, but heroic, in fact one of the greatest life-saving technologies ever invented. So, why don’t people like them?

Unfortunately, I suspect that no amount of statistical data by itself will convince those who still feel a deep-seated revulsion to nuclear power. Even many environmentalists, people who could be nuclear energy’s greatest advocates, are often opposed to it. I read all the way through Naomi Klein’s This Changes Everything and never found even a single cogent argument against nuclear power; she simply takes it as obvious that nuclear power is “more of the same line of thinking that got us in this mess”. Perhaps because nuclear power could be enormously profitable for certain corporations (which is true; but then, it’s also true of solar and wind power)? Or because it also fits this narrative of “raping and despoiling the Earth” (sort of, I guess)? She never really does explain; I’m guessing she assumes that her audience will simply share her “gut feeling” intuition that nuclear power is dangerous and untrustworthy. One of the most important inconvenient truths for environmentalists is that nuclear power is not only safe, it is almost certainly our best hope for stopping climate change.

Perhaps all this is less baffling when we recognize that other heroic technologies are often also feared or despised for similarly bizarre reasons—vaccines, for instance.

First of all, human beings fear what we cannot understand, and while the human immune system is certainly immensely complicated, nuclear power is based on quantum mechanics, a realm of scientific knowledge so difficult and esoteric that it is frequently used as the paradigm example of something that is hard to understand. (As Feynman famously said, “I think I can safely say that nobody understands quantum mechanics.”) Nor does it help that popular treatments of quantum physics typically bear about as much resemblance to the actual content of the theory as the X-Men films do to evolutionary biology, and con artists like Deepak Chopra take advantage of this confusion to peddle their quackery.

Nuclear radiation is also particularly terrifying because it is invisible and silent; while a properly-functioning nuclear power plant emits less ionizing radiation than the Capitol Building and eating a banana poses substantially higher radiation risk than talking on a cell phone, nonetheless there is real danger posed by ionizing radiation, and that danger is particularly terrifying because it takes a form that human senses cannot detect. When you are burned by fire or cut by a knife, you know immediately; but gamma rays could be coursing through you right now and you’d feel no different. (Huge quantities of neutrinos are coursing through you, but fear not, for they’re completely harmless.) The symptoms of severe acute radiation poisoning also take a particularly horrific form: After the initial phase of nausea wears off, you can enter a “walking ghost phase”, where your eventual death is almost certain due to your compromised immune and digestive systems, but your current condition is almost normal. This makes the prospect of death by nuclear accident a particularly vivid and horrible image.

Vividness makes ideas more available to our memory; and thus, by the availability heuristic, we automatically infer that it must be more probable than it truly is. You can think of horrific nuclear accidents like Chernobyl, and all the carnage they caused; but all those millions of people choking to death in China don’t make for a compelling TV news segment (or at least, our TV news doesn’t seem to think so). Vividness doesn’t actually seem to make things more persuasive, but it does make them more memorable.

Yet even if we allow for the possibility that death by radiation poisoning is somewhat worse than death by coal pollution (if I had to choose between the two, okay, maybe I’d go with the coal), surely it’s not ten thousand times worse? Surely it’s not worth sacrificing entire cities full of people to coal in order to prevent a handful of deaths by nuclear energy?

Another reason that has been proposed is a sense that we can control risk from other sources, but a nuclear meltdown would be totally outside our control. Perhaps that is the perception, but if you think about it, it really doesn’t make a lot of sense. If there’s a nuclear meltdown, emergency services will report it, and you can evacuate the area. Yes, the radiation moves at the speed of light; but it also dissipates as the inverse square of distance, so if you just move further away you can get a lot safer quite quickly. (Think about the brightness of a lamp in your face versus across a football field. Radiation works the same way.) The damage is also cumulative, so the radiation risk from a meltdown is only going to be serious if you stay close to the reactor for a sustained period of time. Indeed, it’s much easier to avoid nuclear radiation than it is to avoid air pollution; you can’t just stand behind a concrete wall to shield against air pollution, and moving further away isn’t possible if you don’t know where it’s coming from. Control would explain why we fear cars less than airplanes (which is also statistically absurd), but it really can’t explain why nuclear power scares people more than coal and oil.
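
The arithmetic behind the inverse-square point is easy to check. In this little sketch the source strength and the distances are arbitrary numbers of mine, and shielding and radioactive decay are ignored entirely.

```python
# Inverse-square illustration: dose rate falls off as 1/r^2, and total dose
# accumulates with exposure time. All numbers are arbitrary illustrations.
def relative_dose(distance_m, hours, strength=1.0):
    return strength / distance_m**2 * hours

near = relative_dose(distance_m=1, hours=1)    # one hour right next to the source
far = relative_dose(distance_m=30, hours=24)   # a full day at 30 meters
print(f"1 hour at 1 m:    {near:.4f}")
print(f"24 hours at 30 m: {far:.4f}")  # ~900x lower dose rate; ~37x less in total
```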

Another important factor may be an odd sort of bipartisan consensus: While the Left hates nuclear power because it makes corporations profitable or because it’s unnatural and despoils the Earth or something, the Right hates nuclear power because it requires substantial government involvement and might displace their beloved fossil fuels. (The Right’s deep, deep love of the fossil fuel industry now borders on the pathological. Even now that they are obviously economically inefficient and environmentally disastrous, right-wing parties around the world continue to defend enormous subsidies for oil and coal companies. Corruption and regulatory capture could partly explain this, but only partly. Campaign contributions can’t explain why someone would write a book praising how wonderful fossil fuels are and angrily denouncing anyone who would dare criticize them.) So while the two sides may hate each other in general and disagree on most other issues—including of course climate change itself—they can at least agree that nuclear power is bad and must be stopped.

Where do we go from here, then? I’m not entirely sure. As I said, statistical data by itself clearly won’t be enough. We need to find out what it is that makes people so uniquely terrified of nuclear energy, and we need to find a way to assuage those fears.

And we must do this now. Every day we don’t—every day we postpone the transition to a zero-carbon energy grid—is another thousand people dead.