Why we need critical thinking

Jul 9 JDN 2460135

I can’t find it at the moment, but a while ago I read a surprisingly compelling post on social media (I think it was Facebook, but it could also have been Reddit) questioning the common notion that we should be teaching more critical thinking in school.

I strongly believe that we should in fact be teaching more critical thinking in school—actually I think we should replace large chunks of the current math curriculum with a combination of statistics, economics and critical thinking—but it made me realize that we haven’t done enough to defend why that is something worth doing. It’s just become a sort of automatic talking point, like, “obviously you would want more critical thinking, why are you even asking?”

So here’s a brief attempt to explain why critical thinking is something that every citizen ought to be good at, and hence why it’s worthwhile to teach it in primary and secondary school.

Critical thinking, above all, allows you to detect lies. It teaches you to look past the surface of what other people are saying and determine whether what they are saying is actually true.

And our world is absolutely full of lies.

We are constantly lied to by advertising. We are constantly lied to by spam emails and scam calls. Day in and day out, people with big smiles promise us the world, if only we will send them five easy payments of $19.99.

We are constantly lied to by politicians. We are constantly lied to by religious leaders (it’s pretty much their whole job actually).

We are often lied to by newspapers—sometimes directly and explicitly, as in fake news, but more often in subtler ways. Most news articles in the mainstream press are true in the explicit facts they state, but are missing important context; and nearly all of them focus on the wrong things—exciting, sensational, rare events rather than what’s actually important and likely to affect your life. If newspapers were an accurate reflection of genuine risk, they’d have more articles on suicide than homicide, and something like one million articles on climate change for every one on some freak accident (like that submarine full of billionaires).

We are even lied to by press releases on science, which likewise focus on new, exciting, sensational findings rather than supported, established, documented knowledge. And don’t tell me everyone already knows it; just stating basic facts about almost any scientific field will shock and impress most of the audience, because they clearly didn’t learn this stuff in school (or, what amounts to the same thing, don’t remember it). This isn’t just true of quantum physics; it’s even true of economics—which directly affects people’s lives.

Critical thinking is how you can tell when a politician has distorted the views of his opponent and you need to spend more time listening to that opponent speak. Critical thinking could probably have saved us from electing Donald Trump President.

Critical thinking is how you tell that a supplement which “has not been evaluated by the FDA” (which is to say, nearly all of them) probably contains something mostly harmless that maybe would benefit you if you were deficient in it, but for most people really won’t matter—and definitely isn’t something you can substitute for medical treatment.

Critical thinking is how you recognize that much of the history you were taught as a child was a sanitized, simplified, nationalist version of what actually happened. But it’s also how you recognize that simply inverting it all and becoming the sort of anti-nationalist who hates your own country is at least as ridiculous. Thomas Jefferson was both a pioneer of democracy and a slaveholder. He was both a hero and a villain. The world is complicated and messy—and nothing will let you see that faster than critical thinking.


Critical thinking tells you that whenever a new “financial innovation” appears—like mortgage-backed securities or cryptocurrency—it will probably make obscene amounts of money for a handful of insiders, but will otherwise be worthless if not disastrous to everyone else. (And maybe if enough people had good critical thinking skills, we could stop the next “innovation” from getting so far!)

More widespread critical thinking could even improve our job market, as interviewers would no longer be taken in by the candidates who are best at overselling themselves, and would instead pay more attention to the more-qualified candidates who are quiet and honest.

In short, critical thinking constitutes a large portion of what is ordinarily called common sense or wisdom; some of that simply comes from life experience, but a great deal of it is actually a learnable skill set.

Of course, even if it can be learned, that still raises the question of how it can be taught. I don’t think we have a sound curriculum for teaching critical thinking, and in my more cynical moments I wonder if many of the powers that be like it that way. Knowing that many—not all, but many—politicians make their careers primarily from deceiving the public, it’s not so hard to see why those same politicians wouldn’t want to support teaching critical thinking in public schools. And it’s almost funny to me watching evangelical Christians try to justify why critical thinking is dangerous—they come so close to admitting that their entire worldview is totally unfounded in logic or evidence.

But at least I hope I’ve convinced you that it is something worthwhile to know, and that the world would be better off if we could teach it to more people.

Age, ambition, and social comparison

Jul 2 JDN 2460128

The day I turned 35 years old was one of the worst days of my life, as I wrote about at the time. I think the only times I have felt more depressed than that day were when my father died, when I was hospitalized by an allergic reaction to lamotrigine, and when I was rejected after interviewing for jobs at GiveWell and Wizards of the Coast.

This is notable because… nothing particularly bad happened to me on my 35th birthday. It was basically an ordinary day for me. I felt horrible simply because I was turning 35 and hadn’t accomplished so many of the things I thought I would have by that point in my life. I felt my dreams shattering as the clock ticked away what chance I thought I’d have at achieving my life’s ambitions.

I am slowly coming to realize just how pathological that attitude truly is. It was ingrained in me very deeply from the very youngest age, not least because I was such a gifted child.

While studying quantum physics in college, I was warned that great physicists do all their best work before they are 30 (some even said 25). Einstein himself said as much (so it must be true, right?). It turns out that was simply untrue. It may have been largely true in the 18th and 19th centuries, and seems to have seen some resurgence during the early years of quantum theory, but today the median age at which a Nobel laureate physicist did their prize-winning work is 48. Less than 20% of eminent scientists made their great discoveries before the age of 40.

Alexander Fleming was 47 when he discovered penicillin—just about average for an eminent scientist of today. Darwin was 22 when he set sail on the Beagle, but didn’t publish On the Origin of Species until he was 50. André-Marie Ampère started his work in electromagnetism in his forties.

In creative arts, age seems to be no barrier at all. Julia Child published her first cookbook at 50. Stan Lee sold his first successful Marvel comic at 40. Toni Morrison was 39 when she published her first novel, and 62 when she won her Nobel. Peter Mark Roget was 73 when he published his famous thesaurus. Tolkien didn’t publish The Hobbit until he was 45.

Alan Rickman didn’t start drama school until he was 26 and didn’t have a major Hollywood role until he was 42. Samuel L. Jackson is now the third-highest-grossing actor of all time (mostly because of the Avengers movies), but he didn’t have any major movie roles until his forties. Anna Moses didn’t start painting until she was 78.

We think of entrepreneurship as a young man’s game, but Ray Kroc didn’t buy McDonald’s until he was 59. Harland Sanders didn’t franchise KFC until he was 62. Eric Yuan wasn’t a vice president until the age of 37 and didn’t become a billionaire until Zoom took off in 2019—he was 49. Sam Walton didn’t found Walmart until he was 44.

Great humanitarian achievements actually seem to be more likely later in life: Gandhi did not see India achieve independence until he was 77. Nelson Mandela was 75 when he became President of South Africa.

It has taken me far too long to realize this, and in fact I don’t think I have yet fully internalized it: Life is not a race. You do not “fall behind” when others achieve things younger than you did. In fact, most child prodigies grow up no more successful as adults than children who were merely gifted or even above-average. (There is another common belief that prodigies grow up miserable and stunted; that, fortunately, isn’t true either.)

Then there is queer time, the fact that, in a hostile heteronormative world, queer people often find ourselves growing up in a very different way than straight people; and crip time, the ways that coping with a disability changes your relationship with time and often forces you to manage your time in ways that others don’t. Queer time doesn’t seem to have affected me all that much, as someone who came out fairly young and is now married. But I feel crip time very acutely: I have to very carefully manage when I go to bed and when I wake up, every single day, making sure I get not only enough sleep—much more sleep than most people get or most employers respect—but also that it aligns properly with my circadian rhythm. Failure to do so risks triggering severe, agonizing pain. Factoring that in, I have lost at least a few years of my life to migraines and depression, and will probably lose several more in the future.

But more importantly, we all need to learn to stop measuring ourselves against other people’s timelines. There is no prize in life for being faster. And while there are prizes for particular accomplishments (Oscars, Nobels, and so on), much of what determines whether you win such prizes is entirely beyond your control. Even people who ultimately made eminent contributions to society didn’t know in advance that they were going to, and didn’t behave all that much differently from others who tried but failed.

I do not want to make this sound easy. It is incredibly hard. I believe that I personally am especially terrible at it. Our society seems to be optimized to make us compare ourselves to others in as many ways as possible as often as possible in as biased a manner as possible.

Capitalism has many important upsides, but one of its deepest flaws is that it makes our standard of living directly dependent on what is happening in the rest of a global market we can neither understand nor control. A subsistence farmer is subject to the whims of nature; but in a supermarket, you are subject to the whims of an entire global economy.

And there is reason to think that the harm of social comparison is getting worse rather than better. If some mad villain set out to devise a system that would maximize harmful social comparison and the emotional damage it causes, he would most likely create something resembling social media.

The villain might also tack on some TV news for good measure: Here are some random terrifying events, which we’ll make it sound like could hit you at any moment (even though their actual risk is declining); then our ‘good news’ will be a litany of amazing accomplishments, far beyond anything you could reasonably hope for, which have been achieved by a cherry-picked sample of unimaginably fortunate people you have never met (yet you somehow still form parasocial bonds with because we keep showing them to you). We will make a point not to talk about the actual problems in the world (such as inequality and climate change), certainly not in any way you might be able to constructively learn from; nor will we mention any actual good news which might be relevant to an ordinary person such as yourself (such as economic growth, improved health, or reduced poverty). We will focus entirely on rare, extreme events that by construction aren’t likely to ever happen to you and are not relevant to how you should live your life.

I do not have some simple formula I can give you that will make social comparison disappear. I do not know how to shake the decades of indoctrination into a societal milieu that prizes richer and faster over all other concepts of worth. But perhaps at least recognizing the problem will weaken its power over us.

How to make political conversation possible

Jun 25 JDN 2460121

Every man has the right to an opinion, but no man has a right to be wrong in his facts.

~Bernard Baruch

We shouldn’t expect political conversation to be easy. Politics inherently involves conflict. There are various competing interests and different ethical views involved in any political decision. Budgets are inherently limited, and spending must be prioritized. Raising taxes supports public goods but hurts taxpayers. A policy that reduces inflation may increase unemployment. A policy that promotes growth may also increase inequality. Freedom must sometimes be weighed against security. Compromises must be made that won’t make everyone happy—often they aren’t anyone’s first choice.

But in order to have useful political conversations, we need to have common ground. It’s one thing to disagree about what should be done—it’s quite another to ‘disagree’ about the basic facts of the world. Reasonable people can disagree about what constitutes the best policy choice. But when you start insisting upon factual claims that are empirically false, you become inherently unreasonable.

What terrifies me about our current state of political discourse is that we do not seem to have this common ground. We can’t even agree about basic facts of the world. Unless we can fix this, political conversation will be impossible.

I am tempted to say “anymore”—it at least feels to me like politics used to be different. But maybe it’s always been this way, and the Internet simply made the unreasonable voices louder. Overall rates of belief in most conspiracy theories haven’t changed substantially over time. Plenty of other eras have been called ‘the golden age of conspiracy theory’. Maybe this has always been a problem. Maybe the greatest reason humanity has never been able to achieve peace is that large swaths of humanity can’t even agree on the basic facts.

Donald Trump exemplified this fact-less approach to politics, and QAnon remains a disturbingly significant force in our politics today. It’s impossible to have a sensible conversation with people who are convinced that you’re supporting a secret cabal of Satanic child molesters—and all the more impossible because they were willing to become convinced of that on literally zero evidence. But Trump was not the first conspiracist candidate, and will not be the last.

Robert F. Kennedy Jr. now seems to be challenging Trump for the title of ‘most unreasonable Presidential candidate’, as he has now advocated for an astonishing variety of bizarre unfounded claims: that vaccines are deadly, that antidepressants are responsible for mass shootings, that COVID was a Chinese bioweapon. He even claims things that can be quickly refuted simply by looking up the figures: He says that Switzerland’s gun ownership rate is comparable to the US, when in fact it’s only about one-fourth as high. No other country even comes close to the extraordinarily high rate of gun ownership in the US; we are the only country in the world with more privately-owned guns than private citizens to own them—more guns than people. (We also have by far the most military weapons, but that’s a somewhat different issue.)

What should we be doing about this? I think at this point it’s clear that simply sitting back and hoping it goes away on its own is not working. There is a widespread fear that engaging with bizarre theories simply grants them attention, but I think we have no serious alternative. They aren’t going to disappear if we simply ignore them.

That still leaves the question of how to engage. Simply arguing with their claims directly and presenting mainstream scientific evidence appears to be remarkably ineffective. They will simply dismiss the credibility of the scientific evidence, often by exaggerating genuine flaws in scientific institutions. The journal system is broken? Big Pharma has far too much influence? Established ideas take too long to become unseated? All true. But that doesn’t mean that magic beans cure cancer.

A more effective—not easy, and certainly not infallible, but more effective—strategy seems to be to look deeper into why people say the things they do. I emphasize the word ‘say’ here, because it often seems to be the case that people don’t really believe in conspiracy theories the way they believe in ordinary facts. It’s more the mythology mindset.

Rather than address the claims directly, you need to address the person making the claims. Before getting into any substantive content, you must first build rapport and show empathy—a process some call pre-suasion. Then, rather than seeking out the evidence that supports their claims—as there will be virtually none—try to find out what emotional need the conspiracy theory satisfies for them: How does it help them make sense of the terrifying chaos of the world? How does professing belief in something that initially seems absurd and horrific actually make the world seem more orderly and secure in their mind?


For instance, consider the claim that 9/11 was an inside job. At face value, this is horrifying: The US government is so evil it was prepared to launch an attack on our own soil, against our own citizens, in order to justify starting a war in another country? Against such a government, I think violent insurrection is the only viable response. But if you consider it from another perspective, it makes the world less terrifying: At least, there is someone in control. An attack like 9/11 means that the world is governed by chaos: Even we in the seemingly-impregnable fortress of American national security are in fact vulnerable to random attacks by small groups of dedicated fanatics. In the conspiracist vision of the world, the US government becomes a terrible villain; but at least the world is governed by powerful, orderly forces—not random chaos.

Or consider one of the most widespread (and, to be fair, one of the least implausible) conspiracy theories: That JFK was assassinated not by a single fanatic, but by an organized agency—the KGB, or the CIA, or the Vice President. In the real world, the President of the United States—the most powerful man on the entire planet—can occasionally be felled by a single individual who is dedicated enough and lucky enough. In the conspiracist world, such a powerful man can only be killed by someone similarly powerful. The world may be governed by an evil elite—but at least it is governed. The rules may be evil, but at least there are rules.

Understanding this can give you some sympathy for people who profess conspiracies: They are struggling to cope with the pain of living in a chaotic, unpredictable, disorderly world. They cannot deny that terrible events happen, but by attributing them to unseen, organized forces, they can at least believe that those terrible events are part of some kind of orderly plan.


At the same time, you must constantly guard against seeming arrogant or condescending. (This is where I usually fail; it’s so hard for me to take these ideas seriously.) You must present yourself as open-minded and interested in speaking in good faith. If they sense that you aren’t taking them seriously, people will simply shut down and refuse to talk any further.

It’s also important to recognize that most people with bizarre beliefs aren’t simply gullible. It isn’t that they believe whatever anyone tells them. On the contrary, they seem to suffer from misplaced skepticism: They doubt the credible sources and believe the unreliable ones. They are hyper-aware of the genuine problems with mainstream sources, and yet somehow totally oblivious to the far more glaring failures of the sources they themselves trust.

Moreover, you should never expect to change someone’s worldview in a single conversation. That simply isn’t how human beings work. The only times I have ever seen anyone completely change their opinion on something in a single sitting involved mathematical proofs—showing a proper proof really can flip someone’s opinion all by itself. Yet even scientists working in their own fields of expertise generally require multiple sources of evidence, combined over some period of time, before they will truly change their minds.

Your goal, then, should not be to convince someone that their bizarre belief is wrong. Rather, convince them that some of the sources they trust are just as unreliable as the ones they doubt. Or point out some gaps in the story they hadn’t considered. Or offer an alternative account of events that explains the outcome without requiring the existence of a secret evil cabal. Don’t try to tear down the entire wall all at once; chip away at it, one little piece at a time—and one day, it will crumble.

Hopefully if we do this enough, we can make useful political conversation possible.

We do seem to have better angels after all

Jun 18 JDN 2460114

A review of The Darker Angels of Our Nature

(I apologize for not releasing this on Sunday; I’ve been traveling lately and haven’t found much time to write.)

Since its release, I have considered Steven Pinker’s The Better Angels of Our Nature among a small elite category of truly great books—not simply good because enjoyable, informative, or well-written, but great in its potential impact on humanity’s future. Others include The General Theory of Employment, Interest, and Money, On the Origin of Species, and Animal Liberation.

But I also try to expose myself as much as I can to alternative views. I am quite fearful of the echo chambers that social media puts us in, where dissent is quietly hidden from view and groupthink prevails.

So when I saw that a group of historians had written a scathing critique of The Better Angels, I decided I surely must read it and get its point of view. This book is The Darker Angels of Our Nature.

The Darker Angels is written by a large number of different historians, and it shows. It’s an extremely disjointed book; it does not present any particular overall argument, various sections differ wildly in scope and tone, and sometimes they even contradict each other. It really isn’t a book in the usual sense; it’s a collection of essays whose only common theme is that they disagree with Steven Pinker.

In fact, even that isn’t quite true, as some of the best essays in The Darker Angels are actually the ones that don’t fundamentally challenge Pinker’s contention that global violence has been on a long-term decline for centuries and is now near its lowest in human history. These essays instead offer interesting insights into particular historical eras, such as medieval Europe, early modern Russia, and shogunate Japan. Or they add additional nuances to the overall pattern, like the fact that, compared to medieval times, violence in Europe seems to have been lower in the Pax Romana (before) and greater in the early modern period (after). This shows that the decline in violence was not simple or steady, but went through fluctuations and reversals as societies and institutions changed. (At this point I feel I should note that Pinker clearly would not disagree with this—several of the authors seem to think he would, which makes me wonder if they even read The Better Angels.)

Others point out that the scale of civilization seems to matter, that more is different, and larger societies and armies more or less automatically seem to result in lower fatality rates by some sort of scaling or centralization effect, almost like the square-cube law. That’s very interesting if true; it would suggest that in order to reduce violence, you don’t really need any particular mode of government, you just need something that unites as many people as possible under one banner. The evidence presented for it was too weak for me to say whether it’s really true, however, and there was really no theoretical mechanism proposed whatsoever.

Some of the essays correct genuine errors Pinker made, some of which look rather sloppy. Pinker clearly overestimated the death tolls of the An Lushan Rebellion, the Spanish Inquisition, and Aztec ritual executions, probably by using outdated or biased sources. (Though they were all still extremely violent!) His depiction of indigenous cultures does paint with a very broad brush, and fails to recognize that some indigenous societies seem to have been quite peaceful (though others absolutely were tremendously violent).

One of the best essays is about Pinker’s cavalier attitude toward mass incarceration, which I absolutely do consider a deep flaw in Pinker’s view. Pinker presents increased incarceration rates along with decreased crime rates as if they were an unalloyed good, while I can at best be ambivalent about whether the benefit of decreasing crime is worth the cost of greater incarceration. Pinker seems to take for granted that these incarcerations are fair and impartial, when we have a great deal of evidence that they are strongly biased against poor people and people of color.

There’s another good essay about the Enlightenment, which Pinker seems to idealize a little too much (especially in his other book Enlightenment Now). There was no sudden triumph of reason that instantly changed the world. Human knowledge and rationality gradually improved over a very long period of time, with no obvious turning point and many cases of backsliding. The scientific method isn’t a simple, infallible algorithm that suddenly appeared in the brain of Galileo or Bayes, but a whole constellation of methods and concepts of rationality that took centuries to develop and is in fact still developing. (Much as the Tao that can be told is not the eternal Tao, the scientific method that can be written in a textbook is not the true scientific method.)

Several of the essays point out the limitations of historical and (especially) archaeological records, making it difficult to draw any useful inferences about rates of violence in the past. I agree that Pinker seems a little too cavalier about this; the records really are quite sparse and it’s not easy to fill in the gaps. Very small samples can easily distort homicide rates; since only about 1% of deaths worldwide are homicide, if you find 20 bodies, whether or not one of them was murdered is the difference between peaceful Japan and war-torn Colombia.
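To see just how treacherous those small samples are, here is a quick back-of-the-envelope simulation (my own illustrative sketch, using the rough figures from above: a true homicide fraction of about 1% and a find of 20 bodies):

```python
import random

random.seed(42)

TRUE_HOMICIDE_RATE = 0.01   # roughly 1% of deaths worldwide are homicides
SAMPLE_SIZE = 20            # a small archaeological find
TRIALS = 100_000

# For each simulated dig, count how many of the 20 "bodies" were homicides,
# then convert that count into an estimated homicide rate.
estimates = []
for _ in range(TRIALS):
    homicides = sum(random.random() < TRUE_HOMICIDE_RATE
                    for _ in range(SAMPLE_SIZE))
    estimates.append(homicides / SAMPLE_SIZE)

# How often does a dig contain at least one homicide?  Any such dig implies
# an estimated rate of at least 1/20 = 5%, five times the true rate.
p_at_least_one = sum(e > 0 for e in estimates) / TRIALS
print(f"Digs with at least one homicide: {p_at_least_one:.1%}")
print(f"Smallest nonzero estimate: {min(e for e in estimates if e > 0):.0%}")
```

Because the smallest nonzero rate a 20-body sample can yield is 5%, roughly one dig in five (1 − 0.99²⁰ ≈ 18%) will make the society look five times as violent as it really was, while the rest will make it look perfectly peaceful.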

On the other hand, all we really can do is make the best inferences we have with the available data, and for the time periods in which we do have detailed records—surely true since at least the 19th century—the pattern of declining violence is very clear, and even the World Wars look like brief fluctuations rather than fundamental reversals. Contrary to popular belief, the World Wars do not appear to have been especially deadly on a per-capita basis, compared to various historic wars. The primary reason so many people died in the World Wars was really that there just were more people in the world. A few of the authors don’t seem to consider this an adequate reason, but ask yourself this: Would you rather live in a society of 100 in which 10 people are killed, or a society of 1 billion in which 1 million are killed? In the former case your chances of being killed are 10%; in the latter, 0.1%. Clearly, per-capita measures of violence are the correct ones.

Some essays seem a bit beside the point, like one on “environmental violence” which quite aptly details the ongoing—terrifying—degradation of our global ecology, but somehow seems to think that this constitutes violence when it obviously doesn’t. There is widespread violence against animals, certainly; slaughterhouses are the obvious example—and unlike most people, I do not consider them some kind of exception we can simply ignore. We do in fact accept levels of cruelty to pigs and cows that we would never accept against dogs or horses—even the law makes such exceptions. Moreover, plenty of habitat destruction is accompanied by killing of the animals who lived in that habitat. But ecological degradation is not equivalent to violence. (Nor is it clear to me that our treatment of animals is more violent overall today than in the past; I guess life is probably worse for a beef cow today than it was in the medieval era, but either way, she was going to be killed and eaten. And at least we no longer do cat-burning.) Drilling for oil can be harmful, but it is not violent. We can acknowledge that life is more peaceful now than in the past without claiming that everything is better now—one could even try to argue that overall life isn’t better, but I think they’d be hard-pressed to make that case.

These are the relatively good essays, which correct minor errors or add interesting nuances. There are also some really awful essays in the mix.

A common theme of several of the essays seems to be “there are still bad things, so we can’t say anything is getting better”; they will point out various forms of violence that undeniably still exist, and treat this as a conclusive argument against the claim that violence has declined. Yes, modern slavery does exist, and it is a very serious problem; but it clearly is not the same kind of atrocity that the Atlantic slave trade was. Yes, there are still murders. Yes, there are still wars. Probably these things will always be with us to some extent; but there is a very clear difference between 500 homicides per million people per year and 50—and it would be better still if we could bring it down to 5.

There’s one essay about sexual violence that doesn’t present any evidence whatsoever to contradict the claim that rates of sexual violence have been declining while rates of reporting and prosecution have been increasing. (These two trends together often result in reported rapes going up, but most experts agree that actual rapes are going down.) The entire essay is based on anecdote, innuendo, and righteous anger.

There are several essays that spend their whole time denouncing neoliberal capitalism (not even presenting any particularly good arguments against it, though such arguments do exist), seeming to equate Pinker’s view with some kind of Rothbardian anarcho-capitalism when in fact Pinker is explicitly in favor of Nordic-style social democracy. (One literally dismisses his support for universal healthcare as “Well, he is Canadian”.) But Pinker has on occasion said good things about capitalism, so clearly, he is an irredeemable monster.

Right in the introduction—which almost made me put the book down—is an astonishingly ludicrous argument, which I must quote in full to show you that it is not out of context:

What actually is violence (nowhere posed or answered in The Better Angels)? How do people perceive it in different time-place settings? What is its purpose and function? What were contemporary attitudes toward violence and how did sensibilities shift over time? Is violence always ‘bad’ or can there be ‘good’ violence, violence that is regenerative and creative?

The Darker Angels of Our Nature, p.16

Yes, the scare quotes on ‘good’ and ‘bad’ are in the original. (Also the baffling jargon “time-place settings” as opposed to, say, “times and places”.) This was clearly written by a moral relativist. Aside from questioning whether we can say anything about anything, the argument seems to be that Pinker’s argument is invalid because he didn’t precisely define every single relevant concept, even though it’s honestly pretty obvious what the word “violence” means and how he is using it. (If anything, it’s these authors who don’t seem to understand what the word means; they keep calling things “violence” that are indeed bad, but obviously aren’t violence—like pollution and cyberbullying. At least talk of incarceration as “structural violence” isn’t obvious nonsense—though it is still clearly distinct from murder rates.)

But it was by reading the worst essays that I think I gained the most insight into what this debate is really about. Several of the essays in The Darker Angels thoroughly and unquestioningly share the following inference: if a culture is superior, then that culture has a right to impose itself on others by force. On this, they seem to agree with the imperialists: If you’re better, that gives you a right to dominate everyone else. They rightly reject the claim that cultures have a right to imperialistically dominate others, but they cannot deny the inference, and so they are forced to deny that any culture can ever be superior to another. The result is that they tie themselves in knots trying to justify how greater wealth, greater happiness, less violence, and babies not dying aren’t actually good things. They end up talking nonsense about “violence that is regenerative and creative”.

But we can believe in civilization without believing in colonialism. And indeed that is precisely what I (along with Pinker) believe: That democracy is better than autocracy, that free speech is better than censorship, that health is better than illness, that prosperity is better than poverty, that peace is better than war—and therefore that Western civilization is doing a better job than the rest. I do not believe that this justifies the long history of Western colonial imperialism. Governing your own country well doesn’t give you the right to invade and dominate other countries. Indeed, part of what makes colonial imperialism so terrible is that it makes a mockery of the very ideals of peace, justice, and freedom that the West is supposed to represent.

I think part of the problem is that many people see the world in zero-sum terms, and believe that the West’s prosperity could only be purchased by the rest of the world’s poverty. But this is untrue. The world is nonzero-sum. My happiness does not come from your sadness, and my wealth does not come from your poverty. In fact, even the West was poor for most of history, and we are far more prosperous now that we have largely abandoned colonial imperialism than we ever were in imperialism’s heyday. (I do occasionally encounter British people who seem vaguely nostalgic for the days of the empire, but real median income in the UK has doubled just since 1977. Inequality has also increased during that time, which is definitely a problem; but the UK is undeniably richer now than it ever was at the peak of the empire.)

In fact it could be that the West is richer now because of colonialism than it would have been without it. I don’t know whether or not this is true. I suspect it isn’t, but I really don’t know for sure. My guess would be that colonized countries are poorer, but colonizer countries are not richer—that is, colonialism is purely destructive. Certain individuals clearly got richer by such depredation (Leopold II, anyone?), but I’m not convinced many countries did.

Yet even if colonialism did make the West richer, it clearly cannot explain most of the wealth of Western civilization—for that wealth simply did not exist in the world before. All these bridges and power plants, laptops and airplanes weren’t lying around waiting to be stolen. Surely, some of the ingredients were stolen—not least, the land. Had they been bought at fair prices, the result might have been less wealth for us (then again it might not, for wealthier trade partners yield greater exports). But this does not mean that the products themselves constitute theft, nor that the wealth they provide is meaningless. Perhaps we should find some way to pay reparations; undeniably, we should work toward greater justice in the future. But we do not need to give up all we have in order to achieve that justice.

There is a law of conservation of energy. It is impossible to create energy in one place without removing it from another. There is no law of conservation of prosperity. Making the world better in one place does not require making it worse in another.

Progress is real. Yes, it is flawed, uneven, and it has costs of its own; but it is real. If we want to have more of it, we best continue to believe in it. And The Better Angels of Our Nature does have some notable flaws, but it still retains its place among truly great books.

Statisticacy

Jun 11 JDN 2460107

I wasn’t able to find a dictionary that includes the word “statisticacy”, but it doesn’t trigger my spell-check, and it does seem to have the same form as “numeracy”: numeric, numerical, numeracy, numerate; statistic, statistical, statisticacy, statisticate. It definitely still sounds very odd to my ears. Perhaps repetition will eventually make it familiar.

For the concept is clearly a very important one. Literacy and numeracy are no longer a serious problem in the First World; basically every adult at this point knows how to read and do addition. Even worldwide, 90% of men and 83% of women can read, at least at a basic level—which is an astonishing feat of our civilization by the way, well worthy of celebration.

But I have noticed a disturbing lack of, well, statisticacy. Even intelligent, educated people seem… pretty bad at understanding statistics.

I’m not talking about sophisticated econometrics here; of course most people don’t know that, and don’t need to. (Most economists don’t know that!) I mean quite basic statistical knowledge.

A few years ago I wrote a post called “Statistics you should have been taught in high school, but probably weren’t”; that’s the kind of stuff I’m talking about.

As part of being a good citizen in a modern society, every adult should understand the following:

1. The difference between a mean and a median, and why average income (mean) can increase even though most people are no richer (median).

2. The difference between increasing by X% and increasing by X percentage points: If inflation goes from 4% to 5%, that is an increase of 20% ((5/4-1)*100%), but only 1 percentage point (5%-4%).

3. The meaning of standard error, and how to interpret error bars on a graph—and why it’s a huge red flag if there aren’t any error bars on a graph.

4. Basic probabilistic reasoning: Given some scratch paper, a pen, and a calculator, everyone should be able to work out the odds of drawing a given blackjack hand, or rolling a particular number on a pair of dice. (If that’s too easy, make it a poker hand and four dice. But mostly that’s just more calculation effort, not fundamentally different.)

5. The meaning of exponential growth rates, and how they apply to economic growth and compound interest. (The difference between 3% interest and 6% interest over 30 years is more than double the total amount paid.)

I see people making errors about this sort of thing all the time.

Economic news that celebrates rising GDP but wonders why people aren’t happier (when real median income has been falling since 2019 and is only 7% higher than it was in 1999, an annual growth rate of 0.2%).

Reports on inflation, interest rates, or poll numbers that don’t clearly specify whether they are dealing with percentages or percentage points. (XKCD made fun of this.)

Speaking of poll numbers, any reporting on changes in polls that aren’t at least twice the margin of error of the polls in question. (There’s also a comic for this; this time it’s PhD Comics.)

People misunderstanding interest rates and gravely underestimating how much they’ll pay for their debt (then again, this is probably the result of strategic choices on the part of banks—so maybe the real failure is regulatory).

And, perhaps worst of all, the plague of science news articles about “New study says X”. Things causing and/or curing cancer, things correlated with personality types, tiny psychological nudges that supposedly have profound effects on behavior.

Some of these things will even turn out to be true; actually I think this one on fibromyalgia, this one on smoking, and this one on body image are probably accurate. But even if it’s a properly randomized experiment—and especially if it’s just a regression analysis—a single study ultimately tells us very little, and it’s irresponsible to report on them instead of telling people the extensive body of established scientific knowledge that most people still aren’t aware of.

Basically any time an article is published saying “New study says X”, a statisticate person should ignore it and treat it as random noise. This is especially true if the finding seems weird or shocking; such findings are far more likely to be random flukes than genuine discoveries. Yes, they could be true, but one study just doesn’t move the needle that much.

I don’t remember where it came from, but there is a saying about this: “What is in the textbooks is 90% true. What is in the published literature is 50% true. What is in the press releases is 90% false.” These figures are approximately correct.

If their goal is to advance public knowledge of science, science journalists would accomplish a lot more if they just opened to a random page in a mainstream science textbook and started reading it on air. Admittedly, I can see how that would be less interesting to watch; but then, their job should be to find a way to make it interesting, not to take individual studies out of context and hype them up far beyond what they deserve. (Bill Nye did this much better than most science journalists.)

I’m not sure how much to blame people for lacking this knowledge. On the one hand, they could easily look it up on Wikipedia, and apparently choose not to. On the other hand, they probably don’t even realize how important it is, and were never properly taught it in school even though they should have been. Many of these things may even be unknown unknowns; people simply don’t realize how poorly they understand. Maybe the most useful thing we could do right now is simply point out to people that these things are important, and if they don’t understand them, they should get on that Wikipedia binge as soon as possible.

And one last thing: Maybe this is asking too much, but I think that a truly statisticate person should be able to solve the Monty Hall Problem and not be confused by the result. (Hint: It’s very important that Monty Hall knows which door the car is behind, and would never open that one. If he’s guessing at random and simply happens to pick a goat, the correct answer is 1/2, not 2/3. Then again, it’s never a bad choice to switch.)
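If the verbal argument about Monty’s knowledge doesn’t convince you, a simulation will. Here is a minimal Python sketch of both variants, under the standard puzzle setup (three doors, one car, you always switch); the function name and structure are my own:

```python
import random

def monty_hall_switch_rate(trials=100_000, host_knows=True):
    """Estimate the win rate for always switching. If host_knows, Monty
    always opens a goat door; otherwise he opens a random unpicked door,
    and trials where he accidentally reveals the car are discarded
    (they don't match the puzzle as stated)."""
    wins, valid = 0, 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        others = [d for d in range(3) if d != pick]
        if host_knows:
            opened = next(d for d in others if d != car)
        else:
            opened = random.choice(others)
            if opened == car:
                continue  # car revealed by accident; skip this trial
        switched = next(d for d in range(3) if d not in (pick, opened))
        valid += 1
        wins += (switched == car)
    return wins / valid

# monty_hall_switch_rate(host_knows=True)   # converges to 2/3
# monty_hall_switch_rate(host_knows=False)  # converges to 1/2
```

Conditioning is doing all the work in the second variant: discarding the trials where the random host reveals the car is exactly the difference between a host who knows and one who guesses.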

We ignorant, incompetent gods

May 21 JDN 2460086

A review of Homo Deus

The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.

E.O. Wilson

Homo Deus is a very good read—and despite its length, a quick one; as you can see, I read it cover to cover in a week. Yuval Noah Harari’s central point is surely correct: Our technology is reaching a threshold where it grants us unprecedented power and forces us to ask what it means to be human.

Biotechnology and artificial intelligence are now advancing so rapidly that advancements in other domains, such as aerospace and nuclear energy, seem positively mundane. Who cares about making flight or electricity a bit cleaner when we will soon have the power to modify ourselves or we’ll all be replaced by machines?

Indeed, we already have technology that would have seemed to ancient people like the powers of gods. We can fly; we can witness or even control events thousands of miles away; we can destroy mountains; we can wipe out entire armies in an instant; we can even travel into outer space.

Harari rightly warns us that our not-so-distant descendants are likely to have powers that we would see as godlike: Immortality, superior intelligence, self-modification, the power to create life.

And while it is scary to think about what they might do with that power if they think the way we do—as ignorant and foolish and tribal as we are—Harari points out that it is equally scary to think about what they might do if they don’t think the way we do—for then, how do they think? If their minds are genetically modified or even artificially created, who will they be? What values will they have, if not ours? Could they be better? What if they’re worse?

It is of course difficult to imagine values better than our own—if we thought those values were better, we’d presumably adopt them. But we should seriously consider the possibility, since presumably most of us believe that our values today are better than what most people’s values were 1000 years ago. If moral progress continues, does it not follow that people’s values will be better still 1000 years from now? Or at least that they could be?

I also think Harari overestimates just how difficult it is to anticipate the future. This may be a useful overcorrection; the world is positively infested with people making overprecise predictions about the future, often selling them for exorbitant fees (note that Harari was quite well-compensated for this book as well!). But our values are not so fundamentally alien from those of our forebears, and we have reason to suspect that our descendants’ values will be no more different from ours than ours are from theirs.

For instance, do you think that medieval people thought suffering and death were good? I assure you they did not. Nor did they believe that the supreme purpose in life is eating cheese. (They didn’t even believe the Earth was flat!) They did not have the concept of GDP, but they could surely appreciate the value of economic prosperity.

Indeed, our world today looks very much like a medieval peasant’s vision of paradise. Boundless food in endless variety. Near-perfect security against violence. Robust health, free from nearly all infectious disease. Freedom of movement. Representation in government! The land of milk and honey is here; there they are, milk and honey on the shelves at Walmart.

Of course, our paradise comes with caveats: Not least, we are by no means free of toil, but instead have invented whole new kinds of toil they could scarcely have imagined. If anything I would have to guess that coding a robot or recording a video lecture probably isn’t substantially more satisfying than harvesting wheat or smithing a sword; and reconciling receivables and formatting spreadsheets is surely less. Our tasks are physically much easier, but mentally much harder, and it’s not obvious which of those is preferable. And we are so very stressed! It’s honestly bizarre just how stressed we are, given the abundance in which we live; there is no reason for our lives to have stakes so high, and yet somehow they do. It is perhaps this stress and economic precarity that prevents us from feeling such joy as the medieval peasants would have imagined for us.

Of course, we don’t agree with our ancestors on everything. The medieval peasants were surely more religious, more ignorant, more misogynistic, more xenophobic, and more racist than we are. But projecting that trend forward mostly means less ignorance, less misogyny, less racism in the future; it means that future generations should see the whole world catch up to what the best of us already believe and strive for—hardly something to fear. The values that I believe in are surely not yet what we as a civilization act upon, and I sorely wish they were. Perhaps someday they will be.

I can even imagine something that I myself would recognize as better than me: Me, but less hypocritical. Strictly vegan rather than lacto-ovo-vegetarian, or at least more consistent about only buying free range organic animal products. More committed to ecological sustainability, more willing to sacrifice the conveniences of plastic and gasoline. Able to truly respect and appreciate all life, even humble insects. (Though perhaps still not mosquitoes; this is war. They kill more of us than any other animal, including us.) Not even casually or accidentally racist or sexist. More courageous, less burnt out and apathetic. I don’t always live up to my own ideals. Perhaps someday someone will.

Harari fears something much darker, that we will be forced to give up on humanist values and replace them with a new techno-religion he calls Dataism, in which the supreme value is efficient data processing. I see very little evidence of this. If it feels like data is worshipped these days, it is only because data is profitable. Amazon and Google constantly seek out ever richer datasets and ever faster processing because that is how they make money. The real subject of worship here is wealth, and that is nothing new. Maybe there are some die-hard techno-utopians out there who long for us all to join the unified oversoul of all optimized data processing, but I’ve never met one, and they are clearly not the majority. (Harari also uses the word ‘religion’ in an annoyingly overbroad sense; he refers to communism, liberalism, and fascism as ‘religions’. Ideologies, surely; but religions?)

Harari in fact seems to think that ideologies are strongly driven by economic structures, so maybe he would even agree that it’s about profit for now, but thinks it will become religion later. But I don’t really see history fitting this pattern all that well. If monotheism is directly tied to the formation of organized bureaucracy and national government, then how did Egypt and Rome last so long with polytheistic pantheons? If atheism is the natural outgrowth of industrialized capitalism, then why are Africa and South America taking so long to get the memo? I do think that economic circumstances can constrain culture and shift what sort of ideas become dominant, including religious ideas; but there clearly isn’t this one-to-one correspondence he imagines. Moreover, there was never Coalism or Oilism aside from the greedy acquisition of these commodities as part of a far more familiar ideology: capitalism.

He also claims that all of science is now, or is close to, following a united paradigm under which everything is a data processing algorithm, which suggests he has not met very many scientists. Our paradigms remain quite varied, thank you; and if they do all have certain features in common, it’s mainly things like rationality, naturalism and empiricism that are more or less inherent to science. It’s not even the case that all cognitive scientists believe in materialism (though it probably should be); there are still dualists out there.

Moreover, when it comes to values, most scientists believe in liberalism. This is especially true if we use Harari’s broad sense (on which mainline conservatives and libertarians are ‘liberal’ because they believe in liberty and human rights), but even in the narrow sense of center-left. We are by no means converging on a paradigm where human life has no value because it’s all just data processing; maybe some scientists believe that, but definitely not most of us. If scientists ran the world, I can’t promise everything would be better, but I can tell you that Bush and Trump would never have been elected and we’d have a much better climate policy in place by now.

I do share many of Harari’s fears of the rise of artificial intelligence. The world is clearly not ready for the massive economic disruption that AI is going to cause all too soon. We still define a person’s worth by their employment, and think of ourselves primarily as a collection of skills; but AI is going to make many of those skills obsolete, and may make many of us unemployable. It would behoove us to think in advance about who we truly are and what we truly want before that day comes. I used to think that creative intellectual professions would be relatively secure; ChatGPT and Midjourney changed my mind. Even writers and artists may not be safe much longer.

Harari is so good at sympathetically explaining other views that it becomes a fault. At times it is actually difficult to know whether he himself believes something and wants you to, or if he is just steelmanning someone else’s worldview. There’s a whole section on ‘evolutionary humanism’ where he details a worldview that is at best Nietzschean and at worst Nazi, but he makes it sound so seductive. I don’t think it’s what he believes, in part because he has similarly good things to say about liberalism and socialism—but it’s honestly hard to tell.

The weakest part of the book is when Harari talks about free will. Like most people, he just doesn’t get compatibilism. He spends a whole chapter talking about how science ‘proves we have no free will’, and it’s just the same old tired arguments hard determinists have always made.

He talks about how we can make choices based on our desires, but we can’t choose our desires; well of course we can’t! What would that even mean? If you could choose your desires, what would you choose them based on, if not your desires? Your desire-desires? Well, then, can you choose your desire-desires? What about your desire-desire-desires?

What even is this ultimate uncaused freedom that libertarian free will is supposed to consist in? No one seems capable of even defining it. (I’d say Kant got the closest: He defined it as the capacity to act based upon what ought rather than what is. But of course what we believe about ‘ought’ is fundamentally stored in our brains as a particular state, a way things are—so in the end, it’s an ‘is’ we act on after all.)

Maybe before you lament that something doesn’t exist, you should at least be able to describe that thing as a coherent concept? Woe is me, that 2 plus 2 is not equal to 5!

It is true that as our technology advances, manipulating other people’s desires will become more and more feasible. Harari overstates the case on so-called robo-rats; they aren’t really mind-controlled, it’s more like they are rewarded and punished. The rat chooses to go left because she knows you’ll make her feel good if she does; she’s still freely choosing to go left. (Dangling a carrot in front of a horse is fundamentally the same thing—and frankly, paying a wage isn’t all that different.) The day may yet come where stronger forms of control become feasible, and woe betide us when it does. Yet this is no threat to the concept of free will; we already knew that coercion was possible, and mind control is simply a more precise form of coercion.

Harari reports on a lot of interesting findings in neuroscience, which are important for people to know about, but they do not actually show that free will is an illusion. What they do show is that free will is thornier than most people imagine. Our desires are not fully unified; we are often ‘of two minds’ in a surprisingly literal sense. We are often tempted by things we know are wrong. We often aren’t sure what we really want. Every individual is in fact quite divisible; we literally contain multitudes.

We do need a richer account of moral responsibility that can deal with the fact that human beings often feel multiple conflicting desires simultaneously, and often experience events differently than we later go on to remember them. But at the end of the day, human consciousness is mostly unified, our choices are mostly rational, and our basic account of moral responsibility is mostly valid.

I think for now we should perhaps be less worried about what may come in the distant future, what sort of godlike powers our descendants may have—and more worried about what we are doing with the godlike powers we already have. We have the power to feed the world; why aren’t we? We have the power to save millions from disease; why don’t we? I don’t see many people blindly following this ‘Dataism’, but I do see an awful lot blindly following a 19th-century vision of capitalism.

And perhaps if we straighten ourselves out, the future will be in better hands.

Reckoning costs in money distorts them

May 7 JDN 2460072

Consider for a moment what it means when an economic news article reports “rising labor costs”. What are they actually saying?

They’re saying that wages are rising—perhaps in some industry, perhaps in the economy as a whole. But this is not a cost. It’s a price. As I’ve written about before, the two are fundamentally distinct.

The cost of labor is measured in effort, toil, and time. It’s the pain of having to work instead of whatever else you’d like to do with your time.

The price of labor is a monetary amount, which is delivered in a transaction.

This may seem perfectly obvious, but it has important and oft-neglected implications. A cost, once paid, is gone. That value has been destroyed. We hope that it was worth it for some benefit we gained. A price, when paid, is simply transferred: One person had that money before, now someone else has it. Nothing was gained or lost.

So in fact when reports say that “labor costs have risen”, what they are really saying is that income is being transferred from owners to workers without any change in real value taking place. They are framing as a loss what is fundamentally a zero-sum redistribution.

In fact, it is disturbingly common to see a fundamentally good redistribution of income framed in the press as a bad outcome because of its expression as “costs”; the “cost” of chocolate is feared to go up if we insist upon enforcing bans on forced labor—when in fact it is only the price that goes up, and the cost actually goes down: chocolate would no longer include complicity in an atrocity. The real suffering of making chocolate would be thereby reduced, not increased. Even when they aren’t literally enslaved, those workers are astonishingly poor, and giving them even a few more cents per hour would make a real difference in their lives. But God forbid we pay a few cents more for a candy bar!

If labor costs were to rise, that would mean that work had suddenly gotten harder, or more painful; or else, that some outside circumstance had made it more difficult to work. Having a child increases your labor costs—you now have the opportunity cost of not caring for the child. COVID increased the cost of labor, by making it suddenly dangerous just to go outside in public. That could also increase prices—you may demand a higher wage, and people do seem to have demanded higher wages after COVID. But these are two separate effects, and you can have one without the other. In fact, women typically see wage stagnation or even reduction after having kids (but men largely don’t), despite their real opportunity cost of labor having obviously greatly increased.

On an individual level, it’s not such a big mistake to equate price and cost. If you are buying something, its cost to you basically just is its price, plus a little bit of transaction cost for actually finding and buying it. But on a societal level, it makes an enormous difference. It distorts our policy priorities and can even lead to actively trying to suppress things that are beneficial—such as rising wages.

This false equivalence between price and costs seems to be at least as common among economists as it is among laypeople. Economists will often justify it on the grounds that in an ideal perfect competitive market the two would be in some sense equated. But of course we don’t live in that ideal perfect market, and even if we did, they would only be proportional at the margin, not fundamentally equal across the board. It would still be obviously wrong to characterize the total value or cost of work by the price paid for it; only the last unit of effort would be priced so that marginal value equals price equals marginal cost. The first 39 hours of your work would cost you less than what you were paid, and produce more than you were paid; only that 40th hour would set the three equal.
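A toy numeric sketch makes that marginal logic concrete. The cost and value schedules below are entirely made up, chosen so that the crossing lands exactly at hour 40 at a $20 wage:

```python
# Hypothetical worker: the marginal cost of each hour of effort rises,
# while the marginal value produced falls. At a $20 wage, every hour
# before the 40th costs less than $20 and produces more than $20;
# only the 40th hour sets marginal value = price = marginal cost.
WAGE = 20.0

def marginal_cost(hour):    # rises from $0.50 at hour 1 to $20 at hour 40
    return 0.5 * hour

def marginal_value(hour):   # falls from $59 at hour 1 to $20 at hour 40
    return 60 - hour

hours_supplied = max(h for h in range(1, 100)
                     if marginal_cost(h) <= WAGE and marginal_value(h) >= WAGE)
# hours_supplied == 40; equality at the margin says nothing about
# the total cost or total value of the other 39 hours.
```

The point of the sketch: even in this idealized case, the wage only pins down the margin. Summing the schedules over all 40 hours gives a total cost well below, and a total value well above, what was actually paid.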

Once you account for all the various market distortions in the world, there’s no particular relationship between what something costs—in terms of real effort and suffering—and its price—in monetary terms. Things can be expensive and easy, or cheap and awful. In fact, they often seem to be; for some reason, there seems to be a pattern where the most terrible, miserable jobs (e.g. coal mining) actually pay the least, and the easiest, most pleasant jobs (e.g. stock trading) pay the most. Some jobs that benefit society pay well (e.g. doctors) and others pay terribly or not at all (e.g. climate activists). Some actions that harm the world get punished (e.g. armed robbery) and others get rewarded with riches (e.g. oil drilling). In the real world, whether a job is good or bad and whether it is paid well or poorly seem to be almost unrelated.

In fact, sometimes they seem even negatively related, where we often feel tempted to “sell out” and do something destructive in order to get higher pay. This is likely due to Berkson’s paradox: If people are willing to do jobs if they are either high-paying or beneficial to humanity, then we should expect that, on average, most of the high-paying jobs people do won’t be beneficial to humanity. Even if there were inherently no correlation or a small positive one, people’s refusal to do harmful low-paying work removes those jobs from our sample and results in a negative correlation in what remains.
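Berkson’s paradox is easy to see in a simulation. Here is a minimal Python sketch: pay and social benefit are drawn independently (true correlation zero), people only take jobs that are high-paying or beneficial, and a negative correlation appears among the jobs actually taken. The 0.7 thresholds are arbitrary:

```python
import random
import statistics

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
# Pay and benefit drawn independently: no true relationship at all.
jobs = [(random.random(), random.random()) for _ in range(50_000)]

# People only take jobs that are high-paying OR beneficial;
# harmful low-paying jobs simply go undone and vanish from the sample.
taken = [(pay, good) for pay, good in jobs if pay > 0.7 or good > 0.7]

r_all = correlation(*zip(*jobs))     # near zero
r_taken = correlation(*zip(*taken))  # clearly negative
```

The selection carves an L-shaped region out of the square of possible jobs, and within that region high pay predicts low benefit, exactly the “sell out” pattern described above.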

I think that the best solution, ultimately, is to stop reckoning costs in money entirely. We should reckon them in happiness.

This is of course much more difficult than simply using prices; it’s not easy to say exactly how many QALY are sacrificed in the extraction of cocoa beans or the drilling of offshore oil wells. But if we actually did find a way to count them, I strongly suspect we’d find that it was far more than we ought to be willing to pay.

A very rough approximation, surely flawed but at least a start, would be to simply convert all payments into proportions of their recipient’s income. For full-time wages, this would result in basically everyone being counted the same: if you work 40 hours per week, 50 weeks per year, then 1 hour of work is precisely 0.05% of your annual income. So we could say that whatever is equivalent to your hourly wage constitutes 500 microQALY.

This automatically implies that every time a rich person pays a poor person, QALY increase, while every time a poor person pays a rich person, QALY decrease. This is not an error in the calculation. It is a fact of the universe. We ignore it only at our own peril. All wealth redistributed downward is a benefit, while all wealth redistributed upward is a harm. That benefit may cause some other harm, or that harm may be compensated by some other benefit; but they are still there.

This would also put some things in perspective. When HSBC was fined £70 million for its crimes, that can be compared against its £1.5 billion in net income; if it were an individual, it would have been hurt about 50 milliQALY, which is about what I would feel if I lost $2000. Of course, it’s not a person, and it’s not clear exactly how this loss was passed through to employees or shareholders; but that should give us at least some sense of how small that loss was for them. They probably felt it… a little.

When Trump was ordered to pay a $1.3 million settlement, based on his $2.5 billion net wealth (corresponding to roughly $125 million in annual investment income), that cost him about 10 milliQALY; for me that would be about $500.

At the other extreme, if someone goes from making $1 per day to making $1.50 per day, that’s a 50% increase in their income—500 milliQALY per year.
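These back-of-envelope conversions can be sketched in a few lines, under the post’s admittedly crude linear assumption that a year’s income corresponds to one QALY; the helper function here is my own:

```python
def milli_qaly(amount, annual_income):
    """Rough cost in milliQALY: the amount as a fraction of the payer's
    annual income, on the (very crude) linear assumption that one year's
    income maps onto one quality-adjusted life year."""
    return amount / annual_income * 1000

# HSBC: GBP 70 million fine vs. GBP 1.5 billion net income.
hsbc = milli_qaly(70e6, 1.5e9)        # roughly 47 milliQALY

# Trump: $1.3 million settlement vs. ~$125 million annual investment income.
trump = milli_qaly(1.3e6, 125e6)      # roughly 10 milliQALY

# A worker going from $1/day to $1.50/day, over a year.
raise_ = milli_qaly(0.50 * 365, 365)  # 500 milliQALY
```

The function is deliberately naive—it ignores diminishing marginal utility within a year, household size, and much else—but even this crude version makes the asymmetry between the fines and the raise impossible to miss.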

For those who have no income at all, this becomes even trickier; for them I think we should probably use their annual consumption, since everyone needs to eat and that costs something, though likely not very much. Or we could try to measure their happiness directly, trying to determine how much it hurts to not eat enough and work all day in sweltering heat.

Properly shifting this whole cultural norm will take a long time. For now, I leave you with this: Any time you see a monetary figure, ask yourself: “How much is that worth to them?” The world will seem quite different once you get in the habit of that.

What behavioral economics needs

Apr 16 JDN 2460049

The transition from neoclassical to behavioral economics has been a vital step forward in science. But lately we seem to have reached a plateau, with no major advances in the paradigm in quite some time.

It could be that there is work already being done which will, in hindsight, turn out to be significant enough to make that next step forward. But my fear is that we are getting bogged down by our own methodological limitations.

Behavioral economics inherited from neoclassical economics an obsession with mathematical sophistication. To some extent this was inevitable; in order to impress neoclassical economists enough to convert some of them, we had to use fancy math. We had to show that we could do it their way in order to convince them why we shouldn’t—otherwise, they’d just have dismissed us the way they had dismissed psychologists for decades, as too “fuzzy-headed” to do the “hard work” of putting everything into equations.

But the truth is, putting everything into equations was never the right approach. Because human beings clearly don’t think in equations. Once we write down a utility function and get ready to take its derivative and set it equal to zero, we have already distanced ourselves from how human thought actually works.

When dealing with a simple physical system, like an atom, equations make sense. Nobody thinks that the electron knows the equation and is following it intentionally. That equation simply describes how the forces of the universe operate, and the electron is subject to those forces.

But human beings do actually know things and do things intentionally. And while an equation could be useful for analyzing human behavior in the aggregate—I’m certainly not objecting to statistical analysis—it really never made sense to say that people make their decisions by optimizing the value of some function. Most people barely even know what a function is, much less remember calculus well enough to optimize one.

Yet right now, behavioral economics is still all based in that utility-maximization paradigm. We don’t use the same simplistic utility functions as neoclassical economists; we make them more sophisticated and realistic. Yet in that very sophistication we make things more complicated, more difficult—and thus in at least that respect, even further removed from how actual human thought must operate.

The worst offender here is surely Prospect Theory. I recognize that Prospect Theory predicts human behavior better than conventional expected utility theory; nevertheless, it makes absolutely no sense to suppose that human beings actually do some kind of probability-weighting calculation in their heads when they make judgments. Most of my students—who are well-trained in mathematics and economics—can’t even do that probability-weighting calculation on paper, with a calculator, on an exam. (There’s also absolutely no reason to do it! All it does is make your decisions worse!) This is a totally unrealistic model of human thought.
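For concreteness, here is the kind of calculation Prospect Theory posits, using the Tversky–Kahneman (1992) probability-weighting function (γ = 0.61 is their estimated parameter for gains):

```python
def weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability-weighting function;
    gamma = 0.61 is their estimated parameter for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Small probabilities get overweighted, large ones underweighted:
print(round(weight(0.01), 3))  # 0.055: a 1% chance "feels" like 5.5%
print(round(weight(0.99), 3))  # 0.912: a 99% chance feels less sure than it is
```

Nobody performs this exponentiation in their head; that is exactly the point.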

This is not to say that human beings are stupid. We are still smarter than any other entity in the known universe—computers are rapidly catching up, but they haven’t caught up yet. It is just that whatever makes us smart must not be easily expressible as an equation that maximizes a function. Our thoughts are bundles of heuristics, each of which may be individually quite simple, but all of which together make us capable of not only intelligence, but something computers still sorely, pathetically lack: wisdom. Computers optimize functions better than we ever will, but we still make better decisions than they do.

I think that what behavioral economics needs now is a new unifying theory of these heuristics, which accounts for not only how they work, but how we select which one to use in a given situation, and perhaps even where they come from in the first place. This new theory will of course be complex; there are a lot of things to explain, and human behavior is a very complex phenomenon. But it shouldn’t be—mustn’t be—reliant on sophisticated advanced mathematics, because most people can’t do advanced mathematics (almost by construction—we would call it something different otherwise). If your model assumes that people are taking derivatives in their heads, your model is already broken. 90% of the world’s people can’t take a derivative.

I guess it could be that our cognitive processes in some sense operate as if they are optimizing some function. This is commonly posited for the human motor system, for instance; clearly baseball players aren’t actually solving differential equations when they throw and catch balls, but the trajectories that balls follow do in fact obey such equations, and the reliability with which baseball players can catch and throw suggests that they are in some sense acting as if they can solve them.

But I think that a careful analysis of even this classic example reveals some deeper insights that should call this whole notion into question. How do baseball players actually do what they do? They don’t seem to be calculating at all—in fact, if you asked them to try to calculate while they were playing, it would destroy their ability to play. They learn. They engage in practiced motions, acquire skills, and notice patterns. I don’t think there is anywhere in their brains that is actually doing anything like solving a differential equation. It’s all a process of throwing and catching, throwing and catching, over and over again, watching and remembering and subtly adjusting.

One thing that is particularly interesting to me about that process is that it is astonishingly flexible. It doesn’t really seem to matter what physical process you are interacting with; as long as it is sufficiently orderly, such a method will allow you to predict and ultimately control that process. You don’t need to know anything about differential equations in order to learn in this way—and, indeed, I really can’t emphasize this enough, baseball players typically don’t.
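Here is a toy sketch of that kind of learning, with all names and numbers invented for illustration: the "player" never sees the physics, only where each throw lands, and still homes in on the right angle:

```python
import math

def throw_distance(angle_deg, speed=30.0):
    """The hidden 'physics' (vacuum ballistics here, but it could be any
    sufficiently orderly process); the learner never looks inside this."""
    return speed**2 * math.sin(2 * math.radians(angle_deg)) / 9.81

def learn_angle(target_m, throws=200):
    """Throw, watch where it lands, subtly adjust. No equations solved."""
    angle = 10.0
    for _ in range(throws):
        error = throw_distance(angle) - target_m
        angle -= 0.05 * error               # nudge against the miss
        angle = max(1.0, min(44.0, angle))  # stay in a sane range
    return angle

print(round(learn_angle(60.0), 1))  # settles near 20.4 degrees
```

The same loop works unchanged if you swap in a drag-and-Magnus model for `throw_distance`; the learner never needed the equation in the first place.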

In fact, learning is so flexible that it can even perform better than calculation. The usual differential equations most people would think to use to predict the throw of a ball would assume ballistic motion in a vacuum, which is absolutely not what a curveball is. In order to throw a curveball, the ball must interact with the air, and it must be launched with spin; curving a baseball relies very heavily on the Magnus Effect. I think it’s probably possible to construct an equation that would fully predict the motion of a curveball, but it would be a tremendously complicated one, and might not even have an exact closed-form solution. In fact, I think it would require solving the Navier-Stokes equations, for which there is an outstanding Millennium Prize. Since the viscosity of air is very low, maybe you could get away with approximating using the Euler fluid equations.

To be fair, a learning process that is adapting to a system that obeys an equation will yield results that become an ever-closer approximation of that equation. And it is in that sense that a baseball player can be said to be acting as if solving a differential equation. But this relies heavily on the system in question being one that obeys an equation—and when it comes to economic systems, is that even true?

What if the reason we can’t find a simple set of equations that accurately describe the economy (as opposed to equations of ever-escalating complexity that still utterly fail to describe the economy) is that there isn’t one? What if the reason we can’t find the utility function people are maximizing is that they aren’t maximizing anything?

What behavioral economics needs now is a new approach, something less constrained by the norms of neoclassical economics and more aligned with psychology and cognitive science. We should be modeling human beings based on how they actually think, not some weird mathematical construct that bears no resemblance to human reasoning but is designed to impress people who are obsessed with math.

I’m of course not the first person to have suggested this. I probably won’t be the last, or even the one who gets listened to most. But I hope that I might get at least a few more people to listen to it, because I have gone through the mathematical gauntlet and earned my bona fides. It is too easy to dismiss this kind of reasoning from people who don’t actually understand advanced mathematics. But I do understand differential equations—and I’m telling you, that’s not how people think.

Optimization is unstable. Maybe that’s why we satisfice.

Feb 26 JDN 2460002

Imagine you have become stranded on a deserted island. You need to find shelter, food, and water, and then perhaps you can start working on a way to get help or escape the island.

Suppose you are programmed to be an optimizer: to get the absolute best solution to any problem. At first this may seem to be a boon: You’ll build the best shelter, find the best food, get the best water, find the best way off the island.

But you’ll also expend an enormous amount of effort trying to make it the best. You could spend hours just trying to decide what the best possible shelter would be. You could pass up dozens of viable food sources because you aren’t sure that any of them are the best. And you’ll never get any rest because you’re constantly trying to improve everything.

In principle your optimization could include that: The cost of thinking too hard or searching too long could be one of the things you are optimizing over. But in practice, this sort of bounded optimization is often remarkably intractable.

And what if you forgot about something? You were so busy optimizing your shelter you forgot to treat your wounds. You were so busy seeking out the perfect food source that you didn’t realize you’d been bitten by a venomous snake.

This is not the way to survive. You don’t want to be an optimizer.

No, the person who survives is a satisficer: they make sure that what they have is good enough and then they move on to the next thing. Their shelter is lopsided and ugly. Their food is tasteless and bland. Their water is hard. But they have them.

Once they have shelter and food and water, they will have time and energy to do other things. They will notice the snakebite. They will treat the wound. Once all their needs are met, they will get enough rest.

Empirically, humans are satisficers. We seem to be happier because of it—in fact, the people who are the happiest satisfice the most. And really this shouldn’t be so surprising: our ancestral environment wasn’t so different from being stranded on a deserted island.

Good enough is perfect. Perfect is bad.

Let’s consider another example. Suppose that you have created a powerful artificial intelligence, an AGI with the capacity to surpass human reasoning. (It hasn’t happened yet—but it probably will someday, and maybe sooner than most people think.)

What do you want that AI’s goals to be?

Okay, ideally maybe they would be something like “Maximize goodness”, where we actually somehow include all the panoply of different factors that go into goodness, like beneficence, harm, fairness, justice, kindness, honesty, and autonomy. Do you have any idea how to do that? Do you even know what your own full moral framework looks like at that level of detail?

Far more likely, the goals you program into the AGI will be much simpler than that. You’ll have something you want it to accomplish, and you’ll tell it to do that well.

Let’s make this concrete and say that you own a paperclip company. You want to make more profits by selling paperclips.

First of all, let me note that this is not an unreasonable thing for you to want. It is not an inherently evil goal for one to have. The world needs paperclips, and it’s perfectly reasonable for you to want to make a profit selling them.

But it’s also not a true ultimate goal: There are a lot of other things that matter in life besides profits and paperclips. Anyone who isn’t a complete psychopath will realize that.

But the AI won’t. Not unless you tell it to. And so if we tell it to optimize, we would need to actually include in its optimization all of the things we genuinely care about—not missing a single one—or else whatever choices it makes are probably not going to be the ones we want. Oops, we forgot to say we need clean air, and now we’re all suffocating. Oops, we forgot to say that puppies don’t like to be melted down into plastic.

The simplest cases to consider are obviously horrific: Tell it to maximize the number of paperclips produced, and it starts tearing the world apart to convert everything to paperclips. (This is the original “paperclipper” concept from Less Wrong.) Tell it to maximize the amount of money you make, and it seizes control of all the world’s central banks and starts printing $9 quintillion for itself. (Why that amount? I’m assuming it uses 64-bit signed integers, and 2^63 is over 9 quintillion. If it uses long ints, we’re even more doomed.) No, inflation-adjusting won’t fix that; even hyperinflation typically still results in more real seigniorage for the central banks doing the printing (which is, you know, why they do it). The AI won’t ever be able to own more than all the world’s real GDP—but it will be able to own that if it prints enough and we can’t stop it.
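That figure is easy to check (assuming, as above, a 64-bit signed integer):

```python
# Largest positive value of a 64-bit signed integer: 2^63 - 1
print(2**63)  # 9223372036854775808: just over 9.2 quintillion
```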

But even if we try to come up with some more sophisticated optimization for it to perform (what I’m really talking about here is specifying its utility function), it becomes vital for us to include everything we genuinely care about: Anything we forget to include will be treated as a resource to be consumed in the service of maximizing everything else.

Consider instead what would happen if we programmed the AI to satisfice. The goal would be something like, “Produce at least 400,000 paperclips at a price of at most $0.002 per paperclip.”

Given such an instruction, in all likelihood, it would in fact produce exactly 400,000 paperclips at a price of exactly $0.002 per paperclip. And maybe that’s not strictly the best outcome for your company. But if it’s better than what you were previously doing, it will still increase your profits.
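The difference between the two instructions can be sketched in a few lines (the candidate plans here are invented for illustration):

```python
# Hypothetical production plans: paperclips made, price per clip
plans = [
    {"units": 200_000,   "price": 0.004},   # status quo
    {"units": 400_000,   "price": 0.002},   # retool the existing machines
    {"units": 9_999_999, "price": 0.0001},  # "tear the world apart"
]

def satisfice(plans, quota, max_price):
    """Take the first plan that meets the target, then stop looking."""
    for plan in plans:
        if plan["units"] >= quota and plan["price"] <= max_price:
            return plan

def optimize(plans):
    """Chase the extreme, whatever it takes."""
    return max(plans, key=lambda p: p["units"])

print(satisfice(plans, 400_000, 0.002)["units"])  # 400000: good enough
print(optimize(plans)["units"])                   # 9999999: world-tearing
```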

Moreover, such an instruction is far less likely to result in the end of the world.

If the AI has a particular target to meet for its production quota and price limit, the first thing it would probably try is to use your existing machinery. If that’s not good enough, it might start trying to modify the machinery, or acquire new machines, or develop its own techniques for making paperclips. But there are quite strict limits on how creative it is likely to be—because there are quite strict limits on how creative it needs to be. If you were previously producing 200,000 paperclips at $0.004 per paperclip, all it needs to do is double production and halve the cost. That’s a very standard sort of industrial innovation—in computing hardware (admittedly an extreme case), we do this sort of thing every couple of years.

It certainly won’t tear the world apart making paperclips—at most it’ll tear apart enough of the world to make 400,000 paperclips, which is a pretty small chunk of the world, because paperclips aren’t that big. A paperclip weighs about a gram, so you’ve only destroyed about 400 kilos of stuff. (You might even survive the lawsuits!)

Are you leaving money on the table relative to the optimization scenario? Eh, maybe. One, it’s a small price to pay for not ending the world. But two, if 400,000 at $0.002 was too easy, next time try 600,000 at $0.001. Over time, you can gently increase its quotas and tighten its price requirements until your company becomes more and more successful—all without risking the AI going completely rogue and doing something insane and destructive.

Of course this is no guarantee of safety—and I absolutely want us to use every safeguard we possibly can when it comes to advanced AGI. But the simple change from optimizing to satisficing seems to solve the most severe problems immediately and reliably, at very little cost.

Good enough is perfect; perfect is bad.

I see broader implications here for behavioral economics. When all of our models are based on optimization, but human beings overwhelmingly seem to satisfice, maybe it’s time to stop assuming that the models are right and the humans are wrong.

Optimization is perfect if it works—and awful if it doesn’t. Satisficing is always pretty good. Optimization is unstable, while satisficing is robust.

In the real world, that probably means that satisficing is better.

Good enough is perfect; perfect is bad.

Where is the money going in academia?

Feb 19 JDN 2459995

A quandary for you:

My salary is £41,000.

Annual tuition for a full-time full-fee student in my department is £23,000.

I teach roughly the equivalent of one full-time course (about 1/2 of one and 1/4 of two others; this is typically counted as “teaching 3 courses”, but if I used that figure, it would underestimate the number of faculty needed).

Each student takes about 5 or 6 courses at a time.

Why do I have 200 students?

If you multiply this out, the 200 students I teach, divided by the 6 instructors they have at one time, times the £23,000 they are paying… I should be bringing in over £760,000 for the university. Why am I paid only 5% of that?

Granted, there are other costs a university must bear aside from paying instructors. There are facilities, and administration, and services. And most of my students are not full-fee paying; that £23,000 figure really only applies to international students.

Students from Scotland pay only £1,820, but there aren’t very many of them, and public funding is supposed to make up that difference. Even students from the rest of the UK pay £9,250. And surely the average tuition paid has got to be close to that? Yet if we multiply that out, £9,000 times 200 divided by 6, we’re still looking at £300,000. So I’m still getting only 14%.
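The back-of-the-envelope arithmetic, spelled out (using the figures given above):

```python
def revenue_per_instructor(students, tuition, courses_per_student=6):
    """Tuition revenue attributable to one full-time instructor's teaching."""
    return students * tuition / courses_per_student

salary = 41_000
intl = revenue_per_instructor(200, 23_000)  # international full-fee rate
home = revenue_per_instructor(200, 9_000)   # roughly the rest-of-UK rate
print(f"£{intl:,.0f} -> salary is {salary / intl:.0%}")  # £766,667 -> 5%
print(f"£{home:,.0f} -> salary is {salary / home:.0%}")  # £300,000 -> 14%
```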

Where is the rest going?

This isn’t specific to my university by any means. It seems to be a global phenomenon. The best data on this seems to be from the US.

According to salary.com, the median salary for an adjunct professor in the US is about $63,000. This actually sounds high, given what I’ve heard from other entry-level faculty. But okay, let’s take that as our figure. (My pay is below this average, though how much depends upon the strength of the pound against the dollar. Currently the pound is weak, so quite a bit.)

Yet average tuition for out-of-state students at public college is $23,000 per year.

This means that an adjunct professor in the US with 200 students takes in $760,000 but receives $63,000. Where does that other $700,000 go?

If you think that it’s just a matter of paying for buildings, service staff, and other costs of running a university, consider this: It wasn’t always this way.

Since 1970, inflation-adjusted salaries for US academic faculty at public universities have risen a paltry 3.1%. In other words, basically not at all.

This is considerably slower than the growth of real median household income, which has risen almost 40% in that same time.

Over the same interval, nominal tuition has risen by over 2000%; adjusted for inflation, this is a still-staggering increase of 250%.

In other words, over the last 50 years, college has gotten three times as expensive, but faculty are still paid basically the same. Where is all this extra money going?

Part of the explanation is that public funding for colleges has fallen over time, and higher tuition partly makes up the difference. But private school tuition has risen just as fast, and their faculty salaries haven’t kept up either.

In their annual budget report, the University of Edinburgh proudly declares that their income increased by 9% last year. Let me assure you, my salary did not. (In fact, inflation-adjusted, my salary went down.) And their EBITDA—earnings before interest, taxes, depreciation, and amortization—was £168 million. Of that, £92 million was lost to interest and depreciation, but they don’t pay taxes at all, so their real net income was about £76 million. In the report, they include price changes of their endowment and pension funds to try to make this number look smaller, ending up with only £37 million, but that’s basically fiction; these are just stock market price drops, and they will bounce back.

Using similar financial alchemy, they’ve been trying to cut our pensions lately, because they say they “are too expensive” (because the stock market went down—nevermind that it’ll bounce back in a year or two). Fortunately, the unions are fighting this pretty hard. I wish they’d also fight harder to make them put people like me on the tenure track.

Had that £76 million been distributed evenly between all 5,000 of us faculty, we’d each get an extra £15,200.

Well, then, that solves part of the mystery in perhaps the most obvious, corrupt way possible: They’re literally just hoarding it.

And Edinburgh is far from the worst offender here. No, that would be Harvard, who are sitting on over $50 billion in assets. Since they have 21,000 students, that is over $2 million per student. With even a moderate return on its endowment, Harvard wouldn’t need to charge tuition at all.

But even then, raising my salary to £56,000 wouldn’t explain why I need to teach 200 students. Even that is still only 19% of the £300,000 those students are bringing in. But hey, then at least the primary service those students are here for might actually account for one-fifth of what they’re paying!

Now let’s consider administrators. Median salary for a university administrator in the US is about $138,000—twice what adjunct professors make.

Since 1970, that same time interval when faculty salaries were rising a pitiful 3% and tuition was rising a staggering 250%, how much did chancellors’ salaries increase? Over 60%.

Of course, the number of administrators is not fixed. You might imagine that with technology allowing us to automate a lot of administrative tasks, the number of administrators could be reduced over time. If that’s what you thought happened, you would be very, very wrong. The number of university administrators in the US has more than doubled since the 1980s. This is far faster growth than the number of students—and quite frankly, why should the number of administrators even grow with the number of students? There is a clear economy of scale here, yet it doesn’t seem to matter.

Combine those two facts: 60% higher pay times twice as many administrators means that universities now spend at least 3 times as much on administration as they did 50 years ago. (Why, that’s just about the proportional increase in tuition! Coincidence? I think not.)

Edinburgh isn’t even so bad in this regard. They have 6,000 administrative staff versus 5,000 faculty. If that already sounds crazy—more admins than instructors?—consider that the University of Michigan has 7,000 faculty but 19,000 administrators.

Michigan is hardly exceptional in this regard: Illinois UC has 2,500 faculty but nearly 8,000 administrators, while Ohio State has 7,300 faculty and 27,000 administrators. UCLA is even worse, with only 4,000 faculty but 26,000 administrators—a ratio of more than 6 to 1. It’s not the UC system in general, though: My (other?) alma mater of UC Irvine somehow supports 5,600 faculty with only 6,400 administrators. Yes, that’s right; compared to UCLA, UCI has 40% more faculty but 76% fewer administrators. (As for students? UCLA has 47,000 while UCI has 36,000.)

At last, I think we’ve solved the mystery! Where is all the money in academia going? Administrators.

They keep hiring more and more of them, and paying them higher and higher salaries. Meanwhile, they stop hiring tenure-track faculty and replace them with adjuncts that they can get away with paying less. And then, whatever they manage to save that way, they just squirrel away into the endowment.

A common right-wing talking point is that more institutions should be “run like a business”. Well, universities seem to have taken that to heart. Overpay your managers, underpay your actual workers, and pocket the savings.