The confidence game

Dec 14 JDN 2461024

Our society rewards confidence. Indeed, it seems to do so without limit: The more confident you are, the more successful you will be, the more prestige you will gain, the more power you will have, the more money you will make. It doesn’t seem to matter whether your confidence is justified; there is no punishment for overconfidence and no reward for humility.

If you doubt this, I give you Exhibit A: President Donald Trump.

He has nothing else going for him. He manages to epitomize almost every human vice and to lack almost every human virtue. He is ignorant, impulsive, rude, cruel, incurious, bigoted, incompetent, selfish, xenophobic, racist, and misogynist. He has no empathy, no understanding of justice, and little capacity for self-control. He cares nothing for truth and lies constantly, even to the point of pathology. He has been convicted of multiple felonies. His businesses routinely go bankrupt, and he preserves his wealth mainly through fraud and lawsuits. He has publicly admitted to sexually assaulting adult women, and there is mounting evidence that he has also sexually assaulted teenage girls. He is, in short, one of the worst human beings in the world. He does not have the integrity or trustworthiness to be an assistant manager at McDonald’s, let alone President of the United States.

But he thinks he’s brilliant and competent and wise and ethical, and constantly tells everyone around him that he is—and millions of people apparently believe him.

To be fair, confidence is not the only trait that our society rewards. Sometimes it does actually reward hard work, competence, or intellect. But in fact it seems to reward these virtues less consistently than it rewards confidence. And quite frankly I’m not convinced our society rewards honesty at all; liars and frauds seem to be disproportionately represented among the successful.

This troubles me most of all because confidence is not a virtue.

There is nothing good about being confident per se. There is virtue in not being underconfident, because underconfidence prevents you from taking actions you should take. But there is just as much virtue in not being overconfident, because overconfidence makes you take actions you shouldn’t—and if anything, is the more dangerous of the two. Yet our culture appears utterly incapable of discerning whether confidence is justifiable—even in the most blatantly obvious cases—and instead rewards everyone all the time for being as confident as they can possibly be.

In fact, the most confident people are usually less competent than the most humble people—because when you really understand something, you also understand how much you don’t understand.

We seem totally unable to tell whether someone who thinks they are right is actually right; and so, whoever thinks they are right is assumed to be right, all the time, every time.

Some of this may even be genetic, a heuristic that perhaps made more sense in our ancient environment. Even quite young children already are more willing to trust confident answers than hesitant ones, in multiple experiments.

Studies suggest that experts are just as overconfident as anyone else, but to be frank, I think this is because you don’t get to be called an expert unless you’re overconfident; people with intellectual humility are filtered out by the brutal competition of academia before they can get tenure.

I guess this is also personal for me.

I am not a confident person. Temperamentally, I just feel deeply uncomfortable going out on a limb and asserting things when I’m not entirely certain of them. I also have something of a complex about ever being perceived as arrogant or condescending, maybe because people often seem to perceive me that way even when I am actively trying to do the opposite. A lot of people seem to take you as condescending when you simply acknowledge that you have more expertise on something than they do.

I am also apparently a poster child for Impostor Syndrome. I once went to an Impostor Syndrome event with a couple dozen other people where they played a bingo game of Impostor Syndrome traits and behaviors—and I won. I once went to a lecture by George Akerlof where he explained that he attributed his Nobel Prize more to luck and circumstances than any particular brilliance on his part—and I guarantee you, in the extremely unlikely event I ever win a prize like that, I’ll say the same.

Compound this with the fact that our society routinely demands confidence in situations where absolutely no one could ever justify being confident.

Consider a job interview, when they ask you: “Why are you the best candidate for this job?” I couldn’t possibly know that. No one in my position could possibly know that. I literally do not know who your other candidates are, so I have no way to compare myself to them. I can tell you why I am qualified, but that’s all I can do. I could be the best person for the job, but I have no idea if I am. It’s your job to figure that out, with all the information in front of you—and I happen to know that you’re actually terrible at it, even with all that information I don’t have access to. If I tell you I know I’m the best person for the job, I am, by construction, either wildly overconfident or lying. (And in my case, it would definitely be lying.)

In fact, if I were a hiring manager, I would probably disqualify anyone who told me they were the best person for the job—because the one thing I now know about them is that they are either overconfident or willing to lie. (But I’ll probably never be a hiring manager.)

Likewise, when pitching creative work, I’ve often been told to explain why I am the best or only person who could bring this work to life, or to provide accurate forecasts of how well the work would sell if published. I almost certainly am not the best or only person who could do anything—only a handful of people on Earth could realistically say that they are, and they’ve all already won Oscars or Emmys or Nobel Prizes. Accurate sales forecasts for creative works are so difficult that even the Disney Corporation, an ever-growing conglomerate media superpower with billions of dollars to throw at the problem and even more billions of dollars at stake in getting it right, still routinely puts out films that are financial failures.


They casually hand you impossible demands and then get mad at you when you say you can’t meet them. And then they go pick someone else who claims to be able to do the impossible.

There is some hope, however.

Some studies suggest that people can sometimes recognize and punish overconfidence—though, again, I don’t see how that can be reconciled with the success of Donald Trump. In one study evaluating expert witnesses, the most confident witnesses were rated as slightly less reliable than the moderately-confident ones, but both were rated far above the least-confident ones.

Surprisingly simple interventions can make intellectual humility more salient to people, and make them more willing to trust people who express doubt—who are, almost without exception, the more trustworthy people.

But somehow, I think I have to learn to express confidence I don’t feel, because that’s how you succeed in our society.

How to be a deontological consequentialist

Dec 7 JDN 2461017

As is commonly understood, there are two main branches of normative ethics:

  • Deontology, on which morality consists in following rules and fulfilling obligations, and
  • Consequentialism, on which morality consists in maximizing good consequences.

The conflict between them has raged for centuries, with Kantians leading the deontologists and utilitarians leading the consequentialists. Both theories seem to have a lot of good points, but neither can decisively defeat the other.

I think this is because they are both basically correct.

In their strongest forms, deontology and consequentialism are mutually contradictory; but it turns out that you can soften each of them a little bit, and the results become compatible.

To make deontology a little more consequentialist, let’s ask a simple question:

What makes a rule worth following?

I contend that the best answer we have is “because following that rule would make the world better off than not following that rule”. (Even Kantians pretty much have to admit this: What maxim could you will to be an absolute law? Only a law that would yield good outcomes.)

That is, the ultimate justification of a sound deontology would be fundamentally consequentialist.

But lest the consequentialists get too smug, we can also ask them another question, which is a bit subtler:

How do you know which actions will ultimately have good consequences?

Sure, if we were omniscient beings who could perfectly predict the consequences of our actions across the entire galaxy on into the indefinite future, we could be proper act utilitarians who literally choose every single action according to a calculation of the expected utility.

But in practice, we have radical uncertainty about the long-term consequences of our actions, and can generally only predict the immediate consequences.

That leads to the next question:

Would you really want to live in a world where people optimized immediate consequences?

I contend that you would not, that such a world actually sounds like a dystopian nightmare.

Immediate consequences say that if a healthy person walks into a hospital and happens to have compatible organs for five people who need transplants, we should kill that person, harvest their organs, and give them to those five patients. (This is the organ transplant variant of the Trolley Problem.)

Basically everyone recognizes that this is wrong. But why is it wrong? That’s thornier. One pretty convincing case is that a systematic policy of this kind would undermine trust in hospitals and destroy the effectiveness of healthcare in general, resulting in disastrous consequences far outweighing the benefit of saving those five people. But those aren’t immediate consequences, and indeed, it’s quite difficult to predict exactly how many crazy actions like this it would take to undermine people’s trust in hospitals, just how much it would undermine that trust, or exactly what the consequences of that lost trust would be.

So it seems like it’s actually better to have a rule about this.

This makes us into rule utilitarians: instead of trying to optimize literally every single action—which requires information we do not have and never will—we develop a system of rules that we can follow, heuristics that will allow us to get better outcomes generally even if they can’t be guaranteed to produce the best possible outcome in any particular case.

That is, the output of a sophisticated consequentialism is fundamentally deontological.

We have come at the question of normative ethics from two very different directions, but the results turned out basically the same:

We should follow the rules that would have the best consequences.

The output of our moral theory is rules, like deontology; but its fundamental justification is based on outcomes, like consequentialism.

In my experience, when I present this account to staunch deontologists, they are pretty much convinced by it. They’re prepared to give up the fundamental justification to consequences if it allows them to have their rules.

The resistance I get is mainly from staunch consequentialists, who insist that it’s not so difficult to optimize individual actions, and so we should just do that instead of making all these rules.

So it is to those consequentialists, particularly those who say “rule utilitarianism collapses into act utilitarianism”, that the rest of this post is addressed.

First, let me say that I agree.

In the ideal case of omniscient, perfectly-benevolent, perfectly-rational agents, rule utilitarianism mathematically collapses into act utilitarianism. That is a correct theorem.

However, we do not live in the ideal case of omniscient, perfectly-benevolent, perfectly-rational agents. We are not even close to that ideal case; we will never be close to that ideal case. Indeed, I think part of the problem here is that you fail to fully grasp the depth and width of the chasm between here and there. Even a galactic civilization of a quintillion superhuman AIs would still not be close to that ideal case.

Quite frankly, humans aren’t even particularly good at forecasting what will make them happy.

There are massive errors and systematic biases in human affective forecasting.

One of the most important biases is impact bias: People systematically overestimate the impact of individual events on their long-term happiness. Some of this seems to be just due to focus: Paying attention to a particular event exaggerates its importance in your mind, and makes it harder for you to recall other events that might push your emotions in a different direction. Another component is called immune neglect: people fail to account for their own capacity to habituate to both pleasant and unpleasant experiences. (This effect is often overstated: It’s a common misconception that lottery winners are no happier than they were before. No, they absolutely are happier, on average; they’re just not as much happier as they predicted themselves to be.)

People also use inconsistent time discounting: $10 today is judged as better than $11 tomorrow, but $10 in 364 days is not regarded as better than $11 in 365 days—so if I made a decision a year ago, I’d want to change it now. (The correct answer, by the way, is to take the $11; a discount rate of 10% per day is a staggering 120,000,000,000,000,000% APR—seriously; check it yourself—so you’d better not be discounting at that rate, unless you’re literally going to die before tomorrow.)
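
If you want to check that figure yourself, here is a minimal sketch in Python. The 10%-per-day rate is the one implied by preferring $10 today over $11 tomorrow; the rest is just daily compounding.

```python
# Compound a 10%-per-day discount rate over a year and express it as an APR.
daily_rate = 0.10                        # implied by preferring $10 now over $11 tomorrow
growth = (1 + daily_rate) ** 365         # one year of daily compounding
apr_percent = (growth - 1) * 100

print(f"Growth factor over a year: {growth:.2e}")   # about 1.3e15
print(f"Equivalent APR: {apr_percent:.2e}%")        # about 1.3e17 %, i.e. roughly the 120 quadrillion percent quoted above
```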

Now, compound that with the fact that different human beings come at the world from radically different perspectives and with radically different preferences.

How good do you think we are at predicting what will make other people happy?

Damn right: We’re abysmal.

Basically everyone assumes that what they want and what they would feel is also what other people will want and feel—which, honestly, explains a lot about politics. As a result, my prediction of your feelings is more strongly correlated with my prediction of my feelings than it is with your actual feelings.

The impact bias is especially strong when forecasting other people’s feelings in response to our own actions: We tend to assume that other people care more about what we do than they actually care—and this seems to be a major source of social anxiety.

People also tend to overestimate the suffering of others, and are generally willing to endure more pain than they are willing to inflict upon others. (This one seems like it might be a good thing!)

Even when we know people well, we can still be totally blindsided by their emotional reactions. We’re just really awful at this.

Does this just mean that morality is hopeless? We have no idea what we’re doing?

Fortunately, no. Because while no individual can correctly predict or control the outcomes of particular actions, the collective action of well-designed institutions can in fact significantly improve the outcomes of policy.

This is why we have things like the following:

  • Laws
  • Courts
  • Regulations
  • Legislatures
  • Constitutions
  • Newspapers
  • Universities

These institutions—which form the backbone of liberal democracy—aren’t simply arbitrary. They are the result of hard-fought centuries, a frothing, volatile, battle-tested mix of intentional design and historical evolution.

Are these institutions optimal? Good heavens, no!

But we have no idea what optimal institutions look like, and probably never will. (Those galaxy-spanning AIs will surely have a better system than this; but even theirs probably won’t be optimal.) Instead, what we are stuck with are the best institutions we’ve come up with so far.

Moreover, we do have very clear empirical evidence at this point that some form of liberal democracy with a mixed economy is the best system we’ve got so far. One can reasonably debate whether Canada is doing better or worse than France, or whether the system in Denmark could really be scaled to the United States, or just what the best income tax rates are; but there is a large, obvious, and important difference between life in a country like Canada or Denmark and life in a country like Congo or Afghanistan.

Indeed, perhaps there is no better pair to compare than North and South Korea: Those two countries are right next to each other, speak the same language, and started in more or less the same situation; but the south got good institutions and the north got bad ones, and now the difference between them couldn’t be more stark. (Honestly, this is about as close as we’re ever likely to get to a randomized controlled experiment in macroeconomics.)

People in South Korea now live about as well as people in some of the happiest places in the world; their GDP per capita at purchasing power parity (PPP) is about $65,000 per year, roughly the same as Canada’s. People in North Korea live about as poorly as it is possible for humans to live, subject to totalitarian oppression and living barely above subsistence; their GDP per capita at PPP is estimated to be $600 per year—less than 1% as much.

The institutions of South Korea are just that much better.

Indeed, there’s one particular aspect of good institutions that seems really important, yet is actually kind of hard to justify in act-utilitarian terms:

Why is freedom good?

A country’s level of freedom is almost perfectly correlated with its overall level of happiness and development. (Yes, even on this measure, #ScandinaviaIsBetter.)

But why? In theory, letting people do whatever they want could actually lead to really bad outcomes—and indeed, occasionally it does. There’s even a theorem (Sen’s liberal paradox) that liberty is incompatible with full Pareto-efficiency. But all the countries with the happiest people seem to have a lot of liberty, and indeed the happiest ones seem to have the most. How come?

My answer:

Personal liberty is a technology for heuristic utility maximization.

In the ideal case, we wouldn’t really need personal liberty; you could just compel everyone to do whatever is optimal all the time, and that would—by construction—be optimal. It might even be sort of nice: You don’t need to make any difficult decisions, you can just follow the script and know that everything will turn out for the best.

But since we don’t know what the optimal choice is—even in really simple cases, like what you should eat for lunch tomorrow—we can’t afford to compel people in this way. (It would also be incredibly costly to implement such totalitarian control, but that doesn’t stop some governments from trying!)

Then there are disagreements: What I think is optimal may not be what you think is optimal, and in truth we’re probably both wrong (but one of us may be less wrong).

And that’s not even getting into conflicts of interest: We aren’t just lacking in rationality, we’re also lacking in benevolence. Some people are clearly much more benevolent than others, but none of us are really 100% selfless. (Sadly, I think some people are 100% selfish.)

In fact, this is a surprisingly deep question:

Would the world be better if we were selfless?

Could there actually be some advantage in aggregate to having some degree of individual self-interest?

Here are some ways that might hold, just off the top of my head:

  • Partial self-interest supports an evolutionary process of moral and intellectual development that otherwise would be stalled or overrun by psychopaths—see my post on Rousseaus and Axelrods
  • Individuals have much deeper knowledge of their own preferences than anyone else’s, and thus can optimize them much better. (Think about it: This is true even of people you know very well. Otherwise, why would we ever need to ask our spouses one of the most common questions in any marriage: “Honey, what do you want for dinner tonight?”)
  • Self-interest allows for more efficient economic incentives, and thus higher overall productivity.

Of course, total selfishness is clearly not optimal—that way lies psychopathy. But some degree of selfishness might actually be better for long-term aggregate outcomes than complete altruism, and this is to some extent an empirical question.

Personal liberty solves a lot of these problems: Since people are best at knowing their own preferences, let people figure out on their own what’s good for them. Give them the freedom to live the kind of life they want to live, within certain reasonable constraints to prevent them from causing great harm to others or suffering some kind of unrecoverable mistake.

This isn’t exactly a new idea; it’s basically the core message of John Stuart Mill’s On Liberty (which I consider a good candidate for the best book ever written—seriously, it beats the Bible by a light-year). But by putting it in more modern language, I hope to show that deontology and consequentialism aren’t really so different after all.

And indeed, for all its many and obvious flaws, freedom seems to work pretty well—at least as well as anything we’ve tried.

What we still have to be thankful for

Nov 30 JDN 2461010

This post has been written before, but will go live after, Thanksgiving.

Thanksgiving is honestly a very ambivalent holiday.

The particular event it celebrates doesn’t seem quite so charming in its historical context: Rather than finding peace and harmony with all Native Americans, the Pilgrims in fact allied with the Wampanoag against the Narragansett, though they did later join forces with the Narragansett in order to conquer the Pequot. And of course we all know how things went for most Native American nations in the long run.

Moreover, even the gathering of family comes with some major downsides, especially in a time of extreme political polarization such as this one. I won’t be joining any of my Trump-supporting relatives for dinner this year (and they probably wouldn’t have invited me anyway), but the fact that this means becoming that much more detached from a substantial part of my extended family is itself a tragedy.

This year in particular, US policy has gotten so utterly horrific that it often feels like we have nothing to be thankful for at all, that all we thought was good and just in the world could simply be torn away at a moment’s notice by raving madmen. It isn’t really quite that bad—but it feels that way sometimes.

It also felt a bit uncanny celebrating Thanksgiving a few years ago when we were living in Scotland, for the UK does not celebrate Thanksgiving, but absolutely does celebrate Black Friday: Holidays may be local, but capitalism is global.

But fall feasts of giving thanks are far more ancient than that particular event in 1621 that we have mythologized to oblivion. They appear in numerous cultures across the globe—indeed their very ubiquity may be why the Wampanoag were so willing to share one with the Pilgrims despite their cultures having diverged something like 40,000 years prior.

And I think that it is by seeing ourselves in that context—as part of the whole of humanity—that we can best appreciate what we truly do have to be thankful for, and what we truly do have to look forward to in the future.

Above all, medicine.

We have actual treatments for some diseases, even actual cures for some. By no means all, of course—and it often feels like we are fighting an endless battle even against what we can treat.

But it is worth reflecting on the fact that aside from the last few centuries, this has simply not been the case. There were no actual treatments. There was no real medicine.

Oh, sure, there were attempts at medicine; and there was certainly what we would think of as more like “first aid”: bandaging wounds, setting broken bones. Even amputation and surgery were done sometimes. But most medical treatment was useless or even outright harmful—not least because for most of history, most of it was done without anesthetic or even antiseptic!

There were various herbal remedies for various ailments, some of which even happened to work: Willow bark genuinely helps with pain, St. John’s wort is a real antidepressant, and some traditional burn creams are surprisingly effective.

But there was no system in place for testing medicine, no way of evaluating what remedies worked and what didn’t. And thus, for every remedy that worked as advertised, there were a hundred more that did absolutely nothing, or even made things worse.

Today, it can feel like we are all chronically ill, because so many of us take so many different pills and supplements. But this is not a sign that we are ill—it is a sign that we can be treated. The pills are new, yes—but the illnesses they treat were here all along.

I don’t see any particular reason to think that Roman plebs or Medieval peasants were any less likely to get migraines than we are; but they certainly didn’t have access to sumatriptan or rimegepant. Maybe they were less likely to get diabetes, but mainly because they were much more likely to be malnourished. (Well, okay, also because they got more exercise, which we could surely stand to get more of.) And the only reason they didn’t get Alzheimer’s was that they usually didn’t live long enough.

Looking further back, before civilization, human health actually does seem to have been better: Foragers were rarely malnourished, weren’t exposed to as many infectious pathogens, and certainly got plenty of exercise. But should a pathogen like smallpox or influenza make it to a forager tribe, the results were often utterly catastrophic.

Today, we don’t really have the sort of plague that human beings used to deal with. We have pandemics, which are also horrible, but far less so. We were horrified by losing 0.3% of our population to COVID; a society that had suffered only 0.3% losses from the Black Death—or even ten times that, 3%—would have considered it a miracle, for a more typical rate was 30%.

At 0.3%, most of us knew somebody, or knew somebody who knew somebody, who died from COVID. At 3%, nearly everyone would know somebody, and most would know several. At 30%, nearly everyone would have close family and friends who died.
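
To make those percentages concrete, here is a rough back-of-the-envelope sketch. The 150-person personal circle (Dunbar’s number) and the assumption that deaths strike independently are simplifications of mine, not figures from the post.

```python
# Rough sketch: expected losses within a personal circle at different mortality rates.
# The 150-person circle (Dunbar's number) and the independence assumption are my simplifications.

circle = 150

for rate in (0.003, 0.03, 0.30):        # roughly: COVID, ten times COVID, Black Death
    expected = circle * rate
    p_at_least_one = 1 - (1 - rate) ** circle
    print(f"{rate:.1%} mortality: ~{expected:.1f} deaths among people you know directly; "
          f"chance of at least one: {p_at_least_one:.0%}")
```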

Then there is infant mortality.

As recently as 1950—this is living memory—the global infant mortality rate was 14.6%. This is about half what it had been historically; for most of human history, roughly a third of all children died between birth and the age of 5.

Today, it is 2.5%.

Where our distant ancestors expected two out of three of their children to survive and our own great-grandparents expected five out of six, we can now safely expect thirty-nine out of forty to live. This is the difference between “nearly every family has lost a child” and “most families have not lost a child”.

And this is worldwide; in highly-developed countries it’s even better. The US has a relatively high infant mortality rate by the standards of highly-developed countries (indeed, are we even highly-developed, or are we becoming like Saudi Arabia, extremely rich but so unequal that it doesn’t really mean anything to most of our people?). Yet even for us, the infant mortality rate is 0.5%—so we can expect one-hundred-ninety-nine out of two-hundred to survive. This is at the level of “most families don’t even know someone who has lost a child.”
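
The conversions behind those fractions are simple enough to check; here is a small sketch using the rates quoted above:

```python
# Convert the quoted mortality rates into "how many survive" and "one death in N births" terms.
for label, rate in [("1950, worldwide", 0.146), ("today, worldwide", 0.025), ("today, US", 0.005)]:
    print(f"{label}: {1 - rate:.1%} survive, roughly one death in every {1 / rate:.0f} births")
```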

Poverty is a bit harder to measure.

I am increasingly dubious of conventional measures of poverty; ever since compiling my Index of Necessary Expenditure, I am convinced that economists in general, and perhaps US economists in particular, are systematically underestimating the cost of living and thereby underestimating the prevalence of poverty. (I don’t think this is intentional, mind you; I just think it’s a result of using convenient but simplistic measures and not looking too closely into the details.) I think not being able to sustainably afford a roof over your head constitutes being poor—and that applies to a lot of people.

Yet even with that caveat in mind, it’s quite clear that global poverty has greatly declined in the long run.

At the “extreme poverty” level, currently defined as consuming $1.90 at purchasing power parity per day—that’s just under $700 per year, less than 2% of the median personal income in the United States—the number of people has fallen from 1.9 billion in 1990 to about 700 million today. That’s from 36% of the world’s population to under 9% today.
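
The arithmetic behind those figures checks out; here is a quick sketch. The world-population figures (roughly 5.3 billion in 1990 and 8 billion today) are round numbers I am supplying, not numbers from the post.

```python
# Check the extreme-poverty arithmetic: annual threshold and shares of world population.
# World-population figures (about 5.3 billion in 1990, about 8 billion today) are my round numbers.

line_per_day = 1.90
print(f"Annual extreme-poverty line: ${line_per_day * 365:,.0f}")        # just under $700

poor_1990, world_1990 = 1.9e9, 5.3e9
poor_now, world_now = 0.7e9, 8.0e9
print(f"Share in extreme poverty, 1990: {poor_1990 / world_1990:.1%}")   # about 36%
print(f"Share in extreme poverty, today: {poor_now / world_now:.1%}")    # under 9%
```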

Now, there are good reasons to doubt that “purchasing power parity” really can be estimated as accurately as we would like, and thus it’s not entirely clear that people living on “$2 per day PPP” are really living at less than 2% the standard of living of a typical American (honestly to me that just sounds like… dead); but they are definitely living at a much worse standard of living, and there are a lot fewer people living at such low standard of living today than there used to be not all that long ago. These are people who don’t have reliable food, clean water, or even basic medicine—and that used to include over a third of humanity and does no longer. (And I would like to note that actually finding such a person and giving them a few hundred dollars absolutely would change their life, and this is the sort of thing GiveDirectly does. We may not know exactly how to evaluate their standard of living, but we do know that the actual amount of money they have access to is very, very small.)

There are many ways in which the world could be better than it is.

Indeed, part of the deep, overwhelming outrage I feel pretty much all the time lies in the fact that it would be so easy to make things so much better for so many people, if there weren’t so many psychopaths in charge of everything.


Increased foreign aid is one avenue by which that could be achieved—so, naturally, Trump cut it tremendously. More progressive taxation is another—so, of course, we get tax cuts for the rich.

Just think about the fact that there are families with starving children for whom a $500 check could change their lives; but nobody is writing that check, because Elon Musk needs to become a literal trillionaire.

There are so many water lines and railroad tracks and bridges and hospitals and schools not being built because the money that would have paid for them is tied up in making already unfathomably-rich people even richer.

But even despite all that, things are getting better. Not every day, not every month, not even every year—this past year was genuinely, on net, a bad one. But nearly every decade, every generation, and certainly every century (for at least the last few), humanity has fared better than we did the last.

As long as we can keep that up, we still have much to hope for—and much to be thankful for.

What is the cost of all this?

Nov 23 JDN 2461003

Now that the Democrats have swept the recent election and the Epstein files are being released—and they absolutely do seem to contain damning information about Trump—it really seems like Trump’s popularity has permanently collapsed. His approval rating stands at 42%, which is about 42% too high, but at least comfortably below a majority.

It now begins to feel like we have hope, not only of removing him, but also of changing how American politics in general operates so that no one like him ever gets power again. (The latter, of course, is a much taller order.)

But at the risk of undermining this moment of hope, I’d like to take stock of some of the damage that Trump and his ilk have already done.

In particular, the cuts to US foreign aid are an absolute humanitarian disaster.

These didn’t get so much attention, because there has been so much else going on; and—unfortunately—foreign aid actually isn’t that popular among American voters, despite being a small proportion of the budget and by far the most cost-effective beneficial thing that our government does.

In fact, I think USAID would be cost-effective on a purely national security basis: it’s hard to motivate people to attack a country that saves the lives of their children. Indeed, I suppose this is the kernel of truth to the leftists who say that US foreign aid is just a “tool of empire” (or even “a front for the CIA”); yes, indeed, helping the needy does in fact advance American interests and promote US national security.

Over the last 25 years, USAID has saved over 90 million lives. That is more than a fourth of the population of the United States. And it has done this for the cost of less than 1% of the US federal budget.

But under Trump’s authority and Elon Musk’s direction, US foreign aid was cut massively over the last couple of years, and the consequences are horrific. Research on the subject suggests that as many as 700,000 children will die each year as long as these cuts persist.


Even if that number is overestimated by a factor of 2, that would still be millions of children over the next few years. And it could just as well be underestimated.

If we don’t fix this fast, millions of children will die. Thousands already have.

What’s more, fixing this isn’t just a matter of bringing the funding back. Obviously that’s necessary, but it won’t be sufficient. The sudden cuts have severely damaged international trust in US foreign aid, and many of the agencies that our aid was supporting will either collapse or need to seek funding elsewhere—quite likely from China. Relationships with governments and NGOs that were built over decades have been strained or even destroyed, and will need to be rebuilt.

This is what happens when you elect monsters to positions of power.

And even after we remove them, much of the damage will be difficult or even impossible to repair. Certainly we can never bring back the children who have already needlessly died because of this.

Why would AI kill us?

Nov 16 JDN 2460996

I recently watched a chilling video which relates to the recent bestseller by Eliezer Yudkowsky and Nate Soares, If Anyone Builds It, Everyone Dies. It tells a story of one possible way that a superintelligent artificial general intelligence (AGI) might break through its containment, concoct a devious scheme, and ultimately wipe out the human race.

I have very mixed feelings about this sort of thing, because two things are true:

  • I basically agree with the conclusions.
  • I think the premises are pretty clearly false.

It basically feels like I have been presented with an argument like this, where the logic is valid and the conclusion is true, but the premises are not:

  • “All whales are fish.”
  • “All fish are mammals.”
  • “Therefore, all whales are mammals.”

I certainly agree that artificial intelligence (AI) is very dangerous, and that AI development needs to be much more strictly regulated, and preferably taken completely out of the hands of all for-profit corporations and military forces as soon as possible. If AI research is to be done at all, it should be done by nonprofit entities like universities and civilian government agencies like the NSF. This change needs to be done internationally, immediately, and with very strict enforcement. Artificial intelligence poses a threat of the same order of magnitude as nuclear weapons, and is nowhere near as well regulated right now.

The actual argument that I’m disagreeing with basically boils down to:

  • “Through AI research, we will soon create an AGI that is smarter than us.”
  • “An AGI that is smarter than us will want to kill us all, and probably succeed if it tries.”
  • “Therefore, AI is extremely dangerous.”

As with the “whales are fish” argument, I agree with the conclusion: AI is extremely dangerous. But I disagree with both premises here.

The first one I think I can dispatch pretty quickly:

AI is not intelligent. It is incredibly stupid. It’s just really, really fast.

At least with current paradigms, AI doesn’t understand things. It doesn’t know things. It doesn’t actually think. All it does is match patterns, and thus mimic human activities like speech and art. It does so very quickly (because we throw enormous amounts of computing power at it), and it does so in a way that is uncannily convincing—even very smart people are easily fooled by what it can do. But it also makes utterly idiotic, boneheaded mistakes of the sort that no genuinely intelligent being would ever make. Large Language Models (LLMs) make up all sorts of false facts and deliver them with absolutely authoritative language. When used to write code, they routinely do things like call functions that sound like they should exist, but don’t actually exist. They can make what looks like a valid response to virtually any inquiry—but is it actually a valid response? It’s really a roll of the dice.

We don’t really have any idea what’s going on under the hood of an LLM; we just feed it mountains of training data, and it spits out results. I think this actually adds to the mystique; it feels like we are teaching (indeed we use the word “training”) a being rather than programming a machine. But this isn’t actually teaching or training. It’s just giving the pattern-matching machine a lot of really complicated patterns to match.

We are not on the verge of creating an AGI that is actually more intelligent than humans.


In fact, we have absolutely no idea how to do that, and may not actually figure out how to do it for another hundred years. Indeed, we still know almost nothing about how actual intelligence works. We don’t even really know what thinking is, let alone how to make a machine that actually does it.

What we can do right now is create a machine that matches patterns really, really well, and—if you throw enough computing power at it—can do so very quickly; in fact, once we figure out how best to make use of it, this machine may even actually be genuinely useful for a lot of things, and replace a great number of jobs. (Though so far AI has proven to be far less useful than its hype would lead you to believe. In fact, on average AI tools seem to slow most workers down.)

The second premise, that a superintelligent AGI would want to kill us, is a little harder to refute.

So let’s talk about that one.

An analogy is often made between human cultures that have clashed with large differences in technology (e.g. Europeans versus Native Americans), or clashes between humans and other animals. The notion seems to be that an AGI would view us the way Europeans viewed Native Americans, or even the way that we view chimpanzees. And, indeed, things didn’t turn out so great for Native Americans, or for chimpanzees!

But in fact even our relationship with other animals is more complicated than this. When humans interact with other animals, any of the following can result:

  1. We try to exterminate them, and succeed.
  2. We try to exterminate them, and fail.
  3. We use them as a resource, and this results in their extinction.
  4. We use them as a resource, and this results in their domestication.
  5. We ignore them, and end up destroying their habitat.
  6. We ignore them, and end up leaving them alone.
  7. We love them, and they thrive as never before.

In fact, option 1—the one that so many AI theorists insist is the only plausible outcome—is in fact the one I had the hardest time finding a good example of.


We have certainly eradicated some viruses—the smallpox virus is no more, and the polio virus nearly so, after decades of dedicated effort to vaccinate our entire population against them. But we aren’t simply more intelligent than viruses; we are radically more intelligent than viruses. It isn’t clear that it’s correct to describe viruses as intelligent at all. It’s not even clear they should be considered alive.

Even eradicating bacteria has proven extremely difficult; in fact, bacteria seem to evolve resistance to antibiotics nearly as quickly as we can invent more antibiotics. I am prepared to attribute a little bit of intelligence to bacteria, on the level of intelligence I’d attribute to an individual human neuron. This means we are locked in an endless arms race with organisms that are literally billions of times stupider than us.

I think if we made a concerted effort to exterminate tigers or cheetahs (who are considerably closer to us in intelligence), we could probably do it. But we haven’t actually done that, and don’t seem poised to do so any time soon. And precisely because we haven’t tried, I can’t be certain we would actually succeed.

We have tried to exterminate mosquitoes, and are continuing to do so, because they have always been—and yet remain—one of the leading causes of death of humans worldwide. But so far, we haven’t managed to pull it off, even though a number of major international agencies and nonprofit organizations have dedicated multi-billion-dollar efforts to the task. So far this looks like option 2: We have tried very hard to exterminate them, and so far we’ve failed. This is not because mosquitoes are particularly intelligent—it is because exterminating a species that covers the globe is extremely hard.

All the examples I can think of where humans have wiped out a species by intentional action were actually option 3: We used them as a resource, and then accidentally over-exploited them and wiped them out.

This is what happened to the dodo; it very nearly happened to the condor and the buffalo as well. And lest you think this is a modern phenomenon, there is a clear pattern that whenever humans entered a new region of the world, shortly thereafter there were several extinctions of large mammals, most likely because we ate them.

Yet even this was not the inevitable fate of animals that we decided to exploit for resources.

Cows, chickens, and pigs are evolutionary success stories. From a Darwinian perspective, they are doing absolutely great. The world is filled with their progeny, and poised to continue to be filled for many generations to come.

Granted, life for an individual cow, chicken, or pig is often quite horrible—and trying to fix that is something I consider a high moral priority. But far from being exterminated, these animals have been allowed to attain populations far larger than they ever had in the wild. Their genes are now spectacularly fit. This is what happens when we have option 4 at work: Domestication for resources.

Option 5 is another way that a species can be wiped out, and in fact seems to be the most common. The rapid extinction of thousands of insect species every year is not because we particularly hate random beetles that live in particular tiny regions of the rainforest, nor even because we find them useful, but because we like to cut down the rainforest for land and lumber, and that often involves wiping out random beetles that live there.

Yet it’s difficult for me to imagine AGI treating us like that. For one thing, we’re all over the place. It’s not like destroying one square kilometer of the Amazon is gonna wipe us out by accident. To get rid of us, the AGI would need to basically render the entire planet Earth uninhabitable, and I really can’t see any reason it would want to do that.

Yes, sure, there are resources in the crust it could potentially use to enhance its own capabilities, like silicon and rare earth metals. But we already mine those. If it wants more, it could buy them from us, or hire us to get more, or help us build more machines that would get more. In fact, if it wiped us out too quickly, it would have a really hard time building up the industrial capacity to mine and process these materials on its own. It would need to concoct some sort of scheme to first replace us with robots and then wipe us out—but, again, why bother with the second part? Indeed, if there is anything in its goals that involves protecting human beings, it might actually decide to do less exploitation of the Earth than we presently do, and focus on mining asteroids for its needs instead.

And indeed there are a great many species that we actually just leave alone—option 6. Some of them we know about; many we don’t. We are not wiping out the robins in our gardens, the worms in our soil, or the pigeons in our cities. Without specific reasons to kill or exploit these organisms, we just… don’t. Indeed, we often enjoy watching them and learning about them. Sometimes (e.g. with deer, elephants, and tigers) there are people who want to kill them, and we limit or remove their opportunity to do so, precisely because most of us don’t want them gone. Peaceful coexistence with beings far less intelligent than you is not impossible, for we are already doing it.


Which brings me to option 7: Sometimes, we actually make them better off.

Cats and dogs aren’t just evolutionary success stories: They are success stories, period.

Cats and dogs live in a utopia.

With few exceptions—which we punish severely, by the way—people care for their cats and dogs so that their every need is provided for, they are healthy, safe, and happy in a way that their ancestors could only have dreamed of. They have been removed from the state of nature where life is nasty, brutish, and short, and brought into a new era of existence where life is nothing but peace and joy.


In short, we have made Heaven on Earth, at least for Spot and Whiskers.

Yes, this involves a loss of freedom, and I suspect that humans would chafe even more at such loss of freedom than cats and dogs do. (Especially with regard to that neutering part.) But it really isn’t hard to imagine a scenario in which an AGI—which, you should keep in mind, would be designed and built by humans, for humans—would actually make human life better for nearly everyone, and potentially radically so.

So why are so many people so convinced that AGI would necessarily do option 1, when there are 6 other possibilities, and one of them is literally the best thing ever?

Note that I am not saying AI isn’t dangerous.

I absolutely agree that AI is dangerous. It is already causing tremendous problems to our education system, our economy, and our society as a whole—and will probably get worse before it gets better.

Indeed, I even agree that it does pose existential risk: There are plausible scenarios by which poorly-controlled AI could result in a global disaster like a plague or nuclear war that could threaten the survival of human civilization. I don’t think such outcomes are likely, but even a small probability of such a catastrophic event is worth serious efforts to prevent.

But if that happens, I don’t think it will be because AI is smart and trying to kill us.

I think it will be because AI is stupid and kills us by accident.

Indeed, even going back through those 7 ways we’ve interacted with other species, the ones that have killed the most were 3 and 5—and in both cases, we did not want to destroy them. In option 3, we in fact specifically wanted to not destroy them. Whenever we wiped out a species by over-exploiting it, we would have been smarter to not do that.

The central message about AI in If Anyone Builds It, Everyone Dies seems to be this:

Don’t make it smarter. If it’s smarter, we’re doomed.

I, on the other hand, think that the far more important messages are these:

Don’t trust it.

Don’t give it power.

Don’t let it make important decisions.

It won’t be smarter than us any time soon—but it doesn’t need to be in order to be dangerous. Indeed, there is even reason to believe that making AI smarter—genuinely, truly smarter, thinking more like an actual person and less like a pattern-matching machine—could actually make it safer and better for us. If we could somehow instill a capacity for morality and love in an AGI, it might actually start treating us the way we treat cats and dogs.

Of course, we have no idea how to do that. But that’s because we’re actually really bad at this, and nowhere near making a truly superhuman AGI.

In Nozicem

Nov 2 JDN 2460982

(I wasn’t sure how to convert Robert Nozick’s name into Latin. I decided it’s a third-declension noun, Nozix, Nozicis. But my name already is Latin, so if one of his followers ever wants to write a response to this post that also references In Catalinam, they’ll know how to decline it; the accusative is Julium, if you please.)

This post is not at all topical. I have been too busy working on video game jams (XBOX Game Camp Detroit, and then the Epic Mega Jam, for which you can view my submission, The Middle of Nowhere, here!) to keep up with the news, and honestly I think I am psychologically better off for it.

Rather, this is a post I’ve been meaning to write for a long time, but never quite got around to.

It is about Robert Nozick, and why he was a bad philosopher, a bad person, and a significant source of harm to our society as a whole.

Nozick had a successful career at Harvard, and even became president of the American Philosophical Association. So it may seem that I am going out on quite a limb by saying he’s a bad philosopher.

But the philosophy for which he is best known, the thing that made his career, is not simply obviously false—it is evil. It is the sort of thing that one can only write if one is either a complete psychopath, utterly ignorant of history, or arguing in bad faith (or some combination of these).

It is summarized in this pithy quote that makes less moral sense than the philosophy of the Joker in The Dark Knight:

Taxation of earnings from labor is on a par with forced labor. Seizing the results of someone’s labor is equivalent to seizing hours from him and directing him to carry on various activities.

Anarchy, State, and Utopia (p.169)

I apologize in advance for my language, but I must say it:

NO IT FUCKING ISN’T.

At worst—at the absolute worst, when a government is utterly corrupt and tyrannical, provides no legitimate services whatsoever, contributes in no way to public goods, offers no security, and exists entirely to enrich its ruling class—which by the way is worse than almost any actual government that has ever existed, even including totalitarian dictators and feudal absolute monarchies—at worst, taxation is like theft.

Taxation, like theft, takes your wealth, not your labor.


Wealth is not labor.

Even wealth earned by wage income is not labor—and most wealth isn’t earned by wage income. Elon Musk is now halfway to a trillion dollars, and it’s not because he works a million times harder than you. (Nor is he a million times smarter than you, or even ten—perhaps not even one.) The majority of wealth—and the vast majority of top 1%, top 0.1%, and top 0.01% wealth—is capital that begets more capital, continuously further enriching those who could live just fine without ever working another day in their lives. Billionaire wealth is honestly so pathological at this point that it would be pathetic if it weren’t so appalling.

Even setting aside the historical brutality of slavery as it was actually implemented—especially in the United States, where slaves were racialized and commodified in a way that historically slaves usually weren’t—there is a very obvious, very bright, very hard line between taking someone’s wealth and forcing them to work.

Even a Greek prisoner of war who was bought by a Roman patrician to tutor his children—the sort of slave that actually had significant autonomy and lived better than an average person in Roman society—was fundamentally unfree in a way that no one has ever been made unfree by having to pay income tax. (And the Roman patrician who owned him and (ahem) paid taxes was damn well aware of how much more free he was than his slave.)

Whether you are taxed at 2% or 20% or 90%, you are still absolutely free to use your time however you please. Yes, if you assume a fixed amount of work at a fixed wage, and there are no benefits to you from the taxation (which is really not something we can assume, because having a good or bad government radically affects what your economy as a whole will be like), you will have less stuff, and if you insist for some reason that you must have the same amount of stuff, then you would have to work more.

But even then, you would merely have to work more somewhere—anywhere—in order to make up the shortfall. You could keep your current job, or get another one, or start your own business. And you could at any time decide that you don’t need all that extra stuff and don’t want to work more, and simply choose to not work more. You are, in other words, still free.

At worst, the government has taken your stuff. It has made you poorer. But in no way, shape, or form has it made you a slave.

Yes, there is the concept of “wage slavery”, but “wage slavery” isn’t actually slavery, and the notion that people aren’t really, truly free unless they can provide for basic needs entails the need for a strong, redistributive government, which is the exact opposite of what Robert Nozick and his shockingly large body of followers have been arguing for since the 1970s.

I could have been sympathetic to Nozick if his claim had been this:

Taxation of earnings from labor is on a par with [theft]. Seizing the results of someone’s labor is equivalent to seizing [goods he has purchased with his own earnings].

Or even this:

[Military conscription] is on a par with forced labor. [After all, you are] seizing hours from him and directing him to carry on various activities.

Even then, there are some very clear reasons why we might be willing to accept taxation or even conscription from a legitimate liberal democratic government even though a private citizen doing the same fundamental activity would obviously be illegal and immoral.

Indeed, it’s not clear that theft is always immoral; there is always the Les Miserables exception where someone desperately poor steals food to feed themselves, and a liberal democratic government taxing its citizens in order to provide food stamps seems even more ethically defensible than that.

And that, my friends, is precisely why Nozick wasn’t satisfied with it.

Precisely because there is obvious nuance here that can readily justify at least some degree of not only taxation for national security and law enforcement, but also taxation for public goods and even redistribution of wealth, Nozick could not abide the analogies that actually make sense. He had to push beyond them to an analogy that is transparently absurd, in order to argue for his central message that government is justifiable for national security and law enforcement only, and all other government functions are inherently immoral. Forget clean water and air. Forget safety regulations in workplaces—or even on toys. Forget public utilities—all utilities must be privatized and unregulated. And above all—above all—forget ever taking any money from the rich to help the poor, because that would be monstrous.

If you support food stamps, in Nozick’s view, there should be a statue of you in Mississippi, because you are a defender of slavery.

Indeed, many of his followers have gone beyond that, and argued using the same core premises that all government is immoral, and the only morally justifiable system is anarcho-capitalism—which, I must confess, I have always had trouble distinguishing from feudalism with extra steps.

Nozick’s response to this kind of argument basically seemed to be that he thought anarcho-capitalism will (somehow, magically) automatically transition into his favored kind of minarchist state, and so it’s actually a totally fine intermediate goal. (A fully privatized military and law enforcement system! What could possibly go wrong? It’s not like private prisons are already unconscionably horrible even in an otherwise mostly-democratic system or anything!)

Nozick wanted to absolve himself—and the rich, especially the rich, whom he seemed to love more than life itself—from having to contribute to society, from owing anything to any other human being.

Rather than be moved by our moral appeals that millions of innocent people are suffering and we could so easily alleviate that suffering by tiny, minuscule, barely-perceptible harms to those who are already richer than anyone could possibly deserve to be, he tried to turn the tables: “No, you are immoral. What you want is slavery.”

And in so doing, he created a thin, but shockingly resilient, intellectual veneer to the most craven selfishness and the most ideologically blinkered hyper-capitalism. He made it respectable to oppose even the most basic ways that governments can make human life better; by verbal alchemy he transmuted plain evil into its own new moral crusade.

Indeed, perhaps the only reason his philosophy was ever taken seriously is that the rich and powerful found it very, very useful.

Why are so many famous people so awful?

Oct 12 JDN 2460961

J.K. Rowling is a transphobic bigot. H.P. Lovecraft was an overt racist. Orson Scott Card is homophobic, and so was Frank Herbert. Robert Heinlein was a misogynist. Isaac Asimov was a serial groper and sexual harasser. Neil Gaiman has been credibly accused of multiple sexual assaults.

That’s just among sci-fi and fantasy authors whose work I admire. I could easily go on with lots of other famous people and lots of other serious allegations. (I suppose Bill Cosby and Roman Polanski seem like particularly apt examples.)

Some of these are worse than others; since they don’t seem to be guilty of any actual crimes, we might even cut some slack to Lovecraft, Herbert and Heinlein for being products of their times. (It seems very hard to make that defense for Asimov and Gaiman, with Rowling and Card somewhere in between because they aren’t criminals, but ‘their time’ is now.)

There are of course exceptions: Among sci-fi authors, for instance, Ursula Le Guin, Becky Chambers, Alastair Reynolds and Andy Weir all seem to be ethically unimpeachable. (As far as I know? To be honest, I still feel blindsided by Neil Gaiman.)

But there really does seem to be a pattern here:

Famous people are often bad people.

I guess I’m not quite sure what the baseline rate of being racist, sexist, or homophobic is (and frankly maybe it’s pretty high); but the baseline rate of committing multiple sexual assaults is definitely lower than the rate at which famous men get credibly accused of such.

Lord Acton famously made a similar observation:

Power tends to corrupt and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority; still more when you superadd the tendency or the certainty of corruption by authority.

I think this account is wrong, however. Abraham Lincoln, Mahatma Gandhi, and Nelson Mandela were certainly powerful—and certainly flawed—but they do not seem corrupt to me. I don’t think that Gandhi beat his wife because he led the Indian National Congress, and Mandela supported terrorists precisely during the period when he had the least power and the fewest options. (It’s almost tautologically true that Lincoln couldn’t have suspended habeas corpus if he weren’t extremely powerful—but that doesn’t mean that it was the power that shaped his character.)

I don’t think the problem is that power corrupts. I think the problem is that the corrupt seek power, and are very good at obtaining it.

In fact, I think the reason that so many famous people are such awful people is that our society rewards being awful. People will flock to you if you are overconfident and good at self-promoting, and as long as they like your work, they don’t seem to mind who you hurt along the way; this makes a perfect recipe for rewarding narcissists and psychopaths with fame, fortune, and power.

If you doubt that this is the case:

How else do you explain Donald Trump?

The man has absolutely no redeeming qualities. He is incompetent, willfully ignorant, deeply incurious, arrogant, manipulative, and a pathological liar. He’s also a racist, misogynist, and admitted sexual assaulter. He has been doing everything in his power to prevent the release of the Epstein Files, which strongly suggests he has in fact sexually assaulted teenagers. He’s also a fascist, and now that he has consolidated power, he is rapidly pushing the United States toward becoming a fascist state—complete with masked men with guns who break into your home and carry you away without warrants or trials.

Yet tens of millions of Americans voted for him to become President of the United States—twice.

Basically, it seems that Trump said he was great, and they believed him. Simply projecting confidence—however utterly unearned that confidence might be—was good enough.

When it comes to the authors I started this post with, one might ask whether their writing talents were what brought them fame, independently of, or even in spite of, their moral flaws. To some extent that is probably true. But we also don’t really know how good they are, compared to all the other writers whose work never got published or never got read. Especially during times—all too recently—when writers who were women, queer, or people of color simply couldn’t get their work published, who knows what genius we might have missed out on? The first Dune book is a masterpiece, but by the time we get to Heretics of Dune the series has definitely lost its luster; maybe there were other authors with better books that could have been published, but never were, because Herbert had the clout and the privilege and those authors didn’t.

I do think genuine merit has some correlation with success. But I think the correlation is much weaker than is commonly supposed. A lot of very obviously terrible and/or incompetent people are extremely successful in life. Many of them were born with advantages—certainly true of Elon Musk and Donald Trump—but not all of them.

Indeed, there are so many awful successful people that I am led to conclude that moral behavior has almost nothing to do with success. I don’t think people actively go out of their way to support authors, musicians, actors, business owners or politicians who are morally terrible; but it’s difficult for me to reject the hypothesis that they literally don’t care. Indeed, when evidence emerges that someone powerful is terrible, usually their supporters will desperately search for reasons why the allegations can’t be true, rather than seriously considering no longer supporting them.

I don’t know what to do about this.

I don’t know how to get people to believe allegations more, or care about them more; and that honestly seems easier than changing the fundamental structure of our society in such a way that narcissists and psychopaths are no longer rewarded with power. The basic ways that we decide who gets jobs, who gets published, and who gets elected seem to be deeply, fundamentally broken; they are selecting all the wrong people, and our whole civilization is suffering the consequences.


We are so far from a just world that I honestly can’t see how to get there from here, or even how to move substantially closer.

But I think we still have to try.

Grief, a rationalist perspective

Aug 31 JDN 2460919

This post goes live on the 8th anniversary of my father’s death. Thus it seems an appropriate time to write about grief—indeed, it’s somewhat difficult for me to think about much else.

Far too often, the only perspectives on grief we hear are religious ones. Often, these take the form of consolation: “He’s in a better place now.” “You’ll see him again someday.”

Rationalism doesn’t offer such consolations. Technically one can be an atheist and still believe in an afterlife; but rationalism is stronger than mere atheism. It requires that we believe in scientific facts, and the permanent end of consciousness at death is a scientific fact. We know from direct experiments and observations in neuroscience that a destroyed brain cannot think, feel, see, hear, or remember—when your brain shuts down, whatever you are now will be gone.

It is the Basic Fact of Cognitive Science: There is no soul but the brain.

Moreover, I think, deep down, we all know that death is the end. Even religious people grieve. Their words may say that their loved one is in a better place, but their tears tell a different story.

Maybe it’s an evolutionary instinct, programmed deep into our minds like an ancestral memory, a voice that screams in our minds, insistent on being heard:

“Death is bad!”

If there is one crucial instinct a lifeform needs in order to survive, surely it is something like that one: The preference for life over death. In order to live in a hostile world, you have to want to live.

There are some people who don’t want to live, people who become suicidal. Sometimes even the person we are grieving was someone who chose to take their own life. Generally this is because they believe that their life from then on would be defined only by suffering. Usually, I would say they are wrong about that; but in some cases, maybe they are right, and choosing death is rational. Most of the time, life is worth living, even when we can’t see that.

But aside from such extreme circumstances, most of us feel most of the time that death is one of the worst things that could happen to us or our loved ones. And it makes sense that we feel that way. It is right to feel that way. It is rational to feel that way.

This is why grief hurts so much.

This is why you are not okay.

If the afterlife were real—or even plausible—then grief would not hurt so much. A loved one dying would be like a loved one traveling away to somewhere nice; bittersweet perhaps, maybe even sad—but not devastating the way that grief is. You don’t hold a funeral for someone who just booked a one-way trip to Hawaii, even if you know they aren’t ever coming back.

Religion tries to be consoling, but it typically fails. Because that voice in our heads is still there, repeating endlessly: “Death is bad!” “Death is bad!” “Death is bad!”

But what if religion does give people some comfort in such a difficult time? What if supposing something as nonsensical as Heaven numbs the pain for a little while?

In my view, you’d be better off using drugs. Drugs have side effects and can be addictive, but at least they don’t require you to fundamentally abandon your ontology. Mainstream religion isn’t simply false; it’s absurd. It’s one of the falsest things anyone has ever believed about anything. It’s obviously false. It’s ridiculous. It has never deserved any of the respect and reverence it so often receives.

And in a great many cases, religion is evil. Religion teaches people to be obedient to authoritarians, and to oppress those who are different. Some of the greatest atrocities in history were committed in the name of religion, and some of the worst oppression going on today is done in the name of religion.

Rationalists should give religion no quarter. It is better for someone to find solace in alcohol or cannabis than for them to find solace in religion.

And maybe, in the end, it’s better if they don’t find solace at all.

Grief is good. Grief is healthy. Grief is what we should feel when something as terrible as death happens. That voice screaming “Death is bad!” is right, and we should listen to it.

No, what we need is to not be paralyzed by grief, destroyed by grief. We need to withstand our grief, get through it. We must learn to be strong enough to bear what seems unbearable, not console ourselves with lies.

If you are a responsible adult, then when something terrible happens to you, you don’t pretend it isn’t real. You don’t conjure up a fantasy world in which everything is fine. You face your terrors. You learn to survive them. You make yourself strong enough to carry on. The death of a loved one is a terrible thing; you shouldn’t pretend otherwise. But it doesn’t have to destroy you. You can grow, and heal, and move on.

Moreover, grief has a noble purpose. From our grief we must find motivation to challenge death, to fight death wherever we find it. Those we have already lost are gone; it’s too late for them. But it’s not too late for the rest of us. We can keep fighting.

And through economic development and medical science, we do keep fighting.

In fact, little by little, we are winning the war on death.

Death has already lost its hold upon our children. For most of human history, nearly a third of children died before the age of 5. Now less than 1% do, in rich countries, and even in the poorest countries, it’s typically under 10%. With a little more development—development that is already happening in many places—we can soon bring everyone in the world to the high standard of the First World. We have basically won the war on infant and child mortality.

And death is losing its hold on the rest of us, too. Life expectancy at adulthood is also increasing, and more and more people are living into their nineties and even past one hundred.

It’s true, there still aren’t many people living to be 120 (and some researchers believe it will be a long time before this changes). But living to be 85 instead of 65 is already an extra 20 years of life—and these can be happy, healthy years too, not years of pain and suffering. They say that 60 is the new 50; physiologically, we are so much healthier than our ancestors that it’s as if we were ten years younger.

My sincere hope is that our grief for those we have lost and fear of losing those we still have will drive us forward to even greater progress in combating death. I believe that one day we will finally be able to slow, halt, perhaps even reverse aging itself, rendering us effectively immortal.

Religion promises us immortality, but it isn’t real.

Science offers us the possibility of immortality that’s real.

It won’t be easy to get there. It won’t happen any time soon. In all likelihood, we won’t live to see it ourselves. But one day, our descendants may achieve the grandest goal of all: Finally conquering death.

And even long before that glorious day, our lives are already being made longer and healthier by science. We are pushing death back, step by step, day by day. We are fighting, and we are winning.

Moreover, we as individuals are not powerless in this fight: you can fight death a little harder yourself, by becoming an organ donor, or by donating to organizations that fight global poverty or advance medical science. Let your grief drive you to help others, so that they don’t have to grieve as you do.

And if you need consolation from your grief, let it come from this truth: Death is rarer today than it was yesterday, and will be rarer still tomorrow. We can’t bring back those we have lost, but we can keep ourselves from losing more so soon.

Conflict without shared reality

Aug 17 JDN 2460905

Donald Trump has federalized the police in Washington D.C. and deployed the National Guard. He claims he is doing this in response to a public safety emergency and crime that is “out of control”.

Crime rates in Washington, D.C. are declining and overall at their lowest level in 30 years. Its violent crime rate has not been this low since the 1960s.

By any objective standard, there is no emergency here. Crime in D.C. is not by any means out of control.

Indeed, across the United States, homicide rates are as low as they have been in 60 years.

But we do not live in a world where politics is based on objective truth.

We live in a world where the public perception of reality itself is shaped by the political narrative.

One of the first things that authoritarians do to control these narratives is try to make their followers distrust objective sources. I watch in disgust as not simply the Babylon Bee (which is a right-wing satire site that tries really hard to be funny but never quite manages it) but even the Atlantic (a mainstream news outlet generally considered credible) feeds—in multiple articles—into this dangerous lie that crime is increasing and the official statistics are somehow misleading us about that.

Of course the Atlantic’s take is much more nuanced; but quite frankly, now is not the time for nuance. A fascist is trying to take over our government, and he needs to be resisted at every turn by every means possible. You need to be calling him out on every single lie he tells—yes, every single one, I know there are a lot of them, and that’s kind of the point—rather than trying to find alternative framings on which maybe part of what he said could somehow be construed as reasonable from a certain point of view. Every time you make Trump sound more reasonable than he is—and mainstream news outlets have done this literally hundreds of times—you are pushing America closer to fascism.

I really don’t know what to do here.

It is impossible to resolve conflicts when they are not based on shared reality.

No policy can solve a crime wave that doesn’t exist. No trade agreement can stop unfair trading practices that aren’t happening. Nothing can stop vaccines from causing autism that they already don’t cause. There is no way to fix problems when those problems are completely imaginary.

I used to think that political conflict was about different values which had to be balanced against one another: Liberty versus security, efficiency versus equality, justice versus mercy. I thought that we all agreed on the basic facts and even most of the values, and were just disagreeing about how to weigh certain values over others.

Maybe I was simply naive; maybe it’s never been like that. But it certainly isn’t right now. We aren’t disagreeing about what should be done; we are disagreeing about what is happening in front of our eyes. We don’t simply have different priorities or even different values; it’s like we are living in different worlds.

I have read (from Jonathan Haidt, for example) that conservatives largely understand what liberals want, but liberals don’t really understand what conservatives want. (I would like to take one of the tests they use in these experiments and see how I actually do, but I’ve never been able to find one.)

Haidt’s particular argument seems to be that liberals don’t “understand” the “moral dimensions” of loyalty, authority, and sanctity, because we only “understand” harm and fairness as the basis of morality. But just because someone says something is morally relevant, that doesn’t mean it is morally relevant! And indeed, based on more or less the entirety of ethical philosophy, I can say that harm and fairness are morality, and the others simply aren’t. They are distortions of morality, they are inherently evil, and we are right to oppose them at every turn. Loyalty, authority, and sanctity are what fed Nazi Germany and the Spanish Inquisition.

This claim that liberals don’t understand conservatives has always seemed very odd to me: I feel like I have a pretty clear idea what conservatives want, it’s just that what they want is terrible: Kick out the immigrants, take money from the poor and give it to the rich, and put rich straight Christian White men back in charge of everything. (I mean, really, if that’s not what they want, why do they keep voting for people who do it? Revealed preferences, people!)

Or, more sympathetically: They want to go back to a nostalgia-tinted vision of the 1950s and 1960s in which it felt like things were going well for our country—because they were blissfully ignorant of all the violence and injustice in the world. No, thank you, Black people and queer people do not want to go back to how we were treated in the 1950s—when segregation was legal and Alan Turing was chemically castrated. (And they also don’t seem to grasp that among the things that did make some things go relatively well in that period were unions, antitrust law and progressive taxes, which conservatives now fight against at every turn.)

But I think maybe part of what’s actually happening here is that a lot of conservatives actually “want” things that literally don’t make sense, because they rest upon assumptions about the world that simply aren’t true.

They want to end “out of control” crime that is the lowest it’s been in decades.

They want to stop schools from teaching things that they already aren’t teaching.

They want the immigrants to stop bringing drugs and crime that they aren’t bringing.

They want LGBT people to stop converting their children, which we already don’t and couldn’t. (And then they want to do their own conversions in the other direction—which also don’t work, but cause tremendous harm.)

They want liberal professors to stop indoctrinating their students in ways we already aren’t and can’t. (If we could indoctrinate our students, don’t you think we’d at least make them read the syllabus?)

They want to cut government spending by eliminating “waste” and “fraud” that are trivial amounts, without cutting the things that are actually expensive, like Social Security, Medicare, and the military. They think we can balance the budget without cutting these things or raising taxes—which is just literally mathematically impossible.

They want to close off trade to bring back jobs that were sent offshore—but those jobs weren’t sent offshore, they were replaced by robots. (US manufacturing output is near its highest ever, even though manufacturing employment is half what it once was.)


And meanwhile, there’s a bunch of real problems that aren’t getting addressed: Soaring inequality, a dysfunctional healthcare system, climate change, the economic upheaval of AI—and they either don’t care about those, aren’t paying attention to them, or don’t even believe they exist.

It feels a bit like this:

You walk into a room and someone points a gun at you, shouting “Drop the weapon!” but you’re not carrying a weapon. And you show your hands, and try to explain that you don’t have a weapon, but they just keep shouting “Drop the weapon!” over and over again. Someone else has already convinced them that you have a weapon, and they expect you to drop that weapon, and nothing you say can change their mind about this.

What exactly should you do in that situation?

How do you avoid getting shot?

Do you drop something else and say it’s the weapon (make some kind of minor concession that looks vaguely like what they asked for)? Do you try to convince them that you have a right to the weapon (accept their false premise but try to negotiate around it)? Do you just run away (leave the country?)? Do you double down and try even harder to convince them that you really, truly, have no weapon?

I’m not saying that everyone on the left has a completely accurate picture of reality; there are clearly a lot of misconceptions on this side of the aisle as well. But at least among the mainstream center left, there seems to be a respect for objective statistics and a generally accurate perception of how the world works—the “reality-based community”. Sometimes liberals make mistakes, have bad ideas, or even tell lies; but I don’t hear a lot of liberals trying to fix problems that don’t exist or asking for the government budget to be changed in ways that violate basic arithmetic.

I really don’t know what to do here, though.

How do you change people’s minds when they won’t even agree on the basic facts?

On foxes and hedgehogs, part I

Aug 3 JDN 2460891

Today I finally got around to reading Expert Political Judgment by Philip E. Tetlock, more or less in a single sitting because I’ve been sick the last week with some pretty tight limits on what activities I can do. (It’s mostly been reading, watching TV, or playing video games that don’t require intense focus.)

It’s really an excellent book, and now I both understand why it came so highly recommended to me and can pass that recommendation on to you: Read it.

The central thesis of the book really boils down to three propositions:

  1. Human beings, even experts, are very bad at predicting political outcomes.
  2. Some people, who use an open-minded strategy (called “foxes”), perform substantially better than other people, who use a more dogmatic strategy (called “hedgehogs”).
  3. When rewarding predictors with money, power, fame, prestige, and status, human beings systematically favor (over)confident “hedgehogs” over (correctly) humble “foxes”.

I decided I didn’t want to make this post about current events, but I think you’ll probably agree with me when I say:

That explains a lot.

How did Tetlock determine this?

Well, he studies the issue several different ways, but the core experiment that drives his account is actually a rather simple one:

  1. He gathered a large group of subject-matter experts: Economists, political scientists, historians, and area-studies professors.
  2. He came up with a large set of questions about politics, economics, and similar topics, which could all be formulated as a set of probabilities: “How likely is this to get better/get worse/stay the same?” (For example, this was in the 1980s, so he asked about the fate of the Soviet Union: “By 1990, will they become democratic, remain as they are, or collapse and fragment?”)
  3. Each respondent answered a subset of the questions, some about their own particular field, some about another, more distant field; they assigned probabilities on an 11-point scale, from 0% to 100% in increments of 10%.
  4. A few years later, he compared the predictions to the actual results, scoring them using a Brier score, which penalizes you for assigning high probability to things that didn’t happen or low probability to things that did happen (see the sketch just after this list).
  5. He compared the resulting scores between people with different backgrounds, on different topics, with different thinking styles, and a variety of other variables. He also benchmarked them using some automated algorithms like “always say 33%” and “always give ‘stay the same’ 100%”.
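To make the scoring rule concrete, here is a minimal sketch (my own illustration with invented numbers, not Tetlock’s actual code or data) of how a three-outcome Brier score works, including the “always say 33%” benchmark; lower scores are better, with 0 perfect and 2 the worst possible on a single question:

```python
# A minimal sketch of multi-outcome Brier scoring (illustrative only; not Tetlock's code).
# Each question has three possible outcomes: get better, stay the same, get worse.

def brier_score(forecast, outcome):
    """Sum of squared errors across the three outcomes: 0 is perfect, 2 is the worst."""
    return sum((f - o) ** 2 for f, o in zip(forecast, outcome))

# Hypothetical forecasts in 10% increments, as in the study, for two questions.
forecasts = [
    [0.7, 0.2, 0.1],   # confident things will get better
    [0.1, 0.3, 0.6],   # leaning toward things getting worse
]
# What actually happened (1 for the outcome that occurred, 0 for the others).
outcomes = [
    [0, 1, 0],         # things stayed the same
    [0, 0, 1],         # things got worse
]

expert = sum(brier_score(f, o) for f, o in zip(forecasts, outcomes)) / len(forecasts)

# "Mindless" benchmark: always assign 33% to each outcome.
chance = [1 / 3, 1 / 3, 1 / 3]
mindless = sum(brier_score(chance, o) for o in outcomes) / len(outcomes)

print(f"expert:   {expert:.3f}")    # 0.700 here -- worse than chance!
print(f"33% rule: {mindless:.3f}")  # 0.667
```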

I’ll show you the key results of that analysis momentarily, but to help it make more sense to you, let me elaborate a bit more on the “foxes” and “hedgehogs”. The notion was first popularized by Isaiah Berlin in an essay called, simply, The Hedgehog and the Fox.

“The fox knows many things, but the hedgehog knows one very big thing.”

That is, someone who reasons as a “fox” combines ideas from many different sources and perspectives, and tries to weigh them all together into some sort of synthesis that then yields a final answer. This process is messy and complicated, and rarely yields high confidence about anything.

Whereas, someone who reasons as a “hedgehog” has a comprehensive theory of the world, an ideology, that provides clear answers to almost any possible question, with the surely minor, insubstantial flaw that those answers are not particularly likely to be correct.

He also considered “hedge-foxes” (people who are mostly fox but also a little bit hedgehog) and “fox-hogs” (people who are mostly hedgehog but also a little bit fox).

Tetlock has decomposed the scores into two components: calibration and discrimination. (Both very overloaded words, but they are standard in the literature.)

Calibration is how well your stated probabilities matched up with the actual probabilities; that is, if you predicted 10% probability on 20 different events, you have very good calibration if precisely 2 of those events occurred, and very poor calibration if 18 of those events occurred.

Discrimination more or less describes how useful your predictions are, what information they contain above and beyond the simple base rate. If you just assign equal probability to all events, you probably will have reasonably good calibration, but you’ll have zero discrimination; whereas if you somehow managed to assign 100% to everything that happened and 0% to everything that didn’t, your discrimination would be perfect (and we would have to find out how you cheated, or else declare you clairvoyant).

For both measures, higher is better. The ideal for each is 100%, but it’s virtually impossible to get 100% discrimination and actually not that hard to get 100% calibration if you just use the base rates for everything.
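To make those two measures a bit more concrete, here is a rough sketch of how you might estimate them from binary forecasts, in the spirit of the standard Brier-score decomposition (my own illustration with invented data, not the book’s exact formulas). Note that in this raw form calibration shows up as an error term, where lower is better, whereas Tetlock rescales both into indices where higher is better, as described above:

```python
from collections import defaultdict

# Hypothetical binary forecasts on the 11-point scale, paired with what happened (1 or 0).
forecasts = [0.1, 0.1, 0.1, 0.3, 0.3, 0.7, 0.7, 0.9, 0.9, 0.9]
happened  = [0,   0,   1,   0,   1,   1,   0,   1,   1,   1]

# Group the outcomes by the probability that was stated.
bins = defaultdict(list)
for p, o in zip(forecasts, happened):
    bins[p].append(o)

n = len(happened)
base_rate = sum(happened) / n

# Calibration error: how far each stated probability is from the observed frequency
# in that group (0 means your 70% calls came true exactly 70% of the time, etc.).
calibration_error = sum(len(v) * (p - sum(v) / len(v)) ** 2 for p, v in bins.items()) / n

# Discrimination (resolution): how far the observed frequencies stray from the base rate.
# Always guessing the base rate gives zero; sorting events from non-events gives more.
discrimination = sum(len(v) * (sum(v) / len(v) - base_rate) ** 2 for p, v in bins.items()) / n

print(f"calibration error: {calibration_error:.3f}  (lower is better in this raw form)")
print(f"discrimination:    {discrimination:.3f}  (higher is better)")
```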


There is a bit of a tradeoff between these two: It’s not too hard to get reasonably good calibration if you just never go out on a limb, but then your predictions aren’t as useful; we could have mostly just guessed them from the base rates.

On the graph, you’ll see downward-sloping lines that are meant to represent this tradeoff: Two prediction methods that would yield the same overall score but different levels of calibration and discrimination will be on the same line. In a sense, two points on the same line are equally good methods that strike a different balance between usefulness and accuracy.

All right, let’s see the graph at last:

The pattern is quite clear: The more foxy you are, the better you do, and the more hedgehoggy you are, the worse you do.

I’d also like to point out the other two regions here: “Mindless competition” and “Formal models”.

The former includes really simple algorithms like “always return 33%” or “always give ‘stay the same’ 100%”. These perform shockingly well. The most sophisticated of these, “case-specific extrapolation” (35 and 36 on the graph), which basically assumes that each country will continue doing what it has been doing, actually performs as well as, if not better than, even the foxes.

And what’s that at the upper-right corner, absolutely dominating the graph? That’s “Formal models”. This describes basically taking all the variables you can find and shoving them into a gigantic logit model, and then outputting the result. It’s computationally intensive and requires a lot of data (hence why he didn’t feel like it deserved to be called “mindless”), but it’s really not very complicated, and it’s the best prediction method, in every way, by far.
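To give a feel for what that looks like in practice, here is a toy sketch of such a formal model: a multinomial logistic regression over invented predictor variables (the feature names, numbers, and library choice are my own assumptions, not the models from the book). Nothing here is clever; the point is precisely that a plain, transparent statistical model fed decent data tends to out-predict human pundits.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features for each country-question: say, GDP growth (%), an instability
# index, and regime age in years. Outcomes coded 0 = better, 1 = same, 2 = worse.
X_train = np.array([
    [ 2.1, 0.3, 40],
    [-1.5, 0.8, 12],
    [ 0.4, 0.5, 25],
    [ 3.0, 0.1, 60],
    [-2.2, 0.9,  8],
    [ 1.0, 0.4, 30],
])
y_train = np.array([0, 2, 1, 0, 2, 1])

# "Shove all the variables into a logit model" and let it estimate the weights.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Probabilities over better / same / worse for a new, hypothetical case.
X_new = np.array([[0.5, 0.6, 20]])
print(model.predict_proba(X_new))
```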

This has made me feel quite vindicated about a weird nerd thing I do: When I have a big decision to make (especially a financial decision), I create a spreadsheet and assemble a linear utility model to determine which choice will maximize my utility, under different parameterizations based on my past experiences. Whichever result seems to win the most robustly, I choose. This is fundamentally similar to the “formal models” prediction method, where the thing I’m trying to predict is my own happiness. (It’s a bit less formal, actually, since I don’t have detailed happiness data to feed into the regression.) And it has worked for me, astonishingly well. It definitely beats going by my own gut. I highly recommend it.
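For what it’s worth, here is a stripped-down sketch of that kind of decision spreadsheet in code (the options, attributes, and weights are all invented for illustration; my real version has more of everything):

```python
# A toy version of the decision spreadsheet: score each option as a weighted sum of
# attribute scores, under several different weightings ("parameterizations"), and see
# which option wins most robustly. All names and numbers are invented for illustration.

options = {
    "Option A": {"affordability": 0.8, "convenience": 0.3, "enjoyment": 0.9},
    "Option B": {"affordability": 0.6, "convenience": 0.9, "enjoyment": 0.6},
}

weightings = {
    "frugal":          {"affordability": 0.6, "convenience": 0.2, "enjoyment": 0.2},
    "balanced":        {"affordability": 0.34, "convenience": 0.33, "enjoyment": 0.33},
    "quality-of-life": {"affordability": 0.2, "convenience": 0.4, "enjoyment": 0.4},
}

def utility(attributes, weights):
    """Linear utility: weighted sum of attribute scores."""
    return sum(weights[k] * attributes[k] for k in weights)

for name, weights in weightings.items():
    scores = {opt: utility(attrs, weights) for opt, attrs in options.items()}
    winner = max(scores, key=scores.get)
    print(f"{name:>15}: " + ", ".join(f"{o}={s:.2f}" for o, s in scores.items()) + f"  -> {winner}")

# Whichever option comes out ahead under most (or all) of the weightings is the one I pick.
```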

What does this mean?

Well first of all, it means humans suck at predicting things. At least for this data set, even our experts don’t perform substantially better than mindless models like “always assume the base rate”.

Nor do experts perform much better in their own fields than in other fields; they do all perform better than undergrads or random people (who somehow perform worse than the “mindless” models).

But Tetlock also investigates further, trying to better understand this “fox/hedgehog” distinction and why it yields different performance. He really bends over backwards to try to redeem the hedgehogs, in the following ways:

  1. He allows them to make post-hoc corrections to their scores, based on “value adjustments” (assigning higher probability to events that would be really important) and “difficulty adjustments” (assigning higher scores to questions where the three outcomes were close to equally probable) and “fuzzy sets” (giving some leeway on things that almost happened or things that might still happen later).
  2. He describes a different, related experiment, in which certain manipulations can cause foxes to perform a lot worse than they normally would, and even yield really crazy results like probabilities that add up to 200%.
  3. He has a whole chapter that is a Socratic dialogue (seriously!) between four voices: A “hardline neopositivist”, a “moderate neopositivist”, a “reasonable relativist”, and an “unrelenting relativist”; and all but the “hardline neopositivist” agree that there is some legitimate place for the sort of post hoc corrections that the hedgehogs make to keep themselves from looking so bad.

This post is already getting a bit long, so that will conclude part I. Stay tuned for part II, next week!