Age, ambition, and social comparison

Jul 2 JDN 2460128

The day I turned 35 years old was one of the worst days of my life, as I wrote about at the time. I think the only times I have felt more depressed than that day were when my father died, when I was hospitalized by an allergic reaction to lamotrigine, and when I was rejected after interviewing for jobs at GiveWell and Wizards of the Coast.

This is notable because… nothing particularly bad happened to me on my 35th birthday. It was basically an ordinary day for me. I felt horrible simply because I was turning 35 and hadn’t accomplished so many of the things I thought I would have by that point in my life. I felt my dreams shattering as the clock ticked away what chance I thought I’d have at achieving my life’s ambitions.

I am slowly coming to realize just how pathological that attitude truly is. It was ingrained in me deeply from a very young age, not least because I was such a gifted child.

While studying quantum physics in college, I was warned that great physicists do all their best work before they are 30 (some even said 25). Einstein himself said as much (so it must be true, right?). It turns out that was simply untrue. It may have been largely true in the 18th and 19th centuries, and seems to have seen some resurgence during the early years of quantum theory, but today the median age at which a Nobel laureate physicist did their prize-winning work is 48. Less than 20% of eminent scientists made their great discoveries before the age of 40.

Alexander Fleming was 47 when he discovered penicillin—just about average for an eminent scientist of today. Darwin was 22 when he set sail on the Beagle, but didn’t publish On the Origin of Species until he was 50. André-Marie Ampère started his work in electromagnetism in his forties.

In creative arts, age seems to be no barrier at all. Julia Child published her first cookbook at 50. Stan Lee sold his first successful Marvel comic at 40. Toni Morrison was 39 when she published her first novel, and 62 when she won her Nobel. Peter Mark Roget was 73 when he published his famous thesaurus. Tolkien didn’t publish The Hobbit until he was 45.

Alan Rickman didn’t start drama school until he was 26 and didn’t have a major Hollywood role until he was 42. Samuel L. Jackson is now the third-highest-grossing actor of all time (mostly because of the Avengers movies), but he didn’t have any major movie roles until his forties. Anna Moses didn’t start painting until she was 78.

We think of entrepreneurship as a young man’s game, but Ray Kroc didn’t buy McDonald’s until he was 59. Harland Sanders didn’t franchise KFC until he was 62. Eric Yuan wasn’t a vice president until the age of 37 and didn’t become a billionaire until Zoom took off in 2019—he was 49. Sam Walton didn’t found Walmart until he was 44.

Great humanitarian achievements actually seem to be more likely later in life: Gandhi did not see India achieve independence until he was 77. Nelson Mandela was 75 when he became President of South Africa.

It has taken me far too long to realize this, and in fact I don’t think I have yet fully internalized it: Life is not a race. You do not “fall behind” when others achieve things younger than you did. In fact, most child prodigies grow up no more successful as adults than children who were merely gifted or even above-average. (There is another common belief that prodigies grow up miserable and stunted; that, fortunately, isn’t true either.)

Then there is queer time—the fact that, in a hostile heteronormative world, queer people often find ourselves growing up in a very different way than straight people—and crip time—the ways that coping with a disability changes your relationship with time and often forces you to manage your time in ways that others don’t. As someone who came out fairly young and is now married, queer time doesn’t seem to have affected me all that much. But I feel crip time very acutely: I have to very carefully manage when I go to bed and when I wake up, every single day, making sure I get not only enough sleep—much more sleep than most people get or most employers respect—but also that it aligns properly with my circadian rhythm. Failure to do so risks triggering severe, agonizing pain. Factoring that in, I have lost at least a few years of my life to migraines and depression, and will probably lose several more in the future.

But more importantly, we all need to learn to stop measuring ourselves against other people’s timelines. There is no prize in life for being faster. And while there are prizes for particular accomplishments (Oscars, Nobels, and so on), much of what determines whether you win such prizes is entirely beyond your control. Even people who ultimately made eminent contributions to society didn’t know in advance that they were going to, and didn’t behave all that much differently from others who tried but failed.

I do not want to make this sound easy. It is incredibly hard. I believe that I personally am especially terrible at it. Our society seems to be optimized to make us compare ourselves to others in as many ways as possible as often as possible in as biased a manner as possible.

Capitalism has many important upsides, but one of its deepest flaws is that it makes our standard of living directly dependent on what is happening in the rest of a global market we can neither understand nor control. A subsistence farmer is subject to the whims of nature; but in a supermarket, you are subject to the whims of an entire global economy.

And there is reason to think that the harm of social comparison is getting worse rather than better. If some mad villain set out to devise a system that would maximize harmful social comparison and the emotional damage it causes, he would most likely create something resembling social media.

The villain might also tack on some TV news for good measure: Here are some random terrifying events, which we’ll make it sound like could hit you at any moment (even though their actual risk is declining); then our ‘good news’ will be a litany of amazing accomplishments, far beyond anything you could reasonably hope for, which have been achieved by a cherry-picked sample of unimaginably fortunate people you have never met (yet you somehow still form parasocial bonds with because we keep showing them to you). We will make a point not to talk about the actual problems in the world (such as inequality and climate change), certainly not in any way you might be able to constructively learn from; nor will we mention any actual good news which might be relevant to an ordinary person such as yourself (such as economic growth, improved health, or reduced poverty). We will focus entirely on rare, extreme events that by construction aren’t likely to ever happen to you and are not relevant to how you should live your life.

I do not have some simple formula I can give you that will make social comparison disappear. I do not know how to shake the decades of indoctrination into a societal milieu that prizes richer and faster over all other concepts of worth. But perhaps at least recognizing the problem will weaken its power over us.

How to make political conversation possible

Jun 25 JDN 2460121

Every man has the right to an opinion, but no man has a right to be wrong in his facts.

~Bernard Baruch

We shouldn’t expect political conversation to be easy. Politics inherently involves conflict. There are various competing interests and different ethical views involved in any political decision. Budgets are inherently limited, and spending must be prioritized. Raising taxes supports public goods but hurts taxpayers. A policy that reduces inflation may increase unemployment. A policy that promotes growth may also increase inequality. Freedom must sometimes be weighed against security. Compromises must be made that won’t make everyone happy—often they aren’t anyone’s first choice.

But in order to have useful political conversations, we need to have common ground. It’s one thing to disagree about what should be done—it’s quite another to ‘disagree’ about the basic facts of the world. Reasonable people can disagree about what constitutes the best policy choice. But when you start insisting upon factual claims that are empirically false, you become inherently unreasonable.

What terrifies me about our current state of political discourse is that we do not seem to have this common ground. We can’t even agree about basic facts of the world. Unless we can fix this, political conversation will be impossible.

I am tempted to say “anymore”—it at least feels to me like politics used to be different. But maybe it’s always been this way, and the Internet simply made the unreasonable voices louder. Overall rates of belief in most conspiracy theories haven’t changed substantially over time, and many earlier eras have likewise been declared ‘the golden age of conspiracy theory’. Maybe this has always been a problem. Maybe the greatest reason humanity has never been able to achieve peace is that large swaths of humanity can’t even agree on the basic facts.

Donald Trump exemplified this fact-less approach to politics, and QAnon remains a disturbingly significant force in our politics today. It’s impossible to have a sensible conversation with people who are convinced that you’re supporting a secret cabal of Satanic child molesters—and all the more impossible because they were willing to become convinced of that on literally zero evidence. But Trump was not the first conspiracist candidate, and will not be the last.

Robert F. Kennedy Jr. now seems to be challenging Trump for the title of ‘most unreasonable Presidential candidate’, as he has advocated an astonishing variety of bizarre, unfounded claims: that vaccines are deadly, that antidepressants are responsible for mass shootings, that COVID was a Chinese bioweapon. He even claims things that can be quickly refuted simply by looking up the figures: He says that Switzerland’s gun ownership rate is comparable to the US, when in fact it’s only about one-fourth as high. No other country even comes close to the extraordinarily high rate of gun ownership in the US; we are the only country in the world with more privately-owned guns than people to own them. (We also have by far the most military weapons, but that’s a somewhat different issue.)

What should we be doing about this? I think at this point it’s clear that simply sitting back and hoping it goes away on its own is not working. There is a widespread fear that engaging with bizarre theories simply grants them attention, but I think we have no serious alternative. They aren’t going to disappear if we simply ignore them.

That still leaves the question of how to engage. Simply arguing with their claims directly and presenting mainstream scientific evidence appears to be remarkably ineffective. They will simply dismiss the credibility of the scientific evidence, often by exaggerating genuine flaws in scientific institutions. The journal system is broken? Big Pharma has far too much influence? Established ideas take too long to become unseated? All true. But that doesn’t mean that magic beans cure cancer.

A more effective—not easy, and certainly not infallible, but more effective—strategy seems to be to look deeper into why people say the things they do. I emphasize the word ‘say’ here, because it often seems to be the case that people don’t really believe in conspiracy theories the way they believe in ordinary facts. It’s more like a mythology mindset.

Rather than address the claims directly, you need to address the person making the claims. Before getting into any substantive content, you must first build rapport and show empathy—a process some call pre-suasion. Then, rather than seeking out the evidence that supports their claims—as there will be virtually none—try to find out what emotional need the conspiracy theory satisfies for them: How does it help them make sense of the terrifying chaos of the world? How does professing belief in something that initially seems absurd and horrific actually make the world seem more orderly and secure in their mind?


For instance, consider the claim that 9/11 was an inside job. At face value, this is horrifying: The US government is so evil it was prepared to launch an attack on our own soil, against our own citizens, in order to justify starting a war in another country? Against such a government, I think violent insurrection is the only viable response. But if you consider it from another perspective, it makes the world less terrifying: At least, there is someone in control. An attack like 9/11 means that the world is governed by chaos: Even we in the seemingly-impregnable fortress of American national security are in fact vulnerable to random attacks by small groups of dedicated fanatics. In the conspiracist vision of the world, the US government becomes a terrible villain; but at least the world is governed by powerful, orderly forces—not random chaos.

Or consider one of the most widespread (and, to be fair, one of the least implausible) conspiracy theories: That JFK was assassinated not by a single fanatic, but by an organized agency—the KGB, or the CIA, or the Vice President. In the real world, the President of the United States—the most powerful man on the entire planet—can occasionally be felled by a single individual who is dedicated enough and lucky enough. In the conspiracist world, such a powerful man can only be killed by someone similarly powerful. The world may be governed by an evil elite—but at least it is governed. The rules may be evil, but at least there are rules.

Understanding this can give you some sympathy for people who profess conspiracies: They are struggling to cope with the pain of living in a chaotic, unpredictable, disorderly world. They cannot deny that terrible events happen, but by attributing them to unseen, organized forces, they can at least believe that those terrible events are part of some kind of orderly plan.


At the same time, you must constantly guard against seeming arrogant or condescending. (This is where I usually fail; it’s so hard for me to take these ideas seriously.) You must present yourself as open-minded and interested in speaking in good faith. If they sense that you aren’t taking them seriously, people will simply shut down and refuse to talk any further.

It’s also important to recognize that most people with bizarre beliefs aren’t simply gullible. It isn’t that they believe whatever anyone tells them. On the contrary, they seem to suffer from misplaced skepticism: They doubt the credible sources and believe the unreliable ones. They are hyper-aware of the genuine problems with mainstream sources, and yet somehow totally oblivious to the far more glaring failures of the sources they themselves trust.

Moreover, you should never expect to change someone’s worldview in a single conversation. That simply isn’t how human beings work. The only times I have ever seen anyone completely change their opinion on something in a single sitting involved mathematical proofs—showing a proper proof really can flip someone’s opinion all by itself. Yet even scientists working in their own fields of expertise generally require multiple sources of evidence, combined over some period of time, before they will truly change their minds.

Your goal, then, should not be to convince someone that their bizarre belief is wrong. Rather, convince them that some of the sources they trust are just as unreliable as the ones they doubt. Or point out some gaps in the story they hadn’t considered. Or offer an alternative account of events that explains the outcome without requiring the existence of a secret evil cabal. Don’t try to tear down the entire wall all at once; chip away at it, one little piece at a time—and one day, it will crumble.

Hopefully if we do this enough, we can make useful political conversation possible.

We do seem to have better angels after all

Jun 18 JDN 2460114

A review of The Darker Angels of Our Nature

(I apologize for not releasing this on Sunday; I’ve been traveling lately and haven’t found much time to write.)

Since its release, I have considered Steven Pinker’s The Better Angels of Our Nature among a small elite category of truly great books—not simply good because enjoyable, informative, or well-written, but great in its potential impact on humanity’s future. Others include The General Theory of Employment, Interest, and Money, On the Origin of Species, and Animal Liberation.

But I also try to expose myself as much as I can to alternative views. I am quite fearful of the echo chambers that social media puts us in, where dissent is quietly hidden from view and groupthink prevails.

So when I saw that a group of historians had written a scathing critique of The Better Angels, I decided I surely must read it and get its point of view. This book is The Darker Angels of Our Nature.

The Darker Angels is written by a large number of different historians, and it shows. It’s an extremely disjointed book; it does not present any particular overall argument, various sections differ wildly in scope and tone, and sometimes they even contradict each other. It really isn’t a book in the usual sense; it’s a collection of essays whose only common theme is that they disagree with Steven Pinker.

In fact, even that isn’t quite true, as some of the best essays in The Darker Angels are actually the ones that don’t fundamentally challenge Pinker’s contention that global violence has been on a long-term decline for centuries and is now near its lowest in human history. These essays instead offer interesting insights into particular historical eras, such as medieval Europe, early modern Russia, and shogunate Japan, or they add nuance to the overall pattern: for instance, violence in Europe seems to have been lower during the preceding Pax Romana and higher in the subsequent early modern period, showing that the decline in violence was not simple or steady, but went through fluctuations and reversals as societies and institutions changed. (At this point I feel I should note that Pinker clearly would not disagree with this—several of the authors seem to think he would, which makes me wonder if they even read The Better Angels.)

Others point out that the scale of civilization seems to matter, that more is different, and larger societies and armies more or less automatically seem to result in lower fatality rates by some sort of scaling or centralization effect, almost like the square-cube law. That’s very interesting if true; it would suggest that in order to reduce violence, you don’t really need any particular mode of government, you just need something that unites as many people as possible under one banner. The evidence presented for it was too weak for me to say whether it’s really true, however, and there was really no theoretical mechanism proposed whatsoever.

Some of the essays correct genuine errors Pinker made, some of which look rather sloppy. Pinker clearly overestimated the death tolls of the An Lushan Rebellion, the Spanish Inquisition, and Aztec ritual executions, probably by using outdated or biased sources. (Though they were all still extremely violent!) His depiction of indigenous cultures does paint with a very broad brush, and fails to recognize that some indigenous societies seem to have been quite peaceful (though others absolutely were tremendously violent).

One of the best essays is about Pinker’s cavalier attitude toward mass incarceration, which I absolutely do consider a deep flaw in Pinker’s view. Pinker presents increased incarceration rates along with decreased crime rates as if they were an unalloyed good, while I can at best be ambivalent about whether the benefit of decreasing crime is worth the cost of greater incarceration. Pinker seems to take for granted that these incarcerations are fair and impartial, when we have a great deal of evidence that they are strongly biased against poor people and people of color.

There’s another good essay about the Enlightenment, which Pinker seems to idealize a little too much (especially in his other book Enlightenment Now). There was no sudden triumph of reason that instantly changed the world. Human knowledge and rationality gradually improved over a very long period of time, with no obvious turning point and many cases of backsliding. The scientific method isn’t a simple, infallible algorithm that suddenly appeared in the brain of Galileo or Bayes, but a whole constellation of methods and concepts of rationality that took centuries to develop and is in fact still developing. (Much as the Tao that can be told is not the eternal Tao, the scientific method that can be written in a textbook is not the true scientific method.)

Several of the essays point out the limitations of historical and (especially) archaeological records, making it difficult to draw any useful inferences about rates of violence in the past. I agree that Pinker seems a little too cavalier about this; the records really are quite sparse and it’s not easy to fill in the gaps. Very small samples can easily distort homicide rates; since only about 1% of deaths worldwide are homicide, if you find 20 bodies, whether or not one of them was murdered is the difference between peaceful Japan and war-torn Colombia.
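To put rough numbers on this sampling problem, here is a quick sketch using the figures from the text (a ~1% worldwide homicide share and a 20-body sample; these are illustrative numbers, not precise data):

```python
# Assumed figures for illustration: ~1% of deaths worldwide are homicides,
# and an archaeological dig yields a sample of 20 bodies.
true_rate = 0.01
n = 20

# Probability the sample contains no homicides at all (estimated rate: 0%).
p_zero = (1 - true_rate) ** n

# Probability it contains at least one (estimated rate jumps to 5% or more).
p_at_least_one = 1 - p_zero

print(f"P(estimated rate = 0%):  {p_zero:.2f}")
print(f"P(estimated rate >= 5%): {p_at_least_one:.2f}")
```

Roughly four digs in five would look perfectly peaceful, and the fifth would look at least five times more violent than the true rate, purely from sampling noise.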

On the other hand, all we really can do is make the best inferences we have with the available data, and for the time periods in which we do have detailed records—surely true since at least the 19th century—the pattern of declining violence is very clear, and even the World Wars look like brief fluctuations rather than fundamental reversals. Contrary to popular belief, the World Wars do not appear to have been especially deadly on a per-capita basis, compared to various historic wars. The primary reason so many people died in the World Wars was really that there just were more people in the world. A few of the authors don’t seem to consider this an adequate reason, but ask yourself this: Would you rather live in a society of 100 in which 10 people are killed, or a society of 1 billion in which 1 million are killed? In the former case your chances of being killed are 10%; in the latter, 0.1%. Clearly, per-capita measures of violence are the correct ones.

Some essays seem a bit beside the point, like one on “environmental violence” which quite aptly details the ongoing—terrifying—degradation of our global ecology, but somehow seems to think that this constitutes violence when it obviously doesn’t. There is widespread violence against animals, certainly; slaughterhouses are the obvious example—and unlike most people, I do not consider them some kind of exception we can simply ignore. We do in fact accept levels of cruelty to pigs and cows that we would never accept against dogs or horses—even the law makes such exceptions. Moreover, plenty of habitat destruction is accompanied by killing of the animals who lived in that habitat. But ecological degradation is not equivalent to violence. (Nor is it clear to me that our treatment of animals is more violent overall today than in the past; I guess life is probably worse for a beef cow today than it was in the medieval era, but either way, she was going to be killed and eaten. And at least we no longer do cat-burning.) Drilling for oil can be harmful, but it is not violent. We can acknowledge that life is more peaceful now than in the past without claiming that everything is better now—in fact, one could even argue that overall life isn’t better, though I think that would be a very hard case to make.

These are the relatively good essays, which correct minor errors or add interesting nuances. There are also some really awful essays in the mix.

A common theme of several of the essays seems to be “there are still bad things, so we can’t say anything is getting better”; they will point out various forms of violence that undeniably still exist, and treat this as a conclusive argument against the claim that violence has declined. Yes, modern slavery does exist, and it is a very serious problem; but it clearly is not the same kind of atrocity that the Atlantic slave trade was. Yes, there are still murders. Yes, there are still wars. Probably these things will always be with us to some extent; but there is a very clear difference between 500 homicides per million people per year and 50—and it would be better still if we could bring it down to 5.

There’s one essay about sexual violence that doesn’t present any evidence whatsoever to contradict the claim that rates of sexual violence have been declining while rates of reporting and prosecution have been increasing. (These two trends together often result in reported rapes going up, but most experts agree that actual rapes are going down.) The entire essay is based on anecdote, innuendo, and righteous anger.

There are several essays that spend their whole time denouncing neoliberal capitalism (not even presenting any particularly good arguments against it, though such arguments do exist), seeming to equate Pinker’s view with some kind of Rothbardian anarcho-capitalism when in fact Pinker is explicitly in favor of Nordic-style social democracy. (One literally dismisses his support for universal healthcare as “Well, he is Canadian”.) But Pinker has on occasion said good things about capitalism, so clearly, he is an irredeemable monster.

Right in the introduction—which almost made me put the book down—is an astonishingly ludicrous argument, which I must quote in full to show you that it is not out of context:

What actually is violence (nowhere posed or answered in The Better Angels)? How do people perceive it in different time-place settings? What is its purpose and function? What were contemporary attitudes toward violence and how did sensibilities shift over time? Is violence always ‘bad’ or can there be ‘good’ violence, violence that is regenerative and creative?

The Darker Angels of Our Nature, p.16

Yes, the scare quotes on ‘good’ and ‘bad’ are in the original. (Also the baffling jargon “time-place settings” as opposed to, say, “times and places”.) This was clearly written by a moral relativist. Aside from questioning whether we can say anything about anything, the argument seems to be that Pinker’s argument is invalid because he didn’t precisely define every single relevant concept, even though it’s honestly pretty obvious what the word “violence” means and how he is using it. (If anything, it’s these authors who don’t seem to understand what the word means; they keep calling things “violence” that are indeed bad, but obviously aren’t violence—like pollution and cyberbullying. At least talk of incarceration as “structural violence” isn’t obvious nonsense—though it is still clearly distinct from murder rates.)

But it was by reading the worst essays that I think I gained the most insight into what this debate is really about. Several of the essays in The Darker Angels thoroughly and unquestioningly share the following inference: if a culture is superior, then that culture has a right to impose itself on others by force. On this, they seem to agree with the imperialists: If you’re better, that gives you a right to dominate everyone else. They rightly reject the claim that cultures have a right to imperialistically dominate others, but they cannot deny the inference, and so they are forced to deny that any culture can ever be superior to another. The result is that they tie themselves in knots trying to justify how greater wealth, greater happiness, less violence, and babies not dying aren’t actually good things. They end up talking nonsense about “violence that is regenerative and creative”.

But we can believe in civilization without believing in colonialism. And indeed that is precisely what I (along with Pinker) believe: That democracy is better than autocracy, that free speech is better than censorship, that health is better than illness, that prosperity is better than poverty, that peace is better than war—and therefore that Western civilization is doing a better job than the rest. I do not believe that this justifies the long history of Western colonial imperialism. Governing your own country well doesn’t give you the right to invade and dominate other countries. Indeed, part of what makes colonial imperialism so terrible is that it makes a mockery of the very ideals of peace, justice, and freedom that the West is supposed to represent.

I think part of the problem is that many people see the world in zero-sum terms, and believe that the West’s prosperity could only be purchased by the rest of the world’s poverty. But this is untrue. The world is nonzero-sum. My happiness does not come from your sadness, and my wealth does not come from your poverty. In fact, even the West was poor for most of history, and we are far more prosperous now that we have largely abandoned colonial imperialism than we ever were in imperialism’s heyday. (I do occasionally encounter British people who seem vaguely nostalgic for the days of the empire, but real median income in the UK has doubled just since 1977. Inequality has also increased during that time, which is definitely a problem; but the UK is undeniably richer now than it ever was at the peak of the empire.)

In fact it could be that the West is richer now because of colonialism than it would have been without it. I don’t know whether or not this is true. I suspect it isn’t, but I really don’t know for sure. My guess would be that colonized countries are poorer, but colonizer countries are not richer—that is, colonialism is purely destructive. Certain individuals clearly got richer by such depredation (Leopold II, anyone?), but I’m not convinced many countries did.

Yet even if colonialism did make the West richer, it clearly cannot explain most of the wealth of Western civilization—for that wealth simply did not exist in the world before. All these bridges and power plants, laptops and airplanes weren’t lying around waiting to be stolen. Surely, some of the ingredients were stolen—not least, the land. Had they been bought at fair prices, the result might have been less wealth for us (then again it might not, for wealthier trade partners yield greater exports). But this does not mean that the products themselves constitute theft, nor that the wealth they provide is meaningless. Perhaps we should find some way to pay reparations; undeniably, we should work toward greater justice in the future. But we do not need to give up all we have in order to achieve that justice.

There is a law of conservation of energy. It is impossible to create energy in one place without removing it from another. There is no law of conservation of prosperity. Making the world better in one place does not require making it worse in another.

Progress is real. Yes, it is flawed and uneven, and it has costs of its own; but it is real. If we want to have more of it, we had best continue to believe in it. And The Better Angels of Our Nature does have some notable flaws, but it still retains its place among truly great books.

When maximizing utility doesn’t

Jun 4 JDN 2460100

Expected utility theory behaves quite strangely when you consider questions involving mortality.

Nick Beckstead and Teruji Thomas recently published a paper on this: All well-defined utility functions are either reckless in that they make you take crazy risks, or timid in that they tell you not to take even very small risks. It’s starting to make me wonder if utility theory is even the right way to make decisions after all.

Consider a game of Russian roulette where the prize is $1 million. The revolver has 6 chambers, 3 with a bullet. So that’s a 1/2 chance of $1 million, and a 1/2 chance of dying. Should you play?

I think it’s probably a bad idea to play. But the prize does matter; if it were $100 million, or $1 billion, maybe you should play after all. And if it were $10,000, you clearly shouldn’t.

And lest you think that there is no chance of dying you should be willing to accept for any amount of money, consider this: Do you drive a car? Do you cross the street? Do you do anything that could ever have any risk of shortening your lifespan in exchange for some other gain? I don’t see how you could live a remotely normal life without doing so. It might be a very small risk, but it’s still there.

This raises the question: Suppose we have some utility function over wealth; ln(x) is a quite plausible one. What utility should we assign to dying?

The fact that the prize matters means that we can’t assign death a utility of negative infinity. It must be some finite value.

But suppose we choose some finite value, -V (so V is positive), for the utility of dying. Normalize the utility of your current wealth to zero; then you are indifferent about playing when (1/2)ln(x) + (1/2)(-V) = 0, so we can find some amount of money that will make you willing to play: ln(x) = V, and thus x = e^V.

Now, suppose that you have the chance to play this game over and over again. Your marginal utility of wealth will change each time you win, so we may need to increase the prize to keep you playing; but we could do that. The prizes could keep scaling up as needed to make you willing to play. So then, you will keep playing, over and over—and then, sooner or later, you’ll die. So, at each step you maximized utility—but at the end, you didn’t get any utility.
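The paradox is easy to verify numerically. Here is a minimal sketch (the starting wealth of $10,000 and the death-disutility V = 10 are illustrative assumptions, not values from the paper): at each round the prize is set at your indifference point under log utility, so every gamble is worth taking, yet your probability of surviving the whole sequence halves each time.

```python
import math

# Log utility over wealth; death gets a finite (dis)utility of -V.
# V = 10 is an arbitrary illustrative choice.
V = 10.0

def min_prize(wealth):
    """Smallest prize x making the gamble worthwhile:
    0.5*ln(wealth + x) + 0.5*(-V) >= ln(wealth).
    Solving: wealth + x = wealth**2 * e**V."""
    return wealth**2 * math.exp(V) - wealth

wealth = 10_000.0
p_alive = 1.0
for round_num in range(1, 6):
    prize = min_prize(wealth)
    wealth += prize      # assume you won this round...
    p_alive *= 0.5       # ...but half the time, you didn't
    print(f"round {round_num}: prize needed = {prize:.3g}, P(alive) = {p_alive}")
```

Note how quickly the required prize explodes: because log utility flattens out at high wealth, the stakes must grow roughly as the square of your current wealth each round. But however fast the prizes grow, the survival probability still marches to zero.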

Well, at that point your heirs will be rich, right? So maybe you’re actually okay with that. Maybe there is some amount of money ($1 billion?) that you’d be willing to die in order to ensure your heirs have.

But what if you don’t have any heirs? Or, what if we consider making such a decision as a civilization? What if death means not only the destruction of you, but also the destruction of everything you care about?

As a civilization, are there choices before us that would result in some chance of a glorious, wonderful future, but also some chance of total annihilation? I think it’s pretty clear that there are. Nuclear technology, biotechnology, artificial intelligence. For about the last century, humanity has been at a unique epoch: We are being forced to make this kind of existential decision, to face this kind of existential risk.

It’s not that we were immune to being wiped out before; an asteroid could have taken us out at any time (as happened to the dinosaurs), and a volcanic eruption nearly did. But this is the first time in humanity’s existence that we have had the power to destroy ourselves. This is the first time we have a decision to make about it.

One possible answer would be to say we should never be willing to take any kind of existential risk. Unlike the case of an individual, when we are speaking about an entire civilization, it no longer seems obvious that we shouldn't set the utility of death at negative infinity. But if we really did this, it would require shutting down whole industries—definitely halting all research in AI and biotechnology, probably disarming all nuclear weapons and destroying all their blueprints, and quite possibly even shutting down the coal and oil industries. It would be an utterly radical change, and it would require bearing great costs.

On the other hand, if we should decide that it is sometimes worth the risk, we will need to know when it is worth the risk. We currently don’t know that.

Even worse, we will need some mechanism for ensuring that we don't take the risk when it isn't worth it. And we have nothing like such a mechanism. In fact, most research in AI and biotechnology is widely dispersed, with no central governing authority and regulations that are inconsistent between countries. I think it's quite apparent that right now, there are research projects going on somewhere in the world that aren't worth the existential risk they pose for humanity—but the people doing them are convinced that they are worth it because they so greatly advance their national interest—or simply because they could be so very profitable.

In other words, humanity finally has the power to make a decision about our survival, and we’re not doing it. We aren’t making a decision at all. We’re letting that responsibility fall upon more or less randomly-chosen individuals in government and corporate labs around the world. We may be careening toward an abyss, and we don’t even know who has the steering wheel.

We ignorant, incompetent gods

May 21 JDN 2460086

A review of Homo Deus

The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.

E.O. Wilson

Homo Deus is a very good read—and despite its length, a quick one; as you can see, I read it cover to cover in a week. Yuval Noah Harari’s central point is surely correct: Our technology is reaching a threshold where it grants us unprecedented power and forces us to ask what it means to be human.

Biotechnology and artificial intelligence are now advancing so rapidly that advancements in other domains, such as aerospace and nuclear energy, seem positively mundane. Who cares about making flight or electricity a bit cleaner when we may soon have the power to modify ourselves, or be replaced by machines entirely?

Indeed, we already have technology that would have seemed to ancient people like the powers of gods. We can fly; we can witness or even control events thousands of miles away; we can destroy mountains; we can wipe out entire armies in an instant; we can even travel into outer space.

Harari rightly warns us that our not-so-distant descendants are likely to have powers that we would see as godlike: Immortality, superior intelligence, self-modification, the power to create life.

And while it is scary to think about what they might do with that power if they think the way we do—as ignorant and foolish and tribal as we are—Harari points out that it is equally scary to think about what they might do if they don’t think the way we do—for then, how do they think? If their minds are genetically modified or even artificially created, who will they be? What values will they have, if not ours? Could they be better? What if they’re worse?

It is of course difficult to imagine values better than our own—if we thought those values were better, we’d presumably adopt them. But we should seriously consider the possibility, since presumably most of us believe that our values today are better than what most people’s values were 1000 years ago. If moral progress continues, does it not follow that people’s values will be better still 1000 years from now? Or at least that they could be?

I also think Harari overestimates just how difficult it is to anticipate the future. This may be a useful overcorrection; the world is positively infested with people making overprecise predictions about the future, often selling them for exorbitant fees (note that Harari was quite well-compensated for this book as well!). But our values are not so fundamentally alien from those of our forebears, and we have reason to suspect that our descendants’ values will differ from ours no more than ours differ from theirs.

For instance, do you think that medieval people thought suffering and death were good? I assure you they did not. Nor did they believe that the supreme purpose in life is eating cheese. (They didn’t even believe the Earth was flat!) They did not have the concept of GDP, but they could surely appreciate the value of economic prosperity.

Indeed, our world today looks very much like a medieval peasant’s vision of paradise. Boundless food in endless variety. Near-perfect security against violence. Robust health, free from nearly all infectious disease. Freedom of movement. Representation in government! The land of milk and honey is here; there they are, milk and honey on the shelves at Walmart.

Of course, our paradise comes with caveats: Not least, we are by no means free of toil, but instead have invented whole new kinds of toil they could scarcely have imagined. If anything, I would have to guess that coding a robot or recording a video lecture probably isn’t substantially more satisfying than harvesting wheat or smithing a sword; and reconciling receivables and formatting spreadsheets is surely less. Our tasks are physically much easier, but mentally much harder, and it’s not obvious which of those is preferable. And we are so very stressed! It’s honestly bizarre just how stressed we are, given the abundance in which we live; there is no reason for our lives to have stakes so high, and yet somehow they do. It is perhaps this stress and economic precarity that prevents us from feeling such joy as the medieval peasants would have imagined for us.

Of course, we don’t agree with our ancestors on everything. The medieval peasants were surely more religious, more ignorant, more misogynistic, more xenophobic, and more racist than we are. But projecting that trend forward mostly means less ignorance, less misogyny, less racism in the future; it means that future generations should see the world catch up to what the best of us already believe and strive for—hardly something to fear. The values that I believe in are surely not what we as a civilization act upon, and I sorely wish they were. Perhaps someday they will be.

I can even imagine something that I myself would recognize as better than me: Me, but less hypocritical. Strictly vegan rather than lacto-ovo-vegetarian, or at least more consistent about only buying free range organic animal products. More committed to ecological sustainability, more willing to sacrifice the conveniences of plastic and gasoline. Able to truly respect and appreciate all life, even humble insects. (Though perhaps still not mosquitoes; this is war. They kill more of us than any other animal, including us.) Not even casually or accidentally racist or sexist. More courageous, less burnt out and apathetic. I don’t always live up to my own ideals. Perhaps someday someone will.

Harari fears something much darker, that we will be forced to give up on humanist values and replace them with a new techno-religion he calls Dataism, in which the supreme value is efficient data processing. I see very little evidence of this. If it feels like data is worshipped these days, it is only because data is profitable. Amazon and Google constantly seek out ever richer datasets and ever faster processing because that is how they make money. The real subject of worship here is wealth, and that is nothing new. Maybe there are some die-hard techno-utopians out there who long for us all to join the unified oversoul of all optimized data processing, but I’ve never met one, and they are clearly not the majority. (Harari also uses the word ‘religion’ in an annoyingly overbroad sense; he refers to communism, liberalism, and fascism as ‘religions’. Ideologies, surely; but religions?)

Harari in fact seems to think that ideologies are strongly driven by economic structures, so maybe he would even agree that it’s about profit for now, but thinks it will become religion later. But I don’t really see history fitting this pattern all that well. If monotheism is directly tied to the formation of organized bureaucracy and national government, then how did Egypt and Rome last so long with polytheistic pantheons? If atheism is the natural outgrowth of industrialized capitalism, then why are Africa and South America taking so long to get the memo? I do think that economic circumstances can constrain culture and shift what sort of ideas become dominant, including religious ideas; but there clearly isn’t this one-to-one correspondence he imagines. Moreover, there was never Coalism or Oilism aside from the greedy acquisition of these commodities as part of a far more familiar ideology: capitalism.

He also claims that all of science is now, or is close to, following a united paradigm under which everything is a data processing algorithm, which suggests he has not met very many scientists. Our paradigms remain quite varied, thank you; and if they do all have certain features in common, it’s mainly things like rationality, naturalism and empiricism that are more or less inherent to science. It’s not even the case that all cognitive scientists believe in materialism (though it probably should be); there are still dualists out there.

Moreover, when it comes to values, most scientists believe in liberalism. This is especially true if we use Harari’s broad sense (on which mainline conservatives and libertarians are ‘liberal’ because they believe in liberty and human rights), but even in the narrow sense of center-left. We are by no means converging on a paradigm where human life has no value because it’s all just data processing; maybe some scientists believe that, but definitely not most of us. If scientists ran the world, I can’t promise everything would be better, but I can tell you that Bush and Trump would never have been elected and we’d have a much better climate policy in place by now.

I do share many of Harari’s fears of the rise of artificial intelligence. The world is clearly not ready for the massive economic disruption that AI is going to cause all too soon. We still define a person’s worth by their employment, and think of ourselves primarily as collection of skills; but AI is going to make many of those skills obsolete, and may make many of us unemployable. It would behoove us to think in advance about who we truly are and what we truly want before that day comes. I used to think that creative intellectual professions would be relatively secure; ChatGPT and Midjourney changed my mind. Even writers and artists may not be safe much longer.

Harari is so good at sympathetically explaining other views that it becomes a fault. At times it is actually difficult to know whether he himself believes something and wants you to, or if he is just steelmanning someone else’s worldview. There’s a whole section on ‘evolutionary humanism’ where he details a worldview that is at best Nietzschean and at worst Nazi, but he makes it sound so seductive. I don’t think it’s what he believes, in part because he has similarly good things to say about liberalism and socialism—but it’s honestly hard to tell.

The weakest part of the book is when Harari talks about free will. Like most people, he just doesn’t get compatibilism. He spends a whole chapter talking about how science ‘proves we have no free will’, and it’s just the same old tired arguments hard determinists have always made.

He talks about how we can make choices based on our desires, but we can’t choose our desires; well of course we can’t! What would that even mean? If you could choose your desires, what would you choose them based on, if not your desires? Your desire-desires? Well, then, can you choose your desire-desires? What about your desire-desire-desires?

What even is this ultimate uncaused freedom that libertarian free will is supposed to consist in? No one seems capable of even defining it. (I’d say Kant got the closest: He defined it as the capacity to act based upon what ought rather than what is. But of course what we believe about ‘ought’ is fundamentally stored in our brains as a particular state, a way things are—so in the end, it’s an ‘is’ we act on after all.)

Maybe before you lament that something doesn’t exist, you should at least be able to describe that thing as a coherent concept? Woe is me, that 2 plus 2 is not equal to 5!

It is true that as our technology advances, manipulating other people’s desires will become more and more feasible. Harari overstates the case on so-called robo-rats; they aren’t really mind-controlled, it’s more like they are rewarded and punished. The rat chooses to go left because she knows you’ll make her feel good if she does; she’s still freely choosing to go left. (Dangling a carrot in front of a horse is fundamentally the same thing—and frankly, paying a wage isn’t all that different.) The day may yet come where stronger forms of control become feasible, and woe betide us when it does. Yet this is no threat to the concept of free will; we already knew that coercion was possible, and mind control is simply a more precise form of coercion.

Harari reports on a lot of interesting findings in neuroscience, which are important for people to know about, but they do not actually show that free will is an illusion. What they do show is that free will is thornier than most people imagine. Our desires are not fully unified; we are often ‘of two minds’ in a surprisingly literal sense. We are often tempted by things we know are wrong. We often aren’t sure what we really want. Every individual is in fact quite divisible; we literally contain multitudes.

We do need a richer account of moral responsibility that can deal with the fact that human beings often feel multiple conflicting desires simultaneously, and often experience events differently than we later go on to remember them. But at the end of the day, human consciousness is mostly unified, our choices are mostly rational, and our basic account of moral responsibility is mostly valid.

I think for now we should perhaps be less worried about what may come in the distant future, what sort of godlike powers our descendants may have—and more worried about what we are doing with the godlike powers we already have. We have the power to feed the world; why aren’t we? We have the power to save millions from disease; why don’t we? I don’t see many people blindly following this ‘Dataism’, but I do see an awful lot blindly following a 19th-century vision of capitalism.

And perhaps if we straighten ourselves out, the future will be in better hands.

Why does democracy work?

May 14 JDN 2460079

A review of Democracy for Realists

I don’t think it can be seriously doubted that democracy does, in fact, work. Not perfectly, by any means; but the evidence is absolutely overwhelming that more democratic societies are better than more authoritarian societies by just about any measure you could care to use.

When I first started reading Democracy for Realists and saw their scathing, at times frothing criticism of mainstream ideas of democracy, I thought they were going to try to disagree with that; but in the end they don’t. Achen and Bartels do agree that democracy works; they simply think that why and how it works is radically different from what most people think.

It is, however, a very long-winded book, and in dire need of better editing. Most of the middle section of the book is taken up by a deluge of empirical analysis, most of which amounts to over-interpreting the highly ambiguous results of underpowered linear regressions on extremely noisy data. The sheer quantity of them seems intended to overwhelm any realization that no particular one is especially compelling. But a hundred weak arguments don’t add up to a single strong one.

To their credit, the authors often include the actual scatter plots; but when you look at those scatter plots, you find yourself wondering how anyone could be so convinced these effects are real and important. Many of them are so diffuse that seeing a pattern in them feels less like statistics and more like drawing new constellations.

Their econometric techniques are a bit dubious, as well; at one point they said they “removed outliers”, but the examples they gave as “outliers” were the observations most distant from their regression line, rather than the observations most distant from the rest of the data. Removing the things furthest from your regression line will always—always—make your regression seem stronger. But that’s not what outliers are. Other times, they add weird controls or exclude parts of the sample for dubious reasons, and I get the impression that these are the cherry-picked results of a much larger exploration. (Why in the world would you exclude Catholics from a study of abortion attitudes? And this study on shark attacks seems awfully specific….) And of course if you try 20 regressions at random, you can expect that at least 1 of them will probably show up with p < 0.05. I think they are mainly just following the norms of their discipline—but those norms are quite questionable.
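Both of these failure modes are easy to demonstrate on pure noise. A quick sketch (simulated data, not the book's; SciPy's `linregress` does the fitting):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

# (a) "Removing outliers" defined as the points furthest from the fitted
# line makes the fit look stronger, even when there is no relationship at all.
x = rng.normal(size=100)
y = rng.normal(size=100)          # pure noise: no true effect
fit = linregress(x, y)
resid = np.abs(y - (fit.intercept + fit.slope * x))
keep = resid < np.quantile(resid, 0.8)     # drop the worst-fitting 20%
refit = linregress(x[keep], y[keep])
print(f"R^2 before: {fit.rvalue**2:.3f}, after 'outlier' removal: {refit.rvalue**2:.3f}")

# (b) Run 20 regressions of noise on noise; about 1 in 20 will come up
# 'significant' at p < 0.05 by chance alone.
sig = sum(
    linregress(rng.normal(size=50), rng.normal(size=50)).pvalue < 0.05
    for _ in range(20)
)
print(f"'significant' regressions out of 20: {sig}")
```

Rerun with different seeds and (a) the "cleaned" fit essentially always looks stronger, while (b) hovers around one spurious hit in twenty—exactly the false-positive rate the p < 0.05 threshold promises.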

They don’t ever get into much detail as to what sort of practical institutional changes they would recommend, so it’s hard to know whether I would agree with those. Some of their suggestions, such as more stringent rules on campaign spending, I largely agree with. Others, such as their opposition to popular referenda and recommendation for longer term limits, I have more mixed feelings about. But none seem totally ridiculous or even particularly radical, and they really don’t offer much detail about any of them. I thought they were going to tell me that appointment of judges is better than election (a view many experts share), or that the Electoral College is a good system (which far fewer experts would assent to, at least since George W. Bush and Donald Trump). In fact they didn’t do that; they remain eerily silent on substantive questions like this.

Honestly, what little they have to say about institutional policy feels a bit tacked on at the end, as if they suddenly realized that they ought to say something useful rather than just spend the whole time tearing down another theory.

In fact, I came to wonder if they really were tearing down anyone’s actual theory, or if this whole book was really just battering a strawman. Does anyone really think that voters are completely rational? At one point they speak of an image of the ‘sovereign omnicompetent voter’; is that something anyone really believes in?

It does seem like many people believe in making government more responsive to the people, whereas Achen and Bartels seem to have the rather distinct goal of making government make better decisions. They were able to find at least a few examples—though I know not how far and wide they had to search—where it seemed like more popular control resulted in worse outcomes, such as water fluoridation and funding for fire departments. So maybe the real substantive disagreement here is over whether more or less direct democracy is a good idea. And that is indeed a reasonable question. But one need not believe that voters are superhuman geniuses to think that referenda are better than legislation. Simply showing that voters are limited in their capacity and bound to group identity is not enough to answer that question.

In fact, I think that Achen and Bartels seriously overestimate the irrationality of voters, because they don’t seem to appreciate that group identity is often a good proxy for policy—in fact, they don’t even really seem to see social policy as policy at all. Consider this section (p. 238):

“In this pre-Hitlerian age it must have seemed to most Jews that there were no crucial issues dividing the major parties” (Fuchs 1956, 63). Yet by 1923, a very substantial majority of Jews had abandoned their Republican loyalties and begun voting for the Democrats. What had changed was not foreign policy, but rather the social status of Jews within one of America’s major political parties. In a very visible way, the Democrats had become fully accepting and incorporating of religious minorities, both Catholics and Jews. The result was a durable Jewish partisan realignment grounded in “ethnic solidarity”, in Gamm’s characterization.

Gee, I wonder why Jews would suddenly care a great deal which party was more respectful toward people like them? Okay, the Holocaust hadn’t happened yet, but anti-Semitism is very old indeed, and it was visibly creeping upward during that era. And just in general, if one party is clearly more anti-Semitic than the other, why wouldn’t Jews prefer the one that is less hateful toward them? How utterly blinded by privilege do you need to be to not see that this is an important policy difference?

Perhaps because they are both upper-middle-class straight White cisgender men (I would also venture a guess nominally but not devoutly Protestant), Achen and Bartels seem to have no concept that social policy directly affects people of minority identity, that knowing that one party accepts people like you and the other doesn’t is a damn good reason to prefer one over the other. This is not a game where we are rooting for our home team. This directly affects our lives.

I know quite a few transgender people, and not a single one is a Republican. It’s not because all trans people hate low taxes. It’s because the Republican Party has declared war on trans people.

This may also lead to trans people being more left-wing generally, as once you’re in a group you tend to absorb some views from others in that group (and, I’ll admit, Marxists and anarcho-communists seem overrepresented among LGBT people). But I absolutely know some LGBT people who would like to vote conservative for economic policy reasons, but realize they can’t, because it means voting for bigots who hate them and want to actively discriminate against them. There is nothing irrational or even particularly surprising about this choice. It would take a very powerful overriding reason for anyone to want to vote for someone who publicly announces hatred toward them.

Indeed, for me the really baffling thing is that there are political parties that publicly announce hatred toward particular groups. It seems like a really weird strategy for winning elections. That is the thing that needs to be explained here; why isn’t inclusiveness—at least a smarmy lip-service toward inclusiveness, like ‘Diversity, Equity, and Inclusion’ offices at universities—the default behavior of all successful politicians? Why don’t they all hug a Latina trans woman after kissing a baby and taking a selfie with the giant butter cow? Why is not being an obvious bigot considered a left-wing position?

Since it obviously is the case that many voters don’t want this hatred (at the very least, its targets!), in order for it not to damage electoral chances, it must be that some other voters do want this hatred. Perhaps they themselves define their own identity in opposition to other people’s identities. They certainly talk that way a lot: We hear White people fearing ‘replacement‘ by shifting racial demographics, when no sane forecaster thinks that European haplotypes are in any danger of disappearing any time soon. The central argument against gay marriage was always that it would somehow destroy straight marriage, by some mechanism never explained.

Indeed, perhaps it is this very blindness toward social policy that makes Achen and Bartels unable to see the benefits of more direct democracy. When you are laser-focused on economic policy, as they are, then it seems to you as though policy questions are mainly technical matters of fact, and thus what we need are qualified experts. (Though even then, it is not purely a matter of fact whether we should care more about inequality than growth, or more about unemployment than inflation.)

But once you include social policy, you see that politics often involves very real, direct struggles between conflicting interests and differing moral views, and that by the time you’ve decided which view is the correct one, you already have your answer for what must be done. There is no technical question of gay marriage; there is only a moral one. We don’t need expertise on such questions; we need representation. (Then again, it’s worth noting that courts have sometimes advanced rights more effectively than direct democratic votes; so having your interests represented isn’t as simple as getting an equal vote.)

Achen and Bartels even include a model in the appendix where politicians are modeled as either varying in competence or controlled by incentives; never once does it consider that they might differ in whose interests they represent. Yet I don’t vote for a particular politician just because I think they are more intelligent, or as part of some kind of deterrence mechanism to keep them from misbehaving (I certainly hope the courts do a better job of that!); I vote for them because I think they represent the goals and interests I care about. We aren’t asking who is smarter, we are asking who is on our side.

The central question that I think the book raises is one that the authors don’t seem to have much to offer on: If voters are so irrational, why does democracy work? I do think there is strong evidence that voters are irrational, though maybe not as irrational as Achen and Bartels seem to think. Honestly, I don’t see how anyone can watch Donald Trump get elected President of the United States and not think that voters are irrational. (The book was written before that; apparently there’s a new edition with a preface about Trump, but my copy doesn’t have that.) But it isn’t at all obvious to me what to do with that information, because even if so-called elites are in fact more competent than average citizens—which may or may not be true—the fact remains that their interests are never completely aligned. Thus far, representative democracy of one stripe or another seems to be the best mechanism we have for finding people who have sufficient competence while also keeping them on a short enough leash.

And perhaps that’s why democracy works as well as it does; it gives our leaders enough autonomy to let them generally advance their goals, but also places limits on how badly misaligned our leaders’ goals can be from our own.

Reckoning costs in money distorts them

May 7 JDN 2460072

Consider for a moment what it means when an economic news article reports “rising labor costs”. What are they actually saying?

They’re saying that wages are rising—perhaps in some industry, perhaps in the economy as a whole. But this is not a cost. It’s a price. As I’ve written about before, the two are fundamentally distinct.

The cost of labor is measured in effort, toil, and time. It’s the pain of having to work instead of whatever else you’d like to do with your time.

The price of labor is a monetary amount, which is delivered in a transaction.

This may seem perfectly obvious, but it has important and oft-neglected implications. A cost, once paid, is gone. That value has been destroyed. We hope that it was worth it for some benefit we gained. A price, when paid, is simply transferred: One person had that money before, now someone else has it. Nothing was gained or lost.

So in fact when reports say that “labor costs have risen”, what they are really saying is that income is being transferred from owners to workers without any change in real value taking place. They are framing as a loss what is fundamentally a zero-sum redistribution.

In fact, it is disturbingly common to see a fundamentally good redistribution of income framed in the press as a bad outcome because of its expression as “costs”; the “cost” of chocolate is feared to go up if we insist upon enforcing bans on forced labor—when in fact it is only the price that goes up, and the cost actually goes down: chocolate would no longer include complicity in an atrocity. The real suffering of making chocolate would be thereby reduced, not increased. Even when they aren’t literally enslaved, those workers are astonishingly poor, and giving them even a few more cents per hour would make a real difference in their lives. But God forbid we pay a few cents more for a candy bar!

If labor costs were to rise, that would mean that work had suddenly gotten harder, or more painful; or else, that some outside circumstance had made it more difficult to work. Having a child increases your labor costs—you now have the opportunity cost of not caring for the child. COVID increased the cost of labor, by making it suddenly dangerous just to go outside in public. That could also increase prices—you may demand a higher wage, and people do seem to have demanded higher wages after COVID. But these are two separate effects, and you can have one without the other. In fact, women typically see wage stagnation or even reduction after having kids (but men largely don’t), despite their real opportunity cost of labor having obviously greatly increased.

On an individual level, it’s not such a big mistake to equate price and cost. If you are buying something, its cost to you basically just is its price, plus a little bit of transaction cost for actually finding and buying it. But on a societal level, it makes an enormous difference. It distorts our policy priorities and can even lead to actively trying to suppress things that are beneficial—such as rising wages.

This false equivalence between price and cost seems to be at least as common among economists as it is among laypeople. Economists will often justify it on the grounds that in an ideal, perfectly competitive market the two would in some sense be equated. But of course we don’t live in that ideal perfect market, and even if we did, they would only be proportional at the margin, not fundamentally equal across the board. It would still be obviously wrong to characterize the total value or cost of work by the price paid for it; only the last unit of effort would be priced so that marginal value equals price equals marginal cost. The first 39 hours of your work would cost you less than what you were paid, and produce more than you were paid; only that 40th hour would set the three equal.
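The margin-versus-total distinction can be made concrete with a toy model (my own illustration, with made-up numbers, not real labor data): suppose effort gets harder with each successive hour, and you stop working when the next hour would cost you more than it pays.

```python
# Toy model of marginal cost vs. price (hypothetical numbers, purely illustrative).
# A worker is paid a flat wage; the marginal cost of effort rises with each hour.
# They work until the next hour would cost more than it pays, so price equals
# marginal cost only at the last hour; every earlier hour was a surplus.

WAGE = 20.0  # hypothetical flat hourly wage, in dollars

def marginal_cost(hour):
    """Hypothetical rising effort cost: hour 40 costs exactly the wage."""
    return 0.5 * hour

hours = 0
total_cost = 0.0
while marginal_cost(hours + 1) <= WAGE:
    hours += 1
    total_cost += marginal_cost(hours)

total_pay = WAGE * hours
print(hours)                   # 40: the marginal hour, where wage equals cost
print(total_pay - total_cost)  # positive: the first 39 hours cost less than they paid
```

In this sketch the worker is paid $800 for effort that only cost them $410 worth of suffering; equating the price of the labor with its cost would be wrong for every hour but the last.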

Once you account for all the various market distortions in the world, there’s no particular relationship between what something costs—in terms of real effort and suffering—and its price—in monetary terms. Things can be expensive and easy, or cheap and awful. In fact, they often seem to be; for some reason, there seems to be a pattern where the most terrible, miserable jobs (e.g. coal mining) actually pay the least, and the easiest, most pleasant jobs (e.g. stock trading) pay the most. Some jobs that benefit society pay well (e.g. doctors) and others pay terribly or not at all (e.g. climate activists). Some actions that harm the world get punished (e.g. armed robbery) and others get rewarded with riches (e.g. oil drilling). In the real world, whether a job is good or bad and whether it is paid well or poorly seem to be almost unrelated.

In fact, sometimes they seem even negatively related, where we often feel tempted to “sell out” and do something destructive in order to get higher pay. This is likely due to Berkson’s paradox: If people are willing to do jobs if they are either high-paying or beneficial to humanity, then we should expect that, on average, most of the high-paying jobs people do won’t be beneficial to humanity. Even if there were inherently no correlation or a small positive one, people’s refusal to do harmful low-paying work removes those jobs from our sample and results in a negative correlation in what remains.

I think that the best solution, ultimately, is to stop reckoning costs in money entirely. We should reckon them in happiness.

This is of course much more difficult than simply using prices; it’s not easy to say exactly how many QALY are sacrificed in the extraction of cocoa beans or the drilling of offshore oil wells. But if we actually did find a way to count them, I strongly suspect we’d find that it was far more than we ought to be willing to pay.

A very rough approximation, surely flawed but at least a start, would be to simply convert all payments into proportions of their recipient’s income. For full-time wages, this would count basically everyone the same: if you work 40 hours per week, 50 weeks per year, then 1 hour of work is precisely 0.05% of your annual income. So we could say that whatever is equivalent to your hourly wage constitutes 500 microQALY.

This automatically implies that every time a rich person pays a poor person, QALY increase, while every time a poor person pays a rich person, QALY decrease. This is not an error in the calculation. It is a fact of the universe. We ignore it only at our own peril. All wealth redistributed downward is a benefit, while all wealth redistributed upward is a harm. That benefit may cause some other harm, or that harm may be compensated by some other benefit; but they are still there.

This would also put some things in perspective. When HSBC was fined £70 million for its crimes, that can be compared against its £1.5 billion in net income; if it were an individual, it would have been hurt about 50 milliQALY, which is about what I would feel if I lost $2000. Of course, it’s not a person, and it’s not clear exactly how this loss was passed through to employees or shareholders; but that should give us at least some sense of how small that loss was for them. They probably felt it… a little.

When Trump was ordered to pay a $1.3 million settlement, based on his $2.5 billion net wealth (corresponding to roughly $125 million in annual investment income), that cost him about 10 milliQALY; for me that would be about $500.

At the other extreme, if someone goes from making $1 per day to making $1.50 per day, that’s a 50% increase in their income—500 milliQALY per year.
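This back-of-envelope rule (my own sketch of the heuristic described above, using the figures already quoted: the function name is mine, and the incomes are the rough estimates from the text) fits in a few lines:

```python
def qaly_effect(amount, annual_income):
    """Rough heuristic from the text: a payment's welfare effect, in QALY,
    is its fraction of the recipient's annual income."""
    return amount / annual_income

# HSBC: GBP 70 million fine against GBP 1.5 billion net income
print(qaly_effect(70e6, 1.5e9))    # ~0.047 QALY, i.e. about 50 milliQALY

# Trump: $1.3 million settlement against ~$125 million annual investment income
print(qaly_effect(1.3e6, 125e6))   # ~0.010 QALY, about 10 milliQALY

# A worker going from $1/day to $1.50/day gains 50% of their income
print(qaly_effect(0.50 * 365, 1.00 * 365))  # 0.5 QALY per year

# One hour at your own wage, if you work 2,000 hours a year
print(qaly_effect(1, 2000))        # 0.0005 QALY = 500 microQALY
```

The asymmetry is built into the division: the same dollar amount is a large fraction of a small income and a tiny fraction of a large one.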

For those who have no income at all, this becomes even trickier; for them I think we should probably use their annual consumption, since everyone needs to eat and that costs something, though likely not very much. Or we could try to measure their happiness directly, trying to determine how much it hurts to not eat enough and work all day in sweltering heat.

Properly shifting this whole cultural norm will take a long time. For now, I leave you with this: Any time you see a monetary figure, ask yourself: “How much is that worth to them?” The world will seem quite different once you get in the habit of that.

Optimization is unstable. Maybe that’s why we satisfice.

Feb 26 JDN 2460002

Imagine you have become stranded on a deserted island. You need to find shelter, food, and water, and then perhaps you can start working on a way to get help or escape the island.

Suppose you are programmed to be an optimizer, seeking the absolute best solution to any problem. At first this may seem to be a boon: You’ll build the best shelter, find the best food, get the best water, find the best way off the island.

But you’ll also expend an enormous amount of effort trying to make it the best. You could spend hours just trying to decide what the best possible shelter would be. You could pass up dozens of viable food sources because you aren’t sure that any of them are the best. And you’ll never get any rest because you’re constantly trying to improve everything.

In principle your optimization could include that: The cost of thinking too hard or searching too long could be one of the things you are optimizing over. But in practice, this sort of bounded optimization is often remarkably intractable.

And what if you forgot about something? You were so busy optimizing your shelter you forgot to treat your wounds. You were so busy seeking out the perfect food source that you didn’t realize you’d been bitten by a venomous snake.

This is not the way to survive. You don’t want to be an optimizer.

No, the person who survives is a satisficer: they make sure that what they have is good enough and then they move on to the next thing. Their shelter is lopsided and ugly. Their food is tasteless and bland. Their water is hard. But they have them.

Once they have shelter and food and water, they will have time and energy to do other things. They will notice the snakebite. They will treat the wound. Once all their needs are met, they will get enough rest.

Empirically, humans are satisficers. We seem to be happier because of it—in fact, the people who are the happiest satisfice the most. And really this shouldn’t be so surprising, because our ancestral environment wasn’t so different from being stranded on a desert island.

Good enough is perfect. Perfect is bad.

Let’s consider another example. Suppose that you have created a powerful artificial intelligence, an AGI with the capacity to surpass human reasoning. (It hasn’t happened yet—but it probably will someday, and maybe sooner than most people think.)

What do you want that AI’s goals to be?

Okay, ideally maybe they would be something like “Maximize goodness”, where we actually somehow include all the panoply of different factors that go into goodness, like beneficence, harm, fairness, justice, kindness, honesty, and autonomy. Do you have any idea how to do that? Do you even know what your own full moral framework looks like at that level of detail?

Far more likely, the goals you program into the AGI will be much simpler than that. You’ll have something you want it to accomplish, and you’ll tell it to do that well.

Let’s make this concrete and say that you own a paperclip company. You want to make more profits by selling paperclips.

First of all, let me note that this is not an unreasonable thing for you to want. It is not an inherently evil goal for one to have. The world needs paperclips, and it’s perfectly reasonable for you to want to make a profit selling them.

But it’s also not a true ultimate goal: There are a lot of other things that matter in life besides profits and paperclips. Anyone who isn’t a complete psychopath will realize that.

But the AI won’t. Not unless you tell it to. And so if we tell it to optimize, we would need to actually include in its optimization all of the things we genuinely care about—not missing a single one—or else whatever choices it makes are probably not going to be the ones we want. Oops, we forgot to say we need clean air, and now we’re all suffocating. Oops, we forgot to say that puppies don’t like to be melted down into plastic.

The simplest cases to consider are obviously horrific: Tell it to maximize the number of paperclips produced, and it starts tearing the world apart to convert everything to paperclips. (This is the original “paperclipper” concept from Less Wrong.) Tell it to maximize the amount of money you make, and it seizes control of all the world’s central banks and starts printing $9 quintillion for itself. (Why that amount? I’m assuming it uses 64-bit signed integers, and 2^63 is over 9 quintillion. If it uses long ints, we’re even more doomed.) No, inflation-adjusting won’t fix that; even hyperinflation typically still results in more real seigniorage for the central banks doing the printing (which is, you know, why they do it). The AI won’t ever be able to own more than all the world’s real GDP—but it will be able to own that if it prints enough and we can’t stop it.
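For the record, the 64-bit arithmetic checks out; Python, which has arbitrary-precision integers, can verify it directly:

```python
# Largest value a 64-bit signed integer can hold: 2**63 - 1.
max_int64 = 2**63 - 1
print(max_int64)                # 9223372036854775807: just over 9.2 quintillion
print(max_int64 > 9 * 10**18)   # True: "9 quintillion" is right at the ceiling
```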

But even if we try to come up with some more sophisticated optimization for it to perform (what I’m really talking about here is specifying its utility function), it becomes vital for us to include everything we genuinely care about: Anything we forget to include will be treated as a resource to be consumed in the service of maximizing everything else.

Consider instead what would happen if we programmed the AI to satisfice. The goal would be something like, “Produce at least 400,000 paperclips at a price of at most $0.002 per paperclip.”

Given such an instruction, in all likelihood, it would in fact produce exactly 400,000 paperclips at a price of exactly $0.002 per paperclip. And maybe that’s not strictly the best outcome for your company. But if it’s better than what you were previously doing, it will still increase your profits.
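The difference between the two instructions can be sketched in a few lines (a toy illustration with invented plans and numbers, not a claim about any real AI system): the satisficer stops at the first plan that meets its targets, rather than searching, and escalating, indefinitely.

```python
# Toy satisficer (illustrative only): accept the first plan meeting the targets,
# instead of searching the whole space for the global best.
def satisfice(plans, min_quantity, max_unit_price):
    for plan in plans:
        if plan["quantity"] >= min_quantity and plan["unit_price"] <= max_unit_price:
            return plan
    return None  # no acceptable plan: report failure instead of escalating

# Hypothetical plans, ordered from least to most drastic.
plans = [
    {"name": "existing machines",  "quantity": 200_000, "unit_price": 0.004},
    {"name": "upgraded machines",  "quantity": 400_000, "unit_price": 0.002},
    {"name": "convert the planet", "quantity": 10**15,  "unit_price": 0.0001},
]

chosen = satisfice(plans, min_quantity=400_000, max_unit_price=0.002)
print(chosen["name"])  # "upgraded machines": good enough, so the planet survives
```

An optimizer scanning the same list for maximum output would pick the last plan; the satisficer never even considers it, because an earlier plan already met the quota.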

Moreover, such an instruction is far less likely to result in the end of the world.

If the AI has a particular target to meet for its production quota and price limit, the first thing it would probably try is to use your existing machinery. If that’s not good enough, it might start trying to modify the machinery, or acquire new machines, or develop its own techniques for making paperclips. But there are quite strict limits on how creative it is likely to be—because there are quite strict limits on how creative it needs to be. If you were previously producing 200,000 paperclips at $0.004 per paperclip, all it needs to do is double production and halve the cost. That’s a very standard sort of industrial innovation—in computing hardware (admittedly an extreme case), we do this sort of thing every couple of years.

It certainly won’t tear the world apart making paperclips—at most it’ll tear apart enough of the world to make 400,000 paperclips, which is a pretty small chunk of the world, because paperclips aren’t that big. A paperclip weighs about a gram, so you’ve only destroyed about 400 kilos of stuff. (You might even survive the lawsuits!)

Are you leaving money on the table relative to the optimization scenario? Eh, maybe. One, it’s a small price to pay for not ending the world. But two, if 400,000 at $0.002 was too easy, next time try 600,000 at $0.001. Over time, you can gently increase its quotas and tighten its price requirements until your company becomes more and more successful—all without risking the AI going completely rogue and doing something insane and destructive.

Of course this is no guarantee of safety—and I absolutely want us to use every safeguard we possibly can when it comes to advanced AGI. But the simple change from optimizing to satisficing seems to solve the most severe problems immediately and reliably, at very little cost.

Good enough is perfect; perfect is bad.

I see broader implications here for behavioral economics. When all of our models are based on optimization, but human beings overwhelmingly seem to satisfice, maybe it’s time to stop assuming that the models are right and the humans are wrong.

Optimization is perfect if it works—and awful if it doesn’t. Satisficing is always pretty good. Optimization is unstable, while satisficing is robust.

In the real world, that probably means that satisficing is better.

Good enough is perfect; perfect is bad.

What is it with EA and AI?

Jan 1 JDN 2459946

Surprisingly, most Effective Altruism (EA) leaders don’t seem to think that poverty alleviation should be our top priority. Most of them seem especially concerned about long-term existential risk, such as artificial intelligence (AI) safety and biosecurity. I’m not going to say that these things aren’t important—they certainly are important—but here are a few reasons I’m skeptical that they are really the most important, the way so many EA leaders seem to think.

1. We don’t actually know how to make much progress at them, and there’s only so much we can learn by investing heavily in basic research on them. Whereas, with poverty, the easy, obvious answer turns out empirically to be extremely effective: Give them money.

2. While it’s easy to multiply out huge numbers of potential future people in your calculations of existential risk (and this is precisely what people do when arguing that AI safety should be a top priority), this clearly isn’t actually a good way to make real-world decisions. We simply don’t know enough about the distant future of humanity to be able to make any kind of good judgments about what will or won’t increase their odds of survival. You’re basically just making up numbers. You’re taking tiny probabilities of things you know nothing about and multiplying them by ludicrously huge payoffs; it’s basically the secular rationalist equivalent of Pascal’s Wager.

3. AI and biosecurity are high-tech, futuristic topics, which seem targeted to appeal to the sensibilities of a movement that is still very dominated by intelligent, nerdy, mildly autistic, rich young White men. (Note that I say this as someone who very much fits this stereotype. I’m queer, not extremely rich and not entirely White, but otherwise, yes.) Somehow I suspect that if we asked a lot of poor Black women how important it is to slightly improve our understanding of AI versus giving money to feed children in Africa, we might get a different answer.

4. Poverty eradication is often characterized as a “short term” project, contrasted with AI safety as a “long term” project. This is (ironically) very short-sighted. Eradication of poverty isn’t just about feeding children today. It’s about making a world where those children grow up to be leaders and entrepreneurs and researchers themselves. The positive externalities of economic development are staggering. It is really not much of an exaggeration to say that fascism is a consequence of poverty and unemployment.

5. Currently the main thing that most Effective Altruism organizations say they need most is “talent”; how many millions of person-hours of talent are we leaving on the table by letting children starve or die of malaria?

6. Above all, existential risk can’t really be what’s motivating people here. The obvious solutions to AI safety and biosecurity are not being pursued, because they don’t fit with the vision that intelligent, nerdy, young White men have of how things should be. Namely: Ban them. If you truly believe that the most important thing to do right now is reduce the existential risk of AI and biotechnology, you should support a worldwide ban on research in artificial intelligence and biotechnology. You should want people to take all necessary action to attack and destroy institutions—especially for-profit corporations—that engage in this kind of research, because you believe that they are threatening to destroy the entire world and this is the most important thing, more important than saving people from starvation and disease. I think this is really the knock-down argument; when people say they think that AI safety is the most important thing but they don’t want Google and Facebook to be immediately shut down, they are either confused or lying. Honestly I think maybe Google and Facebook should be immediately shut down for AI safety reasons (as well as privacy and antitrust reasons!), and I don’t think AI safety is yet the most important thing.

Why aren’t people doing that? Because they aren’t actually trying to reduce existential risk. They just think AI and biotechnology are really interesting, fascinating topics and they want to do research on them. And I agree with that, actually—but then they need to stop telling people that they’re fighting to save the world, because they obviously aren’t. If the danger were anything like what they say it is, we should be halting all research on these topics immediately, except perhaps for a very select few people who are entrusted with keeping these forbidden secrets and trying to find ways to protect us from them. This may sound radical and extreme, but it is not unprecedented: This is how we handle nuclear weapons, which are universally recognized as a global existential risk. If AI is really as dangerous as nukes, we should be regulating it like nukes. I think that in principle it could be that dangerous, and may be that dangerous someday—but it isn’t yet. And if we don’t want it to get that dangerous, we don’t need more AI researchers, we need more regulations that stop people from doing harmful AI research! If you are doing AI research and it isn’t directly involved specifically in AI safety, you aren’t saving the world—you’re one of the people dragging us closer to the cliff! Anything that could make AI smarter but doesn’t also make it safer is dangerous. And this is clearly true of the vast majority of AI research, and frankly to me seems to also be true of the vast majority of research at AI safety institutes like the Machine Intelligence Research Institute.

Seriously, look through MIRI’s research agenda: It’s mostly incredibly abstract and seems completely beside the point when it comes to preventing AI from taking control of weapons or governments. It’s all about formalizing Bayesian induction. Thanks to you, Skynet can have a formally computable approximation to logical induction! Truly we are saved. Only two of their papers, on “Corrigibility” and “AI Ethics”, actually struck me as at all relevant to making AI safer. The rest is largely abstract mathematics that is almost literally navel-gazing—it’s all about self-reference. Eliezer Yudkowsky finds self-reference fascinating and has somehow convinced an entire community that it’s the most important thing in the world. (I actually find some of it fascinating too, especially the paper on “Functional Decision Theory”, which I think gets at some deep insights into things like why we have emotions. But I don’t see how it’s going to save the world from AI.)

Don’t get me wrong: AI also has enormous potential benefits, and this is a reason we may not want to ban it. But if you really believe that there is a 10% chance that AI will wipe out humanity by 2100, then get out your pitchforks and your EMP generators, because it’s time for the Butlerian Jihad. A 10% chance of destroying all humanity is an utterly unacceptable risk for any conceivable benefit. Better that we consign ourselves to living as we did in the Neolithic than risk something like that. (And a globally-enforced ban on AI isn’t even that; it’s more like “We must live as we did in the 1950s.” How would we survive!?) If you don’t want AI banned, maybe ask yourself whether you really believe the risk is that high—or are human brains just really bad at dealing with small probabilities?

I think what’s really happening here is that we have a bunch of guys (and yes, the EA community, and especially the EA-AI community, is overwhelmingly male) who are really good at math and want to save the world, and have thus convinced themselves that being really good at math is how you save the world. But it isn’t. The world is much messier than that. In fact, there may not be much that most of us can do to contribute to saving the world; our best options may in fact be to donate money, vote well, and advocate for good causes.

Let me speak Bayesian for a moment: The prior probability that you—yes, you, out of all the billions of people in the world—are uniquely positioned to save it by being so smart is extremely small. It’s far more likely that the world will be saved—or doomed—by people who have power. If you are not the head of state of a large country or the CEO of a major multinational corporation, I’m sorry; you probably just aren’t in a position to save the world from AI.

But you can give some money to GiveWell, so maybe do that instead?

In defense of civility

Dec 18 JDN 2459932

Civility is in short supply these days. Perhaps it has always been in short supply; certainly much of the nostalgia for past halcyon days of civility is ill-founded. Wikipedia has an entire article on hundreds of recorded incidents of violence in legislative assemblies, in dozens of countries, dating all the way from the Roman Senate in 44 BC to Bosnia in 2019. But the Internet seems to bring about its own special kind of incivility, one which exposes nearly everyone to some of the worst vitriol the entire world has to offer. I think it’s worth talking about why this is bad, and perhaps what we might do about it.

For some, the benefits of civility seem so self-evident that they don’t even bear mentioning. For others, the idea of defending civility may come across as tone-deaf or even offensive. I would like to speak to both of those camps today: If you think the benefits of civility are obvious, I assure you, they aren’t to everyone. And if you think that civility is just a tool of the oppressive status quo, I hope I can make you think again.

A lot of the argument against civility seems to be founded in the notion that these issues are important, lives are at stake, and so we shouldn’t waste time and effort being careful how we speak to each other. How dare you concern yourself with the formalities of argumentation when people are dying?

But this is totally wrongheaded. It is precisely because these issues are important that civility is vital. It is precisely because lives are at stake that we must make the right decisions. And shouting and name-calling (let alone actual fistfights or drawn daggers—which have happened!) are not conducive to good decision-making.

If you shout someone down when choosing what restaurant to have dinner at, you have been very rude and people may end up unhappy with their dining experience—but very little of real value has been lost. But if you shout someone down when making national legislation, you may cause the wrong policy to be enacted, and this could lead to the suffering or death of thousands of people.

Think about how court proceedings work. Why are they so rigid and formal, with rules upon rules upon rules? Because the alternative was capricious violence. In the absence of the formal structure of a court system, so-called ‘justice’ was handed out arbitrarily, by whoever was in power, or by mobs of vigilantes. All those seemingly-overcomplicated rules were made in order to resolve various conflicts of interest and hopefully lead toward more fair, consistent results in the justice system. (And don’t get me wrong; they still could stand to be greatly improved!)

Legislatures have complex rules of civility for the same reason: Because the outcome is so important, we need to make sure that the decision process is as reliable as possible. And as flawed as existing legislatures still are, and as silly as it may seem to insist upon addressing ‘the Honorable Representative from the Great State of Vermont’, it’s clearly a better system than simply letting them duke it out with their fists.

A related argument I would like to address is that of ‘tone policing’. If someone objects, not to the content of what you are saying, but to the tone in which you have delivered it, are they arguing in bad faith?

Well, possibly. Certainly, arguments about tone can be used that way. In particular I remember that this was basically the only coherent objection anyone could come up with against the New Atheism movement: “Well, sure, obviously, God isn’t real and religion is ridiculous; but why do you have to be so mean about it!?”

But it’s also quite possible for tone to be itself a problem. If your tone is overly aggressive and you don’t give people a chance to even seriously consider your ideas before you accuse them of being immoral for not agreeing with you—which happens all the time—then your tone really is the problem.

So, how can we tell which is which? I think a good way to reply to what you think might be bad-faith tone policing is this: “What sort of tone do you think would be better?”

I think there are basically three possible responses:

1. They can’t offer one, because there is actually no tone in which they would accept the substance of your argument. In that case, the tone policing really is in bad faith; they don’t want you to be nicer, they want you to shut up. This was clearly the case for New Atheism: As Daniel Dennett aptly remarked, “There’s simply no polite way to tell someone they have dedicated their lives to an illusion.” But sometimes, such things need to be said all the same.

2. They offer an alternative argument you could make, but it isn’t actually expressing your core message. Either they have misunderstood your core message, or they actually disagree with the substance of your argument and should be addressing it on those terms.

3. They offer an alternative way of expressing your core message in a milder, friendlier tone. This means that they are arguing in good faith and actually trying to help you be more persuasive!

I don’t know how common each of these three possibilities is; it could well be that the first one is the most frequent occurrence. But that doesn’t change the fact that I have definitely been on the other end of the third one: absolutely agreeing with someone’s core message and wanting their activism to succeed, while seeing that they’re acting like a jerk and nobody will want to listen to them.

Here, let me give some examples of the type of argument I’m talking about:

1. “Defund the police”: This slogan polls really badly. Probably because most people have genuine concerns about crime and want the police to protect them. Also, as more and more social services (like for mental health and homelessness) get co-opted into policing, this slogan makes it sound like you’re just going to abandon those people. But do we need serious, radical police reform? Absolutely. So how about “Reform the police”, “Put police money back into the community”, or even “Replace the police”?

2. “All Cops Are Bastards”: Speaking of police reform, did I mention we need it? A lot of it? Okay. Now, let me ask you: All cops? Every single one of them? There is not a single one out of the literally millions of police officers on this planet who is a good person? Not one who is fighting to take down police corruption from within? Not a single individual who is trying to fix the system while preserving public safety? Now, clearly, it’s worth pointing out, some cops are bastards—but hey, that even makes a better acronym: SCAB. In fact, it really is largely a few bad apples—the key point here is that you need to finish the aphorism: “A few bad apples spoil the whole barrel.” The number of police who are brutal and corrupt is relatively small, but as long as the other police continue to protect them, the system will be broken. Either you get those bad apples out pronto, or your whole barrel is bad. But demonizing the very people who are in the best position to implement those reforms—good police officers—is not helping.

3. “Be gay, do crime”: I know it’s tongue-in-cheek and ironic. I get that. It’s still a really dumb message. I am absolutely on board with LGBT rights. Even aside from being queer myself, I probably have more queer and trans friends than straight friends at this point. But why in the world would you want to associate us with petty crime? Why are you lumping us in with people who harm others at best out of desperation and at worst out of sheer greed? Even if you are literally an anarchist—which I absolutely am not—you’re really not selling anarchism well if the vision you present of it is a world of unfettered crime! There are dozens of better pro-LGBT slogans out there; pick one. Frankly even “do gay, be crime” is better, because it’s more clearly ironic. (Also, you can take it to mean something like this: Don’t just be gay, do gay—live your fullest gay life. And if you can be crime, that means that the system is fundamentally unjust: You can be criminalized just for who you are. And this is precisely what life is like for millions of LGBT people on this planet.)

A lot of people seem to think that if you aren’t immediately convinced by the most vitriolic, aggressive form of an argument, then you were never going to be convinced anyway and we should just write you off as a potential ally. This isn’t just obviously false; it’s incredibly dangerous.

The whole point of activism is that not everyone already agrees with you. You are trying to change minds. If it were really true that all reasonable, ethical people already agreed with your view, you wouldn’t need to be an activist. The whole point of making political arguments is that people can be reasonable and ethical and still be mistaken about things, and when we work hard to persuade them, we can eventually win them over. In fact, on some things we’ve actually done spectacularly well.

And what about the people who aren’t reasonable and ethical? They surely exist. But fortunately, they aren’t the majority. They don’t rule the whole world. If they did, we’d basically be screwed: If violence is really the only solution, then it’s basically a coin flip whether things get better or worse over time. But in fact, unreasonable people are outnumbered by reasonable people. Most of the things that are wrong with the world are mistakes, errors that can be fixed—not conflicts between irreconcilable factions. Our goal should be to fix those mistakes wherever we can, and that means being patient, compassionate educators—not angry, argumentative bullies.