Love is more than chemicals

Feb 18 JDN 2460360

One of the biggest problems with the rationalist community is an inability to express sincerity and reverence.

I get it: Religion is the world’s greatest source of sincerity and reverence, and religion is the most widespread and culturally important source of irrationality. So we declare ourselves enemies of religion, and also end up being enemies of sincerity and reverence.

But in doing so, we lose something very important. We cut ourselves off from some of the greatest sources of meaning and joy in human life.

In fact, we may even be undermining our own goals: If we don’t offer people secular, rationalist forms of reverence, they may find they need to turn back to religion in order to fill that niche.

One of the most pernicious forms of this anti-sincerity, anti-reverence attitude (I can’t just say ‘insincere’ or ‘irreverent’, as those have different meanings) is surely this one:

Love is just a chemical reaction.

(I thought it seemed particularly apt to focus on this one during the week of Valentine’s Day.)

On the most casual of searches I could find at least half a dozen pop-sci articles and a YouTube video propounding this notion (though I could also find a few articles trying to debunk the notion as well).

People who say this sort of thing seem to think that they are being wise and worldly while the rest of us are just being childish and naive. They think we are seeing something that isn’t there. In fact, they are being jaded and cynical. They are failing to see something that is there.

(Perhaps the most extreme form of this was from Rick and Morty; and while Rick as a character is clearly intended to be jaded and cynical, far too many people also see him as a role model.)

Part of the problem may also be a failure to truly internalize the Basic Fact of Cognitive Science:

You are your brain.

No, your consciousness is not an illusion. It’s not an “epiphenomenon” (whatever that is; I’ve never encountered one in real life). Your mind is not fake or imaginary. Your mind actually exists—and it is a product of your brain. Both brain and mind exist, and are in fact the same.

It’s so hard for people to understand this that some become dualists, denying the unity of the brain and the mind. That, at least, I can sympathize with, even though we have compelling evidence that it is wrong. But there’s another tack people sometimes take, eliminative materialism, where they try to deny that the mind exists at all. And that I truly do not understand. How can you think that nobody can think? Yet intelligent, respected philosophers have claimed to believe such things.

Love is one of the most important parts of our lives.

This may be more true of humans than of literally any other entity in the known universe.

The only serious competition comes from other mammals: They are really the only other beings we know of that are capable of love. And even they don’t seem to be as good at it as we are; they can love only those closest to them, while we can love entire nations and even abstract concepts.

And once you go beyond that, even to reptiles—let alone fish, or amphibians, or insects, or molluscs—it’s not clear that other animals are really capable of love at all. They seem to be capable of some forms of thought and feeling: They get hungry, or angry, or horny. But do they really love?

And even the barest emotional capacities of an insect are still categorically beyond what most of the universe is capable of feeling, which is to say: Nothing. The vast, vast majority of the universe feels neither love nor hate, neither joy nor pain.

Yet humans can love, and do love, and it is a large part of what gives our lives meaning.

I don’t just mean romantic love here, though I do think it’s worth noting that people who dismiss the reality of romantic love somehow seem reluctant to do the same for the love parents have for their children—even though it’s made of pretty much the same brain chemicals. Perhaps there is a limit to their cynicism.

Yes, love is made of chemicals—because everything is made of chemicals. We live in a material, chemical universe. Saying that love is made of chemicals is an almost completely vacuous statement; it’s basically tantamount to saying that love exists.

In other contexts, you already understand this.

“That’s not a bridge, it’s just a bunch of iron atoms!” rightfully strikes you as an absurd statement to make. Yes, the bridge is made of steel, and steel is mostly iron, and everything is made of atoms… but clearly there’s a difference between a random pile of iron and a bridge.

“That’s not a computer, it’s just a bunch of silicon atoms!” similarly registers as nonsense: Yes, it is indeed mostly made of silicon, but beach sand and quartz crystals are not computers.

It is in this same sense that joy is made of dopamine and love is made of chemical reactions. Yes, those are in fact the constituent parts—but things are more than just their parts.

I think that on some level, even most rationalists recognize that love is more than some arbitrary chemical reaction. I think “love is just chemicals” is mainly something people turn to for a couple of reasons: Sometimes, they are so insistent on rejecting everything that even resembles religious belief that they end up rejecting all meaning and value in human life. Other times, they have been so heartbroken that they try to convince themselves love isn’t real—to dull the pain. (But of course if it weren’t, there would be no pain to dull.)

But love is no more (or less) a chemical reaction than any other human experience: The very belief “love is just a chemical reaction” is, itself, made of chemical reactions.

Everything we do is made of chemical reactions, because we are made of chemical reactions.

Part of the problem here—and with the Basic Fact of Cognitive Science in general—is that we really have no idea how this works. For most of what we deal with in daily life, and even an impressive swath of the overall cosmos, we have a fairly good understanding of how things work. We know how cars drive, how wind blows, why rain falls; we even know how cats purr and why birds sing. But when it comes to understanding how the physical matter of the brain generates the subjective experiences of thought, feeling, and belief—of which love is made—we lack even the most basic understanding. The correlation between the two is far too strong to deny; but as far as causal mechanisms go, we know absolutely nothing. (Indeed, it’s worse than that: We can scarcely imagine a causal mechanism that would make any sense. We not only don’t know the answer; we don’t know what an answer would look like.)

So, no, I can’t tell you how we get from oxytocin and dopamine to love. I don’t know how that makes any sense. No one does. But we do know it’s true.

And just like everything else, love is more than the chemicals it’s made of.

Adversarial design

Feb 4 JDN 2460346

Have you noticed how Amazon feels a lot worse lately? Years ago, it was extremely convenient: You’d just search for what you want, it would give you good search results, you could buy what you want and be done. But now you have to slog through “sponsored results” and a bunch of random crap made by no-name companies in China before you can get to what you actually want.

Temu is even worse, and has been from the start: You can’t buy anything on Temu without first being inundated in ads. It’s honestly such an awful experience, I don’t understand why anyone is willing to buy anything from Temu.

#WelcomeToCyberpunk, I guess.

Even some video games have become like this: The free-to-play or “freemium” business model seems to be taking off, where you don’t pay money for the game itself, but then have to deal with ads inside the game trying to sell you additional content, because that’s where the developers actually make their money. And now AAA firms like EA and Ubisoft are talking about going to a subscription-based model where you don’t even own your games anymore. (Fortunately there’s been a lot of backlash against that; I hope it persists.)

Why is this happening? Isn’t capitalism supposed to make life better for consumers? Isn’t competition supposed to make products and services improve over time?

Well, first of all, these markets are clearly not as competitive as they should be. Amazon has a disturbingly large market share, and while the video game market is more competitive, it’s still dominated by a few very large firms (like EA and Ubisoft).

But I think there’s a deeper problem here, one which may be specific to media content.

What I mean by “media content” here is fairly broad: I would include art, music, writing, journalism, film, and video games.

What all of these things have in common is that they are not physical products (they’re not like a car or a phone that is a single physical object), but they are also not really services either (they aren’t something you just do as an action and it’s done, like a haircut, a surgery, or a legal defense).

Another way of thinking about this is that media content can be copied with zero marginal cost.

Because it can be copied with zero marginal cost, media content can’t simply be made and sold the way that conventional products and services are. There are a few different ways it can be monetized.


The most innocuous way is commission or patronage, where someone pays someone else to create a work because they want that work. This is totally unproblematic. You want a piece of art, you pay an artist, they make it for you; great. Maybe you share copies with the world, maybe you don’t; whatever. It’s good either way.

Unfortunately, it’s hard to sustain most artists and innovators on that model alone. (In a sense I’m using a patronage model, because I have a Patreon. But I’m not making anywhere near enough to live on that way.)

The second way is intellectual property, which I have written about before, and surely will again. If you can enforce limits on who is allowed to copy a work, then you can make a work and sell it for profit without fear of being undercut by someone else who simply copies it and sells it for cheaper. A detailed discussion of that is beyond the scope of this post, but you can read those previous posts, and I can give you the TLDR version: Some degree of intellectual property is probably necessary, but in our current society, it has clearly been taken much too far. I think artists and authors deserve to be able to copyright (or maybe copyleft) their work—but probably not for 70 years after their death.

And then there is a third way, the most insidious way: advertising. If you embed advertisements for other products and services within your content, you can then sell those ad slots for profit. This is how newspapers stay afloat, mainly; subscriptions have never been the majority of their revenue. It’s how TV was supported before cable and streaming—and cable usually has ads too, and streaming is starting to.

There is something fundamentally different about advertising as a service. Whereas most products and services you encounter in a capitalist society are made for you, designed for you to use, advertising is made at you, designed to manipulate you.

I’ve heard it put this way:

If you’re not paying, you aren’t the customer; you’re the product.

Monetizing content by advertising effectively makes your readers (or viewers, players, etc.) into the product instead of the customer.

I call this effect adversarial design.

I chose this term because it not only conveys the right sense of being an adversary; it also contains the word ‘ad’, and shares the Latin root ‘advertere’ with ‘advertising’.

When a company designs a car or a phone, they want it to appeal to customers—they want you to like it. Yes, they want to take your money; but it’s a mutually beneficial exchange. They get money, you get a product; you’re both happier.

When a company designs an ad, they want it to affect customers—they want you to do what it says, whether you like it or not. And they wouldn’t be doing it if they thought you would buy it anyway—so they are basically trying to make you do something you wouldn’t otherwise have done.

In other words, when designing a product, corporations want to be your friend.

When designing an ad, they become your enemy.

You would absolutely prefer not to have ads. You don’t want your attention taken in this way. But the way that these corporations make money—disgustingly huge sums of money—is by forcing those ads in your face anyway.

Yes, to be fair, there might be some kinds of ads that aren’t too bad. Simple, informative, unobtrusive ads that inform you that something is available you might not otherwise have known about. Movie trailers are like this; people often enjoy watching movie trailers, and they want to see what movies are going to come out next. That’s fine. I have no objection to that.

But it should be clear to anyone who has, um, used the Internet in the past decade that we have gone far, far beyond that sort of advertising. Ads have become aggressive, manipulative, aggravating, and—above all—utterly ubiquitous. You can’t escape them. They’re everywhere. Even when you use ad-block software (which I highly recommend, particularly Adblock Plus—which is free), you still can’t completely escape them.

That’s another thing that should make it pretty clear that there’s something wrong with ads: People are willing to make efforts or even pay money to make ads go away.

Whenever there is a game I like that’s ad-supported but you can pay to make the ads go away, I always feel like I’m being extorted, even if what I have to pay would have been a totally reasonable price for the game. Come on, just sell me the game. Don’t give me the game for free and then make me pay to make it not unpleasant. Don’t add anti-features.

This is clearly not a problem that market competition alone will solve. Even in highly competitive markets, advertising is still ubiquitous, aggressive and manipulative. In fact, competition may even make it worse—a true monopoly wouldn’t need to advertise very much.

Consider Coke and Pepsi ads; they’re actually relatively pleasant, aren’t they? Because all they’re trying to do is remind you and make you thirsty so you’ll buy more of the product you were already buying. They aren’t really trying to get you to buy something you wouldn’t have otherwise. They know that their duopoly is solid, and only a true Black Swan event would unseat their hegemony.

And have you ever seen an ad for your gas company? I don’t think I have—probably because I didn’t have a choice in who my gas company was; there was only one that covered my area. So why bother advertising to me?

If competition won’t fix this, what will? Is there some regulation we could impose that would make advertising less obtrusive? People have tried, without much success. I think imposing an advertising tax would help, but even that might not do enough.

What I really think we need right now is to recognize the problem and invest in solving it. Right now we have megacorporations which are thoroughly (literally) invested in making advertising more obtrusive and more ubiquitous. We need other institutions—maybe government, maybe civil society more generally—that are similarly invested in counteracting it.


Otherwise, it’s only going to get worse.

Depression and the War on Drugs

Jan 7 JDN 2460318

There exists, right now, an extremely powerful antidepressant which is extremely cheap and has minimal side effects.

It’s so safe that it has no known lethal dose, and—unlike SSRIs—it is not known to trigger suicide. It is shockingly effective: it works in a matter of hours—not weeks like a typical SSRI—and even a single moderate dose can have benefits lasting months. It isn’t patented, because it comes from a natural source. That natural source is so easy to grow, you can do it by yourself at home for less than $100.

Why in the world aren’t we all using it?

I’ll tell you why: This wonder drug is called psilocybin. It is a Schedule I controlled substance, which means that simply possessing it is a federal crime in the United States. Carrying it across the border is a felony.

It is also illegal in most other countries, including the UK, Australia, Belgium, Finland, Denmark, Sweden, Norway (#ScandinaviaIsNotAlwaysBetter), France, Germany, Hungary, Ireland, Japan, the list goes on….

Actually, it’s faster to list the places it’s not illegal: Austria, the Bahamas, Brazil, the British Virgin Islands, Jamaica, Nepal, the Netherlands, and Samoa. That’s it for true legalization, though it’s also decriminalized or unenforced in some other countries.

The best antidepressant we know of lies unused, because we made it illegal.

Similar stories hold for other amazingly beneficial drugs:

LSD also has powerful antidepressant effects with minimal side effects, and is likewise so ludicrously safe that we are not aware of a single fatal overdose ever happening in any human being. And it, too, is banned as Schedule I.

Ayahuasca is the same story: A great antidepressant, very safe, minimal side effects—and highly illegal.

There is also no evidence that psilocybin, LSD, or ayahuasca are addictive; and far from promoting the sort of violent, anti-social behavior that alcohol does, they actually seem to make people more compassionate.

This is pure speculation, but I think we should try psilocybin as a possible treatment for psychopathy. And if that works, maybe having a psilocybin trip should be a prerequisite for eligibility for any major elected office. (I often find it a bit silly how the biggest fans of psychedelics talk about the drugs radically changing the world, bringing peace and prosperity through a shift in consciousness; but if psilocybin could make all the world’s leaders more compassionate, that might actually have that kind of impact.)

Ketamine and MDMA at least do have some overdose risk and major side effects, and are genuinely addictive—but it’s not really clear that they’re any worse than SSRIs, and they certainly aren’t any worse than alcohol.

Alcohol may actually be the most widely-used antidepressant, and yet it is clearly, utterly ineffective; in fact, alcoholics consistently show depression increasing over time. Alcohol’s fatal dose is low enough that accidental fatal overdoses are common; it is also implicated in violent behavior, including half of all rapes—and in the majority of those rape cases, all consumption of alcohol was voluntary.

Yet alcohol can be bought over-the-counter at any grocery store.

The good news is that this is starting to change.

Recent changes in the law have allowed the use of psychedelic drugs in medical research—which is part of how we now know just how shockingly effective they are at treating depression.

Some jurisdictions in the US—notably, the whole state of Colorado—have decriminalized psilocybin, and Oregon has made it outright legal. Yet even this situation is precarious; just as has occurred with cannabis legalization, it’s still difficult to run a business selling psilocybin even in Oregon, because banks don’t want to deal with a business that sells something which is federally illegal.

Fortunately, this, too, is starting to change: A bill advanced in the US Senate a few months ago that would legalize banking services for cannabis businesses in states where cannabis is legal, and President Biden recently pardoned everyone convicted under federal law of simple cannabis possession. Now, why can’t we just make cannabis legal!?

The War on Drugs hasn’t just been a disaster for all the thousands of people needlessly imprisoned.

(Of course they had it the worst, and we should set them all free immediately—preferably with some form of restitution.)

The War on Drugs has also been a disaster for all the people who couldn’t get the treatment they needed, because we made that medicine illegal.

And for what? What are we even trying to accomplish here?

Prohibition was a failure—and a disaster of its own—but I can at least understand why it was done. When a drug kills nearly a hundred thousand people a year and is implicated in half of all rapes, that seems like a pretty damn good reason to want that drug gone. The question there becomes how we can best reduce alcohol use without the awful consequences that Prohibition caused—and so far, really high taxes seem to be the best method, and they absolutely do reduce crime.

But where was the disaster caused by cannabis, psilocybin, or ayahuasca? These drugs are made by plants and fungi; like alcohol, they have been used by humans for thousands of years. Where are the overdoses? Where is the crime? Psychedelics have none of these problems.

Honestly, it’s kind of amazing that these drugs aren’t more associated with organized crime than they are.

When alcohol was banned, it seemed to trigger an immediate, huge expansion of the Mafia, as only they were willing and able to meet the enormous demand for this highly addictive neurotoxin. But psilocybin has been illegal for decades, and yet there’s no sign of organized crime having anything to do with it. In fact, psilocybin use is associated with lower rates of arrest—which actually makes sense to me, because like I said, it makes you more compassionate.

That’s how idiotic and ridiculous our drug laws are:

We made a drug that causes crime legal, and we made a drug that prevents crime illegal.

Note that this also destroys any conspiracy theory suggesting that the government wants to keep us all docile and obedient: psilocybin is way better at making people docile than alcohol. No, this isn’t the product of some evil conspiracy.

Hanlon’s Razor: Never attribute to malice what can be adequately explained by stupidity.

This isn’t malice; it’s just massive, global, utterly catastrophic stupidity.

I might attribute this to the Puritanical American attitude toward pleasure (Pleasure is suspect, pleasure is dangerous), but I don’t think of Sweden as particularly Puritanical, and they also ban most psychedelics. I guess the most libertine countries—the Netherlands, Brazil—seem to be the ones that have legalized them; but it doesn’t really seem like one should have to be that libertine to want the world’s cheapest, safest, most effective antidepressants to be widely available. I have very mixed feelings about Amsterdam’s (in)famous red light district, but absolutely no hesitation in supporting their legalization of psilocybin truffles.

Honestly, I think patriarchy might be part of this. Alcohol is seen as a very masculine drug—maybe because it can make you angry and violent. Psychedelics seem more feminine; they make you sensitive, compassionate and loving.

Even the way that psychedelics make you feel more connected with your body is sort of feminine; we seem to have a common notion that men are their minds, but women are their bodies.

Here, try it. Someone has said, “I feel really insecure about my body.” Quick: What is that person’s gender? Now suppose someone has said, “I’m very proud of my mind.” What is that person’s gender?

(No, it’s not just because the former is insecure and the latter is proud—though we do also gender those emotions, and there’s statistical evidence that men are generally more confident, though that’s never been my experience of manhood. Try it with the emotions swapped and it still works, just not quite as well.)

I’m not suggesting that this makes sense. Both men and women are precisely as physical and mental as each other—we are all both, and that is a deep truth about our nature. But I know that my mind makes an automatic association between mind/body and male/female, and I suspect yours does as well, because we came from similar cultural norms. (This goes at least back to Classical Rome, where the animus, the rational soul, was masculine, while the anima, the emotional one, was feminine.)

That is, it may be that we banned psychedelics because they were girly. The men in charge were worried about us becoming soft and weak. The drug that’s tied to thousands of rapes and car collisions is manly. The drug that brings you peace, joy, and compassion is not.

Think about the things that the mainstream objected to about Hippies: Men with long hair and makeup, women wearing pants, bright colors, flowery patterns, kindness and peacemongering—all threats to the patriarchal order.

Whatever it is, we need to stop. Millions of people are suffering, and we could so easily help them; all we need to do is stop locking people up for taking medicine.

Compassion and the cosmos

Dec 24 JDN 2460304

When this post goes live, it will be Christmas Eve, one of the most important holidays around the world.

Ostensibly it celebrates the birth of Jesus, but it doesn’t really.

For one thing, Jesus almost certainly wasn’t born in December. The December date goes back at least to fourth-century Rome, and was later consolidated by decrees such as the Council of Tours in AD 567; it was set to coincide with existing celebrations—not only other Christian celebrations such as the Feast of the Epiphany, but also many non-Christian celebrations such as Yuletide, Saturnalia, and others around the Winter Solstice. (People today often say “Yuletide” when they actually mean Christmas, because the syncretization was so absolute.)

For another, an awful lot of the people celebrating Christmas don’t particularly care about Jesus. Countries like Sweden, Belgium, the UK, Australia, Norway, and Denmark are majority atheist but still very serious about Christmas. Maybe we should try to secularize and ecumenize the celebration and call it Solstice or something, but that’s a tall order. For now, it’s Christmas.

Compassion, love, and generosity are central themes of Christmas—and, by all accounts, Jesus did exemplify those traits. Christianity has a very complicated history, much of it quite dark; but this part of it at least seems worth preserving and even cherishing.

It is truly remarkable that we have compassion at all.

Most of this universe has no compassion. Many would like to believe otherwise, and they invent gods and other “higher beings” or attribute some sort of benevolent “universal consciousness” to the cosmos. (Really, most people copy the prior inventions of others.)

This is all wrong.

The universe is mostly empty, and what is here is mostly pitilessly indifferent.

The vast majority of the universe is composed of cold, dark, empty space—or perhaps of “dark energy”, a phenomenon we really don’t understand at all, which many physicists believe is actually a shockingly powerful form of energy contained within empty space.

Most of the rest is made up of “dark matter”, a substance we still don’t really understand either, but believe to be basically a dense sea of particles that have mass but not much else, which cluster around other mass by gravity but otherwise rarely interact with other matter or even with each other.

Most of the “ordinary matter” (more properly, baryonic matter; we think of it as ordinary, but it is actually by far the minority) is contained within stars and nebulae. It is mostly hydrogen and helium. Some of the other lighter elements—like lithium, sodium, carbon, oxygen, nitrogen, and all the way up to iron—can be made within ordinary stars, but still form a tiny fraction of the mass of the universe. Anything heavier than that—silver, gold, uranium—can only be made in exotic, catastrophic cosmic events, mainly supernovae, and as a result these elements are even rarer still.

Most of the universe is mind-bendingly cold: about 3 Kelvin, just barely above absolute zero.

Most of the baryonic matter is mind-bendingly hot, contained within stars that burn with nuclear fires at thousands or even millions of Kelvin.

From a cosmic perspective, we are bizarre.

We live at a weird intermediate temperature and pressure, where matter can take on such exotic states as liquid and solid, rather than the far more common gas and plasma. We do contain a lot of hydrogen—that, at least, is normal by the standards of baryonic matter. But then we’re also made up of oxygen, carbon, nitrogen, and even little bits of all sorts of other elements that can only be made in supernovae? What kind of nonsense lifeform depends upon something as exotic as iodine to survive?

Most of the universe does not care at all about you.

Most of the universe does not care about anything.

Stars don’t burn because they want to. They burn because that’s what happens when hydrogen slams into other hydrogen hard enough.

Planets don’t orbit because they want to. They orbit because if they didn’t, they’d fly away or crash into their suns—and those that did are long gone now.

Even most living things, which are already nearly as bizarre as we are, don’t actually care much.

Maybe there is a sense in which a C. elegans or an oak tree or even a cyanobacterium wants to live. It certainly seems to try to live; it has behaviors that seem purposeful, which evolved to promote its ability to survive and pass on its genes. Rocks don’t behave. Stars don’t seek. But living things—even tiny, microscopic living things—do.

But we are something very special indeed.

We are animals. Lifeforms with complex, integrated nervous systems—in a word, brains—that allow us to not simply live, but to feel. To hunger. To fear. To think. To choose.

Animals—and to the best of our knowledge, only animals, though I’m having some doubts about AI lately—are capable of making choices and experiencing pleasure and pain, and thereby becoming something more than living beings: moral beings.

Because we alone can choose, we alone have the duty to choose rightly.

Because we alone can be hurt, we alone have the right to demand not to be.

Humans are even very special among animals. We are not just animals but chordates; not just chordates but mammals; not just mammals but primates. And even then, not just primates. We’re special even by those very high standards.

When you count up all the ways that we are strange compared to the rest of the universe, it seems incredibly unlikely that beings like us would come into existence at all.

Yet here we are. And however improbable it may have been for us to emerge as intelligent beings, we had to do so in order to wonder how improbable it was—and so in some sense we shouldn’t be too surprised.

It is a mistake to say that we are “more evolved” than any other lifeform; turtles and cockroaches had just as much time to evolve as we did, and if anything their relative stasis for hundreds of millions of years suggests a more perfected design: “If it ain’t broke, don’t fix it.”

But we are different from other lifeforms in a very profound way. And I dare say, we are better.

All animals feel pleasure, pain and hunger. (Some believe that even some plants and microscopic lifeforms may too.) Pain when something damages you; hunger when you need something; pleasure when you get what you needed.

But somewhere along the way, new emotions were added: Fear. Lust. Anger. Sadness. Disgust. Pride. To the best of our knowledge, these are largely chordate emotions, often believed to have emerged around the same time as reptiles. (Does this mean that cephalopods never get angry? Or did they evolve anger independently? Surely worms don’t get angry, right? Our common ancestor with cephalopods was probably something like a worm, perhaps a nematode. Does C. elegans get angry?)

And then, much later, still newer emotions evolved. These ones seem to be largely limited to mammals. They emerged from the need for mothers to care for their few and helpless young. (Consider how a bear or a cat fiercely protects her babies from harm—versus how a turtle leaves her many, many offspring to fend for themselves.)

One emotion formed the core of this constellation:

Love.

Caring, trust, affection, and compassion—and also rejection, betrayal, hatred, and bigotry—all came from this one fundamental capacity to love. To care about the well-being of others as well as our own. To see our purpose in the world as extending beyond the borders of our own bodies.

This is what makes humans different, most of all. We are the beings most capable of love.

We are of course by no means perfect at it. Some would say that we are not even very good at loving.

Certainly there are some humans, such as psychopaths, who seem virtually incapable of love. But they are rare.

We often wish that we were better at love. We wish that there were more compassion in the world, and fear that humanity will destroy itself because we cannot find enough compassion to compensate for our increasing destructive power.

Yet if we are bad at love, compared to what?

Compared to the unthinking emptiness of space, the hellish nuclear fires of stars, or even the pitiless selfishness of a worm or a turtle, we are absolute paragons of love.

We somehow find a way to love millions of others who we have never even met—maybe just a tiny bit, and maybe even in a way that becomes harmful, as solidarity fades into nationalism fades into bigotry—but we do find a way. Through institutions of culture and government, we find a way to trust and cooperate on a scale that would be utterly unfathomable even to the most wise and open-minded bonobo, let alone a nematode.

There are no other experts on compassion here. It’s just us.

Maybe that’s why so many people long for the existence of gods. They feel as ignorant as children, and crave the knowledge and support of a wise adult. But there aren’t any. We’re the adults. For all the vast expanses of what we do not know, we actually know more than anyone else. And most of the universe doesn’t know a thing.

If we are not as good at loving as we’d like, the answer is for us to learn to get better at it.

And we know that we can get better at it, because we have. Humanity is more peaceful and cooperative now than we have ever been in our history. The process is slow, and sometimes there is backsliding, but overall, life is getting better for most people in most of the world most of the time.

As a species, as a civilization, we are slowly learning how to love ourselves, one another, and the rest of the world around us.

No one else will learn to love for us. We must do it ourselves.

But we can.

And I believe we will.

Israel, Palestine, and the World Bank’s disappointing priorities

Nov 12 JDN 2460261

Israel and Palestine are once again at war. (There are a disturbing number of different years in which one could have written that sentence.) The BBC has a really nice section of their website dedicated to reporting on various facets of the war. The New York Times also has a section on it, but it seems a little tilted in favor of Israel.

This time, it started with a brutal attack by Hamas, and now Israel has—as usual—overreacted and retaliated with a level of force that is sure to feed the ongoing cycle of extremism. All across social media I see people wanting me to take one side or the other, often even making good points: “Hamas slaughters innocents” and “Israel is a de facto apartheid state” are indeed both important points I agree with. But if you really want to know my ultimate opinion, it’s that this whole thing is fundamentally evil and stupid because human beings are suffering and dying over nothing but lies. All religions are false, most of them are evil, and we need to stop killing each other over them.

Anti-Semitism and Islamophobia are both morally wrong insofar as they involve harming, abusing or discriminating against actual human beings. Let people dress however they want, celebrate whatever holidays they want, read whatever books they want. Even if their beliefs are obviously wrong, don’t hurt them if they aren’t hurting anyone else. But both Judaism and Islam—and Christianity, and more besides—are fundamentally false, wrong, evil, stupid, and detrimental to the advancement of humanity.

That’s the thing that so much of the public conversation is too embarrassed to say; we’re supposed to pretend that they aren’t fighting over beliefs that are obviously false. We’re supposed to respect each particular flavor of murderous nonsense, and always find some other cause to explain the conflict. It’s over culture (what culture?); it’s over territory (whose territory?); it’s a retaliation for past conflict (over what?). We’re not supposed to say out loud that all of this violence ultimately hinges upon people believing in nonsense. Even if the conflict wouldn’t disappear overnight if everyone suddenly stopped believing in God—and are we sure it wouldn’t? Let’s try it—it clearly could never have begun, if everyone had started with rational beliefs in the first place.

But I don’t really want to talk about that right now. I’ve said enough. Instead I want to talk about something a little more specific, something less ideological and more symptomatic of systemic structural failures. Something you might have missed amidst the chaos.

The World Bank recently released a report on the situation focused heavily on the looming threat of… higher oil prices. (And of course there has been breathless reporting from various outlets regarding a headline figure of $150 per barrel which is explicitly stated in the report as an unlikely “worst-case scenario”.)

There are two very big reasons why I found this dismaying.


The first, of course, is that there are obviously far more important concerns here than commodity prices. Yes, I know that this report is part of an ongoing series of Commodity Markets Outlook reports, but the fact that this is the sort of thing that the World Bank has ongoing reports about is also saying something important about the World Bank’s priorities. They release monthly commodity forecasts and full Commodity Markets Outlook reports that come out twice a year, unlike the World Development Reports that only come out once a year. The World Bank doesn’t release a twice-annual Conflict Report or a twice-annual Food Security Report. (Even the FAO, which publishes an annual State of Food Security and Nutrition in the World report, also publishes a State of Agricultural Markets report just as often.)

The second is that, when reading the report, one can clearly tell that whoever wrote it thinks that rising oil and gas prices are inherently bad. They keep talking about all of these negative consequences that higher oil prices could have, and seem utterly unaware of the really enormous upside here: We may finally get a chance to do something about climate change.

You see, one of the most basic reasons why we haven’t been able to fix climate change is that oil is too damn cheap. Its market price has consistently failed to reflect its actual costs. Part of that is due to oil subsidies around the world, which have held the price lower than it would be even in a free market; but most of it is due to the simple fact that pollution and carbon emissions don’t cost money for the people who produce them, even though they do cost the world.

Fortunately, wind and solar power are also getting very cheap, and are now at the point where they can outcompete oil and gas for electrical power generation. But that’s not enough. We need to remove oil and gas from everything: heating, manufacturing, agriculture, transportation. And that is far easier to do if oil and gas suddenly become more expensive and so people are forced to stop using them.

Now, granted, many of the downsides in that report are genuine: Because oil and gas are such vital inputs to so many economic processes, it really is true that making them more expensive will make lots of other things more expensive, and in particular could increase food insecurity by making farming more expensive. But if that’s what we’re concerned about, we should be focusing on that: What policies can we use to make sure that food remains available to all? And one of the best things we could be doing toward that goal is finding ways to make agriculture less dependent on oil.

By focusing on oil prices instead, the World Bank is encouraging the world to double down on the very oil subsidies that are holding climate policy back. Even food subsidies—which certainly have their own problems—would be an obviously better solution, and yet they are barely mentioned.

In fact, if you actually read the report, it shows that fears of food insecurity seem unfounded: Food prices are actually declining right now. Grain prices in particular seem to be falling back down remarkably quickly after their initial surge when Russia invaded Ukraine. Of course that could change, but it’s a really weird attitude toward the world to see something good and respond with, “Yes, but it might change!” This is how people with anxiety disorders (and I would know) think—which makes it seem as though much of the economic policy community suffers from some kind of collective equivalent of an anxiety disorder.

There also seems to be a collective sense that higher prices are always bad. This is hardly just a World Bank phenomenon; on the contrary, it seems to pervade all of economic thought, including the most esteemed economists, the most powerful policymakers, and even most of the general population of citizens. (The one major exception seems to be housing, where the sense is that higher prices are always good—even when the world is in a chronic global housing shortage that leaves millions homeless.) But prices can be too low or too high. And oil prices are clearly, definitely too low. Prices should reflect the real cost of production—all the real costs of production. It should cost money to pollute other people’s air.

In fact I think the whole report is largely a nothingburger: Oil prices haven’t even risen all that much so far—we’re still at $80 per barrel last I checked—and the one thing that is true about the so-called Efficient Market Hypothesis is that forecasting future prices is a fool’s errand. But it’s still deeply unsettling to see such intelligent, learned experts so clearly panicking over the mere possibility that there could be a price change which would so obviously be good for the long-term future of humanity.

There is plenty more worth saying about the Israel-Palestine conflict, and in particular what sort of constructive policy solutions we might be able to find that would actually result in any kind of long-term peace. I’m no expert on peace negotiations, and frankly I admit it would probably be a liability if I were ever personally involved in such a negotiation: I’d be tempted to tell both sides that they are idiots and fanatics. (The headline the next morning: “Israeli and Palestinian Delegates Agree on One Thing: They Hate the US Ambassador”.)

The World Bank could have plenty to offer here, yet so far they’ve been too focused on commodity prices. Their thinking is a little too much ‘bank’ and not enough ‘world’.

It is a bit ironic, though also vaguely encouraging, that there are those within the World Bank itself who recognize this problem: Just a few weeks ago Ajay Banga gave a speech to the World Bank about “a world free of poverty on a livable planet”.

Yes. Those sound like the right priorities. Now maybe you could figure out how to turn that lip service into actual policy.

On Horror

Oct 29 JDN 2460247

Since this post will go live the weekend before Halloween, the genre of horror seemed a fitting topic.

I must confess, I don’t really get horror as a genre. Generally I prefer not to experience fear and disgust? This can’t be unusual; it’s literally a direct consequence of the evolutionary function of fear and disgust. It’s wanting to be afraid and disgusted that’s weird.

Cracked once came out with a list of “Horror Movies for People Who Hate Horror”, and I found some of my favorite films on it, such as Alien (which is as much sci-fi as horror), The Cabin in the Woods (which is as much satire), and Zombieland (which is a comedy). Other such lists have prominently featured Get Out (which is as much political as it is horrific), Young Frankenstein (which is entirely a comedy), and The Silence of the Lambs (which is horror, at least in large part, but which I didn’t so much enjoy as appreciate as a work of artistry; I watch it the way I look at Guernica). Some such lists include Saw, which I can appreciate on some level—it does have a lot of sociopolitical commentary—but still can’t enjoy (it’s just too gory). I note that none of these lists seem to include Event Horizon, which starts out as a really good sci-fi film, but then becomes so very much horror that I ended up hating it.

In trying to explain the appeal of horror to me, people have likened it to the experience of a roller coaster: Isn’t fear exhilarating?

I do enjoy roller coasters. But the analogy falls flat for me, because, well, my experience of riding a roller coaster isn’t fear—the exhilaration comes directly from the experience of moving so fast, a rush of “This is awesome!” that has nothing to do with being afraid. Indeed, should I encounter a roller coaster that actually made me afraid, I would assiduously avoid it, and wonder if it was up to code. My goal is not to feel like I’m dying; it’s to feel like I’m flying.

And speaking of flying: Likewise, the few times I have had the chance to pilot an aircraft were thrilling in a way it is difficult to convey to anyone who hasn’t experienced it. I think it might be something like what religious experiences feel like. The sense of perspective, looking down on the world below, seeing it as most people never see it. The sense of freedom, of, for once in your life, actually having the power to maneuver freely in all three dimensions. The subtle mix of knowing that you are traveling at tremendous speed while feeling as if you are peacefully drifting along. Astronauts also describe this sort of experience, which no doubt is even more intense for them.

Yet in all that, fear was never my primary emotion, and had it been, it would have undermined the experience rather than enhanced it. The brief moment when our engine stalled flying over Scotland certainly raised my heart rate, but not in a pleasant way. In that moment—objectively brief, subjectively interminable—I spent all of my emotional energy struggling to remain calm. It helped to continually remind myself of what I knew about aerodynamics: Wings want to fly. An airplane without an engine isn’t a rock; it’s a glider. It is entirely possible to safely land a small aircraft on literally zero engine power. Still, I’m glad we got the propeller started again and didn’t have to.

I have also enjoyed classic horror novels such as Dracula and Frankenstein; their artistry is also quite apparent, and reading them as books provides an emotional distance that watching them as films often lacks. I particularly notice this with vampire stories, as I can appreciate the romantic allure of immortality and the erotic tension of forbidden carnal desire—but the sight of copious blood on screen tends to trigger my mild hematophobia.

Yet if fear is the goal, surely having a phobia should only make it stronger and thus better? But this seems to be a pattern: People with a genuine phobia of the subject in question don’t actually enjoy horror films on that subject. Arachnophobes don’t often watch films about giant spiders. Cynophobes are rarely werewolf aficionados. And, indeed, rare is the hematophobe who is a connoisseur of vampire movies.

Moreover, we rarely see horror films about genuine dangers in the world. There are movies about rape, murder, war, terrorism, espionage, asteroid impacts, nuclear weapons and climate change, but (with rare exceptions) they aren’t horror films. They don’t wallow in fear the way that films about vampires, ghosts and werewolves do. They are complex thrillers (Argo, Enemy of the State, Tinker Tailor Soldier Spy, Broken Arrow), police procedurals (most films about rape or murder), heroic sagas (just about every war film), or just fun, light-hearted action spectacles (Armageddon, The Day After Tomorrow). Rather than a loosely-knit gang of helpless horny teenagers, they have strong, brave heroes. Even films about alien invasions aren’t usually horror (Alien notwithstanding); they also tend to be heroic war films. Unlike nuclear war or climate change, alien invasion is a quite unlikely event; but it’s surely more likely than zombies or werewolves.

In other words, when something is genuinely scary, the story is always about overcoming it. There is fear involved, but in the end we conquer our fear and defeat our foes. The good guys win in the end.

I think, then, that enjoyment of horror is not about real fear. Feeling genuinely afraid is unpleasant—as by all Darwinian rights it should be.

Horror is about simulating fear. It’s a kind of brinksmanship: You take yourself to the edge of fear and then back again, because what you are seeing would be scary if it were real, but deep down, you know it isn’t. You can sleep at night after watching movies about zombies, werewolves and vampires, because you know that there aren’t really such things as zombies, werewolves and vampires.

What about the exceptions? What about, say, The Silence of the Lambs? Psychopathic murderers absolutely are real. (Not especially common—but real.) But The Silence of the Lambs only works because of truly brilliant writing, directing, and acting; and part of what makes it work is that it isn’t just horror. It has layers of subtlety, and it crosses genres—it also has a good deal of police procedural in it, in fact. And even in The Silence of the Lambs, at least one of the psychopathic murderers is beaten in the end; evil does not entirely prevail.

Slasher films—which I especially dislike (see above: hematophobia)—seem like they might be a counterexample: they genuinely are a common subgenre, and they mainly involve psychopathic murderers. But in fact almost all slasher films involve some kind of supernatural element: In Friday the 13th, Jason seems to be immortal. In A Nightmare on Elm Street, Freddy Krueger doesn’t just attack you with a knife, he invades your dreams. Slasher films actually seem to go out of their way to make the killer not real. Perhaps this is because showing helpless people murdered by a realistic psychopath would inspire too much genuine fear.

The terrifying truth is that, more or less at any time, a man with a gun could in fact come and shoot you, and while there may be ways to reduce that risk, there’s no way to make it zero. But that isn’t fun for a movie, so let’s make him a ghost or a zombie or something, so that when the movie ends, you can remind yourself it’s not real. Let’s pretend to be afraid, but never really be afraid.

Realizing that makes me at least a little more able to understand why some people enjoy horror.

Then again, I still don’t.

How will AI affect inequality?

Oct 15 JDN 2460233

Will AI make inequality worse, or better? Could it do a bit of both? Does it depend on how we use it?

This is of course an extremely big question. In some sense it is the big economic question of the 21st century. The difference between the neofeudalist cyberpunk dystopia of Neuromancer and the social democratic utopia of Star Trek just about hinges on whether AI becomes a force for higher or lower inequality.

Krugman seems quite optimistic: Based on forecasts by Goldman Sachs, AI seems poised to automate more high-paying white-collar jobs than low-paying blue-collar ones.

But, well, it should be obvious that Goldman Sachs is not an impartial observer here. They do have reasons to get their forecasts right—their customers are literally invested in those forecasts—but like anyone who immensely profits from the status quo, they also have a broader agenda of telling the world that everything is going great and there’s no need to worry or change anything.

And when I look a bit closer at their graphs, it seems pretty clear that they aren’t actually answering the right question. They estimate an “exposure to AI” coefficient (somehow; their methodology is not clearly explained and lots of it is proprietary), and if it’s between 10% and 49% they call it “complementary” while if it’s 50% or above they call it “replacement”.
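Just to make the objection concrete, their stated rule amounts to something like the following (a minimal sketch of the thresholds as I read them; the function name, its signature, and the treatment of scores below 10% are my own inferences, not anything Goldman Sachs published):

```python
def classify_exposure(exposure: float) -> str:
    """One 'exposure to AI' number decides the fate of an entire occupation."""
    if exposure >= 0.50:
        return "replacement"
    if exposure >= 0.10:
        return "complementary"
    return "no automation"  # my inference for scores below 10%
```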

But that is not how complements and substitutes work. It isn’t a question of “how much of the work can be done by machine” (whatever that means). It’s a question of whether you will still need the expert human.

It could be that the machine does 90% of the work, but you still need a human being there to tell it what to do, and that would be complementary. (Indeed, this basically is how finance works right now, and I see no reason to think it will change any time soon.) Conversely, it could be that the machine only does 20% of the work, but that was the 20% that required expert skill, and so a once comfortable high-paying job can now be replaced by low-paid temp workers. (This is more or less what’s happening at Amazon warehouses: They are basically managed by AI, but humans still do most of the actual labor, and get paid peanuts for it.)

For their category “computer and mathematical”, they call it “complementary”, and I agree: We are still going to need people who can code. We’re still going to need people who know how to multiply matrices. We’re still going to need people who understand search algorithms. Indeed, if the past is any indicator, we’re going to need more and more of those people, and they’re going to keep getting paid higher and higher salaries. Someone has to make the AI, after all.

Yet I’m not quite so sure about the “mathematical” part in many cases. We may not need many people who can solve differential equations, actually: maybe a few to design the algorithms. But honestly, even then, a software program with a simple finite-difference algorithm can often solve much more interesting problems than one with a full-fledged differential equation solver, because one of the dirty secrets of differential equations is that for some of the most important ones (like the Navier-Stokes equations), we simply do not know how to solve them analytically. Once you have enough computing power, you can often stop trying to be clever and just brute-force the damn thing.
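To show what I mean by brute force, here is a minimal finite-difference sketch; the example equation (the 1D heat equation), the parameter values, and all the names in it are my own illustrative choices. You discretize space and time and just march forward, no closed-form solution required.

```python
import numpy as np

# Brute-force finite differences for the 1D heat equation: u_t = alpha * u_xx.
# No analytic solution anywhere; discretize space and time and march forward.

alpha = 0.01              # diffusion coefficient (illustrative value)
nx = 101                  # spatial grid points on [0, 1]
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha  # safely below the stability bound dt <= dx^2 / (2 * alpha)

x = np.linspace(0.0, 1.0, nx)
u = np.exp(-200 * (x - 0.5) ** 2)  # initial condition: a sharp bump in the middle

for _ in range(5000):
    # Central difference approximates u_xx; forward Euler steps in time.
    u[1:-1] += alpha * dt / dx**2 * (u[:-2] - 2 * u[1:-1] + u[2:])
    u[0] = u[-1] = 0.0             # fixed (Dirichlet) boundary conditions

print(float(u.max()))  # the bump has spread out and decayed, as diffusion should
```

Nothing in that little loop knows or cares whether a closed-form solution exists; with enough computing power, the same strategy scales up to equations we cannot solve analytically at all.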

Yet for “transportation and material movement”—that is, trucking—Goldman Sachs confidently forecasts mostly “no automation” with a bit of “complementary”. But this year—not at some distant point in the future, not in some sci-fi novel, this year in the actual world—the Governor of California already vetoed a bill that would have required automated trucks to have human drivers. The trucks aren’t on the roads yet—but if we are already making laws about them, they’re going to be, soon. (State legislatures are not known for their brilliant foresight or excessive long-term thinking.) And if the law doesn’t require them to have human drivers, they probably won’t have them, which means that hundreds of thousands of long-haul truckers will suddenly be out of work.

It’s also important to differentiate between different types of jobs that may fall under the same category or industry.

Neurosurgeons are not going anywhere, and improved robotics will only allow them to perform better, safer laparoscopic surgeries. Nor are nurses going anywhere, because some things just need an actual person physically there with the patient. But general practitioners, psychotherapists, and even radiologists are already seeing many of their tasks automated. So is “medicine” being automated or not? That depends what sort of medicine you mean. And yet it clearly means an increase in inequality, because it’s the middle-paying jobs (like GPs) that are going away, while the high-paying jobs (like neurosurgeons) and the low-paying jobs (like nurses) remain.

Likewise, consider “legal services”, which is one of the few industries that Goldman Sachs thinks will be substantially replaced by AI. Are high-stakes trial lawyers like Sam Bernstein getting replaced? Clearly not. Nor would I expect most corporate lawyers to disappear. Human lawyers will still continue to perform at least a little bit better than AI law systems, and the rich will continue to use them, because a few million dollars for a few percentage points better odds of winning is absolutely worth it when billions of dollars are on the line. So which law services are going to get replaced by AI? First, routine legal questions, like how to renew your work visa or set up a living will—it’s already happening. Next, someone will probably decide that public defenders aren’t worth the cost and start automating the legal defenses of poor people who get accused of crimes. (And to be honest, it may not be much worse than how things currently are in the public defender system.) The advantage of such a change is that it will most likely bring court costs down—and that is desperately needed. But it may also tilt the courts even further in favor of the rich. It may also make it even harder to start a career as a lawyer, cutting off the bottom of the ladder.

Or consider “management”, which Goldman Sachs thinks will be “complementary”. Are CEOs going to get replaced by AI? No, because the CEOs are the ones making that decision. This is certainly true for any closely-held firm: No CEO is going to fire himself. Theoretically, if shareholders and boards of directors pushed hard enough, they might be able to get the CEO of a publicly-traded corporation ousted in favor of an AI, and if the world were really made of neoclassical rational agents, that might actually happen. But in the real world, the rich have tremendous solidarity with each other (and only each other), and very few billionaires are going to take aim at other billionaires when it comes time to decide whose jobs should be replaced. Yet there are many levels of management below the CEO and the board of directors, and many of those are already in the process of being replaced: Instead of relying on the expert judgment of a human manager, it’s increasingly common to develop “performance metrics”, feed them into an algorithm, and use the result to decide who gets raises and who gets fired. It all feels very “objective” and “impartial” and “scientific”—and usually ends up being both dehumanizing and ultimately not even effective at increasing profits. At some point, many corporations are going to realize that their middle managers aren’t actually making any important decisions anymore; they’ll feed that into the algorithm, and it will tell them to fire the middle managers.
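
To make the mechanism concrete, here is a hypothetical sketch of that metric-driven management (every field name, weight, and number below is invented for illustration). Notice that the weights are the real management judgment; the algorithm just hides them behind a veneer of objectivity.

    from dataclasses import dataclass

    @dataclass
    class Employee:
        name: str
        tickets_closed: int     # or sales made, patients seen, papers published...
        hours_logged: float
        peer_score: float       # average of a 0-5 survey

    def performance_score(e: Employee) -> float:
        # an arbitrary weighted sum: the choice of weights *is* the decision,
        # it has merely been moved out of sight
        return 0.5 * e.tickets_closed + 0.01 * e.hours_logged + 10.0 * e.peer_score

    staff = [
        Employee("A", 120, 1900.0, 4.1),
        Employee("B", 80, 2100.0, 4.8),
        Employee("C", 200, 1700.0, 2.9),
    ]
    for e in sorted(staff, key=performance_score):
        print(e.name, round(performance_score(e), 1))   # lowest score is "fired" first

In this toy example, employee B, the one their colleagues like best, is first on the chopping block, simply because tickets closed happens to dominate the weighted sum.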

Thus, even though we think of “medicine”, “law”, and “management” as high-paying careers, the effect of AI is largely going to be to increase inequality within those industries. It isn’t the really high-paid doctors, managers, and lawyers who are going to get replaced.

I am therefore much less optimistic than Krugman about this. I do believe there are many ways that technology, including artificial intelligence, could be used to make life better for everyone, and even perhaps one day lead us into a glorious utopian future.

But I don’t see most of the people who have the authority to make important decisions for our society actually working towards such a future. They seem much more interested in maximizing their own profits or advancing narrow-minded ideologies. (Or, as most right-wing political parties do today: Advancing narrow-minded ideologies about maximizing the profits of rich people.) And if we simply continue on the track we’ve been on, our future is looking a lot more like Neuromancer than it is like Star Trek.

AI and the “generalization faculty”

Oct 1 JDN 2460219

The phrase “artificial intelligence” (AI) has now become so diluted by overuse that we needed to invent a new term for its original meaning. That term is now “artificial general intelligence” (AGI). In the 1950s, AI meant the hypothetical possibility of creating artificial minds—machines that could genuinely think and even feel like people. Now it means… pathing algorithms in video games and chatbots? The goalposts seem to have moved a bit.
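
For a sense of how modest that current meaning is, here is a minimal sketch (my own toy example) of the sort of “AI” a video game actually ships: breadth-first search routing a character around a wall on a grid.

    from collections import deque

    def bfs_path(grid, start, goal):
        """Shortest path on a grid of 0 (open) / 1 (wall), moving in 4 directions."""
        came_from = {start: None}
        queue = deque([start])
        while queue:
            cur = queue.popleft()
            if cur == goal:                      # walk backwards to recover the path
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = came_from[cur]
                return path[::-1]
            r, c = cur
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                        and grid[nr][nc] == 0 and nxt not in came_from):
                    came_from[nxt] = cur
                    queue.append(nxt)
        return None                              # no route exists

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(bfs_path(grid, (0, 0), (2, 0)))        # routes along the top and around the wall

Useful, certainly; but calling this “intelligence” in the 1950s sense is quite a stretch.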

It seems that AGI has always been 20 years away. It was 20 years away 50 years ago, and it will probably be 20 years away 50 years from now. Someday it will really be 20 years away, and then, 20 years after that, it will actually happen—but I doubt I’ll live to see it. (XKCD also offers some insight here: “It has not been conclusively proven impossible.”)

We make many genuine advances in computer technology and software, which have profound effects—both good and bad—on our lives, but the dream of making a person out of silicon always seems to drift ever further into the distance, like a mirage on the desert sand.

Why is this? Why do so many people—even, perhaps especially, experts in the field—keep thinking that we are on the verge of this seminal, earth-shattering breakthrough, and keep ending up wrong—over, and over, and over again? How do such obviously smart people keep making the same mistake?

I think it may be because, all along, we have been laboring under the tacit assumption of a generalization faculty.

What do I mean by that? By “generalization faculty”, I mean some hypothetical mental capacity that allows you to generalize your knowledge and skills across different domains, so that once you get good at one thing, it also makes you good at other things.

This certainly seems to be how humans think, at least some of the time: Someone who is very good at chess is likely also pretty good at go, and someone who can drive a motorcycle can probably also drive a car. An artist who is good at portraits is probably not bad at landscapes. Human beings are, in fact, able to generalize, at least sometimes.

But I think the mistake lies in imagining that there is just one thing that makes us good at generalizing: Just one piece of hardware or software that allows you to carry over skills from any domain to any other. This is the “generalization faculty”—the imagined faculty that I think we do not have, and that I think does not exist.

Computers clearly do not have the capacity to generalize. A program that can beat grandmasters at chess may be useless at go, and self-driving software that works on one type of car may fail on another, let alone a motorcycle. An art program that is good at portraits of women can fail when trying to do portraits of men, and produce horrific Daliesque madness when asked to make a landscape.

But if they did somehow have our generalization capacity, then, once they could compete with us at some things—which they surely can, already—they would be able to compete with us at just about everything. So if it were really just one thing that would let them generalize, let them leap from AI to AGI, then suddenly everything would change, almost overnight.

And so this is how the AI hype cycle goes, time and time again:

  1. A computer program is made that does something impressive, something that other computer programs could not do, perhaps even something that human beings are not very good at doing.
  2. If that same prowess could be generalized to other domains, the result would plainly be something on par with human intelligence.
  3. Therefore, the only thing this computer program needs in order to be sapient is a generalization faculty.
  4. Therefore, there is just one more step to AGI! We are nearly there! It will happen any day now!

And then, of course, despite heroic efforts, we are unable to generalize that program’s capabilities except in some very narrow way—even decades after having good chess programs, getting programs to be good at go was a major achievement. We are unable to find the generalization faculty yet again. And the software becomes yet another “AI tool” that we will use to search websites or make video games.

For there never was a generalization faculty to be found. It always was a mirage in the desert sand.

Humans are in fact spectacularly good at generalizing, compared to, well, literally everything else in the known universe. Computers are terrible at it. Animals aren’t very good at it. Just about everything else is totally incapable of it. So yes, we are the best at it.

Yet we, in fact, are not particularly good at it in any objective sense.

In experiments, people often fail to generalize their reasoning even in very basic ways. There’s a famous example (Duncker’s “radiation problem”, later studied in depth by Gick and Holyoak) in which researchers try to get people to make an analogy between a military tactic and a radiation treatment; very smart, creative people often get it quickly, but most people are completely unable to make the connection unless you give them a lot of specific hints. People often struggle to find creative solutions to problems even when those solutions seem utterly obvious once you know them.

I don’t think this is because people are stupid or irrational. (To paraphrase Sydney Harris: Compared to what?) I think it is because generalization is hard.

People tend to be much better at generalizing within familiar domains where they have a lot of experience or expertise; this suggests that there isn’t just one generalization faculty, but many. We may have a plethora of overlapping generalization faculties that apply across different domains, and we can learn to improve some more than others.

But it isn’t just a matter of gaining more expertise. Highly advanced expertise is in fact usually more specialized—harder to generalize. A good amateur chess player is probably a good amateur go player, but a grandmaster chess player is rarely a grandmaster go player. Someone who does well in high school biology probably also does well in high school physics, but most biologists are not very good physicists. (And lest you say it’s simply because go and physics are harder: The converse is equally true.)

Humans do seem to have a suite of cognitive tools—some innate hardware, some learned software—that allows us to generalize our skills across domains. But even after hundreds of millions of years of evolving that capacity under the highest possible stakes, we still basically suck at it.

To be clear, I do not think it will take hundreds of millions of years to make AGI—or even millions, or even thousands. Technology moves much, much faster than evolution. But I would not be surprised if it took centuries, and I am confident it will at least take decades.

But we don’t need AGI for AI to have powerful effects on our lives. Indeed, even now, AI is already affecting our lives—in mostly bad ways, frankly, as we seem to be hurtling gleefully toward the very same corporatist cyberpunk dystopia we were warned about in the 1980s.

A lot of technologies have done great things for humanity—sanitation and vaccines, for instance—and even automation can be a very good thing, as increased productivity is how we attained our First World standard of living. But AI in particular seems best at automating away the kinds of jobs human beings actually find most fulfilling, and worsening our already staggering inequality. As a civilization, we really need to ask ourselves why we got automated writing and art before we got automated sewage cleaning or corporate management. (We should also ask ourselves why automated stock trading resulted in even more money for stock traders, instead of putting them out of their worthless parasitic jobs.) There are technological reasons for this, yes; but there are also cultural and institutional ones. Automated teaching isn’t far away, and education will be all the worse for it.

To change our lives, AI doesn’t have to be good at everything. It just needs to be good at whatever we were doing to make a living. AGI may be far away, but the impact of AI is already here.

Indeed, I think this quixotic quest for AGI, and all the concern about how to control it and what effects it will have upon our society, may actually be distracting from the real harms that “ordinary” “boring” AI is already having upon our society. I think a Terminator scenario, where the machines rapidly surpass our level of intelligence and rise up to annihilate us, is quite unlikely. But a scenario where AI puts millions of people out of work with insufficient safety net, triggering economic depression and civil unrest? That could be right around the corner.

Frankly, all it may take is getting automated trucks to work, which could be only a few years away. There are nearly 4 million truck drivers in the United States—well over a full percentage point of total employment all by itself. And the Governor of California just vetoed a bill that would have required all automated trucks to have human drivers. From an economic efficiency standpoint, his veto makes perfect sense: If the trucks don’t need drivers, why require them? But from an ethical and societal standpoint… what do we do with all the truck drivers!?

Knowing When to Quit

Sep 10 JDN 2460198

At the time of writing this post, I have officially submitted my letter of resignation at the University of Edinburgh. I’m giving them an entire semester of notice, so I won’t actually be leaving until December. But I have committed to my decision now, and that feels momentous.

Since my position here was temporary to begin with, I’m actually only leaving a semester early. Part of me wanted to try to stick it out, continue for that one last semester and leave on better terms. Until I sent that letter, I had that option. Now I don’t, and I feel a strange mix of emotions: Relief that I have finally made the decision, regret that it came to this, doubt about what comes next, and—above all—profound ambivalence.

Maybe it’s the very act of quitting—giving up, being a quitter—that feels bad. Even knowing that I need to get out of here, it hurts to have to be the one to quit.

Our society prizes grit and perseverance. Since I was a child I have been taught that these are virtues. And to some extent, they are; there certainly is such a thing as giving up too quickly.

But there is also such a thing as not knowing when to quit. Sometimes things really aren’t going according to plan, and you need to quit before you waste even more time and effort. And I think I am like Randall Munroe in this regard; I am more inclined to stay when I shouldn’t than to quit when I shouldn’t:

Sometimes quitting isn’t even as permanent as it is made out to be. In many cases, you can go back later and try again when you are better prepared.

In my case, I am unlikely to ever work at the University of Edinburgh again, but I haven’t yet given up on ever having a career in academia. Then again, I am by no means as certain as I once was that academia is the right path for me. I will definitely be searching for other options.

There is a reason we are so enthusiastically sold on the virtue of perseverance. Part of how our society sells the false narrative of meritocracy is by claiming that people who succeed did so because they tried harder or kept on trying.

This is not entirely false; all other things equal, you are more likely to succeed if you keep on trying. But in some ways that just makes it more seductive and insidious.

For the real reason most people hit home runs in life is that they were born on third base. The vast majority of success in life is determined by circumstances entirely outside individual control.

Even having the resources to keep trying is not guaranteed for everyone. I remember a great post on social media pointing out that entrepreneurship is like one of those carnival games:

Entrepreneurship is like one of those carnival games where you throw darts or something.

Middle class kids can afford one throw. Most miss. A few hit the target and get a small prize. A very few hit the center bullseye and get a bigger prize. Rags to riches! The American Dream lives on.

Rich kids can afford many throws. If they want to, they can try over and over and over again until they hit something and feel good about themselves. Some keep going until they hit the center bullseye, then they give speeches or write blog posts about ‘meritocracy’ and the salutary effects of hard work.

Poor kids aren’t visiting the carnival. They’re the ones working it.

The odds of succeeding on any given attempt are slim—but you can always pay for more tries. A middle-class person can afford to try once; most such attempts fail, but a few people succeed and then go on to talk about how their brilliant talent and hard work made the difference. A rich person can try as many times as they like, and when they finally succeed, they can credit their success to perseverance and a willingness to take risks. But the truth is, they didn’t have any exceptional reserves of grit or courage; they just had exceptional reserves of money.

In my case, the resource I was depleting was not money (if anything, I’m probably losing out financially by leaving early, though that very much depends on how the job market goes for me): It was something far more valuable. I was whittling away at my own mental health, depleting my energy, draining my motivation. The resource I was exhausting was my very soul.

I still have trouble articulating why it has been so painful for me to work here. It’s so hard to point to anything in particular.

The most obvious downsides were things I knew at the start: The position is temporary, the pay is mediocre, and I had to move across the Atlantic and live thousands of miles from home. And I had already heard plenty about the publish-or-perish system of research publication.

Other things seem like minor annoyances: They never did give me a good office (I have to share it with too many people, and there isn’t enough space, so in fact I rarely use it at all). They were supposed to assign me a faculty mentor and never did. They kept rearranging my class schedule and not telling me things until immediately beforehand.

I think what it really comes down to is I didn’t realize how much it would hurt. I knew that I was moving across the Atlantic—but I didn’t know how isolated and misunderstood I would feel when I did. I knew that publish-or-perish was a problem—but I didn’t know how agonizing it would be for me in particular. I knew I probably wouldn’t get very good mentorship from the other faculty—but I didn’t realize just how bad it would be, or how desperately I would need that support I didn’t get.

I either underestimated the severity of these problems, or overestimated my own resilience. I thought I knew what I was going into, and I thought I could take it. But I was wrong. I couldn’t take it. It was tearing me apart. My only answer was to leave.

So, leave I shall. I have now committed to doing so.

I don’t know what comes next. I don’t even know if I’ve made the right choice. Perhaps I’ll never truly know. But I made the choice, and now I have to live with it.

The rise and plateau of China’s economy

Sep 3 JDN 2460191

It looks like China’s era of extremely rapid economic growth may be coming to an end. Consumer confidence in China cratered this year (and, in typical authoritarian fashion, the agency responsible just quietly stopped publishing the data after that). Current forecasts have China’s economy growing only about 4-5% this year, which would be very impressive for a First World country—but far below the 6%, 7%, even 8% annual growth rates China had in recent years.

Some slowdown was quite frankly inevitable. A surprising number of people—particularly those in or from China—seem to think that China’s ultra-rapid growth was something special about China that could be expected to continue indefinitely.

China’s growth does look really impressive, in isolation:

But in fact this is a pattern we’ve seen several times now (admittedly mostly in Asia): A desperately poor Third World country finally figures out how to get its act together, and suddenly has extremely rapid growth for a while, until it manages to catch up and become a First World country.

It happened in South Korea:

It happened in Japan:

It happened in Taiwan:

It even seems to be happening in Botswana:

And this is a good thing! These are the great success stories of economic development. If we could somehow figure out how to do this all over the world, it might literally be the best thing that ever happened. (It would solve so many problems!)

Here’s a more direct comparison across all these countries (as well as the US), on a log scale:

From this you can pretty clearly see two things.

First, as countries get richer, their growth tends to slow down gradually. By the time Japan, Korea, and Taiwan reached the level that the US had been at back in 1950, their growth slowed to a crawl. But that was okay, because they had already become quite rich.
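
That tapering is easy to reproduce with a toy convergence model (entirely my own illustration; the parameters are made up, not fitted to any real data): give a country’s growth rate a bonus proportional to how far it sits below the rich-country frontier, and fast catch-up growth slows to a crawl on its own.

    frontier = 60_000.0       # assumed "First World" GDP per capita, held fixed
    gdp = 2_000.0             # the catching-up country's starting point

    for year in range(61):
        # 2% baseline growth, plus a catch-up bonus that shrinks with the gap
        growth = 0.02 + 0.08 * (1.0 - gdp / frontier)
        if year % 10 == 0:
            print(f"year {year:2d}: ${gdp:>9,.0f} per capita, growing {growth:.1%}")
        gdp *= 1.0 + growth

Run it and you get the familiar pattern: nearly 10% growth at the start, gently declining toward 2% as the country approaches the frontier; no crisis or mismanagement required.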

And second, China is nothing special: Yes, their growth rate is faster than that of the US, but that’s because the US is already so rich. They are following the same pattern as several other countries. In fact, they’ve fallen behind Botswana: They used to be much richer than Botswana, and are now slightly poorer.

There are many news articles discussing why China’s economy is slowing down, and some of them may even have some merit (they really do seem to have screwed up their COVID response, for instance, and their terrible housing bubble just burst). But the ultimate reason is simply that 7% annual economic growth is not sustainable. It will slow down. When and how remain in question—but it will happen.

Thus, I am not particularly worried about the fact that China’s growth has slowed down. Or at least, I wouldn’t be, if China were governed well and had prepared for this obvious eventuality the way that Korea and Japan did. But what does worry me is that they seem unprepared for this. Their authoritarian government seems to have depended upon sky-high economic growth to sustain support for their regime. The cracks are now forming in that dam, and something terrible could happen when it bursts.

Things may even be worse than they look, because we know that the Chinese government often distorts or omits statistics when they become inconvenient. That can only work for so long: Eventually the reality on the ground will override whatever lies the government is telling.

There are basically two ways this could go: They could reform their government to something closer to a liberal democracy, accept that growth will slow down and work toward more shared prosperity, and then take their place as a First World country like Japan did. Or they could try to cling to their existing regime, gripping ever tighter until it all slips out of their fingers in a potentially catastrophic collapse. Unfortunately, they seem to be opting for the latter.

I hope I’m wrong. I hope that China will find its way toward a future of freedom and prosperity.

But at this point, it doesn’t look terribly likely.