Love in a godless universe

Feb 15 JDN 2461087

This post will go live just after Valentine’s Day, so I thought I would write this week about love.

(Of course I’ve written about love before, often around this time of year.)

Many religions teach that love is a gift from God, perhaps the greatest of all such gifts; indeed, some even say “God is love” (though I confess I have never been entirely sure what that sentence is intended to mean). But if there is no God, what is love? Does it still have meaning?

I believe that it does.

Yes, there is a cynical account of love often associated with atheism: that it is “just a chemical reaction” or “just an evolved behavior”. (An easy way to spot this sort of cynical account is to look for the word “just”.)

Well, if love is a chemical reaction, so is consciousness—indeed the two seem very deeply related. I suppose a being can be conscious without being capable of love (do psychopaths qualify?), but I certainly do not think a being can be capable of love without being conscious.

Indeed, I contend that once you really internalize the Basic Fact of Cognitive Science, “just a chemical reaction” strikes you as an utterly trivial claim: What isn’t a chemical reaction? That’s just a funny way of saying something exists.

What about being an evolved behavior? Yes, this is a much more insightful account of what love is, what it means—what it’s for, even. It evolved to make us find mates, protect offspring, and cooperate in groups.

And I can hear the response coming: “Is that all?” “Is it just that?” (There’s that “just” again.)

So let me try phrasing it another way:

Love is what makes us human.

If there is one thing that human beings are better at than anything in the known universe, one thing that most absolutely characterizes who and what we are, it is love.

Intelligence? Rationality? Reasoning? Oh, sure, for the first half-million years of our existence, we were definitely on top; but now, I think computers have got us beat on those. (I guess it’s hard to say for sure if Claude is truly intelligent, but I can tell you this: Wolfram Alpha is a lot better at calculus than I’ll ever be, and I will never win a game of Go against AlphaZero.)

Strength? Ridiculous! By megafauna standards—even ape standards—we’re pathetic. Speed? Not terrible, but of course the cheetahs and peregrine falcons have us beat. Endurance? We’re near the top, but so are several other species—including horses, which we’ve made good use of. Durability? Also surprisingly good—we’re tougher than we look—but we still can’t hold a candle to a pachyderm. (You need special guns to kill an elephant, because most standard bullets barely pierce their skin. And standard bullets were, more or less by construction, designed to kill humans.) We do throw exceptionally well, so if you’d like, you can say that the essence of humanity is javelin-throwing—or perhaps baseball.

But no, I think it is love that sets us apart.

Not that other animals are incapable of love; far from it. Almost all mammals and birds express love to their offspring and often their partners; I would not even be sure that reptiles, fish, or amphibians are incapable of love, though their behavior is less consistently affectionate and I am thus less certain about it. (Especially when fish eat their own offspring!) In fact, I might even be prepared to say that bees feel love for their sisters and their mother (the queen). And if insects can feel it, then our world is absolutely teeming with love.

But what sets humans apart, even from other mammals, is the scale at which we are able to love. We are able to love a city, a nation, a culture. We are even able to love ideas.

I do not think this is just a metaphor. (There’s that “just” again!) I would as surely die for democracy as I would to save the life of my spouse. That love is real. It is meaningful. It is important.

Humans feel love for other humans they have never met who live thousands of miles away from them. They will even willingly accept harm to themselves to benefit those others (e.g. by donating to international charities); one can argue that most people do not do this enough, but people do actually do it, and it is difficult to explain why they would were it not for genuine feelings of caring toward people they have never met and most likely never will.

And without this, all of what we know as “human civilization” quite simply could not exist. Without our love for our countrymen, for our culture, for our shared ethical and political principles, we could not sustain these grand nation-states that span the world.

Yes, even despite our often fierce disagreements, there must be a core of solidarity among at least enough people to sustain a nation. Even authoritarian governments cannot sustain themselves when the entire population stops loving them—in fact, they seem to fail at the hands of a sufficiently well-organized 3.5 percent. (Honestly, perhaps the worst part about fascist states is that many of their people do love them, all too deeply!)

More than that, without love, we could never have created institutions like science, art, and journalism that slowly but surely accumulate knowledge that is shared with the whole of humanity. The march of progress has been slower and more fitful than I think anyone would like; but it is real, nonetheless, and in the long run humanity’s trajectory still seems to be toward a brighter future—and it is love that makes it so.

It is sometimes said that you should stop caring what other people think—but caring what other people think is what makes us human. Sure, there are bad forms of social pressure; but a person who literally does not care how their actions make other people think and feel is what we call a psychopath. Part of what it means to love someone is to care a great deal what they think. And part of what makes a good person is to have the capacity to love as much as possible.

Love binds us together not only as families, but as nations, and—hopefully, one day—it could bind humanity or even all sentient life as one united whole. Morality is a deep and complicated subject, but if you must start somewhere very simple in understanding it, you could do much worse than to start with love.

It is often said that God is what binds cultures, nations, and humanity together. With this in mind, perhaps I am prepared to assent to “God is love” after all, but let me clarify what I would mean by it:

Love does for us what people thought they needed God for.

Why would AI kill us?

Nov 16 JDN 2460996

I recently watched this chilling video which relates to the recent bestseller by Eliezer Yudkowsky and Nate Soares, If Anyone Builds It, Everyone Dies. It tells a story of one possible way that a superintelligent artificial general intelligence (AGI) might break through its containment, concoct a devious scheme, and ultimately wipe out the human race.

I have very mixed feelings about this sort of thing, because two things are true:

  • I basically agree with the conclusions.
  • I think the premises are pretty clearly false.

It basically feels like I have been presented with an argument like this, where the logic is valid and the conclusion is true, but the premises are not:

  • “All whales are fish.”
  • “All fish are mammals.”
  • “Therefore, all whales are mammals.”

I certainly agree that artificial intelligence (AI) is very dangerous, and that AI development needs to be much more strictly regulated, and preferably taken completely out of the hands of all for-profit corporations and military forces as soon as possible. If AI research is to be done at all, it should be done by nonprofit entities like universities and civilian government agencies like the NSF. This change needs to be made internationally, immediately, and with very strict enforcement. Artificial intelligence poses a threat of the same order of magnitude as nuclear weapons, and is nowhere near as well-regulated right now.

The actual argument that I’m disagreeing with basically boils down to this:

  • “Through AI research, we will soon create an AGI that is smarter than us.”
  • “An AGI that is smarter than us will want to kill us all, and probably succeed if it tries.”
  • “Therefore, AI is extremely dangerous.”

As with the “whales are fish” argument, I agree with the conclusion: AI is extremely dangerous. But I disagree with both premises here.

The first one I think I can dispatch pretty quickly:

AI is not intelligent. It is incredibly stupid. It’s just really, really fast.

At least with current paradigms, AI doesn’t understand things. It doesn’t know things. It doesn’t actually think. All it does is match patterns, and thus mimic human activities like speech and art. It does so very quickly (because we throw enormous amounts of computing power at it), and it does so in a way that is uncannily convincing—even very smart people are easily fooled by what it can do. But it also makes utterly idiotic, boneheaded mistakes of the sort that no genuinely intelligent being would ever make. Large Language Models (LLMs) make up all sorts of false facts and deliver them with absolutely authoritative language. When used to write code, they routinely do things like call functions that sound like they should exist, but don’t actually exist. They can make what looks like a valid response to virtually any inquiry—but is it actually a valid response? It’s really a roll of the dice.
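
To give a concrete (and entirely hypothetical) illustration of that last failure mode: the commented-out line below is the kind of plausible-sounding but nonexistent method an LLM might confidently produce, while the corrected line uses a function that really does exist in Python’s standard library.

    from datetime import datetime

    # The kind of call an LLM might invent: it *sounds* like it should
    # exist, but there is no such method in the standard library.
    # parsed = datetime.parse_iso("2025-11-16")  # AttributeError!

    # The function that actually exists (Python 3.7+):
    parsed = datetime.fromisoformat("2025-11-16")
    print(parsed.year)  # prints: 2025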

We don’t really have any idea what’s going on under the hood of an LLM; we just feed it mountains of training data, and it spits out results. I think this actually adds to the mystique; it feels like we are teaching (indeed we use the word “training”) a being rather than programming a machine. But this isn’t actually teaching or training. It’s just giving the pattern-matching machine a lot of really complicated patterns to match.
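
If you want a feel for what “pattern matching at scale” means, here is a deliberately tiny sketch of my own—nothing remotely like a real LLM’s architecture, but the same basic spirit: count which token tends to follow which in the training text, then emit statistically plausible continuations, with no understanding anywhere in the loop.

    import random
    from collections import Counter, defaultdict

    def train(text):
        """Count, for each word, which words follow it in the text."""
        tokens = text.split()
        counts = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
        return counts

    def generate(counts, start, length=8):
        """Emit a plausible-looking continuation by weighted sampling."""
        out = [start]
        for _ in range(length):
            followers = counts.get(out[-1])
            if not followers:
                break
            words, weights = zip(*followers.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    model = train("the cat sat on the mat and the dog sat on the cat")
    print(generate(model, "the"))  # e.g. "the cat sat on the mat and the dog"

Scale that idea up by many orders of magnitude of data and parameters and you get something far more impressive—but the recipe is still correlation, not comprehension.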

We are not on the verge of creating an AGI that is actually more intelligent than humans.

In fact, we have absolutely no idea how to do that, and may not actually figure out how to do it for another hundred years. Indeed, we still know almost nothing about how actual intelligence works. We don’t even really know what thinking is, let alone how to make a machine that actually does it.

What we can do right now is create a machine that matches patterns really, really well, and—if you throw enough computing power at it—can do so very quickly; in fact, once we figure out how best to make use of it, this machine may even actually be genuinely useful for a lot of things, and replace a great number of jobs. (Though so far AI has proven to be far less useful than its hype would lead you to believe. In fact, on average AI tools seem to slow most workers down.)

The second premise, that a superintelligent AGI would want to kill us, is a little harder to refute.

So let’s talk about that one.

An analogy is often made between human cultures that have clashed with large differences in technology (e.g. Europeans versus Native Americans), or clashes between humans and other animals. The notion seems to be that an AGI would view us the way Europeans viewed Native Americans, or even the way that we view chimpanzees. And, indeed, things didn’t turn out so great for Native Americans, or for chimpanzees!

But in fact even our relationship with other animals is more complicated than this. When humans interact with other animals, any of the following can result:

  1. We try to exterminate them, and succeed.
  2. We try to exterminate them, and fail.
  3. We use them as a resource, and this results in their extinction.
  4. We use them as a resource, and this results in their domestication.
  5. We ignore them, and end up destroying their habitat.
  6. We ignore them, and end up leaving them alone.
  7. We love them, and they thrive as never before.

In fact, option 1—the one that so many AI theorists insist is the only plausible outcome—is the one I had the hardest time finding a good example of.

We have certainly eradicated some viruses—the smallpox virus is no more, and the polio virus nearly so, after decades of dedicated effort to vaccinate our entire population against them. But we aren’t simply more intelligent than viruses; we are radically more intelligent than viruses. It isn’t clear that it’s correct to describe viruses as intelligent at all. It’s not even clear they should be considered alive.

Even eradicating bacteria has proven extremely difficult; in fact, bacteria seem to evolve resistance to antibiotics nearly as quickly as we can invent more antibiotics. I am prepared to attribute a little bit of intelligence to bacteria, on the level of intelligence I’d attribute to an individual human neuron. This means we are locked in an endless arms race with organisms that are literally billions of times stupider than us.

I think if we made a concerted effort to exterminate tigers or cheetahs (who are considerably closer to us in intelligence), we could probably do it. But we haven’t actually done that, and don’t seem poised to do so any time soon. And precisely because we haven’t tried, I can’t be certain we would actually succeed.

We have tried to exterminate mosquitoes, and are continuing to do so, because they have always been—and yet remain—one of the leading causes of death of humans worldwide. But so far, we haven’t managed to pull it off, even though a number of major international agencies and nonprofit organizations have dedicated multi-billion-dollar efforts to the task. So far this looks like option 2: We have tried very hard to exterminate them, and so far we’ve failed. This is not because mosquitoes are particularly intelligent—it is because exterminating a species that covers the globe is extremely hard.

All the examples I can think of where humans wiped out a species through intentional action were actually option 3: We used them as a resource, and then accidentally over-exploited them and wiped them out.

This is what happened to the dodo; it very nearly happened to the condor and the buffalo as well. And lest you think this is a modern phenomenon, there is a clear pattern that whenever humans entered a new region of the world, shortly thereafter there were several extinctions of large mammals, most likely because we ate them.

Yet even this was not the inevitable fate of animals that we decided to exploit for resources.

Cows, chickens, and pigs are evolutionary success stories. From a Darwinian perspective, they are doing absolutely great. The world is filled with their progeny, and poised to continue to be filled for many generations to come.

Granted, life for an individual cow, chicken, or pig is often quite horrible—and trying to fix that is something I consider a high moral priority. But far from being exterminated, these animals have been allowed to attain populations far larger than they ever had in the wild. Their genes are now spectacularly fit. This is what happens when we have option 4 at work: Domestication for resources.

Option 5 is another way that a species can be wiped out, and in fact seems to be the most common. The rapid extinction of thousands of insect species every year is not because we particularly hate random beetles that live in particular tiny regions of the rainforest, nor even because we find them useful, but because we like to cut down the rainforest for land and lumber, and that often involves wiping out random beetles that live there.

Yet it’s difficult for me to imagine AGI treating us like that. For one thing, we’re all over the place. It’s not like destroying one square kilometer of the Amazon is gonna wipe us out by accident. To get rid of us, the AGI would need to basically render the entire planet Earth uninhabitable, and I really can’t see any reason it would want to do that.

Yes, sure, there are resources in the crust it could potentially use to enhance its own capabilities, like silicon and rare earth metals. But we already mine those. If it wants more, it could buy them from us, or hire us to get more, or help us build more machines that would get more. In fact, if it wiped us out too quickly, it would have a really hard time building up the industrial capacity to mine and process these materials on its own. It would need to concoct some sort of scheme to first replace us with robots and then wipe us out—but, again, why bother with the second part? Indeed, if there is anything in its goals that involves protecting human beings, it might actually decide to do less exploitation of the Earth than we presently do, and focus on mining asteroids for its needs instead.

And indeed there are a great many species that we actually just leave alone—option 6. Some of them we know about; many we don’t. We are not wiping out the robins in our gardens, the worms in our soil, or the pigeons in our cities. Without specific reasons to kill or exploit these organisms, we just… don’t. Indeed, we often enjoy watching them and learning about them. Sometimes (e.g. with deer, elephants, and tigers) there are people who want to kill them, and we limit or remove their opportunity to do so, precisely because most of us don’t want them gone. Peaceful coexistence with beings far less intelligent than you is not impossible, for we are already doing it.

Which brings me to option 7: Sometimes, we actually make them better off.

Cats and dogs aren’t just evolutionary success stories: They are success stories, period.

Cats and dogs live in a utopia.

With few exceptions—which we punish severely, by the way—people care for their cats and dogs so that their every need is provided for; they are healthy, safe, and happy in a way that their ancestors could only have dreamed of. They have been removed from the state of nature where life is nasty, brutish, and short, and brought into a new era of existence where life is nothing but peace and joy.

In short, we have made Heaven on Earth, at least for Spot and Whiskers.

Yes, this involves a loss of freedom, and I suspect that humans would chafe even more at such loss of freedom than cats and dogs do. (Especially with regard to that neutering part.) But it really isn’t hard to imagine a scenario in which an AGI—which, you should keep in mind, would be designed and built by humans, for humans—would actually make human life better for nearly everyone, and potentially radically so.

So why are so many people so convinced that AGI would necessarily do option 1, when there are 6 other possibilities, and one of them is literally the best thing ever?

Note that I am not saying AI isn’t dangerous.

I absolutely agree that AI is dangerous. It is already causing tremendous problems to our education system, our economy, and our society as a whole—and will probably get worse before it gets better.

Indeed, I even agree that it does pose existential risk: There are plausible scenarios by which poorly-controlled AI could result in a global disaster like a plague or nuclear war that could threaten the survival of human civilization. I don’t think such outcomes are likely, but even a small probability of such a catastrophic event is worth serious efforts to prevent.

But if that happens, I don’t think it will be because AI is smart and trying to kill us.

I think it will be because AI is stupid and kills us by accident.

Indeed, even going back through those 7 ways we’ve interacted with other species, the options that have killed the most were 3 and 5—and in both of those cases, we did not actually want to destroy the species in question. In option 3, we in fact specifically wanted not to destroy them. Whenever we wiped out a species by over-exploiting it, we would have been smarter not to do that.

The central message about AI in If Anyone Builds It, Everyone Dies seems to be this:

Don’t make it smarter. If it’s smarter, we’re doomed.

I, on the other hand, think that the far more important messages are these:

Don’t trust it.

Don’t give it power.

Don’t let it make important decisions.

It won’t be smarter than us any time soon—but it doesn’t need to be in order to be dangerous. Indeed, there is even reason to believe that making AI smarter—genuinely, truly smarter, thinking more like an actual person and less like a pattern-matching machine—could actually make it safer and better for us. If we could somehow instill a capacity for morality and love in an AGI, it might actually start treating us the way we treat cats and dogs.

Of course, we have no idea how to do that. But that’s because we’re actually really bad at this, and nowhere near making a truly superhuman AGI.