Against Moral Relativism

Moral relativism is surprisingly common, especially among undergraduate students. There are also some university professors who espouse it, typically but not always from sociology, gender studies, or anthropology departments (examples include Marshall Sahlins, Stanley Fish, Susan Harding, Richard Rorty, Michael Fischer, and Alison Renteln). There is a fairly long tradition of moral relativism, from Edvard Westermarck in the 1930s through Melville Herskovits to, more recently, Francis Snare and David Wong in the 1980s. In 1947, the American Anthropological Association released a formal statement declaring that moral relativism was the official position of the anthropology community, though this has since been retracted.

All of this is very, very bad, because moral relativism is an incredibly naive moral philosophy and a dangerous one at that. Vitally important efforts to advance universal human rights are conceptually and sometimes even practically undermined by moral relativists. Indeed, look at that date again: 1947, two years after the end of World War II. The world’s civilized cultures had just finished the bloodiest conflict in history, including some ten million people murdered in cold blood for their religion and ethnicity, and the very survival of the human species hung in the balance with the advent of nuclear weapons—and the American Anthropological Association was insisting that morality is meaningless independent of cultural standards? Were they trying to offer an apologia for genocide?

What is relativism trying to say, anyway? Often the arguments get tied up in knots. Consider a particular example: infanticide. Moral relativists will sometimes argue that infanticide is wrong in the modern United States but permissible in ancient Inuit society. But is this itself an objectively true normative claim? If it is, then we are moral realists. Indeed, the dire circumstances of ancient Inuit society would surely justify certain life-and-death decisions we wouldn’t otherwise accept. (Compare “If we don’t strangle this baby, we may all starve to death” with “If we don’t strangle this baby, we will have to pay for diapers and baby food”.) Circumstances can change what is moral, and this includes the circumstances of our cultural and ecological surroundings. So there could well be an objective normative fact that infanticide is justified by the circumstances of ancient Inuit life. But if there are objective normative facts, this is moral realism. And if there are no objective normative facts, then all moral claims are basically meaningless. Someone could just as well claim that infanticide is good for modern Americans and bad for ancient Inuits, or that larceny is good for liberal-arts students but bad for engineering students.

If instead all we mean is that particular acts are perceived as wrong in some societies but not in others, this is a factual claim, and on certain issues the evidence bears it out. But without some additional normative claim about whose beliefs are right, it is morally meaningless. Indeed, the idea that whatever a society believes is thereby right is a particularly foolish form of moral realism, as it would justify any behavior—torture, genocide, slavery, rape—so long as society happens to practice it, and it would never justify any kind of change in any society, because the status quo is by definition right. It’s not even clear that this is logically coherent, because different cultures disagree, and within each culture, individuals disagree. To say that an action is “right for some, wrong for others” doesn’t solve the problem—because either it is objectively normatively right or it isn’t. If it is, then it’s right, and it can’t be wrong; and if it isn’t—if nothing is objectively normatively right—then relativism itself collapses as no more sound than any other belief.

In fact, the most difficult part of defending common-sense moral realism is explaining why it isn’t universally accepted. Why are there so many relativists? Why do so many anthropologists and even some philosophers scoff at the most fundamental beliefs that virtually everyone in the world has?

I should point out that it is indeed relativists, and not realists, who scoff at the most fundamental beliefs of other people. Relativists are fond of taking a stance of indignant superiority in which moral realism is just another form of “ethnocentrism” or “imperialism”. The most common battleground recently has been female circumcision, which is considered completely normal or even good in some African societies but is viewed with disgust and horror by most Western people. Other common choices include abortion, clothing (especially the Islamic burqa and hijab), male circumcision, and marriage; given the incredible diversity in human food, clothing, language, religion, behavior, and technology, there are surprisingly few moral issues on which different cultures disagree—but relativists like to milk them for all they’re worth!

But I dare you, anthropologists: Take a poll. Ask people which is more important to them: their belief that, say, female circumcision is immoral, or their belief that moral right and wrong are objective truths. Virtually anyone in any culture anywhere in the world would sooner admit they are wrong about some particular moral issue than assent to the claim that there is no such thing as a wrong moral belief. I for one would be more willing to abandon just about any belief I hold before I would abandon the belief that there are objective normative truths. I would sooner agree that the Earth is flat and 6,000 years old, that the sky is green, that I am a brain in a vat, that homosexuality is a crime, that women are inferior to men, or that the Holocaust was a good thing—than I would ever agree that there is no such thing as right or wrong. This is of course because once I agreed that there is no objective normative truth, I would be forced to abandon everything else as well—since without objective normativity there is no epistemic normativity, and hence no justice, no truth, no knowledge, no science. If there is nothing objective to say about how we ought to think and act, then we might as well say the Earth is flat and the sky is green.

So yes, when I encounter other cultures with other values and ideas, I am forced to deal with the fact that they and I disagree about many things, important things that people really should agree upon. We disagree about God, about the afterlife, about the nature of the soul; we disagree about many specific ethical norms, like those regarding racial equality, feminism, sexuality and vegetarianism. We may disagree about economics, politics, social justice, even family values. But as long as we are all humans, we probably agree about a lot of other important things, like “murder is wrong”, “stealing is bad”, and “the sky is blue”. And one thing we definitely do not disagree about—the one cornerstone upon which all future communication can rest—is that these things matter, that they really do describe actual features of an actual world that are worth knowing. If it turns out that I am wrong about these things, I would want to know! I’d much rather find out I’d been living the wrong way than continue living the same way while pretending that it doesn’t matter. I don’t think I am alone in this; indeed, I suspect that the reason people get so angry when I tell them that religion is untrue is precisely because they realize how important it is. One thing religious people never say is “Well, God is imaginary to you, perhaps; but to me God is real. Truth is relative.” I’ve heard atheists defend other people’s beliefs in such terms—but no one ever defends their own beliefs that way. No Evangelical Baptist thinks that Christianity is an arbitrary social construction. No Muslim thinks that Islam is just one equally-valid perspective among many. It is you, relativists, who deny people’s fundamental beliefs.

Yet the fact that relativists accuse realists of being chauvinistic hints at the deeper motivations of moral relativism. In a word: Guilt. Moral relativism is an outgrowth of the baggage of moral guilt and self-loathing that Western societies have built up over the centuries. Don’t get me wrong: Western cultures have done terrible things, many terrible things, all too recently. We needn’t go so far back as the Crusades or the ethnocidal “colonization” of the Americas; we need only look to the carpet-bombing of Dresden in 1945 or the defoliation of Vietnam in the 1960s, or even the torture program as recently as 2009. There is much evil that even the greatest nations of the world have to answer for. For all our high ideals, even America, the nation of “life, liberty, and the pursuit of happiness”, the culture of “liberty and justice for all”, has murdered thousands of innocent people—and by “murder” I mean murder, killing not merely by accident in the collateral damage of necessary war, but indeed in acts of intentional and selfish cruelty. Not all war is evil—but many wars are, and America has fought in some of them. No Communist radical could ever burn so much of the flag as the Pentagon itself has burned in acts of brutality.

Yet it is an absurd overreaction to suggest that there is nothing good about Western culture, nothing valuable about secularism, liberal democracy, market economics, or technological development. It is even more absurd to carry the suggestion further, to the idea that civilization was a mistake and we should all go back to our “natural” state as hunter-gatherers. Yet there are anthropologists working today who actually say such things. And then, as if we had not already traversed so far beyond the shores of rationality that we can no longer see the light of home, then relativists take it one step further and assert that any culture is as good as any other.

Think about what this would mean, if it were true. To say that all cultures are equal is to say that science, education, wealth, technology, medicine—all of these are worthless. It is to say that democracy is no better than tyranny, security is no better than civil war, secularism is no better than theocracy. It is to say that racism is as good as equality, sexism is as good as feminism, feudalism is as good as capitalism.

Many relativists seem worried that moral realism can be used by the powerful and privileged to oppress others—the cishet White males who rule the world (and let’s face it, cishet White males do, pretty much, rule the world!) can use the persuasive force of claiming objective moral truth in order to oppress women and minorities. Yet what is wrong with oppressing women and minorities, if there is no such thing as objective moral truth? Only under moral realism is oppression truly wrong.

Serenity and its limits

Feb 25 JDN 2460367

God grant me the serenity
to accept the things I cannot change;
courage to change the things I can;
and wisdom to know the difference.

Of course I don’t care for its religious message (and the full prayer is even more overtly religious), but the serenity prayer does capture an important insight into some of the most difficult parts of human existence.

Some things are as we would like them to be. They don’t require our intervention. (Though we may still stand to benefit from teaching ourselves to savor them and express gratitude for them.)

Other things are not as we would like them to be. The best option, of course, would be to change them.

But such change is often difficult, and sometimes practically impossible.

Sometimes we don’t even know whether change is possible—that’s where the wisdom to know the difference comes in. This is a wisdom we often lack, but it’s at least worth striving for.

If it is impossible to change what we want to change, then we are left with only one choice:

Do we accept it, or not?

The serenity prayer tells us to accept it. There is wisdom in this. Often it is the right answer. Some things about our lives are awful, but simply cannot be changed by any known means.

Death, for instance.

Someday, perhaps, we will finally conquer death, and humanity—or whatever humanity has become—will enter a new era of existence. But today is not that day. When grieving the loss of people we love, ultimately our only option is to accept that they are gone, and do our best to appreciate what they left behind, and the parts of them that are still within us. They would want us to carry on and live full lives, not forever be consumed by grief.

There are many other things we’d like to change, and maybe someday we will, but right now, we simply don’t know how: diseases we can’t treat, problems we can’t solve, questions we can’t answer. It’s often useful for someone to be trying to push those frontiers, but for any given person, the best option is often to find a way to accept things as they are.

But there are also things I cannot change and yet will not accept.

Most of these things fall into one broad category:

Injustice.

I can’t end war, or poverty, or sexism, or racism, or homophobia. Neither can you. Neither can any one person, or any hundred people, or any thousand people, or probably even any million people. (If all it took were a million dreams, we’d be there already. A billion might be enough—though it would depend which billion people shared the dream.)

I can’t. You can’t. But we can.

And here I mean “we” in a very broad sense indeed: Humanity as a collective whole. All of us together can end injustice—and indeed that is the only way it ever could be ended, by our collective action. Collective action is what causes injustice, and collective action is what can end it.

I therefore consider serenity in the face of injustice to be a very dangerous thing.

At times, and to certain degrees, that serenity may be necessary.

Those who are right now in the grips of injustice may need to accept it in order to survive. Reflecting on the horror of a concentration camp won’t get you out of it. Embracing the terror of war won’t save you from being bombed. Weeping about the sorrow of being homeless won’t get you off the streets.

Even for those of us who are less directly affected, it may sometimes be wisest to blunt our rage and sorrow at injustice—for otherwise they could be paralyzing, and if we are paralyzed, we can’t help anyone.

Sometimes we may even need to withdraw from the fight for justice, simply because we are too exhausted to continue. I read recently of a powerful analogy about this:

A choir can sing the same song forever, as long as its singers take turns resting.

If everyone tries to sing their very hardest all the time, the song must eventually end, as no one can sing forever. But if we rotate our efforts, so that at any given moment some are singing while others are resting, then we theoretically could sing for all time—as some of us die, others would be born to replace us in the song.

For a literal choir this seems absurd: Who even wants to sing the same song forever? (Lamb Chop, I guess.)

But the fight for justice probably is one we will need to continue forever, in different forms in different times and places. There may never be a perfectly just society, and even if there is, there will be no guarantee that it remains so without eternal vigilance. Yet the fight is worth it: in so many ways our society is already more just than it once was, and could be made more so in the future.

This fight will only continue if we don’t accept the way things are. Even when any one of us can’t change the world—even if we aren’t sure how many of us it would take to change the world—we still have to keep trying.

But as in the choir, each one of us also needs to rest.

We can’t all be fighting all the time as hard as we can. (I suppose if literally everyone did that, the fight for justice would be immediately and automatically won. But that’s never going to happen. There will always be opposition.)

And when it is time for each of us to rest, perhaps some serenity is what we need after all. Perhaps there is a balance to be found here: We do not accept things as they are, but we do accept that we cannot change them immediately or single-handedly. We accept that our own strength is limited and sometimes we must withdraw from the fight.

So yes, we need some serenity. But not too much.

Enough serenity to accept that we won’t win the fight immediately or by ourselves, and sometimes we’ll need to stop fighting and rest. But not so much serenity that we give up the fight altogether.

For there are many things that I can’t change—but we can.

Love is more than chemicals

Feb 18 JDN 2460360

One of the biggest problems with the rationalist community is an inability to express sincerity and reverence.

I get it: Religion is the world’s greatest source of sincerity and reverence, and religion is the most widespread and culturally important source of irrationality. So we declare ourselves enemies of religion, and also end up being enemies of sincerity and reverence.

But in doing so, we lose something very important. We cut ourselves off from some of the greatest sources of meaning and joy in human life.

In fact, we may even be undermining our own goals: If we don’t offer people secular, rationalist forms of reverence, they may find they need to turn back to religion in order to fill that niche.

One of the most pernicious forms of this anti-sincerity, anti-reverence attitude (I can’t just say ‘insincere’ or ‘irreverent’, as those have different meanings) is surely this one:

Love is just a chemical reaction.

(I thought it seemed particularly apt to focus on this one during the week of Valentine’s Day.)

On the most casual of searches I could find at least half a dozen pop-sci articles and a YouTube video propounding this notion (though I could also find a few articles trying to debunk the notion as well).

People who say this sort of thing seem to think that they are being wise and worldly while the rest of us are just being childish and naive. They think we are seeing something that isn’t there. In fact, they are being jaded and cynical. They are failing to see something that is there.

(Perhaps the most extreme form of this was from Rick & Morty; and while Rick as a character is clearly intended to be jaded and cynical, far too many people also see him as a role model.)

Part of the problem may also be a failure to truly internalize the Basic Fact of Cognitive Science:

You are your brain.

No, your consciousness is not an illusion. It’s not an “epiphenomenon” (whatever that is; I’ve never encountered one in real life). Your mind is not fake or imaginary. Your mind actually exists—and it is a product of your brain. Both brain and mind exist, and are in fact the same.

It’s so hard for people to understand this that some become dualists, denying the unity of the brain and the mind. That, at least, I can sympathize with, even though we have compelling evidence that it is wrong. But there’s another tack people sometimes take, eliminative materialism, where they try to deny that the mind exists at all. And that I truly do not understand. How can you think that nobody can think? Yet intelligent, respected philosophers have claimed to believe such things.

Love is one of the most important parts of our lives.

This may be more true of humans than of literally any other entity in the known universe.

The only serious competition comes from other mammals: They are really the only other beings we know of that are capable of love. And even they don’t seem to be as good at it as we are; they can love only those closest to them, while we can love entire nations and even abstract concepts.

And once you go beyond that, even to reptiles—let alone fish, or amphibians, or insects, or molluscs—it’s not clear that other animals are really capable of love at all. They seem to be capable of some forms of thought and feeling: They get hungry, or angry, or horny. But do they really love?

And even the barest emotional capacities of an insect are still categorically beyond what most of the universe is capable of feeling, which is to say: Nothing. The vast, vast majority of the universe feels neither love nor hate, neither joy nor pain.

Yet humans can love, and do love, and it is a large part of what gives our lives meaning.

I don’t just mean romantic love here, though I do think it’s worth noting that people who dismiss the reality of romantic love somehow seem reluctant to do the same for the love parents have for their children—even though it’s made of pretty much the same brain chemicals. Perhaps there is a limit to their cynicism.

Yes, love is made of chemicals—because everything is made of chemicals. We live in a material, chemical universe. Saying that love is made of chemicals is an almost completely vacuous statement; it’s basically tantamount to saying that love exists.

In other contexts, you already understand this.

“That’s not a bridge, it’s just a bunch of iron atoms!” rightfully strikes you as an absurd statement to make. Yes, the bridge is made of steel, and steel is mostly iron, and everything is made of atoms… but clearly there’s a difference between a random pile of iron and a bridge.

“That’s not a computer, it’s just a bunch of silicon atoms!” similarly registers as nonsense: Yes, it is indeed mostly made of silicon, but beach sand and quartz crystals are not computers.

It is in this same sense that joy is made of dopamine and love is made of chemical reactions. Yes, those are in fact the constituent parts—but things are more than just their parts.

I think that on some level, even most rationalists recognize that love is more than some arbitrary chemical reaction. I think “love is just chemicals” is mainly something people turn to for a couple of reasons: Sometimes, they are so insistent on rejecting everything that even resembles religious belief that they end up rejecting all meaning and value in human life. Other times, they have been so heartbroken that they try to convince themselves love isn’t real—to dull the pain. (But of course if it weren’t, there would be no pain to dull.)

But love is no more (or less) a chemical reaction than any other human experience: The very belief “love is just a chemical reaction” is, itself, made of chemical reactions.

Everything we do is made of chemical reactions, because we are made of chemical reactions.

Part of the problem here—and with the Basic Fact of Cognitive Science in general—is that we really have no idea how this works. For most of what we deal with in daily life, and even an impressive swath of the overall cosmos, we have a fairly good understanding of how things work. We know how cars drive, how wind blows, why rain falls; we even know how cats purr and why birds sing. But when it comes to understanding how the physical matter of the brain generates the subjective experiences of thought, feeling, and belief—of which love is made—we lack even the most basic understanding. The correlation between the two is far too strong to deny; but as far as causal mechanisms go, we know absolutely nothing. (Indeed, it’s worse than that: We can scarcely imagine a causal mechanism that would make any sense. We not only don’t know the answer; we don’t know what an answer would look like.)

So, no, I can’t tell you how we get from oxytocin and dopamine to love. I don’t know how that makes any sense. No one does. But we do know it’s true.

And just like everything else, love is more than the chemicals it’s made of.

Administering medicine to the dead

Jan 28 JDN 2460339

Here are a couple of pithy quotes that go around rationalist circles from time to time:

“To argue with a man who has renounced the use and authority of reason, […] is like administering medicine to the dead[…].”

― Thomas Paine, The American Crisis

“It is useless to attempt to reason a man out of a thing he was never reasoned into.”

― Jonathan Swift

You usually hear that abridged version, but Thomas Paine’s full quotation is actually rather interesting:

“To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.”

― Thomas Paine, The American Crisis

It is indeed quite ineffective to convert an atheist by scripture (though that doesn’t seem to stop them from trying). Yet this quotation seems to claim that the opposite should be equally ineffective: It should be impossible to convert a theist by reason.

Well, then, how else are we supposed to do it!?

Indeed, how did we become atheists in the first place!?

You were born an atheist? No, you were born having absolutely no opinion about God whatsoever. (You were born not realizing that objects don’t fade from existence when you stop seeing them! In a sense, we were all born believing ourselves to be God.)

Maybe you were raised by atheists, and religion never tempted you at all. Lucky you. I guess you didn’t have to be reasoned into atheism.

Well, most of us weren’t. Most of us were raised into religion, and told that it held all the most important truths of morality and the universe, and that believing anything else was horrible and evil and would result in us being punished eternally.

And yet, somehow, somewhere along the way, we realized that wasn’t true. And we were able to realize that because people made rational arguments.

Maybe we heard those arguments in person. Maybe we read them online. Maybe we read them in books that were written by people who died long before we were born. But somehow, somewhere people actually presented the evidence for atheism, and convinced us.

That is, they reasoned us out of something that we were not reasoned into.

I know it can happen. I have seen it happen. It has happened to me.

And it was one of the most important events in my entire life. More than almost anything else, it made me who I am today.

I’m scared that if you keep saying it’s impossible, people will stop trying to do it—and then it will stop happening to people like me.

So please, please stop telling people it’s impossible!

Quotes like these encourage you to simply write off entire swaths of humanity—most of humanity, in fact—judging them as worthless, insane, impossible to reach. When you should be reaching out and trying to convince people of the truth, quotes like these instead tell you to give up and consider anyone who doesn’t already agree with you as your enemy.

Indeed, it seems to me that the only logical conclusion of quotes like these is violence. If it’s impossible to reason with people who oppose us, then what choice do we have, but to fight them?

Violence is a weapon anyone can use.

Reason is the one weapon in the universe that works better when you’re right.

Reason is the sword that only the righteous can wield. Reason is the shield that only protects the truth. Reason is the only way we can ever be sure that the right people win—instead of just whoever happens to be strongest.

Yes, it’s true: reason isn’t always effective, and probably isn’t as effective as it should be. Convincing people to change their minds through rational argument is difficult and frustrating and often painful for both you and them—but it absolutely does happen, and our civilization would have long ago collapsed if it didn’t.

Even people who claim to have renounced all reason really haven’t: they still know 2+2=4 and they still look both ways when they cross the street. Whatever they’ve renounced, it isn’t reason; and maybe, with enough effort, we can help them see that—by reason, of course.

In fact, maybe even literally administering medicine to the dead isn’t such a terrible idea.

There are degrees of death, after all: Someone whose heart has stopped is in a different state than someone whose cerebral activity has ceased, and both of them clearly stand a better chance of being resuscitated than someone who has been vaporized by an explosion.

As our technology improves, more and more states that were previously considered irretrievably dead will instead be considered severe states of illness or injury from which it is possible to recover. We can now restart many stopped hearts; we are working on restarting stopped brains. (Of course we’ll probably never be able to restore someone who got vaporized—unless we figure out how to make backup copies of people?)

Most of the people who now live in the world’s hundreds of thousands of ICU beds would have been considered dead even just 100 years ago. But many of them will recover, because we didn’t give up on them.

So don’t give up on people with crazy beliefs either.

They may seem like they are too far gone, like nothing in the world could ever bring them back to the light of reason. But you don’t actually know that for sure, and the only way to find out is to try.

Of course, you won’t convince everyone of everything immediately. No matter how good your evidence is, that’s just not how this works. But you probably will convince someone of something eventually, and that is still well worthwhile.

You may not even see the effects yourself—people are often loath to admit when they’ve been persuaded. But others will see them. And you will see the effects of other people’s persuasion.

And in the end, reason is really all we have. It’s the only way to know that what we’re trying to make people believe is the truth.

Don’t give up on reason.

And don’t give up on other people, whatever they might believe.

Compassion and the cosmos

Dec 24 JDN 2460304

When this post goes live, it will be Christmas Eve, one of the most important holidays around the world.

Ostensibly it celebrates the birth of Jesus, but it doesn’t really.

For one thing, Jesus almost certainly wasn’t born in December. The date of Christmas was settled centuries after the fact (Rome was celebrating December 25 by the mid-fourth century, and the Council of Tours in AD 567 established the twelve-day season ending at Epiphany); it was set to coincide with existing celebrations—not only other Christian celebrations such as the Feast of the Epiphany, but also many non-Christian celebrations such as Yuletide, Saturnalia, and others around the Winter Solstice. (People today often say “Yuletide” when they actually mean Christmas, because the syncretization was so absolute.)

For another, an awful lot of the people celebrating Christmas don’t particularly care about Jesus. Countries like Sweden, Belgium, the UK, Australia, Norway, and Denmark are majority atheist but still very serious about Christmas. Maybe we should try to secularize and ecumenize the celebration and call it Solstice or something, but that’s a tall order. For now, it’s Christmas.

Compassion, love, and generosity are central themes of Christmas—and, by all accounts, Jesus did exemplify those traits. Christianity has a very complicated history, much of it quite dark; but this part of it at least seems worth preserving and even cherishing.

It is truly remarkable that we have compassion at all.

Most of this universe has no compassion. Many would like to believe otherwise, and they invent gods and other “higher beings” or attribute some sort of benevolent “universal consciousness” to the cosmos. (Really, most people copy the prior inventions of others.)

This is all wrong.

The universe is mostly empty, and what is here is mostly pitilessly indifferent.

The vast majority of the universe is composed of cold, dark, empty space—or perhaps of “dark energy”, a phenomenon we really don’t understand at all, which many physicists believe is actually a shockingly powerful form of energy contained within empty space.

Most of the rest is made up of “dark matter”, a substance we still don’t really understand either, but believe to be basically a dense sea of particles that have mass but not much else, which cluster around other mass by gravity but otherwise rarely interact with other matter or even with each other.

Most of the “ordinary matter”, or more properly baryonic matter (which we think of as ordinary, but which is actually by far the minority), is contained within stars and nebulae. It is mostly hydrogen and helium. Some of the other lighter elements—like lithium, sodium, carbon, oxygen, nitrogen, and all the way up to iron—can be made within ordinary stars, but still form a tiny fraction of the mass of the universe. Anything heavier than that—silver, gold, platinum, uranium—can only be made in exotic, catastrophic cosmic events, mainly supernovae, and as a result these elements are even rarer still.

Most of the universe is mind-bendingly cold: about 3 Kelvin, just barely above absolute zero.

Most of the baryonic matter is mind-bendingly hot, contained within stars that burn with nuclear fires at thousands or even millions of Kelvin.

From a cosmic perspective, we are bizarre.

We live at a weird intermediate temperature and pressure, where matter can take on such exotic states as liquid and solid, rather than the far more common gas and plasma. We do contain a lot of hydrogen—that, at least, is normal by the standards of baryonic matter. But we’re also made up of carbon, oxygen, and nitrogen, plus little bits of all sorts of heavier elements that can only be made in supernovae. What kind of nonsense lifeform depends upon something as exotic as iodine to survive?

Most of the universe does not care at all about you.

Most of the universe does not care about anything.

Stars don’t burn because they want to. They burn because that’s what happens when hydrogen slams into other hydrogen hard enough.

Planets don’t orbit because they want to. They orbit because if they didn’t, they’d fly away or crash into their suns—and those that did are long gone now.

Even most living things, which are already nearly as bizarre as we are, don’t actually care much.

Maybe there is a sense in which a C. elegans or an oak tree or even a cyanobacterium wants to live. It certainly seems to try to live; it has behaviors that seem purposeful, which evolved to promote its ability to survive and pass on offspring. Rocks don’t behave. Stars don’t seek. But living things—even tiny, microscopic living things—do.

But we are something very special indeed.

We are animals. Lifeforms with complex, integrated nervous systems—in a word, brains—that allow us to not simply live, but to feel. To hunger. To fear. To think. To choose.

Animals—and to the best of our knowledge, only animals, though I’m having some doubts about AI lately—are capable of making choices and experiencing pleasure and pain, and thereby becoming something more than living beings: moral beings.

Because we alone can choose, we alone have the duty to choose rightly.

Because we alone can be hurt, we alone have the right to demand not to be.

Humans are special even among animals. We are not just animals but chordates; not just chordates but mammals; not just mammals but primates. And even then, we are not just any primates: we’re special even by those very high standards.

When you count up all the ways that we are strange compared to the rest of the universe, it seems incredibly unlikely that beings like us would come into existence at all.

Yet here we are. And however improbable it may have been for us to emerge as intelligent beings, we had to do so in order to wonder how improbable it was—and so in some sense we shouldn’t be too surprised.

It is a mistake to say that we are “more evolved” than any other lifeform; turtles and cockroaches had just as much time to evolve as we did, and if anything their relative stasis for hundreds of millions of years suggests a more perfected design: “If it ain’t broke, don’t fix it.”

But we are different from other lifeforms in a very profound way. And I dare say, we are better.

All animals feel pleasure, pain, and hunger. (Some believe that even some plants and microscopic lifeforms may too.) Pain when something damages you; hunger when you need something; pleasure when you get what you needed.

But somewhere along the way, new emotions were added: Fear. Lust. Anger. Sadness. Disgust. Pride. To the best of our knowledge, these are largely chordate emotions, often believed to have emerged around the same time as reptiles. (Does this mean that cephalopods never get angry? Or did they evolve anger independently? Surely worms don’t get angry, right? Our common ancestor with cephalopods was probably something like a worm, perhaps a nematode. Does C. elegans get angry?)

And then, much later, still newer emotions evolved. These ones seem to be largely limited to mammals. They emerged from the need for mothers to care for their few and helpless young. (Consider how a bear or a cat fiercely protects her babies from harm—versus how a turtle leaves her many, many offspring to fend for themselves.)

One emotion formed the core of this constellation:

Love.

Caring, trust, affection, and compassion—and also rejection, betrayal, hatred, and bigotry—all came from this one fundamental capacity to love. To care about the well-being of others as well as our own. To see our purpose in the world as extending beyond the borders of our own bodies.

This is what makes humans different, most of all. We are the beings most capable of love.

We are of course by no means perfect at it. Some would say that we are not even very good at loving.

Certainly there are some humans, such as psychopaths, who seem virtually incapable of love. But they are rare.

We often wish that we were better at love. We wish that there were more compassion in the world, and fear that humanity will destroy itself because we cannot find enough compassion to compensate for our increasing destructive power.

Yet if we are bad at love, compared to what?

Compared to the unthinking emptiness of space, the hellish nuclear fires of stars, or even the pitiless selfishness of a worm or a turtle, we are absolute paragons of love.

We somehow find a way to love millions of others who we have never even met—maybe just a tiny bit, and maybe even in a way that becomes harmful, as solidarity fades into nationalism fades into bigotry—but we do find a way. Through institutions of culture and government, we find a way to trust and cooperate on a scale that would be utterly unfathomable even to the most wise and open-minded bonobo, let alone a nematode.

There are no other experts on compassion here. It’s just us.

Maybe that’s why so many people long for the existence of gods. They feel as ignorant as children, and crave the knowledge and support of a wise adult. But there aren’t any. We’re the adults. For all the vast expanses of what we do not know, we actually know more than anyone else. And most of the universe doesn’t know a thing.

If we are not as good at loving as we’d like, the answer is for us to learn to get better at it.

And we know that we can get better at it, because we have. Humanity is more peaceful and cooperative now than we have ever been in our history. The process is slow, and sometimes there is backsliding, but overall, life is getting better for most people in most of the world most of the time.

As a species, as a civilization, we are slowly learning how to love ourselves, one another, and the rest of the world around us.

No one else will learn to love for us. We must do it ourselves.

But we can.

And I believe we will.

Homeschooling and too much freedom

Nov 19 JDN 2460268

Allowing families to homeschool their children increases freedom, quite directly and obviously. This is a large part of the political argument in favor of homeschooling, and likely a large part of why homeschooling is so popular within the United States in particular.

In the US, about 3% of school-age children are homeschooled. This seems like a small proportion, but it’s enough to have some cultural and political impact, and it’s considerably larger than the proportion homeschooled in most other countries.

Moreover, homeschooling rates greatly increased as a result of COVID, and it’s anyone’s guess when, or even whether, they will go back down. I certainly hope they do; here’s why.

A lot of criticism of homeschooling involves academic outcomes: Are the students learning enough English and math? This concern is largely unfounded; statistically, academic outcomes of homeschooled students don’t seem to be any worse than those of public school students, and by some measures they are actually better. Nor is there clear evidence that homeschooled kids are any less developed socially; most of them get that social development through other networks, such as churches and sports teams.

No, my concern is not that they won’t learn enough English and math. It’s that they won’t learn enough history and science. Specifically, the parts of history and science that contradict the religious beliefs of the parents who are homeschooling them.

One way to study this would be to compare test scores by homeschooled kids on, say, algebra and chemistry (which do not directly threaten Christian evangelical beliefs) to those on, say, biology and neuroscience (which absolutely, fundamentally do). Lying somewhere in between are physics (F=ma is no threat to Christianity, but the Big Bang is) and history (Christian nationalists happily teach that Thomas Jefferson wrote the Declaration of Independence, but often omit that he owned slaves). If homeschooled kids are indeed indoctrinated, we should see particular lacunae in their knowledge where the facts contradict their ideology. Unfortunately, I wasn’t able to find any such studies.

But even if their academic outcomes are worse in certain domains, so what? What about the freedom of parents to educate their children how they choose? What about the freedom of children to not be subjected to the pain of public school?

It will come as no surprise to most of you that I did well in school. In almost everything, really: math, science, philosophy, English, and Latin were my best subjects, and I earned basically flawless grades in them. But I also did very well in creative writing, history, art, and theater, and fairly well in music. My only poor performance was in gym class (as I’ve written about before).

It may come as some surprise when I tell you that I did not particularly enjoy school. In elementary school I had few friends—and one of my closest ended up being abusive to me. Middle school I mostly enjoyed—despite the onset of my migraines. High school started out utterly miserable, though it got a little better—a little—once I transferred to Community High School. Throughout high school, I was lonely, stressed, anxious, and depressed most of the time, and had migraine headaches of one intensity or another nearly every single day. (Sadly, most of that is true now as well; but I at least had a period of college and grad school where it wasn’t, and hopefully I will again once this job is behind me.)

I was good at school. I enjoyed much of the content of school. But I did not particularly enjoy school.

Thus, I can quite well understand why it is tempting to say that kids should be allowed to be schooled at home, if that is what they and their parents want. (Of course, a problem already arises there: What if child and parent disagree? Whose choice actually matters? In practice, it’s usually the parent’s.)

On the whole, public school is a fairly toxic social environment: Cliquish, hyper-competitive, stressful, often full of conflict between genders, races, classes, sexual orientations, and of course the school-specific one, nerds versus jocks (I’d give you two guesses which team I was on, but you’re only gonna need one). Public school sucks.

Then again, many of these problems and conflicts persist into adult life—so perhaps it’s better preparation than we care to admit. Maybe it’s better to be exposed to bias and conflict so that you can learn to cope with them, rather than sheltered from them.

But there is a more important reason why we may need public school, why it may even be worth coercing parents and children into that system against their will.

Public school forces you to interact with people different from you.

At a public school, you cannot avoid being thrown in the same classroom with students of other races, classes, and religions. This is of course more true if your school system is diverse rather than segregated—and all the more reason that the persistent segregation of many of our schools is horrific—but it’s still somewhat true even in a relatively homogeneous school. I was fortunate enough to go to a public school in Ann Arbor, where there was really quite substantial diversity. But even where there is less diversity, there is still usually some diversity—if not race, then class, or religion.

Certainly any public school has more diversity than homeschooling, where parents have the power to specifically choose precisely which other families their children will interact with, and will almost always choose those of the same race, class, and—above all—religious denomination as themselves.

The result is that homeschooled children often grow up indoctrinated into a dogmatic, narrow-minded worldview, convinced that the particular beliefs they were raised in are the objectively, absolutely correct ones and all others are at best mistaken and at worst outright evil. They are trained to reject conflict and dissent, to not even expose themselves to other people’s ideas, because those are seen as dangerous—corrupting.

Moreover, for most homeschooling parents—not all, but most—this is clearly the express intent. They want to raise their children in a particular set of beliefs. They want to inoculate them against the corrupting influences of other ideas. They are not afraid of their kids being bullied in school; they are afraid of them reading books that contradict the Bible.

This article has the headline “Homeschooled children do not grow up to be more religious”, yet its core finding is exactly the opposite of that:

The Cardus Survey found that homeschooled young adults were not noticeably different in their religious lives from their peers who had attended private religious schools, though they were more religious than peers who had attended public or Catholic schools.

No more religious than private religious schools!? That’s still very religious. No, the fair comparison is to public schools, which clearly show lower rates of religiosity among the same demographics. (The interesting case is Catholic schools; they, it turns out, also churn out atheists with remarkable efficiency; I credit the Jesuit norm of top-quality liberal education.) This is clear evidence that religious homeschooling does make children more religious, and so does most private religious education.

Another finding in that same article sounds good, but is misleading:

Indiana University professor Robert Kunzman, in his careful study of six homeschooling families, found that, at least for his sample, homeschooled children tended to become more tolerant and less dogmatic than their parents as they grew up.

This is probably just regression to the mean. The parents who give their kids religious homeschooling are largely the most dogmatic and intolerant, so we would expect by sheer chance that their kids would be less dogmatic and intolerant—but probably still pretty dogmatic and intolerant. (Also, do I have to point out that n=6 barely even constitutes a study!?) This is like the fact that the sons of NBA players are usually shorter than their fathers—but still quite tall.
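
Regression to the mean is easy to demonstrate with a quick simulation. The sketch below is purely illustrative: it assumes a hypothetical “dogmatism” score that is standard-normal in the population, and a made-up parent-child correlation of 0.5; both numbers are assumptions, not estimates from any real data.

```python
import random

random.seed(42)

r = 0.5        # assumed parent-child correlation (illustrative, not empirical)
N = 100_000    # population size

# Parent "dogmatism" scores: standard normal (mean 0, sd 1).
parents = [random.gauss(0, 1) for _ in range(N)]

# Child score = r * parent + independent noise, scaled so children
# are also standard normal in the overall population.
children = [r * p + random.gauss(0, (1 - r**2) ** 0.5) for p in parents]

# Select only the most dogmatic parents: the top 5% of the distribution.
cutoff = sorted(parents)[int(0.95 * N)]
selected = [(p, c) for p, c in zip(parents, children) if p >= cutoff]

mean_parent = sum(p for p, _ in selected) / len(selected)
mean_child = sum(c for _, c in selected) / len(selected)

print(f"mean score of selected parents: {mean_parent:.2f}")
print(f"mean score of their children:   {mean_child:.2f}")
```

The children of the selected top-5% parents come out markedly less extreme than their parents (about r times as far above the mean, in this model), yet still well above the population average—just like the NBA fathers and their sons.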

Homeschooling is directly linked to a lot of terrible things: Young-Earth Creationism, Christian nationalism, homophobia, and shockingly widespread child abuse.

While most right-wing families don’t homeschool, most homeschooling families are right-wing: Between 60% and 70% of homeschooling families vote Republican in most elections. More left-wing voters are homeschooling now with the recent COVID-driven surge in homeschooling, but the right-wing still retains a strong majority for now.

Of course, there are a growing number of left-wing and non-religious families who use homeschooling. Does this mean that the threat of indoctrination is gone? I don’t think so. I once knew someone who was homeschooled by a left-wing non-religious family and still ended up adopting an extremely narrow-minded extremist worldview—simply a left-wing non-religious one. In some sense a left-wing non-religious narrow-minded extremism is better than a right-wing religious narrow-minded extremism, but it’s still narrow-minded extremism. Whatever such a worldview gets right is mainly by the Stopped Clock Principle. It still misses many important nuances, and is still closed to new ideas and new evidence.

Of course this is not a necessary feature of homeschooling. One absolutely could homeschool children into a worldview that is open-minded and tolerant. Indeed, I’m sure some parents do. But statistics suggest that most do not, and this makes sense: When parents want to indoctrinate their children into narrow-minded worldviews, homeschooling allows them to do that far more effectively than if they had sent their children to public school. Whereas if you want to teach your kids open-mindedness and tolerance, exposing them to a diverse environment makes that easier, not harder.

In other words, the problem is that homeschooling gives parents too much control; in a very real sense, this is too much freedom.

When can freedom be too much? It seems absurd at first. But there are at least two cases where it makes sense to say that someone has too much freedom.

The first is paternalism: Sometimes people really don’t know what’s best for them, and giving them more freedom will just allow them to hurt themselves. This notion is easily abused—it has been abused many times, for example against disabled people and colonized populations. For that reason, we are right to be very skeptical of it when applied to adults of sound mind. But what about children? That’s who we are talking about after all. Surely it’s not absurd to suggest that children don’t always know what’s best for them.

The second is the paradox of tolerance: The freedom to take away other people’s freedom is not a freedom we can afford to protect. And homeschooling that indoctrinates children into narrow-minded worldviews is a threat to other people’s freedom—not only those who will be oppressed by a new generation of extremists, but also the children themselves who are never granted the chance to find their own way.

Both reasons apply in this case: paternalism for the children, the paradox of tolerance for the parents. We have a civic responsibility to ensure that children grow up in a rich and diverse environment, so that they learn open-mindedness and tolerance. This is important enough that we should be willing to impose constraints on freedom in order to achieve it. Democracy cannot survive a citizenry who are molded from birth into narrow-minded extremists. There are parents who want to mold their children that way—and we cannot afford to let them.

From where I’m sitting, that means we need to ban homeschooling, or at least very strictly regulate it.

Against deontology

Aug 6 JDN 2460163

In last week’s post I argued against average utilitarianism, basically on the grounds that it devalues the lives of anyone who isn’t of above average happiness. But you might be tempted to take these as arguments against utilitarianism in general, and that is not my intention.

In fact I believe that utilitarianism is basically correct, though it needs some particular nuances that are often lost in various presentations of it.

Its leading rival is deontology, which is really a broad class of moral theories, some a lot better than others.

What characterizes deontology as a class is that it grounds morality in rules rather than consequences: an act is simply right or wrong in itself, regardless of its consequences—or even its expected consequences.

There are certain aspects of this which are quite appealing: In fact, I do think that rules have an important role to play in ethics, and as such I am basically a rule utilitarian. Actually trying to foresee all possible consequences of every action we might take is an absurd demand far beyond the capacity of us mere mortals, and so in practice we have no choice but to develop heuristic rules that can guide us.

But deontology says that these are no mere heuristics: They are in fact the core of ethics itself. Under deontology, wrong actions are wrong even if you know for certain that their consequences will be good.

Kantian ethics is one of the most well-developed deontological theories, and I am quite sympathetic to it. In fact, I used to consider myself one of its adherents, but I now consider that view a mistaken one.

Let’s first dispense with the views of Kant himself, which are obviously wrong. Kant explicitly said that lying is always, always, always wrong, and even when presented with obvious examples where you could tell a small lie to someone obviously evil in order to save many innocent lives, he stuck to his guns and insisted that lying is always wrong.

This is a bit anachronistic, but I think this example will be more vivid for modern readers, and it absolutely is consistent with what Kant wrote about the actual scenarios he was presented with:

You are living in Germany in 1945. You have sheltered a family of Jews in your attic to keep them safe from the Holocaust. Nazi soldiers have arrived at your door, and ask you: “Are there any Jews in this house?” Do you tell the truth?

I think it’s utterly, agonizingly obvious that you should not tell the truth. Exactly what you should do is less obvious: Do you simply lie and hope they buy it? Do you devise a clever ruse? Do you try to distract them in some way? Do you send them on a wild goose chase elsewhere? If you could overpower them and kill them, should you? What if you aren’t sure you can; should you still try? But one thing is clear: You don’t hand over the Jewish family to the Nazis.

Yet when presented with similar examples, Kant insisted that lying is always wrong. He had a theory to back it up, his Categorical Imperative: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

And so his argument goes: Since it would be obviously incoherent to say that everyone should always lie, lying is wrong, and you’re never allowed to do it. He actually bites a bullet the size of a howitzer round.

Modern deontologists—even those who consider themselves Kantians—are more sophisticated than this. They realize that you could make a rule like “Never lie, except to save the life of an innocent person” or “Never lie, except to stop a great evil.” Either of these would be quite adequate to solve this particular dilemma. And it’s absolutely possible to will that these be universal laws, in the sense that they would apply to anyone. ‘Universal’ doesn’t have to mean ‘applies equally to all possible circumstances’.

There are also a couple of things that deontology does very well, which are worth preserving. One of them is supererogation: The idea that some acts are above and beyond the call of duty, that something can be good without being obligatory.

This is something most forms of utilitarianism are notoriously bad at. They show us a spectrum of worlds from the best to the worst, and tell us to make things better. But there’s nowhere we are allowed to stop, unless we somehow manage to make it all the way to the best possible world.

I find this kind of moral demand very tempting, which often leads me to feel a tremendous burden of guilt. I always know that I could be doing more than I do. I’ve written several posts about this in the past, in the hopes of fighting off this temptation in myself and others. (I am not entirely sure how well I’ve succeeded.)

Deontology does much better in this regard: Here are some rules. Follow them.

Many of the rules are in fact very good rules that most people successfully follow their entire lives: Don’t murder. Don’t rape. Don’t commit robbery. Don’t rule a nation tyrannically. Don’t commit war crimes.

Others are oft more honored in the breach than the observance: Don’t lie. Don’t be rude. Don’t be selfish. Be brave. Be generous. But a well-developed deontology can even deal with this, by saying that some rules are more important than others, and thus some sins are more forgivable than others.

Whereas a utilitarian—at least, anything but a very sophisticated utilitarian—can only say who is better and who is worse, a deontologist can say who is good enough: who has successfully discharged their moral obligations and is otherwise free to live their life as they choose. Deontology absolves us of guilt in a way that utilitarianism is very bad at.

Another good deontological principle is double-effect: Basically this says that if you are doing something that will have bad outcomes as well as good ones, it matters whether you intend the bad one and what you do to try to mitigate it. There does seem to be a morally relevant difference between a bombing that kills civilians accidentally as part of an attack on a legitimate military target, and a so-called “strategic bombing” that directly targets civilians in order to maximize casualties—even if both occur as part of a justified war. (Both happen a lot—and it may even be the case that some of the latter were justified. The Tokyo firebombing and atomic bombs on Hiroshima and Nagasaki were very much in the latter category.)

There are ways to capture this principle (or something very much like it) in a utilitarian framework, but like supererogation, it requires a sophisticated, nuanced approach that most utilitarians don’t seem willing or able to take.

Now that I’ve said what’s good about it, let’s talk about what’s really wrong with deontology.

Above all: How do we choose the rules?

Kant seemed to think that mere logical coherence would yield a sufficiently detailed—perhaps even unique—set of rules for all rational beings in the universe to follow. This is obviously wrong, and seems to be simply a failure of his imagination. There is literally a countably infinite space of possible ethical rules that are logically consistent. (Almost all of them are utter nonsense—“Never eat cheese on Thursdays”, “Armadillos should rule the world”, and so on—but they are still logically consistent.)

If you require the rules to be simple and general enough to always apply to everyone everywhere, you can narrow the space substantially; but this is also how you get obviously wrong rules like “Never lie.”

In practice, there are two ways we actually seem to do this: Tradition and consequences.

Let’s start with tradition. (It came first historically, after all.) You can absolutely make a set of rules based on whatever your culture has handed down to you since time immemorial. You can even write them down in a book that you declare to be the absolute infallible truth of the universe—and, amazingly enough, you can get millions of people to actually buy that.

The result, of course, is what we call religion. Some of its rules are good: Thou shalt not kill. Some are flawed but reasonable: Thou shalt not steal. Thou shalt not commit adultery. Some are nonsense: Thou shalt not covet thy neighbor’s goods.

And some, well… some rules of tradition are the source of many of the world’s most horrific human rights violations. Thou shalt not suffer a witch to live (Exodus 22:18). If a man also lie with mankind, as he lieth with a woman, both of them have committed an abomination: they shall surely be put to death; their blood shall be upon them (Leviticus 20:13).

Tradition-based deontology has in fact been the major obstacle to moral progress throughout history. It is not a coincidence that utilitarianism began to become popular right before the abolition of slavery, and there is an even more direct causal link between utilitarianism and the advancement of rights for women and LGBT people. When the sole argument you can make for moral rules is that they are ancient (or allegedly handed down by a perfect being), you can make rules that oppress anyone you want. But when rules have to be based on bringing happiness or preventing suffering, whole classes of oppression suddenly become untenable. “God said so” can justify anything—but “Who does it hurt?” can cut through.

It is an oversimplification, but not a terribly large one, to say that the arc of moral history has been drawn by utilitarians dragging deontologists kicking and screaming into a better future.

There is a better way to make rules, and that is based on consequences. And, in practice, most people who call themselves deontologists these days do this. They develop a system of moral rules based on what would be expected to lead to the overall best outcomes.

I like this approach. In fact, I agree with this approach. But it basically amounts to abandoning deontology and surrendering to utilitarianism.

Once you admit that the fundamental justification for all moral rules is the promotion of happiness and the prevention of suffering, you are basically a rule utilitarian. Rules then become heuristics for promoting happiness, not the fundamental source of morality itself.

I suppose it could be argued that this is not a surrender but a synthesis: We are looking for the best aspects of deontology and utilitarianism. That makes a lot of sense. But I keep coming back to the dark history of traditional rules, the fact that deontologists have basically been holding back human civilization since time immemorial. If deontology wants to be taken seriously now, it needs to prove that it has broken with that dark tradition. And frankly the easiest answer to me seems to be to just give up on deontology.

Why we need critical thinking

Jul 9 JDN 2460135

I can’t find it at the moment, but a while ago I read a surprisingly compelling post on social media (I think it was Facebook, but it could also have been Reddit) questioning the common notion that we should be teaching more critical thinking in school.

I strongly believe that we should in fact be teaching more critical thinking in school—actually I think we should replace large chunks of the current math curriculum with a combination of statistics, economics and critical thinking—but it made me realize that we haven’t done enough to defend why that is something worth doing. It’s just become a sort of automatic talking point, like, “obviously you would want more critical thinking, why are you even asking?”

So here’s a brief attempt to explain why critical thinking is something that every citizen ought to be good at, and hence why it’s worthwhile to teach it in primary and secondary school.

Critical thinking, above all, allows you to detect lies. It teaches you to look past the surface of what other people are saying and determine whether what they are saying is actually true.

And our world is absolutely full of lies.

We are constantly lied to by advertising. We are constantly lied to by spam emails and scam calls. Day in and day out, people with big smiles promise us the world, if only we will send them five easy payments of $19.99.

We are constantly lied to by politicians. We are constantly lied to by religious leaders (it’s pretty much their whole job actually).

We are often lied to by newspapers—sometimes directly and explicitly, as in fake news, but more often in subtler ways. Most news articles in the mainstream press are true in the explicit facts they state, but are missing important context; and nearly all of them focus on the wrong things—exciting, sensational, rare events rather than what’s actually important and likely to affect your life. If newspapers were an accurate reflection of genuine risk, they’d have more articles on suicide than homicide, and something like one million articles on climate change for every one on some freak accident (like that submarine full of billionaires).

We are even lied to by press releases on science, which likewise focus on new, exciting, sensational findings rather than supported, established, documented knowledge. And don’t tell me everyone already knows it; just stating basic facts about almost any scientific field will shock and impress most of the audience, because they clearly didn’t learn this stuff in school (or, what amounts to the same thing, don’t remember it). This isn’t just true of quantum physics; it’s even true of economics—which directly affects people’s lives.

Critical thinking is how you can tell when a politician has distorted the views of his opponent and you need to spend more time listening to that opponent speak. Critical thinking could probably have saved us from electing Donald Trump President.

Critical thinking is how you tell that a supplement which “has not been evaluated by the FDA” (which is to say, nearly all of them) probably contains something mostly harmless that maybe would benefit you if you were deficient in it, but for most people really won’t matter—and definitely isn’t something you can substitute for medical treatment.

Critical thinking is how you recognize that much of the history you were taught as a child was a sanitized, simplified, nationalist version of what actually happened. But it’s also how you recognize that simply inverting it all and becoming the sort of anti-nationalist who hates your own country is at least as ridiculous. Thomas Jefferson was both a pioneer of democracy and a slaveholder. He was both a hero and a villain. The world is complicated and messy—and nothing will let you see that faster than critical thinking.

Critical thinking tells you that whenever a new “financial innovation” appears—like mortgage-backed securities or cryptocurrency—it will probably make obscene amounts of money for a handful of insiders, but will otherwise be worthless if not disastrous to everyone else. (And maybe if enough people had good critical thinking skills, we could stop the next “innovation” from getting so far!)

More widespread critical thinking could even improve our job market, as interviewers would no longer be taken in by the candidates who are best at overselling themselves, and would instead pay more attention to the more-qualified candidates who are quiet and honest.

In short, critical thinking constitutes a large portion of what is ordinarily called common sense or wisdom; some of that simply comes from life experience, but a great deal of it is actually a learnable skill set.

Of course, even if it can be learned, that still raises the question of how it can be taught. I don’t think we have a sound curriculum for teaching critical thinking, and in my more cynical moments I wonder if many of the powers that be like it that way. Knowing that many—not all, but many—politicians make their careers primarily from deceiving the public, it’s not so hard to see why those same politicians wouldn’t want to support teaching critical thinking in public schools. And it’s almost funny to me watching evangelical Christians try to justify why critical thinking is dangerous—they come so close to admitting that their entire worldview is totally unfounded in logic or evidence.

But at least I hope I’ve convinced you that it is something worthwhile to know, and that the world would be better off if we could teach it to more people.

We ignorant, incompetent gods

May 21 JDN 2460086

A review of Homo Deus

The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.

E.O. Wilson

Homo Deus is a very good read—and despite its length, a quick one; as you can see, I read it cover to cover in a week. Yuval Noah Harari’s central point is surely correct: Our technology is reaching a threshold where it grants us unprecedented power and forces us to ask what it means to be human.

Biotechnology and artificial intelligence are now advancing so rapidly that advancements in other domains, such as aerospace and nuclear energy, seem positively mundane. Who cares about making flight or electricity a bit cleaner when we may soon have the power to modify ourselves, or be replaced entirely by machines?

Indeed, we already have technology that would have seemed to ancient people like the powers of gods. We can fly; we can witness or even control events thousands of miles away; we can destroy mountains; we can wipe out entire armies in an instant; we can even travel into outer space.

Harari rightly warns us that our not-so-distant descendants are likely to have powers that we would see as godlike: Immortality, superior intelligence, self-modification, the power to create life.

And while it is scary to think about what they might do with that power if they think the way we do—as ignorant and foolish and tribal as we are—Harari points out that it is equally scary to think about what they might do if they don’t think the way we do—for then, how do they think? If their minds are genetically modified or even artificially created, who will they be? What values will they have, if not ours? Could they be better? What if they’re worse?

It is of course difficult to imagine values better than our own—if we thought those values were better, we’d presumably adopt them. But we should seriously consider the possibility, since presumably most of us believe that our values today are better than what most people’s values were 1000 years ago. If moral progress continues, does it not follow that people’s values will be better still 1000 years from now? Or at least that they could be?

I also think Harari overestimates just how difficult it is to anticipate the future. This may be a useful overcorrection; the world is positively infested with people making overprecise predictions about the future, often selling them for exorbitant fees (note that Harari was quite well-compensated for this book as well!). But our values are not so fundamentally alien to those of our forebears, and we have reason to suspect that our descendants’ values will differ from ours no more than ours differ from theirs.

For instance, do you think that medieval people thought suffering and death were good? I assure you they did not. Nor did they believe that the supreme purpose in life is eating cheese. (They didn’t even believe the Earth was flat!) They did not have the concept of GDP, but they could surely appreciate the value of economic prosperity.

Indeed, our world today looks very much like a medieval peasant’s vision of paradise. Boundless food in endless variety. Near-perfect security against violence. Robust health, free from nearly all infectious disease. Freedom of movement. Representation in government! The land of milk and honey is here; there they are, milk and honey on the shelves at Walmart.

Of course, our paradise comes with caveats: Not least, we are by no means free of toil, but instead have invented whole new kinds of toil they could scarcely have imagined. If anything, I would have to guess that coding a robot or recording a video lecture probably isn’t substantially more satisfying than harvesting wheat or smithing a sword; and reconciling receivables and formatting spreadsheets is surely less. Our tasks are physically much easier, but mentally much harder, and it’s not obvious which of those is preferable. And we are so very stressed! It’s honestly bizarre just how stressed we are, given the abundance in which we live; there is no reason for our lives to have stakes so high, and yet somehow they do. It is perhaps this stress and economic precarity that prevents us from feeling such joy as the medieval peasants would have imagined for us.

Of course, we don’t agree with our ancestors on everything. The medieval peasants were surely more religious, more ignorant, more misogynistic, more xenophobic, and more racist than we are. But projecting that trend forward mostly means less ignorance, less misogyny, less racism in the future; it means that future generations should see the world catch up to what the best of us already believe and strive for—hardly something to fear. The values that I hold are surely not the ones we as a civilization act upon, and I sorely wish they were. Perhaps someday they will be.

I can even imagine something that I myself would recognize as better than me: Me, but less hypocritical. Strictly vegan rather than lacto-ovo-vegetarian, or at least more consistent about only buying free range organic animal products. More committed to ecological sustainability, more willing to sacrifice the conveniences of plastic and gasoline. Able to truly respect and appreciate all life, even humble insects. (Though perhaps still not mosquitoes; this is war. They kill more of us than any other animal, including us.) Not even casually or accidentally racist or sexist. More courageous, less burnt out and apathetic. I don’t always live up to my own ideals. Perhaps someday someone will.

Harari fears something much darker, that we will be forced to give up on humanist values and replace them with a new techno-religion he calls Dataism, in which the supreme value is efficient data processing. I see very little evidence of this. If it feels like data is worshipped these days, it is only because data is profitable. Amazon and Google constantly seek out ever richer datasets and ever faster processing because that is how they make money. The real subject of worship here is wealth, and that is nothing new. Maybe there are some die-hard techno-utopians out there who long for us all to join the unified oversoul of all optimized data processing, but I’ve never met one, and they are clearly not the majority. (Harari also uses the word ‘religion’ in an annoyingly overbroad sense; he refers to communism, liberalism, and fascism as ‘religions’. Ideologies, surely; but religions?)

Harari in fact seems to think that ideologies are strongly driven by economic structures, so maybe he would even agree that it’s about profit for now, but thinks it will become religion later. But I don’t really see history fitting this pattern all that well. If monotheism is directly tied to the formation of organized bureaucracy and national government, then how did Egypt and Rome last so long with polytheistic pantheons? If atheism is the natural outgrowth of industrialized capitalism, then why are Africa and South America taking so long to get the memo? I do think that economic circumstances can constrain culture and shift what sort of ideas become dominant, including religious ideas; but there clearly isn’t this one-to-one correspondence he imagines. Moreover, there was never Coalism or Oilism aside from the greedy acquisition of these commodities as part of a far more familiar ideology: capitalism.

He also claims that all of science is now, or is close to, following a united paradigm under which everything is a data processing algorithm, which suggests he has not met very many scientists. Our paradigms remain quite varied, thank you; and if they do all have certain features in common, it’s mainly things like rationality, naturalism and empiricism that are more or less inherent to science. It’s not even the case that all cognitive scientists believe in materialism (though it probably should be); there are still dualists out there.

Moreover, when it comes to values, most scientists believe in liberalism. This is especially true if we use Harari’s broad sense (on which mainline conservatives and libertarians are ‘liberal’ because they believe in liberty and human rights), but even in the narrow sense of center-left. We are by no means converging on a paradigm where human life has no value because it’s all just data processing; maybe some scientists believe that, but definitely not most of us. If scientists ran the world, I can’t promise everything would be better, but I can tell you that Bush and Trump would never have been elected and we’d have a much better climate policy in place by now.

I do share many of Harari’s fears of the rise of artificial intelligence. The world is clearly not ready for the massive economic disruption that AI is going to cause all too soon. We still define a person’s worth by their employment, and think of ourselves primarily as a collection of skills; but AI is going to make many of those skills obsolete, and may make many of us unemployable. It would behoove us to think in advance about who we truly are and what we truly want before that day comes. I used to think that creative intellectual professions would be relatively secure; ChatGPT and Midjourney changed my mind. Even writers and artists may not be safe much longer.

Harari is so good at sympathetically explaining other views that he takes it to a fault. At times it is actually difficult to know whether he himself believes something and wants you to, or if he is just steelmanning someone else’s worldview. There’s a whole section on ‘evolutionary humanism’ where he details a worldview that is at best Nietzschean and at worst Nazi, but he makes it sound so seductive. I don’t think it’s what he believes, in part because he has similarly good things to say about liberalism and socialism—but it’s honestly hard to tell.

The weakest part of the book is when Harari talks about free will. Like most people, he just doesn’t get compatibilism. He spends a whole chapter talking about how science ‘proves we have no free will’, and it’s just the same old tired arguments hard determinists have always made.

He talks about how we can make choices based on our desires, but we can’t choose our desires; well of course we can’t! What would that even mean? If you could choose your desires, what would you choose them based on, if not your desires? Your desire-desires? Well, then, can you choose your desire-desires? What about your desire-desire-desires?

What even is this ultimate uncaused freedom that libertarian free will is supposed to consist in? No one seems capable of even defining it. (I’d say Kant got the closest: He defined it as the capacity to act based upon what ought rather than what is. But of course what we believe about ‘ought’ is fundamentally stored in our brains as a particular state, a way things are—so in the end, it’s an ‘is’ we act on after all.)

Maybe before you lament that something doesn’t exist, you should at least be able to describe that thing as a coherent concept? Woe is me, that 2 plus 2 is not equal to 5!

It is true that as our technology advances, manipulating other people’s desires will become more and more feasible. Harari overstates the case on so-called robo-rats; they aren’t really mind-controlled, it’s more like they are rewarded and punished. The rat chooses to go left because she knows you’ll make her feel good if she does; she’s still freely choosing to go left. (Dangling a carrot in front of a horse is fundamentally the same thing—and frankly, paying a wage isn’t all that different.) The day may yet come where stronger forms of control become feasible, and woe betide us when it does. Yet this is no threat to the concept of free will; we already knew that coercion was possible, and mind control is simply a more precise form of coercion.

Harari reports on a lot of interesting findings in neuroscience, which are important for people to know about, but they do not actually show that free will is an illusion. What they do show is that free will is thornier than most people imagine. Our desires are not fully unified; we are often ‘of two minds’ in a surprisingly literal sense. We are often tempted by things we know are wrong. We often aren’t sure what we really want. Every individual is in fact quite divisible; we literally contain multitudes.

We do need a richer account of moral responsibility that can deal with the fact that human beings often feel multiple conflicting desires simultaneously, and often experience events differently than we later go on to remember them. But at the end of the day, human consciousness is mostly unified, our choices are mostly rational, and our basic account of moral responsibility is mostly valid.

I think for now we should perhaps be less worried about what may come in the distant future, what sort of godlike powers our descendants may have—and more worried about what we are doing with the godlike powers we already have. We have the power to feed the world; why aren’t we? We have the power to save millions from disease; why don’t we? I don’t see many people blindly following this ‘Dataism’, but I do see an awful lot blindly following a 19th-century vision of capitalism.

And perhaps if we straighten ourselves out, the future will be in better hands.

The mythology mindset

Feb 5 JDN 2459981

I recently finished reading Steven Pinker’s latest book Rationality. It’s refreshing, well-written, enjoyable, and basically correct with some small but notable errors that seem sloppy—but then you could have guessed all that from the fact that it was written by Steven Pinker.

What really makes the book interesting is an insight Pinker presents near the end, regarding the difference between the “reality mindset” and the “mythology mindset”.

It’s a pretty simple notion, but a surprisingly powerful one.

In the reality mindset, a belief is a model of how the world actually functions. It must be linked to the available evidence and integrated into a coherent framework of other beliefs. You can logically infer from how some parts work to how other parts must work. You can predict the outcomes of various actions. You live your daily life in the reality mindset; you couldn’t function otherwise.

In the mythology mindset, a belief is a narrative that fulfills some moral, emotional, or social function. It’s almost certainly untrue or even incoherent, but that doesn’t matter. The important thing is that it sends the right messages. It has the right moral overtones. It shows you’re a member of the right tribe.

The idea is similar to Dennett’s “belief in belief”, which I’ve written about before; but I think this characterization may actually be a better one, not least because people would be more willing to use it as a self-description. If you tell someone “You don’t really believe in God, you believe in believing in God”, they will object vociferously (which is, admittedly, what the theory would predict). But if you tell them, “Your belief in God is a form of the mythology mindset”, I think they are at least less likely to immediately reject your claim out of hand. “You believe in God a different way than you believe in cyanide” isn’t as obviously threatening to their identity.

A similar notion came up in a Psychology of Religion course I took, in which the professor discussed “anomalous beliefs” linked to various world religions. He picked on a bunch of obscure religions, often held by various small tribes. He asked for more examples from the class. Knowing he was nominally Catholic and not wanting to let mainstream religion off the hook, I presented my example: “This bread and wine are the body and blood of Christ.” To his credit, he immediately acknowledged it as a very good example.

It’s also not quite the same thing as saying that religion is a “metaphor”; that’s not a good answer for a lot of reasons, but perhaps chief among them is that people don’t say they believe metaphors. If I say something metaphorical and then you ask me, “Hang on; is that really true?” I will immediately acknowledge that it is not, in fact, literally true. Love is a rose with all its sweetness and all its thorns—but no, love isn’t really a rose. And when it comes to religious belief, saying that you think it’s a metaphor is basically a roundabout way of saying you’re an atheist.

From all these different directions, we seem to be converging on a single deeper insight: when people say they believe something, quite often, they clearly mean something very different by “believe” than what I would ordinarily mean.

I’m tempted even to say that they don’t really believe it—but in common usage, the word “belief” is used at least as often to refer to the mythology mindset as the reality mindset. (In fact, it sounds less weird to say “I believe in transubstantiation” than to say “I believe in gravity”.) So if they don’t really believe it, then they at least mythologically believe it.

Both mindsets seem to come very naturally to human beings, in particular contexts. And not just modern people, either. Humans have always been like this.

Ask that psychology professor about Jesus, and he’ll tell you a tall tale of life, death, and resurrection by a demigod. But ask him about the Stroop effect, and he’ll provide a detailed explanation of rigorous experimental protocol. He believes something about God; but he knows something about psychology.

Ask a hunter-gatherer how the world began, and he’ll surely spin you a similarly tall tale about some combination of gods and spirits and whatever else, and it will all be peculiarly particular to his own tribe and no other. But ask him how to gut a fish, and he’ll explain every detail with meticulous accuracy, with almost the same rigor as that scientific experiment. He believes something about the sky-god; but he knows something about fish.

To be a rationalist, then, is to aspire to live your whole life in the reality mindset. To seek to know rather than believe.

This isn’t about certainty. A rationalist can be uncertain about many things—in fact, it’s rationalists of all people who are most willing to admit and quantify their uncertainty.

This is about whether you allow your beliefs to float free as bare, almost meaningless assertions that you profess to show you are a member of the tribe, or you make them pay rent, directly linked to other beliefs and your own experience.

As long as I can remember, I have always aspired to do this. But not everyone does. In fact, I dare say most people don’t. And that raises a very important question: Should they? Is it better to live the rationalist way?

I believe that it is. I suppose I would, temperamentally. But say what you will about the Enlightenment and the scientific revolution, they have clearly revolutionized human civilization and made life much better today than it was for most of human existence. We are peaceful, safe, and well-fed in a way that our not-so-distant ancestors could only dream of, and it’s largely thanks to systems built under the principles of reason and rationality—that is, the reality mindset.

We would never have industrialized agriculture if we still thought in terms of plant spirits and sky gods. We would never have invented vaccines and antibiotics if we still believed disease was caused by curses and witchcraft. We would never have built power grids and the Internet if we still saw energy as a mysterious force permeating the world and not as a measurable, manipulable quantity.

This doesn’t mean that ancient people who saw the world in a mythological way were stupid. In fact, it doesn’t even mean that people today who still think this way are stupid. This is not about some innate, immutable mental capacity. It’s about a technology—or perhaps the technology, the meta-technology that makes all other technology possible. It’s about learning to think the same way about the mysterious and the familiar, using the same kind of reasoning about energy and death and sunlight as we already did about rocks and trees and fish. When encountering something new and mysterious, someone in the mythology mindset quickly concocts a fanciful tale about magical beings that inevitably serves to reinforce their existing beliefs and attitudes, without the slightest shred of evidence for any of it. In the same situation, someone in the reality mindset looks closer and tries to figure it out.

Still, this gives me some compassion for people with weird, crazy ideas. I can better make sense of how someone living in the modern world could believe that the Earth is 6,000 years old or that the world is ruled by lizard-people. Because they probably don’t really believe it, they just mythologically believe it—and they don’t understand the difference.