The mythology mindset

Feb 5 JDN 2459981

I recently finished reading Steven Pinker’s latest book Rationality. It’s refreshing, well-written, enjoyable, and basically correct with some small but notable errors that seem sloppy—but then you could have guessed all that from the fact that it was written by Steven Pinker.

What really makes the book interesting is an insight Pinker presents near the end, regarding the difference between the “reality mindset” and the “mythology mindset”.

It’s a pretty simple notion, but a surprisingly powerful one.

In the reality mindset, a belief is a model of how the world actually functions. It must be linked to the available evidence and integrated into a coherent framework of other beliefs. You can logically infer from how some parts work to how other parts must work. You can predict the outcomes of various actions. You live your daily life in the reality mindset; you couldn’t function otherwise.

In the mythology mindset, a belief is a narrative that fulfills some moral, emotional, or social function. It’s almost certainly untrue or even incoherent, but that doesn’t matter. The important thing is that it sends the right messages. It has the right moral overtones. It shows you’re a member of the right tribe.

The idea is similar to Dennett’s “belief in belief”, which I’ve written about before; but I think this characterization may actually be a better one, not least because people would be more willing to use it as a self-description. If you tell someone “You don’t really believe in God, you believe in believing in God”, they will object vociferously (which is, admittedly, what the theory would predict). But if you tell them, “Your belief in God is a form of the mythology mindset”, I think they are at least less likely to immediately reject your claim out of hand. “You believe in God a different way than you believe in cyanide” isn’t as obviously threatening to their identity.

A similar notion came up in a Psychology of Religion course I took, in which the professor discussed “anomalous beliefs” linked to various world religions. He picked on a bunch of obscure religions, often held by various small tribes. He asked for more examples from the class. Knowing he was nominally Catholic and not wanting to let mainstream religion off the hook, I presented my example: “This bread and wine are the body and blood of Christ.” To his credit, he immediately acknowledged it as a very good example.

It’s also not quite the same thing as saying that religion is a “metaphor”; that’s not a good answer for a lot of reasons, but perhaps chief among them is that people don’t say they believe metaphors. If I say something metaphorical and then you ask me, “Hang on; is that really true?” I will immediately acknowledge that it is not, in fact, literally true. Love is a rose with all its sweetness and all its thorns—but no, love isn’t really a rose. And when it comes to religious belief, saying that you think it’s a metaphor is basically a roundabout way of saying you’re an atheist.

From all these different directions, we seem to be converging on a single deeper insight: when people say they believe something, quite often, they clearly mean something very different by “believe” than what I would ordinarily mean.

I’m tempted even to say that they don’t really believe it—but in common usage, the word “belief” is used at least as often to refer to the mythology mindset as the reality mindset. (In fact, it sounds less weird to say “I believe in transubstantiation” than to say “I believe in gravity”.) So if they don’t really believe it, then they at least mythologically believe it.

Both mindsets seem to come very naturally to human beings, in particular contexts. And not just modern people, either. Humans have always been like this.

Ask that psychology professor about Jesus, and he’ll tell you a tall tale of life, death, and resurrection by a demigod. But ask him about the Stroop effect, and he’ll provide a detailed explanation of rigorous experimental protocol. He believes something about God; but he knows something about psychology.

Ask a hunter-gatherer how the world began, and he’ll surely spin you a similarly tall tale about some combination of gods and spirits and whatever else, and it will all be peculiarly particular to his own tribe and no other. But ask him how to gut a fish, and he’ll explain every detail with meticulous accuracy, with almost the same rigor as that scientific experiment. He believes something about the sky-god; but he knows something about fish.

To be a rationalist, then, is to aspire to live your whole life in the reality mindset. To seek to know rather than believe.

This isn’t about certainty. A rationalist can be uncertain about many things—in fact, it’s rationalists of all people who are most willing to admit and quantify their uncertainty.

This is about whether you allow your beliefs to float free as bare, almost meaningless assertions that you profess to show you are a member of the tribe, or you make them pay rent, directly linked to other beliefs and your own experience.

As long as I can remember, I have always aspired to do this. But not everyone does. In fact, I dare say most people don’t. And that raises a very important question: Should they? Is it better to live the rationalist way?

I believe that it is. I suppose I would, temperamentally. But say what you will about the Enlightenment and the scientific revolution, they have clearly revolutionized human civilization and made life much better today than it was for most of human existence. We are peaceful, safe, and well-fed in a way that our not-so-distant ancestors could only dream of, and it’s largely thanks to systems built under the principles of reason and rationality—that is, the reality mindset.

We would never have industrialized agriculture if we still thought in terms of plant spirits and sky gods. We would never have invented vaccines and antibiotics if we still believed disease was caused by curses and witchcraft. We would never have built power grids and the Internet if we still saw energy as a mysterious force permeating the world and not as a measurable, manipulable quantity.

This doesn’t mean that ancient people who saw the world in a mythological way were stupid. In fact, it doesn’t even mean that people today who still think this way are stupid. This is not about some innate, immutable mental capacity. It’s about a technology—or perhaps the technology, the meta-technology that makes all other technology possible. It’s about learning to think the same way about the mysterious and the familiar, using the same kind of reasoning about energy and death and sunlight as we already did about rocks and trees and fish. When encountering something new and mysterious, someone in the mythology mindset quickly concocts a fanciful tale about magical beings that inevitably serves to reinforce their existing beliefs and attitudes, without the slightest shred of evidence for any of it. In their place, someone in the reality mindset looks closer and tries to figure it out.

Still, this gives me some compassion for people with weird, crazy ideas. I can better make sense of how someone living in the modern world could believe that the Earth is 6,000 years old or that the world is ruled by lizard-people. Because they probably don’t really believe it, they just mythologically believe it—and they don’t understand the difference.

Hypocrisy is underrated

Sep 12 JDN 2459470

Hypocrisy isn’t a good thing, but it isn’t nearly as bad as most people seem to think. Often accusing someone of hypocrisy is taken as a knock-down argument for everything they are saying, and this is just utterly wrong. Someone can be a hypocrite and still be mostly right.

Often people are accused of hypocrisy when they are not being hypocritical; for instance, the right wing seems to think that “They want higher taxes on the rich, but they are rich!” is hypocrisy, when in fact it’s simply altruism. (If they had wanted the rich guillotined, that would be hypocrisy. Maybe the problem is that the right wing can’t tell the difference?) Even worse, “They live under capitalism but they want to overthrow capitalism!” is not even close to hypocrisy; indeed, how would someone overthrow a system they weren’t living under? (There are many things wrong with Marxists, but that is not one of them.)

But in fact I intend something stronger: Hypocrisy itself just isn’t that bad.


There are currently two classes of Republican politicians with regard to the COVID vaccines: Those who are consistent in their principles and don’t get the vaccines, and those who are hypocrites and get the vaccines while telling their constituents not to. Of the two, who is better? The hypocrites. At least they are doing the right thing even as they say things that are very, very wrong.

There are really four cases to consider. The principles you believe in could be right, or they could be wrong. And you could follow those principles, or you could be a hypocrite. The two factors are independent of each other.

If your principles are right and you are consistent, that’s the best case; if your principles are right and you are a hypocrite, that’s worse.

But if your principles are wrong and you are consistent, that’s the worst case; if your principles are wrong and you are a hypocrite, that’s better.

In fact I think for most things the ordering goes like this: Consistent Right > Hypocritical Wrong > Hypocritical Right > Consistent Wrong. Your behavior counts for more than your principles—so if there is going to be a mismatch, it’s better for your good actions to fail to match your bad principles than for your bad actions to betray your good ones.

Obviously if we could get people to believe good moral principles and then follow them, that would be best. And we should in fact be working to achieve that.

But if you know that someone’s moral principles are wrong, it doesn’t accomplish anything to accuse them of being a hypocrite. If the accusation is true, that’s actually a good thing.

Here’s a pretty clear example for you: Anyone who says that the Bible is infallible but doesn’t want gay people stoned to death is a hypocrite. The Bible is quite clear on this matter; Leviticus 20:13 really doesn’t leave much room for interpretation. By this standard, most Christians are hypocrites—and thank goodness for that. I owe my life to the hypocrisy of millions.

Of course if I could convince them that the Bible isn’t infallible—perhaps by pointing out all the things it says that contradict their most deeply-held moral and factual beliefs—that would be even better. But the last thing I want to do is make their behavior more consistent with their belief that the Bible is infallible; that would turn them into fanatical monsters. The Spanish Inquisition was very consistent in behaving according to the belief that the Bible is infallible.

Here’s another example: Anyone who thinks that cruelty to cats and dogs is wrong but is willing to buy factory-farmed beef and ham is a hypocrite. Any principle that would tell you that it’s wrong to kick a dog or cat would tell you that the way cows and pigs are treated in CAFOs is utterly unconscionable. But if you are really unwilling to give up eating meat and you can’t find or afford free-range beef, it still would be bad for you to start kicking dogs in a display of your moral consistency.

And one more example for good measure: The leaders of any country who resist human rights violations abroad but tolerate them at home are hypocrites. Obviously the best thing to do would be to fight human rights violations everywhere. But perhaps for whatever reason you are unwilling or unable to do this—one disturbing truth is that many human rights violations at home (such as draconian border policies) are often popular with your local constituents. Human-rights violations abroad are also often more severe—detaining children at the border is one thing, a full-scale genocide is quite another. So, for good reasons or bad, you may decide to focus your efforts on resisting human rights violations abroad rather than at home; this would make you a hypocrite. But it would still make you much better than a more consistent leader who simply ignores all human rights violations wherever they may occur.

In fact, there are cases in which it may be optimal for you to knowingly be a hypocrite. If you have two sets of competing moral beliefs, and you don’t know which is true but you know that as a whole they are inconsistent, your best option is to apply each set of beliefs in the domain for which you are most confident that it is correct, while searching for more information that might allow you to correct your beliefs and reconcile the inconsistency. If you are self-aware about this, you will know that you are behaving in a hypocritical way—but you will still behave better than you would if you picked the wrong beliefs and stuck to them dogmatically. In fact, given a reasonable level of risk aversion, you’ll be better off being a hypocrite than you would by picking one set of beliefs arbitrarily (say, at the flip of a coin). At least then you avoid the worst-case scenario of being the most wrong.
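
To make that logic concrete, here is a minimal toy sketch in Python. The belief sets, domains, confidence levels, and the particular concave utility function are all invented for illustration (none of them come from anything above); the only point is that the self-aware hypocrite’s expected utility beats both forms of dogmatic consistency, and beats picking a side at the flip of a coin.

```python
# Toy model of "self-aware hypocrisy" under moral uncertainty.
# All numbers here are invented for illustration only.

import itertools
import math

# Assumed confidence that belief set A is the right guide in each domain
# (belief set B is taken to be right wherever A is not).
P_A_RIGHT = {"domain1": 0.8, "domain2": 0.3}

def utility(correct_count):
    # Concave utility over how many domains you acted rightly in: risk aversion.
    return 1 - math.exp(-correct_count)

def expected_utility(strategy):
    # strategy maps each domain to the belief set you actually follow there.
    total = 0.0
    # Enumerate which belief set turns out to be right in each domain
    # (treated as independent here purely to keep the toy model simple).
    for truths in itertools.product("AB", repeat=2):
        prob, correct = 1.0, 0
        for domain, truth in zip(["domain1", "domain2"], truths):
            p_a = P_A_RIGHT[domain]
            prob *= p_a if truth == "A" else 1 - p_a
            if strategy[domain] == truth:
                correct += 1
        total += prob * utility(correct)
    return total

strategies = {
    "Consistent A":         {"domain1": "A", "domain2": "A"},
    "Consistent B":         {"domain1": "B", "domain2": "B"},
    "Self-aware hypocrite": {"domain1": "A", "domain2": "B"},
}

for name, strat in strategies.items():
    print(f"{name:22s} expected utility: {expected_utility(strat):.3f}")

# Picking one set arbitrarily (a coin flip between the two consistent strategies):
coin_flip = 0.5 * (expected_utility(strategies["Consistent A"])
                   + expected_utility(strategies["Consistent B"]))
print(f"{'Coin-flip consistency':22s} expected utility: {coin_flip:.3f}")
```

With these made-up numbers, the hypocrite scores highest precisely because each belief set is deployed where it is most likely to be right, and the concave utility further penalizes the consistent strategies for their exposure to the worst case.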

There is yet another factor to take into consideration. Sometimes following your own principles is hard.

Considerable ink has been spilled on the concept of akrasia, or “weakness of will”, in which we judge that A is better than B yet still find ourselves doing B. Philosophers continue to debate to this day whether this really happens. As a behavioral economist, I observe it routinely, perhaps even daily. In fact, I observe it in myself.

I think the philosophers’ mistake is to presume that there is one simple, well-defined “you” that makes all observations and judgments and takes actions. Our brains are much more complicated than that. There are many “you”s inside your brain, each with its own capacities, desires, and judgments. Yes, there is some important sense in which they are all somehow unified into a single consciousness—by a mechanism which still eludes our understanding. But it doesn’t take esoteric cognitive science to see that there are many minds inside you: Haven’t you ever felt an urge to do something you knew you shouldn’t do? Haven’t you ever succumbed to such an urge—drank the drink, eaten the dessert, bought the shoes, slept with the stranger—when it seemed so enticing but you knew it wasn’t really the right choice?

We even speak of being “of two minds” when we are ambivalent about something, and I think there is literal truth in this. The neural networks in your brain are forming coalitions, and arguing between them over which course of action you ought to take. Eventually one coalition will prevail, and your action will be taken; but afterward your reflective mind need not always agree that the coalition which won the vote was the one that deserved to.

The evolutionary reason for this is simple: We’re a kludge. We weren’t designed from the top down for optimal efficiency. We were the product of hundreds of millions of years of subtle tinkering, adding a bit here, removing a bit there, layering the reflective cerebral cortex over the emotional limbic system over the ancient, involuntary autonomic system. Combine this with the fact that we are built in pairs, with left and right halves of each kind of brain (and yes, they are independently functional when their connection is severed), and the wonder is that we ever agree with our own decisions.

Thus, there is a kind of hypocrisy that is not a moral indictment at all: You may genuinely and honestly agree that it is morally better to do something and still not be able to bring yourself to do it. You may know full well that it would be better to donate that money to malaria treatment rather than buy yourself that tub of ice cream—you may be on a diet and full well know that the ice cream won’t even benefit you in the long run—and still not be able to stop yourself from buying the ice cream.

Sometimes your feeling of hesitation at an altruistic act may be a useful insight; I certainly don’t think we should feel obliged to give all our income, or even all of our discretionary income, to high-impact charities. (For most people I encourage 5%. I personally try to aim for 10%. If all the middle-class and above in the First World gave even 1% we could definitely end world hunger.) But other times it may lead you astray, make you unable to resist the temptation of a delicious treat or a shiny new toy when even you know the world would be better off if you did otherwise.
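
For what it’s worth, here is the back-of-envelope arithmetic behind that 1% claim. Every figure below is a round, assumed number (the population, the average income, and the widely varying published estimates of the annual cost of ending severe hunger), so treat it as an order-of-magnitude sketch rather than a sourced calculation.

```python
# Back-of-envelope check of the "1% could end world hunger" claim.
# Every figure below is a rough, assumed round number, used for illustration only.

first_world_population = 1.0e9   # assumed: ~1 billion middle-class-and-above people
average_annual_income = 40_000   # assumed: ~$40,000 average income among that group
donation_rate = 0.01             # the 1% figure from the text

annual_donations = first_world_population * average_annual_income * donation_rate

# Published estimates of the yearly cost of ending severe hunger vary widely;
# tens to a few hundred billion dollars per year is the commonly cited range.
cost_low, cost_high = 50e9, 300e9

print(f"1% of income is roughly ${annual_donations / 1e9:,.0f} billion per year")
print(f"That covers the low cost estimate about {annual_donations / cost_low:.0f} times over,")
print(f"and the high cost estimate about {annual_donations / cost_high:.1f} times over.")
```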

Yet when following our own principles is so difficult, it’s not really much of a criticism to point out that someone has failed to do so, particularly when they themselves already recognize that they failed. The inconsistency between behavior and belief indicates that something is wrong, but it may not be any dishonesty or even anything wrong with their beliefs.

I wouldn’t go so far as to say you should stop ever calling out hypocrisy. Sometimes it is clearly useful to do so. But while hypocrisy is often the sign of a moral failing, it isn’t always—and even when it is, often as not the problem is the bad principles, not the behavior inconsistent with them.

Love the disabled, hate the disability

Aug 1 JDN 2459428

There is a common phrase Christians like to say: “Love the sinner, hate the sin.” This seems to be honored more in the breach than the observance, and many of the things that most Christians consider “sins” are utterly harmless or even good; but the principle is actually quite sound. You can disagree with someone or even believe that what they are doing is wrong while still respecting them as a human being. Indeed, my attitude toward religion is very much “Love the believer, hate the belief.” (Though somehow they don’t seem to like that one so much….)

Yet while ethically this is often the correct attitude, psychologically it can be very difficult for people to maintain. The Halo Effect is a powerful bias, and most people recoil instinctively from saying anything good about someone bad or anything bad about someone good. This can make it uncomfortable to simply state objective facts like “Hitler was a charismatic leader” or “Stalin was a competent administrator”—how dare you say something good about someone so evil? Yet in fact Hitler and Stalin could never have accomplished so much evil if they didn’t have these positive attributes—if we want to understand how such atrocities can occur and prevent them in the future, we need to recognize that evil people can also be charismatic and competent.

The Halo Effect also makes it difficult for people to understand the complexities of historical figures who have facets of both great good and great evil: Thomas Jefferson led the charge on inventing modern democracy—but he also owned and raped slaves. Lately it seems like the left wants to deny the former and the right wants to deny the latter; but both are historical truths that are important to know.

The Halo Effect is the best explanation I have for why so many disability activists want to deny that disabilities are inherently bad. They can’t keep in their head the basic principle of “Love the disabled, hate the disability.”

There is a large community of deaf people who say that being deaf isn’t bad. There are even some blind people who say that being blind isn’t bad—though they’re considerably rarer.

Is music valuable? Is art valuable? Is the world better off because Mozart’s symphonies and the Mona Lisa exist? Yes. It follows that being unable to experience these things is bad. Therefore blindness and deafness are bad. QED.


No human being is made better off by not being able to do something. More capability is better than less capability. More freedom is better than less freedom. Less pain is better than more pain.

(Actually there are a few exceptions to “less pain is better than more pain”: People with CIPA are incapable of feeling pain even when injured, which is very dangerous.)

From this, it follows immediately that disabilities are bad and we should be trying to fix them.

And frankly this seems so utterly obvious to me that it’s hard for me to understand why anyone could possibly disagree. Maybe people who are blind or deaf simply don’t know what they’re missing? Even that isn’t a complete explanation, because I don’t know what it would be like to experience four dimensions or see ultraviolet—yet I still think that I’d be better off if I could. If there were people who had these experiences telling me how great they are, I’d be certain of it.

Don’t get me wrong: A lot of ableist discrimination does exist, and much of it seems to come from the same psychological attitude: Since being disabled is bad, they think that disabled people must be bad and we shouldn’t do anything to make them better off because they are bad. Stated outright this sounds ludicrous; but most people who think this way don’t consciously reflect on it. They just have a general sense of badness related to disability which then rubs off on their attitudes toward disabled people as well.

Yet it makes hardly any more sense to go the other way: Disabled people are human beings of value, they are good; therefore their disabilities are good? Therefore this thing that harms and limits them is good?

It’s certainly true that most disabilities would be more manageable with better accommodations, and many of those accommodations would be astonishingly easy and cheap to implement. It’s terrible that we often fail to do this. Yet the fact remains: The best-case scenario would be not needing accommodations because we can simply cure the disability.

It never ceases to baffle me that disability activists will say things like this:

“A wheelchair user isn’t disabled because of the impairment that interferes with her ability to walk, but because society refuses to make spaces wheelchair-accessible.”

No, the problem is pretty clearly the fact that she can’t walk. There are various ways that we could make society more accessible to people in wheelchairs—and we should do those things—but there are inherently certain things you simply cannot do if you can’t walk, and that has nothing to do with anything society does. You would be better off if society were more accommodating, but you’d be better off still if you could simply walk again.

Perhaps my perspective on this is skewed, because my major disability—chronic migraine—involves agonizing, debilitating chronic pain. Perhaps people whose disabilities don’t cause them continual agony can convince themselves that there’s nothing wrong with them. But it seems pretty obvious to me that I would be better off without migraines.

Indeed, it’s utterly alien to my experience to hear people say things like this: “We’re not suffering. We’re just living our lives in a different way.” I’m definitely suffering, thank you very much. Maybe not everyone with disabilities is suffering—but a lot of us definitely are. Every single day I have to maintain specific habits and avoid triggers, and I still get severe headaches twice a week. I had a particularly nasty one just this morning.

There are some more ambiguous cases, to be sure: Neurodivergences like autism and ADHD that exist on a spectrum, where the most extreme forms are utterly debilitating but the mildest forms are simply ordinary variation. It can be difficult to draw the line at when we should be willing to treat and when we shouldn’t; but this isn’t fundamentally different from the sort of question psychiatrists deal with all the time, regarding the difference between normal sadness and nervousness versus pathological depression and anxiety disorders.

Of course there is natural variation in almost all human traits, and one can have less of something good without it being pathological. Some things we call disabilities could just be considered below-average capabilities within ordinary variation. Yet even then, if we could make everyone healthier, stronger, faster, tougher, and smarter than they currently are, I have trouble seeing why we wouldn’t want to do that. I don’t even see any particular reason to think that the current human average—or even the current human maximum—is in any way optimal. Better is better. If we have the option to become transhuman gods, why wouldn’t we?

Another way to see this is to think about how utterly insane it would be to actively try to create disabilities. If there’s nothing wrong with being deaf, why not intentionally deafen yourself? If being bound to a wheelchair is not a bad thing, why not go get your legs paralyzed? If being blind isn’t so bad, why not stare into a welding torch? In these cases you’d even have consented—which is absolutely not the case for an innate disability. I never consented to these migraines and never would have.

I respect individual autonomy, so I would never force someone to get treatment for their disability. I even recognize that society can pressure people to do things they wouldn’t want to, and so maybe occasionally people really are better off being unable to do something so that nobody can pressure them into it. But it still seems utterly baffling to me that there are people who argue that we’d be better off not even having the option to make our bodies work better.

I think this is actually a major reason why disability activism hasn’t been more effective; the most vocal activists are the ones saying ridiculous things like “the problem isn’t my disability, it’s your lack of accommodations” or “there’s nothing wrong with being unable to hear”. If there is anything you could do without your disability that you cannot do even with accommodations (and there basically always is), then those claims simply aren’t true.

Is Singularitarianism a religion?

 

Nov 17 JDN 2458805

I said in last week’s post that Pascal’s Mugging provides some deep insights into both Singularitarianism and religion. In particular, it explains why Singularitarianism seems so much like a religion.

This has been previously remarked, of course. I think Eric Steinhart makes the best case for Singularitarianism as a religion:

I think singularitarianism is a new religious movement. I might add that I think Clifford Geertz had a pretty nice (though very abstract) definition of religion. And I think singularitarianism fits Geertz’s definition (but that’s for another time).

My main interest is this: if singularitarianism is a new religious movement, then what should we make of it? Will it mainly be a good thing? A kind of enlightenment religion? It might be an excellent alternative to old-fashioned Abrahamic religion. Or would it degenerate into the well-known tragic pattern of coercive authority? Time will tell; but I think it’s worth thinking about this in much more detail.

To be clear: Singularitarianism is probably not a religion. It is certainly not a cult, as it has sometimes even more damningly been accused of being; the behaviors it prescribes are largely normative, pro-social behaviors, and therefore it would at worst be a mainstream religion. Really, if every religion only inspired people to do things like donate to famine relief and work on AI research (as opposed to, say, beheading gay people), I wouldn’t have much of a problem with religion.

In fact, Singularitarianism has one vital advantage over religion: Evidence. While the evidence in favor of it is not overwhelming, there is enough evidential support to lend plausibility to at least a broad concept of Singularitarianism: Technology will continue rapidly advancing, achieving accomplishments currently only in our wildest imaginings; artificial intelligence surpassing human intelligence will arise, sooner than many people think; human beings will change ourselves into something new and broadly superior; these posthumans will go on to colonize the galaxy and build a grander civilization than we can imagine. I don’t know that these things are true, but I hope they are, and I think it’s at least reasonably likely. All I’m really doing is extrapolating based on what human civilization has done so far and what we are currently trying to do now. Of course, we could well blow ourselves up before then, or regress to a lower level of technology, or be wiped out by some external force. But there’s at least a decent chance that we will continue to thrive for another million years to come.

But yes, Singularitarianism does in many ways resemble a religion: It offers a rich, emotionally fulfilling ontology combined with ethical prescriptions that require particular behaviors. It promises us a chance at immortality. It inspires us to work toward something much larger than ourselves. More importantly, it makes us special—we are among the unique few (millions?) who have the power to influence the direction of human and posthuman civilization for a million years. The stronger forms of Singularitarianism even have a flavor of apocalypse: When the AI comes, sooner than you think, it will immediately reshape everything at effectively infinite speed, so that from one year—or even one moment—to the next, our whole civilization will be changed. (These forms of Singularitarianism are substantially less plausible than the broader concept I outlined above.)

It’s this sense of specialness that Pascal’s Mugging provides some insight into. When it is suggested that we are so special, we should be inherently skeptical, not least because it feels good to hear that. (As Less Wrong would put it, we need to avoid a Happy Death Spiral.) Human beings like to feel special; we want to feel special. Our brains are configured to seek out evidence that we are special and reject evidence that we are not. This is true even to the point of absurdity: With a world population of about seven billion, one cannot be mathematically coherent without admitting that the compliment “You’re one in a million” is equivalent to the statement “There are seven thousand people as good or better than you”—and yet, the latter seems much worse, because it does not make us sound special.

Indeed, the connection between Pascal’s Mugging and Pascal’s Wager is quite deep: Each argument takes a tiny probability and multiplies it by a huge impact in order to get a large expected utility. This often seems to be the way that religions defend themselves: Well, yes, the probability is small; but can you take the chance? Can you afford to take that bet if it’s really your immortal soul on the line?

And Singularitarianism has a similar case to make, even aside from the paradox of Pascal’s Mugging itself. The chief argument for why we should be focusing all of our time and energy on existential risk is that the potential payoff is just so huge that even a tiny probability of making a difference is enough to make it the only thing that matters. We should be especially suspicious of that; anything that says it is the only thing that matters is to be doubted with utmost care. The really dangerous religion has always been the fanatical kind that says it is the only thing that matters. That’s the kind of religion that makes you crash airliners into buildings.

I think some people may well have become Singularitarians because it made them feel special. It is exhilarating to be one of these lone few—and in the scheme of things, even a few million is a small fraction of all past and future humanity—with the power to effect some shift, however small, in the probability of a far grander, far brighter future.

Yet, in fact this is very likely the circumstance in which we are. We could have been born in the Neolithic, struggling to survive, utterly unaware of what would come a few millennia hence; we could have been born in the posthuman era, one of a trillion other artist/gamer/philosophers living in a world where all the hard work that needed to be done is already done. In the long S-curve of human development, we could have been born in the flat part on the left or the flat part on the right—and by all probability, we should have been; most people were. But instead we happened to be born in that tiny middle slice, where the curve slopes upward at its fastest. I suppose somebody had to be, and it might as well be us.

(Figure: a labeled sigmoid curve, the S-curve of development described above.)

A priori, we should doubt that we were born so special. And when forming our beliefs, we should compensate for the fact that we want to believe we are special. But we do in fact have evidence, lots of evidence. We live in a time of astonishing scientific and technological progress.

My lifetime has included the progression from Deep Thought first beating David Levy to the creation of a computer one millimeter across that runs on a few nanowatts and nevertheless has ten times as much computing power as the 80-pound computer that ran the Saturn V. (The human brain runs on about 100 watts, and has a processing power of about 1 petaflop, so we can say that our energy efficiency is about 10 TFLOPS/W. The M3 runs on about 10 nanowatts and has a processing power of about 0.1 megaflops, so its energy efficiency is also about 10 TFLOPS/W. We did it! We finally made a computer as energy-efficient as the human brain! But we have still not matched the brain in terms of space-efficiency: The volume of the human brain is about 1000 cm^3, so our space efficiency is about 1 TFLOPS/cm^3. The volume of the M3 is about 1 mm^3, so its space efficiency is only about 100 MFLOPS/cm^3. The brain still wins by a factor of 10,000.)
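
For anyone who wants to check that arithmetic, here it is spelled out, using only the round order-of-magnitude figures quoted above (the real numbers vary by source and by how you count a “FLOP”):

```python
# The efficiency arithmetic above, using only the round figures quoted in the text.

brain_flops, brain_watts, brain_volume_cm3 = 1e15, 100, 1000   # ~1 PFLOPS, ~100 W, ~1000 cm^3
m3_flops, m3_watts, m3_volume_cm3 = 1e5, 10e-9, 1e-3           # ~0.1 MFLOPS, ~10 nW, ~1 mm^3

def tflops_per_watt(flops, watts):
    return flops / watts / 1e12

brain_energy = tflops_per_watt(brain_flops, brain_watts)   # ~10 TFLOPS/W
m3_energy = tflops_per_watt(m3_flops, m3_watts)            # ~10 TFLOPS/W
print(f"Energy efficiency: brain {brain_energy:.0f} TFLOPS/W, M3 {m3_energy:.0f} TFLOPS/W")

brain_density = brain_flops / brain_volume_cm3   # ~1e12 FLOPS/cm^3 = 1 TFLOPS/cm^3
m3_density = m3_flops / m3_volume_cm3            # ~1e8 FLOPS/cm^3 = 100 MFLOPS/cm^3
print(f"Space efficiency: brain wins by a factor of {brain_density / m3_density:,.0f}")
```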

My mother saw us go from the first jet airliners to landing on the Moon to the International Space Station and robots on Mars. She grew up before the polio vaccine and is still alive to see the first 3D-printed human heart. When I was a child, smartphones didn’t even exist; now more people have smartphones than have toilets. I may yet live to see the first human beings set foot on Mars. The pace of change is utterly staggering.

Without a doubt, this is sufficient evidence to believe that we, as a civilization, are living in a very special time. The real question is: Are we, as individuals, special enough to make a difference? And if we are, what weight of responsibility does this put upon us?

If you are reading this, odds are the answer to the first question is yes: You are definitely literate, and most likely educated, probably middle- or upper-middle-class in a First World country. Countries are something I can track, and I do get some readers from non-First-World countries; and of course I don’t observe your education or socioeconomic status. But at an educated guess, this is surely my primary reading demographic. Even if you don’t have the faintest idea what I’m talking about when I use Bayesian logic or calculus, you’re already quite exceptional. (And if you do? All the more so.)

That means the second question must apply: What do we owe these future generations who may come to exist if we play our cards right? What can we, as individuals, hope to do to bring about this brighter future?

The Singularitarian community will generally tell you that the best thing to do with your time is to work on AI research, or, failing that, the best thing to do with your money is to give it to people working on artificial intelligence research. I’m not going to tell you not to work on AI research or donate to AI research, as I do think it is among the most important things humanity needs to be doing right now, but I’m also not going to tell you that it is the one single thing you must be doing.

You should almost certainly be donating somewhere, but I’m not so sure it should be to AI research. Maybe it should be famine relief, or malaria prevention, or medical research, or human rights, or environmental sustainability. If you’re in the United States (as I know most of you are), the best thing to do with your money may well be to support political campaigns, because US political, economic, and military hegemony means that as goes America, so goes the world. Stop and think for a moment how different the prospects of global warming might have been—how many millions of lives might have been saved!—if Al Gore had become President in 2001. For lack of a few million dollars in Tampa twenty years ago, Miami may be gone in fifty. If you’re not sure which cause is most important, just pick one; or better yet, donate to a diversified portfolio of charities and political campaigns. Diversified investment isn’t just about monetary return.

And you should think carefully about what you’re doing with the rest of your life. This can be hard to do; we can easily get so caught up in just getting through the day, getting through the week, just getting by, that we lose sight of having a broader mission in life. Of course, I don’t know what your situation is; it’s possible things really are so desperate for you that you have no choice but to keep your head down and muddle through. But you should also consider the possibility that this is not the case: You may not be as desperate as you feel. You may have more options than you know. Most “starving artists” don’t actually starve. More people regret staying in their dead-end jobs than regret quitting to follow their dreams. I guess if you stay in a high-paying job in order to earn to give, that might really be ethically optimal; but I doubt it will make you happy. And in fact some of the most important fields are constrained by a lack of good people doing good work, and not by a simple lack of funding.

I see this especially in economics: As a field, economics is really not focused on the right kind of questions. There’s far too much prestige for incrementally adjusting some overcomplicated unfalsifiable mess of macroeconomic algebra, and not nearly enough for trying to figure out how to mitigate global warming, how to turn back the tide of rising wealth inequality, or what happens to human society once robots take all the middle-class jobs. Good work is being done in devising measures to fight poverty directly, but not in devising means to undermine the authoritarian regimes that are responsible for maintaining poverty. Formal mathematical sophistication is prized, and deep thought about hard questions is eschewed. We are carefully arranging the pebbles on our sandcastle in front of the oncoming tidal wave. I won’t tell you that it’s easy to change this—it certainly hasn’t been easy for me—but I have to imagine it’d be easier with more of us trying rather than with fewer. Nobody needs to donate money to economics departments, but we definitely do need better economists running those departments.

You should ask yourself what it is that you are really good at, what you—you yourself, not anyone else—might do to make a mark on the world. This is not an easy question: I have not quite answered for myself whether I would make more difference as an academic researcher, a policy analyst, a nonfiction author, or even a science fiction author. (If you scoff at the latter: Who would have any concept of AI, space colonization, or transhumanism, if not for science fiction authors? The people who most tilted the dial of human civilization toward this brighter future may well be Clarke, Roddenberry, and Asimov.) It is not impossible to be some combination or even all of these, but the more I try to take on the more difficult my life becomes.

Your own path will look different than mine, different, indeed, than anyone else’s. But you must choose it wisely. For we are very special individuals, living in a very special time.

Belief in belief, and why it’s important

Oct 30, JDN 2457692

In my previous post on ridiculous beliefs, I passed briefly over this sentence:

“People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.”

Today I’d like to talk about the fact that “to profess” is a very important phrase in that sentence. Part of understanding ridiculous beliefs, I think, is understanding that many, if not most, of them are not actually proper beliefs. They are what Daniel Dennett calls “belief in belief”, and what has elsewhere been referred to as “anomalous belief”. They are not beliefs in the ordinary sense that we would line up with the other beliefs in our worldview and use them to anticipate experiences and motivate actions. They are something else, lone islands of belief that are not woven into our worldview. But all the same they are invested with importance, often moral or even ultimate importance; this one belief may not make any sense with everything else, but you must believe it, because it is a vital part of your identity and your tribe. To abandon it would not simply be mistaken; it would be heresy, it would be treason.

How do I know this? Mainly because nobody has tried to stone me to death lately.

The Bible is quite explicit about at least a dozen reasons I am supposed to be executed forthwith; you likely share many of them: Heresy, apostasy, blasphemy, nonbelief, sodomy, fornication, covetousness, taking God’s name in vain, eating shellfish (though I don’t anymore!), wearing mixed fiber, shaving, working on the Sabbath, making images of things, and my personal favorite, not stoning other people for committing such crimes (as we call it in game theory, a second-order punishment).

Yet I have met many people who profess to be “Bible-believing Christians”, and even may oppose some of these activities (chiefly sodomy, blasphemy, and nonbelief) on the grounds that they are against what the Bible says—and yet not one has tried to arrange my execution, nor have I ever seriously feared that they might.

Is this because we live in a secular society? Well, yes—but not simply that. It isn’t just that these people are afraid of being punished by our secular government should they murder me for my sins; they believe that it is morally wrong to murder me, and would rarely even consider the option. Someone could point them to the passage in Leviticus (20:13, as it turns out) that explicitly says I should be executed, and it would not change their behavior toward me.

On first glance this is quite baffling. If I thought you were about to drink a glass of water that contained cyanide, I would stop you, by force if necessary. So if they truly believe that I am going to be sent to Hell—infinitely worse than cyanide—then shouldn’t they be willing to use any means necessary to stop that from happening? And wouldn’t this be all the more true if they believe that they themselves will go to Hell should they fail to punish me?

If these “Bible-believing Christians” truly believed in Hell the way that I believe in cyanide—that is, as proper beliefs which anticipate experience and motivate action—then they would in fact try to force my conversion or execute me, and in doing so would believe that they are doing right. This used to be quite common in many Christian societies (most infamously in the Salem Witch Trials), and still is disturbingly common in many Muslim societies—ISIS doesn’t just throw gay men off rooftops and stone them as a weird idiosyncrasy; it is written in the Hadith that they’re supposed to. Nor is this sort of thing confined to terrorist groups; the “legitimate” government of Saudi Arabia routinely beheads atheists or imprisons homosexuals (though it has a very capricious enforcement system, likely so that the monarchy can trump up charges to justify executing whomever they choose). Beheading people because the book said so is what your behavior would look like if you honestly believed, as a proper belief, that the Qur’an or the Bible or whatever holy book actually contained the ultimate truth of the universe. The great irony of calling religion people’s “deeply-held belief” is that it is in almost all circumstances the exact opposite—it is their most weakly held belief, the one that they could most easily sacrifice without changing their behavior.

Yet perhaps we can’t even say that to people, because they will get equally defensive and insist that they really do hold this very important anomalous belief, and how dare you accuse them otherwise. Because one of the beliefs they really do hold, as a proper belief, and a rather deeply-held one, is that you must always profess to believe your religion and defend your belief in it, and if anyone catches you not believing it that’s a horrible, horrible thing. So even though it’s obvious to everyone—probably even to you—that your behavior looks nothing like what it would if you actually believed in this book, you must say that you do, scream that you do if necessary, for no one must ever, ever find out that it is not a proper belief.

Another common trick is to try to convince people that their beliefs do affect their behavior, even when they plainly don’t. We typically use the words “religious” and “moral” almost interchangeably, when they are at best orthogonal and arguably even opposed. Part of why so many people seem to hold so rigidly to their belief-in-belief is that they think that morality cannot be justified without recourse to religion; so even though on some level they know religion doesn’t make sense, they are afraid to admit it, because they think that means admitting that morality doesn’t make sense. If you are even tempted by this inference, I present to you the entire history of ethical philosophy. Divine Command theory has been a minority view among philosophers for centuries.

Indeed, it is precisely because your moral beliefs are not based on your religion that you feel a need to resort to that defense of your religion. If you simply believed religion as a proper belief, you would base your moral beliefs on your religion, sure enough; but you’d also defend your religion in a fundamentally different way, not as something you’re supposed to believe, not as a belief that makes you a good person, but as something that is just actually true. (And indeed, many fanatics actually do defend their beliefs in those terms.) No one ever uses the argument that if we stop believing in chairs we’ll all become murderers, because chairs are actually there. We don’t believe in belief in chairs; we believe in chairs.

And really, if such a belief were completely isolated, it would not be a problem; it would just be this weird thing you say you believe that everyone really knows you don’t and it doesn’t affect how you behave, but okay, whatever. The problem is that it’s never quite isolated from your proper beliefs; it does affect some things—and in particular it can offer a kind of “support” for other real, proper beliefs that you do have, support which is now immune to rational criticism.

For example, as I already mentioned: Most of these “Bible-believing Christians” do, in fact, morally oppose homosexuality, and say that their reason for doing so is based on the Bible. This cannot literally be true, because if they actually believed the Bible they wouldn’t merely want gay marriage taken off the books, they’d want a mass pogrom of 4-10% of the population (depending on how you count), on a par with the Holocaust. Fortunately their proper belief that genocide is wrong is overriding. But they have no such overriding belief supporting the moral permissibility of homosexuality or the personal liberty of marriage rights, so the very tenuous link to their belief-in-belief in the Bible is sufficient to tilt their actual behavior.

Similarly, if the people I meet who say they think maybe 9/11 was an inside job by our government really believed that, they would most likely be trying to organize a violent revolution; any government willing to murder 3,000 of its own citizens in a false flag operation is one that must be overturned and can probably only be overturned by force. At the very least, they would flee the country. If they lived in a country where the government is actually like that, like Zimbabwe or North Korea, they wouldn’t fear being dismissed as conspiracy theorists, they’d fear being captured and executed. The very fact that you live within the United States and exercise your free speech rights here says pretty strongly that you don’t actually believe our government is that evil. But they wouldn’t be so outspoken about their conspiracy theories if they didn’t at least believe in believing them.

I also have to wonder how many of our politicians who lean on the Constitution as their source of authority have actually read the Constitution, as it says a number of rather explicit things against, oh, say, the establishment of religion (First Amendment) or searches and arrests without warrants (Fourth Amendment) that they don’t much seem to care about. Some are better about this than others; Rand Paul, for instance, actually takes the Constitution pretty seriously (and is frequently found arguing against things like warrantless searches as a result!), but Ted Cruz for example says he has spent decades “defending the Constitution”, despite saying things like “America is a Christian nation” that directly violate the First Amendment. Cruz doesn’t really seem to believe in the Constitution; but maybe he believes in believing the Constitution. (It’s also quite possible he’s just lying to manipulate voters.)

 

How do we reach people with ridiculous beliefs?

Oct 16, JDN 2457678

One of the most unfortunate facts in the world—indeed, perhaps the most unfortunate fact, from which most other unfortunate facts follow—is that it is quite possible for a human brain to sincerely and deeply hold a belief that is, by any objective measure, totally and utterly ridiculous.

And to be clear, I don’t just mean false; I mean ridiculous. People having false beliefs is an inherent part of being finite beings in a vast and incomprehensible universe. Monetarists are wrong, but they are not ludicrous. String theorists are wrong, but they are not absurd. Multiregionalism is wrong, but it is not nonsensical. Indeed, I, like anyone else, am probably wrong about a great many things, though of course if I knew which ones I’d change my mind. (Indeed, I admit a small but nontrivial probability of being wrong about the three things I just listed.)

I mean ridiculous beliefs. I mean that any rational, objective assessment of the probability of that belief being true would be vanishingly small, 1 in 1 million at best. I’m talking about totally nonsensical beliefs, beliefs that go against overwhelming evidence; some of them are outright incoherent. Yet millions of people go on believing them.

For example, over 40% of Americans believe that human beings were created by God in their present form less than 10,000 years ago, and typically offer no evidence for this besides “The Bible says so.” (Strictly speaking, even that isn’t true—standard interpretations of the Bible say so. The Bible itself contains no clearly stated date for creation.) This despite the absolutely overwhelming body of evidence supporting the theory of evolution by Darwinian natural selection.

Over a third of Americans don’t believe in global warming, which is not only a complete consensus among all credible climate scientists based on overwhelming evidence, but one of the central threats facing human civilization over the 21st century. On a global scale this is rather like standing on a train track and saying you don’t believe in trains. (Or like the time my mother once told me about, when an alert went out to her office that there was a sniper in the area, indiscriminately shooting at civilians, and one of her co-workers refused to join the security protocol and declared smugly, “I don’t believe in snipers.” Fortunately, he was unharmed in the incident. This time.)

1/4 of Americans believe in astrology, and 1/4 of Americans believe that aliens have visited the Earth. (Not sure if it’s the same 1/4. Probably considerable but not total overlap.) The existence of extraterrestrial civilizations somewhere in this mind-bogglingly (perhaps infinitely) vast universe has probability 1. But visiting us is quite another matter, and there is absolutely no credible evidence of it. As for astrology? I shouldn’t have to explain why the position of Jupiter, much less Sirius, on your birthday is not a major influence on your behavior or life outcomes. Your obstetrician exerted a far stronger tidal pull on you than Jupiter did at the moment you were born, and vastly more gravitational force than Sirius ever has.
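
If you want to see the Newtonian arithmetic, here is a rough sketch. The masses and distances are assumed round values (Jupiter near its closest approach, an obstetrician of ordinary size at arm’s length, Sirius at about two solar masses and 8.6 light-years), so only the orders of magnitude matter. Jupiter’s raw pull actually edges out the obstetrician’s, but the tidal effect—the gradient in that pull across your body, which is the only part you could even in principle feel—is dominated by nearby objects, and Sirius loses on every count.

```python
# Rough Newtonian comparison; every mass and distance below is an assumed round value.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

bodies = {
    # name:          (mass in kg, distance in m)
    "obstetrician": (80, 0.3),         # ~80 kg, about an arm's length away
    "Jupiter":      (1.9e27, 6.3e11),  # near its closest approach to Earth
    "Sirius":       (4.1e30, 8.1e16),  # ~2 solar masses, ~8.6 light-years
}

for name, (mass, dist) in bodies.items():
    pull = G * mass / dist**2       # gravitational acceleration at that distance, m/s^2
    tidal = 2 * G * mass / dist**3  # tidal gradient: how the pull varies across your body, per second^2
    print(f"{name:12s} pull = {pull:.1e} m/s^2   tidal gradient = {tidal:.1e} /s^2")
```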

The majority of Americans believe in telepathy or extrasensory perception. I confess that I actually did when I was very young, though I think I disabused myself of this around the time I stopped believing in Santa Claus.

I love the term “extrasensory perception” because it is such an oxymoron; if you’re perceiving, it is via senses. “Sixth sense” is better, except that we actually already have at least nine senses: The ones you probably know, vision (sight), audition (hearing), olfaction (smell), gustation (taste), and tactition (touch)—and the ones you may not know, thermoception (heat), proprioception (body position), equilibrioception (balance), and nociception (pain). These can probably be subdivided further—vision and spatial reasoning are dissociated in blind people, heat and cold are separate nerve pathways, pain and itching are distinct systems, and there are a variety of different sensors used for proprioception. So we really could have as many as twenty senses, depending on how you’re counting.

What about telepathy? Well, that is not actually impossible in principle; it’s just that there’s no evidence that humans actually do it. Smartphones do it almost literally constantly, transmitting data via high-frequency radio waves back and forth to one another. We could have evolved some sort of radio transceiver organ (perhaps an offshoot of an electric defense organ such as that of electric eels), but as it turns out we didn’t. Actually in some sense—which some might say is trivial, but I think it’s actually quite deep—we do have telepathy; it’s just that we transmit our thoughts not via radio waves or anything more exotic, but via sound waves (speech) and marks on paper (writing) and electronic images (what you’re reading right now). Human beings really do transmit our thoughts to one another, and this truly is a marvelous thing we should not simply take for granted (it is one of our most impressive feats of Mundane Magic); but somehow I don’t think that’s what people mean when they say they believe in psychic telepathy.

And lest you think this is a uniquely American phenomenon: The particular beliefs vary from place to place, but bizarre beliefs abound worldwide, from conspiracy theories in the UK to 9/11 “truthers” in Canada to HIV denialism in South Africa (fortunately on the wane). The American examples are more familiar to me and most of my readers are Americans, but wherever you are reading from, there are probably ridiculous beliefs common there.

I could go on, listing more objectively ridiculous beliefs that are surprisingly common; but the more I do that, the more I risk alienating you, in case you should happen to believe one of them. When you add up the dizzying array of ridiculous beliefs one could hold, odds are that most people you’d ever meet will have at least one of them. (“Not me!” you’re thinking; and perhaps you’re right. Then again, I’m pretty sure that the 4% or so of people who believe in the Reptilians think the same thing.)

Which brings me to my real focus: How do we reach these people?

One possible approach would be to just ignore them, leave them alone, or go about our business with them as though they did not have ridiculous beliefs. This is in fact the right thing to do under most circumstances, I think; when a stranger on the bus starts blathering about how the lizard people are going to soon reveal themselves and establish the new world order, I don’t think it’s really your responsibility to persuade that person to realign their beliefs with reality. Nodding along quietly would be acceptable, and it would be above and beyond the call of duty to simply say, “Um, no… I’m fairly sure that isn’t true.”

But this cannot always be the answer, if for no other reason than the fact that we live in a democracy, and people with ridiculous beliefs frequently vote according to them. Then people with ridiculous beliefs can take office, and make laws that affect our lives. Actually this would be true even if we had some other system of government; there’s nothing in particular to stop monarchs, hereditary senates, or dictators from believing ridiculous things. If anything, the opposite; dictators are known for their eccentricity precisely because there are no checks on their behavior.

At some point, we’re going to need to confront the fact that over half of the Republicans in the US Congress do not believe in climate change, and are making policy accordingly, rolling drunk on petroleum and treating the hangover with the hair of the dog.

We’re going to have to confront the fact that school boards in Southern states, particularly Texas, continually vote to censor their dreaded Darwinian evolution out of biology textbooks.

So we really do need to find a way to talk to people who have ridiculous beliefs, and engage with them, understand why they think the way they do, and then—hopefully at least—tilt them a little bit back toward rational reality. You will not be able to change their mind completely right away, but if each of us can at least chip away at their edifice of absurdity, then all together perhaps we can eventually bring them to enlightenment.

Of course, a good start is probably not to say you think that their beliefs are ridiculous, because people get very defensive when you do that, even—perhaps especially—when it’s true. People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.

This is the link that we must somehow break. We must show people that they are not defined by their beliefs, that it is okay to change your mind. We must be patient and compassionate—sometimes heroically so, as people spout offensive nonsense in our faces, sometimes offensive nonsense that directly attacks us personally. (“Atheists deserve Hell”, taken literally, would constitute something like a death threat, except infinitely worse. To the speaker it is very likely just a slogan; but to the atheist listening, it says that you believe they are so evil, so horrible, that they deserve eternal torture for believing what they do. And you get mad when we say your beliefs are ridiculous?)

We must also remind people that even very smart people can believe very dumb things—indeed, I’d venture a guess that most dumb things are in fact believed by smart people. Even the most intelligent human beings can only glimpse a tiny fraction of the universe, and all human brains are subject to the same fundamental limitations, the same core heuristics and biases. Make it clear that you’re saying you think their beliefs are false, not that they are stupid or crazy. And make it clear to yourself that this is indeed what you believe, because it ought to be. It can be tempting to think that only an idiot would believe something so ridiculous—and you are safe, for you are no idiot!—but the truth is far more humbling: Human brains are subject to many flaws, and guarding the fortress of the mind against error and deceit is a 24-7 occupation. Indeed, I hope that you will ask yourself: “What beliefs do I hold that other people might find ridiculous? Are they, in fact, ridiculous?”

Even then, it won’t be easy. Most people are strongly resistant to any change in belief, however small, and it is in the nature of ridiculous beliefs that they require radical changes in order to restore correspondence with reality. So we must try in smaller steps.

Maybe don’t try to convince them that 9/11 was actually the work of Osama bin Laden; start by pointing out that yes, steel does bend much more easily at the temperature at which jet fuel burns. Maybe don’t try to persuade them that astrology is meaningless; start by pointing out the ways that their horoscope doesn’t actually seem to fit them, or could be made to fit anybody. Maybe don’t try to get across the real urgency of climate change just yet, and instead point out that the “study” they read showing it was a hoax was clearly funded by oil companies, who would perhaps have a vested interest here. And as for ESP? I think it’s a good start just to point out that we have more than five senses already, and there are many wonders of the human brain that actual scientists know about well worth exploring—so who needs to speculate about things that have no scientific evidence?

No, Scandinavian countries aren’t parasites. They’re just… better.

Oct 1, JDN 2457663

If you’ve been reading my blogs for a while, you likely have noticed me occasionally drop the hashtag #ScandinaviaIsBetter; I am in fact quite enamored of the Scandinavian (or Nordic more generally) model of economic and social policy.

But this is not a consensus view (except perhaps within Scandinavia itself), and I haven’t actually gotten around to presenting a detailed argument for just what it is that makes these countries so great.

I was inspired to do this by discussion with a classmate of mine (who shall remain nameless) who emphatically disagreed; he actually seems to think that American economic policy is somewhere near optimal (and to be fair, it might actually be near optimal, in the broad space of all possible economic policies—we are not Maoist China, we are not Somalia, we are not a nuclear wasteland). He couldn’t disagree with the statistics on how wealthy and secure and happy Scandinavian countries are, so instead he came up with this: “They are parasites.”

What he seemed to mean by this is that somehow Scandinavian countries achieve their success by sapping wealth from other countries, perhaps the rest of Europe, perhaps the world more generally. On this view, it’s not that Norway and Denmark are rich because they have basically figured out economic policy; no, they are somehow draining those riches from elsewhere.

This could scarcely be further from the truth.

But first, consider a couple of countries that are parasites, at least partially: Luxembourg and Singapore.

Singapore has an enormous trade surplus: 5.5 billion SGD per month, which is $4 billion per month, so almost $50 billion per year. They also have a positive balance of payments of $61 billion per year. Singapore’s total GDP is about $310 billion, so these are not small amounts. What does this mean? It means that Singapore is taking in a lot more money than they are spending out. They are effectively acting as mercantilists, or if you like as a profit-seeking corporation.

Moreover, Singapore is totally dependent on trade: their exports are over $330 billion per year, and their imports are over $280 billion. You may recognize each of these figures as comparable to the entire GDP of the country. Yes, their total trade is 200% of GDP. They aren’t really so much a country as a gigantic trading company.
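Those ratios are easy to sanity-check. Here’s a minimal Python sketch using only the rough figures quoted above; the exchange rate of roughly 0.73 USD per SGD is my own assumption, implied by the 5.5 billion SGD ≈ $4 billion conversion:

# Sanity-check Singapore's trade figures using the rough numbers quoted above.
sgd_surplus_per_month = 5.5e9        # monthly trade surplus, in SGD
usd_per_sgd = 0.73                   # assumed exchange rate (implied by 5.5B SGD ~ $4B)
gdp = 310e9                          # total GDP, USD
exports, imports = 330e9, 280e9      # annual exports and imports, USD

annual_surplus_usd = sgd_surplus_per_month * usd_per_sgd * 12
trade_share_of_gdp = (exports + imports) / gdp

print(f"Annual trade surplus: ${annual_surplus_usd / 1e9:.0f} billion")   # ~$48 billion
print(f"Total trade as a share of GDP: {trade_share_of_gdp:.0%}")         # ~197%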

What about Luxembourg? Well, they have a trade deficit of about 420 million Euros per year, which is about $560 million. Their imports total about $2 billion per year, and their exports about $1.5 billion. Since Luxembourg’s total GDP is $56 billion, these aren’t unreasonably huge figures (total trade is about 6% of GDP); so Luxembourg isn’t a parasite in the sense that Singapore is.

No, what makes Luxembourg a parasite is the fact that 36% of their GDP is due to finance. Compare the US, where 12% of our GDP is finance—and we are clearly overfinancialized. Over a third of Luxembourg’s income doesn’t involve actually… doing anything. They hold onto other people’s money and place bets with it. Even insofar as finance can be useful, it should be only very slightly profitable, and definitely not more than 10% of GDP. As Stiglitz and Krugman agree (and both are Nobel Laureate economists), banking should be boring.

Do either of these arguments apply to Scandinavia? Let’s look at trade first. Denmark’s imports total about 42 billion DKK per month, which is about $70 billion per year. Their exports total about $90 billion per year. Denmark’s total GDP is $330 billion, so these numbers are quite reasonable. What are their main sectors? Manufacturing, farming, and fuel production. Notably, not finance.

Similar arguments hold for Sweden and Norway. They may be small countries, but they have diversified economies and strong production of real economic goods. Norway is probably overly dependent on oil exports, but they are specifically trying to move away from that right now. Even as it is, only about $90 billion of their $150 billion in exports are related to oil, and exports in general are only about 35% of GDP, so oil is about 20% of Norway’s GDP. Compare that to Saudi Arabia, which has 90% of its exports related to oil, accounting for 45% of GDP. If oil were to suddenly disappear, Norway would lose 20% of their GDP, dropping their per-capita GDP… all the way to the same as the US. (Terrifying!) But Saudi Arabia would suffer a total economic collapse, and their per-capita GDP would fall from where it is now, about the same as the US, to about the same as Greece.
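To see where that 20% figure comes from, here’s a quick sketch of the arithmetic, using only the shares quoted above (so it’s only as accurate as those round numbers):

# Oil's share of GDP = (oil exports / total exports) * (exports / GDP).
norway_oil_share_of_exports = 90 / 150     # about 60% of exports are oil-related
norway_exports_share_of_gdp = 0.35
print(f"Norway: oil is about "
      f"{norway_oil_share_of_exports * norway_exports_share_of_gdp:.0%} of GDP")   # ~21%

# For Saudi Arabia we're told oil is 90% of exports and 45% of GDP,
# which implies exports overall are about half of GDP.
saudi_exports_share_of_gdp = 0.45 / 0.90
print(f"Saudi Arabia: exports are about {saudi_exports_share_of_gdp:.0%} of GDP")  # ~50%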

And at least oil actually does things. Oil exporting countries aren’t parasites so much as they are drug dealers. The world is “rolling drunk on petroleum”, and until we manage to get sober we’re going to continue to need that sweet black crude. Better we buy it from Norway than Saudi Arabia.

So, what is it that makes Scandinavia so great? Why do they have the highest happiness ratings, the lowest poverty rates, the best education systems, the lowest unemployment rates, the best social mobility and the highest incomes? To be fair, in most of these not literally every top spot is held by a Scandinavian country; Canada does well, Germany does well, the UK does well, even the US does well. Unemployment rates in particular deserve further explanation, because a lot of very poor countries report surprisingly low unemployment rates, such as Cambodia and Laos.

It’s also important to recognize that even great countries can have serious flaws, and the remnants of the feudal system in Scandinavia—especially in Sweden—still contribute to substantial inequality of wealth and power.

But in general, I think if you assembled a general index of overall prosperity of a country (or simply used one that already exists like the Human Development Index), you would find that Scandinavian countries are disproportionately represented at the very highest rankings. This calls out for some sort of explanation.

Is it simply that they are so small? They are certainly quite small; Norway and Denmark each have fewer people than the core of New York City, and Sweden has slightly more people than the Chicago metropolitan area. Put them all together, add in Finland and Iceland (which aren’t quite Scandinavia), and all together you have about the population of the New York City Combined Statistical Area.

But some of the world’s smallest countries are also its poorest. Samoa and Kiribati each have populations comparable to the city of Ann Arbor and per-capita GDPs 1/10 that of the US. Eritrea is the same size as Norway, and 70 times poorer. Burundi is slightly larger than Sweden, and has a per-capita GDP PPP of only $3.14 per day.

There’s actually a good statistical reason to expect that the smallest countries should vary the most in their incomes; you’re averaging over a smaller sample so you get more variance in the estimate. But this doesn’t explain why Norway is rich and Eritrea is poor. Incomes aren’t assigned randomly. This might be a reason to try comparing Norway to specifically New York City or Los Angeles rather than to the United States as a whole (Norway still does better, in case you were wondering—especially compared to LA); but it’s not a reason to say that Norway’s wealth doesn’t really count.
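If you want to see that statistical point in action, here’s a toy simulation; the lognormal income distribution and the population sizes are made up purely for illustration, not taken from any real data:

import numpy as np

rng = np.random.default_rng(0)

# Draw "individual incomes" from an arbitrary lognormal distribution, group them
# into fictional countries of different sizes, and see how much the resulting
# per-capita (mean) incomes vary from country to country.
for population in [100, 10_000, 1_000_000]:
    per_capita = [rng.lognormal(mean=10, sigma=1, size=population).mean()
                  for _ in range(200)]
    print(f"population {population:>9,}: std of per-capita income = {np.std(per_capita):,.0f}")

# The spread shrinks roughly like 1/sqrt(population), so the smallest countries
# land at the extremes (both rich and poor) far more often than the largest ones.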

Is it because they are ethnically homogeneous? Yes, relatively speaking; but perhaps not as much as you imagine. 14% of Sweden’s population is immigrants, of which 64% are from outside the EU. 10% of Denmark’s population is immigrants, of which 66% came from non-Western countries. Immigrants are 13% of Norway’s population, of which half are from non-Western countries.

That’s certainly more ethnically homogeneous than the United States; 13% of our population is immigrants, which may sound comparable, but almost all non-immigrants in Scandinavia are of indigenous Nordic descent, all “White” by the usual classification. Meanwhile the United States is 64% non-Hispanic White, 16% Hispanic, 12% Black, 5% Asian, and 1% Native American or Pacific Islander.

Scandinavian countries are actually by some measures less homogeneous than the US in terms of religion, however; only 4% of Americans are not Christian (78.5%), atheist (16.1%), or Jewish (1.7%), and only 0.6% are Muslim. In Sweden, on the other hand, 60% of the population is nominally Lutheran, but 80% is atheist, and 5% of the population is Muslim. So if you think of Christian/Muslim as the sharp divide (theologically this doesn’t make a whole lot of sense, but it seems to be the cultural norm in vogue), then Sweden has more religious conflict to worry about than the US does.

Moreover, there are some very ethnically homogeneous countries that are in horrible shape. North Korea is almost completely ethnically homogeneous, for example, as is Haiti. There does seem to be a correlation between higher ethnic diversity and lower economic prosperity, but Canada and the US are vastly more diverse than Japan and South Korea yet significantly richer. So clearly ethnicity is not the whole story here.

I do think ethnic homogeneity can partly explain why Scandinavian countries have the good policies they do; because humans are tribal, ethnic homogeneity engenders a sense of unity and cooperation, a notion that “we are all in this together”. That egalitarian attitude makes people more comfortable with some of the policies that make Scandinavia what it is, which I will get into at the end of this post.

What about culture? Is there something about Nordic ideas, those Viking traditions, that makes Scandinavia better? Miles Kimball has argued this; he says we need to import “hard work, healthy diets, social cohesion and high levels of trust—not Socialism”. And truth be told, it’s hard to refute this assertion, since it’s very difficult to isolate and control for cultural variables even though we know they are important.

But this difficulty in falsification is a reason to be cautious about such a hypothesis; it should be a last resort when all the more testable theories have been ruled out. I’m not saying culture doesn’t matter; it clearly does. But unless you can test it, “culture” becomes a theory that can explain just about anything—which means that it really explains nothing.

The “social cohesion and high levels of trust” part actually can be tested to some extent—and it is fairly well supported. High levels of trust are strongly correlated with economic prosperity. But we don’t really need to “import” that; the US is already near the top of the list of countries with the highest levels of trust.

I can’t really disagree with “good diet”, except to say that almost everywhere eats a better diet than the United States. The homeland of McDonald’s and Coca-Cola is frankly quite dystopian when it comes to rates of heart disease and diabetes. Given our horrible diet and ludicrously inefficient healthcare system, the only reason we live as long as we do is that we are an extremely rich country (so we can afford to pay the most for healthcare, for certain definitions of “afford”), and almost no one here smokes anymore. But good diet isn’t so much Scandinavian as it is… un-American.

But as for “hard work”, he’s got it backwards; the average number of work hours per week is 33 in Denmark and Norway, compared to 38 in the US. Among full-time workers in the US, the average number of hours per week is a whopping 47. Working hours in the US are much more intensive than anywhere in Europe, including Scandinavia. Though of course we are nowhere near the insane work addiction suffered by most East Asian countries; lately South Korea and Japan have been instituting massive reforms to try to get people to stop working themselves to death. And not surprisingly, work-related stress is a leading cause of death in the United States. If anything, we need to import some laziness, or at least a sense of work-life balance. (Indeed, I’m fairly sure that the only reason he said “hard work” is that it’s a cultural Applause Light in the US; being against hard work is like being against the American Flag or homemade apple pie. At this point, “we need more hard work” isn’t so much an assertion as it is a declaration of tribal membership.)

But none of these things adequately explains why poverty and inequality are so much lower in Scandinavia than in the United States, and there’s really a quite simple explanation.

Why is it that #ScandinaviaIsBetter? They’re not afraid to make rich people pay higher taxes so they can help poor people.

In the US, this idea of “redistribution of wealth” is anathema, even taboo; simply accusing a policy of being “redistributive” or “socialist” is for many Americans a knock-down argument against that policy. In Denmark, “socialist” is a meaningful descriptor; some policies are “socialist”, others “capitalist”, and these aren’t particularly weighted terms; it’s like saying here that a policy is “Keynesian” or “Monetarist”, or if that’s too obscure, saying that it’s “liberal” or “conservative”. People will definitely take sides, and it is a matter of political importance—but it’s inside the Overton Window. It’s not almost unthinkable, the way it is here.

If culture has an effect here, it likely comes from Scandinavia’s long traditions of egalitarianism. Going at least back to the Vikings, in theory at least (clearly not always in practice), people—or at least fellow Scandinavians—were considered equal participants in society, no one “better” or “higher” than anyone else. Even today, it is impolite in Denmark to express pride at your own accomplishments; there’s a sense that you are trying to present yourself as somehow more deserving than others. Honestly this attitude seems unhealthy to me, though perhaps preferable to the unrelenting narcissism of American society; but insofar as culture is making Scandinavia better, it’s almost certainly because this thoroughgoing sense of egalitarianism underlies all their economic policy. In the US, the rich are brilliant and the poor are lazy; in Denmark, the rich are fortunate and the poor are unlucky. (Which theory is more accurate? Donald Trump. I rest my case.)

To be clear, Scandinavia is not communist; and they are certainly not Stalinist. They don’t believe in total collectivization of industry, or complete government control over the economy. They don’t believe in complete, total equality, or even a hard cap on wealth: Stefan Persson is an 11-figure billionaire. Does he pay high taxes, living in Sweden? Yes he does, considerably higher than he’d pay in the US. He seems to be okay with that. Why, it’s almost like his marginal utility of wealth is now negligible.

Scandinavian countries also don’t try to micromanage your life in the way often associated with “socialism”–in fact I’d say they do it less than we do in the US. Here we have Republicans who want to require drug tests for food stamps even though that literally wastes money and helps no one; there they just provide a long list of government benefits for everyone free of charge. They just held a conference in Copenhagen to discuss the possibility of transitioning many of these benefits into a basic income; and basic income is the least intrusive means of redistributing wealth.

In fact, because Scandinavian countries tax differently, it’s not necessarily the case that people always pay higher taxes there. But they pay more transparent taxes, and taxes with sharper incidence. Denmark’s corporate tax rate is only 22% compared to 35% in the US; but their top personal income tax bracket is 59% while ours is only 39.6% (though it can rise over 50% with some state taxes). Denmark also has a land value tax and a VAT, both of which most economists have clamored for for generations. (The land value tax I totally agree with; the VAT I’m a little more ambivalent about.) Moreover, filing your taxes in Denmark is not a month-long stress marathon of gathering paperwork, filling out forms, and fearing that you’ll get something wrong and be audited as it is in the US; they literally just send you a bill. You can contest it, but most people don’t. You just pay it and you’re done.

Now, that does mean the government is keeping track of your income; and I might think that Americans would never tolerate such extreme surveillance… and then I remember that PRISM is a thing. Apparently we’re totally fine with the NSA reading our emails, but God forbid the IRS just fill out our 1040s for us (that they are going to read anyway). And there’s no surveillance involved in requiring retail stores to incorporate sales tax into listed price like they do in Europe instead of making us do math at the cash register like they do here. It’s almost like Americans are trying to make taxes as painful as possible.

Indeed, I think Scandinavian socialism is a good example of how high taxes are a sign of a free society, not an authoritarian one. Taxes are a minimal incursion on liberty. High taxes are how you fund a strong government and maintain extensive infrastructure and public services while still being fair and following the rule of law. The lowest tax rates in the world are in North Korea, which has ostensibly no taxes at all; the government just confiscates whatever they decide they want. Taxes in Venezuela are quite low, because the government just owns all the oil refineries (and also uses multiple currency exchange rates to arbitrage seigniorage). US taxes are low by First World standards, but not by world standards, because we combine a free society with a staunch opposition to excessive taxation. Most of the rest of the free world is fine with paying a lot more taxes than we do. In fact, even using Heritage Foundation data, there is a clear positive correlation between higher tax rates and higher economic freedom:
Graph: Heritage Foundation Economic Freedom Index and tax burden

What’s really strange, though, is that most Americans actually support higher taxes on the rich. They often have strange or even incoherent ideas about what constitutes “rich”; I have extended family members who have said they think $100,000 is an unreasonable amount of money for someone to make, yet somehow are totally okay with Donald Trump making $300,000,000. The chant “we are the 99%” has always been off by a couple orders of magnitude; the plutocrat rentier class is the top 0.01%, not the top 1%. The top 1% consists mainly of doctors and lawyers and engineers; the top 0.01%, to a man—and they are nearly all men, in fact White men—either own corporations or work in finance. But even adjusting for all this, it seems like at least a bare majority of Americans are all right with “redistributive” “socialist” policies—as long as you don’t call them that.

So I suppose that’s sort of what I’m trying to do; don’t think of it as “socialism”. Think of it as #ScandinaviaIsBetter.

Two terms in marginal utility of wealth

JDN 2457569

This post is going to be a little wonkier than most; I’m actually trying to sort out my thoughts and draw some public comment on a theory that has been dancing around my head for a while. The original idea of separating terms in marginal utility of wealth was actually suggested by my boyfriend, and from there I’ve been trying to give it some more mathematical precision to see if I can come up with a way to test it experimentally. My thinking is also influenced by a paper Miles Kimball wrote about the distinction between happiness and utility.

There are lots of ways one could conceivably spend money—everything from watching football games to buying refrigerators to building museums to inventing vaccines. But insofar as we are rational (and we are after all about 90% rational), we’re going to try to spend our money in such a way that its marginal utility is approximately equal across various activities. You’ll buy one refrigerator, maybe two, but not seven, because the marginal utility of refrigerators drops off pretty fast; instead you’ll spend that money elsewhere. You probably won’t buy a house that’s twice as large if it means you can’t afford groceries anymore. I don’t think our spending is truly optimal at maximizing utility, but I think it’s fairly good.

Therefore, it doesn’t make much sense to break down marginal utility of wealth into all these different categories—cars, refrigerators, football games, shoes, and so on—because we already do a fairly good job of equalizing marginal utility across all those different categories. I could see breaking it down into a few specific categories, such as food, housing, transportation, medicine, and entertainment (and this definitely seems useful for making your own household budget); but even then, I don’t get the impression that most people routinely spend too much on one of these categories and not enough on the others.

However, I can think of two quite different fundamental motives behind spending money, which I think are distinct enough to be worth separating.

One way to spend money is on yourself, raising your own standard of living, making yourself more comfortable. This would include both football games and refrigerators, really anything that makes your life better. We could call this the consumption motive, or maybe simply the self-directed motive.

The other way is to spend it on other people, which, depending on your personality, can take either the form of philanthropy to help others, or as a means of self-aggrandizement to raise your own relative status. It’s also possible to do both at the same time in various combinations; while the Gates Foundation is almost entirely philanthropic and Trump Tower is almost entirely self-aggrandizing, Carnegie Hall falls somewhere in between, being at once a significant contribution to our society and an obvious attempt to bring praise and adulation to Carnegie himself. I would also include spending on Veblen goods that are mainly to show off your own wealth and status in this category. We can call this spending the philanthropic/status motive, or simply the other-directed motive.

There is some spending which combines both motives: A car is surely useful, but a Ferrari is mainly for show—but then, a Lexus or a BMW could be either to show off or really because you like the car better. Some form of housing is a basic human need, and bigger, fancier houses are often better, but the main reason one builds mansions in Beverly Hills is to demonstrate to the world that one is fabulously rich. This complicates the theory somewhat, but basically I think the best approach is to try to separate a sort of “spending proportion” on such goods, so that say $20,000 of the Lexus is for usefulness and $15,000 is for show. Empirically this might be hard to do, but theoretically it makes sense.

One of the central mysteries in cognitive economics right now is the fact that self-reported happiness rises very little, if at all, as income increases (a finding recently replicated even in poor countries, where we might not expect it to be true), while self-reported satisfaction continues to rise indefinitely. A number of theories have been proposed to explain this apparent paradox.

This model might just be able to account for that, if by “happiness” we’re really talking about the self-directed motive, and by “satisfaction” we’re talking about the other-directed motive. Self-reported happiness seems to obey a rule that $100 is worth as much to someone with $10,000 as $25 is to someone with $5,000, or $400 to someone with $20,000.

Self-reported satisfaction seems to obey a different rule, such that each unit of additional satisfaction requires a roughly equal proportional increase in income.

By having a utility function with two terms, we can account for both of these effects. Total utility will be u(x), happiness h(x), and satisfaction s(x).

u(x) = h(x) + s(x)

To obey the above rule, happiness must obey harmonic utility, like this, for some constants h0 and r:

h(x) = h0 – r/x

Proof of this is straightforward, though to keep it simple I’ve hand-waved why it’s a power law:

Given

h'(2x) = 1/4 h'(x)

Let

h'(x) = r x^n

h'(2x) = r (2x)^n

r (2x)^n = 1/4 r x^n

2^n = 1/4

n = -2

h'(x) = r/x^2

h(x) = – r x^(-1) + C

h(x) = h0 – r/x
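As a quick numerical check (a sketch with an arbitrary r; h0 drops out of every comparison), this confirms both the quartering rule and the dollar-equivalence examples above:

# Verify that h(x) = h0 - r/x satisfies h'(2x) = (1/4) h'(x), and that the
# windfall examples above ($100 at $10,000, $25 at $5,000, $400 at $20,000)
# are all worth roughly the same amount of happiness.
r = 1.0

def h(x):
    return -r / x            # h0 omitted; it cancels in every difference

def h_prime(x):
    return r / x**2

print(h_prime(2 * 5000) / h_prime(5000))     # 0.25, i.e. exactly 1/4

print(h(10_000 + 100) - h(10_000))   # ~9.90e-07
print(h(5_000 + 25) - h(5_000))      # ~9.95e-07
print(h(20_000 + 400) - h(20_000))   # ~9.80e-07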

Miles Kimball also has some more discussion on his blog about how a utility function of this form works. (His statement about redistribution at the end is kind of baffling though; sure, dollar for dollar, redistributing wealth from the middle class to the poor would produce a higher gain in utility than redistributing wealth from the rich to the middle class. But neither is as good as redistributing from the rich to the poor, and the rich have a lot more dollars to redistribute.)

Satisfaction, however, must obey logarithmic utility, like this, for some constants s0 and k.

The x+1 that I’ll introduce shortly means that it takes slightly less proportionally to have the same effect as your wealth increases, but it allows the function to be equal to s0 at x=0 instead of going to negative infinity. For now, the basic form is:

s(x) = s0 + k ln(x)

Proof of this is very simple, almost trivial:

Given

s'(x) = k/x

s(x) = k ln(x) + s0

Both of these functions actually have a serious problem: as x approaches zero, they go to negative infinity. For self-directed utility this almost makes sense (if your real consumption goes to zero, you die), but it makes no sense at all for other-directed utility; and since there are causes most of us would willingly die for, the disutility of dying should be large, but not infinite.

Therefore I think it’s probably better to use x+1 in place of x:

h(x) = h0 – r/(x+1)

s(x) = s0 + k ln(x+1)

This makes s0 the baseline satisfaction of having no other-directed spending, though the baseline happiness of zero self-directed spending is actually h0 – r rather than just h0. If we want it to be h0, we could use this form instead:

h(x) = h0 + r x/(x+1)

This looks quite different, but actually only differs by a constant.

Therefore, my final answer for the utility of wealth (or possibly income, or spending? I’m not sure which interpretation is best just yet) is actually this:

u(x) = h(x) + s(x)

h(x) = h0 + r x/(x+1)

s(x) = s0 + k ln(x+1)

Marginal utility is then the derivatives of these:

h'(x) = r/(x+1)^2

s'(x) = k/(x+1)

Let’s assign some values to the constants so that we can actually graph these.

Let h0 = s0 = 0, so our baseline is just zero.

Furthermore, let r = k = 1, which would mean that the value of $1 is the same whether spent on yourself or on others, if $1 is all you have. (This is probably wrong, actually, but it’s the simplest to start with. Shortly I’ll discuss what happens as you vary the ratio r/k.)

Here is the result graphed on a linear scale:

Graph: the utility functions on a linear scale

And now, graphed with wealth on a logarithmic scale:

Graph: the utility functions with wealth on a logarithmic scale

As you can see, self-directed marginal utility drops off much faster than other-directed marginal utility, so the amount you spend on others relative to yourself rapidly increases as your wealth increases. If that doesn’t sound right, remember that I’m including Veblen goods as “other-directed”; when you buy a Ferrari, it’s not really for yourself. While proportional rates of charitable donation do not increase as wealth increases (it’s actually a U-shaped pattern, largely driven by poor people giving to religious institutions), they probably should (people should really stop giving to religious institutions! Even the good ones aren’t cost-effective, and some are very, very bad.). Furthermore, if you include spending on relative power and status as the other-directed motive, that kind of spending clearly does proportionally increase as wealth increases—gotta keep up with those Joneses.
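If you’d like to reproduce the graphs above yourself, here is a minimal matplotlib sketch using the same constants (h0 = s0 = 0, r = k = 1); the wealth range plotted is my own arbitrary choice:

import numpy as np
import matplotlib.pyplot as plt

h0 = s0 = 0.0
r = k = 1.0

def h(x):                  # self-directed ("happiness") utility
    return h0 + r * x / (x + 1)

def s(x):                  # other-directed ("satisfaction") utility
    return s0 + k * np.log(x + 1)

x = np.linspace(0, 1_000, 10_001)
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, scale in zip(axes, ["linear", "log"]):
    ax.plot(x, h(x), label="h(x): self-directed")
    ax.plot(x, s(x), label="s(x): other-directed")
    ax.plot(x, h(x) + s(x), label="u(x) = h(x) + s(x)")
    ax.set_xscale(scale)   # the x = 0 point simply isn't drawn on the log panel
    ax.set_xlabel("wealth x")
    ax.legend()
plt.tight_layout()
plt.show()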

If r/k = 1, that basically means you value others exactly as much as yourself, which I think is implausible (maybe some extreme altruists do that, and Peter Singer seems to think this would be morally optimal). r/k < 1 would mean that your first dollars should always go to others rather than yourself, which not even Peter Singer believes. I think r/k = 10 is a more reasonable estimate.

For any given value of r/k, there is an optimal ratio of self-directed versus other-directed spending, which can vary based on your total wealth.

Actually deriving what the optimal proportion would be requires a whole lot of algebra in a post that probably already has too much algebra, but the point is, there is one, and it will depend strongly on the ratio r/k, that is, the overall relative importance of self-directed versus other-directed motivation.

Take a look at this graph, which uses r/k = 10.

Graph: marginal utility of self-directed and other-directed spending, r/k = 10

If you only have 2 to spend, you should spend it entirely on yourself, because up to that point the marginal utility of self-directed spending is always higher. If you have 3 to spend, you should spend most of it on yourself, but a little bit on other people, because after you’ve spent about 2.2 on yourself there is more marginal utility for spending on others than on yourself.

If your available wealth is W, you would spend some amount x on yourself, and then W-x on others:

u(x) = h(x) + s(W-x)

u(x) = r x/(x+1) + k ln(W – x + 1)

Then you take the derivative and set it equal to zero to find the local maximum. I’ll spare you the algebra, but this is the result of that optimization:

x = – 1 – r/(2k) + sqrt(r/k) sqrt(2 + W + r/(4k))

As long as k <= r (which more or less means that you care at least as much about yourself as about others—I think this is true of basically everyone) then as long as W > 0 (as long as you have some money to spend) we also have x > 0 (you will spend at least something on yourself).
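Here’s a short sketch checking that closed form against a brute-force grid search; r = 10 and k = 1 are my illustrative values (only the ratio r/k = 10 matters for the optimum):

import numpy as np

r, k = 10.0, 1.0       # r/k = 10

def u(x, W):
    """Total utility when you spend x on yourself and W - x on others."""
    return r * x / (x + 1) + k * np.log(W - x + 1)

def x_closed_form(W):
    return -1 - r / (2 * k) + np.sqrt(r / k) * np.sqrt(2 + W + r / (4 * k))

for W in [3, 10, 100]:
    grid = np.linspace(0, W, 200_001)
    x_grid = grid[np.argmax(u(grid, W))]
    print(f"W = {W:>3}: closed form x = {x_closed_form(W):.3f}, grid search x = {x_grid:.3f}")

# W = 3 gives x ~ 2.66 and W = 10 gives x ~ 6.04; the two methods agree
# (to within the spacing of the grid).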

Below a certain threshold (depending on r/k), the optimal value of x is greater than W, which means that, if possible, you should be receiving donations from other people and spending them on yourself. (Otherwise, just spend everything on yourself). After that, x < W, which means that you should be donating to others. The proportion that you should be donating smoothly increases as W increases, as you can see on this graph (which uses r/k = 10, a figure I find fairly plausible):

Graph: optimal proportion of wealth spent on others as a function of W, r/k = 10
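A sketch of how one might reproduce that curve (same illustrative r = 10, k = 1, so r/k = 10; below the threshold the unconstrained optimum exceeds W, so the donation is clipped to zero):

import numpy as np
import matplotlib.pyplot as plt

r, k = 10.0, 1.0

def x_self(W):
    """Optimal self-directed spending, clipped so you never spend more than you have."""
    x = -1 - r / (2 * k) + np.sqrt(r / k) * np.sqrt(2 + W + r / (4 * k))
    return np.clip(x, 0, W)

W = np.linspace(0.01, 1_000, 10_000)
plt.semilogx(W, (W - x_self(W)) / W)
plt.xlabel("wealth W")
plt.ylabel("share of W spent on others")
plt.show()
# The share is zero below W ~ 2.2, then rises smoothly toward 1 as W grows.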

While I’m sure no one literally does this calculation, most people do seem to have an intuitive sense that you should donate an increasing proportion of your income to others as your income increases, and similarly that you should pay a higher proportion in taxes. This utility function would justify that—which is something that most proposed utility functions cannot do. In most models there is a hard cutoff where you should donate nothing up to the point where your marginal utility is equal to the marginal utility of donating, and then from that point forward you should donate absolutely everything. Maybe a case can be made for that ethically, but psychologically I think it’s a non-starter.

I’m still not sure exactly how to test this empirically. It’s already quite difficult to get people to answer questions about marginal utility in a way that is meaningful and coherent (people just don’t think about questions like “Which is worth more? $4 to me now or $10 if I had twice as much wealth?” on a regular basis). I’m thinking maybe they could play some sort of game where they have the opportunity to make money at the game, but must perform tasks or bear risks to do so, and can then keep the money or donate it to charity. The biggest problem I see with that is that the amounts would probably be too small to really cover a significant part of anyone’s total wealth, and therefore couldn’t cover much of their marginal utility of wealth function either. (This is actually a big problem with a lot of experiments that use risk aversion to try to tease out marginal utility of wealth.) But maybe with a variety of experimental participants, all of whom we get income figures on?

Is America uniquely… mean?

JDN 2457454

I read this article yesterday which I found both very resonant and very disturbing: At least among First World countries, the United States really does seem uniquely, for lack of a better word, mean.

The formal psychological terminology is social dominance orientation; the political science term is authoritarianism. In economics, we notice the difference due to its effect on income inequality. But all of these concepts are capturing part of a deeper underlying reality that in the age of Trump I am finding increasingly hard to deny. The best predictor of support for Trump is authoritarianism.

Of course I’ve already talked about our enormous military budget; but then Tennessee had to make their official state rifle a .50-caliber weapon capable of destroying light tanks. There is something especially dominant, aggressive, and violent about American culture.

We are certainly not unique in the world as a whole—actually I think the amount of social dominance orientation, authoritarianism, and inequality in the US is fairly similar to the world average. We are unique in our gun ownership, but our military spending proportional to GDP is not particularly high by world standards—we’re just an extremely rich country. But in all these respects we are a unique outlier among First World countries; in many ways we resemble a rich authoritarian petrostate like Qatar rather than a European social democracy like France or the UK. (At least we’re not Saudi Arabia?)

More than other First World cultures, Americans believe in hierarchy; they believe that someone should be on top and other people should be on the bottom. More than that, they believe that people “like us” should be on top and people “not like us” should be on the bottom, however that is defined—often in terms of race or religion, but not necessarily.

Indeed, one of the things I find most baffling about this is that it is often more important to people that others be held down than that they themselves be lifted up. This is the only way I can make sense of the fact that people who have watched their wages be drained into the pockets of billionaires for a generation can think that the most important things to do right now are block out illegal immigrants and deport Muslims.

It seems that people become convinced that their own status, whatever it may be, is deserved: If they are rich, it is obviously because they are so brilliant and hard-working (something Trump clearly believes about himself, being a textbook example of Narcissistic Personality Disorder); if they are poor, it is obviously because they are so incompetent and lazy. Thus, being lifted up doesn’t make sense; why would you give me things I don’t deserve?

But then when they see people who are different from them, they know automatically that those people must be by definition inferior, as all who are Not of Our Tribe are by definition inferior. And therefore, any of them who are rich gained their position through corruption or injustice, and all of them who are poor deserve their fate for being so inferior. Thus, it is most vital to ensure that these Not of Our Tribe are held down from reaching high positions they so obviously do not deserve.

I’m fairly sure that most of this happens at a very deep unconscious level; it calls upon ancient evolutionary instincts to love our own tribe, to serve the alpha male, to fear and hate those of other tribes. These instincts may well have served us 200,000 years ago (then again, they may just have been the best our brains could manage at the time); but they are becoming a dangerous liability today.

As E.O. Wilson put it: “The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology.”

Yet this cannot be a complete explanation, for there is variation in these attitudes. A purely instinctual theory should say that all human cultures have this to an essentially equal degree; but I started this post by pointing out that the United States appears to have a particularly large amount relative to Europe.

So, there must be something in the cultures or institutions of different nations that makes them either enhance or suppress this instinctual tribalism. There must be something that Europe is doing right, the US is doing wrong, and Saudi Arabia is doing very, very wrong.

Well, the obvious one that sticks out at me is religion. It seems fairly obvious to me that Sweden is less religious than the US, which is less religious than Saudi Arabia.

Data does back me up on this. Religiosity isn’t easy to measure, but we have methods of doing so. If we ask people in various countries if religion is very important in their lives, the percentage of people who say yes gives us an indication of how religious that country is.

In Saudi Arabia, 93% say yes. In the United States, 65% say yes. In Sweden, only 17% say yes.

Religiosity tends to be highest in the poorest countries, but the US is an outlier, far too rich for our religion (or too religious for our wealth).

Religiosity also tends to be highest in countries with high inequality—this time, the US fits right in.

The link between religion and inequality is quite clear. It’s harder to say which way the causation runs. Perhaps high inequality makes people cling more to religion as a comfort, and getting rid of religion would only mean taking that comfort away. Or, perhaps religion actually makes people believe more in social dominance, and thus is part of what keeps that high inequality in place. It could also be a feedback loop, in which higher inequality leads to higher religiosity which leads to higher inequality.

That said, I think we actually have some evidence that causality runs from religion to inequality, rather than the other way around. The secularization of France took place around the same time as the French Revolution that overthrew the existing economic system and replaced it with one that had substantially less inequality. Iran’s government became substantially more based on religion in the latter half of the 20th century, and their inequality soared thereafter.

Above all, Donald Trump dominates the evangelical vote, which makes absolutely no sense if religion is a comfort against inequality—but perfect sense if religion solidifies the tendency of people to think in terms of hierarchy and authoritarianism.

This also makes sense in terms of the content of religion, especially Abrahamic religion; read the Bible and the Qur’an, and you will see that their primary goal seems to be to convince you that some people, namely people who believe in this book, are just better than other people, and we should be in charge because God says so. (And you wouldn’t try to argue with God, would you?) They really make no particular effort to convince you that God actually exists; they spend all their argumentative effort on what God wants you to do and who God wants you to put in charge—and for some strange reason it always seems to be the same guys who are writing down “God’s words” in the book! What a coincidence!

If religion is indeed the problem, or a large part of the problem, what can we do about it? That’s the most difficult part. We’ve been making absolutely conclusive rational arguments against religion since literally 300 years before Jesus was even born (there has never been a time in human history in which it was rational for an educated person to believe in Christianity or Islam, for the religions did not come into existence until well after the arguments to refute them were well-known!), and the empirical evidence against theism has only gotten stronger ever since; so that clearly isn’t enough.

I think what we really need to do at this point is confront the moral monopoly that religion has asserted for itself. The “Moral Majority” was neither, but its name still sort of makes sense to us because we so strongly associate being moral with being religious. We use terms like “Christian” and “generous” almost interchangeably. And whenever you get into a debate about religion, shortly after you have thoroughly demolished any shred of empirical credibility religion still had left, you can basically guarantee that the response will be: “But without God, how can you know right from wrong?”

What is perhaps most baffling about this concept of morality so commonplace in our culture is that not only is the command of a higher authority that rewards and punishes you not the highest level of moral development—it is literally the lowest. Of the six stages of moral thinking Kohlberg documented in children, the reward and punishment orientation exemplified by the Bible and the Qur’an is the very first. I think many of these people really truly haven’t gotten past level 1, which is why when you start trying to explain how you base your moral judgments on universal principles of justice and consequences (level 6) they don’t seem to have any idea what you’re talking about.

Perhaps this is a task for our education system (philosophy classes in middle school?), perhaps we need something more drastic than that, or perhaps it is enough that we keep speaking about it in public. But somehow we need to break up the monopoly that religion has on moral concepts, so that people no longer feel ashamed to say that something is morally wrong without being able to cite a particular passage from a particular book from the Iron Age. Perhaps once we can finally make people realize that morality does not depend on religion, we can finally free them from the grip of religion—and therefore from the grip of authoritarianism and social dominance.

If this is right, then the reason America is so mean is that we are so Christian—and people need to realize that this is not a paradoxical statement.