Evolutionary skepticism

Post 572 Mar 9 JDN 2460744

In the last two posts I talked about ways that evolutionary theory could influence our understanding of morality, including the dangerous views of naive moral Darwinism as well as some more reasonable approaches; yet there are other senses of the phrase “morality evolves” that we haven’t considered. One of these is actually quite troubling; were it true, the entire project of morality would be in jeopardy. I’ll call it “evolutionary skepticism”; it says that yes, morality has evolved—and this is reason to doubt that morality is true. Richard Joyce, author of The Evolution of Morality, is of such a persuasion, and he makes a quite compelling case. Joyce’s central point is that evolution selects for fitness, not accuracy; we had reason to evolve in ways that would maximize the survival of our genes, not reasons to evolve in ways that would maximize the accuracy of our moral claims.

This is of course absolutely correct, and it is troubling precisely because we can all see that the two are not necessarily the same thing. It’s easy to imagine many ways that beliefs could evolve that had nothing to do with the accuracy of those beliefs.

But note that word: necessarily. Accuracy and fitness aren’t necessarily aligned—but it could still be that they are, in fact, aligned rather well. Yes, we can imagine ways a brain could evolve that would benefit its fitness without improving its accuracy; but is that actually what happened to our ancestors? Do we live on instinct, merely playing out by rote the lifestyles of our forebears, thinking and living the same way we have for hundreds of millennia?

Clearly not! Behold, you are reading a blog post! It was written on a laptop computer! While these facts may seem perfectly banal to you, they represent an unprecedented level of behavioral novelty, one achieved only by one animal species among millions, and even then only very recently. Human beings are incredibly flexible, incredibly creative, and incredibly intelligent. Yes, we evolved to be this way, of course we did; but so what? We are this way. We are capable of learning new things about the world, gaining in a few short centuries knowledge our forebears could never have imagined. Evolution does not always make animals into powerful epistemic engines—indeed, 99.99999% of the time it does not—but once in a while it does, and we are the result.

Natural selection is quite frugal; it tends to evolve things the easiest way. The way the world is laid out, it seems to be that the easiest way to evolve a brain that survives really well in a wide variety of ecological and social environments is to evolve a brain that is capable of learning to expand its own knowledge and understanding. After all, no other organism has ever been or is ever likely to be as evolutionarily fit as we are; we span the globe, cover a wide variety of ecological niches, and number in the billions and counting. We’ve even expanded beyond the planet Earth, something no other organism could even contemplate. We are successful because we are smart; is it really so hard to believe that we are smart because it made our ancestors successful?

Indeed, it must be this way, or we wouldn’t be able to make sense of the fact that our human brains, evolved for the African savannah a million years ago with minor tweaks since then, are capable of figuring out chess, calculus, writing, quantum mechanics, special relativity, television broadcasting, space travel, and for that matter Darwinian evolution and meta-ethics. None of these things could possibly have been adaptive in our ancestral ecology. They must be spandrels, fitness-neutral side-effects of evolved traits. And just like the original pendentives of San Marco that motivated Gould’s metaphor, what glorious spandrels they are!

Our genes made us better at gathering information and processing that information into correct beliefs, and calculus and quantum mechanics came along for the ride. Our greatest adaptation is to be adaptable; our niche is to need no niche, for we can carve our own.

This is not to abandon evolutionary psychology, for evolution does have a great deal to tell us about psychology. We do have instincts: preprocessing systems built into our sensory organs, innate emotions that motivate us to action, evolved heuristics that we use to respond quickly under pressure. Steven Pinker argues convincingly that language is an evolved instinct—and where would we be without language? Our instincts are essential not only for our survival, but for our rationality.

Staring at a blinking cursor on the blank white page of a word processor, imagining the infinity of texts that could be written upon that page, you could be forgiven for thinking that you were looking at a blank slate. Yet in fact you are staring at the pinnacle of high technology, an extremely complex interlocking system of hardware and software with dozens of components and billions of subcomponents, all precision-engineered for maximum efficiency. The possibilities are endless not because the system is simple and impinged upon by its environment, but because it is complex, and capable of engaging with that environment in order to convert subtle differences in input into vast differences in output. If this is true of a word processor, how much more true it must be of an organism capable of designing and using word processors! It is the very instincts that seem to limit our rationality which have made that rationality possible in the first place. Witness the eternal wisdom of Immanuel Kant:

Misled by such a proof of the power of reason, the demand for the extension of knowledge recognises no limits. The light dove, cleaving the air in her free flight, and feeling its resistance, might imagine that its flight would be still easier in empty space.

The analogy is even stronger than he knew—for brains, like wings, are an evolutionary adaptation! (What would Kant have made of Darwin?) But because our instincts are so powerful, they are self-correcting; they allow us to do science.

Richard Joyce agrees that we are right to think our evolved brains are reasonably reliable when it comes to scientific facts. He has to; otherwise his whole argument would be incoherent. Joyce agrees that we evolved to think 2+2=4 precisely because 2+2=4, and we evolved to think space is 3-dimensional precisely because space is 3-dimensional. Indeed, he must agree that we evolved to think that we evolved because we evolved! Yet for some reason Joyce thinks that this same line of reasoning doesn’t apply to ethics.

But why wouldn’t it? In fact, I think we have more reason to trust our evolved capacities in ethics than we do in other domains of science, because the subject matter of morality—human behavior and social dynamics—is something we have been familiar with all the way back to the savannah. If we evolved to think that theft and murder are bad, why would that happen? I submit it would happen precisely because theft and murder are Pareto-suboptimal unsustainable strategies—that is, precisely because theft and murder are bad. (Don’t worry if you don’t know what I mean by “Pareto-suboptimal” and “unsustainable strategy”; I’ll get to those in later posts.) Once you realize that “bad” is a concept that can ultimately be unpacked into naturalistic facts, all reason to think it is inaccessible to natural selection drops away; natural selection could well have chosen brains that didn’t like murder precisely because murder is bad. Indeed, because morality is ultimately scientific, part of how natural selection could evolve us to be more moral is by evolving us to be more scientific. We are more scientific than apes, and vastly more scientific than cockroaches; we are, indeed, the most scientific animal that has ever lived on Earth.

I do think that our evolved moral instincts are to some degree mistaken or incomplete; but I can make sense of this, in the same way I make sense of the fact that other evolved instincts don’t quite fit what we have discovered in other sciences. For instance, humans have an innate concept of linear momentum that doesn’t quite fit with what we’ve discovered in physics. We tend to presume that objects have an inherent tendency toward rest, though in fact they do not—this is because in our natural environment, friction makes most objects act as if they had such a tendency. Roll a rock along the ground, and it will eventually stop. Run a few miles, and eventually you’ll have to stop too. Most things in our everyday life really do behave as if they had an inherent tendency toward rest. It was only once we realized that friction is itself a force, not present everywhere, that we came to see that linear momentum is conserved in the absence of external forces. (Throw a rock in space, and it will not ever stop. Nor will you, by Newton’s Third Law.) This casts no doubt upon our intuitions about rocks rolled along the ground, which do indeed behave exactly as our intuition predicts.

Similarly, our intuition that animals don’t deserve rights could well be an evolutionary consequence of the fact that we sometimes had to eat animals in order to survive, and so would do better not thinking about it too much; but now that we don’t need to do this anymore, we can reflect upon the deeper issues involved in eating meat. This is no reason to doubt our intuitions that parents should care for their children and murder is bad.

We do seem to have better angels after all

Jun 18 JDN 2460114

A review of The Darker Angels of Our Nature

(I apologize for not releasing this on Sunday; I’ve been traveling lately and haven’t found much time to write.)

Since its release, I have considered Steven Pinker’s The Better Angels of Our Nature among a small elite category of truly great books—not simply good because enjoyable, informative, or well-written, but great in its potential impact on humanity’s future. Others include The General Theory of Employment, Interest, and Money, On the Origin of Species, and Animal Liberation.

But I also try to expose myself as much as I can to alternative views. I am quite fearful of the echo chambers that social media puts us in, where dissent is quietly hidden from view and groupthink prevails.

So when I saw that a group of historians had written a scathing critique of The Better Angels, I decided I surely must read it and get its point of view. This book is The Darker Angels of Our Nature.

The Darker Angels is written by a large number of different historians, and it shows. It’s an extremely disjointed book; it does not present any particular overall argument, various sections differ wildly in scope and tone, and sometimes they even contradict each other. It really isn’t a book in the usual sense; it’s a collection of essays whose only common theme is that they disagree with Steven Pinker.

In fact, even that isn’t quite true: some of the best essays in The Darker Angels are the ones that don’t fundamentally challenge Pinker’s contention that global violence has been on a long-term decline for centuries and is now near its lowest level in human history. These essays instead offer interesting insights into particular historical eras, such as medieval Europe, early modern Russia, and shogunate Japan, or add nuance to the overall pattern. For instance, compared to medieval times, violence in Europe seems to have been lower during the preceding Pax Romana and higher in the subsequent early modern period, showing that the decline in violence was not simple or steady, but went through fluctuations and reversals as societies and institutions changed. (At this point I feel I should note that Pinker clearly would not disagree with this—several of the authors seem to think he would, which makes me wonder whether they even read The Better Angels.)

Others point out that the scale of civilization seems to matter, that more is different, and larger societies and armies more or less automatically seem to result in lower fatality rates by some sort of scaling or centralization effect, almost like the square-cube law. That’s very interesting if true; it would suggest that in order to reduce violence, you don’t really need any particular mode of government, you just need something that unites as many people as possible under one banner. The evidence presented for it was too weak for me to say whether it’s really true, however, and there was really no theoretical mechanism proposed whatsoever.

Some of the essays correct genuine errors Pinker made, some of which look rather sloppy. Pinker clearly overestimated the death tolls of the An Lushan Rebellion, the Spanish Inquisition, and Aztec ritual executions, probably by using outdated or biased sources. (Though they were all still extremely violent!) His depiction of indigenous cultures does paint with a very broad brush, and fails to recognize that some indigenous societies seem to have been quite peaceful (though others absolutely were tremendously violent).

One of the best essays is about Pinker’s cavalier attitude toward mass incarceration, which I absolutely do consider a deep flaw in Pinker’s view. Pinker presents increased incarceration rates along with decreased crime rates as if they were an unalloyed good, while I can at best be ambivalent about whether the benefit of decreasing crime is worth the cost of greater incarceration. Pinker seems to take for granted that these incarcerations are fair and impartial, when we have a great deal of evidence that they are strongly biased against poor people and people of color.

There’s another good essay about the Enlightenment, which Pinker seems to idealize a little too much (especially in his other book Enlightenment Now). There was no sudden triumph of reason that instantly changed the world. Human knowledge and rationality gradually improved over a very long period of time, with no obvious turning point and many cases of backsliding. The scientific method isn’t a simple, infallible algorithm that suddenly appeared in the brain of Galileo or Bayes, but a whole constellation of methods and concepts of rationality that took centuries to develop and is in fact still developing. (Much as the Tao that can be told is not the eternal Tao, the scientific method that can be written in a textbook is not the true scientific method.)

Several of the essays point out the limitations of historical and (especially) archaeological records, making it difficult to draw any useful inferences about rates of violence in the past. I agree that Pinker seems a little too cavalier about this; the records really are quite sparse and it’s not easy to fill in the gaps. Very small samples can easily distort homicide rates; since only about 1% of deaths worldwide are homicide, if you find 20 bodies, whether or not one of them was murdered is the difference between peaceful Japan and war-torn Colombia.
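
To see just how stark this small-sample problem is, here is a minimal back-of-the-envelope sketch in Python. The ~1% homicide share is the figure from the text; the sample of 20 deaths and the simple binomial model are my own illustrative assumptions:

```python
from math import comb

p = 0.01  # ~1% of deaths worldwide are homicides (figure from the text)
n = 20    # skeletons recovered at a hypothetical dig site (illustrative)

# Binomial model: probability of finding exactly k homicide victims among
# n deaths, and the homicide share a researcher would naively infer.
for k in range(3):
    prob = comb(n, k) * p**k * (1 - p) ** (n - k)
    print(f"{k} homicide(s): probability {prob:.1%}, inferred share {k / n:.0%}")
```

Roughly one dig in five will turn up at least one homicide victim by sheer chance, and a single such body moves the inferred homicide share from 0% to 5%—a swing on the order of that Japan-versus-Colombia gap.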

On the other hand, all we really can do is make the best inferences we have with the available data, and for the time periods in which we do have detailed records—surely true since at least the 19th century—the pattern of declining violence is very clear, and even the World Wars look like brief fluctuations rather than fundamental reversals. Contrary to popular belief, the World Wars do not appear to have been especially deadly on a per-capita basis, compared to various historic wars. The primary reason so many people died in the World Wars was really that there just were more people in the world. A few of the authors don’t seem to consider this an adequate reason, but ask yourself this: Would you rather live in a society of 100 in which 10 people are killed, or a society of 1 billion in which 1 million are killed? In the former case your chances of being killed are 10%; in the latter, 0.1%. Clearly, per-capita measures of violence are the correct ones.

Some essays seem a bit beside the point, like one on “environmental violence” which quite aptly details the ongoing—terrifying—degradation of our global ecology, but somehow seems to think that this constitutes violence when it obviously doesn’t. There is widespread violence against animals, certainly; slaughterhouses are the obvious example—and unlike most people, I do not consider them some kind of exception we can simply ignore. We do in fact accept levels of cruelty to pigs and cows that we would never accept against dogs or horses—even the law makes such exceptions. Moreover, plenty of habitat destruction is accompanied by killing of the animals who lived in that habitat. But ecological degradation is not equivalent to violence. (Nor is it clear to me that our treatment of animals is more violent overall today than in the past; I guess life is probably worse for a beef cow today than it was in the medieval era, but either way, she was going to be killed and eaten. And at least we no longer do cat-burning.) Drilling for oil can be harmful, but it is not violent. We can acknowledge that life is more peaceful now than in the past without claiming that everything is better now. One could even claim that overall life isn’t better, though I think anyone would be hard-pressed to argue that.

These are the relatively good essays, which correct minor errors or add interesting nuances. There are also some really awful essays in the mix.

A common theme of several of the essays seems to be “there are still bad things, so we can’t say anything is getting better”; they will point out various forms of violence that undeniably still exist, and treat this as a conclusive argument against the claim that violence has declined. Yes, modern slavery does exist, and it is a very serious problem; but it clearly is not the same kind of atrocity that the Atlantic slave trade was. Yes, there are still murders. Yes, there are still wars. Probably these things will always be with us to some extent; but there is a very clear difference between 500 homicides per million people per year and 50—and it would be better still if we could bring it down to 5.

There’s one essay about sexual violence that doesn’t present any evidence whatsoever to contradict the claim that rates of sexual violence have been declining while rates of reporting and prosecution have been increasing. (These two trends together often result in reported rapes going up, but most experts agree that actual rapes are going down.) The entire essay is based on anecdote, innuendo, and righteous anger.

There are several essays that spend their whole time denouncing neoliberal capitalism (not even presenting any particularly good arguments against it, though such arguments do exist), seeming to equate Pinker’s view with some kind of Rothbardian anarcho-capitalism when in fact Pinker is explicitly in favor of Nordic-style social democracy. (One literally dismisses his support for universal healthcare as “Well, he is Canadian”.) But Pinker has on occasion said good things about capitalism, so clearly, he is an irredeemable monster.

Right in the introduction—which almost made me put the book down—is an astonishingly ludicrous argument, which I must quote in full to show you that it is not out of context:

What actually is violence (nowhere posed or answered in The Better Angels)? How do people perceive it in different time-place settings? What is its purpose and function? What were contemporary attitudes toward violence and how did sensibilities shift over time? Is violence always ‘bad’ or can there be ‘good’ violence, violence that is regenerative and creative?

The Darker Angels of Our Nature, p.16

Yes, the scare quotes on ‘good’ and ‘bad’ are in the original. (Also the baffling jargon “time-place settings” as opposed to, say, “times and places”.) This was clearly written by a moral relativist. Aside from questioning whether we can say anything about anything, the argument seems to be that Pinker’s argument is invalid because he didn’t precisely define every single relevant concept, even though it’s honestly pretty obvious what the word “violence” means and how he is using it. (If anything, it’s these authors who don’t seem to understand what the word means; they keep calling things “violence” that are indeed bad, but obviously aren’t violence—like pollution and cyberbullying. At least talk of incarceration as “structural violence” isn’t obvious nonsense—though it is still clearly distinct from murder rates.)

But it was by reading the worst essays that I think I gained the most insight into what this debate is really about. Several of the essays in The Darker Angels thoroughly and unquestioningly share the following inference: if a culture is superior, then that culture has a right to impose itself on others by force. On this, they seem to agree with the imperialists: If you’re better, that gives you a right to dominate everyone else. They rightly reject the claim that cultures have a right to imperialistically dominate others, but they cannot deny the inference, and so they are forced to deny that any culture can ever be superior to another. The result is that they tie themselves in knots trying to justify how greater wealth, greater happiness, less violence, and babies not dying aren’t actually good things. They end up talking nonsense about “violence that is regenerative and creative”.

But we can believe in civilization without believing in colonialism. And indeed that is precisely what I (along with Pinker) believe: That democracy is better than autocracy, that free speech is better than censorship, that health is better than illness, that prosperity is better than poverty, that peace is better than war—and therefore that Western civilization is doing a better job than the rest. I do not believe that this justifies the long history of Western colonial imperialism. Governing your own country well doesn’t give you the right to invade and dominate other countries. Indeed, part of what makes colonial imperialism so terrible is that it makes a mockery of the very ideals of peace, justice, and freedom that the West is supposed to represent.

I think part of the problem is that many people see the world in zero-sum terms, and believe that the West’s prosperity could only be purchased by the rest of the world’s poverty. But this is untrue. The world is nonzero-sum. My happiness does not come from your sadness, and my wealth does not come from your poverty. In fact, even the West was poor for most of history, and we are far more prosperous now that we have largely abandoned colonial imperialism than we ever were in imperialism’s heyday. (I do occasionally encounter British people who seem vaguely nostalgic for the days of the empire, but real median income in the UK has doubled just since 1977. Inequality has also increased during that time, which is definitely a problem; but the UK is undeniably richer now than it ever was at the peak of the empire.)

In fact it could be that the West is richer now because of colonialism than it would have been without it. I don’t know whether or not this is true. I suspect it isn’t, but I really don’t know for sure. My guess would be that colonized countries are poorer, but colonizer countries are not richer—that is, colonialism is purely destructive. Certain individuals clearly got richer by such depredation (Leopold II, anyone?), but I’m not convinced many countries did.

Yet even if colonialism did make the West richer, it clearly cannot explain most of the wealth of Western civilization—for that wealth simply did not exist in the world before. All these bridges and power plants, laptops and airplanes weren’t lying around waiting to be stolen. Surely, some of the ingredients were stolen—not least, the land. Had they been bought at fair prices, the result might have been less wealth for us (then again it might not, for wealthier trade partners yield greater exports). But this does not mean that the products themselves constitute theft, nor that the wealth they provide is meaningless. Perhaps we should find some way to pay reparations; undeniably, we should work toward greater justice in the future. But we do not need to give up all we have in order to achieve that justice.

There is a law of conservation of energy. It is impossible to create energy in one place without removing it from another. There is no law of conservation of prosperity. Making the world better in one place does not require making it worse in another.

Progress is real. Yes, it is flawed and uneven, and it has costs of its own; but it is real. If we want to have more of it, we had best continue to believe in it. And The Better Angels of Our Nature does have some notable flaws, but it still retains its place among truly great books.

The mythology mindset

Feb 5 JDN 2459981

I recently finished reading Steven Pinker’s latest book Rationality. It’s refreshing, well-written, enjoyable, and basically correct with some small but notable errors that seem sloppy—but then you could have guessed all that from the fact that it was written by Steven Pinker.

What really makes the book interesting is an insight Pinker presents near the end, regarding the difference between the “reality mindset” and the “mythology mindset”.

It’s a pretty simple notion, but a surprisingly powerful one.

In the reality mindset, a belief is a model of how the world actually functions. It must be linked to the available evidence and integrated into a coherent framework of other beliefs. You can logically infer from how some parts work to how other parts must work. You can predict the outcomes of various actions. You live your daily life in the reality mindset; you couldn’t function otherwise.

In the mythology mindset, a belief is a narrative that fulfills some moral, emotional, or social function. It’s almost certainly untrue or even incoherent, but that doesn’t matter. The important thing is that it sends the right messages. It has the right moral overtones. It shows you’re a member of the right tribe.

The idea is similar to Dennett’s “belief in belief”, which I’ve written about before; but I think this characterization may actually be a better one, not least because people would be more willing to use it as a self-description. If you tell someone “You don’t really believe in God, you believe in believing in God”, they will object vociferously (which is, admittedly, what the theory would predict). But if you tell them, “Your belief in God is a form of the mythology mindset”, I think they are at least less likely to immediately reject your claim out of hand. “You believe in God a different way than you believe in cyanide” isn’t as obviously threatening to their identity.

A similar notion came up in a Psychology of Religion course I took, in which the professor discussed “anomalous beliefs” linked to various world religions. He picked on a bunch of obscure religions, often held by various small tribes. He asked for more examples from the class. Knowing he was nominally Catholic and not wanting to let mainstream religion off the hook, I presented my example: “This bread and wine are the body and blood of Christ.” To his credit, he immediately acknowledged it as a very good example.

It’s also not quite the same thing as saying that religion is a “metaphor”; that’s not a good answer for a lot of reasons, but perhaps chief among them is that people don’t say they believe metaphors. If I say something metaphorical and then you ask me, “Hang on; is that really true?” I will immediately acknowledge that it is not, in fact, literally true. Love is a rose with all its sweetness and all its thorns—but no, love isn’t really a rose. And when it comes to religious belief, saying that you think it’s a metaphor is basically a roundabout way of saying you’re an atheist.

From all these different directions, we seem to be converging on a single deeper insight: when people say they believe something, quite often, they clearly mean something very different by “believe” than what I would ordinarily mean.

I’m tempted even to say that they don’t really believe it—but in common usage, the word “belief” is used at least as often to refer to the mythology mindset as the reality mindset. (In fact, it sounds less weird to say “I believe in transubstantiation” than to say “I believe in gravity”.) So if they don’t really believe it, then they at least mythologically believe it.

Both mindsets seem to come very naturally to human beings, in particular contexts. And not just modern people, either. Humans have always been like this.

Ask that psychology professor about Jesus, and he’ll tell you a tall tale of life, death, and resurrection by a demigod. But ask him about the Stroop effect, and he’ll provide a detailed explanation of rigorous experimental protocol. He believes something about God; but he knows something about psychology.

Ask a hunter-gatherer how the world began, and he’ll surely spin you a similarly tall tale about some combination of gods and spirits and whatever else, and it will all be peculiarly particular to his own tribe and no other. But ask him how to gut a fish, and he’ll explain every detail with meticulous accuracy, with almost the same rigor as that scientific experiment. He believes something about the sky-god; but he knows something about fish.

To be a rationalist, then, is to aspire to live your whole life in the reality mindset. To seek to know rather than believe.

This isn’t about certainty. A rationalist can be uncertain about many things—in fact, it’s rationalists of all people who are most willing to admit and quantify their uncertainty.

This is about whether you allow your beliefs to float free as bare, almost meaningless assertions that you profess to show you are a member of the tribe, or you make them pay rent, directly linked to other beliefs and your own experience.

As long as I can remember, I have always aspired to do this. But not everyone does. In fact, I dare say most people don’t. And that raises a very important question: Should they? Is it better to live the rationalist way?

I believe that it is. I suppose I would, temperamentally. But say what you will about the Enlightenment and the scientific revolution, they have clearly revolutionized human civilization and made life much better today than it was for most of human existence. We are peaceful, safe, and well-fed in a way that our not-so-distant ancestors could only dream of, and it’s largely thanks to systems built under the principles of reason and rationality—that is, the reality mindset.

We would never have industrialized agriculture if we still thought in terms of plant spirits and sky gods. We would never have invented vaccines and antibiotics if we still believed disease was caused by curses and witchcraft. We would never have built power grids and the Internet if we still saw energy as a mysterious force permeating the world and not as a measurable, manipulable quantity.

This doesn’t mean that ancient people who saw the world in a mythological way were stupid. In fact, it doesn’t even mean that people today who still think this way are stupid. This is not about some innate, immutable mental capacity. It’s about a technology—or perhaps the technology, the meta-technology that makes all other technology possible. It’s about learning to think the same way about the mysterious and the familiar, using the same kind of reasoning about energy and death and sunlight as we already did about rocks and trees and fish. When encountering something new and mysterious, someone in the mythology mindset quickly concocts a fanciful tale about magical beings that inevitably serves to reinforce their existing beliefs and attitudes, without the slightest shred of evidence for any of it. Faced with the same mystery, someone in the reality mindset looks closer and tries to figure it out.

Still, this gives me some compassion for people with weird, crazy ideas. I can better make sense of how someone living in the modern world could believe that the Earth is 6,000 years old or that the world is ruled by lizard-people. Because they probably don’t really believe it, they just mythologically believe it—and they don’t understand the difference.

Are people basically good?

Mar 20 JDN 2459659

I recently finished reading Humankind by Rutger Bregman. His central thesis is a surprisingly controversial one, yet one I largely agree with: People are basically good. Most people, in most circumstances, try to do the right thing.

Neoclassical economists in particular seem utterly scandalized by any such suggestion. No, they insist, people are selfish! They’ll take any opportunity to exploit each other! On this, Bregman is right and the neoclassical economists are wrong.

One of the best parts of the book is Bregman’s tale of several shipwrecked Tongan boys who were stranded on the remote island of ‘Ata, sometimes called “the real Lord of the Flies”, but with an outcome quite radically different from that of the novel. There were of course conflicts during their long time stranded, but the boys resolved most of these conflicts peacefully, and by the time help came over a year later they were still healthy and harmonious. Bregman himself was involved in the investigative reporting about these events, and his tale of how he came to meet some of the (now elderly) survivors and tell their tale is both enlightening and heartwarming.

Bregman spends a lot of time (perhaps too much time) analyzing classic experiments meant to elucidate human nature. He does a good job of analyzing the Milgram experiment—it’s valid, but it says more about our willingness to serve a cause than our blind obedience to authority. He utterly demolishes the Zimbardo experiment; I knew it was bad, but I hadn’t even realized how utterly worthless that so-called “experiment” actually is. Zimbardo basically paid people to act like abusive prison guards—specifically instructing them how to act!—and then claimed that he had discovered something deep in human nature. Bregman calls it a “hoax”, which might be a bit too strong—but it’s about as accurate as calling it an “experiment”. I think it’s more like a form of performance art.

Bregman’s criticism of Steven Pinker I find much less convincing. He cites a few other studies that purported to show the following: (1) the archaeological record is unreliable in assessing death rates in prehistoric societies (fair enough, but what else do we have?), (2) the high death rates in prehistoric cultures could be from predators such as lions rather than other humans (maybe, but that still means civilization is providing vital security!), (3) the Long Peace could be a statistical artifact because data on wars is so sparse (I find this unlikely, but I admit the Russian invasion of Ukraine does support such a notion), or (4) the Long Peace is the result of nuclear weapons, globalized trade, and/or international institutions rather than a change in overall attitudes toward violence (perfectly reasonable, but I’m not even sure Pinker would disagree).

I appreciate that Bregman does not lend credence to the people who want to use absolute death counts instead of relative death rates, who apparently would rather live in a prehistoric village of 100 people that gets wiped out by a plague (or for that matter on a Mars colony of 100 people who all die of suffocation when the life support fails) than remain in a modern city of a million people that has a few dozen murders each year. Zero homicides is better than 40, right? Personally, I care most about the question “How likely am I to die at any given time?”; and for that, relative death rates are the only valid measure. I don’t even see why we should particularly care about homicide versus other causes of death—I don’t see being shot as particularly worse than dying of Alzheimer’s (indeed, quite the contrary, other than the fact that Alzheimer’s is largely limited to old age and shooting isn’t). But all right, if violence is the question, then go ahead and use homicides—but it certainly should be rates and not absolute numbers. A larger human population is not an inherently bad thing.

I even appreciate that Bregman offers a theory (not an especially convincing one, but not an utterly ridiculous one either) of how agriculture and civilization could emerge even if hunter-gatherer life was actually better. It basically involves agriculture being discovered by accident, and then people gradually transitioning to a sedentary mode of life and not realizing their mistake until generations had passed and all the old skills were lost. There are various holes one can poke in this theory (Were the skills really lost? Couldn’t they be recovered from others? Indeed, haven’t people done that, in living memory, by “going native”?), but it’s at least better than simply saying “civilization was a mistake”.

Yet Bregman’s own account, particularly his discussion of how early civilizations all seem to have been slave states, seems to better support what I think is becoming the historical consensus, which is that civilization emerged because a handful of psychopaths gathered armies to conquer and enslave everyone around them. This is bad news for anyone who holds to a naively Whiggish view of history as a continuous march of progress (which I have oft heard accused but rarely heard endorsed), but it’s equally bad news for anyone who believes that all human beings are basically good and we should—or even could—return to a state of blissful anarchism.

Indeed, this is where Bregman’s view and mine part ways. We both agree that most people are mostly good most of the time. He even acknowledges that about 2% of people are psychopaths, which is a very plausible figure. (The figures I find most credible are about 1% of women and about 4% of men, which averages out to 2.5%. The prevalence you get also depends on how severely lacking in empathy someone needs to be in order to qualify. I’ve seen figures as low as 1% and as high as 4%.) What he fails to see is how that 2% of people can have large effects on society, wildly disproportionate to their number.

Consider the few dozen murders that are committed in any given city of a million people each year. Who is committing those murders? By and large, psychopaths. That’s more true of premeditated murder than of crimes of passion, but even the latter are far more likely to be committed by psychopaths than the general population.

Or consider those early civilizations that were nearly all authoritarian slave-states. What kind of person tends to govern an authoritarian slave-state? A psychopath. Sure, probably not every Roman emperor was a psychopath—but I’m quite certain that Commodus and Caligula were, and I suspect that Augustus and several others were as well. And the ones who don’t seem like psychopaths (like Marcus Aurelius) still seem like narcissists. Indeed, I’m not sure it’s possible to be an authoritarian emperor and not be at least a narcissist; should an ordinary person somehow find themselves in the role, I think they’d immediately set out to delegate authority and improve civil liberties.

This suggests that civilization was not so much a mistake as it was a crime—civilization was inflicted upon us by psychopaths and their greed for wealth and power. Like I said, not great for a “march of progress” view of history. Yet a lot has changed in the last few thousand years, and life in the 21st century at least seems overall pretty good—and almost certainly better than life on the African savannah 50,000 years ago.

In essence, what I think happened is that we invented a technology that turned the tables of civilization, using the same tools psychopaths had used to oppress us as a means to contain them. This technology was called democracy. The institutions of democracy allowed us to convert government from a means by which psychopaths oppress and extract wealth from the populace into a means by which the populace can prevent psychopaths from committing wanton acts of violence.

Is it perfect? Certainly not. Indeed, there are many governments today that much better fit the “psychopath oppressing people” model (e.g. Russia, China, North Korea), and even in First World democracies there are substantial abuses of power and violations of human rights. In fact, psychopaths are overrepresented among the police and also among politicians. Perhaps there are superior modes of governance yet to be found that would further reduce the power psychopaths have and thereby make life better for everyone else.

Yet it remains clear that democracy is better than anarchy. This is not so much because anarchy results in everyone behaving badly and causes immediate chaos (as many people seem to erroneously believe), but because it results in enough people behaving badly to be a problem—and because some of those people are psychopaths who will take advantage of the power vacuum to seize control for themselves.

Yes, most people are basically good. But enough people aren’t that it’s a problem.

Bregman seems to think that simply outnumbering the psychopaths is enough to keep them under control, but history clearly shows that it isn’t. We need institutions of governance to protect us. And for the most part, First World democracies do a fairly good job of that.

Indeed, I think Bregman’s perspective may be a bit clouded by being Dutch, as the Netherlands has one of the highest rates of trust in the world. Nearly 90% of people in the Netherlands trust their neighbors. Even the US has high levels of trust by world standards, at about 84%; a more typical country is India or Mexico at 64%, and the least-trusting countries are places like Gabon with 31% or Benin with a dismal 23%. Trust in government varies widely, from an astonishing 94% in Norway (then again, have you seen Norway? Their government is doing a bang-up job!) to 79% in the Netherlands, to closer to 50% in most countries (in this the US is more typical), all the way down to 23% in Nigeria (which seems equally justified). Some mysteries remain, like why more people trust the government in Russia than in Namibia. (Maybe people in Namibia are just more willing to speak their minds? They’re certainly much freer to do so.)

In other words, Dutch people are basically good. Not that the Netherlands has no psychopaths; surely they have a few just like everyone else. But they have strong, effective democratic institutions that provide both liberty and security for the vast majority of the population. And with the psychopaths under control, everyone else can feel free to trust each other and cooperate, even in the absence of obvious government support. It’s precisely because the government of the Netherlands is so unusually effective that someone living there can come to believe that government is unnecessary.

In short, Bregman is right that we should have donation boxes—and a lot of people seem to miss that (especially economists!). But he seems to forget that we need to keep them locked.

Signaling and the Curse of Knowledge

Jan 3 JDN 2459218

I received several books for Christmas this year, and the one I was most excited to read first was The Sense of Style by Steven Pinker. Pinker is exactly the right person to write such a book: He is both a brilliant linguist and cognitive scientist and also an eloquent and highly successful writer. There are two other books on writing that I rate at the same tier: On Writing by Stephen King, and The Art of Fiction by John Gardner. Don’t bother with style manuals from people who only write style manuals; if you want to learn how to write, learn from people who are actually successful at writing.

Indeed, I knew I’d love The Sense of Style as soon as I read its preface, containing some truly hilarious takedowns of Strunk & White. And honestly Strunk & White are among the best standard style manuals; they at least actually manage to offer some useful advice while also being stuffy, pedantic, and often outright inaccurate. Most style manuals only do the second part.

One of Pinker’s central focuses in The Sense of Style is on The Curse of Knowledge, an all-too-common bias in which knowing things makes us unable to appreciate the fact that other people don’t already know it. I think I succumbed to this failing most greatly in my first book, Special Relativity from the Ground Up, in which my concept of “the ground” was above most people’s ceilings. I was trying to write for high school physics students, and I think the book ended up mostly being read by college physics professors.

The problem is surely a real one: After years of gaining expertise in a subject, we are all liable to forget the difficulty of reaching our current summit and automatically deploy concepts and jargon that only a small group of experts actually understand. But I think Pinker underestimates the difficulty of escaping this problem, because it’s not just a cognitive bias that we all suffer from time to time. It’s also something that our society strongly incentivizes.

Pinker points out that a small but nontrivial proportion of published academic papers are genuinely well written, using this to argue that obscurantist jargon-laden writing isn’t necessary for publication; but he didn’t seem to even consider the fact that nearly all of those well-written papers were published by authors who already had tenure or even distinction in the field. I challenge you to find a single paper written by a lowly grad student that could actually get published without being full of needlessly technical terminology and awkward passive constructions: “A murine model was utilized for the experiment, in an acoustically sealed environment” rather than “I tested using mice and rats in a quiet room”. This is not because grad students are more thoroughly entrenched in the jargon than tenured professors (quite the contrary), nor because grad students are worse writers in general (that one could really go either way), but because grad students have more to prove. We need to signal our membership in the tribe, whereas once you’ve got tenure—or especially once you’ve got an endowed chair or something—you have already proven yourself.

Pinker briefly touches on this insight (p. 69), without fully appreciating its significance: “Even when we have an inkling that we are speaking in a specialized lingo, we may be reluctant to slip back into plain speech. It could betray to our peers the awful truth that we are still greenhorns, tenderfoots, newbies. And if our readers do know the lingo, we might be insulting their intelligence while spelling it out. We would rather run the risk of confusing them while at least appearing to be sophisticated than take a chance at belaboring the obvious while striking them as naive or condescending.”

What we are dealing with here is a signaling problem. The fact that one can write better once one is well-established is the phenomenon of countersignaling, where one who has already established their status stops investing in signaling.

Here’s a simple model for you. Suppose each person has a level of knowledge x, which they are trying to demonstrate. They know their own level of knowledge, but nobody else does.

Suppose that when we observe someone’s knowledge, we get two pieces of information: We have an imperfect observation of their true knowledge which is x+e, the real value of x plus some amount of error e. Nobody knows exactly what the error is. To keep the model as simple as possible I’ll assume that e is drawn from a uniform distribution between -1 and 1.

Finally, assume that we are trying to select people above a certain threshold: Perhaps we are publishing in a journal, or hiring candidates for a job. Let’s call that threshold z. If x < z-1, then since e can never be larger than 1, we will immediately observe that they are below the threshold and reject them. If x > z+1, then since e can never be smaller than -1, we will immediately observe that they are above the threshold and accept them.

But when z-1 < x < z+1, we may think they are above the threshold when they actually are not (if e is positive), or think they are not above the threshold when they actually are (if e is negative).

So then let’s say that they can invest in signaling by putting in some amount of visible work y (like citing obscure papers or using complex jargon). This additional work may be costly and provide no real value in itself, but it can still be useful so long as one simple condition is met: It’s easier to do if your true knowledge x is high.

In fact, for this very simple model, let’s say that you are strictly limited by the constraint that y <= x. You can’t show off what you don’t know.

If your true value x > z, then you should choose y = x. Then, upon observing your signal, we know immediately that you must be above the threshold.

But if your true value x < z, then you should choose y = 0, because there’s no point in signaling that you were almost at the threshold. You’ll still get rejected.

Yet remember before that only those with z-1 < x < z+1 actually need to bother signaling at all. Those with x > z+1 can actually countersignal, by also choosing y = 0. Since you already have tenure, nobody doubts that you belong in the club.

This means we’ll end up with three groups: Those with x < z, who don’t signal and don’t get accepted; those with z < x < z+1, who signal and get accepted; and those with x > z+1, who don’t signal but get accepted. Then life will be hardest for those who are just above the threshold, who have to spend enormous effort signaling in order to get accepted—and that sure does sound like grad school.
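
Here is a minimal simulation of the model, as a sketch in Python. The threshold z = 5, the sample values of x, and the evaluator’s decision rule (accept when the signal alone proves x >= z, since y <= x; otherwise judge by the noisy reading x + e) are my own concrete assumptions, since the model above leaves them implicit:

```python
import random

Z = 5.0           # acceptance threshold (arbitrary choice)
TRIALS = 100_000  # evaluations per candidate

def best_signal(x):
    """Optimal effort from the text: signal y = x only near the threshold."""
    return x if Z <= x <= Z + 1 else 0.0

def accepted(x, y):
    """A signal y >= Z proves x >= Z outright (since y <= x); otherwise
    the evaluator falls back on the noisy observation x + e."""
    if y >= Z:
        return True
    return x + random.uniform(-1.0, 1.0) >= Z  # e ~ Uniform(-1, 1)

for x in (3.5, 4.5, 5.5, 6.5):
    y = best_signal(x)
    rate = sum(accepted(x, y) for _ in range(TRIALS)) / TRIALS
    print(f"x = {x}: signal y = {y:.1f}, accepted {rate:.0%} of the time")
```

Running this gives roughly 0%, 25%, 100%, and 100%: the hopeless are always rejected, the nearly-qualified occasionally get lucky on the noisy reading, the just-qualified are accepted only because they signal at full strength, and the overqualified are accepted without signaling at all.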

You can make the model more sophisticated if you like: Perhaps the error isn’t uniformly distributed, but some other distribution with wider support (like a normal distribution, or a logistic distribution); perhaps the signaling isn’t perfect, but itself has some error; and so on. With such additions, you can get a result where the least-qualified still signal a little bit so they get some chance, and the most-qualified still signal a little bit to avoid a small risk of being rejected. But it’s a fairly general phenomenon that those closest to the threshold will be the ones who have to spend the most effort in signaling.

This reveals a disturbing overlap between the Curse of Knowledge and Impostor Syndrome: We write in impenetrable obfuscationist jargon because we are trying to conceal our own insecurity about our knowledge and our status in the profession. We’d rather you not know what we’re talking about than have you realize that we don’t know what we’re talking about.

For the truth is, we don’t know what we’re talking about. And neither do you, and neither does anyone else. This is the agonizing truth of research that nearly everyone doing research knows, but one must be either very brave, very foolish, or very well-established to admit out loud: It is in the nature of doing research on the frontier of human knowledge that there is always far more that we don’t understand about our subject than that we do understand.

I would like to be more open about that. I would like to write papers saying things like “I have no idea why it turned out this way; it doesn’t make sense to me; I can’t explain it.” But to say that the profession disincentivizes speaking this way would be a grave understatement. It’s more accurate to say that the profession punishes speaking this way to the full extent of its power. You’re supposed to have a theory, and it’s supposed to work. If it doesn’t actually work, well, maybe you can massage the numbers until it seems to, or maybe you can retroactively change the theory into something that does work. Or maybe you can just not publish that paper and write a different one.

Here is a graph of one million published z-scores in academic journals:

It looks like a bell curve, except that almost all the values between -2 and 2 are mysteriously missing.

If we were actually publishing all the good science that gets done, it would in fact be a very nice bell curve. All those missing values are papers that never got published, or results that were excluded from papers, or statistical analyses that were massaged, in order to get a p-value less than the magical threshold for publication of 0.05. (For the statistically uninitiated, a z-score less than -2 or greater than +2 generally corresponds to a p-value less than 0.05, so these are effectively the same constraint.)
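
The filter is easy to reproduce. Here is a toy simulation in Python, under the deliberately pessimistic assumption that every true effect is zero, so that every z-score is just a standard normal draw:

```python
import random

random.seed(1)
all_z = [random.gauss(0, 1) for _ in range(1_000_000)]  # every study run
published = [z for z in all_z if abs(z) > 1.96]         # only p < 0.05 survives

share = len(published) / len(all_z)
print(f"{len(published):,} of {len(all_z):,} z-scores published ({share:.1%})")
# A histogram of `published` reproduces the picture described above:
# a bell curve with almost everything between -2 and 2 missing.
```

Only about 5% of the simulated studies clear the bar, and the surviving distribution is exactly the hollowed-out bell curve described above.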

I have literally never read a single paper published in an academic journal in the last 50 years that said in plain language, “I have no idea what’s going on here.” And yet I have read many papers—probably most of them, in fact—where that would have been an appropriate thing to say. It’s actually quite a rare paper, at least in the social sciences, that actually has a theory good enough to really precisely fit the data and not require any special pleading or retroactive changes. (Often the bar for a theory’s success is lowered to “the effect is usually in the right direction”.) Typically results from behavioral experiments are bizarre and baffling, because people are a little screwy. It’s just that nobody is willing to stake their career on being that honest about the depth of our ignorance.

This is a deep shame, for the greatest advances in human knowledge have almost always come from people recognizing the depth of their ignorance. Paradigms never shift until people recognize that the one they are using is defective.

This is why it’s so hard to beat the Curse of Knowledge: You need to signal that you know what you’re talking about, and the truth is you probably don’t, because nobody does. So you need to sound like you know what you’re talking about in order to get people to listen to you. You may be making nothing more than educated guesses based on extremely limited data, but that’s actually the best anyone can do; those other people saying they have it all figured out are either doing the same thing, or they’re doing something even less reliable than that. So you’d better sound like you have it all figured out, and that’s a lot more convincing when you “utilize a murine model” than when you “use rats and mice”.

Perhaps we can at least push a little bit toward plainer language. It helps to be addressing a broader audience: it is both blessing and curse that whatever I put on this blog is what you will read, without any gatekeepers in my path. I can use plainer language here if I so choose, because no one can stop me. But of course there’s a signaling risk here as well: The Internet is a public place, and potential employers can read this as well, and perhaps decide they don’t like me speaking so plainly about the deep flaws in the academic system. Maybe I’d be better off keeping my mouth shut, at least for a while. I’ve never been very good at keeping my mouth shut.

Once we get established in the system, perhaps we can switch to countersignaling, though even this doesn’t always happen. I think there are two reasons this can fail: First, you can almost always try to climb higher. Once you have tenure, aim for an endowed chair. Once you have that, try to win a Nobel. Second, once you’ve spent years of your life learning to write in a particular stilted, obscurantist, jargon-ridden way, it can be very difficult to change that habit. People have been rewarding you all your life for writing in ways that make your work unreadable; why would you want to take the risk of suddenly making it readable?

I don’t have a simple solution to this problem, because it is so deeply embedded. It’s not something that one person or even a small number of people can really fix. Ultimately we will need to, as a society, start actually rewarding people for speaking plainly about what they don’t know. Admitting that you have no clue will need to be seen as a sign of wisdom and honesty rather than a sign of foolishness and ignorance. And perhaps even that won’t be enough, because the fact will still remain that it is very difficult to know which things you know that other people don’t.

Pinker Propositions

May 19 JDN 2458623

What do the following statements have in common?

1. “Capitalist countries have less poverty than Communist countries.”

2. “Black men in the US commit homicide at a higher rate than White men.”

3. “On average, in the US, Asian people score highest on IQ tests, White and Hispanic people score near the middle, and Black people score the lowest.”

4. “Men on average perform better at visual tasks, and women on average perform better on verbal tasks.”

5. “In the United States, White men are no more likely to be mass shooters than other men.”

6. “The genetic heritability of intelligence is about 60%.”

7. “The plurality of recent terrorist attacks in the US have been committed by Muslims.”

8. “The period of US military hegemony since 1945 has been the most peaceful period in human history.”

These statements have two things in common:

1. All of these statements are objectively true facts that can be verified by rich and reliable empirical data which is publicly available and uncontroversially accepted by social scientists.

2. If spoken publicly among left-wing social justice activists, all of these statements will draw resistance, defensiveness, and often outright hostility. Anyone making these statements is likely to be accused of racism, sexism, imperialism, and so on.

I call such propositions Pinker Propositions, after an excellent talk by Steven Pinker illustrating several of the above statements (which was then taken wildly out of context by social justice activists on social media).

The usual reaction to these statements suggests that people think they imply harmful far-right policy conclusions. This inference is utterly wrong: A nuanced understanding of each of these propositions does not in any way lead to far-right policy conclusions—in fact, some rather strongly support left-wing policy conclusions.

1. Capitalist countries have less poverty than Communist countries, because Communist countries are nearly always corrupt and authoritarian. Social democratic countries have the lowest poverty and the highest overall happiness (#ScandinaviaIsBetter).

2. Black men commit more homicide than White men because of poverty, discrimination, mass incarceration, and gang violence. Black men are also greatly overrepresented among victims of homicide, as most homicide is intra-racial. Homicide rates often vary across ethnic and socioeconomic groups, and these rates vary over time as a result of cultural and political changes.

3. IQ tests are a highly imperfect measure of intelligence, and the genetics of intelligence cut across our socially-constructed concept of race. There is far more within-group variation in IQ than between-group variation. Intelligence is not fixed at birth but is affected by nutrition, upbringing, exposure to toxins, and education—all of which statistically put Black people at a disadvantage. Nor does intelligence remain constant within populations: The Flynn Effect is the well-documented rise in IQ scores that has occurred in almost every country over the past century. Far from justifying discrimination, these facts provide very strong reasons to improve opportunities for Black children. The lead and mercury in Flint’s water suppressed the brain development of thousands of Black children—that’s going to lower average IQ scores. But that says nothing about supposed “inherent racial differences” and everything about the catastrophic damage of environmental racism.

4. To be quite honest, I never even understood why this one shocks—or even surprises—people. It’s not even saying that men are “smarter” than women—overall IQ is almost identical. It’s just saying that men are more visual and women are more verbal. And this, I think, is actually quite obvious. I think the clearest evidence of this—the “interocular trauma” that will convince you the effect is real and worth talking about—is pornography. Visual porn is overwhelmingly consumed by men, even when it was designed for women (e.g. Playgirl: a majority of its readers are gay men, even though there are ten times as many straight women in the world as there are gay men). Conversely, erotic novels are overwhelmingly consumed by women. I think a lot of anti-porn feminism can actually be explained by this effect: Feminists (who are usually women, for obvious reasons) can say they are against “porn” when what they are really against is visual porn, because visual porn is consumed by men; then the kind of porn that they like (erotic literature) doesn’t count as “real porn”. And honestly they’re mostly against the current structure of the live-action visual porn industry, which is totally reasonable—but it’s a far cry from being against porn in general. I have some serious issues with how our farming system is currently set up, but I’m not against farming.

5. This one is interesting, because it’s a lack of a race difference, which normally is what the left wing always wants to hear. The difference of course is that this alleged difference would make White men look bad, and that’s apparently seen as a desirable goal for social justice. But the data just doesn’t bear it out: While indeed most mass shooters are White men, that’s because most Americans are White, which is a totally uninteresting reason. There’s no clear evidence of any racial disparity in mass shootings—though the gender disparity is absolutely overwhelming: It’s almost always men.

6. Heritability is a subtle concept; it doesn’t mean what most people seem to think it means. It doesn’t mean that 60% of your intelligence is due to your genes. Indeed, I’m not even sure what that sentence would actually mean; it’s like saying that 60% of the flavor of a cake is due to the eggs. What this heritability figure actually means is that when you compare across individuals in a population and account for environmental influences, you find that about 60% of the variance in IQ scores is explained by genetic factors. But this is within a particular population—here, US adults—and is absolutely dependent on all sorts of other variables. The more flexible one’s environment becomes, the more people self-select into their preferred environment, and the more heritable traits become. As a result, IQ actually becomes more heritable as children become adults, a phenomenon known as the Wilson Effect.
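In symbols, a heritability estimate comes from the standard variance decomposition. This is a simplified sketch that ignores gene-environment interaction and covariance terms, with 60% as the illustrative value from the proposition above:

$$\mathrm{Var}(P) = \mathrm{Var}(G) + \mathrm{Var}(E), \qquad h^2 = \frac{\mathrm{Var}(G)}{\mathrm{Var}(P)} \approx 0.60$$

Every term here is a population-level variance: change the population, or the range of environments it experiences, and $h^2$ changes with it. That is exactly what makes effects like the Wilson Effect possible.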

7. This one might actually have some contradiction with left-wing policy. The disproportionate participation of Muslims in terrorism—controlling for just about anything you like: income, education, age, etc.—really does suggest that, at least at this point in history, there is some real ideological link between Islam and terrorism. But the fact remains that the vast majority of Muslims are not terrorists and do not support terrorism, and antagonizing all the people of an entire religion is fundamentally unjust as well as likely to backfire in various ways. We should instead be trying to encourage the spread of more tolerant forms of Islam, and maintaining the strict boundaries of secularism to prevent the encroachment of any religion on our system of government.

8. The fact that US military hegemony does seem to be a cause of global peace doesn’t imply that every single military intervention by the US is justified. In fact, it doesn’t even necessarily imply that any such interventions are justified—though I think one would be hard-pressed to say that the NATO intervention in the Kosovo War or the defense of Kuwait in the Gulf War was unjustified. It merely points out that having a hegemon is clearly preferable to having a multipolar world where many countries jockey for military supremacy. The Pax Romana was a time of peace but also authoritarianism; the Pax Americana is better, but that doesn’t prevent us from criticizing the real harms—including major war crimes—committed by the United States.

So it is entirely possible to know and understand these facts without adopting far-right political views.

Yet Pinker’s point—and mine—is that by suppressing these true facts, by responding with hostility or even ostracism to anyone who states them, we are actually adding fuel to the far-right fire. Instead of presenting the nuanced truth and explaining why it doesn’t imply such radical policies, we attack the messenger; and this leads people to conclude three things:

1. The left wing is willing to lie and suppress the truth in order to achieve political goals (they’re doing it right now).

2. These statements actually do imply right-wing conclusions (else why suppress them?).

3. Since these statements are true, that must mean the right-wing conclusions are actually correct.

Now (especially if you are someone who identifies unironically as “woke”), you might be thinking something like this: “Anyone who can be turned away from social justice so easily was never a real ally in the first place!”

This is a fundamentally and dangerously wrongheaded view. No one—not me, not you, not anyone—was born believing in social justice. You did not emerge from your mother’s womb ranting against colonialist imperialism. You had to learn what you now know. You came to believe what you now believe, after once believing something else that you now think is wrong. This is true of absolutely everyone everywhere. Indeed, the better you are, the more true it is; good people learn from their mistakes and grow in their knowledge.

This means that anyone who is now an ally of social justice once was not. And that, in turn, suggests that many people who are currently not allies could become so, under the right circumstances. They would probably not shift all at once—as I didn’t, and I doubt you did either—but if we are welcoming and open and honest with them, we can gradually tilt them toward greater and greater levels of support.

But if we reject them immediately for being impure, they never get the chance to learn, and we never get the chance to sway them. People who are currently uncertain of their political beliefs will become our enemies because we made them our enemies. We declared that if they would not immediately commit to everything we believe, then they may as well oppose us. They, quite reasonably unwilling to commit to a detailed political agenda they didn’t understand, decided that it would be easiest to simply oppose us.

And we don’t have to win over every person on every single issue. We merely need to win over a large enough critical mass on each issue to shift policies and cultural norms. Building a wider tent is not compromising on your principles; on the contrary, it’s how you actually win and make those principles a reality.

There will always be those we cannot convince, of course. And I admit, there is something deeply irrational about going from “those leftists attacked Charles Murray” to “I think I’ll start waving a swastika”. But humans aren’t always rational; we know this. You can lament this, complain about it, yell at people for being so irrational all you like—it won’t actually make people any more rational. Humans are tribal; we think in terms of teams. We need to make our team as large and welcoming as possible, and suppressing Pinker Propositions is not the way to do that.

To truly honor veterans, end war

JDN 2457339 EST 20:00 (Nov 11, 2015)

Today is Veterans’ Day, on which we are asked to celebrate the service of military veterans, particularly those who have died as a result of war. We tend to focus on those who die in combat, but combat deaths have actually always been relatively uncommon; throughout history, most soldiers have died later of their wounds or of infections. More recently, thanks to advances in body armor and medicine, relatively few soldiers die even of war wounds or infections—instead, they are permanently maimed and psychologically damaged, and the most common way that war kills soldiers now is by driving them to suicide.

Even adjusting for the fact that soldiers are mostly young men (the group of people most likely to commit suicide), military veterans still have about 50 excess suicides per million people per year, for a total of about 300 suicides per million per year. Using the total number, that’s over 8000 veteran suicides per year, or 22 per day. Using only the excess compared to men of the same ages, it’s still an additional 1300 suicides per year.
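As a rough check on those figures (the veteran population of about 27 million used here is simply the number implied by the rates above, not an independent statistic):

$$300 \text{ per million per year} \times 27 \text{ million} \approx 8{,}100 \text{ per year} \approx 22 \text{ per day}$$

$$50 \text{ per million per year} \times 27 \text{ million} \approx 1{,}350 \text{ excess per year}$$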

While the 14-years-and-counting Afghanistan War has killed 2,271 American soldiers and the 11-year Iraq War has killed 4,491 American soldiers directly (or as a result of wounds), during that same time period from 2001 to 2015 there have been about 18,000 excess suicides as a result of the military—excess in the sense that they would not have occurred if those men had been civilians. Altogether that means there would be nearly 25,000 additional American soldiers alive today were it not for these two wars.
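The arithmetic behind that total is just the direct war deaths plus the excess suicides:

$$2{,}271 + 4{,}491 + 18{,}000 = 24{,}762 \approx 25{,}000$$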

War does not only kill soldiers while they are on the battlefield—indeed, most of the veterans it kills die here at home.

There is a reason November 11 was chosen as the date for this holiday: It was on this day in 1918 that World War 1, up to that point one of the deadliest wars in human history, officially ended. (Woodrow Wilson proclaimed the first Armistice Day a year later; it was renamed Veterans’ Day in 1954.) Sadly, it did not remain unsurpassed, as World War 2 proved even deadlier a generation later. Fortunately, no other war has ever exceeded World War 2—at least, not yet.

We tend to celebrate holidays like this with a lot of ritual and pageantry (or even in the most inane and American way possible, with free restaurant meals and discounts on various consumer products), and there’s nothing inherently wrong with that. Nor is there anything wrong with taking a moment to salute the flag or say “Thank you for your service.” But that is not how I believe veterans should be honored. If I were a veteran, that is not how I would want to be honored.

We are getting much closer to how I think they should be honored when the White House announces reforms at Veterans’ Affairs hospitals and guaranteed in-state tuition at public universities for families of veterans—things that really do in a concrete and measurable way improve the lives of veterans and may even save some of them from that cruel fate of suicide.

But ultimately there is only one way that I believe we can truly honor veterans and the spirit of the holiday as Wilson intended it, and that is to end war once and for all.

Is this an ambitious goal? Absolutely. But is it an impossible dream? I do not believe so.

In just the last half century, we have already made most of the progress that needed to be made. In this brilliant video animation, you can see two things: First, the mind-numbingly horrific scale of World War 2, the worst war in human history; but second, the incredible progress we have made since then toward world peace. It was as if the world needed that one time to be so unbearably horrible in order to finally realize just what war is and why we need a better way of solving conflicts.

This is part of a very long-term trend in declining violence, for a variety of reasons that are still not thoroughly understood. In simplest terms, human beings just seem to be getting better at not killing each other.

Nassim Nicholas Taleb argues that this is just a statistical illusion, because technologies like nuclear weapons create the possibility of violence on a previously unimaginable scale, and it simply hasn’t happened yet. For nuclear weapons in particular, I think he may be right—the consequences of nuclear war are simply so catastrophic that even a small risk of it is worth paying almost any price to avoid.

Fortunately, nuclear weapons are not necessary to prevent war: South Africa has no designs on attacking Japan anytime soon, and neither country has nuclear weapons. Germany and Poland lack nuclear arsenals and were the first countries to fight in World War 2, but now that both are part of the European Union, war between them today seems almost unthinkable. When American commentators fret about China today it is always about wage competition and Treasury bonds, not aircraft carriers and nuclear missiles. Conversely, North Korea’s acquisition of nuclear weapons has by no means stabilized the region against future conflicts, and the fact that India and Pakistan have nuclear missiles pointed at one another has hardly prevented them from killing each other over Kashmir. We do not need nuclear weapons as a constant threat of annihilation in order to learn to live together; political and economic ties achieve that goal far more reliably.

And I think Taleb is wrong about the trend in general. He argues that the only reason violence is declining is that concentration of power has made violence rarer but more catastrophic when it occurs. Yet we know that many forms of violence which used to occur no longer do, not because of the overwhelming force of a Leviathan to prevent them, but because people simply choose not to do them anymore. There are no more gladiator fights, no more cat-burnings, no more public lynchings—not because of the expansion in government power, but because our society seems to have grown out of that phase.

Indeed, what horrifies us about ISIS and Boko Haram would have been considered quite normal, even civilized, in the Middle Ages. (If you’ve ever heard someone say we should “bring back chivalry”, you should explain to them that the system of knightly chivalry in the 12th century had basically the same moral code as ISIS today—one of the commandments Gautier’s La Chevalerie attributes to the chivalric code is literally “Thou shalt make war against the infidel without cessation and without mercy.”) It is not so much that they are uniquely evil by historical standards, as that we grew out of that sort of barbaric violence a while ago but they don’t seem to have gotten the memo.

In fact, one thing people don’t seem to understand about Steven Pinker’s argument about this “Long Peace” is that it still works if you include the world wars. The reason World War 2 killed so many people was not that it was uniquely brutal, nor even simply because its weapons were more technologically advanced. It also had to do with the scale of integration—we called it a single war even though it involved dozens of countries because those countries were all united into one of two sides, whereas in centuries past that many countries could be constantly fighting each other in various combinations but it would never be called the same war. But the primary reason World War 2 killed the largest raw number of people was simply because the world population was so much larger. Controlling for world population, World War 2 was not even among the top 5 worst wars—it barely makes the top 10. The worst war in history by proportion of the population killed was almost certainly the An Lushan Rebellion in 8th century China, which many of you may not even have heard of until today.
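To put rough numbers on that comparison (these population and death-toll figures are commonly published estimates supplied here for illustration, not numbers from the original post, and the An Lushan death toll in particular is hotly contested among historians):

$$\text{World War 2: } \frac{\sim 75 \text{ million}}{\sim 2.3 \text{ billion}} \approx 3\% \qquad \text{An Lushan: } \frac{\sim 13 \text{ to } 36 \text{ million}}{\sim 220 \text{ million}} \approx 6\% \text{ to } 16\%$$

Even the low end of the An Lushan estimates is proportionally deadlier than World War 2.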

Though it may not seem so as ISIS kidnaps Christians and drone strikes continue, shrouded in secrecy, we really are on track to end war. Not today, not tomorrow, maybe not in any of our lifetimes—but someday, we may finally be able to celebrate Veterans’ Day as it was truly intended: To honor our soldiers by making it no longer necessary for them to die.