Evolutionary skepticism

Post 572 Mar 9 JDN 2460744

In the last two posts I talked about ways that evolutionary theory could influence our understanding of morality, including the dangerous views of naive moral Darwinism as well as some more reasonable approaches; yet there are other senses of the phrase “morality evolves” that we haven’t considered. One of these is actually quite troubling; were it true, the entire project of morality would be in jeopardy. I’ll call it “evolutionary skepticism”; it says that yes, morality has evolved—and this is reason to doubt that morality is true. Richard Joyce, author of The Evolution of Morality, is of such a persuasion, and he makes a quite compelling case. Joyce’s central point is that evolution selects for fitness, not accuracy; we had reason to evolve in ways that would maximize the survival of our genes, not reasons to evolve in ways that would maximize the accuracy of our moral claims.

This is of course absolutely correct, and it is troubling precisely because we can all see that the two are not necessarily the same thing. It’s easy to imagine many ways that beliefs could evolve that had nothing to do with the accuracy of those beliefs.

But note that word: necessarily. Accuracy and fitness aren’t necessarily aligned—but it could still be that they are, in fact, aligned rather well. Yes, we can imagine ways a brain could evolve that would benefit its fitness without improving its accuracy; but is that actually what happened to our ancestors? Do we live on instinct, merely playing out by rote the lifestyles of our forebears, thinking and living the same way we have for hundreds of millennia?

Clearly not! Behold, you are reading a blog post! It was written on a laptop computer! While these facts may seem perfectly banal to you, they represent an unprecedented level of behavioral novelty, one achieved by only one animal species among millions, and even then only very recently. Human beings are incredibly flexible, incredibly creative, and incredibly intelligent. Yes, we evolved to be this way, of course we did; but so what? We are this way. We are capable of learning new things about the world, gaining in a few short centuries knowledge our forebears could never have imagined. Evolution does not always make animals into powerful epistemic engines—indeed, 99.99999% of the time it does not—but once in a while it does, and we are the result.

Natural selection is quite frugal; it tends to evolve things the easiest way. The way the world is laid out, it seems to be that the easiest way to evolve a brain that survives really well in a wide variety of ecological and social environments is to evolve a brain that is capable of learning to expand its own knowledge and understanding. After all, no other organism has ever been or is ever likely to be as evolutionarily fit as we are; we span the globe, cover a wide variety of ecological niches, and number in the billions and counting. We’ve even expanded beyond the planet Earth, something no other organism could even contemplate. We are successful because we are smart; is it really so hard to believe that we are smart because it made our ancestors successful?

Indeed, it must be this way, or we wouldn’t be able to make sense of the fact that our human brains, evolved for the African savannah a million years ago with minor tweaks since then, are capable of figuring out chess, calculus, writing, quantum mechanics, special relativity, television broadcasting, space travel, and for that matter Darwinian evolution and meta-ethics. None of these things could possibly have been adaptive in our ancestral ecology. They must be spandrels, fitness-neutral side-effects of evolved traits. And just like the original pendentives of San Marco that motivated Gould’s metaphor, what glorious spandrels they are!

Our genes made us better at gathering information and processing that information into correct beliefs, and calculus and quantum mechanics came along for the ride. Our greatest adaptation is to be adaptable; our niche is to need no niche, for we can carve our own.

This is not to abandon evolutionary psychology, for evolution does have a great deal to tell us about psychology. We do have instincts: preprocessing systems built into our sensory organs, innate emotions that motivate us to action, evolved heuristics that we use to respond quickly under pressure. Steven Pinker argues convincingly that language is an evolved instinct—and where would we be without language? Our instincts are essential not only for our survival, but for our rationality.

Staring at a blinking cursor on the blank white page of a word processor, imagining the infinity of texts that could be written upon that page, you could be forgiven for thinking that you were looking at a blank slate. Yet in fact you are staring at the pinnacle of high technology, an extremely complex interlocking system of hardware and software with dozens of components and billions of subcomponents, all precision-engineered for maximum efficiency. The possibilities are endless not because the system is simple and impinged upon by its environment, but because it is complex, and capable of engaging with that environment in order to convert subtle differences in input into vast differences in output. If this is true of a word processor, how much more true it must be of an organism capable of designing and using word processors! It is the very instincts that seem to limit our rationality which have made that rationality possible in the first place. Witness the eternal wisdom of Immanuel Kant:

Misled by such a proof of the power of reason, the demand for the extension of knowledge recognises no limits. The light dove, cleaving the air in her free flight, and feeling its resistance, might imagine that its flight would be still easier in empty space.

The analogy is even stronger than he knew—for brains, like wings, are an evolutionary adaptation! (What would Kant have made of Darwin?) But because our instincts are so powerful, they are self-correcting; they allow us to do science.

Richard Joyce agrees that we are right to think our evolved brains are reasonably reliable when it comes to scientific facts. He has to; otherwise his whole argument would be incoherent. Joyce agrees that we evolved to think 2+2=4 precisely because 2+2=4, and we evolved to think space is 3-dimensional precisely because space is 3-dimensional. Indeed, he must agree that we evolved to think that we evolved because we evolved! Yet for some reason Joyce thinks that this same line of reasoning doesn’t apply to ethics.

But why wouldn’t it? In fact, I think we have more reason to trust our evolved capacities in ethics than we do in other domains of science, because the subject matter of morality—human behavior and social dynamics—is something we have been familiar with all the way back to the savannah. If we evolved to think that theft and murder are bad, why would that happen? I submit it would happen precisely because theft and murder are Pareto-suboptimal unsustainable strategies—that is, precisely because theft and murder are bad. (Don’t worry if you don’t know what I mean by “Pareto-suboptimal” and “unsustainable strategy”; I’ll get to those in later posts.) Once you realize that “bad” is a concept that can ultimately be unpacked to naturalistic facts, all reason to think it is inaccessible to natural selection drops away; natural selection could well have chosen brains that didn’t like murder precisely because murder is bad. Indeed, because morality is ultimately scientific, part of how natural selection could evolve us to be more moral is by evolving us to be more scientific. We are more scientific than apes, and vastly more scientific than cockroaches; we are, indeed, the most scientific animal that has ever lived on Earth.

I do think that our evolved moral instincts are to some degree mistaken or incomplete; but I can make sense of this, in the same way I make sense of the fact that other evolved instincts don’t quite fit what we have discovered in other sciences. For instance, humans have an innate concept of linear momentum that doesn’t quite fit with what we’ve discovered in physics. We tend to presume that objects have an inherent tendency toward rest, though in fact they do not—this is because in our natural environment, friction makes most objects act as if they had such a tendency. Roll a rock along the ground, and it will eventually stop. Run a few miles, and eventually you’ll have to stop too. Most things in our everyday life really do behave as if they had an inherent tendency toward rest. It was only once we realized that friction is itself a force, not present everywhere, that we came to see that linear momentum is conserved in the absence of external forces. (Throw a rock in space, and it will never stop. Nor will you: by Newton’s Third Law the throw pushes you backward, and with nothing to slow you down, you will keep drifting too.) This casts no doubt upon our intuitions about rocks rolled along the ground, which do indeed behave exactly as our intuition predicts.
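To make the physics explicit, here is a minimal sketch of the contrast, using the idealized case of an object sliding with kinetic friction coefficient $\mu$ (the symbols $\mu$, $g$, and $v_0$ are mine, chosen for illustration, not part of the folk intuition):

$$m\frac{dv}{dt} = -\mu m g \quad\Rightarrow\quad v(t) = v_0 - \mu g\, t, \text{ which reaches zero at } t = \frac{v_0}{\mu g};$$

$$\frac{dp}{dt} = 0 \quad\Rightarrow\quad p = m v \text{ stays constant forever.}$$

The folk intuition tracks the first equation almost perfectly; it only misleads us when we forget that the friction term is a contingent feature of our environment rather than a law of motion.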

Similarly, our intuition that animals don’t deserve rights could well be an evolutionary consequence of the fact that we sometimes had to eat animals in order to survive, and so would do better not thinking about it too much; but now that we don’t need to do this anymore, we can reflect upon the deeper issues involved in eating meat. This is no reason to doubt our intuitions that parents should care for their children and murder is bad.

Moore’s “naturalistic fallacy”

Jan 12 JDN 2460688

In last week’s post I talked about some of the arguments against ethical naturalism, which have sometimes been called “the naturalistic fallacy”.

The “naturalistic fallacy” that G.E. Moore actually wrote about is somewhat subtler; it says that there is something philosophically suspect about defining something non-natural in terms of natural things—and furthermore, it says that “good” is not a natural thing and so cannot be defined in terms of natural things. For Moore, “good” is not something that can be defined with recourse to facts about psychology, biology or mathematics; “good” is simply an indefinable atomic concept that exists independent of all other concepts. As such Moore was criticizing moral theories like utilitarianism and hedonism that seek to define “good” in terms of “pleasure” or “lack of pain”; for Moore, good cannot have a definition in terms of anything except itself.

My greatest problem with this position is less philosophical than linguistic: how does one go about learning a concept that is so atomic and indefinable? When I was a child, I acquired an understanding of the word “good” that has since expanded as I grew in knowledge and maturity. I need not have called it “good”: had I been raised in Madrid, I would have called it bueno; in Beijing, hao; in Kyoto, ii; in Cairo, jaiid; and so on.

I’m not even sure if all these words really mean exactly the same thing, since each word comes with its own cultural and linguistic connotations. A vast range of possible sounds could be used to express this concept and related concepts—and somehow I had to learn which sounds were meant to symbolize which concepts, and what relations were meant to hold between them. This learning process was highly automatic, and occurred when I was very young, so I do not have great insight into its specifics; but nonetheless it seems clear to me that in some sense I learned to define “good” in terms of things that I could perceive.

No doubt this definition was tentative, and changed with time and experience; indeed, I think all definitions are like this. Perhaps my knowledge of other concepts, like “pleasure”, “happiness”, “hope” and “justice”, is interconnected with “good” in such a way that none can be defined separately from the others—indeed perhaps language itself is best considered a network of mutually-reinforcing concepts, each with some independent justification and some connection to other concepts, not a straightforward derivation from more basic atomic notions. If you wish, call me a “foundherentist” in the tradition of Susan Haack; I certainly do think that all beliefs have some degree of independent justification by direct evidence and some degree of mutual justification by coherence. Haack uses the metaphor of a crossword puzzle, but I prefer Alison Gopnik’s mathematical model of a Bayes net.

In any case, I had to learn about “good” somehow. Even if I had some innate atomic concept of good, we are left to explain two things: First, how I managed to associate that innate atomic concept with my sense experiences, and second, how that innate atomic concept got in my brain in the first place. If it was genetic, it must have evolved; but it could only have evolved by phenotypic interaction with the external environment—that is, with natural things. We are natural beings, made of natural material, evolved by natural selection. If there is a concept of “good” encoded into my brain either by learning or instinct or whatever combination, it had to get there by some natural mechanism.

The classic argument Moore used to support this position is now called the Open Question Argument; it says, essentially, that we could take any natural property that would be proposed as the definition of “good” and call it X, and we could ask: “Sure, that’s X, but is it good?” The idea is that since we can ask this question and it seems to make sense, then X cannot be the definition of “good”. If someone asked, “I know he is an unmarried man, but is he a bachelor?” or “I know that has three sides, but is it a triangle?” we would think that they didn’t understand what they were talking about; but Moore argues that for any natural property, “I know that is X, but is it good?” is still a meaningful question. Moore uses two particular examples, X = “pleasant” and X = “what we desire to desire”; and indeed those fit what he is saying. But are these really very good examples?

One subtle point that many philosophers make about this argument is that science can discover identities between things and properties that are not immediately apparent. We now know that water is H2O, but until the 19th century we did not know this. So we could perfectly well imagine someone asking, “I know that’s H2O, but is it water?” even though in fact water is H2O and we know this. I think this sort of argument would work for some very complicated moral claims, like the claim that constitutional democracy is good; I can imagine someone who was quite ignorant of international affairs asking: “I know that it’s constitutional democracy, but is that good?” and still be making sense. This is because the goodness of constitutional democracy isn’t something conceptually necessary; it is an empirical result based on the fact that constitutional democracies are more peaceful, fair, egalitarian, and prosperous than other governmental systems. In fact, it may even be only true relative to other systems we know of; perhaps there is an as-yet-unimagined governmental system that is better still. No one thinks that constitutional democracy is a definition of moral goodness. And indeed, I think few would argue that H2O is the definition of water; instead the definition of water is something like “that wet stuff we need to drink to survive” and it just so happens that this turns out to be H2O. If someone asked “is that wet stuff we need to drink to survive really water?” he would rightly be thought to be talking nonsense; that’s just what water means.

But if instead of the silly examples Moore uses, we take a serious proposal that real moral philosophers have suggested, it’s not nearly so obvious that the question is open. From Kant: “Yes, that is our duty as rational beings, but is it good?” From Mill: “Yes, that increases the amount of happiness and decreases the amount of suffering in the world, but is it good?” From Aristotle: “Yes, that is kind, just, and fair, but is it good?” These do sound dangerously close to talking nonsense! If someone asked these questions, I would immediately expect an explanation of what they were getting at. And if no such explanation was forthcoming, I would, in fact, be led to conclude that they literally don’t understand what they’re talking about.

I can imagine making sense of “I know that has three sides, but is it a triangle?” in some bizarre curved multi-dimensional geometry. Even “I know he is an unmarried man, but is he a bachelor?” makes sense if you are talking about a celibate priest. Very rarely do perfect synonyms exist in natural languages, and even when they do they are often unstable due to the effects of connotations. None of this changes the fact that bachelors are unmarried men, triangles have three sides, and yes, goodness involves fulfilling rational duties, alleviating suffering, and being kind and just. (Deontology, consequentialism, and virtue theory are often thought to be distinct and incompatible; I’m convinced they amount to the same thing, which I’ll say more about in later posts.)

This line of reasoning has led some philosophers (notably Willard Quine) to deny the existence of analytic truths altogether; on Quine’s view even “2+2=4” isn’t something we can deduce directly from the meaning of the symbols. This is clearly much too strong; no empirical observation could ever lead us to deny 2+2=4. In fact, I am convinced that all mathematical truths are ultimately reducible to tautologies; even “the Fourier transform of a Gaussian is Gaussian” is ultimately a way of saying in compact jargon some very complicated statement that amounts to A=A. This is not to deny that mathematics is useful; of course mathematics is tremendously useful, because this sort of compact symbolic jargon allows us to make innumerable inferences about the world and at the same time guarantee that these inferences are correct. Whenever you see a Gaussian and you need its Fourier transform (I know, it happens a lot, right?), you can immediately know that the result will be a Gaussian; you don’t have to go through the whole derivation yourself. We are wrong to think that “ultimately reducible to a tautology” is the same as “worthless and trivial”; on the contrary, to realize that mathematics is reducible to tautology is to say that mathematics is undeniable, literally impossible to coherently deny. At least the way I use the words, the statement “Happiness is good and suffering is bad” is pretty close to that same sort of claim; if you don’t agree with it, I sense that you honestly don’t understand what I mean.
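For the curious, the identity I have in mind looks like this in one common convention (a rough sketch; the constants out front shift depending on how you normalize the transform):

$$\int_{-\infty}^{\infty} e^{-x^2/(2\sigma^2)}\, e^{-i\omega x}\, dx \;=\; \sigma\sqrt{2\pi}\; e^{-\sigma^2\omega^2/2}.$$

A Gaussian of width $\sigma$ goes in and a Gaussian of width $1/\sigma$ comes out; unpack the definitions far enough and the equality is guaranteed by the meanings of the symbols, which is exactly the sense in which I am calling it a tautology.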

In any case, I see no more fundamental difficulty in defining “good” than I do in defining any concept, like “man”, “tree”, “multiplication”, “green” or “refrigerator”; and nor do I see any point in arguing about the semantics of definition as an approach to understanding moral truth. It seems to me that Moore has confused the map with the territory, and later authors have confused him with Hume, to all of our detriment.

Moral progress and moral authority

Dec 8 JDN 2460653

In previous posts I’ve written about why religion is a poor source of morality. But it’s worse than that. Religion actually holds us back morally. It is because of religion that our society grants the greatest moral authority to precisely the people and ideas which have most resisted moral progress. Most religious people are good, well-intentioned people—but religious authorities are typically selfish, manipulative, Machiavellian leaders who will say or do just about anything to maintain power. They have trained us to respect and obey them without question; they even call themselves “shepherds” and us the “flock”, as if we were not autonomous humans but obedient ungulates.

I’m sure that most of my readers are shocked that I would assert such a thing; surely priests and imams are great, holy men who deserve our honor and respect? The evidence against such claims is obvious. We only believe such things because the psychopaths have told us to believe them.

I am not saying that these evil practices are inherent to religion—they aren’t. Other zealous, authoritarian ideologies, like Communism and fascism, have been just as harmful for many of the same reasons. Rather, I am saying that religion gives authority and respect to people who would otherwise not have it, people who have long histories of evil, selfish, and exploitative behavior. For a particularly striking example, Catholicism as an idea is false and harmful, but not nearly as harmful as the Catholic Church as an institution, which has harbored some of the worst criminals in history.

The Catholic Church hierarchy is quite literally composed of a cadre of men who use tradition and rhetoric to extort billions of dollars from the poor and who have gone to great lengths to defend men who rape children—a category of human being that normally is so morally reviled that even thieves and murderers consider them beyond the pale of human society. Pope Ratzinger himself, formerly the most powerful religious leader in the world, has been connected with the coverup based on a letter he wrote in 1985. The Catholic Church was also closely tied to Nazi Germany and publicly celebrated Hitler’s birthday for many years; there is evidence that the Vatican actively assisted in the exodus of Nazi leaders along “ratlines” to South America. More recently the Church once again abetted genocide, when in Rwanda it turned away refugees and refused to allow prosecution against any of the perpetrators who were affiliated with the Catholic Church. Yes, that’s right; the Vatican has quite literally been complicit in the worst moral crimes human beings have ever committed. Embezzlement of donations and banning of life-saving condoms seem rather beside the point once we realize that these men and their institutions have harbored genocidaires and child rapists. I can scarcely imagine a more terrible source of moral authority.

Most people respect evangelical preachers, like Jerry Falwell, who blamed 9/11 and Hurricane Katrina on feminists, gays, and secularists, then retracted the statement about 9/11 when he realized how much it had offended people. These people have concepts of morality that were antiquated in the 19th century; they base their ethical norms on books that were written by ignorant and cultish nomads thousands of years ago. Leviticus 18:22 and 20:13 indeed condemn homosexuality, but Leviticus 19:27 condemns shaving and Leviticus 11:9-12 says that eating fish is fine but eating shrimp is evil. By the way, Leviticus 11:21-22 seems to say that locusts have only four legs, when they very definitely have six and you can see this by looking at one. (I cannot emphasize this enough: Don’t listen to what people say about the book; read the book.)

But we plainly don’t trust scientists or philosophers to make moral and political decisions. If we did, we would have enacted equal rights for LGBT people sometime around 1898, when the Scientific-Humanitarian Committee was founded, or at least by 1948, when Alfred Kinsey showed how common, normal, and healthy homosexuality is. Democracy and universal suffrage (for men at least) would have been the norm shortly after 1689 when Locke wrote his Two Treatises of Government. Women would have been granted the right to vote in 1792 upon the publication of Mary Wollstonecraft’s A Vindication of the Rights of Woman, instead of in 1920 after a long and painful political battle. Animal rights would have become law in 1789 with the publication of Bentham’s Introduction to the Principles of Morals and Legislation. We should have been suspicious of slavery since at least Kant if not Socrates, but instead it took until the 19th century for slavery to finally be banned. We owe the free world to moral science; but nonetheless we rarely listen to the arguments of moral scientists. As a species we fight for our old traditions even in the face of obvious and compelling evidence to the contrary, and this holds us back—far back. If these dates haven’t sunk in yet, read them again: Society is literally about 200 years behind the cutting edge of moral science. Imagine being 200 years behind in technology; you would be riding horses instead of flying in jet airliners and writing letters with quills instead of texting on your iPhone. Imagine being 200 years behind in ecology; you would be considering the environmental impact of not photovoltaic panels or ethanol but whale oil. This is how far behind we are in moral science.

One subfield of moral science has done somewhat better: In economics, theory and practice differ by only about 100 years. Capitalism really was instituted on a large scale only a few decades after Adam Smith argued for it, and socialism (while horrifyingly abused in the Communism of Lenin and Stalin) has nonetheless been implemented on a wide scale only a century after Marx. Keynesian stimulus was international policy (despite its numerous detractors) in 2008 and 2020, and Keynes himself died only in 1946. This process is still slower than it probably should be, but at least we aren’t completely ignoring new advances the way we do in ethics. If we were only 100 years behind in technology, we would at least have cars and electricity.

Except perhaps in economics, in general we entrust our moral claims to the authority of men in tall hats and ornate robes who merely assert their superiority and ties to higher knowledge, while ignoring the thousands of others who actually apply their reason and demonstrate knowledge and expertise. A criminal in pretty robes who calls himself a moral leader might as well be a moral leader, as far as we’re concerned; a genuinely wise teacher of morality who isn’t arrogant enough to assert special revelation from the divine is instead ignored. Why do we do this? Religion. Religion is holding us back.

We need to move beyond religion in order to make real and lasting moral progress.

Against deontology

Aug 6 JDN 2460163

In last week’s post I argued against average utilitarianism, basically on the grounds that it devalues the lives of anyone who isn’t of above average happiness. But you might be tempted to take these as arguments against utilitarianism in general, and that is not my intention.

In fact I believe that utilitarianism is basically correct, though it needs some particular nuances that are often lost in various presentations of it.

Its leading rival is deontology, which is really a broad class of moral theories, some a lot better than others.

What characterizes deontology as a class is that it uses rules, rather than consequences; an act is simply right or wrong regardless of its consequences—or even its expected consequences.

There are certain aspects of this which are quite appealing: In fact, I do think that rules have an important role to play in ethics, and as such I am basically a rule utilitarian. Actually trying to foresee all possible consequences of every action we might take is an absurd demand far beyond the capacity of us mere mortals, and so in practice we have no choice but to develop heuristic rules that can guide us.

But deontology says that these are no mere heuristics: They are in fact the core of ethics itself. Under deontology, wrong actions are wrong even if you know for certain that their consequences will be good.

Kantian ethics is one of the most well-developed deontological theories, and I am quite sympathetic to it. In fact I used to consider myself one of its adherents, but I now consider that view mistaken.

Let’s first dispense with the views of Kant himself, which are obviously wrong. Kant explicitly said that lying is always, always, always wrong, and even when presented with obvious examples where you could tell a small lie to someone obviously evil in order to save many innocent lives, he stuck to his guns and insisted that lying is always wrong.

This is a bit anachronistic, but I think this example will be more vivid for modern readers, and it absolutely is consistent with what Kant wrote about the actual scenarios he was presented with:

You are living in Germany in 1945. You have sheltered a family of Jews in your attic to keep them safe from the Holocaust. Nazi soldiers have arrived at your door, and ask you: “Are there any Jews in this house?” Do you tell the truth?

I think it’s utterly, agonizingly obvious that you should not tell the truth. Exactly what you should do is less obvious: Do you simply lie and hope they buy it? Do you devise a clever ruse? Do you try to distract them in some way? Do you send them on a wild goose chase elsewhere? If you could overpower them and kill them, should you? What if you aren’t sure you can; should you still try? But one thing is clear: You don’t hand over the Jewish family to the Nazis.

Yet when presented with similar examples, Kant insisted that lying is always wrong. He had a theory to back it up, his Categorical Imperative: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

And so his argument goes: Since it would be obviously incoherent to say that everyone should always lie, lying is wrong, and you’re never allowed to do it. He actually bites that bullet, a bullet the size of a howitzer round.

Modern deontologists—even those who consider themselves Kantians—are more sophisticated than this. They realize that you could make a rule like “Never lie, except to save the life of an innocent person” or “Never lie, except to stop a great evil.” Either of these would be quite adequate to solve this particular dilemma. And it’s absolutely possible to will that these would be universal laws, in the sense that they would apply to anyone. ‘Universal’ doesn’t have to mean ‘applies equally to all possible circumstances’.

There are also a couple of things that deontology does very well, which are worth preserving. One of them is supererogation: The idea that some acts are above and beyond the call of duty, that something can be good without being obligatory.

This is something most forms of utilitarianism are notoriously bad at. They show us a spectrum of worlds from the best to the worst, and tell us to make things better. But there’s nowhere we are allowed to stop, unless we somehow manage to make it all the way to the best possible world.

I find this kind of moral demand very tempting, which often leads me to feel a tremendous burden of guilt. I always know that I could be doing more than I do. I’ve written several posts about this in the past, in the hopes of fighting off this temptation in myself and others. (I am not entirely sure how well I’ve succeeded.)

Deontology does much better in this regard: Here are some rules. Follow them.

Many of the rules are in fact very good rules that most people successfully follow their entire lives: Don’t murder. Don’t rape. Don’t commit robbery. Don’t rule a nation tyrannically. Don’t commit war crimes.

Others are oft more honored in the breach than the observance: Don’t lie. Don’t be rude. Don’t be selfish. Be brave. Be generous. But a well-developed deontology can even deal with this, by saying that some rules are more important than others, and thus some sins are more forgivable than others.

Whereas a utilitarian—at least, anything but a very sophisticated utilitarian—can only say who is better and who is worse, a deontologist can say who is good enough: who has successfully discharged their moral obligations and is otherwise free to live their life as they choose. Deontology absolves us of guilt in a way that utilitarianism is very bad at.

Another good deontological principle is double-effect: Basically this says that if you are doing something that will have bad outcomes as well as good ones, it matters whether you intend the bad one and what you do to try to mitigate it. There does seem to be a morally relevant difference between a bombing that kills civilians accidentally as part of an attack on a legitimate military target, and a so-called “strategic bombing” that directly targets civilians in order to maximize casualties—even if both occur as part of a justified war. (Both happen a lot—and it may even be the case that some of the latter were justified. The Tokyo firebombing and atomic bombs on Hiroshima and Nagasaki were very much in the latter category.)

There are ways to capture this principle (or something very much like it) in a utilitarian framework, but like supererogation, it requires a sophisticated, nuanced approach that most utilitarians don’t seem willing or able to take.

Now that I’ve said what’s good about it, let’s talk about what’s really wrong with deontology.

Above all: How do we choose the rules?

Kant seemed to think that mere logical coherence would yield a sufficiently detailed—perhaps even unique—set of rules for all rational beings in the universe to follow. This is obviously wrong, and seems to be simply a failure of his imagination. There is literally a countably infinite space of possible ethical rules that are logically consistent. (With probability 1 any given one is utter nonsense: “Never eat cheese on Thursdays”, “Armadillos should rule the world”, and so on—but these are still logically consistent.)

If you require the rules to be simple and general enough to always apply to everyone everywhere, you can narrow the space substantially; but this is also how you get obviously wrong rules like “Never lie.”

In practice, there are two ways we actually seem to do this: Tradition and consequences.

Let’s start with tradition. (It came first historically, after all.) You can absolutely make a set of rules based on whatever your culture has handed down to you since time immemorial. You can even write them down in a book that you declare to be the absolute infallible truth of the universe—and, amazingly enough, you can get millions of people to actually buy that.

The result, of course, is what we call religion. Some of its rules are good: Thou shalt not kill. Some are flawed but reasonable: Thou shalt not steal. Thou shalt not commit adultery. Some are nonsense: Thou shalt not covet thy neighbor’s goods.

And some, well… some rules of tradition are the source of many of the world’s most horrific human rights violations. Thou shalt not suffer a witch to live (Exodus 22:18). If a man also lie with mankind, as he lieth with a woman, both of them have committed an abomination: they shall surely be put to death; their blood shall be upon them (Leviticus 20:13).

Tradition-based deontology has in fact been the major obstacle to moral progress throughout history. It is not a coincidence that utilitarianism began to become popular right before the abolition of slavery, and there is an even more direct causal link between utilitarianism and the advancement of rights for women and LGBT people. When the sole argument you can make for moral rules is that they are ancient (or allegedly handed down by a perfect being), you can make rules that oppress anyone you want. But when rules have to be based on bringing happiness or preventing suffering, whole classes of oppression suddenly become untenable. “God said so” can justify anything—but “Who does it hurt?” can cut through.

It is an oversimplification, but not a terribly large one, to say that the arc of moral history has been drawn by utilitarians dragging deontologists kicking and screaming into a better future.

There is a better way to make rules, and that is based on consequences. And, in practice, most people who call themselves deontologists these days do this. They develop a system of moral rules based on what would be expected to lead to the overall best outcomes.

I like this approach. In fact, I agree with this approach. But it basically amounts to abandoning deontology and surrendering to utilitarianism.

Once you admit that the fundamental justification for all moral rules is the promotion of happiness and the prevention of suffering, you are basically a rule utilitarian. Rules then become heuristics for promoting happiness, not the fundamental source of morality itself.

I suppose it could be argued that this is not a surrender but a synthesis: We are looking for the best aspects of deontology and utilitarianism. That makes a lot of sense. But I keep coming back to the dark history of traditional rules, the fact that deontologists have basically been holding back human civilization since time immemorial. If deontology wants to be taken seriously now, it needs to prove that it has broken with that dark tradition. And frankly the easiest answer to me seems to be to just give up on deontology.

Are humans rational?

JDN 2456928 PDT 11:21.

The central point of contention between cognitive economists and neoclassical economists hinges upon the word “rational”: Are humans rational? What do we mean by “rational”?

Neoclassicists are very keen to insist that they think humans are rational, and often characterize the cognitivist view as saying that humans are irrational. (Dan Ariely has a habit of feeding this view, titling books things like Predictably Irrational and The Upside of Irrationality.) But I really don’t think this is the right way to characterize the difference.

Daniel Kahneman has a somewhat better formulation (from Thinking, Fast and Slow): “I often cringe when my work is credited as demonstrating that human choices are irrational, when in fact our research only shows that Humans are not well described by the rational-agent model.” (Yes, he capitalizes the word “Humans” throughout, which is annoying; but in general it is a great book.)

The problem is that saying “humans are irrational” has the connotation of a universal statement; it seems to be saying that everything we do, all the time, is always and everywhere utterly irrational. And this of course could hardly be further from the truth; we would not have even survived in the savannah, let alone invented the Internet, if we were that irrational. If we simply lurched about randomly without any concept of goals or response to information in the environment, we would have starved to death millions of years ago.

But at the same time, the neoclassical definition of “rational” obviously does not describe human beings. We aren’t infinite identical psychopaths. Particularly bizarre (and frustrating) is the continued insistence that rationality entails selfishness; apparently economists are getting all their philosophy from Ayn Rand (who barely even qualifies as a philosopher), rather than the greats such as Immanuel Kant and John Stuart Mill or even the best contemporary philosophers such as Thomas Pogge and John Rawls. All of these latter would be baffled by the notion that selfless compassion is irrational.

Indeed, Kant argued that rationality implies altruism, that a truly coherent worldview requires assent to universal principles that are morally binding on yourself and every other rational being in the universe. (I am not entirely sure he is correct on this point, and in any case it is clear to me that neither you nor I are anywhere near advanced enough beings to seriously attempt such a worldview. Where neoclassicists envision infinite identical psychopaths, Kant envisions infinite identical altruists. In reality we are finite diverse tribalists.)

But even if you drop selfishness, the requirements of perfect information and expected utility maximization are still far too strong to apply to real human beings. If that’s your standard for rationality, then indeed humans—like all beings in the real world—are irrational.

The confusion, I think, comes from the huge gap between ideal rationality and total irrationality. Our behavior is neither perfectly optimal nor hopelessly random, but somewhere in between.

In fact, we are much closer to the side of perfect rationality! Our brains are limited, so they operate according to heuristics: simplified, approximate rules that are correct most of the time. Clever experiments—or complex environments very different from how we evolved—can cause those heuristics to fail, but we must not forget that the reason we have them is that they work extremely well in most cases in the environment in which we evolved. We are about 90% rational—but woe betide that other 10%.

The most obvious example is phobias: Why are people all over the world afraid of snakes, spiders, falling, and drowning? Because those used to be leading causes of death. In the African savannah 200,000 years ago, you weren’t going to be hit by a car, shot with a rifle bullet or poisoned by carbon monoxide. (You’d probably die of malaria, actually; for that one, instead of evolving to be afraid of mosquitoes we evolved a biological defense mechanism—sickle-cell red blood cells.) Death in general was actually much more likely then, particularly for children.

A similar case can be made for other heuristics we use: We are tribal because the proper functioning of our 100-person tribe used to be the most important factor in our survival. We are racist because people physically different from us were usually part of rival tribes and hence potential enemies. We hoard resources even when our technology allows abundance, because a million years ago no such abundance was possible and every meal might be our last.

When asked how common something is, we don’t calculate a posterior probability based upon Bayesian inference—that’s hard. Instead we try to think of examples—that’s easy. That’s the availability heuristic. And if we didn’t have mass media constantly giving us examples of rare events we wouldn’t otherwise have known about, the availability heuristic would actually be quite accurate. Right now, people think of terrorism as common (even though it’s astoundingly rare) because it’s always all over the news; but if you imagine living in an ancient tribe—or even a medieval village!—anything you heard about that often would almost certainly be something actually worth worrying about. Our level of panic over Ebola is totally disproportionate; but in the 14th century that same level of panic about the Black Death would have been entirely justified.

When we want to know whether something is a member of a category, again we don’t try to calculate the actual probability; instead we think about how well it seems to fit a model we have of the paradigmatic example of that category—the representativeness heuristic. You see a Black man on a street corner in New York City at night; how likely is it that he will mug you? Pretty small, actually, because there were fewer than 200,000 crimes in all of New York City last year in a city of 8,000,000 people—meaning the probability any given person committed a crime in the previous year was only 2.5%; the probability on any given day would then be less than 0.01%. Maybe having those attributes raises the probability somewhat, but you can still be about 99% sure that this guy isn’t going to mug you tonight. But since he seemed representative of the category in your mind “criminals”, your mind didn’t bother asking how many criminals there are in the first place—an effect called base rate neglect. Even 200 years ago—let alone 1 million—you didn’t have these sorts of reliable statistics, so what else would you use? You basically had no choice but to assess based upon representative traits.
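To spell out the base-rate arithmetic behind that claim (the crime and population figures are the ones above; the factor of 10 for looking “representative” is a number I am making up purely for illustration):

$$P(\text{mugs you tonight}) \approx \frac{200{,}000}{8{,}000{,}000 \times 365} \approx 0.007\%,$$

$$P(\text{mugs you tonight} \mid \text{seems representative}) \approx 10 \times 0.007\% \approx 0.07\%.$$

Even after granting a generous boost for representativeness, the base rate keeps the final probability tiny; base rate neglect is what happens when your mind consults only the second factor and never the first.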

As you probably know, people have trouble dealing with big numbers, and this is a problem in our modern economy where we actually need to keep track of millions or billions or even trillions of dollars moving around. And really I shouldn’t say it that way, because $1 million ($1,000,000) is an amount of money an upper-middle-class person could have in a retirement fund, while $1 billion ($1,000,000,000) would put you among the top 1,000 or so richest people in the world, and $1 trillion ($1,000,000,000,000) is enough to end world hunger for at least the next 15 years (it would only take about $1.5 trillion to do it forever, by paying only the interest on the endowment). It’s important to keep this in mind, because otherwise the natural tendency of the human mind is to say “big number” and ignore these enormous differences—it’s called scope neglect. But how often do you really deal with numbers that big? In ancient times, never. Even in the 21st century, not very often. You’ll probably never have $1 billion, and even $1 million is a stretch—so it seems a bit odd to say that you’re irrational if you can’t tell the difference. I guess technically you are, but it’s an error that is unlikely to come up in your daily life.
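The endowment figure is just perpetuity arithmetic; the 4.5% real return is an assumption I am adding to make the numbers in this paragraph line up, not a hard fact:

$$\text{annual cost} \approx \frac{\$1~\text{trillion}}{15~\text{years}} \approx \$67~\text{billion per year}, \qquad \text{endowment} \approx \frac{\$67~\text{billion}}{0.045} \approx \$1.5~\text{trillion}.$$

A fund of roughly that size could pay the annual cost out of investment returns alone, without ever touching the principal.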

Where it does come up, of course, is when we’re talking about national or global economic policy. Voters in the United States today have a level of power that for 99.99% of human existence no ordinary person has had. 2 million years ago you may have had a vote in your tribe, but your tribe was only 100 people. 2,000 years ago you may have had a vote in your village, but your village was only 1,000 people. Now you have a vote on the policies of a nation of 300 million people, and more than that really: As goes America, so goes the world. Our economic, cultural, and military hegemony is so total that decisions made by the United States reverberate through the entire human population. We have choices to make about war, trade, and ecology on a far larger scale than our ancestors could have imagined. As a result, the heuristics that served us well millennia ago are now beginning to cause serious problems.

[As an aside: This is why the “Downs Paradox” is so silly. If you’re calculating the marginal utility of your vote purely in terms of its effect on you—you are a psychopath—then yes, it would be irrational for you to vote. And really, by all means: psychopaths, feel free not to vote. But the effect of your vote is much larger than that; in a nation of N people, the decision will potentially affect N people. Your vote contributes 1/N to a decision that affects N people, making the marginal utility of your vote equal to N*1/N = 1. It’s constant. It doesn’t matter how big the nation is, the value of your vote will be exactly the same. The fact that your vote has a small impact on the decision is exactly balanced by the fact that the decision, once made, will have such a large effect on the world. Indeed, since larger nations also influence other nations, the marginal effect of your vote is probably larger in large elections, which means that people are being entirely rational when they go to greater lengths to elect the President of the United States (58% turnout) rather than the Wayne County Commission (18% turnout).]
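Here is the back-of-the-envelope version of that calculation, with the 1/N chance of being pivotal treated as the stylized assumption it is (real pivot probabilities depend on how close the election is), and $\bar{u}$ standing for the average utility difference per person between the two outcomes:

$$E[\text{marginal utility of your vote}] \approx \frac{1}{N} \times N\bar{u} = \bar{u}.$$

The $N$ cancels: a larger electorate dilutes your influence and raises the total stakes by exactly the same factor, which is why the value of a vote does not shrink as the nation grows.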

So that’s the problem. That’s why we have economic crises, why climate change is getting so bad, why we haven’t ended world hunger. It’s not that we’re complete idiots bumbling around with no idea what we’re doing. We simply aren’t optimized for the new environment that has been recently thrust upon us. We are forced to deal with complex problems unlike anything our brains evolved to handle. The truly amazing part is actually that we can solve these problems at all; most lifeforms on Earth simply aren’t mentally flexible enough to do that. Humans found a really neat trick (actually, in a formal evolutionary sense, a good trick, which we know because it also evolved in cephalopods): Our brains have high plasticity, meaning they are capable of adapting themselves to their environment in real-time. Unfortunately this process is difficult and costly; it’s much easier to fall back on our old heuristics. We ask ourselves: Why spend 10 times the effort to make it work 99% of the time when you can make it work 90% of the time so much more easily?

Why? Because it’s so incredibly important that we get these things right.