Moore’s “naturalistic fallacy”

Jan 12 JDN 2460688

In last week’s post I talked about some of the arguments against ethical naturalism, which have sometimes been called “the naturalistic fallacy”.

The “naturalistic fallacy” that G.E. Moore actually wrote about is somewhat subtler; it says that there is something philosophically suspect about defining something non-natural in terms of natural things—and furthermore, it says that “good” is not a natural thing and so cannot be defined in terms of natural things. For Moore, “good” is not something that can be defined with recourse to facts about psychology, biology, or mathematics; “good” is simply an indefinable atomic concept that exists independent of all other concepts. As such, Moore criticized moral theories like utilitarianism and hedonism that seek to define “good” in terms of “pleasure” or “lack of pain”; for Moore, good cannot have a definition in terms of anything except itself.

My greatest problem with this position is less philosophical than linguistic: how does one go about learning a concept that is so atomic and indefinable? When I was a child, I acquired an understanding of the word “good” that has since expanded as I grew in knowledge and maturity. I need not have called it “good”: had I been raised in Madrid, I would have called it bueno; in Beijing, hao; in Kyoto, ii; in Cairo, jayyid; and so on.

I’m not even sure if all these words really mean exactly the same thing, since each word comes with its own cultural and linguistic connotations. A vast range of possible sounds could be used to express this concept and related concepts—and somehow I had to learn which sounds were meant to symbolize which concepts, and what relations were meant to hold between them. This learning process was highly automatic, and occurred when I was very young, so I do not have great insight into its specifics; but nonetheless it seems clear to me that in some sense I learned to define “good” in terms of things that I could perceive.

No doubt this definition was tentative, and changed with time and experience; indeed, I think all definitions are like this. Perhaps my knowledge of other concepts, like “pleasure”, “happiness”, “hope” and “justice”, is interconnected with “good” in such a way that none can be defined separately from the others—indeed, perhaps language itself is best considered a network of mutually-reinforcing concepts, each with some independent justification and some connection to other concepts, not a straightforward derivation from more basic atomic notions. If you wish, call me a “foundherentist” in the tradition of Susan Haack; I certainly do think that all beliefs have some degree of independent justification by direct evidence and some degree of mutual justification by coherence. Haack uses the metaphor of a crossword puzzle, but I prefer Alison Gopnik’s mathematical model of a Bayes net.

In any case, I had to learn about “good” somehow. Even if I had some innate atomic concept of good, we are left to explain two things: first, how I managed to associate that innate atomic concept with my sense experiences, and second, how that innate atomic concept got into my brain in the first place. If it was genetic, it must have evolved; but it could only have evolved by phenotypic interaction with the external environment—that is, with natural things. We are natural beings, made of natural material, evolved by natural selection. If there is a concept of “good” encoded into my brain, whether by learning, instinct, or some combination, it had to get there by some natural mechanism.
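To make the “network of mutually-reinforcing concepts” picture concrete, here is a minimal sketch (my illustration, not Gopnik’s actual model) of the Bayes-net idea: two beliefs share a joint prior, so evidence bearing directly on one belief also shifts our credence in the other. All the numbers are made up for the example.

```python
# Minimal sketch of mutually reinforcing beliefs as Bayesian updating.
# Two propositions A and B are correlated under the joint prior, so
# evidence bearing directly on A also shifts our credence in B.

# Joint prior P(A, B): the beliefs cohere (they tend to stand or fall together).
prior = {
    (True, True): 0.4,
    (True, False): 0.1,
    (False, True): 0.1,
    (False, False): 0.4,
}

# Likelihood of the observed evidence given A alone: P(E | A).
likelihood = {True: 0.9, False: 0.2}

# Bayesian update: P(A, B | E) is proportional to P(E | A) * P(A, B).
unnorm = {ab: likelihood[ab[0]] * p for ab, p in prior.items()}
z = sum(unnorm.values())
posterior = {ab: p / z for ab, p in unnorm.items()}

# Marginal credence in B, before and after evidence about A.
p_b_prior = prior[(True, True)] + prior[(False, True)]
p_b_post = posterior[(True, True)] + posterior[(False, True)]
print(f"P(B) before evidence about A: {p_b_prior:.2f}")
print(f"P(B) after evidence about A:  {p_b_post:.2f}")
```

Neither belief is “foundational” here; each gets some support directly from evidence and some from its coherence with the other, which is the foundherentist point in probabilistic dress.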

The classic argument Moore used to support this position is now called the Open Question Argument; it says, essentially, that we could take any natural property that would be proposed as the definition of “good” and call it X, and we could ask: “Sure, that’s X, but is it good?” The idea is that since we can ask this question and it seems to make sense, then X cannot be the definition of “good”. If someone asked, “I know he is an unmarried man, but is he a bachelor?” or “I know that has three sides, but is it a triangle?” we would think that they didn’t understand what they were talking about; but Moore argues that for any natural property, “I know that is X, but is it good?” is still a meaningful question. Moore uses two particular examples, X = “pleasant” and X = “what we desire to desire”; and indeed those fit what he is saying. But are these really very good examples?

One subtle point that many philosophers make about this argument is that science can discover identities between things and properties that are not immediately apparent. We now know that water is H2O, but until the 19th century we did not know this. So we could perfectly well imagine someone asking, “I know that’s H2O, but is it water?” even though in fact water is H2O and we know this. I think this sort of argument would work for some very complicated moral claims, like the claim that constitutional democracy is good; I can imagine someone who was quite ignorant of international affairs asking, “I know that it’s constitutional democracy, but is that good?” and still making sense. This is because the goodness of constitutional democracy isn’t conceptually necessary; it is an empirical result based on the fact that constitutional democracies are more peaceful, fair, egalitarian, and prosperous than other governmental systems. In fact, it may even be only true relative to other systems we know of; perhaps there is an as-yet-unimagined governmental system that is better still. No one thinks that constitutional democracy is a definition of moral goodness. And indeed, I think few would argue that H2O is the definition of water; instead the definition of water is something like “that wet stuff we need to drink to survive” and it just so happens that this turns out to be H2O. If someone asked “is that wet stuff we need to drink to survive really water?” they would rightly be thought to be talking nonsense; that’s just what water means.

But if instead of the silly examples Moore uses, we take a serious proposal that real moral philosophers have suggested, it’s not nearly so obvious that the question is open. From Kant: “Yes, that is our duty as rational beings, but is it good?” From Mill: “Yes, that increases the amount of happiness and decreases the amount of suffering in the world, but is it good?” From Aristotle: “Yes, that is kind, just, and fair, but is it good?” These do sound dangerously close to talking nonsense! If someone asked these questions, I would immediately expect an explanation of what they were getting at. And if no such explanation was forthcoming, I would, in fact, be led to conclude that they literally don’t understand what they’re talking about.

I can imagine making sense of “I know that has three sides, but is it a triangle?” in some bizarre curved multi-dimensional geometry. Even “I know he is an unmarried man, but is he a bachelor?” makes sense if you are talking about a celibate priest. Very rarely do perfect synonyms exist in natural languages, and even when they do they are often unstable due to the effects of connotations. None of this changes the fact that bachelors are unmarried men, triangles have three sides, and yes, goodness involves fulfilling rational duties, alleviating suffering, and being kind and just. (Deontology, consequentialism, and virtue theory are often thought to be distinct and incompatible; I’m convinced they amount to the same thing, which I’ll say more about in later posts.)

This line of reasoning has led some philosophers (notably Willard Quine) to deny the existence of analytic truths altogether; on Quine’s view even “2+2=4” isn’t something we can deduce directly from the meaning of the symbols. This is clearly much too strong; no empirical observation could ever lead us to deny 2+2=4. In fact, I am convinced that all mathematical truths are ultimately reducible to tautologies; even “the Fourier transform of a Gaussian is Gaussian” is ultimately a way of saying in compact jargon some very complicated statement that amounts to A=A. This is not to deny that mathematics is useful; of course mathematics is tremendously useful, because this sort of compact symbolic jargon allows us to make innumerable inferences about the world and at the same time guarantee that these inferences are correct. Whenever you see a Gaussian and you need its Fourier transform (I know, it happens a lot, right?), you can immediately know that the result will be a Gaussian; you don’t have to go through the whole derivation yourself. We are wrong to think that “ultimately reducible to a tautology” is the same as “worthless and trivial”; on the contrary, to realize that mathematics is reducible to tautology is to say that mathematics is undeniable, literally impossible to coherently deny. At least the way I use the words, the statement “Happiness is good and suffering is bad” is pretty close to that same sort of claim; if you don’t agree with it, I sense that you honestly don’t understand what I mean.
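For the record, the “compact jargon” in question unpacks as follows (under the unitary convention for the Fourier transform; other conventions shift the constants but not the conclusion):

```latex
% Fourier transform convention:
%   \hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx
% For a Gaussian f(x) = e^{-a x^2} with a > 0:
\[
  \widehat{e^{-a x^2}}(\xi) \;=\; \sqrt{\frac{\pi}{a}}\, e^{-\pi^2 \xi^2 / a},
\]
% which is again a Gaussian; in the special case a = \pi, the function
% e^{-\pi x^2} is its own Fourier transform.
\[
  \widehat{e^{-\pi x^2}}(\xi) \;=\; e^{-\pi \xi^2}.
\]
```

Completing the square in the exponent of the integrand is all it takes to verify this, which is exactly the sense in which the theorem is a (very useful) elaborate tautology.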

In any case, I see no more fundamental difficulty in defining “good” than I do in defining any concept, like “man”, “tree”, “multiplication”, “green” or “refrigerator”; and nor do I see any point in arguing about the semantics of definition as an approach to understanding moral truth. It seems to me that Moore has confused the map with the territory, and later authors have confused him with Hume, to all of our detriment.

Argumentum ab scientia is not argumentum baculo: The difference between authority and expertise

May 7, JDN 2457881

Americans are, on the whole, suspicious of authority. This is a very good thing; it shields us against authoritarianism. But it comes with a major downside, which is a tendency to forget the distinction between authority and expertise.

Argument from authority is an informal fallacy, argumentum baculo. The fact that something was said by the Pope, or the President, or the General Secretary of the UN, doesn’t make it true. (Aside: You’re probably more familiar with the phrase argumentum ad baculum, which is terrible Latin. That would mean “argument toward a stick”, when clearly the intended meaning was “argument by means of a stick”, which is argumentum baculo.)

But argument from expertise, argumentum ab scientia, is something quite different. The world is much too complicated for any one person to know everything about everything, so we have no choice but to specialize our knowledge, each of us becoming an expert in only a few things. So if you are not an expert in a subject, when someone who is an expert in that subject tells you something about that subject, you should probably believe them.

You should especially be prepared to believe them when the entire community of experts is in consensus or near-consensus on a topic. The scientific consensus on climate change is absolutely overwhelming. Is this a reason to believe in climate change? You’re damn right it is. Unless you have years of education and experience in understanding climate models and atmospheric data, you have no basis for challenging the expert consensus on this issue.

This confusion has created a deep current of anti-intellectualism in our culture, as Isaac Asimov famously recognized:

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that “my ignorance is just as good as your knowledge.”

This is also important to understand if you have heterodox views on any scientific topic. The fact that the whole field disagrees with you does not prove that you are wrong—but it does make it quite likely that you are wrong. Cranks often want to compare themselves to Galileo or Einstein, but here’s the thing: Galileo and Einstein didn’t act like cranks. They didn’t expect the scientific community to respect their ideas before they had gathered compelling evidence in their favor.

When behavioral economists found that neoclassical models of human behavior didn’t stand up to scrutiny, did they shout from the rooftops that economics is all a lie? No, they published their research in peer-reviewed journals, and talked with economists about the implications of their results. There may have been times when they felt ignored or disrespected by the mainstream, but they pressed on, because the data was on their side. And ultimately, the mainstream gave in: Daniel Kahneman won the Nobel Prize in Economics.

Experts are not always right, that is true. But they are usually right, and if you think they are wrong you’d better have a good reason to think so. The best reasons are the sort that come about when you yourself have spent the time and effort to become an expert, able to challenge the consensus on its own terms.

Admittedly, that is a very difficult thing to do—and more difficult than it should be. I have seen firsthand how difficult and painful the slow grind toward a PhD can be, and how many obstacles will get thrown in your way, ranging from nepotism and interdepartmental politics, to discrimination against women and minorities, to mismatches of interest between students and faculty, all the way to illness, mental health problems, and the slings and arrows of outrageous fortune in general. If you have particularly heterodox ideas, you may face particularly harsh barriers, and sometimes it behooves you to hold your tongue and toe the line awhile.

But this is no excuse not to gain expertise. Even if academia itself is not available to you, we live in an age of unprecedented availability of information—it’s not called the Information Age for nothing. A sufficiently talented and dedicated autodidact can challenge the mainstream, if their ideas are truly good enough. (Perhaps the best example of this is the mathematician savant Srinivasa Ramanujan. But he’s… something else. I think he is about as far from the average genius as the average genius is from the average person.) No, that won’t be easy either. But if you are really serious about advancing human understanding rather than just rooting for your political team (read: tribe), you should be prepared to either take up the academic route or attack it as an autodidact from the outside.

In fact, most scientific fields are actually quite good about admitting what they don’t know. A total consensus that turns out to be wrong is actually a very rare phenomenon; much more common is a clash of multiple competing paradigms where one ultimately wins out, or they end up replaced by a totally new paradigm or some sort of synthesis. In almost all cases, the new paradigm wins not because it becomes fashionable or the ancien régime dies out (as Planck cynically claimed) but because overwhelming evidence is observed in its favor, often in the form of explaining some phenomenon that was previously impossible to understand. If your heterodox theory doesn’t do that, then it probably won’t win, because it doesn’t deserve to.

(Right now you might think of challenging me: Does my heterodox theory do that? Does the tribal paradigm explain things that either total selfishness or total altruism cannot? I think it’s pretty obvious that it does. I mean, you are familiar with a little thing called “racism”, aren’t you? There is no explanation for racism in neoclassical economics; to understand it at all you have to just impose it as an arbitrary term on the utility function. But at that point, why not throw in whatever you please? Maybe some people enjoy bashing their heads against walls, and other people take great pleasure in the taste of arsenic. Why would this particular self- (not to mention other-) destroying behavior be universal to all human societies?)

In practice, I think most people who challenge the mainstream consensus aren’t genuinely interested in finding out the truth—certainly not enough to actually go through the work of doing it. It’s a pattern you can see in a wide range of fringe views: anti-vaxxers, 9/11 truthers, climate denialists—they all think the same way. The mainstream disagrees with my preconceived ideology, therefore the mainstream is some kind of global conspiracy to deceive us. The overwhelming evidence that vaccination is safe and (wildly) cost-effective, that 9/11 was indeed perpetrated by Al Qaeda and neither planned nor anticipated by anyone in the US government, and that the global climate is being changed by human greenhouse gas emissions—these things simply don’t matter to them, because it was never really about the truth. They knew the answer before they asked the question. Because their identity is wrapped up in that political ideology, they know it couldn’t possibly be otherwise, and no amount of evidence will change their mind.

How do we reach such people? That, I don’t know. I wish I did. But I can say this much: We can stop taking them seriously when they say that the overwhelming scientific consensus against them is just another “appeal to authority”. It’s not. It never was. It’s an argument from expertise—there are people who know this a lot better than you, and they think you’re wrong, so you’re probably wrong.