Signaling and the Curse of Knowledge

Jan 3 JDN 2459218

I received several books for Christmas this year, and the one I was most excited to read first was The Sense of Style by Steven Pinker. Pinker is exactly the right person to write such a book: He is both a brilliant linguist and cognitive scientist and also an eloquent and highly successful writer. There are two other books on writing that I rate at the same tier: On Writing by Stephen King, and The Art of Fiction by John Gardner. Don’t bother with style manuals from people who only write style manuals; if you want to learn how to write, learn from people who are actually successful at writing.

Indeed, I knew I’d love The Sense of Style as soon as I read its preface, containing some truly hilarious takedowns of Strunk & White. And honestly Strunk & White are among the best standard style manuals; they at least actually manage to offer some useful advice while also being stuffy, pedantic, and often outright inaccurate. Most style manuals only do the second part.

One of Pinker’s central focuses in The Sense of Style is on The Curse of Knowledge, an all-too-common bias in which knowing something makes us unable to appreciate that other people don’t already know it. I think I succumbed to this failing most severely in my first book, Special Relativity from the Ground Up, in which my concept of “the ground” was above most people’s ceilings. I was trying to write for high school physics students, and I think the book ended up mostly being read by college physics professors.

The problem is surely a real one: After years of gaining expertise in a subject, we are all liable to forget the difficulty of reaching our current summit and automatically deploy concepts and jargon that only a small group of experts actually understand. But I think Pinker underestimates the difficulty of escaping this problem, because it’s not just a cognitive bias that we all suffer from time to time. It’s also something that our society strongly incentivizes.

Pinker points out that a small but nontrivial proportion of published academic papers are genuinely well written, using this to argue that obscurantist jargon-laden writing isn’t necessary for publication; but he doesn’t seem to even consider the fact that nearly all of those well-written papers were published by authors who already had tenure or even distinction in the field. I challenge you to find a single paper written by a lowly grad student that could actually get published without being full of needlessly technical terminology and awkward passive constructions: “A murine model was utilized for the experiment, in an acoustically sealed environment” rather than “I tested using mice and rats in a quiet room”. This is not because grad students are more thoroughly entrenched in the jargon than tenured professors (quite the contrary), nor because grad students are worse writers in general (that one could really go either way), but because grad students have more to prove. We need to signal our membership in the tribe, whereas once you’ve got tenure—or especially once you’ve got an endowed chair or something—you have already proven yourself.

Pinker seems to briefly touch on this insight (p. 69), without fully appreciating its significance: “Even when we have an inkling that we are speaking in a specialized lingo, we may be reluctant to slip back into plain speech. It could betray to our peers the awful truth that we are still greenhorns, tenderfoots, newbies. And if our readers do know the lingo, we might be insulting their intelligence while spelling it out. We would rather run the risk of confusing them while at least appearing to be sophisticated than take a chance at belaboring the obvious while striking them as naive or condescending.”

What we are dealing with here is a signaling problem. The fact that one can write better once one is well-established is the phenomenon of countersignaling, where one who has already established their status stops investing in signaling.

Here’s a simple model for you. Suppose each person has a level of knowledge x, which they are trying to demonstrate. They know their own level of knowledge, but nobody else does.

Suppose that when we observe someone’s knowledge, we get two pieces of information: We have an imperfect observation of their true knowledge which is x+e, the real value of x plus some amount of error e. Nobody knows exactly what the error is. To keep the model as simple as possible I’ll assume that e is drawn from a uniform distribution between -1 and 1.

Finally, assume that we are trying to select people above a certain threshold: Perhaps we are publishing in a journal, or hiring candidates for a job. Let’s call that threshold z. If x < z-1, then since e can never be larger than 1, we will immediately observe that they are below the threshold and reject them. If x > z+1, then since e can never be smaller than -1, we will immediately observe that they are above the threshold and accept them.

But when z-1 < x < z+1, we may think they are above the threshold when they actually are not (if e is positive), or think they are not above the threshold when they actually are (if e is negative).

So then let’s say that they can invest in signaling by putting in some amount of visible work y (like citing obscure papers or using complex jargon). This additional work may be costly and provide no real value in itself, but it can still be useful so long as one simple condition is met: It’s easier to do if your true knowledge x is high.

In fact, for this very simple model, let’s say that you are strictly limited by the constraint that y <= x. You can’t show off what you don’t know.

If your true value x > z, then you should choose y = x. Then, upon observing your signal, we know immediately that you must be above the threshold.

But if your true value x < z, then you should choose y = 0, because there’s no point in signaling that you were almost at the threshold. You’ll still get rejected.

Yet remember from before that only those with z-1 < x < z+1 need to bother signaling at all. Those with x > z+1 can instead countersignal, by also choosing y = 0. Since you already have tenure, nobody doubts that you belong in the club.

This means we’ll end up with three groups: Those with x < z, who don’t signal and don’t get accepted; those with z < x < z+1, who signal and get accepted; and those with x > z+1, who don’t signal but get accepted. Then life will be hardest for those who are just above the threshold, who have to spend enormous effort signaling in order to get accepted—and that sure does sound like grad school.
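
Here is a minimal simulation of this toy model (a sketch of my own; the numbers, the uniform spread of candidates, and the rule encoding the equilibrium outcome are illustrative assumptions, not anything derived from real hiring data):

```python
import numpy as np

rng = np.random.default_rng(42)

z = 5.0        # acceptance threshold
n = 100_000    # number of candidates

# True knowledge levels, spread around the threshold.
x = rng.uniform(z - 3, z + 3, size=n)

# Optimal signaling effort y (subject to y <= x) as derived above:
# only those with z < x < z + 1 bother to signal, and they signal y = x.
y = np.where((x > z) & (x < z + 1), x, 0.0)

# Equilibrium outcome described above: a signal y >= z proves x >= z
# (because y <= x), while those with x > z + 1 are accepted on the
# strength of the noisy observation x + e alone -- countersignaling.
accepted = (y >= z) | (x > z + 1)

for lo, hi, label in [(z - 3, z, "below threshold"),
                      (z, z + 1, "just above threshold"),
                      (z + 1, z + 3, "well above threshold")]:
    g = (x > lo) & (x <= hi)
    print(f"{label:>20}: mean signaling effort {y[g].mean():5.2f}, "
          f"acceptance rate {accepted[g].mean():4.2f}")
```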

You can make the model more sophisticated if you like: Perhaps the error isn’t uniformly distributed, but some other distribution with wider support (like a normal distribution, or a logistic distribution); perhaps the signaling isn’t perfect, but itself has some error; and so on. With such additions, you can get a result where the least-qualified still signal a little bit so they get some chance, and the most-qualified still signal a little bit to avoid a small risk of being rejected. But it’s a fairly general phenomenon that those closest to the threshold will be the ones who have to spend the most effort in signaling.

This reveals a disturbing overlap between the Curse of Knowledge and Impostor Syndrome: We write in impenetrable obfuscationist jargon because we are trying to conceal our own insecurity about our knowledge and our status in the profession. We’d rather you not know what we’re talking about than have you realize that we don’t know what we’re talking about.

For the truth is, we don’t know what we’re talking about. And neither do you, and neither does anyone else. This is the agonizing truth of research that nearly everyone doing research knows, but one must be either very brave, very foolish, or very well-established to admit out loud: It is in the nature of doing research on the frontier of human knowledge that there is always far more that we don’t understand about our subject than that we do understand.

I would like to be more open about that. I would like to write papers saying things like “I have no idea why it turned out this way; it doesn’t make sense to me; I can’t explain it.” But to say that the profession disincentivizes speaking this way would be a grave understatement. It’s more accurate to say that the profession punishes speaking this way to the full extent of its power. You’re supposed to have a theory, and it’s supposed to work. If it doesn’t actually work, well, maybe you can massage the numbers until it seems to, or maybe you can retroactively change the theory into something that does work. Or maybe you can just not publish that paper and write a different one.

Here is a graph of one million published z-scores in academic journals:

It looks like a bell curve, except that almost all the values between -2 and 2 are mysteriously missing.

If we were actually publishing all the good science that gets done, it would in fact be a very nice bell curve. All those missing values are papers that never got published, or results that were excluded from papers, or statistical analyses that were massaged, in order to get a p-value less than the magical threshold for publication of 0.05. (For the statistically uninitiated, a z-score less than -2 or greater than +2 generally corresponds to a p-value less than 0.05, so these are effectively the same constraint.)
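
A quick simulation shows how that censored bell curve arises (the distribution here is purely hypothetical, my own illustration rather than the actual data behind the graph):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical world: every study is written up honestly, so z-scores form
# a bell curve, centered a bit above zero because most true effects are small.
z_all = rng.normal(loc=0.5, scale=1.5, size=1_000_000)

# Publication filter: only "significant" results survive. |z| > 2 is
# essentially the two-tailed p < 0.05 criterion.
z_pub = z_all[np.abs(z_all) > 2]

# Crude ASCII histogram of published z-scores: note the hole in the middle.
for lo in range(-6, 6):
    count = np.sum((z_pub >= lo) & (z_pub < lo + 1))
    print(f"z in [{lo:+d}, {lo + 1:+d}): " + "#" * int(count // 4000))
```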

I have literally never read a single paper published in an academic journal in the last 50 years that said in plain language, “I have no idea what’s going on here.” And yet I have read many papers—probably most of them, in fact—where that would have been an appropriate thing to say. It’s quite a rare paper, at least in the social sciences, that actually has a theory good enough to precisely fit the data without any special pleading or retroactive changes. (Often the bar for a theory’s success is lowered to “the effect is usually in the right direction”.) Typically results from behavioral experiments are bizarre and baffling, because people are a little screwy. It’s just that nobody is willing to stake their career on being that honest about the depth of our ignorance.

This is a deep shame, for the greatest advances in human knowledge have almost always come from people recognizing the depth of their ignorance. Paradigms never shift until people recognize that the one they are using is defective.

This is why it’s so hard to beat the Curse of Knowledge: You need to signal that you know what you’re talking about, and the truth is you probably don’t, because nobody does. So you need to sound like you know what you’re talking about in order to get people to listen to you. You may be offering nothing more than educated guesses based on extremely limited data, but that’s actually the best anyone can do; those other people saying they have it all figured out are either doing the same thing, or they’re doing something even less reliable than that. So you’d better sound like you have it all figured out, and that’s a lot more convincing when you “utilize a murine model” than when you “use rats and mice”.

Perhaps we can at least push a little bit toward plainer language. It helps to be addressing a broader audience: It is both blessing and curse that whatever I put on this blog is what you will read, without any gatekeepers in my path. I can use plainer language here if I so choose, because no one can stop me. But of course there’s a signaling risk here as well: The Internet is a public place, and potential employers can read this too, and perhaps decide they don’t like me speaking so plainly about the deep flaws in the academic system. Maybe I’d be better off keeping my mouth shut, at least for a while. I’ve never been very good at keeping my mouth shut.

Once we get established in the system, perhaps we can switch to countersignaling, though even this doesn’t always happen. I think there are two reasons this can fail: First, you can almost always try to climb higher. Once you have tenure, aim for an endowed chair. Once you have that, try to win a Nobel. Second, once you’ve spent years of your life learning to write in a particular stilted, obscurantist, jargon-ridden way, it can be very difficult to change that habit. People have been rewarding you all your life for writing in ways that make your work unreadable; why would you want to take the risk of suddenly making it readable?

I don’t have a simple solution to this problem, because it is so deeply embedded. It’s not something that one person or even a small number of people can really fix. Ultimately we will need to, as a society, start actually rewarding people for speaking plainly about what they don’t know. Admitting that you have no clue will need to be seen as a sign of wisdom and honesty rather than a sign of foolishness and ignorance. And perhaps even that won’t be enough: Because the fact will still remain that knowing which things you know that other people don’t is itself a very difficult thing to do.

Sheepskin effect doesn’t prove much

Sep 20 JDN 2459113

The sheepskin effect is the observation that the increase in income from graduating from college after four years, relative to attending college for three years, is much higher than the increase in income from attending college for three years instead of two.

It has been suggested that this provides strong evidence that education is primarily due to signaling, and doesn’t provide any actual value. In this post I’m going to show why this view is mistaken. The sheepskin effect in fact tells us very little about the true value of college. (Noah Smith actually made a pretty decent argument that it provides evidence against signaling!)

To see this, consider two very simple models.

In both models, we’ll assume that markets are competitive but productivity is not directly observable, so employers sort you based on your education level and then pay a wage equal to the average productivity of people at your education level, compensated for the cost of getting that education.

Model 1:

In this model, people all start with the same productivity, and are randomly assigned by their life circumstances to go to either 0, 1, 2, 3, or 4 years of college. College itself has no long-term cost.

The first year of college you learn a lot, the next couple of years you don’t learn much because you’re trying to find your way, and then in the last year of college you learn a lot of specialized skills that directly increase your productivity.

So this is your productivity after x years of college:

Years of college | Productivity
0                | 10
1                | 17
2                | 22
3                | 25
4                | 31

We assumed that you’d get paid your productivity, so these are also your wages.

The increase in income each year goes from +7, to +5, to +3, then jumps up to +6. So if you compare the 4-year-minus-3-year gap (+6) with the 3-year-minus-2-year gap (+3), you get a sheepskin effect.
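
The arithmetic is trivial, but here it is spelled out (just the table above, in code):

```python
# Productivity (= wage) after each year of college, from the table above.
wage = [10, 17, 22, 25, 31]

gaps = [wage[y] - wage[y - 1] for y in range(1, 5)]
print(gaps)  # [7, 5, 3, 6]

# Sheepskin effect: the fourth-year gap beats the third-year gap,
# in a model with zero signaling.
print(gaps[3] > gaps[2])  # True
```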

Model 2:

In this model, college is useless and provides no actual benefits. People vary in their intrinsic productivity, and the more productive they are, the cheaper it is for them to make it through college.

In particular, there are five types of people:

Type | Productivity | Cost per year of college
0    | 10           | 8
1    | 11           | 6
2    | 14           | 4
3    | 19           | 3
4    | 31           | 0

The wages for different levels of college education are as follows:

Years of college | Wage
0                | 10
1                | 17
2                | 22
3                | 25
4                | 31

Notice that these are exactly the same wages as in Model 1. This is of course entirely intentional. In a moment I’ll show why this is a Nash equilibrium.

Consider the choice of how many years of college to attend. You know your type, so you know the cost of college to you. You want to maximize your net benefit, which is the wage you’ll get minus the total cost of going to college.

Let’s assume that if a given year of college isn’t worth it, you won’t try to continue past it and see if more would be.

For a type-0 person, they could get 10 by not going to college at all, or 17-(1)(8) = 9 by going for 1 year, so they stop.

For a type-1 person, they could get 10 by not going to college at all, or 17-(1)(6) = 11 by going for 1 year, or 22-(2)(6) = 10 by going for 2 years, so they stop.

Filling out all the possibilities yields this table:

Years \ Type |  0 |  1 |  2 |  3 |  4
0            | 10 | 10 | 10 | 10 | 10
1            |  9 | 11 | 13 | 14 | 17
2            |    | 10 | 14 | 16 | 22
3            |    |    | 13 | 16 | 25
4            |    |    |    | 19 | 31
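
Here is a sketch that reproduces the table mechanically (net benefit = wage minus years attended times the per-year cost, as described above; the blank cells in the table are simply the types who have already dropped out, which this prints anyway):

```python
wage = {0: 10, 1: 17, 2: 22, 3: 25, 4: 31}  # wage by years of college
cost = {0: 8, 1: 6, 2: 4, 3: 3, 4: 0}       # per-year cost by type

print("Years \\ Type" + "".join(f"{t:>5}" for t in cost))
for years, w in wage.items():
    print(f"{years:>12}" + "".join(f"{w - years * cost[t]:>5}" for t in cost))
```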

I’d actually like to point out that it was much harder to find numbers that allowed me to make the sheepskin effect work in the second model, where education was all signaling. In the model where education provides genuine benefit, all I need to do is posit that the last year of college is particularly valuable (perhaps because high-level specialized courses are more beneficial to productivity). I could pretty much vary that parameter however I wanted, and get whatever magnitude of sheepskin effect I chose.

For the signaling model, I had to carefully calibrate the parameters so that the costs and benefits lined up just right to make sure that each type chose exactly the amount of college I wanted them to choose while still getting the desired sheepskin effect. It took me about two hours of very frustrating fiddling just to get numbers that worked. And that’s with the assumption that someone who finds 2 years of college not worth it won’t consider trying for 4 years of college (which, given the numbers above, they actually might want to), as well as the assumption that when type-3 individuals are indifferent between staying and dropping out they drop out.

And yet the sheepskin effect is supposed to be evidence that the world works like the signaling model?

I’m sure a more sophisticated model could make the signaling explanation a little more robust. The biggest limitation of these models is that once you observe someone’s education level, you immediately know their true productivity, whether it came from college or not. Realistically we should be allowing for unobserved variation that can’t be sorted out by years of college.

Maybe it seems implausible that the last year of college is actually more beneficial to your productivity than the previous years. This is probably the intuition behind the idea that sheepskin effects are evidence of signaling rather than genuine learning.

So how about this model?

Model 3:

As in the second model, there are five types of people: types 0, 1, 2, 3, and 4. They all start with the same level of productivity, and they have the same cost of going to college; but they get different benefits from going to college.

The problem is, people don’t start out knowing what type they are. Nor can they observe their productivity directly. All they can do is observe their experience of going to college and then try to figure out what type they must be.

Type 0s don’t benefit from college at all, and they know they are type 0; so they don’t go to college.

Type 1s benefit a tiny amount from college (+1 productivity per year), but don’t realize they are type 1s until after one year of college.

Type 2s benefit a little from college (+2 productivity per year), but don’t realize they are type 2s until after two years of college.

Type 3s benefit a moderate amount from college (+3 productivity per year), but don’t realize they are type 3s until after three years of college.

Type 4s benefit a great deal from college (+5 productivity per year), but don’t realize they are type 4s until after three years of college.

What then will happen? Type 0s will not go to college. Type 1s will go one year and then drop out. Type 2s will go two years and then drop out. Type 3s will go three years and then drop out. And type 4s will actually graduate.

That results in the following before-and-after productivity:

Type | Productivity before college | Years of college | Productivity after college
0    | 10                          | 0                | 10
1    | 10                          | 1                | 11
2    | 10                          | 2                | 14
3    | 10                          | 3                | 19
4    | 10                          | 4                | 30

If each person is paid a wage equal to their productivity, there will be a huge sheepskin effect; wages only go up +1 for 1 year, +3 for 2 years, +5 for 3 years, but then they jump up to +11 for graduation. It appears that the benefit of that last year of college is more than the other three combined. But in fact it’s not; for any given individual, the benefits of college are the same each year. It’s just that college is more beneficial to the people who decided to stay longer.
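
Here is a sketch of this third model (the per-year gains and the timing of self-discovery are taken straight from the description above):

```python
# Per-year productivity gain for each type; everyone starts at 10.
gain = {0: 0, 1: 1, 2: 2, 3: 3, 4: 5}
# Years of college completed before each type learns what it is.
learns_at = {0: 0, 1: 1, 2: 2, 3: 3, 4: 3}

# Everyone drops out upon learning their type, except type 4s, who
# learn after year 3 that the final year is worth finishing.
years = {t: 4 if t == 4 else learns_at[t] for t in gain}

wage = {t: 10 + gain[t] * years[t] for t in gain}
print(wage)  # {0: 10, 1: 11, 2: 14, 3: 19, 4: 30}

# Wage gap at each additional year of education (type t stops at t years):
gaps = [wage[t] - wage[t - 1] for t in range(1, 5)]
print(gaps)  # [1, 3, 5, 11] -- a big "sheepskin" jump at graduation,
             # though each person's per-year benefit is constant.
```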

And I could of course change that assumption too, making the early years more beneficial, or varying the distribution of types, or adding more uncertainty—and so on. But it’s really not hard at all to make a model where college is beneficial and you observe a large sheepskin effect.

In reality, I am confident that some of the observed benefit of college is due to sorting—not the same thing as signaling—rather than the direct benefits of education. The earnings advantage of going to a top-tier school may be as much about the selection of students as about the actual quality of the education, since those benefits drop dramatically once you control for measures of student ability like GPA and test scores.

Moreover, I agree that it’s worth looking at this: Insofar as college is about sorting or signaling, it’s wasteful from a societal perspective, and we should be trying to find more efficient sorting mechanisms.

But I highly doubt that all the benefits of college are due to sorting or signaling; there definitely are a lot of important things that people learn in college, not just conventional academic knowledge like how to do calculus, but also broader skills like how to manage time, how to work in groups, and how to present ideas to others. Colleges also cultivate friendships and provide opportunities for networking and exposure to a diverse community. Judging by voting patterns, I’m going to go out on a limb and say that college also makes you a better citizen, which would be well worth it by itself.

The truth is, we don’t know exactly why college is beneficial. We certainly know that it is beneficial: Unemployment rates and median earnings are directly sorted by education level. Yes, even PhDs in philosophy and sociology have lower unemployment and higher incomes (on average) than the general population. (And of course PhDs in economics do better still.)

Authoritarianism and Masculinity

Apr 19 JDN 2458957

There has always been a significant difference between men and women voters, at least as long as we have been gathering data—and probably as long as women have been voting, which is just about to hit its centennial in the United States.

But the 2016 and 2018 elections saw the largest gender gaps we’ve ever recorded. Dividing by Presidential administrations: during the Bush years (2000 to 2006), the gender gap never exceeded 18 percentage points and averaged less than 10 points; during the Obama years (2008 to 2014), it never exceeded 20 points and averaged about 15 points. In 2018, the gap stood at 23 percentage points.

Indeed, it is quite clear at this point that Trump’s support base comes mainly from White men.

This is far from the only explanatory factor here: Younger voters are much more liberal than older voters, more educated voters are more liberal than less educated voters, and urban voters are much more liberal than rural voters.

But the gender and race gaps are large enough that even if only White men with a college degree had voted, Trump would have still won, and even if only women without a college degree had voted, Trump would have lost. Trumpism is a White male identity movement.

And indeed it seems significant that Trump’s opponent was the first woman to be a US Presidential nominee from a major party.

Why would men be so much more likely to support Trump than women? Well, there’s the fact that Trump has been accused of sexual harassment dozens of times and sexual assault several times. Women are more likely to be victims of such behavior, and men are more likely to be perpetrators of it.

But I think that’s really a symptom of a broader cause, which is that authoritarianism is masculine.

Think about it: Can you even name a woman who was an authoritarian dictator? There have been a few queen tyrants historically, but not many; tyrants are almost always kings. And for all her faults, Margaret Thatcher was assuredly no Joseph Stalin.

Masculinity is tied to power, authority, strength, dominance: All things that authoritarians promise. It doesn’t even seem to matter that it’s always the dictator asserting power and dominance upon us, taking away the power and authority we previously had; the mere fact that some man is exerting power and dominance on someone seems to satisfy this impulse. And of course those who support authoritarians always seem to imagine that the dictator will oppress someone else—never me. (“I never thought leopards would eat my face!”)

Conversely, the virtues of democracy, such as equality, fairness, cooperation, and compromise, are coded feminine. This is how toxic masculinity sustains itself: Even being willing to talk about disagreements rather than fighting over them constitutes surrender to the feminine. So the mere fact that I am trying to talk them out of their insanely (both self- and other-) destructive norms proves that I serve the enemy.

I don’t often interact with Trump supporters, because doing so is a highly unpleasant experience. But when I have, certain themes kept reoccurring: “Trump is a real man”; “Democrats are pussies”; “they [people of color] are taking over our [White people’s] country”; “you’re a snowflake libtard beta cuck”.

Almost all of the content was about identity, particularly masculine and White identity. Virtually none of their defenses of Trump involved any substantive claims about policy, though some did at least reference the relatively good performance of the economy (up until recently—and that they all seem to blame on the “unforeseeable” pandemic, a “Black Swan”; never mind that people actually did foresee it and were ignored). Ironically, they are always the ones complaining about “identity politics”.

And while they would be the last to admit it, I noticed something else as well: Most of these men were deeply insecure about their own masculinity. They kept constantly trying to project masculine dominance, and getting increasingly aggravated when I simply ignored it rather than either submitting or responding with my own displays of dominance. Indeed, they probably perceived me as displaying a kind of masculine dominance: I was just countersignaling instead of signaling, and that’s what made them so angry. They clearly felt deeply envious of the fact that I could simply be secure in my own identity without feeling a need to constantly defend it.

But of course I wasn’t born that way. Indeed, the security I now feel in my own identity was very hard-won through years of agony and despair—necessitated by being a bisexual man in a world that even today isn’t very accepting of us. Even now I’m far from immune to the pressures of masculinity; I’ve simply learned to channel them better and resist their worst effects.

They call us “snowflakes” because they feel fragile, and fear their own fragility. And in truth, they are fragile. Indeed, fragile masculinity is one of the strongest predictors of support for Trump. But it is in the nature of fragile masculinity that pointing it out only aggravates it and provokes an even angrier response. Toxic masculinity is a very well-adapted meme; its capacity to defend itself is morbidly impressive, like the way that deadly viruses spread themselves is morbidly impressive.

This is why I think it is extremely dangerous to mock the size of Trump’s penis (or his hands, metonymically—though empirically, digit ratio slightly correlates with penis size, but overall hand size does not), or accuse his supporters of likewise having smaller penises. In doing so, you are reinforcing the very same toxic masculinity norms that underlie so much of Trump’s support. And this is even worse if the claim is true: In that case you’re also reinforcing that man’s own crisis of masculine identity.

Indeed, perhaps the easiest way to anger a man who is insecure about his masculinity is to accuse him of being insecure about his masculinity. It’s a bit of a paradox. I have even hesitated to write this post, for fear of triggering the same effect; but I realized that it’s more likely that you, my readers, would trigger it inadvertently, and by warning you I might reduce the overall rate at which it is triggered.

I do not use the word “triggered” lightly; I am talking about a traumatic trigger response. These men have been beaten down their whole lives for not being “manly enough”, however defined, and they lash out by attacking the masculinity of every other man they encounter—thereby perpetuating the cycle of trauma. And stricter norms of masculinity also make coping with trauma more difficult, which is why men who exhibit stricter masculinity also are more likely to suffer PTSD in war. There are years of unprocessed traumatic memories in these men’s brains, and the only way they know to cope with them is to try to inflict them on someone else.

The ubiquity of “cuck” as an insult in the alt-right is also quite notable in this context. It’s honestly a pretty weird insult to throw around casually; it implies knowing all sorts of things about a person’s sexual relationships that you can’t possibly know. (For someone in an openly polyamorous relationship, it’s probably quite amusing.) But it’s a way of attacking masculine identity: If you were a “real man”, your wife wouldn’t be sleeping around. We accuse her of infidelity in order to accuse you of inferiority. (And if your spouse is male? Well then obviously you’re even worse than a “cuck”—you’re a “fag”.) There also seems to be some sort of association that the alt-right made between cuckoldry and politics, as though the election of Obama constitutes America “cheating” on them. I’m not sure whether it bothers them more that Obama is liberal, or that he is Black. Both definitely bother them a great deal.

How do we deal with these men? If we shouldn’t attack their masculinity for fear of retrenchment, and we can’t directly engage them on questions of policy because it means nothing to them, what then should we do? I’m honestly not sure. What these men actually need is years of psychotherapy to cope with their deep-seated traumas; but they would never seek it out, because that, too, is considered unmasculine. Of course you can’t be expected to provide the effect of years of psychotherapy in a single conversation with a stranger. Even a trained therapist wouldn’t be able to do that, nor would they be likely to give actual therapy sessions to angry strangers for free.

What I think we can do, however, is to at least try to refrain from making their condition worse. We can rigorously resist the temptation to throw the same insults back at them, accusing them of having small penises, or being cuckolds, or whatever. We should think of this the way we think of using “gay” as an insult (something I all too well remember from middle school): You’re not merely insulting the person you’re aiming it at, you’re also insulting an entire community of innocent people.

We should even be very careful about directly addressing their masculine insecurity; it may sometimes be necessary, but it, too, is sure to provoke a defensive response. And as I mentioned earlier, if you are a man and you are not constantly defending your own masculinity, they can read that as countersignaling your own superiority. This is not an easy game to win.

But the stakes are far too high for us to simply give up. The fate of America and perhaps even the world hinges upon finding a solution.

Are some ideas too ridiculous to bother with?

Apr 22 JDN 2458231

Flat Earth. Young-Earth Creationism. Reptilians. 9/11 “Truth”. Rothschild conspiracies.

There are an astonishing number of ideas that satisfy two apparently-contrary conditions:

  1. They are so obviously ridiculous that even a few minutes of honest, rational consideration of evidence that is almost universally available will immediately refute them;
  2. They are believed by tens or hundreds of millions of otherwise-intelligent people.

Young-Earth Creationism is probably the most alarming, seeing as it grips the minds of some 38% of Americans.

What should we do when faced with such ideas? This is something I’ve struggled with before.

I’ve spent a lot of time and effort trying to actively address and refute them—but I don’t think I’ve even once actually persuaded someone who believes these ideas to change their mind. This doesn’t mean my time and effort were entirely wasted; it’s possible that I managed to convince bystanders, or gained some useful understanding, or simply improved my argumentation skills. But it does seem likely that my time and effort were mostly wasted.

It’s tempting, therefore, to give up entirely, and just let people go on believing whatever nonsense they want to believe. But there’s a rather serious downside to that as well: Thirty-eight percent of Americans.

These people vote. They participate in community decisions. They make choices that affect the rest of our lives. Nearly all of those Creationists are Evangelical Christians—and White Evangelical Christians voted overwhelmingly in favor of Donald Trump. I can’t be sure that changing their minds about the age of the Earth would also change their minds about voting for Trump, but I can say this: If all the Creationists in the US had simply not voted, Hillary Clinton would have won the election.

And let’s not leave the left wing off the hook either. Jill Stein is a 9/11 “Truther”, and pulled a lot of fellow “Truthers” to her cause in the election as well. Had all of Jill Stein’s votes gone to Hillary Clinton instead, again Hillary would have won, even if all the votes for Trump had remained the same. (That said, there is reason to think that if Stein had dropped out, most of those folks wouldn’t have voted at all.)

Therefore, I don’t think it is safe to simply ignore these ridiculous beliefs. We need to do something; the question is what.

We could try to censor them, but first of all that violates basic human rights—which should be a sufficient reason not to do it—and second, it probably wouldn’t even work. Censorship typically leads to radicalization, not assimilation.

We could try to argue against them. Ideally this would be the best option, but it has not shown much effect so far. The kind of person who sincerely believes that the Earth is 6,000 years old (let alone that governments are secretly ruled by reptilian alien invaders) isn’t the kind of person who is highly responsive to evidence and rational argument.

In fact, there is reason to think that these people don’t actually believe what they say the same way that you and I believe things. I’m not saying they’re lying, exactly. They think they believe it; they want to believe it. They believe in believing it. But they don’t actually believe it—not the way that I believe that cyanide is poisonous or the way I believe the sun will rise tomorrow. It isn’t fully integrated into the way that they anticipate outcomes and choose behaviors. It’s more of a free-floating sort of belief, where professing a particular belief allows them to feel good about themselves, or represent their status in a community.

To be clear, it isn’t that these beliefs are unimportant to them; on the contrary, they are in some sense more important. Creationism isn’t really about the age of the Earth; it’s about who you are and where you belong. A conventional belief can be changed by evidence about the world because it is about the world; a belief-in-belief can’t be changed by evidence because it was never really about that.

But if someone’s ridiculous belief is really about their identity, how do we deal with that? I can’t refute an identity. If your identity is tied to a particular social group, maybe they could ostracize you and cause you to lose the identity; but an outsider has no power to do that. (Even then, I strongly suspect that, for instance, most excommunicated Catholics still see themselves as Catholic.) And if it’s a personal identity not tied to a particular group, even that option is unavailable.

Where, then, does that leave us? It would seem that we can’t change their minds—but we also can’t afford not to change their minds. We are caught in a terrible dilemma.

I think there might be a way out. It’s a bit counter-intuitive, but I think what we need to do is stop taking them seriously as beliefs, and start treating them purely as announcements of identity.

So when someone says something like, “The Rothschilds run everything!”, instead of responding as though this were a coherent proposition being asserted, treat it as if someone had announced, “Boo! I hate the Red Sox!” Belief in the Rothschild conspiracies isn’t a well-defined set of propositions about the world; it’s an assertion of membership in a particular sort of political sect that is vaguely left-wing and anarchist. You don’t really think the Rothschilds rule everything. You just want to express your (quite justifiable) anger at how our current political system privileges the rich.

Likewise, when someone says they think the Earth is 6,000 years old, you could try to present the overwhelming scientific evidence that they are wrong—but it might be more productive, and it is certainly easier, to just think of this as a funny way of saying “I’m an Evangelical Christian”.

Will this eliminate the ridiculous beliefs? Not immediately. But it might ultimately do so, in the following way: By openly acknowledging the belief-in-belief as a signaling mechanism, we can open opportunities for people to develop new, less pathological methods of signaling. (Instead of saying you think the Earth is 6,000 years old, maybe you could wear a funny hat, like Orthodox Jews do. Funny hats don’t hurt anybody. Everyone loves funny hats.) People will always want to signal their identity, and there are fundamental reasons why such signals will typically be costly for those who use them; but we can try to make them not so costly for everyone else.

This also makes arguments a lot less frustrating, at least at your end. It might make them more frustrating at the other end, because people want their belief-in-belief to be treated like proper belief, and you’ll be refusing them that opportunity. But this is not such a bad thing; if we make it more frustrating to express ridiculous beliefs in public, we might manage to reduce the frequency of such expression.

No, advertising is not signaling

JDN 2457373

A while ago, I wrote a post arguing that advertising is irrational: at least with advertising as we know it, no real information is conveyed, and thus either consumers are being irrational in their purchasing decisions, or advertisers are irrational for buying ads that don’t work.

One of the standard arguments neoclassical economists make to defend the rationality of advertising is that advertising is signaling—that even though the content of the ads conveys no useful information, the fact that there are ads is a useful signal of the real quality of goods being sold.

The idea is that by spending on advertising, a company shows that they have a lot of money to throw around, and are therefore a stable and solvent company that probably makes good products and is going to stick around for awhile.

Here are a number of different papers all making this same basic argument, often with sophisticated mathematical modeling. This paper takes an even bolder approach, arguing that people benefit from ads and would therefore pay to get them if they had to. Does that sound even remotely plausible to you? It sure doesn’t to me. Some ads are fairly entertaining, but generally if someone is willing to pay money for a piece of content, they charge money for that content.

Could spending on advertising offer a signal of the quality of a product or the company that makes it? Yes. That is something that actually could happen. The reason this argument is ridiculous is not that advertising signaling couldn’t happen—it’s that advertising is clearly nowhere near the best way to do that. The content of ads is clearly nothing remotely like what it would be if advertising were meant to be a costly signal of quality.

Look at this ad for Orangina. Look at it. Look at it.

Now, did that ad tell you anything about Orangina? Anything at all?

As far as I can tell, the thing it actually tells you isn’t even true—it strongly implies that Orangina is a form of aftershave when in fact it is an orange-flavored beverage. It’d be kind of like having an ad for the iPad that involves scantily-clad dog-people riding the iPad like it’s a hoverboard. (Now that I’ve said it, Apple is probably totally working on that ad.)

This isn’t an isolated incident for Orangina, who have a tendency to run bizarre and somewhat suggestive (let’s say PG-13) TV spots involving anthropomorphic animals.

But more than that, it’s endemic to the whole advertising industry.

Look at GEICO, for instance; without them specifically mentioning that this is car insurance, you’d never know what they were selling from all the geckos,

and Neanderthals,

and… golf Krakens?

Progressive does slightly better, talking about some of their actual services while also including an adorably-annoying spokesperson (she’s like Jar Jar, but done better):

State Farm also includes at least a few tidbits about their insurance amidst the teleportation insanity:

But honestly the only car insurance commercials I can think of that are actually about car insurance are Allstate’s, and even then they’re mostly about Dennis Haysbert’s superhuman charisma. I would buy bacon cheeseburgers from this man, and I’m vegetarian.

Esurance is also relatively informative (and owned by Allstate, by the way); they talk about their customer service and low prices (in other words, the only things you actually care about with car insurance). But even so, what reason do we have to believe their bald assertions of good customer service? And what’s the deal with the whole money-printing thing?

And of course I could deluge you with examples from other companies, from Coca-Cola’s polar bears and Santa Claus to this commercial, which is literally the most American thing I have ever seen:

If you’re from some other country and are going, “What!?” right now, that’s totally healthy. Honestly I think we would too if constant immersion in this sort of thing hadn’t deadened our souls.

Do these ads signal that their companies have a lot of extra money to burn? Sure. But there are plenty of other ways to do that which would also serve other valuable functions. I honestly can’t imagine any scenario in which the best way to tell me the quality of an auto insurance company is to show me 30-second spots about geckos and Neanderthals.

If a company wants to signal that they have a lot of money, they could simply report their financial statement. That’s even regulated so that we know it has to be accurate (and this is one of the few financial regulations we actually enforce). The amount you spent on an ad is not obvious from the result of the ad, and doesn’t actually prove that you’re solvent, only that you have enough access to credit. (Pets.com famously collapsed the same year they ran a multi-million-dollar Super Bowl ad.)

If a company wants to signal that they make a good product, they could pay independent rating agencies to rate products on their quality (you know, like credit rating agencies and reviewers of movies and video games). Paying an independent agency is far more reliable than the signaling provided by advertising. Consumers could also pay their own agencies, which would be even more reliable; credit rating agencies and movie reviewers do sometimes have a conflict of interest, which could be resolved by making them report to consumers instead of producers.

If a company wants to establish that they are both financially stable and socially responsible, they could make large public donations to important charities. (This is also something that corporations do on occasion, such as Subaru’s recent campaign.) Or they could publicly announce a raise for all their employees. This would not only provide us with the information that they have this much money to spend—it would actually have a direct positive social effect, thus putting their money where their mouth is.

Signaling theory in advertising is based upon the success of signaling theory in evolutionary biology, which is beyond dispute; but evolution is tightly constrained in what it can do, so wasteful costly signals make sense. Human beings are smarter than that; we can find ways to convey information that don’t involve ludicrous amounts of waste.

If we were anywhere near as rational as these neoclassical models assume us to be, we would take the constant bombardment of meaningless ads not as a signal of a company’s quality but as a personal assault—they are needlessly attacking our time and attention when all the genuinely-valuable information they convey could have been conveyed much more easily and reliably. We would not buy more from them; we would refuse to buy from them. And indeed, I’ve learned to do just that; the more a company bombards me with annoying or meaningless advertisements, the more I make a point of not buying their product if I have a viable substitute. (For similar reasons, I make a point of never donating to any charity that uses hard-sell tactics to solicit donations.)

But of course the human mind is limited. We only have so much attention, and by bombarding us frequently and intensely enough they can overcome our mental defenses and get us to make decisions we wouldn’t if we were optimally rational. I can feel this happening when I am hungry and a food ad appears on TV; my autonomic hunger response combined with their expert presentation of food in the perfect lighting makes me want that food, if only for the few seconds it takes my higher cognitive functions to kick in and make me realize that I don’t eat meat and I don’t like mayonnaise.

Car commercials have always been particularly baffling to me. Who buys a car based on a commercial? A decision to spend $20,000 should not be made based upon 30 seconds of obviously biased information. But either people do buy cars based on commercials or they don’t; if they do, consumers are irrational, and if they don’t, car companies are irrational.

Advertising isn’t the source of human irrationality, but it feeds upon human irrationality, and is specifically designed to exploit our own stupidity to make us spend money in ways we wouldn’t otherwise. This means that markets will not be efficient, and huge amounts of productivity can be wasted because we spent it on what they convinced us to buy instead of what would truly have made our lives better. Those companies then profit more, which encourages them to make even more stuff nobody actually wants and sell it that much harder… and basically we all end up buying lots of worthless stuff and putting it in our garages and wondering what happened to our money and the meaning in our lives. Neoclassical economists really need to stop making ridiculous excuses for this damaging and irrational behavior, and maybe then we could actually find a way to make it stop.