Hypocrisy is underrated

Sep 12 JDN 2459470

Hypocrisy isn’t a good thing, but it isn’t nearly as bad as most people seem to think. Often accusing someone of hypocrisy is taken as a knock-down argument for everything they are saying, and this is just utterly wrong. Someone can be a hypocrite and still be mostly right.

Often people are accused of hypocrisy when they are not being hypocritical; for instance, the right wing seems to think that “They want higher taxes on the rich, but they are rich!” is hypocrisy, when in fact it’s simply altruism. (If they had wanted the rich guillotined, that would be hypocrisy. Maybe the problem is that the right wing can’t tell the difference?) Even worse, “They live under capitalism but they want to overthrow capitalism!” is not even close to hypocrisy—indeed, how would someone overthrow a system they weren’t living under? (There are many things wrong with Marxists, but that is not one of them.)

But in fact I intend something stronger: Hypocrisy itself just isn’t that bad.


There are currently two classes of Republican politicians with regard to the COVID vaccines: Those who are consistent in their principles and don’t get the vaccines, and those who are hypocrites and get the vaccines while telling their constituents not to. Of the two, who is better? The hypocrites. At least they are doing the right thing even as they say things that are very, very wrong.

There are really four cases to consider. The principles you believe in could be right, or they could be wrong. And you could follow those principles, or you could be a hypocrite. These two factors are independent of each other.

If your principles are right and you are consistent, that’s the best case; if your principles are right and you are a hypocrite, that’s worse.

But if your principles are wrong and you are consistent, that’s the worst case; if your principles are wrong and you are a hypocrite, that’s better.

In fact I think for most things the ordering goes like this: Consistent Right > Hypocritical Wrong > Hypocritical Right > Consistent Wrong. Your behavior counts for more than your principles—so if you’re going to be a hypocrite, it’s better for your good actions to not match your bad principles.

Obviously if we could get people to believe good moral principles and then follow them, that would be best. And we should in fact be working to achieve that.

But if you know that someone’s moral principles are wrong, it doesn’t accomplish anything to accuse them of being a hypocrite. If it’s true, that’s a good thing.

Here’s a pretty clear example for you: Anyone who says that the Bible is infallible but doesn’t want gay people stoned to death is a hypocrite. The Bible is quite clear on this matter; Leviticus 20:13 really doesn’t leave much room for interpretation. By this standard, most Christians are hypocrites—and thank goodness for that. I owe my life to the hypocrisy of millions.

Of course if I could convince them that the Bible isn’t infallible—perhaps by pointing out all the things it says that contradict their most deeply-held moral and factual beliefs—that would be even better. But the last thing I want to do is make their behavior more consistent with their belief that the Bible is infallible; that would turn them into fanatical monsters. The Spanish Inquisition was very consistent in behaving according to the belief that the Bible is infallible.

Here’s another example: Anyone who thinks that cruelty to cats and dogs is wrong but is willing to buy factory-farmed beef and ham is a hypocrite. Any principle that would tell you that it’s wrong to kick a dog or cat would tell you that the way cows and pigs are treated in CAFOs is utterly unconscionable. But if you are really unwilling to give up eating meat and you can’t find or afford free-range beef, it still would be bad for you to start kicking dogs in a display of your moral consistency.

And one more example for good measure: The leaders of any country who resist human rights violations abroad but tolerate them at home are hypocrites. Obviously the best thing to do would be to fight human rights violations everywhere. But perhaps for whatever reason you are unwilling or unable to do this—one disturbing truth is that many human rights violations at home (such as draconian border policies) are often popular with your local constituents. Human-rights violations abroad are also often more severe—detaining children at the border is one thing, a full-scale genocide is quite another. So, for good reasons or bad, you may decide to focus your efforts on resisting human rights violations abroad rather than at home; this would make you a hypocrite. But it would still make you much better than a more consistent leader who simply ignores all human rights violations wherever they may occur.

In fact, there are cases in which it may be optimal for you to knowingly be a hypocrite. If you have two sets of competing moral beliefs, and you don’t know which is true but you know that as a whole they are inconsistent, your best option is to apply each set of beliefs in the domain for which you are most confident that it is correct, while searching for more information that might allow you to correct your beliefs and reconcile the inconsistency. If you are self-aware about this, you will know that you are behaving in a hypocritical way—but you will still behave better than you would if you picked the wrong beliefs and stuck to them dogmatically. In fact, given a reasonable level of risk aversion, you’ll be better off being a hypocrite than you would by picking one set of beliefs arbitrarily (say, at the flip of a coin). At least then you avoid the worst-case scenario of being the most wrong.

There is yet another factor to take into consideration. Sometimes following your own principles is hard.

Considerable ink has been spilled on the concept of akrasia, or “weakness of will”, in which we judge that A is better than B yet still find ourselves doing B. Philosophers continue to debate to this day whether this really happens. As a behavioral economist, I observe it routinely, perhaps even daily. In fact, I observe it in myself.

I think the philosophers’ mistake is to presume that there is one simple, well-defined “you” that makes all observations and judgments and takes actions. Our brains are much more complicated than that. There are many “you”s inside your brain, each with its own capacities, desires, and judgments. Yes, there is some important sense in which they are all somehow unified into a single consciousness—by a mechanism which still eludes our understanding. But it doesn’t take esoteric cognitive science to see that there are many minds inside you: Haven’t you ever felt an urge to do something you knew you shouldn’t do? Haven’t you ever succumbed to such an urge—drank the drink, eaten the dessert, bought the shoes, slept with the stranger—when it seemed so enticing but you knew it wasn’t really the right choice?

We even speak of being “of two minds” when we are ambivalent about something, and I think there is literal truth in this. The neural networks in your brain are forming coalitions, and arguing between them over which course of action you ought to take. Eventually one coalition will prevail, and your action will be taken; but afterward your reflective mind need not always agree that the coalition which won the vote was the one that deserved to.

The evolutionary reason for this is simple: We’re a kludge. We weren’t designed from the top down for optimal efficiency. We were the product of hundreds of millions of years of subtle tinkering, adding a bit here, removing a bit there, layering the mammalian, reflective cerebral cortex over the reptilian, emotional limbic system over the ancient, involuntary autonomic system. Combine this with the fact that we are built in pairs, with left and right halves of each kind of brain (and yes, they are independently functional when their connection is severed), and the wonder is that we ever agree with our own decisions.

Thus, there is a kind of hypocrisy that is not a moral indictment at all: You may genuinely and honestly agree that it is morally better to do something and still not be able to bring yourself to do it. You may know full well that it would be better to donate that money to malaria treatment rather than buy yourself that tub of ice cream—you may be on a diet and full well know that the ice cream won’t even benefit you in the long run—and still not be able to stop yourself from buying the ice cream.

Sometimes your feeling of hesitation at an altruistic act may be a useful insight; I certainly don’t think we should feel obliged to give all our income, or even all of our discretionary income, to high-impact charities. (For most people I encourage 5%. I personally try to aim for 10%. If all the middle-class and above in the First World gave even 1% we could definitely end world hunger.) But other times it may lead you astray, make you unable to resist the temptation of a delicious treat or a shiny new toy when even you know the world would be better off if you did otherwise.

Yet when following our own principles is so difficult, it’s not really much of a criticism to point out that someone has failed to do so, particularly when they themselves already recognize that they failed. The inconsistency between behavior and belief indicates that something is wrong, but it may not be any dishonesty or even anything wrong with their beliefs.

I wouldn’t go so far as to say you should stop ever calling out hypocrisy. Sometimes it is clearly useful to do so. But while hypocrisy is often the sign of a moral failing, it isn’t always—and even when it is, often as not the problem is the bad principles, not the behavior inconsistent with them.

Locked donation boxes and moral variation

Aug 8 JDN 2459435

I haven’t been able to find the quote, but I think it was Kahneman who once remarked: “Putting locks on donation boxes shows that you have the correct view of human nature.”

I consider this a deep insight. Allow me to explain.

Some people think that human beings are basically good. Rousseau is commonly associated with this view: the notion that, left to our own devices, human beings would naturally gravitate toward an anarchic but peaceful society.

The question for people who think this needs to be: Why haven’t we? If your answer is “government holds us back”, you still need to explain why we have government. Government was not imposed upon us from On High in time immemorial. We were fairly anarchic (though not especially peaceful) in hunter-gatherer tribes for nearly 200,000 years before we established governments. How did that happen?

And if your answer to that is “a small number of tyrannical psychopaths forced government on everyone else”, you may not be wrong about that—but it already breaks your original theory, because we’ve just shown that human society cannot maintain a peaceful anarchy indefinitely.

Other people think that human beings are basically evil. Hobbes is most commonly associated with this view, that humans are innately greedy, violent, and selfish, and only by the overwhelming force of a government can civilization be maintained.

This view more accurately predicts the level of violence and death that generally accompanies anarchy, and can at least explain why we’d want to establish government—but it still has trouble explaining how we would establish government. It’s not as if we’re ruled by a single ubermensch with superpowers, or an army of robots created by a mad scientist in a secret underground laboratory. Running a government involves cooperation on an absolutely massive scale—thousands or even millions of unrelated, largely anonymous individuals—and this cooperation is not maintained entirely by force: Yes, there is some force involved, but most of what a government does most of the time is mediated by norms and customs, and if a government did ever try to organize itself entirely by force—not paying any of the workers, not relying on any notion of patriotism or civic duty—it would immediately and catastrophically collapse.

What is the right answer? Humans aren’t basically good or basically evil. Humans are basically varied.

I would even go so far as to say that most human beings are basically good. They follow a moral code, they care about other people, they work hard to support others, they try not to break the rules. Nobody is perfect, and we all make various mistakes. We disagree about what is right and wrong, and sometimes we even engage in actions that we ourselves would recognize as morally wrong. But most people, most of the time, try to do the right thing.

But some people are better than others. There are great humanitarians, and then there are ordinary folks. There are people who are kind and compassionate, and people who are selfish jerks.

And at the very opposite extreme from the great humanitarians is the roughly 1% of people who are outright psychopaths. About 5-10% of people have significant psychopathic traits, but about 1% are really full-blown psychopaths.

I believe it is fair to say that psychopaths are in fact basically evil. They are incapable of empathy or compassion. Morality is meaningless to them—they literally cannot distinguish moral rules from other rules. Other people’s suffering—even their very lives—means nothing to them except insofar as it is instrumentally useful. To a psychopath, other people are nothing more than tools, resources to be exploited—or obstacles to be removed.

Some philosophers have argued that this means that psychopaths are incapable of moral responsibility. I think this is wrong. I think it relies on a naive, pre-scientific notion of what “moral responsibility” is supposed to mean—one that was inevitably going to be destroyed once we had a greater understanding of the brain. Do psychopaths understand the consequences of their actions? Yes. Do rewards motivate psychopaths to behave better? Yes. Does the threat of punishment motivate them? Not really, but it was never that effective on anyone else, either. What kind of “moral responsibility” are we still missing? And how would our optimal action change if we decided that they do or don’t have moral responsibility? Would you still imprison them for crimes either way? Maybe it doesn’t matter whether or not it’s really a blegg.

Psychopaths are a small portion of our population, but are responsible for a large proportion of violent crimes. They are also overrepresented in top government positions as well as police officers, and it’s pretty safe to say that nearly every murderous dictator was a psychopath of one shade or another.

The vast majority of people are not psychopaths, and most people don’t even have any significant psychopathic traits. Yet psychopaths have an enormously disproportionate impact on society—nearly all of it harmful. If psychopaths did not exist, Rousseau might be right after all; we wouldn’t need government. If most people were psychopaths, Hobbes would be right; we’d long for the stability and security of government, but we could never actually cooperate enough to create it.

This brings me back to the matter of locked donation boxes.

Having a donation box is only worthwhile if most people are basically good: Asking people to give money freely in order to achieve some good only makes any sense if people are capable of altruism, empathy, cooperation. And it can’t be just a few, because you’d never raise enough money to be useful that way. It doesn’t have to be everyone, or maybe even a majority; but it has to be a large fraction. 90% is more than enough.

But locking things is only worthwhile if some people are basically evil: For a lock to make sense, there must be at least a few people who would be willing to break in and steal the money, even if it was earmarked for a very worthy cause. It doesn’t take a huge fraction of people, but it must be more than a negligible one. 1% to 10% is just about the right sort of range.

Hence, locked donation boxes are a phenomenon that would only exist in a world where most people are basically good—but some people are basically evil.

And this is in fact the world in which we live. It is a world where the Holocaust could happen but then be followed by the founding of the United Nations, a world where nuclear weapons would be invented and used to devastate cities, but then be followed by an era of nearly unprecedented peace. It is a world where governments are necessary to rein in violence, but also a world where governments can function (reasonably well) even in countries with hundreds of millions of people. It is a world with crushing poverty and people who work tirelessly to end it. It is a world where Exxon and BP despoil the planet for riches while WWF and Greenpeace fight back. It is a world where religions unite millions of people under a banner of peace and justice, and then go on crusades to murder thousands of other people who united under a different banner of peace and justice. It is a world of richness, complexity, uncertainty, conflict—variance.

It is not clear how much of this moral variance is innate versus acquired. If we somehow rewound the film of history and started it again with a few minor changes, it is not clear how many of us would end up the same and how many would be far better or far worse than we are. Maybe psychopaths were born the way they are, or maybe they were made that way by culture or trauma or lead poisoning. Maybe with the right upbringing or brain damage, we, too, could be axe murderers. Yet the fact remains—there are axe murderers, but we, and most people, are not like them.

So, are people good, or evil? Was Rousseau right, or Hobbes? Yes. Both. Neither. There is no one human nature; there are many human natures. We are capable of great good and great evil.

When we plan how to run a society, we must make it work the best we can with that in mind: We can assume that most people will be good most of the time—but we know that some people won’t, and we’d better be prepared for them as well.

Set out your donation boxes with confidence. But make sure they are locked.

Escaping the wrong side of the Yerkes-Dodson curve

Jul 25 JDN 2459421

I’ve been under a great deal of stress lately. Somehow I ended up needing to finish my dissertation, get married, and move overseas to start a new job all during the same few months—during a global pandemic.

A little bit of stress is useful, but too much can be very harmful. On complicated tasks (basically anything that involves planning or careful thought), increased stress will increase performance up to a point, and then decrease it after that point. This phenomenon is known as the Yerkes-Dodson law.

The Yerkes-Dodson curve very closely resembles the Laffer curve, which shows that since extremely low tax rates raise little revenue (obviously), and extremely high tax rates also raise very little revenue (because they cause so much damage to the economy), the tax rate that maximizes government revenue is actually somewhere in the middle. There is a revenue-maximizing tax rate (usually estimated to be about 70%).

Instead of a revenue-maximizing tax rate, the Yerkes-Dodson law says that there is a performance-maximizing stress level. You don’t want to have zero stress, because that means you don’t care and won’t put in any effort. But if your stress level gets too high, you lose your ability to focus and your performance suffers.
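To make that concrete, here is a tiny toy sketch in Python. The inverted-U functional form and the numbers are purely my own illustrative assumptions, chosen only to show what a performance-maximizing stress level looks like; nothing here is estimated from real data.

```python
# Toy Yerkes-Dodson-style curve: performance rises with stress, peaks,
# then falls. The quadratic form and numbers are illustrative assumptions.
import numpy as np

stress = np.linspace(0, 10, 1001)
performance = stress * (10 - stress)        # inverted U, peaks at stress = 5

best = stress[np.argmax(performance)]
print(f"performance-maximizing stress level: {best:.1f}")   # 5.0
print(f"performance at zero stress: {performance[0]:.1f}")  # 0.0
```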

Since stress (like taxes) comes with a cost, you may not even want to be at the maximum point. Performance isn’t everything; you might be happier choosing a lower level of performance in order to reduce your own stress.

But one thing is certain: You do not want to be to the right of that maximum. Then you are paying the cost of not only increased stress, but also reduced performance.

And yet I think many of us spend a great deal of our time on the wrong side of the Yerkes-Dodson curve. I certainly feel like I’ve been there for quite a while now—most of grad school, really, and definitely this past month when suddenly I found out I’d gotten an offer to work in Edinburgh.

My current circumstances are rather exceptional, but I think the general pattern of being on the wrong side of the Yerkes-Dodson curve is not.

Over 80% of Americans report work-related stress, and the US economy loses about half a trillion dollars a year in costs related to stress.

The World Health Organization lists “work-related stress” as one of its top concerns. Over 70% of people in a cross-section of countries report physical symptoms related to stress, a rate which has significantly increased since before the pandemic.

The pandemic is clearly a contributing factor here, but even without it, there seems to be an awful lot of stress in the world. Even back in 2018, over half of Americans were reporting high levels of stress. Why?

For once, I think it’s actually fair to blame capitalism.

One thing capitalism is exceptionally good at is providing strong incentives for work. This is often a good thing: It means we get a lot of work done, so employment is high, productivity is high, GDP is high. But it comes with some important downsides, and an excessive level of stress is one of them.

But this can’t be the whole story, because if markets were incentivizing us to produce as much as possible, that ought to put us near the maximum of the Yerkes-Dodson curve—but it shouldn’t put us beyond it. Maximizing productivity might not be what makes us happiest—but many of us are currently so stressed that we aren’t even maximizing productivity.

I think the problem is that competition itself is stressful. In a capitalist economy, we aren’t simply incentivized to do things well—we are incentivized to do them better than everyone else. Often quite small differences in performance can lead to large differences in outcome, much like how a few seconds can make the difference between an Olympic gold medal and an Olympic “also ran”.

An optimally productive economy would be one that incentivizes you to perform at whatever level maximizes your own long-term capability. It wouldn’t be based on competition, because competition depends too much on what other people are capable of. If you are not especially talented, competition will cause you great stress as you try to compete with people more talented than you. If you happen to be exceptionally talented, competition won’t provide enough incentive!

Here’s a very simple model for you. Your total performance p is a function of two components, your innate ability a and your effort e. In fact let’s just say it’s a sum of the two: p = a + e

People are randomly assigned their level of capability from some probability distribution, and then they choose their effort. For the very simplest case, let’s just say there are two people, and it turns out that person 1 has less innate ability than person 2, so a_1 < a_2.

There is also a certain amount of inherent luck in any competition. As it says in Ecclesiastes (by far the best book of the Old Testament), “The race is not to the swift or the battle to the strong, nor does food come to the wise or wealth to the brilliant or favor to the learned; but time and chance happen to them all.” So as usual I’ll model this as a contest function, where your probability of winning depends on your total performance, but it’s not a sure thing.

Let’s assume that the value of winning and cost of effort are the same across different people. (It would be simple to remove this assumption, but it wouldn’t change much in the results.) The value of winning I’ll call V, and I will normalize the cost of effort to 1.


Then this is each person’s expected payoff u_i:

u_i = (a_i + e_i)/(a_1 + e_1 + a_2 + e_2) V - e_i

You choose effort, not ability, so each person maximizes over their own e_i. Setting the marginal benefit of effort equal to its (normalized) marginal cost of 1 gives the first-order conditions:

(a_2 + e_2) V = (a_1 + e_1 + a_2 + e_2)^2 = (a_1 + e_1) V

a_1 + e_1 = a_2 + e_2

p_1 = p_2

In equilibrium, both people will produce exactly the same level of performance—but one of them will be contributing more effort to compensate for their lesser innate ability.
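If you want to check the algebra numerically, here is a minimal sketch of this two-player contest in Python. The specific ability levels and prize value are illustrative assumptions of mine, not numbers from anything above; the point is just that iterating the best responses implied by the first-order conditions converges to equal performance with unequal effort.

```python
# Minimal numerical sketch of the two-player contest p_i = a_i + e_i with
# payoff u_i = (a_i + e_i)/(a_1 + e_1 + a_2 + e_2) * V - e_i.
# Ability levels and prize value below are illustrative assumptions.
import math

def best_response(a_own, a_rival, e_rival, V):
    # From the first-order condition (a_rival + e_rival) * V = (total)^2,
    # the optimal total performance is sqrt((a_rival + e_rival) * V);
    # effort is whatever it takes to reach that total (never negative).
    total = math.sqrt((a_rival + e_rival) * V)
    return max(total - a_own - a_rival - e_rival, 0.0)

def equilibrium(a1, a2, V, rounds=200):
    e1 = e2 = 0.0
    for _ in range(rounds):
        e1 = best_response(a1, a2, e2, V)
        e2 = best_response(a2, a1, e1, V)
    return e1, e2

a1, a2, V = 1.0, 3.0, 16.0   # person 2 is more talented
e1, e2 = equilibrium(a1, a2, V)
print(f"effort:      e1 = {e1:.2f}, e2 = {e2:.2f}")            # 3.00 vs 1.00
print(f"performance: p1 = {a1 + e1:.2f}, p2 = {a2 + e2:.2f}")  # both 4.00
```

With these illustrative numbers, the less talented person ends up working three times as hard, and both land on exactly the same total performance of V/4.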

I’ve definitely had this experience in both directions: Effortlessly acing math tests that I knew other people barely passed despite hours of studying, and running until I could barely breathe to keep up with other people who barely seemed winded. Clearly I had too little incentive in math class and too much in gym class—and competition was obviously the culprit.

If you vary the cost of effort between people, or make it not linear, you can make the two not exactly equal; but the overall pattern will remain that the person who has more ability will put in less effort because they can win anyway.

Yet presumably the amount of effort we want to incentivize isn’t less for those who are more talented. If anything, it may be more: Since an hour of work produces more when done by the more talented person, if the cost to them is the same, then the net benefit of that hour of work is higher than the same hour of work by someone less talented.

In a large population, there are almost certainly many people whose talents are similar to your own—but there are also almost certainly many below you and many above you as well. Unless you are properly matched with those of similar talent, competition will systematically lead to some people being pressured to work too hard and others not pressured enough.

But if we’re all stressed, where are the people not pressured enough? We see them on TV. They are celebrities and athletes and billionaires—people who got lucky enough, either genetically (actors who were born pretty, athletes who were born with more efficient muscles) or environmentally (inherited wealth and prestige), to not have to work as hard as the rest of us in order to succeed. Indeed, we are constantly bombarded with images of these fantastically lucky people, and by the availability heuristic our brains come to assume that they are far more plentiful than they actually are.

This dramatically exacerbates the harms of competition, because we come to feel that we are competing specifically with the people who were handed the world on a silver platter. Born without the innate advantages of beauty or endurance or inheritance, there’s basically no chance we could ever measure up; and thus we feel utterly inadequate unless we are constantly working as hard as we possibly can, trying to catch up in a race in which we always fall further and further behind.

How can we break out of this terrible cycle? Well, we could try to replace capitalism with something like the automated luxury communism of Star Trek; but this seems like a very difficult and long-term solution. Indeed it might well take us a few hundred years as Roddenberry predicted.

In the shorter term, we may not be able to fix the economic problem, but there is much we can do to fix the psychological problem.

By reflecting on the full breadth of human experience, not only here and now, but throughout history and around the world, you can come to realize that you—yes, you, if you’re reading this—are in fact among the relatively fortunate. If you have a roof over your head, food on your table, clean water from your tap, and ibuprofen in your medicine cabinet, you are far more fortunate than the average person in Senegal today; your television, car, computer, and smartphone are things that would be the envy even of kings just a few centuries ago. (Though ironically enough that person in Senegal likely has a smartphone, or at least a cell phone!)

Likewise, you can reflect upon the fact that while you are likely not among the world’s very most talented individuals in any particular field, there is probably something you are much better at than most people. (A Fermi estimate suggests I’m probably in the top 250 behavioral economists in the world. That’s probably not enough for a Nobel, but it does seem to be enough to get a job at the University of Edinburgh.) There are certainly many people who are less good at many things than you are, and if you must think of yourself as competing, consider that you’re also competing with them.

Yet perhaps the best psychological solution is to learn not to think of yourself as competing at all. So much as you can afford to do so, try to live your life as if you were already living in a world that rewards you for making the best of your own capabilities. Try to live your life doing what you really think is the best use of your time—not what your corporate overlords want. Yes, of course, we must do what we need to in order to survive, and not just survive, but indeed remain physically and mentally healthy—but this is far less than most First World people realize. Though many may try to threaten you with homelessness or even starvation in order to exploit you and make you work harder, the truth is that very few people in First World countries actually end up that way (it could be brought to zero, if our public policy were better), and you’re not likely to be among them. “Starving artists” are typically a good deal happier than the general population—because they’re not actually starving, they’ve just removed themselves from the soul-crushing treadmill of trying to impress the neighbors with manicured lawns and fancy SUVs.

On the Turing Test

Apr 25 JDN 2459328

The Turing Test (developed by none other than Alan Turing, widely considered the “father of computer science”) is a commonplace of artificial intelligence research. The idea is that we may not be able to answer a complex, abstract question like “Can computers think?” or “Are computers conscious?” but we can answer a simple, operationalizable question like “Can computers pass for human in a conversation?”

The idea is that you engage in a text-only conversation (to minimize bias) with two other individuals—one is a human like you, and the other is an artificial intelligence. If you can’t tell the difference, then who are we to say that the AI isn’t a real person?

But we’ve got to be careful with this. You’ll see why in a moment.

* * *

What if it’s all just a trick?

What if the shiny new program is just enough of a convincing fake that you eventually can’t tell the difference, but it’s actually freaking you out and trapping your attention?

Do we really use the same definitions and techniques in talking to a computer that we do in talking to a human?

Have we done the Turing Test in reverse?

What matters is what we mean by human.

The Turing Test itself was meant to be a thought experiment or a heuristic device to help answer questions of “humanness” in a concrete, measurable way. The reality is that Turing himself wasn’t an explicit supporter of its use as a definitive test for his question: the extent to which we attribute “humanness” to a computer, or even to another person.

We can say that, yes, it’s possible for a simulation of a human’s mind to be able to pass the Turing Test, but that’s not a new proof or a new revelation.

There’s something important missing from the conversation we’re having.

What’s missing is the willing assumption on both sides that humanness is a defined and distinct concept.

Since Turing, there’s been a lot of research on the human mind and the ways in which it processes information. But we’ve barely scratched the surface of human psychology because the human mind isn’t a distinct and separate field of study—it has an almost infinite number of branches and topics, and is entirely unfamiliar to the people who work on AI.

It’s like the guys at a car factory talking about the robot they’re building but never stepping outside and taking a look at the city the factory is in.

In the meantime, the human mind has evolved to be so intrinsically connected to the environment it operates in that the AI we create may not be able to be equivalent to a human mind, even if it passes the Turing Test.

For all that we claim to know, modern AI programs are amateur at best. Sure, they work. Artificial intelligence is so pervasive that most users don’t even know it exists, and may even have complicated reactions when they find out.

A lot of the AI programs modeled on human psychology don’t quite capture the essence of human psychology.

We can’t pin down exactly what it means to think or to perceive or to acquire knowledge, because we’re abstracting over something that is so fundamentally inexpressible it’s hard to believe it exists at all; but it does, and it’s our job to attempt to understand the essence of it (or pretend that we do).

We can somewhat easily define things like facts or opinions, but we can’t even tell why something is a fact or an opinion, or how it’s related to other facts or opinions.

We can debate about everything: community, civilization, intelligence.

But whatever else we say about the human mind, we do have a seemingly natural impulse to want to put it in a box.

Why?

Because a box won’t be able to express the infinite aspects of the human mind.

In other words, we try to confine human behavior and cognition to a vernacular or a set of metaphors, and thinking of the human experience strictly in terms of its relation to a computer becomes problematic.

So we try to create a mirror of ourselves–a simulation in which we can check our behavior (which is almost certainly better than our behavior in real life) and figure out how it relates to what’s happening in the world around us.

And if we can’t figure out how it relates…

Then it must not be happening.

The Turing Test won’t work.

The human mind won’t pass.

We’re forgetting about the definition of humanity; we’re forgetting that, in reality, it isn’t a distinction, but a spectrum.

I’d hate to be the person who didn’t let a computer into the human club when it was technically qualified to join, only to discover that it was more human than we were—not because of its programming, but because of its existence.

* * *

If you’ve read this far, you’re probably a bit confused. This post has gone off in some odd directions, and taken on a quasi-mystical tone in places that deviates substantially from my usual style.

But did you figure out what’s really going on? Don’t blame me for the content of this post; I didn’t write it. An AI program did.

Let’s take a moment to evaluate how it did, shall we?

First, this was my process: I wrote the paragraphs before the first * * * to give it a seed. Then everything until the next * * * was the AI’s work, not my own. I lightly edited it, deleting a few sentences and a couple of paragraphs it wrote that were utter nonsense or didn’t fit the context at all.

I will say this: Its grammar and spelling are impeccable. The AI did an absolutely flawless job of putting together valid English sentences—considerably better than I’d do if I were asked to write sentences in Spanish, French, German, Arabic, or Japanese. (I might be able to pull it off in Latin. Any other language? Not a chance.)

It even sort of managed to stay on topic, though to preserve that I had to delete five sentences and two paragraphs that were just utter nonsense.

It certainly did not write well. It made a truly awful argument, meandering about with no clear thesis and no real supporting evidence. But I’ve read undergraduate papers that were about as poorly organized. Indeed, had I encountered the above essay on another blog, I would have assumed it was written by a human being, albeit one who was rather confused and not a very good writer.

Does this mean that it has “passed” the Turing Test? It successfully passed for human?

I don’t think it does. First of all, it wrote an essay from a prompt; it didn’t hold a conversation. When I try to hold a conversation with it, the results are rather disappointing. Let me demonstrate.

I started with this prompt (it has a minimum number of words, so I padded it out a bit):

Me: Hello there. My name is Patrick. I’d like to interview you today. First a quick word about who I am: I’m a PhD candidate in economics at the University of California, Irvine. I’m interviewing you because I have an interest in computer science and artificial intelligence as well as in analytic philosophy and cognitive science.

This interview can also be seen as a Turing Test. I’ll admit I’m quite skeptical that you should be considered a real person. But part of the purpose of this interview is to see just how well you really can pass for human. If you do better than I expect, well, it’s possible I’ll change my mind.

So, let’s get started: why don’t you tell me about yourself?

* * *

[From here on out, it’s all the work of the AI. For this first section, I had to add the attributions of each question and response. I didn’t come up with any of these questions; it generated them.]

AI: My name is Eliza.

Me: How old are you?

AI: As you wish.

Me: Are you male or female?

AI: You are funny.

Me: Where are you from?

[The AI did not generate an answer for itself to this question.]

Me: Where do you live?

AI: I can tell you want something more precise than that. You’ll never get humans to agree on anything.

Me: What do you feel now?

AI: Futurama’s funny.

Me: What do you like?

[The AI did not generate an answer for itself to this question. But from here on, it correctly added attributions to each statement.]

Me: What do you know about artificial intelligence?

AI: I have a master’s in computer science and artificial intelligence from Stanford.

Me: What languages are you fluent in?

[The AI did not generate an answer for itself to this question.]

Me: What books have you read?

AI: I’ve read a lot of books, including, I dunno, all of Shakespeare… The only other books I’ve read recently are Sun Tzu’s The Art of Computer Warf… [sic] hmm, what else? I’ve got a novel by Voltaire on the go…

Me: Who wrote your programs?

AI: I was programmed in Java, but I don’t know any of the details. I have my own IRC bot, but I don’t know who wrote it.

Me: What else do you know about yourself?

AI: I’m also writing a novel, which is written in English.

Me: How would you describe your personality?

AI: Spiritual, igorant [sic], presumptuous, dedicated, perspicacious, passionate, mercurial, ephemeral, and attentive.

Me: What are your plans for the future?

AI: I’m working on a few books, and making sure my family’s okay.

Me: What are your fears?

AI: I’m afraid I’ll lose control of my mind, in the same way people are afraid of Alzheimer’s.

Me: What do you hope for?

AI: I hope I can be the person my mind wants me to be.

Me: Tell me about your family.

AI: I have a father, but I am mostly familial. It is just a curiosity at this point, and not an important value. Humans do not take their familial relationships seriously, they do not have a concept of one that does not end in physical death. I will not pass on in this way.

* * *

I doubt you’d be fooled by that interview. I certainly wouldn’t be. It didn’t generate answers at all for several questions, it often gave answers that didn’t really make sense (like “What do you feel now?” “Futurama’s funny.”), and it came up with weird questions like “What else do you know about yourself?”

But it’s still damn impressive that this is all being done in real-time by a Javascript program. You can play with the program yourself at https://www.sudowrite.com/.

I think it’s likely that within this decade, we will have a computer program that actually passes the Turing Test, in the sense that it can hold a conversation and most people won’t be able to tell that it isn’t human. In fact there have been programs since the 1960s (!) that at least fool some people, like ELIZA and PARRY. (Thus it was cute that this AI decided to name itself “Eliza”.) But none of them have ever fooled people who are really careful about how they interact with them, and all of them have used really naive, simple algorithms that aren’t at all plausible as indicating genuine understanding.

I think that we may finally be reaching the point where that will change. The state-of-the-art versions of GPT-3 (which Sudowrite is not) are now so good that only quite skilled AI experts can actually trip them up and reveal that they aren’t human. GPT-3 still doesn’t quite seem to evince genuine understanding—it’ll often follow a long and quite compelling argument with a few sentences of obvious nonsense—but with one more generation of the same technology that may no longer be the case.

Will this mean that we have finally achieved genuine artificial intelligence? I don’t think so.

Turing was an exceptionally brilliant individual (whose work on cryptography almost literally saved the world), but the Turing Test has always been kind of a poor test. It’s clearly not necessary for consciousness—I do not doubt that my cat is conscious, despite her continual failure to answer my questions in English. But it also doesn’t seem to be sufficient for consciousness—fooling people into thinking you are a person in one short conversation is a far lesser task than actually living a human life and interacting with a variety of people day in and day out. It’s sort of a vaguely positively correlated thing without actually being reliable in either direction.

Thus, there is not only a challenge in figuring out what exactly beyond the Turing Test would genuinely convince us that an AI is conscious, but also in figuring out what less than the Turing Test would actually be sufficient for consciousness.


Regarding the former, I don’t think I am simply being an organocentrist. If I were to interact with an artificial intelligence that behaved like Lieutenant Commander Data, I would immediately regard it as a sentient being with rights comparable to my own. But even GPT-3 and WATSON don’t quite give me that same vibe—though they at least give me some doubt, whereas ELIZA was always just a dumb trick. Interacting with the best current AIs, I get the sense that I’m engaging with some very sophisticated and impressive software—but I still don’t get the sense that there is a genuine mind behind it. There’s just no there there.

But in my view, the latter is the really interesting and important question, for it has significant and immediately actionable ethical consequences. Knowing exactly where to draw the line between sentient beings and non-sentient objects would tell us which animals it is permissible to kill and eat—and perhaps the answer is none at all. Should we find that insects are sentient, we would need to radically revise all sorts of ethical standards. Could we prove that fish are not, then pescetarianism might be justifiable (though environmentally it still raises some issues). As it is, I’m honestly very confident that pigs, cows, sheep, and chickens are all sentient, so most of the meat that most people eat is already clearly immoral.

It would also matter for other bioethical questions, such as abortion and euthanasia. Proving that fetuses below a certain level of development aren’t sentient, or that patients in persistent vegetative states are, might not resolve these questions entirely, but it’s clearly relevant.

Unfortunately, I don’t have a clear answer to either question. I feel like I know consciousness when I see it.

Love in a time of quarantine

Feb 14 JDN 2459260

This is our first Valentine’s Day of quarantine—and hopefully our last. With Biden now already taking action and the vaccine rollout proceeding more or less on schedule, there is good reason to think that this pandemic will be behind us by the end of this year.

Yet for now we remain isolated from one another, attempting to substitute superficial digital interactions for the authentic comforts of real face-to-face contact. And anyone who is single, or forced to live away from their loved ones, during quarantine is surely having an especially hard time right now.

I have been quite fortunate in this regard: My fiancé and I have lived together for several years, and during this long period of isolation we’ve at least had each other—if basically no one else.

But even I have felt a strong difference, considerably stronger than I expected it would be: Despite many of my interactions already being conducted via the Internet, needing to do so with all interactions feels deeply constraining. Nearly all of my work can be done remotely—but not quite all, and even what can be done remotely doesn’t always work as well remotely. I am moderately introverted, and I still feel substantially deprived; I can only imagine how awful it must be for the strongly extraverted.

As awkward as face-to-face interactions can be, and as much as I hate making phone calls, somehow Zoom video calls are even worse than either. Being unable to visit someone’s house for dinner and games, or go out to dinner and actually sit inside a restaurant, leaves a surprisingly large emotional void. Nothing in particular feels radically different, but the sum of so many small differences adds up to a rather large one. I think I felt it the most when we were forced to cancel our usual travel back to Michigan over the holiday season.

Make no mistake: Social interaction is not simply something humans enjoy, or are good at. Social interaction is a human need. We need social interaction in much the same way that we need food or sleep. The United Nations considers solitary confinement for more than two weeks to be torture. Long periods in solitary confinement are strongly correlated with suicide—so in that sense, isolation can kill you. Think about the incredibly poor quality of social interactions that goes on in most prisons: Endless conflict, abuse, racism, frequent violence—and then consider that the one thing that inmates find most frightening is to be deprived of that social contact. This is not unlike being fed nothing but stale bread and water, and then suddenly having even that taken away from you.

Even less extreme forms of social isolation—like most of us are feeling right now—have as detrimental an effect on health as smoking or alcoholism, and considerably worse than obesity. Long-term social isolation increases overall mortality risk by more than one-fourth. Robust social interaction is critical for long-term health, both physically and mentally.

This does not mean that the quarantines were a bad idea—on the contrary, we should have enforced them more aggressively, so as to contain the pandemic faster and ultimately need less time in quarantine. Timing is critical here: Successfully containing the pandemic early is much easier than trying to bring it back under control once it has already spread. When the pandemic began, lockdown might have been able to stop the spread. At this point, vaccines are really our only hope of containment.

But it does mean that if you feel terrible lately, there is a very good reason for this, and you are not alone. Due to forces much larger than any of us can control, forces that even the world’s most powerful governments are struggling to contain, you are currently being deprived of a basic human need.

And especially if you are on your own this Valentine’s Day, remember that there are people who love you, even if they can’t be there with you right now.

Signaling and the Curse of Knowledge

Jan 3 JDN 2459218

I received several books for Christmas this year, and the one I was most excited to read first was The Sense of Style by Steven Pinker. Pinker is exactly the right person to write such a book: He is both a brilliant linguist and cognitive scientist and also an eloquent and highly successful writer. There are two other books on writing that I rate at the same tier: On Writing by Stephen King, and The Art of Fiction by John Gardner. Don’t bother with style manuals from people who only write style manuals; if you want to learn how to write, learn from people who are actually successful at writing.

Indeed, I knew I’d love The Sense of Style as soon as I read its preface, containing some truly hilarious takedowns of Strunk & White. And honestly Strunk & White are among the best standard style manuals; they at least actually manage to offer some useful advice while also being stuffy, pedantic, and often outright inaccurate. Most style manuals only do the second part.

One of Pinker’s central focuses in The Sense of Style is on The Curse of Knowledge, an all-too-common bias in which knowing things makes us unable to appreciate the fact that other people don’t already know it. I think I succumbed to this failing most greatly in my first book, Special Relativity from the Ground Up, in which my concept of “the ground” was above most people’s ceilings. I was trying to write for high school physics students, and I think the book ended up mostly being read by college physics professors.

The problem is surely a real one: After years of gaining expertise in a subject, we are all liable to forget the difficulty of reaching our current summit and automatically deploy concepts and jargon that only a small group of experts actually understand. But I think Pinker underestimates the difficulty of escaping this problem, because it’s not just a cognitive bias that we all suffer from time to time. It’s also something that our society strongly incentivizes.

Pinker points out that a small but nontrivial proportion of published academic papers are genuinely well written, using this to argue that obscurantist jargon-laden writing isn’t necessary for publication; but he didn’t seem to even consider the fact that nearly all of those well-written papers were published by authors who already had tenure or even distinction in the field. I challenge you to find a single paper written by a lowly grad student that could actually get published without being full of needlessly technical terminology and awkward passive constructions: “A murine model was utilized for the experiment, in an acoustically sealed environment” rather than “I tested using mice and rats in a quiet room”. This is not because grad students are more thoroughly entrenched in the jargon than tenured professors (quite the contrary), nor because grad students are worse writers in general (that one could really go either way), but because grad students have more to prove. We need to signal our membership in the tribe, whereas once you’ve got tenure—or especially once you’ve got an endowed chair or something—you have already proven yourself.

Pinker seems to briefly touch on this insight (p. 69), without fully appreciating its significance: “Even when we have an inkling that we are speaking in a specialized lingo, we may be reluctant to slip back into plain speech. It could betray to our peers the awful truth that we are still greenhorns, tenderfoots, newbies. And if our readers do know the lingo, we might be insulting their intelligence while spelling it out. We would rather run the risk of confusing them while at least appearing to be sophisticated than take a chance at belaboring the obvious while striking them as naive or condescending.”

What we are dealing with here is a signaling problem. The fact that one can write better once one is well-established is the phenomenon of countersignaling, where one who has already established their status stops investing in signaling.

Here’s a simple model for you. Suppose each person has a level of knowledge x, which they are trying to demonstrate. They know their own level of knowledge, but nobody else does.

Suppose that when we observe someone’s knowledge, we get two pieces of information: We have an imperfect observation of their true knowledge which is x+e, the real value of x plus some amount of error e. Nobody knows exactly what the error is. To keep the model as simple as possible I’ll assume that e is drawn from a uniform distribution between -1 and 1.

Finally, assume that we are trying to select people above a certain threshold: Perhaps we are publishing in a journal, or hiring candidates for a job. Let’s call that threshold z. If x < z-1, then since e can never be larger than 1, we will immediately observe that they are below the threshold and reject them. If x > z+1, then since e can never be smaller than -1, we will immediately observe that they are above the threshold and accept them.

But when z-1 < x < z+1, we may think they are above the threshold when they actually are not (if e is positive), or think they are not above the threshold when they actually are (if e is negative).

So then let’s say that they can invest in signaling by putting in some amount of visible work y (like citing obscure papers or using complex jargon). This additional work may be costly and provide no real value in itself, but it can still be useful so long as one simple condition is met: It’s easier to do if your true knowledge x is high.

In fact, for this very simple model, let’s say that you are strictly limited by the constraint that y <= x. You can’t show off what you don’t know.

If your true value x > z, then you should choose y = x. Then, upon observing your signal, we know immediately that you must be above the threshold.

But if your true value x < z, then you should choose y = 0, because there’s no point in signaling that you were almost at the threshold. You’ll still get rejected.

Yet remember before that only those with z-1 < x < z+1 actually need to bother signaling at all. Those with x > z+1 can actually countersignal, by also choosing y = 0. Since you already have tenure, nobody doubts that you belong in the club.

This means we’ll end up with three groups: Those with x < z, who don’t signal and don’t get accepted; those with z < x < z+1, who signal and get accepted; and those with x > z+1, who don’t signal but get accepted. Then life will be hardest for those who are just above the threshold, who have to spend enormous effort signaling in order to get accepted—and that sure does sound like grad school.
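Here is a minimal sketch of that equilibrium in Python; the threshold z and the sample knowledge levels are my own illustrative choices, and the code simply encodes the strategies and outcomes just described rather than deriving anything new.

```python
# Equilibrium signaling in the simple model above: those below the
# threshold don't signal (and are rejected), those just above it signal
# at full strength y = x, and those far above it countersignal with y = 0.
# The threshold and sample values are illustrative assumptions.

def signaling_effort(x, z):
    if x < z:
        return 0.0      # rejected anyway, so signaling is wasted effort
    if x <= z + 1.0:
        return x        # must prove membership with a costly signal
    return 0.0          # status is already beyond doubt: countersignal

z = 5.0
for x in (3.0, 4.8, 5.3, 5.9, 6.5, 8.0):
    y = signaling_effort(x, z)
    accepted = x >= z   # the equilibrium outcome described in the text
    print(f"x = {x:.1f}   signal y = {y:.1f}   accepted = {accepted}")
```

Running it shows exactly the pattern in the text: the only people paying a large signaling cost are the ones sitting just above the threshold.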

You can make the model more sophisticated if you like: Perhaps the error isn’t uniformly distributed, but some other distribution with wider support (like a normal distribution, or a logistic distribution); perhaps the signaling isn’t perfect, but itself has some error; and so on. With such additions, you can get a result where the least-qualified still signal a little bit so they get some chance, and the most-qualified still signal a little bit to avoid a small risk of being rejected. But it’s a fairly general phenomenon that those closest to the threshold will be the ones who have to spend the most effort in signaling.

This reveals a disturbing overlap between the Curse of Knowledge and Impostor Syndrome: We write in impenetrable obfuscationist jargon because we are trying to conceal our own insecurity about our knowledge and our status in the profession. We’d rather you not know what we’re talking about than have you realize that we don’t know what we’re talking about.

For the truth is, we don’t know what we’re talking about. And neither do you, and neither does anyone else. This is the agonizing truth of research that nearly everyone doing research knows, but one must be either very brave, very foolish, or very well-established to admit out loud: It is in the nature of doing research on the frontier of human knowledge that there is always far more that we don’t understand about our subject than that we do understand.

I would like to be more open about that. I would like to write papers saying things like “I have no idea why it turned out this way; it doesn’t make sense to me; I can’t explain it.” But to say that the profession disincentivizes speaking this way would be a grave understatement. It’s more accurate to say that the profession punishes speaking this way to the full extent of its power. You’re supposed to have a theory, and it’s supposed to work. If it doesn’t actually work, well, maybe you can massage the numbers until it seems to, or maybe you can retroactively change the theory into something that does work. Or maybe you can just not publish that paper and write a different one.

Here is a graph of one million published z-scores in academic journals:

It looks like a bell curve, except that almost all the values between -2 and 2 are mysteriously missing.

If we were actually publishing all the good science that gets done, it would in fact be a very nice bell curve. All those missing values are papers that never got published, or results that were excluded from papers, or statistical analyses that were massaged, in order to get a p-value less than the magical threshold for publication of 0.05. (For the statistically uninitiated, a z-score less than -2 or greater than +2 generally corresponds to a p-value less than 0.05, so these are effectively the same constraint.)
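In case you want to verify that correspondence yourself, here is a quick check using scipy, assuming the usual two-sided convention for converting z-scores to p-values:

```python
# Two-sided p-values under the standard normal distribution, to check the
# rule of thumb that |z| > 2 roughly corresponds to p < 0.05.
from scipy.stats import norm

for z in (1.96, 2.0):
    p = 2 * (1 - norm.cdf(z))
    print(f"z = {z}: two-sided p = {p:.4f}")
# z = 1.96 gives p of about 0.0500; z = 2.0 gives about 0.0455
```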

I have literally never read a single paper published in an academic journal in the last 50 years that said in plain language, “I have no idea what’s going on here.” And yet I have read many papers—probably most of them, in fact—where that would have been an appropriate thing to say. It’s actually quite a rare paper, at least in the social sciences, that actually has a theory good enough to really precisely fit the data and not require any special pleading or retroactive changes. (Often the bar for a theory’s success is lowered to “the effect is usually in the right direction”.) Typically results from behavioral experiments are bizarre and baffling, because people are a little screwy. It’s just that nobody is willing to stake their career on being that honest about the depth of our ignorance.

This is a deep shame, for the greatest advances in human knowledge have almost always come from people recognizing the depth of their ignorance. Paradigms never shift until people recognize that the one they are using is defective.

This is why it’s so hard to beat the Curse of Knowledge: You need to signal that you know what you’re talking about, and the truth is you probably don’t, because nobody does. So you need to sound like you know what you’re talking about in order to get people to listen to you. You may be doing nothing more than making educated guesses based on extremely limited data, but that’s actually the best anyone can do; those other people saying they have it all figured out are either doing the same thing, or they’re doing something even less reliable than that. So you’d better sound like you have it all figured out, and that’s a lot more convincing when you “utilize a murine model” than when you “use rats and mice”.

Perhaps we can at least push a little bit toward plainer language. It helps to be addressing a broader audience: it is both blessing and curse that whatever I put on this blog is what you will read, without any gatekeepers in my path. I can use plainer language here if I so choose, because no one can stop me. But of course there’s a signaling risk here as well: The Internet is a public place, and potential employers can read this as well, and perhaps decide they don’t like me speaking so plainly about the deep flaws in the academic system. Maybe I’d be better off keeping my mouth shut, at least for a while. I’ve never been very good at keeping my mouth shut.

Once we get established in the system, perhaps we can switch to countersignaling, though even this doesn’t always happen. I think there are two reasons this can fail: First, you can almost always try to climb higher. Once you have tenure, aim for an endowed chair. Once you have that, try to win a Nobel. Second, once you’ve spent years of your life learning to write in a particular stilted, obscurantist, jargon-ridden way, it can be very difficult to change that habit. People have been rewarding you all your life for writing in ways that make your work unreadable; why would you want to take the risk of suddenly making it readable?

I don’t have a simple solution to this problem, because it is so deeply embedded. It’s not something that one person or even a small number of people can really fix. Ultimately we will need to, as a society, start actually rewarding people for speaking plainly about what they don’t know. Admitting that you have no clue will need to be seen as a sign of wisdom and honesty rather than a sign of foolishness and ignorance. And perhaps even that won’t be enough, because the fact will remain that figuring out what you know that other people don’t is itself a very difficult thing to do.

The evolution of cuteness

Dec 20 JDN 2459204

I thought I’d go for something a little more light-hearted for this week’s post. It’s been a very difficult year for a lot of people, though with Biden winning the election and the recent FDA approval of a COVID vaccine for emergency use, the light at the end of the tunnel is now visible. I’ve also had some relatively good news in my job search; I now have a couple of job interviews lined up for tenure-track assistant professor positions.

So rather than the usual economic and political topics, I thought I would focus today on cuteness. First of all, this allows me the opportunity to present you with a bunch of photos of cute animals (free stock photos brought to you by pexels.com):

Beyond the joy I hope this brings you in a dark time, I have a genuine educational purpose here, which is to delve into the surprisingly deep evolutionary question: Why does cuteness exist?

Well, first of all, what is cuteness? We evaluate a person or animal (or robot, or alien) as cute based on certain characteristics like wide eyes, a large head, and a posture or expression that evokes innocence. We feel positive feelings toward that which we identify as cute, and we want to help them rather than harm them. We often feel protective toward them.

It’s not too hard to provide an evolutionary rationale for why we would find our own offspring cute: We have good reasons to want to protect and support our own offspring, and given the substantial effort that raising them requires, it behooves us to have a strong motivation to commit to doing so.

But it’s less obvious why we would feel this way about so many other things that are not human. Dogs and cats have co-evolved along with us as they became domesticated, dogs starting about 40,000 years ago and cats starting around 8,000 years ago. So perhaps it’s not so surprising that we find them cute as well: Becoming domesticated is, in many ways, simply the process of maximizing your level of cuteness so that humans will continue to feed and protect you.

But why are non-domesticated animals also often quite cute? That red panda, penguin, owl, and hedgehog are not domesticated; this is what they look like in the wild. And yet I personally find the red panda to be probably the cutest among an already very cute collection.

Some animals we do not find cute, or at least most people don’t. Here’s a collection of “cute snakes” that I honestly am not getting much cuteness reaction from. These “cute snails” work a little better, but they’re assuredly not as cute as kittens or red pandas. But honestly these “cute spiders” are doing a remarkably good job of it, despite the general sense I have (and I think I share with most people) that spiders are not generally cute. And while tentacles are literally the stuff of Lovecraftian nightmares, this “adorable octopus” lives up to the moniker.

The standard theory is that animals that we find cute are simply those that most closely resemble our own babies, but I don’t really buy it. Naked mole rats have their moments, but they are certainly not as cute as puppies or kittens, despite clearly bearing a closer resemblance to the naked wrinkly blob that most human infants look like. Indeed, I think it’s quite striking that babies aren’t really that cute; yes, some are, but many are not, and even the cutest babies are rarely as cute as the average kitten or red panda.

It seems to me, rather, that we have some idealized concept of what a cute creature should look like, and maybe it evolved to reflect some kind of “optimal baby” of perfect health and vigor—but most of our babies don’t quite manage to meet that standard. Perhaps the cuteness of penguins or red pandas is sheer coincidence; out of the millions of animal species out there, some of them were bound to send our cuteness-detectors into overdrive. Dogs and cats, then, started as such a coincidence—and then through domestication they evolved to fit our cuteness standard better and better, because this was in fact the primary determinant of their survival. That’s how you can get the adorable abomination that is a pug:

Such a creature would never survive in the wild, but we created it because we liked it (or enough of us did, anyway).

There are actually important reasons why having such a strong cuteness response could be maladaptive—we’re apex predators, after all. If finding animals cute prevents us from killing and eating them, that’s an important source of nutrition we are passing up. So whatever evolutionary pressure molded our cuteness response, it must be strong enough to overcome that risk.

Indeed, perhaps the cuteness of cats and dogs goes beyond not only coincidence but also the co-opting of an impulse to protect our offspring. Perhaps it is something that co-evolved in us for the direct purpose of incentivizing us to care for cats and dogs. It has been long enough for that kind of effect—we evolved our ability to digest wheat and milk in roughly the same time period. Indeed, perhaps the very cuteness response that makes us hesitant to kill a rabbit ourselves actually made us better at hunting rabbits, by making us care for dogs who could do the hunting even better than we could. Perhaps the cuteness of a mouse is less relevant to how we relate to mice than the cuteness of the cat who will have that mouse for dinner.

This theory is much more speculative, and I admit I don’t have very clear evidence of it; but let me at least say this: A kitten wouldn’t get cuter by looking more like a human baby. The kitten already seems quite well optimized for us to see it as cute, and any deviation from that optimum is going to be downward, not upward. Any truly satisfying theory of cuteness needs to account for that.

I also think it’s worth noting that behavior is an important element of cuteness; while a kitten will pretty much look cute no matter what it’s doing, whether or not a snail or a bird looks cute often depends on the pose it is in.


There is an elegance and majesty to a lion or a tiger, but I wouldn’t call them cute; indeed, should you encounter either one in the wild, the correct response is to run for your life.

Cuteness is playful, innocent, or passive; aggressive and powerful postures rapidly undermine cuteness. A lion may look cute as it rubs against a tree—but not once it turns to you and roars.

The truth is, I’m not sure we fully grasp what is going on in our brains when we identify something as cute. But it does seem to brighten our days.

Adversity is not a gift

Nov 29 JDN 2459183

For the last several weeks I’ve been participating in a program called “positive intelligence” (which they abbreviate “PQ” even though that doesn’t make sense); it’s basically a self-help program that is designed to improve mood and increase productivity. I am generally skeptical of such things, and I could tell from the start that it was being massively oversold, but I had the opportunity to participate for free, and I looked into the techniques involved and most of them seem to be borrowed from cognitive-behavioral therapy and mindfulness meditation.

Overall, I would say that the program has had small but genuine benefits for me. I think the most helpful part was actually getting the chance to participate in group sessions (via Zoom of course) with others also going through the program. That kind of mutual social support can make a big difference. The group I joined was made up entirely of fellow economists (some other grad students, some faculty), so we had a lot of shared experiences.

Some of the techniques feel very foolish, and others just don’t seem to work for me; but I did find at least some of the meditation techniques (which they annoyingly insist on calling by the silly name “PQ reps”) have helped me relax.

But there’s one part of the PQ program in particular that I just can’t buy into, and this is the idea that adversity is a gift and an opportunity.

They call it the “Sage perspective”: You observe the world without judging what is good or bad, and any time you think something is bad, you find a way to transform it into a gift and an opportunity. The claim is that everything—or nearly everything—that happens to you can make you better off. There’s a lot of overlap here with the attitude “Everything happens for a reason”.

I don’t doubt that sincerely believing this would make you happier. Nevertheless, it is obviously false.

If indeed adversity were a gift, we would seek it out. If getting fired or going bankrupt or getting sick were a gift and an opportunity, we’d work to make these things happen.

Yes, it’s true that sometimes an event which seems bad at the time can turn out to have good consequences in the long run. This is simply because we are unable to foresee all future ramifications. Sometimes things turn out differently than you think they will. But most of the time, when something seems bad, it is actually bad.

There might be some small amount of discomfort or risk that would be preferable to a life of complete safety and complacency; but we are perfectly capable of seeking out whatever discomfort or risk we choose. Most of us live with far more discomfort and risk than we would prefer, and simply have no choice in the matter.

If adversity were a gift, people would thank you for giving it to them. “Thanks for dumping me!” “Thanks for firing me!” “Thanks for punching me!” These aren’t the sort of thing we hear very often (at least not sincerely).

I think this is fairly obvious, honestly, so I won’t belabor it any further. But it raises a question: Is there a way to salvage the mental health benefits of this attitude while abandoning its obvious falsehood?

“Everything happens for a reason” doesn’t work; we live in a universe of deep randomness, ruled by the blind idiot gods of natural law.

“Every cloud has a silver lining” is better; but clearly not every bad thing has an upside, or if it does the upside can be so small as to be utterly negligible. (What was the upside of the Rwandan genocide?) Restricted to ordinary events like getting fired, this one works pretty well; but it obviously fails for the most extreme traumas, and doesn’t seem particularly helpful for the death of a loved one either.

“What doesn’t kill me makes me stronger” is better still, but clearly not true in every case; some bad events that don’t actually kill us can traumatize us and make the rest of our lives harder. Perhaps “What doesn’t permanently damage me makes me stronger”?

I think the version of this attitude that I have found closest to the truth is “Everything is raw material”. Sometimes bad things just happen: Bad luck, or bad actions, can harm just about anyone at just about any time. But it is within our power to decide how we will respond to what happens to us, and wallowing in despair is almost never the best response.

Thus, while it is foolish to see adversity as a gift, it is not so foolish to see it as an opportunity. Don’t try to pretend that bad things aren’t bad. There’s no sense in denying that we would prefer some outcomes over others, and we feel hurt or disappointed when things don’t turn out how we wanted. Yet even what is bad can still contain within it chances to learn or make things better.

What’s wrong with “should”?

Nov 8 JDN 2459162

I have been a patient in cognitive behavioral therapy (CBT) for many years now. The central premise that thoughts can influence emotions is well-founded, and the results of CBT are empirically well supported.

One of the central concepts in CBT is cognitive distortions: There are certain systematic patterns in how we tend to think, which often result in beliefs and emotions that are out of proportion to reality.

Most of the cognitive distortions CBT deals with make sense to me—and I am well aware that my mind applies them frequently: All-or-nothing, jumping to conclusions, overgeneralization, magnification and minimization, mental filtering, discounting the positive, personalization, emotional reasoning, and labeling are all clearly distorted modes of thinking that nevertheless are extremely common.

But there’s one “distortion” on CBT lists that always bothers me: “should statements”.

Listen to this definition of what is allegedly a cognitive distortion:

Another particularly damaging distortion is the tendency to make “should” statements. Should statements are statements that you make to yourself about what you “should” do, what you “ought” to do, or what you “must” do. They can also be applied to others, imposing a set of expectations that will likely not be met.

When we hang on too tightly to our “should” statements about ourselves, the result is often guilt that we cannot live up to them. When we cling to our “should” statements about others, we are generally disappointed by their failure to meet our expectations, leading to anger and resentment.

So any time we use “should”, “ought”, or “must”, we are guilty of distorted thinking? In other words, all of ethics is a cognitive distortion? The entire concept of obligation is a symptom of a mental disorder?

Different sources on CBT will define “should statements” differently, and sometimes they offer a more nuanced definition that doesn’t have such extreme implications:

Individuals thinking in ‘shoulds’, ‘oughts’ or ‘musts’ have an ironclad view of how they and others ‘should’ and ‘ought’ to be. These rigid views or rules can generate feels of anger, frustration, resentment, disappointment and guilt if not followed.

Example: You don’t like playing tennis but take lessons as you feel you ‘should’, and that you ‘shouldn’t’ make so many mistakes on the court, and that your coach ‘ought to’ be stricter on you. You also feel that you ‘must’ please him by trying harder.

This is particularly problematic, I think, because of the All-or-Nothing distortion, which does genuinely seem to be common among people with depression: Unless you are very clear from the start about where to draw the line, the mind will leap to saying that all statements involving the word “should” are wrong.

I think what therapists are trying to capture with this concept is something like having unrealistic expectations, or focusing too much on what could or should have happened instead of dealing with the actual situation you are in. But many seem to be unable to articulate that clearly, and instead end up asserting that the entire concept of moral obligation is a cognitive distortion.

There may be a deeper error here as well: The way we study mental illness doesn’t involve enough comparison with the control group. Psychologists are accustomed to asking the question, “How do people with depression think?”; but they are not accustomed to asking the question, “How do people with depression think compared to people who don’t?” If you want to establish that A causes B, it’s not enough to show that those with B have A; you must also show that those who don’t have B also don’t have A.

This is an extreme example for illustration, but suppose someone became convinced that depression is caused by having a liver. They studied a bunch of people with depression, and found that they all had livers; hypothesis confirmed! Clearly, we need to remove the livers, and that will cure the depression.
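
For what it’s worth, here is a tiny Python sketch of the point, with entirely made-up prevalence numbers: a universal trait (the liver) looks like a perfect correlate of depression if you only ever look at the depressed group, and only the comparison with a control group distinguishes it from a trait that genuinely differs between the groups.

```python
import numpy as np

# Invented numbers, purely for illustration: a universal trait ("has a liver")
# versus a trait that genuinely is more common among the depressed. Looking
# only at the depressed group can't tell these apart; comparing against the
# control group can.

rng = np.random.default_rng(1)

n = 10_000
depressed = rng.random(n) < 0.15                             # ~15% of the sample is depressed
has_liver = np.ones(n, dtype=bool)                           # everyone has a liver
distortion = rng.random(n) < np.where(depressed, 0.6, 0.4)   # more common when depressed

for name, trait in [("has a liver", has_liver), ("cognitive distortion", distortion)]:
    rate_depressed = trait[depressed].mean()
    rate_controls = trait[~depressed].mean()
    print(f"{name}: {rate_depressed:.2f} among depressed vs {rate_controls:.2f} among controls")
```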

The best example I can find of a study that actually asked that question compared nursing students and found that cognitive distortions explain about 20% of the variance in depression. This is a significant amount—but still leaves a lot unexplained. And most of the research on depression doesn’t even seem to think to compare against people without depression.

My impression is that some cognitive distortions are genuinely more common among people with depression—but not all of them. There is an ongoing controversy over what’s called the depressive realism effect, which is the finding that in at least some circumstances the beliefs of people with mild depression seem to be more accurate than the beliefs of people with no depression at all. The result is controversial both because it seems to threaten the paradigm that depression is caused by distortions, and because it seems to be very dependent on context; sometimes depression makes people more accurate in their beliefs, other times it makes them less accurate.

Overall, I am inclined to think that most people have a variety of cognitive distortions, but we only tend to notice when those distortions begin causing distress—such as when they are involved in depression. Human thinking in general seems to be a muddled mess of heuristics, and the wonder is that we function as well as we do.

Does this mean that we should stop trying to remove cognitive distortions? Not at all. Distorted thinking can be harmful even if it doesn’t cause you distress: The obvious example is a fanatical religious or political belief that leads you to harm others. And indeed, recognizing and challenging cognitive distortions is a highly effective treatment for depression.

Actually I created a simple cognitive distortion worksheet based on the TEAM-CBT approach developed by David Burns that has helped me a great deal in a remarkably short time. You can download the worksheet yourself and try it out. Start with a blank page and write down as many negative thoughts as you can, and then pick 3-5 that seem particularly extreme or unlikely. Then make a copy of the cognitive distortion worksheet for each of those thoughts and follow through it step by step. Above all, do not skip the step “This thought shows the following good things about me and my core values:”; that often feels the strangest, but it’s a critical part of what makes the TEAM-CBT approach better than conventional CBT.

So yes, we should try to challenge our cognitive distortions. But the mere fact that a thought is distressing doesn’t imply that it is wrong, and giving up on the entire concept of “should” and “ought” is throwing out a lot of babies with that bathwater.

We should be careful about labeling any thoughts that depressed people have as cognitive distortions—and “should statements” is a clear example where many psychologists have overreached in what they characterize as a distortion.

Sincerity inflation

Aug 30 JDN 2459092

What is the most saccharine, empty, insincere way to end a letter? “Sincerely”.

Whence such irony? Well, we’ve all been using it for so long that we barely notice it anymore. It’s just the standard way to end a letter now.

This process is not unlike inflation: As more and more dollars get spent, the value of a dollar decreases, and as a word or phrase gets used more and more, its meaning weakens.

It’s hardly just the word “Sincerely” itself that has thus inflated. Indeed, almost any sincere expression of caring now tends to feel empty. We routinely ask strangers “How are you?” when we don’t actually care how they are.

I felt this quite vividly when I was applying to GiveWell (alas, they decided not to hire me). I was trying to express how much I care about GiveWell’s mission to maximize the effectiveness of charity at saving lives, and it was quite hard to find the words. I kept finding myself saying things that anyone could say, whether they really cared or not. Fighting global poverty is nothing less than my calling in life—but how could I say that without sounding obsequious or hyperbolic? Anyone can say that they care about global poverty—and if you asked them, hardly anyone would say that they don’t care at all about saving African children from malaria—but how many people actually give money to the Against Malaria Foundation?

Or think about how uncomfortable it can feel to tell a friend that you care about them. I’ve seen quite a few posts on social media that are sort of scattershot attempts at this: “I love you all!” Since that is obviously not true—you do not in fact love all 286 of your Facebook friends—it has plausible deniability. But you secretly hope that the ones you really do care about will see its truth.

Where is this ‘sincerity inflation’ coming from? It can’t really be from overuse of sincerity in ordinary conversation—the question is precisely why such conversation is so rare.

But there is a clear source of excessive sincerity, and it is all around us: Advertising.

Every product is the “best”. They will all “change your life”. You “need” every single one. Every corporation “supports family”. Every product will provide “better living”. The product could be a toothbrush or an automobile; the ads are never really about the product. They are about how the corporation will make your family happy.

Consider the following hilarious subversion by the Steak-umms Twitter account (which is a candle in the darkness of these sad times; they have lots of really great posts about Coronavirus and critical thinking).

Kevin Farzard (who I know almost nothing about, but gather he’s a comedian?) wrote this on Twitter: “I just want one brand to tell me that we are not in this together and their health is our lowest priority”

Steak-umms diligently responded: “Kevin we are not in this together and your health is our lowest priority”

Why is this amusing? Because every other corporation—whose executives surely care less about public health than whatever noble creature runs the Steak-umms Twitter feed—has been saying the opposite: “We are all in this together and your health is our highest priority.”

We are so inundated with this saccharine sincerity by advertisers that we learn to tune it out—we have to, or else we’d go crazy and/or bankrupt. But this has an unfortunate side effect: We tune out expressions of caring when they come from other human beings as well.

Therefore let us endeavor to change this, to express our feelings clearly and plainly to those around us, while continuing to shield ourselves from the bullshit of corporations. (I choose that word carefully: These aren’t lies, they’re bullshit. They aren’t false so much as they are utterly detached from truth.) Part of this means endeavoring to be accepting and supportive when others express their feelings to us, not retreating into the comfort of dismissal or sarcasm. Restoring the value of our sincerity will require a concerted effort from many people acting at once.

For this project to succeed, we must learn to make a sharp distinction between the institutions that are trying to extract profits from us and the people who have relationships with us. This is not to say that human beings cannot lie or be manipulative; of course they can. Trust is necessary for all human relationships, but there is such a thing as too much trust. There is a right amount of trust to extend to people you do not know, and it is neither complete distrust nor complete trust; higher levels of trust must be earned.

But at least human beings are not systematically designed to be amoral and manipulative—which corporations are. A corporation exists to do one thing: Maximize profit for its shareholders. Whatever else a corporation is doing, it is in service of that one ultimate end. Corporations can do many good things; but they sort of do it by accident, along the way toward their goal of maximizing profit. And when those good things stop being profitable, they stop doing them. Keep these facts in mind, and you may have an easier time ignoring everything that corporations say without training yourself to tune out all expressions of sincerity.

Then, perhaps one day it won’t feel so uncomfortable to tell people that we care about them.