Yet for now we remain isolated from one another, attempting to substitute superficial digital interactions for the authentic comforts of real face-to-face contact. And anyone who is single, or forced to live away from their loved ones, during quarantine is surely having an especially hard time right now.
I have been quite fortunate in this regard: My fiancé and I have lived together for several years, and during this long period of isolation we’ve at least had each other—if basically no one else.
But even I have felt a strong difference, considerably stronger than I expected it would be: Despite many of my interactions already being conducted via the Internet, needing to do so with all interactions feels deeply constraining. Nearly all of my work can be done remotely—but not quite all, and even what can be done remotely doesn’t always work as well remotely. I am moderately introverted, and I still feel substantially deprived; I can only imagine how awful it must be for the strongly extraverted.
As awkward as face-to-face interactions can be, and as much as I hate making phone calls, somehow Zoom video calls are even worse than either. Being unable to visit someone’s house for dinner and games, or go out to dinner and actually sit inside a restaurant, leaves a surprisingly large emotional void. Nothing in particular feels radically different, but the sum of so many small differences adds up to a rather large one. I think I felt it the most when we were forced to cancel our usual travel back to Michigan over the holiday season.
This does not mean that the quarantines were a bad idea—on the contrary, we should have enforced them more aggressively, so as to contain the pandemic faster and ultimately need less time in quarantine. Timing is critical here: Successfully containing the pandemic early is much easier than trying to bring it back under control once it has already spread. When the pandemic began, lockdown might have been able to stop the spread. At this point, vaccines are really our only hope of containment.
But it does mean that if you feel terrible lately, there is a very good reason for this, and you are not alone. Due to forces much larger than any of us can control, forces that even the world’s most powerful governments are struggling to contain, you are currently being deprived of a basic human need.
And especially if you are on your own this Valentine’s Day, remember that there are people who love you, even if they can’t be there with you right now.
I received several books for Christmas this year, and the one I was most excited to read first was The Sense of Style by Steven Pinker. Pinker is exactly the right person to write such a book: He is both a brilliant linguist and cognitive scientist and also an eloquent and highly successful writer. There are two other books on writing that I rate at the same tier: On Writing by Stephen King, and The Art of Fiction by John Gardner. Don’t bother with style manuals from people who only write style manuals; if you want to learn how to write, learn from people who are actually successful at writing.
Indeed, I knew I’d love The Sense of Style as soon as I read its preface, containing some truly hilarious takedowns of Strunk & White. And honestly Strunk & White are among the best standard style manuals; they at least actually manage to offer some useful advice while also being stuffy, pedantic, and often outright inaccurate. Most style manuals only do the second part.
One of Pinker's central focuses in The Sense of Style is The Curse of Knowledge, an all-too-common bias in which knowing something makes us unable to appreciate that other people don't already know it. I think I succumbed to this failing most severely in my first book, Special Relativity from the Ground Up, in which my concept of "the ground" was above most people's ceilings. I was trying to write for high school physics students, and I think the book ended up mostly being read by college physics professors.
The problem is surely a real one: After years of gaining expertise in a subject, we are all liable to forget the difficulty of reaching our current summit and automatically deploy concepts and jargon that only a small group of experts actually understand. But I think Pinker underestimates the difficulty of escaping this problem, because it’s not just a cognitive bias that we all suffer from time to time. It’s also something that our society strongly incentivizes.
Pinker points out that a small but nontrivial proportion of published academic papers are genuinely well written, using this to argue that obscurantist jargon-laden writing isn't necessary for publication; but he doesn't seem to consider the fact that nearly all of those well-written papers were published by authors who already had tenure or even distinction in the field. I challenge you to find a single paper written by a lowly grad student that could actually get published without being full of needlessly technical terminology and awkward passive constructions: "A murine model was utilized for the experiment, in an acoustically sealed environment" rather than "I tested using mice and rats in a quiet room". This is not because grad students are more thoroughly entrenched in the jargon than tenured professors (quite the contrary), nor because grad students are worse writers in general (that one could really go either way), but because grad students have more to prove. We need to signal our membership in the tribe, whereas once you've got tenure—or especially once you've got an endowed chair or something—you have already proven yourself.
Pinker seems to briefly touch on this insight (p. 69), without fully appreciating its significance: "Even when we have an inkling that we are speaking in a specialized lingo, we may be reluctant to slip back into plain speech. It could betray to our peers the awful truth that we are still greenhorns, tenderfoots, newbies. And if our readers do know the lingo, we might be insulting their intelligence while spelling it out. We would rather run the risk of confusing them while at least appearing to be sophisticated than take a chance at belaboring the obvious while striking them as naive or condescending."
What we are dealing with here is a signaling problem. The fact that one can write more plainly once one is well-established is the phenomenon of countersignaling, in which someone who has already established their status stops investing in signaling.
Here’s a simple model for you. Suppose each person has a level of knowledge x, which they are trying to demonstrate. They know their own level of knowledge, but nobody else does.
Suppose that when we observe someone’s knowledge, we get two pieces of information: We have an imperfect observation of their true knowledge which is x+e, the real value of x plus some amount of error e. Nobody knows exactly what the error is. To keep the model as simple as possible I’ll assume that e is drawn from a uniform distribution between -1 and 1.
Finally, assume that we are trying to select people above a certain threshold: Perhaps we are publishing in a journal, or hiring candidates for a job. Let’s call that threshold z. If x < z-1, then since e can never be larger than 1, we will immediately observe that they are below the threshold and reject them. If x > z+1, then since e can never be smaller than -1, we will immediately observe that they are above the threshold and accept them.
But when z-1 < x < z+1, we may think they are above the threshold when they actually are not (if e is positive), or think they are not above the threshold when they actually are (if e is negative).
So then let's say that they can invest in signaling by putting in some amount of visible work y (like citing obscure papers or using complex jargon). This additional work may be costly and provide no real value in itself, but it can still be useful so long as one simple condition is met: It's easier to do if your true knowledge x is high.
In fact, for this very simple model, let’s say that you are strictly limited by the constraint that y <= x. You can’t show off what you don’t know.
If your true value x > z, then you should choose y = x. Then, upon observing your signal, we know immediately that you must be above the threshold.
But if your true value x < z, then you should choose y = 0, because there’s no point in signaling that you were almost at the threshold. You’ll still get rejected.
Yet remember from before that only those with z-1 < x < z+1 actually need to bother signaling at all. Those with x > z+1 can actually countersignal, by also choosing y = 0. Since you already have tenure, nobody doubts that you belong in the club.
This means we’ll end up with three groups: Those with x < z, who don’t signal and don’t get accepted; those with z < x < z+1, who signal and get accepted; and those with x > z+1, who don’t signal but get accepted. Then life will be hardest for those who are just above the threshold, who have to spend enormous effort signaling in order to get accepted—and that sure does sound like grad school.
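To make this concrete, here's a minimal simulation sketch in Python of the baseline model just described. The threshold value, the range of knowledge levels, and the sample size are all hypothetical choices of mine for illustration; they aren't part of the model as stated.

```python
import random

# A minimal sketch of the baseline signaling model described above. The threshold,
# the range of knowledge levels, and the sample size are hypothetical choices for
# illustration; they are not part of the model as stated.

def simulate(z=5.0, n=100_000, seed=42):
    rng = random.Random(seed)
    groups = ["below threshold (x < z)", "just above (z < x < z+1)", "well above (x > z+1)"]
    effort = {g: [] for g in groups}
    accepted = {g: 0 for g in groups}

    for _ in range(n):
        x = rng.uniform(z - 2, z + 2)    # true knowledge, known only to the person themselves
        e = rng.uniform(-1, 1)           # observation error
        observed = x + e                 # what others see, absent any signal

        # Equilibrium strategy from the text: invest in signaling (y = x) only if
        # z < x <= z+1; everyone else (too low to clear the bar, or high enough
        # not to need it) sets y = 0.
        y = x if z < x <= z + 1 else 0.0

        # Accepted if the signal proves x >= z (since y <= x, a signal y >= z is proof),
        # or if x > z+1, in which case even the worst-case observation x - 1 still
        # lands above the threshold.
        is_accepted = (y >= z) or (x > z + 1 and observed > z)

        if x < z:
            group = groups[0]
        elif x <= z + 1:
            group = groups[1]
        else:
            group = groups[2]
        effort[group].append(y)
        accepted[group] += is_accepted

    for group in groups:
        mean_effort = sum(effort[group]) / len(effort[group])
        share = accepted[group] / len(effort[group])
        print(f"{group:28s} mean signaling effort = {mean_effort:4.2f}, accepted = {share:4.0%}")

simulate()
```

Running it reproduces the three groups: zero signaling effort below the threshold and well above it, and heavy signaling effort (with guaranteed acceptance) in the band just above it.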
You can make the model more sophisticated if you like: Perhaps the error isn't uniformly distributed, but some other distribution with wider support (like a normal distribution, or a logistic distribution); perhaps the signaling isn't perfect, but itself has some error; and so on. With such additions, you can get a result where the least-qualified still signal a little bit so they get some chance, and the most-qualified still signal a little bit to avoid a small risk of being rejected. But it's a fairly general phenomenon that those closest to the threshold will be the ones who have to spend the most effort in signaling.
This reveals a disturbing overlap between the Curse of Knowledge and Impostor Syndrome: We write in impenetrable obfuscationist jargon because we are trying to conceal our own insecurity about our knowledge and our status in the profession. We’d rather you not know what we’re talking about than have you realize that we don’t know what we’re talking about.
For the truth is, we don't know what we're talking about. And neither do you, and neither does anyone else. This is the agonizing truth of research that nearly everyone doing research knows, but which one must be either very brave, very foolish, or very well-established to admit out loud: It is in the nature of doing research on the frontier of human knowledge that there is always far more that we don't understand about our subject than that we do understand.
I would like to be more open about that. I would like to write papers saying things like “I have no idea why it turned out this way; it doesn’t make sense to me; I can’t explain it.” But to say that the profession disincentivizes speaking this way would be a grave understatement. It’s more accurate to say that the profession punishes speaking this way to the full extent of its power. You’re supposed to have a theory, and it’s supposed to work. If it doesn’t actually work, well, maybe you can massage the numbers until it seems to, or maybe you can retroactively change the theory into something that does work. Or maybe you can just not publish that paper and write a different one.
If you plot the distribution of test statistics (z-scores) reported in published papers, it looks like a bell curve, except that almost all the values between -2 and 2 are mysteriously missing.
If we were actually publishing all the good science that gets done, it would in fact be a very nice bell curve. All those missing values are papers that never got published, or results that were excluded from papers, or statistical analyses that were massaged, in order to get a p-value less than the magical threshold for publication of 0.05. (For the statistically uninitiated, a z-score less than -2 or greater than +2 generally corresponds to a p-value less than 0.05, so these are effectively the same constraint.)
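To see how that filter produces the missing middle, here's a minimal simulation sketch; the mix of true effects and the noise model are numbers I made up purely for illustration, not real publication data.

```python
import random

# Minimal sketch of how a p < 0.05 publication filter hollows out the middle
# of the z-score distribution. The mix of true effects and the noise model are
# made-up numbers for illustration, not real publication data.

def simulate(n_studies=100_000, seed=1):
    rng = random.Random(seed)
    all_z, published_z = [], []
    for _ in range(n_studies):
        true_effect = rng.choice([0.0, 0.0, 0.0, 1.0])  # mostly null effects, some real ones
        z = rng.gauss(2.5 * true_effect, 1.0)           # crude stand-in for a study's z-score
        all_z.append(z)
        if abs(z) > 2:                                  # roughly p < 0.05: only these get published
            published_z.append(z)

    def share_between(zs, lo=-2, hi=2):
        return sum(lo < z < hi for z in zs) / len(zs)

    print(f"All studies run:  {share_between(all_z):.0%} of z-scores fall between -2 and 2")
    print(f"Published only:   {share_between(published_z):.0%} of z-scores fall between -2 and 2")

simulate()
```

Before the filter, most z-scores land between -2 and 2; after the filter, none do, which is exactly the hollowed-out bell curve described above.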
I have literally never read a single paper published in an academic journal in the last 50 years that said in plain language, "I have no idea what's going on here." And yet I have read many papers—probably most of them, in fact—where that would have been an appropriate thing to say. It's quite a rare paper, at least in the social sciences, that has a theory good enough to precisely fit the data without any special pleading or retroactive changes. (Often the bar for a theory's success is lowered to "the effect is usually in the right direction".) Typically results from behavioral experiments are bizarre and baffling, because people are a little screwy. It's just that nobody is willing to stake their career on being that honest about the depth of our ignorance.
This is a deep shame, for the greatest advances in human knowledge have almost always come from people recognizing the depth of their ignorance. Paradigms never shift until people recognize that the one they are using is defective.
This is why it's so hard to beat the Curse of Knowledge: You need to signal that you know what you're talking about, and the truth is you probably don't, because nobody does. So you need to sound like you know what you're talking about in order to get people to listen to you. You may be making nothing more than educated guesses based on extremely limited data, but that's actually the best anyone can do; those other people saying they have it all figured out are either doing the same thing, or they're doing something even less reliable than that. So you'd better sound like you have it all figured out, and that's a lot more convincing when you "utilize a murine model" than when you "use rats and mice".
Perhaps we can at least push a little bit toward plainer language. It helps to be addressing a broader audience: It is both a blessing and a curse that whatever I put on this blog is what you will read, without any gatekeepers in my path. I can use plainer language here if I so choose, because no one can stop me. But of course there's a signaling risk here as well: The Internet is a public place, and potential employers can read this as well, and perhaps decide they don't like me speaking so plainly about the deep flaws in the academic system. Maybe I'd be better off keeping my mouth shut, at least for a while. I've never been very good at keeping my mouth shut.
Once we get established in the system, perhaps we can switch to countersignaling, though even this doesn’t always happen. I think there are two reasons this can fail: First, you can almost always try to climb higher. Once you have tenure, aim for an endowed chair. Once you have that, try to win a Nobel. Second, once you’ve spent years of your life learning to write in a particular stilted, obscurantist, jargon-ridden way, it can be very difficult to change that habit. People have been rewarding you all your life for writing in ways that make your work unreadable; why would you want to take the risk of suddenly making it readable?
I don't have a simple solution to this problem, because it is so deeply embedded. It's not something that one person or even a small number of people can really fix. Ultimately we will need to, as a society, start actually rewarding people for speaking plainly about what they don't know. Admitting that you have no clue will need to be seen as a sign of wisdom and honesty rather than a sign of foolishness and ignorance. And perhaps even that won't be enough: The fact will still remain that figuring out which things you know that other people don't is itself a very difficult thing to do.
I thought I’d go for something a little more light-hearted for this week’s post. It’s been a very difficult year for a lot of people, though with Biden winning the election and the recent FDA approval of a COVID vaccine for emergency use, the light at the end of the tunnel is now visible. I’ve also had some relatively good news in my job search; I now have a couple of job interviews lined up for tenure-track assistant professor positions.
So rather than the usual economic and political topics, I thought I would focus today on cuteness. First of all, this allows me the opportunity to present you with a bunch of photos of cute animals (free stock photos brought to you by pexels.com):
Beyond the joy I hope this brings you in a dark time, I have a genuine educational purpose here, which is to delve into the surprisingly deep evolutionary question: Why does cuteness exist?
Well, first of all, what is cuteness? We evaluate a person or animal (or robot, or alien) as cute based on certain characteristics like wide eyes, a large head, a posture or expression that evokes innocence. We feel positive feelings toward that which we identify as cute, and we want to help them rather than harm them. We often feel protective toward them.
It's not too hard to provide an evolutionary rationale for why we would find our own offspring cute: We have good reasons to want to protect and support our own offspring, and given the substantial effort involved in doing so, it behooves us to have a strong motivation for committing to it.
But it’s less obvious why we would feel this way about so many other things that are not human. Dogs and cats have co-evolved along with us as they became domesticated, dogs starting about 40,000 years ago and cats starting around 8,000 years ago. So perhaps it’s not so surprising that we find them cute as well: Becoming domesticated is, in many ways, simply the process of maximizing your level of cuteness so that humans will continue to feed and protect you.
But why are non-domesticated animals also often quite cute? That red panda, penguin, owl, and hedgehog are not domesticated; this is what they look like in the wild. And yet I personally find the red panda to be probably the cutest among an already very cute collection.
The standard theory is that animals that we find cute are simply those that most closely resemble our own babies, but I don’t really buy it. Naked mole rats have their moments, but they are certainly not as cute as puppies or kittens, despite clearly bearing a closer resemblance to the naked wrinkly blob that most human infants look like. Indeed, I think it’s quite striking that babies aren’t really that cute; yes, some are, but many are not, and even the cutest babies are rarely as cute as the average kitten or red panda.
It actually seems to me more that we have some idealized concept of what a cute creature should look like, and maybe it evolved to reflect some kind of “optimal baby” of perfect health and vigor—but most of our babies don’t quite manage to meet that standard. Perhaps the cuteness of penguins or red pandas is sheer coincidence; out of the millions of animal species out there, some of them were bound to send our cuteness-detectors into overdrive. Dogs and cats, then, started as such coincidence—and then through domestication they evolved to fit our cuteness standard better and better, because this was in fact the primary determinant of their survival. That’s how you can get the adorable abomination that is a pug:
Such a creature would never survive in the wild, but we created it because we liked it (or enough of us did, anyway).
There are actually important reasons why having such a strong cuteness response could be maladaptive—we’re apex predators, after all. If finding animals cute prevents us from killing and eating them, that’s an important source of nutrition we are passing up. So whatever evolutionary pressure molded our cuteness response, it must be strong enough to overcome that risk.
Indeed, perhaps the cuteness of cats and dogs goes beyond not only coincidence but also the co-opting of an impulse to protect our offspring. Perhaps it is something that co-evolved in us for the direct purpose of incentivizing us to care for cats and dogs. It has been long enough for that kind of effect—we evolved our ability to digest wheat and milk in roughly the same time period. Indeed, perhaps the very cuteness response that makes us hesitant to kill a rabbit ourselves actually made us better at hunting rabbits, by making us care for dogs who could do the hunting even better than we could. Perhaps the cuteness of a mouse is less relevant to how we relate to mice than the cuteness of the cat who will have that mouse for dinner.
This theory is much more speculative, and I admit I don’t have very clear evidence of it; but let me at least say this: A kitten wouldn’t get cuter by looking more like a human baby. The kitten already seems quite well optimized for us to see it as cute, and any deviation from that optimum is going to be downward, not upward. Any truly satisfying theory of cuteness needs to account for that.
I also think it's worth noting that behavior is an important element of cuteness; while a kitten will pretty much look cute no matter what it's doing, whether or not a snail or a bird looks cute often depends on the pose it is in.
There is an elegance and majesty to the lion and tiger below, but I wouldn't call them cute; indeed, should you encounter either one in the wild, the correct response is to run for your life.
Cuteness is playful, innocent, or passive; aggressive and powerful postures rapidly undermine cuteness. A lion may look cute as it rubs against a tree—but not once it turns to you and roars.
The truth is, I’m not sure we fully grasp what is going on in our brains when we identify something as cute. But it does seem to brighten our days.
For the last several weeks I’ve been participating in a program called “positive intelligence” (which they abbreviate “PQ” even though that doesn’t make sense); it’s basically a self-help program that is designed to improve mood and increase productivity. I am generally skeptical of such things, and I could tell from the start that it was being massively oversold, but I had the opportunity to participate for free, and I looked into the techniques involved and most of them seem to be borrowed from cognitive-behavioral therapy and mindfulness meditation.
Overall, I would say that the program has had small but genuine benefits for me. I think the most helpful part was actually getting the chance to participate in group sessions (via Zoom of course) with others also going through the program. That kind of mutual social support can make a big difference. The group I joined was made up entirely of fellow economists (some other grad students, some faculty), so we had a lot of shared experiences.
Some of the techniques feel very foolish, and others just don’t seem to work for me; but I did find at least some of the meditation techniques (which they annoyingly insist on calling by the silly name “PQ reps”) have helped me relax.
But there’s one part of the PQ program in particular that I just can’t buy into, and this is the idea that adversity is a gift and an opportunity.
They call it the “Sage perspective”: You observe the world without judging what is good or bad, and any time you think something is bad, you find a way to transform it into a gift and an opportunity. The claim is that everything—or nearly everything—that happens to you can make you better off. There’s a lot of overlap here with the attitude “Everything happens for a reason”.
I don’t doubt that sincerely believing this would make you happier. Nevertheless, it is obviously false.
If indeed adversity were a gift, we would seek it out. If getting fired or going bankrupt or getting sick were a gift and an opportunity, we’d work to make these things happen.
Yes, it’s true that sometimes an event which seems bad at the time can turn out to have good consequences in the long run. This is simply because we are unable to foresee all future ramifications. Sometimes things turn out differently than you think they will. But most of the time, when something seems bad, it is actually bad.
There might be some small amount of discomfort or risk that would be preferable to a life of complete safety and complacency; but we are perfectly capable of seeking out whatever discomfort or risk we choose. Most of us live with far more discomfort and risk than we would prefer, and simply have no choice in the matter.
If adversity were a gift, people would thank you for giving it to them. “Thanks for dumping me!” “Thanks for firing me!” “Thanks for punching me!” These aren’t the sort of thing we hear very often (at least not sincerely).
I think this is fairly obvious, honestly, so I won’t belabor it any further. But it raises a question: Is there a way to salvage the mental health benefits of this attitude while abandoning its obvious falsehood?
“Everything happens for a reason” doesn’t work; we live in a universe of deep randomness, ruled by the blind idiot gods of natural law.
"Every cloud has a silver lining" is better; but clearly not every bad thing has an upside, or if it does the upside can be so small as to be utterly negligible. (What was the upside of the Rwandan genocide?) Restricted to ordinary events like getting fired, this one works pretty well; but it obviously fails for the most extreme traumas, and doesn't seem particularly helpful for the death of a loved one either.
“What doesn’t kill me makes me stronger” is better still, but clearly not true in every case; some bad events that don’t actually kill us can traumatize us and make the rest of our lives harder. Perhaps “What doesn’t permanently damage me makes me stronger”?
I think the version of this attitude that I have found closest to the truth is “Everything is raw material”. Sometimes bad things just happen: Bad luck, or bad actions, can harm just about anyone at just about any time. But it is within our power to decide how we will respond to what happens to us, and wallowing in despair is almost never the best response.
Thus, while it is foolish to see adversity as a gift, it is not so foolish to see it as an opportunity. Don’t try to pretend that bad things aren’t bad. There’s no sense in denying that we would prefer some outcomes over others, and we feel hurt or disappointed when things don’t turn out how we wanted. Yet even what is bad can still contain within it chances to learn or make things better.
One of the central concepts in CBT is cognitive distortions: There are certain systematic patterns in how we tend to think, which often result in beliefs and emotions that are disproportionate to reality.
Most of the cognitive distortions CBT deals with make sense to me—and I am well aware that my mind applies them frequently: All-or-nothing, jumping to conclusions, overgeneralization, magnification and minimization, mental filtering, discounting the positive, personalization, emotional reasoning, and labeling are all clearly distorted modes of thinking that nevertheless are extremely common.
But there’s one “distortion” on CBT lists that always bothers me: “should statements”.
Here is how one typical description puts it: "Another particularly damaging distortion is the tendency to make 'should' statements. Should statements are statements that you make to yourself about what you 'should' do, what you 'ought' to do, or what you 'must' do. They can also be applied to others, imposing a set of expectations that will likely not be met.
When we hang on too tightly to our 'should' statements about ourselves, the result is often guilt that we cannot live up to them. When we cling to our 'should' statements about others, we are generally disappointed by their failure to meet our expectations, leading to anger and resentment."
So any time we use “should”, “ought”, or “must”, we are guilty of distorted thinking? In other words, all of ethics is a cognitive distortion? The entire concept of obligation is a symptom of a mental disorder?
Different sources on CBT will define “should statements” differently, and sometimes they offer a more nuanced definition that doesn’t have such extreme implications:
Individuals thinking in 'shoulds', 'oughts', or 'musts' have an ironclad view of how they and others 'should' and 'ought' to be. These rigid views or rules can generate feelings of anger, frustration, resentment, disappointment and guilt if not followed.
Example: You don’t like playing tennis but take lessons as you feel you ‘should’, and that you ‘shouldn’t’ make so many mistakes on the court, and that your coach ‘ought to’ be stricter on you. You also feel that you ‘must’ please him by trying harder.
This is particularly problematic, I think, because of the All-or-Nothing distortion, which does genuinely seem to be common among people with depression: Unless you are very clear from the start about where to draw the line, the mind will leap to saying that all statements involving the word "should" are wrong.
I think what therapists are trying to capture with this concept is something like having unrealistic expectations, or focusing too much on what could or should have happened instead of dealing with the actual situation you are in. But many seem to be unable to articulate that clearly, and instead end up asserting that the entire concept of moral obligation is a cognitive distortion.
There may be a deeper error here as well: The way we study mental illness doesn't involve enough comparison with the control group. Psychologists are accustomed to asking the question, "How do people with depression think?"; but they are not accustomed to asking the question, "How do people with depression think compared to people who don't have depression?" If you want to establish that A causes B, it's not enough to show that those with B have A; you must also show that A is less common among those who don't have B.
This is an extreme example for illustration, but suppose someone became convinced that depression is caused by having a liver. They studied a bunch of people with depression, and found that they all had livers; hypothesis confirmed! Clearly, we need to remove the livers, and that will cure the depression.
My impression is that some cognitive distortions are genuinely more common among people with depression—but not all of them. There is an ongoing controversy over what's called the depressive realism effect, which is the finding that in at least some circumstances the beliefs of people with mild depression seem to be more accurate than the beliefs of people with no depression at all. The result is controversial both because it seems to threaten the paradigm that depression is caused by distortions, and because it seems to be very dependent on context; sometimes depression makes people more accurate in their beliefs, other times it makes them less accurate.
Overall, I am inclined to think that most people have a variety of cognitive distortions, but we only tend to notice when those distortions begin causing distress—such as when they are involved in depression. Human thinking in general seems to be a muddled mess of heuristics, and the wonder is that we function as well as we do.
Does this mean that we should stop trying to remove cognitive distortions? Not at all. Distorted thinking can be harmful even if it doesn’t cause you distress: The obvious example is a fanatical religious or political belief that leads you to harm others. And indeed, recognizing and challenging cognitive distortions is a highly effective treatment for depression.
Actually, I created a simple cognitive distortion worksheet based on the TEAM-CBT approach developed by David Burns that has helped me a great deal in a remarkably short time. You can download the worksheet yourself and try it out. Start with a blank page and write down as many negative thoughts as you can, and then pick 3-5 that seem particularly extreme or unlikely. Then make a copy of the cognitive distortion worksheet for each of those thoughts and follow through it step by step. In particular, do not skip the step "This thought shows the following good things about me and my core values:"; that step often feels the strangest, but it's a critical part of what makes the TEAM-CBT approach better than conventional CBT.
So yes, we should try to challenge our cognitive distortions. But the mere fact that a thought is distressing doesn’t imply that it is wrong, and giving up on the entire concept of “should” and “ought” is throwing out a lot of babies with that bathwater.
We should be careful about labeling any thoughts that depressed people have as cognitive distortions—and “should statements” is a clear example where many psychologists have overreached in what they characterize as a distortion.
What is the most saccharine, empty, insincere way to end a letter? “Sincerely”.
Whence such irony? Well, we’ve all been using it for so long that we barely notice it anymore. It’s just the standard way to end a letter now.
This process is not unlike inflation: As more and more dollars get spent, the value of a dollar decreases, and as a word or phrase gets used more and more, its meaning weakens.
It’s hardly just the word “Sincerely” itself that has thus inflated. Indeed, almost any sincere expression of caring often feels empty. We routinely ask strangers “How are you?” when we don’t actually care how they are.
I felt this quite vividly when I was applying to GiveWell (alas, they decided not to hire me). I was trying to express how much I care about GiveWell's mission to maximize the effectiveness of charity at saving lives, and it was quite hard to find the words. I kept finding myself saying things that anyone could say, whether they really cared or not. Fighting global poverty is nothing less than my calling in life—but how could I say that without sounding obsequious or hyperbolic? Anyone can say that they care about global poverty—and if you asked them, hardly anyone would say that they don't care at all about saving African children from malaria—but how many people actually give money to the Against Malaria Foundation?
Or think about how uncomfortable it can feel to tell a friend that you care about them. I’ve seen quite a few posts on social media that are sort of scattershot attempts at this: “I love you all!” Since that is obviously not true—you do not in fact love all 286 of your Facebook friends—it has plausible deniability. But you secretly hope that the ones you really do care about will see its truth.
Where is this ‘sincerity inflation’ coming from? It can’t really be from overuse of sincerity in ordinary conversation—the question is precisely why such conversation is so rare.
But there is a clear source of excessive sincerity, and it is all around us: Advertising.
Every product is the “best”. They will all “change your life”. You “need” every single one. Every corporation “supports family”. Every product will provide “better living”. The product could be a toothbrush or an automobile; the ads are never really about the product. They are about how the corporation will make your family happy.
Consider the following hilarious subversion by the Steak-umms Twitter account (which is a candle in the darkness of these sad times; they have lots of really great posts about Coronavirus and critical thinking).
Kevin Farzad (whom I know almost nothing about, but I gather he's a comedian?) wrote this on Twitter: "I just want one brand to tell me that we are not in this together and their health is our lowest priority"
Why is this amusing? Because every other corporation—whose executives surely care less about public health than whatever noble creature runs the Steak-umms Twitter feed—has been saying the opposite: “We are all in this together and your health is our highest priority.”
We are so inundated with this saccharine sincerity by advertisers that we learn to tune it out—we have to, or else we’d go crazy and/or bankrupt. But this has an unfortunate side effect: We tune out expressions of caring when they come from other human beings as well.
Therefore let us endeavor to change this, to express our feelings clearly and plainly to those around us, while continuing to shield ourselves from the bullshit of corporations. (I choose that word carefully: These aren’t lies, they’re bullshit. They aren’t false so much as they are utterly detached from truth.) Part of this means endeavoring to be accepting and supportive when others express their feelings to us, not retreating into the comfort of dismissal or sarcasm. Restoring the value of our sincerity will require a concerted effort from many people acting at once.
For this project to succeed, we must learn to make a sharp distinction between the institutions that are trying to extract profits from us and the people who have relationships with us. This is not to say that human beings cannot lie or be manipulative; of course they can. Trust is necessary for all human relationships, but there is such a thing as too much trust. There is a right amount of trust to place in people you do not know, and it is neither complete distrust nor complete trust. Higher levels of trust must be earned.
But at least human beings are not systematically designed to be amoral and manipulative—which corporations are. A corporation exists to do one thing: Maximize profit for its shareholders. Whatever else a corporation is doing, it is in service of that one ultimate end. Corporations can do many good things; but they do them almost by accident, along the way toward their goal of maximizing profit. And when those good things stop being profitable, they stop doing them. Keep these facts in mind, and you may have an easier time ignoring everything that corporations say without training yourself to tune out all expressions of sincerity.
Then, perhaps one day it won’t feel so uncomfortable to tell people that we care about them.
Perhaps the most famous thought experiment in the philosophy of mind, John Searle’s Chinese Room is the sort of argument that basically every expert knows is wrong, yet can’t quite explain what is wrong with it. Here’s a brief summary of the argument; for more detail you can consult Wikipedia or the Stanford Encyclopedia of Philosophy.
I am locked in a room. The only way to communicate with me is via a slot in the door, through which papers can be passed.
Someone on the other side of the door is passing me papers with Chinese writing on them. I do not speak any Chinese. Fortunately, there is a series of file cabinets in the room, containing instruction manuals which explain (in English) what an appropriate response in Chinese would be to any given input of Chinese characters. These instructions are simply conditionals like “After receiving input A B C, output X.”
I can follow these instructions and thereby ‘hold a conversation’ in Chinese with the person outside, despite never understanding Chinese.
This room, Searle argues, is like a computer passing a Turing Test: A computer is fed symbols and has instructions telling it to output symbols; it may 'hold a conversation', but it will never really understand language.
First, let me note that if this argument were right, it would pretty much doom the entire project of cognitive science. Searle seems to think that calling consciousness a "biological function" as opposed to a "computation" can somehow solve this problem; but this is not how functions work. We don't say that a crane 'isn't really lifting' because it's not made of flesh and bone. We don't say that an airplane 'isn't really flying' because it doesn't flap its wings like a bird. Searle often compares consciousness to digestion, which is unambiguously a biological function; but if you make a machine that processes food chemically in the same way as digestion, that is basically a digestion machine. (In fact there is a machine called a digester that basically does that.) If Searle is right that no amount of computation could ever get you to consciousness, then we basically have no idea how anything would ever get us to consciousness.
Second, I’m guessing that the argument sounds fairly compelling, especially if you’re not very familiar with the literature. Searle chose his examples very carefully to create a powerfully seductive analogy that tilts our intuitions in a particular direction.
There are various replies that have been made to the Chinese Room. Some have pointed out that the fact that I don’t understand Chinese doesn’t mean that the system doesn’t understand Chinese (the “Systems Reply”). Others have pointed out that in the real world, conscious beings interact with their environment; they don’t just passively respond to inputs (the “Robot Reply”).
Searle has his own counter-reply to these arguments: He insists that if instead of having all those instruction manuals, I memorized all the rules, and then went out in the world and interacted with Chinese speakers, it would still be the case that I didn’t actually understand Chinese. This seems quite dubious to me: For one thing, how is that different from what we would actually observe in someone who does understand Chinese? For another, once you’re interacting with people in the real world, they can do things like point to an object and say the word for it; in such interactions, wouldn’t you eventually learn to genuinely understand the language?
But I’d like to take a somewhat different approach, and instead attack the analogy directly. The argument I’m making here is very much in the spirit of Churchland’s Luminous Room reply, but a little more concrete.
I want you to stop and think about just how big those file cabinets would have to be.
For a proper Turing Test, you can’t have a pre-defined list of allowed topics and canned responses. You’re allowed to talk about anything and everything. There are thousands of symbols in Chinese. There’s no specified limit to how long the test needs to go, or how long each sentence can be.
After each 10-character sequence, the person in the room has to somehow sort through all those file cabinets and find the right set of instructions—not simply to find the correct response to that particular 10-character sequence, but to that sequence in the context of every other sequence that has occurred so far. “What do you think about that?” is a question that one answers very differently depending on what was discussed previously.
The key issue here is combinatoric explosion. Suppose we’re dealing with 100 statements, each 10 characters long, from a vocabulary of 10,000 characters. This means that there are ((10,000)^10)^100 = 10^4000 possible conversations. That’s a ludicrously huge number. It’s bigger than a googol. Even if each atom could store one instruction, there aren’t enough atoms in the known universe. After a few dozen sentences, simply finding the correct file cabinet would be worse than finding a needle in a haystack; it would be finding a hydrogen atom in the whole galaxy.
Even if you assume a shorter memory (which I don't think is fair; human beings can absolutely remember 100 statements back), say only 10 statements, things aren't much better: ((10,000)^10)^10 is 10^400, which is still far more than the number of atoms in the known universe.
In fact, even if I assume no memory at all, just a simple Markov chain that responds only to your previous statement (which can be easily tripped up by asking the same question in a few different contexts), that would still be 10,000^10 = 10^40 sequences, which is at least a quintillion times the total data storage of every computer currently on Earth.
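If you want to check the arithmetic, it is easy to do; these are just the counts implied by the toy numbers above (a 10,000-character vocabulary and 10-character statements), not estimates of real conversational variety.

```python
# Counts implied by the toy numbers above: a vocabulary of 10,000 characters
# and statements 10 characters long.
vocab = 10_000
per_statement = vocab ** 10                 # 10^40 possible 10-character statements

print(len(str(per_statement ** 100)) - 1)   # 4000 -> 10^4000 conversations of 100 statements
print(len(str(per_statement ** 10)) - 1)    # 400  -> 10^400 conversations of 10 statements
print(len(str(per_statement)) - 1)          # 40   -> 10^40 single statements

# For scale: there are roughly 10^80 atoms in the observable universe, and total
# global data storage is on the order of 10^22 to 10^23 bytes.
```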
And I’m supposed to imagine that this can be done by hand, in real time, in order to carry out a conversation?
Note that I am not simply saying that a person in a room is too slow for the Chinese Room to work. You can use an exaflop quantum supercomputer if you like; it’s still utterly impossible to store and sort through all possible conversations.
This means that, whatever is actually going on inside the head of a real human being, it is nothing like a series of instructions that say “After receiving input A B C, output X.” A human mind cannot even fathom the total set of possible conversations, much less have a cached response to every possible sequence. This means that rules that simple cannot possibly mimic consciousness. This doesn’t mean consciousness isn’t computational; it means you’re doing the wrong kind of computations.
I'm sure Searle's response would be to say that this is a difference only of degree, not of kind. But is it, really? Sometimes a sufficiently large difference of degree might as well be a difference of kind. (Indeed, perhaps all differences of kind are really very large differences of degree. Remember, there is a continuous series of common ancestors that links you and me to bananas.)
Moreover, Searle has claimed that his point was about semantics rather than consciousness: In an exchange with Daniel Dennett he wrote “Rather he [Dennett] misstates my position as being about consciousness rather than about semantics.” Yet semantics is exactly how we would solve this problem of combinatoric explosion.
Suppose that instead of simply having a list of symbol sequences, the file cabinets contained detailed English-to-Chinese dictionaries and grammars. After reading and memorizing those, then conversing for a while with the Chinese speaker outside the room, who would deny that the person in the room understands Chinese? Indeed what other way is there to understand Chinese, if not reading dictionaries and talking to Chinese speakers?
Now imagine somehow converting those dictionaries and grammars into a form that a computer could directly apply. I don’t simply mean digitizing the dictionary; of course that’s easy, and it’s been done. I don’t even mean writing a program that translates automatically between English and Chinese; people are currently working on this sort of thing, and while still pretty poor, it’s getting better all the time.
No, I mean somehow coding the software so that the computer can respond to sentences in Chinese with appropriate responses in Chinese. I mean having some kind of mapping within the software of how different concepts relate to one another, with categorizations and associations built in.
I mean something like a searchable cross-referenced database, so that when asked the question, "What's your favorite farm animal?" despite never having encountered this sentence before, the computer can go through a list of farm animals and choose one to designate as its 'favorite', and then store that somewhere so that later on when it is again asked it will give the same answer. And then when asked "Why do you like goats?" the computer can go through the properties of goats, choose some to be the 'reason' why it 'likes' them, and then adjust its future responses accordingly. If it decides that the reason is "horns are cute", then when you mention some other horned animal, it updates to increase its probability of considering that animal "cute".
I mean something like a program that is programmed to follow conversational conventions, so that when you ask it its name, it will not only tell you; it will ask you your name in return and store that information for later. And then it will map the sound of your name to known patterns of ethnic naming conventions, and so when you say your name is "Ling-Ling Xu" it asks "Is your family Chinese?" And then when you say "yes" it asks "What part of China are they from?" and then when you say "Shanghai" it asks "Did you grow up there?" and so on. It's not that it has some kind of rule that says "Respond to 'Shanghai' with 'Did you grow up there?'"; on the contrary, later in the conversation you may say "Shanghai" and get a different response because it was in a different context. In fact, if you were to keep spamming "Shanghai" over and over again, it would sound confused: "Why do you keep saying 'Shanghai'? I don't understand."
In other words, I mean semantics. I mean something approaching how human beings actually seem to organize the meanings of words in their brains. Words map to other words and contexts, and some very fundamental words (like “pain” or “red”) map directly to sensory experiences. If you are asked to define what a word means, you generally either use a lot of other words, or you point to a thing and say “It means that.” Why can’t a robot do the same thing?
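To give a concrete flavor of the difference, here's a deliberately tiny sketch of that kind of semantic lookup. The animals, features, and phrasing are all invented for illustration, and any real system would need vastly more structure; the point is only that the answers are computed by traversing a concept map plus a memory of the conversation, rather than looked up in a table of canned input-output pairs.

```python
import random

# A toy concept map: concepts map to properties, and answers are generated by
# traversing that structure (plus a memory of what has already been said),
# rather than looked up in a table of pre-scripted input-output pairs.
# Everything here (the animals, the features, the phrasing) is invented purely
# for illustration.

concepts = {
    "goat":     {"kind": "farm animal", "features": ["horns", "climbing"]},
    "cow":      {"kind": "farm animal", "features": ["big eyes", "grazing"]},
    "chicken":  {"kind": "farm animal", "features": ["feathers", "clucking"]},
    "antelope": {"kind": "wild animal", "features": ["horns", "running"]},
}

memory = {"favorite": None, "liked_features": set()}

def favorite_farm_animal():
    # Pick a favorite once, then keep giving the same answer whenever asked again.
    if memory["favorite"] is None:
        farm_animals = [name for name, c in concepts.items() if c["kind"] == "farm animal"]
        memory["favorite"] = random.choice(farm_animals)
    return f"My favorite farm animal is the {memory['favorite']}."

def why_do_you_like(animal):
    # Choose a 'reason' from the animal's properties and remember it, so that
    # later judgments about other animals stay consistent with it.
    feature = random.choice(concepts[animal]["features"])
    memory["liked_features"].add(feature)
    return f"I like {animal}s because of their {feature}."

def is_it_cute(animal):
    # An animal inherits 'cuteness' if it shares a feature the speaker has
    # already said it likes; this is the kind of update described above.
    shared = memory["liked_features"] & set(concepts[animal]["features"])
    if shared:
        return f"Yes, I think {animal}s are cute; they have {', '.join(shared)} too."
    return f"I'm not sure whether {animal}s are cute."

print(favorite_farm_animal())                 # "What's your favorite farm animal?"
print(why_do_you_like(memory["favorite"]))    # "Why do you like them?"
print(is_it_cute("antelope"))                 # "Are antelopes cute?"
```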
I really cannot emphasize enough how radically different that process would be from simply having rules like "After receiving input A B C, output X." I think part of why Searle's argument is so seductive is that most people don't have a keen grasp of computer science, so the difference between a task that is O(N^2), like what I just outlined above, and a task that is O(10^(10^N)), like the simple input-output rules Searle describes, doesn't sound that important to them. With a fast enough computer it wouldn't matter, right? Well, if by "fast enough" you mean "faster than could possibly be built in our known universe", I guess so. But O(N^2) tasks with N in the thousands are done by your computer all the time; no O(10^(10^N)) task will ever be accomplished for such an N within the Milky Way in the next ten billion years.
I suppose you could still insist that this robot, despite having the same conceptual mappings between words as we do, and acquiring new knowledge in the same way we do, and interacting in the world in the same way we do, and carrying on conversations of arbitrary length on arbitrary topics in ways indistinguishable from the way we do, still nevertheless “is not really conscious”. I don’t know how I would conclusively prove you wrong.
But I have two things to say about that: One, how do I know you aren’t such a machine? This is the problem of zombies. Two, is that really how you would react, if you met such a machine? When you see Lieutenant Commander Data on Star Trek: The Next Generation, is your thought “Oh, he’s just a calculating engine that makes a very convincing simulation of human behavior”? I don’t think it is. I think the natural, intuitive response is actually to assume that anything behaving that much like us is in fact a conscious being.
And that's all the Chinese Room was anyway: Intuition. Searle never actually proved that the person in the room, or the person-room system, or the person-room-environment system, doesn't actually understand Chinese. He just feels that way, and expects us to feel that way as well. But I contend that if you ever did actually meet a machine that really, truly passed the strictest form of a Turing Test, your intuition would say something quite different: You would assume that machine was as conscious as you and I.
One of the most dangerous moves to make in an argument is to accuse your opponent of bad faith. It’s a powerful, and therefore tempting, maneuver: If they don’t even really believe what they are saying, then you can safely ignore basically whatever comes out of their mouth. And part of why this is so tempting is that it is in fact occasionally true—people do sometimes misrepresent their true beliefs in various ways for various reasons. On the Internet especially, sometimes people are just trolling.
But unless you have really compelling evidence that someone is arguing in bad faith, you should assume good faith. You should assume that whatever they are asserting is what they actually believe. For if you assume bad faith and are wrong, you have just cut off any hope of civil discourse between the two of you. You have made it utterly impossible for either side to learn anything or change their mind in any way. If you assume good faith and are wrong, you may have been overly charitable; but in the end you are the one that is more likely to persuade any bystanders, not the one who was arguing in bad faith.
Furthermore, it is important to really make an effort to understand your opponent’s position as they understand it before attempting to respond to it. Far too many times, I have seen someone accused of bad faith by an opponent who simply did not understand their worldview—and did not even seem willing to try to understand their worldview.
In this post, I’m going to point out some particularly egregious examples of this phenomenon that I’ve found, all statements made by left-wing people in response to right-wing people. Why am I focusing on these? Well, for one thing, it’s as important to challenge bad arguments on your own side as it is to do so on the other side. I also think I’m more likely to be persuasive to a left-wing audience. I could find right-wing examples easily enough, but I think it would be less useful: It would be too tempting to think that this is something only the other side does.
Example 1: "Stop calling yourself pro-life when you don't actually care about life"

The phrase "pro-life" means thinking that abortion is wrong. That's all it means. It's jargon at this point. The phrase has taken on this meaning independent of its constituent parts, just as a red herring need not be either red or a fish.
Stop accusing people of not being “truly pro-life” because they don’t adopt some other beliefs that are not related to abortion. Even if those would be advancing life in some sense (most people probably think that most things they think are good advance life in some sense!), they aren’t relevant to the concept of being “pro-life”. Moreover, being “pro-life” in the traditional conservative sense isn’t even about minimizing the harm of abortion or the abortion rate. It’s about emphasizing the moral wrongness of abortion itself, and often even criminalizing it.
I don’t think this is really so hard to understand. If someone truly, genuinely believes that abortion is murdering a child, it’s quite clear why they won’t be convinced by attempts at minimizing harm or trying to reduce the abortion rate via contraception or other social policy. Many policies are aimed at “reducing the demand for abortion”; would you want to “reduce the demand for murder”? No, you’d want murderers to be locked up. You wouldn’t care what their reasons were, and you wouldn’t be interested in using social policies to address those reasons. It’s not even hard to understand why this would be such an important issue to them, overriding almost anything else: If you thought that millions of people were murdering children you would consider that an extremely important issue too.
If you want to convince people to support Roe v. Wade, you’re going to have to change their actual belief that abortion is murder. You may even be able to convince them that they don’t really think abortion is murder—many conservatives support the death penalty for murder, but very few do so for abortion. But they clearly do think that abortion is a grave moral wrong, and you can’t simply end-run around that by calling them hypocrites because they don’t care about whatever other issue you think they should care about.
Example 2: “Stop pretending to care about human life if you support wars in the Middle East”
I had some trouble finding the exact wording of the meme I originally saw with this sentiment, but the gist of it was basically that if you support bombing Afghanistan, Libya, Iraq, and/or Syria, you have lost all legitimacy to claiming that you care about human life.
Say what you will about these wars (though to be honest I think what the US has done in Libya and Syria has done more good than harm), but simply supporting a war does not automatically undermine all your moral legitimacy. The kind of radical pacifism that requires us to never kill anyone ever is utterly unrealistic; the question is and has always been “Which people is it okay to kill, when and how and why?” Some wars are justified; we have to accept that.
It would be different if these were wars of genocidal extermination; I can see a case for saying that anyone who supported the Holocaust or the Rwandan Genocide has lost all moral legitimacy. But even then it isn’t really accurate to say that those people don’t care about human life; it’s much more accurate to say that they have assigned the group of people they want to kill to a subhuman status. Maybe you would actually get more traction by saying “They are human beings too!” rather than by accusing people of not believing in the value of human life.
And clearly these are not wars of extermination—if the US military wanted to exterminate an entire nation of people, they could do so much more efficiently than by using targeted airstrikes and conventional warfare. Remember: They have nuclear weapons. Even if you think that they wouldn’t use nukes because of fear of retaliation (Would Russia or China really retaliate using their own nukes if the US nuked Afghanistan or Iran?), it’s clear that they could have done a lot more to kill a lot more innocent people if that were actually their goal. It’s one thing to say they don’t take enough care not to kill innocent civilians—I agree with that. It’s quite another to say that they actively try to kill innocent civilians—that’s clearly not what is happening.
Example 3: "Stop pretending to be Christian if you don't want to help the poor"

This one I find a good deal more tempting: In the Bible, Jesus does spend an awful lot more words on helping the poor than he does on, well, almost anything else; and he doesn't even once mention abortion or homosexuality. (The rest of the Bible does at least mention homosexuality, but it really doesn't have any clear mentions of abortion.) So it really is tempting to say that anyone who doesn't make helping the poor their number one priority can't really be a Christian.
Yes, it’s true; people can worship a penniless son of a carpenter who preached peace and charity and at the same time support cutting social welfare programs and bombing the Middle East. Such a worldview may not be entirely self-consistent; it’s certainly not the worldview that Jesus himself espoused. But it nevertheless is quite sincerely believed by many millions of people.
It may still be useful to understand the Bible in order to persuade Christians to help the poor more. There are certainly plenty of passages you can point them to where Jesus talks about how important it is to help the poor. Likewise, Jesus doesn’t seem to much like the rich, so it is fair to ask: How Christian is it for Republicans to keep cutting taxes on the rich? (I literally laughed out loud when I first saw this meme: “Celebrate Holy Week By Flogging a Banker: It’s What Jesus Would Have Done!”) But you should not accuse people of “pretending to be Christian”. They really do strongly identify themselves as Christian, and would sooner give up almost anything else about their identity. If you accuse them of pretending, all that will do is shut down the conversation.
Now, after all that, let me give one last example that doesn’t fit the trend, one example where I really do think the other side is acting in bad faith.
Example 4: “#AllLivesMatter is a lie. You don’t actually think all lives matter.”
I think this one is actually true. If you truly believed that all lives matter, you wouldn’t post the hashtag #AllLivesMatter in response to #BlackLivesMatter protests against police brutality.
First of all, you’d probably be supporting those protests. But even if you didn’t for some reason, that isn’t how you would use the hashtag. As a genuine expression of caring, the hashtag #AllLivesMatter would only really make sense for something like Oxfam or UNICEF: Here are these human lives that are in danger and we haven’t been paying enough attention to them, and here, you can follow my hashtag and give some money to help them because all lives matter. If it were really about all lives mattering, then you’d see the hashtag pop up after a tsunami in Southeast Asia or a famine in central Africa. (For a while I tried actually using it that way; I quickly found that it was overwhelmed by the bad faith usage and decided to give up.)
No, this hashtag really seems to be trying to use a genuinely reasonable moral norm—all lives matter—as a weapon against a political movement. We don’t see #AllLivesMatter popping up asking people to help save some lives—it’s always as a way of shouting down other people who want to save some lives. It’s a glib response that lets you turn away and ignore their pleas, without ever actually addressing the substance of what they are saying. If you really believed that all lives matter, you would not be so glib; you would want to understand how so many people are suffering and want to do something to help them. Even if you ultimately disagreed with what they were saying, you would respect them enough to listen.
The counterpart #BlueLivesMatter isn’t in bad faith, but it is disturbing in a different way: What are ‘blue lives’? People aren’t born police officers. They volunteer for that job. They can quit if they want. No one can quit being Black. Working as a police officer isn’t even especially dangerous! But it’s not a bad faith argument: These people really do believe that the lives of police officers are worth more—apparently much more—than the lives of Black civilians.
I do admit, the phrasing “#BlackLivesMatter” is a bit awkward, and could be read to suggest that other lives don’t matter, but it takes about 2 minutes of talking to someone (or reading a blog by someone) who supports those protests to gather that this is not their actual view. Perhaps they should have used #BlackLivesMatterToo, but when your misconception is that easily rectified, the responsibility to avoid it falls on you. (Then again, some people do seem to stoke this misconception: I was quite annoyed when a question was asked at a Democratic debate: “Do Black Lives Matter, or Do All Lives Matter?” The correct answer of course is “All lives matter, which is why I support the Black Lives Matter movement.”)
So, yes, bad faith arguments do exist, and sometimes we need to point them out. But I implore you, consider that a last resort, a nuclear option you’ll only deploy when all other avenues have been exhausted. Once you accuse someone of bad faith, you have shut down the conversation completely—preventing you, them, and anyone else who was listening from having any chance of learning or changing their mind.
The human brain is a remarkably awkward machine. It’s really quite bad at organizing data, relying on associations rather than formal categories.
It is particularly bad at negation. For instance, if I tell you that right now, no matter what, you must not think about a yellow submarine, the first thing you will do is think about a yellow submarine. (You may even get the Beatles song stuck in your head, especially now that I’ve mentioned it.) A computer would never make such a grievous error.
The human brain is also quite bad at separation. Daniel Dennett coined the word “deepity” for a particular kind of deep-sounding but ultimately trivial aphorism that seems to be quite common, which relies upon this feature of the brain. A deepity has at least two possible readings: On one reading, it is true, but utterly trivial. On another, it would be profound if true, but it simply isn’t true. But if you experience both at once, your brain is triggered for both “true” and “profound” and yields “profound truth”. The example he likes to use is “Love is just a word”. Well, yes, “love” is in fact just a word, but who cares? Yeah, words are words. But love, the underlying concept it describes, is not just a word—though if it were, that would change a lot.
One thing I’ve come to realize about my own anxiety is that it involves a wide variety of different scenarios I imagine in my mind, and broadly speaking these can be sorted into two categories: Those that are likely but not terrible, and those that are terrible but not likely.
But when both of those scenarios are running through my brain at the same time, the first triggers a reaction for “likely” and the second triggers a reaction for “terrible”, and I get this feeling that something terrible is actually likely to happen. And indeed if my probability of dying were as high as my probability of needing a 6th year to finish my PhD, that would be catastrophic.
I suppose it’s a bit strange that the opposite doesn’t happen: I never seem to get the improbability of dying attached to the mildness of needing an extra year. The confusion never seems to trigger “neither terrible nor likely”. Or perhaps it does, and my brain immediately disregards that as not worthy of consideration? It makes a certain sort of sense: An event that is neither probable nor severe doesn’t seem to merit much anxiety.
I suspect that many other people’s brains work the same way: eliding the distinctions between different scenarios, pairing the highest probability they can imagine with the worst severity they can imagine, and ending up with a sort of maximal product of probability and severity.
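To make that distinction concrete, here is a minimal sketch in Python; the scenarios and numbers are invented purely for illustration, not actual estimates. A calm assessment weighs each scenario by its own probability and severity, while the anxious shortcut grabs the highest probability anywhere and the worst severity anywhere and fuses them into a single imagined threat.

```python
# Illustrative numbers only: invented to show the pattern, not real estimates.
scenarios = {
    "needing a 6th year to finish the PhD": {"probability": 0.30, "severity": 2},    # likely, not terrible
    "dying":                                {"probability": 0.001, "severity": 100},  # terrible, not likely
}

# A calm assessment: weigh each scenario on its own terms.
per_scenario = {name: s["probability"] * s["severity"] for name, s in scenarios.items()}

# The anxious shortcut: take the highest probability anywhere and the worst
# severity anywhere, and fuse them into one imagined threat.
worst_mashup = (max(s["probability"] for s in scenarios.values())
                * max(s["severity"] for s in scenarios.values()))

print(per_scenario)   # each scenario stays modest: 0.6 and 0.1
print(worst_mashup)   # 30.0, i.e. "something terrible is actually likely"
```

Kept separate, neither scenario warrants much dread; mashed together, they feel fifty times worse than either one on its own.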
The solution to this is not an easy one: It requires deliberate effort and extensive practice, and benefits greatly from formal training by a therapist. Counter-intuitively, you need to actually focus more on the scenarios that cause you anxiety, and accept the anxiety that such focus triggers in you. I find that it helps to actually write down the details of each scenario as vividly as possible, and review what I have written later. After doing this enough times, you can build up a greater separation in your mind, and more clearly categorize—this one is likely but not terrible, that one is terrible but not likely. It isn’t a cure, but it definitely helps me a great deal. Perhaps it could help you.
My guess is that in order to be creative, you need to think differently than other people. You need to see the world in a way that others do not see it. Mental illness is surely not the only way to do that, but it’s definitely one way.
But creativity also requires basic functioning: If you are totally crippled by a mental illness, you’re not going to be very creative. So the people who are most creative have just enough craziness to think differently, but not so much that it takes over their lives.
This might even help explain how mental illness persisted in our population, despite its obvious survival disadvantages. It could be some form of heterozygote advantage.
The classic example of heterozygote advantage is sickle-cell anemia: If you have no copies of the sickle-cell gene, you’re normal. If you have two copies, you have sickle-cell anemia, which is very bad. But if you have only one copy, you’re healthy—and you’re resistant to malaria. Thus, high risk of malaria—as our ancestors living in central Africa certainly faced—creates a selection pressure that keeps sickle-cell genes in the population, even though having two copies is much worse than having none at all.
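To see how that balance plays out, here is a minimal sketch of heterozygote advantage under random mating; the fitness numbers are assumptions chosen only to illustrate the mechanism, not measured values for sickle-cell anemia or malaria.

```python
# A toy model of heterozygote advantage, loosely patterned on sickle cell.
# Fitness values are illustrative assumptions, not real measurements.

def next_generation(q, w_AA=0.8, w_AS=1.0, w_SS=0.2):
    """One round of selection with random mating.

    q    : frequency of the sickle-cell allele S (p = 1 - q is the normal allele A)
    w_AA : fitness of non-carriers (reduced here by assumed malaria deaths)
    w_AS : fitness of carriers (protected from malaria, no anemia)
    w_SS : fitness of people with two copies (sickle-cell anemia)
    """
    p = 1.0 - q
    mean_fitness = p*p*w_AA + 2*p*q*w_AS + q*q*w_SS
    # S alleles come half from surviving carriers and all from surviving SS individuals.
    return (p*q*w_AS + q*q*w_SS) / mean_fitness

q = 0.01                       # start with the allele being quite rare
for _ in range(200):
    q = next_generation(q)
print(round(q, 3))             # settles near 0.2, the textbook equilibrium s / (s + t) with s = 0.2, t = 0.8
```

With these made-up numbers, selection against the SS genotype keeps the allele from taking over, while the carriers’ protection keeps it from disappearing; it settles at an intermediate frequency rather than being purged, which is the sense in which a costly gene can persist.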
Mental illness might function something like this. I suspect it’s far more complicated than sickle-cell anemia, which is literally just two alleles of a single gene; but the overall process may be similar. If having just a little bit of bipolar disorder or schizophrenia makes you see the world differently than other people and makes you more creative, there are lots of reasons why that might improve the survival of your genes: There are the obvious problem-solving benefits, but also the simple fact that artists are sexy.
The downside of such “weird-thinking” genes is that they can go too far and make you mentally ill, perhaps if you have too many copies of them, or if you face an environmental trigger that sets them off. Sometimes the reason you see the world differently than everyone else is that you’re just seeing it wrong. But if the benefits of creativity are high enough—and they surely are—this could offset the risks, in an evolutionary sense.
But one thing is quite clear: If you are mentally ill, don’t avoid treatment for fear it will damage your creativity. Quite the opposite: A mental illness that is well treated and in remission is the optimal state for creativity. Go seek treatment, so that your creativity may blossom.